Dataset schema: repo_name (string, length 6-77), path (string, length 8-215), license (string, 15 classes), cells (list), types (list)
shreyas111/Multimedia_CS523_Project1
MyDeepDreamCode.ipynb
mit
[ "DeepDream implementation Tutorial\nOur Changes\n\nThe calculation for gradient is moved out from the optimize_function and is called only once before processing any image. This helps in saving RAM and gives faster computations.\nThe various values for blur can be used to check out how the variations occur in the final image. The different values of blur are commented out in recursive_optimize function. It works well when the blur is 0.5. Other values (.25, 1.0) are in commented state and can be uncommented to compare the outputs.\nEach of the images - downscaled, upscaled, before and after running deepdream algorithm will be saved under images folder - for now the code is commented to save images. The users can uncomment the lines in optimize_image() and recursive_optimize() function to download all intermediate outputs, upscaled and downscaled images. If recursive function is executed, then all the intermediate images are saved with iteration number appended at the end.\nIn order to test images for different layers, stepsize, rescale factor, number of iterations and number of repeats, the user can use the HTML form to vary the values, select a file (which is already present in the /images folder from where this notebook is running) and set these parameters. The fucntion - process_inputs() can be then executed to check the different variations. All the output images are stored in /images folder.\nThe gradient is usually added in the image to produce smooth patterns. For analysis purpose, we subtracted the gradient from the image and then plotted these images which shows that patterns could not be produced if the gradient is subtracted. The code for subtracting gradient is commented out in optimize_function, this can be uncommented and can be tested for images.\n\nIntroduction\nDeepDream is a computer vision program created by Google which uses a Convolutional Neural Network to find and enhance patterns in images which is basically creating dreamlike hallucinogenic appearance.\nFor showing the implementation of DeepDream, we will be using the Inception Model (deep convolutional network) and TensorFlow. The Inception Model has many layers and TensorFlow is used in order to generate a gradient", "from IPython.display import Image, display", "Optimize Image Function\nThis is the main function of the algorithm. The function takes input the layer-tensor (0-11), the image to be processed, the number of iterations, step size, tile size and show_gradient( to show the intermediate graphs). The function first obtains the gradient for the tensor layer which is basically first squares the tensor, then calculates the reduce_mean and then finds the gradient of this mean on the default graph. Once we obtain the gradient, we then iterate (the number of optimization we want to run) to blend the image with the patterns. The value of gradient is calculated to understand how we can change the image so as to maximize the mean of the given layer-tensor. The gradient is blurred in order to enhance the patterns and obtain a more smooth image. Finally the image is updated with the calculated gradient and this process is repeated for the number of iterations (by default it is 10).\nRecursive Optimization\nSince the Inception Model was trained for a very low resolution images (200-300 pixels) in order to get proper results, the input image is downscaled and deepdream is run. 
But with downscaling the image, the results of the algorithm are not good, so the process of downscaling the image and running deep dream is done recursively to obtain proper patterns in the output image. Thus first the image is downscaled as per the num of repeats, now each of the downscaled image is passed to the optimize_image function along with adding it with the upscaled image. Thus we finally get the same size image as the original with enhanced patterns.", "# Imports\nget_ipython().magic('matplotlib inline') #2Dplotting lobrary which produces publication quality figures\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np #for scientific computing in python\nimport random\nimport math\n\n# Image manipulation.\nimport PIL.Image\nfrom scipy.ndimage.filters import gaussian_filter\n\nfrom random import randrange", "Inception Model\nThe inception model used for this implementation is inception5h because this model works with any image size and the output is more beautiful as compared to Inception v3.", "import inception5h", "Download the data for Inception Model (if it doesn't exists)", "inception5h.maybe_download()", "Load the Inception Model", "model = inception5h.Inception5h()", "Layers in Inception Model used for this implementation : 12", "len(model.layer_tensors)\n\n# printing the first model. Shows: ************************************************************** \nmodel.layer_tensors[0]", "This function loads an image and return its numpy array of floating-points", "def load_image(imageFileName):\n image = PIL.Image.open(imageFileName)\n\n return np.float32(image)\n\nimg = load_image('images/elon_musk_100x100.jpg')\n# print(img)", "Save an image as a jpeg-file. The image is given as a numpy array with pixel-values between 0 and 255.", "def save_image(image, filename):\n # Ensure the pixel-values are between 0 and 255.\n image = np.clip(image, 0.0, 255.0)\n \n # Convert to bytes.\n image = image.astype(np.uint8)\n \n # Write the image-file in jpeg-format.\n with open(filename, 'wb') as file:\n PIL.Image.fromarray(image).save(file, 'jpeg')", "Plot the image using the PIL since matplotlib gives low resolution images.", "def plot_image(image):\n # Assume the pixel-values are scaled between 0 and 255.\n \n if False:\n # Convert the pixel-values to the range between 0.0 and 1.0\n image = np.clip(image/255.0, 0.0, 1.0)\n \n # Plot using matplotlib.\n plt.imshow(image, interpolation='lanczos')\n plt.show()\n else:\n # Ensure the pixel-values are between 0 and 255.\n image = np.clip(image, 0.0, 255.0)\n \n # Convert pixels to bytes.\n image = image.astype(np.uint8)\n\n # Convert to a PIL-image and display it.\n display(PIL.Image.fromarray(image))", "Normalize an image so its values are between 0.0 and 1.0. 
This is useful for plotting the gradient.", "def normalize_image(x):\n # Get the min and max values for all pixels in the input.\n x_min = x.min()\n x_max = x.max()\n\n # Normalize so all values are between 0.0 and 1.0\n x_norm = (x - x_min) / (x_max - x_min)\n \n return x_norm", "Plot the gradient after normalizing the image", "def plot_gradient(gradient):\n # Normalize the gradient so it is between 0.0 and 1.0\n gradient_normalized = normalize_image(gradient)\n \n # Plot the normalized gradient.\n plt.imshow(gradient_normalized, interpolation='bilinear')\n plt.show()", "Resize the image : this function resizes the image to the desired pixels or to the rescaling factor.", "def resize_image(image, size=None, factor=None):\n # If a rescaling-factor is provided then use it.\n if factor is not None:\n # Scale the numpy array's shape for height and width.\n size = np.array(image.shape[0:2]) * factor\n \n # The size is floating-point because it was scaled.\n # PIL requires the size to be integers.\n size = size.astype(int)\n else:\n # Ensure the size has length 2.\n size = size[0:2]\n \n # The height and width is reversed in numpy vs. PIL.\n size = tuple(reversed(size))\n\n # Ensure the pixel-values are between 0 and 255.\n img = np.clip(image, 0.0, 255.0)\n \n # Convert the pixels to 8-bit bytes.\n img = img.astype(np.uint8)\n \n # Create PIL-object from numpy array.\n img = PIL.Image.fromarray(img)\n \n # Resize the image.\n img_resized = img.resize(size, PIL.Image.LANCZOS)\n \n \n # Convert 8-bit pixel values back to floating-point.\n img_resized = np.float32(img_resized)\n \n # print(img_resized)\n\n return img_resized", "The Inception Model can accept image of any size, but this may require more RAM for processing. In order to get the results from the DeepDream algorithm, if we downscale the image directly to 200*200 pixels (on which the model is actually trained) this will result in an image in which the patterns may not be clearly visible. Thus this algorithm splits the image into smaller tiles and then use TensorFlow to calculate gradient for each of the tiles.\nBelow function is used to determine the appropritate tile size. The desired tile-size default value = 400*400 pixels and the actual tile-size depends on the image-dimensions.", "def get_tile_size(num_pixels, tile_size=400):\n \"\"\"\n num_pixels is the number of pixels in a dimension of the image.\n tile_size is the desired tile-size.\n \"\"\"\n\n # How many times can we repeat a tile of the desired size.\n num_tiles = int(round(num_pixels / tile_size))\n \n # Ensure that there is at least 1 tile.\n num_tiles = max(1, num_tiles)\n \n # The actual tile-size.\n actual_tile_size = math.ceil(num_pixels / num_tiles)\n \n return actual_tile_size", "This function calculates the gradient for an input image. The input image is split into tiles and the gradient is calculated for each of the tile. 
The tiles are chosen randomly - this is to avoid visible lines in the final output image from DeepDream.", "def tiled_gradient(gradient, image, tile_size=400):\n # Allocate an array for the gradient of the entire image.\n grad = np.zeros_like(image)\n\n # Number of pixels for the x- and y-axes.\n x_max, y_max, _ = image.shape\n\n # Tile-size for the x-axis.\n x_tile_size = get_tile_size(num_pixels=x_max, tile_size=tile_size)\n # 1/4 of the tile-size.\n x_tile_size4 = x_tile_size // 4\n\n # Tile-size for the y-axis.\n y_tile_size = get_tile_size(num_pixels=y_max, tile_size=tile_size)\n # 1/4 of the tile-size\n y_tile_size4 = y_tile_size // 4\n\n # Random start-position for the tiles on the x-axis.\n # The random value is between -3/4 and -1/4 of the tile-size.\n # This is so the border-tiles are at least 1/4 of the tile-size,\n # otherwise the tiles may be too small which creates noisy gradients.\n x_start = random.randint(-3*x_tile_size4, -x_tile_size4)\n\n while x_start < x_max:\n # End-position for the current tile.\n x_end = x_start + x_tile_size\n \n # Ensure the tile's start- and end-positions are valid.\n x_start_lim = max(x_start, 0)\n x_end_lim = min(x_end, x_max)\n\n # Random start-position for the tiles on the y-axis.\n # The random value is between -3/4 and -1/4 of the tile-size.\n y_start = random.randint(-3*y_tile_size4, -y_tile_size4)\n\n while y_start < y_max:\n # End-position for the current tile.\n y_end = y_start + y_tile_size\n\n # Ensure the tile's start- and end-positions are valid.\n y_start_lim = max(y_start, 0)\n y_end_lim = min(y_end, y_max)\n\n # Get the image-tile.\n img_tile = image[x_start_lim:x_end_lim,\n y_start_lim:y_end_lim, :]\n\n # Create a feed-dict with the image-tile.\n feed_dict = model.create_feed_dict(image=img_tile)\n\n # Use TensorFlow to calculate the gradient-value.\n g = session.run(gradient, feed_dict=feed_dict)\n\n # Normalize the gradient for the tile. This is\n # necessary because the tiles may have very different\n # values. Normalizing gives a more coherent gradient.\n g /= (np.std(g) + 1e-8)\n\n # Store the tile's gradient at the appropriate location.\n grad[x_start_lim:x_end_lim,\n y_start_lim:y_end_lim, :] = g\n \n # Advance the start-position for the y-axis.\n y_start = y_end\n\n # Advance the start-position for the x-axis.\n x_start = x_end\n\n return grad", "In order to process the images fast and preventing unnecessary memory usage, the get_gradient function in inception5h is called just once before we process any image and obtain the gradient for a particular tensor layer.", "def call_get_gradient(layer_tensor):\n gradient = model.get_gradient(layer_tensor)\n return gradient", "Optimize Image\nThis is an Optimization that runs in a loop which forms a main part of DeepDream algorithm. It calculates the gradient of the given layer of Inception Model with respect to the input image which is then added to the input image. 
This increases the mean value of the layer-tensor and this process is repeated a number of times which helps in amplifying the patterns which the Inception Model sees in the input image.", "def optimize_image(layer_tensor, image, gradient, \n num_iterations=10, step_size=3.0, tile_size=400,\n show_gradient=True, filename='test'):\n \"\"\"\n Use gradient ascent to optimize an image so it maximizes the\n mean value of the given layer_tensor.\n \n Parameters:\n layer_tensor: Reference to a tensor that will be maximized.\n image: Input image used as the starting point.\n num_iterations: Number of optimization iterations to perform.\n step_size: Scale for each step of the gradient ascent.\n tile_size: Size of the tiles when calculating the gradient.\n show_gradient: Plot the gradient in each iteration.\n \"\"\"\n\n # Copy the image so we don't overwrite the original image.\n img = image.copy()\n \n print(\"Image before:\")\n plot_image(img)\n \n # save the file showing the before image\n filename1 = 'images/deepdream_BeforeO_'+filename+'.jpg'\n \n # kruti sharme - uncomment the below line to save intermediate results\n #save_image(img,filename=filename1)\n\n print(\"Processing image: \", end=\"\")\n\n #kruti sharma - the below function is called outside optimize function now. This is called only once for each tensor layer.\n # Use TensorFlow to get the mathematical function for the\n # gradient of the given layer-tensor with regard to the\n # input image. This may cause TensorFlow to add the same\n # math-expressions to the graph each time this function is called.\n \n #gradient = model.get_gradient(layer_tensor)\n \n \n for i in range(num_iterations):\n # Calculate the value of the gradient.\n # This tells us how to change the image so as to\n # maximize the mean of the given layer-tensor.\n grad = tiled_gradient(gradient=gradient, image=img)\n \n # Blur the gradient with different amounts and add\n # them together. The blur amount is also increased\n # during the optimization. This was found to give\n # nice, smooth images. You can try and change the formulas.\n # The blur-amount is called sigma (0=no blur, 1=low blur, etc.)\n # We could call gaussian_filter(grad, sigma=(sigma, sigma, 0.0))\n # which would not blur the colour-channel. This tends to\n # give psychadelic / pastel colours in the resulting images.\n # When the colour-channel is also blurred the colours of the\n # input image are mostly retained in the output image.\n sigma = (i * 4.0) / num_iterations + 0.5\n grad_smooth1 = gaussian_filter(grad, sigma=sigma)\n grad_smooth2 = gaussian_filter(grad, sigma=sigma*2)\n grad_smooth3 = gaussian_filter(grad, sigma=sigma*0.5)\n grad = (grad_smooth1 + grad_smooth2 + grad_smooth3)\n\n # Scale the step-size according to the gradient-values.\n # This may not be necessary because the tiled-gradient\n # is already normalized.\n step_size_scaled = step_size / (np.std(grad) + 1e-8)\n\n # Update the image by following the gradient.\n img += grad * step_size_scaled\n \n # kruti sharma - subtracting the gradient instead of adding that to the image.\n #img -= grad * step_size_scaled\n\n if show_gradient:\n # Print statistics for the gradient.\n msg = \"Gradient min: {0:>9.6f}, max: {1:>9.6f}, stepsize: {2:>9.2f}\"\n print(msg.format(grad.min(), grad.max(), step_size_scaled))\n\n # Plot the gradient.\n plot_gradient(grad)\n else:\n # Otherwise show a little progress-indicator.\n print(\". 
\", end=\"\")\n\n print()\n print(\"Image after:\")\n plot_image(img)\n filename1 = 'images/deepdream_AfterO_'+filename+'.jpg'\n \n # kruti sharme - uncomment the below line to save intermediate results\n #save_image(img,filename=filename1)\n \n return img", "Recursive Image Optimization\nIn order to downscale the input image, the below helper function downscales the input image which helps to speed up the processing of DeepDream algorithm and also produces proper patterns from the Inception Model. This downscales the image several times (depending on the num_repeats param) and runs each of the downscaled version through optimize_image() function (as defined above).", "def recursive_optimize(layer_tensor, image, gradient, \n num_repeats=4, rescale_factor=0.7, blend=0.2,\n num_iterations=10, step_size=3.0,\n tile_size=400, filename='test'):\n \"\"\"\n Recursively blur and downscale the input image.\n Each downscaled image is run through the optimize_image()\n function to amplify the patterns that the Inception model sees.\n\n Parameters:\n image: Input image used as the starting point.\n rescale_factor: Downscaling factor for the image.\n num_repeats: Number of times to downscale the image.\n blend: Factor for blending the original and processed images.\n\n Parameters passed to optimize_image():\n layer_tensor: Reference to a tensor that will be maximized.\n num_iterations: Number of optimization iterations to perform.\n step_size: Scale for each step of the gradient ascent.\n tile_size: Size of the tiles when calculating the gradient.\n \"\"\"\n\n # Do a recursive step?\n if num_repeats>0:\n # Blur the input image to prevent artifacts when downscaling.\n # The blur amount is controlled by sigma. Note that the\n # colour-channel is not blurred as it would make the image gray.\n sigma = 0.5\n \n # kruti sharma : changing the blur value to check how the downscaling is impacted\n #sigma = 1.0\n \n # kruti sharma : changing the blur value to check how the downscaling is impacted\n #sigma = 0.25\n \n img_blur = gaussian_filter(image, sigma=(sigma, sigma, 0.0))\n\n # Downscale the image.\n img_downscaled = resize_image(image=img_blur,\n factor=rescale_factor)\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Downscale image in Recursive Level: ', num_repeats)\n plot_image(img_downscaled)\n \n dfilename = 'images/downscale_'+filename+'_'+str(num_repeats)+'.jpg'\n \n # kruti sharma - uncomment the below line to save the downscaled file\n #save_image(img_downscaled, filename=dfilename)\n \n # Recursive call to this function.\n # Subtract one from num_repeats and use the downscaled image.\n img_result = recursive_optimize(layer_tensor=layer_tensor,\n image=img_downscaled, \n gradient=gradient, \n num_repeats=num_repeats-1,\n rescale_factor=rescale_factor,\n blend=blend,\n num_iterations=num_iterations,\n step_size=step_size,\n tile_size=tile_size,\n filename=filename)\n \n # Upscale the resulting image back to its original size.\n img_upscaled = resize_image(image=img_result, size=image.shape)\n print('*****************************Upscaled Image in Recursive Level: ', num_repeats)\n plot_image(img_upscaled)\n ufilename = 'images/upscale_'+filename+'_'+str(num_repeats)+'.jpg'\n \n # kruti sharma - uncomment the below line to save the downscaled file\n #save_image(img_upscaled, filename=ufilename)\n\n # Blend the original and processed images.\n image = blend * image + (1.0 - blend) * img_upscaled\n \n\n print(\"Recursive level:\", num_repeats)\n\n # Process the image using the DeepDream algorithm.\n filename1 = 
filename+'_'+str(num_repeats)\n img_result = optimize_image(layer_tensor=layer_tensor,\n image=image,\n gradient=gradient,\n num_iterations=num_iterations,\n step_size=step_size,\n tile_size=tile_size,\n filename=filename1)\n \n return img_result", "TensorFlow session to see all the outputs for the image.", "session = tf.InteractiveSession(graph=model.graph)", "Test the algorithm for Willu Wonka Old image.", "image = load_image('images/willy_wonka_old.jpg')\nfilename = 'willy_wonka_old'\nplot_image(image)", "Now using the 3rd Layer (layer index = 2) of the Inception Model on the input image\nThe layer_tensor will hold the inception model 3rd layer and shows that it has 192 channels", "layer_tensor = model.layer_tensors[2]\nlayer_tensor", "Running the DeepDream Optimization algorithm with iterations as 10, step size as 6.0.", "gradient = call_get_gradient(layer_tensor)\nimg_result = optimize_image(layer_tensor, image, gradient, \n num_iterations=20, step_size=3.0, tile_size=400, \n show_gradient=False, filename=filename)\n\n\ndef process_inputs():\n \n print('Tensor Layer to be Used: '+layer_tensor_ip)\n new_layer_tensor_ip = model.layer_tensors[int(layer_tensor_ip)]\n \n print('*************************************************')\n print('layer tensor actual value after input from user: ')\n print(new_layer_tensor_ip)\n \n print('*************************************************')\n \n if image_ip == \"\":\n image_value = 'willy_wonka_new.jpg'\n \n filename_ip = 'images/'+image_ip\n new_image_ip = load_image(filename_ip)\n print('New Input image from user')\n print('*************************************************')\n plot_image(new_image_ip)\n print('*************************************************')\n \n print('Step Size: '+step_size_ip)\n print('*************************************************')\n \n print('Rescale factor: '+rescale_factor_ip)\n print('*************************************************')\n \n print('Number of Iterations: '+num_iterations_ip)\n print('*************************************************')\n \n print('Number of Repeats: '+num_repeats_ip)\n print('*************************************************')\n \n \n print('*************** PROCESSING with Optimize Image **********************')\n \n parts = image_ip.split('.') \n inputImage = parts[0]\n print('New input image: ',inputImage)\n \n # calling the gradient function outside the optimize_image() function - to reduce the memory consumption\n gradient = call_get_gradient(new_layer_tensor_ip)\n \n img_result = optimize_image(new_layer_tensor_ip, new_image_ip, gradient, \n num_iterations=int(num_iterations_ip), step_size=float(step_size_ip), tile_size=400, \n show_gradient=True, filename=inputImage)\n \n \n frac= str(rescale_factor_ip).split('.')\n ss = str(step_size_ip).split('.')\n filename_ip = 'images/deepdream_O'+parts[0]+'_'+layer_tensor_ip+'_'+ss[0]+'_0'+frac[1]+'.'+parts[1]\n print('New Filename for Optimize: '+filename_ip)\n \n save_image(img_result, filename=filename_ip)\n \n print('*************** PROCESSING with Recursive Optimize Image **********************') \n \n img_result = recursive_optimize(new_layer_tensor_ip, new_image_ip, gradient, \n num_repeats=int(num_repeats_ip), rescale_factor=float(rescale_factor_ip), blend=0.2,\n num_iterations=10, step_size=float(step_size_ip),\n tile_size=400, filename=inputImage)\n \n filename_ip = 'images/deepdream_R'+parts[0]+'_'+layer_tensor_ip+'_'+ss[0]+'_0'+frac[1]+'.'+parts[1]\n print('New Filename for Recursive Optimize: '+filename_ip)\n \n 
save_image(img_result, filename=filename_ip)\n \n\nfrom IPython.display import HTML\n\ninput_form = \"\"\"\n<div style=\"background-color:gainsboro; border:solid black; width:800px; padding:20px;\">\n\n<B>Tensor Layer:</B>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <input type=\"text\" id=\"layer_tensor\" value=\"3\"> Value between 0 - 11 <br> <br>\n\n<B>Step Size:</B>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <input type=\"text\" id=\"step_size\" value=\"3.0\"> <br> <br>\n\n<B>Rescale Factor:</B>&nbsp;&nbsp;&nbsp; <input type=\"text\" id=\"rescale_factor\" value=\"0.7\"> <br> <br>\n\n<B>Iterations:</B>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <input type=\"text\" id=\"num_iterations\" value=\"10\"> Value >= 10 <br> <br>\n\n<B>Repeats:</B>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp <input type=\"text\" id=\"num_repeats\" value=\"4\"> Value >= 3 <br> <br>\n\n<input type=\"file\" id=\"file\"/><br><br>\n\n<button onclick=\"process_image()\">Set Parameters</button><br> <br>\n\n<span id=\"output\"></span>\n\n</div>\n\"\"\"\n\njavascript = \"\"\"\n<script type=\"text/Javascript\">\nvar count=0;\n\nprocess_image();\n\ndocument.getElementById('file').onchange = function(event) {\n var value = this.value;\n console.log(event.target.files[0].name);\n \n var image_name = 'image_ip';\n var image_value = event.target.files[0].name;\n \n count++;\n var filecommand = image_name + \" = '\" + image_value + \"'\";\n console.log(\"File Click: Executing Command: \" + filecommand);\n \n var kernel = IPython.notebook.kernel;\n kernel.execute(filecommand);\n };\n \n function process_image(){\n \n var layer_tensor_name = 'layer_tensor_ip';\n var layer_tensor_value = document.getElementById('layer_tensor').value;\n \n var step_size_name = 'step_size_ip';\n var step_size_value = document.getElementById('step_size').value;\n \n var rescale_factor_name = 'rescale_factor_ip';\n var rescale_factor_value = document.getElementById('rescale_factor').value;\n \n var num_iterations_name = 'num_iterations_ip';\n var num_iterations_value = document.getElementById('num_iterations').value;\n \n var num_repeats_name = 'num_repeats_ip';\n var num_repeats_value = document.getElementById('num_repeats').value;\n \n var kernel = IPython.notebook.kernel;\n var command = layer_tensor_name + \" = '\" + layer_tensor_value + \"'\";\n \n console.log(\"Executing Command: \" + command);\n \n kernel.execute(command);\n \n command = step_size_name + \" = '\" + step_size_value + \"'\";\n \n console.log(\"Executing Command: \" + command);\n \n kernel.execute(command);\n \n command = rescale_factor_name + \" = '\" + rescale_factor_value + \"'\";\n \n console.log(\"Executing Command: \" + command);\n \n kernel.execute(command);\n \n command = num_iterations_name + \" = '\" + num_iterations_value + \"'\";\n \n console.log(\"Executing Command: \" + command);\n \n kernel.execute(command);\n \n command = num_repeats_name + \" = '\" + num_repeats_value + \"'\";\n \n console.log(\"Executing Command: \" + command);\n \n kernel.execute(command);\n \n if(count == 0){\n var image_name = 'image_ip';\n var image_value = 'willy_wonka_new.jpg';\n \n var filecommand = image_name + \" = '\" + image_value + \"'\";\n console.log(\"Executing Command: \" + filecommand);\n \n var kernel = IPython.notebook.kernel;\n kernel.execute(filecommand);\n }\n \n document.getElementById(\"output\").textContent=\"Change parameters and uncomment and execute process_inputs() to see 
output\";\n \n }\n \n</script>\n\"\"\"\n \n\nHTML(input_form + javascript)", "Uncomment the below line (process_inputs()) after executing the above form. This will run both Optimize and Recursive Optimize Function. The final output images are saved in /images folder. If all the intermediate images are required, then uncomment the save_image() lines in Optimize_Image and Recursive_Optimize() function.", "#process_inputs()\n\n# The below code is commented. The users can uncomment once they have done the run through.\n\n# session.close()", "Conclusion\nRunning over different sets of parameters, we could see that a better result set is generated when we have a Rescale Factor between 0.4 - 0.8, number of iterations that we run Optimize function between 10-20 gives a smooth image with defined patterns. With less number of iterations, the patterns will not be visible. The recursive optimize function is run for atleast 4-5 times (parameter: number of repeats) and hence blends the image with more lines and patterns but if the number of repeats is increased too much, the output does not produces a smooth image.\nThe gradient plays a major role. Adding up different gradient with varying blur helps in creating a smooth final image where the patterns and the original image blends well. For a very high blur the original image itself looses the lines and smootheness. Thus a blur of 0.5 is good.\nFor analysing and understanding, each of the intermediate ouputs can be saved - the codes are in commented form to avoid unnecessary saving of multiple files. These lines can uncommented to save each of the intermediate outputs. The final images are saved in the local drive in /images folder.\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ga7g08/ga7g08.github.io
_notebooks/2015-03-19-Income-Distribution-in-the-UK.ipynb
mit
[ "import matplotlib.pyplot as plt\nimport pandas as pd\n%matplotlib inline\nimport wget\nimport os\n# Dependencies: wget, xlrd", "Getting hold of and cleaning up income data for the UK\nIn this post I will be downloading data from the HMRC on income in the UK. The data provides the mean\nand median income before and after tax split by age and gender. You can see the original data sources:\n\nFor the last few years on the www.gov.uk site\n\nand \n\nFor the years (1999-2010) on the national achive\n\nThe data is several xls files. These have no standard naming convention, so we will need to download them on a somewhat adhoc bases. Let's do this first, saving them to a local dir with names year-range.xls.", "data_dir = \"./DataIncomeInvestigation\"\nif not os.path.isdir(data_dir):\n os.makedirs(data_dir)", "Downlading the data\nFrom 1999 to 2009", "file_names_1999_2009 = [ \n\"table3_2_september04.xls\",\n\"table3-2-2000-01.xls\",\n\"table-32-2001-02.xls\",\n\"table3_2.xls\",\n\"3_2_apr06.xls\",\n\"table3-2-2004-05.xls\",\n\"table3-2-jan08.xls\",\n\"3-2tabledec08.xls\",\n\"3-2table-jan2010.xls\",\n\"\",\n\"3-2table-feb2012.xls\"]\n\nyears_1999_2009 = [\"{}-{}\".format(i, i+1) for i in range(1999, 2010)]\n\nurl_base = \"http://webarchive.nationalarchives.gov.uk/20120405152450/http://hmrc.gov.uk/stats/income_distribution/{}\"\n\nfor file, year in zip(file_names_1999_2009, years_1999_2009):\n fname = wget.download(url_base.format(file), \n out=\"{}/{}.xls\".format(data_dir, year))\n \n print \"Downloaded {} for year {} to {}\".format(file, year, fname) ", "From 2010 to 2013", "url_list_2010_2013 = ['https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/267112/table3-2-1.xls',\n 'https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/276222/table3-2-12.xls',\n 'https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/399053/Table_3_2_13.xls']\n\nyears_2010_2013 = ['2010-2011', '2011-2012', '2012-2013']\n\nfor url, year in zip(url_list_2010_2013, years_2010_2013):\n fname = wget.download(url, out=\"{}/{}.xls\".format(data_dir, year))\n print \"Downloaded {} \\nfor year {} to {}\".format(url, year, fname) ", "Importing the data\nOkay, now that we have downloaded the data we will import it. Unfortunately although all\nthe spreadsheets are in a similar format, they are not exactly the same. To simplify the\nprocess then we will first construct some helper functions. The general principle will be \nto read each spreadsheet as a dataframe, sanitise it, then add it to a total dataframe df. 
\nEach spreadsheet will have the data in a slightly different place, we we will, by hand,\nset the row and columns of interest and check that the correct data is imported:\nNote: For now we restrict our focus to the total data and ignore the gender split.", "years = [\"{}-{}\".format(i, i+1) for i in range(1999, 2013)]\n\ndef ReadData(file_name, **kwargs):\n \"\"\" Read in the data and print it for checking \"\"\"\n print(\"Reading in data from {}\".format(file_name))\n df = pd.read_excel(file_name, \n index_col=None,\n header=None, \n names=['age', \n 'Number', \n 'MedianIncomeBeforeTax', \n 'MeanIncomeBeforetax'],\n **kwargs) \n return df\n\ndef RecordData(year, df=None, **kwargs):\n \"\"\" Helper function to add the data for year to a data frame df \"\"\"\n if type(df) != pd.core.frame.DataFrame:\n df = pd.DataFrame()\n file_name = \"{}/{}.xls\".format(data_dir, year)\n \n dfNew = ReadData(file_name, **kwargs)\n dfNew.dropna(inplace=True)\n dfNew['year'] = year\n print \"Adding the following data to the frame:\"\n print(dfNew)\n return df.append(dfNew, ignore_index=True)\n\ni = 0\ndf = RecordData(years[i], skiprows=68, skip_footer=12, parse_cols=[0, 2, 3, 5])\n\ni = 1\ndf = RecordData(years[i], df=df, skiprows=68, skip_footer=10, parse_cols=[0, 2, 4, 9])\n\ndf = RecordData(years[2], df=df, skiprows=68, skip_footer=11, parse_cols=[0, 2, 4, 9])\n\ndf = RecordData(years[3], df=df, skiprows=68, skip_footer=12, parse_cols=[0, 1, 2, 4])\n\ndf = RecordData(years[4], df=df, skiprows=14, skip_footer=65, parse_cols=[0, 2, 3, 5])\n\ndf = RecordData(years[5], df=df, skiprows=14, skip_footer=65, parse_cols=[0, 2, 3, 5])\n\ndf = RecordData(years[6], df=df, skiprows=14, skip_footer=65, parse_cols=[0, 2, 3, 5])\n\ndf = RecordData(years[7], df=df, skiprows=14, skip_footer=65, parse_cols=[0, 2, 3, 5])\n\ndf = RecordData(years[8], df=df, skiprows=14, skip_footer=66, parse_cols=[0, 2, 3, 5])\n\n# THE DATA FOR 2008-2009 DOES NOT EXIST: So we create Nans\ndf_2008_2009 = df[df.year=='2007-2008'].copy()\ndf_2008_2009.set_value(df_2008_2009.index, \n ['Number', 'MedianIncomeBeforeTax', 'MeanIncomeBeforetax'],\n np.nan)\ndf_2008_2009['year'] = years[9]\ndf = df.append(df_2008_2009, ignore_index=True)\n\n\ndf = RecordData(years[10], df=df, skiprows=14, skip_footer=109, parse_cols=[0, 2, 3, 5],\n sheetname=\"3.2\")\n\ndf = RecordData(years[11], df=df, skiprows=14, skip_footer=106, parse_cols=[0, 2, 3, 5])\n\ndf = RecordData(years[12], df=df, skiprows=14, skip_footer=101, parse_cols=[0, 2, 3, 5])\n\ndf = RecordData(years[13], df=df, skiprows=14, skip_footer=101, parse_cols=[0, 2, 3, 5])", "Checking the data\nWe now drop all all duplicates: these should only come \nrepeated calls to RecordData. 
But\nwe can sanity check later by plotting.", "df = df.drop_duplicates()\ndf", "Plotting the data\nIncome against time by age", "years_total_val = [int(s.split(\"-\")[0]) for s in years]\n\nage_ranges = df[df.year == '1999-2000'].age.values\n\nfig, ax = plt.subplots(figsize=(10, 5))\nNUM_COLORS = len(age_ranges)\ncm = plt.get_cmap('gist_rainbow')\nax.set_color_cycle([cm(1.*i/NUM_COLORS) for i in range(NUM_COLORS)])\n\nfor age in age_ranges:\n ax.plot(years_total_val, df[df.age == age]['MeanIncomeBeforetax'], \"-o\", \n lw=2, label=age)\n ax.set_xticks(years_total_val)\n ax.set_xticklabels(years, rotation=45)\n\nplt.legend(bbox_to_anchor=(1.3, 1.0))\nplt.show()", "Income distribution as a function of time", "age_ranges_values = range(len(age_ranges))\n\nfig, ax = plt.subplots(figsize=(10, 5))\nNUM_COLORS = len(years)\ncm = plt.get_cmap('gist_rainbow')\nax.set_color_cycle([cm(1.*i/NUM_COLORS) for i in range(NUM_COLORS)])\n\nfor yr in years:\n ax.plot(age_ranges_values, df[df.year == yr]['MeanIncomeBeforetax'], \"-o\",\n label=yr)\n \nax.set_xticks(age_ranges_values)\nax.set_xticklabels(age_ranges, rotation=45)\nplt.legend(bbox_to_anchor=(1.3, 1.0))\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/mpi-esm-1-2-lr/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: MPI-M\nSource ID: MPI-ESM-1-2-LR\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:17\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. 
Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. 
Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. 
Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. 
Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yandexdataschool/gumbel_lstm
demo_gumbel_softmax.ipynb
mit
[ "from gumbel_softmax import GumbelSoftmax, GumbelSoftmaxLayer\nimport theano.tensor as T\nimport numpy as np", "Simple demo\n\nSample from gumbel-softmax\nAverage over samples", "temperature = 0.01\nlogits = np.linspace(-2,2,10).reshape([1,-1])\ngumbel_softmax = GumbelSoftmax(t=temperature)(logits)\nsoftmax = T.nnet.softmax(logits)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.title('gumbel-softmax samples')\nfor i in range(100):\n plt.plot(range(10),gumbel_softmax.eval()[0],marker='o',alpha=0.25)\nplt.ylim(0,1)\nplt.show()\n\nplt.title('average over samples')\nplt.plot(range(10),np.mean([gumbel_softmax.eval()[0] for _ in range(500)],axis=0),\n marker='o',label='gumbel-softmax average')\n\nplt.plot(softmax.eval()[0],marker='+',label='regular softmax')\nplt.legend(loc='best')", "Autoencoder with gumbel-softmax\n\nWe do not use any bayesian regularization, simply optimizer by backprop\nHidden layer contains 32 units, split into 8 blocks of 4 variables\nGumbel-softmax is computed over each block", "from sklearn.datasets import load_digits\nX = load_digits().data\n\nimport lasagne\nfrom lasagne.layers import *\nimport theano\n\n#graph inputs and shareds\ninput_var = T.matrix()\ntemp = theano.shared(np.float32(1),'temperature',allow_downcast=True)\n\n#architecture: encoder\nnn = l_in = InputLayer((None,64),input_var)\nnn = DenseLayer(nn,64,nonlinearity=T.tanh)\nnn = DenseLayer(nn,32,nonlinearity=T.tanh)\n\n#bottleneck\nnn = DenseLayer(nn,32,nonlinearity=None)\nnn = reshape(nn,(-1,4)) #reshape so that softmax would be applied over blocks of 4\nnn = GumbelSoftmaxLayer(nn,t=temp)\nnn = bottleneck = reshape(nn,(-1,32))\n\n#decoder\nnn = DenseLayer(nn,32,nonlinearity=T.tanh)\nnn = DenseLayer(nn,64,nonlinearity=T.tanh)\nnn = DenseLayer(nn,64,nonlinearity=None)\n\n#loss and updates\nloss = T.mean((get_output(nn)-input_var)**2)\nupdates = lasagne.updates.adam(loss,get_all_params(nn))\n\n#compile\ntrain_step = theano.function([input_var],loss,updates=updates)\nevaluate = theano.function([input_var],loss)", "Training loop\n\nWe gradually reduce temperature from 1 to 0.01 over time", "for i,t in enumerate(np.logspace(0,-2,10000)):\n sample = X[np.random.choice(len(X),32)]\n temp.set_value(t)\n mse = train_step(sample)\n if i %100 ==0:\n print '%.3f'%evaluate(X),\n\n#functions for visualization\nget_sample = theano.function([input_var],get_output(nn))\nget_sample_hard = theano.function([input_var],get_output(nn,hard_max=True))\nget_code = theano.function([input_var],get_output(bottleneck,hard_max=False))\n\n\nfor i in range(10):\n X_sample = X[np.random.randint(len(X)),None,:]\n plt.figure(figsize=[12,4])\n plt.subplot(1,4,1)\n plt.title(\"original\")\n plt.imshow(X_sample.reshape([8,8]),interpolation='none',cmap='gray')\n plt.subplot(1,4,2)\n plt.title(\"gumbel\")\n plt.imshow(get_sample(X_sample).reshape([8,8]),interpolation='none',cmap='gray')\n plt.subplot(1,4,3)\n plt.title(\"hard-max\")\n plt.imshow(get_sample_hard(X_sample).reshape([8,8]),interpolation='none',cmap='gray')\n plt.subplot(1,4,4)\n plt.title(\"code\")\n plt.imshow(get_code(X_sample).reshape(8,4),interpolation='none',cmap='gray')\n plt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robertclf/FAFT
FAFT_64-points_R2C/nbFAFT128_offset_xyz_3D.ipynb
bsd-3-clause
[ "3D Fast Accurate Fourier Transform\nwith an extra gpu array for the 33th complex values", "import numpy as np\nimport ctypes\nfrom ctypes import *\n\nimport pycuda.gpuarray as gpuarray\nimport pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\n\nimport matplotlib.pyplot as plt\nimport matplotlib.mlab as mlab\nimport math\n\nimport time\n\n%matplotlib inline ", "Loading FFT routines", "gridDIM = 64\n\nsize = gridDIM*gridDIM*gridDIM\n\naxes0 = 0\naxes1 = 1\naxes2 = 2\n\nmakeC2C = 0\nmakeR2C = 1\nmakeC2R = 1\n\naxesSplit_0 = 0\naxesSplit_1 = 1\naxesSplit_2 = 2\n\nsegment_axes0 = 0\nsegment_axes1 = 0\nsegment_axes2 = 0\n\nDIR_BASE = \"/home/robert/Documents/new1/FFT/code/\"\n\n# FAFT\n_faft128_3D = ctypes.cdll.LoadLibrary( DIR_BASE+'FAFT128_3D_R2C.so' )\n_faft128_3D.FAFT128_3D_R2C.restype = int\n_faft128_3D.FAFT128_3D_R2C.argtypes = [ctypes.c_void_p, ctypes.c_void_p, \n ctypes.c_float, ctypes.c_float, ctypes.c_int, \n ctypes.c_int, ctypes.c_int, ctypes.c_int]\n\ncuda_faft = _faft128_3D.FAFT128_3D_R2C\n\n# Inv FAFT\n_ifaft128_3D = ctypes.cdll.LoadLibrary(DIR_BASE+'IFAFT128_3D_C2R.so')\n_ifaft128_3D.IFAFT128_3D_C2R.restype = int\n_ifaft128_3D.IFAFT128_3D_C2R.argtypes = [ctypes.c_void_p, ctypes.c_void_p, \n ctypes.c_float, ctypes.c_float, ctypes.c_int, \n ctypes.c_int, ctypes.c_int, ctypes.c_int]\n\ncuda_ifaft = _ifaft128_3D.IFAFT128_3D_C2R", "Initializing Data\nGaussian", "def Gaussian(x,mu,sigma):\n return np.exp( - (x-mu)**2/sigma**2/2. )/(sigma*np.sqrt( 2*np.pi ))\n\ndef fftGaussian(p,mu,sigma):\n return np.exp(-1j*mu*p)*np.exp( - p**2*sigma**2/2. )\n\n# Gaussian parameters\nmu_x = 1.5\nsigma_x = 1.\n\nmu_y = 1.5\nsigma_y = 1.\n\nmu_z = 1.5\nsigma_z = 1.\n\n# Grid parameters\nx_amplitude = 5.\np_amplitude = 6. # With the traditional method p amplitude is fixed to: 2 * np.pi /( 2*x_amplitude ) \n\ndx = 2*x_amplitude/float(gridDIM) # This is dx in Bailey's paper\ndp = 2*p_amplitude/float(gridDIM) # This is gamma in Bailey's paper\n\ndelta = dx*dp/(2*np.pi)\n\nx_range = np.linspace( -x_amplitude, x_amplitude-dx, gridDIM) \np = np.linspace( -p_amplitude, p_amplitude-dp, gridDIM) \n\nx = x_range[ np.newaxis, np.newaxis, : ] \ny = x_range[ np.newaxis, :, np.newaxis ] \nz = x_range[ :, np.newaxis, np.newaxis ] \n\nf = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y)*Gaussian(z,mu_z,sigma_z)\n\nplt.imshow( f[:, :, 0], extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.1, '$W$' , **axis_font)\nplt.colorbar()\n\n#plt.ylim(0,0.44)\n\n\nprint ' Amplitude x = ',x_amplitude\nprint ' Amplitude p = ',p_amplitude\nprint ' '\n\nprint 'mu_x = ', mu_x\nprint 'mu_y = ', mu_y\nprint 'mu_z = ', mu_z\nprint 'sigma_x = ', sigma_x\nprint 'sigma_y = ', sigma_y\nprint 'sigma_z = ', sigma_z\nprint ' '\n\nprint 'n = ', x.size\nprint 'dx = ', dx\nprint 'dp = ', dp\nprint ' standard fft dp = ',2 * np.pi /( 2*x_amplitude ) , ' '\nprint ' '\nprint 'delta = ', delta\n\nprint ' '\n\nprint 'The Gaussian extends to the numerical error in single precision:' \nprint ' min = ', np.min(f)", "$W$ TRANSFORM FROM AXES-0\nAfter the transfom, f_gpu[:, :32, :] contains real values and f_gpu[:, 32:, :] contains imaginary values. f33_gpu contains the 33th. complex values", "# Matrix for the 33th. 
complex values\n\nf33 = np.zeros( [64, 1 ,64], dtype = np.complex64 )\n\n# Copy to GPU\n\nif 'f_gpu' in globals():\n f_gpu.gpudata.free()\n \nif 'f33_gpu' in globals():\n f33_gpu.gpudata.free()\n\nf_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )\nf33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )", "Forward Transform", "# Executing FFT\n\nt_init = time.time() \n\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeR2C, axesSplit_0 )\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_0 )\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes2, axes2, makeC2C, axesSplit_0 )\n\nt_end = time.time() \n\nprint 'computation time = ', t_end - t_init\n\nplt.imshow( np.append( f_gpu.get()[:, :32, :], f33_gpu.get().real, axis=1 )[32,:,:]\n /float(np.sqrt(size)), \n extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] )\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.2, '$Re \\\\mathcal{F}(W)$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(0 , x_amplitude)\n\nplt.imshow( np.append( f_gpu.get()[:, 32:, :], f33_gpu.get().imag, axis=1 )[32,:,:]\n /float(np.sqrt(size)), \n extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] )\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.2, '$Im \\\\mathcal{F}(W)$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(0 , x_amplitude)", "Inverse Transform", "# Executing iFFT\n\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes2, axes2, makeC2C, axesSplit_0 )\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_0 )\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2R, axesSplit_0 )\n\nplt.imshow( f_gpu.get()[32,:,:]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{xy}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( f_gpu.get()[:,32,:]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{xz}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( f_gpu.get()[:,:,32]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{yz}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)", "$W$ TRANSFORM FROM AXES-1\nAfter the transform, f_gpu[:, :, :32] contains real values and f_gpu[:, :, 32:] contains imaginary values (matching the slices used in the plotting code below). f33_gpu contains the 33rd complex values", "# Matrix for the 33rd 
complex values\n\nf33 = np.zeros( [64, 64, 1], dtype = np.complex64 )\n\n# One gpu array.\n\nif 'f_gpu' in globals():\n f_gpu.gpudata.free()\n \nif 'f33_gpu' in globals():\n f33_gpu.gpudata.free()\n\nf_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )\nf33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )", "Forward Transform", "# Executing FFT\n\nt_init = time.time() \n\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeR2C, axesSplit_1 )\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_1 )\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes2, axes2, makeC2C, axesSplit_1 )\n\nt_end = time.time() \n\nprint 'computation time = ', t_end - t_init\n\nplt.imshow( np.append( f_gpu.get()[:, :, :32], f33_gpu.get().real, axis=2 )[32,:,:]\n /float(np.sqrt(size)), \n extent=[-p_amplitude , 0, -p_amplitude , p_amplitude-dp] )\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.2, '$Re \\\\mathcal{F}(W)$', **axis_font )\n\nplt.xlim(-x_amplitude , 0)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( np.append( f_gpu.get()[:, :, 32:], f33_gpu.get().imag, axis=2 )[32,:,:]\n /float(np.sqrt(size)), \n extent=[-p_amplitude , 0, -p_amplitude , p_amplitude-dp] )\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.2, '$Im \\\\mathcal{F}(W)$', **axis_font )\n\nplt.xlim(-x_amplitude , 0)\nplt.ylim(-x_amplitude , x_amplitude-dx)", "Inverse Transform", "# Executing iFFT\n\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes2, axes2, makeC2C, axesSplit_1 )\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_1 )\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2R, axesSplit_1 )\n\nplt.imshow( f_gpu.get()[32,:,:]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{xy}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( f_gpu.get()[:,32,:]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{xz}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( f_gpu.get()[:,:,32]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{yz}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)", "$W$ TRANSFORM FROM AXES-2\nAfter the transform, f_gpu[:32, :, :] contains real values and f_gpu[32:, :, :] contains imaginary values (matching the slices used in the plotting code below). f33_gpu contains the 33rd complex values", "# Matrix for the 33rd 
complex values\n\nf33 = np.zeros( [1, 64, 64], dtype = np.complex64 )\n\n# One gpu array.\n\nif 'f_gpu' in globals():\n f_gpu.gpudata.free()\n \nif 'f33_gpu' in globals():\n f33_gpu.gpudata.free()\n\nf_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )\nf33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )", "Forward Transform", "# Executing FFT\n\nt_init = time.time() \n\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes2, axes2, makeR2C, axesSplit_2 )\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_2 )\ncuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_2 )\n\nt_end = time.time() \n\nprint 'computation time = ', t_end - t_init\n\nplt.imshow( np.append( f_gpu.get()[:32, :, :], f33_gpu.get().real, axis=0 )[:,:,32]\n /float(np.sqrt(size)), \n extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] )\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.2, '$Re \\\\mathcal{F}(W)$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(0 , x_amplitude-dx)\n\nplt.imshow( np.append( f_gpu.get()[32:, :, :], f33_gpu.get().imag, axis=0 )[:,:,32]\n /float(np.sqrt(size)), \n extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] )\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( 0., 5.2, '$Im \\\\mathcal{F}(W)$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(0 , x_amplitude-dx)", "Inverse Transform", "# Executing iFFT\n\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_2 )\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_2 )\ncuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes2, axes2, makeC2R, axesSplit_2 )\n\nplt.imshow( f_gpu.get()[32,:,:]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{xy}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( f_gpu.get()[:,32,:]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{xz}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)\n\nplt.imshow( f_gpu.get()[:,:,32]/float(size) ,\n extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )\n\n\nplt.colorbar()\n\naxis_font = {'size':'24'}\nplt.text( -1, 5.2, '$W_{yz}$', **axis_font )\n\nplt.xlim(-x_amplitude , x_amplitude-dx)\nplt.ylim(-x_amplitude , x_amplitude-dx)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-mh/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: NORESM2-MH\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:24\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
shinys825/HYStudy
scripts/[HYStudy 16th] Matplotlib 3.ipynb
mit
[ "# import libraries\nimport matplotlib.pylab as plt\nimport numpy as np\nimport seaborn as sns\n\nsns.set(palette='hls', font_scale=1.5)", "Various Bar Charts", "# set variables name\nproduct = ('Burger', 'Pizza', 'Coke', 'Fry')\n\n# set index(will be labeld to be 'product')\np_range = np.arange(len(product))\n\n# sales range\nsales = 10 * np.random.rand(len(product))\n# error range\nerror = 0.5 * np.random.rand(len(product))\n\n# xeer: error(on x axis) range, alpha: opacity\nplt.barh(p_range, sales, xerr=error, alpha=0.6)\nplt.yticks(p_range, product)\nplt.xlabel('Sales(million $)')\nplt.ylabel('Products')\nplt.show()\n\n# the number of bar chart groups\nn_groups = 4\n\n# sales and std range on '15\nsales_15 = 10 * np.random.rand(len(product))\nstd_15 = 0.5 * np.random.rand(len(product))\n\n# sales and std range on '16\nsales_16 = 15 * np.random.rand(len(product))\nstd_16 = 0.8 * np.random.rand(len(product))\n\n# index, bar_width, opacity\nindex = np.arange(n_groups)\nbar_width = 0.4\nopacity = 0.6\n\n# error bar color\nerror_config = {'ecolor' : '0.6'}\n\n\n# sales_15 plot\nplt.bar(index, sales_15, bar_width, alpha=opacity,\n yerr=std_15, error_kw=error_config,\n label='Sales on \\'15')\n\n# sales_16 plot\n## sales_16 plot will be placed on x axis(index + bar_width')\nplt.bar(index+bar_width, sales_16, bar_width, alpha=opacity,\n yerr=std_16, error_kw=error_config,\n label='Sales on \\'16')\n'''\n# stacked bar chart\n## on x axis(index), can stack bar chart with 'bottom' arg.\nplt.bar(index, sales_16, bar_width, alpha=opacity,\n yerr=std_16, error_kw=error_config,\n bottom=sales_15 # set the bottom plot\n label='Sales on \\'16')\n'''\n\nplt.xlabel('Product')\nplt.ylabel('Sales(million $)')\nplt.title('Product Sales on 2016 and 2017')\n\n# set the label position on between two plots\nplt.xticks(index+bar_width/2, product)\n\nplt.legend()\nplt.tight_layout()\n\nplt.show()", "Scatter plot\n\nDoc: https://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter", "x = np.random.randn(100)\ny = np.random.randn(100)\n\n# make points on coordinate(x, y)\nplt.scatter(x, y)\nplt.show()\n\n# make points on coordinate(x, y) with style\nplt.scatter(x, y,\n s=np.random.randint(10, 500, 100), # size\n c=np.random.randn(100), # color\n edgecolors='black') # edge color\nplt.show()", "Imshow\n\nDoc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.imshow", "# display image with array\nx = np.random.rand(5, 5)\nprint(x)\n\nplt.imshow(x)\nplt.grid(False) # off grid display\nplt.show()", "subplot_kw: call args from add_subplot()\nadd_subplot(): https://matplotlib.org/api/figure_api.html?highlight=add_subplot#matplotlib.figure.Figure.add_subplot", "# various method\nmethods = [None, 'none', 'nearest', 'bilinear', 'bicubic',\n 'spline16', 'spline36', 'hanning', 'hamming',\n 'hermite', 'kaiser', 'quadric', 'catrom',\n 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos']\n\nfig, axes = plt.subplots(3, 6,\n subplot_kw={'xticks':[], 'yticks':[]})\n\n# axes.flat: returns the axes as 1-dimensional(flat) array\nfor ax, method in zip(axes.flat, methods):\n ax.imshow(x, interpolation=method)\n ax.grid(False)\n ax.set_title(method)\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MATH497project/MATH497-DiabeticRetinopathy
data_aggregation/encounter_data_entities.ipynb
mit
[ "import pandas as pd\nfrom pprint import pprint\nimport json\nimport numpy as np\n\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\npd.set_option('display.width', 1000)\n\n# ICD_list table must be re-built from, presumably, ICD_for_Enc due to some entries being\n# pre-18th birthday. ICD_list entries are not timestamped!\ntable_names = ['all_encounter_data', 'demographics', 'encounters', 'family_hist_for_Enc',\n 'family_hist_list', 'ICD_for_Enc', 'ICD_list', 'macula_findings_for_Enc',\n 'SL_Lens_for_Enc', 'SNOMED_problem_list', 'systemic_disease_for_Enc', 'systemic_disease_list']\n\nperson_data = ['demographics','family_hist_list', 'systemic_disease_list', 'SNOMED_problem_list']\n\nencounter_data = ['all_encounter_data', 'encounters', 'family_hist_for_Enc', 'ICD_for_Enc', 'macula_findings_for_Enc',\n 'SL_Lens_for_Enc', 'systemic_disease_for_Enc']\n\npath = 'E:\\\\anil\\\\IIT Sop\\\\Term02\\\\MATH497\\\\ICO_data\\\\original_pickle\\\\'\n\n# read tables into dataframes\ndfs = [ pd.read_pickle(path + name + '.pickle') if name != 'ICD_list' else None\n for name in table_names ]\n\n# rename columns in all dataframes to avoid unicode decode error\nfor df in dfs:\n if df is not None:\n df.columns = [col.decode(\"utf-8-sig\") for col in df.columns]\n\n# aggregate encounter nbrs under person number from tables with encounter numbers\nencounter_key = 'u'Enc_Nbr'\nfor df in dfs:\n if df is not None:\n print(df.columns.values)", "Grouping all encounter nbrs under respective person nbr", "encounter_key = 'Enc_Nbr'\nperson_key = 'Person_Nbr'\nencounters_by_person = {}\nfor df in dfs:\n if df is not None:\n df_columns =set(df.columns.values)\n if encounter_key in df_columns and person_key in df_columns:\n for row_index, dfrow in df.iterrows():\n rowdict = dict(dfrow)\n person_nbr = rowdict[person_key]\n encounter_nbr = rowdict[encounter_key]\n encounters_by_person.setdefault(person_nbr, set()).add(encounter_nbr)\n\nfor person_nbr in encounters_by_person:\n if len(encounters_by_person[person_nbr])>5:\n pprint(encounters_by_person[person_nbr])\n break", "Now grouping other measurements and properties under encounter_nbrs", "encounter_key = 'Enc_Nbr'\n# columns_to_ignore = [u'Person_ID', u'Person_Nbr', u'Enc_ID', u'Enc_Nbr', u'Enc_Date']\ndata_by_encounters = {}\ndata_by_encounters_type = {}\nfor df_index, df in enumerate(dfs):\n df_name = table_names[df_index]\n print df_name\n data_by_encounters[df_name] = {}\n if df is not None:\n df_columns =set(df.columns.values)\n if encounter_key in df_columns:\n # check if encounter is primary key in the table\n if len(df) == len(df[encounter_key].unique()):\n data_by_encounters_type[df_name] = 'single'\n for row_index, dfrow in df.iterrows():\n rowdict = dict(dfrow)\n \n for k, v in rowdict.iteritems():\n if isinstance(v, pd.tslib.Timestamp):\n rowdict[k] = v.toordinal()\n \n encounter_nbr = rowdict[encounter_key]\n data_by_encounters[df_name][encounter_nbr] = rowdict\n else:\n data_by_encounters_type[df_name] = 'list'\n for row_index, dfrow in df.iterrows():\n rowdict = dict(dfrow)\n for k, v in rowdict.iteritems():\n if isinstance(v, pd.tslib.Timestamp):\n rowdict[k] = v.toordinal()\n encounter_nbr = rowdict[encounter_key]\n data_by_encounters[df_name].setdefault(encounter_nbr, []).append(rowdict)", "Aggregating encounter entities under respective person entity", "all_persons = []\nfor person_nbr in encounters_by_person:\n person_object = {person_key:person_nbr, 'encounter_objects':[]}\n 
for enc_nbr in encounters_by_person[person_nbr]:\n encounter_object = {encounter_key: enc_nbr}\n for df_name in data_by_encounters_type:\n if enc_nbr in data_by_encounters[df_name]:\n encounter_object[df_name] = data_by_encounters[df_name][enc_nbr]\n if data_by_encounters_type[df_name] !=\"single\":\n encounter_object[df_name+\"_count\"] = len(data_by_encounters[df_name][enc_nbr])\n person_object['encounter_objects'].append(encounter_object)\n\n all_persons.append(person_object)\n\n# checking for aggregation consistency\nn = 0\nfor person in all_persons:\n person_nbr=person[person_key]\n for enc_obj in person['encounter_objects']:\n enc_nbr=enc_obj[encounter_key]\n for df_name in data_by_encounters_type:\n if data_by_encounters_type[df_name] == \"single\":\n if df_name in enc_obj:\n if person_key in enc_obj[df_name]:\n if person_nbr != enc_obj[df_name][person_key]:\n print \"Person nbr does not match\", person_nbr, enc_nbr, df_name\n if encounter_key in enc_obj[df_name]:\n if enc_nbr != enc_obj[df_name][encounter_key]:\n print \"Encounter nbr does not match\", person_nbr, enc_nbr, df_name\n \n else:\n if df_name in enc_obj:\n for rp_index, repeated_property in enumerate(enc_obj[df_name]):\n if person_key in repeated_property:\n if person_nbr != repeated_property[person_key]:\n print \"Person nbr does not match\", person_nbr, enc_nbr, df_name, rp_index\n if encounter_key in repeated_property:\n if enc_nbr != repeated_property[encounter_key]:\n print \"Encounter nbr does not match\", person_nbr, enc_nbr, df_name, rp_index\n \n \n# n+=1\n# if n>2:break", "Dropping duplicated columns and then full na rows across tables", "with open('20170224_encounter_objects_before_duplicate_fields_drop.json', 'w') as fh:\n json.dump(all_persons, fh)\n\n# drop repeated columns in nested fields except from table \"encounters\"\n\n\ncolumns_to_drop = ['Enc_ID', 'Enc_Nbr', 'Enc_Date', 'Person_ID', 'Person_Nbr','Date_Created', 'Enc_Timestamp']\n\n\nfor person_index in range(len(all_persons)):\n \n for enc_obj_index in range(len(all_persons[person_index]['encounter_objects'])):\n \n enc_obj = all_persons[person_index]['encounter_objects'][enc_obj_index]\n \n for df_name in data_by_encounters_type:\n if data_by_encounters_type[df_name] == \"single\":\n if df_name in enc_obj and df_name!='encounters':\n for column_to_drop in columns_to_drop:\n try:\n del enc_obj[df_name][column_to_drop]\n except:\n pass\n \n else:\n if df_name in enc_obj and df_name!='encounters':\n for rp_index in range(len(enc_obj[df_name])):\n for column_to_drop in columns_to_drop:\n try:\n del enc_obj[df_name][rp_index][column_to_drop]\n except:\n pass\n \n \n all_persons[person_index]['encounter_objects'][enc_obj_index] = enc_obj\n\n# drop full na object rows\n# !does not seem to be working!!\n\nfor person_index in range(len(all_persons)):\n \n for enc_obj_index in range(len(all_persons[person_index]['encounter_objects'])):\n enc_obj = all_persons[person_index]['encounter_objects'][enc_obj_index]\n for df_name in data_by_encounters_type:\n if data_by_encounters_type[df_name] == \"single\":\n if df_name in enc_obj:\n if all(pd.isnull(enc_obj[df_name].values())):\n enc_obj[df_name] = float('nan')\n else:\n if df_name in enc_obj:\n for rp_index in reversed(range(len(enc_obj[df_name]))):\n if all(pd.isnull(enc_obj[df_name][rp_index].values())):\n del enc_obj[df_name][rp_index]\n \n all_persons[person_index]['encounter_objects'][enc_obj_index] = enc_obj\n\nwith open('20170224_encounter_objects.json', 'w') as fh:\n json.dump(all_persons, fh)\n\n# 
creating a dataframe from aggregated data\ncombined_ecounters_df = pd.DataFrame.from_dict({(person_obj[person_key],enc_obj[encounter_key]): enc_obj\n for person_obj in all_persons\n for enc_obj in person_obj['encounter_objects']},\n orient='index')\n\ncombined_ecounters_df.head(10)\n\ncombined_ecounters_df.loc[89,'family_hist_for_Enc']" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
liganega/Gongsu-DataSci
ref_materials/excs/Lab-03.ipynb
gpl-3.0
[ "Training\ntraining3.py 파일에 아래에 예제들에서 설명되는 함수들을 정의하라.\n예제 1\n인자로 x 라디안(호도, radian)을 입력받아 각도(degree)로 계산하여 되돌려주는 함수 degree(x)를 정의하라. \n`degree(x) = (x * 360) / (2 * pi)`\n\n여기서 pi는 원주율을 나타내며, 라디안(호오) 설명은 아래 사이트 참조.\nhttps://namu.wiki/w/%EB%9D%BC%EB%94%94%EC%95%88\n활용 예:\nIn [ ]: degree(math.pi)\nOut[ ]: 180.0", "import math # math 모듈을 임포트해야 pi 값을 사용할 수 있다.\n\ndef degree(x):\n return (x *360.0) / (2 * math.pi)\n\ndegree(math.pi)", "예제 2\n리스트 자료형 xs를 입력받아 리스트 내의 값들의 최소값 xmin과 최대값 xmax 계산하여 순서쌍 (xmin, xmax) 형태로 되돌려주는 함수 min_max(xs)를 정의하라.\n활용 예:\nIn [ ]: min_max([0, 1, 2, 10, -5, 3])\nOut[ ]: (-5, 10)", "def min_max(xs):\n return (min(xs), max(xs))\n\n# 튜플을 이용하여 최소값과 최대값을 쌍으로 묶어 리턴하였다.\n# 따라서 리턴값을 쪼개어 사용할 수도 있다.\n\na, b = min_max([0, 1, 2, 10, -5, 3])\na", "min과 max 함수는 모든 시퀀스 자료형에 활용할 수 있는 함수들이다.", "min((1, 20))", "파이썬에서 다루는 모든 값과 문자들을 비교할 수 있다. 많은 예제들을 테스하면서 순서에 대한 감을 익힐 필요가 있다.", "max(\"abcABC + $\")\n\nmin(\"abcABC + $\")\n\nmax([1, 1.0, [1], (1.0), [[1]]])\n\nmin([1, 1.0, [1], (1.0), [[1]]])", "예제 3\n리스트 자료형 xs를 입력받아 리스트 내의 값들의 기하평균을 되돌려주는 함수 geometric_mean(xs)를 정의하라.\n기하평균에 대한 설명은 아래 사이트 참조할 것.\nhttps://ko.wikipedia.org/wiki/%EA%B8%B0%ED%95%98_%ED%8F%89%EA%B7%A0\n활용 예:\nIn [ ]: geometric_mean([1, 2])\nOut[ ]: 1.4142135623730951", "def geometric_mean(xs):\n g = 1.0\n for m in xs:\n g = g * m\n return g ** (1.0/len(xs))\n\ngeometric_mean([1,2])", "연습문제\n아래 연습문제들에서 사용되는 함수들을 lab3.py 파일로 저장하라.\n연습문제 1\n다음 조건을 만족시키는 함수 swing_time(L) 함수를 정의하라.\n길이가 L인 진자(pendulum)가 한 번 왔다갔다 하는 데에 걸리는 시간(주기, 초단위)을 계산하여 되돌려 준다. \n진자와 주기 관련해서 아래 사이트 참조.\nhttps://ko.wikipedia.org/wiki/%EC%A7%84%EC%9E%90\n활용 예:\nIn [ ]: swing_time(1)\nOut[ ]: 2.0060666807106475", "g = 9.81\n\ndef swing_time(L):\n return 2 * math.pi * math.sqrt(L / g)\n\nswing_time(1)", "연습문제 2\n음수가 아닌 정수 n을 입력 받아 아래 형태의 리스트를 되돌려주는 range_squared(n) 함수를 정의하라.\n[0, 1, 4, 9, 16, 25, ..., (n-1)** 2]\n\nn=0인 경우에는 비어있는 리스트를 리턴한다. \n활용 예:\nIn [ ]: range_squared(3)\nOut[ ]: [0, 1, 4]", "def range_squared(n):\n L = []\n for index in range(n):\n L.append(index ** 2)\n return L\n\nrange_squared(3)", "연습문제 3\n시퀀스 자료형 seq가 주어졌을 때 element 라는 값이 seq에 몇 번 나타나는지를 알려주는 함수 count(element, seq)를 정의하라.\n활용 예:\nIn [ ]: count('dog',['dog', 'cat', 'mouse', 'dog'])\nOut[ ]: 2\n\nIn [ ]: count(2, range(5))\nOut[ ]: 1", "def count(element, seq):\n return seq.count(element)\n\ncount(2, range(5))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
drphilmarshall/SpaceWarps
analysis/make_lens_catalog.ipynb
mit
[ "Run this notebook to produce the cutout catalogs!\nPotential TODO: Write code for creating the pickles?\nPotential TODO: Write code for downloading all the fields in advance?\nCreate the annotated csv catalog", "import pandas as pd\nimport swap\n\nbase_collection_path = '/nfs/slac/g/ki/ki18/cpd/swap/pickles/15.09.02/'\nbase_directory = '/nfs/slac/g/ki/ki18/cpd/swap_catalog_diagnostics/'\nannotated_catalog_path = base_directory + 'annotated_catalog.csv'\ncut_empty = True\n\n\nstages = [1, 2]\ncategories = ['ID', 'ZooID', 'location', 'mean_probability', 'category', 'kind', 'flavor', \n 'state', 'status', 'truth', 'stage', 'line']\nannotation_categories = ['At_X', 'At_Y', 'PD', 'PL']\n\ncatalog = []\nfor stage in stages:\n print(stage)\n collection_path = base_collection_path + 'stage{0}'.format(stage) + '/CFHTLS_collection.pickle'\n collection = swap.read_pickle(collection_path, 'collection')\n for ID in collection.list():\n\n subject = collection.member[ID]\n catalog_i = []\n\n # for stage1 we shall skip the tests for now\n if (stage == 1) * (subject.category == 'test'):\n continue\n\n # flatten out x and y. also cut out empty entries\n annotationhistory = subject.annotationhistory\n x_unflat = annotationhistory['At_X']\n x = np.array([xi for xj in x_unflat for xi in xj])\n\n # cut out catalogs with no clicks\n if (len(x) < 1) and (cut_empty):\n continue\n # oh yeah there's that absolutely nutso entry with 50k clicks\n if len(x) > 10000:\n continue\n\n for category in categories:\n if category == 'stage':\n catalog_i.append(stage)\n elif category == 'line':\n catalog_i.append(line)\n else:\n catalog_i.append(subject.__dict__[category])\n for category in annotation_categories:\n catalog_i.append(list(annotationhistory[category]))\n\n catalog.append(catalog_i)\ncatalog = pd.DataFrame(catalog, columns=categories + annotation_categories)\n\n# save catalog\ncatalog.to_csv(annotated_catalog_path)", "Create the knownlens catalog", "knownlens_dir = '/nfs/slac/g/ki/ki18/cpd/code/strongcnn/catalog/knownlens/'\nknownlensID = pd.read_csv(knownlens_dir + 'knownlensID', sep=' ')\nlistfiles_d1_d11 = pd.read_csv(knownlens_dir + 'listfiles_d1_d11.txt', sep=' ')\nknownlenspath = knownlens_dir + 'knownlens.csv'\n\nX2 = listfiles_d1_d11[listfiles_d1_d11['CFHTID'].isin(knownlensID['CFHTID'])] # cuts down to like 212 entries.\n\nZooID = []\n\nfor i in range(len(Y)):\n ZooID.append(X2['ZooID'][X2['CFHTID'] == knownlensID['CFHTID'][i]].values[0])\n\nknownlensID['ZooID'] = ZooID\n\nknownlensID.to_csv(knownlenspath)", "Convert the annotated catalog and knownlens catalog into cluster catalogs and cutouts", "# code to regenerate the catalogs\nbase_directory = '/nfs/slac/g/ki/ki18/cpd/swap_catalog_diagnostics/'\ncluster_directory = base_directory\n\n## uncomment this line when updating the shared catalog!\n# base_directory = '/nfs/slac/g/ki/ki18/cpd/swap_catalog/'\n# cluster_directory = base_directory + 'clusters/'\n\n\nfield_directory = base_directory\nknownlens_path = base_directory + 'knownlens.csv'\ncollection_path = base_directory + 'annotated_catalog.csv'\ncatalog_path = cluster_directory + 'catalog.csv'\n\n# if we're rerunning this code, we should remove the old cluster pngs,\n# all of which have *_*.png\nfrom glob import glob\nfiles_to_delete = glob(cluster_directory + '*_*.png')\nfrom os import remove\nfor delete_this_file in files_to_delete:\n remove(delete_this_file)\n\n\n# run create catalog code. 
This can take a while.\nfrom subprocess import call\ncommand = ['python', '/nfs/slac/g/ki/ki18/cpd/code/strongcnn/code/create_catalogs.py',\n '--collection', collection_path,\n '--knownlens', knownlens_path,\n '--clusters', cluster_directory,\n '--fields', field_directory,\n #'--augment', augmented_directory,\n #'--do_a_few', '100',\n ]\ncall(command)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
kbase/data_api
examples/notebooks/plot_feature_positions.ipynb
mit
[ "Introduction\nThis notebook shows how to use the Data API to plot the features in a genome. After initialization, this is broken into just a few high-level steps:\n* Load up a workspace (namespace for the data, but also each narrative has its own workspace)\n* Find genomes in the workspace\n* Select one of those genomes\n* Get the feature positions in the selected genome\n* Plot the feature positions\nInitialize", "%matplotlib notebook\nimport seaborn as sns\nimport os\nfrom doekbase import data_api\nfrom doekbase.data_api import display\n\nb = data_api.browse(1013)\ng = b['kb|g.1'].object\ng", "Get genomes from workspace 654", "# Get a \"browser\" for the workspace\nb = data_api.browse(654)\n\n# Get API object for 2nd genome (index 1)\ng1 = b.filter(type_re='KBaseGenomesCondensedPrototypeV2.GenomeAnnotation-.*')[1].object\n\ndisplay.Organism(g1)", "Get feature positions in one of the genomes", "f = display.FeaturePositions(g1)\n\nreload(qgrid)\nqgrid.nbinstall()\nimport pandas as pd\nf2 = pd.DataFrame({'foo': (1,2,3), 'bar': {'a', 'b', 'c'}})\nqgrid.show_grid(f2)", "Plot the features\nA 'stripplot' shows each feature as a dot, with the X coordinate being the start position in the sequence and on the Y axis is each type of feature in the dataset (sorted, by default, alphabetically). This kind of plot doesn't help with any detailed analysis, but it provides a good simple overview of the feature data.\nBecause we did %matplotlib notebook to load matplotlib, we automatically get zooming and panning. In essence, this makes our plat a mini-genome-browser with \"tracks\" for each feature.", "import numpy as np\n\nimport pickle\nf = pickle.load(open('featurepos'))\n\nsns.stripplot(x='start', y='type', marker='.', size=10, data=f)\n\nmax(f['len'])\n\nf.to_pickle('featurepos')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eggie5/UCSD-MAS-DSE230
hmwk1/HW-1.ipynb
mit
[ "HomeWork 1\nUnigrams, bigrams, and in general n-grams are 1,2 or n words that appear consecutively in a single sentence. Consider the sentence:\n\"to know you is to love you.\"\n\nThis sentence contains:\nUnigrams(single words): to(2 times), know(1 time), you(2 times), is(1 time), love(1 time)\nBigrams: \"to know\",\"know you\",\"you is\", \"is to\",\"to love\", \"love you\" (all 1 time)\nTrigrams: \"to know you\", \"know you is\", \"you is to\", \"is to love\", \"to love you\" (all 1 time)\n\nThe goal of this HW is to find the most common n-grams in the text of Moby Dick.\nYour task is to:\n\nConvert all text to lower case, remove all punctuations. (Finally, the text should contain only letters, numbers and spaces)\nCount the occurance of each word and of each 2,3,4,5 - gram\nList the 5 most common elements for each order (word, bigram, trigram...). For each element, list the sequence of words and the number of occurances.\n\nBasically, you need to change all punctuations to a space and define as a word anything that is between whitespace or at the beginning or the end of a sentence, and does not consist of whitespace (strings consisiting of only white spaces should not be considered as words). The important thing here is to be simple, not to be 100% correct in terms of parsing English. Evaluation will be primarily based on identifying the 5 most frequent n-grams in correct order for all values of n. Some slack will be allowed in the values of frequency of ngrams to allow flexibility in text processing. \nThis text is short enough to process on a single core using standard python. However, you are required to solve it using RDD's for the whole process. At the very end you can use .take(5) to bring the results to the central node for printing.\nThe code for reading the file and splitting it into sentences is shown below:", "import findspark\nfindspark.init()\nimport pyspark\nsc = pyspark.SparkContext()\n\ntextRDD = sc.newAPIHadoopFile('Data/Moby-Dick.txt',\n 'org.apache.hadoop.mapreduce.lib.input.TextInputFormat',\n 'org.apache.hadoop.io.LongWritable',\n 'org.apache.hadoop.io.Text',\n conf={'textinputformat.record.delimiter': \"\\r\\n\\r\\n\"}) \\\n .map(lambda x: x[1])\n\n\nsentences=textRDD.flatMap(lambda x: x.split(\". \")).map(lambda x: x.encode('utf-8'))\n\ndef find_ngrams(input_list, n):\n return zip(*[input_list[i:] for i in range(n)])\n\nimport string\nreplace_punctuation = string.maketrans(string.punctuation, ' '*len(string.punctuation))\nsentences.map(lambda x: ' '.join(x.split()).lower())\\\n .map(lambda x: x.translate(None, string.punctuation))\\\n .flatMap(lambda x: find_ngrams(x.split(\" \"), 5))\\\n .map(lambda x: (x,1))\\\n .reduceByKey(lambda x,y: x+y)\\\n .map(lambda x:(x[1],x[0])) \\\n .sortByKey(False)\\\n .take(100)", "Note: For running the file on cluster, change the file path to '/data/Moby-Dick.txt'\nLet freq_ngramRDD be the final result RDD containing the n-grams sorted by their frequency in descending order. Use the following function to print your final output:", "def printOutput(n,freq_ngramRDD):\n top=freq_ngramRDD.take(5)\n print '\\n============ %d most frequent %d-grams'%(5,n)\n print '\\nindex\\tcount\\tngram'\n for i in range(5):\n print '%d.\\t%d: \\t\"%s\"'%(i+1,top[i][0],' '.join(top[i][1]))", "Your output for unigrams should look like:\n```\n============ 5 most frequent 1-grams\nindex count ngram\n1. 40: \"a\"\n2. 25: \"the\"\n3. 21: \"and\"\n4. 16: \"to\"\n5. 
9: \"of\"\n```\nNote: This is just a sample output and does not resemble the actual results in any manner.\nYour final program should generate an output using the following code:", "for n in range(1,6):\n # Put your logic for generating the sorted n-gram RDD here and store it in freq_ngramRDD variable\n freq_ngramRDD = sentences.map(lambda x: x.lower())\\\n .map(lambda x: x.translate(replace_punctuation))\\\n .flatMap(lambda x: find_ngrams(' '.join(x.split()).split(\" \"), n))\\\n .map(lambda x: (x,1))\\\n .reduceByKey(lambda x,y: x+y)\\\n .map(lambda x:(x[1],x[0])) \\\n .sortByKey(False)\n printOutput(n,freq_ngramRDD)\n \n freq_ngramRDD = sentences.map(lambda x: x.lower())\\\n .map(lambda x: x.translate(replace_punctuation))\\\n .flatMap(lambda x: find_ngrams(' '.join(x.split()).split(\" \"), n))\\\n .map(lambda x: (x,1))\\\n .reduceByKey(lambda x,y: x+y)\\\n .map(lambda x:(x[1],x[0]))\\\n .sortByKey(False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
nslatysheva/data_science_blogging
model_optimization/polished_prediction.ipynb
gpl-3.0
[ "Polished prediction: how to tune machine learning models\nIntroduction\nWhen doing machine learning using Python's scikit-learn library, we can often get reasonable model performance by using out-of-the-box settings. However, the payoff can be huge if you invest at least some time into tuning models to your specific problem and dataset. In the previous post, we explored the concepts of overfitting, cross-validation, and the bias-variance tradeoff. These ideas turn out to be central to doing a good job at optimizing the hyperparameters (roughly, the settings) of algorithms. In this post, we will explore the concepts behind hyperparameter optimization and demonstrate the process of tuning and training a random forest classifier.\nYou'll be working with the famous (well, machine learning famous!) spam dataset, which contains loads of NLP-mined features of spam and non-spam emails, like the frequencies of the words \"money\", \"free\" and \"viagra\". Our goal is to tune and apply a random forest to these features in order to predict whether a given email is spam. \nThe steps we'll cover in this blog post can be summarized as follows:\n\nIn the next two posts, you will learn about different strategies for model optimization and how to tune a support vector machine and logistic regression classifier. You will also find out how to take several different tuned models and combine them to build an ensemble model, which is a type of aggregated meta-model that often has higher accuracy and lower overfitting than its constituents.\nLet's get cracking.\nLoading and exploring the dataset\nWe start off by collecting the dataset. It can be found both online and (in a slightly nicer form) in our GitHub repository, so we can just fetch it via wget (note: make sure you first type pip install wget into your terminal since wget is not a preinstalled Python library). It will download a copy of the dataset to your current working directory.", "import wget\nimport pandas as pd\n\n# Import the dataset\ndata_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/spam/spam_dataset.csv'\ndataset = wget.download(data_url)\ndataset = pd.read_csv(dataset, sep=\",\")\n\n# Take a peak at the first few columns of the data\nfirst_5_columns = dataset.columns[0:5]\ndataset[first_5_columns].head()", "You can examine the dimensions of the dataset and the column names:", "# Examine shape of dataset and the column names\nprint (dataset.shape)\nprint (dataset.columns.values)", "Get some summary statistics on the features using describe():", "# Summarise feature values\ndataset.describe()[first_5_columns]", "Now convert the pandas dataframe into a numpy array and isolate the outcome variable you'd like to predict (here, 0 means 'non-spam' and 1 means 'spam'). This is needed to feed the data into a machine learning pipeline:", "import numpy as np\n\n# Convert the dataframe to a numpy array and split the\n# data into an input matrix X and class label vector y\nnpArray = np.array(dataset)\nX = npArray[:,:-1].astype(float)\ny = npArray[:,-1]", "Next up, let's split the dataset into a training and test set. The training set will be used to develop and tune our predictive models. The test will be completely left alone until the very end, at which point you'll run your finished models on it. 
Having a test set will allow you to get a good estimate of how well your models would perform out in the wild on unseen data, which is what you're actually interested in when you model data (see previous post).", "from sklearn.cross_validation import train_test_split\n\n# Split into training and test sets\nXTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)", "You are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to the theory behind classification trees, bagged trees, and random forests. It's worth a read if you have time.\nBriefly, random forests build a collection of classification trees, which each try to classify data points by recursively splitting the data on the features (and feature values) that separate the classes best. Each tree is trained on bootstrapped data, and each bifurcation point is only allowed to 'see' a subset of the available variables when deciding on the best split. So, an element of randomness is introduced when constructing each tree, which means that a variety of different trees are built. The random forest ensembles these base learners together, i.e. it combines these trees into an aggregated model. When making a new prediction, the individual trees each make their individual predictions, and the random forest surveys these opinions and accepts the majority position. This often leads to improved accuracy, generalizability, and stability in the predictions.\nOut of the box, scikit's random forest classifier already performs quite well on the spam dataset:", "from sklearn.ensemble import RandomForestClassifier\nfrom sklearn import metrics\n\nrf = RandomForestClassifier()\nrf.fit(XTrain, yTrain)\n\nrf_predictions = rf.predict(XTest)\n\nprint (metrics.classification_report(yTest, rf_predictions))\nprint (\"Overall Accuracy:\", round(metrics.accuracy_score(yTest, rf_predictions),2))", "This overall accuracy of 0.94-0.96 is extremely good, but keep in mind that such high accuracies are not common in most datasets that you will encounter. Next up, you are going to learn how to pick the best values for the hyperparameters of the random forest algorithm in order to get better models with (hopefully!) even higher accuracy than this baseline.\nBetter modelling through hyperparameter optimization\nWe've glossed over what a hyperparameter actually is. Let's explore the topic now. Often, when setting out to train a machine learning algorithm on your dataset of interest, you must first specify a number of arguments or hyperparameters (HPs). An HP is just a variable that influences the performance of your model, but isn't directly tuned during model training. For example, when using the k-nearest neighbours algorithm to do classification (see these two previous posts), the value of k (the number of nearest neighbours the model considers) is a hyperparameter that must be supplied in advance. As another example, when building a neural network, the number of layers in the network and the number of neurons per layer are both hyperparameters that must be specified before training commences. By contrast, the weights and biases in a neural network are parameters (not hyperparameters) because they are explicitly tuned during training. \nIt turns out that scikit-learn generally provides reasonable hyperparameter default values, such that it is possible to quickly build an e.g. 
kNN classifier by simply typing KNeighborsClassifier() and then fitting it to your data. Behind the scenes, we can get the documentation on what hyperparameter values the classifier has automatically assumed, but you can also examine models directly using get_params:", "from sklearn.neighbors import KNeighborsClassifier\n\n# Create a default kNN classifier and print params\nknn_default = KNeighborsClassifier()\nprint (knn_default.get_params)", "So you see that the default kNN classifier has the number of nearest neighbours it considers set to 5 (n_neighbors=5) and gives all datapoints equal importance (weights=uniform), and so on.\nOften, the default hyperparameter values will do a decent job (as we saw above with the random forest example), so it may be tempting to skip the topic of model tuning completely. However, it is basically always a good idea to do some level of hyperparameter optimization, due to the potential for substantial improvements in your learning algorithm's performance.\nBut how do you know what values to set the hyperparameters to in order to get the best performance from your learning algorithms? \nYou optimize hyperparameters in exactly the way that you might expect - you try different values and see what works best. However, some care is needed when deciding how exactly to measure if certain values work well, and which strategy to use to systematically explore\nhyperparameter space. In a later post, we will introduce model ensembling, in which individual models can be considered 'hyper-hyper parameters' (&trade;; &copy;; &reg;; patent pending; T-shirts printing).\nTuning your random forest\nIn order to build the best possible model that does a good job at describing the underlying trends in a dataset, we need to pick the right HP values. As we mentioned above, HPs are not optimised while an algorithm is learning. Hence, we need other strategies to optimise them. The most basic way to do this would be just to test different possible values for the HPs and see how the model performs. \nIn a random forest, some hyperparameters we can optimise are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better (with diminishing returns), but more trees come at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node. Larger values help if the individual predictors aren't that great. 
Smaller values can be helpful if the features in the dataset are decent and/or highly correlated.\nLet's try out some HP values.", "# manually specifying some HP values\nparameter_combinations = [\n {\"n_estimators\": 5, \"max_features\": 10}, # parameter combination 1...\n {\"n_estimators\": 50, \"max_features\": 40} # 2\n]", "We can manually write a small loop to test out how well the different combinations of these potential HP values fare (later, we'll find out better ways to do this):", "import itertools\n\n# test out different HP combinations\nfor hp_combo in parameter_combinations:\n \n # Train and output accuracies\n rf = RandomForestClassifier(n_estimators=hp_combo[\"n_estimators\"], \n max_features=hp_combo[\"max_features\"])\n \n rf.fit(XTrain, yTrain)\n RF_predictions = rf.predict(XTest)\n print ('When n_estimators is {} and max_features is {}, test set accuracy is {}'.format(\n hp_combo[\"n_estimators\"],\n hp_combo[\"max_features\"], \n round(metrics.accuracy_score(yTest, RF_predictions),2))\n )\n ", "Looks like the second combination of HPs might do better. However, manually searching for the best HPs in this way is not efficient, a bit random, and could potentially lead to models that perform well on this specific dataset, but do not generalise well to new data, which is the important thing. This phenomenon of building models that do not generalise well, or that are fitting too closely to the dataset, is called overfitting. \nHere, you trained different models on the training dataset using manually selected HP values. You then tested on the test dataset. This is not as bad as training a model and evaluating it on the training set, but it is still bad - since you repeatedly evaluated on the test dataset, knowledge of the test set can leak into the model building phase. You are at risk of inadvertently learning something about the test set, and hence are susceptible to overfitting.\nk-fold cross validation for hyperparameter tuning\nSo, you have to be careful not to overfit to your data. But wait, didn't we also say that the test set is not meant to be touched until you are completely done training your model? How are you meant to optimize your hyperparameters then? \nEnter k-fold cross-validation, which is a handy technique for measuring a model's performance using only the training set. k-fold CV is a general method (see an explanation here), and is not specific to hyperparameter optimization, but is very useful for that purpose. We simply try out different HP values, get several different estimates of model performance for each HP value (or combination of HP values), and choose the model with the lowest CV error. The process looks like this: \n\nIn the context of HP optimization, we perform k-fold cross validation together with grid search or randomized search to get a more robust estimate of the model performance associated with specific HP values. \nGrid search\nTraditionally and perhaps most intuitively, scanning for good HP values can be done with the grid search (also called parameter sweep). This strategy exhaustively searches through some manually prespecified HP values and reports the best option. It is common to try to optimize multiple HPs simultaneously - grid search tries each combination of HPs in turn, hence the name. 
This is a more convenient and complete way of searching through hyperparameter space than manually specifying combinations.\nThe combination of grid search and k-fold cross validation is very popular for finding the models with good performance and generalisability. So, in HP optimisation we are actually trying to do two things: (i) find the best possible combination of HPs that define a model and (ii) make sure that the pick generalises well to new data. In order to address the second concern, CV is often the method of choice. Scikit-learn makes this process very easy and slick, and even supports distributing the search in parallel (via the n_jobs argument). \nYou use grid search to tune a random forest like this:", "from sklearn.grid_search import GridSearchCV, RandomizedSearchCV\n\n# Search for good hyperparameter values\n# Specify values to grid search over\nn_estimators = list(np.arange(10, 50, 15))\nmax_features = list(np.arange(5, X.shape[1], 25))\n\nhyperparameters = {'n_estimators': n_estimators, \n 'max_features': max_features}\n\nprint (hyperparameters)\n\n# Grid search using cross-validation\ngridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)\ngridCV.fit(XTrain, yTrain)\n\n# Identify optimal hyperparameter values\nbest_n_estim = gridCV.best_params_['n_estimators']\nbest_max_features = gridCV.best_params_['max_features'] \n\nprint(\"The best performing n_estimators value is: {:5.1f}\".format(best_n_estim))\nprint(\"The best performing max_features value is: {:5.1f}\".format(best_max_features))\n\n# Train classifier using optimal hyperparameter values\n# We could have also gotten this model out from gridCV.best_estimator_\nrf = RandomForestClassifier(n_estimators=best_n_estim,\n max_features=best_max_features)\n\nrf.fit(XTrain, yTrain)\nRF_predictions = rf.predict(XTest)\n\nprint (metrics.classification_report(yTest, RF_predictions))\nprint (\"Overall Accuracy:\", round(metrics.accuracy_score(yTest, RF_predictions),2))", "We now get ~0.96 accuracy. In this case, we did not improve much on the (unrealistic) baseline of 0.94-0.96, but in real life model tuning would usually have a much larger effect. Still, in the context of spam email detection, even this relatively small change would have a large effect on reducing the annoyance of users. How could you try to improve on this result?\nNote that grid search with k-fold CV simply returns the best HP values out of the available options, and is therefore not guaranteed to return a global optimum. It makes sense to choose a diverse collection of possible values that is somewhat centred around an empirically sensible default.\nYou tuned your random forest classifier!\nSo, that was an overview of the concepts and practicalities involved when tuning a random forest classifier. We could also choose to tune various other hyperparameters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of the purity criterion (which are specific formulas for calculating how good or 'pure' the splits we choose are, as judged by how well they separate the classes in our dataset). The two HPs we chose to tune are regarded as the most important. 
Have a look at tuning more than just the n_estimators and max_features HPs and see what happens.\nQuick quiz:\n\n\nHow do you think that altering the n_estimators and max_depth HPs would affect the bias and variance of the random forest classifier?\n\n\nIt is interesting that the random forest performs better with quite low values of max_features on this dataset. What do you think this says about the features in the dataset? \n\n\nTry max_features=1. What does this force the trees in the random forest to do? \n\n\nTo get more of an intuition of how random forests operate, play around with printing the importance of the features with print (rf.feature_importances_) under different conditions and experiment with setting max_depth=0.\n\n\nConclusion\nIn this post, we started with the motivation for tuning machine learning algorithms (i.e. nicer, bigger numbers in your models' performance reports!). You evaluated different candidate models by simple trial and error, as well as by using k-fold cross validation. You then ran your tuned models on the test set. \nIn this post, you were keeping an eye on the accuracy of models in order to optimize hyperparameters, but there are problems for which you might want to maximize something else, like the model's specificity or the sensitivity. For example, if you were doing medical diagnostics and trying to detect a deadly illness, it would be very bad to accidentally label a sick person as healthy (this would be called a \"false negative\" in the classification lingo). Maybe it's not so bad if you misclassify healthy people as sick people (\"false positive\"), since in the worst case you would just annoy people by having them retake the diagnostic test. Hence, you might want your diagnostic model to be weighted towards optimizing sensitivity. Here is a good introduction to sensitivity and specificity which continues with the example of diagnostic tests.\nArguably, in spam email detection, it is worse to misclassify real email as spam (false positive) than to let a few spam emails pass through your filter (false negative) and show up in people's mailboxes. In this case, you might aim to maximize specificity. Of course, you cannot be so focused on improving the specificity of your classifier that you completely tank your sensitivity. There is a natural trade-off between these quantities (see this primer on ROC curves), and part of our job as statistical modellers is to practice the dark art of deciding where to draw the line.\nSometimes there is no model tuning to be done. For example, a Naive Bayes (NB) classifier just operates by calculating conditional probabilities, and there is no real hyperparameter optimization stage. NB is actually a very interesting algorithm that is famous for classifying text documents (and the spam dataset in particular), so if you have time, check out a great overview and Python implementation here. It's a \"naive\" classifier because it rests on the assumption that the features in your dataset are independent, which is often not strictly true. In the spam dataset, you can imagine that the occurrence of the strings \"win\", \"money\", and \"!!!\" is probably not independent. Despite this, NB often still does a decent job at classification tasks. \nIn our next post, we will explore different ways to tune models and optimise a support vector machine and logistic regression classifier. Stay... tuned! Cue groans." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/managed_notebooks/inventory-prediction/inventory_prediction.ipynb
apache-2.0
[ "Inventory prediction on ecommerce data using Vertex AI\nTable of contents\n\nOverview\nDataset\nObjective\nCosts\nLoad the required data from BigQuery\nExplore and analyze the dataset\nFeature preprocessing\nModel building \nTrain the model\nEvaluate the model\nSave the model to a Cloud Storage bucket \nCreate a model in Vertex AI \nCreate an endpoint \nDeploy the model to the created endpoint\nWhat-If Tool \nClean up \n\nOverview\n<a name=\"section-1\"></a>\nThis notebook explores how to build a machine learning model for inventory prediction on an ecommerce dataset. This notebook includes steps for deploying the model on Vertex AI using the Vertex AI SDK and analyzing the deployed model using the What-If Tool.\nNote: This notebook file was designed to run in a Vertex AI Workbench managed notebooks instance using the TensorFlow 2 (Local) kernel. Some components of this notebook may not work in other notebook environments.\nDataset\n<a name=\"section-2\"></a>\nThe dataset used in this notebook consists of inventory data since 2018 for an ecommerce store. This dataset is publicly available as a BigQuery table named looker-private-demo.ecomm.inventory_items, which can be accessed by pinning the looker-private-demo project in BigQuery. The table consists of various fields related to ecommerce inventory items such as id, product_id, cost, when the item arrived at the store, and when it was sold. This notebook makes use of the following fields assuming their purpose is as described below:\n\nid: The ID of the inventory item\nproduct_id: The ID of the product\ncreated_at: When the item arrived in the inventory/at the store\nsold_at: When the item was sold (Null if still unsold)\ncost: Cost at which the item was sold\nproduct_category: Category of the product\nproduct_brand: Brand of the product (dropped later as there are too many values)\nproduct_retail_price: Price of the product\nproduct_department: Department to which the product belonged to\nproduct_distribution_center_id: Which distribution center (an approximation of regions) the product was sold from\n\nThe dataset is encoded to hide any private information. 
For example, the distribution centers have been assigned ID numbers ranging from 1 to 10.\nObjectives\n<a name=\"section-3\"></a>\nThe objectives of this notebook include:\n\nLoad the dataset from BigQuery using the \"BigQuery in Notebooks\" integration.\nAnalyze the dataset.\nPreprocess the features in the dataset.\nBuild a random forest classifier model that predicts whether a product will get sold in the next 60 days.\nEvaluate the model.\nDeploy the model using Vertex AI.\nConfigure and test the What-If Tool.\n\nCosts\n<a name=\"section-4\"></a>\nThis tutorial uses the following billable components of Google Cloud:\n\nVertex AI\nBigQuery\nCloud Storage\n\nLearn about Vertex AI\npricing, BigQuery pricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nBefore you begin\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI needs the trained model to be saved to Cloud Storage bucket for deployment. Using the model artifact, you can then create Vertex AI model and endpoint resources in order to serve the online predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.\nYou may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.", "BUCKET_NAME = \"[your-bucket-name]\"\nBUCKET_URI = f\"gs://{BUCKET_NAME}\"\nREGION = \"[your-region]\"\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP\n BUCKET_URI = \"gs://\" + BUCKET_NAME", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al $BUCKET_NAME", "Tutorial\nImport the required libraries", "import os\n\nimport joblib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport sklearn.metrics as metrics\nfrom google.cloud import storage\nfrom google.cloud.bigquery import Client\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import MinMaxScaler\nfrom witwidget.notebook.visualization import WitConfigBuilder, WitWidget", "Load the required data from BigQuery\n<a name=\"section-5\"></a>\nThe following cell integrates with BigQuery data from the same project through the Vertex AI's \"BigQuery in Notebooks\" integration. It can run an SQL query as it would run in the BigQuery console. \nNote: This feature only works in a notebook running on a Vertex AI Workbench managed-notebook instance.\n@bigquery\nSELECT \n id,\n product_id, \n created_at,\n sold_at,\n cost,\n product_category,\n product_brand,\n product_retail_price,\n product_department,\n product_distribution_center_id\nFROM \nlooker-private-demo.ecomm.inventory_items\nAfter executing the above cell, clicking Query and load as DataFrame button adds the following python cell that loads the queried data into a pandas dataframe.", "# The following two lines are only necessary to run once.\n# Comment out otherwise for speed-up.\nclient = Client()\n\nquery = \"\"\"SELECT \n id,\n product_id, \n created_at,\n sold_at,\n cost,\n product_category,\n product_brand,\n product_retail_price,\n product_department,\n product_distribution_center_id\nFROM \nlooker-private-demo.ecomm.inventory_items\"\"\"\njob = client.query(query)\ndf = job.to_dataframe()", "Explore and analyze the dataset\n<a name=\"section-6\"></a>\nCheck the first five rows of the dataset.", "df.head(5)", "Check the fields in the dataset and their data types and number of null values.", "df.info()", "Apart from the sold_at datetime field, there aren't any fields that consist of null values in the dataset. 
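A quick per-column count makes this explicit; the one-liner below is a small sketch (not part of the original notebook) that reuses the df loaded above:\n# number of missing values per column; only sold_at should be non-zero\ndf.isna().sum()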
As you are dealing with the inventory-item data, it is absolutely plausible that there will be some items that haven't been sold yet and hence the null values.\nNext, convert the date fields to a proper date format to process them in the next steps.", "# convert to proper date columns\ndf[\"created_at\"] = pd.to_datetime(df[\"created_at\"], format=\"%Y-%m-%d\")\ndf[\"sold_at\"] = pd.to_datetime(df[\"sold_at\"].dt.strftime(\"%Y-%m-%d\"))", "Check the date ranges.", "# check the date ranges\nprint(\"Min-sold_at : \", df[\"sold_at\"].min())\nprint(\"Max-sold_at : \", df[\"sold_at\"].max())\n\nprint(\"Min-created_at : \", df[\"created_at\"].min())\nprint(\"Max-created_at : \", df[\"created_at\"].max())", "Extract the month from the date field created_at.", "# calculate the month when the item has arrived\ndf[\"arrival_month\"] = df[\"created_at\"].dt.month", "Calculate the average number of days a product had been in the inventory until it was sold.", "# calculate the number of days the item hasn't been sold.\ndf[\"shelf_days\"] = (df[\"sold_at\"] - df[\"created_at\"]).dt.days", "Calculate the discount percentages that apply to the products.", "# calculate the discount offered\ndf[\"discount_perc\"] = (df[\"product_retail_price\"] - df[\"cost\"]) / df[\n \"product_retail_price\"\n]", "Check the unique products and their brands in the data.", "# check total unique items\ndf[\"product_id\"].unique().shape, df[\"product_brand\"].unique().shape", "The fields product_id and product_brand seem to have a lot of unique values. For the purpose of prediction, use product_id as the primary-key and product_brand is dropped as it has too many values/levels. \nSegregate the required numerical and categorical fields to analyze the dataset.", "categ_cols = [\n \"product_category\",\n \"product_department\",\n \"product_distribution_center_id\",\n \"arrival_month\",\n]\nnum_cols = [\"cost\", \"product_retail_price\", \"discount_perc\", \"shelf_days\"]", "Check the count of individual categories for each categorical field.", "for i in categ_cols:\n print(i, \" - \", df[i].unique().shape[0])", "Check the distribution of the numerical fields.", "df[num_cols].describe().T", "Generate bar plots for categorical fields and histograms and box plots for numerical fields to check their distributions in the dataset.", "for i in categ_cols:\n df[i].value_counts(normalize=True).plot(kind=\"bar\")\n plt.title(i)\n plt.show()\n\nfor i in num_cols:\n _, ax = plt.subplots(1, 2, figsize=(10, 4))\n df[i].plot(kind=\"box\", ax=ax[0])\n df[i].plot(kind=\"hist\", ax=ax[1])\n ax[0].set_title(i + \"-Boxplot\")\n ax[1].set_title(i + \"-Histogram\")\n plt.show()", "Most of the fields like discount, department, distribution center-id have a decent distribution. For the field product_category, there are some categories that don't constitute 2% of the dataset at least. Although there are outliers in some numerical fields, they are exempted from removing as there can be products that are expensive or belonging to a particular category that doesn't often see many sales. \nFeature preprocessing\n<a name=\"section-7\"></a>\nNext, aggregate the data based on suitable categorical fields in the data and take the average number of days it took for the product to get sold. For a given product_id, there can be multiple item id's in this dataset and you want to predict at the product level whether that particular product is going to be sold in the next couple of months. 
You are aggregating the data based on each of the product configurations present in this dataset, like the price, cost, category and the center it is sold from. This way the model can predict whether a product with a given set of properties is going to be sold in the next couple of months.\nFor the number of days it takes a product to sell, take the average of the shelf_days field.", "groupby_cols = [\n \"product_id\",\n \"product_distribution_center_id\",\n \"product_category\",\n \"product_department\",\n \"arrival_month\",\n \"product_retail_price\",\n \"cost\",\n \"discount_perc\",\n]\nvalue_cols = [\"shelf_days\"]\n\n\ndf_prod = df[groupby_cols + value_cols].groupby(by=groupby_cols).mean().reset_index()", "Check the aggregated product level data.", "df_prod.head()", "Look for null values in the data.", "df_prod.isna().sum() / df_prod.shape[0]", "Only the shelf_days field has null values, and they correspond to the product_id's that have no sold items. \nPlot the distribution of the aggregated shelf_days field by generating a box plot.", "df_prod[\"shelf_days\"].plot(kind=\"box\")", "Here, you can see that most of the products are sold within 60 days since they've arrived in the inventory/store. In this tutorial, you will train a machine learning model that predicts the probability of a product being sold within 60 days.\nEncode the categorical fields\nEncode the shelf_days field to generate the target field sold_in_2mnt indicating whether the product was sold in 60 days.", "df_prod[\"sold_in_2mnt\"] = df_prod[\"shelf_days\"].apply(\n lambda x: 1 if x >= 0 and x < 60 else 0\n)\ndf_prod[\"sold_in_2mnt\"].value_counts(normalize=True)", "Segregate the features into variables for model building.", "target = \"sold_in_2mnt\"\ncateg_cols = [\n \"product_category\",\n \"product_department\",\n \"product_distribution_center_id\",\n \"arrival_month\",\n]\nnum_cols = [\"product_retail_price\", \"cost\", \"discount_perc\"]", "Encode the product_department field.", "df_prod[\"product_department\"] = df_prod[\"product_department\"].apply(\n lambda x: 1 if x == \"Women\" else 0\n)", "Encode the rest of the categorical fields for model building.", "# Create dummy variables for each categ.
variable\nfor i in categ_cols:\n ml = pd.get_dummies(df_prod[i], prefix=i + \"_\", drop_first=True)\n df_new = pd.concat([df_prod, ml], axis=1)\n\ndf_new.drop(columns=categ_cols, inplace=True)\ndf_new.shape", "Normalize the numerical fields\nNormalize the fields product_retail_price and cost to the 0-1 scale using Min-Max normalization technique.", "scaler = MinMaxScaler()\nscaler = scaler.fit(df_new[[\"product_retail_price\", \"cost\"]])\ndf_new[[\"product_retail_price_norm\", \"cost_norm\"]] = scaler.transform(\n df_new[[\"product_retail_price\", \"cost\"]]\n)", "Model building\n<a name=\"section-8\"></a>\nTrain the model\n<a name=\"section-9\"></a>\nCollect the required fields from the dataframe.", "cols = [\n \"discount_perc\",\n \"arrival_month__2\",\n \"arrival_month__3\",\n \"arrival_month__4\",\n \"arrival_month__5\",\n \"arrival_month__6\",\n \"arrival_month__7\",\n \"arrival_month__8\",\n \"arrival_month__9\",\n \"arrival_month__10\",\n \"arrival_month__11\",\n \"arrival_month__12\",\n \"product_retail_price_norm\",\n \"cost_norm\",\n]", "Split the data into train(80%) and test(20%) sets.", "X = df_new[cols].copy()\ny = df_new[target].copy()\ntrain_X, test_X, train_y, test_y = train_test_split(\n X, y, train_size=0.8, test_size=0.2, random_state=7\n)", "Create the classifier and fit it on the training data.", "model = RandomForestClassifier(random_state=7, n_estimators=100)\nmodel.fit(train_X[cols], train_y)", "Evaluate the model\n<a name=\"section-10\"></a>\nPredict on the test set and check the accuracy of the model.", "pred_y = model.predict(test_X[cols])\n\n# Calculate the accuracy as our performance metric\naccuracy = metrics.accuracy_score(test_y, pred_y)\nprint(\"Accuracy: \", accuracy)", "Generate the confusion-matrix on the test set.", "confusion = metrics.confusion_matrix(test_y, pred_y)\nprint(f\"Confusion matrix:\\n{confusion}\")\n\nprint(\"\\nNormalized confusion matrix:\")\nfor row in confusion:\n print(row / row.sum())", "The model performance can be stated in terms of specificity (True-negative rate) and sensitivity (True-positive rate). 
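Both rates can also be read off programmatically; the following is a small sketch (not in the original notebook) that assumes the confusion variable computed above and scikit-learn's default [negative, positive] row/column ordering:\ntn, fp, fn, tp = confusion.ravel()\nprint('Specificity (true-negative rate):', tn / (tn + fp))\nprint('Sensitivity (true-positive rate):', tp / (tp + fn))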
In the normalized confusion matrix, the top left value represents the True-negative rate and the bottom right value represents the True-positive rate.\nSave the model to a Cloud Storage bucket\n<a name=\"section-11\"></a>\nNext, save the model to the created Cloud Storage bucket for deployment.", "# save the trained model to a local file \"model.joblib\"\nFILE_NAME = \"model.joblib\"\njoblib.dump(model, FILE_NAME)\n\n# Upload the saved model file to Cloud Storage\nBLOB_PATH = \"inventory_prediction/\"\nBLOB_NAME = os.path.join(BLOB_PATH, FILE_NAME)\n\nbucket = storage.Client().bucket(BUCKET_NAME)\n\nblob = bucket.blob(BLOB_NAME)\nblob.upload_from_filename(FILE_NAME)", "Create a model in Vertex AI\n<a name=\"section-12\"></a>\nSpecify the corresponding model parameters.", "MODEL_DISPLAY_NAME = \"inventory_prediction_model\"\nARTIFACT_GCS_PATH = f\"gs://{BUCKET_NAME}/{BLOB_PATH}\"", "Create a Vertex AI model resource.", "from google.cloud import aiplatform\n\naiplatform.init(project=PROJECT_ID, location=REGION)\n\nmodel = aiplatform.Model.upload(\n display_name=MODEL_DISPLAY_NAME,\n artifact_uri=ARTIFACT_GCS_PATH,\n serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest\",\n)\n\nmodel.wait()\n\nprint(model.display_name)\nprint(model.resource_name)", "Create an Endpoint\n<a name=\"section-13\"></a>\nSet the display name for the endpoint.", "ENDPOINT_DISPLAY_NAME = \"inventory_prediction_endpoint\"", "Create an endpoint resource on Vertex AI.", "endpoint = aiplatform.Endpoint.create(display_name=ENDPOINT_DISPLAY_NAME)\n\nprint(endpoint.display_name)\nprint(endpoint.resource_name)", "Deploy the model to the created Endpoint\n<a name=\"section-14\"></a>\nConfigure the deployment name, machine type, and other parameters for the deployment.", "DEPLOYED_MODEL_NAME = \"inventory_prediction_deployment\"\nMACHINE_TYPE = \"n1-standard-2\"", "Deploy the model to the created endpoint.", "model.deploy(\n endpoint=endpoint,\n deployed_model_display_name=DEPLOYED_MODEL_NAME,\n machine_type=MACHINE_TYPE,\n)\n\nmodel.wait()\n\nprint(\"Model display-name - \", model.display_name)\nprint(\"Model resource-name - \", model.resource_name)\nendpoint.list_models()", "Note the DEPLOYED_MODEL_ID for deleting the deployment during clean up.", "DEPLOYED_MODEL_ID = \"\"", "What-If Tool\n<a name=\"section-15\"></a>\nThe What-If Tool can be used to analyze the model predictions on test data. See a brief introduction to the What-If Tool. In this tutorial, the What-If Tool is configured and run on the model deployed on Vertex AI Endpoints in the previous steps.\nWitConfigBuilder provides the set_ai_platform_model() method to configure the What-If Tool with a model deployed as a version on Ai Platform models. This feature currently supports only Ai Platform but not Vertex AI models. 
Fortunately, there is also an option to pass a custom function for generating predictions through the set_custom_predict_fn() method, where either the locally trained model or a function that returns predictions from a Vertex AI model can be passed.\nPrepare test samples\nSave some samples from the test data for both of the available classes (sold/not-sold) to analyze the model using the What-If Tool.", "# collect some samples for each class-label from the test data\nsample_size = 200\npos_samples = test_y[test_y == 1].sample(sample_size).index\nneg_samples = test_y[test_y == 0].sample(sample_size).index\ntest_samples_y = pd.concat([test_y.loc[pos_samples], test_y.loc[neg_samples]])\ntest_samples_X = test_X.loc[test_samples_y.index].copy()", "Running the What-If Tool on the deployed Vertex AI model\nDefine a function that fetches predictions from the deployed model, then configure the What-If Tool to run it on the created test data.", "# configure the target and class-labels\nTARGET_FEATURE = target\nLABEL_VOCAB = [\"not-sold\", \"sold\"]\n\n# function to return predictions from the deployed Model\n\n\ndef endpoint_predict_sample(instances: list):\n prediction = endpoint.predict(instances=instances)\n preds = [[1 - i, i] for i in prediction.predictions]\n return preds\n\n\n# Combine the features and labels into one array for the What-If Tool\ntest_examples = np.hstack(\n (test_samples_X.to_numpy(), test_samples_y.to_numpy().reshape(-1, 1))\n)\n\n# Configure the WIT with the prediction function\nconfig_builder = (\n WitConfigBuilder(test_examples.tolist(), test_samples_X.columns.tolist() + [target])\n .set_custom_predict_fn(endpoint_predict_sample)\n .set_target_feature(TARGET_FEATURE)\n .set_label_vocab(LABEL_VOCAB)\n)\n\n# run the WIT-widget\nWitWidget(config_builder, height=800)", "Understanding the What-If Tool\nIn the Datapoint editor tab, you can highlight a dot in the result set and ask the What-If Tool to pick the \"nearest counterfactual\". This is a row of data closest to the row of data you selected but with the opposite outcome. Features in the left-hand table are editable and can show what tweaks are needed to get a particular row of data to flip from one outcome to another. For example, altering the discount_perc feature would show how it impacts the prediction. \n<img src=\"images/Datapoint_editor.png\">\nUnder the Performance & Fairness tab, you can slice the prediction results by a second variable. This allows digging deeper and understanding how different segments of the data react to the model's predictions. For example, in the following image, the higher the discount_perc, the fewer the false negatives, and the lower the discount_perc, the more the false positives. \n<img src=\"images/Performance_and_fairness.png\">\nFinally, the Features tab provides an intuitive and interactive way to understand the features present in the data.
Similar to the exploratory data analysis steps performed in this notebook, the What-If Tool provides a visual and statistical description on the features.\n<img src=\"images/features.PNG\">\nClean up\n<a name=\"section-16\"></a>\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\nUndeploy the model.", "endpoint.undeploy(deployed_model_id=DEPLOYED_MODEL_ID)", "Delete the endpoint.", "endpoint.delete()", "Delete the model.", "model.delete()", "Remove the contents of the Cloud Storage bucket.", "! gsutil -m rm -r $BUCKET_URI" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bjshaw/phys202-2015-work
assignments/assignment05/InteractEx01.ipynb
mit
[ "Interact Exercise 01\nImport", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display", "Interact basics\nWrite a print_sum function that prints the sum of its arguments a and b.", "def print_sum(a, b):\n \"\"\"Print the sum of the arguments a and b.\"\"\"\n print(a + b)", "Use the interact function to interact with the print_sum function.\n\na should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1\nb should be an integer slider the interval [-8, 8] with step sizes of 2.", "interact(print_sum, a=(-10.,10.), b=(-8,8,2))\n\nassert True # leave this for grading the print_sum exercise", "Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.", "def print_string(s, length=False):\n \"\"\"Print the string s and optionally its length.\"\"\"\n print(s)\n if length == True:\n print(len(s))", "Use the interact function to interact with the print_string function.\n\ns should be a textbox with the initial value \"Hello World!\".\nlength should be a checkbox with an initial value of True.", "interact(print_string, s=\"Hello World!\", length=False);\n\nassert True # leave this for grading the print_string exercise" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
enbanuel/phys202-2015-work
assignments/assignment05/MatplotlibEx03.ipynb
mit
[ "Matplotlib Exercise 3\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "Contour plots of 2d wavefunctions\nThe wavefunction of a 2d quantum well is:\n$$ \\psi_{n_x,n_y}(x,y) = \\frac{2}{L}\n \\sin{\\left( \\frac{n_x \\pi x}{L} \\right)} \n \\sin{\\left( \\frac{n_y \\pi y}{L} \\right)} $$\nThis is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.\nDefine a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.", "def well2d(x, y, nx, ny, L=1.0):\n \"\"\"Compute the 2d quantum well wave function.\"\"\"\n # YOUR CODE HERE\n psi_x_y = (2/L)*np.sin((nx*np.pi*x)/L)*np.sin((ny*np.pi*y)/L)\n return psi_x_y\n\npsi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)\nassert len(psi)==10\nassert psi.shape==(10,)", "The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:\n\nUse $n_x=3$, $n_y=2$ and $L=0$.\nUse the limits $[0,1]$ for the x and y axis.\nCustomize your plot to make it effective and beautiful.\nUse a non-default colormap.\nAdd a colorbar to you visualization.\n\nFirst make a plot using one of the contour functions:", "# YOUR CODE HERE\nf = plt.figure(figsize=(10,6))\nx = np.linspace(0, 1, 100)\ny = np.linspace(0, 1, 100)\nw, v = np.meshgrid(x,y)\nr = plt.contourf(well2d(w, v, 3, 2), cmap='BuGn')\nplt.title('Wavefunction graph')\ncbar = r.colorbar(ticks=[-1, 0, 1], orientation='horizontal')\n\"\"\"\nColormap Possible values are: OrRd, flag, nipy_spectral, coolwarm, hsv_r, gnuplot2, prism, BrBG, afmhot_r, Spectral, Purples, Blues_r, YlGnBu, bone, summer_r, gnuplot2_r, Paired, YlGn, brg, gray_r, binary, ocean_r, spectral, Pastel2, afmhot, BrBG_r, YlGnBu_r, Set3_r, YlGn_r, binary_r, gist_gray, YlOrBr, Dark2_r, PuBuGn_r, Greys_r, winter, RdPu_r, Dark2, Pastel1, PuOr, RdBu, flag_r, GnBu_r, RdBu_r, copper, Paired_r, cool, brg_r, PRGn_r, PuOr_r, Oranges, gnuplot, Greys, hot_r, cool_r, RdYlBu_r, terrain_r, autumn, BuGn, gnuplot_r, bone_r, RdYlBu, Greens, gist_gray_r, spring_r, seismic, coolwarm_r, gist_earth, Set3, jet, RdYlGn_r, terrain, gist_rainbow_r, gist_ncar, PuBu, BuGn_r, Wistia, RdGy_r, summer, rainbow_r, CMRmap, hsv, Reds_r, YlOrRd_r, pink_r, Set2, YlOrBr_r, gray, BuPu_r, PRGn, Set1_r, rainbow, Spectral_r, gist_heat, spectral_r, RdYlGn, bwr, GnBu, CMRmap_r, gist_stern, copper_r, jet_r, gist_rainbow, PuRd, Pastel1_r, PuRd_r, Accent, Wistia_r, Reds, Greens_r, prism_r, BuPu, Pastel2_r, Purples_r, RdGy, Set2_r, Blues, autumn_r, Set1, pink, Oranges_r, gist_stern_r, ocean, gist_yarg, nipy_spectral_r, PuBuGn, Accent_r, gist_earth_r, spring, PiYG_r, RdPu, cubehelix_r, winter_r, seismic_r, bwr_r, PiYG, PuBu_r, gist_ncar_r, OrRd_r, YlOrRd, cubehelix, gist_yarg_r, hot, gist_heat_r\n\n\n\"\"\"\n\nassert True # use this cell for grading the contour plot", "Next make a visualization using one of the pcolor functions:", "# YOUR CODE HERE\nf = plt.figure(figsize=(10,6))\nplt.pcolormesh(well2d(w, v, 3, 2), cmap='PuBuGn')\nplt.title('Wavefunction graph')\n\nassert True # use this cell for grading the pcolor plot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dennisobrien/PublicNotebooks
fivethirtyeight/2020-05-15 Can You Find the Best Dungeons and Dragons Strategy.ipynb
mit
[ "Can You Find the Best Dungeons & Dragons Strategy?\nThe Riddler - 2020-05-15\n\nThe fifth edition of Dungeons & Dragons introduced a system of “advantage and disadvantage.” When you roll a die “with advantage,” you roll the die twice and keep the higher result. Rolling “with disadvantage” is similar, except you keep the lower result instead. The rules further specify that when a player rolls with both advantage and disadvantage, they cancel out, and the player rolls a single die. Yawn!\nThere are two other, more mathematically interesting ways that advantage and disadvantage could be combined. First, you could have “advantage of disadvantage,” meaning you roll twice with disadvantage and then keep the higher result. Or, you could have “disadvantage of advantage,” meaning you roll twice with advantage and then keep the lower result. With a fair 20-sided die, which situation produces the highest expected roll: advantage of disadvantage, disadvantage of advantage or rolling a single die?\nExtra Credit: Instead of maximizing your expected roll, suppose you need to roll N or better with your 20-sided die. For each value of N, is it better to use advantage of disadvantage, disadvantage of advantage or rolling a single die?\n\nIntuition\nMy intuition says that \"disadvantage of advantage\" is a better strategy. One way to think about it is that \"advantage\" will clearly have a better expected value than than \"disadvantage\", so picking the worse of \"advantage\" rolls will still be selecting from a population with higher expected values.\nAnother way to think about this is what rolls get eliminated. In the \"disadvantage of advantage\", we first eliminate two low rolls, one from each pair, then eliminate the highest from the two remaining \"advantage\" rolls. So we will never select the highest value of the four rolls, nor the lowest of the four rolls (this is true for both \"disadvantage of advantage\" and \"advantage of disadvantage\"), but we will only be selecting the lower of the two remaining if the two pairs of rolls do not overlap in values.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd", "Expected value of \"advantage\" and \"disadvantage\"\nThere is probably a way to figure out the expected value in closed form, but running simulation is just so much easier.\nWe will use np.random.randint to sample rolls from an N-sided dice.", "np.random.randint(low=1, high=20+1, size=1)", "And if we want to simulate two rolls of the die, repeated 4 times, we pass a shape of (5, 2).", "rolls = np.random.randint(low=1, high=20+1, size=(5, 2))\nrolls", "For \"advantage\", we take the max of the two rolls. Running this along axis=1 (running max along the columns) should give us what we want.", "rolls.max(axis=1)", "And finally we can take the mean. 
With a large enough sample size, this will approximate the expected value.\nNow let's wrap this up in methods.", "def expected_value_advantage(n_rolls: int=100, n_sides: int=20) -> float:\n rolls = np.random.randint(low=1, high=n_sides+1, size=(n_rolls, 2))\n return rolls.max(axis=1).mean()\n\nexpected_value_advantage(n_rolls=10**6, n_sides=20)", "We can do something similar for \"disadvantage\".", "def expected_value_disadvantage(n_rolls: int=100, n_sides: int=20) -> float:\n rolls = np.random.randint(low=1, high=n_sides+1, size=(n_rolls, 2))\n return rolls.min(axis=1).mean()\n\nexpected_value_disadvantage(n_rolls=10**6, n_sides=20)", "And as a comparison, we should make sure we understand the expected value of a straight up roll.", "def expected_value_roll(n_rolls: int=100, n_sides: int=20) -> float:\n rolls = np.random.randint(low=1, high=n_sides+1, size=n_rolls)\n return rolls.mean()\n\nexpected_value_roll(n_rolls=10**6, n_sides=20)", "Expected value of \"advantage of disadvantage\"\nWe can take advantage of numpy's n-dimensional arrays by adding another dimension. Whereas previously, rolls[0] represented the first roll of \"advantage\" or \"disadvantage\" (meaning, it was two rolls, then min or max applied), now rolls[0] will represent the first set of \"disadvantage\" rolls.", "rolls = np.random.randint(low=1, high=20+1, size=(5, 2, 2))\nrolls\n\nrolls[0]\n\nrolls[0, 0]", "So again, we want to apply min (for \"disadvantage\") across the last dimension.", "rolls.min(axis=2)", "And for \"advantage\" of these pairs of (\"disadvantage\") rolls, we apply max across the last dimension.", "rolls.min(axis=2).max(axis=1)", "Now we can roll it up in a method and run the simulation.", "def expected_value_advantage_of_disadvantage(n_rolls: int=100, n_sides: int=20) -> float:\n rolls = np.random.randint(low=1, high=n_sides+1, size=(n_rolls, 2, 2))\n return rolls.min(axis=2).max(axis=1).mean()\n\nexpected_value_advantage_of_disadvantage(n_rolls=10**6, n_sides=20)\n\ndef expected_value_disadvantage_of_advantage(n_rolls: int=100, n_sides: int=20) -> float:\n rolls = np.random.randint(low=1, high=n_sides+1, size=(n_rolls, 2, 2))\n return rolls.max(axis=2).min(axis=1).mean()\n\nexpected_value_disadvantage_of_advantage(n_rolls=10**6, n_sides=20)", "So we see that our intuition was correct and that \"disadvantage of advantage\" is a better strategy than \"advantage of disadvantage\".\nExtra credit\nHere we are not just interested in maximizing the expected value, but we want the strategy that is best to roll N or better on the 20-sided die.\nI'm surprised by this question because it suggests that the strategy for maximizing the expected value is not necessarily the best strategy to roll N or better for any given N.\nTo get an understanding of this, we can return distributions instead of mean values.", "def dist_roll(n_rolls: int=100, n_sides: int=20) -> pd.DataFrame:\n rolls = np.random.randint(low=1, high=n_sides+1, size=n_rolls)\n hist, bins = np.histogram(rolls, bins=n_sides, range=(1, n_sides+1), density=True)\n df = pd.DataFrame(data={'roll': hist}, index=(int(x) for x in bins[:-1]))\n df['roll_or_higher'] = df['roll'][::-1].cumsum()\n return df\n\ndist_roll(n_rolls=10**6, n_sides=20)\n\ndef dist_advantage_of_disadvantage(n_rolls: int=100, n_sides: int=20) -> pd.DataFrame:\n rolls = np.random.randint(low=1, high=n_sides+1, size=(n_rolls, 2, 2))\n values = rolls.min(axis=2).max(axis=1)\n hist, bins = np.histogram(values, bins=n_sides, range=(1, n_sides+1), density=True)\n df = 
pd.DataFrame(data={'aod': hist}, index=(int(x) for x in bins[:-1]))\n df['aod_or_higher'] = df['aod'][::-1].cumsum()\n return df\n\ndist_advantage_of_disadvantage(n_rolls=10**6, n_sides=20)\n\ndef dist_disadvantage_of_advantage(n_rolls: int=100, n_sides: int=20) -> pd.DataFrame:\n rolls = np.random.randint(low=1, high=n_sides+1, size=(n_rolls, 2, 2))\n values = rolls.max(axis=2).min(axis=1)\n hist, bins = np.histogram(values, bins=n_sides, range=(1, n_sides+1), density=True)\n df = pd.DataFrame(data={'doa': hist}, index=(int(x) for x in bins[:-1]))\n df['doa_or_higher'] = df['doa'][::-1].cumsum()\n return df\n\ndist_disadvantage_of_advantage(n_rolls=10**6, n_sides=20)\n\ndef plot_strategies(n_rolls=100, n_sides=20):\n df_roll = dist_roll(n_rolls=n_rolls, n_sides=n_sides)\n df_aod = dist_advantage_of_disadvantage(n_rolls=n_rolls, n_sides=n_sides)\n df_doa = dist_disadvantage_of_advantage(n_rolls=n_rolls, n_sides=n_sides)\n df = pd.concat([df_roll, df_aod, df_doa], axis=1)\n ax = df.plot.line(y=['roll_or_higher', 'aod_or_higher', 'doa_or_higher'], figsize=(10,6))\n ax.set_title('Comparative Strategies')\n ax.set_ylim(top=1.0, bottom=0.0)\n ax.set_xlim(left=1, right=n_sides)\n ax.set_xticks(range(1, n_sides+1))\n ax.grid(True, axis='x', alpha=0.5)\n\nplot_strategies(n_rolls=10**6, n_sides=20)", "This chart gives us a few insights:\n\n\"Disadvantage of advantage\" strictly dominates \"advantage of disadvantage\".\n\"Disadvantage of advantage\" beats the \"single roll\" strategy for values of N up to and including 13. But for 14 and higher, it is better to choose the \"single roll\" strategy.\n\"Advantage of disadvantage\" beats the \"single roll\" strategy for values on N up to and including 8. But for 9 and higher, \"single roll\" is a better strategy." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
abulbasar/machine-learning
Telecomm Churn Analysis Using XGBoost.ipynb
apache-2.0
[ "Problem Statement\nIn this example I will build a classifier for churn prediction using a dataset from telecomm industry. You can find the data set in github in the following links.\nhttps://github.com/abulbasar/data/tree/master/Churn%20prediction\nThere are two files \n- churn-bigml-80.csv training data\n- churn-bigml-20.csv test data", "import xgboost as xgb\nimport pandas as pd\nfrom sklearn import *\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "Load the training data", "df_train = pd.read_csv(\"/data/churn-bigml-80.csv\")\ndf_train.head()", "Let's check number of records, number of columns, types of columns and whether the data contains NULL values.\nAs we see it contains 2665 records, 20 columns, and no null values. There are three catagorical values.", "df_train.info()", "Let's check distribution of the output class. As it shows it contains 85% records are negative. It gives a sense of desired accuracy - which is closure to 90% or more.", "df_train.Churn.value_counts()\n\ndf_train.Churn.value_counts()/len(df_train)\n\ndf_train.columns", "Loaded the test data and performed similar analysis as before.", "df_test = pd.read_csv(\"/data/churn-bigml-20.csv\")\ndf_test.info()\n\ndf_test.Churn.value_counts()/len(df_test)\n\nlen(df_test)/len(df_train)", "Sort out of categorical and numeric columns so that it can be passed to pipeline for pre-proceessing steps. In the processing steps, we are doing the following \n- replace any missing numeric values with column median\n- perform standard scaling for numeric values\n- one hot encode the categorical columns\nAlthought the Area Code is numeric, here I am considering this as categorical since it is a qualitative variable in nature.", "cat_columns = ['State', 'Area code', 'International plan', 'Voice mail plan']\nnum_columns = ['Account length', 'Number vmail messages', 'Total day minutes',\n 'Total day calls', 'Total day charge', 'Total eve minutes',\n 'Total eve calls', 'Total eve charge', 'Total night minutes',\n 'Total night calls', 'Total night charge', 'Total intl minutes',\n 'Total intl calls', 'Total intl charge', 'Customer service calls']\n\ntarget = \"Churn\"\nX_train = df_train.drop(columns=target)\ny_train = df_train[target]\nX_test = df_test.drop(columns=target)\ny_test = df_test[target]\n\ncat_pipe = pipeline.Pipeline([\n ('imputer', impute.SimpleImputer(strategy='constant', fill_value='missing')),\n ('onehot', preprocessing.OneHotEncoder(handle_unknown='error', drop=\"first\"))\n]) \n\nnum_pipe = pipeline.Pipeline([\n ('imputer', impute.SimpleImputer(strategy='median')),\n ('scaler', preprocessing.StandardScaler()),\n])\n\npreprocessing_pipe = compose.ColumnTransformer([\n (\"cat\", cat_pipe, cat_columns),\n (\"num\", num_pipe, num_columns)\n])\n\nX_train = preprocessing_pipe.fit_transform(X_train)\nX_test = preprocessing_pipe.transform(X_test)\n\npd.DataFrame(X_train.toarray()).describe()", "Build a basic logistic regression model and decision tree models and check the accuracy. Basic logistic regression model gives accuracy of 85%.", "est = linear_model.LogisticRegression(solver=\"liblinear\")\nest.fit(X_train, y_train)\ny_test_pred = est.predict(X_test)\nest.score(X_test, y_test)\n\nest = tree.DecisionTreeClassifier(max_depth=6)\nest.fit(X_train, y_train)\ny_test_pred = est.predict(X_test)\nest.score(X_test, y_test)", "Print classification report. The report shows that precision and recall score quite poor. Accuracy is 85%. 
The confusion matrix shows a high number of false positives and false negatives.", "print(metrics.classification_report(y_test, y_test_pred))\n\nmetrics.confusion_matrix(y_test, y_test_pred)", "Next, we build a similar model using XGBoost. The performance of this model is slightly better than the logistic regression model.", "eval_sets = [\n (X_train, y_train),\n (X_test, y_test)\n]\n\ncls = xgb.XGBRFClassifier(silent=False, \n scale_pos_weight=1,\n learning_rate=0.1, \n colsample_bytree = 0.99,\n subsample = 0.8,\n objective='binary:logistic', \n n_estimators=100, \n reg_alpha = 0.003,\n max_depth=10, \n gamma=10,\n min_child_weight = 1\n \n )\n\nprint(cls.fit(X_train\n , y_train\n , eval_set = eval_sets\n , early_stopping_rounds = 10\n , eval_metric = [\"error\", \"logloss\"]\n , verbose = True\n ))\nprint(\"test accuracy: \" , cls.score(X_test, y_test))\n\ncls.evals_result()\n\ny_test_pred = cls.predict(X_test)\n\nmetrics.confusion_matrix(y_test, y_test_pred)\n\ny_test_prob = cls.predict_proba(X_test)[:, 1]\ny_test_prob\n\nauc = metrics.roc_auc_score(y_test, y_test_prob)\nauc\n\nftr, tpr, thresholds = metrics.roc_curve(y_test, y_test_prob)\n\nplt.rcParams['figure.figsize'] = 8,8\nplt.plot(ftr, tpr)\nplt.xlabel(\"FPR\")\nplt.ylabel(\"TPR\")\nplt.title(\"ROC, auc: \" + str(auc))", "Cross validate the model\nXGBoost cross validation parameters\n\nnum_boost_round: denotes the number of trees you build (analogous to n_estimators)\nmetrics: tells the evaluation metrics to be watched during CV\nas_pandas: to return the results in a pandas DataFrame.\nearly_stopping_rounds: finishes training of the model early if the hold-out metric (\"auc\" in our case) does not improve for a given number of rounds.\nseed: for reproducibility of results.", "params = { 'objective': \"binary:logistic\"\n , 'colsample_bytree': 0.9\n , 'learning_rate': 0.01\n , 'max_depth': 10\n , 'alpha': 0.5\n , 'min_child_weight': 1\n , 'subsample': 1\n , 'eval_metric': \"auc\"\n , 'n_estimators': 300\n , 'verbose': True\n }\n\ndata_dmatrix = xgb.DMatrix(data=X_train,label=y_train) \n\ncv_results = xgb.cv(dtrain=data_dmatrix\n , params=params\n , nfold=5\n , maximize = \"auc\"\n , num_boost_round=100\n , early_stopping_rounds=10\n , metrics=[\"logloss\", \"error\", \"auc\"]\n , as_pandas=True\n , seed=123\n , verbose_eval=True\n )\n\ncv_results\n\ncv_results[[\"train-error-mean\"]].plot()", "Install graphviz to display the decision graph\n$ conda install graphviz python-graphviz", "plt.rcParams['figure.figsize'] = 50,50\n\nxgb.plot_tree(cls, num_trees=0, rankdir='LR')", "These plots provide insight into how the model arrived at its final decisions and what splits it made to arrive at those decisions.\nNote that if the above plot throws the 'graphviz' error on your system, consider installing the graphviz package via pip install graphviz on cmd. 
You may also need to run sudo apt-get install graphviz on cmd.", "plt.rcParams['figure.figsize'] =15, 15\nxgb.plot_importance(cls, )\n\ncls.feature_importances_\n\none_hot_encoder = preprocessing_pipe.transformers_[0][1].steps[1][1]\none_hot_encoder\n\none_hot_encoder.get_feature_names()\n\npreprocessing_pipe.transformers_[0][1]\n\nparameters = {\n 'max_depth': range (2, 10, 1),\n 'n_estimators': range(60, 220, 40),\n 'learning_rate': [0.1, 0.01, 0.05]\n}\n\n\ncls = xgb.XGBRFClassifier(silent=False, \n scale_pos_weight=1,\n learning_rate=0.01, \n colsample_bytree = 0.99,\n subsample = 0.8,\n objective='binary:logistic', \n n_estimators=100, \n reg_alpha = 0.003,\n max_depth=10, \n gamma=10,\n min_child_weight = 1\n )\n\ngrid_search = model_selection.GridSearchCV(\n estimator=cls,\n param_grid=parameters,\n scoring = 'roc_auc',\n n_jobs = 12,\n cv = 10,\n verbose=True,\n return_train_score=True\n)\n\ngrid_search.fit(X_train, y_train)\n\ngrid_search.best_estimator_\n\ngrid_search.best_params_\n\ngrid_search.best_score_\n\npd.DataFrame(grid_search.cv_results_)\n\nfolds = 5\nparam_comb = 5\n\ncls = xgb.XGBRFClassifier(silent=False, \n scale_pos_weight=1,\n learning_rate=0.01, \n colsample_bytree = 0.99,\n subsample = 0.8,\n objective='binary:logistic', \n n_estimators=100, \n reg_alpha = 0.003,\n max_depth=10, \n gamma=10,\n min_child_weight = 1\n )\n\nskf = model_selection.StratifiedKFold(n_splits=folds, shuffle = True, random_state = 1001)\nrandom_search = model_selection.RandomizedSearchCV(cls, \n param_distributions=parameters, \n n_iter=param_comb, \n scoring='accuracy', \n n_jobs=12, \n cv=skf.split(X_train,y_train), \n verbose=3, \n random_state=1001 )\n\nrandom_search.fit(X_train, y_train)\n\nrandom_search.best_score_, random_search.best_params_\n\npd.DataFrame(random_search.cv_results_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/prog-edu-assistant
exercises/dataframe-pre1-master.ipynb
apache-2.0
[ "import io\n\nimport numpy as np\nimport pandas as pd\n\n# MASTER ONLY\nimport re\n# imports %%solution, %%submission, %%template, %%inlinetest, %%studenttest, %autotest\n%load_ext prog_edu_assistant_tools.magics\nfrom prog_edu_assistant_tools.magics import report, autotest", "lang:enIn this exercise, we will consider what is a data frame and how to represent\nthe data in a \"tidy\" way. We will use the pandas data frame library.\nlang:jaこの講義では、「データフレーム」を使って、データをキレイに(tidy) 表現する方法を説明します。\n本講義では データフレームのためのライブラリとしてpandas を使用します。\nWhat is a CSV format\nlang:en\nThere are many ways to represent the tabular data, spreadsheets being the most popular one among general computer users. However, for the programmatic access, a simpler format may be even more useful.\nIt is easy to generate, even by typing manually, and relatively easy to parse. CSV stands for comma-separated values, so it uses a comma , to separate the values in a single row.\nHere are the rules of the CSV data format:\n\nEvery line has the same number of fields separated by commas. In CSV speak, each line is called a record.\nThe values of fields should not contain commas or newline characters. In the event that comma needs to be a part of the value, the field value should be enclosed in double quotes.\nIf the contents of the field needs to contain double quote character itself, it should be doubled inside.\nThe first line in the file may be a header, i.e. contain the human-readable column names. This is not required, but having a header line makes the data more self-describing, and makes the code to handle them more robust.\n\nTypically the CSV format is used in files with .csv suffix, but Python language makes it easy enough to parse CSV defined directly in the source code in string literals. This is one of the easiest way to define small data frames in Jupyter notebooks. Here is an example. \nCVS形式とは (What is CSV format)\nlang:ja\n表のようなデータを表現できる方法は複数がありますが、プログラムでデータを扱うのために特に使いやすいのはCSV形式です。\nCSV形式は、プログラムによって生成または手動の生成、両方とも簡単で、読み込みも簡単にできます。\nCSVはComma-separated valuesの略で、カンマ区切りという意味です。\nCSV形式のルールは以下です。\n\n各行はカンマで区切っているいくつかの値から成り立っています。一つの値はフィールドといいます。\n各行はフィールドの数は同じです。 一行はレコードといいます。\n値のなかではカンマ、改行、引用符は原則として入りません。\nもしカンマ、改行を入れなければいけない場合、引用符の中に入れます: \"a,b\"\n引用符を入れなければいけない場合は、引用符の中に二重しなければなりません: \"a\"\"b\"\nファイルの最初の一行はヘッダ行を入れることができます。必須ではありませんが、できればあった方がいいです。\n\n普段はCSV形式.csvのファイルとして保存しますが、Pythonでは直接のプログラムへの組み込みも可能です。\n以下の例をご覧ください。", "df2 = pd.read_csv(io.StringIO(\"\"\"\nx,y\n1,2\n3,4\n\"\"\"))\ndf2", "lang:enIn case you are curious, pd.read_csv accepts file-like objects to read the data from, and io.StringIO is way to create a file-like object from a string literal. Triple quotes \"\"\" are a Python syntax that allows to define multi-line string literal.\nlang:ja詳しく見ると、pd.read_csvはファイルのようなものを受け取ります、そしてio.StringIOは文字からファイルのようなオブジェクトを作っています。\nlang:enHere is the example of CSV data that we will use throughout this notebook.\nlang:ja以下では、次のCSV形式のファイルを例に説明していきます。", "with open(\"data/tokyo-weather.csv\") as f:\n [print(next(f), end='') for i in range(5)]", "データフレームとは (What is a data frame)\n```\nASSIGNMENT METADATA\nassignment_id: \"DataFrame1\"\n```\nlang:enA data frame is a table containing the data where every column has a name, and the data within each column has a uniform type (e.g. only numbers or only strings). For example, a standard spreadsheet with a data\ncan often be thought of as a data frame. 
Here is the definition\nof the DataFrame class in pandas library: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame\nLet's look at an example.\nlang:jaデータフレームとは2次元の表形式のデータ(tabular data)を扱うためのデータ構造です。各列は型や名前がついています。列はそれぞれ型が異なってもよいです。\nたとえば、スプレッドシートのデータはデータフレームとしてみることができます。\npandasのライブラリでのDataFrameクラスの定義はこちらを参考にしてください: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame\n例をみてみましょう。", "df = pd.read_csv('data/tokyo-weather.csv')\ndf.head(5)", "lang:enHere, the read_csv call reads the data from a CSV file into a data frame. \n```python\nRead the CSV file into a new data frame.\ndf = pd.read_csv('data/tokyo-weather.csv')\n```\nAnd the df.head(5) call displays the first few lines of the data frame.\n```python\nDisplay the first 5 rows of the data frame.\ndf.head(5)\n```\nlang:jaread_csvはCSV形式のファイルからデータを読み込んでいます。\n```python\nCSV形式のファイルからデータを読み込みます。\ndf = pd.read_csv('data/tokyo-weather.csv')\n```\ndf.head(5)はデータの最初の5つの行を表示します。\n```python\n最初の5つの行を表示します。\ndf.head(5)\n```\nlang:enThe data frame has columns, rows and the cells holding the values. The values in the cells can be numeric (including NaN to represent missing numbers), or they can be string values to represent text data or categorical data, but each column must have a single type. In a data frame, it is possible to address individual\ncolumns or rows of data.\nThe good way for representing the data using the data frame comes from statistics.\nEach column in the data frame corresponds to a variable, that is something that either\ncan be measured, or can be controlled by us. Each row corresponds to one observation, with\nvalues in different columns logically being related. For example, in the table above,\none row coresonds to the weather data for 1 hour.\nIn Python Pandas library, the column types can be inspected using dtypes property. Note that numeric types\nare further subdivided into integer (int64) and floating point (float64) types. The string data is represented with dtype object.\nlang:jaデータフレームは値を含む「セル」が2次元格子状にならんだデータ構造です。\n各セルには数値または文字列のデータを保存できます。\n上の例ではいくつかのセルにNaN という値が入っていますが、これはNot A Numberの意味で、値が不正または欠けていることを表します\n一つのデータフルームのよい考え方は統計分析に由来しますが、統計分析以外にもその考え方が役に立ちます。\n各行は観測値を表し、各列は変数を表します。変数は直接に設定できる、または観測して図るものとします。\n一つの観測値は同時に図るものなので、一つの行に入っている値は一つのものを記述します。\n上記の例の表では、一つの行は一時間の観測を表しています。\nPythonのpandasのライブラリでは、列の型を知るためにdtypesというプロパティを使用できます。\n数値型は更に整数(int64)や浮動小数点(float64)の型に分けられます。文字の場合はオブジェクトの型(object)になります。", "# データフレームの列の型をご覧できます。\n# 因子はCSVの中で文字列として\ndf.dtypes\n\n# 一目で分かるデータの平均値や標準偏差\ndf.describe()", "Tidy data frames: How to think about data frame structure\nlang:en\nThere are many possible ways how one can put the same data into the tabular format.\n| Date | Rainfall | Wind |\n| ------------- |---------------|-------|\n| 2019-08-08 | 50 | NE |\n| 2019-08-07 | 0 | E |\n| Rainfall.8/8 | Rainfall.8/7 | Wind.8/8 | Wind.8/7 |\n| ------------- |---------------|----------|----------|\n| 50 | 0 | NE | E | \n| Date | Variable | Value |\n|----------|----------|-------|\n|2019-08-08|Rainfall | 50 |\n|2019-08-08|Wind | NE |\n|2019-08-07|Rainfall | 0 |\n|2019-08-07|Wind | E |\nOne particularly useful way to think of the data has been inspired by statistics and looks like an experiment report.\nIt is called tidy data and satisfies the following conditions:\n\nEach kind of \"experiment\" is kept in a separate data frame. 
The \"experiment\" here just means a group\n of related data that can be measured together.\nIn a table, one row is \"one observation\", and one column is one variable. A \"variable\" in this context\n is anything that can be either measured or controlled (e.g. a temperature reading or a time of measurement).\n One row collects the related measurements, for example something that was measured at the same moment of time.\nVariables (columns) can be subdivided into controlled (how we set up an experiment), and measured\n (the values that we are measuring). This way of thinking explains what do we mean by each row\n corresponding to one observation.\nThe values are in the fields only, i.e. the values should never occur in column headers. The variable names\n should be in column header only, i.e. variable names should never occur in field values.\n\nAll other possible formats of data that are not tidy are called messy by contrast.\nOf the examples above, only the first table is a tidy data frame. The second and third are messy.\nThere is some connection of tidy data frames to 3rd normal form in the database theory, but data frames tend to be more flexible and malleable. It is also worth noting, that depending on the purpose of data analysis and required computations, the definition of \"one observation\" may be different. For example, let's assume that we have the data about flight arrival and departure times. If we want to study flight durations, then it is convenient to have departure and arrival as independent variables in separate columns, which makes it really easy to compute flight duration. If on the other hand we want to study how the air stripe at an airport is used, then depatures and arrivals are just timestamps of events related to the airstripe use, and arrival/departure is better to be thought as an additional categorical variable.\nThere are two benefits to tidy data frames\n\n\nBringing all data into tidy frame format makes your life easier as you do not need\n to remember and handle various data format pecularities. Data handing becomes\n uniform.\n\n\nThere is an existing set of tools that work best when the data is in tidy format. 
The most\n important of those tools is a plotting library used for data visualiation.\n We will see some examples later in this unit.\n\n\nSee the paper https://vita.had.co.nz/papers/tidy-data.pdf for more details about tidy data frames.\nキレイな(tidy)データフレーム (Tidy data frames)\nlang:ja\nデータフレームにデータを入れる方法はたくさんありますが、それはどちらでもよいという訳はありません。以下の例を見ましょう。\n| 日付   | 降水量 | 風向 |\n| ------------- |------------|-------|\n| 2019-08-08 | 50 | NE |\n| 2019-08-07 | 0 | E |\n| 降水量.8/8 | 降水量.8/7 | 風向.8/8 |風向.8/7 |\n| ------------- |---------|---------|---------|\n| 50 | 0 | NE | E | \n| 日付 | 変数 | 値 |\n|----------|----------|-------|\n|2019-08-08|降水量 | 50 |\n|2019-08-08|風向 | NE |\n|2019-08-07|降水量 | 0 |\n|2019-08-07|風向 | E |\n以上のデータの表現方法の中から一つは特に役に立ちます。それは「キレイな(tidy)データフレーム」といい、以下の条件に当てはまるデータフレームです。\n\n一つのデータフレームに入るデータは一つの観測値として考えられ、変数は全て関連します。\n一つの列は変数になります。列のヘッダは変数名です。変数の値はヘッダに絶対に入りません。\nーつの行は一つの観測として考えられます。つまり、関係しないデータは一つの行に入りません。\n または、関連している観測した変数は一つの列に入れます。\n\nキレイな(tidy)データフレームの条件に当てはまらないデータフレームは汚い(messy)といいます。\n上の例では、1つ目の表はtidyで、2つ目と3つ目はmessyです。\nデータ解析の目的によって観測値の定義は異なる場合もあります。たとえば、飛行機の出発時間や到着時間は\n別々の変数でしょうか。 飛行時間の解析であれば、別々の変数の扱いは便利です。なぜかというと、観測値ごとに\n簡単に飛行時間を計算できるからです。 もし空港の飛行場の使い方の解析の場合は、離陸も着陸も飛行場を使う\n機会なので、同じデータであっても、一つの変数にした方が解析しやすいのです。\n詳しくキレイなデータフレームについてこちらの論文ご参考ください: https://vita.had.co.nz/papers/tidy-data.pdf (英語)\n予習課題: 記述からデータフレームを生成 (Create data frame from textual description)\n```\nEXERCISE METADATA\nexercise_id: \"CreateDataFrameFromText\"\n```\nlang:enIn this exercise, you task is to create a tidy data frame based on the textual description\nprovided below. A person (Aliсe) wants to do a data analysis on her coffee drinking habits.\nHere is the Alices description of her week:\n\nAlice goes to office every weekday\nAlice drops by the coffee shop before work every day except Wednesdays\nIn the morning of work days, Alice buys an S-size coffee cup\nAlice goes to gym every Tuesday and Thursday.\nAfter gym Alice goes to the coffee shop and has a L-size coffee.\nWhen not going to gym, Alice goes straight home and goes to sleep without coffee.\nOn weekends, Alice does not go to coffee shops, but brews coffee at home, once on Saturday and once on\n Sunday. Her coffee maker makes 500 ml of coffee.\nS-size cup is 200 ml. L-size cup is 300 ml.\n\nYour task: create a data frame named coffee that would describe how much coffee Alice drinks on each day of the week, with the following columns describing the day:\n\nday: integer, describes the day (1: Monday, ... 
7 = Sunday)\nwork: boolean (True/False) describes whether the day is workday (true) or weekends (false).\ngym: boolean (True/False) describes whether Alice goes to the gym on that day (true - goes to gym, false - \ndoes not go to gym).\ncoffee_ml: integer, describes how much coffee Alice drinks in the day.\n\nlang:jaアリスはコーヒーを大好きで、よく飲みます。コーヒーの消費量に気になってデータ解析を行いたいので、以下の記述を読んで、データフレームをCSV形式で作ってください。\nアリスの一週間の説明こちらです:\n\nアリスは平日は毎日に会社に通います。\nアリスは会社に着く前に毎日にコーヒーを飲みます。ただし、水曜日は飲みません。\n平日の朝は、いつもSサイズのコップを買います。\nアリスは毎週火曜日と木曜日にジムに通います。\nジムが終わったら、アリスはLサイズのコーヒーを飲んでいます。\nジムがない日はコーヒー屋さんによらず直接に帰ります。\n週末(土曜日と日曜日)は、アリスはコーヒーを家で一日一回作ります。一回の量は500mlです。\nSサイズのコップは200ml, Lサイズのコップは300mlです。\n\n課題として、データフレームを作ってcoffeeという名前をつけてください。データフレームには以下の列を入れましょう。\n\nday: 整数、一週間の中の一日を記述します (1:月曜日, 2:火曜日, ..., 6:土曜日, 7:日曜日)\nwork: 真理値、その日に会社に行くかどうか(1:会社に行く、0:行かない)\ngym: 真理値、その日にジムに行くかどうか(1:ジムに行く、0:行かない)\ncoffee_ml: 整数、その日にコーヒーの消費量、mlの単位", "%%solution\n\"\"\" # BEGIN PROMPT\ncoffee = pd.read_csv(io.StringIO('''day,work,gym,coffee_ml\n...\n'''))\n\"\"\" # END PROMPT\n# BEGIN SOLUTION\ncoffee = pd.read_csv(io.StringIO(\"\"\"day,work,gym,coffee_ml\n1,true,false,200\n2,true,true,500\n3,true,false,0\n4,true,true,500\n5,true,false,200\n6,false,false,500\n7,false,false,500\n\"\"\"))\n# END SOLUTION\n\n# Inspect the resulting data frame\ncoffee\n\n%%studenttest StudentTest\n# Test the data frame. **lang:en**\n# MASTER ONLY\nassert len(coffee) == 7, \"Your dataframe should have 7 rows for each day of the week\"\nassert 'day' in coffee, \"Your dataframe should have a 'day' column\"\nassert 'coffee_ml' in coffee, \"Your dataframe should have a 'coffee_ml' column\"\nassert 'work' in coffee, \"Your dataframe should have a 'work' column\"\nassert 'gym' in coffee, \"Your dataframe should have a 'gym' column\"\n\n%%studenttest StudentTest\n# Test the data frame. **lang:ja**\nassert len(coffee) == 7, \"データフレームには7つの行が入らなければなりません\"\nassert 'day' in coffee, \"データフレームには'day'の列が入らなければなりません\"\nassert 'coffee_ml' in coffee, \"データフレームには'coffee_ml'の列が入らなければなりません\"\nassert 'work' in coffee, \"データフレームには'work'の列が入らなければなりません\"\nassert 'gym' in coffee, \"データフレームには'gym'の列が入らなければなりません\"\n\n%%inlinetest AutograderTest\n# This test is not shown to student, but used by the autograder.\nassert 'coffee' in globals(), \"Did you define the data frame named 'coffee' in the solution cell?\"\nassert coffee.__class__ == pd.core.frame.DataFrame, \"Did you define a data frame named 'coffee'? 
There was a %s instead\" % coffee.__class__\nassert len(coffee) == 7, \"The data frame should have 7 rows, you have %d\" % len(coffee)\nassert len(np.unique(coffee['day']) == 7), \"The data frame should have 7 unique values of the 'day', you have %d\" % len(np.unique(coffee['day']))\nassert str(np.sort(np.unique(coffee['coffee_ml'])).astype(list)) == '[0 200 500]', \"The daily coffee_ml amount should have values of 0, 200, and 500, but you have got: %s\" % (str(np.sort(np.unique(coffee['coffee_ml'])).astype(list)))\nassert np.sum(coffee['coffee_ml']) == 2400, \"The coffee amount is not correct, total should be 2400 ml per week, but you data frame has %d\" % np.sum(coffee['coffee_ml']) \nassert np.sum(coffee['work'].astype(int)) == 5, \"There should be 5 work days in a week\"\nassert np.sum(coffee['gym'].astype(int)) == 2, \"There should be 2 gym days in a week\"\nassert np.all(coffee.loc[coffee['gym'].astype(bool)]['coffee_ml'] == 500), \"coffee_ml should be 500 ml on gym days\"\nassert np.all(coffee.loc[np.logical_not(coffee['work'].astype(bool))]['coffee_ml'] == 500), \"coffee_ml should be 500 on weekends\"\nassert np.sum(coffee.loc[np.logical_and(coffee['work'].astype(bool), np.logical_not(coffee['gym'].astype(bool)))]['coffee_ml']) == 400, \"coffee_ml should be 200 on Monday and Friday, and 0 on Wednesday\"\n\n%%submission\n1,1,0,200\n2,1,1,500\n3,1,0,0\n4,1,1,500\n5,1,0,200\n6,0,0,500\n7,0,0,500\n\nresult, log = %autotest AutograderTest\nreport(AutograderTest, results=result.results)", "MASTER ONLY. Try the AutograderTest with various inputs", "%%submission\ncoffee = pd.read_csv(io.StringIO(\"\"\"day,coffee_ml,work,gym\nMonday,201,true,false\nTuesday,500,true,true\nWednesday,0,true,false\nThursday,500,true,true\nFriday,200,true,false\nSaturday,500,false,false\nSunday,500,false,false\n\"\"\"))\n\nresult, logs = %autotest AutograderTest\nassert re.search(r'should have values of 0, 200, and 500', str(result.results['error']))\nreport(AutograderTest, results=result.results)\n\n%%submission\ncoffee = True\n\nresult, logs = %autotest AutograderTest\nassert re.search(r'Did you define a data frame named .coffee.', str(result.results['error']))\nreport(AutograderTest, results=result.results, source=submission_source.source)\n\nresult, logs = %autotest StudentTest\nreport(StudentTest, results=result.results)\n\n%%submission\ncoffee = pd.read_csv(io.StringIO(\"\"\"day,coffee_ml,work,gym\nMonday,200,1,0\nTuesday,500,1,0\nWednesday,0,1,0\nThursday,500,1,1\nFriday,200,1,0\nSaturday,500,0,0\nSunday,500,0,0\n\"\"\"))\n\nresult, logs = %autotest StudentTest\nassert result.results['passed']\nreport(StudentTest, results=result.results)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nsaunier/CIV8760
05-donnees-spatiales.ipynb
mit
[ "< 3. Traitement de données | Contents | 6. Analyse statistique >", "import geopandas", "Géocodage\nLe géocodage consiste à obtenir les points de référence géographique d'objets du monde réel. Un cas intéressant est celui des adresses physiques. \nIl est possible de faire du géocodage à la main dans les outils cartographique publiques tels que Google ou OpenStreetMap. Il est aussi possible d'utiliser des bibliothèques Python comme geopandas pour faire du géocodage systématique. Le service nominatim d'OpenStreetMap permet le géocodage. \nAutre exemple de geopandas: https://geopandas.org/geocoding.html", "geopandas.tools.geocode('2900 boulevard Edouard Montpetit, Montreal', provider='nominatim', user_agent=\"mon-application\")", "< 3. Traitement de données | Contents | 6. Analyse statistique >" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
intel-analytics/BigDL
python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised_forecast_based.ipynb
apache-2.0
[ "Unsupervised Anomaly Detection based on Forecasts\nAnomaly detection detects data points in data that does not fit well with the rest of data. In this notebook we demonstrate how to do anomaly detection using Chronos's built-in model MTNet\nFor demonstration, we use the publicly available cluster trace data cluster-trace-v2018 of Alibaba Open Cluster Trace Program. You can find the dataset introduction <a href=\"https://github.com/alibaba/clusterdata/blob/master/cluster-trace-v2018/trace_2018.md\" target=\"_blank\">here</a>. In particular, we use machine usage data to demonstrate anomaly detection, you can download the separate data file directly with <a href=\"http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz\" target=\"_blank\">machine_usage</a>.\nHelper functions\nThis section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.", "def get_result_df(y_true_unscale, y_pred_unscale, ano_index, look_back,target_col='cpu_usage'):\n \"\"\"\n Add prediction and anomaly value to dataframe.\n \"\"\"\n result_df = pd.DataFrame({\"y_true\": y_true_unscale.squeeze(), \"y_pred\": y_pred_unscale.squeeze()})\n result_df['anomalies'] = 0\n result_df.loc[result_df.index[ano_index], 'anomalies'] = 1\n result_df['anomalies'] = result_df['anomalies'] > 0\n return result_df \n\ndef plot_anomalies_value(date, y_true, y_pred, anomalies):\n \"\"\"\n plot the anomalies value\n \"\"\"\n fig, axs = plt.subplots(figsize=(16,6))\n \n axs.plot(date, y_true,color='blue', label='y_true')\n axs.plot(date, y_pred,color='orange', label='y_pred')\n axs.scatter(date[anomalies].tolist(), y_true[anomalies], color='red', label='anomalies value')\n axs.set_title('the anomalies value')\n \n plt.xlabel('datetime')\n plt.legend(loc='upper left')\n plt.show()", "Download raw dataset and load into dataframe\nNow we download the dataset and load it into a pandas dataframe.Steps are as below:\n* First, download the raw data <a href=\"http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz\" target=\"_blank\">machine_usage</a>. Or run the script get_data.sh to download the raw data.It will download the resource usage of each machine from m_1932 to m_2085. \n* Second, run grep m_1932 machine_usage.csv &gt; m_1932.csv to extract records of machine 1932. Or run extract_data.sh.We use machine 1932 as an example in this notebook.You can choose any machines in the similar way.\n* Finally, use pandas to load m_1932.csv into a dataframe as shown below.", "import os \nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndf_1932 = pd.read_csv(\"m_1932.csv\", header=None, usecols=[1,2,3], names=[\"time_step\", \"cpu_usage\",\"mem_usage\"])", "Below are some example records of the data", "df_1932.head()\n\ndf_1932.sort_values(by=\"time_step\", inplace=True)\ndf_1932.reset_index(inplace=True)\ndf_1932.sort_values(by=\"time_step\").plot(y=\"cpu_usage\", x=\"time_step\", figsize=(16,6),title=\"cpu_usage of machine 1932\")", "Data pre-processing\nNow we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset. 
\nFor the machine_usage data, the pre-processing converts the time step in seconds to a timestamp starting from 2018-01-01.", "df_1932.reset_index(inplace=True)\ndf_1932[\"time_step\"] = pd.to_datetime(df_1932[\"time_step\"], unit='s', origin=pd.Timestamp('2018-01-01'))", "Feature Engineering & Data Preparation\nFor feature engineering, we use the hour as a feature in addition to the target cpu usage.\nFor data preparation, we resample the average of cpu_usage in minutes, impute the data to handle missing values and scale the data. Finally, we generate the samples as numpy ndarrays for the Forecaster to use.\nWe use the built-in TSDataset to complete the whole processing.", "from bigdl.chronos.data import TSDataset\nfrom sklearn.preprocessing import StandardScaler\n\n# we look back one hour of data, which has a frequency of 1 min.\nlook_back = 60\nhorizon = 1\n\ntsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df_1932, dt_col=\"time_step\", target_col=\"cpu_usage\", with_split=True, val_ratio = 0.1, test_ratio=0.1)\nstandard_scaler = StandardScaler()\n\nfor tsdata in [tsdata_train, tsdata_val, tsdata_test]:\n tsdata.resample(interval='1min', merge_mode=\"mean\")\\\n .impute(mode=\"last\")\\\n .gen_dt_feature()\\\n .scale(standard_scaler, fit=(tsdata is tsdata_train))\\\n .roll(lookback=look_back, horizon=horizon, feature_col = [\"HOUR\"])\\\n\nx_train, y_train = tsdata_train.to_numpy()\nx_val, y_val = tsdata_val.to_numpy()\nx_test, y_test = tsdata_test.to_numpy()\ny_train, y_val, y_test = y_train[:, 0, :], y_val[:, 0, :], y_test[:, 0, :]\nx_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape", "Time series forecasting", "from bigdl.chronos.forecaster.tf.mtnet_forecaster import MTNetForecaster", "First, we initialize an mtnet_forecaster according to the input data shape. Specifically, look_back should equal (long_series_num+1)*series_length. 
For details, refer to the Chronos docs <a href=\"https://bigdl.readthedocs.io/en/latest/doc/Chronos/Overview/chronos.html\" target=\"_blank\">here</a>.", "mtnet_forecaster = MTNetForecaster(target_dim=horizon,\n feature_dim=x_train.shape[-1],\n long_series_num=3,\n series_length=15\n )", "Now we train the model and wait until it finishes.", "%%time\nmtnet_forecaster.fit(data=(x_train, y_train), batch_size=128, epochs=20)", "Use the model for prediction and invert the scaling of the prediction results.", "y_pred_val = mtnet_forecaster.predict(x_val)\ny_pred_test = mtnet_forecaster.predict(x_test)\n\ny_pred_val_unscale = tsdata_val.unscale_numpy(np.expand_dims(y_pred_val, axis=1))[:, 0, :]\ny_pred_test_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_pred_test, axis=1))[:, 0, :]\ny_val_unscale = tsdata_val.unscale_numpy(np.expand_dims(y_val, axis=1))[:, 0, :]\ny_test_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_test, axis=1))[:, 0, :]", "Calculate the symmetric mean absolute percentage error (sMAPE).", "# evaluate with sMAPE\nfrom bigdl.orca.automl.metrics import Evaluator\nsmape = Evaluator.evaluate(\"smape\", y_test_unscale, y_pred_test_unscale)\nprint(f\"sMAPE is {'%.2f' % smape}\")", "Anomaly detection", "from bigdl.chronos.detector.anomaly import ThresholdDetector\n\nratio=0.01\n\nthd=ThresholdDetector()\nthd.set_params(ratio=ratio)\nthd.fit(y_val_unscale,y_pred_val_unscale)\nprint(\"The threshold of validation dataset is:\",thd.th)\n\nanomaly_scores_val = thd.score()\nval_res_ano_idx = np.where(anomaly_scores_val > 0)[0]\nprint(\"The index of anomalies in validation dataset is:\",val_res_ano_idx)\n\nanomaly_scores_test = thd.score(y_test_unscale,y_pred_test_unscale)\ntest_res_ano_idx = np.where(anomaly_scores_test > 0)[0]\nprint(\"The index of anomalies in test dataset is:\",test_res_ano_idx)", "Get a new dataframe which contains the y_true, y_pred and anomaly values.", "val_result_df = get_result_df(y_val_unscale, y_pred_val_unscale, val_res_ano_idx, look_back)\ntest_result_df = get_result_df(y_test_unscale, y_pred_test_unscale, test_res_ano_idx, look_back)", "Draw the anomalies in a line chart.", "plot_anomalies_value(val_result_df.index, val_result_df.y_true, val_result_df.y_pred, val_result_df.anomalies)\n\nplot_anomalies_value(test_result_df.index, test_result_df.y_true, test_result_df.y_pred, test_result_df.anomalies)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jdstokes/nsc211
notebooks/2.0-jds-tf_udacity_notMNIST.ipynb
mit
[ "Multinomial logistic classification\nInput X --> Linear model (Wx + b) --> Turned into logits/scores (y)--> into Softmax to turn into probabilities --> Cross-entropy to compare the probabilities to 1-hot labels\nNormalized inputs and inital values\nAssignment 1\nThe objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\nThis notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport matplotlib.pyplot as plt\n\nimport numpy as np\nimport os\nimport sys\nimport tarfile\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle\nfrom skimage import io\n# Config the matplotlib backend as plotting inline in IPython\n%matplotlib inline", "First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.", "url = 'http://commondatastorage.googleapis.com/books1000/'\nlast_percent_reported = None\ndata_root = '../data' # Change me to store data elsewhere\n\ndef download_progress_hook(count, blockSize, totalSize):\n \"\"\"A hook to report the progress of a download. This is mostly intended for users with\n slow internet connections. Reports every 5% change in download progress.\n \"\"\"\n global last_percent_reported\n percent = int(count * blockSize * 100 / totalSize)\n\n if last_percent_reported != percent:\n if percent % 5 == 0:\n sys.stdout.write(\"%s%%\" % percent)\n sys.stdout.flush()\n else:\n sys.stdout.write(\".\")\n sys.stdout.flush()\n \n last_percent_reported = percent\n \ndef maybe_download(filename, expected_bytes, force=False):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n dest_filename = os.path.join(data_root, filename)\n if force or not os.path.exists(dest_filename):\n print('Attempting to download:', filename) \n filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)\n print('\\nDownload Complete!')\n statinfo = os.stat(dest_filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified', dest_filename)\n else:\n raise Exception(\n 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')\n return dest_filename\n\ntrain_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\ntest_filename = maybe_download('notMNIST_small.tar.gz', 8458043)", "Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.", "num_classes = 10\nnp.random.seed(133)\n\ndef maybe_extract(filename, force=False):\n root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n if os.path.isdir(root) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping extraction of %s.' 
% (root, filename))\n else:\n print('Extracting data for %s. This may take a while. Please wait.' % root)\n tar = tarfile.open(filename)\n sys.stdout.flush()\n tar.extractall(data_root)\n tar.close()\n data_folders = [\n os.path.join(root, d) for d in sorted(os.listdir(root))\n if os.path.isdir(os.path.join(root, d))]\n if len(data_folders) != num_classes:\n raise Exception(\n 'Expected %d folders, one per class. Found %d instead.' % (\n num_classes, len(data_folders)))\n print(data_folders)\n return data_folders\n \ntrain_folders = maybe_extract(train_filename)\ntest_folders = maybe_extract(test_filename)", "Problem 1\nLet's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.", "from os import listdir\nfrom os.path import isfile, join, isdir\nimport pandas as pd\n\ndef get_paths(foldNames):\n \n paths = dict.fromkeys(foldNames)\n\n for idx,g in enumerate(foldNames):\n fileNames = [f for f in listdir(join(trainPath,g)) if isfile(join(trainPath,g, f))]\n for i,f in enumerate(fileNames):\n fileNames[i] = join(trainPath,g,f) \n paths[g] = fileNames\n \n return paths\n\n\ntrainPath = '../data/notMNIST_large/'\nclass_names = [f for f in listdir(trainPath) if isdir(join(trainPath, f))]\ngroup_data = pd.DataFrame ({'group': class_names})\nclass_paths = get_paths(class_names)\n\n\nfor cname in class_names:\n i = Image(filename=class_paths[cname][1],width=100,height=100)\n print(\"class \" + cname)\n display(i)\n# i = io.imread(class_paths[cname][1])\n \n# plt.figure(figsize=(2, 2))\n# plt.imshow(i, cmap='gray', interpolation='nearest')\n# plt.axis('off')\n# plt.tight_layout()\n# plt.show()\n", "Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. 
Later we'll merge them into a single dataset of manageable size.\nWe'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.\nA few images might not be readable, we'll just skip them.", "image_size = 28 # Pixel width and height.\npixel_depth = 255.0 # Number of levels per pixel.\n\ndef load_letter(folder, min_num_images):\n \"\"\"Load the data for a single letter label.\"\"\"\n image_files = os.listdir(folder)\n dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n dtype=np.float32)\n print(folder)\n num_images = 0\n for image in image_files:\n image_file = os.path.join(folder, image)\n try:\n image_data = (ndimage.imread(image_file).astype(float) - \n pixel_depth / 2) / pixel_depth\n if image_data.shape != (image_size, image_size):\n raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n dataset[num_images, :, :] = image_data\n num_images = num_images + 1\n except IOError as e:\n print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n \n dataset = dataset[0:num_images, :, :]\n if num_images < min_num_images:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, min_num_images))\n \n print('Full dataset tensor:', dataset.shape)\n print('Mean:', np.mean(dataset))\n print('Standard deviation:', np.std(dataset))\n return dataset\n\n\n\ndef maybe_pickle(data_folders, min_num_images_per_class, force=False):\n dataset_names = []\n for folder in data_folders:\n set_filename = folder + '.pickle'\n dataset_names.append(set_filename)\n if os.path.exists(set_filename) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping pickling.' % set_filename)\n else:\n print('Pickling %s.' % set_filename)\n dataset = load_letter(folder, min_num_images_per_class)\n try:\n with open(set_filename, 'wb') as f:\n pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', set_filename, ':', e)\n \n return dataset_names\n\ntrain_datasets = maybe_pickle(train_folders, 45000)\ntest_datasets = maybe_pickle(test_folders, 1800)", "Problem 2\nLet's verify that the data still looks good. Display a sample of the labels and images from the ndarray. 
Hing: you can use matplotlib.pyplot", "for idx,val in enumerate(class_names):\n data = pickle.load(open(train_datasets[idx]))\n plt.figure(figsize=(1, 1))\n\n plt.imshow(data[0], cmap='gray', interpolation='nearest')\n\n plt.axis('off')\n plt.tight_layout()\n plt.title(\"class type: \" + class_names[idx])\n", "Problem 3\nAnother check: we expect the data to be balanced across classes Verify that.", "def get_image_info(paths,class_names):\n s = np.empty(len(class_names))\n h = np.empty(len(class_names))\n w = np.empty(len(class_names))\n\n for i,val in enumerate(class_names):\n data = pickle.load(open(paths[i]))\n s[i] = data.shape[0]\n h[i] = data.shape[1]\n w[i] = data.shape[2]\n \n return pd.DataFrame({'samples':s,'height':h,'width':w})\n\ndf = get_image_info(train_datasets,class_names)\n\ndf\n\ndef make_arrays(nb_rows, img_size):\n if nb_rows:\n dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n labels = np.ndarray(nb_rows, dtype=np.int32)\n else:\n dataset, labels = None, None\n return dataset, labels\n\ndef merge_datasets(pickle_files, train_size, valid_size=0):\n num_classes = len(pickle_files)\n valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n train_dataset, train_labels = make_arrays(train_size, image_size)\n vsize_per_class = valid_size // num_classes\n tsize_per_class = train_size // num_classes\n \n start_v, start_t = 0, 0\n end_v, end_t = vsize_per_class, tsize_per_class\n end_l = vsize_per_class+tsize_per_class\n for label, pickle_file in enumerate(pickle_files): \n try:\n with open(pickle_file, 'rb') as f:\n letter_set = pickle.load(f)\n # let's shuffle the letters to have random validation and training set\n np.random.shuffle(letter_set)\n if valid_dataset is not None:\n valid_letter = letter_set[:vsize_per_class, :, :]\n valid_dataset[start_v:end_v, :, :] = valid_letter\n valid_labels[start_v:end_v] = label\n start_v += vsize_per_class\n end_v += vsize_per_class\n \n train_letter = letter_set[vsize_per_class:end_l, :, :]\n train_dataset[start_t:end_t, :, :] = train_letter\n train_labels[start_t:end_t] = label\n start_t += tsize_per_class\n end_t += tsize_per_class\n except Exception as e:\n print('Unable to process data from', pickle_file, ':', e)\n raise\n \n return valid_dataset, valid_labels, train_dataset, train_labels\n \n \ntrain_size = 200000\nvalid_size = 10000\ntest_size = 10000\n\nvalid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(\n train_datasets, train_size, valid_size)\n_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)\n\nprint('Training:', train_dataset.shape, train_labels.shape)\nprint('Validation:', valid_dataset.shape, valid_labels.shape)\nprint('Testing:', test_dataset.shape, test_labels.shape)\n", "Next, we'll randomize the data. 
It's important to have the labels well shuffled for the training and test distributions to match.", "def randomize(dataset, labels):\n permutation = np.random.permutation(labels.shape[0])\n shuffled_dataset = dataset[permutation,:,:]\n shuffled_labels = labels[permutation]\n return shuffled_dataset, shuffled_labels\ntrain_dataset, train_labels = randomize(train_dataset, train_labels)\ntest_dataset, test_labels = randomize(test_dataset, test_labels)\nvalid_dataset, valid_labels = randomize(valid_dataset, valid_labels)", "Problem 4\nConvince yourself that the data is still good after shuffling!\nnot sure how to do this\nSaving the data now for later use", "pickle_file = os.path.join(data_root, 'notMNIST.pickle')\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'train_dataset': train_dataset,\n 'train_labels': train_labels,\n 'valid_dataset': valid_dataset,\n 'valid_labels': valid_labels,\n 'test_dataset': test_dataset,\n 'test_labels': test_labels,\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\n%%time\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)", "Problem 5\nBy construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it. Measure how much overlap there is between training, validation and test samples.\nOptional questions:\nWhat about near duplicates between datasets? (images that are almost identical)\nCreate a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.\nProblem 6\nLet's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\nTrain a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\nOptional question: train an off-the-shelf model on all the data!", "%%time\nfrom sklearn import datasets, neighbors, linear_model\n\ndef reshape_image_data(data):\n dim = data.shape\n return np.reshape(data,(dim[0],dim[1]*dim[2]))\n\ntrain_dataset = reshape_image_data(train_dataset)\nvalid_dataset = reshape_image_data(valid_dataset)\n\n\nknn = neighbors.KNeighborsClassifier()\nlogistic = linear_model.LogisticRegression()\n\nprint('KNN score: %f' % knn.fit(train_dataset, train_labels).score(valid_dataset, valid_labels))\nprint('LogisticRegression score: %f'\n % logistic.fit(train_dataset, train_labels).score(valid_dataset, valid_labels))\n\n%%time\nprint('KNN score: %f' % knn.fit(train_dataset, train_labels).score(valid_dataset, valid_labels))\n\n\nvalid_dataset.shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mathLab/RBniCS
tutorials/07_nonlinear_elliptic/tutorial_nonlinear_elliptic_eim.ipynb
lgpl-3.0
[ "Tutorial 07 - Non linear Elliptic problem\nKeywords: EIM, POD-Galerkin\n1. Introduction\nIn this tutorial, we consider a non linear elliptic problem in a two-dimensional spatial domain $\\Omega=(0,1)^2$. We impose a homogeneous Dirichlet condition on the boundary $\\partial\\Omega$. The source term is characterized by the following expression\n$$\ng(\\boldsymbol{x}; \\boldsymbol{\\mu}) = 100\\sin(2\\pi x_0)cos(2\\pi x_1) \\quad \\forall \\boldsymbol{x} = (x_0, x_1) \\in \\Omega.\n$$\nThis problem is characterized by two parameters. The first parameter $\\mu_0$ controls the strength of the sink term and the second parameter $\\mu_1$ the strength of the nonlinearity. The range of the two parameters is the following:\n$$\n\\mu_0,\\mu_1\\in[0.01,10.0]\n$$\nThe parameter vector $\\boldsymbol{\\mu}$ is thus given by\n$$\n\\boldsymbol{\\mu} = (\\mu_0,\\mu_1)\n$$\non the parameter domain\n$$\n\\mathbb{P}=[0.01,10]^2.\n$$\nIn order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method. In order to preserve the affinity assumption empirical interpolation method will be used on the forcing term $g(\\boldsymbol{x}; \\boldsymbol{\\mu})$.\n2. Parametrized formulation\nLet $u(\\boldsymbol{\\mu})$ be the solution in the domain $\\Omega$.\nThe strong formulation of the parametrized problem is given by:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, find $u(\\boldsymbol{\\mu})$ such that</center>\n$$ -\\nabla^2u(\\boldsymbol{\\mu})+\\frac{\\mu_0}{\\mu_1}(\\exp{\\mu_1u(\\boldsymbol{\\mu})}-1)=g(\\boldsymbol{x}; \\boldsymbol{\\mu})$$\n<br>\nThe corresponding weak formulation reads:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, find $u(\\boldsymbol{\\mu})\\in\\mathbb{V}$ such that</center>\n$$a\\left(u(\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right)+c\\left(u(\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right)=f(v;\\boldsymbol{\\mu})\\quad \\forall v\\in\\mathbb{V}$$\nwhere\n\nthe function space $\\mathbb{V}$ is defined as\n$$\n\\mathbb{V} = {v\\in H_1(\\Omega) : v|_{\\partial\\Omega}=0}\n$$\nthe parametrized bilinear form $a(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$a(u, v;\\boldsymbol{\\mu})=\\int_{\\Omega} \\nabla u\\cdot \\nabla v \\ d\\boldsymbol{x},$$\nthe parametrized bilinear form $c(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$c(u, v;\\boldsymbol{\\mu})=\\mu_0\\int_{\\Omega} \\frac{1}{\\mu_1}\\big(\\exp{\\mu_1u} - 1\\big)v \\ d\\boldsymbol{x},$$\nthe parametrized linear form $f(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$f(v; \\boldsymbol{\\mu})= \\int_{\\Omega}g(\\boldsymbol{x}; \\boldsymbol{\\mu})v \\ d\\boldsymbol{x}.$$\n\nThe output of interest $s(\\boldsymbol{\\mu})$ is given by\n$$s(\\boldsymbol{\\mu}) = \\int_{\\Omega} v \\ d\\boldsymbol{x}$$\nis computed for each $\\boldsymbol{\\mu}$.", "from dolfin import *\nfrom rbnics import *", "3. 
Affine Decomposition\nFor this problem the affine decomposition is straightforward:\n$$a(u,v;\\boldsymbol{\\mu})=\\underbrace{1}_{\\Theta^{a}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega}\\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}}_{a_0(u,v)},$$\n$$c(u,v;\\boldsymbol{\\mu})=\\underbrace{\\mu_0}_{\\Theta^{c}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega}\\frac{1}{\\mu_1}\\big(\\exp{\\mu_1u} - 1\\big)v \\ d\\boldsymbol{x}}_{c_0(u,v)},$$\n$$f(v; \\boldsymbol{\\mu}) = \\underbrace{100}_{\\Theta^{f}_0(\\boldsymbol{\\mu})} \\underbrace{\\int_{\\Omega}\\sin(2\\pi x_0)cos(2\\pi x_1)v \\ d\\boldsymbol{x}}_{f_0(v)}.$$\nWe will implement the numerical discretization of the problem in the class\nclass NonlinearElliptic(NonlinearEllipticProblem):\nby specifying the coefficients $\\Theta^{a}_*(\\boldsymbol{\\mu})$, $\\Theta^{c}_*(\\boldsymbol{\\mu})$ and $\\Theta^{f}_*(\\boldsymbol{\\mu})$ in the method\ndef compute_theta(self, term):\nand the bilinear forms $a_*(u, v)$, $c_*(u, v)$ and linear forms $f_*(v)$ in\ndef assemble_operator(self, term):", "@EIM(\"online\")\n@ExactParametrizedFunctions(\"offline\")\nclass NonlinearElliptic(NonlinearEllipticProblem):\n\n # Default initialization of members\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n NonlinearEllipticProblem.__init__(self, V, **kwargs)\n # ... and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n self.du = TrialFunction(V)\n self.u = self._solution\n self.v = TestFunction(V)\n self.dx = Measure(\"dx\")(subdomain_data=self.subdomains)\n self.ds = Measure(\"ds\")(subdomain_data=self.boundaries)\n # Store the forcing term expression\n self.f = Expression(\"sin(2*pi*x[0])*sin(2*pi*x[1])\", element=self.V.ufl_element())\n # Customize nonlinear solver parameters\n self._nonlinear_solver_parameters.update({\n \"linear_solver\": \"mumps\",\n \"maximum_iterations\": 20,\n \"report\": True\n })\n\n # Return custom problem name\n def name(self):\n return \"NonlinearEllipticEIM\"\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n @compute_theta_for_derivatives\n def compute_theta(self, term):\n mu = self.mu\n if term == \"a\":\n theta_a0 = 1.\n return (theta_a0,)\n elif term == \"c\":\n theta_c0 = mu[0]\n return (theta_c0,)\n elif term == \"f\":\n theta_f0 = 100.\n return (theta_f0,)\n elif term == \"s\":\n theta_s0 = 1.0\n return (theta_s0,)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the discretization of the affine expansion of the problem operators.\n def assemble_operator(self, term):\n v = self.v\n dx = self.dx\n if term == \"a\":\n du = self.du\n a0 = inner(grad(du), grad(v)) * dx\n return (a0,)\n elif term == \"c\":\n u = self.u\n mu = self.mu\n c0 = (exp(mu[1] * u) - 1) / mu[1] * v * dx\n return (c0,)\n elif term == \"dc\": # preferred over derivative() computation which does not cancel out trivial mu[1] factors\n du = self.du\n u = self.u\n mu = self.mu\n dc0 = exp(mu[1] * u) * du * v * dx\n return (dc0,)\n elif term == \"f\":\n f = self.f\n f0 = f * v * dx\n return (f0,)\n elif term == \"s\":\n s0 = v * dx\n return (s0,)\n elif term == \"dirichlet_bc\":\n bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1)]\n return (bc0,)\n elif term == \"inner_product\":\n du = self.du\n x0 = inner(grad(du), grad(v)) * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for 
assemble_operator().\")\n\n\n# Customize the resulting reduced problem\n@CustomizeReducedProblemFor(NonlinearEllipticProblem)\ndef CustomizeReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):\n class ReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):\n def __init__(self, truth_problem, **kwargs):\n ReducedNonlinearElliptic_Base.__init__(self, truth_problem, **kwargs)\n self._nonlinear_solver_parameters.update({\n \"report\": True,\n \"line_search\": \"wolfe\"\n })\n\n return ReducedNonlinearElliptic", "4. Main program\n4.1. Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh.ipynb notebook.", "mesh = Mesh(\"data/square.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/square_physical_region.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/square_facet_region.xml\")", "4.2. Create Finite Element space (Lagrange P1)", "V = FunctionSpace(mesh, \"Lagrange\", 1)", "4.3. Allocate an object of the NonlinearElliptic class", "problem = NonlinearElliptic(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(0.01, 10.0), (0.01, 10.0)]\nproblem.set_mu_range(mu_range)", "4.4. Prepare reduction with a POD-Galerkin method", "reduction_method = PODGalerkin(problem)\nreduction_method.set_Nmax(20, EIM=21)\nreduction_method.set_tolerance(1e-8, EIM=1e-4)", "4.5. Perform the offline phase", "reduction_method.initialize_training_set(50, EIM=60)\nreduced_problem = reduction_method.offline()", "4.6. Perform an online solve", "online_mu = (0.3, 9.0)\nreduced_problem.set_mu(online_mu)\nreduced_solution = reduced_problem.solve()\nplot(reduced_solution, reduced_problem=reduced_problem)", "4.7. Perform an error analysis", "reduction_method.initialize_testing_set(50, EIM=60)\nreduction_method.error_analysis()", "4.8. Perform a speedup analysis", "reduction_method.speedup_analysis()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
santipuch590/deeplearning-tf
dl_tf_BDU/3.RNN/ML0120EN-3.1-Review-LSTM-MNIST-Database.ipynb
mit
[ "<center> Sequence classification with LSTM on MNIST</center>\n<div class=\"alert alert-block alert-info\">\n<font size = 3><strong>In this notebook you will learn the How to use TensorFlow for create a Recurrent Neural Network</strong></font>\n<br> \n- <a href=\"#intro\">Introduction</a>\n<br>\n- <p><a href=\"#arch\">Architectures</a></p>\n - <a href=\"#lstm\">Long Short-Term Memory Model (LSTM)</a>\n\n- <p><a href=\"#build\">Building a LSTM with TensorFlow</a></p>\n</div>\n\n<a id=\"intro\"/> Introduction\nRecurrent Neural Networks are Deep Learning models with simple structures and a feedback mechanism builted-in, or in different words, the output of a layer is added to the next input and fed back to the same layer.\nThe Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for Sequential data -- such as Weather data, Stocks, Genes, etc. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is re-fed into the network.\nHowever, this model has some problems. It's very computationally expensive to maintain the state for a large amount of units, even more so over a long amount of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to different problems with their Gradient Descent optimizer -- they either grow exponentially (Exploding Gradient) or drop down to near zero and stabilize (Vanishing Gradient), both problems that greatly harm a model's learning capability.\nTo solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and additionally solve the oversensitivity to parameter changes, i.e., make backpropagating through the Recurrent Networks more viable.\n(In this notebook, we will cover only LSTM and its implementation using TensorFlow)\n<a id=\"arch\"/>Architectures\n\nFully Recurrent Network\nRecursive Neural Networks\nHopfield Networks\nElman Networks and Jordan Networks\nEcho State Networks\nNeural history compressor\nThe Long Short-Term Memory Model (LSTM)\n\n<img src=\"https://ibm.box.com/shared/static/v7p90neiaqghmpwawpiecmz9n7080m59.png\" alt=\"Representation of a Recurrent Neural Network\" width=80%>\n<a id=\"lstm\"/>LSTM\nLSTM is one of the proposed solutions or upgrades to the Recurrent Neural Network model. \nIt is an abstraction of how computer memory works. It is \"bundled\" with whatever processing unit is implemented in the Recurrent Network, although outside of its flow, and is responsible for keeping, reading, and outputting information for the model. The way it works is simple: you have a linear unit, which is the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate is for inputting data into the information cell, one is for outputting data from the input cell, and the last one is to keep or forget data depending on the needs of the network.\nThanks to that, it not only solves the problem of keeping states, because the network can choose to forget data whenever information is not needed, it also solves the gradient problems, since the Logistic Gates have a very nice derivative.\nLong Short-Term Memory Architecture\nAs seen before, the Long Short-Term Memory is composed of a linear unit surrounded by three logistic gates. 
The name for these gates vary from place to place, but the most usual names for them are:\n- the \"Input\" or \"Write\" Gate, which handles the writing of data into the information cell, \n- the \"Output\" or \"Read\" Gate, which handles the sending of data back onto the Recurrent Network, and \n- the \"Keep\" or \"Forget\" Gate, which handles the maintaining and modification of the data stored in the information cell.\n<img src=https://ibm.box.com/shared/static/zx10duv5egw0baw6gh2hzsgr8ex45gsg.png width=\"720\"/>\n<center>Diagram of the Long Short-Term Memory Unit</center>\nThe three gates are the centerpiece of the LSTM unit. The gates, when activated by the network, perform their respective functions. For example, the Input Gate will write whatever data it is passed onto the information cell, the Output Gate will return whatever data is in the information cell, and the Keep Gate will maintain the data in the information cell. These gates are analog and multiplicative, and as such, can modify the data based on the signal they are sent.\n\n<a id=\"build\"/> Building a LSTM with TensorFlow\nLSTM for Classification\nAlthough RNN is mostly used to model sequences and predict sequential data, we can still classify images using a LSTM network. If we consider every image row as a sequence of pixels, we can feed a LSTM network for classification. Lets use the famous MNIST dataset here. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 steps for every sample.\nMNIST Dataset\nTensor flow already provides helper functions to download and process the MNIST dataset.", "%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"../../data/MNIST/\", one_hot=True)", "The function input_data.read_data_sets(...) 
loads the entire dataset and returns an object tensorflow.contrib.learn.python.learn.datasets.mnist.DataSets\nThe argument (one_hot=False) creates the label arrays as 10-dimensional binary vectors (only zeros and ones), in which the index cell for the number one, is the class label.", "trainimgs = mnist.train.images\ntrainlabels = mnist.train.labels\ntestimgs = mnist.test.images\ntestlabels = mnist.test.labels \n\nntrain = trainimgs.shape[0]\nntest = testimgs.shape[0]\ndim = trainimgs.shape[1]\nnclasses = trainlabels.shape[1]\nprint(\"Train Images: \", trainimgs.shape)\nprint(\"Train Labels \", trainlabels.shape)\nprint()\nprint(\"Test Images: \" , testimgs.shape)\nprint(\"Test Labels: \", testlabels.shape)", "Let's get one sample, just to understand the structure of MNIST dataset\nThe next code snippet prints the label vector (one_hot format), the class and actual sample formatted as image:", "samplesIdx = [100, 101, 102] #<-- You can change these numbers here to see other samples\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\n\nax1 = fig.add_subplot(121)\nax1.imshow(testimgs[samplesIdx[0]].reshape([28,28]), cmap='gray')\n\n\nxx, yy = np.meshgrid(np.linspace(0,28,28), np.linspace(0,28,28))\nX = xx ; Y = yy\nZ = 100*np.ones(X.shape)\n\nimg = testimgs[77].reshape([28,28])\nax = fig.add_subplot(122, projection='3d')\nax.set_zlim((0,200))\n\n\noffset=200\nfor i in samplesIdx:\n img = testimgs[i].reshape([28,28]).transpose()\n ax.contourf(X, Y, img, 200, zdir='z', offset=offset, cmap=\"gray\")\n offset -= 100\n\n ax.set_xticks([])\nax.set_yticks([])\nax.set_zticks([])\n\nplt.show()\n\n\nfor i in samplesIdx:\n print(\"Sample: {0} - Class: {1} - Label Vector: {2} \".format(i, np.nonzero(testlabels[i])[0], testlabels[i]))", "Let's Understand the parameters, inputs and outputs\nWe will treat the MNIST image $\\in \\mathcal{R}^{28 \\times 28}$ as $28$ sequences of a vector $\\mathbf{x} \\in \\mathcal{R}^{28}$. 
\nOur simple RNN consists of\n\nOne input layer which converts a $28$ dimensional input to an $128$ dimensional hidden layer, \nOne intermediate recurrent neural network (LSTM) \nOne output layer which converts an $128$ dimensional output of the LSTM to $10$ dimensional output indicating a class label.", "n_input = 28 # MNIST data input (img shape: 28*28)\nn_steps = 28 # timesteps\nn_hidden = 128 # hidden layer num of features\nn_classes = 10 # MNIST total classes (0-9 digits)\n\n\nlearning_rate = 0.001\ntraining_iters = 100000\nbatch_size = 100\ndisplay_step = 10", "Construct a Recurrent Neural Network", "x = tf.placeholder(dtype=\"float\", shape=[None, n_steps, n_input], name=\"x\")\ny = tf.placeholder(dtype=\"float\", shape=[None, n_classes], name=\"y\")\n\nweights = {\n 'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))\n}\nbiases = {\n 'out': tf.Variable(tf.random_normal([n_classes]))\n}", "The input should be a Tensor of shape: [batch_size, max_time, ...], in our case it would be (?, 28, 28)", "# Define a lstm cell with tensorflow\nlstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)\n\n#initial state\n#initial_state = (tf.zeros([1,n_hidden]),)*2\n\ndef RNN(x, weights, biases):\n\n # Prepare data shape to match `rnn` function requirements\n # Current data input shape: (batch_size, n_steps, n_input) [100x28x28]\n\n # Define a lstm cell with tensorflow\n lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\n \n\n # Get lstm cell output\n outputs, states = tf.nn.dynamic_rnn(lstm_cell, inputs=x, dtype=tf.float32)\n\n # Get lstm cell output\n #outputs, states = lstm_cell(x , initial_state)\n \n # The output of the rnn would be a [100x28x128] matrix. we use the linear activation to map it to a [?x10 matrix]\n # Linear activation, using rnn inner loop last output\n # output [100x128] x weight [128, 10] + []\n output = tf.reshape(tf.split(outputs, 28, axis=1, num=None, name='split')[-1],[-1,128])\n return tf.matmul(output, weights['out']) + biases['out']\n\nwith tf.variable_scope('forward3'):\n pred = RNN(x, weights, biases)", "labels and logits should be tensors of shape [100x10]", "cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred ))\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n\ncorrect_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n\naccuracy_v2 = tf.contrib.metrics.accuracy(\n labels=tf.arg_max(y, dimension=1), \n predictions=tf.arg_max(pred, dimension=1)\n)", "Just recall that we will treat the MNIST image $\\in \\mathcal{R}^{28 \\times 28}$ as $28$ sequences of a vector $\\mathbf{x} \\in \\mathcal{R}^{28}$.", "sess = tf.InteractiveSession()\ninit = tf.global_variables_initializer()\n\nsess.run(init)\nstep = 1\n# Keep training until reach max iterations\nwhile step * batch_size < training_iters:\n\n # We will read a batch of 100 images [100 x 784] as batch_x\n # batch_y is a matrix of [100x10]\n batch_x, batch_y = mnist.train.next_batch(batch_size)\n\n # We consider each row of the image as one sequence\n # Reshape data to get 28 seq of 28 elements, so that, batxh_x is [100x28x28]\n batch_x = batch_x.reshape((batch_size, n_steps, n_input))\n\n\n # Run optimization op (backprop)\n sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})\n\n\n if step % display_step == 0:\n # Calculate batch accuracy\n acc, acc2 = sess.run([accuracy, accuracy_v2], feed_dict={x: batch_x, y: batch_y})\n # 
Calculate batch loss\n loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})\n print(\"({} / {}) Minibatch loss={:.6f} Accuracy={:.5f} Accuracy (tf)={:.5f}\".format(\n step*batch_size,\n training_iters,\n loss,\n acc,\n acc2\n ))\n step += 1\nprint(\"Optimization Finished!\")\n\n# Calculate accuracy for the whole test set\ntest_data = mnist.test.images.reshape((-1, n_steps, n_input))\ntest_label = mnist.test.labels\nprint(\"Testing Accuracy: {:.3%}\".format(sess.run(accuracy, feed_dict={x: test_data, y: test_label})))\n\nsess.close()", "Created by <a href=\"https://br.linkedin.com/in/walter-gomes-de-amorim-junior-624726121\">Walter Gomes de Amorim Junior</a> , <a href = \"https://linkedin.com/in/saeedaghabozorgi\"> Saeed Aghabozorgi </a></h4>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DaveBackus/Data_Bootcamp
Code/IPython/bootcamp_exam_s16_answerkey.ipynb
mit
[ "Data Bootcamp \"Learning Experience\"\nNYU Stern School of Business | March 2016\nPlease answer the questions below in this IPython notebook. Add cells as needed. When you're done, save it and email to Dave Backus (db3@nyu.edu). Use the subject line: \"bootcamp exam\" plus \"UG\" or \"MBA\", as appropriate. Make sure you have the correct email address. And the correct file. Doing this correctly is worth 10 points. \nThis IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp. \nImport packages\nRun this code. Really.", "# import packages \nimport pandas as pd # data management\nimport matplotlib.pyplot as plt # graphics \nimport datetime as dt # check today's date \nimport sys # check Python version \n\n# IPython command, puts plots in notebook \n%matplotlib inline\n\nprint('Today is', dt.date.today())\nprint('Python version:\\n', sys.version, sep='') ", "Question 0\n\nChange the file name by adding _YourLastName to it in the textbox at the top. \nAdd a markdown cell directly above this one that includes your name in bold, your student number, and your email address. \n\n(10 points)\nQuestion 1\nFor each part (a)-(e), describe the type and value of the variable with the corresponding name:\n(a) a = 2*3 \n(b) b = 2.0*3\n(c) c = 'abc'\n(d) d = ['This', \"is\", 'not', \"a\", 'string']\n(e) e = d[3]\n(25 points)", "# experiment in this box \n\na = 2*3\nb = 2.0*3\nc = 'abc'\nd = ['This', \"is\", 'not', \"a\", 'string']\ne = d[3]\n\n# do this with a function because we're lazy and value our time\ndef valuetype(x):\n \"\"\"\n print value and type of input x\n \"\"\"\n print('Value and type: ', x, ', ', type(x), sep='')\n\n# (a)\nvaluetype(a)\n\n# (b)\nvaluetype(b)\n\n# (c)\nvaluetype(c)\n\n# (d)\nvaluetype(d)\n\n# (e)\nvaluetype(e)", "Question 2\nAs above describe the value and type of each variable. (These are more challenging.) \n(f) f = (1, 2, 3)\n(g) g = {1: 'Chase', 2: 'Dave', 3: 'Spencer'} \n(h) h = 'foo' + 'bar'\n(i) i = (1 != 0) # parens not needed, but they make code more understandable\n(20 points)", "f = (1, 2, 3)\ng = {1: 'Chase', 2: 'Dave', 3: 'Spencer'}\nh = 'foo' + 'bar'\ni = (1 != 0) \n\n# (f)\nvaluetype(f)\n\n# (g)\nvaluetype(g)\n\n# (h)\nvaluetype(h)\n\n# (i)\nvaluetype(i)", "Question 3\nExplain the code below -- briefly -- in a Markdown cell. What happens if we change the first line to torf = False? \n(10 points)", "torf = True\n\nif torf: \n x = 1\nelse:\n x = 2\n \nprint('x =', x) ", "Changed cell to Markdown with menu at top \nThe code from if on:\n\nif torf is True, set x=1\nif torf is False, we set x=2\n\nAt the top, torf is True, so we do the first one (x=1). If we change it to False, we do the second one (x=2).\nQuestion 4\nTake the first and last variables defined in the cell below and do the following with them: \n(a) Extract the first letter of last. \n(b) Find a method to split last into two components at the hyphen. \n(c) Define a new string variable named combo consisting of first (the first name), a space, the first letter of last, and a period. \n(d) Define a function that takes as inputs first and last names (both strings) and returns combo (also a string, consisting of the first name plus the first letter of the last name and a period). Apply it to the variables first and last and to your own first and last names. 
\n(20 points)", "first = 'Sarah'\nlast = 'Beckett-Hile' \n\n# (a)\nfirstoffirst = first[0]\nfirstoffirst\n\n# (b) \nlast.split('-')\n\n# (c) \ncombo = first + ' ' + last[0] + '.'\ncombo\n\n# (d) \ndef lastinitial(name1, name2):\n combo = name1 + ' ' + name2[0] + '.'\n return combo\n\nlastinitial(first, last)\n\nlastinitial('Chase', 'Coleman')", "Question 5\nConsider the variable things = [1, '2', 3.0, 'four']. \n(a) Write a loop that goes through the elements of things and prints them and their type.\n(b) Modify the loop to print only those elements that are integers. \n(10 points) \n(c) Bonus (not graded): Can you do parts (a) and (b) with a list comprehension?", "things = [1, '2', 3.0, 'four']\n\n# (a) \nfor thing in things:\n print('Value and type: ', thing, ', ', type(thing), sep='')\n\n# (b) \nfor thing in things:\n if type(thing) == int:\n print('Value and type: ', thing, ', ', type(thing), sep='')\n\n# (c) \n[print('Value and type: ', thing, ', ', type(thing), sep='') for thing in things]\n\n[print('Value and type: ', thing, ', ', type(thing), sep='') for thing in things\n if type(thing) == int]", "Question 6\nNext up: We explore the Census's Business Dynamics Statistics, a huge collection of data about firms. We've extracted a small piece of one of their databases that includes these variables for 2013:\n\nSize: size category of firms based on number of employees \nFirms: number of firms in this size category\nEmp: number of employees in this size category \n\nRun the code cell below to load the data and use the result to answer these questions: \n(a) What type of object is bsd?\n(b) What are its dimensions?\n(c) What are its column labels? Row labels?\n(d) What dtypes are the columns? \n(20 points)", "data = {'Size': ['1 to 4', '5 to 9', '10 to 19', '20 to 49', '50 to 99',\n '100 to 249', '250 to 499', '500 to 999', '1000 to 2499',\n '2500 to 4999', '5000 to 9999', '10000+'], \n 'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845,\n 19389, 9588, 6088, 2287, 1250, 1357], \n 'Emp': [5998912, 6714924, 8151891, 11425545, 8055535, 9788341, \n 6611734, 6340775, 8321486, 6738218, 6559020, 32556671]}\nbds = pd.DataFrame(data) \nbds = bds.set_index('Size')\n\n# (a)\ntype(bds)\n\n# (b)\nbds.shape\n\n(c)\nlist(bds) # or bsd.columns\n\nbds.index\n\n# (d)\nbds.dtypes", "Question 7\nContinuing with the same data: \n(a) Create a new variable AvgEmp equal to the ratio of Emp to Firms and add it as a new column in bsd.\n(b) Use a dataframe method to change the name of Emp to Employees. \n(c) Create a bar chart of the number of employees in each size category. \n(15 points)", "# (a) \nbds['AvgEmp'] = bds['Emp']/bds['Firms']\nbds.head(3)\n\n# (b) \nbds = bds.rename(columns={'Emp': 'Employment'})\nbds.head(3)\n\n# (c)\nbds['Employment'].plot.bar()", "Question 8\nStill continuing with the same data: \n(a) Create figure and axis objects. \n(b) Add a horizontal bar chart of the number of firms in each category to the axis object you created. \n(c) Make the bars red. \n(d) Add a title.\n(e) Change the style to fivethirtyeight. \n(25 points)", "# everything has to be in same cell to apply to the same figure\nplt.style.use('fivethirtyeight') # (e) \nfig, ax = plt.subplots() # (a) \nbds['Firms'].plot.barh(ax=ax, color='red') # (b,c) \nax.set_title('Numbers of firms by employment category') # (d) ", "Comment. Evidently there are lots of small firms." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Linlinzhao/linlinzhao.github.io
_drafts/.ipynb_checkpoints/多种方法实现一个两层神经网络-checkpoint.ipynb
mit
[ "为了更好地熟悉PyTorch和对比其与其他框架的区别,将官网上的例程自己都写一遍并做更详细的注释。例程中,只快速实现两层神经网络的核心部分,因此训练数据是随机生成的,而且只实现了对参数的更新调整,未涉及对代价函数的优化过程。完成全套例程,会对神经网络的前向通道及反向传播有更好的理解。具体实现的方法有:\n\n利用Numpy实现 (CPU)\n利用PyTorch的tensor实现 (CPU和GPU)\n利用PyTorch的autograd模块实现\n利用Tensorflow实现,对比静态图与动态图的区别\n\n1. Numpy实现", "import numpy as np\n\n#先定义网络结构: batch_size, Input Dimension, Hidden Dimension, Output Dimension \nN, D_in, D_hidden, D_out = 10, 20, 30, 5 \n\n#随机生成输入和输出数据\nx = np.random.randn(N, D_in)\ny = np.random.randn(N, D_out)\n\n#对输入层和输出层的参数进行初始化\nw1 = np.random.randn(D_in, D_hidden)\nw2 = np.random.randn(D_hidden, D_out)\n\nlearning_rate = 0.001\n\n#循环更新参数,每个循环前向和反向各计算一次\nfor i in xrange(50):\n \n # 计算前向通道\n h_linear = x.dot(w1) #10x20 and 20x30 produce 10x30, which is the shape of h_linear\n h_relu = np.maximum(h_linear, 0) #note one have to use np.maximum but not np.max, 10x30\n y_pred = h_relu.dot(w2) #10x30 and 30x5 produce 10x5\n \n #定义代价函数\n loss = 0.5 * np.sum(np.square(y_pred - y)) #sum squared error as loss\n \n # 反向求导\n grad_y_pred = y_pred - y #10x5\n grad_w2 = h_relu.T.dot(grad_y_pred) #30x10 and 10x5 produce the dimension of w2: 30x5\n grad_h_relu = grad_y_pred.dot(w2.T) #30x5 and 10x5 produce the dimension of h_relu: 10x30\n grad_h = grad_h_relu.copy()\n grad_h[h_linear < 0] = 0 #替代针对隐含层导数中的负数为零\n grad_w1 = x.T.dot(grad_h) #20x10 and 10x30 produce 20x30 \n \n #梯度下降法更新参数\n w1 -= learning_rate * grad_w1\n w2 -= learning_rate * grad_w2\n", "2. PyTorch的tensor实现\n只需将numpy的程序稍作调整就能实现tensor的实现,从而是程序能够部署到GPU上运算。", "import torch as T\n\n#先定义网络结构: batch_size, Input Dimension, Hidden Dimension, Output Dimension \nN, D_in, D_hidden, D_out = 10, 20, 30, 5 \n\n#随机生成输入和输出数据\nx = T.randn(N, D_in)\ny = T.randn(N, D_out)\n\n#对输入层和输出层的参数进行初始化\nw1 = T.randn(D_in, D_hidden)\nw2 = T.randn(D_hidden, D_out)\n\nlearning_rate = 0.001\n\n#循环更新参数,每个循环前向和反向各计算一次\nfor i in xrange(50):\n \n # 计算前向通道\n #mm should also work as x is a matrix. The matrix multiplication will be summarized in another post\n h_linear = x.matmul(w1) #10x20 and 20x30 produce 10x30, which is the shape of h_linear\n h_relu = h_linear.clamp(min=0) #note one have to use np.maximum but not np.max, 10x30\n y_pred = h_relu.matmul(w2) #10x30 and 30x5 produce 10x5\n \n #定义代价函数\n loss = 0.5 * (y_pred - y).pow(2).sum() #sum squared error as loss\n \n # 反向求导\n grad_y_pred = y_pred - y #10x5\n grad_w2 = h_relu.t().mm(grad_y_pred) #30x10 and 10x5 produce the dimension of w2: 30x5\n grad_h_relu = grad_y_pred.dot(w2.t()) #30x5 and 10x5 produce the dimension of h_relu: 10x30\n grad_h = grad_h_relu.clone()\n grad_h[h_linear < 0] = 0 #替代针对隐含层导数中的负数为零\n grad_w1 = x.t().mm(grad_h) #20x10 and 10x30 produce 20x30 \n \n #梯度下降法更新参数\n w1 -= learning_rate * grad_w1\n w2 -= learning_rate * grad_w2", "3. 利用PyTorch的Tensor和autograd实现\n两层网络的反向求导比较容易,但如果层数加多,在手动求导就会变得很复杂。因此深度学习平台都提供了自动求导功能,PyTorch的Autograd中的自动求导功能可以使反向求导简捷且灵活。要注意的是计算图的构建需要用autograd中的Variable将需要并入计算图中的变量进行封装,并设置相关属性。", "import torch as T\nfrom torch.autograd import Variable\n\n#先定义网络结构: batch_size, Input Dimension, Hidden Dimension, Output Dimension \nN, D_in, D_hidden, D_out = 10, 20, 30, 5 \n\n#随机生成输入和输出数据, 并用Variable对输入输出进行封装,同时在计算图形中不要求求导\nx = Variable(T.randn(N, D_in), requires_grad=False)\ny = Variable(T.randn(N, D_out), requires_grad=False)\n\n#对输入层和输出层的参数进行初始化,并用Variable封装,同时要求求导\nw1 = Variable(T.randn(D_in, D_hidden), requires_grad=True)\nw2 = Variable(T.randn(D_hidden, D_out), requires_grad=True)\n\nlearning_rate = 0.001\n\n#循环更新参数,每个循环前向和反向各计算一次\nfor i in xrange(50):\n \n # 计算前向通道\n #mm should also work as x is a matrix. 
The matrix multiplication will be summarized in another post\n h_linear = x.matmul(w1) #10x20 and 20x30 produce 10x30, which is the shape of h_linear\n h_relu = h_linear.clamp(min=0) #note one have to use np.maximum but not np.max, 10x30\n y_pred = h_relu.matmul(w2) #10x30 and 30x5 produce 10x5\n \n #定义代价函数\n loss = 0.5 * (y_pred - y).pow(2).sum() #sum squared error as loss\n \n loss.backward()\n \n \n #梯度下降法更新参数\n w1.data -= learning_rate * w1.grad.data #note that we are updating the 'data' of Variable w1\n w2.data -= learning_rate * w2.grad.data\n \n #PyTorch中,将grad中的值在循环中进行累积,当不须此操作时,应清零\n w1.grad.data.zero_()\n w2.grad.data.zero_()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
zhouqifanbdh/liupengyuan.github.io
chapter2/homework/localization/4-5/201611680168.ipynb
mit
[ "练习 1:求n个随机整数均值的平方根,整数范围在m与k之间。", "import random, math\n\ndef test():\n i = 0\n total = 0\n average = 0\n number = random.randint(m, k)\n \n while i < n:\n i += 1\n total += number\n number = random.randint(m, k)\n print('随机数是:', number)\n average = int(total/n)\n \n return math.sqrt(average)\n \n#主程序\nm=int(input('请输入一个整数下限:'))\nk=int(input('请输入一个整数上限:'))\nn=int(input('随机整数的个数是:'))\ntest()", "练习 2:写函数,共n个随机整数,整数范围在m与k之间,(n,m,k由用户输入)。求1:西格玛log(随机整数),2:西格玛1/log(随机整数)", "import random, math\n\ndef test1():\n i = 0\n total = 0\n number = random.randint(m,k)\n result = math.log10(number)\n \n while i < n:\n i += 1\n number = random.randint(m,k)\n print('执行1的随机整数是:', number)\n result += math.log10(number)\n \n return result \n \n \ndef test2():\n i = 0\n total = 0\n number = random.randint(m,k)\n result = 1/(math.log10(number))\n \n while i < n:\n i += 1\n number = random.randint(m,k)\n print('执行2的随机整数是:', number)\n result += 1/(math.log10(number))\n \n return result \n \n#主程序\nn = int(input('随机整数的个数是:'))\nm = int(input('请输入一个整数下限:'))\nk = int(input('请输入一个整数上限:'))\n\nprint()\nprint('执行1的结果是:', test1())\nprint()\nprint('执行2的结果是:', test2())", "练习 3:写函数,求s=a+aa+aaa+aaaa+aa...a的值,其中a是[1,9]之间的随机整数。例如2+22+222+2222+22222(此时共有5个数相加),几个数相加由键盘输入。", "import random\n\ndef test():\n a = random.randint(1,9)\n print('随机整数a是:', a)\n i = 0\n s = 0\n number = 0\n total = 0\n \n \n while i < n:\n s = 10**i\n number += a * s\n total += number\n i += 1\n \n return total\n\n#主程序\nn = int(input('需要相加的个数是:'))\nprint('结果是:', test())", "挑战性练习:仿照task5,将猜数游戏改成由用户随便选择一个整数,让计算机来猜测的猜数游戏,要求和task5中人猜测的方法类似,但是人机角色对换,由人来判断猜测是大、小还是相等,请写出完整的猜数游戏。", "import random, math\n\n\ndef win():\n print(\n '''\n ======YOU WIN=======\n \n \n .\"\". .\"\",\n | | / /\n | | / /\n | | / /\n | |/ ;-._ \n } ` _/ / ;\n | /` ) / /\n | / /_/\\_/\\\n |/ / |\n ( ' \\ '- |\n \\ `. /\n | |\n | |\n \n ======YOU WIN=======\n '''\n )\n \ndef lose():\n print(\n '''\n ======YOU LOSE=======\n \n \n \n\n .-\" \"-.\n / \\\n | |\n |, .-. .-. ,|\n | )(__/ \\__)( |\n |/ /\\ \\|\n (@_ (_ ^^ _)\n _ ) \\_______\\__|IIIIII|__/__________________________\n (_)@8@8{}<________|-\\IIIIII/-|___________________________>\n )_/ \\ /\n (@ `--------`\n \n \n \n ======YOU LOSE=======\n '''\n )\n \ndef game_over():\n print(\n '''\n ======GAME OVER=======\n \n _________ \n / ======= \\ \n / __________\\ \n | ___________ | \n | | - | | \n | | | | \n | |_________| |________________ \n \\=____________/ ) \n / \"\"\"\"\"\"\"\"\"\"\" \\ / \n / ::::::::::::: \\ =D-' \n (_________________) \n\n \n ======GAME OVER=======\n '''\n )\n\ndef show_team():\n print('''\n ***声明***\n 本游戏由PXS小机智开发''')\n\ndef show_instruction():\n print('''\n 游戏说明\n玩家选择一个任意整数,计算机来猜测该数。\n若计算机在规定次数内猜中该数,则计算机获胜。\n若规定次数内没有猜中,则玩家获胜。''')\n \ndef menu():\n print('''\n =====游戏菜单=====\n 1. 游戏说明\n 2. 开始游戏\n 3. 退出游戏\n 4. 
制作团队\n =====游戏菜单=====''') \n\ndef guess_game():\n n = int(input('请输入一个大于0的整数,作为神秘整数的上界,回车结束。'))\n max_times = int(math.log(n,2))\n print('规定猜测次数是:', max_times, '次')\n print()\n guess = random.randint(1, n)\n print('我猜这个数是:', guess)\n guess_times = 1\n max_number = n\n min_number = 1\n \n while guess_times < max_times:\n answer = input('我猜对了吗?(请输入“对”或“不对”)')\n if answer == '对':\n print(lose())\n break\n if answer == '不对':\n x = input('我猜大了还是小了?(请输入“大”或“小”)')\n print()\n if x == '大':\n max_number = guess-1\n guess = random.randint(min_number,max_number)\n print('我猜这个数是:', guess)\n guess_times += 1 \n print('我已经猜了', guess_times, '次')\n print()\n if guess_times == max_times:\n ask = input('''***猜测已达规定次数*** \n 我猜对了吗?(请输入“对”或“不对”)''')\n if ask == '不对':\n end()\n break\n else:\n lose()\n if x == '小':\n min_number = guess + 1\n guess = random.randint(min_number,max_number)\n print('我猜这个数是:', guess)\n guess_times += 1\n print('我已经猜了', guess_times, '次')\n print()\n if guess_times == max_times:\n ask = input('''***猜测已达规定次数*** \n 我猜对了吗?(请输入“对”或“不对”)''')\n if ask == '不对':\n end()\n break\n else:\n lose()\n \ndef end():\n a = input('你的神秘数字是:')\n print()\n print('原来是', a, '啊!')\n win()\n\n#主函数\ndef main():\n while True:\n menu()\n choice = int(input('请输入你的选择'))\n if choice == 1:\n show_instruction()\n elif choice == 2:\n guess_game()\n elif choice == 3:\n game_over()\n break\n else:\n show_team()\n\n\n#主程序\nif __name__ == '__main__':\n main()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pyrdr/charlas
intro_text_mining/notebooks/Text Mining.ipynb
mit
[ "import re\nimport nltk\nfrom prep import helpers", "Mineria de Texto\nUna brevísima introducción\nMinería de Texto: una brevísima introducción\n\nQué es?\nY qué se hace con eso?\nQué no es?\nMás o menos, cómo funciona?\n\nMinería de Texto: una brevísima introducción\n\nTemás prácticos:\nFuentes de texto\nDatos estructurados, semi-estructurados y no estructurados\nMojibake!\nRegEx: Cómo resolver un problema con otro problema\n\nMinería de Texto: una brevísima introducción\n\nConceptos fundamentales\nRepresentación del texto para análisis\nbag of words\ntf-idf\nSimilaridad entre documentos\n\nQué es Minería de Texto?\n<br/><center><b>Destilar conocimiento a partir del texto</b></center>\nDesde el punto de vista de el analisis de datos, el texto es un desorden. Las técnicas de analisis desde la más sencilla hasta la más avanzada esperan trabajar con datos de entrada con formato bien definido y fijo.\nLa minería de texto es la colección de métodos, algoritmos y prácticas utilizadas para ordenar este desorden y poder realizar análisis sobre grandes cuerpos de texto.\nY qué se hace con eso?\n\nBúsqueda (Information Retrieval)\nClasificación de texto (SPAM! SPAM!)\nModelado de tópicos (temas)\nClustering\nAnalisis de sentimiento\netc... etc... etc...\n\nQué no es Minería de Texto?\nProcesamiento de Lenguaje Natural o Natural Language Processing es un área de las ciencas computacionales que se ocupa de la interacción entre las computadoras y el lenguaje humano (natural).\nText Mining depende de muchas de las herramientas de NLP, de hecho en los ejemplos estaremos usando la librería NLTK - Natural Language ToolKit. Dos ejemplos famosos de NLP son: chatbots y Siri\nMás o menos, cómo funciona?\n<div align='center'>\n <img src=\"../images/text_mining_workflow.png\" width=\"640px\"/>\n</div>\n\nTemas prácticos - Fuentes de texto\n\nRedes sociales (tweets, posts)\nArticulos, libros, periódicos\nBases de datos\n...\n\nTemas Prácticos - Tipos de datos como fuentes de texto\n<br/>\n<div align='center'>\n <img src=\"../images/semi_un_structured_data.png\" width=\"640px\"/>\n</div>\n\nTemas Prácticos - Codificación de caracteres (encodings, encodings)\nLa codificación de caracteres determina cómo se convierten los bits en caracteres legibles y viceversa.\n<div align='center'>\n <img src=\"../images/encoding.png\" width=\"640px\"/>\n</div>\nSe hace vital conocer la codificación usada durante la creación de un documento para poder incluirlo en un proceso de minería de texto.\nLamentablemente, no siempre se sabe esto. Pero, se puede adivinar con mucho exito y se puede convertir de una codificación a otra.\nTemas Prácticos - Mojibakeeeee???\n<br/>\n<div align='center'>\n <img src=\"../images/mojibake1.gif\" width=\"640px\"/>\n</div>\n\n<br/>\n<div align='center'>\n <img src=\"../images/mojibake2.gif\" width=\"640px\"/>\n</div>\n\nEsto le puede pasar a usted si procesa texto con la codificación de caracteres (character encoding) equivocada:\n<br/>\n<div align='center'>\n <img src=\"../images/Mojibakevector.png\" width=\"800px\"/>\n</div>\n\nCómo salgo de un lío mojibake?\nStackOverflow al rescate:\nhttps://stackoverflow.com/questions/64860/best-way-to-convert-text-files-between-character-sets. 
En resumen: herramientas de línea de comando (*nix y Windows)\nTambién su librería de R/Python para la manipulación de datos (tidyverse/Pandas) le permite especificar la codificación tanto de lectura como de escritura.\nSi algún día se ve en la necesidad de adivinar, empiece por aquí:\nhttps://readr.tidyverse.org/reference/encoding.html\nTemas Prácticos - RegEx\nCómo resolver un problema con otro problema\nEn esencia las expresiones regulares permiten:\n* Encontrar instancias de un término\n* ...Aún cuando este escrito de formas ligeramente diferentes cada vez\n* Convertirlo en otra cosa\n* Convertir frases enteras en otras\n* Hacer cambios a un documento de una sola vez\n* ... lograr que estos cambios sean consistente y repetibles a otros documentos\nTemas Prácticos - RegEx: Ejemplo", "# Funcion para quitar todo el texto que este entre parentesis, \n# lo que no sea letras y sustituir series de espacios en blanco por uno solo\ndef cleanup_str(raw):\n rs = re.sub(\"\\\\(.*?\\\\)|[^a-zA-Z\\\\s]\",\" \",raw)\n rs = re.sub(\"\\\\s+\",\" \",rs).strip().lower()\n return rs\n\nmy_str = \"\"\"\nSome people, when confronted with a problem, think \n“I know, I'll use regular expressions.” Now they have two problems.\n -- Jamie Zawinsk (Usenet) 1997 o fue 1999??\n\"\"\"\n\nprint(cleanup_str(my_str))", "Visite https://regexr.com/ para aprender y practicar. Pero, recuerde:\nLas expresiones regulares son una herramienta extremadamente poderosa, uselas con moderación y cuidado\nConceptos Fundamentales de Mineria de Texto\n<br/>\n<dl>\n <dt>Documento</dt>\n <dd>Unidad mínima de texto sobre la cual se quiere realizar analisis, inferencias y responder preguntas</dd>\n\n <dt>Corpus</dt>\n <dd>Conjunto de documentos que será minado (piense _training data_)</dd>\n</dl>\n\nConceptos Fundamentales de Mineria de Texto\n<br/>\n<dl>\n <dt>Token</dt>\n <dd>Serie de caracteres de texto con significado propio, resultante de dividir un documento por un _separador_: tipicamente palabras.</dd>\n\n <dt>Separador</dt>\n <dd>Serie de caraceteres utilizadas para dividir un documento en _tokens_: tipicamente _whitespace_ ( )</dd>\n</dl>", "nltk.word_tokenize(\"conceptos fundamentales de mineria de texto\")", "Conceptos Fundamentales de Mineria de Texto\n<br/>\n<dl>\n <dt>Vocabulario</dt>\n <dd>Conjunto de todos los tokens presentes en un _corpus_</dd>\n\n <dt>n-gram</dt>\n <dd>Secuencia continua de una o más partes de un documento, tipicamente tokens</dd>\n</dl>", "helpers.get_bigrams(nltk.word_tokenize(\"conceptos fundamentales de mineria de texto\"))", "Conceptos Fundamentales de Mineria de Texto\n<br/>\n<dl>\n <dt>Stopwords</dt>\n <dd>Palabras en un idioma particular que pueden eliminarse de los documentos durante preprocesamiento (palabras muy comunes, preposiciones, etc.)</dd>\n\n <dt>Stemming</dt>\n <dd>Proceso de extracción y sustitución de palabras por su _\"raíz\"_</dd>\n</dl>", "helpers.remove_stopwords(\"This is not the stopword\")\n\nhelpers.stem(\"natural language processing and text mining\")", "Representación de datos de texto\n<br/>\n<div align='center'>\n <img src=\"../images/bagofwords.002.jpeg\" width=\"640px\">\n</div>\n\n<div align='center'>\n <img src=\"../images/bagofwords.003.jpeg\" width=\"640px\">\n</div>\n\n<div align='center'>\n <img src=\"../images/bagofwords.004.jpeg\" width=\"640px\">\n</div>\n\nRepresentación de datos de Texto\n<br/>\n<div align='center'>\n <img src=\"../images/tfidf.jpeg\" width=\"640px\">\n</div>\n\n<div align='center'>\n <img src=\"../images/tfidf02.jpeg\" 
width=\"640px\">\n</div>\n\nRepresentación de datos de Texto\n<br/>\n<dl>\n <dt>Term Frequency (TF)</dt>\n <dd>Estadistica que representa lo común que es un término dentro de un documento en particular. La versión más simple es un conteo de las repeticiones del termino.</dd>\n\n <dt>Inverse Document Frequency (IDF)</dt>\n <dd>Estadistica que captura lo _raro_ que es un termino dentro de un corpus. Es grande para palabras que ocurren poco y pequeño para palabras muy comunes.</dd>\n</dl>\n\nLa formula más comun de TF-IDF:\n$$tfidf(t,d,D) = f_{t,d} * \\log \\frac{N}{n_t}$$\ndonde:\n$t =$ termino, token o palabra\n$d =$ documento\n$f_{t,d} =$ term frequency del termino t en el documento d\n$D =$ corpus\n$N =$ cantidad de documentos (tamaño del corpus)\n$n_t =$ cantidad de documentos donde aparece el termino $t$\nCon tan solo convertir un corpus a una de estas dos representaciones se pueden responder preguntas interesantes, como:\n\nCuales son las palabras mas frecuentes?\nCuales son las palabras mas raras?\nCuales son los documentos con más palabras distintas?\n\nSe les ocurren otras?\nComparación de documentos\nHabiendolos representado como vectores (BoW o TF-IDF) podemos compararlos directamente en relación de los terminos que los componen con una simple distancia entre dos vectores:\n<div align='center'>\n <img src=\"../images/cosine_similarity.png\" width=\"360px\">\n</div>\n\n$$similaridad = \\cos(\\theta) = \\frac{\\sum_i^N A_i B_i}{\\sum_i^N A_i^2 \\sum B_i^2}$$\nCon lo descrito hasta ahora tenemos los elementos necesarios para un buscador rudimentario:\n\nlos documentos ya son vectores comparables\npodemos convertir un nuevo documento a un vector (un \"query\" de búsqueda por ejemplo)\nsencillamente buscar en el corpus los documentos que más se parezcan (que tengan la similaridad más alta)\n...\nGoogle-Killer!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
borja876/Thinkful-DataScience-Borja
Challenge+Preparing+a+dataset+for+modeling+%28Feature+Selection%29.ipynb
mit
[ "%matplotlib inline\n#import numpy as np\n#import pandas as pd\nimport pandas\nimport numpy\nimport scipy\nimport sklearn\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport math\nimport scipy.stats as stats\nimport seaborn as sns\nfrom matplotlib.mlab import PCA as mlabPCA\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA \nfrom sklearn import preprocessing\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2\nfrom sklearn.feature_selection import RFE\nfrom sklearn.linear_model import LogisticRegression\n\n# Loading the data.\ndf = pd.read_fwf('https://raw.githubusercontent.com/borja876/Thinkful-DataScience-Borja/master/auto-mpg.data.txt', header=None)\ndf.columns = [\"mpg\", \"cylinders\", \"displacement\", \"horsepower\", 'weight', 'acceleration','modelyear','origin','carname']\ndf.head(3)", "Getting a grasp of the data set. Which variables are continuous and which ones are categorical", "df['horsepower'].unique()\n\ndf['cylinders'].unique()\n\ndf['origin'].unique()\n\ndf['modelyear'].unique()\n\ndf['mpg'].unique()", "Cleaning data and assigning it the same type of values", "df = df.drop( df[(df.horsepower == '?')].index )\ndf[[\"mpg\", \"cylinders\", \"displacement\", \"horsepower\", 'weight', 'acceleration']] = df[[\"mpg\", \"cylinders\", \"displacement\", \"horsepower\", 'weight', 'acceleration']].astype(float)\n\n#Checking the type of data in the dataset\ndf.info()\ndf.head()", "The Research Questions is: Which are the features of the cars that best explain the miles per gallon.\n1. Using a dataset of your choice, select an outcome variable and then pick four or five other variables (one to two categorical, three to four continuous) to act as the basis for features\nOutcome variable: mpg\nCategorical variables: cylynders, origin and year\nContinuous: displacement, horsepower, weight, acceleration, displacement\nUnderstand the relationship and behavior of the variables", "#Plotting the relationships between variables\nsns.set_style(\"white\")\n\n#Drop the variables that will not be used\ndfcont = df.drop(['carname','cylinders','modelyear','origin'], axis=1)\n# Scatterplot matrix.\ng = sns.PairGrid(dfcont, diag_sharey=False)\ng.map_upper(plt.scatter, alpha=.5)\n# Fit line summarizing the linear relationship of the two variables.\ng.map_lower(sns.regplot, scatter_kws=dict(alpha=0))\n# Give information about the univariate distributions of the variables.\ng.map_diag(sns.kdeplot, lw=3)\nplt.show()", "Understand the correlation between variables", "# Make the correlation matrix.\ncorrmat = dfcont.corr()\nprint(corrmat)\n\n# Set up the matplotlib figure.\nf, ax = plt.subplots(figsize=(12, 9))\n\n# Draw the heatmap using seaborn.\nsns.heatmap(corrmat, vmax=.8, square=True)\nplt.show()", "From the correlation matrix it seems that displacement, horsepower and weight are strongly correlated. 
Acceleration is less correlated with the rest thus providing more information.", "#Arrange data for a bloxplot \ndf1 = df.drop(['carname'], axis=1)\ndf1.head()\n\n# Plot all the variables with boxplots\ndfb = df1.drop(['origin','modelyear'], axis=1)\ndf_long = dfb\ndf_long = pd.melt(df_long, id_vars=['cylinders'])\n\n\ng = sns.FacetGrid(df_long, col=\"variable\",size=10, aspect=.5)\ng = g.map(sns.boxplot, \"cylinders\", \"value\")\ng.fig.get_axes()[0].set_yscale('log')\nsns.despine(left=True)\nplt.show()", "For cylinders = 6 & 8: mpg, displacement & horsepower present outliers\nFor cylinders = 4, acceleration present outliers", "# Descriptive statistics for the categorical variable: Cylinders\ndf1.groupby('cylinders').describe().transpose()", "The number of counts for cylinders = 3 and 5 is very small so they are discarded considering only 4, 6 & 8", "#Drop cylinders categories: 3 and 5\n\ndf1['cylinders'] = df1[\"cylinders\"].astype(float)\ndf1 = df1.drop( df[(df.cylinders == 3.0)].index )\ndf1 = df1.drop( df[(df.cylinders == 5.0)].index )\n\n#Clean and have the final dataset to create features\n\ndf1['cylinders'] = df1['cylinders'].astype(str)\ndffinal1 = df1[['cylinders','modelyear','origin','mpg','displacement','horsepower','weight','acceleration']]\ndffinal1.head()\n\n#Check the unique values for cylinders\n\ndffinal1['cylinders'].unique()\n\n#Check that the differences between mpg and acceleration are significant for each value of cylinders\n\n#Test significant difference between values of mpg and acceleration when cylinders equals 4 and 6\nfor col in dffinal1.loc[:,'mpg':'acceleration'].columns:\n print('Difference when cylinders equals 4 and 6')\n print(col)\n print(stats.ttest_ind(\n dffinal1[dffinal1['cylinders'] == '4.0'][col],\n dffinal1[dffinal1['cylinders'] == '6.0'][col]\n ))\n#Test significant difference between values of mpg and acceleration when cylinders equals 4 and 8\nfor col in dffinal1.loc[:,'mpg':'acceleration'].columns:\n print('Difference when cylinders equals 4 and 8')\n print(col)\n print(stats.ttest_ind(\n dffinal1[dffinal1['cylinders'] == '4.0'][col],\n dffinal1[dffinal1['cylinders'] == '8.0'][col]\n ))\n#Test significant difference between values of mpg and acceleration when cylinders equals 6 and 8\nfor col in dffinal1.loc[:,'mpg':'acceleration'].columns:\n print('Difference when cylinders equals 6 and 8')\n print(col)\n print(stats.ttest_ind(\n dffinal1[dffinal1['cylinders'] == '6.0'][col],\n dffinal1[dffinal1['cylinders'] == '8.0'][col]\n ))", "The difference for all variables for each cylinders value is significant (except for acceleration when comparing 4 & 6)", "#Plot the values for all modelyears per cylinder type\nplt.figure(figsize=(20,5))\nax = sns.countplot(x=\"modelyear\", hue='cylinders', data=dffinal1, palette=\"Set3\")\nplt.show()\n\n# Table of counts\ncounttable = pd.crosstab(dffinal1['modelyear'], dffinal1['cylinders'])\nprint(counttable)\n\n#Equivalency, differences and size of populations\n\nprint(stats.chisquare(counttable, axis=None))", "Modelyear on average is equivalent regarding the population per year. 
There are differences regarding the cylinders values.\nThe group size differences are large enough to reflect differences on the population.\nCreate 10 new features", "#Feature 1: Standard number of cylinders vs high end number of cylinders\n\nfeatures = pd.get_dummies(dffinal1['cylinders'])\nfeatures['High_end'] = np.where((dffinal1['cylinders'].isin(['6.0', '8.0'])), 1, 0)\nprint(pd.crosstab(features['High_end'], dffinal1['cylinders']))\n\n#Feature 2: # Cars from the 70s and cars from the 80s.\n\nfeatures = pd.get_dummies(dffinal1['modelyear'])\nfeatures['decade'] = np.where((dffinal1['modelyear'].isin(range(70,80))), 1, 0)\nprint(pd.crosstab(features['decade'], dffinal1['modelyear']))\n\n# Feature 3: National cars vs imported cars\n\nfeatures = pd.get_dummies(dffinal1['origin'])\nfeatures['national'] = np.where((dffinal1['origin'].isin(['1'])), 1, 0)\nprint(pd.crosstab(features['national'], dffinal1['origin']))\n\n# Feature 4: Nacceleration: Normalized acceleration\n# Making a four-panel plot.\nfig = plt.figure()\n\nfig.add_subplot(221)\nplt.hist(dffinal1['acceleration'].dropna())\nplt.title('Raw')\n\nfig.add_subplot(222)\nplt.hist(np.log(dffinal1['acceleration'].dropna()))\nplt.title('Log')\n\nfig.add_subplot(223)\nplt.hist(np.sqrt(dffinal1['acceleration'].dropna()))\nplt.title('Square root')\n\nax3=fig.add_subplot(224)\nplt.hist(1/df['acceleration'].dropna())\nplt.title('Inverse')\nplt.show()\n\n#Creation and storage of new feature\nfeatures['nacceleration'] = np.sqrt(dffinal1['acceleration'])\n\n# Feature 5: CAR DHW. Composite of highly correlated variables\n\ncorrmat = dffinal1.corr()\n\n# Set up the matplotlib figure.\nf, ax = plt.subplots(figsize=(12, 9))\n\n# Draw the heatmap using seaborn\nsns.heatmap(corrmat, vmax=.8, square=True)\nplt.show()\n\n\nmeans = dffinal1[['displacement','horsepower','weight']].mean(axis=0)\nstds = dffinal1[['displacement','horsepower','weight']].std(axis=0)\nfeatures['car_dhw'] = ((dffinal1[['displacement','horsepower','weight']] - means) / stds).mean(axis=1)\n\n# Check how well the composite correlates with each of the individual variables.\nplotdffinal1= dffinal1.loc[:, ['displacement','horsepower','weight']]\nplotdffinal1['dhw'] = features['car_dhw'] \ncorrmat2 = plotdffinal1.corr()\n\nprint(corrmat2)\n\n# Feature 6: Carperformance. 
Relationship between car_dhw & nacceleration\nfeatures['carperformance'] = features['car_dhw'] * features['nacceleration']\n\n# A plot of an interaction.\n# Add the 'tvtot' feature to the features data frame for plotting.\nfeatures['mpg'] = dffinal1['mpg']\nsns.lmplot(\n x='carperformance',\n y='mpg',\n\n data=features,\n scatter=False\n)\nplt.show()\n\n# Feature 7: Carperformance (squared).\nsns.regplot(\n features['carperformance'],\n y=dffinal1['mpg'],\n y_jitter=.49,\n order=2,\n scatter_kws={'alpha':0.3},\n line_kws={'color':'black'},\n ci=None\n)\nplt.show()\n\n#Creation and storage of new feature\nfeatures['carperformance_sq'] = features['carperformance'] * features['carperformance']\n\n# Feature 8: standardised carperformance (squared).\nmeans = features[['carperformance_sq']].mean(axis=0)\nstds = features[['carperformance_sq']].std(axis=0)\n\n#Creation and storage of new feature\nfeatures['standcarperformance_sq'] = ((features[['carperformance_sq']] - means) / stds).mean(axis=1)\n\n# Feature 9: Acceleration (squared).\nsns.regplot(\n dffinal1['acceleration'],\n y=dffinal1['mpg'],\n y_jitter=.49,\n order=2,\n scatter_kws={'alpha':0.3},\n line_kws={'color':'black'},\n ci=None\n)\nplt.show()\n\n#Creation and storage of new feature\nfeatures['acceleration_sq'] = dffinal1['acceleration'] * dffinal1['acceleration']\n\n# Feature 10: Dhw composite value abs.\nsns.regplot(\n dffinal1['acceleration'],\n y=features['car_dhw'],\n y_jitter=.49,\n order=2,\n scatter_kws={'alpha':0.3},\n line_kws={'color':'black'},\n ci=None\n)\nplt.show()\n\nfeatures['dhw_abs'] = features['car_dhw'].abs()\n\n#Scaling all features\n\n# Select only numeric variables to scale.\ndf_num = features.select_dtypes(include=[np.number]).dropna()\ndf_num = df_num.drop(1, 1)\ndf_num = df_num.rename(columns={2: 'high_end', 3: 'decade'})\n# Save the column names.\nnames = df_num.columns\n\n# Scale, then turn the resulting numpy array back into a data frame with the correct column names.\ndf_scaled = pd.DataFrame(preprocessing.scale(df_num), columns=names)\n\n# The new features contain all the information of the old ones, but on a new scale.\nplt.scatter(df_num['carperformance_sq'], df_scaled['carperformance_sq'])\nplt.show()\n\n# Lookit all those matching means and standard deviations!\nprint(df_scaled.describe())", "Running the PCA to see how the dimensional reduction", "# Normalize the data so that all variables have a mean of 0 and standard deviation of 1.\n\nX = StandardScaler().fit_transform(df_scaled)\n\n# The NumPy covariance function assumes that variables are represented by rows, not columns, so we transpose X.\n\nXt = X.T\nCx = np.cov(Xt)\nprint('Covariance Matrix:\\n', Cx)\n\n# Calculating eigenvalues and eigenvectors.\neig_val_cov, eig_vec_cov = np.linalg.eig(Cx)\n\n# Inspecting the eigenvalues and eigenvectors.\nfor i in range(len(eig_val_cov)):\n eigvec_cov = eig_vec_cov[:, i].reshape(1, 11).T\n print('Eigenvector {}: \\n{}'.format(i + 1, eigvec_cov))\n print('Eigenvalue {}: {}'.format(i + 1, eig_val_cov[i]))\n print(40 * '-')\n\nprint(\n 'The percentage of total variance in the dataset explained by each',\n 'component calculated by hand.\\n',\n eig_val_cov / sum(eig_val_cov)\n)\n\n#From the Scree plot we could use onle the first 3 components that will explain 46%, 23% and 13% approx.\n\nplt.plot(eig_val_cov)\nplt.show()\n\n# Create P, which we will use to transform Cx into Cy to get Y, the dimensionally-reduced representation of X.\n\nP = eig_vec_cov[:, 0]\n\n# Transform X into Y.\nY = P.T.dot(Xt)\n\n# Combine 
X and Y for plotting purposes.\ndata_to_plot = df_scaled[['nacceleration','car_dhw','carperformance','carperformance_sq','standcarperformance_sq','acceleration_sq','dhw_abs']]\ndata_to_plot['Component'] = Y\ndata_to_plot = pd.melt(data_to_plot, id_vars='Component')\n\ng = sns.FacetGrid(data_to_plot, col=\"variable\", size=4, aspect=.5)\ng = g.map(\n sns.regplot,\n \"Component\",\n \"value\",\n x_jitter=.49,\n y_jitter=.49,\n fit_reg=False\n)\nplt.show()\n\n#Reduce components to 5 as stated in the problem\n\nsklearn_pca = PCA(n_components=5)\nY_sklearn = sklearn_pca.fit_transform(X)\n\nprint(\n 'The percentage of total variance in the dataset explained by each',\n 'component from Sklearn PCA.\\n',\n sklearn_pca.explained_variance_ratio_\n)\n\n# Compare the sklearn solution to ours – a perfect match.\nplt.plot(Y_sklearn[:, 0], Y, 'o')\nplt.title('Comparing solutions')\nplt.ylabel('Sklearn Component 1')\nplt.xlabel('By-hand Component 1')\nplt.show()", "Feature selection using filtering methods\nFeature Extraction with Univariate Statistical Tests (Chi-squared for classification)", "# Feature Extraction with Univariate Statistical Tests (Chi-squared for classification)\n\n# Arrange data and transform the values into positive values\ndf_positive = df_scaled.abs().astype(str)\narray = df_positive.values\nX = array[:,0:10]\nY = array[:,7]\n# feature extraction\ntest = SelectKBest(score_func=chi2, k=5)\nfit = test.fit(X, Y)\n# summarize scores\nnumpy.set_printoptions(precision=3)\nprint(fit.scores_)\nfeatures = fit.transform(X)\n#summarize selected features\nprint(features[0:5,:])\ndf_positive.head(0)", "Following the Chi-squared classification method: We pick the features that have a highest score, being:\n1. 'high_end'\n2. 'dhw_abs'\n3. 'standcarperformance_sq'\n4. 'acceleration_sq'\n5. 'nacceleration'\nFrom the PCA analysis we know that we can reduce the number of features to 3 (being the first four following the Chi-squared classification filtering method).\nRecursive Feature Elimination", "#Change the dataframes into arrays to be able to run the model\n# Arrange data\ndf_neutral = df_scaled.astype(str)\narray = df_neutral.values\nX = array[:,0:10]\nY = array[:,7]\n# feature extraction\nmodel = LogisticRegression()\nrfe = RFE(model, 5)\nfit = rfe.fit(X, Y)\nprint(\"Num Features: %d\" % fit.n_features_)\nprint(\"Selected Features: %s\" % fit.support_)\nprint(\"Feature Ranking: %s\" % fit.ranking_)\ndf_positive.head(0)", "Following the Recursive Feature Elimination method: We pick the features that are 'True' and have a higher ranking, being:\n\n'decade'\n'nacceleration'\n'carperformance'\n'standcarperformance_sq'\n'acceleration_sq'" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
maxhutch/thesis-notebooks
WilkinsonAndJacobs.ipynb
gpl-3.0
[ "Figure 17, Plot of overall Froude number vs dimensionless amplitude\nStart by loading some boiler plate: matplotlib, numpy, scipy, json, functools, and a convenience class.", "%matplotlib inline\nimport matplotlib\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.interpolate import InterpolatedUnivariateSpline\nfrom scipy.interpolate import UnivariateSpline\nimport json\nimport pandas as pd\nfrom functools import partial\nclass Foo: pass", "And some more specialized dependencies:\n 1. Slict provides a convenient slice-able dictionary interface\n 2. Chest is an out-of-core dictionary that we'll hook directly to a globus remote using...\n 3. glopen is an open-like context manager for remote globus files", "from chest import Chest\nfrom slict import CachedSlict\nfrom glopen import glopen, glopen_many", "Helper routines", "def load_from_archive(names, arch):\n cs = []\n for name in names:\n cs.append(Chest(path = \"{:s}-results\".format(name),\n open = partial(glopen, endpoint=arch),\n open_many = partial(glopen_many, endpoint=arch)))\n scs = [CachedSlict(c) for c in cs]\n\n ps = []\n for name in names:\n with glopen(\n \"{:s}.json\".format(name), mode='r',\n endpoint = arch,\n ) as f:\n ps.append(json.load(f))\n return cs, scs, ps", "Configuration for this figure.", "config = Foo()\nconfig.names = [\n# \"Wilk/Wilk_kmin_2.5/Wilk_kmin_2.5\", \n# \"Wilk/Wilk_kmin_3.5/Wilk_kmin_3.5\",\n# \"Wilk/Wilk_kmin_4.5/Wilk_kmin_4.5\",\n \"Wilk/Wilk_long/Wilk_long\",\n]\n#config.arch_end = \"maxhutch#alpha-admin/~/pub/\"\n#config.arch_end = \"alcf#dtn_mira/projects/alpha-nek/experiments/\"\nconfig.arch_end = \"alcf#dtn_mira/projects/PetaCESAR/maxhutch/\"\nheight = 'H_exp'", "Open a chest located on a remote globus endpoint and load a remote json configuration file.", "cs, scs, ps = load_from_archive(config.names, config.arch_end);", "We want to plot the spike depth, which is the 'H' field in the chest.\nChests can prefetch lists of keys more quickly than individual ones, so we'll prefetch the keys we want.", "for c,sc in zip(cs, scs):\n c.prefetch(sc[:,height].full_keys())", "Use a spline to compute the derivative of 'H' vs time: the Froude number.", "spls = []\nfor sc, p in zip(scs, ps):\n T = np.array(sc[:,height].keys())\n H = np.array(sc[:,height].values()) #- 2 * np.sqrt(p['conductivity']* (T + p['delta']**2 / p['conductivity'] / 4))\n spls.append(UnivariateSpline(T,\n H,\n k = 5,\n s = 1.e-12))\nFrs = [spl.derivative() for spl in spls]\nTss = [np.linspace(sc[:,height].keys()[0], sc[:,height].keys()[-1], 1000) for sc in scs]\n\nRun37 = pd.DataFrame.from_csv('WRun37 4.49.56 PM 7_3_07.txt', sep='\\t')\nRun58 = pd.DataFrame.from_csv('WRun058 4.32.52 PM 7_3_07.txt', sep='\\t')\nRun78 = pd.DataFrame.from_csv('WRun078 4.49.56 PM 7_3_07.txt', sep='\\t')\n\ndef plot_exp(data, n, fmt):\n norm = .5*( np.sqrt(data[\"Atwood\"]/(1-data[\"Atwood\"])*data[\"Accel. [mm/sec^2]\"]* 76 / n) \n + np.sqrt(data[\"Atwood\"]/(1+data[\"Atwood\"])*data[\"Accel. [mm/sec^2]\"]* 76 / n))\n axs.plot(\n data[\"AvgAmp (mm)\"] * n / 76, \n data[\"Average Velocity\"]/norm, fmt);\n #data[\"Froude Average\"], fmt);\n return", "Plot the Froude number, non-dimensionalized by the theoretical dependence on Atwood, acceleration, and wave number, vs the spike depth, normalized by wave-length.\nThe dotted line is the theoretical prediction of Goncharaov. 
The solid black line is the farthest that Wilkinson and Jacobs were able to get.", "fig, axs = plt.subplots(1,1)\n\nfor p, spl, Fr, T in zip(ps, spls, Frs, Tss):\n axs.plot(\n spl(T) * p[\"kmin\"], \n Fr(T)/ np.sqrt(p[\"atwood\"]*p[\"g\"] / p[\"kmin\"]),\n label=\"{:3.1f} modes\".format(p[\"kmin\"]));\n#axs.plot(Run37[\"AvgAmp (mm)\"] * 2.5 / 76, Run37[\"Froude Average\"], \"bx\");\n#plot_exp(Run37, 2.5, \"bx\")\n#plot_exp(Run78, 3.5, \"gx\")\nplot_exp(Run58, 4.5, \"bx\")\naxs.plot([0,10], [np.sqrt(1/np.pi), np.sqrt(1/np.pi)], 'k--')\naxs.axvline(x=1.4, color='k');\naxs.set_ylabel(r'Fr')\naxs.set_xlabel(r'$h/\\lambda$');\naxs.legend(loc=4);\naxs.set_xbound(0,3);\naxs.set_ybound(0,1.5);\nplt.savefig('Figure17_long.png')\n\nfig, axs = plt.subplots(1,1)\n\nfor sc, p, spl, Fr, T in zip(scs, ps, spls, Frs, Tss):\n axs.plot(\n T,\n spl(T) * p[\"kmin\"], \n label=\"{:3.1f} modes\".format(p[\"kmin\"]));\n axs.plot(\n sc[:,height].keys(),\n np.array(sc[:,height].values())*p['kmin'],\n 'bo');\n#axs.plot(Run37[\"Time (sec)\"]-.5, Run37[\"AvgAmp (mm)\"] * 2.5 / 76, \"bx\");\naxs.plot(Run58[\"Time (sec)\"]-.515, Run58[\"AvgAmp (mm)\"] * 4.5 / 78, \"bx\");\n#axs.plot(Run78[\"Time (sec)\"]-.5, Run78[\"AvgAmp (mm)\"] * 3.5 / 76, \"gx\");\naxs.set_ylabel(r'$h/\\lambda$')\naxs.set_xlabel(r'T (s)');\naxs.set_xbound(0.0,1.5);\naxs.set_ybound(-0.0,4);\naxs.legend(loc=4);\n\n%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py \n%load_ext version_information \n%version_information numpy, matplotlib, slict, chest, glopen, globussh" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
james-prior/euler
euler-001-multiples-of-3-and-5-20160319.ipynb
mit
[ "If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.\nThe sum of these multiples is 23.\nFind the sum of all the multiples of 3 or 5 below 1000.", "from __future__ import print_function\n\ndef foo(n):\n return sum(filter(lambda x: x % 3 == 0 or x % 5 == 0, range(n)))\n\nn=10\nfoo(n)\n\nn=1000\n%timeit foo(n)\nfoo(n)\n\ndef foo(n):\n total = 0\n for i in range(n):\n if i % 3 == 0 or i % 5 == 0:\n total += i\n return total\n\nn=1000\n%timeit foo(n)\nfoo(n)", "It is surprising that the naive code immediately above is faster than the functional programming version.", "def foo(n):\n a = []\n for i in range(n):\n if i % 3 == 0 or i % 5 == 0:\n a.append(i)\n return sum(a)\n\nn=1000\n%timeit foo(n)\nfoo(n)\n\ndef foo(n):\n return sum([i for i in range(n) if i % 3 == 0 or i % 5 == 0])\n\nn=1000\n%timeit foo(n)\nfoo(n)\n\ndef foo(n):\n return sum((i for i in range(n) if i % 3 == 0 or i % 5 == 0))\n\nn=1000\n%timeit foo(n)\nfoo(n)\n\ndef foo(n):\n return sum((j for j in (i for i in range(n) if i % 3 == 0) if j % 5 == 0))\n\nn=1000\n%timeit foo(n)\nfoo(n)", "Wow, the nested generator expressions above were the fastest yet, also ugliest, and irredeemably wrong. The flaw is that only numbers that are multiples of both 3 and 5 are summed. I.e., the bug was 'and' instead of 'or'.\nThanks to Eric for catching that cells 13 through 16 are broken. It is easy to fast when one is wrong. So nevermind cells 13 through 16.\nThe unnested list comprehension was faster than the unnested generator expression,\nso let's try nested list comprehensions.", "def foo(n):\n return sum([j for j in [i for i in range(n) if i % 3 == 0] if j % 5 == 0])\n\nn=1000\n%timeit foo(n)\nfoo(n)", "I expected the nested list comprehensions to be a little bit faster than the nested generator expressions, \nso I was surprised by the big speed increase.\nLet's play with generators more.", "def threes_or_fives(gen):\n for i in gen:\n if i % 3 == 0:\n yield i\n elif i % 5 == 0:\n yield i\n\ndef foo(n):\n return sum(threes_or_fives(range(n)))\n\nn=1000\n%timeit foo(n)\nfoo(n)\n\ndef threes_or_fives(gen):\n for i in gen:\n if i % 3 == 0 or i % 5 == 0:\n yield i\n\ndef foo(n):\n return sum(threes_or_fives(range(n)))\n\nn=1000\n%timeit foo(n)\nfoo(n)", "It is surprising that the naive verbose generator function with two if statements\nis faster than the generator function with the combined if statement.\nI thought of one more way later, using sets, that should be much more elegant.", "def foo(n):\n return sum(set(range(0, n, 3)) | set(range(0, n, 5)))\n\nn=1000\n%timeit foo(n)\nfoo(n)", "Holy smokes! It is easier to read, but I did not expect it to be so fast.\nI can not resist the temptation to generalize.", "def foo(n, divisors):\n all_multiples = set([])\n for multiples in (set(range(0, n, divisor)) for divisor in divisors):\n all_multiples |= multiples\n return sum(multiples)\n\nn=1000\ndivisors = (3, 5)\n%timeit foo(n, divisors)\nfoo(n, divisors)", "Thanks to Eric for catching that cells 23 and 24 are broken. I summed the wrong thing. This is fixed below.", "def foo(n, divisors):\n all_multiples = set([])\n for multiples in (set(range(0, n, divisor)) for divisor in divisors):\n all_multiples |= multiples\n return sum(all_multiples)\n\nn=1000\ndivisors = (3, 5)\n%timeit foo(n, divisors)\nfoo(n, divisors)", "That is terribly ugly. 
I was trying to do some kind of union of a set comprehension, but that is just not available as far as I know.\nWhen I give up on that, it becomes simple below.", "def foo(n, divisors):\n multiples = set([])\n for divisor in divisors:\n multiples |= set(range(0, n, divisor))\n return sum(multiples)\n\nn=1000\ndivisors = (3, 5)\n%timeit foo(n, divisors)\nfoo(n, divisors)", "I will have another go at the union thing, this time as a function.", "def union(sets):\n u = set([])\n for s in sets:\n u |= s\n return u\n\ndef foo(n, divisors):\n return sum(union(set(range(0, n, d)) for d in divisors))\n\nn=1000\ndivisors = (3, 5)\n%timeit foo(n, divisors)\nfoo(n, divisors)", "It works, but is not as readable, so forget it.\nOf course, none of this compares to what I did last year." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
TimothyHelton/k2datascience
notebooks/yelp.ipynb
bsd-3-clause
[ "Yelp Dataset Challenge\n\nTimothy Helton\nYelp is a website that allows patrons to review restaurants they have been to. The company runs a regular challenge to see if anyone can derive additional insights from the raw user reviews.\nMore information about the challenge may be found\nhere.\n\nFor excerises 1-4, use the Yelp business json file. For exercises 5-6, use the Yelp review json file.\n\n<br>\n<font color=\"red\">\n NOTE:\n <br>\n This notebook uses code found in the\n <a href=\"https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/yelp.py\">\n <strong>k2datascience.yelp</strong></a> module.\n To execute all the cells do one of the following items:\n <ul>\n <li>Install the k2datascience package to the active Python interpreter.</li>\n <li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>\n <li>Create a link to the yelp.py file in the same directory as this notebook.</li>\n</font>\n\nImports", "from k2datascience import yelp\n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\n%matplotlib inline", "Load Data\nCreate Instance of Yelp Class", "ydc = yelp.YDC()\n\nydc.load_data()\n\nbusiness = ydc.file_data['business']\nbusiness.shape\nbusiness.head()\nbusiness.tail()", "Exercise 1: Create a new column that contains only the zipcode.", "ydc.get_zip_codes()\nbusiness.head()", "Exercise 2: The table contains a column called 'categories' and each entry in this column is populated by a list. We are interested in those businesses that are restaurants. Create a new column 'Restaurant_type' that contains a description of the restaurant based on the other elements of 'categories.\nThat is, if we have '[Sushi Bars, Japanese, Restaurants]' in categories the 'Restaurant_type will be '{'SushiBars': 1, 'Japanese': 1, 'Mexican': 0, ...}'", "ydc.get_restaurant_type()\nbusiness.head()\n\nbusiness.restaurant_type.ix[0]", "Exercise 3: Lets clean the 'attributes' column. The entries in this column are dictionaries. We need to do two things:\n1) Turn all the True or False in the dictionary to 1s and 0s.\n2) There are some entries within dictionaries that are dictionaries themselves, lets turn the whole entry into just one dictionary, for example if we have\n'{'Accepts Credit Cards': True, 'Alcohol': 'none','Ambience': {'casual': False,'classy': False}}'\nthen turn it into\n'{'Accepts Credit Cards':1, 'Alcohol_none': 1, 'Ambience_casual': 0, 'Ambience_classy': 0}'.\nThere might be other entries like {'Price Range': 1} where the values are numerical so we might want to change that into {'Price_Range_1': 1}.\nThe reason we modify categorical variables like this is that machine learning algorithms cannot interpret textual data like \"True\" and \"False\". 
They need numerical inputs such as 1 and 0.", "business.attributes = yelp.convert_boolean(business.attributes)\nbusiness.head()\n\nbusiness.attributes.ix[0]", "Exercise 4: Create a new column for every day of the week and fill it with the amount of hours the business is open that day.\nYour approach should handle businesses that stay open late like bars and nightclubs.", "ydc.calc_open_hours()\nbusiness.head(6)", "Exercise 5: Create a table with the average review for a business.\nYou will need to pull in a new json file and merge DataFrames for the next 2 exercises.", "review = ydc.file_data['review']\nreview.shape\nreview.head()\nreview.tail()\n\nydc.get_avg_stars()\nydc.file_data['business'].head()", "Exercise 6: Create a new table that only contains restaurants with the following schema:\nBusiness_Name | Restaurant_type | Friday hours | Saturday hours | Attributes | Zipcode | Average Rating", "mask = ['name', 'restaurant_type', 'Friday', 'Saturday',\n 'attributes', 'zip_code', 'stars_avg']\nydc.file_data['business'].loc[:, mask]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hjelmj/electrochem
notebooks/DoubleLayerCapacitance.ipynb
mit
[ "# Python dependencies\n# from __future__ import division, print_function\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom scipy.constants import codata\n\n# change the default font set (matplotlib.rc)\nmpl.rc('mathtext', fontset='stixsans', default='regular')\n\n# increase text size somewhat\nmpl.rcParams.update({'axes.labelsize':12, 'font.size': 12})\n\n# set up notebook for inline plotting\n%matplotlib inline\n\n# get constants from CODATA 2010\nF = codata.physical_constants['Faraday constant'][0]\nR = codata.physical_constants['molar gas constant'][0]\nEPSILON_0 = codata.physical_constants['electric constant'][0]\n# check the constants\nF,R,EPSILON_0", "Double-Layer Capacitance\nThe following is based mainly on the treatment presented in the book by Oldham, Myland, and Bond: Electrochemical Science and Technology.\nOther excellent introductions to electrical double-layers can be found in the books Electrochemical Methods by Bard & Faulkner and in Electrochemical Systems by Newman and Thomas-Alyea.\nDefinition of Capacitance\nCapacitance is defined as the ratio of charge, $Q$, and voltage (potential difference), $V$, across the capacitor:\n$$C = \\dfrac{Q}{V}$$\nIn electrochemical cells the capacitance usually varies somewhat with the electrode potential, so it is useful to think of capacitance as a differential quantity, i.e. $C_{differential} = C_{d} = \\dfrac{\\mathrm{d}Q}{\\mathrm{d}V}$ or $C_{d} = \\dfrac{\\mathrm{d}Q}{\\mathrm{d}E}$ for a single electrode, with the electrode potential $E$.\nThe integral capacitance is given by \n$$C_{integral} = \\dfrac{Q(E)}{E-E_{zc}} = \\dfrac{1}{E-E_{zc}} \\int_{E_{zc}}^E {C_{d}}\\mathrm{d}E$$\nwhere $Q$ is a function of $E$ and $E_{zc}$ is the potential of zero charge.\nCapacitance of a Parallel Plate Capacitor\nA parallel plate capacitor basically is a two-terminal device that can be used to store energy electrostatically. The capacitance of a parallel plate capacitor is given by:\n$$C_{pp} = \\dfrac{\\epsilon \\epsilon_0 A}{x}$$\nwhere $\\epsilon$ is the relative static permittivity (dielectric constant), $\\epsilon_0$ is the permittivity of free space [F m$^{-1}$], and A and x are the area and distance between the plates, respectively.\nCapacitance of the Helmholtz Layer\nThe Helmholtz model is essentially a parallel plate capacitor model, and thus the capacitance of the Helmholtz layer is given by:\n$$\\dfrac{C_{H}}{A} = \\dfrac{\\mathrm{d}q}{\\mathrm{d}E}= \\dfrac{\\epsilon_H \\epsilon_0}{x_H}$$\nwhere $q$ is the charge density, $\\epsilon_H$ is the relative dielectric constant of the Helmholtz layer, $\\epsilon_0$ is the permittivity of free space [F], and $A$ and $x_H$ are the area of the electrode and the thickness of the Helmholtz layer, respectively.", "def cap_helmholtz(epsilon_H=6.8, x=0.2e-9, A=1e-4):\n \"\"\"\n *input*\n epsilon_H - the relative permittivity of the Helmholtz layer, default is 6.8\n x, the thickness of the Helmholtz layer [m], default is 0.2 nm, i.e. 0.2e-9 cm.\n A, electrode area [m²], default is 1 cm², i.e. 1e-4\n *output*\n C, [F]\n \"\"\"\n C_H = (epsilon_H*EPSILON_0*A)/x\n return C_H", "The relative permittivity of bulk water at 25$\\,^{\\circ}{\\rm C}$ is 78.5. The solvent molecules and ions in the Helmholtz (or compact) layer are confined in a narrow region and ordered by an intense local electric field, leading to significantly lower relative permittivities than their corresponding bulk values. 
For water some estimates suggests it could be as low as 5 for water molecules in the inner Helmholtz layer and around 30 for water molecules in the outer Helmholtz layer. \nTo take into account different relative permittivities we have to divide the Helmholtz layer into a number of series connected parallel plate capacitors, but to keep things simple we will treat it as if it had only a single permittivity here.", "#For a Helmholtz layer thickness of 0.2 nm (roughly half the diameter of a hydrated cation), and a relative permittivity of 5:\nprint(round(cap_helmholtz(5,0.2e-9,100e-6),7), 'F')", "Capacitance of the Diffuse Layer\nThe Gouy-Chapman model predicts the capacitance of the diffuse double layer to be given by (for a 1:1 electrolyte and monovalent ions):\n$$ \\dfrac{C_{GC}}{A} = \\sqrt{\\dfrac{2 F^2 \\epsilon \\epsilon_0 c}{RT}} \\mathrm{cosh} \\left[ \\dfrac{F}{2RT} (E-E_{zc}) \\right] $$\nwhere $c$ is the concentration of the electrolyte (i.e. $c = c_{cation} = c_{anion}$) in the solution.\nThe Stern Model\nThe total double layer capacitance can be modeled using a series combination of the Gouy-Chapman model, which models the capacitance of the diffuse double-layer, and the Helmholtz model, which describes the capacitance of the compact double layer that is located closest to the electrode surface, as suggested originally by Stern. This is sometimes referred to as the Gouy-Chapman-Stern model, or just the Stern model.\nThe total capacitance of the two layers would thus be:\n$$\\dfrac{1}{C_S} = \\dfrac{1}{C_H} + \\dfrac{1}{C_{GC}}$$", "def cap_gouychapman(E,Epz=0,c=10,T=298.15,epsilon=78.54,A=1e-4):\n \"\"\"\n *input*\n E, electrode potential [V]\n Epz, potential of zero charge [V], default is 0 V\n c [mol/m³], default is 10 mM, i.e. 10e-3 mol/dm³ = 10 mol/m³ \n T [K], default is 298.15 K\n epsilon [dimensionless]\n A, the electrode area in m² \n *output*\n C_GC_spec, the capacitance (C_GC) of \n the diffuse double layer as predicted by the Gouy-Chapman model for \n a 1:1 electrolyte with monovalent ions.\n \"\"\"\n C_GC = A*np.sqrt((2*(F**2)*epsilon*EPSILON_0*c)/(R*T))*np.cosh(F*(E-Epz)/(2*R*T))\n return C_GC\n\ndef cap_stern(E,Epz=0,epsilon_H=6.8,x=0.2e-9,c=10,T=298.15,epsilon=78.54,A=1e-4):\n \"\"\"Returns the total double-layer capacitance as predicted by the\n Gouy-Chapman-Stern model\"\"\"\n C_h = cap_helmholtz(epsilon_H=epsilon_H,x=x,A=A)\n C_gc = cap_gouychapman(E,Epz=Epz,c=c,T=T,epsilon=epsilon,A=A)\n recip_C_S = 1/C_h + 1/C_gc\n return 1/recip_C_S", "Relationship between Capacitance and Electrode Potential\nIn the following we will use matplotlib to visualize the relationship between capacitance and electrode potential around the potential of zero charge.", "#generate a range of potentials\nE = np.linspace(-0.2,0.2, num=401)\nE[:5] #check the first five values in the array\n\n#now plot the component capacitances and the total as well\nfig, ax = plt.subplots(nrows=1, ncols=1)\n\n#plot the data,multiply by 1e6 to get µF\nax.plot(E, 1e6*cap_helmholtz()*np.ones(len(E)), 'b-', label='Helmholtz')\nax.plot(E, 1e6*cap_gouychapman(E), 'g-', label='Gouy-Chapman')\nax.plot(E, 1e6*cap_stern(E), 'r-', label='Stern')\n\n#set axis labels\nax.set_xlabel('$E-E_{pz}$ [V]')\nax.set_ylabel('C [$\\mu F \\cdot cm^{-2}$]')\n\n#set axis limits\nax.set_ylim(0,100)\nax.set_xlim(-0.2,0.2)\n\n#figure legend\nax.legend(loc='best', ncol=1, frameon=False, fontsize=10)\n\n#savefig\n#plt.savefig('double-layer-cap_vs_potential.png', dpi=200)\n\nplt.show()", "Total Capacitance as a Function of Electrolyte 
Concentration\nStill considering a 1:1 electrolyte with monovalent ions, but now we'll calculate the total capacitance as a function of the electrolyte concentration.", "#electrolyte concentrations (in M)\n\nc_electrolyte_M = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5] \n\n#electrolyte concentration (convert to array and to units of mol/m³ and then back again to a list)\n\nc_electrolyte = list(np.array(c_electrolyte_M)*1e3)\n\n#now plot the component capacitances and the total as well\nfig, ax = plt.subplots(nrows=1, ncols=1)\n\ncmap = plt.cm.Greens(range(50,255,20))\n\n#plot the data,multiply by 1e6 to get µF\nfor i,conc in enumerate(c_electrolyte):\n ax.plot(E, 1e6*cap_stern(E, c=conc), ls='-', color=cmap[i], label=str(conc/1e3)+' M')\n \nax.plot(E, 1e6*cap_helmholtz()*np.ones(len(E)), 'k--', label='Helmholtz')\n\n#set axis labels\nax.set_xlabel('$E-E_{pz}$ [V]')\nax.set_ylabel('C [$\\mu F \\cdot cm^{-2}]$')\n\n#figure legend\nax.legend(loc='best', ncol=3, frameon=False, fontsize=10)\n\n#set axis limits\nax.set_ylim(0,32)\nax.set_xlim(-0.2,0.2)\n\n#save figure\n#plt.savefig('double-layer-cap_vs_conc.png', dpi=200)\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
undercertainty/ou_nlp
.ipynb_checkpoints/Untitled-checkpoint.ipynb
apache-2.0
[ "A simple (ie. no error checking or sensible engineering) notebook to extract the student answer data from an xml file. \nThe semeval data here is obtained from the semeval 2013 website\nI'm not 100% sure what we actually need for the moment, so I'm just going to extract the student answer data from a single file. That is, I'm not at first going to use the reference answer etc.", "filename='semeval2013-task7/semeval2013-Task7-5way/beetle/train/Core/FaultFinding-BULB_C_VOLTAGE_EXPLAIN_WHY1.xml'\n\nimport pandas as pd\n\nfrom xml.etree import ElementTree as ET\n\ntree=ET.parse(filename)", "The reference answers are the third daughter node of the tree:", "r=tree.getroot()\nr[2]", "Now iterate over the student answers to get the specific responses. For the moment, we'll just stick to the text and the accuracy. I'll also add an index term to make it a bit easier to convert to a dataframe.", "responses_ls=[{'accuracy':a.attrib['accuracy'], 'text':a.text, 'idx':i} for (i, a) in enumerate(r[2])]\n\nresponses_ls", "Next, we need to carry out whatever analysis we want on the answers. In this case, we'll split on whitespace, convert to lower case, and strip punctuation. Feel free to redefine the to_tokens function to do whatever analysis you prefer.", "from string import punctuation\n\ndef to_tokens(textIn):\n '''Convert the input textIn to a list of tokens'''\n tokens_ls=[t.lower().strip(punctuation) for t in textIn.split()]\n # remove any empty tokens\n return [t for t in tokens_ls if t]\n\nstr='\"Help!\" yelped the banana, who was obviously scared out of his skin.'\nprint(str)\nprint(to_tokens(str))", "So now we can apply the to_tokens function to each of the student responses:", "for resp_dict in responses_ls:\n resp_dict['tokens']=to_tokens(resp_dict['text'])\nresponses_ls", "OK, good. So now let's see how big the vocabulary is for the complete set:", "vocab_set=set()\nfor resp_dict in responses_ls:\n vocab_set=vocab_set.union(set(resp_dict['tokens']))\n \nlen(vocab_set)", "Now we can set up a document frequency dict:", "docFreq_dict={}\n\nfor t in vocab_set:\n docFreq_dict[t]=len([resp_dict for resp_dict in responses_ls if t in resp_dict['tokens']])\n \ndocFreq_dict", "Now add a tf.idf dict to each of the responses:", "for resp_dict in responses_ls:\n resp_dict['tfidf']={t:resp_dict['tokens'].count(t)/docFreq_dict[t] for t in resp_dict['tokens']}\n \nresponses_ls[6]", "Finally, convert the response data into a dataframe:", "out_df=pd.DataFrame(index=docFreq_dict.keys())\nfor resp_dict in responses_ls:\n out_df[resp_dict['idx']]=pd.Series(resp_dict['tfidf'], index=out_df.index)\n\nout_df=out_df.fillna(0).T\nout_df.head()\n\naccuracy_ss=pd.Series({r['idx']:r['accuracy'] for r in responses_ls})\naccuracy_ss.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/fraud_detection_with_tensorflow_bigquery.ipynb
apache-2.0
[ "Building a Fraud Detection model on Vertex AI with TensorFlow Enterprise and BigQuery\nLearning objectives\n\nAnalyze the data in BigQuery.\nIngest records from BigQuery.\nPreprocess the data.\nBuild the model.\nTrain the model.\nEvaluate the model.\n\nIntroduction\nIn this notebook, you'll directly ingest a BigQuery dataset and train a fraud detection model with TensorFlow Enterprise on Vertex AI.\nYou've also walked through all the steps of building a model. Finally, you learned a bit about how to handle imbalanced classification problems.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. \nIngest records from BigQuery\nStep 1: Import Python packages\nRun the below cell to import the python packages.", "import tensorflow as tf\nimport tensorflow.keras as keras\nimport tensorflow.keras.layers as layers\n\nfrom tensorflow_io.bigquery import BigQueryClient\n\nimport functools", "Step 2: Define constants\nLet's next define some constants for use in the project. Change GCP_PROJECT_ID to the actual project ID you are using. Go ahead and run new cells as you create them.", "GCP_PROJECT_ID = 'qwiklabs-gcp-00-b1e00ce17168' # Replace with your Project-ID\nDATASET_GCP_PROJECT_ID = GCP_PROJECT_ID # A copy of the data is saved in the user project\nDATASET_ID = 'tfe_codelab'\nTRAIN_TABLE_ID = 'ulb_fraud_detection_train'\nVAL_TABLE_ID = 'ulb_fraud_detection_val'\nTEST_TABLE_ID = 'ulb_fraud_detection_test'\n\nFEATURES = ['Time','V1','V2','V3','V4','V5','V6','V7','V8','V9','V10','V11','V12','V13','V14','V15','V16','V17','V18','V19','V20','V21','V22','V23','V24','V25','V26','V27','V28','Amount']\nLABEL='Class'\nDTYPES=[tf.float64] * len(FEATURES) + [tf.int64]", "Step 3: Define helper functions\nNow, let's define a couple functions. read_session() reads data from a BigQuery table. extract_labels() is a helper function to separate the label column from the rest, so that the dataset is in the format expected by keras.model_fit() later on.", "client = BigQueryClient()\n\ndef read_session(TABLE_ID):\n return client.read_session(\n \"projects/\" + GCP_PROJECT_ID, DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,\n FEATURES + [LABEL], DTYPES, requested_streams=2\n)\n\ndef extract_labels(input_dict):\n features = dict(input_dict)\n label = tf.cast(features.pop(LABEL), tf.float64)\n return (features, label)", "Step 4: Ingest data\nFinally, let's create each dataset and then print the first batch from the training dataset. Note that we have defined a BATCH_SIZE of 32. This is an important parameter that will impact the speed and accuracy of training.", "BATCH_SIZE = 32\n\n# TODO 1\n# Create the datasets\nraw_train_data = read_session(TRAIN_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)\nraw_val_data = read_session(VAL_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)\nraw_test_data = read_session(TEST_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)\n\nnext(iter(raw_train_data)) # Print first batch", "Build the model\nStep 1: Preprocess data\nLet's create feature columns for each feature in the dataset. In this particular dataset, all of the columns are of type numeric_column, but there a number of other column types (e.g. categorical_column).\nYou will also norm the data to center around zero so that the network converges faster. 
You've precalculated the means of each feature to use in this calculation.", "# TODO 2\nMEANS = [94816.7387536405, 0.0011219465482001268, -0.0021445914636999603, -0.002317402958335562,\n -0.002525792169927835, -0.002136576923287782, -3.7586818983702984, 8.135919975738768E-4,\n -0.0015535579268265718, 0.001436137140461279, -0.0012193712736681508, -4.5364970422902533E-4,\n -4.6175444671576083E-4, 9.92177789685366E-4, 0.002366229151475428, 6.710217226762278E-4,\n 0.0010325807119864225, 2.557260815835395E-4, -2.0804190062322664E-4, -5.057391100818653E-4,\n -3.452114767842334E-6, 1.0145936326270006E-4, 3.839214074518535E-4, 2.2061197469126577E-4,\n -1.5601580596677608E-4, -8.235017846415852E-4, -7.298316615408554E-4, -6.898459943652376E-5,\n 4.724125688297753E-5, 88.73235686453587]\n\ndef norm_data(mean, data):\n data = tf.cast(data, tf.float32) * 1/(2*mean)\n return tf.reshape(data, [-1, 1])\n\nnumeric_columns = []\n\nfor i, feature in enumerate(FEATURES):\n num_col = tf.feature_column.numeric_column(feature, normalizer_fn=functools.partial(norm_data, MEANS[i]))\n numeric_columns.append(num_col)\n\nnumeric_columns", "Step 2: Build the model\nNow we are ready to create a model. We will feed the columns we just created into the network. Then we will compile the model. We are including the Precision/Recall AUC metric, which is useful for imbalanced datasets.", "# TODO 3\nmodel = keras.Sequential([\n tf.keras.layers.DenseFeatures(numeric_columns),\n layers.Dense(64, activation='relu'),\n layers.Dense(64, activation='relu'),\n layers.Dense(1, activation='sigmoid')\n])\n\n# Compile the model\nmodel.compile(loss='binary_crossentropy',\n optimizer='adam',\n metrics=['accuracy', tf.keras.metrics.AUC(curve='PR')])", "Step 3: Train the model\nThere are a number of techniques to handle imbalanced data, including oversampling (generating new data in the minority class) and undersampling (reducing the data in the majority class).\nFor the purposes of this codelab, let's use a technique that overweights the loss when misclassifying the minority class. You'll specify a class_weight parameter when training and weight \"1\" (fraud) higher, since it is much less prevalent.\nYou will use 3 epochs (passes through the data) in this lab so training is quicker. In a real-world scenario, You'd want to run it long enough to the point where the stop seeing increases in accuracy of the validation set.", "# TODO 4\nCLASS_WEIGHT = {\n 0: 1,\n 1: 100\n}\nEPOCHS = 3\n\ntrain_data = raw_train_data.shuffle(10000)\nval_data = raw_val_data\ntest_data = raw_test_data\n\n# Train the model using model.fit()\nmodel.fit(train_data, validation_data=val_data, class_weight=CLASS_WEIGHT, epochs=EPOCHS)", "Step 4: Evaluate the model\nThe evaluate() function can be applied to test data that the model has never seen to provide an objective assessment. Fortunately, we've set aside test data just for that!", "# TODO 5\n# Evaluate the model\nmodel.evaluate(test_data)", "Step 5: Exploration\nIn this lab, you've demonstrated how to ingest a large data set from BigQuery directly into a TensorFlow Keras model. You've also walked through all the steps of building a model. Finally, you learned a bit about how to handle imbalanced classification problems.\nFeel free to keep playing around with different architectures and approaches to the imbalanced dataset, to see if you can improve the accuracy!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/mlops-on-gcp
immersion/guided_projects/guided_project_3_nlp_starter/keras_for_text_classification.ipynb
apache-2.0
[ "Keras for Text Classification\nLearning Objectives\n1. Learn how to create a text classification datasets using BigQuery\n1. Learn how to tokenize and integerize a corpus of text for training in Keras\n1. Learn how to do one-hot-encodings in Keras\n1. Learn how to use embedding layers to represent words in Keras\n1. Learn about the bag-of-word representation for sentences\n1. Learn how to use DNN/CNN/RNN model to classify text in keras\nIntroduction\nIn this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.\nIn the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.\nThen we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.\nThe first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.", "import os\n\nfrom google.cloud import bigquery\nimport pandas as pd\n\n%load_ext google.cloud.bigquery", "Replace the variable values in the cell below:", "PROJECT = \"qwiklabs-gcp-04-14242c0aa6a7\" # Replace with your PROJECT\nBUCKET = PROJECT # defaults to PROJECT\nREGION = \"us-central1\" # Replace with your REGION\nSEED = 0", "Create a Dataset from BigQuery\nHacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. \nHere is a sample of the dataset:", "%%bigquery --project $PROJECT\n\nSELECT\n url, title, score\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n LENGTH(title) > 10\n AND score > 10\n AND LENGTH(url) > 0\nLIMIT 10", "Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>", "%%bigquery --project $PROJECT\n\nSELECT\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n COUNT(title) AS num_articles\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n AND LENGTH(title) > 10\nGROUP BY\n source\nORDER BY num_articles DESC\n LIMIT 100", "Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. 
This will be our labeled dataset for machine learning.", "regex = '.*://(.[^/]+)/'\n\n\nsub_query = \"\"\"\nSELECT\n title,\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source\n \nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')\n AND LENGTH(title) > 10\n\"\"\".format(regex)\n\n\nquery = \"\"\"\nSELECT \n LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,\n source\nFROM\n ({sub_query})\nWHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')\n\"\"\".format(sub_query=sub_query)\n\nprint(query)", "For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.", "bq = bigquery.Client(project=PROJECT)\ntitle_dataset = bq.query(query).to_dataframe()\ntitle_dataset.head()", "AutoML for text classification requires that\n* the dataset be in csv form with \n* the first column being the texts to classify or a GCS path to the text \n* the last colum to be the text labels\nThe dataset we pulled from BiqQuery satisfies these requirements.", "print(\"The full dataset contains {n} titles\".format(n=len(title_dataset)))", "Let's make sure we have roughly the same number of labels for each of our three labels:", "title_dataset.source.value_counts()", "Finally we will save our data, which is currently in-memory, to disk.\nWe will create a csv file containing the full dataset and another containing only 1000 articles for development.\nNote: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.", "DATADIR = './data/'\n\nif not os.path.exists(DATADIR):\n os.makedirs(DATADIR)\n\nFULL_DATASET_NAME = 'titles_full.csv'\nFULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)\n\n# Let's shuffle the data before writing it to disk.\ntitle_dataset = title_dataset.sample(n=len(title_dataset))\n\ntitle_dataset.to_csv(\n FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')", "Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).", "sample_title_dataset = title_dataset.sample(n=1000)\nsample_title_dataset.source.value_counts()", "Let's write the sample datatset to disk.", "SAMPLE_DATASET_NAME = 'titles_sample.csv'\nSAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)\n\nsample_title_dataset.to_csv(\n SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')\n\nsample_title_dataset.head()\n\nimport os\nimport shutil\n\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.keras.callbacks import TensorBoard, EarlyStopping\nfrom tensorflow.keras.layers import (\n Embedding,\n Flatten,\n GRU,\n Conv1D,\n Lambda,\n Dense,\n)\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.utils import to_categorical\n\n\nprint(tf.__version__)\n\n%matplotlib inline", "Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:", "LOGDIR = \"./text_models\"\nDATA_DIR = \"./data\"", "Loading the 
dataset\nOur dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times).", "DATASET_NAME = \"titles_full.csv\"\nTITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)\nCOLUMNS = ['title', 'source']\n\ntitles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)\ntitles_df.head()", "Integerize the texts\nThe first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:", "tokenizer = Tokenizer()\ntokenizer.fit_on_texts(titles_df.title)\n\nintegerized_titles = tokenizer.texts_to_sequences(titles_df.title)\nintegerized_titles[:3]\n\nVOCAB_SIZE = len(tokenizer.index_word)\nVOCAB_SIZE\n\nDATASET_SIZE = tokenizer.document_count\nDATASET_SIZE\n\nMAX_LEN = max(len(sequence) for sequence in integerized_titles)\nMAX_LEN", "Let's now implement a function create_sequence that will \n* take as input our titles as well as the maximum sentence length and \n* returns a list of the integers corresponding to our tokens padded to the sentence maximum length\nKeras has the helper functions pad_sequence for that on the top of the tokenizer methods.", "# TODO 1\ndef create_sequences(texts, max_len=MAX_LEN):\n sequences = tokenizer.texts_to_sequences(texts)\n padded_sequences = pad_sequences(sequences, max_len, padding='post')\n return padded_sequences\n\nsequences = create_sequences(titles_df.title[:3])\nsequences\n\ntitles_df.source[:4]", "We now need to write a function that \n* takes a title source and\n* returns the corresponding one-hot encoded vector\nKeras to_categorical is handy for that.", "CLASSES = {\n 'github': 0,\n 'nytimes': 1,\n 'techcrunch': 2\n}\nN_CLASSES = len(CLASSES)\n\n# TODO 2\ndef encode_labels(sources):\n classes = [CLASSES[source] for source in sources]\n one_hots = to_categorical(classes)\n return one_hots\n\nencode_labels(titles_df.source[:4])", "Preparing the train/test splits\nLet's split our data into train and test splits:", "N_TRAIN = int(DATASET_SIZE * 0.80)\n\ntitles_train, sources_train = (\n titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])\n\ntitles_valid, sources_valid = (\n titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])", "To be on the safe side, we verify that the train and test splits\nhave roughly the same number of examples per classes.\nSince it is the case, accuracy will be a good metric to use to measure\nthe performance of our models.", "sources_train.value_counts()\n\nsources_valid.value_counts()", "Using create_sequence and encode_labels, we can now prepare the\ntraining and validation data to feed our models.\nThe features will be\npadded list of integers and the labels will be one-hot-encoded 3D vectors.", "X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)\nX_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)\n\nX_train[:3]\n\nY_train[:3]", "Building a DNN model\nThe build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.\nNote that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors 
returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as \"bag-of-words\".", "def build_dnn_model(embed_dim):\n\n model = Sequential([\n Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]), # TODO 3\n Lambda(lambda x: tf.reduce_mean(x, axis=1)), # TODO 4\n Dense(N_CLASSES, activation='softmax') # TODO 5\n ])\n\n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy']\n )\n return model", "Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.", "%%time\n\ntf.random.set_seed(33)\n\nMODEL_DIR = os.path.join(LOGDIR, 'dnn')\nshutil.rmtree(MODEL_DIR, ignore_errors=True)\n\nBATCH_SIZE = 300\nEPOCHS = 100\nEMBED_DIM = 10\nPATIENCE = 0\n\ndnn_model = build_dnn_model(embed_dim=EMBED_DIM)\n\ndnn_history = dnn_model.fit(\n X_train, Y_train,\n epochs=EPOCHS,\n batch_size=BATCH_SIZE,\n validation_data=(X_valid, Y_valid),\n callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],\n)\n\npd.DataFrame(dnn_history.history)[['loss', 'val_loss']].plot()\npd.DataFrame(dnn_history.history)[['accuracy', 'val_accuracy']].plot()\n\ndnn_model.summary()", "Building a RNN model\nThe build_dnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.\nThe first and last layers are the same as for the simple DNN model.\nNote that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.", "def build_rnn_model(embed_dim, units):\n\n model = Sequential([\n Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True), # TODO 3\n GRU(units), # TODO 5\n Dense(N_CLASSES, activation='softmax')\n ])\n\n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy']\n )\n return model", "Let's train the model with early stoping as above. \nObserve that we obtain the same type of accuracy as with the DNN model, but in less epochs (~3 v.s. 
~20 epochs):", "%%time\n\ntf.random.set_seed(33)\n\nMODEL_DIR = os.path.join(LOGDIR, 'rnn')\nshutil.rmtree(MODEL_DIR, ignore_errors=True)\n\nEPOCHS = 100\nBATCH_SIZE = 300\nEMBED_DIM = 10\nUNITS = 16\nPATIENCE = 0\n\nrnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)\n\nhistory = rnn_model.fit(\n X_train, Y_train,\n epochs=EPOCHS,\n batch_size=BATCH_SIZE,\n validation_data=(X_valid, Y_valid),\n callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],\n)\n\npd.DataFrame(history.history)[['loss', 'val_loss']].plot()\npd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()\n\nrnn_model.summary()", "Build a CNN model\nThe build_dnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.\nThe first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer betwen the convolution and the softmax layer.\nNote that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.", "def build_cnn_model(embed_dim, filters, ksize, strides):\n\n model = Sequential([\n Embedding(\n VOCAB_SIZE + 1,\n embed_dim,\n input_shape=[MAX_LEN],\n mask_zero=True), # TODO 3\n Conv1D( # TODO 5\n filters=filters,\n kernel_size=ksize,\n strides=strides,\n activation='relu',\n ),\n Flatten(), # TODO 5\n Dense(N_CLASSES, activation='softmax')\n ])\n\n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy']\n )\n return model", "Let's train the model. \nAgain we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.", "%%time\n\ntf.random.set_seed(33)\n\nMODEL_DIR = os.path.join(LOGDIR, 'cnn')\nshutil.rmtree(MODEL_DIR, ignore_errors=True)\n\nEPOCHS = 100\nBATCH_SIZE = 300\nEMBED_DIM = 5\nFILTERS = 200\nSTRIDES = 2\nKSIZE = 3\nPATIENCE = 0\n\n\ncnn_model = build_cnn_model(\n embed_dim=EMBED_DIM,\n filters=FILTERS,\n strides=STRIDES,\n ksize=KSIZE,\n)\n\ncnn_history = cnn_model.fit(\n X_train, Y_train,\n epochs=EPOCHS,\n batch_size=BATCH_SIZE,\n validation_data=(X_valid, Y_valid),\n callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],\n)\n\npd.DataFrame(cnn_history.history)[['loss', 'val_loss']].plot()\npd.DataFrame(cnn_history.history)[['accuracy', 'val_accuracy']].plot()\n\ncnn_model.summary()", "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/sandbox-3/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: SANDBOX-3\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adative grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of claving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description if ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ioam/scipy-2017-holoviews-tutorial
solutions/08-deploying-bokeh-apps-with-solutions.ipynb
bsd-3-clause
[ "<a href='http://www.holoviews.org'><img src=\"assets/hv+bk.png\" alt=\"HV+BK logos\" width=\"40%;\" align=\"left\"/></a>\n<div style=\"float:right;\"><h2>08. Deploying Bokeh Apps</h2></div>\n\nIn the previous sections we discovered how to use a HoloMap to build a Jupyter notebook with interactive visualizations that can be exported to a standalone HTML file, as well as how to use DynamicMap and Streams to set up dynamic interactivity backed by the Jupyter Python kernel. However, frequently we want to package our visualization or dashboard for wider distribution, backed by Python but run outside of the notebook environment. Bokeh Server provides a flexible and scalable architecture to deploy complex interactive visualizations and dashboards, integrating seamlessly with Bokeh and with HoloViews.\nFor a detailed background on Bokeh Server see the bokeh user guide. In this tutorial we will discover how to deploy the visualizations we have created so far as a standalone bokeh server app, and how to flexibly combine HoloViews and Bokeh APIs to build highly customized apps. We will also reuse a lot of what we have learned so far---loading large, tabular datasets, applying datashader operations to them, and adding linked streams to our app.\nA simple bokeh app\nThe preceding sections of this tutorial focused solely on the Jupyter notebook, but now let's look at a bare Python script that can be deployed using Bokeh Server:", "with open('./apps/server_app.py', 'r') as f:\n print(f.read())", "Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options.\nStep 1 is new: Instead of loading the bokeh extension using hv.extension('bokeh'), we get a direct handle on a bokeh renderer using the hv.renderer function. This has to be done at the top of the script, to be sure that options declared are passed to the Bokeh renderer. \nStep 3 is also new: instead of typing app to see the visualization as we would in the notebook, here we create a Bokeh document from it by passing the HoloViews object to the renderer.server_doc method. \nSteps 1 and 3 are essentially boilerplate, so you can now use this simple skeleton to turn any HoloViews object into a fully functional, deployable Bokeh app!\nDeploying the app\nAssuming that you have a terminal window open with the hvtutorial environment activated, in the notebooks/ directory, you can launch this app using Bokeh Server:\nbokeh serve --show apps/server_app.py\nIf you don't already have a favorite way to get a terminal, one way is to open it from within Jupyter, then make sure you are in the notebooks directory, and activate the environment using source activate hvtutorial (or activate tutorial on Windows). You can also open the app script file in the inbuilt text editor, or you can use your own preferred editor.", "# Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve\n# Tip: Refer to the previous notebook\nwith open('./apps/server_app_with_solutions.py', 'r') as f:\n print(f.read())\n \n# Run using: bokeh serve --show apps/server_app_with_solutions.py", "Iteratively building a bokeh app in the notebook\nThe above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. 
Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations in the notebook, deploying it as a standalone app only once we are happy with it.\nTo illustrate this process, let's quickly go through such a workflow. As before we will set up our imports, load the extension, and load the taxi dataset:", "import holoviews as hv\nimport geoviews as gv\nimport dask.dataframe as dd\n\nfrom holoviews.operation.datashader import datashade, aggregate, shade\nfrom bokeh.models import WMTSTileSource\n\nhv.extension('bokeh', logo=False)\n\nusecols = ['tpep_pickup_datetime', 'dropoff_x', 'dropoff_y']\nddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'], usecols=usecols)\nddf['hour'] = ddf.tpep_pickup_datetime.dt.hour\nddf = ddf.persist()", "Next we define a Counter stream which we will use to select taxi trips by hour.", "stream = hv.streams.Counter()\npoints = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])\ndmap = hv.DynamicMap(lambda counter: points.select(hour=counter%24).relabel('Hour: %s' % (counter % 24)),\n streams=[stream])\nshaded = datashade(dmap)\n\nhv.opts('RGB [width=800, height=600, xaxis=None, yaxis=None]')\n\nurl = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'\nwmts = gv.WMTS(WMTSTileSource(url=url))\n\noverlay = wmts * shaded", "Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would with an any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first edit the following cell to change \"8888\" to whatever port your jupyter session is using, in case your URL bar doesn't say \"localhost:8888/\".\nThen run this cell to launch the Bokeh app within this notebook:", "renderer = hv.renderer('bokeh')\nserver = renderer.app(overlay, show=True, websocket_origin='localhost:8888')", "We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even without any user input:", "dmap.periodic(1)", "You can stop this ongoing process by clearing the cell displaying the app.\nNow let's open the text editor again and make this edit to a separate app, which we can then launch using Bokeh Server separately from this notebook.", "# Exercise: Copy the example above into periodic_app.py and modify it so it can be run with bokeh serve\n# Hint: Use hv.renderer and renderer.server_doc\n# Note that you have to run periodic **after** creating the bokeh document\nwith open('./apps/periodic_app.py', 'r') as f:\n print(f.read())\n \n# Run using: bokeh serve --show apps/periodic_app.py", "Combining HoloViews with bokeh models\nNow for a last hurrah let's put everything we have learned to good use and create a bokeh app with it. This time we will go straight to a Python script containing the app. 
If you run the app with bokeh serve --show ./apps/player_app.py from your terminal you should see something like this:\n<img src=\"./assets/tutorial_app.gif\"></img>\nThis more complex app consists of several components:\n\nA datashaded plot of points for the indicated hour of the daty (in the slider widget)\nA linked PointerX stream, to compute a cross-section\nA set of custom bokeh widgets linked to the hour-of-day stream\n\nWe have already covered 1. and 2. so we will focus on 3., which shows how easily we can combine a HoloViews plot with custom Bokeh models. We will not look at the precise widgets in too much detail, instead let's have a quick look at the callback defined for slider widget updates:\npython\ndef slider_update(attrname, old, new):\n stream.event(hour=new)\nWhenever the slider value changes this will trigger a stream event updating our plots. The second part is how we combine HoloViews objects and Bokeh models into a single layout we can display. Once again we can use the renderer to convert the HoloViews object into something we can display with Bokeh:\npython\nrenderer = hv.renderer('bokeh')\nplot = renderer.get_plot(hvobj, doc=curdoc())\nThe plot instance here has a state attribute that represents the actual Bokeh model, which means we can combine it into a Bokeh layout just like any other Bokeh model:\npython\nlayout = layout([[plot.state], [slider, button]], sizing_mode='fixed')\ncurdoc().add_root(layout)", "# Advanced Exercise: Add a histogram to the bokeh layout next to the datashaded plot\n# Hint: Declare the histogram like this: hv.operation.histogram(aggregated, bin_range=(0, 20))\n# then use renderer.get_plot and hist_plot.state and add it to the layout\nwith open('./apps/player_app_with_solutions.py', 'r') as f:\n print(f.read())\n \n# Run using: bokeh serve --show apps/player_app_with_solutions.py", "Onwards\nAlthough the code above is more complex than in previous sections, it's actually providing a huge range of custom types of interactivity, which if implemented in Bokeh alone would have required far more than a notebook cell of code. Hopefully it is clear that arbitrarily complex collections of visualizations and interactive controls can be built from the components provided by HoloViews, allowing you to make simple analyses very easily and making it practical to make even quite complex apps when needed. The user guide, gallery, and reference gallery should have all the information you need to get started with all this power on your own datasets and tasks. Good luck!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/ukesm1-0-ll/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: UKESM1-0-LL\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:15\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. 
Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 
13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
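The cells above are ES-DOC questionnaire stubs: each one points `DOC` at a property ID and then expects a `DOC.set_value(...)` call with one of the listed choices. As a purely illustrative sketch of what a few completed stubs might look like — assuming the `DOC` object created by the notebook's setup cells (not shown here), with values chosen arbitrarily — one could write:

```python
# Illustrative only: hypothetical answers for a few of the stubs above, reusing
# the DOC object that the notebook's setup cells create. The value strings must
# match the listed "Valid Choices" exactly (including their spelling/punctuation).

DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
DOC.set_value("Online")

DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
DOC.set_value("Use that of ocean model")

# BOOLEAN properties take unquoted Python booleans.
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
DOC.set_value(True)

# For properties with cardinality 1.N / 0.N ("PROPERTY VALUE(S)"), repeating
# set_value once per selected choice is assumed here.
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
DOC.set_value("Nitrogen (N)")
DOC.set_value("Iron (Fe)")
```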
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SylvainCorlay/bqplot
examples/Marks/Object Model/Pie.ipynb
apache-2.0
[ "from bqplot import Pie, Figure\nimport numpy as np\nimport string", "Basic Pie Chart", "data = np.random.rand(3)\npie = Pie(sizes=data, display_labels='outside', labels=list(string.ascii_uppercase))\nfig = Figure(marks=[pie], animation_duration=1000)\nfig", "Update Data", "n = np.random.randint(1, 10)\npie.sizes = np.random.rand(n)", "Display Values", "with pie.hold_sync():\n pie.display_values = True\n pie.values_format = '.1f'", "Enable sort", "pie.sort = True", "Set different styles for selected slices", "pie.selected_style = {'opacity': 1, 'stroke': 'white', 'stroke-width': 2}\npie.unselected_style = {'opacity': 0.2}\npie.selected = [1]\n\npie.selected = None", "For more on piechart interactions, see the Mark Interactions notebook\nModify label styling", "pie.label_color = 'Red'\npie.font_size = '20px'\npie.font_weight = 'bold'", "Update pie shape and style", "pie1 = Pie(sizes=np.random.rand(6), inner_radius=0.05)\nfig1 = Figure(marks=[pie1], animation_duration=1000)\nfig1", "Change pie dimensions", "# As of now, the radius sizes are absolute, in pixels\nwith pie1.hold_sync():\n pie1.radius = 150\n pie1.inner_radius = 100\n\n# Angles are in radians, 0 being the top vertical\nwith pie1.hold_sync():\n pie1.start_angle = -90\n pie1.end_angle = 90", "Move the pie around\nx and y attributes control the position of the pie in the figure.\nIf no scales are passed for x and y, they are taken in absolute \nfigure coordinates, between 0 and 1.", "pie1.y = 0.1\npie1.x = 0.6\npie1.radius = 180", "Change slice styles\nPie slice colors cycle through the colors and opacities attribute, as the Lines Mark.", "pie1.stroke = 'brown'\npie1.colors = ['orange', 'darkviolet']\npie1.opacities = [.1, 1]\nfig1", "Represent an additional dimension using Color\nThe Pie allows for its colors to be determined by data, that is passed to the color attribute. \nA ColorScale with the desired color scheme must also be passed.", "from bqplot import ColorScale, ColorAxis\n\nNslices = 7\nsize_data = np.random.rand(Nslices)\ncolor_data = np.random.randn(Nslices)\n\nsc = ColorScale(scheme='Reds')\n# The ColorAxis gives a visual representation of its ColorScale\nax = ColorAxis(scale=sc)\n\npie2 = Pie(sizes=size_data, scales={'color': sc}, color=color_data)\nFigure(marks=[pie2], axes=[ax])", "Position the Pie using custom scales\nPies can be positioned, via the x and y attributes, \nusing either absolute figure scales or custom 'x' or 'y' scales", "from datetime import datetime\nfrom bqplot.traits import convert_to_date\nfrom bqplot import DateScale, LinearScale, Axis\n\navg_precipitation_days = [(d/30., 1-d/30.) for d in [2, 3, 4, 6, 12, 17, 23, 22, 15, 4, 1, 1]]\ntemperatures = [9, 12, 16, 20, 22, 23, 22, 22, 22, 20, 15, 11]\n\ndates = [datetime(2010, k, 1) for k in range(1, 13)]\n\nsc_x = DateScale()\nsc_y = LinearScale()\nax_x = Axis(scale=sc_x, label='Month', tick_format='%b')\nax_y = Axis(scale=sc_y, orientation='vertical', label='Average Temperature')\n\npies = [Pie(sizes=precipit, x=date, y=temp,display_labels='none',\n scales={'x': sc_x, 'y': sc_y}, radius=30., stroke='navy',\n apply_clip=False, colors=['navy', 'navy'], opacities=[1, .1]) \n for precipit, date, temp in zip(avg_precipitation_days, dates, temperatures)]\n\nFigure(title='Kathmandu Precipitation', marks=pies, axes=[ax_x, ax_y],\n padding_x=.05, padding_y=.1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_eco/td2a_eco_sql_correction.ipynb
mit
[ "2A.eco - Python et la logique SQL - correction\nCorrection d'exercices sur SQL.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "SQL permet de créer des tables, de rechercher, d'ajouter, de modifier ou de supprimer des données dans les bases de données. \nUn peu ce que vous ferez bientôt tous les jours. C’est un langage de management de données, pas de nettoyage, d’analyse ou de statistiques avancées.\nLes instructions SQL s'écrivent d'une manière qui ressemble à celle de phrases ordinaires en anglais. Cette ressemblance voulue vise à faciliter l'apprentissage et la lecture. Il est néanmoins important de respecter un ordre pour les différentes instructions.\nDans ce TD, nous allons écrire des commandes en SQL via Python.\nPour plus de précisions sur SQL et les commandes qui existent, rendez-vous là SQL, PRINCIPES DE BASE.\nSe connecter à une base de données\nA la différence des tables qu'on utilise habituellement, la base de données n'est pas visible directement en ouvrant Excel ou un éditeur de texte. Pour avoir une vue de ce que contient la base de données, il est nécessaire d'avoir un autre type de logiciel.\nPour le TD, nous vous recommandans d'installer SQLLiteSpy (disponible à cette adresse SqliteSpy ou sqlite_bro si vous voulez voir à quoi ressemble les données avant de les utiliser avec Python.", "import sqlite3\n# on va se connecter à une base de données SQL vide\n# SQLite stocke la BDD dans un simple fichier\nfilepath = \"./DataBase.db\"\nopen(filepath, 'w').close() #crée un fichier vide\nCreateDataBase = sqlite3.connect(filepath)\n\nQueryCurs = CreateDataBase.cursor()", "La méthode cursor() est un peu particulière : \nIl s'agit d'une sorte de tampon mémoire intermédiaire, destiné à mémoriser temporairement les données en cours de traitement, ainsi que les opérations que vous effectuez sur elles, avant leur transfert définitif dans la base de données. Tant que la méthode .commit() n'aura pas été appelée, aucun ordre ne sera appliqué à la base de données.\n\nA présent que nous sommes connectés à la base de données, on va créer une table qui contient plusieurs variables de format différents\n- ID sera la clé primaire de la base\n- Nom, Rue, Ville, Pays seront du text\n- Prix sera un réel", "# On définit une fonction de création de table\ndef CreateTable(nom_bdd):\n QueryCurs.execute('''CREATE TABLE IF NOT EXISTS ''' + nom_bdd + '''\n (id INTEGER PRIMARY KEY, Name TEXT,City TEXT, Country TEXT, Price REAL)''')\n\n# On définit une fonction qui permet d'ajouter des observations dans la table \ndef AddEntry(nom_bdd, Nom,Ville,Pays,Prix):\n QueryCurs.execute('''INSERT INTO ''' + nom_bdd + ''' \n (Name,City,Country,Price) VALUES (?,?,?,?)''',(Nom,Ville,Pays,Prix))\n \ndef AddEntries(nom_bdd, data):\n \"\"\" data : list with (Name,City,Country,Price) tuples to insert\n \"\"\"\n QueryCurs.executemany('''INSERT INTO ''' + nom_bdd + ''' \n (Name,City,Country,Price) VALUES (?,?,?,?)''',data)\n \n \n### On va créer la table clients\n\nCreateTable('Clients')\n\nAddEntry('Clients','Toto','Munich','Germany',5.2)\nAddEntries('Clients',\n [('Bill','Berlin','Germany',2.3),\n ('Tom','Paris','France',7.8),\n ('Marvin','Miami','USA',15.2),\n ('Anna','Paris','USA',7.8)])\n\n# on va \"commit\" c'est à dire qu'on va valider la transaction. 
\n# > on va envoyer ses modifications locales vers le référentiel central - la base de données SQL\n\nCreateDataBase.commit()", "Voir la table\nPour voir ce qu'il y a dans la table, on utilise un premier Select où on demande à voir toute la table", "QueryCurs.execute('SELECT * FROM Clients')\nValues = QueryCurs.fetchall()\nprint(Values)", "Passer en pandas\nRien de plus simple : plusieurs manières de faire", "import pandas as pd\n# méthode SQL Query\ndf1 = pd.read_sql_query('SELECT * FROM Clients', CreateDataBase)\nprint(\"En utilisant la méthode read_sql_query \\n\", df1.head(), \"\\n\")\n\n\n#méthode DataFrame en utilisant la liste issue de .fetchall()\ndf2 = pd.DataFrame(Values, columns=['ID','Name','City','Country','Price'])\nprint(\"En passant par une DataFrame \\n\", df2.head())", "Comparaison SQL et pandas\nSELECT\nEn SQL, la sélection se fait en utilisant des virgules ou * si on veut sélectionner toutes les colonnes", "# en SQL\nQueryCurs.execute('SELECT ID,City FROM Clients LIMIT 2')\nValues = QueryCurs.fetchall()\nprint(Values)", "En pandas, la sélection de colonnes se fait en donnant une liste", "#sur la table\ndf2[['ID','City']].head(2)", "WHERE\nEn SQL, on utilise WHERE pour filtrer les tables selon certaines conditions", "QueryCurs.execute('SELECT * FROM Clients WHERE City==\"Paris\"')\nprint(QueryCurs.fetchall())", "Avec Pandas, on peut utiliser plusieurs manières de faire : \n - avec un booléen\n - en utilisant la méthode 'query'", "df2[df2['City'] == \"Paris\"]\n\ndf2.query('City == \"Paris\"')", "Pour mettre plusieurs conditions, on utilise : \n- & en Python, AND en SQL\n- | en python, OR en SQL", "QueryCurs.execute('SELECT * FROM Clients WHERE City==\"Paris\" AND Country == \"USA\"')\nprint(QueryCurs.fetchall())\n\ndf2.query('City == \"Paris\" & Country == \"USA\"')\n\ndf2[(df2['City'] == \"Paris\") & (df2['Country'] == \"USA\")]", "GROUP BY\nEn pandas, l'opération GROUP BY de SQL s'effectue avec une méthode similaire : groupby() \ngroupby() sert à regrouper des observations en groupes selon les modalités de certaines variables en appliquant une fonction d'aggrégation sur d'autres variables.", "QueryCurs.execute('SELECT Country, count(*) FROM Clients GROUP BY Country')\nprint(QueryCurs.fetchall())", "Attention, en pandas, la fonction count() ne fait pas la même chose qu'en SQL. 
Count() s'applique à toutes les colonnes et compte toutes les observations non nulles.", "df2.groupby('Country').count()", "Pour réaliser la même chose qu'en SQL, il faut utiliser la méthode size()", "df2.groupby('Country').size()", "On peut aussi appliquer des fonctions plus sophistiquées lors d'un groupby", "QueryCurs.execute('SELECT Country, AVG(Price), count(*) FROM Clients GROUP BY Country')\nprint(QueryCurs.fetchall())", "Avec pandas, on peut appeler les fonctions classiques de numpy", "import numpy as np\ndf2.groupby('Country').agg({'Price': np.mean, 'Country': np.size})", "Ou utiliser des fonctions lambda", "# par exemple calculer le prix moyen et le multiplier par 2\ndf2.groupby('Country')['Price'].apply(lambda x: 2*x.mean())\n\nQueryCurs.execute('SELECT Country, 2*AVG(Price) FROM Clients GROUP BY Country').fetchall()\n\nQueryCurs.execute('SELECT * FROM Clients WHERE Country == \"Germany\"')\nprint(QueryCurs.fetchall())\nQueryCurs.execute('SELECT * FROM Clients WHERE City==\"Berlin\" AND Country == \"Germany\"')\nprint(QueryCurs.fetchall())\nQueryCurs.execute('SELECT * FROM Clients WHERE Price BETWEEN 7 AND 20')\nprint(QueryCurs.fetchall())", "Enregistrer une table SQL sous un autre format\nOn utilise le package csv, l'option 'w' pour 'write'. \nOn crée l'objet \"writer\", qui vient du package csv.\nCet objet a deux méthodes : \n- writerow pour les noms de colonnes : une liste\n- writerows pour les lignes : un ensemble de liste", "data = QueryCurs.execute('SELECT * FROM Clients')\n\nimport csv\n\nwith open('./output.csv', 'w') as file:\n writer = csv.writer(file)\n writer.writerow(['id','Name','City','Country','Price'])\n writer.writerows(data)", "On peut également passer par un DataFrame pandas et utiliser .to_csv()", "QueryCurs.execute('''DROP TABLE Clients''')\n#QueryCurs.close()", "Exercice\nDans cet exercice, nous allons manipuler les tables de la base de données World. \nAvant tout, connectez vous à la base de donénes en utilisant sqlite3 et connect\nLien vers la base de données : World.db3 ou \nfrom ensae_teaching_cs.data import simple_database\nname = simple_database()", "#Se connecter à la base de données WORLD\nCreateDataBase = sqlite3.connect(\"./World.db3\")\nQueryCurs = CreateDataBase.cursor()", "Familiarisez vous avec la base de données : quelles sont les tables ? quelles sont les variables de ces tables ? 
\n - utilisez la fonction PRAGMA pour obtenir des informations sur les tables", "# pour obtenir la liste des tables dans la base de données\ntables = QueryCurs.execute(\"SELECT name FROM sqlite_master WHERE type='table';\").fetchall()\n\n# on veut voir les colonnes de chaque table ainsi que la première ligne \nfor table in tables : \n print(\"Table :\", table[0])\n schema = QueryCurs.execute(\"PRAGMA table_info({})\".format(table[0])).fetchall()\n print(\"Colonnes\", [\"{}\".format(x[1]) for x in schema])\n print(\"1ère ligne\", QueryCurs.execute('SELECT * FROM {} LIMIT 1'.format(table[0])).fetchall(), \"\\n\")", "Question 1\n\nQuels sont les 10 pays qui ont le plus de langues ?\nQuelle langue est présente dans le plus de pays ?", "QueryCurs.execute(\"\"\"SELECT CountryCode, COUNT(*) as NB \n FROM CountryLanguage \n GROUP BY CountryCode \n ORDER BY NB DESC\n LIMIT 10\"\"\").fetchall()\n\nQueryCurs.execute('''SELECT Language, COUNT(*) as NB \n FROM CountryLanguage \n GROUP BY Language \n ORDER BY -NB\n LIMIT 1''').fetchall()", "Question 2\n\nQuelles sont les différentes formes de gouvernements dans les pays du monde ?\nQuels sont les 3 gouvernements où la population est la plus importante ?", "QueryCurs.execute('''SELECT DISTINCT GovernmentForm FROM Country''').fetchall()\n\nQueryCurs.execute('''SELECT GovernmentForm, SUM(Population) as Pop_Totale_Gouv\n FROM Country\n GROUP BY GovernmentForm\n ORDER BY Pop_Totale_Gouv DESC\n LIMIT 3\n ''').fetchall()", "Question 3\n\n\nCombien de pays ont Elisabeth II à la tête de leur gouvernement ?\n\n\nQuelle proporition des sujets de Sa Majesté ne parlent pas anglais ?\n\n78 % ou 83% ?", "QueryCurs.execute('''SELECT HeadOfState, Count(*)\nFROM Country\nWHERE HeadOfState = \"Elisabeth II\" ''').fetchall()\n\n# la population totale \npopulation_queen_elisabeth = QueryCurs.execute('''SELECT HeadOfState, SUM(Population)\nFROM Country\nWHERE HeadOfState = \"Elisabeth II\"''').fetchall()\n\n# La part de la population parlant anglais\nPart_parlant_anglais= QueryCurs.execute('''SELECT Language, SUM(Percentage*0.01*Population)\nFROM \nCountry\nLEFT JOIN \nCountryLanguage \nON Country.Code = CountryLanguage.CountryCode\nWHERE HeadOfState = \"Elisabeth II\"\nAND Language = \"English\"\n''').fetchall()\n\n# La réponse est 78% d'après ces données\nPart_parlant_anglais[0][1]/population_queen_elisabeth[0][1]\n\n## on trouve 83% si on ne fait pas attention au fait que dans certaines zones, 0% de la population parle anglais\n## La population totale n'est alors pas la bonne, comme dans cet exemple\n\nQueryCurs.execute('''SELECT Language,\nSUM(Population_pays*0.01*Percentage) as Part_parlant_anglais, SUM(Population_pays) as Population_totale\nFROM (SELECT Language, Code, Percentage, SUM(Population) as Population_pays\nFROM \n Country\nLEFT JOIN \n CountryLanguage \nON Country.Code = CountryLanguage.CountryCode\nWHERE HeadOfState = \"Elisabeth II\" AND Language == \"English\"\nGROUP BY Code)''').fetchall()", "Conclusion: il vaut mieux écrire deux requêtes simples et lisibles pour obtenir le bon résultat, plutôt qu'une requête qui fait tout en une seule passe mais dont on va devoir vérifier la correction longuement...\nQuestion 4 - passons à Pandas\nCréer une DataFrame qui contient les informations suivantes par pays :\n- le nom\n- le code du pays\n- le nombre de langues parlées\n- le nombre de langues officielles\n- la population\n- le GNP\n- l'espérance de vie\nIndice : utiliser la commande pd.read_sql_query\nQue dit la matrice de corrélation de ces variables ?", "df = 
pd.read_sql_query('''SELECT Code, Name, Population, GNP , LifeExpectancy,\n COUNT(*) as Nb_langues_parlees, SUM(IsOfficial) as Nb_langues_officielles\n FROM Country\n INNER JOIN CountryLanguage ON Country.Code = CountryLanguage.CountryCode\n GROUP BY Country.Code''',\n CreateDataBase)\ndf.head()\n\ndf.corr()" ]
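One gap in the SQL record above: the markdown states that one can also go through a pandas DataFrame and use `.to_csv()`, but the following cell only drops the table. A minimal sketch of that pandas route — assuming the `DataBase.db` file and the `Clients` table created earlier still exist, and using a hypothetical output filename — would be:

```python
import sqlite3
import pandas as pd

# Read the whole Clients table into a DataFrame, then let pandas write the CSV.
conn = sqlite3.connect("./DataBase.db")
df = pd.read_sql_query("SELECT * FROM Clients", conn)

# index=False keeps the DataFrame index out of the file,
# matching the columns written by the csv.writer approach.
df.to_csv("./output_pandas.csv", index=False)
conn.close()
```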
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tatyana-perlova/py-taxis
examples/Full_walkthrough.ipynb
gpl-3.0
[ "Ipython notebook magic", "%matplotlib inline\n%load_ext autoreload\n%autoreload", "Python libraries", "import pylab\nimport sys\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn\nimport re\nimport time\nimport datetime\nimport pandas as pd\nimport random\nimport trackpy as tp\nfrom sklearn.externals import joblib\n\nimport trackpy.diag\ntrackpy.diag.performance_report()", "My libraries", "sys.path.append('/home/perlusha/Work/2017/2017.05.30-2D_tracking_library_4Roshni/lib/')\nimport image #contains image/video analysis functions\nreload(image)\n\nimport traj_proc_2_1 #library for calculating swimming statistics\nreload(traj_proc_2_1)\n\nimport plot #plots\nreload(plot)", "Define local constants", "pixsize = 0.26 #size of the image pixel in microns\nmnsize = 5 #minimal size of bacteria in pixels\nmxsize = 250 #maximal size of bacteria in pixels\nimdim = 2048 #size of the image in pixels\nfps = 12 #video framerate\nmax_search_range = 45/(fps*pixsize) #maximum displacement of the bacteria between consequent frames in pixels\nmin_search_range = 10\nmin_len = 20 #length cut-off for trajectories in frames\n\nregexps = {'folder': '.+(?=\\/)',\n 'filename': '(?<=\\/)[^\\/]+(?=.avi)',\n 'date_time': '(?<=_)[0-9, -]+(?=-000)'}\ntime_format = '%Y-%m-%d-%H%M%S'\n\nsuffix_traj = 'traj&params'\nsuffix_traj_proc = 'runs&tumbles'\nsuffix_trace = 'trace'\n\naggregations = {'vel': lambda x: np.nanpercentile(x, 95),\n 'vel_angle': np.nanmean,\n 'frame': 'count'}\nparams4filtering = ['vel', 'vel_angle']", "Plotting parameters", "seaborn.set_style('white')\n\ncurrent_palette = seaborn.color_palette(\"Set2\", 10)\nseaborn.palplot(current_palette)", "Detecting bacteria and finding trajectories", "movie = '../data/fc2_save_2016-11-17-151445-0000.avi'", "Get basic information about the movie.", "file_info = {}\nfor key in regexps.keys():\n file_info[key] = re.findall(re.compile(regexps[key]), movie)[0]\nfile_info['date_time'] = datetime.datetime(*time.strptime(file_info['date_time'], time_format)[:6])\nfile_info['date_time'] = pd.to_datetime(file_info['date_time'])\n\nfile_info", "First let's find background frame and make sure that default settings make sense. \nChanging alpha will change the weight of each individual frame in the background accumulation procedure and therefore will change brightness of the background frame and affect the number of bacteria detected", "background, frame = image.test_detection(movie, mnsize = mnsize, mxsize = mxsize, alpha = 0.005, show = True)", "Now we are going to detect bacteria in every frame of the movie. This function also plots number of bacteria vs frame number for testing purposes.", "coords, img = image.find_cells_video(movie, background, maxframe = None, mnsize = mnsize, mxsize = mxsize, write = False)", "Coords is a dataframe with coordinates, lengths and angles of every feature found in every frame of the movie. That is the input format for the next stages of analysis - linking, calculating parameters etc, although only 'frame', 'x' and 'y' columns are necessary.", "coords.head()", "Coordinates are linked into trajectories using linking function from TrackPy library.", "traj = tp.link_df(coords, search_range = max_search_range, adaptive_stop = min_search_range, adaptive_step=0.98, memory = 1)", "In addition to headers from 'coords' dataframe output of linking contains particle #.", "traj.head()", "Now let's calculate parameters of the trajectories. 
In addition to coordinates we now have velocities and accelerations.", "traj_proc_2_1.calc_params(traj, wind = 1, fps = fps, pix_size = pixsize)\n\ntraj.loc[:, 'date_time'] = file_info['date_time']\ntraj.head()", "Save processed file and add it to the file database.", "file_info['filename_traj'] = '{0}/{1}_{2}.csv'.format(file_info['folder'], file_info['filename'], suffix_traj)\ntraj.to_csv(file_info['filename_traj'])", "Filtering\nFor further analysis all trajectories shorter than 20 frames should be removed so before doing that let's look at the distribution of trajectory lengths.", "plot.traj_len_dist(traj, bw = 15, cutoffs = [20, 50, 100])", "Remove trajectories shorter than min_len. Needs to be set to at least 20 for tumble bias assignment.", "traj = tp.filter_stubs(traj, min_len)", "For each trajectory calculate statistics according to 'aggregations'. Essentially for each trajectories we calculate a set of numbers that are later used for filtering.", "traj_stats = traj.groupby([u'particle'], as_index = False).agg(aggregations)\n\ntraj_stats.head()", "Find KDE, MADs and center of the main cluster in the new parameter space of trajectories.", "(x0, y0), (MADx, MADy), (Z, extent) = traj_proc_2_1.find_MADs_KDE(traj_stats[params4filtering[0]], traj_stats[params4filtering[1]])\ndistances = traj_proc_2_1.assign_dist(traj_stats, params = params4filtering, center = (x0, y0), R = (MADx, MADy))\nlayers = np.arange(max(distances), stop = 2, step = -2)\nlayers.sort()\n\n\ntraj_stats.head()", "Plot resulsing KDE along with ellipses corresponding to certain distance from the center and example trajectories in each layer.", "plot.plot_KDE(Z, extent, N_traj = len(traj_stats), tick_step = 1)\nplot.plot_ellipses(plt.gca(), (MADx, MADy), (x0, y0), layers, colors = current_palette)\nplt.plot(x0, y0, 'o', color = 'Red', markersize = 7, label = 'center')\ni = 0\ntraj2plot = []\nfor layer in np.sort(layers):\n\n particles = traj_stats[(traj_stats.distance <= layer)&(traj_stats.distance > layer - 2)&(traj_stats.frame > 50)].particle \n particles = random.sample(particles, min(100, len(particles)))\n plt.plot(traj_stats[traj_stats.particle.isin(particles)].vel, \n traj_stats[traj_stats.particle.isin(particles)].vel_angle, 'o', color = current_palette[i],\n label = '')\n traj2plot.append({'layer': layer, 'particles': particles})\n i += 1\nplt.xlabel(r'95th percentil of the velocity, $\\mu m/s$')\nplt.ylabel(r'Average angular velocity, rad/s')\nplt.legend(loc='center right', bbox_to_anchor=(1.7, 0.9))\nplt.gca().set_aspect(1.8*np.diff(plt.gca().get_xlim())[0]/np.diff(plt.gca().get_ylim())[0])", "Now plot trajectories from each layer.", "i = 0\nplt.figure(figsize=(14, 10))\n\nfor i in range(len(traj2plot)):\n particles = traj2plot[i]['particles']\n plt.subplot(2,3,i+1)\n plot.plot_traj(traj[traj.particle.isin(particles)].reset_index(drop = True), imdim)\n plt.title('MAD {0}'.format(traj2plot[i]['layer']))\n plt.gca().set_aspect(1.0)\n for spine in plt.gca().spines.values():\n spine.set_edgecolor(current_palette[i])\n spine.set_linewidth(3)", "Remove trajectories further than certain distance away from the center.", "particles_filt = traj_stats[traj_stats.distance <= 3].particle.unique()\ntraj_filtered = traj[traj.particle.isin(particles_filt)].copy(deep = True)", "Run-tumble assignment\nLoad model for run-tumble detection. 
You can use trained model or train it on the current dataset as well.", "HMM_model = joblib.load('../lib/HMM_model_10:29.pkl')", "Assign runs and tumbles.", "traj_filtered = traj_filtered[~np.isnan(traj_filtered.acc_angle)]\ntraj_filtered.sort_values(by = ['particle', 'frame'], inplace=True)\n\n_, traj_filtered = traj_proc_2_1.find_tumbles(traj_filtered,\n model = HMM_model,\n params = ['vel_norm', 'acc_norm', 'acc_angle'], \n n_components = 3, \n threshold = 1,\n covariance_type = 'diag',\n model_type = 'HMM')", "Plot resulting distribution of parameters in run and tumble states.", "colors = ['light red', 'cerulean']\npalette = seaborn.xkcd_palette(colors)\nplot.dist_by_state(traj_filtered, 'tbias_HMM', ('vel', 'acc', 'acc_angle'), palette = palette)\n", "Save analyzed trajectories to file.", "file_info['filename_proc'] = '{0}/{1}_{2}.csv'.format(file_info['folder'], file_info['filename'], suffix_traj_proc)\ntraj_filtered.to_csv(file_info['filename_proc'])", "Time trace\nCalculate and save time traces.", "traj2save = tp.filter_stubs(tracks = traj_filtered, threshold = 50)\ntrace = traj2save.groupby(['frame', 'date_time'], \n as_index = False)[['vel_angle', 'vel_run', \n 'tbias_HMM', 'vel']].agg(['mean', 'std', 'count'])\ntrace.reset_index(inplace = True)\n\nfile_info['filename_trace'] = '{0}/{1}_{2}.csv'.format(file_info['folder'], file_info['filename'], suffix_trace)\ntrace.to_csv(file_info['filename_trace'])", "Plot time traces", "f, ax = plt.subplots(1, 1, figsize = (7, 3), sharex=True)\n\n_ = plot.plot_trace(ax, wind = 5, shift = 5, data = trace[['frame', 'tbias_HMM']], fps = fps, \n color = palette[0], tbias = True)\nplt.xlabel('Time, s')\nplt.ylabel('Tumble bias')" ]
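The filtering step in the tracking record keeps trajectories whose `distance` from the KDE mode is at most 3, where the distance is computed by `traj_proc_2_1.assign_dist` with the MADs as axis scales. The exact formula lives inside that library and is not shown, so the sketch below is only a plausible reconstruction (an assumption): an elliptical, MAD-scaled distance in the (velocity, angular-velocity) plane, demonstrated on invented per-trajectory statistics.

```python
import numpy as np
import pandas as pd

def mad(x):
    """Median absolute deviation of a 1-D array."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

def elliptical_distance(x, y, center, scales):
    """MAD-scaled distance from `center`; 'distance <= 3' then means
    'within roughly 3 MADs of the main cluster, both axes combined'."""
    (x0, y0), (sx, sy) = center, scales
    return np.sqrt(((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2)

# Toy stand-in for traj_stats (values invented for illustration).
stats = pd.DataFrame({'vel': [22.0, 25.0, 3.0, 60.0],
                      'vel_angle': [1.1, 0.9, 4.0, 0.5]})
center = (stats['vel'].median(), stats['vel_angle'].median())
scales = (mad(stats['vel']), mad(stats['vel_angle']))
stats['distance'] = elliptical_distance(stats['vel'], stats['vel_angle'], center, scales)

# Analogue of the `traj_stats[traj_stats.distance <= 3]` filter used above.
print(stats[stats['distance'] <= 3])
```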
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
giacomov/3ML
docs/notebooks/Point_source_plotting.ipynb
bsd-3-clause
[ "Point source plotting basics\nIn 3ML, we distinguish between data and model plotting. Data plots contian real data points and the over-plotted model is (sometimes) folded through an instrument response. Therefore, the x-axis is not always in the same units across instruments if there is energy dispersion. \nHowever, all instuments see the same model and a multi-wavelength fit can be viewed in model space without complication. 3ML uses one interface to plot both MLE and Bayesian fitted models. To demonstrate we will use toy data simulated from a powerlaw and two gaussians for MLE fits and an exponentially cutoff power law with one gaussian for Bayesian fits.\nFirst we load the analysis results:", "%matplotlib inline\njtplot.style(context=\"talk\", fscale=1, ticks=True, grid=False)\n\n\nimport matplotlib.pyplot as plt\n\nplt.style.use(\"mike\")\nimport numpy as np\n\nfrom threeML import *\nfrom threeML.io.package_data import get_path_of_data_file\n\n#mle1 = load_analysis_results(get_path_of_data_file(\"datasets/toy_xy_mle1.fits\"))\nbayes1 = load_analysis_results(get_path_of_data_file(\"datasets/toy_xy_bayes2.fits\"))", "Plotting a single analysis result\nThe easiest way to plot is to call plot_point_source_spectra. By default, it plots in photon space with a range of 10-40000 keV evaluated at 100 logrithmic points:", "_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3)", "Flux and energy units\nWe use astropy units to specify both the flux and energy units. \n* The plotting routine understands photon, energy ($F_{\\nu}$) and $\\nu F_{\n\\nu}$ flux units;\n\n\nenergy units can be energy, frequency, or wavelength\n\n\na custom range can be applied.\n\n\nchanging flux units", "_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3,flux_unit='1/(m2 s MeV)')\n_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3,flux_unit='erg/(cm2 day keV)')\n_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3,flux_unit='keV2/(cm2 s keV)')", "changing energy units", "_ = plot_point_source_spectra(mle1,\n ene_min=.001,\n ene_max=1E3,\n energy_unit='MeV')\n\n# energy ranges can also be specified in units\n_ = plot_point_source_spectra(mle1,\n ene_min=1*astropy_units.keV,\n ene_max=1*astropy_units.MeV)\n\n_ = plot_point_source_spectra(mle1,\n ene_min=1E3*astropy_units.Hz,\n ene_max=1E7*astropy_units.Hz)\n\n_ = plot_point_source_spectra(mle1,\n ene_min=1E1*astropy_units.nm,\n ene_max=1E3*astropy_units.nm,\n xscale='linear') # plotting with a linear scale\n", "Plotting components\nSometimes it is interesting to see the components in a composite model. We can specify the use_components switch. Here we will use Bayesian results. Note that all features work with MLE of Bayesian results.", "_ = plot_point_source_spectra(bayes1,\n ene_min=1,\n ene_max=1E3,\n use_components=True\n )\n\n_=plt.ylim(bottom=1)", "Notice that the duplicated components have the subscripts n1 and n2. 
If we want to specify which components to plot, we must use these subscripts.", "_ = plot_point_source_spectra(mle1,\n flux_unit='erg/(cm2 s keV)',\n ene_min=1,\n ene_max=1E3,\n use_components=True,\n components_to_use=['Gaussian_n1','Gaussian_n2'])\n\n_=plt.ylim(bottom=1E-20)", "If we want to see the total model with the components, just add total to the components list.\nAdditionally, we can change the confidence interval for the contours from the default of 1$\\sigma$ (0.68) to 2$\\sigma$ (0.95).", "_ = plot_point_source_spectra(bayes1,\n flux_unit='erg/(cm2 s keV)',\n ene_min=1,\n ene_max=1E3,\n use_components=True,\n components_to_use=['total','Gaussian'],\n confidence_level=0.95)\n \n\n\n_=plt.ylim(bottom=1E-9)\n\n_ = plot_point_source_spectra(mle1,\n flux_unit='erg/(cm2 s keV)',\n ene_min=1,\n ene_max=1E3,\n use_components=True,\n fit_cmap='jet', # specify a color map\n contour_colors='k', # specify a color for all contours\n components_to_use=['total','Gaussian_n2','Gaussian_n1'])\n \n\n\n_=plt.ylim(bottom=1E-16)", "Additional features\nExplore the docstring to see all the available options. Default configurations can be altered in the 3ML config file.\n\nUse asymmetric errors and alter the default color map", "threeML_config['model plot']['point source plot']['fit cmap'] = 'plasma'\n_ = plot_point_source_spectra(mle1, equal_tailed=False)", "turn of contours and the legend and increase the number of points plotted", "_ = plot_point_source_spectra(mle1, show_legend=False, show_contours=False, num_ene=500)", "colors or color maps can be specfied", "_ = plot_point_source_spectra(mle1, fit_colors='orange', contour_colors='blue')", "Further modifications to plotting style, legend style, etc. can be modified either in the 3ML configuration:", "threeML_config['model plot']['point source plot']", "or by directly passing dictionary arguments to the the plot command. Examine the docstring for more details!\nPlotting multiple results\nAny number of results can be plotted together. Simply provide them as arguments. You can mix and match MLE and Bayesian results as well as plotting their components.", "_ = plot_point_source_spectra(mle1, bayes1,ene_min=1)\n\n_=plt.ylim(bottom=1E-1)", "Specify particular colors for each analysis and broaden the contours", "_ = plot_point_source_spectra(mle1,\n bayes1,\n ene_min=1.,\n confidence_level=.95,\n equal_tailed=False,\n fit_colors=['orange','green'],\n contour_colors='blue')\n_ =plt.ylim(bottom=1E-1)", "As with single results, we can choose to plot the components for all the sources.", "_ = plot_point_source_spectra(mle1,\n bayes1,\n ene_min=1.,\n use_components=True)\n_=plt.ylim(bottom=1E-4)" ]
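The unit strings passed above ('keV2/(cm2 s keV)', the Hz and nm energy ranges, and so on) are parsed with astropy units under the hood. As a small standalone check, not taken from the original notebook, the same conversions can be reproduced directly with astropy:

```python
import astropy.units as u

# Convert a nu-F-nu style flux from keV2 / (cm2 s keV) to erg / (cm2 s keV):
# 1 keV is ~1.602e-9 erg, so the two flux_unit strings above differ only by that factor.
f = 1.0 * u.keV**2 / (u.cm**2 * u.s * u.keV)
print(f.to(u.erg / (u.cm**2 * u.s * u.keV)))

# Energy-axis limits given in frequency or wavelength map onto energies via the
# spectral equivalencies, which is what lets ene_min/ene_max accept Hz or nm quantities.
print((1e7 * u.Hz).to(u.keV, equivalencies=u.spectral()))
print((10 * u.nm).to(u.keV, equivalencies=u.spectral()))
```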
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cdt15/lingam
examples/CausalEffect(LightGBM).ipynb
mit
[ "Causal Effect for Non-linear Regression\nImport and settings\nIn this example, we need to import numpy, pandas, and graphviz in addition to lingam.", "import numpy as np\nimport pandas as pd\nimport graphviz\nimport lingam\n\nprint([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])\n\nnp.set_printoptions(precision=3, suppress=True)\nnp.random.seed(0)", "Utility function\nWe define a utility function to draw the directed acyclic graph.", "def make_graph(adjacency_matrix, labels=None):\n idx = np.abs(adjacency_matrix) > 0.01\n dirs = np.where(idx)\n d = graphviz.Digraph(engine='dot')\n names = labels if labels else [f'x{i}' for i in range(len(adjacency_matrix))]\n for to, from_, coef in zip(dirs[0], dirs[1], adjacency_matrix[idx]):\n d.edge(names[from_], names[to], label=f'{coef:.2f}')\n return d", "Test data\nWe use 'Auto MPG Data Set' (http://archive.ics.uci.edu/ml/datasets/Auto+MPG)", "X = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data-original',\n delim_whitespace=True, header=None,\n names = ['mpg', 'cylinders', 'displacement',\n 'horsepower', 'weight', 'acceleration',\n 'model year', 'origin', 'car name'])\nX.dropna(inplace=True)\nX.drop(['model year', 'origin', 'car name'], axis=1, inplace=True)\nprint(X.shape)\nX.head()", "Causal Discovery\nTo run causal discovery, we create a DirectLiNGAM object and call the fit method.", "model = lingam.DirectLiNGAM()\nmodel.fit(X)\nlabels = [f'{i}. {col}' for i, col in enumerate(X.columns)]\nmake_graph(model.adjacency_matrix_, labels)", "Prediction Model\nWe create the linear regression model.", "import lightgbm as lgb\n\ntarget = 0 # mpg\nfeatures = [i for i in range(X.shape[1]) if i != target]\nreg = lgb.LGBMRegressor(random_state=0)\nreg.fit(X.iloc[:, features], X.iloc[:, target])", "Identification of Feature with Greatest Causal Influence on Prediction\nTo identify of the feature having the greatest intervention effect on the prediction, we create a CausalEffect object and call the estimate_effects_on_prediction method.", "ce = lingam.CausalEffect(model)\neffects = ce.estimate_effects_on_prediction(X, target, reg)\n\ndf_effects = pd.DataFrame()\ndf_effects['feature'] = X.columns\ndf_effects['effect_plus'] = effects[:, 0]\ndf_effects['effect_minus'] = effects[:, 1]\ndf_effects\n\nmax_index = np.unravel_index(np.argmax(effects), effects.shape)\nprint(X.columns[max_index[0]])", "Estimation of Optimal Intervention\nestimate_optimal_intervention method of CausalEffect is available only for linear regression models." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
WNoxchi/Kaukasos
FADL2/darknet_imagenet_sample.ipynb
mit
[ "Darknet53 ImageNet Sampleset\nImport", "%matplotlib inline\n%reload_ext autoreload\n%autoreload 2\n\nfrom fastai.conv_learner import *\nfrom fastai.models import darknet\n\nfrom pathlib import Path\n\nPATH = Path('data/imagenet')\nPATH_TRAIN = PATH/'train'", "Setup", "def reset_valset(path):\n path_val = path/'valid'\n path_trn = path/'train'\n \n if not os.path.exists(path_val):\n print('No validation directory to reset.')\n return\n \n for folder in path_val.iterdir():\n for f in folder.iterdir():\n os.rename(f, path_trn / str(f).split('valid/')[-1])\n\ndef create_valset(path, p=0.15, seed=0):\n np.random.seed(seed=seed)\n \n path_val = path/'valid'\n path_trn = path/'train'\n reset_valset(path)\n \n # move random p-percent selection from train/ to valid/\n for folder in path_trn.iterdir():\n os.makedirs(path_val/str(folder).split('train/')[-1], exist_ok=True)\n flist = list(folder.iterdir())\n n_move = int(np.round(len(flist) * p))\n fmoves = np.random.choice(flist, n_move, replace=False)\n \n for f in fmoves:\n os.rename(f, path_val / str(f).split('train/')[-1])\n\ndef count_files(path):\n count = 0\n for folder in path.iterdir():\n count += len(list(folder.glob('*')))\n return count\n\ncreate_valset(PATH, p=0.2)\ncount_files(PATH_TRAIN), count_files(PATH/'valid')", "Weight Decay of 1e-5 used bc it's a near what JHoward uses in CIFAR-10 Darknet notebook. See Fast.ai DL1 bit on Weight Decay in Lesson 5 - 2:12:01.\nI may experiment with cycle length and cycle split. lr, clr_div, and cut_div values determined by looking at results in the darknet_test.ipynb notebook, and guessing.", "bs = 32\nsz = 256\nwd = 1e-5\n\ndarknet53 = darknet.darknet_53()\n\ntfms = tfms_from_stats(imagenet_stats, sz, aug_tfms=transforms_side_on, \n max_zoom=1.05, pad=sz//8)\nmodel_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms)\n\nlearner = ConvLearner.from_model_data(darknet53, model_data, crit=F.cross_entropy)\n\nlearner.lr_find()\nlearner.sched.plot()", "Train", "learner.fit(lrs=1e-2, n_cycle=1, wds=wd, cycle_len=3, use_clr=(40, 10))\n\n# learner.save('darknet53_imagenet_sample_00')\nlearner.load('darknet53_imagenet_sample_00')\n\nlearner.fit(lrs=1e-2, n_cycle=1, wds=wd, cycle_len=3, use_clr=(40, 10))\n\nlearner.fit(lrs=1e-2, n_cycle=1, wds=wd, cycle_len=10, use_clr=(40, 10))\n\nlearner.save('darknet53_imagenet_sample_01')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sonyahanson/assaytools
examples/ipynbs/models/binding-assay-modeling/Just Modeling - Bayes with Modeled data.ipynb
lgpl-2.1
[ "In this notebook we see how well we can reproduce Kd from simulated experimental data.\nIn this notebook we play with data generated in the 'Just Modeling - Two Component Binding' notebook, and see how our bayesian models do at reproducing the Kd.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport pymc\nimport seaborn as sns\n\n%pylab inline", "The setup.\nWe use the same setup here as we did in 'Just Modeling - Two Component Binding'.\nExperimentally we won't know the Kd, but we know the P, PL, and L concentrations.", "Kd = 2e-9 # M\n\nPtot = 1e-9 # M\n\nLtot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M\n\ndef two_component_binding(Kd, Ptot, Ltot):\n \"\"\"\n Parameters\n ----------\n Kd : float\n Dissociation constant\n Ptot : float\n Total protein concentration\n Ltot : float\n Total ligand concentration\n \n Returns\n -------\n P : float\n Free protein concentration\n L : float\n Free ligand concentration\n PL : float\n Complex concentration\n \"\"\"\n \n PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM)\n P = Ptot - PL; # free protein concentration in sample cell after n injections (uM) \n L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM) \n return [P, L, PL]\n\n[L, P, PL] = two_component_binding(Kd, Ptot, Ltot)\n\n# y will be complex concentration\n# x will be total ligand concentration\nplt.semilogx(Ltot,PL, 'o')\nplt.xlabel('$[L]_{tot}$ / M')\nplt.ylabel('$[PL]$ / M')\nplt.ylim(0,1.3e-9)\nplt.axhline(Ptot,color='0.75',linestyle='--',label='$[P]_{tot}$')\nplt.legend();", "Now make this a fluorescence experiment.", "# Making max 400 relative fluorescence units, and scaling all of PL to that\nnpoints = len(Ltot)\nsigma = 10.0 # size of noise\nF_i = (400/1e-9)*PL + sigma * np.random.randn(npoints)\nPstated = np.ones([npoints],np.float64)*Ptot\nLstated = Ltot\n\n# Uncertainties in protein and ligand concentrations.\ndPstated = 0.10 * Pstated # protein concentration uncertainty\ndLstated = 0.08 * Lstated # ligand concentraiton uncertainty (due to gravimetric preparation and HP D300 dispensing)", "The test.", "# Define our two-component binding system again.\ndef two_component_binding(DeltaG, P, L):\n Kd = np.exp(DeltaG)\n PL = 0.5 * ((P + L + Kd) - np.sqrt((P + L + Kd)**2 - 4*P*L)); # complex concentration (M) \n P = P - PL; # free protein concentration in sample cell after n injections (M) \n L = L - PL; # free ligand concentration in sample cell after n injections (M) \n return [P, L, PL]\n\n# Create a pymc model\ndef make_model(Pstated, dPstated, Lstated, dLstated, Fobs_i):\n N = len(Lstated)\n # Prior on binding free energies.\n DeltaG = pymc.Uniform('DeltaG', lower=-40, upper=+40, value=0.0) # binding free energy (kT), uniform over huge range\n \n # Priors on true concentrations of protein and ligand.\n Ptrue = pymc.Normal('Ptrue', mu=Pstated, tau=dPstated**(-2)) # protein concentration (M)\n Ltrue = pymc.Normal('Ltrue', mu=Lstated, tau=dLstated**(-2)) # ligand concentration (M)\n Ltrue_control = pymc.Normal('Ltrue_control', mu=Lstated, tau=dLstated**(-2)) # ligand concentration (M)\n\n # Priors on fluorescence intensities of complexes (later divided by a factor of Pstated for scale).\n Fmax = Fobs_i.max()\n F_background = pymc.Uniform('F_background', lower=0.0, upper=Fmax) # background \n F_PL = pymc.Uniform('F_PL', lower=0.0, upper=Fmax/min(Pstated.max(),Lstated.max())) # complex fluorescence\n F_L = pymc.Uniform('F_L', lower=0.0, upper=Fmax/Lstated.max()) 
# ligand fluorescence\n\n # Unknown experimental measurement error.\n log_sigma = pymc.Uniform('log_sigma', lower=-10, upper=3, value=0.0) \n @pymc.deterministic\n def precision(log_sigma=log_sigma): # measurement precision\n return np.exp(-2*log_sigma)\n\n # Fluorescence model.\n @pymc.deterministic\n def Fmodel(F_background=F_background, F_PL=F_PL, F_L=F_L, Ptrue=Ptrue, Ltrue=Ltrue, DeltaG=DeltaG):\n Fmodel_i = np.zeros([N])\n for i in range(N):\n [P, L, PL] = two_component_binding(DeltaG, Ptrue[i], Ltrue[i])\n Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background\n return Fmodel_i\n \n # Experimental error on fluorescence observations.\n Fobs_model = pymc.Normal('Fobs_i', mu=Fmodel, tau=precision, size=[N], observed=True, value=Fobs_i) # observed data\n \n # Construct dictionary of model variables.\n pymc_model = { 'Ptrue' : Ptrue, \n 'Ltrue' : Ltrue, \n 'Ltrue_control' : Ltrue_control, \n 'log_sigma' : log_sigma, \n 'precision' : precision, \n 'F_PL' : F_PL, \n 'F_L' : F_L, \n 'F_background' : F_background,\n 'Fmodel_i' : Fmodel,\n 'Fobs_model' : Fobs_model, \n 'DeltaG' : DeltaG # binding free energy\n }\n return pymc_model\n\n# Build model.\npymc_model = pymc.Model(make_model(Pstated, dPstated, Lstated, dLstated, F_i))\n\n# Sample with MCMC\nmcmc = pymc.MCMC(pymc_model, db='ram', name='Sampler', verbose=True)\nmcmc.sample(iter=100000, burn=10000, thin=50, progress_bar=False)\n\n# Plot trace of DeltaG.\nrcParams['figure.figsize'] = [15, 3]\nplot(mcmc.DeltaG.trace(), 'o');\nxlabel('MCMC sample');\nylabel('$\\Delta G$ / $k_B T$');\n\n# Plot trace of true protein concentration.\nrcParams['figure.figsize'] = [15, 3]\nplot(mcmc.Ptrue.trace()*1e6, 'o');\nxlabel('MCMC sample');\nylabel('$[P]_{tot}$ / $\\mu$M');\n\n# Plot trace of true protein concentration.\nrcParams['figure.figsize'] = [15, 3]\nplot(mcmc.Ltrue.trace()*1e6, 'o');\nxlabel('MCMC sample');\nylabel('$[L]_{tot}$ ($\\mu$M)');\nprint mcmc.Ltrue.trace().min()\n\n# Plot histogram of DeltaG.\nrcParams['figure.figsize'] = [15, 3]\nhist(mcmc.DeltaG.trace()[-1000:], 40);\nxlabel('$\\Delta G$ / $k_B T$');\nylabel('$P(\\Delta G)$');\n\n# Plot trace of intrinsic fluorescence parameters.\nrcParams['figure.figsize'] = [15, 3]\nsemilogy(mcmc.F_PL.trace(), 'o', mcmc.F_L.trace(), 'o', mcmc.F_background.trace(), 'o');\nlegend(['complex fluorescence', 'ligand fluorescence', 'background fluorescence']);\nxlabel('MCMC sample');\nylabel('relative fluorescence intensity');\n\n# Plot model fit.\nrcParams['figure.figsize'] = [15, 3]\nfigure = pyplot.gcf() # get current figure\nFmodels = mcmc.Fmodel_i.trace()\nclf();\nhold(True)\nfor Fmodel in Fmodels:\n semilogx(Lstated, Fmodel, 'k-')\nsemilogx(Lstated, F_i, 'ro')\nhold(False)\nxlabel('$[L]_{tot}$ / M');\nylabel('fluorescence units');", "Did it work?", "nlast = 500 # number of final samples to analyze\nDeltaG = mcmc.DeltaG.trace()[-nlast:].mean()\ndDeltaG = mcmc.DeltaG.trace()[-nlast:].std()\nprint \"DeltaG: %.3f +- %.3f kT\" % (DeltaG, dDeltaG)\n\nKd_calc = np.exp(mcmc.DeltaG.trace()[-nlast:]).mean()\ndKd_calc = np.exp(mcmc.DeltaG.trace()[-nlast:]).std()\nprint \"Kd = %.3f +- %.3f nM [true: %.3f nM]\" % (Kd_calc/1e-9, dKd_calc/1e-9, Kd/1e-9)\n\nrelative_error = np.abs(Kd_calc-Kd) / np.abs(Kd)\nprint \"Relative error in Kd is %.5f %%\" % (relative_error * 100)", "Basically we modeled data for a Kd of 2 nM, and with Bayes even with ideal data, it still thought that the Kd was 0.9 nM.\nCan we get a better result just by improving our data?\nLet's make a 'better' set of data.", "def make_two_component_binding(Kd, 
Ptot, Ltot):\n \n PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM)\n P = Ptot - PL; # free protein concentration in sample cell after n injections (uM) \n L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM) \n return [P, L, PL]", "All we need to do make 'better' data is refine our Ligand range.", "Lnew = 1.0e-7 / np.array([10**(float(i)/8.0) for i in range(24)]) \n\n[L, P, PL] = make_two_component_binding(2e-9,Ptot,Lnew)\nprint PL\n\n# y will be complex concentration\n# x will be total ligand concentration\nplt.semilogx(Lnew,PL, 'o')\nplt.xlabel('L')\nplt.ylabel('PL')\nplt.ylim(0,1.3e-9)\nplt.axhline(Ptot,color='0.75',linestyle='--',label='[Ptot]')\nplt.legend();", "Great! Now let's see how pymc does.", "# Making max 400 relative fluorescence units, and scaling all of PL to that\nnpoints = len(Lnew)\nF_i = (400/1e-9)*PL + sigma * np.random.randn(npoints)\nPstated = np.ones([npoints],np.float64)*Ptot\nLstated = Lnew\ndPstated = 0.10 * Pstated\ndLstated = 0.08 * Lstated\n\n# Build model.\npymc_model = pymc.Model(make_model(Pstated, dPstated, Lstated, dLstated, F_i))\n\n# Sample with MCMC\nmcmc = pymc.MCMC(pymc_model, db='ram', name='Sampler', verbose=True)\nmcmc.sample(iter=100000, burn=10000, thin=50, progress_bar=False)\n\n# Plot trace of true protein concentration.\nrcParams['figure.figsize'] = [15, 3]\nplot(mcmc.Ptrue.trace()*1e6, 'o');\nxlabel('MCMC sample');\nylabel('$[P]_{tot}$ / $\\mu$M');\n\n# Plot trace of true protein concentration.\nrcParams['figure.figsize'] = [15, 3]\nplot(mcmc.Ltrue.trace()*1e6, 'o');\nxlabel('MCMC sample');\nylabel('$[L]_{tot}$ ($\\mu$M)');\nprint mcmc.Ltrue.trace().min()\n\n# Plot trace of DeltaG.\nrcParams['figure.figsize'] = [15, 3]\nplot(mcmc.DeltaG.trace(), 'o');\nxlabel('MCMC sample');\nylabel('$\\Delta G$ ($k_B T$)');\n\n# Plot histogram of DeltaG.\nrcParams['figure.figsize'] = [15, 3]\nhist(mcmc.DeltaG.trace(), 40);\nxlabel('$\\Delta G$ ($k_B T$)');\nylabel('$P(\\Delta G)$');\n\n# Plot trace of intrinsic fluorescence parameters.\nrcParams['figure.figsize'] = [15, 3]\nsemilogy(mcmc.F_PL.trace(), 'o', mcmc.F_L.trace(), 'o', mcmc.F_background.trace(), 'o');\nlegend(['complex fluorescence', 'ligand fluorescence', 'background fluorescence']);\nxlabel('MCMC sample');\nylabel('relative fluorescence intensity');\n\n# Plot model fit.\nrcParams['figure.figsize'] = [15, 3]\nfigure = pyplot.gcf() # get current figure\nFmodels = mcmc.Fmodel_i.trace()\nclf();\nhold(True)\nfor Fmodel in Fmodels:\n semilogx(Lstated, Fmodel, 'k-')\nsemilogx(Lstated, F_i, 'ro')\nhold(False)\nxlabel('$[L]_s$ (M)');\nylabel('fluorescence units');", "Did it work?", "nlast = 500 # number of final samples to analyze\nDeltaG = mcmc.DeltaG.trace()[-nlast:].mean()\ndDeltaG = mcmc.DeltaG.trace()[-nlast:].std()\nprint \"DeltaG: %.3f +- %.3f kT\" % (DeltaG, dDeltaG)\n\nKd_calc = np.exp(mcmc.DeltaG.trace()[-nlast:]).mean()\ndKd_calc = np.exp(mcmc.DeltaG.trace()[-nlast:]).std()\nprint \"Kd = %.3f +- %.3f nM [true: %.3f nM]\" % (Kd_calc/1e-9, dKd_calc/1e-9, Kd/1e-9)\n\nrelative_error = np.abs(Kd_calc-Kd) / np.abs(Kd)\nprint \"Relative error in Kd is %.5f %%\" % (relative_error * 100)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
QuantCrimAtLeeds/PredictCode
examples/Networks/Case study Chicago/Cross-Validation grid.ipynb
artistic-2.0
[ "import sys, os\nsys.path.insert(0, os.path.join(\"..\", \"..\", \"..\"))", "Cross-Validation on a grid\nContinuing to work with the Rosser et al. paper, we also want to cross validate the grid based \"prospective hotspotting\".", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.collections\nimport numpy as np\nimport descartes\nimport zipfile, pickle\n\nimport open_cp.sources.chicago\nimport open_cp.geometry\nimport open_cp.prohotspot\nimport open_cp.predictors\n\ndata_path = os.path.join(\"/media\", \"disk\", \"Data\")\n#data_path = os.path.join(\"..\", \"..\", \"..\", \"..\", \"..\", \"..\", \"Data\")\nopen_cp.sources.chicago.set_data_directory(data_path)\nsouth_side = open_cp.sources.chicago.get_side(\"South\")\n\ngrid = open_cp.data.Grid(xsize=150, ysize=150, xoffset=0, yoffset=0)\ngrid = open_cp.geometry.mask_grid_by_intersection(south_side, grid)\n\nfilename = open_cp.sources.chicago.get_default_filename()\ntimed_points = open_cp.sources.chicago.load(filename, [\"BURGLARY\"])\ntimed_points.number_data_points, timed_points.time_range\n\ntimed_points = open_cp.geometry.intersect_timed_points(timed_points, south_side)\ntimed_points.number_data_points", "Use old data instead", "filename = os.path.join(data_path, \"chicago_all_old.csv\")\ntimed_points = open_cp.sources.chicago.load(filename, [\"BURGLARY\"], type=\"all\")\ntimed_points.number_data_points, timed_points.time_range\n\ntimed_points = open_cp.geometry.intersect_timed_points(timed_points, south_side)\ntimed_points.number_data_points", "What do Rosser et al do?\nThey seem to use a \"hybrid\" approach, which we have (fortuitously) implemented as ProspectiveHotSpotContinuous. That is, they use a continuous KDE method, with both a space and time component, and then convert this to a grid as a final step.\nThe exact formula used is\n$$ \\lambda(t,s) = \\sum_{i : t_i<t} f(\\|s-s_i\\|) g(t-t_i) $$\nwhere\n$$ f(\\Delta s) = \\begin{cases} \\frac{h_S - \\Delta s}{h_S^2} & :\\text{if } \\Delta s \\leq h_S, \\ 0 &:\\text{otherwise.}\n\\end{cases} \n\\qquad\ng(\\Delta t) = \\frac{1}{h_T} \\exp\\Big( -\\frac{\\Delta t}{h_T} \\Big). $$\nNotice that this is not normalised because when converting from two dimensions to a (positive) number using the Euclidean norm $\\|\\cdot\\|$ we map the infinitesimal annulus $r \\leq \\sqrt{x^2+y^2} \\leq r+dr$ to the interval $[r, r+dr]$; the former has area $\\pi((r+dr)^2 - r^2) = 2\\pi r dr$ while the latter has length $dr$.\nNormalisation\nLet us think a bit harder about normalisation. We treat $\\lambda$ as a \"kernel\" in time and space, we presumably, mathematically, we allow $s$ to vary over the whole plane, but constrain $t\\geq 0$ (assuming all events occur in positive time; in which case $\\lambda$ is identically zero for $t<0$ anyway). Thus, that $\\lambda$ is \"normalised\" should mean that\n$$ \\int_0^\\infty \\int_{\\mathbb R^2} \\lambda(t, s) \\ ds \\ dt = 1. $$\nHow do we actually use $\\lambda$? In Rosser et al. it is first used to find the \"optimal bandwidth selection\" by constructing $\\lambda$ using all events up to time $T$ and then computing the log likelihood\n$$ \\sum_{T \\leq t_i < T+\\delta} \\log \\lambda(t_i, s_i) $$\nwhere, if we're using a time unit of days, $\\delta=1$ (i.e. we look at the events in the next day).\nTo make predictions, we take point estimates of $\\lambda$, or use the mean value. How we treat space versus time is a little unclear in the literature. 
There are perhaps two approaches:\n\nFix a time $t$ and then compute the mean value of $\\lambda(t, s)$ as $s$ varies across the grid cell.\nCompute the mean value of $\\lambda(t, s)$ as $t$ varies across the day (or other time period) we are predicting for, and as $s$ varies across the grid cell.\n\nTypically we use a monte carlo approach to estimate the mean from point estimates. Currently our code implements the first method by fixing time at the start of the day. The example below shows a roughly 2% (maximum) difference between (1) and (2) with little change if we vary the fixed time $t$ in (1).\nNotice that this introduces a difference between finding the optimal bandwidth selection and making a prediction. The former uses the exact timestamps of events, while the latter makes one prediction for the whole day, and then \"validates\" this against all events which occur in that day.\nWe thus have a number of different \"normalisations\" to consider. We could normalise $\\lambda$ so that it is a probability kernel-- this is needed if we are to use point evaluations of $\\lambda$. When forming a prediction, the resulting grid of values will (almost) never be normalised, as we are not integrating over all time. Thus we should almost certainly normalise the resulting grid based prediction, if we are to compare different predictions.\nNormalising $\\lambda$\nAfter private communication with the authors, it appears they are well aware of the normalisation issue, and that this is due to a typo in the paper. Using Polar coordinates we wish to have that\n$$ 1 = \\int_0^{2\\pi} \\int_0^\\infty r f(r) \\ dr \\ d\\theta = 2\\pi \\int_0^{h_S} rf(r) \\ dr. $$\nThe natural change to make is to define\n$$ f'(\\Delta s) = \\begin{cases} \\frac{h_S - \\Delta s}{\\pi h_S^2\\Delta s} & :\\text{if } \\Delta s \\leq h_S, \\ 0 &:\\text{otherwise.}\n\\end{cases} $$\nHowever, this introduces a singularity at $\\Delta s = 0$ which is computationally hard to deal with (essentially, the monte carlo approach to integration we use becomes much noisier, as some experiments show).\nAn alternative is to simply divide $f$ by a suitable constant. In our case, the constant is\n$$ 2\\pi \\int_0^{h_S} \\frac{h_S - r}{h_S^2} r \\ dr = 2\\pi \\Big[ \\frac{r^2}{2h_S} - \\frac{r^3}{3h_S^2} \\Big]_0^{h_S}\n= 2\\pi \\Big( \\frac{h_S}{2} - \\frac{h_S}{3} \\Big)\n= \\pi h_S / 3. 
$$", "predictor = open_cp.prohotspot.ProspectiveHotSpotContinuous(grid_size=150, time_unit=np.timedelta64(1, \"D\"))\npredictor.data = timed_points[timed_points.timestamps >= np.datetime64(\"2013-01-01\")]\n\nclass OurWeight():\n def __init__(self):\n self.time_bandwidth = 100\n self.space_bandwidth = 10\n \n def __call__(self, dt, dd):\n kt = np.exp(-dt / self.time_bandwidth) / self.time_bandwidth\n dd = np.atleast_1d(np.asarray(dd))\n #ks = (self.space_bandwidth - dd) / (self.space_bandwidth * self.space_bandwidth * dd * np.pi)\n ks = ((self.space_bandwidth - dd) / (self.space_bandwidth * self.space_bandwidth\n * np.pi * self.space_bandwidth) * 3)\n mask = dd > self.space_bandwidth\n ks[mask] = 0\n return kt * ks\n\npredictor.weight = OurWeight()\npredictor.weight.space_bandwidth = 1\n\ntend = np.datetime64(\"2013-01-01\") + np.timedelta64(180, \"D\")\nprediction = predictor.predict(tend, tend)\nprediction.samples = 50\ngrid_pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(prediction, grid)\ngrid_pred.mask_with(grid)\ngrid_pred = grid_pred.renormalise()\n\nfig, ax = plt.subplots(ncols=2, figsize=(16,8))\n\nfor a in ax:\n a.set_aspect(1)\n a.add_patch(descartes.PolygonPatch(south_side, fc=\"none\", ec=\"Black\"))\n \nax[0].pcolormesh(*grid_pred.mesh_data(), grid_pred.intensity_matrix, cmap=\"Blues\")\nax[0].set_title(\"Prediction\")\n\npoints = predictor.data.events_before(tend)\nax[1].scatter(points.xcoords, points.ycoords, marker=\"x\", color=\"black\", alpha=0.5)\nNone\n\ngrid_pred2 = predictor.grid_predict(tend, tend, tend + np.timedelta64(1, \"D\"), grid, samples=1)\ngrid_pred2.mask_with(grid)\ngrid_pred2 = grid_pred2.renormalise()\n\nfig, ax = plt.subplots(ncols=3, figsize=(16,6))\n\nfor a in ax:\n a.set_aspect(1)\n a.add_patch(descartes.PolygonPatch(south_side, fc=\"none\", ec=\"Black\"))\n \nmp = ax[0].pcolormesh(*grid_pred.mesh_data(), grid_pred.intensity_matrix, cmap=\"Blues\")\nax[0].set_title(\"Point prediction\")\nfig.colorbar(mp, ax=ax[0])\n\nmp = ax[1].pcolormesh(*grid_pred2.mesh_data(), grid_pred2.intensity_matrix, cmap=\"Blues\")\nax[1].set_title(\"With meaned time\")\nfig.colorbar(mp, ax=ax[1])\n\nmp = ax[2].pcolormesh(*grid_pred2.mesh_data(),\n np.abs(grid_pred.intensity_matrix - grid_pred2.intensity_matrix), cmap=\"Blues\")\nax[2].set_title(\"Difference\")\nfig.colorbar(mp, ax=ax[2])\n\nfig.tight_layout()\nNone", "Direct calculation of optimal bandwidth\nFollowing Rosser et al. 
closely, we don't need to form a grid prediction, and hence actually don't need to use (much of) our library code.\n\nWe find the maximum likelihood at 500m and 35--45 days, a tighter bandwidth than Rosser et al.\nThis mirrors what we saw for the network; perhaps because of using Chicago and not UK data", "tstart = np.datetime64(\"2013-01-01\")\ntend = np.datetime64(\"2013-01-01\") + np.timedelta64(180, \"D\")\n\ndef log_likelihood(start, end, weight):\n data = timed_points[(timed_points.timestamps >= tstart) & \n (timed_points.timestamps < start)]\n validate = timed_points[(timed_points.timestamps >= start) & \n (timed_points.timestamps <= end)]\n dt = validate.timestamps[None, :] - data.timestamps[:, None]\n dt = dt / np.timedelta64(1, \"D\")\n dx = validate.xcoords[None, :] - data.xcoords[:, None]\n dy = validate.ycoords[None, :] - data.ycoords[:, None]\n dd = np.sqrt(dx*dx + dy*dy)\n ll = np.sum(weight(dt, dd), axis=0)\n ll[ll < 1e-30] = 1e-30\n return np.sum(np.log(ll))\n\ndef score(weight):\n out = 0.0\n for day in range(60):\n start = tend + np.timedelta64(1, \"D\") * day\n end = tend + np.timedelta64(1, \"D\") * (day + 1)\n out += log_likelihood(start, end, weight)\n return out\n\ntime_lengths = list(range(5,100,5))\nspace_lengths = list(range(50, 2000, 50))\n\nscores = {}\nfor sl in space_lengths:\n for tl in time_lengths:\n weight = OurWeight()\n weight.space_bandwidth = sl\n weight.time_bandwidth = tl\n key = (sl, tl)\n scores[key] = score(weight)\n\ndata = np.empty((39,19))\nfor i, sl in enumerate(space_lengths):\n for j, tl in enumerate(time_lengths):\n data[i,j] = scores[(sl,tl)]\n\nordered = data.copy().ravel()\nordered.sort()\ncutoff = ordered[int(len(ordered) * 0.25)]\ndata = np.ma.masked_where(data<cutoff, data)\n\nfig, ax = plt.subplots(figsize=(8,6))\nmappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap=\"Blues\")\nax.set(xlabel=\"Time (days)\", ylabel=\"Space (meters)\")\nfig.colorbar(mappable, ax=ax)\nNone\n\nprint(max(scores.values()))\n[k for k, v in scores.items() if v > -7775]", "Scoring the grid\nWe'll now use the grid prediction; firstly using the \"fully averaged\" version.\n\nWe find the maximum likelihood at 500m and 80 days.\nI wonder what explains the slight difference from above?", "def log_likelihood(grid_pred, timed_points):\n logli = 0\n for x, y in zip(timed_points.xcoords, timed_points.ycoords):\n risk = grid_pred.risk(x, y)\n if risk < 1e-30:\n risk = 1e-30\n logli += np.log(risk)\n return logli\n\ntstart = np.datetime64(\"2013-01-01\")\ntend = np.datetime64(\"2013-01-01\") + np.timedelta64(180, \"D\")\n\ndef score_grids(grids):\n out = 0\n for day in range(60):\n start = tend + np.timedelta64(1, \"D\") * day\n end = tend + np.timedelta64(1, \"D\") * (day + 1)\n grid_pred = grids[start]\n mask = (predictor.data.timestamps > start) & (predictor.data.timestamps <= end)\n timed_points = predictor.data[mask]\n out += log_likelihood(grid_pred, timed_points)\n return out\n\ndef score(predictor):\n grids = dict()\n for day in range(60):\n start = tend + np.timedelta64(1, \"D\") * day\n end = tend + np.timedelta64(1, \"D\") * (day + 1)\n grid_pred = predictor.grid_predict(start, start, end, grid, samples=5)\n grid_pred.mask_with(grid)\n grids[start] = grid_pred.renormalise()\n return score_grids(grids), grids\n\ntime_lengths = list(range(5,100,5))\nspace_lengths = list(range(50, 2000, 50))\npredictor = open_cp.prohotspot.ProspectiveHotSpotContinuous(grid_size=150, time_unit=np.timedelta64(1, \"D\"))\npredictor.data = 
timed_points[timed_points.timestamps >= np.datetime64(\"2013-01-01\")]\npredictor.weight = OurWeight()\n\nresults = dict()\nzp = zipfile.ZipFile(\"grids.zip\", \"w\", compression=zipfile.ZIP_DEFLATED)\n\nfor sl in space_lengths:\n for tl in time_lengths:\n key = (sl, tl)\n predictor.weight = OurWeight()\n predictor.weight.space_bandwidth = sl / predictor.grid\n predictor.weight.time_bandwidth = tl\n results[key], grids = score(predictor)\n with zp.open(\"{}_{}.grid\".format(sl,tl), \"w\") as f:\n f.write(pickle.dumps(grids))\n print(\"Done\", sl, tl, file=sys.__stdout__)\n\nzp.close()\n\ndata = np.empty((39,19))\nfor i, sl in enumerate(space_lengths):\n for j, tl in enumerate(time_lengths):\n data[i,j] = results[(sl,tl)]\n\nordered = data.copy().ravel()\nordered.sort()\ncutoff = ordered[int(len(ordered) * 0.25)]\ndata = np.ma.masked_where(data<cutoff, data)\n\nfig, ax = plt.subplots(figsize=(8,6))\nmappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap=\"Blues\")\nax.set(xlabel=\"Time (days)\", ylabel=\"Space (meters)\")\nfig.colorbar(mappable, ax=ax)\nNone\n\nprint(max(results.values()))\n[k for k, v in results.items() if v > -3660]", "Where did we get to?", "zp = zipfile.ZipFile(\"grids.zip\")\n\nwith zp.open(\"500_80.grid\") as f:\n grids = pickle.loads(f.read())\n one = list(grids)[0]\n one = grids[one]\nwith zp.open(\"500_90.grid\") as f:\n grids = pickle.loads(f.read())\n two = list(grids)[0]\n two = grids[two]\n\nfig, ax = plt.subplots(ncols=2, figsize=(17,8))\n\nfor a in ax:\n a.set_aspect(1)\n a.add_patch(descartes.PolygonPatch(south_side, fc=\"none\", ec=\"Black\"))\n \nfor a, g in zip([0,1], [one,two]):\n mp = ax[a].pcolormesh(*g.mesh_data(), g.intensity_matrix, cmap=\"Blues\")\n fig.colorbar(mp, ax=ax[a])\nNone", "Again, with normal grid\nInstead of averaging in time, we just take a point estimate.\n\nWe find the maximum likelihood at 500m and 60--85 days.", "def score(predictor):\n grids = dict()\n for day in range(60):\n start = tend + np.timedelta64(1, \"D\") * day\n end = tend + np.timedelta64(1, \"D\") * (day + 1)\n prediction = predictor.predict(tend, tend)\n prediction.samples = 5\n grid_pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(prediction, grid)\n grid_pred.mask_with(grid)\n grids[start] = grid_pred.renormalise()\n return score_grids(grids), grids\n\nresults = dict()\nzp = zipfile.ZipFile(\"grids.zip\", \"w\", compression=zipfile.ZIP_DEFLATED)\n\nfor sl in space_lengths:\n for tl in time_lengths:\n key = (sl, tl)\n predictor.weight = OurWeight()\n predictor.weight.space_bandwidth = sl / predictor.grid\n predictor.weight.time_bandwidth = tl\n results[key], grids = score(predictor)\n with zp.open(\"{}_{}.grid\".format(sl,tl), \"w\") as f:\n f.write(pickle.dumps(grids))\n print(\"Done\", sl, tl, file=sys.__stdout__)\n\nzp.close()\n\ndata = np.empty((39,19))\nfor i, sl in enumerate(space_lengths):\n for j, tl in enumerate(time_lengths):\n data[i,j] = results[(sl,tl)]\n\nordered = data.copy().ravel()\nordered.sort()\ncutoff = ordered[int(len(ordered) * 0.25)]\ndata = np.ma.masked_where(data<cutoff, data)\n\nfig, ax = plt.subplots(figsize=(8,6))\nmappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap=\"Blues\")\nax.set(xlabel=\"Time (days)\", ylabel=\"Space (meters)\")\nfig.colorbar(mappable, ax=ax)\nNone\n\nprint(max(results.values()))\n[k for k, v in results.items() if v > -3680]\n\nzp = zipfile.ZipFile(\"grids.zip\")\n\nwith zp.open(\"500_80.grid\") as f:\n grids = pickle.loads(f.read())\n one = list(grids)[0]\n 
one = grids[one]\nwith zp.open(\"500_85.grid\") as f:\n grids = pickle.loads(f.read())\n two = list(grids)[0]\n two = grids[two]\n \nfig, ax = plt.subplots(ncols=2, figsize=(17,8))\n\nfor a in ax:\n a.set_aspect(1)\n a.add_patch(descartes.PolygonPatch(south_side, fc=\"none\", ec=\"Black\"))\n \nfor a, g in zip([0,1], [one,two]):\n mp = ax[a].pcolormesh(*g.mesh_data(), g.intensity_matrix, cmap=\"Blues\")\n fig.colorbar(mp, ax=ax[a])\nNone" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NGSchool2016/ngschool2016-materials
jupyter/fbrazdovic/.ipynb_checkpoints/NGSchool_python_USERS-checkpoint.ipynb
gpl-3.0
[ "Set the matplotlib magic to notebook enable inline plots.", "%pylab inline", "Calculate the Nonredundant Read Fraction (NRF)\nSAM format example:\nSRR585264.8766235 0 1 4 15 35M * 0 0 CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG XT:A:U NM:i:1 X0:i:1 X1:i:6 XM:i:1 XO:i:0 XG:i:0 MD:Z:8T26\nImport the required modules", "import subprocess\nimport matplotlib.pyplot as plt\nimport random\nimport numpy as np", "Make figures prettier and biger", "plt.style.use('ggplot')\nfigsize(10,5)", "Parse the SAM file and extract the unique start coordinates.\nFirst store the file name in the variable", "file = \"/ngschool/chip_seq/bwa/input.sorted.bam\"", "Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.", "p = subprocess.Popen([\"samtools\", \"view\", \"-q10\", \"-F260\", file],\n stdout=subprocess.PIPE)\ncoords = []\nfor line in p.stdout:\n flag, chrom, start = line.decode('utf-8').split()[1:4]\n coords.append([ flag, chrom, start])\n\ncoords[:5]\n\ncoords[-5:]", "What is the total number of our unique reads?", "len(coords)", "In python we can randomly sample the coordinates to get 1M for NRF calculations", "random.seed(1234)\nsample = random.sample(coords, 1000000)", "How many of those coordinates are unique? (We will use the set python object which only the unique items.)", "len(sample)\n\nuniqueStarts = {'watson': set(), 'crick': set()}\nfor coord in sample:\n flag, ref, start = coord\n if int(flag) & 16:\n uniqueStarts['crick'].add((ref, start))\n else:\n uniqueStarts['watson'].add((ref, start))", "How many on the Watson strand?", "len(uniqueStarts['watson'])", "And on the Crick?", "len(uniqueStarts['crick'])", "Calculate the NRF", "NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0 /\nprint(NRF_input)", "Lets create a function from what we did above and apply it to all of our files!\nTo use our function on the real sequencing datasets (not only on a small subset) we need to optimize our method a bit- we will use python module called numpy.", "def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):\n p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],\n stdout=subprocess.PIPE)\n coordType = np.dtype({'names': ['flag', 'ref', 'start'],\n 'formats': ['uint16', 'U10', 'uint32']})\n coordArray = np.empty(10000000, dtype=coordType)\n i = 0\n for line in p.stdout:\n if i >= len(coordArray):\n coordArray = np.append(coordArray, np.empty(1000000, dtype=coordType), axis=0)\n fg, rf, st = line.decode('utf-8').split()[1:4]\n coordArray[i] = np.array((fg, rf, st), dtype=coordType)\n i += 1\n coordArray = coordArray[:i]\n sample = coordArray\n if pickSample and len(coordArray) > sampleSize:\n np.random.seed(seed)\n sample = np.random.choice(coordArray, sampleSize, replace=False)\n uniqueStarts = {'watson': set(), 'crick': set()}\n for read in sample:\n flag, ref, start = read\n if flag & 16:\n uniqueStarts['crick'].add((ref, start))\n else:\n uniqueStarts['watson'].add((ref, start))\n NRF = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)\n return NRF", "Calculate the NRF for the chip-seq sample", "NRF_chip = calculateNRF(\"\", sampleSize=1000000)\nprint(NRF_chip)", "Plot the NRF!", "plt.bar([0,2],[NRF_input, NRF_chip], width=1)\nplt.xlim([-0.5,3.5]), plt.xticks([0.5, 2.5], ['Input', 'ChIP'])\nplt.xlabel('Sample')\nplt.ylabel('NRF')\nplt.ylim([0, 1.25]), plt.yticks(np.arange(0, 1.2, 0.2))\nplt.plot((-0.5,3.5), (0.8,0.8), 'red', 
linestyle='dashed')\nplt.show()", "Calculate the Signal Extraction Scaling\nLoad the results from the coverage calculations. Lets take the input sample first.\n20 0 1000 6\n20 1000 2000 15\n20 2000 3000 13\n...", "countList = []\nwith open('/ngschool/chip_seq/bedtools/input_coverage.bed', 'r') as covFile:\n for line in covFile:\n countList.append(int(line.strip('\\n').split('\\t')[3]))\n\ncountlist[:5]\n\nplot(range(101267), countlist)", "Lets see where do our reads align to the genome. Plot the distribution of tags along the genome.", "plt.plot(range(len(countList)), countList)\nplt.xlabel('Bin number')\nplt.ylabel('Bin coverage')\nplt.xlim([0, len(countList)])\nplt.show()", "Now sort the list- order the windows based on the tag count", "countList.sort()", "What do you suppose is in the beginning of our list?\nSum all the aligned tags", "countSum = sum()\ncountSum", "Calculate the summaric fraction of tags along the ordered windows.", "countFraction = []\nfor i, count in enumerate(countList):\n if i == 0:\n countFraction.append(count*1.0 / countSum)\n else:\n countFraction.append((count*1.0 / countSum) + countFraction[i-1])", "Look at the last five items of the list:\nCalculate the number of windows.", "winNumber = \nwinNumber", "Calculate what fraction of a whole is the position of each window.", "winFraction = []\nfor i in range(winNumber):\n winFraction.append(i*1.0 / winNumber)", "Look at the last five items of our new list:\nNow prepare the function!", "def calculateSES(filePath):\n countList = []\n with open(filePath, 'r') as covFile:\n for line in covFile:\n countList.append(int(line.strip('\\n').split('\\t')[3]))\n plt.plot(range(len(countList)), countList)\n plt.xlabel('Bin number')\n plt.ylabel('Bin coverage')\n plt.xlim([0, len(countList)])\n plt.show()\n countList.sort()\n countSum = sum(countList)\n countFraction = []\n for i, count in enumerate(countList):\n if i == 0:\n countFraction.append(count*1.0 / countSum)\n else:\n countFraction.append((count*1.0 / countSum) + countFraction[i-1])\n winNumber = len(countFraction)\n winFraction = []\n for i in range(winNumber):\n winFraction.append(i*1.0 / winNumber)\n return [winFraction, countFraction]", "Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample:", "chipSes = calculateSES(\"\")", "Now we can plot the calculated fractions for both the input and ChIP sample:", "plt.plot(winFraction, countFraction, label='input')\nplt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')\nplt.ylim([0,1])\nplt.xlabel('Ordered window franction')\nplt.ylabel('Genome coverage fraction')\nplt.legend(loc='best')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
TomTranter/OpenPNM
examples/simulations/Working with Source and Sink Terms.ipynb
mit
[ "Using Source and Sink Terms for a Chemical Reaction", "import warnings\nimport scipy as sp\nimport numpy as np\nimport openpnm as op\nimport matplotlib.pyplot as plt\nnp.set_printoptions(precision=5)\nnp.random.seed(10)\n%matplotlib inline", "Start by creating the network, geometry, phase and physics objects as usual:", "pn = op.network.Cubic(shape=[40, 40], spacing=1e-4)\ngeo = op.geometry.StickAndBall(network=pn, pores=pn.Ps, throats=pn.Ts)\ngas = op.phases.Air(network=pn)\nphys = op.physics.Standard(network=pn, phase=gas, geometry=geo)", "Now add the source and sink models to the physics object. In this case we'll think of the as chemical reactions. We'll add one source term and one sink term, meaning one negative reaction rate and one positive reaction rate", "gas['pore.concentration'] = 0\nphys['pore.sinkA'] = -1e-10\nphys['pore.sinkb'] = 1\nphys.add_model(propname='pore.sink', model=op.models.physics.generic_source_term.power_law,\n A1='pore.sinkA', A2='pore.sinkb', X='pore.concentration')\nphys['pore.srcA'] = 1e-11\nphys['pore.srcb'] = 1\nphys.add_model(propname='pore.source', model=op.models.physics.generic_source_term.power_law,\n A1='pore.srcA', A2='pore.srcb', X='pore.concentration')", "Now we setup a FickianDiffusion algorithm, with concentration boundary conditions on two side, and apply the sink term to 3 pores:", "rx = op.algorithms.FickianDiffusion(network=pn)\nrx.setup(phase=gas)\nrx.set_source(propname='pore.sink', pores=[420, 820, 1220])\nrx.set_value_BC(values=1, pores=pn.pores('front'))\nrx.set_value_BC(values=1, pores=pn.pores('back'))\nrx.run()", "Because the network is a 2D cubic, it is convenient to visualize it as an image, so we reshape the 'pore.concentration' array that is produced by the FickianDiffusion algorithm upon running, and turn it into a colormap representing concentration in each pore.", "#NBVAL_IGNORE_OUTPUT\nim = np.reshape(rx['pore.concentration'], [40, 40])\nfig = plt.figure(figsize=[6, 6])\nplt.imshow(im);", "Similarly, for the source term:", "rx = op.algorithms.FickianDiffusion(network=pn)\nrx.setup(phase=gas)\nrx.set_source(propname='pore.source', pores=[420, 820, 1220])\nrx.set_value_BC(values=1, pores=pn.pores('front'))\nrx.set_value_BC(values=1, pores=pn.pores('back'))\nrx.run()\n\n#NBVAL_IGNORE_OUTPUT\nim = np.reshape(rx['pore.concentration'], [40, 40])\nfig = plt.figure(figsize=[6, 6])\nplt.imshow(im);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
msampathkumar/kaggle-quora-tensorflow
dhira_team.ipynb
apache-2.0
[ "import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport os\nimport re\nimport gc\nimport codecs\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport tensorflow as tf\nfrom bs4 import BeautifulSoup\nfrom nltk.corpus import stopwords\nfrom keras.preprocessing.text import Tokenizer\n\n%matplotlib inline\n%load_ext autotime\n\npal = sns.color_palette()\n\n# Paths\n\nif os.path.isdir('data'):\n QUORA_DATA_DIR = \"data/\"\n GLOVE_DATA_DIR = \"data/\"\nelse:\n QUORA_DATA_DIR = \"/opt/datasets/quora/\"\n GLOVE_DATA_DIR = \"\"/opt/datasets/glove/\"\n\nTRAIN_CSV = QUORA_DATA_DIR + 'train.csv'\nTEST_CSV = QUORA_DATA_DIR + 'test.csv'\n\nglove_840B_300d = GLOVE_DATA_DIR + 'glove.840B.300d.txt'\nGLOVE_DATA_FILE = glove_840B_300d\n\nEMBEDDING_DIM = 300\nMAX_SEQUENCE_LENGTH = 45\nMAX_NB_WORDS = 200000\nEMBEDDING_DIM = 300\nVALIDATION_SPLIT = 0.01", "Data Analysis", "df_train = pd.read_csv(TRAIN_CSV)\ndf_test = pd.read_csv(TEST_CSV)\n\n# Train Data\ntrain_feature_1_string = pd.Series(df_train['question1'].tolist()).astype(str)\ntrain_feature_2_string = pd.Series(df_train['question2'].tolist()).astype(str)\n\ntarget = pd.Series(df_train['is_duplicate'].tolist())\n\nall_train_qs = train_feature_1_string + train_feature_2_string\n\n# Test Data\ntest_feature_1_string = pd.Series(df_test['question1'].tolist()).astype(str)\ntest_feature_2_string = pd.Series(df_test['question2'].tolist()).astype(str)\n\nall_test_qs = test_feature_1_string + test_feature_2_string\n\nall_qs = all_train_qs + all_test_qs\n\nprint(all_train_qs.tolist()[:10])\n\ndf_train.head()\n\ndf_test.head()", "Text Analysis", "dt_all_qids = df_train.qid1 + df_train.qid1\n\nplt.figure(figsize=(12, 5))\nplt.hist(dt_all_qids.value_counts(), bins=50)\nplt.yscale('log', nonposy='clip')\nplt.title('Log-Histogram of question appearance counts')\nplt.xlabel('Number of occurences of question')\nplt.ylabel('Number of questions')\nprint()\n\nall_qids = df_train.qid1 + df_train.qid2\n\ntrain_qs = df_train.question1 + df_train.question2\n\ntotal_ques_pairs = len(df_train)\nprint('Total number of question pairs for training: {}'.format(total_ques_pairs))\n\nduplicate_ques_pairs = round(df_train['is_duplicate'].mean()*100, 2)\nprint('Duplicate pairs: {}%'.format(duplicate_ques_pairs))\n\nunique_qids = len(np.unique(train_qs.fillna(\"\")))\nprint('Total number of questions in the training data: {}'.format(unique_qids))\n\nprint('Number of questions that appear multiple times: {}'.format(np.sum(all_qids.value_counts() > 1)))\n\nprint(\"Total number of questions in Quora dataset: {}\".format(len(all_qs)))\n\n\n# dist_train = train_qs.apply(len)\n# dist_test = test_qs.apply(len)\n\n# plt.figure(figsize=(15, 10))\n# plt.hist(dist_train, bins=200, range=[0, 200], color=pal[2], normed=True, label='train')\n# plt.hist(dist_test, bins=200, range=[0, 200], color=pal[1], normed=True, alpha=0.5, label='test')\n# plt.title('Normalised histogram of character count in questions', fontsize=15)\n# plt.legend()\n# plt.xlabel('Number of characters', fontsize=15)\n# plt.ylabel('Probability', fontsize=15)\n\n# print('mean-train {:.2f} std-train {:.2f} mean-test {:.2f} std-test {:.2f} max-train {:.2f} max-test {:.2f} max-train{:.2f}'.format(dist_train.mean(), \n# dist_train.std(), dist_test.mean(), dist_test.std(), dist_train.max(), dist_test.max(), dist_train.max()))\n\n# from wordcloud import WordCloud\n# cloud = WordCloud(width=1440, height=1080).generate(\" \".join(train_qs.astype(str)))\n# 
plt.figure(figsize=(20, 15))\n# plt.imshow(cloud)\n# plt.axis('off')\n\nif ('embeddings_index' not in dir()):\n print('Indexing word vectors.')\n embeddings_index = {}\n with codecs.open(GLOVE_DATA_FILE, encoding='utf-8') as f:\n for line in f:\n # line for '<key> <vector coeffecients>'\n # Example 'A 0.2341 0.12313 0.31432 0.123414 ....'\n values = line.split(' ')\n embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')\n break\n print('Found %s word vectors.' % len(embeddings_index))\nelse:\n print('Skipped to save some time!')\n\n# embeddings_index['best']\n\n# %%time\n# all_questions_text = train_feature_1_text + train_feature_2_text + test_feature_1_text + test_feature_2_text\n# all_questions_text = list(filter(lambda q: type(q) == str, all_questions_text))\n\n# ques_lengths = list(map(lambda q: len(str(q).split(' ')) if(type(q) is str) else 0, all_questions_text))\n# #Beware the questions has nan and numbers\n\n# print(max(ques_lengths)) \n\ndef getTokenizeModel(list_of_all_string, max_number_words):\n tokenizer = Tokenizer(nb_words = max_number_words)\n tokenizer.fit_on_texts(dt_all_questions_text)\n\n# %time\n# tokenizer = Tokenizer(nb_words=dt_MAX_NB_WORDS)\n# tokenizer.fit_on_texts(all_questions_text)\n# train_sequences_1 = dt_tokenizer.texts_to_sequences(train_feature_1_text)\n# train_sequences_2 = dt_tokenizer.texts_to_sequences(train_feature_2_text)\n# word_index = tokenizer.word_index\n# print('Found %s unique tokens.' % len(word_index))\n\n# dt_test_sequences_1 = tokenizer.texts_to_sequences(test_feature_1_text)\n# dt_test_sequences_2 = tokenizer.texts_to_sequences(test_feature_2_text)\n\n# data_1 = pad_sequences(train_sequences_1, maxlen=MAX_SEQUENCE_LENGTH)\n# data_2 = pad_sequences(train_sequences_2, maxlen=MAX_SEQUENCE_LENGTH)\n# labels = np.array(labels)\n# print('Shape of data tensor:', data_1.shape)\n# print('Shape of label tensor:', labels.shape)\n\n# test_data_1 = pad_sequences(test_sequences_1, maxlen=MAX_SEQUENCE_LENGTH)\n# test_data_2 = pad_sequences(test_sequences_2, maxlen=MAX_SEQUENCE_LENGTH)\n# test_labels = np.array(test_labels)\n# del test_sequences_1\n# del test_sequences_2\n# del train_sequences_1\n# del train_sequences_2\n# import gc\n# gc.collect()\n\nl = [\"just say a word\",\"of some length\"]\nll = list(map(lambda l: l.split(\" \"), l))\nseq_length = 5\nll\ne = {\"just\": [0,0,0,0,1],\n \"say\" : [0,0,0,1,1],\n \"a\" : [0,1,1,1,0],\n \"word\" : [1,0,0,1,0],\n \"of\" : [0,1,0,1,0],\n \"some\" : [0,1,0,1,0],\n \"length\" : [0,0,1,1,0],\n '<PAD>': [0,0,0,0,0]\n }\n\n# list of words --> lows\n\nfrom pprint import pprint as pp\n\ndef lows_padding(list_of_words, seq_length=5, append=True):\n \"\"\"Pads/slices given bag of words for specified length.\"\"\"\n if len(list_of_words) == seq_length:\n return list_of_words\n if len(list_of_words) > seq_length:\n return list_of_words[:seq_length]\n \n tmp = ['<PAD>' for i in range(seq_length - len(list_of_words))]\n if append:\n return list_of_words + tmp\n return tmp + list_of_words\n\ndef lows_embedding(list_of_words, serializer):\n \"\"\"To serializer/string Tokenize the list of words.\"\"\"\n return list(map(lambda x: serializer[x], list_of_words))\n\ndef lows_transformer(list_of_words, serializer, seq_length, append):\n \"\"\"To pad the given list of words and serialiase them.\"\"\"\n return lows_embedding(lows_padding(list_of_words, seq_length, append),\n serializer=e)\n\nbag_of_lows_transformer = lambda list_of_words: lows_transformer(list_of_words, serializer=e,\n seq_length=4, 
append=True)\n\npp(list(map(bag_of_lows_transformer, ll)))\n\ndef get_feature(embed_dict, list_of_strings, max_seq_length, embed_dim):\n features = np.zeros((len(list_of_strings), max_seq_length, embed_dim), dtype=float)\n list_of_bag_of_words = map(lambda l: list_of_words_padding(list_of_words=l.split(),\n seq_length=max_seq_length,\n append=True),\n list_of_strings)\n return list_of_bag_of_words\n\n \nlist(get_feature(e, l, 7, 5)) ", "Build the Graph", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 250\nlearning_rate = 0.001\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.float32, [None, None, None], name='inputs') #[Number of ques, Seq/Ques Length, Embed Dims]\n labels_ = tf.placeholder(tf.float32, [None, None], name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')", "http://suriyadeepan.github.io/2016-06-28-easy-seq2seq/\nhttps://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words\nhttps://github.com/yuhaozhang/sentence-convnet", "500 / 60" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scollins83/deep-learning
tv-script-generation/dlnd_tv_script_generation_SEC.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\n\nlen(sentences)", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # Create a set for the vocabulary\n vocabulary = set()\n \n # Add word tokens from text to the vocabulary set\n for word in text:\n vocabulary.add(word)\n \n # Convert to a list to be able to access by index\n vocab = list(vocabulary)\n \n # Populate dictionary of words in the vocabulary mapped to index positions and vice versa\n vocab_to_int = {}\n int_to_vocab = {}\n \n for i, word in enumerate(vocab):\n vocab_to_int[word] = i\n int_to_vocab[i] = word\n \n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. 
However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n \n # Instantiate punctuation dict\n punctuation_dict = {}\n \n # Populate the dictionary\n punctuation_dict['.'] = 'Period'\n punctuation_dict[','] = 'Comma'\n punctuation_dict['\"'] = 'Quotation_Mark'\n punctuation_dict[';'] = 'Semicolon'\n punctuation_dict['!'] = 'Exclamation_Mark'\n punctuation_dict['?'] = 'Question_Mark'\n punctuation_dict['('] = 'Left_Parenthesis'\n punctuation_dict[')'] = 'Right_Parenthesis'\n punctuation_dict['--'] = 'Dash'\n punctuation_dict['\\n'] = 'Return'\n \n return punctuation_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n\n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='target')\n learning_rate = tf.placeholder(tf.float32, shape=None, name='lr')\n \n return inputs, targets, learning_rate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n num_layers=1\n keep_prob = .8\n # Use a basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')\n \n return cell, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n # Embed the words for training\n \n \n embed = tf.Variable(tf.random_uniform((vocab_size, embed_dim),\n -1, 1))\n embedded = tf.nn.embedding_lookup(embed, input_data)\n return embedded\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n #tf.reset_default_graph()\n \n outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n final_state = tf.identity(state, name='final_state')\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embedding = get_embed(input_data, vocab_size, rnn_size)\n output, final_state = build_rnn(cell, embedding)\n\n logits = tf.contrib.layers.fully_connected(output, vocab_size, \n activation_fn=None, \n weights_initializer=tf.truncated_normal_initializer(stddev=0.1),\n biases_initializer=tf.zeros_initializer())\n \n return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). 
Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n n_batches = len(int_text)//(batch_size*seq_length)\n inputs = np.array(int_text[:n_batches*(batch_size*seq_length)])\n #targets = np.array(int_text[1:n_batches*(batch_size*seq_length)+1])\n targets = np.roll(inputs, -1)\n input_batches = np.split(inputs.reshape(batch_size,-1),n_batches,1)\n target_batches = np.split(targets.reshape(batch_size,-1),n_batches,1)\n \n output = np.array(list(zip(input_batches,target_batches)))\n return output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 20\n# Batch Size\nbatch_size = 50\n# RNN Size\nrnn_size = 300\n# Embedding Dimension Size\nembed_dim = 300\n# Sequence Length\nseq_length = 20\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 1\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n input_tensor = loaded_graph.get_tensor_by_name(\"input:0\")\n initial_state_tensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n final_state_tensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n probs_tensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n p = np.squeeze(probabilities)\n p[np.argsort(p)[:-1]] = 0\n p = p / np.sum(p)\n c = np.random.choice(len(int_to_vocab), 1, p=p)[0]\n return int_to_vocab[c]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stsouko/CGRtools
doc/tutorial/6_reactor.ipynb
lgpl-3.0
[ "6. Reactor\n\n(c) 2019, 2020 Dr. Ramil Nugmanov;\n(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev\n\nInstallation instructions of CGRtools package information and tutorial's files see on https://github.com/stsouko/CGRtools\nNOTE: Tutorial should be performed sequentially from the start. Random cell running will lead to unexpected results.", "import pkg_resources\nif pkg_resources.get_distribution('CGRtools').version.split('.')[:2] != ['4', '0']:\n print('WARNING. Tutorial was tested on 4.0 version of CGRtools')\nelse:\n print('Welcome!')\n\n# load data for tutorial\nfrom pickle import load\nfrom traceback import format_exc\n\nwith open('reactions.dat', 'rb') as f:\n reactions = load(f) # list of ReactionContainer objects\n\nr1 = reactions[0] # reaction\nm6 = r1.reactants[1]\nm6copy = m6.copy()\nm6copy.atom(5)._Core__isotope = 13", "Reactor objects stores single transformation and can apply it to molecules or CGRs.\nTransformations is ReactionContainer object which in reactant side consist of query for matching group and in product side patch for updating matched atoms and bonds", "from CGRtools import CGRReactor, Reactor # import of Reactor\nfrom CGRtools.containers import * # import of required objects\nfrom CGRtools.containers.bonds import DynamicBond", "6.1. Products generation\nReactor works similar to ChemAxon Reactions enumeration.\nExample here presents application of it to create esters from acids.\nFirst we need to construct carboxy group matcher query. Then, ether group need to be specified. \nAtom numbers in query and patch should be mapped to each other. The same atoms should have same numbers.", "acid = QueryContainer() # this query matches acids. Use construction possibilities.\nacid.add_atom('C', neighbors=3) # add carboxyl carbon. Hybridization is irrelevant here\nacid.add_atom('O', neighbors=1) # add hydroxyl oxygen. Hybridization is irrelevant here \nacid.add_atom('O') # add carbonyl oxygen. Number of neighbors is irrelevant here.\nacid.add_bond(1, 2, 1) # create single bond between carbon and hydroxyl oxygen\nacid.add_bond(1, 3, 2) # create double bond\nprint(acid)\nacid.clean2d()\nacid\n\nmethyl_ester = QueryContainer() # create patch - how carboxyl group should be changed. We write methylated group\nmethyl_ester.add_atom('C', 1) # second argument is predefined atom mapping. Notice that mapping corresponds... \nmethyl_ester.add_atom('O', 2) # ... to order in already created acid group. Atom 2 is released water.\nmethyl_ester.add_atom('O', 4)\nmethyl_ester.add_atom('O', 3)\nmethyl_ester.add_atom('C', 5)\nmethyl_ester.add_bond(1, 4, 1)\nmethyl_ester.add_bond(1, 3, 2)\nmethyl_ester.add_bond(4, 5, 1)\n# No bond between atom 1 and atom 2. This bond will be broken. \nmethyl_ester.clean2d()\nmethyl_ester\n\nm6 # acid\n\ntemplate = ReactionContainer([acid], [methyl_ester]) # merge query and patch in template, which is ReactionContainer\nreactor = CGRReactor(template) # Reactor is initialized\nreacted_acid = next(reactor(m6)) # application of Reactor to molecule\n\nreacted_acid.clean2d() # calculate coordinates\nreacted_acid # desired methylated ester have been generated", "One can notice presence of separate oxygen (water) and ester group.\nThe second group can substituted by calling reactor on observed product.", "second_stage = next(reactor(reacted_acid)) # apply transformation on product of previous transformation\nsecond_stage.clean2d() # recalculate coordinates for correct drawing\nsecond_stage", "second_stage has 3 components in a single MoleculeContainer object. 
We can split it into individual molecules and place all molecules into ReactionContainer object. Since in CGRtools atom-to-atom mapping corresponds to numbering of atoms in molecules, the resulting product has AAM according to the rule applied. Thus, reaction has correct AAM and nothing special should be made to keep or find it.", "products = second_stage.split() # split product into individual molecules\nreact = ReactionContainer([m6], products) # unite reagent and product into reaction. \nreact", "For multicomponent reactions one can merge molecules of reactants into single MoleculeContainer object and apply reactor on it.\nIt is possible to generate all available products in case that molecule has several groups matching the query.", "m6copy\n\nenums = set() # the set enums is used to select structurally diverse products\nfor m in reactor(m6copy): # limit=0 is enumeration of all possible products by reactor\n print(m) # print signatures for observed molecules. Notice presence of water as component of product\n m.clean2d() # recalculate coordinates\n enums.update(m.split()) # split product into separate molecules\nenums = list(enums) # set of all resulting molecules", "Let's have a look at molecules in set.\nNote to lost isotope mark.", "enums[0]\n\nenums[1]\n\nenums[2]", "6.2. MetaReactions (reactions on CGRs).\nReactor could be applied to CGR to introduce changes into reaction. \n6.2.1. Example of atom-to-atom mapping fixing.", "reactions[1] # reaction under study\n\ncgr = ~reactions[1] # generate reaction CGR\nprint(cgr)\ncgr.clean2d()\ncgr\n\ncgr.centers_list # reaction has two reaction centers. [10,11,12] - pseudo reaction appeared due to AAM error", "Reaction has AAM error in nitro-group\nLets try to use Reactor for AAM fixing", "nitro = QueryCGRContainer() # construct query for invalid reaction center - CGR of wrongly mapped nitro-group\nnitro.add_atom('N', charge=1, p_charge=1) # atom 1\nnitro.add_atom('O', charge=0, p_charge=-1) # atom 2. Notice that due to AAM error charge was changed\nnitro.add_atom('O', charge=-1, p_charge=0) # atom 3. Notice that due to AAM error charge was changed\nnitro.add_atom('C') # atom 4\n\nnitro.add_bond(1, 2, DynamicBond(2, 1)) # bond between atoms 1 and 2. Due to AAM error bond is dynamic ('2>1' type) \nnitro.add_bond(1, 3, DynamicBond(1, 2)) # bond between atoms 1 and 3. Due to AAM error bond is dynamic ('1>2' type) \nnitro.add_bond(1, 4, 1) # ordinary bond\nprint(nitro)\n# this query matches reaction center in CGR appeared due to AAM error.\nnitro.clean2d()\nnitro\n\nnitro < cgr # query matches CGR of reaction with error.\n\nvalid_nitro = QueryCGRContainer() # construct nitro group without dynamic atoms. 
Notice that atom order should correspond to the object nitro\nvalid_nitro.add_atom('N', charge=1, p_charge=1) # ordinary N atom\nvalid_nitro.add_atom('O', charge=-1, p_charge=-1) # ordinary negatively charged oxygen atom\nvalid_nitro.add_atom('O') # ordinary oxygen atom\n\nvalid_nitro.add_bond(1, 2, 1) # ordinary single bond\nvalid_nitro.add_bond(1, 3, 2) # ordinary double bond\nprint(valid_nitro)\nvalid_nitro.clean2d()\nvalid_nitro", "Now it is time to prepare and apply a Template to the CGR based on the reaction with incorrect AAM.\nA Template is a ReactionContainer with a query in reactants and a patch in products", "template = ReactionContainer([nitro], [valid_nitro]) # template shows how the wrong part of CGR is transformed into the correct one.\nprint(template) # notice the complex structure of the query: CGR signature is given in braces, then >> and molecule signature\ntemplate", "The Reactor class accepts a single template. The existence of a dynamic bond in it is not a problem.", "reactor = CGRReactor(template)", "The Reactor object is callable and accepts a molecule or CGR as argument.\nNOTE: fixed is a new CGR object", "fixed = next(reactor(cgr)) # fix CGR", "CGRReactor returns None if the template could not be applied, otherwise the patched structure is returned.", "print(fixed)\nfixed", "One can see that the nitro group has no dynamic bonds any more. The CGR corresponds only to substitution.", "fixed.centers_list # the reaction center that appeared due to the AAM error no longer exists. Only 1 reaction center is found", "6.2.2 Reaction transformation\nExample of E2 to SN2 transformation.\nE2 and SN2 are concurrent reactions.\nWe can easily change the reaction center of an E2 reaction to SN2. This can be achieved by substituting the reaction center corresponding to double bond formation in the E2 reaction with the one corresponding to formation of a new single bond with the base, as in SN2.", "from CGRtools.files import MRVRead\nfrom io import StringIO\n\ne2 = next(MRVRead('e2.mrv')) # read E2 reaction from ChemAxon MRV file\ne2\n\n# create CGR query for E2 reaction side\ne2query = QueryCGRContainer() \ne2query.add_atom('C', 1) # create carbon with mapping number 1\ne2query.add_atom('C', 2) # create carbon with mapping number 2\n# addition of iodine atom\ne2query.add_atom('I', 3, neighbors=1, p_neighbors=0, charge=0, p_charge=-1)\n# addition of OH- or RO- groups\ne2query.add_atom('O', 4, neighbors=[0, 1], p_neighbors=[0, 1], charge=-1, p_charge=0)\n\ne2query.add_bond(1, 2, DynamicBond(1, 2)) # bond between the two carbons corresponds to formation of double from single\ne2query.add_bond(1, 3, DynamicBond(1)) # bond between carbon and halogen breaks in E2 reaction\nprint(e2query) # it is CGR of E2 reaction center\ne2query.clean2d()\ne2query", "e2_cgr = ~e2 # compose reaction into CGR\ne2_cgr", "e2query < e2_cgr # E2 CGR pattern works!", "# create patch creating SN2 reaction. 
Notice that ordering of atoms corresponds to that of the E2 CGR query\nsn2patch = QueryCGRContainer()\nsn2patch.add_atom('C', 1) # save atom unchanged.\nsn2patch.add_atom('C', 2) # it is the central atom.\nsn2patch.add_atom('I', 3, charge=0, p_charge=-1)\nsn2patch.add_atom('O', 4, charge=-1, p_charge=0)\n\nsn2patch.add_bond(1, 2, 1) # set carbon - carbon single bond that is unchanged in SN2 reaction\nsn2patch.add_bond(1, 3, DynamicBond(1)) # this bond is broken in SN2 reaction\nsn2patch.add_bond(1, 4, DynamicBond(None, 1)) # it corresponds to formation of the O(S)-C bond in SN2 reaction\nprint(sn2patch)\nsn2patch.clean2d()\nsn2patch", "reactor = CGRReactor(ReactionContainer([e2query], [sn2patch])) # create template and pass it to Reactor\nsn2_cgr = next(reactor(e2_cgr)) # apply Reactor on E2 reaction\n\nprint(sn2_cgr)\nsn2_cgr\n\n# decompose CGR into reaction\nsn2 = ReactionContainer.from_cgr(sn2_cgr)\nsn2.clean2d()\nsn2" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amueller/scipy-2017-sklearn
notebooks/20.Unsupervised_learning-Hierarchical_and_density-based_clustering_algorithms.ipynb
cc0-1.0
[ "%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt", "Unsupervised learning: Hierarchical and density-based clustering algorithms\nIn a previous notebook, \"08 Unsupervised Learning - Clustering.ipynb\", we introduced one of the essential and widely used clustering algorithms, K-means. One of the advantages of K-means is that it is extremely easy to implement, and it is also computationally very efficient compared to other clustering algorithms. However, we've seen that one of the weaknesses of K-Means is that it only works well if the data can be grouped into a globular or spherical shape. Also, we have to assign the number of clusters, k, a priori -- this can be a problem if we have no prior knowledge about how many clusters we expect to find. \nIn this notebook, we will take a look at 2 alternative approaches to clustering, hierarchical clustering and density-based clustering. \nHierarchical Clustering\nOne nice feature of hierachical clustering is that we can visualize the results as a dendrogram, a hierachical tree. Using the visualization, we can then decide how \"deep\" we want to cluster the dataset by setting a \"depth\" threshold. Or in other words, we don't need to make a decision about the number of clusters upfront.\nAgglomerative and divisive hierarchical clustering\nFurthermore, we can distinguish between 2 main approaches to hierarchical clustering: Divisive clustering and agglomerative clustering. In agglomerative clustering, we start with a single sample from our dataset and iteratively merge it with other samples to form clusters -- we can see it as a bottom-up approach for building the clustering dendrogram.\nIn divisive clustering, however, we start with the whole dataset as one cluster, and we iteratively split it into smaller subclusters -- a top-down approach. \nIn this notebook, we will use agglomerative clustering.\nSingle and complete linkage\nNow, the next question is how we measure the similarity between samples. One approach is the familiar Euclidean distance metric that we already used via the K-Means algorithm. As a refresher, the distance between 2 m-dimensional vectors $\\mathbf{p}$ and $\\mathbf{q}$ can be computed as:\n\\begin{align} \\mathrm{d}(\\mathbf{q},\\mathbf{p}) & = \\sqrt{(q_1-p_1)^2 + (q_2-p_2)^2 + \\cdots + (q_m-p_m)^2} \\[8pt]\n& = \\sqrt{\\sum_{j=1}^m (q_j-p_j)^2}.\\end{align} \nHowever, that's the distance between 2 samples. Now, how do we compute the similarity between subclusters of samples in order to decide which clusters to merge when constructing the dendrogram? I.e., our goal is to iteratively merge the most similar pairs of clusters until only one big cluster remains. There are many different approaches to this, for example single and complete linkage. 
\nIn single linkage, we take the pair of the most similar samples (based on the Euclidean distance, for example) in each cluster, and merge the two clusters which have the most similar 2 members into one new, bigger cluster.\nIn complete linkage, we compare the pairs of the two most dissimilar members of each cluster with each other, and we merge the 2 clusters where the distance between its 2 most dissimilar members is smallest.\n\nTo see the agglomerative, hierarchical clustering approach in action, let us load the familiar Iris dataset -- pretending we don't know the true class labels and want to find out how many different follow species it consists of:", "from sklearn.datasets import load_iris\n\niris = load_iris()\nX = iris.data[:, [2, 3]]\ny = iris.target\nn_samples, n_features = X.shape\n\nplt.scatter(X[:, 0], X[:, 1], c=y);", "First, we start with some exploratory clustering, visualizing the clustering dendrogram using SciPy's linkage and dendrogram functions:", "from scipy.cluster.hierarchy import linkage\nfrom scipy.cluster.hierarchy import dendrogram\n\nclusters = linkage(X, \n metric='euclidean',\n method='complete')\n\ndendr = dendrogram(clusters)\n\nplt.ylabel('Euclidean Distance');", "Next, let's use the AgglomerativeClustering estimator from scikit-learn and divide the dataset into 3 clusters. Can you guess which 3 clusters from the dendrogram it will reproduce?", "from sklearn.cluster import AgglomerativeClustering\n\nac = AgglomerativeClustering(n_clusters=3,\n affinity='euclidean',\n linkage='complete')\n\nprediction = ac.fit_predict(X)\nprint('Cluster labels: %s\\n' % prediction)\n\nplt.scatter(X[:, 0], X[:, 1], c=prediction);", "Density-based Clustering - DBSCAN\nAnother useful approach to clustering is Density-based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroup based on dense regions of point.\nIn DBSCAN, we distinguish between 3 different \"points\":\n\nCore points: A core point is a point that has at least a minimum number of other points (MinPts) in its radius epsilon.\nBorder points: A border point is a point that is not a core point, since it doesn't have enough MinPts in its neighborhood, but lies within the radius epsilon of a core point.\nNoise points: All other points that are neither core points nor border points.\n\n\nA nice feature about DBSCAN is that we don't have to specify a number of clusters upfront. 
However, it requires the setting of additional hyperparameters such as the value for MinPts and the radius epsilon.", "from sklearn.datasets import make_moons\nX, y = make_moons(n_samples=400,\n noise=0.1,\n random_state=1)\nplt.scatter(X[:,0], X[:,1])\nplt.show()\n\nfrom sklearn.cluster import DBSCAN\n\ndb = DBSCAN(eps=0.2,\n min_samples=10,\n metric='euclidean')\nprediction = db.fit_predict(X)\n\nprint(\"Predicted labels:\\n\", prediction)\n\nplt.scatter(X[:, 0], X[:, 1], c=prediction);", "Exercise\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>\n Using the following toy dataset, two concentric circles, experiment with the three different clustering algorithms that we used so far: `KMeans`, `AgglomerativeClustering`, and `DBSCAN`.\n\nWhich clustering algorithms reproduces or discovers the hidden structure (pretending we don't know `y`) best?\n\nCan you explain why this particular algorithm is a good choice while the other 2 \"fail\"?\n </li>\n </ul>\n</div>", "from sklearn.datasets import make_circles\n\nX, y = make_circles(n_samples=1500, \n factor=.4, \n noise=.05)\n\nplt.scatter(X[:, 0], X[:, 1], c=y);\n\n# %load solutions/20_clustering_comparison.py" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
YuriyGuts/kaggle-quora-question-pairs
notebooks/feature-magic-frequencies.ipynb
mit
[ "Feature: Question Occurrence Frequencies\nThis is a \"magic\" (leaky) feature published by Jared Turkewitz that doesn't rely on the question text. Questions that occur more often in the training and test sets are more likely to be duplicates.\nImports\nThis utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.", "from pygoose import *", "Config\nAutomatically discover the paths to various data folders and compose the project structure.", "project = kg.Project.discover()", "Identifier for storing these features on disk and referring to them later.", "feature_list_id = 'magic_frequencies'", "Read data\nPreprocessed and tokenized questions.", "tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_train.pickle')\ntokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_test.pickle')", "Build features\nUnique question texts.", "df_all_pairs = pd.DataFrame(\n [\n [' '.join(pair[0]), ' '.join(pair[1])]\n for pair in tokens_train + tokens_test\n ],\n columns=['question1', 'question2'],\n)\n\ndf_unique_texts = pd.DataFrame(np.unique(df_all_pairs.values.ravel()), columns=['question'])\n\nquestion_ids = pd.Series(df_unique_texts.index.values, index=df_unique_texts['question'].values).to_dict()", "Mark every question with its number according to the uniques table.", "df_all_pairs['q1_id'] = df_all_pairs['question1'].map(question_ids)\ndf_all_pairs['q2_id'] = df_all_pairs['question2'].map(question_ids)", "Map to frequency space.", "q1_counts = df_all_pairs['q1_id'].value_counts().to_dict()\nq2_counts = df_all_pairs['q2_id'].value_counts().to_dict()\n\ndf_all_pairs['q1_freq'] = df_all_pairs['q1_id'].map(lambda x: q1_counts.get(x, 0) + q2_counts.get(x, 0))\ndf_all_pairs['q2_freq'] = df_all_pairs['q2_id'].map(lambda x: q1_counts.get(x, 0) + q2_counts.get(x, 0))", "Calculate ratios.", "df_all_pairs['freq_ratio'] = df_all_pairs['q1_freq'] / df_all_pairs['q2_freq']\ndf_all_pairs['freq_ratio_inverse'] = df_all_pairs['q2_freq'] / df_all_pairs['q1_freq']", "Build final features.", "columns_to_keep = [\n 'q1_freq',\n 'q2_freq',\n 'freq_ratio',\n 'freq_ratio_inverse',\n]\n\nX_train = df_all_pairs[columns_to_keep].values[:len(tokens_train)]\nX_test = df_all_pairs[columns_to_keep].values[len(tokens_train):]\n\nprint('X train:', X_train.shape)\nprint('X test :', X_test.shape)", "Save features", "feature_names = [\n 'magic_freq_q1',\n 'magic_freq_q2',\n 'magic_freq_q1_q2_ratio',\n 'magic_freq_q2_q1_ratio',\n]\n\nproject.save_features(X_train, X_test, feature_names, feature_list_id)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nickcdryan/hep_ml
notebooks/DemoReweighting.ipynb
apache-2.0
[ "Demonstration of distribution reweighting\nhep_ml.reweight contains methods to reweight distributions. \nTypically we use reweighting of monte-carlo to fight drawbacks of simulation, though there are many applications.\nIn this example we reweight multidimensional distibutions: original and target, the aim is to find new weights for original distribution, such that these multidimensional distributions will coincide. \nThese is a toy example without real physical meaning.\nPay attention: equality of distibutions for each feature $\\neq$ equality of multivariate dist", "%pylab inline\nfigsize(16, 8)\n\nimport root_numpy\nimport pandas\nfrom hep_ml import reweight", "Downloading data", "storage = 'https://github.com/arogozhnikov/hep_ml/blob/data/data_to_download/'\n!wget -O ../data/MC_distribution.root -nc $storage/MC_distribution.root?raw=true\n!wget -O ../data/RD_distribution.root -nc $storage/RD_distribution.root?raw=true\n\ncolumns = ['hSPD', 'pt_b', 'pt_phi', 'vchi2_b', 'mu_pt_sum']\n\noriginal = root_numpy.root2array('../data/MC_distribution.root', branches=columns)\ntarget = root_numpy.root2array('../data/RD_distribution.root', branches=columns)\n\noriginal = pandas.DataFrame(original)\ntarget = pandas.DataFrame(target)\n\noriginal_weights = numpy.ones(len(original))\n\nfrom hep_ml.metrics_utils import ks_2samp_weighted\nhist_settings = {'bins': 100, 'normed': True, 'alpha': 0.7}\n\ndef draw_distributions(new_original_weights):\n for id, column in enumerate(columns, 1):\n xlim = numpy.percentile(numpy.hstack([target[column]]), [0.01, 99.99])\n subplot(2, 3, id)\n hist(original[column], weights=new_original_weights, range=xlim, **hist_settings)\n hist(target[column], range=xlim, **hist_settings)\n title(column)\n print 'KS over ', column, ' = ', ks_2samp_weighted(original[column], target[column], \n weights1=new_original_weights, weights2=numpy.ones(len(target), dtype=float)) ", "Original distributions\nKS = Kolmogorov-Smirnov distance", "# pay attention, actually we have very few data\nlen(original), len(target)\n\ndraw_distributions(original_weights)", "Bins-based reweighting in n dimensions\nTypical way to reweight distributions is based on bins.", "bins_reweighter = reweight.BinsReweighter(n_bins=20, n_neighs=1.)\nbins_reweighter.fit(original, target)\n\nbins_weights = bins_reweighter.predict_weights(original)\ndraw_distributions(bins_weights)", "Gradient Boosted Reweighter\nThis algorithm is inspired by gradient boosting and is able to fight curse of dimensionality.\nIt uses decision trees and special loss functiion (ReweightLossFunction).\nGBReweighter supports negative weights (to reweight MC to splotted real data).", "reweighter = reweight.GBReweighter(n_estimators=50, learning_rate=0.1, max_depth=3, min_samples_leaf=1000, \n gb_args={'subsample': 0.6})\nreweighter.fit(original, target)\n\ngb_weights = reweighter.predict_weights(original)\ndraw_distributions(gb_weights)", "Comparing some simple expressions:\nthe most interesting is checking some other variables in multidimensional distributions (those are expressed via original variables).", "def check_ks_of_expression(expression):\n col_original = original.eval(expression, engine='python')\n col_target = target.eval(expression, engine='python')\n w_target = numpy.ones(len(col_target), dtype='float')\n print 'No reweight KS:', ks_2samp_weighted(col_original, col_target, weights1=original_weights, weights2=w_target) \n print 'Bins reweight KS:', ks_2samp_weighted(col_original, col_target, weights1=bins_weights, 
weights2=w_target)\n print 'GB Reweight KS:', ks_2samp_weighted(col_original, col_target, weights1=gb_weights, weights2=w_target)\n\ncheck_ks_of_expression('hSPD')\n\ncheck_ks_of_expression('hSPD * pt_phi')\n\ncheck_ks_of_expression('hSPD * pt_phi * vchi2_b')\n\ncheck_ks_of_expression('pt_b * pt_phi / hSPD ')\n\ncheck_ks_of_expression('hSPD * pt_b * vchi2_b / pt_phi')", "GB-discrimination\nLet's check how well a classifier is able to distinguish these distributions. ROC AUC is taken as a measure of quality.\nFor this purpose we split the data into train and test, then train a classifier to distinguish these distributions.\nIf ROC AUC = 0.5 on test, the distributions are equal; if ROC AUC = 1.0, they are perfectly separable.", "from sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import roc_auc_score\n\ndata = numpy.concatenate([original, target])\nlabels = numpy.array([0] * len(original) + [1] * len(target))\n\nweights = {}\nweights['original'] = original_weights\nweights['bins'] = bins_weights\nweights['gb_weights'] = gb_weights\n\n\nfor name, new_weights in weights.items():\n W = numpy.concatenate([new_weights / new_weights.sum() * len(target), [1] * len(target)])\n Xtr, Xts, Ytr, Yts, Wtr, Wts = train_test_split(data, labels, W, random_state=42, train_size=0.51)\n clf = GradientBoostingClassifier(subsample=0.3, n_estimators=30).fit(Xtr, Ytr, sample_weight=Wtr)\n \n print name, roc_auc_score(Yts, clf.predict_proba(Xts)[:, 1], sample_weight=Wts)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JackDi/phys202-2015-work
assignments/assignment04/MatplotlibEx01.ipynb
mit
[ "Matplotlib Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "Line plot of sunspot data\nDownload the .txt data for the \"Yearly mean total sunspot number [1700 - now]\" from the SILSO website. Upload the file to the same directory as this notebook.", "import os\nassert os.path.isfile('yearssn.dat')", "Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.", "data=np.loadtxt('yearssn.dat')\nssc=data[:,1]\nyear=data[:,0]\n\n\nassert len(year)==315\nassert year.dtype==np.dtype(float)\nassert len(ssc)==315\nassert ssc.dtype==np.dtype(float)", "Make a line plot showing the sunspot count as a function of year.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.", "f=plt.figure(figsize=(25,4))\nplt.plot(year,ssc)\nplt.title(\"Sun Spots Seen Per Year Since 1700\")\nplt.xlabel(\"Year\")\nplt.ylabel(\"Number of Sun Spots Seen\")\nplt.xlim(1700,2015)\nplt.ylim(0,180)\n\nassert True # leave for grading", "Describe the choices you have made in building this visualization and how they make it effective.\nI made the figure extra long, relative to its height, in order to accomodate the long range of data in the x direction. I also chose the x and y limits so as to show all of the data. The axis labels and titles are concise and show all of the information needed.\nNow make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.", "# YOUR CODE HERE\nf=plt.figure(figsize=(25,4))\nseventeen=data[:100,:]\neighteen=data[100:200,:]\nnineteen=data[200:300,:]\ntwo=data[300:,:]\n\nplt.subplot(2,2,1)\nplt.plot(seventeen[:,0],seventeen[:,1])\nplt.title(\"Sun Spots seen per Year During the 1700's\")\n\nplt.subplot(2,2,2)\nplt.plot(eighteen[:,0],eighteen[:,1])\nplt.title(\"Sun Spots seen per Year During the 1800's\")\n\n\n\nplt.subplot(2,2,3)\nplt.plot(nineteen[:,0],nineteen[:,1])\nplt.title(\"Sun Spots seen per Year During the 1900's\")\n\nplt.subplot(2,2,4)\nplt.plot(two[:,0],two[:,1])\nplt.title(\"Sun Spots seen per Year During the 2000's\")\n\nplt.tight_layout()\n\n\nassert True # leave for grading" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
keras-team/keras-io
examples/generative/ipynb/lstm_character_level_text_generation.ipynb
apache-2.0
[ "Character-level text generation with LSTM\nAuthor: fchollet<br>\nDate created: 2015/06/15<br>\nLast modified: 2020/04/30<br>\nDescription: Generate text from Nietzsche's writings with a character-level LSTM.\nIntroduction\nThis example demonstrates how to use a LSTM model to generate\ntext character-by-character.\nAt least 20 epochs are required before the generated text\nstarts sounding locally coherent.\nIt is recommended to run this script on GPU, as recurrent\nnetworks are quite computationally intensive.\nIf you try this script on new data, make sure your corpus\nhas at least ~100k characters. ~1M is better.\nSetup", "from tensorflow import keras\nfrom tensorflow.keras import layers\n\nimport numpy as np\nimport random\nimport io\n", "Prepare the data", "path = keras.utils.get_file(\n \"nietzsche.txt\", origin=\"https://s3.amazonaws.com/text-datasets/nietzsche.txt\"\n)\nwith io.open(path, encoding=\"utf-8\") as f:\n text = f.read().lower()\ntext = text.replace(\"\\n\", \" \") # We remove newlines chars for nicer display\nprint(\"Corpus length:\", len(text))\n\nchars = sorted(list(set(text)))\nprint(\"Total chars:\", len(chars))\nchar_indices = dict((c, i) for i, c in enumerate(chars))\nindices_char = dict((i, c) for i, c in enumerate(chars))\n\n# cut the text in semi-redundant sequences of maxlen characters\nmaxlen = 40\nstep = 3\nsentences = []\nnext_chars = []\nfor i in range(0, len(text) - maxlen, step):\n sentences.append(text[i : i + maxlen])\n next_chars.append(text[i + maxlen])\nprint(\"Number of sequences:\", len(sentences))\n\nx = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)\ny = np.zeros((len(sentences), len(chars)), dtype=np.bool)\nfor i, sentence in enumerate(sentences):\n for t, char in enumerate(sentence):\n x[i, t, char_indices[char]] = 1\n y[i, char_indices[next_chars[i]]] = 1\n\n", "Build the model: a single LSTM layer", "model = keras.Sequential(\n [\n keras.Input(shape=(maxlen, len(chars))),\n layers.LSTM(128),\n layers.Dense(len(chars), activation=\"softmax\"),\n ]\n)\noptimizer = keras.optimizers.RMSprop(learning_rate=0.01)\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=optimizer)\n", "Prepare the text sampling function", "\ndef sample(preds, temperature=1.0):\n # helper function to sample an index from a probability array\n preds = np.asarray(preds).astype(\"float64\")\n preds = np.log(preds) / temperature\n exp_preds = np.exp(preds)\n preds = exp_preds / np.sum(exp_preds)\n probas = np.random.multinomial(1, preds, 1)\n return np.argmax(probas)\n\n", "Train the model", "epochs = 40\nbatch_size = 128\n\nfor epoch in range(epochs):\n model.fit(x, y, batch_size=batch_size, epochs=1)\n print()\n print(\"Generating text after epoch: %d\" % epoch)\n\n start_index = random.randint(0, len(text) - maxlen - 1)\n for diversity in [0.2, 0.5, 1.0, 1.2]:\n print(\"...Diversity:\", diversity)\n\n generated = \"\"\n sentence = text[start_index : start_index + maxlen]\n print('...Generating with seed: \"' + sentence + '\"')\n\n for i in range(400):\n x_pred = np.zeros((1, maxlen, len(chars)))\n for t, char in enumerate(sentence):\n x_pred[0, t, char_indices[char]] = 1.0\n preds = model.predict(x_pred, verbose=0)[0]\n next_index = sample(preds, diversity)\n next_char = indices_char[next_index]\n sentence = sentence[1:] + next_char\n generated += next_char\n\n print(\"...Generated: \", generated)\n print()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
machine-learning/converting_a_dictionary_into_a_matrix.ipynb
mit
[ "Title: Converting A Dictionary Into A Matrix\nSlug: converting_a_dictionary_into_a_matrix \nSummary: How to convert a dictionary into a feature matrix for machine learning in Python. \nDate: 2016-09-06 12:00\nCategory: Machine Learning\nTags: Preprocessing Structured Data\nAuthors: Chris Albon\nPreliminaries", "# Load library\nfrom sklearn.feature_extraction import DictVectorizer", "Create Dictionary", "# Our dictionary of data\ndata_dict = [{'Red': 2, 'Blue': 4},\n {'Red': 4, 'Blue': 3},\n {'Red': 1, 'Yellow': 2},\n {'Red': 2, 'Yellow': 2}]", "Feature Matrix From Dictionary", "# Create DictVectorizer object\ndictvectorizer = DictVectorizer(sparse=False)\n\n# Convert dictionary into feature matrix\nfeatures = dictvectorizer.fit_transform(data_dict)\n\n# View feature matrix\nfeatures", "View column names", "# View feature matrix column names\ndictvectorizer.get_feature_names()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
appleby/fastai-courses
deeplearning1/nbs/lesson6-ma.ipynb
apache-2.0
[ "from theano.sandbox import cuda\ncuda.use('gpu1')\n\n%matplotlib inline\nimport utils; reload(utils)\nfrom utils import *\nfrom __future__ import division, print_function", "Setup\nWe're going to download the collected works of Nietzsche to use as our data for this class.", "path = get_file('nietzsche.txt', origin=\"https://s3.amazonaws.com/text-datasets/nietzsche.txt\")\ntext = open(path).read()\nprint('corpus length:', len(text))\n\nchars = sorted(list(set(text)))\nvocab_size = len(chars)+1\nprint('total chars:', vocab_size)", "Sometimes it's useful to have a zero value in the dataset, e.g. for padding", "chars.insert(0, \"\\0\")\n\n''.join(chars[1:])", "Map from chars to indices and back again", "char_indices = dict((c, i) for i, c in enumerate(chars))\nindices_char = dict((i, c) for i, c in enumerate(chars))", "idx will be the data we use from now own - it simply converts all the characters to their index (based on the mapping above)", "idx = [char_indices[c] for c in text]\n\nidx[:10]\n\n''.join(indices_char[i] for i in idx[:70])", "3 char model\nCreate inputs\nCreate a list of every 4th character, starting at the 0th, 1st, 2nd, then 3rd characters", "cs=3\nc1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]\nc2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]\nc3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]\nc4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)]\n\nlen(idx)//3, len(c1_dat), len(c2_dat), len(c3_dat), len(c4_dat)", "Our inputs", "[indices_char[x] for xs in (c1_dat[-2:], c2_dat[-2:], c3_dat[-2:]) for x in xs]\n\nidx[-16:], c1_dat[-2:], c2_dat[-2:], c3_dat[-2:], c4_dat[-2:]\n\nx1 = np.stack(c1_dat[:-2])\nx2 = np.stack(c2_dat[:-2])\nx3 = np.stack(c3_dat[:-2])", "Our output", "y = np.stack(c4_dat[:-2])", "The first 4 inputs and outputs", "x1[:4], x2[:4], x3[:4]\n\ny[:4]\n\nx1.shape, y.shape", "The number of latent factors to create (i.e. 
the size of the embedding matrix)", "n_fac = 42", "Create inputs and embedding outputs for each of our 3 character inputs", "def embedding_input(name, n_in, n_out):\n inp = Input(shape=(1,), dtype='int64', name=name)\n emb = Embedding(n_in, n_out, input_length=1)(inp)\n return inp, Flatten()(emb)\n\nc1_in, c1 = embedding_input('c1', vocab_size, n_fac)\nc2_in, c2 = embedding_input('c2', vocab_size, n_fac)\nc3_in, c3 = embedding_input('c3', vocab_size, n_fac)", "Create and train model\nPick a size for our hidden state", "n_hidden = 256", "This is the 'green arrow' from our diagram - the layer operation from input to hidden.", "dense_in = Dense(n_hidden, activation='relu')", "Our first hidden activation is simply this function applied to the result of the embedding of the first character.", "c1_hidden = dense_in(c1)", "This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.", "dense_hidden = Dense(n_hidden, activation='tanh')", "Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.", "c2_dense = dense_in(c2)\nhidden_2 = dense_hidden(c1_hidden)\nc2_hidden = merge([c2_dense, hidden_2])\n\nc3_dense = dense_in(c3)\nhidden_3 = dense_hidden(c2_hidden)\nc3_hidden = merge([c3_dense, hidden_3])", "This is the 'blue arrow' from our diagram - the layer operation from hidden to output.", "dense_out = Dense(vocab_size, activation='softmax')", "The third hidden state is the input to our output layer.", "c4_out = dense_out(c3_hidden)\n\nmodel = Model([c1_in, c2_in, c3_in], c4_out)\n\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())\n\nmodel.optimizer.lr.set_value(0.000001)\n\nmodel.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)\n\nmodel.optimizer.lr.set_value(0.01)\n\nmodel.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)\n\nmodel.optimizer.lr.set_value(0.000001)\n\nmodel.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)\n\nmodel.optimizer.lr.set_value(0.01)\n\nmodel.fit([x1,x2,x3], y, batch_size=64, nb_epoch=4)", "Test model", "def get_next(inp):\n idxs = [char_indices[c] for c in inp]\n arrs = [np.array(i)[np.newaxis] for i in idxs]\n p = model.predict(arrs)\n i = np.argmax(p)\n return chars[i]\n\nget_next('phi')\n\nget_next(' th')\n\nget_next(' an')\n\nmodel_path = \"data/rnn/models/\"\n%mkdir -p $model_path\n\nmodel.save_weights(model_path+'model1.h5')\n\nmodel.load_weights(model_path+'model1.h5')", "Our first RNN!\nCreate inputs\nThis is the size of our unrolled RNN.", "cs=8", "For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to out model.", "c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]\n for n in xrange(cs)]", "Then create a list of the next character in each of these series. 
This will be the labels for our model.", "c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]\n\nxs = [np.stack(c[:-2]) for c in c_in_dat]\n\nlen(xs), xs[0].shape\n\ny = np.stack(c_out_dat[:-2])", "So each column below is one series of 8 characters from the text.", "[xs[n][:cs] for n in range(cs)]", "...and this is the next character after each sequence.", "y[:cs]\n\nn_fac = 42", "Create and train model", "def embedding_input(name, n_in, n_out):\n inp = Input(shape=(1,), dtype='int64', name=name+'_in')\n emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)\n return inp, Flatten()(emb)\n\nc_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]\n\nn_hidden = 256\n\ndense_in = Dense(n_hidden, activation='relu')\ndense_hidden = Dense(n_hidden, activation='relu', init='identity')\ndense_out = Dense(vocab_size, activation='softmax')", "The first character of each sequence goes through dense_in(), to create our first hidden activations.", "hidden = dense_in(c_ins[0][1])", "Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.", "for i in range(1,cs):\n c_dense = dense_in(c_ins[i][1])\n hidden = dense_hidden(hidden)\n hidden = merge([c_dense, hidden])", "Putting the final hidden state through dense_out() gives us our output.", "c_out = dense_out(hidden)", "So now we can create our model.", "model = Model([c[0] for c in c_ins], c_out)\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())\n\nmodel.fit(xs, y, batch_size=64, nb_epoch=12)", "Test model", "def get_next(inp):\n idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]\n p = model.predict(idxs)\n return chars[np.argmax(p)]\n\nget_next('for thos')\n\nget_next('part of ')\n\nget_next('queens a')\n\nmodel.save_weights(model_path+'model2.h5')\n\nmodel.load_weights(model_path+'model2.h5')", "Our first RNN with keras!", "n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)", "This is nearly exactly equivalent to the RNN we built ourselves in the previous section.", "model=Sequential([\n Embedding(vocab_size, n_fac, input_length=cs),\n SimpleRNN(n_hidden, activation='relu', inner_init='identity'),\n Dense(vocab_size, activation='softmax')\n ])\n\nmodel.summary()\n\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())\n\nmodel.fit(np.concatenate(xs,axis=1), y, batch_size=64, nb_epoch=8)\n\ndef get_next_keras(inp):\n idxs = [char_indices[c] for c in inp]\n arrs = np.array(idxs)[np.newaxis,:]\n p = model.predict(arrs)[0]\n return chars[np.argmax(p)]\n\nget_next_keras('this is ')\n\nget_next_keras('part of ')\n\nget_next_keras('queens a')\n\nmodel.save_weights(model_path+'model3.h5')\n\nmodel.load_weights(model_path+'model3.h5')", "Returning sequences\nCreate inputs\nTo use a sequence model, we can leave our input unchanged - but we have to change our output to a sequence (of course!)\nHere, c_out_dat is identical to c_in_dat, but moved across 1 character.", "#c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]\n# for n in range(cs)]\nc_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]\n for n in range(cs)]\n\nys = [np.stack(c[:-2]) for c in c_out_dat]\n\nlen(ys), ys[0].shape", "Reading down each column shows one set of inputs and outputs.", "[xs[n][:cs] for n in range(cs)]\n\n[ys[n][:cs] for n in range(cs)]", "Create and train model", "dense_in = Dense(n_hidden, activation='relu')\ndense_hidden = Dense(n_hidden, 
activation='relu', init='identity')\ndense_out = Dense(vocab_size, activation='softmax', name='output')", "We're going to pass a vector of all zeros as our starting point - here's our input layers for that:", "inp1 = Input(shape=(n_fac,), name='zeros')\nhidden = dense_in(inp1)\n\nouts = []\n\nfor i in range(cs):\n c_dense = dense_in(c_ins[i][1])\n hidden = dense_hidden(hidden)\n hidden = merge([c_dense, hidden], mode='sum')\n # every layer now has an output\n outs.append(dense_out(hidden))\n\nmodel = Model([inp1]+[c[0] for c in c_ins], outs)\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())\n\nzeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))\nzeros.shape\n\nmodel.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12)\n\nys[0].shape", "Test model", "def get_nexts(inp):\n idxs = [char_indices[c] for c in inp]\n arrs = [np.array(i)[np.newaxis] for i in idxs]\n p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)\n print(list(inp))\n return [chars[np.argmax(o)] for o in p]\n\nget_nexts(' this is')\n\nget_nexts(' part of')", "Sequence model with keras", "n_hidden, n_fac, cs, vocab_size", "To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.", "model=Sequential([\n Embedding(vocab_size, n_fac, input_length=cs),\n SimpleRNN(n_hidden, activation='relu', inner_init='identity', return_sequences=True),\n TimeDistributed(Dense(vocab_size, activation='softmax'))\n ])\n\nmodel.summary()\n\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())\n\nxs[0].shape, ys[0].shape\n\nx_rnn=np.stack(xs, axis=1)\ny_rnn=np.atleast_3d(np.stack(ys, axis=1)) # only need to expand dims on ys if fit was not called, above\n\nx_rnn.shape, y_rnn.shape\n\nmodel.fit(x_rnn, y_rnn, batch_size=64, nb_epoch=8)\n\ndef get_nexts_keras(inp):\n idxs = [char_indices[c] for c in inp]\n arr = np.array(idxs)[np.newaxis,:]\n p = model.predict(arr)[0]\n print(list(inp))\n return [chars[np.argmax(o)] for o in p]\n\nget_nexts_keras(' this is')\n\nmodel.save_weights(model_path+'model5.h5')\n\nmodel.load_weights(model_path+'model5.h5')", "One-hot sequence model with keras\nThis is the keras version of the theano model that we're about to create.", "model=Sequential([\n SimpleRNN(n_hidden, activation='relu', inner_init='identity',\n input_shape=(cs, vocab_size), return_sequences=True),\n TimeDistributed(Dense(vocab_size, activation='softmax'))\n ])\nmodel.compile(loss='categorical_crossentropy', optimizer=Adam())\n\noh_ys = [to_categorical(y, vocab_size) for y in ys]\noh_y_rnn=np.stack(oh_ys, axis=1)\n\noh_xs = [to_categorical(x, vocab_size) for x in xs]\noh_x_rnn=np.stack(oh_xs, axis=1)\n\noh_x_rnn.shape, oh_y_rnn.shape\n\nmodel.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)\n\ndef get_nexts_oh(inp):\n idxs = np.array([char_indices[c] for c in inp])\n arr = to_categorical(idxs, vocab_size)\n p = model.predict(arr[np.newaxis,:])[0]\n print(list(inp))\n return [chars[np.argmax(o)] for o in p]\n\nget_nexts_oh(' this is')\n\nmodel.save_weights(model_path+'model6.h5')\n\nmodel.load_weights(model_path+'model6.h5')", "Stateful model with keras", "bs=64", "A stateful model is easy to create (just add \"stateful=True\") but harder to train. 
We had to add batchnorm and use LSTM to get reasonable results.\nWhen using stateful in keras, you have to also add 'batch_input_shape' to the first layer, and fix the batch size there.", "model=Sequential([\n Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,cs)),\n BatchNormalization(),\n LSTM(n_hidden, activation='relu', return_sequences=True, stateful=True),\n TimeDistributed(Dense(vocab_size, activation='softmax'))\n ])\n\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())", "Since we're using a fixed batch shape, we have to ensure our inputs and outputs are a even multiple of the batch size.", "mx = len(x_rnn)//bs*bs\n\nmodel.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)\n\nmodel.optimizer.lr=1e-4\n\nmodel.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)\n\nmodel.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)\n\nmodel.save_weights(model_path+'model7.h5')\n\nmodel.load_weights(model_path+'model7.h5')", "Theano RNN", "n_input = vocab_size\nn_output = vocab_size", "Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using glorot initialization).\nThe return values are wrapped in shared(), which is how we tell theano that it can manage this data (copying it to and from the GPU as necessary).", "def init_wgts(rows, cols): \n scale = math.sqrt(2/rows)\n return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))\ndef init_bias(rows): \n return shared(np.zeros(rows, dtype=np.float32))", "We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)", "def wgts_and_bias(n_in, n_out): \n return init_wgts(n_in, n_out), init_bias(n_out)\ndef id_and_bias(n): \n return shared(np.eye(n, dtype=np.float32)), init_bias(n)", "Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation:", "t_inp = T.matrix('inp')\nt_outp = T.matrix('outp')\nt_h0 = T.vector('h0')\nlr = T.scalar('lr')\n\nall_args = [t_h0, t_inp, t_outp, lr]", "Now we're ready to create our intial weight matrices.", "W_h = id_and_bias(n_hidden)\nW_x = wgts_and_bias(n_input, n_hidden)\nW_y = wgts_and_bias(n_hidden, n_output)\nw_all = list(chain.from_iterable([W_h, W_x, W_y]))", "Theano handles looping by using the GPU scan operation. 
We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character:", "def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):\n # Calculate the hidden activations\n h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)\n # Calculate the output activations\n y = nnet.softmax(T.dot(h, W_y) + b_y)\n # Return both (the 'Flatten()' is to work around a theano bug)\n return h, T.flatten(y, 1)", "Now we can provide everything necessary for the scan operation, so we can setup that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.", "[v_h, v_y], _ = theano.scan(step, sequences=t_inp, outputs_info=[t_h0, None], non_sequences=w_all)", "We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!", "error = nnet.categorical_crossentropy(v_y, t_outp).sum()\ng_all = T.grad(error, w_all)", "We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which apply to standard SGD update rule to every weight.", "def upd_dict(wgts, grads, lr): \n return OrderedDict({w: w-lr*g for (w,g) in zip(wgts,grads)})\n\nupd = upd_dict(w_all, g_all, lr)", "We're finally ready to compile the function!", "fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)\n\nX = oh_x_rnn\nY = oh_y_rnn\nX.shape, Y.shape", "To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.", "err=0.0; l_rate=0.01\nfor i in range(len(X)): \n err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)\n if i % 2000 == 1999: \n print (\"Error:{:.3f}\".format(err/2000))\n err=0.0\n\nf_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)\n\npred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)\n\nact = np.argmax(X[6], axis=1)\n\n[indices_char[o] for o in act]\n\n[indices_char[o] for o in pred]", "Pure python RNN!\nSet up basic functions\nNow we're going to try to repeat the above theano RNN, using just pure python (and numpy). Which means, we have to do everything ourselves, including defining the basic functions of a neural net! Below are all of the definitions, along with tests to check that they give the same answers as theano. 
The functions ending in _d are the derivatives of each function.", "def sigmoid(x): return 1/(1+np.exp(-x))\ndef sigmoid_d(x): \n output = sigmoid(x)\n return output * (1-output)\n\ndef relu(x): return np.maximum(0., x)\ndef relu_d(x): return (x > 0.)*1.\n\nrelu(np.array([3.,-3.])), relu_d(np.array([3.,-3.]))\n\ndef dist(a,b): return pow(a-b,2)\ndef dist_d(a,b): return 2*(a-b)\n\nimport pdb\n\neps = 1e-7\ndef x_entropy(pred, actual): \n return -np.sum(actual * np.log(np.clip(pred, eps, 1-eps)))\ndef x_entropy_d(pred, actual): return -actual/pred\n\ndef softmax(x): return np.exp(x)/np.exp(x).sum()\n\ndef softmax_d(x):\n sm = softmax(x)\n res = np.expand_dims(-sm,-1)*sm\n res[np.diag_indices_from(res)] = sm*(1-sm)\n return res\n\ntest_preds = np.array([0.2,0.7,0.1])\ntest_actuals = np.array([0.,1.,0.])\nnnet.categorical_crossentropy(test_preds, test_actuals).eval()\n\nx_entropy(test_preds, test_actuals)\n\ntest_inp = T.dvector()\ntest_out = nnet.categorical_crossentropy(test_inp, test_actuals)\ntest_grad = theano.function([test_inp], T.grad(test_out, test_inp))\n\ntest_grad(test_preds)\n\nx_entropy_d(test_preds, test_actuals)\n\npre_pred = random(oh_x_rnn[0][0].shape)\npreds = softmax(pre_pred)\nactual = oh_x_rnn[0][0]\n\nnp.allclose(softmax_d(pre_pred).dot(x_entropy_d(preds,actual)), preds-actual)\n\nsoftmax(test_preds)\n\nnnet.softmax(test_preds).eval()\n\ntest_out = T.flatten(nnet.softmax(test_inp))\n\ntest_grad = theano.function([test_inp], theano.gradient.jacobian(test_out, test_inp))\n\ntest_grad(test_preds)\n\nsoftmax_d(test_preds)\n\nact=relu\nact_d=relu_d\n\nloss=x_entropy\nloss_d=x_entropy_d", "We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement:", "def scan(fn, start, seq):\n res = []\n prev = start\n for s in seq:\n app = fn(prev, s)\n res.append(app)\n prev = app\n return res", "...for instance, scan on + is the cumulative sum.", "scan(lambda prev,curr: prev+curr, 0, range(5))", "Set up training\nLet's now build the functions to do the forward and backward passes of our RNN. First, define our data and shape.", "inp = oh_x_rnn\noutp = oh_y_rnn\nn_input = vocab_size\nn_output = vocab_size\n\ninp.shape, outp.shape", "Here's the function to do a single forward pass of an RNN, for a single character.", "def one_char(prev, item):\n # Previous state\n tot_loss, pre_hidden, pre_pred, hidden, ypred = prev\n # Current inputs and output\n x, y = item\n pre_hidden = np.dot(x, w_x) + np.dot(hidden, w_h)\n hidden = act(pre_hidden)\n pre_pred = np.dot(hidden, w_y)\n ypred = softmax(pre_pred)\n return (\n # Keep track of loss so we can report it\n tot_loss + loss(ypred, y),\n # Used in backprop\n pre_hidden, pre_pred, \n # Used in next iteration\n hidden, \n # To provide predictions\n ypred)", "We use scan to apply the above to a whole sequence of characters.", "def get_chars(n): return zip(inp[n], outp[n])\ndef one_fwd(n): return scan(one_char, (0,0,0,np.zeros(n_hidden),0), get_chars(n))", "Now we can define the backward step. We use a loop to go through every element of the sequence. 
The derivatives are applying the chain rule to each step, and accumulating the gradients across the sequence.", "# \"Columnify\" a vector\ndef col(x): return x[:,newaxis]\n\ndef one_bkwd(args, n):\n global w_x,w_y,w_h\n\n i=inp[n] # 8x86\n o=outp[n] # 8x86\n d_pre_hidden = np.zeros(n_hidden) # 256\n for p in reversed(range(len(i))):\n totloss, pre_hidden, pre_pred, hidden, ypred = args[p]\n x=i[p] # 86\n y=o[p] # 86\n d_pre_pred = softmax_d(pre_pred).dot(loss_d(ypred, y)) # 86\n d_pre_hidden = act_d(pre_hidden) * (np.dot(d_pre_pred, w_y.T) + np.dot(d_pre_hidden, w_h.T)) # 256\n\n # d(loss)/d(w_y) = d(loss)/d(pre_pred) * d(pre_pred)/d(w_y)\n w_y -= col(hidden) * d_pre_pred * alpha\n # d(loss)/d(w_h) = d(loss)/d(pre_hidden[p-1]) * d(pre_hidden[p-1])/d(w_h)\n if (p>0): w_h -= args[p-1][3].dot(d_pre_hidden) * alpha\n w_x -= col(x) * d_pre_hidden * alpha\n return d_pre_hidden", "Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.", "scale=math.sqrt(2./n_input)\nw_x = normal(scale=scale, size=(n_input, n_hidden))\nw_y = normal(scale=scale, size=(n_hidden, n_output))\nw_h = np.eye(n_hidden, dtype=np.float32)", "Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.", "overallError=0\nalpha=0.0001\nfor n in range(10000):\n res = one_fwd(n)\n overallError+=res[-1][0]\n deriv = one_bkwd(res, n)\n if(n % 1000 == 999):\n print (\"Error:{:.4f}; Gradient:{:.5f}\".format(\n overallError/1000, np.linalg.norm(deriv)))\n overallError=0", "Keras GRU\nIdentical to the last keras rnn, but a GRU!", "model=Sequential([\n GRU(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),\n activation='relu', inner_init='identity'),\n TimeDistributed(Dense(vocab_size, activation='softmax')),\n ])\nmodel.compile(loss='categorical_crossentropy', optimizer=Adam())\n\nmodel.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)\n\nget_nexts_oh(' this is')", "Theano GRU\nSeparate weights\nThe theano GRU looks just like the simple theano RNN, except for the use of the reset and update gates. 
Each of these gates requires its own hidden and input weights, so we add those to our weight matrices.", "W_h = id_and_bias(n_hidden)\nW_x = init_wgts(n_input, n_hidden)\nW_y = wgts_and_bias(n_hidden, n_output)\nrW_h = init_wgts(n_hidden, n_hidden)\nrW_x = wgts_and_bias(n_input, n_hidden)\nuW_h = init_wgts(n_hidden, n_hidden)\nuW_x = wgts_and_bias(n_input, n_hidden)\nw_all = list(chain.from_iterable([W_h, W_y, uW_x, rW_x]))\nw_all.extend([W_x, uW_h, rW_h])", "Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.", "def gate(x, h, W_h, W_x, b_x):\n return nnet.sigmoid(T.dot(x, W_x) + b_x + T.dot(h, W_h))", "Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.", "def step(x, h, W_h, b_h, W_y, b_y, uW_x, ub_x, rW_x, rb_x, W_x, uW_h, rW_h):\n reset = gate(x, h, rW_h, rW_x, rb_x)\n update = gate(x, h, uW_h, uW_x, ub_x)\n h_new = gate(x, h * reset, W_h, W_x, b_h)\n h = update*h + (1-update)*h_new\n y = nnet.softmax(T.dot(h, W_y) + b_y)\n return h, T.flatten(y, 1)", "Everything from here on is identical to our simple RNN in theano.", "[v_h, v_y], _ = theano.scan(step, sequences=t_inp, outputs_info=[t_h0, None], non_sequences=w_all)\n\nerror = nnet.categorical_crossentropy(v_y, t_outp).sum()\ng_all = T.grad(error, w_all)\n\nupd = upd_dict(w_all, g_all, lr)\nfn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)\n\nerr=0.0; l_rate=0.1\nfor i in range(len(X)): \n err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)\n if i % 3000 == 2999: \n l_rate *= 0.95\n print (\"Error:{:.2f}\".format(err/3000))\n err=0.0", "Combined weights\nWe can make the previous section simpler and faster by concatenating the hidden and input matrices and inputs together. We're not going to step through this cell by cell - you'll see it's identical to the previous section except for this concatenation.", "W = (shared(np.concatenate([np.eye(n_hidden), normal(size=(n_input, n_hidden))])\n .astype(np.float32)), init_bias(n_hidden))\n\nrW = wgts_and_bias(n_input+n_hidden, n_hidden)\nuW = wgts_and_bias(n_input+n_hidden, n_hidden)\nW_y = wgts_and_bias(n_hidden, n_output)\nw_all = list(chain.from_iterable([W, W_y, uW, rW]))\n\ndef gate(m, W, b): return nnet.sigmoid(T.dot(m, W) + b)\n\ndef step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):\n m = T.concatenate([h, x])\n reset = gate(m, rW, rb)\n update = gate(m, uW, ub)\n m = T.concatenate([h*reset, x])\n h_new = gate(m, W, b)\n h = update*h + (1-update)*h_new\n y = nnet.softmax(T.dot(h, W_y) + b_y)\n return h, T.flatten(y, 1)\n\n[v_h, v_y], _ = theano.scan(step, sequences=t_inp, outputs_info=[t_h0, None], non_sequences=w_all)\n\ndef upd_dict(wgts, grads, lr): \n return OrderedDict({w: w-lr*g for (w,g) in zip(wgts,grads)})\n\nerror = nnet.categorical_crossentropy(v_y, t_outp).sum()\ng_all = T.grad(error, w_all)\n\nupd = upd_dict(w_all, g_all, lr)\nfn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)\n\nerr=0.0; l_rate=0.01\nfor i in range(len(X)): \n err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)\n if i % 3000 == 2999: \n print (\"Error:{:.2f}\".format(err/3000))\n err=0.0", "End" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dipanjanS/text-analytics-with-python
New-Second-Edition/Ch08 - Semantic Analysis/Ch08b - Named Entity Recognition.ipynb
apache-2.0
[ "Named Entity Recognition\nIn any text document, there are particular terms that represent specific entities that are more informative and have a unique context. These entities are known as named entities , which more specifically refer to terms that represent real-world objects like people, places, organizations, and so on, which are often denoted by proper names. \nNamed entity recognition (NER) , also known as entity chunking/extraction , is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes.\nThere are out of the box NER taggers available through popular libraries like nltk and spacy. Each library follows a different approach to solve the problem.\nNER with SpaCy", "text = \"\"\"Three more countries have joined an “international grand committee” of parliaments, adding to calls for \nFacebook’s boss, Mark Zuckerberg, to give evidence on misinformation to the coalition. Brazil, Latvia and Singapore \nbring the total to eight different parliaments across the world, with plans to send representatives to London on 27 \nNovember with the intention of hearing from Zuckerberg. Since the Cambridge Analytica scandal broke, the Facebook chief \nhas only appeared in front of two legislatures: the American Senate and House of Representatives, and the European parliament. \nFacebook has consistently rebuffed attempts from others, including the UK and Canadian parliaments, to hear from Zuckerberg. \nHe added that an article in the New York Times on Thursday, in which the paper alleged a pattern of behaviour from Facebook \nto “delay, deny and deflect” negative news stories, “raises further questions about how recent data breaches were allegedly \ndealt with within Facebook.”\n\"\"\"\nprint(text)\n\nimport re\n\ntext = re.sub(r'\\n', '', text)\ntext\n\nimport spacy\n\nnlp = spacy.load('en')\ntext_nlp = nlp(text)\n\n# print named entities in article\nner_tagged = [(word.text, word.ent_type_) for word in text_nlp]\nprint(ner_tagged)\n\nfrom spacy import displacy\n\n# visualize named entities\ndisplacy.render(text_nlp, style='ent', jupyter=True)", "Spacy offers fast NER tagger based on a number of techniques. The exact algorithm hasn't been talked about in much detail but the documentation marks it as <font color=blue> \"The exact algorithm is a pastiche of well-known methods, and is not currently described in any single publication \" </font>\nThe entities identified by spacy NER tagger are as shown in the following table (details here: spacy_documentation)", "named_entities = []\ntemp_entity_name = ''\ntemp_named_entity = None\nfor term, tag in ner_tagged:\n if tag:\n temp_entity_name = ' '.join([temp_entity_name, term]).strip()\n temp_named_entity = (temp_entity_name, tag)\n else:\n if temp_named_entity:\n named_entities.append(temp_named_entity)\n temp_entity_name = ''\n temp_named_entity = None\n\nprint(named_entities)\n\nfrom collections import Counter\nc = Counter([item[1] for item in named_entities])\nc.most_common()", "NER with Stanford NLP\nStanford’s Named Entity Recognizer is based on an implementation of linear chain Conditional Random Field (CRF) sequence models. \nPrerequisites: Download the official Stanford NER Tagger from here, which seems to work quite well. You can try out a later version by going to this website\nThis model is only trained on instances of PERSON, ORGANIZATION and LOCATION types. 
The model is exposed through nltk wrappers.", "import os\nfrom nltk.tag import StanfordNERTagger\n\nJAVA_PATH = r'C:\\Program Files\\Java\\jre1.8.0_192\\bin\\java.exe'\nos.environ['JAVAHOME'] = JAVA_PATH\n\nSTANFORD_CLASSIFIER_PATH = 'E:/stanford/stanford-ner-2014-08-27/classifiers/english.all.3class.distsim.crf.ser.gz'\nSTANFORD_NER_JAR_PATH = 'E:/stanford/stanford-ner-2014-08-27/stanford-ner.jar'\n\nsn = StanfordNERTagger(STANFORD_CLASSIFIER_PATH,\n path_to_jar=STANFORD_NER_JAR_PATH)\nsn\n\ntext_enc = text.encode('ascii', errors='ignore').decode('utf-8')\nner_tagged = sn.tag(text_enc.split())\nprint(ner_tagged)\n\nnamed_entities = []\ntemp_entity_name = ''\ntemp_named_entity = None\nfor term, tag in ner_tagged:\n if tag != 'O':\n temp_entity_name = ' '.join([temp_entity_name, term]).strip()\n temp_named_entity = (temp_entity_name, tag)\n else:\n if temp_named_entity:\n named_entities.append(temp_named_entity)\n temp_entity_name = ''\n temp_named_entity = None\n\nprint(named_entities)\n\nc = Counter([item[1] for item in named_entities])\nc.most_common()", "NER with Stanford CoreNLP\nNLTK is slowly deprecating the old Stanford Parsers in favor of the more active Stanford Core NLP Project. It might even get removed after nltk version 3.4 so best to stay updated.\nDetails: https://github.com/nltk/nltk/issues/1839\nStep by Step Tutorial here: https://github.com/nltk/nltk/wiki/Stanford-CoreNLP-API-in-NLTK\nSadly a lot of things have changed in the process so we need to do some extra effort to make it work!\nGet CoreNLP from here\nAfter you download, go to the folder and spin up a terminal and start the Core NLP Server locally\nE:\\&gt; java -mx4g -cp \"*\" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000\nIf it runs successfully you should see the following messages on the terminal\nE:\\stanford\\stanford-corenlp-full-2018-02-27&gt;java -mx4g -cp \"*\" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000\n[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---\n[main] INFO CoreNLP - setting default constituency parser\n[main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz\n[main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead\n[main] INFO CoreNLP - to use shift reduce parser download English models jar from:\n[main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html\n[main] INFO CoreNLP - Threads: 4\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize\n[main] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - No tokenizer type provided. Defaulting to PTBTokenizer.\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos\n[main] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.4 sec].\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner\n[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... 
done [1.9 sec].\n[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [2.0 sec].\n[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.8 sec].\n[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.\n[main] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt\n[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 580641 unique entries out of 581790 from edu/stanford/nlp/models/kbp/regexner_caseless.tab, 0 TokensRegex patterns.\n[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 4857 unique entries out of 4868 from edu/stanford/nlp/models/kbp/regexner_cased.tab, 0 TokensRegex patterns.\n[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 585498 unique entries from 2 files\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse\n[main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [4.6 sec].\n[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse\n[main] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Loading depparse model: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...\n[main] INFO edu.stanford.nlp.parser.nndep.Classifier - PreComputed 99996, Elapsed Time: 22.43 (s)\n[main] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Initializing dependency parser ... done [24.4 sec].\n[main] INFO CoreNLP - Starting server...\n[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000", "from nltk.parse import CoreNLPParser\n\nner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')\nner_tagger\n\nimport nltk\n\ntags = list(ner_tagger.raw_tag_sents(nltk.sent_tokenize(text)))\ntags = [sublist[0] for sublist in tags]\ntags = [word_tag for sublist in tags for word_tag in sublist]\nprint(tags)\n\nnamed_entities = []\ntemp_entity_name = ''\ntemp_named_entity = None\nfor term, tag in tags:\n if tag != 'O':\n temp_entity_name = ' '.join([temp_entity_name, term]).strip()\n temp_named_entity = (temp_entity_name, tag)\n else:\n if temp_named_entity:\n named_entities.append(temp_named_entity)\n temp_entity_name = ''\n temp_named_entity = None\n\nprint(named_entities)\n\nc = Counter([item[1] for item in named_entities])\nc.most_common()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
udacity/deep-learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=\"300\">\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "#############\n# In the my_answers.py file, fill out the TODO sections as specified\n#############\n\nfrom my_answers import NeuralNetwork\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. 
These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. 
Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. \nTry a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.", "import sys\n\n####################\n### Set the hyperparameters in you myanswers.py file ###\n####################\n\nfrom my_answers import iterations, learning_rate, hidden_nodes, output_nodes\n\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. 
If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.loc[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sympy/scipy-2017-codegen-tutorial
notebooks/25-chemical-kinetics-intro.ipynb
bsd-3-clause
[ "Chemical kinetics\nIn chemistry one is often interested in how fast a chemical process proceeds. Chemical reactions (when viewed as single events on a molecular scale) are probabilitic. However, most reactive systems of interest involve very large numbers of molecules (a few grams of a simple substance containts on the order of $10^{23}$ molecules. The sheer number allows us to describe this inherently stochastic process deterministically.\nLaw of mass action\nIn order to describe chemical reactions as as system of ODEs in terms of concentrations ($c_i$) and time ($t$), one can use the law of mass action:\n$$\n\\frac{dc_i}{dt} = \\sum_j S_{ij} r_j\n$$\nwhere $r_j$ is given by:\n$$\nr_j = k_j\\prod_l c_l^{R_{jl}}\n$$\nand $S$ is a matrix with the overall net stoichiometric coefficients (positive for net production, negative for net consumption), and $R$ is a matrix with the multiplicities of each reactant for each equation.\nExample: Nitrosylbromide\nWe will now look at the following (bi-directional) chemical reaction:\n$$\n\\mathrm{2\\,NO + Br_2 \\leftrightarrow 2\\,NOBr}\n$$\nwhich describes the equilibrium between nitrogen monoxide (NO) and bromine (Br$_2$) and nitrosyl bromide (NOBr). It can be represented as a set of two uni-directional reactions (forward and backward):\n$$\n\\mathrm{2\\,NO + Br_2 \\overset{k_f}{\\rightarrow} 2\\,NOBr} \\ \n\\mathrm{2\\,NOBr \\overset{k_b}{\\rightarrow} 2\\,NO + Br_2}\n$$\nThe law of mass action tells us that the rate of the first process (forward) is proportional to the concentration Br$_2$ and the square of the concentration of NO. The rate of the second reaction (the backward process) is in analogy proportional to the square of the concentration of NOBr. Using the proportionality constants $k_f$ and $k_b$ we can formulate our system of nonlinear ordinary differential equations as follows:\n$$\n\\frac{dc_1}{dt} = 2(k_b c_3^2 - k_f c_2 c_1^2) \\\n\\frac{dc_2}{dt} = k_b c_3^2 - k_f c_2 c_1^2 \\\n\\frac{dc_3}{dt} = 2(k_f c_2 c_1^2 - k_b c_3^2)\n$$\nwhere we have denoted the concentration of NO, Br$_2$, NOBr with $c_1,\\ c_2,\\ c_3$ respectively.\nThis ODE system corresponds to the following two matrices:\n$$\nS = \\begin{bmatrix}\n-2 & 2 \\\n-1 & 1 \\\n2 & -2\n\\end{bmatrix}\n$$\n$$\nR = \\begin{bmatrix}\n2 & 1 & 0 \\\n0 & 0 & 2 \n\\end{bmatrix}\n$$\nSolving the initial value problem numerically\nWe will now integrate this system of ordinary differential equations numerically as an initial value problem (IVP) using the odeint solver provided by scipy:", "import numpy as np\nfrom scipy.integrate import odeint", "By looking at the documentation of odeint we see that we need to provide a function which computes a vector of derivatives ($\\dot{\\mathbf{y}} = [\\frac{dy_1}{dt}, \\frac{dy_2}{dt}, \\frac{dy_3}{dt}]$). The expected signature of this function is:\nf(y: array[float64], t: float64, *args: arbitrary constants) -&gt; dydt: array[float64]\n\nin our case we can write it as:", "def rhs(y, t, kf, kb):\n rf = kf * y[0]**2 * y[1]\n rb = kb * y[2]**2\n return [2*(rb - rf), rb - rf, 2*(rf - rb)]\n\n%load_ext scipy2017codegen.exercise", "Replace ??? by the proper arguments for odeint, you can write odeint? to read its documentaiton.", "%exercise exercise_odeint.py\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(tout, yout)\n_ = plt.legend(['NO', 'Br$_2$', 'NOBr'])", "Writing the rhs function by hand for larger reaction systems quickly becomes tedious. 
Ideally we would like to construct it from a symbolic representation (having a symbolic representation of the problem opens up many possibilities as we will soon see). But at the same time, we need the rhs function to be fast, which means that we want to produce a fast function from our symbolic representation. Generating a function from our symbolic representation is achieved through code generation. \nIn summary we will need to:\n\n1. Construct a symbolic representation from some domain specific representation using SymPy.\n2. Have SymPy generate a function with an appropriate signature (or multiple thereof), which we pass on to the solver.\n\nWe will achieve (1) by using SymPy symbols (and functions if needed). For (2) we will use a function in SymPy called lambdify: it takes a symbolic expression and returns a function. In a later notebook, we will look at (1); for now we will just use rhs which we've already written:", "import sympy as sym\nsym.init_printing()\n\ny, k = sym.symbols('y:3'), sym.symbols('kf kb')\nydot = rhs(y, None, *k)\nydot", "Exercise\nNow assume that we had constructed ydot above by applying the more general law of mass action, instead of hard-coding the rate expressions in rhs. Then we could have created a function corresponding to rhs using lambdify:", "%exercise exercise_lambdify.py\n\nplt.plot(tout, odeint(f, y0, tout, k_vals))\n_ = plt.legend(['NO', 'Br$_2$', 'NOBr'])", "In this example the gains of using a symbolic representation are arguably limited. However, it is quite common that the numerical solver will need another function which calculates the Jacobian of $\\dot{\\mathbf{y}}$ (given as Dfun in the case of odeint). Writing that by hand is both tedious and error prone. But SymPy solves both of those issues:", "sym.Matrix(ydot).jacobian(y)", "In the next notebook we will look at an example where providing this as a function is beneficial for performance." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cuttlefishh/emp
code/10-sequence-lookup/trading-card-latex/blast_xml_to_taxonomy.ipynb
bsd-3-clause
[ "author: lukethompson@gmail.com<br>\ndate: 16 Nov 2016<br>\nlanguage: Python 2.7<br>\nlicense: BSD3<br>\nblast_xml_to_taxonomy.ipynb\nTakes the XML output of blastn (query: Deblur OTU, database: RDP Release 11, percent ID: 100%), parses it, and creates a file with the query, top RDP lineage (with number of hits having that lineage over total hits), and top-3 RDP species (with number of hits having that species over total hits).", "import pandas as pd\nimport numpy as np\nimport Bio.Blast.NCBIXML\nfrom cStringIO import StringIO\nfrom __future__ import print_function\n\n# convert RDP-style lineage to Greengenes-style lineage\ndef rdp_lineage_to_gg(lineage):\n d = {}\n linlist = lineage.split(';')\n for i in np.arange(0, len(linlist), 2):\n d[linlist[i+1]] = linlist[i]\n linstr = ''\n for level in ['domain', 'kingdom', 'phylum', 'class', 'order', 'family', 'genus']:\n try:\n linstr += level[0] + '__' + d[level].replace('\"', '') + '; '\n except:\n linstr += level[0] + '__' + '; '\n linstr = linstr[:-2]\n return(linstr)\n\n# parse blast xml record\ndef parse_record_alignments_taxonomy(record):\n df = pd.DataFrame(columns=('strain', 'lineage'))\n for alignment in record.alignments:\n strain, lineage = alignment.hit_def.split(' ')\n linstr = rdp_lineage_to_gg(lineage)\n df = df.append({'strain': strain, 'lineage': linstr}, ignore_index=True)\n df['species'] = [(x.split(' ')[0] + ' ' + x.split(' ')[1]).replace(';', '') for x in df.strain]\n num_hits = df.shape[0]\n vc_species = df.species.value_counts()\n vc_lineage = df.lineage.value_counts()\n return(num_hits, vc_species, vc_lineage)\n\n# main function\ndef xml_to_taxonomy(path_xml, path_output):\n # read file as single string, generate handle, and parse xml handle to records generator\n with open(path_xml) as file:\n str_xml = file.read()\n handle_xml = StringIO(str_xml)\n records = Bio.Blast.NCBIXML.parse(handle_xml)\n\n # write top lineage and top 3 strains for each query\n with open(path_output, 'w') as target:\n # write header\n target.write('query\\tlineage_count\\tspecies_1st_count\\tspecies_2nd_count\\tspecies_3rd_count\\n')\n # iterate over records generator\n for record in records:\n target.write('%s' % record.query)\n try:\n num_hits, vc_species, vc_lineage = parse_record_alignments_taxonomy(record)\n except:\n pass\n try:\n target.write('\\t%s (%s/%s)' % (vc_lineage.index[0], vc_lineage[0], num_hits))\n except:\n pass\n try:\n target.write('\\t%s (%s/%s)' % (vc_species.index[0], vc_species[0], num_hits))\n except:\n pass\n try:\n target.write('\\t%s (%s/%s)' % (vc_species.index[1], vc_species[1], num_hits))\n except:\n pass\n try:\n target.write('\\t%s (%s/%s)' % (vc_species.index[2], vc_species[2], num_hits))\n except:\n pass\n target.write('\\n')", "Run for 90-bp sequences (top 500 by prevalence in 90-bp biom table)", "path_xml = '../../data/sequence-lookup/rdp-taxonomy/otu_seqs_top_500_prev.emp_deblur_90bp.subset_2k.rare_5000.xml'\npath_output = 'otu_seqs_top_500_prev.emp_deblur_90bp.subset_2k.rare_5000.tsv'\nxml_to_taxonomy(path_xml, path_output)", "Run for 100-bp sequences (top 500 by prevalence in 100-bp biom table)", "path_xml = '../../data/sequence-lookup/rdp-taxonomy/otu_seqs_top_500_prev.emp_deblur_100bp.subset_2k.rare_5000.xml'\npath_output = 'otu_seqs_top_500_prev.emp_deblur_100bp.subset_2k.rare_5000.tsv'\nxml_to_taxonomy(path_xml, path_output)", "Run for 150-bp sequences (top 500 by prevalence in 150-bp biom table)", "path_xml = 
'../../data/sequence-lookup/rdp-taxonomy/otu_seqs_top_500_prev.emp_deblur_150bp.subset_2k.rare_5000.xml'\npath_output = 'otu_seqs_top_500_prev.emp_deblur_150bp.subset_2k.rare_5000.tsv'\nxml_to_taxonomy(path_xml, path_output)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
prasants/pyds
05.Booleans_True_or_False.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Booleans\" data-toc-modified-id=\"Booleans-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Booleans</a></div><div class=\"lev2 toc-item\"><a href=\"#Not-True-/-Not-False?\" data-toc-modified-id=\"Not-True-/-Not-False?-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Not True / Not False?</a></div><div class=\"lev2 toc-item\"><a href=\"#and-/-or-?\" data-toc-modified-id=\"and-/-or-?-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>and / or ?</a></div><div class=\"lev1 toc-item\"><a href=\"#Boolean-Operations\" data-toc-modified-id=\"Boolean-Operations-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Boolean Operations</a></div><div class=\"lev2 toc-item\"><a href=\"#Exercise\" data-toc-modified-id=\"Exercise-21\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Exercise</a></div>\n\n# Booleans\n\nBooleans are a separate data type. The origins of it lie in the work of [George Boole](https://en.wikipedia.org/wiki/George_Boole), and has its own branch of algebra called [Boolean Algebra](https://en.wikipedia.org/wiki/Boolean_algebra).\n\nBooleans have two values - True or False. That's it. End of lesson. Go home!\n\nOk, maybe not, let's show you how easy this is.", "mybool_1 = True\nprint(mybool_1)\n\nmybool_2 = False\nprint(mybool_2)", "Not True / Not False?\nWhat's not True?\n* False\nWhat's not False?\n* True", "not True\n\nnot False", "and / or ?\n\na and b will return True if both a and b are True\na or b will return True if either a or b are True", "a = True\nb = True\n\nprint(a and b)\n\na = True\nb = False\na or b\n\na = False\nb = False\na or b\n\na and b", "Boolean Operations", "var1 = 10\nvar2 = 20\nvar3 = 30\n\nprint((var1+var2) == var3)\n\nprint((var1+var3) == 40 and var2*2 ==40) \n\nprint((var1-var2)==100 or var3-var1 == var2)\n\nprint(not(var1 - 100)==var2 or var3-var1 == 900)", "Exercise\nPredict the outcome of the cells below:", "True and True\n\nTrue or False\n\nnot(True) or False\n\nnot(not(False)) or not(True or False)\n\nTrue and 100 == 10**2\n\n\"Hello\" == \"hello\" and \"Howdy\" == \"Howdy\"\n\nnot(not(1==2)) and (not(False) or (not(2==2)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
RNAer/Calour
doc/source/notebooks/microbiome_normalization.ipynb
bsd-3-clause
[ "Microbiome data normalization tutorial\nThis is a jupyter notebook example of the different ways to normalize the reads (i.e. TSS, rarefaction, compositionality correction, etc).\nSetup", "import calour as ca\nca.set_log_level(11)\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib notebook", "Load the data\nwe use the chronic fatigue syndrome data from:\nGiloteaux, L., Goodrich, J.K., Walters, W.A., Levine, S.M., Ley, R.E. and Hanson, M.R., 2016.\nReduced diversity and altered composition of the gut microbiome in individuals with myalgic encephalomyelitis/chronic fatigue syndrome.\nMicrobiome, 4(1), p.30.\nStandard load with TSS normalization\nfor each sample we normalize to 10000 reads/sample.\nThis is done by dividing the number of reads of each feature in a sample by the total sum of reads (of all features) in the sample, and then multiplying by the desired number of reads (i.e. 10000).\nAfter this normalization, the sum of (normalized) reads in each sample will be 10000.\nThis is different from rarefaction, since each feature can have a non-integer number of reads, and less information is thrown away. However, you need to be careful not to have a bias by the original number of reads (mostly in binary methods). dsFDR works fine with this normalization.\nNote that we also throw away samples with less than min_reads=1000 reads total (before normalization). This is in order to reduce the discretization effect in samples with low number of reads.", "cfs_normalized=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',\n 'data/chronic-fatigue-syndrome.sample.txt',\n normalize=10000,min_reads=1000)\n\nprint(cfs_normalized)", "The sum of reads per sample should be 10000", "cfs_normalized.get_data(sparse=False).sum(axis=1)", "The original number of reads per sample (before normalization) is stored in the sample_metadata table in the field \"_calour_original_abundance\"", "res=plt.hist(cfs_normalized.sample_metadata['_calour_original_abundance'],50)\nplt.xlabel('original number of reads')\nplt.ylabel('number of samples')", "load with no normalization\nwe can load the data without normalizing the reads per sample by setting the parameter normalize=None\nThis is not recommended for typical microbiome experiments since the number of reads per sample is arbitrary and does not reflect the number of bacteria in the sample.\nWe still chose to remove all samples with less than 1000 reads total.", "cfs_not_normalized=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',\n 'data/chronic-fatigue-syndrome.sample.txt',\n normalize=None,min_reads=1000)\n\ncfs_not_normalized.get_data(sparse=False).sum(axis=1)", "TSS normalization (normalize)\nWe can always normalize to constant sum per sample (similar to the read_amplicon normaliztion)", "tt = cfs_not_normalized.normalize(5000)\n\ntt.get_data(sparse=False).sum(axis=1)", "Compositional normalization (normalize_compositional)\nIn some cases, a plausible biological scenario is that a few bacteria have a very large number of reads. 
An increase in the frequency of such a bacterium will cause a decrease in the frequencies of all other bacteria (even if in reality their total number remains constant in the sample) since data is normalized to constant sum per sample.\nUnder the assumption that most bacteria do not change in total number between the samples, we can normalize to constant sum when ignoring the set of high frequency bacteria.\nWe will demonstrate using a synthetic example:\nIn the original dataset we have a few tens of bacteria separating between healthy and sick", "dd=cfs_normalized.diff_abundance('Subject','Control','Patient', random_seed=2018)", "Effect of a high-frequency artificial bacterium\nLet's make the first bacterium high frequency only in the Healthy (Control) and not in the Sick (Patient).\nAnd renormalize to 10000 reads/sample", "tt=cfs_normalized.copy()\ntt.sparse=False\n\ntt.data[tt.sample_metadata['Subject']=='Control',0] = 50000\ntt=tt.normalize(10000)", "dd=tt.diff_abundance('Subject','Control','Patient', random_seed=2018)", "We get more bacteria which are higher in 'Patient' since bacteria 0 is now very high in controls, and data is TSS normalized\nLet's fix by doing the compositional normalization", "yy=tt.normalize_compositional()", "dd=yy.diff_abundance('Subject','Control','Patient', random_seed=2018)", "so we reduced the inflation of false differentially abundant bacteria due to data compositionality\nNormalization on part of the features (normalize_by_subset_features)\nSometimes we want to normalize while ignoring some features (say ignoring all mitochondrial sequences), but we still want to keep these features - just not use them in the normalization.\nNote the sum of reads per sample will not be constant (since samples also contain the ignored features).\nLet's ignore the bacteria that don't have a good taxonomy assignment", "bad_seqs=[cseq for cseq,ctax in cfs_not_normalized.feature_metadata['taxonomy'].iteritems() if len(ctax)<13]", "tt = cfs_not_normalized.normalize_by_subset_features(bad_seqs, total=10000)", "tt.get_data(sparse=False).sum(axis=1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ctools/ctools
doc/source/users/tutorials/cta/howto/howto_prepare_1dc.ipynb
gpl-3.0
[ "Data preparation for 1DC How-tos\nThis tutorial assumes that you have downloaded the data for the first CTA Data Challenge. If this is not the case, please read first how to get the 1DC data.\nStart by importing the relevant Python modules.", "import gammalib\nimport ctools\nimport cscripts", "Now set the CTADATA and CALDB environment variables. Please adjust the path below so that it points to the relevant location.", "%env CTADATA=/project-data/cta/data/1dc\n%env CALDB=/project-data/cta/data/1dc/caldb", "Now prepare a dataset that comprises the Galactic Centre observations that have been performed during the Galactic Plane Scan. Start with selecting the observations.", "obsselect = cscripts.csobsselect()\nobsselect['inobs'] = '$CTADATA/obs/obs_gps_baseline.xml'\nobsselect['pntselect'] = 'CIRCLE'\nobsselect['coordsys'] = 'GAL'\nobsselect['glon'] = 0.0\nobsselect['glat'] = 0.0\nobsselect['rad'] = 3.0\nobsselect['tmin'] = 'NONE'\nobsselect['tmax'] = 'NONE'\nobsselect['outobs'] = 'obs.xml'\nobsselect.execute()", "Now select the events with energies comprised between 1 and 100 TeV from the observations.", "select = ctools.ctselect()\nselect['inobs'] = 'obs.xml'\nselect['ra'] = 'NONE'\nselect['dec'] = 'NONE'\nselect['rad'] = 'NONE'\nselect['tmin'] = 'NONE'\nselect['tmax'] = 'NONE'\nselect['emin'] = 1.0\nselect['emax'] = 100.0\nselect['outobs'] = 'obs_selected.xml'\nselect.execute()", "The next step is to stack the selected events into a counts cube.", "binning = ctools.ctbin()\nbinning['inobs'] = 'obs_selected.xml'\nbinning['xref'] = 0.0\nbinning['yref'] = 0.0\nbinning['coordsys'] = 'GAL'\nbinning['proj'] = 'CAR'\nbinning['binsz'] = 0.02\nbinning['nxpix'] = 300\nbinning['nypix'] = 300\nbinning['ebinalg'] = 'LOG'\nbinning['emin'] = 1.0\nbinning['emax'] = 100.0\nbinning['enumbins'] = 20\nbinning['outobs'] = 'cntcube.fits'\nbinning.execute()", "Now compute the corresponding stacked exposure cube, point spread function cube and background cube.", "expcube = ctools.ctexpcube()\nexpcube['inobs'] = 'obs_selected.xml'\nexpcube['incube'] = 'cntcube.fits'\nexpcube['outcube'] = 'expcube.fits'\nexpcube.execute()\n\npsfcube = ctools.ctpsfcube()\npsfcube['inobs'] = 'obs_selected.xml'\npsfcube['incube'] = 'NONE'\npsfcube['ebinalg'] = 'LOG'\npsfcube['emin'] = 1.0\npsfcube['emax'] = 100.0\npsfcube['enumbins'] = 20\npsfcube['nxpix'] = 10\npsfcube['nypix'] = 10\npsfcube['binsz'] = 1.0\npsfcube['coordsys'] = 'GAL'\npsfcube['proj'] = 'CAR'\npsfcube['xref'] = 0.0\npsfcube['yref'] = 0.0\npsfcube['outcube'] = 'psfcube.fits'\npsfcube.execute()\n\nbkgcube = ctools.ctbkgcube()\nbkgcube['inobs'] = 'obs_selected.xml'\nbkgcube['inmodel'] = '$CTOOLS/share/models/bkg_irf.xml'\nbkgcube['incube'] = 'cntcube.fits'\nbkgcube['outcube'] = 'bkgcube.fits'\nbkgcube['outmodel'] = 'bkgcube.xml' \nbkgcube.execute()", "Now you are done. All data structures are prepared for the following tutorials." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
maxis42/ML-DA-Coursera-Yandex-MIPT
5 Data analysis applications/Homework/2 project wage forecast for Russia/wage_forecast_solomkin/wage_forecast_solomkin.ipynb
mit
[ "Прогнозирование уровня средней заработной платы в России\nДля выполнения этого задания вам понадобятся данные о среднемесячных уровнях заработной платы в России:\nВ файле записаны данные о заработной плате за каждый месяц с января 1993 по август 2016. Если хотите, можете дописать в конец ряда данные за следующие месяцы, если они уже опубликованы; найти эти данные можно, например, здесь: http://sophist.hse.ru/exes/tables/WAG_M.htm\nНеобходимо проанализировать данные, подобрать для них оптимальную прогнозирующую модель в классе ARIMA и построить прогноз на каждый месяц на два года вперёд от конца данных.", "#загружаем необходимые модули\n%pylab inline\nimport pandas as pd\nfrom scipy import stats\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport warnings\nfrom itertools import product\n\n#обратное преобразование Бокса-Кокса\ndef invboxcox(y,lmbda):\n if lmbda == 0:\n return(np.exp(y))\n else:\n return(np.exp(np.log(lmbda*y+1)/lmbda))\n\n#загружаем данные из файла\ndata = pd.read_csv('WAG_C_M_updated.csv', sep = ';', index_col=['month'], parse_dates=['month'], dayfirst=True)\ndata.rename(columns={'WAG_C_M':'wage'}, inplace=True)\nprint data.shape\ndata.head()", "Выведем данные на график:", "plt.figure(figsize(15,7))\ndata.wage.plot()\nplt.ylabel(u'Средняя зарплата')\npylab.show()", "Проверка стационарности и STL-декомпозиция ряда:", "plt.figure(figsize(15,10))\nsm.tsa.seasonal_decompose(data.wage).plot()\nprint(\"Критерий Дики-Фуллера: p=%f\" % sm.tsa.stattools.adfuller(data.wage)[1])", "По критерию Дики-Фуллера гипотеза о нестационарности ряда не отвергается.\nСразу видны следующие особенности данных:\n* возрастающий тренд\n* растущая со временем дисперсия\n* периодичность (период - 12 месяцев)\n* заметная структура остатков\nСтабилизация дисперсии\nСделаем преобразование Бокса-Кокса для стабилизации дисперсии:", "data['wage_box'], lmbda = stats.boxcox(data.wage)\nplt.figure(figsize(15,7))\ndata.wage_box.plot()\nplt.ylabel(u'Средняя зарплата после преобразования Бокса-Кокса')\nprint(\"Оптимальный параметр преобразования Бокса-Кокса: %f\" % lmbda)\nprint(\"Критерий Дики-Фуллера: p=%f\" % sm.tsa.stattools.adfuller(data.wage_box)[1])", "Размах дисперсии ощутимо уменьшился.\nПо критерию Дики-Фуллера нулевая гипотеза о нестационарности ряда не отвергается (0.72 > 0.05)\nСтационарность\nПопробуем сезонное дифференцирование; сделаем на продифференцированном ряде STL-декомпозицию и проверим стационарность:", "data['wage_box_diff'] = data.wage_box - data.wage_box.shift(12)\nplt.figure(figsize(15,10))\nsm.tsa.seasonal_decompose(data.wage_box_diff[12:]).plot()\nprint(\"Критерий Дики-Фуллера: p=%f\" % sm.tsa.stattools.adfuller(data.wage_box_diff[12:])[1])", "По критерию Дики-Фуллера можно отвергнеть гипотезу нестационарности (0.009 < 0.05), в остатках стало заметно меньше структуры, но полностью избавиться от тренда не удалось. Попробуем добавить ещё обычное дифференцирование:", "data['wage_box_diff2'] = data.wage_box_diff - data.wage_box_diff.shift(1)\nplt.figure(figsize(15,10))\nsm.tsa.seasonal_decompose(data.wage_box_diff2[13:]).plot() \nprint(\"Критерий Дики-Фуллера: p=%f\" % sm.tsa.stattools.adfuller(data.wage_box_diff2[13:])[1])", "Гипотеза нестационарности отвергается, остатки выглядят похоже на белый шум, и явного тренда больше нет. 
\nПодбор модели\nПосмотрим на ACF и PACF полученного ряда:", "plt.figure(figsize(15,12))\nax = plt.subplot(211)\nsm.graphics.tsa.plot_acf(data.wage_box_diff2[13:].values.squeeze(), lags=50, ax=ax)\npylab.show()\nax = plt.subplot(212)\nsm.graphics.tsa.plot_pacf(data.wage_box_diff2[13:].values.squeeze(), lags=50, ax=ax)\npylab.show()", "Выбираем параметры нашей модели:\nQ - значение последнего значимого сезонного лага на автокоррелограмме. Т.к. значимых лагов, кратных периоду (12) нет, то Q = 0\nq - значение последнего значимого несезонного лага на автокоррелограмме. q = 1\nP - значение последнего значимого сезонного лага на частичной автокоррелограмме. В данном случае это лаг = 12, поэтому возьмем значение P = 1\np - значение последнего значимого несезонного лага, меньшего величин периода, на частичной автокоррелограмме. p = 1\nНачальные приближения: Q=0, q=1, P=1, p=1", "#устанавливаем границы массивов наших параметров согласно начальным приближениям\nps = range(0, 2)\nd=1\nqs = range(0, 2)\nPs = range(0, 2)\nD=1\nQs = range(0, 1)\n\nparameters = product(ps, qs, Ps, Qs)\nparameters_list = list(parameters)\nlen(parameters_list)\n\n%%time\nresults = []\nbest_aic = float(\"inf\")\nwarnings.filterwarnings('ignore')\n\nfor param in parameters_list:\n #try except нужен, потому что на некоторых наборах параметров модель не обучается\n try:\n model=sm.tsa.statespace.SARIMAX(data.wage_box, order=(param[0], d, param[1]), \n seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)\n #выводим параметры, на которых модель не обучается и переходим к следующему набору\n except ValueError:\n print('wrong parameters:', param)\n continue\n aic = model.aic\n #сохраняем лучшую модель, aic, параметры\n if aic < best_aic:\n best_model = model\n best_aic = aic\n best_param = param\n results.append([param, model.aic])\n \nwarnings.filterwarnings('default')", "Выводим удачные модели:", "result_table = pd.DataFrame(results)\nresult_table.columns = ['parameters', 'aic']\nprint(result_table.sort_values(by = 'aic', ascending=True).head())", "Лучшая модель:", "print(best_model.summary())", "Рассмотрим остатки модели:", "plt.figure(figsize(15,8))\nplt.subplot(211)\nbest_model.resid[13:].plot()\nplt.ylabel(u'Residuals')\n\nax = plt.subplot(212)\nsm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)\n\nprint(\"Критерий Стьюдента: p=%f\" % stats.ttest_1samp(best_model.resid[13:], 0)[1])\nprint(\"Критерий Дики-Фуллера: p=%f\" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])", "Критерией Стьюдента: p = 0.137 > 0.05 - остатки несмещены \nКритерий Дики-Фуллера: p < 0.05 - остатки стационарны\nКритерий Льюнга-Бокса: p = 0.12 > 0.05 - остатки неавтокоррелированы\n\nОднако, уровни значимости критерия Льюинг-Бокса и критерия Стьюдента не слишком велики. 
Возможно, стоит перебрать больше значений параметров для поиска оптимальной модели?", "ps = range(0, 4)\nd=1\nqs = range(0, 4)\nPs = range(0, 2)\nD=1\nQs = range(0, 2)\nparameters = product(ps, qs, Ps, Qs)\nparameters_list = list(parameters)\nlen(parameters_list)\n\n%%time\nresults = []\nbest_aic = float(\"inf\")\nwarnings.filterwarnings('ignore')\n\nfor param in parameters_list:\n #try except нужен, потому что на некоторых наборах параметров модель не обучается\n try:\n model=sm.tsa.statespace.SARIMAX(data.wage_box, order=(param[0], d, param[1]), \n seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)\n #выводим параметры, на которых модель не обучается и переходим к следующему набору\n except ValueError:\n print('wrong parameters:', param)\n continue\n aic = model.aic\n #сохраняем лучшую модель, aic, параметры\n if aic < best_aic:\n best_model = model\n best_aic = aic\n best_param = param\n results.append([param, model.aic])\n \nwarnings.filterwarnings('default')\n\nresult_table = pd.DataFrame(results)\nresult_table.columns = ['parameters', 'aic']\nprint(result_table.sort_values(by = 'aic', ascending=True).head())", "Как видно, действительно удалось найти модель с лучшим рейтингом Акаики (24.27 < 36.75), хоть и более сложную", "print(best_model.summary())", "Рассмотрим остатки новой модели:", "plt.figure(figsize(15,10))\nplt.subplot(211)\nbest_model.resid[13:].plot()\nplt.ylabel(u'Residuals')\n\nax = plt.subplot(212)\nsm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)\n\nprint(\"Критерий Стьюдента: p=%f\" % stats.ttest_1samp(best_model.resid[13:], 0)[1])\nprint(\"Критерий Дики-Фуллера: p=%f\" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])", "Проверяем критерии:\n* Критерией Стьюдента: p = 0.364 > 0.05 - остатки несмещены \n* Критерий Дики-Фуллера: p < 0.05 - остатки стационарны\n* Критерий Льюнга-Бокса: p = 0.64 > 0.05 - остатки неавтокоррелированы\nТеперь посмотрим, как модель приближает данные.", "data['model'] = invboxcox(best_model.fittedvalues, lmbda)\nplt.figure(figsize(15,7))\ndata.wage.plot(label = 'фактические значения')\ndata.model[13:].plot(color='r', label = 'прогноз', linestyle = '--')\nplt.ylabel(u'Средняя зарплата')\nplt.legend()\npylab.show()", "Визуально наша модель неплохо приближает реальные данные. Посмотрим прогноз.\nПрогноз", "#подбираем значения индексов для прогноза\nbest_model.predict(start=294, end=294 + 24)\n\n#Выводим прогноз\ndata2 = data[['wage']]\ndate_list = [datetime.datetime.strptime(\"2017-06-01\", \"%Y-%m-%d\") + relativedelta(months=x) for x in range(0,24)]\nfuture = pd.DataFrame(index=date_list, columns= data2.columns)\ndata2 = pd.concat([data2, future])\ndata2['forecast'] = invboxcox(best_model.predict(start=293, end=294 + 24), lmbda)\n\nplt.figure(figsize(15,7))\ndata.wage.plot(label = 'фактические значения')\ndata2.forecast.plot(color='r', label = 'прогноз', linestyle = '--')\nplt.ylabel(u'Средняя зарплата')\nplt.legend()\npylab.show()", "Видно, что прогноз учитывает как сезонные колебания в данных, так и возрастающий тренд. Прогноз выглядит адекватно." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jdhp-docs/python-notebooks
python_sklearn_mlp_fr.ipynb
mit
[ "Le perceptron multicouche avec scikit-learn\nDocumentation officielle: http://scikit-learn.org/stable/modules/neural_networks_supervised.html\nNotebooks associés:\n- http://www.jdhp.org/docs/notebooks/ai_multilayer_perceptron_fr.html\nVérification de la version de la bibliothèque scikit-learn\nAttention: le Perceptron Multicouche n'est implémenté dans scikit-learn que depuis la version 0.18 (septembre 2016).\nLe code source de cette implémentation est disponible sur github.\nLe long fil de discussion qui précédé l'intégration de cette implémentation est disponible sur la page suivante: issue #3204.", "import sklearn\n\n# version >= 0.18 is required\nversion = [int(num) for num in sklearn.__version__.split('.')]\nassert (version[0] >= 1) or (version[1] >= 18)", "Classification\nC.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification\nPremier exemple", "from sklearn.neural_network import MLPClassifier\n\nX = [[0., 0.], [1., 1.]]\ny = [0, 1]\n\nclf = MLPClassifier(solver='lbfgs',\n alpha=1e-5,\n hidden_layer_sizes=(5, 2),\n random_state=1)\n\nclf.fit(X, y)", "Une fois le réseau de neurones entrainé, on peut tester de nouveaux exemples:", "clf.predict([[2., 2.], [-1., -2.]])", "clf.coefs_ contient les poids du réseau de neurones (une liste d'array):", "clf.coefs_\n\n[coef.shape for coef in clf.coefs_]", "Vector of probability estimates $P(y|x)$ per sample $x$:", "clf.predict_proba([[2., 2.], [-1., -2.]])", "Régression\nC.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regression\nPremier exemple", "from sklearn.neural_network import MLPRegressor\n\nX = [[0., 0.], [1., 1.]]\ny = [0, 1]\n\nreg = MLPRegressor(solver='lbfgs',\n alpha=1e-5,\n hidden_layer_sizes=(5, 2),\n random_state=1)\n\nreg.fit(X, y)", "Une fois le réseau de neurones entrainé, on peut tester de nouveaux exemples:", "reg.predict([[2., 2.], [-1., -2.]])", "clf.coefs_ contient les poids du réseau de neurones (une liste d'array):", "reg.coefs_\n\n[coef.shape for coef in reg.coefs_]", "Régularisation\nC.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regularization", "# TODO...", "Normalisation des données d'entrée\nC.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#tips-on-practical-use\nItérer manuellement\nC.f. http://scikit-learn.org/stable/modules/neural_networks_supervised.html#more-control-with-warm-start\nItérer manuellement la boucle d'apprentissage peut être pratique pour suivre son évolution ou pour l'orienter.\nVoici un exemple où on suit l'évolution des poids du réseau sur 10 itérations:", "X = [[0., 0.], [1., 1.]]\ny = [0, 1]\n\nclf = MLPClassifier(hidden_layer_sizes=(15,),\n random_state=1,\n max_iter=1, # <- !\n warm_start=True) # <- !\n\nfor i in range(10):\n clf.fit(X, y)\n print(clf.coefs_)", "TODO: Quelle différence avec le mode d'apprentissage online (boucle ouverte ?) fit.partial_fit() ???" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
elingg/tensorflow
tensorflow/examples/udacity/5_word2vec.ipynb
apache-2.0
[ "Deep Learning\nAssignment 5\nThe goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\n%matplotlib inline\nfrom __future__ import print_function\nimport collections\nimport math\nimport numpy as np\nimport os\nimport random\nimport tensorflow as tf\nimport zipfile\nfrom matplotlib import pylab\nfrom six.moves import range\nfrom six.moves.urllib.request import urlretrieve\nfrom sklearn.manifold import TSNE\nfrom itertools import compress", "Download the data from the source website if necessary.", "url = 'http://mattmahoney.net/dc/'\n\ndef maybe_download(filename, expected_bytes):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n if not os.path.exists(filename):\n filename, _ = urlretrieve(url + filename, filename)\n statinfo = os.stat(filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified %s' % filename)\n else:\n print(statinfo.st_size)\n raise Exception(\n 'Failed to verify ' + filename + '. Can you get to it with a browser?')\n return filename\n\nfilename = maybe_download('text8.zip', 31344016)", "Read the data into a string.", "def read_data(filename):\n \"\"\"Extract the first file enclosed in a zip file as a list of words\"\"\"\n with zipfile.ZipFile(filename) as f:\n data = tf.compat.as_str(f.read(f.namelist()[0])).split()\n return data\n \nwords = read_data(filename)\nprint('Data size %d' % len(words))", "Build the dictionary and replace rare words with UNK token.", "vocabulary_size = 50000\n\ndef build_dataset(words):\n count = [['UNK', -1]]\n count.extend(collections.Counter(words).most_common(vocabulary_size - 1))\n dictionary = dict()\n for word, _ in count:\n dictionary[word] = len(dictionary)\n data = list()\n unk_count = 0\n for word in words:\n if word in dictionary:\n index = dictionary[word]\n else:\n index = 0 # dictionary['UNK']\n unk_count = unk_count + 1\n data.append(index)\n count[0][1] = unk_count\n reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) \n return data, count, dictionary, reverse_dictionary\n\ndata, count, dictionary, reverse_dictionary = build_dataset(words)\nprint('Most common words (+UNK)', count[:5])\nprint('Sample data', data[:10])\ndel words # Hint to reduce memory.", "Function to generate a training batch for the skip-gram model.", "data_index = 0\n\ndef generate_batch(batch_size, num_skips, skip_window):\n global data_index\n assert batch_size % num_skips == 0\n assert num_skips <= 2 * skip_window\n batch = np.ndarray(shape=(batch_size), dtype=np.int32)\n labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)\n span = 2 * skip_window + 1 # [ skip_window target skip_window ]\n buffer = collections.deque(maxlen=span)\n for _ in range(span):\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n for i in range(batch_size // num_skips):\n target = skip_window # target label at the center of the buffer\n targets_to_avoid = [ skip_window ]\n for j in range(num_skips):\n while target in targets_to_avoid:\n target = random.randint(0, span - 1)\n targets_to_avoid.append(target)\n batch[i * num_skips + j] = buffer[skip_window]\n labels[i * num_skips + j, 0] = buffer[target]\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n return batch, labels\n\ndef generate_batch_cbow(batch_size, skip_window):\n global data_index\n surrounding_words = 2 * skip_window # words surrounding the 
target\n assert batch_size % surrounding_words == 0 \n total_labels = batch_size / surrounding_words \n batch = np.ndarray(shape=(batch_size), dtype=np.int32)\n labels = np.ndarray(shape=(total_labels, 1), dtype=np.int32)\n span = 2 * skip_window + 1 # [ skip_window target skip_window ]\n buffer = collections.deque(maxlen=span)\n for _ in range(span):\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n for i in range(total_labels):\n target = skip_window # target label at the center of the buffer\n targets_to_avoid = [ skip_window ]\n labels[i, 0] = buffer[target] # label the target\n for j in range(surrounding_words):\n while target in targets_to_avoid:\n target = random.randint(0, span - 1)\n targets_to_avoid.append(target)\n batch[i * surrounding_words + j] = buffer[target]\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n return batch, labels\n", "Train a skip-gram model.", "batch_size = 128\nembedding_size = 128 # Dimension of the embedding vector.\nskip_window = 1 # How many words to consider left and right.\nnum_skips = 2 # How many times to reuse an input to generate a label.\n# We pick a random validation set to sample nearest neighbors. here we limit the\n# validation samples to the words that have a low numeric ID, which by\n# construction are also the most frequent. \nvalid_size = 16 # Random set of words to evaluate similarity on.\nvalid_window = 100 # Only pick dev samples in the head of the distribution.\nvalid_examples = np.array(random.sample(range(valid_window), valid_size))\nnum_sampled = 64 # Number of negative examples to sample.\nsurrounding_words = 2 * skip_window\ntotal_labels = batch_size / surrounding_words\n\ngraph = tf.Graph()\n\nwith graph.as_default(), tf.device('/cpu:0'):\n\n # Input data.\n train_dataset = tf.placeholder(tf.int32, shape=[batch_size])\n train_labels = tf.placeholder(tf.int32, shape=[total_labels, 1])\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # Variables.\n embeddings = tf.Variable(\n tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))\n softmax_weights = tf.Variable(\n tf.truncated_normal([vocabulary_size, embedding_size],\n stddev=1.0 / math.sqrt(embedding_size)))\n softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))\n \n # Model.\n # Look up embeddings for inputs.\n embed = tf.nn.embedding_lookup(embeddings, train_dataset)\n\n mask = np.zeros(batch_size, dtype=np.int32)\n mask_index = -1\n for i in range(batch_size):\n if i % surrounding_words == 0:\n mask_index = mask_index + 1\n mask[i] = mask_index\n \n embed_filtered = tf.segment_sum(embed, mask)\n\n\n # Compute the softmax loss, using a sample of the negative labels each time.\n loss = tf.reduce_mean(\n tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed_filtered,\n labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))\n\n # Optimizer.\n # Note: The optimizer will optimize the softmax_weights AND the embeddings.\n # This is because the embeddings are defined as a variable quantity and the\n # optimizer's `minimize` method will by default modify all variable quantities \n # that contribute to the tensor it is passed.\n # See docs on `tf.train.Optimizer.minimize()` for more details.\n optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)\n \n # Compute the similarity between minibatch examples and all embeddings.\n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))\n 
normalized_embeddings = embeddings / norm\n valid_embeddings = tf.nn.embedding_lookup(\n normalized_embeddings, valid_dataset)\n similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))\n\nnum_steps = 100001\n\nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print('Initialized')\n average_loss = 0\n for step in range(num_steps):\n batch_data, batch_labels = generate_batch_cbow(\n batch_size, skip_window)\n feed_dict = {train_dataset : batch_data, train_labels : batch_labels}\n _, l = session.run([optimizer, loss], feed_dict=feed_dict)\n average_loss += l\n if step % 2000 == 0:\n if step > 0:\n average_loss = average_loss / 2000\n # The average loss is an estimate of the loss over the last 2000 batches.\n print('Average loss at step %d: %f' % (step, average_loss))\n average_loss = 0\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n if step % 10000 == 0:\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = reverse_dictionary[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = reverse_dictionary[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n final_embeddings = normalized_embeddings.eval()\n\nnum_points = 400\n\ntsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)\ntwo_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])\n\ndef plot(embeddings, labels):\n assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'\n pylab.figure(figsize=(15,15)) # in inches\n for i, label in enumerate(labels):\n x, y = embeddings[i,:]\n pylab.scatter(x, y)\n pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',\n ha='right', va='bottom')\n pylab.show()\n\nwords = [reverse_dictionary[i] for i in range(1, num_points+1)]\nplot(two_d_embeddings, words)", "Problem\nAn alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/CompStats
effect_size.ipynb
mit
[ "Effect Size\nExamples and exercises for a tutorial on statistical inference.\nCopyright 2016 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "from __future__ import print_function, division\n\nimport numpy\nimport scipy.stats\n\nimport matplotlib.pyplot as pyplot\n\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\n\n# seed the random number generator so we all get the same results\nnumpy.random.seed(17)\n\n# some nice colors from http://colorbrewer2.org/\nCOLOR1 = '#7fc97f'\nCOLOR2 = '#beaed4'\nCOLOR3 = '#fdc086'\nCOLOR4 = '#ffff99'\nCOLOR5 = '#386cb0'\n\n%matplotlib inline", "Part One\nTo explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.\nI'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).", "mu1, sig1 = 178, 7.7\nmale_height = scipy.stats.norm(mu1, sig1)\n\nmu2, sig2 = 163, 7.3\nfemale_height = scipy.stats.norm(mu2, sig2)", "The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes and rv object and returns a pair of NumPy arrays.", "def eval_pdf(rv, num=4):\n mean, std = rv.mean(), rv.std()\n xs = numpy.linspace(mean - num*std, mean + num*std, 100)\n ys = rv.pdf(xs)\n return xs, ys", "Here's what the two distributions look like.", "xs, ys = eval_pdf(male_height)\npyplot.plot(xs, ys, label='male', linewidth=4, color=COLOR2)\n\nxs, ys = eval_pdf(female_height)\npyplot.plot(xs, ys, label='female', linewidth=4, color=COLOR3)\npyplot.xlabel('height (cm)')\nNone", "Let's assume for now that those are the true distributions for the population.\nI'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!", "male_sample = male_height.rvs(1000)\n\nfemale_sample = female_height.rvs(1000)", "Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.", "mean1, std1 = male_sample.mean(), male_sample.std()\nmean1, std1", "The sample mean is close to the population mean, but not exact, as expected.", "mean2, std2 = female_sample.mean(), female_sample.std()\nmean2, std2", "And the results are similar for the female sample.\nNow, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means:", "difference_in_means = male_sample.mean() - female_sample.mean()\ndifference_in_means # in cm", "On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems:\n\n\nWithout knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not.\n\n\nThe magnitude of the difference depends on the units of measure, making it hard to compare across different studies.\n\n\nThere are a number of ways to quantify the difference between distributions. 
A simple option is to express the difference as a percentage of the mean.\nExercise 1: what is the relative difference in means, expressed as a percentage?", "# Solution goes here", "STOP HERE: We'll regroup and discuss before you move on.\nPart Two\nAn alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means:", "simple_thresh = (mean1 + mean2) / 2\nsimple_thresh", "A better, but slightly more complicated threshold is the place where the PDFs cross.", "thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2)\nthresh", "In this example, there's not much difference between the two thresholds.\nNow we can count how many men are below the threshold:", "male_below_thresh = sum(male_sample < thresh)\nmale_below_thresh", "And how many women are above it:", "female_above_thresh = sum(female_sample > thresh)\nfemale_above_thresh", "The \"overlap\" is the area under the curves that ends up on the wrong side of the threshold.", "male_overlap = male_below_thresh / len(male_sample)\nfemale_overlap = female_above_thresh / len(female_sample)\nmale_overlap, female_overlap", "In practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex, which is the average of the male and female overlap rates:", "misclassification_rate = (male_overlap + female_overlap) / 2\nmisclassification_rate", "Another way to quantify the difference between distributions is what's called \"probability of superiority\", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.\nExercise 2: Suppose I choose a man and a woman at random. What is the probability that the man is taller?\nHINT: You can zip the two samples together and count the number of pairs where the male is taller, or use NumPy array operations.", "# Solution goes here\n\n# Solution goes here", "Overlap (or misclassification rate) and \"probability of superiority\" have two good properties:\n\n\nAs probabilities, they don't depend on units of measure, so they are comparable between studies.\n\n\nThey are expressed in operational terms, so a reader has a sense of what practical effect the difference makes.\n\n\nCohen's effect size\nThere is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's the math notation:\n$ d = \\frac{\\bar{x}_1 - \\bar{x}_2} s $\nwhere $s$ is the pooled standard deviation:\n$s = \\sqrt{\\frac{n_1 s^2_1 + n_2 s^2_2}{n_1+n_2}}$\nHere's a function that computes it:", "def CohenEffectSize(group1, group2):\n \"\"\"Compute Cohen's d.\n\n group1: Series or NumPy array\n group2: Series or NumPy array\n\n returns: float\n \"\"\"\n diff = group1.mean() - group2.mean()\n\n n1, n2 = len(group1), len(group2)\n var1 = group1.var()\n var2 = group2.var()\n\n pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n d = diff / numpy.sqrt(pooled_var)\n return d", "Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. 
This implementation uses the \"pooled standard deviation\", which is a weighted average of the standard deviations of the two groups.\nAnd here's the result for the difference in height between men and women.", "CohenEffectSize(male_sample, female_sample)", "Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.\nHere's a function that encapsulates the code we already saw for computing overlap and probability of superiority.", "def overlap_superiority(control, treatment, n=1000):\n \"\"\"Estimates overlap and superiority based on a sample.\n \n control: scipy.stats rv object\n treatment: scipy.stats rv object\n n: sample size\n \"\"\"\n control_sample = control.rvs(n)\n treatment_sample = treatment.rvs(n)\n thresh = (control.mean() + treatment.mean()) / 2\n \n control_above = sum(control_sample > thresh)\n treatment_below = sum(treatment_sample < thresh)\n overlap = (control_above + treatment_below) / n\n \n superiority = (treatment_sample > control_sample).mean()\n return overlap, superiority", "Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.", "def plot_pdfs(cohen_d=2):\n \"\"\"Plot PDFs for distributions that differ by some number of stds.\n \n cohen_d: number of standard deviations between the means\n \"\"\"\n control = scipy.stats.norm(0, 1)\n treatment = scipy.stats.norm(cohen_d, 1)\n xs, ys = eval_pdf(control)\n pyplot.fill_between(xs, ys, label='control', color=COLOR3, alpha=0.7)\n\n xs, ys = eval_pdf(treatment)\n pyplot.fill_between(xs, ys, label='treatment', color=COLOR2, alpha=0.7)\n \n o, s = overlap_superiority(control, treatment)\n pyplot.text(0, 0.05, 'overlap ' + str(o))\n pyplot.text(0, 0.15, 'superiority ' + str(s))\n pyplot.show()\n #print('overlap', o)\n #print('superiority', s)", "Here's an example that demonstrates the function:", "plot_pdfs(2)", "And an interactive widget you can use to visualize what different values of $d$ mean:", "slider = widgets.FloatSlider(min=0, max=4, value=2)\ninteract(plot_pdfs, cohen_d=slider)\nNone", "Cohen's $d$ has a few nice properties:\n\n\nBecause mean and standard deviation have the same units, their ratio is dimensionless, so we can compare $d$ across different studies.\n\n\nIn fields that commonly use $d$, people are calibrated to know what values should be considered big, surprising, or important.\n\n\nGiven $d$ (and the assumption that the distributions are normal), you can compute overlap, superiority, and related statistics.\n\n\nIn summary, the best way to report effect size depends on the audience and your goals. There is often a tradeoff between summary statistics that have good technical properties and statistics that are meaningful to a general audience." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AcceleratedCloud/SPynq
Examples/LogisticRegression/notebook/LogisticRegressionApp.ipynb
apache-2.0
[ "<center> FPGA-ACCELERATION OF MACHINE LEARNING APPLICATIONS USING APACHE SPARK <br/> A USE CASE SCENARIO ON LOGISTIC REGRESSION <center/>\n<center> Classifying Handwritten Digits <br/> with <br/> Logistic Regression </center>\n<img style=\"float: center; width: 450px; height: 100px;\" src=\"sample.png\">\nIntroduction\nIn this notebook an interactive PySpark shell is loaded and our Logistic Regression application is executed, using our accelerated ML library. The accelerated ML library is written in Python. It supports standard learning algorithms, including common settings like classification, regression etc. We are given the option to choose between an accelerated execution that uses both software and hardware and a non-accelerated one, that uses only the CPU cores. Upon choosing the accelerated option, the accelerator's library is invoked (which is also written in Python) where the input data is stored in memory mapped buffers and are then transfered and processed in the PL. The whole communication with the PL is achieved using an AXI4-Stream Accelerator Adapter.\n\n1. Data Sets\nThe data are taken from the famous <a href=\"http://yann.lecun.com/exdb/mnist/\">MNIST</a> dataset. \nThe original data file contains gray-scale images of hand-drawn digits, from zero through nine. Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive.\nIn this example the data we use are already preprocessed/normalized using Feature Standardization method (<a href=\"https://en.wikipedia.org/wiki/Standard_score\">Z-score scaling</a>).\nThe (train and test) data sets that are used below have 785 columns. The first column, called \"label\", is the digit that was drawn by the user. The rest of the columns contain the (rescaled) pixel-values of the associated image.\n2. PySpark initialization\nIn this section we initialize PySpark to predefine the SparkContext variable. \\$SPARK_HOME and other needed environment variables are set under the /etc/environment file. \n\nMake sure you have correctly set all needed paths and variables and that Py4J matches the version you have installed.", "import sys, os\n\nspark_home = os.environ.get(\"SPARK_HOME\", None)\n\n# Add the spark python sub-directory to the path\nsys.path.insert(0, spark_home + \"/python\")\n\n# Add the py4j to the path.\n# You may need to change the version number to match your install\nsys.path.insert(0, os.path.join(spark_home + \"/python/lib/py4j-0.10.4-src.zip\"))\n\n# Initialize PySpark to predefine the SparkContext variable 'sc'\nfilename = spark_home+\"/python/pyspark/shell.py\"\nexec(open(filename).read())", "3. Logistic Regression Application\nThis example shows how our accelerated Logistic Regression library is called to train a LR model on the train set and then test its accuracy. 
If accel is set (accel = 1), the hardware accelerator is used for the computation of the gradients in each iteration.\nRead data & parameters\nThe size of the train set, as well as the number of the iterations are intentionally picked small, to avoid large execution time in SW-only cases.", "chunkSize = 4000\nalpha = 0.25\niterations = 5\n\ntrain_file = \"data/MNIST_train.dat\"\ntest_file = \"data/MNIST_test.dat\"\n\nsc.appName = \"Python Logistic Regression\"\n\nprint(\"* LogisticRegression Application *\")\nprint(\" # train file: \" + train_file)\nprint(\" # test file: \" + test_file)", "HW accelerated vs SW-only", "accel = int(input(\"Select mode (0: SW-only, 1: HW accelerated) : \"))", "Instantiate a Logistic Regression model", "from pyspark.mllib_accel.classification import LogisticRegression\n\ntrainRDD = sc.textFile(train_file).coalesce(1)\n\nnumClasses = 10\nnumFeatures = 784 \nLR = LogisticRegression(numClasses, numFeatures) ", "Train the LR model", "weights = LR.train(trainRDD, chunkSize, alpha, iterations, accel)\n \nwith open(\"data/weights.out\", \"w\") as weights_file:\n for k in range(0, numClasses):\n for j in range(0, numFeatures):\n if j == 0:\n weights_file.write(str(round(weights[k * numFeatures + j], 5)))\n else:\n weights_file.write(\",\" + str(round(weights[k * numFeatures + j], 5)))\n weights_file.write(\"\\n\")\nweights_file.close()", "Test the LR model", "testRDD = sc.textFile(test_file)\n\nLR.test(testRDD)", "4. Performance metrics\nExecution time for different execution scenarios:\nTarget | Time\n:--- | ---:\nPYNQ SW-only: | 1483.859 sec \nPYNQ HW accelerated: | 96.310 sec" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
probml/pyprobml
deprecated/rbm_contrastive_divergence.ipynb
mit
[ "A demonstration of using contrastive divergence to train the parameters of a restricted Boltzmann machine.\nReferences and Materials\nThis notebook has made use of various textbooks, articles, and other resources with some particularly relevant examples given below.\nRBM and CD Background:\n- [1] K. Murphy. Probabilistic Machine Learning: Advanced Topics. MIT Press, 2023.\n- D. MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003.\n- Hastie, Trevor, et al. The elements of statistical learning: data mining, inference, and prediction. Vol. 2. New York: springer, 2009.\nPractical advice for training RBMs with the CD algorithm:\n- [2] G. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. Tech. rep. U. Toronto, 2010.\nCode:\n- gugarosa/learnenergy\n- yell/boltzmann-machines\n- Ruslan Salakhutdinov Matlab code", "!pip install optax\n\nimport numpy as np\nimport jax\nfrom jax import numpy as jnp\nfrom jax import grad, jit, vmap, random\nimport optax\nimport tensorflow_datasets as tfds\nfrom sklearn.linear_model import LogisticRegression\n\nfrom matplotlib import pyplot as plt\nimport matplotlib.gridspec as gridspec", "Plotting functions", "def plot_digit(img, label=None, ax=None):\n \"\"\"Plot MNIST Digit.\"\"\"\n if ax is None:\n fig, ax = plt.subplots()\n if img.ndim == 1:\n img = img.reshape(28, 28)\n ax.imshow(img.squeeze(), cmap=\"Greys_r\")\n ax.axis(\"off\")\n if label is not None:\n ax.set_title(f\"Label:{label}\", fontsize=10, pad=1.3)\n return ax\n\n\ndef grid_plot_imgs(imgs, dim=None, axs=None, labels=None, figsize=(5, 5)):\n \"\"\"Plot a series of digits in a grid.\"\"\"\n if dim is None:\n if axs is None:\n n_imgs = len(imgs)\n dim = np.sqrt(n_imgs)\n if not dim.is_integer():\n raise ValueError(\"If dim not specified `len(imgs)` must be a square number.\")\n else:\n dim = int(dim)\n else:\n dim = len(axs)\n\n if axs is None:\n gridspec_kw = {\"hspace\": 0.05, \"wspace\": 0.05}\n if labels is not None:\n gridspec_kw[\"hspace\"] = 0.25\n fig, axs = plt.subplots(dim, dim, figsize=figsize, gridspec_kw=gridspec_kw)\n\n for n in range(dim**2):\n img = imgs[n]\n row_idx = n // dim\n col_idx = n % dim\n axi = axs[row_idx, col_idx]\n if labels is not None:\n ax_label = labels[n]\n else:\n ax_label = None\n plot_digit(img, ax=axi, label=ax_label)\n\n return axs\n\n\ndef gridspec_plot_imgs(imgs, gs_base, title=None, dim=5):\n \"\"\"Plot digits into a gridspec subgrid.\n\n Args:\n imgs - images to plot.\n gs_base - from `gridspec.GridSpec`\n title - subgrid title.\n\n Note that, in general, for this type of plotting it is considerably more\n simple to using `fig.subfigures()` however that requires matplotlib >=3.4\n which has some conflicts with the default colab setup as of the time of\n writing.\n \"\"\"\n gs0 = gs_base.subgridspec(dim, dim)\n for i in range(dim):\n for j in range(dim):\n ax = fig.add_subplot(gs0[i, j])\n plot_digit(imgs[i * dim + j], ax=ax)\n if (i == 0) and (j == 2):\n if title is not None:\n ax.set_title(title)", "Restricted Boltzmann Machines\nRestricted Boltzmann Machines (RBMs) are a type of energy based model in which the connectivity of nodes is carefully designed to facilitate efficient sampling methods.\nFor details of RBMs see the sections on undirected graphical models (Section 4.3) and energy-based models (Chapter 23) in [1]. 
We reproduce here some of the relevant sampling equations, which we implement below.\nWe will be considering RBMs with binary units in both the hidden, $\\mathbf{h}$, and visible, $\\mathbf{v}$, layers.\nIn general, for Boltzmann machines with hidden units the probability of a particular state of the visible nodes is given by:\n$$\nP_{\\theta}(\\mathbf{v}) = \\frac{\\sum_{\\mathbf{h}} \\exp\\left(-\\mathcal{E}(\\mathbf{h},\\mathbf{v},\\theta)\\right)}{Z(\\theta)}\n$$\nwhere $\\theta$ is the collection of parameters $\\theta = (\\mathbf{W}, \\mathbf{a}, \\mathbf{b})$:\n- $\\mathbf{W} \\in \\mathbb{R}^{N_{\\mathrm{vis}} \\times N_{\\mathrm{hid}}}$\n- $\\mathbf{a} \\in \\mathbb{R}^{N_{\\mathrm{hid}}}$\n- $\\mathbf{b} \\in \\mathbb{R}^{N_{\\mathrm{vis}}}$\nand the energy of a state is given by:\n$$\n\\mathcal{E}(\\mathbf{h}, \\mathbf{v}, \\theta) = -\\mathbf{v}^\\top \\mathbf{W} \\mathbf{h} - \\mathbf{h}^\\top \\mathbf{a} - \\mathbf{v}^\\top \\mathbf{b}.\n$$\nIn restricted Boltzmann machines the hidden units are independent from one another conditional on the visible units, and vice versa. This means that it is straightforward to do conditional block-sampling of the state of the network. \nThis independence structure has the property that, when conditionally sampling, the probability that the $j$th hidden unit is active is\n$$\np(h_j = 1 | \\mathbf{v}, \\theta) = \\sigma\\left(a_j + \\sum_i v_i w_{ij}\\right),\n$$\nand the probability that the $i$th visible unit is active is given by\n$$\np(v_i = 1 | \\mathbf{h}, \\theta) = \\sigma\\left(b_i + \\sum_j h_j w_{ij}\\right).\n$$\nThe function $\\sigma(\\cdot)$ is the sigmoid function:\n$$\n\\sigma(x) = \\frac{1}{1 + e^{-x}}.\n$$\nContrastive Divergence\nContrastive divergence (CD) is the name for a family of algorithms used to perform approximate maximum likelihood training for RBMs.\nContrastive divergence approximates the gradient of the log probability of the data (our desired objective function) by initialising an MCMC chain on the data vector and sampling for a small number of steps. 
The insight behind CD is that even with a very small number of steps the process still provides gradient information which can be used to fit the model parameters.\nHere we implement the CD1 algorithm which uses just a single round of Gibbs sampling.\nFor more details on the CD algorithm see [1] (Section 23.2.2).", "def initialise_params(N_vis, N_hid, key):\n \"\"\"Initialise the parameters.\n\n Args:\n N_vis - number of visible units.\n N_hid - number of hidden units.\n key - PRNG key.\n\n Returns:\n params - (W, a, b), Weights and biases for network.\n \"\"\"\n W_key, a_key, b_key = random.split(key, 3)\n W = random.normal(W_key, (N_vis, N_hid)) * 0.01\n a = random.normal(a_key, (N_hid,)) * 0.01\n b = random.normal(b_key, (N_vis,)) * 0.01\n return (W, a, b)\n\n@jit\ndef sample_hidden(vis, params, key):\n \"\"\"Performs the hidden layer sampling, P(h|v;θ).\n\n Args:\n vis - state of the visible units.\n params - (W, a, b), Weights and biases for network.\n key - PRNG key.\n\n Returns:\n The probabilities and states of the hidden layer sampling.\n \"\"\"\n W, a, _ = params\n activation = jnp.dot(vis, W) + a\n hid_probs = jax.nn.sigmoid(activation)\n hid_states = random.bernoulli(key, hid_probs).astype(\"int8\")\n return hid_probs, hid_states\n\n\n@jit\ndef sample_visible(hid, params, key):\n \"\"\"Performs the visible layer sampling, P(v|h;θ).\n\n Args:\n hid - state of the hidden units\n params - (W, a, b), Weights and biases for network.\n key - PRNG key.\n\n Returns:\n The probabilities and states of the visible layer sampling.\n \"\"\"\n W, _, b = params\n activation = jnp.dot(hid, W.T) + b\n vis_probs = jax.nn.sigmoid(activation)\n vis_states = random.bernoulli(key, vis_probs).astype(\"int8\")\n return vis_probs, vis_states\n\n\n@jit\ndef CD1(vis_sample, params, key):\n \"\"\"The one-step contrastive divergence algorithm.\n\n Can handle batches of training data.\n\n Args:\n vis_sample - sample of visible states from data.\n params - (W, a, b), Weights and biases for network.\n key - PRNG key.\n\n Returns:\n An estimate of the gradient of the log likelihood with respect\n to the parameters.\n \"\"\"\n key, subkey = random.split(key)\n hid_prob0, hid_state0 = sample_hidden(vis_sample, params, subkey)\n key, subkey = random.split(key)\n vis_prob1, vis_state1 = sample_visible(hid_state0, params, subkey)\n key, subkey = random.split(key)\n # It would be more efficient here to not actual sample the unused states.\n hid_prob1, _ = sample_hidden(vis_state1, params, subkey)\n\n delta_W = jnp.einsum(\"...j,...k->...jk\", vis_sample, hid_prob0) - jnp.einsum(\n \"...j,...k->...jk\", vis_state1, hid_prob1\n )\n delta_a = hid_prob0 - hid_prob1\n delta_b = vis_sample - vis_state1\n return (delta_W, delta_a, delta_b)\n\n@jit\ndef reconstruct_vis(vis_sample, params, key):\n \"\"\"Reconstruct the visible state from a conditional sample of the hidden\n units.\n\n Returns\n Reconstruction probabilities.\n \"\"\"\n subkey1, subkey2 = random.split(key, 2)\n _, hid_state = sample_hidden(vis_sample, params, subkey1)\n vis_recon_prob, _ = sample_visible(hid_state, params, subkey2)\n return vis_recon_prob\n\n\n@jit\ndef reconstruction_loss(vis_samples, params, key):\n \"\"\"Calculate the L2 loss between a batch of visible samples and their\n reconstructions.\n\n Note this is a heuristic for evaluating training progress, not an objective\n function.\n \"\"\"\n reconstructed_samples = reconstruct_vis(vis_samples, params, key)\n loss = optax.l2_loss(vis_samples.astype(\"float32\"), reconstructed_samples).mean()\n 
return loss\n\n\n@jit\ndef vis_free_energy(vis_state, params):\n \"\"\"Calculate the free enery of a visible state.\n\n The free energy of a visible state is equal to the sum of the energies of\n all of the configurations of the total state (hidden + visible) which\n contain that visible state.\n\n Args:\n vis_state - state of the visible units.\n params - (W, a, b), Weights and biases for network.\n key - PRNG key.\n\n Returns:\n The free energy of the visible state.\n \"\"\"\n W, a, b = params\n activation = jnp.dot(vis_state, W) + a\n return -jnp.dot(vis_state, b) - jnp.sum(jax.nn.softplus(activation))\n\n\n@jit\ndef free_energy_gap(vis_train_samples, vis_test_samples, params):\n \"\"\"Calculate the average difference in free energies between test and train\n data.\n\n The free energy gap can be used to evaluate overfitting. If the model\n starts to overfit the training data the free energy gap will start to\n become increasingly negative.\n\n Args:\n vis_train_samples - samples of visible states from training data.\n vis_test_samples - samples of visible states from validation data.\n params - (W, a, b), Weights and biases for network.\n\n Returns:\n The difference between the test and validation free energies.\n \"\"\"\n train_FE = vmap(vis_free_energy, (0, None))(vis_train_samples, params)\n test_FE = vmap(vis_free_energy, (0, None))(vis_test_samples, params)\n return train_FE.mean() - test_FE.mean()\n\n\n@jit\ndef evaluate_params(train_samples, test_samples, params, key):\n \"\"\"Calculate performance measures of parameters.\"\"\"\n train_key, test_key = random.split(key)\n train_recon_loss = reconstruction_loss(train_samples, params, train_key)\n test_recon_loss = reconstruction_loss(test_samples, params, test_key)\n FE_gap = free_energy_gap(train_samples, test_samples, params)\n return train_recon_loss, test_recon_loss, FE_gap", "Load MNIST", "def preprocess_images(images):\n images = images.reshape((len(images), -1))\n return jnp.array(images > (255 / 2), dtype=\"float32\")\n\n\ndef load_mnist(split):\n images, labels = tfds.as_numpy(tfds.load(\"mnist\", split=split, batch_size=-1, as_supervised=True))\n procced_images = preprocess_images(images)\n return procced_images, labels\n\nmnist_train_imgs, mnist_train_labels = load_mnist(\"train\")\nmnist_test_imgs, mnist_test_labels = load_mnist(\"test\")", "Training with optax", "def train_RBM(params, train_data, optimizer, key, eval_samples, n_epochs=5, batch_size=20):\n \"\"\"Optimize parameters of RBM using the CD1 algoritm.\"\"\"\n\n @jit\n def batch_step(params, opt_state, batch, key):\n grads = jax.tree_map(lambda x: x.mean(0), CD1(batch, params, key))\n updates, opt_state = optimizer.update(grads, opt_state, params)\n params = jax.tree_map(lambda p, u: p - u, params, updates)\n return params, opt_state\n\n opt_state = optimizer.init(params)\n metric_list = []\n param_list = [params]\n n_batches = len(train_data) // batch_size\n\n for _ in range(n_epochs):\n key, subkey = random.split(key)\n perms = random.permutation(subkey, len(mnist_train_imgs))\n perms = perms[: batch_size * n_batches] # Skip incomplete batch\n perms = perms.reshape((n_batches, -1))\n for n, perm in enumerate(perms):\n batch = mnist_train_imgs[perm, ...]\n key, subkey = random.split(key)\n params, opt_state = batch_step(params, opt_state, batch, subkey)\n if n % 200 == 0:\n key, eval_key = random.split(key)\n batch_metrics = evaluate_params(*eval_samples, params, eval_key)\n metric_list.append(batch_metrics)\n param_list.append(params)\n\n return params, 
metric_list, param_list\n\n# In practice you can use many more than 100 hidden units, up to 1000-2000.\n# A small number is chosen here so that training is fast.\nN_vis, N_hid = mnist_train_imgs.shape[-1], 100\nkey = random.PRNGKey(111)\nkey, subkey = random.split(key)\ninit_params = initialise_params(N_vis, N_hid, subkey)\n\noptimizer = optax.sgd(learning_rate=0.05, momentum=0.9)\neval_samples = (mnist_train_imgs[:1000], mnist_test_imgs[:1000])\n\nparams, metric_list, param_list = train_RBM(init_params, mnist_train_imgs, optimizer, key, eval_samples)", "Evaluating Training\nThe reconstruction loss is a heuristic measure of training performance. It measures a combination of two effects:\n\n\nThe difference between the equilibrium distribution of the RBM and the empirical distribution of the data.\n\n\nThe mixing rate of the Gibbs sampling.\n\n\nThe first of these effects tends to be what we care about however it is impossible to distinguish it from the second [2].\nThe objective function which contrastive divergence optimizes is the probability that the RBM assigns to the dataset. For the reasons outlined above we cannot calculate this directly because it requires knowledge of the partition function.\nWe can however compare the average free energy between two different sets of data. In the comparison the partition function cancel out. Hinton [2] suggests using this comparison as a measure of overfitting. If the model is not overfitting the values should be approximately the same. As the model starts to overfit the free energy of the validation data will increase with respect to the training data so the difference between the two values will become increasingly negative.", "train_recon_loss, test_recon_loss, FE_gap = list(zip(*metric_list))\nepoch_progress = np.linspace(0, 5, len(train_recon_loss))\n\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n\nax1.plot(epoch_progress, train_recon_loss, label=\"Train Reconstruction Loss\")\nax1.plot(epoch_progress, test_recon_loss, label=\"Test Reconstruction Loss\")\nax1.legend()\nax1.set_xlabel(\"Epoch\")\nax1.set_ylabel(\"Loss\")\n\nax2.plot(epoch_progress, FE_gap)\nax2.set_xlabel(\"Epoch\")\nax2.set_ylabel(\"Free Energy Gap\");\n\nvis_data_samples = mnist_test_imgs[:25]\n\nfig = plt.figure(figsize=(15, 5))\ngs_bases = gridspec.GridSpec(1, 3, figure=fig)\nrecon_params = (param_list[0], param_list[1], param_list[-1])\nsubfig_titles = (\"Initial\", \"Epoch 1\", \"Epoch 5\")\n\nkey, subkey = random.split(key)\n\nfor gs_base, epoch_param, sf_title in zip(gs_bases, recon_params, subfig_titles):\n # Use the same subkey for all parameter sets.\n vis_recon_probs = reconstruct_vis(vis_data_samples, epoch_param, subkey)\n title = f\"{sf_title} Parameters\"\n gridspec_plot_imgs(vis_recon_probs, gs_base, title)\n\nfig.suptitle(\"Reconstruction Samples\", fontsize=20);", "Classification\nWhile Boltzmann Machines are generative models they can be adapted to be used for classification and other discriminative tasks.\nHere we use RBM to transform a sample image into the hidden representation and then use this as input to a logistic regression classifier.\nThis classification is more accurate than when using the raw image data as input. Furthermore, the hidden the accuracy of classification increases as the training time increases.\nAlternatively, a RBM can made to include a set of visible units which encode the class label. Classification is then performed by clamping each of the class units in turn along with the test sample. 
The unit that gives the lowest free energy is the chosen class [2].", "class RBM_LogReg:\n \"\"\"\n Perform logistic regression on samples transformed to RBM hidden\n representation with `params`.\n \"\"\"\n\n def __init__(self, params):\n self.params = params\n self.LR = LogisticRegression(solver=\"saga\", tol=0.1)\n\n def _transform(self, samples):\n W, a, _ = self.params\n activation = jnp.dot(samples, W) + a\n hidden_probs = jax.nn.sigmoid(activation)\n return hidden_probs\n\n def fit(self, train_samples, train_labels):\n transformed_samples = self._transform(train_samples)\n self.LR.fit(transformed_samples, train_labels)\n\n def score(self, test_samples, test_labels):\n transformed_samples = self._transform(test_samples)\n return self.LR.score(transformed_samples, test_labels)\n\n def predict(self, test_samples):\n transformed_samples = self._transform(test_samples)\n return self.LR.predict(transformed_samples)\n\n def reconstruct_samples(self, samples, key):\n return reconstruct_vis(samples, self.params, key)\n\ntrain_data = (mnist_train_imgs, mnist_train_labels)\ntest_data = (mnist_test_imgs, mnist_test_labels)\n\n# Train LR classifier on the raw pixel data for comparison.\nLR_raw = LogisticRegression(solver=\"saga\", tol=0.1)\nLR_raw.fit(*train_data)\n\n# LR classifier trained on hidden representations after 1 Epoch of training.\nrbm_lr1 = RBM_LogReg(param_list[1])\nrbm_lr1.fit(*train_data)\n\n# LR classifier trained on hidden representations after 5 Epochs of training.\nrbm_lr5 = RBM_LogReg(param_list[-1])\nrbm_lr5.fit(*train_data)\n\nprint(\"Logistic Regression Accuracy:\")\nprint(f\"\\tRaw Data: {LR_raw.score(*test_data)}\")\nprint(f\"\\tHidden Units Epoch-1: {rbm_lr1.score(*test_data)}\")\nprint(f\"\\tHidden Units Epoch-5: {rbm_lr5.score(*test_data)}\")", "The increase in accuracy here is modest because of the small number of hidden units. When 1000 hidden units are used the Epoch-5 accuracy approaches 97.5%.", "class1_correct = rbm_lr1.predict(mnist_test_imgs) == mnist_test_labels\nclass5_correct = rbm_lr5.predict(mnist_test_imgs) == mnist_test_labels\n\ndiff_class_img_idxs = np.where(class5_correct & ~class1_correct)[0]\nprint(f\"There are {len(diff_class_img_idxs)} images which were correctly labelled after >1 Epochs of training.\")", "We can explore the quality of the learned hidden tranformation by inspecting reconstructions of these test images.\nYou can explore this by choosing different subsets of images in the cell below:", "key = random.PRNGKey(100)\n\n# Try out different subsets of img indices.\nidx_list = diff_class_img_idxs[100:]\n\nn_rows = 5\nfig, axs = plt.subplots(n_rows, 3, figsize=(9, 20))\n\nfor img_idx, ax_row in zip(idx_list, axs):\n ax1, ax2, ax3 = ax_row\n\n img = mnist_test_imgs[img_idx]\n\n plot_digit(img, ax=ax1)\n true_label = mnist_test_labels[img_idx]\n ax1.set_title(f\"Raw Image\\nTrue Label: {true_label}\")\n\n epoch1_recon = rbm_lr1.reconstruct_samples(img, key)\n plot_digit(epoch1_recon, ax=ax2)\n hid1_label = rbm_lr1.predict(img[None, :])[0]\n ax2.set_title(f\"Epoch 1 Reconstruction\\nPredicted Label: {hid1_label} (incorrect)\")\n\n epoch5_recon = rbm_lr5.reconstruct_samples(img, key)\n hid5_label = rbm_lr5.predict(img[None, :])[0]\n plot_digit(epoch5_recon, ax=ax3)\n ax3.set_title(f\"Epoch 5 Reconstruction\\nPredicted Label: {hid5_label} (correct)\");" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/2.NumPy 배열 생성과 변형.ipynb
mit
[ "NumPy 배열 생성과 변형\nNumPy의 자료형\nNumPy의 ndarray클래스는 포함하는 모든 데이터가 같은 자료형(data type)이어야 한다. 또한 자료형 자체도 일반 파이썬에서 제공하는 것보다 훨씬 세분화되어 있다.\nNumPy의 자료형은 dtype 이라는 인수로 지정한다. dtype 인수로 지정할 값은 다음 표에 보인것과 같은 dtype 접두사로 시작하는 문자열이고 비트/바이트 수를 의미하는 숫자가 붙을 수도 있다.\n| dtype 접두사 | 설명 | 사용 예 |\n|-|-|-|\n| t | 비트 필드 | t4 (4비트) | \n| b | 불리언 | b (참 혹은 거짓) | \n| i | 정수 | i8 (64비트) | \n| u | 부호 없는 정수 | u8 (64비트) | \n| f | 부동소수점 | f8 (64비트) | \n| c | 복소 부동소수점 | c16 (128비트) | \n| O | 객체 | 0 (객체에 대한 포인터) | \n| S, a | 문자열 | S24 (24 글자) | \n| U | 유니코드 문자열 | U24 (24 유니코드 글자) | \n| V | 기타 | V12 (12바이트의 데이터 블럭) | \nndarray 객체의 dtype 속성으로 자료형을 알 수 있다.", "x = np.array([1, 2, 3])\nx.dtype\n\nx = np.array([1, 2, 3]) \nx.dtype #2.7과 3버전의 차이인가?", "만약 부동소수점을 사용하는 경우에는 무한대를 표현하기 위한 np.inf와 정의할 수 없는 숫자를 나타내는 np.nan 을 사용할 수 있다.", "np.exp(-np.inf)\n\n-np.inf", "The irrational number e is also known as Euler’s number. It is approximately 2.718281, and is the base of the natural logarithm", "np.exp(1)\n\nnp.array([1, 0]) / np.array([0, 0])\n\nnp.array([1, 0]) / np.array([0, 0])", "배열 생성", "x = np.array([1, 2, 3])\nx", "앞에서 파이썬 리스트를 NumPy의 ndarray 객체로 변환하여 생성하려면 array 명령을 사용하였다. 그러나 보통은 이러한 기본 객체없이 다음과 같은 명령을 사용하여 바로 ndarray 객체를 생성한다. \n\nzeros, ones\nzeros_like, ones_like\nempty\narange\nlinspace, logspace\nrand, randn\n\n크기가 정해져 있고 모든 값이 0인 배열을 생성하려면 zeros 명령을 사용한다. dtype 인수가 없으면 정수형이 된다.", "a = np.zeros(5)\na", "dtype 인수를 명시하면 해당 자료형 원소를 가진 배열을 만든다.", "b = np.zeros((5, 2), dtype=\"f8\")\nb", "문자열 배열도 가능하지면 모든 원소의 문자열 크기가 같아야 한다. 만약 더 큰 크기의 문자열을 할당하면 잘릴 수 있다.", "c = np.zeros(5, dtype='S4')\nc\n\nc = np.zeros(5, dtype=\"S4\")\nc[0] = 'abcd'\nc[1] = 'ABCDE'\nc", "0이 아닌 1로 초기화된 배열을 생성하려면 ones 명령을 사용한다.", "d = np.ones((2,3,2,4), dtype='i8')\nd", "만약 크기를 튜플(tuple)로 명시하지 않고 특정한 배열 혹은 리스트와 같은 크기의 배열을 생성하고 싶다면 ones_like, zeros_like 명령을 사용한다.", "e = range(10)\nprint(e)\nf=np.ones_like(e, dtype=\"f\")\nf", "배열의 크기가 커지면 배열을 초기화하는데도 시간이 걸린다. 이 시간을 단축하려면 생성만 하고 초기화를 하지 않는 empty 명령을 사용할 수 있다. empty 명령으로 생성된 배열에 어떤 값이 들어있을지는 알 수 없다.", "g = np.empty((3,6))\ng", "arange 명령은 NumPy 버전의 range 명령이라고 볼 수 있다. 해당하는 범위의 숫자 순열을 생성한다.", "np.arange(10) # 0 . . . n-1\n\nnp.arange(3, 21, 2) # start, end (exclusive), step", "linspace 명령이나 logspace 명령은 선형 구간 혹은 로그 구간을 지정한 구간의 수만큼 분할한다.", "np.linspace(0, 100, 5) # start, end, num-points\n\nnp.logspace(0, 4, 4, endpoint=False)", "임의의 난수를 생성하고 싶다면 random 서브패키지의 rand 혹은 randn 명령을 사용한다. rand 명령을 uniform 분포를 따르는 난수를 생성하고 randn 명령을 가우시안 정규 분포를 따르는 난수를 생성한다. 생성할 시드(seed)값을 지정하려면 seed 명령을 사용한다.", "np.random.seed(0)\n\nnp.random.rand(4)\n\nnp.random.randn(3,5)", "배열의 크기 변형\n일단 만들어진 배열의 내부 데이터는 보존한 채로 형태만 바꾸려면 reshape 명령이나 메서드를 사용한다. 예를 들어 12개의 원소를 가진 1차원 행렬은 3x4 형태의 2차원 행렬로 만들 수 있다.", "a = np.arange(12)\na\n\nb = a.reshape(3, 4)\nb", "사용하는 원소의 갯수가 정해저 있기 때문에 reshape 명령의 형태 튜플의 원소 중 하나는 -1이라는 숫자로 대체할 수 있다. -1을 넣으면 해당 숫자는 다른 값에서 계산되어 사용된다.", "a.reshape(2,2,-1)\n\na.reshape(2,-1,2)", "다차원 배열을 무조건 1차원으로 펼치기 위해서는 flatten 명령이나 메서드를 사용한다.", "a.flatten()", "길이가 5인 1차원 배열과 행, 열의 갯수가 (5,1)인 2차원 배열은 데이터는 같아도 엄연히 다른 객체이다.", "x = np.arange(5)\nx\n\ny = x.reshape(5, 1)\ny", "이렇게 같은 배열에 대해 차원만 1차원 증가시키는 경우에는 newaxis 명령을 사용하기도 한다.", "z = x[:, np.newaxis]\nz", "배열 연결\n행의 수나 열의 수가 같은 두 개 이상의 배열을 연결하여(concatenate) 더 큰 배열을 만들 때는 다음과 같은 명령을 사용한다.\n\nhstack\nvstack\ndstack\nstack\nr_\ntile\n\nhstack 명령은 행의 수가 같은 두 개 이상의 배열을 옆으로 연결하여 열의 수가 더 많은 배열을 만든다. 연결할 배열은 하나의 리스트에 담아야 한다.", "a1 = np.ones((2, 3))\na1\n\na2 = np.zeros((2, 2))\na2\n\nnp.hstack([a1, a2])", "vstack 명령은 열의 수가 같은 두 개 이상의 배열을 위아래로 연결하여 행의 수가 더 많은 배열을 만든다. 
The arrays to be joined must likewise be put into a single list.", "b1 = np.ones((2, 3))\nb1\n\nb2 = np.zeros((3, 3))\nb2\n\nnp.vstack([b1, b2])", "The dstack command joins arrays along a third axis, i.e. along the depth direction rather than along rows or columns.", "c1 = np.ones((2,3))\nc1\n\nc2 = np.zeros((2,3))\nc2\n\nnp.dstack([c1, c2])", "The stack command joins the arrays along a new dimension (axis); naturally, all of the arrays to be joined must have the same shape.\nThe axis argument (default 0) determines the orientation of the result after joining.", "np.stack([c1, c2])\n\nnp.stack([c1, c2], axis=0)\n\nnp.stack([c1, c2], axis=1)\n\nnp.stack([c1, c2], axis=2)", "The r_ method is similar to the hstack command. However, even though it is a method, it is used with square brackets (bracket, []) like indexing, rather than with parentheses (parenthesis, ()).", "np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]", "The tile command repeats the same array and joins the copies.", "a = np.array([0, 1, 2])\nnp.tile(a, 2)\n\nnp.tile(a, [2,3])\n\nb = np.array([2,3])\nnp.tile(a,b)\n\nnp.tile(a, (3, 2))", "Creating grids\nTo draw the graph of a 2-dimensional function of two variables, or to build a table of its values, you need to generate many coordinates at once and evaluate the function at each of them.\nFor example, for a function of two variables x and y, if you want to see how it behaves over the rectangular region where x runs from 0 to 2 and y runs from 0 to 4, you have to evaluate the function at the following (x,y) pairs inside that rectangle. \n$$ (x,y) = (0,0), (0,1), (0,2), (0,3), (0,4), (1,0), \\cdots (2,4) $$\nThe NumPy meshgrid command automates this process. The meshgrid command takes as arguments two vectors containing the points of the horizontal axis and of the vertical axis of the rectangular region, and outputs the combinations that make up this region. However, the combined (x,y) pairs are returned as two separate matrices, one containing only the x values and one containing only the y values.", "x = np.arange(3)\nx\n\ny = np.arange(5)\ny\n\nX, Y = np.meshgrid(x, y)\n\nX\n\nY\n\n[zip(x, y)]\n\n[zip(X, Y)]\n\nfor x, y in zip(X, Y):\n    print (x, y)\n\nfor x, y in zip(X,Y):\n    print (x, y)\n\n[zip(x, y) for x, y in zip(X, Y)]\n\nX\n\nY\n\nplt.scatter(X, Y, linewidths=10);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Chipe1/aima-python
notebooks/chapter19/RNN.ipynb
mit
[ "RNN\nOverview\nWhen human is thinking, they are thinking based on the understanding of previous time steps but not from scratch. Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to do sentimental analysis of some texts. It will be unclear if the traditional network cannot recognize the short phrase and sentences.\nRecurrent neural networks address this issue. They are networks with loops in them, allowing information to persist.\n<img src=\"images/rnn_unit.png\" width=\"500\"/>\nA recurrent neural network can be thought of as multiple copies of the same network, each passing a message to a successor. Consider what happens if we unroll the above loop:\n<img src=\"images/rnn_units.png\" width=\"500\"/>\nAs demonstrated in the book, recurrent neural networks may be connected in many different ways: sequences in the input, the output, or in the most general case both.\n<img src=\"images/rnn_connections.png\" width=\"700\"/>\nImplementation\nIn our case, we implemented rnn with modules offered by the package of keras. To use keras and our module, you must have both tensorflow and keras installed as a prerequisite. keras offered very well defined high-level neural networks API which allows for easy and fast prototyping. keras supports many different types of networks such as convolutional and recurrent neural networks as well as user-defined networks. About how to get started with keras, please read the tutorial.\nTo view our implementation of a simple rnn, please use the following code:", "import warnings\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)\nimport os, sys\nsys.path = [os.path.abspath(\"../../\")] + sys.path\nfrom deep_learning4e import *\nfrom notebook4e import *\n\npsource(SimpleRNNLearner)", "train_data and val_data are needed when creating a simple rnn learner. Both attributes take lists of examples and the targets in a tuple. Please note that we build the network by adding layers to a Sequential() model which means data are passed through the network one by one. SimpleRNN layer is the key layer of rnn which acts the recursive role. Both Embedding and Dense layers before and after the rnn layer are used to map inputs and outputs to data in rnn form. And the optimizer used in this case is the Adam optimizer.\nExample\nHere is an example of how we train the rnn network made with keras. In this case, we used the IMDB dataset which can be viewed here in detail. In short, the dataset is consist of movie reviews in text and their labels of sentiment (positive/negative). After loading the dataset we use keras_dataset_loader to split it into training, validation and test datasets.", "from keras.datasets import imdb\ndata = imdb.load_data(num_words=5000)\ntrain, val, test = keras_dataset_loader(data)", "Then we build and train the rnn model for 10 epochs:", "model = SimpleRNNLearner(train, val, epochs=10)", "The accuracy of the training dataset and validation dataset are both over 80% which is very promising. Now let's try on some random examples in the test set:\nAutoencoder\nAutoencoders are an unsupervised learning technique in which we leverage neural networks for the task of representation learning. It works by compressing the input into a latent-space representation, to do transformations on the data. \n<img src=\"images/autoencoder.png\" width=\"800\"/>\nAutoencoders are learned automatically from data examples. 
This means that it is easy to train specialized instances of the algorithm that will perform well on a specific type of input, and that no new engineering is required, only the appropriate training data.\nAutoencoders have different architectures for different kinds of data. Here we only provide a simple example of a vanilla autoencoder, which means there is only one hidden layer in the network:\n<img src=\"images/vanilla.png\" width=\"500\"/>\nYou can view the source code by:", "psource(AutoencoderLearner)", "It shows that we add two dense layers to the network structure." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dchandan/rebound
ipython_examples/EscapingParticles.ipynb
gpl-3.0
[ "Escaping particles\nSometimes we are not interested in particles that get too far from the central body. Here we will define a radius beyond which we remove particles from the simulation. Let's set up an artificial situation with 2 planets, and the inner one simply moves radially outward with $v > v_{escape}$.", "import rebound\nimport numpy as np\ndef setupSimulation():\n sim = rebound.Simulation()\n sim.integrator = \"ias15\" # IAS15 is the default integrator, so we don't need this line\n sim.add(m=1., id=0)\n sim.add(m=1e-3,x=1.,vx=2., id=1)\n sim.add(m=1e-3,a=1.25,M=np.pi/2, id=2)\n sim.move_to_com()\n return sim", "We have assigned each particle an ID for later reference (see IDs.ipynb for more information).", "sim = setupSimulation()\nsim.status()", "Now let's run a simulation for 20 years (in default units where $G=1$, and thus AU, yr/2$\\pi$, and $M_\\odot$, see Units.ipynb for how to change units), and set up a 50 AU sphere beyond which we remove particles from the simulation. We can do this by setting the exit_max_distance flag of the simulation object. If a particle's distance (from the origin of whatever inertial reference frame chosen) exceeds sim.exit_max_distance, an exception is thrown.\nIf we simply call sim.integrate(), the program will crash due to the unhandled exception when the particle escapes, so we'll create a try-except block to catch the exception.", "sim = setupSimulation() # Resets everything\nsim.exit_max_distance = 50.\nNoutputs = 1000\ntimes = np.linspace(0,20.*2.*np.pi,Noutputs)\nxs = np.zeros((3,Noutputs))\nys = np.zeros((3,Noutputs))\nfor i,time in enumerate(times):\n try:\n sim.integrate(time) \n except rebound.Escape as error:\n print(error)\n max_d2 = 0.\n for p in sim.particles:\n d2 = p.x*p.x + p.y*p.y + p.z*p.z\n if d2>max_d2:\n max_d2 = d2\n mid = p.id\n sim.remove(id=mid)\n for j in range(sim.N):\n xs[j,i] = sim.particles[j].x\n ys[j,i] = sim.particles[j].y", "Let's check that the particle 1 was correctly removed from the simulation:", "sim.status()", "So this worked as expected. We went down to 2 particles, and particles[1] (which had id = 1 before) has evidently been replaced with particles[2]). By default, remove() preserves the ordering in the particles array (see IDs.ipynb for more info). Now let's plot what we got:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfig,ax = plt.subplots(figsize=(15,5))\nfor i in range(3):\n ax.plot(xs[i,:], ys[i,:])\nax.set_aspect('equal')\nax.set_xlim([-2,10]);", "Uh oh. The problem here is that we kept updating xs[1] with particles[1].x after particle[1] was removed. This means that following the removal, xs[1] all of a sudden started getting populated by the values for the particle with ID=2. This is why the radial green trajectory (horizontal line along the $x$ axis) all of a sudden jumps onto the roughly circular orbit corresponding to the outer particle with ID=2 (originally red). 
One way to fix these problems is:", "sim = setupSimulation() # Resets everything\nsim.exit_max_distance = 50.\nNoutputs = 1000\ntimes = np.linspace(0,20.*2.*np.pi,Noutputs)\nxs = np.zeros((3,Noutputs))\nys = np.zeros((3,Noutputs))\nfor i,time in enumerate(times):\n try:\n sim.integrate(time) \n except rebound.Escape as error:\n print(error)\n max_d2 = 0.\n for p in sim.particles:\n d2 = p.x*p.x + p.y*p.y + p.z*p.z\n if d2>max_d2:\n max_d2 = d2\n mid = p.id\n sim.remove(id=mid)\n for j in range(sim.N):\n xs[sim.particles[j].id,i] = sim.particles[j].x\n ys[sim.particles[j].id,i] = sim.particles[j].y\n \nfig,ax = plt.subplots(figsize=(15,5))\nfor i in range(3):\n ax.plot(xs[i,:], ys[i,:])\nax.set_aspect('equal')\nax.set_xlim([-2,10]);", "Much better! Since at the beginning of the integration the IDs match up with the corresponding indices in the xs and ys arrays, we solved problem by using the IDs as indices throughout the simulation.\nAs an aside, the horizontal drift of the circular orbit is a real effect: in the center of mass frame, if the Jupiter-mass planet is drifting right at some speed, the Sun must be moving at a speed lower by a factor of approximately 1000 (their mass ratio) in the opposite direction, so the Sun-particle2 system slowly drifts left. If we integrated long enough, this would mean all our particles would eventually leave our box. \nIf we wanted to make sure things stayed in the box, we could additionally move to new center of mass frame after each removal of a particle, but this would introduce unphysical jumps in the remaining particles' time series, since their coordinates are measured between different inertial frames. Of course, whether this matters depends on the application!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CopernicusMarineInsitu/INSTACTraining
PythonNotebooks/indexFileNavigation/index_file_navigation_fileUpdate.ipynb
mit
[ "<h3> ABSTRACT </h3>\n\nAll CMEMS in situ data products can be found and downloaded after registration via CMEMS catalogue.\nSuch channel is advisable just for sporadic netCDF donwloading because when operational, interaction with the web user interface is not practical. In this context though, the use of scripts for ftp file transference is is a much more advisable approach.\nAs long as every line of such files contains information about the netCDFs contained within the different directories see at tips why, it is posible for users to loop over its lines to download only those that matches a number of specifications such as spatial coverage, time coverage, provider, data_mode, parameters or file_name related (region, data type, TS or PF, platform code, or/and platform category, timestamp).\n<h3>PREREQUISITES</h3>\n\n\ncredentias\naimed in situ product name\naimed hosting distribution unit\naimed index file\n\ni.e:", "user = '' #type CMEMS user name within colons\npassword = ''#type CMEMS password within colons\nproduct_name = 'INSITU_BAL_NRT_OBSERVATIONS_013_032' #type aimed CMEMS in situ product \ndistribution_unit = 'cmems.smhi.se' #type aimed hosting institution\nindex_file = 'index_latest.txt' #type aimed index file name", "<h3>DOWNLOAD</h3>\n\n\nIndex file download", "import ftplib \n\nftp=ftplib.FTP(distribution_unit,user,password) \nftp.cwd(\"Core\")\nftp.cwd(product_name) \nremote_filename= index_file\nlocal_filename = remote_filename\nlocal_file = open(local_filename, 'wb')\nftp.retrbinary('RETR ' + remote_filename, local_file.write)\nlocal_file.close()\nftp.quit()\n#ready when 221 Goodbye.!", "<h3>QUICK VIEW</h3>\n\nReading a random line of the index file to know more about the information it contains.", "import numpy as np\nimport pandas as pd\nfrom random import randint\n\nindex = np.genfromtxt(index_file, skip_header=6, unpack=False, delimiter=',', dtype=None,\n names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',\n 'geospatial_lon_min', 'geospatial_lon_max',\n 'time_coverage_start', 'time_coverage_end', \n 'provider', 'date_update', 'data_mode', 'parameters'])\n\ndataset = randint(0,len(index)) #ramdom line of the index file\n\nvalues = [index[dataset]['catalog_id'], '<a href='+index[dataset]['file_name']+'>'+index[dataset]['file_name']+'</a>', index[dataset]['geospatial_lat_min'], index[dataset]['geospatial_lat_max'],\n index[dataset]['geospatial_lon_min'], index[dataset]['geospatial_lon_max'], index[dataset]['time_coverage_start'],\n index[dataset]['time_coverage_end'], index[dataset]['provider'], index[dataset]['date_update'], index[dataset]['data_mode'],\n index[dataset]['parameters']]\nheaders = ['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',\n 'geospatial_lon_min', 'geospatial_lon_max',\n 'time_coverage_start', 'time_coverage_end', \n 'provider', 'date_update', 'data_mode', 'parameters']\ndf = pd.DataFrame(values, index=headers, columns=[dataset])\ndf.style", "<h3>FILTERING CRITERIA</h3>\n\nRegarding the above glimpse, it is posible to filter by 12 criteria. As example we will setup next a filter to only download those files that has been updated in the last X hours.", "#packages\nfrom datetime import date\nimport datetime", "1. Number of minutes", "time_lapse = 10 #all files updated within the last 10 hours will be downloaded\nend_date = datetime.datetime.today()\nini_date = end_date - datetime.timedelta(hours=time_lapse)", "2. 
netCDF filtering/selection", "#read file lines (iterate over them)\nselected_netCDFs = []\nfor netCDF in index: \n    file_name = netCDF['file_name'].decode('utf-8')\n    last_idx_slash = file_name.rfind('/')\n    ncdf_file_name = file_name[last_idx_slash+1:]\n    date_update = netCDF['date_update'].decode('utf-8')\n    date_format = \"%Y-%m-%dT%H:%M:%SZ\" \n    file_date = datetime.datetime.strptime(date_update, date_format)\n    #selection criterion: files updated within the time lapse defined above\n    if ini_date < file_date < end_date:\n        selected_netCDFs.append(file_name)\nprint(\"total: \" +str(len(selected_netCDFs))) ", "<h3> SELECTION DOWNLOAD </h3>", "for nc in selected_netCDFs:\n    last_idx_slash = nc.rfind('/')\n    ncdf_file_name = nc[last_idx_slash+1:]\n    folders = nc.split('/')[3:-1] #directories between the host and the file name\n    host = nc.split('/')[2] #distribution unit\n    \n    ftp=ftplib.FTP(host,user,password) \n    for folder in folders:\n        ftp.cwd(folder)\n    \n    local_file = open(ncdf_file_name, 'wb')\n    ftp.retrbinary('RETR '+ncdf_file_name, local_file.write)\n    local_file.close()\n    ftp.quit()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Housebeer/Natural-Gas-Model
.ipynb_checkpoints/Matching Market v2-checkpoint.ipynb
mit
[ "Matching Market\nThis simple model consists of a buyer, a supplier, and a market. \nThe buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wta. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a getbid function. \nThe supplier is similiar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices. \nThe willingness to pay or sell are set randomly using uniform random distributions. The resultant lists of bids are effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded. \nNew in version 2\nThe actioneer now has a book to \nMicroeconomic Foundations\nThe market assumes the presence of an auctioneer which will create a book, which seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happens as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.\nAgent-Based Objects\nThe following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.", "%matplotlib inline\nimport random as rnd\nimport pandas as pd\n\nclass Seller():\n wta = []\n def __init__(self,name):\n self.name = name\n \n # the supplier has n quantities that they can sell\n # they may be willing to sell this quantity anywhere from a lower price of l\n # to a higher price of u\n def set_quantity(self,n,l,u):\n wta = []\n for i in range(n):\n p = rnd.uniform(l,u)\n self.wta.append(p)\n\n def get_name(self):\n return self.name\n \n def get_asks(self):\n return self.wta\n\nclass Buyer():\n \n def __init__(self, name):\n self.wtp = []\n self.name = name\n \n # the supplier has n quantities that they can buy\n # they may be willing to sell this quantity anywhere from a lower price of l\n # to a higher price of u\n def set_quantity(self,n,l,u):\n for i in range(n):\n p = rnd.uniform(l,u)\n self.wtp.append(p)\n \n def get_name(self):\n return self.name\n \n # return list of willingness to pay\n def get_bids(self):\n return self.wtp\n\nclass Book():\n ledger = pd.DataFrame(columns = (\"role\",\"name\",\"price\",\"cleared\"))\n\n \n def set_asks(self,seller_list):\n # ask each seller their name\n # ask each seller their willingness\n # for each willingness append the data frame\n for seller in seller_list:\n seller_name = seller.get_name()\n seller_price = seller.get_asks()\n for price in seller_price:\n self.ledger=self.ledger.append({\"role\":\"seller\",\"name\":seller_name,\"price\":price,\"cleared\":\"in process\"},\n ignore_index=True)\n\n def set_bids(self,buyer_list):\n # ask each seller their name\n # ask each seller their willingness\n # 
for each willingness append the data frame\n for buyer in buyer_list:\n buyer_name = buyer.get_name()\n buyer_price = buyer.get_bids()\n for price in buyer_price:\n self.ledger=self.ledger.append({\"role\":\"buyer\",\"name\":buyer_name,\"price\":price,\"cleared\":\"in process\"},\n ignore_index=True)\n\n def update_ledger(self,ledger):\n self.ledger = ledger\n \n def get_ledger(self):\n return self.ledger\n \n \nclass Market():\n count = 0\n last_price = ''\n book = Book()\n b = []\n s = []\n ledger = ''\n \n #def __init__(self):\n\n def add_buyer(self,buyer):\n self.b.append(buyer)\n \n def add_seller(self,seller):\n self.s.append(seller)\n \n def set_book(self):\n self.book.set_bids(self.b)\n self.book.set_asks(self.s)\n \n def get_ledger(self):\n self.ledger = self.book.get_ledger()\n return self.ledger\n \n def get_bids(self):\n # this is a data frame\n ledger = self.book.get_ledger()\n rows= ledger.loc[ledger['role'] == 'buyer']\n # this is a series\n prices=rows['price']\n # this is a list\n bids = prices.tolist()\n return bids\n \n def get_asks(self):\n # this is a data frame\n ledger = self.book.get_ledger()\n rows = ledger.loc[ledger['role'] == 'seller']\n # this is a series\n prices=rows['price']\n # this is a list\n asks = prices.tolist()\n return asks\n \n # return the price at which the market clears\n # this fails because there are more buyers then sellers\n \n def get_clearing_price(self):\n # buyer makes a bid starting with the buyer which wants it most\n b = self.get_bids()\n s = self.get_asks()\n # highest to lowest\n self.b=sorted(b, reverse=True)\n # lowest to highest\n self.s=sorted(s, reverse=False)\n \n # find out whether there are more buyers or sellers\n # then drop the excess buyers or sellers; they won't compete\n n = len(b)\n m = len(s)\n \n # there are more sellers than buyers\n # drop off the highest priced sellers \n if (m > n):\n s = s[0:n]\n matcher = n\n # There are more buyers than sellers\n # drop off the lowest bidding buyers \n else:\n b = b[0:m]\n matcher = m\n \n # It's possible that not all items sold actually clear the market here\n for i in range(matcher):\n if (self.b[i] > self.s[i]):\n self.count +=1\n self.last_price = self.b[i]\n \n return self.last_price\n \n # TODO: Annotate the ledger\n def annotate_ledger(self,clearing_price):\n ledger = self.book.get_ledger()\n for index, row in ledger.iterrows():\n if (row['role'] == 'seller'):\n if (row['price'] < clearing_price):\n ledger.ix[index,'cleared'] = 'True'\n else:\n ledger.ix[index,'cleared'] = 'False'\n else:\n if (row['price'] > clearing_price):\n ledger.ix[index,'cleared'] = 'True'\n else:\n ledger.ix[index,'cleared'] = 'False' \n \n self.book.update_ledger(ledger)\n \n def get_units_cleared(self):\n return self.count\n \n ", "Test DataFrame Appending", "# Test the Book\nledger = pd.DataFrame(columns = (\"role\",\"name\",\"price\",\"cleared\"))\nledger=ledger.append({\"role\":\"seller\",\"name\":\"gas\",\"price\":24,\"cleared\":\"in process\"},ignore_index=True)\nledger=ledger.append({\"role\":\"buyer\",\"name\":\"gas\",\"price\":25,\"cleared\":\"in process\"},ignore_index=True)\n#df.append({'foo':1, 'bar':2}, ignore_index=True)\nrows=ledger.loc[ledger['role'] == 'seller']\nprint(rows['price'].tolist())\n\n\nfor index, row in ledger.iterrows():\n if (row['role'] == 'seller'):\n print(\"yes\",\"index\")\n ledger.ix[index,'cleared']='True'\n row['cleared']='True'\n else:\n print(\"No change\")\nprint()\nprint(ledger)\n", "Example Market\nIn the following code example we use the buyer and 
supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.", "# make a supplier and get the asks\nsupplier = Seller(\"Natural Gas\")\nsupplier.set_quantity(100,0,10)\n\nbook = Book()\nbook.set_asks([supplier])\n\n# make the buyers and get the bids\nbuyerNames = ('home', 'industry', 'cat')\nbuyerDictionary = {}\nfor name in buyerNames:\n    buyerDictionary[name] = Buyer(name)\n\nfor obj in buyerDictionary.values():\n    obj.set_quantity(100,0,10)\n\nbook.set_bids([buy for buy in buyerDictionary.values()])\nledger = book.get_ledger()\n\ngas_market = Market()\ngas_market.add_seller(supplier)\nfor buyer in buyerDictionary.values(): # add every buyer from the dictionary\n    gas_market.add_buyer(buyer)\ngas_market.set_book()\nasks = gas_market.get_asks()\n#print(asks)\n\nclearing = gas_market.get_clearing_price()\ngas_market.annotate_ledger(clearing)\nnew_ledger = gas_market.get_ledger()\n\nnew_ledger.head()", "Operations Research Formulation\nThe market can also be formulated as a very simple linear program or linear complementarity problem. It is clearer and easier to implement this market clearing mechanism with agents. One merit of the agent-based approach is that we don't need linear or linearizeable supply and demand functions. \nThe auctioneer is effectively following a very simple linear program subject to constraints on units sold. The auctioneer is, in the primal model, maximizing the consumer utility received by customers, with respect to the price being paid, subject to a fixed supply curve. On the dual side the auctioneer is minimizing the cost of production for the supplier, with respect to quantity sold, subject to a fixed demand curve. It is the presumed neutrality of the auctioneer which justifies the honest statement of supply and demand. \nAn alternative formulation is a linear complementarity problem. Here the presence of an optimal space of trades ensures that there is a Pareto optimal front of possible trades. The perfect opposition of interests in dividing the consumer and producer surplus means that this is a zero-sum game. Furthermore, the solution to this zero-sum game maximizes societal welfare and is therefore the Hicks optimal solution.\nNext Steps\nA possible addition to this model would be a weekly varying demand from customers, for instance caused by the use of natural gas as a heating agent. This would require the bids and asks to be time varying, and for the market to be run over successive time periods. A second addition would be to create transport costs, or enable intermediate goods to be produced. This would need a more elaborate market operator. Another possible addition would be to add a profit-maximizing broker. This may require adding belief, fictitious play, or message passing. \nThe object-orientation of the models will probably need to be further rationalized. 
Right now the market requires a very particular ordering of calls to function correctly.", "# Template: create a dictionary of agents keyed by name.\n# MyClass, foo, bar and foobar are placeholders and are not defined in this notebook.\n\nobjectNames = (\"foo\", \"bar\", \"cat\", \"mouse\")\nobjectDictionary = {}\nfor name in objectNames:\n    objectDictionary[name] = MyClass(property=foo, property2=bar)\n\nfor obj in objectDictionary.values(): # dict.itervalues() exists only in Python 2\n    obj.DoStuff(variable=foobar)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dmolina/scopus_analysis
Reading papers from user.ipynb
gpl-3.0
[ "Getting information from Scopus\nNote: This work is obtained from http://kitchingroup.cheme.cmu.edu/blog/2015/04/03/Getting-data-from-the-Scopus-API/, its author is really the author of that information. I have mainly put in a more useful way.\nThe access to Scopus API is restricted. First, you need a Elseview-API. You can obtain one from http://dev.elsevier.com/myapikey.html. \nIn my case, because I have access by a Proxy (and using a VPN, but it is transparent) I also need a PROXY_URL. That both information are not in the repository because they are personal and private. You should create your own my_scopus.py file to run that code without changes.", "import requests\nimport json\nfrom my_scopus import MY_API_KEY, PROXY_URL, MY_AUTHOR_ID", "First, we define a function to access to the information", "def print_json(resp_json):\n print(json.dumps(resp_json,\n sort_keys=True,\n indent=4, separators=(',', ': ')))\n\ndef scopus_get_info_api(url, proxy=PROXY_URL,*,verbose=False,json=True):\n \"\"\"\n Returns the information obtained by the Elseview API\n \"\"\"\n proxies = {\n \"http\": PROXY_URL\n }\n\n resp = requests.get(\"http://api.elsevier.com/content/\" +url,\n headers={'Accept':'application/json',\n 'X-ELS-APIKey': MY_API_KEY}, proxies=proxies)\n if verbose:\n print_json(resp.json())\n \n if json:\n return resp.json()\n else:\n return resp.text.encode('utf-8')", "Then, a util function to show the information\nA function that return the information of the author.\nObtaining Author info", "def scopus_get_author(author_id):\n msg = \"author?author_id={}&view=metrics\".format(author_id)\n resp = scopus_get_info_api(msg)\n return resp['author-retrieval-response'][0]", "Example, to obtain my h-index", "author_info = scopus_get_author(MY_AUTHOR_ID)\nprint_json(author_info)\nh_index = author_info['h-index']\nprint(\"My automatic h_index is {}\".format(h_index))", "Obtaining list of references\nNow, we are going to extract the list of published papers.", "def scopus_search_list(query, field, max=100, *, debug=False):\n msg = \"search/scopus?query={}&nofield={}&count={}\".format(query, field, max)\n \n if debug:\n print_json(scopus_get_info_api(msg))\n \n resp = scopus_get_info_api(msg)['search-results']\n list = []\n \n if resp['entry']:\n list = resp['entry']\n \n return list\n\ndef extract_info_papers(list):\n def get_type(code):\n if code in ['ar','re', 'ed', 'ip']:\n return 'article'\n elif code == 'cp':\n return 'congress'\n else:\n return code\n \n return [{'id': info['dc:identifier'], \n 'title': info['dc:title'], \n 'url': info['prism:url'], \n 'citations': int(info['citedby-count']), \n 'type': get_type(info['subtype']), \n 'year': info['prism:coverDate'][:4], \n 'journal': info['prism:publicationName']} for info in list]\n\ndef scopus_papers_from_author(author_id, *, max=100):\n \"\"\"\n Return the list of papers from the author\n \"\"\"\n query = \"AU-ID({})\".format(author_id)\n field = \"dc:identifier\"\n \n list = scopus_search_list(query, field, max)\n #print_json(list)\n return extract_info_papers(list)\n \n\npapers = scopus_papers_from_author(MY_AUTHOR_ID)\nprint('{} papers'.format(len(papers)))", "Translate to pandas", "import pandas as pd\n\ndf = pd.DataFrame.from_dict(papers)\nprint(df.head())", "Ploting results", "papers_journal = df[df['type']=='article']\ncitations = papers_journal.groupby(['year']).sum()\n\n%pylab inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nsns.set()\n\nax = citations.plot(kind='bar', 
legend=None)\nax.set_xlabel('Year')\nax.set_ylabel('Citations')", "Get complete reference of a paper", "def get_scopus_info(SCOPUS_ID):\n url = (\"abstract/scopus_id/\"\n + SCOPUS_ID\n + \"?field=authors,title,publicationName,volume,issueIdentifier,\"\n + \"prism:pageRange,coverDate,article-number,doi,citedby-count,prism:aggregationType\")\n \n resp = scopus_get_info_api(url, json=True)\n results = resp['abstracts-retrieval-response']\n authors_info = results['authors']\n info = results['coredata']\n\n fstring = '{authors}, {title}, {journal}, {volume}, {articlenum}, ({date}). {doi} (cited {cites} times).\\n'\n return fstring.format(authors=', '.join([au['ce:indexed-name'] for au in authors_info['author']]),\n title=info['dc:title'],\n journal=info['prism:publicationName'],\n volume=info.get('prism:volume') or 1,\n articlenum=info.get('prism:pageRange') or\n info.get('article-number'),\n date=info['prism:coverDate'],\n doi='doi:' +(info.get('prism:doi') or 'NA'),\n cites=int(info['citedby-count']))\n\ndf_lasts = df[df['year']=='2015']\n\nfor id in df.sort(['citations'], ascending=[0])['id']:\n #print(\"id: '{}'\".format(id))\n print(get_scopus_info(id))", "Get authors from a list", "def get_author_info(paper_id):\n url = (\"abstract/scopus_id/\"\n + paper_id\n + \"?field=authors,title,publicationName,volume,issueIdentifier,\"\n + \"prism:pageRange,coverDate,article-number,doi,citedby-count,prism:aggregationType\")\n \n resp = scopus_get_info_api(url, json=True)\n results = resp['abstracts-retrieval-response']\n authors_info = results['authors']['author']\n authors_id = [au['ce:indexed-name'] for au in authors_info]\n return authors_id\n \n\nfrom collections import defaultdict\n\ndef get_authors_list(papers_id):\n number = defaultdict(int)\n \n for i, paper_id in enumerate(papers_id): \n authors = get_author_info(paper_id)\n \n for author in authors:\n number[author] += 1\n \n return number\n\nauthors = get_authors_list(df['id'])\nprint(authors)\n \n\ndef show_author_list(authors):\n names = authors.keys()\n names = sorted(names, key=lambda k: authors[k], reverse=True)\n\n for name in names:\n print(\"{}: {}\".format(name, authors[name]))\n\nshow_author_list(authors = get_authors_list(df.id))", "List of journals in which I have published", "revistas = sorted(set(papers_journal['journal']))\nfor revista in revistas:\n print(revista)", "Searching by a criterion", "def scopus_search_papers(words, type='ar'):\n \"\"\"\n Return the list of papers from the author\n \"\"\"\n query = \"TITLE-ABS-KEY({}) AND PUBYEAR > 2010 AND DOCTYPE({})\".format(words, type)\n field = \"dc:identifier\"\n \n list = scopus_search_list(query, field, 200)\n return extract_info_papers(list) \n\nresults = scopus_search_papers(\"large scale optimization evolutionary\")\n\npapers_lsgo = pd.DataFrame.from_dict(results)\nnum_total = len(papers_lsgo)\nprint(sorted(set(papers_lsgo['year'])))", "Get the number of journals with the results", "lsgo_journal = papers_lsgo.groupby(['journal']).sum()\nlsgo_journal.columns = ['number']\nlsgo_journal = lsgo_journal.sort('number', ascending=False)\nlsgo_journal = lsgo_journal[lsgo_journal['number']>0]\n\nprint(lsgo_journal)\n\npapers_lsgo = papers_lsgo.sort('citations', ascending=False)\npapers_lsgo = papers_lsgo[papers_lsgo['citations']>0]\n\nfor id in papers_lsgo['id'][:10]:\n print(get_scopus_info(id))", "Count the references and number for each author", "show_author_list(authors = get_authors_list(papers_lsgo.id))", "For Congress", "results_cp = scopus_search_papers(\"large scale 
optimization evolutionary\", type='cp')\nresults = pd.DataFrame.from_dict(results_cp)\nshow_author_list(authors = get_authors_list(results.id))\n\n\nresults_cp = scopus_search_papers(\"large scale optimization differential evolution\", type='cp')\nresults = pd.DataFrame.from_dict(results_cp)\nshow_author_list(authors = get_authors_list(results.id))\n\nresults = results.sort(['citations'], ascending=False)\nfor id in results.id[:10]:\n print(get_scopus_info(id))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
knowledgeanyhow/notebooks
hacks/Webserver in a Notebook.ipynb
mit
[ "Run a Web Server in a Notebook\nIn this notebook, we show how to run a Tornado or Flask web server within a notebook, and access it from the public Internet. It sounds hacky, but the technique can prove useful:\n\nTo quickly prototype a REST API for an external web application to consume\nTo quickly expose a simple web dashboard to select external users\n\nIn this notebook, we'll demonstrate the technique using both Tornado and Flask as the web server. In both cases, the servers will listen for HTTPS connections and use a self-signed certificate. The servers will not authenticate connecting users / clients. (We want to keep things simple for this demo, but such authentication is an obvious next step in securing the web service for real-world use.)\nDefine the Demo Scenario\nSuppose we have completed a notebook that, among other things, can plot a point-in-time sample of data from an external source. Assume we now want to surface this plot in a very simple UI that has:\n\nThe title of the demo\nThe current plot\nA refresh button that takes a new sample and updates the plot\n\nCreate the Plotting Function\nSuppose we have a function that generates a plot and returns the image as a PNG in a Python string.", "import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy\nimport io\npd.options.display.mpl_style = 'default'\n\ndef plot_random_numbers(n=50):\n '''\n Plot random numbers as a line graph.\n '''\n fig, ax = plt.subplots()\n # generate some random numbers\n arr = numpy.random.randn(n)\n ax.plot(arr)\n ax.set_title('Random numbers!')\n # fetch the plot bytes\n output = io.BytesIO()\n plt.savefig(output, format='png')\n png = output.getvalue()\n plt.close()\n return png", "We can test our function by showing its output inline using the Image utility from IPython.", "from IPython.display import Image\nImage(plot_random_numbers())", "Create a Simple Dashboard Page\nNow we'll craft a simple dashboard page that includes our plot. We don't have to do anything fancy here other than use an &lt;img&gt; tag and a &lt;button&gt;. But to demonstate what's possible, we'll make it pretty with Bootstrap and jQuery, and use a Jinja template that accepts the demo title as a parameter.\nNote that the image tag points to a /plot resource on the server. Nothing dictates that we must fetch the plot image from our dashboard page. 
Another application could treat our web server as an API and use it in other ways.", "import jinja2\n\npage = jinja2.Template('''\\\n<!doctype html>\n<html>\n <head>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css\" />\n <title>{{ title }}</title>\n </head>\n <body>\n <nav class=\"navbar navbar-default\">\n <div class=\"container-fluid\">\n <div class=\"navbar-header\">\n <a class=\"navbar-brand\" href=\"#\">{{ title }}</a>\n </div>\n </div>\n </nav>\n <div class=\"container text-center\">\n <div class=\"row\">\n <img src=\"/plot\" alt=\"Random numbers for a plot\" />\n </div>\n <div class=\"row\">\n <button class=\"btn btn-primary\">Refresh Plot</button>\n </div>\n </div>\n <script type=\"text/javascript\" src=\"//code.jquery.com/jquery-2.1.3.min.js\"></script>\n <script type=\"text/javascript\">\n console.debug('running');\n $('button').on('click', function() {\n $('img').attr('src', '/plot?'+(new Date().getTime()));\n });\n </script>\n </body>\n</html>''')", "We can now expose both the plotting function and the template via our web servers (Tornado first, then Flask) using the following endpoints:\n\n/ will serve the dashboard HTML.\n/plot will serve the plot PNG.\n\nRun Tornado in a Notebook\nFirst we create a self-signed certificate using the openssl command line library. If we had a real cert, we could use it instead.", "%%bash\nmkdir -p -m 700 ~/.ssh\nopenssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \\\n -subj \"/C=XX/ST=Unknown/L=Somewhere/O=None/CN=None\" \\\n -keyout /home/notebook/.ssh/notebook.key -out /home/notebook/.ssh/notebook.crt", "Next we import the Tornado models we need.", "import tornado.ioloop\nimport tornado.web\nimport tornado.httpserver", "Then we define the request handlers for our two endpoints.", "class MainHandler(tornado.web.RequestHandler):\n def get(self):\n '''Renders the template with a title on HTTP GET.'''\n self.finish(page.render(title='Tornado Demo'))\n\nclass PlotHandler(tornado.web.RequestHandler):\n def get(self):\n '''Creates the plot and returns it on HTTP GET.'''\n self.set_header('content-type', 'image/png')\n png = plot_random_numbers()\n self.finish(png)", "Now we define the application object which maps the web paths to the handlers.", "application = tornado.web.Application([\n (r\"/\", MainHandler),\n (r\"/plot\", PlotHandler)\n])", "Finally, we create a new HTTP server bound to a publicly exposed port on our notebook server (e.g., 9000) and using the self-signed certificate with corresponding key.\n<div class=\"alert\" style=\"border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);\">\n <div class=\"row\">\n <div class=\"col-sm-1\"><img src=\"https://knowledgeanyhow.org/static/images/favicon_32x32.png\" style=\"margin-top: -6px\"/></div>\n <div class=\"col-sm-11\">In IBM Knowledge Anyhow Workbench, ports 9000 through 9004 are exposed on a public interface. We can bind our webserver to any of those ports.</div>\n </div>\n</div>", "server = tornado.httpserver.HTTPServer(application, ssl_options = {\n \"certfile\": '/home/notebook/.ssh/notebook.crt',\n \"keyfile\": '/home/notebook/.ssh/notebook.key'\n})\nserver.listen(9000, '0.0.0.0')", "To see the result, we need to visit the public IP address of our notebook server. 
For example, if our IP address is 192.168.11.10, we would visit https://192.168.11.10:9000.\n<div class=\"alert\" style=\"border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);\">\n<div class=\"row\">\n <div class=\"col-sm-1\"><img src=\"https://knowledgeanyhow.org/static/images/favicon_32x32.png\" style=\"margin-top: -6px\"/></div>\n <div class=\"col-sm-11\">In IBM Knowledge Anyhow Workbench, we can get our public IP address from an environment variable by executing the code below in our notebook:\n<pre style=\"background-color: transparent\">import os\nos.getenv('HOST_PUBLIC_IP')</pre>\n </div>\n</div>\n</div>\n\nWhen we visit the web server in a browser and accept the self-signed cert warning, we should see the resulting dashboard. Clicking Refresh Plot in the dashboard shows us a new plot. \nNote that since IPython itself is based on Tornado, we are able to run other cells and get ouput while the web server is running. In fact, we can even modify the plotting function and template and see the changes the next time we refresh the dashboard in our browser.\nWhen we want to shut the server down, we execute the lines below. Restarting the notebook kernel has the same net effect.", "server.close_all_connections()\nserver.stop()", "Run Flask in a Notebook\nThe same technique works with Flask, albeit with different pros and cons. First, we need to install Flask since it does not come preinstalled in the notebook environment by default.", "!pip install flask", "Now we import our Flask requirements, define our app, and create our route mappings.", "from flask import Flask, make_response\n\nflask_app = Flask('flask_demo')\n\n@flask_app.route('/')\ndef index():\n '''Renders the template with a title on HTTP GET.'''\n return page.render(title='Flask Demo')\n\n@flask_app.route('/plot')\ndef get_plot():\n '''Creates the plot and returns it on HTTP GET.'''\n response = make_response(plot_random_numbers())\n response.mimetype = 'image/png'\n return response", "Finally, we run the Flask web server. Flask supports the generation of an ad-hoc HTTP certificate and key so we don't need to explicitly put one on disk like we did in the case of Tornado.", "flask_app.run(host='0.0.0.0', port=9000, ssl_context='adhoc')", "Unlike in the Tornado case, the run command above blocks the notebook kernel from returning for as long as the web server is running. To stop the server, we need to interrupt the kernel (Kernel &rarr; Interrupt). \nRun Flask in a Tornado WSGIContainer\nIf we are in love with Flask syntax, but miss the cool, non-blocking ability of Tornado, we can run the Flask application in a Tornado WSGIContainer like so.", "from tornado.wsgi import WSGIContainer\nserver = tornado.httpserver.HTTPServer(WSGIContainer(flask_app), ssl_options = {\n \"certfile\": '/home/notebook/.ssh/notebook.crt',\n \"keyfile\": '/home/notebook/.ssh/notebook.key'\n})\nserver.listen(9000, '0.0.0.0')", "And once we do, we can view the dashboard in a web browser even while executing cells in the notebook. 
When we're done, we can cleanup with the same logic as in the pure Tornado case.", "server.close_all_connections()\nserver.stop()", "Conclusion\nIn this notebook, we:\n\nDefined a simple function that returns a PNG of a plot\nDefined a template that renders a very simple HTML dashboard\nExposed two HTTPS endpoints in Tornado, one for the dashboard HTML and one for the plot\nExposed two HTTPS endpoints in Flask for the same resources\nExposed two HTTPS endpoints in Flask + Tornado for the same resources\n\nOf course, what we chose to expose was specific to the demo scenario. For example, we could have just as easily created a REST API that accepted feature values for classification and feedback about whether the classification was right or not for future training.\nWhile the result is not \"production ready\", it does allow us to expose prototype code to other users without worrying about migrating our work from notebook(s) to other environments.\n<div class=\"alert\" style=\"border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);\">\n<div class=\"row\">\n <div class=\"col-sm-1\"><img src=\"https://knowledgeanyhow.org/static/images/favicon_32x32.png\" style=\"margin-top: -6px\"/></div>\n <div class=\"col-sm-11\">This notebook was created using <a href=\"https://knowledgeanyhow.org\">IBM Knowledge Anyhow Workbench</a>. To learn more, visit us at <a href=\"https://knowledgeanyhow.org\">https://knowledgeanyhow.org</a>.</div>\n </div>\n</div>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JanetMatsen/Machine_Learning_CSE_546
HW2/notebooks/Q-1-1-3_Multiclass_Ridge-Copy1.ipynb
mit
[ "Question_1-1-3_Multiclass_Ridge\nJanet Matsen\nCode notes:\n* Indivudal regressions are done by instinces of RidgeRegression, defined in rige_regression.py.\n * RidgeRegression gets some methods from ClassificationBase, defined in classification_base.py.\n* The class HyperparameterExplorer in hyperparameter_explorer is used to tune hyperparameters on training data.", "import numpy as np\nimport matplotlib as mpl\n%matplotlib inline\nimport time\n\nimport pandas as pd\nimport seaborn as sns\n\nfrom mnist import MNIST # public package for making arrays out of MINST data.\n\nimport sys\nsys.path.append('../code/')\n\nfrom ridge_regression import RidgeMulti\nfrom hyperparameter_explorer import HyperparameterExplorer\n\nfrom mnist_helpers import mnist_training, mnist_testing\n\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 4, 3", "Prepare MNIST training data", "train_X, train_y = mnist_training()\ntest_X, test_y = mnist_testing()", "Explore hyperparameters before training model on all of the training data.", "hyper_explorer = HyperparameterExplorer(X=train_X, y=train_y, \n model=RidgeMulti, \n validation_split=0.1, score_name = 'training RMSE', \n use_prev_best_weights=False,\n test_X=test_X, test_y=test_y)\n\nhyper_explorer.train_model(lam=1e10, verbose=False)\n\nhyper_explorer.train_model(lam=1e+08, verbose=False)\nhyper_explorer.train_model(lam=1e+07, verbose=False)\n\nhyper_explorer.train_model(lam=1e+06, verbose=False)\n\nhyper_explorer.train_model(lam=1e5, verbose=False)\nhyper_explorer.train_model(lam=1e4, verbose=False)\nhyper_explorer.train_model(lam=1e03, verbose=False)\nhyper_explorer.train_model(lam=1e2, verbose=False)\n\nhyper_explorer.train_model(lam=1e1, verbose=False)\n\nhyper_explorer.train_model(lam=1e0, verbose=False)\nhyper_explorer.train_model(lam=1e-1, verbose=False)\nhyper_explorer.train_model(lam=1e-2, verbose=False)\nhyper_explorer.train_model(lam=1e-3, verbose=False)\nhyper_explorer.train_model(lam=1e-4, verbose=False)\nhyper_explorer.train_model(lam=1e-5, verbose=False)\n\nhyper_explorer.summary\n\nhyper_explorer.plot_fits()\n\nt = time.localtime(time.time())\n\nhyper_explorer.plot_fits(filename = \"Q-1-1-3_val_and_train_RMSE_{}-{}\".format(t.tm_mon, t.tm_mday))\n\nhyper_explorer.plot_fits(ylim=(.6,.7),\n filename = \"Q-1-1-3_val_and_train_RMSE_zoomed_in{}-{}\".format(t.tm_mon, t.tm_mday))\n\nhyper_explorer.best('score')\n\nhyper_explorer.best('summary')\n\nhyper_explorer.best('best score')\n\nhyper_explorer.train_on_whole_training_set()\n\nhyper_explorer.final_model.results_row()\n\nhyper_explorer.evaluate_test_data()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
awagner-mainz/notebooks
gallery/TextProcessing_Azpilcueta.ipynb
mit
[ "Text Processing\nTable of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Text-Processing\" data-toc-modified-id=\"Text-Processing-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Text Processing</a></div><div class=\"lev2 toc-item\"><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Introduction</a></div><div class=\"lev1 toc-item\"><a href=\"#Preparations\" data-toc-modified-id=\"Preparations-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Preparations</a></div><div class=\"lev1 toc-item\"><a href=\"#TF/IDF-\" data-toc-modified-id=\"TF/IDF--3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>TF/IDF </a></div><div class=\"lev1 toc-item\"><a href=\"#Translations?\" data-toc-modified-id=\"Translations?-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Translations?</a></div><div class=\"lev2 toc-item\"><a href=\"#New-Approach:-Use-Aligner-from-Machine-Translation-Studies-\" data-toc-modified-id=\"New-Approach:-Use-Aligner-from-Machine-Translation-Studies--41\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>New Approach: Use Aligner from Machine Translation Studies </a></div><div class=\"lev1 toc-item\"><a href=\"#Similarity-\" data-toc-modified-id=\"Similarity--5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Similarity </a></div><div class=\"lev1 toc-item\"><a href=\"#Word-Clouds-\" data-toc-modified-id=\"Word-Clouds--6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Word Clouds </a></div>\n\n## Introduction\n\nThis file is the continuation of preceding work. Previously, I have worked my way through a couple of text-analysing approaches - such as tf/idf frequencies, n-grams and the like - in the context of a project concerned with Juan de Solórzano Pereira's *Politica Indiana*. This can be seen [here](TextProcessing_Solorzano.ipynb).\n\nIn the former context, I got somewhat stuck when I was trying to automatically align corresponding passages of two editions of the same work ... where the one edition would be a **translation** of the other and thus we would have two different languages. In vector terminology, two languages means two almost orthogonal vectors and it makes little sense to search for similarities there.\n\nThe present file takes this up, tries to refine an approach taken there and to find alternative ways of analysing a text across several languages. This time, the work concerned is Martín de Azpilcueta's *Manual de confesores*, a work of the 16th century that has seen very many editions and translations, quite a few of them even by the work's original author and it is the subject of the research project [\"Martín de Azpilcueta’s Manual for Confessors and the Phenomenon of Epitomisation\"](http://www.rg.mpg.de/research/martin-de-azpilcuetas-manual-for-confessors) by Manuela Bragagnolo. \n\n(There are a few DH-ey things about the project that are not directly of concern here, like a synoptic display of several editions or the presentation of the divergence of many actual translations of a given term. Such aspects are being treated with other software, like [HyperMachiavel](http://hyperprince.ens-lyon.fr/hypermachiavel) or [Lera](http://lera.uzi.uni-halle.de/).)\n\nAs in the previous case, the programming language used in the following examples is \"python\" and the tool used to get prose discussion and code samples together is called [\"jupyter\"](http://jupyter.org/). 
(A common way of installing both the language and the jupyter software, especially in windows, is by installing a python \"distribution\" like [Anaconda](https://www.anaconda.com/what-is-anaconda/).) In jupyter, you have a \"notebook\" that you can populate with text (if you want to use it, jupyter understands [markdown](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html) code formatting) or code, and a program that pipes a nice rendering of the notebook to a web browser as you are reading right now. In many places in such a notebook, the output that the code samples produce is printed right below the code itself. Sometimes this can be quite a lot of output and depending on your viewing environment you might have to scroll quite some way to get to the continuation of the discussion.\n\nYou can save your notebook online (the current one is [here at github](https://github.com/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb)) and there is an online service, nbviewer, able to render any notebook that it can access online. So chances are you are reading this present notebook at the web address [https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb](https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Azpilcueta.ipynb).\n\nA final word about the elements of this notebook:\n\n<div class=\"alert alertbox alert-success\">At some points I am mentioning things I consider to be important decisions or take-away messages for scholarly readers. E.g. whether or not to insert certain artefacts into the very transcription of your text, what the methodological ramifications of a certain approach or parameter are, what the implications of an example solution are, or what a possible interpretation of a certain result might be. I am highlighting these things in a block like this one here or at least in <font color=\"green\">**green bold font**</font>.</div>\n\n<div class=\"alert alertbox alert-danger\">**NOTE:** As I am continually improving the notebook on the side of the source text, wordlists and other parameters, it is sometimes hard to keep the prose description in sync. So while the actual descriptions still apply, the numbers that are mentioned in the prose (as where we have e.g. a \"table with 20 rows and 1.672 columns\") might no longer reflect the latest state of the sources, auxiliary files and parameters and you should take these with a grain of salt. Best double check them by reading the actual code ;-)\n\nI apologize for the inconsistency.</div>\n\n# Preparations\n\nUnlike in the previous case, where we had word files that we could export as plaintext, in this case Manuela has prepared a sample chapter with four editions transcribed *in parallel* in an office spreadsheet. So we first of all make sure that we have good **UTF-8** comma-separated-value files, e.g. by uploading a **csv** export of our office program of choice to [a CSV Linting service](https://csvlint.io/). (As a side remark, in my case, exporting with LibreOffice provided me with options to select UTF-8 encoding and choose the field delimiter and resulted in a valid csv file. MS Excel did neither of those.) 
Below, we expect the file at the following position:", "sourcePath = 'Azpilcueta/cap6_align_-_2018-01.csv'", "Then, we can go ahead and open the file in python's csv reader:", "import csv\n\nsourceFile = open(sourcePath, newline='', encoding='utf-8')\nsourceTable = csv.reader(sourceFile)", "And next, we read each line into new elements of four respective lists (since we're dealing with one sample chapter, we try to handle it all in memory first and see if we run into problems):\n(Note here and in the following that in most cases, when the program is counting, it does so beginning with zero. Which means that if we end up with 20 segments, they are going to be called segment 0, segment 1, ..., segment 19. There is not going to be a segment bearing the number twenty, although we do have twenty segments. The first one has the number zero and the twentieth one has the number nineteen. Even for more experienced coders, this sometimes leads to mistakes, called \"off-by-one errors\".)", "import re\n\n# Initialize a list of lists, or two-dimensional list ...\nEditions = [[]]\n\n# ...with four sub-lists 0 to 3\nfor i in range(3):\n a = []\n Editions.append(a)\n\n# Now populate it from our sourceTable\nsourceFile.seek(0) # in repeated runs, restart from the beginning of the file\nfor row in sourceTable:\n for i, field in enumerate(row): # We normalize quite a bit here already:\n p = field.replace('¶', ' ¶ ') # spaces around ¶ \n p = re.sub(\"&([^c])\",\" & \\\\1\", p) # always spaces around &, except for &c\n p = re.sub(\"([,.:?/])(\\S)\",\"\\\\1 \\\\2\", p) # always a space after ',.:?/'\n p = re.sub(\"([0-9])([a-zA-Z])\", \"\\\\1 \\\\2\", p) # always a space between numbers and word characters\n p = re.sub(\"([a-z]) ?\\\\(\\\\1\\\\b\", \" (\\\\1\", p) # if a letter is repeated on its own in a bracketed\n # expression it's a note and we eliminate the character\n # from the preceding word\n p = \" \".join(p.split()) # always only one space\n Editions[i].append(p)\n\nprint(str(len(Editions[0])) + \" rows read.\\n\")\n\n# As an example, see the first seven sections of the third edition (1556 SPA):\nfor field in range(len(Editions[2])):\n print(Editions[2][field])", "Actually, let's define two more list variables to hold information about the different editions - language and year of print:", "numOfEds = 4\nlanguage = [\"PT\", \"PT\", \"ES\", \"LA\"] # I am using language codes that later on can be used in babelnet\nyear = [1549, 1552, 1556, 1573]", "TF/IDF <a name=\"tfidf\"></a>\nIn the previous (i.e. Solórzano) analyses, things like tokenization, lemmatization and stop-word lists filtering are explained step by step. Here, we rely on what we have found there and feed it all into functions that are ready-made and available in suitable libraries...\nFirst, we build our lemmatization resource and \"function\":", "lemma = [{} for i in range(numOfEds)]\n# lemma = {} # we build a so-called dictionary for the lookups\n\nfor i in range(numOfEds):\n \n wordfile_path = 'Azpilcueta/wordforms-' + language[i].lower() + '.txt'\n\n # open the wordfile (defined above) for reading\n wordfile = open(wordfile_path, encoding='utf-8')\n\n tempdict = []\n for line in wordfile.readlines():\n tempdict.append(tuple(line.split('>'))) # we split each line by \">\" and append\n # a tuple to a temporary list.\n\n lemma[i] = {k.strip(): v.strip() for k, v in tempdict} # for every tuple in the temp. 
list,\n # we strip whitespace and make a key-value\n # pair, appending it to our \"lemma\"\n # dictionary\n wordfile.close\n\n print(str(len(lemma[i])) + ' ' + language[i] + ' wordforms known to the system.')\n", "Again, a quick test: Let's see with which \"lemma\"/basic word the particular wordform \"diremos\" is associated, or, in other words, what value our lemma variable returns when we query for the key \"diremos\":", "lemma[language.index(\"PT\")]['diremos']", "And we are going to need the stopwords lists:", "stopwords = []\n\nfor i in range(numOfEds):\n \n stopwords_path = 'Azpilcueta/stopwords-' + language[i].lower() + '.txt'\n stopwords.append(open(stopwords_path, encoding='utf-8').read().splitlines())\n\n print(str(len(stopwords[i])) + ' ' + language[i]\n + ' stopwords known to the system, e.g.: ' + str(stopwords[i][100:119]) + '\\n')", "(In contrast to simpler numbers that have been filtered out by the stopwords filter, I have left numbers representing years like \"1610\" in place.)\nAnd, later on when we try sentence segmentation, we are going to need the list of abbreviations - words where a subsequent period not necessarily means a new sentence:", "abbreviations = [] # As of now, this is one for all languages :-(\n\nabbrs_path = 'Azpilcueta/abbreviations.txt'\nabbreviations = open(abbrs_path, encoding='utf-8').read().splitlines()\n\nprint(str(len(abbreviations)) + ' abbreviations known to the system, e.g.: ' + str(abbreviations[100:119]))", "Next, we should find some very characteristic words for each segment for each edition. (Let's say we are looking for the \"Top 20\".) We should build a vocabulary for each edition individually and only afterwards work towards a common vocabulary of several \"Top n\" sets.", "import re\nimport pandas as pd\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nnumTopTerms = 20\n\n# So first we build a tokenising and lemmatising function (per language) to work as\n# an input filter to the CountVectorizer function\ndef ourLaLemmatiser(str_input):\n wordforms = re.split('\\W+', str_input)\n return [lemma[language.index(\"LA\")][wordform].lower().strip() if wordform in lemma[language.index(\"LA\")] else wordform.lower().strip() for wordform in wordforms ]\ndef ourEsLemmatiser(str_input):\n wordforms = re.split('\\W+', str_input)\n return [lemma[language.index(\"ES\")][wordform].lower().strip() if wordform in lemma[language.index(\"ES\")] else wordform.lower().strip() for wordform in wordforms ]\ndef ourPtLemmatiser(str_input):\n wordforms = re.split('\\W+', str_input)\n return [lemma[language.index(\"PT\")][wordform].lower().strip() if wordform in lemma[language.index(\"PT\")] else wordform.lower().strip() for wordform in wordforms ]\n\ndef ourLemmatiser(lang):\n if (lang == \"LA\"):\n return ourLaLemmatiser\n if (lang == \"ES\"):\n return ourEsLemmatiser\n if (lang == \"PT\"):\n return ourPtLemmatiser\n\ndef ourStopwords(lang):\n if (lang == \"LA\"):\n return stopwords[language.index(\"LA\")]\n if (lang == \"ES\"):\n return stopwords[language.index(\"ES\")]\n if (lang == \"PT\"):\n return stopwords[language.index(\"PT\")]\n\ntopTerms = []\nfor i in range(numOfEds):\n\n topTermsEd = []\n # Initialize the library's function, specifying our\n # tokenizing function from above and our stopwords list.\n tfidf_vectorizer = TfidfVectorizer(stop_words=ourStopwords(language[i]), use_idf=True, tokenizer=ourLemmatiser(language[i]), norm='l2')\n\n # Finally, we feed our corpus to the function to build a new \"tfidf_matrix\" object\n tfidf_matrix 
= tfidf_vectorizer.fit_transform(Editions[i])\n\n # convert your matrix to an array to loop over it\n mx_array = tfidf_matrix.toarray()\n\n # get your feature names\n fn = tfidf_vectorizer.get_feature_names()\n\n # now loop through all segments and get the respective top n words.\n pos = 0\n for j in mx_array:\n # We have empty segments, i.e. none of the words in our vocabulary has any tf/idf score > 0\n if (j.max() == 0):\n topTermsEd.append([(\"\", 0)])\n # otherwise append (present) lemmatised words until numTopTerms or the number of words (-stopwords) is reached\n else:\n topTermsEd.append(\n [(fn[x], j[x]) for x in ((j*-1).argsort()) if j[x] > 0] \\\n [:min(numTopTerms, len(\n [word for word in re.split('\\W+', Editions[i][pos]) if ourLemmatiser(language[i])(word) not in stopwords]\n ))])\n pos += 1\n topTerms.append(topTermsEd)", "Translations?\nMaybe there is an approach to inter-lingual comparison after all. After a first unsuccessful try with conceptnet.io, I next want to try Babelnet in order to lookup synonyms, related terms and translations. I still have to study the API...\nFor example, let's take this single segment 19:", "segment_no = 18", "And then first let's see how this segment compares in the different editions:", "print(\"Comparing words from segments \" + str(segment_no) + \" ...\")\nprint(\" \")\nprint(\"Here is the segment in the four editions:\")\nprint(\" \")\nfor i in range(numOfEds):\n print(\"Ed. \" + str(i) + \":\")\n print(\"------\")\n print(Editions[i][segment_no])\n print(\" \")\n\nprint(\" \")\nprint(\" \")\n\n# Build List of most significant words for a segment\n\nprint(\"Most significant words in the segment:\")\nprint(\" \")\nfor i in range(numOfEds):\n print(\"Ed. \" + str(i) + \":\")\n print(\"------\")\n print(topTerms[i][segment_no])\n print(\" \")", "Now we look up the \"concepts\" associated to those words in babelnet. Then we look up the concepts associated with the words of the present segment from another edition/language, and see if the concepts are the same.\nBut we have to decide on some particular editions to get things started. 
Let's take the Spanish and Latin ones:", "startEd = 1\nsecondEd = 2", "And then we can continue...", "import urllib\nimport json\nfrom collections import defaultdict\n\nbabelAPIKey = '18546fd3-8999-43db-ac31-dc113506f825'\nbabelGetSynsetIdsURL = \"https://babelnet.io/v5/getSynsetIds?\" + \\\n \"targetLang=LA&targetLang=ES&targetLang=PT\" + \\\n \"&searchLang=\" + language[startEd] + \\\n \"&key=\" + babelAPIKey + \\\n \"&lemma=\"\n\n# Build lists of possible concepts\ntop_possible_conceptIDs = defaultdict(list)\nfor (word, val) in topTerms[startEd][segment_no]:\n concepts_uri = babelGetSynsetIdsURL + urllib.parse.quote(word)\n response = urllib.request.urlopen(concepts_uri)\n conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))\n for rel in conceptIDs:\n top_possible_conceptIDs[word].append(rel.get(\"id\"))\n\nprint(\" \")\nprint(\"For each of the '\" + language[startEd] + \"' words, here are possible synsets:\")\nprint(\" \")\n\nfor word in top_possible_conceptIDs:\n print(word + \":\" + \" \" + ', '.join(c for c in top_possible_conceptIDs[word]))\n print(\" \")\n\nprint(\" \")\nprint(\" \")\nprint(\" \")\n\nbabelGetSynsetIdsURL2 = \"https://babelnet.io/v5/getSynsetIds?\" + \\\n \"targetLang=LA&targetLang=ES&targetLang=PT\" + \\\n \"&searchLang=\" + language[secondEd] + \\\n \"&key=\" + babelAPIKey + \\\n \"&lemma=\"\n\n# Build list of 10 most significant words in the second language\ntop_possible_conceptIDs_2 = defaultdict(list)\nfor (word, val) in topTerms[secondEd][segment_no]:\n concepts_uri = babelGetSynsetIdsURL2 + urllib.parse.quote(word)\n response = urllib.request.urlopen(concepts_uri)\n conceptIDs = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))\n for rel in conceptIDs:\n top_possible_conceptIDs_2[word].append(rel.get(\"id\"))\n\nprint(\" \")\nprint(\"For each of the '\" + language[secondEd] + \"' words, here are possible synsets:\")\nprint(\" \")\nfor word in top_possible_conceptIDs_2:\n print(word + \":\" + \" \" + ', '.join(c for c in top_possible_conceptIDs_2[word]))\n print(\" \")\n\n# calculate number of overlapping terms\nvalues_a = set([item for sublist in top_possible_conceptIDs.values() for item in sublist])\nvalues_b = set([item for sublist in top_possible_conceptIDs_2.values() for item in sublist])\noverlaps = values_a & values_b\nprint(\"Overlaps: \" + str(overlaps))\n\nbabelGetSynsetInfoURL = \"https://babelnet.io/v5/getSynset?key=\" + babelAPIKey + \\\n \"&targetLang=LA&targetLang=ES&targetLang=PT\" + \\\n \"&id=\"\n\nfor c in overlaps:\n info_uri = babelGetSynsetInfoURL + c\n response = urllib.request.urlopen(info_uri)\n words = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))\n \n senses = words['senses']\n for result in senses[:1]:\n lemma = result['properties'].get('fullLemma')\n resultlang = result['properties'].get('language')\n print(c + \": \" + lemma + \" (\" + resultlang.lower() + \")\")\n\n# what's left: do a nifty ranking", "Actually I think this is somewhat promising - an overlap of four independent, highly meaning-bearing words, or of forty-something related concepts. At first glance, they should be capable of distinguishing this section from all the other ones. 
However, getting this result was made possible by quite a bit of manual tuning the stopwords and lemmatization dictionaries before, so this work is important and cannot be eliminated.\nNew Approach: Use Aligner from Machine Translation Studies <a name=\"newApproach\"/>\nIn contrast to what I thought previously, there is a couple of tools for automatically aligning parallel texts after all. After some investigation of the literature, the most promising candidate seems to be HunAlign. However, as this is a commandline tool written in C++ (there is LF Aligner, a GUI, available), it is not possible to run it from within this notebook.\nFirst results were problematic, due to the different literary conventions that our editions follow: Punctuation was used inconsistently (but sentence length is one of the most relevant factors for aligning), as were abbreviations and notes.\nMy current idea is to use this notebook to preprocess the texts and to feed a cleaned up version of them to hunalign...\nComing back to this after a first couple of rounds with Hunalign, I have the feeling that the fact that literary conventions are so divergent probably means that Aligning via sentence lengths is a bad idea in our from the outset. Probably better to approach this with GMA or similar methods. Anyway, here are the first attempts with Hunalign:", "from nltk import sent_tokenize\n\n## First, train the sentence tokenizer:\nfrom pprint import pprint\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars, PunktTrainer\n \nclass BulletPointLangVars(PunktLanguageVars):\n sent_end_chars = ('.', '?', ':', '!', '¶')\n\ntrainer = PunktTrainer()\ntrainer.INCLUDE_ALL_COLLOCS = True\ntokenizer = PunktSentenceTokenizer(trainer.get_params(), lang_vars = BulletPointLangVars())\nfor tok in abbreviations : tokenizer._params.abbrev_types.add(tok)\n\n## Now we sentence-segmentize all our editions, printing results and saving them to files:\n\n# folder for the several segment files:\noutputBase = 'Azpilcueta/sentences'\ndest = None\n\n# Then, sentence-tokenize our segments:\nfor i in range(numOfEds):\n dest = open(outputBase + '_' + str(year[i]) + '.txt',\n encoding='utf-8',\n mode='w')\n print(\"Sentence-split of ed. \" + str(i) + \":\")\n print(\"------\")\n for s in range(0, len(Editions[i])):\n for a in tokenizer.tokenize(Editions[i][s]):\n dest.write(a.strip() + '\\n')\n print(a)\n dest.write('<p>\\n')\n print('<p>')\n dest.close()\n", "... lemmatize/stopwordize it---", "# folder for the several segment files:\noutputBase = 'Azpilcueta/sentences-lemmatized'\ndest = None\n\n# Then, sentence-tokenize our segments:\nfor i in range(numOfEds):\n dest = open(outputBase + '_' + str(year[i]) + '.txt',\n encoding='utf-8',\n mode='w')\n stp = set(stopwords[i])\n print(\"Cleaned/lemmatized ed. \" + str(i) + \" [\" + language[i] + \"]:\")\n print(\"------\")\n for s in range(len(Editions[i])):\n for a in tokenizer.tokenize(Editions[i][s]):\n dest.write(\" \".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]) + '\\n')\n print(\" \".join([x for x in ourLemmatiser(language[i])(a) if x not in stp]))\n dest.write('<p>\\n')\n print('<p>')\n dest.close()\n", "With these preparations made, Hunaligning 1552 and 1556 reports \"Quality 0.63417\" for unlemmatized and \"Quality 0.51392\" for lemmatized versions of the texts for its findings which still contain many errors. Removing \":\" from the sentence end marks gives \"Quality 0.517048/0.388377\", but from a first impression with fewer errors. 
Results can be output in different formats, xls files are here and here.\nSimilarity <a name=\"DocumentSimilarity\"/>\nIt seems we could now create another matrix replacing lemmata with concepts and retaining the tf/idf values (so as to keep a weight coefficient to the concepts). Then we should be able to calculate similarity measures across the same concepts...\nThe approach to choose would probably be the \"cosine similarity\" of concept vector spaces. Again, there is a library ready for us to use (but you can find some documentation here, here and here.)\nHowever, this is where I have to take a break now. I will return to here soon...", "from sklearn.metrics.pairwise import cosine_similarity\n\nsimilarities = pd.DataFrame(cosine_similarity(tfidf_matrix))\nsimilarities[round(similarities, 0) == 1] = 0 # Suppress a document's similarity to itself\nprint(\"Pairwise similarities:\")\nprint(similarities)\n\nprint(\"The two most similar segments in the corpus are\")\nprint(\"segments\", \\\n similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1), \\\n \"and\", \\\n similarities[similarities == similarities.values.max()].idxmax(axis=0)[ similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1) ].astype(int), \\\n \".\")\nprint(\"They have a similarity score of\")\nprint(similarities.values.max())", "<div class=\"alert alertbox alert-success\">Of course, in every set of documents, we will always find two that are similar in the sense of them being more similar to each other than to the other ones. Whether or not this actually *means* anything in terms of content is still up to scholarly interpretation. But at least it means that a scholar can look at the two documents and when she determines that they are not so similar after all, then perhaps there is something interesting to say about similar vocabulary used for different puproses. Or the other way round: When the scholar knows that two passages are similar, but they have a low \"similarity score\", shouldn't that say something about the texts's rhetorics?</div>\n\nWord Clouds <a name=\"WordClouds\"/>\nWe can use a library that takes word frequencies like above, calculates corresponding relative sizes of words and creates nice wordcloud images for our sections (again, taking the fourth segment as an example) like this:", "from wordcloud import WordCloud\nimport matplotlib.pyplot as plt\n\n# We make tuples of (lemma, tf/idf score) for one of our segments\n# But we have to convert our tf/idf weights to pseudo-frequencies (i.e. 
integer numbers)\nfrq = [ int(round(x * 100000, 0)) for x in Editions[1][3]]\nfreq = dict(zip(fn, frq))\n\nwc = WordCloud(background_color=None, mode=\"RGBA\", max_font_size=40, relative_scaling=1).fit_words(freq)\n\n# Now show/plot the wordcloud\nplt.figure()\nplt.imshow(wc, interpolation=\"bilinear\")\nplt.axis(\"off\")\nplt.show()", "In order to have a nicer overview over the many segments than is possible in this notebook, let's create a new html file listing some of the characteristics that we have found so far...", "outputDir = \"Azpilcueta\"\nhtmlfile = open(outputDir + '/Overview.html', encoding='utf-8', mode='w')\n\n# Write the html header and the opening of a layout table\nhtmlfile.write(\"\"\"<!DOCTYPE html>\n<html>\n <head>\n <title>Section Characteristics</title>\n <meta charset=\"utf-8\"/>\n </head>\n <body>\n <table>\n\"\"\")\n\na = [[]]\na.clear()\ndicts = []\nw = []\n\n# For each segment, create a wordcloud and write it along with label and\n# other information into a new row of the html table\nfor i in range(len(mx_array)):\n # this is like above in the single-segment example...\n a.append([ int(round(x * 100000, 0)) for x in mx_array[i]])\n dicts.append(dict(zip(fn, a[i])))\n w.append(WordCloud(background_color=None, mode=\"RGBA\", \\\n max_font_size=40, min_font_size=10, \\\n max_words=60, relative_scaling=0.8).fit_words(dicts[i]))\n # We write the wordcloud image to a file\n w[i].to_file(outputDir + '/wc_' + str(i) + '.png')\n # Finally we write the column row\n htmlfile.write(\"\"\"\n <tr>\n <td>\n <head>Section {a}: <b>{b}</b></head><br/>\n <img src=\"./wc_{a}.png\"/><br/>\n <small><i>length: {c} words</i></small>\n </td>\n </tr>\n <tr><td>&nbsp;</td></tr>\n\"\"\".format(a = str(i), b = label[i], c = len(tokenised[i])))\n\n# And then we write the end of the html file.\nhtmlfile.write(\"\"\"\n </table>\n </body>\n</html>\n\"\"\")\nhtmlfile.close()", "This should have created a nice html file which we can open here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.12.1/examples/notebooks/generated/discrete_choice_overview.ipynb
bsd-3-clause
[ "Discrete Choice Models Overview", "import numpy as np\nimport statsmodels.api as sm", "Data\nLoad data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).", "spector_data = sm.datasets.spector.load(as_pandas=False)\nspector_data.exog = sm.add_constant(spector_data.exog, prepend=False)", "Inspect the data:", "print(spector_data.exog[:5,:])\nprint(spector_data.endog[:5])", "Linear Probability Model (OLS)", "lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)\nlpm_res = lpm_mod.fit()\nprint('Parameters: ', lpm_res.params[:-1])", "Logit Model", "logit_mod = sm.Logit(spector_data.endog, spector_data.exog)\nlogit_res = logit_mod.fit(disp=0)\nprint('Parameters: ', logit_res.params)", "Marginal Effects", "margeff = logit_res.get_margeff()\nprint(margeff.summary())", "As in all the discrete data models presented below, we can print a nice summary of results:", "print(logit_res.summary())", "Probit Model", "probit_mod = sm.Probit(spector_data.endog, spector_data.exog)\nprobit_res = probit_mod.fit()\nprobit_margeff = probit_res.get_margeff()\nprint('Parameters: ', probit_res.params)\nprint('Marginal effects: ')\nprint(probit_margeff.summary())", "Multinomial Logit\nLoad data from the American National Election Studies:", "anes_data = sm.datasets.anes96.load(as_pandas=False)\nanes_exog = anes_data.exog\nanes_exog = sm.add_constant(anes_exog, prepend=False)", "Inspect the data:", "print(anes_data.exog[:5,:])\nprint(anes_data.endog[:5])", "Fit MNL model:", "mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)\nmlogit_res = mlogit_mod.fit()\nprint(mlogit_res.params)", "Poisson\nLoad the Rand data. Note that this example is similar to Cameron and Trivedi's Microeconometrics Table 20.5, but it is slightly different because of minor changes in the data.", "rand_data = sm.datasets.randhie.load(as_pandas=False)\nrand_exog = rand_data.exog.view(float).reshape(len(rand_data.exog), -1)\nrand_exog = sm.add_constant(rand_exog, prepend=False)", "Fit Poisson model:", "poisson_mod = sm.Poisson(rand_data.endog, rand_exog)\npoisson_res = poisson_mod.fit(method=\"newton\")\nprint(poisson_res.summary())", "Negative Binomial\nThe negative binomial model gives slightly different results.", "mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)\nres_nbin = mod_nbin.fit(disp=False)\nprint(res_nbin.summary())", "Alternative solvers\nThe default method for fitting discrete data MLE models is Newton-Raphson. You can use other solvers by using the method argument:", "mlogit_res = mlogit_mod.fit(method='bfgs', maxiter=250)\nprint(mlogit_res.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
YeEmrick/learning
cs231/assignment/assignment1/knn.ipynb
apache-2.0
[ "k-Nearest Neighbor (kNN) exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nThe kNN classifier consists of two stages:\n\nDuring training, the classifier takes the training data and simply remembers it\nDuring testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\nThe value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.", "# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\nfrom __future__ import print_function\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\n# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n\n# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)\ntry:\n del X_train, y_train\n del X_test, y_test\n print('Clear previously loaded data.')\nexcept:\n pass\n\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint('Training data shape: ', X_train.shape)\nprint('Training labels shape: ', y_train.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)\n\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = list(range(num_training))\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = list(range(num_test))\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint(X_train.shape, X_test.shape)\n\nfrom cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\n", "We would now like to classify the test data with the kNN classifier. 
Recall that we can break down this process into two steps: \n\nFirst we must compute the distances between all test examples and all train examples. \nGiven these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.\nFirst, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.", "# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint(dists.shape)\n\n# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()", "Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\nWhat in the data is the cause behind the distinctly bright rows?\nWhat causes the columns?\n\nYour Answer: fill this in.", "# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:", "y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "You should expect to see a slightly better performance than with k = 1.\nInline Question 2\nWe can also other distance metrics such as L1 distance.\nThe performance of a Nearest Neighbor classifier that uses L1 distance will not change if (Select all that apply.):\n1. The data is preprocessed by subtracting the mean.\n2. The data is preprocessed by subtracting the mean and dividing by the standard deviation.\n3. The coordinate axes for the data are rotated.\n4. None of the above.\nYour Answer:\nYour explanation:", "# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. 
In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')\n\n# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')\n\n# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint('Two loop version took %f seconds' % two_loop_time)\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint('One loop version took %f seconds' % one_loop_time)\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint('No loop version took %f seconds' % no_loop_time)\n\n# you should see significantly faster performance with the fully vectorized implementation", "Cross-validation\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.", "num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\nprint(\"X_train_folds shape\", X_train_folds[0].shape)\nprint(\"y_train_folds shape\", y_train_folds[0].shape)\n\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. 
For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. #\n################################################################################\nfrom functools import reduce\nclassifier = KNearestNeighbor()\nfor k in k_choices:\n accuracies = np.zeros(num_folds)\n for i in range(num_folds):\n X_train_data = np.delete(X_train_folds, i)\n y_train_data = np.delete(y_train_folds, i)\n X_train_data = reduce(lambda x,y: np.row_stack((x, y)), X_train_data)\n y_train_data = reduce(lambda x,y: np.append(x, y), y_train_data)\n X_val_data = X_train_folds[i]\n y_val_data = y_train_folds[i]\n classifier.train(X_train_data, y_train_data)\n y_pred_data = classifier.predict(X_val_data, k)\n num_correct = np.sum(y_pred_data == y_val_data)\n accuracy = float(num_correct) / num_test\n accuracies[i] = accuracy\n k_to_accuracies[k] = accuracies\n \n \n \n \n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print('k = %d, accuracy = %f' % (k, accuracy))\n\n\n# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()\n\n# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 1\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "Inline Question 3\nWhich of the following statements about $k$-Nearest Neighbor ($k$-NN) are true in a classification setting, and for all $k$? Select all that apply.\n1. The training error of a 1-NN will always be better than that of 5-NN.\n2. The test error of a 1-NN will always be better than that of a 5-NN.\n3. The decision boundary of the k-NN classifier is linear.\n4. The time needed to classify a test example with the k-NN classifier grows with the size of the training set.\n5. None of the above.\nYour Answer:\nYour explanation:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
parrt/msan692
notes/datastructures.ipynb
mit
[ "Data structures\nFor a refresher on object-oriented programming, see Object-oriented programming.\nA simple set implementation\nSets in Python can be specified with set notation:", "s = {1,3,2,9}", "Or with by creating a set object and assigning it to a variable then manually adding elements:", "s = set()\ns.add(1)\ns.add(3)", "We can build our own set object implementation by creating a class definition:", "class MySet:\n def __init__(self):\n self.elements = []\n def add(self, x):\n if x not in self.elements:\n self.elements.append(x)\n\ns = MySet()\ns.add(3) # same as MySet.add(a,3)\ns.add(3)\ns.add(2)\ns.add('cat')\ns.elements\n\nfrom lolviz import *\nobjviz(s)", "Question: How expensive is it to add an element to a set with this implementation?\nExercise\nAdd a method called hasmember() that returns true or false according to whether parameter x is a member of the set.", "class MySet:\n def __init__(self):\n self.elements = []\n def add(self, x):\n if x not in self.elements:\n self.elements.append(x)\n def hasmember(self, x):\n return x in self.elements\n\ns = MySet()\ns.add(3) # same as MySet.add(a,3)\ns.add(3)\ns.add(2)\ns.add('cat')\ns.hasmember(3), s.hasmember(99)", "Linked lists -- the gateway drug\nWe've studied arrays/lists that are built into Python but they are not always the best kind of list to use. Sometimes, we are inserting and deleting things from the head or middle of the list. If we do this in lists made up of contiguous cells in memory, we have to move a lot of cells around to make room for a new element or to close a hole made by a deletion. Most importantly, linked lists are the degenerate form of a general object graph. So, it makes sense to start with the simple versions and move up to general graphs.\nLinked lists allow us to efficiently insert and remove things anywhere we want, at the cost of more memory.\nA linked list associates a next pointer with each value. We call these things nodes and here's a simple implementation for node objects:", "class LLNode:\n def __init__(self, value, next=None):\n self.value = value\n self.next = next\n\nhead = LLNode('tombu')\ncallsviz(varnames='head')\n\nhead = LLNode('parrt', head)\ncallsviz(varnames='head')\n\nhead = LLNode(\"xue\", head)\ncallsviz(varnames='head')", "Walk list\nTo walk a list, we use the notion of a cursor, which we can think of as a finger that moves along a data structure from node to node. We initialize the cursor to point to the first node of the list, the head, and then walk the cursor through the list via the next fields:", "p = head\nwhile p is not None:\n print(p.value)\n p = p.next", "Question: How fast can we walk the linked list?\nExercise\nModify the walking code so that it lives in a method of LLNode called exists(self, x) that looks for a node with value x starting at self. If we test with head.exists('parrt') then self would be our global head variable. Have the function return true if x exists in the list, else return false. 
You can test it with:\npython\nhead = LLNode('tombu')\nhead = LLNode('parrt', head)\nhead = LLNode(\"xue\", head)\nhead.exists('parrt'), head.exists('part')", "class LLNode:\n def __init__(self, value, next=None):\n self.value = value\n self.next = next\n \n def exists(self, x):\n p = self # start looking at this node\n while p is not None:\n if x==p.value:\n return True\n p = p.next\n return False\n \nhead = LLNode('tombu')\nhead = LLNode('parrt', head)\nhead = LLNode(\"xue\", head)\nhead.exists('parrt'), head.exists('part')", "Insertion at head\nIf we want to insert an element at the front of a linked list, we create a node to hold the value and set its next pointer to point to the old head. Then we have the head variable point at the new node. Here is the sequence.\nCreate new node", "x = LLNode('mary')\ncallviz(varnames=['head','x'])", "Make next field of new node point to head", "x.next = head\ncallviz(varnames=['head','x'])", "Make head point at new node", "head = x\ncallviz(varnames=['head','x'])", "Deletion of node", "# to delete xue, make previous node skip over xue\nxue = head.next\ncallviz(varnames=['head','x','xue'])\n\nhead.next = xue.next\ncallviz(varnames=['head','x'])", "Notice that xue still points at the node but we are going to ignore that variable from now on. Moving from the head of the list, we still cannot see the node with 'xue' in it.", "head.next = xue.next\ncallviz(varnames=['head','x','xue'])", "Exercise\nGet a pointer to the node with value tombu and then delete it from the list using the same technique we just saw.", "before_tombu = head.next\ncallviz(varnames=['head','x','before_tombu'])\n\nbefore_tombu.next = None\ncallviz(varnames=['head','x','before_tombu'])", "Binary trees\nThe tree data structure is one of the most important in computer science and is extremely common in data science as well. Decision trees, which form the core of gradient boosting machines and random forests (machine learning algorithms), are naturally represented as trees in memory. When we process HTML and XML files, those are generally represented by trees. For example:\n<img align=\"right\" src=\"figures/xml-tree.png\" width=\"200\"></td>\nxml\n&lt;bookstore&gt;\n &lt;book category=\"cooking\"&gt;\n &lt;title lang=\"en\"&gt;Everyday Italian&lt;/title&gt;\n &lt;author&gt;Giada De Laurentiis&lt;/author&gt;\n &lt;year&gt;2005&lt;/year&gt;\n &lt;price&gt;30.00&lt;/price&gt;\n &lt;/book&gt;\n &lt;book category=\"web\"&gt;\n &lt;title lang=\"en\"&gt;Learning XML&lt;/title&gt;\n &lt;author&gt;Erik T. Ray&lt;/author&gt;\n &lt;year&gt;2003&lt;/year&gt;\n &lt;price&gt;39.95&lt;/price&gt;\n &lt;/book&gt;\n&lt;/bookstore&gt;\nWe're going to look at a simple kind of tree that has at most two children: a binary tree. A node that has no children is called a leaf and non-leaves are called internal nodes.\nIn general, trees with $n$ nodes have $n-1$ edges. Each node has a single incoming edge and the root has none.\nNodes have parents and children and siblings (at the same level).\nSometimes nodes have links back to their parents for programming convenience reasons. 
That would make it a graph not a tree but we still consider it a tree.", "class Tree:\n def __init__(self, value, left=None, right=None):\n self.value = value\n self.left = left\n self.right = right \n\nroot = Tree('parrt')\nroot.left = Tree('mary')\nroot.right = Tree('april')\ntreeviz(root)\n\nroot = Tree('parrt', Tree('mary'), Tree('april'))\ntreeviz(root)\n\nroot = Tree('parrt')\nmary = Tree('mary')\napril = Tree('april')\njim = Tree('jim')\nsri = Tree('sri')\nmike = Tree('mike')\n\nroot.left = mary\nroot.right = april\nmary.left = jim\nmary.right = mike\napril.right = sri\n\ntreeviz(root)", "Exercise\nCreate a class definition for NTree that allows arbitrary numbers of children. (Use a list for field children rather than left and right.) The constructor should init an empty children list. Test your code using:\n```python\nfrom lolviz import objviz\nroot2 = NTree('parrt')\nmary = NTree('mary')\napril = NTree('april')\njim = NTree('jim')\nsri = NTree('sri')\nmike = NTree('mike')\nroot2.addchild(mary)\nroot2.addchild(jim)\nroot2.addchild(sri)\nsri.addchild(mike)\nsri.addchild(april)\nobjviz(root2)\n```\nSolution", "class NTree:\n def __init__(self, value):\n self.value = value\n self.children = []\n \n def addchild(self, child):\n if isinstance(child, NTree):\n self.children.append(child)\n\nroot2 = NTree('parrt')\nmary = NTree('mary')\napril = NTree('april')\njim = NTree('jim')\nsri = NTree('sri')\nmike = NTree('mike')\n\nroot2.addchild(mary)\nroot2.addchild(jim)\nroot2.addchild(sri)\nsri.addchild(mike)\nsri.addchild(april)\n\nobjviz(root2)", "Walking trees\nWalking a tree is a matter of moving a cursor like we did with the linked lists above. The goal is to visit each node in the tree. We start out by having the cursor point at the root of the tree and then walk downwards until we hit leaves, and then we come back up and try other alternatives. \nA good physical analogy: imagine a person (cursor) from HR needing to speak (visit) each person in a company starting with the president/CEO. Here's a sample org chart:\n<img src=\"figures/orgchart.png\" width=\"200\">\nThe general visit algorithm starting at node p is meet with p then visit each direct report. Then visit all of their direct reports, one level of the tree at a time. The node visitation sequence would be A,B,C,F,H,J,... This is a breadth-first search of the tree and easy to describe but a bit more work to implement that a depth-first search. Depth first means visiting a person then visit their first direct report and that person's direct report etc... until you reach a leaf node. Then back up a level and move to next direct report. That visitation sequence is A,B,C,D,E,F,G,H,I,J,K,L.\nIf you'd like to start at node B, not A, what is the procedure? The same, of course. So visiting A means, say, printing A then visiting B. Visiting B means visiting C, and when that completes, visiting F, etc... The key is that the procedure for visiting a node is exactly the same regardless of which node you start with. This is generally true for any self-similar data structure like a tree.\nAnother easy way to think about binary tree visitation in particular is positioning yourself in a room with a bunch of doors as choices. Each door leads to other rooms, which might also have doors leading to other rooms. We can think of a room as a node and doors as pointers to other nodes. Each room is identical and has 0, 1, or 2 doors (for a binary tree). At the root node we might see two choices and, to explore all nodes, we can visit each door in turn. 
Let's go left:\n<img src=\"figures/left-door.png\" width=\"100\">\nAfter exploring all possible rooms by taking the left door, we come all the way back out to the root room and try the next alternative on the right:\n<img src=\"figures/right-door.png\" width=\"100\">\nAlgorithmically what were doing in each room is\nprocedure visit room:\n if left door exists, visit rooms accessible through left door\n if right door exists, visit rooms accessible through right door\nOr in code notation:\npython\ndef visit(room):\n if room.left: visit(room.left)\n if room.right: visit(room.right)\nThis mechanism works from any room. Imagine waking up and finding yourself in a room with two doors. You have no idea whether you are at the root or somewhere in the middle of a labyrinth (maze) of rooms.\nThis approach is called backtracking.\nLet's code this up but make a regular function not a method of the tree class to keep things simple. Let's look at that tree again:", "treeviz(root)\n\ndef walk(t):\n \"Depth-first walk of binary tree\"\n if t is None: return\n# if t.left is None: callsviz(varnames=['t']).view()\n print(t.value) # \"visit\" or process this node\n walk(t.left) # walk into the left door\n walk(t.right) # after visiting all those, enter right door\n \nwalk(root)", "That is a recursive function, meaning that walk calls itself. It's really no different than the recurrence relations we use in mathematics, such as the gradient descent recurrence:\n$x_{t+1} = x_t - \\eta f'(x_t)$\nVariable $x$ is a function of previous incarnations of itself.", "def fact(n):\n print(f\"fact({n})\")\n if n==0: return 1\n return n * fact(n-1)\n\nfact(10)", "Don't let the recursion scare you, just pretend that you are calling a different function or that you are calling the same function except that it is known to be correct. We call that the \"recursive leap of faith.\" (See Fundamentals of Recursion,Although that one is using C++ not Python.)\nAs the old joke goes: \"To truly understand recursion, you must first understand recursion.\"\nThe order in which we reach (enter/exit) each node during the search is always the same for a given search strategy, such as depth first search. Here is a visualization from Wikipedia:\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/Sorted_binary_tree_preorder.svg/440px-Sorted_binary_tree_preorder.svg.png\" width=\"250\">\nWe always try to go as deep as possible before exploring siblings.\nNow, notice the black dots on the traversal. That signifies processing or \"visiting\" a node and in this case is done before visiting the children. When we process a node and then it's children, we call that a preorder traversal. If we process a node after walking the children, we call it a post-order traversal:\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Sorted_binary_tree_postorder.svg/440px-Sorted_binary_tree_postorder.svg.png\" width=\"250\">\nIn code, that just means switching the processing step two after the walk of the children:", "def walk(t):\n if t is None: return\n walk(t.left)\n walk(t.right)\n print(t.value) # process after visiting children\n \nwalk(root)", "In both cases we are performing a depth-first walk of the tree, which means that we are immediately seeking the leaves rather than siblings. 
A depth first walk scans down all of the left child fields of the nodes until it hits a leaf and then goes back up a level, looking for children at that level.\nIn contrast, a breadth-first walk processes all children before looking at grandchildren. This is a less common walk but, for our tree, would be the sequence parrt, mary, april, jim, mike, sri. In a sense, breadth first processes one level of the tree at a time:\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d1/Sorted_binary_tree_breadth-first_traversal.svg/440px-Sorted_binary_tree_breadth-first_traversal.svg.png\" width=\"250\">\nExercise\nAlter the depth-first recursive tree walk above to sum the values in a binary tree. Have walk() return the sum of a node's value and all it childrens' values. Test with:\n```python\na = Tree(3)\nb = Tree(5)\nc = Tree(10)\nd = Tree(9)\ne = Tree(4)\nf = Tree(1)\na.left = b\na.right = c\nb.left = d\nb.right = e\ne.right = f\ntreeviz(a)\nprint(walk(a), walk(b), walk(c))\n```", "class Tree:\n def __init__(self, value, left=None, right=None):\n self.value = value\n self.left = left\n self.right = right \n \ndef walk(t:Tree) -> int:\n if t is None: return 0\n return t.value + walk(t.left) + walk(t.right)\n\na = Tree(3)\nb = Tree(5)\nc = Tree(10)\nd = Tree(9)\ne = Tree(4)\nf = Tree(1)\n\na.left = b\na.right = c\nb.left = d\nb.right = e\ne.right = f\ntreeviz(a)\n\nprint(walk(a), walk(b), walk(c))", "Graphs\nTrees are actually a subset of the class of directed, acyclic graphs. If we remove the acyclic restriction and the restriction that nodes have a single incoming edge, we get a general, directed graph. These are also extremely common in computer science and are used to represent graphs of users in a social network, locations on a map, or a graph of webpages, which is how Google does page ranking.\ngraphviz\nYou might find it useful to display graphs visually and graphviz is an excellent way to do that. Here's an example", "import graphviz as gv\n\ngv.Source(\"\"\"\ndigraph G {\n node [shape=box penwidth=\"0.6\" margin=\"0.0\" fontname=\"Helvetica\" fontsize=10]\n edge [arrowsize=.4 penwidth=\"0.6\"]\n rankdir=LR;\n ranksep=.25;\n cat->dog\n dog->cat\n dog->horse\n dog->zebra\n horse->zebra\n zebra->llama\n}\n\"\"\")", "Once again, it's very convenient to represent a node in this graph as an object, which means we need a class definition:", "class GNode:\n def __init__(self, value):\n self.value = value\n self.edges = [] # outgoing edges\n \n def connect(self, other):\n self.edges.append(other)\n\ncat = GNode('cat')\ndog = GNode('dog')\nhorse = GNode('horse')\nzebra = GNode('zebra')\nllama = GNode('llama')\n\ncat.connect(dog)\ndog.connect(cat)\ndog.connect(horse)\ndog.connect(zebra)\nhorse.connect(zebra)\nzebra.connect(llama)\n\nobjviz(cat)", "Walking graphs\nWalking a graph (depth-first) is just like walking a tree in that we use backtracking to try all possible branches out of every node until we have reached all reachable nodes. When we run into a dead end, we back up to the most recently available on visited path and try that. That's how you get from the entrance to the exit of a maze. \n<img src=\"figures/maze.jpg\" width=\"300\">\nThe only difference between walking a tree and walking a graph is that we have to watch out for cycles when walking a graph, so that we don't get stuck in an infinite loop. We leave a trail of breadcrumbs or candies or string to help us keep track of where we have visited and where we have not. 
If we run into our trail, we have hit a cycle and must also backtrack to avoid an infinite loop. This is a depth first search.\nHere's a nice visualization website for graph walking.\n<a href=\"http://algoanim.ide.sk/index.php?page=showanim&id=47)\"><img src=\"figures/graph-dfs-icon.png\" width=\"300\"></a>\nIn code, here is how we perform a depth-frist search on a graph:", "def walk(g, visited):\n \"Depth-first walk of a graph\"\n if g is None or g in visited: return\n visited.add(g) # mark as visited\n print(g.value) # process before visiting outgoing edges\n for node in g.edges:\n walk(node, visited) # walk all outgoing edge targets\n \nwalk(cat, set())", "Where we start the walk of the graph matters:", "walk(llama, set())\n\nwalk(horse, set())", "Operator overloading\n(Note: We overload operators but override methods in a subclass definition)\nPython allows class definitions to implement functions that are called when standard operator symbols such as + and / are applied to objects of that type. This is extremely useful for mathematical libraries such as numpy, but is often abused. Note that you could redefine subtraction to be multiplication when someone used the - sign. (Yikes!)\nHere's an extension to Point that supports + for Point addition:", "import numpy as np\n\nclass Point:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n \n def distance(self, other):\n return np.sqrt( (self.x - other.x)**2 + (self.y - other.y)**2 )\n \n def __add__(self,other):\n x = self.x + other.x\n y = self.y + other.y\n return Point(x,y)\n \n def __str__(self):\n return f\"({self.x},{self.y})\"\n\np = Point(3,4)\nq = Point(5,6)\nprint(p, q)\nprint(p + q) # calls p.__add__(q) or Point.__add__(p,q)\nprint(Point.__add__(p,q))", "Exercise\nAdd a method to implement the - subtraction operator for Point so that the following code works:\npython\np = Point(5,4)\nq = Point(1,5)\nprint(p, q)\nprint(p - q)", "import numpy as np\n\nclass Point:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n \n def distance(self, other):\n return np.sqrt( (self.x - other.x)**2 + (self.y - other.y)**2 )\n \n def __add__(self,other):\n x = self.x + other.x\n y = self.y + other.y\n return Point(x,y)\n \n def __sub__(self,other):\n x = self.x - other.x\n y = self.y - other.y\n return Point(x,y)\n \n def __str__(self):\n return f\"({self.x},{self.y})\"\n \np = Point(5,4)\nq = Point(1,5)\nprint(p, q)\nprint(p - q)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
stable/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
bsd-3-clause
[ "%matplotlib inline", "Representational Similarity Analysis\nRepresentational Similarity Analysis is used to perform summary statistics\non supervised classifications where the number of classes is relatively high.\nIt consists in characterizing the structure of the confusion matrix to infer\nthe similarity between brain responses and serves as a proxy for characterizing\nthe space of mental representations\n:footcite:Shepard1980,LaaksoCottrell2000,KriegeskorteEtAl2008.\nIn this example, we perform RSA on responses to 24 object images (among\na list of 92 images). Subjects were presented with images of human, animal\nand inanimate objects :footcite:CichyEtAl2014. Here we use the 24 unique\nimages of faces and body parts.\n<div class=\"alert alert-info\"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not\n build the images below.</p></div>", "# Authors: Jean-Remi King <jeanremi.king@gmail.com>\n# Jaakko Leppakangas <jaeilepp@student.jyu.fi>\n# Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\nimport numpy as np\nfrom pandas import read_csv\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.manifold import MDS\n\nimport mne\nfrom mne.io import read_raw_fif, concatenate_raws\nfrom mne.datasets import visual_92_categories\n\n\nprint(__doc__)\n\ndata_path = visual_92_categories.data_path()\n\n# Define stimulus - trigger mapping\nfname = op.join(data_path, 'visual_stimuli.csv')\nconds = read_csv(fname)\nprint(conds.head(5))", "Let's restrict the number of conditions to speed up computation", "max_trigger = 24\nconds = conds[:max_trigger] # take only the first 24 rows", "Define stimulus - trigger mapping", "conditions = []\nfor c in conds.values:\n cond_tags = list(c[:2])\n cond_tags += [('not-' if i == 0 else '') + conds.columns[k]\n for k, i in enumerate(c[2:], 2)]\n conditions.append('/'.join(map(str, cond_tags)))\nprint(conditions[:10])", "Let's make the event_id dictionary", "event_id = dict(zip(conditions, conds.trigger + 1))\nevent_id['0/human bodypart/human/not-face/animal/natural']", "Read MEG data", "n_runs = 4 # 4 for full data (use less to speed up computations)\nfname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')\nraws = [read_raw_fif(fname % block, verbose='error')\n for block in range(n_runs)] # ignore filename warnings\nraw = concatenate_raws(raws)\n\nevents = mne.find_events(raw, min_duration=.002)\n\nevents = events[events[:, 2] <= max_trigger]", "Epoch data", "picks = mne.pick_types(raw.info, meg=True)\nepochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,\n picks=picks, tmin=-.1, tmax=.500, preload=True)", "Let's plot some conditions", "epochs['face'].average().plot()\nepochs['not-face'].average().plot()", "Representational Similarity Analysis (RSA) is a neuroimaging-specific\nappelation to refer to statistics applied to the confusion matrix\nalso referred to as the representational dissimilarity matrices (RDM).\nCompared to the approach from Cichy et al. we'll use a multiclass\nclassifier (Multinomial Logistic Regression) while the paper uses\nall pairwise binary classification task to make the RDM.\nAlso we use here the ROC-AUC as performance metric while the\npaper uses accuracy. 
Finally here for the sake of time we use\nRSA on a window of data while Cichy et al. did it for all time\ninstants separately.", "# Classify using the average signal in the window 50ms to 300ms\n# to focus the classifier on the time interval with best SNR.\nclf = make_pipeline(StandardScaler(),\n LogisticRegression(C=1, solver='liblinear',\n multi_class='auto'))\nX = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)\ny = epochs.events[:, 2]\n\nclasses = set(y)\ncv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)\n\n# Compute confusion matrix for each cross-validation fold\ny_pred = np.zeros((len(y), len(classes)))\nfor train, test in cv.split(X, y):\n # Fit\n clf.fit(X[train], y[train])\n # Probabilistic prediction (necessary for ROC-AUC scoring metric)\n y_pred[test] = clf.predict_proba(X[test])", "Compute confusion matrix using ROC-AUC", "confusion = np.zeros((len(classes), len(classes)))\nfor ii, train_class in enumerate(classes):\n for jj in range(ii, len(classes)):\n confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])\n confusion[jj, ii] = confusion[ii, jj]", "Plot", "labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6\nfig, ax = plt.subplots(1)\nim = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])\nax.set_yticks(range(len(classes)))\nax.set_yticklabels(labels)\nax.set_xticks(range(len(classes)))\nax.set_xticklabels(labels, rotation=40, ha='left')\nax.axhline(11.5, color='k')\nax.axvline(11.5, color='k')\nplt.colorbar(im)\nplt.tight_layout()\nplt.show()", "Confusion matrix related to mental representations have been historically\nsummarized with dimensionality reduction using multi-dimensional scaling [1].\nSee how the face samples cluster together.", "fig, ax = plt.subplots(1)\nmds = MDS(2, random_state=0, dissimilarity='precomputed')\nchance = 0.5\nsummary = mds.fit_transform(chance - confusion)\ncmap = plt.get_cmap('rainbow')\ncolors = ['r', 'b']\nnames = list(conds['condition'].values)\nfor color, name in zip(colors, set(names)):\n sel = np.where([this_name == name for this_name in names])[0]\n size = 500 if name == 'human face' else 100\n ax.scatter(summary[sel, 0], summary[sel, 1], s=size,\n facecolors=color, label=name, edgecolors='k')\nax.axis('off')\nax.legend(loc='lower right', scatterpoints=1, ncol=2)\nplt.tight_layout()\nplt.show()", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Faris137/MachineLearningArabic
Pima Project/.ipynb_checkpoints/Pima Project 2.0-checkpoint.ipynb
mit
[ "محاولة لإستكشاف افضل الطرق لتحسين اداء نموذج بيما", "import numpy as np\nimport pandas as pd\nimport seaborn as sb\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn import preprocessing\n\n%matplotlib inline\n\ndf = pd.read_csv('diabetes.csv')\ndf.head(20) #لاستعراض ال20 السجلات الاولى من إطار البيانات", "هذه الدالة تعطينا توصيف كامل للبيانات و تكشف لنا في ما إذا كانت هناك قيم مفقودة", "df.info()", "سيبورن مكتبة جميلة للرسوميات سهلة في الكتابة لكن مفيدة جداً في المعلومات التي ممكن ان نقراءها عبر الهيستوقرام\nفائدها ممكن ان تكون في\n1- تلخيص توزيع البينات في رسوميات\n2- فهم او الإطلاع على القيم الفريدة\n3- تحمل الرسوميات معنى اعمق من الكلمات", "sb.countplot(x='Outcome',data=df, palette='hls')\n\nsb.countplot(x='Pregnancies',data=df, palette='hls')\n\nsb.countplot(x='Glucose',data=df, palette='hls')\n\nsb.heatmap(df.corr())\n\nsb.pairplot(df, hue=\"Outcome\")\n\nfrom scipy.stats import kendalltau\nsb.jointplot(df['Pregnancies'], df['Glucose'], kind=\"hex\", stat_func=kendalltau, color=\"#4CB391\")\n\nimport matplotlib.pyplot as plt\ng = sb.FacetGrid(df, row=\"Pregnancies\", col=\"Outcome\", margin_titles=True)\nbins = np.linspace(0, 50, 13)\ng.map(plt.hist, \"BMI\", color=\"steelblue\", bins=bins, lw=0)\n\nsb.pairplot(df, vars=[\"Pregnancies\", \"BMI\"])", "تجربة استخدام تقييس و تدريج الخواص لتحسين اداء النموذج", "columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']\nlabels = df['Outcome'].values\nfeatures = df[list(columns)].values\n\nX_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30)\n\nclf = RandomForestClassifier(n_estimators=1)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' اداء النموذج في عينة التدريب بدقة ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' اداء النموذج في عينة الفحص بدقة ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Training classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_test, ypredict)", "تجربة تحسين اداء النموذج باستخدام طريقة\nstandard scaler", "#scaling\nscaler = StandardScaler()\n\n# Fit only on training data\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\n# apply same transformation to test data\nX_test = scaler.transform(X_test)\n\nclf = RandomForestClassifier(n_estimators=1)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' اداء النموذج في عينة التدريب بدقة ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' اداء النموذج في عينة الفحص بدقة ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Training classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_test, ypredict)", "تجربة تحسين اداء النموذج بطريقة\nmin-max scaler", "columns = 
['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']\nlabels = df['Outcome'].values\nfeatures = df[list(columns)].values\n\nX_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30)\n\nscaler = preprocessing.MinMaxScaler()\n\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\n# apply same transformation to test data\nX_test = scaler.transform(X_test)\n\nclf = RandomForestClassifier(n_estimators=1)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' اداء النموذج في عينة التدريب بدقة ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' اداء النموذج في عينة الفحص بدقة ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Training classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_test, ypredict)\n\ncolumns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']\nlabels = df['Outcome'].values\nfeatures = df[list(columns)].values\n\nX_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30)\n\nclf = RandomForestClassifier(n_estimators=5)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' اداء النموذج في عينة التدريب بدقة ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' اداء النموذج في عينة الفحص بدقة ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Testing classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_test, ypredict)" ]
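The three scaling experiments above refit the forest on a single random split, so the accuracy differences are partly noise. Below is a minimal sketch, not part of the original notebook, of how the same comparison could be made with a scikit-learn Pipeline and cross-validation, so that each scaler is fit only on the training folds. It is written for Python 3 and assumes the same `diabetes.csv` file used above.

```python
# Sketch: compare no scaling, StandardScaler and MinMaxScaler with 5-fold cross-validation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler

df = pd.read_csv('diabetes.csv')
X = df.drop('Outcome', axis=1).values
y = df['Outcome'].values

scalers = {'none': None, 'standard': StandardScaler(), 'minmax': MinMaxScaler()}
for name, scaler in scalers.items():
    # build the pipeline with or without a scaling step
    steps = ([('scale', scaler)] if scaler is not None else [])
    steps.append(('forest', RandomForestClassifier(n_estimators=100, random_state=0)))
    scores = cross_val_score(Pipeline(steps), X, y, cv=5)
    print('%8s: mean CV accuracy = %.3f' % (name, scores.mean()))
```

Since random forests split on thresholds rather than distances, the scalers should make little difference here, which this comparison makes explicit.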
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Artificial-Intelligence
Python/1 Search/Bidirectional-BFS.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as f:\n css = f.read()\nHTML(css)", "Bidirectional Breadth First Search\nThe function search takes three arguments to solve a search problem:\n- start is the start state of the search problem,\n- goal is the goal state, and\n- next_states is a function with signature $\\texttt{next_states}:Q \\rightarrow 2^Q$, where $Q$ is the set of states.\n For every state $s \\in Q$, $\\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.\nIf successful, search returns a path from start to goal that is a solution of the search problem\n$$ \\langle Q, \\texttt{next_states}, \\texttt{start}, \\texttt{goal} \\rangle. $$\nThe implementation of search uses bidirectional breadth first search to find a path from start to goal.", "def search(start, goal, next_states): \n FrontierA = { start }\n ParentA = { start: start}\n FrontierB = { goal }\n ParentB = { goal: goal} \n while FrontierA and FrontierB:\n NewFrontier = set()\n for s in FrontierA:\n for ns in next_states(s):\n if ns not in ParentA:\n NewFrontier |= { ns }\n ParentA[ns] = s\n if ns in ParentB:\n return combinePaths(ns, ParentA, ParentB)\n FrontierA = NewFrontier\n NewFrontier = set()\n for s in FrontierB:\n for ns in next_states(s):\n if ns not in ParentB:\n NewFrontier |= { ns }\n ParentB[ns] = s\n if ns in ParentA:\n return combinePaths(ns, ParentA, ParentB)\n FrontierB = NewFrontier", "Given a state and a parent dictionary Parent, the function path_to returns a path leading to the given state.", "def path_to(state, Parent):\n p = Parent[state]\n if p == state:\n return [state]\n return path_to(p, Parent) + [state]", "The function combinePath takes three parameters:\n- state is a state that has been reached in bidirectional BFS from both start and goal.\n- ParentA is the parent dictionary that has been build when searching from start.\n If $\\texttt{ParentA}[s_1] = s_2$ holds, then either $s_1 = s_2 = \\texttt{start}$ or \n $s_1 \\in \\texttt{next_states}(s_2)$.\n- ParentB is the parent dictionary that has been build when searching from goal.\n If $\\texttt{ParentB}[s_1] = s_2$ holds, then either $s_1 = s_2 = \\texttt{goal}$ or\n $s_1 \\in \\texttt{next_states}(s_2)$.\nThe function returns a path from start to goal.", "def combinePaths(state, ParentA, ParentB):\n Path1 = path_to(state, ParentA)\n Path2 = path_to(state, ParentB)\n return Path1[:-1] + Path2[::-1] # Path2 is reversed\n\n%run Sliding-Puzzle.ipynb\n\n%load_ext memory_profiler\n\n%%time\n%memit Path = search(start, goal, next_states)\nprint(len(Path)-1)\n\nanimation(Path)\n\n%%time\nPath = search(start2, goal2, next_states)\nprint(len(Path)-1)\n\nanimation(Path)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
camillescott/boink
notebooks/Untitled.ipynb
mit
[ "%pylab inline\n%config InlineBackend.figure_format = 'retina'\n\nimport pandas as pd\nimport seaborn as sns\n\nk35_df = pd.read_csv('mmetsp/Asterionellopsis_glacialis/k35/decision_nodes.csv', skipinitialspace=True)\nk27_df = pd.read_csv('mmetsp/Asterionellopsis_glacialis/k27/decision_nodes.csv', skipinitialspace=True)\n\nk35_df.head()", "We can find the number of decision nodes in the dBG by counting unique hashes...", "k27_df.hash.nunique(), k35_df.hash.nunique()", "We'll make a new column for total degree, for convenience.", "k35_df['degree'] = k35_df['l_degree'] + k35_df['r_degree']\nk27_df['degree'] = k27_df['l_degree'] + k27_df['r_degree']", "Let's start with the overal degree distribution during the entire construction process.", "figsize(18,10)\nfig, ax_mat = subplots(ncols=3, nrows=2)\ntop = ax_mat[0]\nsns.distplot(k35_df.degree, kde=False, ax=top[0], bins=8)\nsns.distplot(k35_df.l_degree, kde=False, ax=top[1], bins=5)\nsns.distplot(k35_df.r_degree, kde=False, ax=top[2], bins=5)\n\nbottom = ax_mat[1]\nsns.distplot(k27_df.degree, kde=False, ax=bottom[0], bins=8)\nsns.distplot(k27_df.l_degree, kde=False, ax=bottom[1], bins=5)\nsns.distplot(k27_df.r_degree, kde=False, ax=bottom[2], bins=5)", "So most decision nodes in this dataset have degree 3. Note that a few have degree 2; these forks without handles.", "figsize(12,8)\nsns.distplot(k35_df.position, kde=False, label='K=35', bins=15)\nsns.distplot(k27_df.position, kde=False, label='K=27', bins=15)\nlegend()\n\nmelted_df = k35_df.melt(id_vars=['hash', 'position'], value_vars=['l_degree', 'r_degree'], )\nmelted_df.head()\n\nfigsize(18,8)\nsns.violinplot('position', 'value', 'variable', melted_df)\n\nk35_dnodes_per_read = k35_df.groupby('read_n').count().\\\n reset_index()[['read_n', 'hash']].rename({'hash': 'n_dnodes'}, axis='columns')\n \nk27_dnodes_per_read = k27_df.groupby('read_n').count().\\\n reset_index()[['read_n', 'hash']].rename({'hash': 'n_dnodes'}, axis='columns')\n\nax = k35_dnodes_per_read.rolling(1000, min_periods=10, on='read_n').mean().plot(x='read_n', \n y='n_dnodes', \n label='k = 35')\n\nax = k27_dnodes_per_read.rolling(1000, min_periods=10, on='read_n').mean().plot(x='read_n', \n y='n_dnodes', \n label='k = 27', \n ax=ax)\n\nax.xaxis.set_major_formatter(mpl.ticker.StrMethodFormatter(\"{x:,}\"))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
a301-teaching/a301_code
notebooks/layertops_demo_solution.ipynb
mit
[ "Reading the Lidar LayerTops variable\n1) First, grab the lidar file", "import glob\nimport h5py\nimport numpy as np\nimport glob\nfrom a301lib.cloudsat import get_geo\nfrom a301utils.a301_readfile import download\nfrom matplotlib import pyplot as plt\nlidar_name='2006303212128_02702_CS_2B-GEOPROF-LIDAR_GRANULE_P2_R04_E02.h5'\ndownload(lidar_name)\n", "2) use glob.glob wildcards to read the filename\nfrom disk without having to get the name exactly right", "the_file=glob.glob('2006*LIDAR*h5')[0]\nprint(the_file)", "3) Use hdfview to figure out the path to the LayerTop variable, and to get the\n factor and offset needed to turn the 16 bit interger data into science values as\n described on the cloudsat web page", "with h5py.File(the_file,'r') as in_file:\n layer_tops=in_file['2B-GEOPROF-LIDAR']['Data Fields']['LayerTop'][...]\n factor=in_file['2B-GEOPROF-LIDAR']['Data Fields']['LayerTop'].attrs['factor']\n offset=in_file['2B-GEOPROF-LIDAR']['Data Fields']['LayerTop'].attrs['offset']\n units=in_file['2B-GEOPROF-LIDAR']['Data Fields']['LayerTop'].attrs['units']\n missing = in_file['2B-GEOPROF-LIDAR']['Data Fields']['LayerTop'].attrs['missing']\n #\n # the next line turns the numpy bytes (b'm') object returned by h5py into a unicode string\n # for printing\n #\n units=units.decode('utf-8')\nprint('missing value, factor, offset and units: {} {} {} {}'\n .format(missing,factor,offset,units))", "4) get the time values (in seconds) for the orbit and convert to decimal minutes for plotting", "lat,lon,date_times,prof_times,dem_elevation=get_geo(the_file)\n\ntime_minutes = prof_times/60.", "5) turn the missing values (-99) into np.nan (\"not a number\") so they will be dropped from our plot. Count\nany cloud height below 10 meters as noise and assign it as missing.", "hit = layer_tops < 10 \n#http://cswww.cira.colostate.edu/dataSpecs.php \nlayer_tops = (layer_tops - offset)/factor\nlayer_tops[hit] = np.nan\n\n", "6) go through each of the layers and find the mean height (not counting the nan values)\nand the number of timesteps where there was cloud detected", "num_times,num_layers = layer_tops.shape\ntext=\"\"\"\n layer number: {0:}\n cloud fraction is {1:4.2f}%\n mean height is {2:6.1f} meters\n \"\"\"\n\nfor the_layer in range(num_layers):\n missing = np.isnan(layer_tops[:,the_layer])\n present = np.logical_not(missing)\n num_present = np.sum(present)\n percent_present=num_present/num_times*100.\n mean_height = np.nanmean(layer_tops[:,the_layer])\n print(text.format(the_layer,percent_present,mean_height))", "7) What fraction of the time is there a layer 2 cloud above layer 1 cloud?", "layer1= layer_tops[:,0]\nlayer2= layer_tops[:,1]\nlayer1_count=0\noverlap_count=0\nfor index,layer1_height in enumerate(layer1):\n if not np.isnan(layer1_height):\n layer1_count += 1\n if not np.isnan(layer2[index]):\n overlap_count += 1\noverlap_freq=100.*overlap_count/layer1_count\nprint(('when there was cloud in layer 1, there was also cloud in layer2 {:6.2f} '\n 'percent of the time')\n .format(overlap_freq))", "solution: plot the layers with a legend", "%matplotlib inline\nfrom IPython.display import display\nplt.close('all')\nmeters2km=1.e3\nseconds2mins=60.\ndef plot_layers(time_secs,layer_tops,ax):\n ntimes,nlayers=layer_tops.shape\n time_mins=time_secs/seconds2mins\n for i in range(nlayers):\n label='layer {}'.format(i)\n ax.plot(time_mins,layer_tops[:,i]/meters2km,label=label)\n ax.legend()\n return ax\n\nfig, ax = 
plt.subplots(1,1,figsize=(12,4))\nax=plot_layers(prof_times,layer_tops,ax)\nax.set(title='Cloudsat Orbit -- lidar/radar cloud tops',\n xlabel='time (minutes)',ylabel='height (km)');\n#\n# expand to view the 60-70 minute time interval\n#\nhit=np.logical_and(prof_times > 60*60,prof_times < 70*60)\nfig, ax = plt.subplots(1,1,figsize=(12,3))\nax=plot_layers(prof_times[hit],layer_tops[hit,:],ax)\nax.set(title='Cloudsat Orbit -- lidar/radar cloud tops -- zoomed',\n xlabel='time (minutes)',ylabel='height (km)');\n" ]
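Step 7 above computes the overlap frequency with an explicit Python loop. An equivalent vectorized version, sketched here under the assumption that the `layer_tops` array from step 5 is available, combines the `isnan` masks directly and avoids the loop:

```python
import numpy as np

# Boolean masks: True where a cloud layer was detected (i.e. the height is not nan)
layer1_present = ~np.isnan(layer_tops[:, 0])
layer2_present = ~np.isnan(layer_tops[:, 1])

layer1_count = np.sum(layer1_present)
overlap_count = np.sum(layer1_present & layer2_present)
print('overlap frequency: {:6.2f} %'.format(100. * overlap_count / layer1_count))
```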
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mohanprasath/Course-Work
coursera/python_for_data_science/4.3_Loading_Data_and_Viewing_Data.ipynb
gpl-3.0
[ "<a href=\"http://cocl.us/topNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png\" width = 750, align = \"center\"></a>\n<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 300, align = \"center\"></a>\n<h1 align=center><font size = 5>Introduction to Pandas Python</font></h1>\n\nTable of Contents\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<li><a href=\"#ref0\">About the Dataset</a></li>\n<li><a href=\"#ref1\">Importing Data</a></p></li>\n<li><a href=\"#ref2\">Viewing Data and Accessing Data </a></p></li>\n<br>\n<p></p>\nEstimated Time Needed: <strong>15 min</strong>\n</div>\n\n<hr>\n\n<a id=\"ref0\"></a>\n<h2 align=center>About the Dataset</h2>\n\nThe table has one row for each album and several columns\n\nartist - Name of the artist\nalbum - Name of the album\nreleased_year - Year the album was released\nlength_min_sec - Length of the album (hours,minutes,seconds)\ngenre - Genre of the album\nmusic_recording_sales_millions - Music recording sales (millions in USD) on SONG://DATABASE\nclaimed_sales_millions - Album's claimed sales (millions in USD) on SONG://DATABASE\ndate_released - Date on which the album was released\nsoundtrack - Indicates if the album is the movie soundtrack (Y) or (N)\nrating_of_friends - Indicates the rating from your friends from 1 to 10\n<br>\n\nYou can see the dataset here:\n<font size=\"1\">\n<table font-size:xx-small style=\"width:25%\">\n <tr>\n <th>Artist</th>\n <th>Album</th> \n <th>Released</th>\n <th>Length</th>\n <th>Genre</th> \n <th>Music recording sales (millions)</th>\n <th>Claimed sales (millions)</th>\n <th>Released</th>\n <th>Soundtrack</th>\n <th>Rating (friends)</th>\n </tr>\n <tr>\n <td>Michael Jackson</td>\n <td>Thriller</td> \n <td>1982</td>\n <td>00:42:19</td>\n <td>Pop, rock, R&B</td>\n <td>46</td>\n <td>65</td>\n <td>30-Nov-82</td>\n <td></td>\n <td>10.0</td>\n </tr>\n <tr>\n <td>AC/DC</td>\n <td>Back in Black</td> \n <td>1980</td>\n <td>00:42:11</td>\n <td>Hard rock</td>\n <td>26.1</td>\n <td>50</td>\n <td>25-Jul-80</td>\n <td></td>\n <td>8.5</td>\n </tr>\n <tr>\n <td>Pink Floyd</td>\n <td>The Dark Side of the Moon</td> \n <td>1973</td>\n <td>00:42:49</td>\n <td>Progressive rock</td>\n <td>24.2</td>\n <td>45</td>\n <td>01-Mar-73</td>\n <td></td>\n <td>9.5</td>\n </tr>\n <tr>\n <td>Whitney Houston</td>\n <td>The Bodyguard</td> \n <td>1992</td>\n <td>00:57:44</td>\n <td>Soundtrack/R&B, soul, pop</td>\n <td>26.1</td>\n <td>50</td>\n <td>25-Jul-80</td>\n <td>Y</td>\n <td>7.0</td>\n </tr>\n <tr>\n <td>Meat Loaf</td>\n <td>Bat Out of Hell</td> \n <td>1977</td>\n <td>00:46:33</td>\n <td>Hard rock, progressive rock</td>\n <td>20.6</td>\n <td>43</td>\n <td>21-Oct-77</td>\n <td></td>\n <td>7.0</td>\n </tr>\n <tr>\n <td>Eagles</td>\n <td>Their Greatest Hits (1971-1975)</td> \n <td>1976</td>\n <td>00:43:08</td>\n <td>Rock, soft rock, folk rock</td>\n <td>32.2</td>\n <td>42</td>\n <td>17-Feb-76</td>\n <td></td>\n <td>9.5</td>\n </tr>\n <tr>\n <td>Bee Gees</td>\n <td>Saturday Night Fever</td> \n <td>1977</td>\n <td>1:15:54</td>\n <td>Disco</td>\n <td>20.6</td>\n <td>40</td>\n <td>15-Nov-77</td>\n <td>Y</td>\n <td>9.0</td>\n </tr>\n <tr>\n <td>Fleetwood Mac</td>\n <td>Rumours</td> \n <td>1977</td>\n <td>00:40:01</td>\n <td>Soft rock</td>\n <td>27.9</td>\n <td>40</td>\n <td>04-Feb-77</td>\n <td></td>\n <td>9.5</td>\n </tr>\n</table>\n</font>\n<a id=\"ref1\"></a>\n<h2 
align=center> Importing Data </h2>\n\nWe can import the libraries or dependency like Pandas using the following command:", "import pandas as pd\n", "After the import command, we now have access to a large number of pre-built classes and functions. This assumes the library is installed; in our lab environment all the necessary libraries are installed. One way pandas allows you to work with data is a dataframe. Let's go through the process to go from a comma separated values (.csv ) file to a dataframe. This variable csv_path stores the path of the .csv ,that is used as an argument to the read_csv function. The result is stored in the object df, this is a common short form used for a variable referring to a Pandas dataframe.", "csv_path='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/top_selling_albums.csv'\ndf = pd.read_csv(csv_path)", "<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/object_storage_corsera\"><img src = \"https://ibm.box.com/shared/static/6qbj1fin8ro0q61lrnmx2ncm84tzpo3c.png\" width = 750, align = \"center\"></a>\n\nWe can use the method **head()** to examine the first five rows of a dataframe:", "df.head()", "The process for loading an excel file is similar, we use the path of the excel file and the function read_excel. The result is a data frame as before:", "#dependency needed to install file \n!pip install xlrd\n\nxlsx_path='https://ibm.box.com/shared/static/mzd4exo31la6m7neva2w45dstxfg5s86.xlsx'\n\ndf = pd.read_excel(xlsx_path)\ndf.head()", "We can access the column \"Length\" and assign it a new dataframe 'x':", "x=df[['Length']]\nx", "The process is shown in the figure: \n<img src = \"https://ibm.box.com/shared/static/bz800py5ui4w0kpb0k09lq3k5oegop5v.png\" width = 750, align = \"center\"></a>\n<a id=\"ref2\"></a>\n<h2 align=center> Viewing Data and Accessing Data </h2>\n\nYou can also assign the value to a series, you can think of a Pandas series as a 1-D dataframe. Just use one bracket:", "x=df['Length']\nx", "You can also assign different columns, for example, we can assign the column 'Artist':", "x=df[['Artist']]\nx", "Assign the variable 'q' to the dataframe that is made up of the column 'Rating':", "q = df[['Rating']]\nq", "<div align=\"right\">\n<a href=\"#q1\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q1\" class=\"collapse\">\n```\nq=df[['Rating']]\nq\n```\n</div>\n\nYou can do the same thing for multiple columns; we just put the dataframe name, in this case, df, and the name of the multiple column headers enclosed in double brackets. 
The result is a new dataframe comprised of the specified columns:", "y=df[['Artist','Length','Genre']]\ny", "The process is shown in the figure:\n<img src = \"https://ibm.box.com/shared/static/dh9duk3ucuhmmmbixa6ugac6g384m5sq.png\" width = 1100, align = \"center\"></a>", "print(df[['Album','Released','Length']])\nq = df[['Album','Released']]\nq", "Assign the variable 'q' to the dataframe that is made up of the column 'Released' and 'Artist':\n<div align=\"right\">\n<a href=\"#q2\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q2\" class=\"collapse\">\n```\nq=df[['Released','Artist']]\nq\n```\n</div>\n\nOne way to access unique elements is the 'ix' method, where you can access the 1st row and first column as follows :", "#**ix** will be deprecated, use **iloc** for integer indexes \n#df.ix[0,0]\ndf.iloc[0,0]", "You can access the 2nd row and first column as follows:", "#**ix** will be deprecated, use **iloc** for integer indexes\n#df.ix[1,0]\ndf.iloc[1,0]", "You can access the 1st row 3rd column as follows:", "#**ix** will be deprecated, use **iloc** for integer indexes\n#df.ix[0,2]\ndf.iloc[0,2]", "Access the 2nd row 3rd column:", "df.iloc[1, 2]", "<div align=\"right\">\n<a href=\"#q3\" class=\"btn btn-default\" data-toggle=\"collapse\">Click here for the solution</a>\n\n</div>\n<div id=\"q3\" class=\"collapse\">\n```\ndf.ix[1,2]\n\nor df.iloc[0,2]\n```\n</div>\n\nYou can access the column using the name as well, the following are the same as above:", "#**ix** will be deprecated, use **loc** for label-location based indexer\n#df.ix[0,'Artist']\ndf.loc[0,'Artist']\n\n#**ix** will be deprecated, use **loc** for label-location based indexer\n#df.ix[1,'Artist']\ndf.loc[1,'Artist']\n\n#**ix** will be deprecated, use **loc** for label-location based indexer\n#df.ix[0,'Released']\ndf.loc[0,'Released']\n\n#**ix** will be deprecated, use **loc** for label-location based indexer\n#df.ix[1,'Released']\ndf.loc[1,'Released']\n\ndf.ix[1,2]", "You can perform slicing using both the index and the name of the column:", "#**ix** will be deprecated, use **loc** for label-location based indexer\n#df.ix[0:2, 0:3]\ndf.iloc[0:2, 0:3]\n\n\n\n#**ix** will be deprecated, use **loc** for label-location based indexer\n#df.ix[0:2, 'Artist':'Released']\ndf.loc[0:2, 'Artist':'Released']", "<a href=\"http://cocl.us/bottemNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png\" width = 750, align = \"center\"></a>\nAbout the Authors:\nJoseph Santarcangelo has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.\nCopyright &copy; 2017 cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License." ]
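The difference between label-based (`loc`) and position-based (`iloc`) indexing becomes clearer on a dataframe whose index is not simply 0, 1, 2, …. The following short illustration uses a small made-up frame, not the album dataset from the lab:

```python
import pandas as pd

# hypothetical frame with non-default integer labels
toy = pd.DataFrame({'Artist': ['a', 'b', 'c'], 'Rating': [10.0, 8.5, 9.5]},
                   index=[10, 20, 30])

print(toy.iloc[0, 1])         # 10.0 -> first row by *position*
print(toy.loc[10, 'Rating'])  # 10.0 -> row with *label* 10

# loc slices include the end label, iloc slices exclude the end position
print(toy.loc[10:20])   # two rows (labels 10 and 20)
print(toy.iloc[0:2])    # two rows (positions 0 and 1)
```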
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dafrie/lstm-load-forecasting
notebooks/5_calendar_weather.ipynb
mit
[ "Model Category 5: Calendar + Weather\nThe fourth model category will calendar + weather features to create a forecast for the electricity load.\nModel category specific configuration\nThese parameters are model category specific", "# Model category name used throughout the subsequent analysis\nmodel_cat_id = \"05\"\n\n# Which features from the dataset should be loaded:\n# ['all', 'actual', 'entsoe', 'weather_t', 'weather_i', 'holiday', 'weekday', 'hour', 'month']\nfeatures = ['actual', 'calendar', 'weather']\n\n# LSTM Layer configuration\n# ========================\n# Stateful True or false\nlayer_conf = [ True, True, True ]\n# Number of neurons per layer\ncells = [[ 5, 10, 20, 30, 50, 75, 100, 125, 150], [0, 10, 20, 50], [0, 10, 15, 20]]\n# Regularization per layer\ndropout = [0, 0.1, 0.2]\n# Size of how many samples are used for one forward/backward pass\nbatch_size = [8]\n# In a sense this is the output neuron dimension, or how many timesteps the neuron should output. Currently not implemented, defaults to 1.\ntimesteps = [1]", "Module imports", "import os\nimport sys\nimport math\nimport itertools\nimport datetime as dt\nimport pytz\nimport time as t\nimport numpy as np\nimport pandas as pd\nfrom pandas import read_csv\nfrom pandas import datetime\nfrom numpy import newaxis\n\nimport matplotlib as mpl\n\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\nfrom statsmodels.tsa import stattools\nfrom tabulate import tabulate\n\nimport math\nimport keras as keras\nfrom keras import backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Activation, Dense, Dropout, LSTM\nfrom keras.callbacks import TensorBoard\nfrom keras.utils import np_utils\nfrom keras.models import load_model\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import mean_squared_error, mean_absolute_error\n\nfrom IPython.display import HTML\nfrom IPython.display import display\n%matplotlib notebook\nmpl.rcParams['figure.figsize'] = (9,5)\n\n# Import custom module functions\nmodule_path = os.path.abspath(os.path.join('../'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n\nfrom lstm_load_forecasting import data, lstm", "Overall configuration\nThese parameters are later used, but shouldn't have to change between different model categories (model 1-5)", "# Directory with dataset\npath = os.path.join(os.path.abspath(''), '../data/fulldataset.csv')\n\n# Splitdate for train and test data. As the TBATS and ARIMA benchmark needs 2 full cycle of all seasonality, needs to be after jan 01. \nloc_tz = pytz.timezone('Europe/Zurich')\nsplit_date = loc_tz.localize(dt.datetime(2017,2,1,0,0,0,0))\n\n# Validation split percentage\nvalidation_split = 0.2\n# How many epochs in total\nepochs = 30\n# Set verbosity level. 
0 for only per model, 1 for progress bar...\nverbose = 0\n\n# Dataframe containing the relevant data from training of all models\nresults = pd.DataFrame(columns=['model_name', 'config', 'dropout',\n 'train_loss', 'train_rmse', 'train_mae', 'train_mape', \n 'valid_loss', 'valid_rmse', 'valid_mae', 'valid_mape', \n 'test_rmse', 'test_mae', 'test_mape',\n 'epochs', 'batch_train', 'input_shape',\n 'total_time', 'time_step', 'splits'\n ])\n# Early stopping parameters\nearly_stopping = True\nmin_delta = 0.006\npatience = 2", "Preparation and model generation\nNecessary preliminary steps and then the generation of all possible models based on the settings at the top of this notebook.", "# Generate output folders and files\nres_dir = '../results/notebook_' + model_cat_id + '/'\nplot_dir = '../plots/notebook_' + model_cat_id + '/'\nmodel_dir = '../models/notebook_' + model_cat_id + '/'\nos.makedirs(res_dir, exist_ok=True)\nos.makedirs(model_dir, exist_ok=True)\noutput_table = res_dir + model_cat_id + '_results_' + t.strftime(\"%Y%m%d\") + '.csv'\ntest_output_table = res_dir + model_cat_id + '_test_results' + t.strftime(\"%Y%m%d\") + '.csv'\n\n# Generate model combinations\nmodels = []\nmodels = lstm.generate_combinations(\n model_name=model_cat_id + '_', layer_conf=layer_conf, cells=cells, dropout=dropout, \n batch_size=batch_size, timesteps=[1])", "Loading the data:", "# Load data and prepare for standardization\ndf = data.load_dataset(path=path, modules=features)\ndf_scaled = df.copy()\ndf_scaled = df_scaled.dropna()\n\n# Get all float type columns and standardize them\nfloats = [key for key in dict(df_scaled.dtypes) if dict(df_scaled.dtypes)[key] in ['float64']]\nscaler = StandardScaler()\nscaled_columns = scaler.fit_transform(df_scaled[floats])\ndf_scaled[floats] = scaled_columns\n\n# Split in train and test dataset\ndf_train = df_scaled.loc[(df_scaled.index < split_date )].copy()\ndf_test = df_scaled.loc[df_scaled.index >= split_date].copy()\n\n# Split in features and label data\ny_train = df_train['actual'].copy()\nX_train = df_train.drop('actual', 1).copy()\ny_test = df_test['actual'].copy()\nX_test = df_test.drop('actual', 1).copy()", "Running through all generated models\nNote: Depending on the above settings, this can take very long!", "start_time = t.time()\nfor idx, m in enumerate(models):\n stopper = t.time()\n print('========================= Model {}/{} ========================='.format(idx+1, len(models)))\n print(tabulate([['Starting with model', m['name']], ['Starting time', datetime.fromtimestamp(stopper)]],\n tablefmt=\"jira\", numalign=\"right\", floatfmt=\".3f\"))\n try:\n # Creating the Keras Model\n model = lstm.create_model(layers=m['layers'], sample_size=X_train.shape[0], batch_size=m['batch_size'], \n timesteps=m['timesteps'], features=X_train.shape[1])\n # Training...\n history = lstm.train_model(model=model, mode='fit', y=y_train, X=X_train, \n batch_size=m['batch_size'], timesteps=m['timesteps'], epochs=epochs, \n rearrange=False, validation_split=validation_split, verbose=verbose, \n early_stopping=early_stopping, min_delta=min_delta, patience=patience)\n\n # Write results\n min_loss = np.min(history.history['val_loss'])\n min_idx = np.argmin(history.history['val_loss'])\n min_epoch = min_idx + 1\n \n if verbose > 0:\n print('______________________________________________________________________')\n print(tabulate([['Minimum validation loss at epoch', min_epoch, 'Time: {}'.format(t.time()-stopper)],\n ['Training loss & MAE', history.history['loss'][min_idx], 
history.history['mean_absolute_error'][min_idx] ], \n ['Validation loss & mae', history.history['val_loss'][min_idx], history.history['val_mean_absolute_error'][min_idx] ],\n ], tablefmt=\"jira\", numalign=\"right\", floatfmt=\".3f\"))\n print('______________________________________________________________________')\n \n \n result = [{'model_name': m['name'], 'config': m, 'train_loss': history.history['loss'][min_idx], 'train_rmse': 0,\n 'train_mae': history.history['mean_absolute_error'][min_idx], 'train_mape': 0,\n 'valid_loss': history.history['val_loss'][min_idx], 'valid_rmse': 0, \n 'valid_mae': history.history['val_mean_absolute_error'][min_idx],'valid_mape': 0, \n 'test_rmse': 0, 'test_mae': 0, 'test_mape': 0, 'epochs': '{}/{}'.format(min_epoch, epochs), 'batch_train':m['batch_size'],\n 'input_shape':(X_train.shape[0], timesteps, X_train.shape[1]), 'total_time':t.time()-stopper, \n 'time_step':0, 'splits':str(split_date), 'dropout': m['layers'][0]['dropout']\n }]\n results = results.append(result, ignore_index=True)\n \n # Saving the model and weights\n model.save(model_dir + m['name'] + '.h5')\n \n # Write results to csv\n results.to_csv(output_table, sep=';')\n \n K.clear_session()\n import tensorflow as tf\n tf.reset_default_graph()\n \n # Shouldn't catch all errors, but for now...\n except BaseException as e:\n print('=============== ERROR {}/{} ============='.format(idx+1, len(models)))\n print(tabulate([['Model:', m['name']], ['Config:', m]], tablefmt=\"jira\", numalign=\"right\", floatfmt=\".3f\"))\n print('Error: {}'.format(e))\n result = [{'model_name': m['name'], 'config': m, 'train_loss': str(e)}]\n results = results.append(result, ignore_index=True)\n results.to_csv(output_table,sep=';')\n continue\n ", "Model selection based on the validation MAE\nSelect the top 5 models based on the Mean Absolute Error in the validation data:\nhttp://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error", "# Number of the selected top models \nselection = 5\n# If run in the same instance not necessary. 
If run on the same day, then just use output_table\nresults_fn = res_dir + model_cat_id + '_results_' + '20170616' + '.csv'\n\nresults_csv = pd.read_csv(results_fn, delimiter=';')\ntop_models = results_csv.nsmallest(selection, 'valid_mae')", "Evaluate top 5 models", "# Init test results table\ntest_results = pd.DataFrame(columns=['Model name', 'Mean absolute error', 'Mean squared error'])\n\n# Init empty predictions\npredictions = {}\n\n# Loop through models\nfor index, row in top_models.iterrows():\n filename = model_dir + row['model_name'] + '.h5'\n model = load_model(filename)\n batch_size = int(row['batch_train'])\n \n # Calculate scores\n loss, mae = lstm.evaluate_model(model=model, X=X_test, y=y_test, batch_size=batch_size, timesteps=1, verbose=verbose)\n \n # Store results\n result = [{'Model name': row['model_name'], \n 'Mean squared error': loss, 'Mean absolute error': mae\n }]\n test_results = test_results.append(result, ignore_index=True)\n \n # Generate predictions\n model.reset_states()\n model_predictions = lstm.get_predictions(model=model, X=X_test, batch_size=batch_size, timesteps=timesteps[0], verbose=verbose)\n \n # Save predictions\n predictions[row['model_name']] = model_predictions\n \n K.clear_session()\n import tensorflow as tf\n tf.reset_default_graph()\n \n\ntest_results = test_results.sort_values('Mean absolute error', ascending=True)\ntest_results = test_results.set_index(['Model name'])\n\nif not os.path.isfile(test_output_table):\n test_results.to_csv(test_output_table, sep=';')\nelse: # else it exists so append without writing the header\n test_results.to_csv(test_output_table,mode = 'a',header=False, sep=';')\n\nprint('Test dataset performance of the best {} (out of {} tested models):'.format(min(selection, len(models)), len(models)))\nprint(tabulate(test_results, headers='keys', tablefmt=\"grid\", numalign=\"right\", floatfmt=\".3f\"))" ]
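The hyper-parameter grid is produced by the project's own `lstm.generate_combinations` helper, whose internals are not shown in this notebook. Conceptually it enumerates the cross product of the per-layer neuron counts, dropout rates and batch sizes. A simplified, hypothetical sketch of that idea (not the actual helper) using `itertools.product`:

```python
import itertools

# Hypothetical illustration of a grid like the one defined at the top of the notebook.
cells_l1 = [5, 10, 20]
cells_l2 = [0, 10]
dropouts = [0, 0.1, 0.2]
batch_sizes = [8]

grid = []
for c1, c2, do, bs in itertools.product(cells_l1, cells_l2, dropouts, batch_sizes):
    layers = [{'cells': c1, 'dropout': do, 'stateful': True}]
    if c2 > 0:  # add a second LSTM layer only if its size is non-zero
        layers.append({'cells': c2, 'dropout': do, 'stateful': True})
    grid.append({'name': '05_{}'.format(len(grid) + 1), 'layers': layers,
                 'batch_size': bs, 'timesteps': 1})

print('{} model configurations'.format(len(grid)))
```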
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
krispingal/shelterAnimalOutcomes
notebook/shelterAnimalOutcomes-Visualization.ipynb
mit
[ "Shelter Animal Outcomes 1\nData visualization", "%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = pd.read_csv('train.csv')\n\ndf.head()\n\ndf['AnimalType'].unique()\n\ndf.groupby(['AnimalType']).get_group('Cat').shape[0]\n\ndf.groupby(['AnimalType']).get_group('Dog').shape[0]\n\ndf['OutcomeType'].unique()\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))\nsns.countplot(x=\"OutcomeType\", data=df, ax=ax1)\nsns.countplot(x=\"AnimalType\", hue=\"OutcomeType\", data=df, ax=ax2)", "Overall it seems not many animals died of natural causes. \nDoesn't seem like cats have nine lives unfortunately.\nProbably because of their shitty attitude and general evilness they are likely to get transferred.\nDogs have tricked their masters with their sad puppy face to get returned more. Also they are told to be more loyal.", "sns.countplot(x=\"SexuponOutcome\", hue=\"OutcomeType\", data=df)", "Overall sex likely does not play a big role in outcome, but spayed/neutered population is bigger they are more likely to get adopted", "dfCat = df.groupby(['AnimalType']).get_group('Cat')\ndfDog = df.groupby(['AnimalType']).get_group('Dog')\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))\nsns.countplot(x=\"SexuponOutcome\", hue=\"OutcomeType\", data=dfCat, ax=ax1)\nsns.countplot(x=\"SexuponOutcome\", hue=\"OutcomeType\", data=dfDog, ax=ax2)", "Cats and dogs have different probability distributions for outcome", "dfCat['Color'].describe()\n\ndfDog['Color'].describe()", "As expected there are too many colors that makes it difficult to properly visualize without discarding a majority of colors. Thinking a bit, it makes more sense to have a combination of both color and breed to make a pet to be more appealing/attractive.", "df['AgeuponOutcome'].unique()", "As expected there are animals over a wide spectrum of ages. Age should play a major role deciding the outcome.", "df['NameIsPresent'] = df['Name'].isnull()\n\nsns.countplot(x=\"NameIsPresent\", hue=\"OutcomeType\", data=df)", "Animals that didn't have names or their names were lost, as is evident from the graph above, that their outcome probability distribution would be very different. Named animals seem to be more popular for adoption. Named animals could mean that they had previous owners and possible stories.", "df[df['NameIsPresent'] == True].shape[0]\n\ndf[df['NameIsPresent'] == False].shape[0]", "We can see that out of the animals present in training set more than 2/3 had names and roughly about half of them got adopted.", "df['OutcomeSubtype'].unique()\n\nsns.set_context(\"poster\")\nsns.countplot(x=\"OutcomeSubtype\", hue=\"AnimalType\", data=df)\n\ndf['DateTime']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cmorgan/toyplot
docs/interaction.ipynb
bsd-3-clause
[ ".. _interaction:\nInteraction\nA key goal for the Toyplot team is to explore interactive features for plots, making them truly useful and embeddable, so that they become a ubiquitous part of every data graphic user's experience. The following examples of interaction are just scratching the surface of what we have planned for Toyplot:\nTitles\nMost of the visualization types in Toyplot accept a title parameter, allowing you to specify per-series or per-datum titles for a figure. With Toyplot's preferred embeddable HTML output, those titles are displayed via a popup when hovering over the data. For example, the following figure has a global title \"Employee Schedule\", which you should see as a popup when you hover the mouse over any of the bars:", "import numpy\nnumpy.random.seed(1234)\nstart = numpy.random.normal(loc=8, scale=1, size=20)\nend = numpy.random.normal(loc=16, scale=1, size=20)\nboundaries = numpy.column_stack((start, end))\ntitle = \"Employee Schedule\"\n\nimport toyplot\ntoyplot.bars(boundaries, baseline=None, title=title, width=500, height=300);", "If your plot includes multiple series, you can assign a per-series title instead. Hover the mouse over both series in the following plot to see \"Morning Schedule\" and \"Afternoon Schedule\":", "lunch = numpy.random.normal(loc=12, scale=0.5, size=20)\nboundaries = numpy.column_stack((start, lunch, end))\ntitle = [\"Morning Schedule\", \"Afternoon Schedule\"]\n\ntoyplot.bars(boundaries, baseline=None, title=title, width=500, height=300);", "Finally, you can assign a title for every datum:", "title = numpy.column_stack((\n [\"Employee %s Morning\" % i for i in range(20)],\n [\"Employee %s Afternoon\" % i for i in range(20)]\n ))\n\ntoyplot.bars(boundaries, baseline=None, title=title, width=500, height=300);", "Of course, the title attribute works with all types of visualizations.\nCoordinates\nAs you mouse over the above figures, you should also see the interactive mouse coordinates in the upper-right-hand corner of the axes. These coordinates show the domain values where the crosshair mouse cursor is located.\nIf you wish to disable the mouse coordinates altogether, you can do so using the axes:", "canvas, axes, mark = toyplot.bars(boundaries, baseline=None, title=title, width=500, height=300)\naxes.coordinates.show = False", "Now when you mouse over the axes, the coordinates are no longer there.\nData Export\nIf you right-click the mouse over any of the above plots, a small popup menu will appear, giving you the option to \"Save as .csv\". If you choose that option, the raw data from the plot will be extracted in CSV format and you can save it.\nNote that different browsers, browser versions, and platforms will behave differently when extracting the file:\n\nSafari on OSX will open the file in a separate tab, which you can save to disk using File &gt; Save As.\nChrome on OSX will immediately open a file dialog, prompting you to save the file.\nFirefox on OSX will prompt you to open the file with Microsoft Excel (if installed), or save it to disk.\n\nNote that, on the browsers that support it, the default filename for the saved data is toyplot.csv. You can override this default on a per-data-table basis by specifying the filename when you create your figure. 
For example, when exporting data from the following figure (again, for browsers that support setting a default filename), the filename will default to employee-schedules.csv:", "canvas, axes, mark = toyplot.bars(boundaries, baseline=None, filename=\"employee-schedules\", title=title, width=500, height=300)", "Note that the filename you specify should not include a file extension, as the file extension is added for you (and other file formats may become available in the future)." ]
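The examples above all use bar marks. Since the text states that the title attribute works with all types of visualizations, a short sketch of the same per-datum titles on a scatterplot may be useful; it assumes Toyplot's scatterplot convenience function accepts the title argument in the same way as toyplot.bars above:

```python
import numpy
import toyplot

x = numpy.linspace(0, 1, 20)
y = x ** 2
titles = ["Sample %s" % i for i in range(len(x))]

# Hover over the markers to see the per-datum titles.
canvas, axes, mark = toyplot.scatterplot(x, y, title=titles, width=500, height=300)
```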
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gaufung/Data_Analytics_Learning_Note
DesignPattern/ProxyPattern.ipynb
mit
[ "代理模式(Proxy Pattern)\n代理模式是一种使用频率非常高的模式,在多个著名的开源软件和当前多个著名的互联网产品后台程序中都有所应用。下面我们用一个抽象化的简单例子,来说明代理模式。", "info_struct=dict()\ninfo_struct['addr']=10000\ninfo_struct['content']=''\nclass Server(object):\n content=''\n def recv(self, info):\n pass\n def send(self, info):\n pass\n def show(self):\n pass\nclass infoServer(Server):\n def recv(self,info):\n self.content=info\n return 'recv OK!'\n def send(self, info):\n pass\n def show(self):\n print('SHOW:%s'%self.content)", "infoServer有接收和发送的功能,发送功能由于暂时用不到,保留。另外新加一个接口show,用来展示服务器接收的内容。接收的数据格式必须如info_struct所示,服务器仅接受info_struct的content字段。那么,如何给这个服务器设置一个白名单,使得只有白名单里的地址可以访问服务器呢?修改Server结构是个方法,但这显然不符合软件设计原则中的单一职责原则。在此基础之上,使用代理,是个不错的方法。代理配置如下:", "class serverProxy(object):\n pass\nclass infoServerProxy(serverProxy):\n server=''\n def __init__(self,server):\n self.server=server\n def recv(self,info):\n return self.server.recv(info)\n def show(self):\n self.server.show()\nclass WhiteInfoServerProxy(infoServerProxy):\n whilte_list=[]\n def recv(self,info):\n try:\n assert type(info)==dict\n except:\n return 'info structure is not correct'\n addr = info.get('addr',0)\n if not addr in self.whilte_list:\n return 'Your address is not the white list'\n else:\n content=info.get('content','')\n return self.server.recv(content)\n def addWhite(self, addr):\n self.whilte_list.append(addr)\n def rmvWhite(self, addr):\n self.whilte_list.remove(addr)\n def clearWhite(self):\n self.whilte_list=[]\n\ninfo_struct=dict()\ninfo_struct['addr']=10010\ninfo_struct['content']='Hello World!'\ninfo_server = infoServer()\ninfo_server_proxy = WhiteInfoServerProxy(info_server)\nprint(info_server_proxy.recv(info_struct))\ninfo_server_proxy.show()\ninfo_server_proxy.addWhite(10010)\nprint(info_server_proxy.recv(info_struct))\ninfo_server_proxy.show()", "Advantages\n\n职责清晰:非常符合单一职责原则,主题对象实现真实业务逻辑,而非本职责的事务,交由代理完成;\n扩展性强:面对主题对象可能会有的改变,代理模式在不改变对外接口的情况下,可以实现最大程度的扩展;\n保证主题对象的处理逻辑:代理可以通过检查参数的方式,保证主题对象的处理逻辑输入在理想范围内。\n\nUsages\n\n针对某特定对象进行功能和增强性扩展。如IP防火墙、远程访问代理等技术的应用;\n对主题对象进行保护。如大流量代理,安全代理等;\n减轻主题对象负载。如权限代理等。" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
fweik/espresso
doc/tutorials/lennard_jones/lennard_jones.ipynb
gpl-3.0
[ "Introductory Tutorial: Lennard-Jones Liquid\nTable of Contents\n\nIntroduction\nBackground\nThe Lennard-Jones Potential\nUnits\nFirst steps\nOverview of a simulation script\nSystem setup\nPlacing and accessing particles\nSetting up non-bonded interactions\nEnergy minimization\nChoosing the thermodynamic ensemble, thermostat\nIntegrating equations of motion and taking manual measurements\nAutomated data collection\n\n\nFurther Exercises\nBinary Lennard-Jones Liquid\n\n\nReferences\n\nIntroduction\nWelcome to the basic ESPResSo tutorial!\nIn this tutorial, you will learn, how to use the ESPResSo package for your \nresearch. We will cover the basics of ESPResSo, i.e., how to set up and modify a physical system, how to run a simulation, and how to load, save and analyze the produced simulation data.\nMore advanced features and algorithms available in the ESPResSo package are \ndescribed in additional tutorials.\nBackground\nToday's research on Soft Condensed Matter has brought the needs for having a flexible, extensible, reliable, and efficient (parallel) molecular simulation package. For this reason ESPResSo (Extensible Simulation Package for Research on Soft Matter Systems) <a href='#[1]'>[1]</a> has been developed at the Max Planck Institute for Polymer Research, Mainz, and at the Institute for Computational Physics at the University of Stuttgart in the group of Prof. Dr. Christian Holm <a href='#[2]'>[2,3]</a>. The ESPResSo package is probably the most flexible and extensible simulation package in the market. It is specifically developed for coarse-grained molecular dynamics (MD) simulation of polyelectrolytes but is not necessarily limited to this. For example, it could also be used to simulate granular media. ESPResSo has been nominated for the Heinz-Billing-Preis for Scientific Computing in 2003 <a href='#[4]'>[4]</a>.\nThe Lennard-Jones Potential\nA pair of neutral atoms or molecules is subject to two distinct forces in the limit of large separation and small separation: an attractive force at long ranges (van der Waals force, or dispersion force) and a repulsive force at short ranges (the result of overlapping electron orbitals, referred to as Pauli repulsion from the Pauli exclusion principle). The Lennard-Jones potential (also referred to as the L-J potential, 6-12 potential or, less commonly, 12-6 potential) is a simple mathematical model that represents this behavior. It was proposed in 1924 by John Lennard-Jones. The L-J potential is of the form\n\\begin{equation}\nV(r) = 4\\epsilon \\left[ \\left( \\dfrac{\\sigma}{r} \\right)^{12} - \\left( \\dfrac{\\sigma}{r} \\right)^{6} \\right]\n\\end{equation}\nwhere $\\epsilon$ is the depth of the potential well and $\\sigma$ is the (finite) distance at which the inter-particle potential is zero and $r$ is the distance between the particles. The $\\left(\\frac{1}{r}\\right)^{12}$ term describes repulsion and the $(\\frac{1}{r})^{6}$ term describes attraction. The Lennard-Jones potential is an\napproximation. 
The form of the repulsion term has no theoretical justification; the repulsion force should depend exponentially on the distance, but the repulsion term of the L-J formula is more convenient due to the ease and efficiency of computing $r^{12}$ as the square of $r^6$.\nIn practice, the L-J potential is typically cutoff beyond a specified distance $r_{c}$ and the potential at the cutoff distance is zero.\n<figure>\n<img src='figures/lennard-jones-potential.png' alt='missing' style='width: 600px;'/>\n<center>\n<figcaption>Figure 1: Lennard-Jones potential</figcaption>\n</center>\n</figure>\n\nUnits\nNovice users must understand that ESPResSo has no fixed unit system. The unit \nsystem is set by the user. Conventionally, reduced units are employed, in other \nwords LJ units.\nFirst steps\nWhat is ESPResSo? It is an extensible, efficient Molecular Dynamics package specially powerful on simulating charged systems. In depth information about the package can be found in the relevant sources <a href='#[1]'>[1,4,2,3]</a>.\nESPResSo consists of two components. The simulation engine is written in C++ for the sake of computational efficiency. The steering or control level is interfaced to the kernel via an interpreter of the Python scripting languages.\nThe kernel performs all computationally demanding tasks. Before all, integration of Newton's equations of motion, including calculation of energies and forces. It also takes care of internal organization of data, storing the data about particles, communication between different processors or cells of the cell-system.\nThe scripting interface (Python) is used to setup the system (particles, boundary conditions, interactions etc.), control the simulation, run analysis, and store and load results. The user has at hand the full reliability and functionality of the scripting language. For instance, it is possible to use the SciPy package for analysis and PyPlot for plotting.\nWith a certain overhead in efficiency, it can also be bused to reject/accept new configurations in combined MD/MC schemes. In principle, any parameter which is accessible from the scripting level can be changed at any moment of runtime. In this way methods like thermodynamic integration become readily accessible.\nNote: This tutorial assumes that you already have a working ESPResSo\ninstallation on your system. If this is not the case, please consult the first chapters of the user's guide for installation instructions.\nOverview of a simulation script\nTypically, a simulation script consists of the following parts:\n\nSystem setup (box geometry, thermodynamic ensemble, integrator parameters)\nPlacing the particles\nSetup of interactions between particles\nWarm up (bringing the system into a state suitable for measurements)\nIntegration loop (propagate the system in time and record measurements)\n\nSystem setup\nThe functionality of ESPResSo for python is provided via a python module called espressomd. 
At the beginning of the simulation script, it has to be imported.", "import espressomd\nrequired_features = [\"LENNARD_JONES\"]\nespressomd.assert_features(required_features)\nfrom espressomd import observables, accumulators, analyze\n\n# Importing other relevant python modules\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import optimize\nnp.random.seed(42)\nplt.rcParams.update({'font.size': 22})\n\n# System parameters\nN_PART = 200\nDENSITY = 0.75\n\nBOX_L = np.power(N_PART / DENSITY, 1.0 / 3.0) * np.ones(3)", "The next step would be to create an instance of the System class. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist.\nExercise:\n\nCreate an instance of an espresso system and store it in a variable called <tt>system</tt>;\n use <tt>BOX_L</tt> as box length.\n\nSee ESPResSo documentation and module documentation.\npython\nsystem = espressomd.System(box_l=BOX_L)", "# Test solution of Exercise 1\nassert isinstance(system, espressomd.System)", "It can be used to store and manipulate the crucial system parameters like the time step and the size of the simulation box (<tt>time_step</tt>, and <tt>box_l</tt>).", "SKIN = 0.4\nTIME_STEP = 0.01\n\nsystem.time_step = TIME_STEP\nsystem.cell_system.skin = SKIN", "Placing and accessing particles\nParticles in the simulation can be added and accessed via the <tt>part</tt> property of the System class. Individual particles are referred to by an integer id, e.g., <tt>system.part[0]</tt>. If <tt>id</tt> is unspecified, an unused particle id is automatically assigned. It is also possible to use common python iterators and slicing operations to add or access several particles at once.\nParticles can be grouped into several types, so that, e.g., a binary fluid can be simulated. Particle types are identified by integer ids, which are set via the particles' <tt>type</tt> attribute. If it is not specified, zero is implied.\n<!-- **Exercise:** -->\n\n\nCreate <tt>N_PART</tt> particles at random positions.\n\nUse system.part.add().\nEither write a loop or use an (<tt>N_PART</tt> x 3) array for positions.\n Use <tt>np.random.random()</tt> to generate random numbers.\npython\nsystem.part.add(type=[0] * N_PART, pos=np.random.random((N_PART, 3)) * system.box_l)", "# Test that now we have indeed N_PART particles in the system\nassert len(system.part) == N_PART", "The particle properties can be accessed using standard numpy slicing syntax:", "# Access position of a single particle\nprint(\"position of particle with id 0:\", system.part[0].pos)\n\n# Iterate over the first five particles for the purpose of demonstration.\n# For accessing all particles, use a slice: system.part[:]\nfor i in range(5):\n print(\"id\", i, \"position:\", system.part[i].pos)\n print(\"id\", i, \"velocity:\", system.part[i].v)\n\n# Obtain all particle positions\ncur_pos = system.part[:].pos", "Many objects in ESPResSo have a string representation, and thus can be displayed via python's <tt>print</tt> function:", "print(system.part[0])", "Setting up non-bonded interactions\nNon-bonded interactions act between all particles of a given combination of particle types. In this tutorial, we use the Lennard-Jones non-bonded interaction. First we define the LJ parameters", "# use LJ units: EPS=SIG=1\nLJ_EPS = 1.0\nLJ_SIG = 1.0\nLJ_CUT = 2.5 * LJ_SIG", "In a periodic system it is in general not straight forward to calculate all non-bonded interactions. 
Due to the periodicity and to speed up calculations usually a cut-off $r_{cut}$ for infinite-range potentials like Lennard-Jones is applied, such that $V(r>r_c) = 0$. The potential can be shifted to zero at the cutoff value to ensure continuity using the <tt>shift='auto'</tt> option of espressomd.interactions.LennardJonesInteraction.\nTo allow for comparison with the fundamental work on MD simulations of LJ systems <a href='#[6]'>[6]</a> we don't shift the potential to zero at the cutoff and instead correct for the long-range error $V_\\mathrm{lr}$ later. \nTo avoid spurious self-interactions of particles with their periodic images one usually forces that the shortest box length is at least twice the cutoff distance:", "assert (BOX_L - 2 * SKIN > LJ_CUT).all()", "Exercise:\n\nSetup a Lennard-Jones interaction with $\\epsilon=$<tt>LJ_EPS</tt> and $\\sigma=$<tt>LJ_SIG</tt> that is cut at $r_\\mathrm{c}=$<tt>LJ_CUT</tt>$\\times\\sigma$ and not shifted.\n\nHint: \n* Have a look at the docs\npython\nsystem.non_bonded_inter[0, 0].lennard_jones.set_params(\n epsilon=LJ_EPS, sigma=LJ_SIG, cutoff=LJ_CUT, shift=0)\nEnergy minimization\nIn many cases, including this tutorial, particles are initially placed randomly in the simulation box. It is therefore possible that particles overlap, resulting in a huge repulsive force between them. In this case, integrating the equations of motion would not be numerically stable. Hence, it is necessary to remove this overlap. This is typically done by performing a steepest descent minimization of the potential energy until a maximal force criterion is reached.\nNote:\nMaking sure a system is well equilibrated highly depends on the system's details.\nIn most cases a relative convergence criterion on the forces and/or energies works well but you might have to make sure that the total force is smaller than a threshold value <tt>f_max</tt> at the end of the minimization.\nDepending on the simulated system other strategies like simulations with small time step or capped forces might be necessary.", "F_TOL = 1e-2\nDAMPING = 30\nMAX_STEPS = 10000\nMAX_DISPLACEMENT = 0.01 * LJ_SIG\nEM_STEP = 10", "Exercise:\n\nUse <tt>espressomd.integrate.set_steepest_descent</tt> to relax the initial configuration.\n Use a maximal displacement of <tt>MAX_DISPLACEMENT</tt>.\n A damping constant <tt>gamma = DAMPING</tt> usually is a good choice.\nUse the relative change of the systems maximal force as a convergence criterion.\n See the documentation <tt>espressomd.particle_data</tt> module on how to obtain the forces.\n The steepest descent has converged if the relative force change < <tt>F_TOL</tt>\nBreak the minimization loop after a maximal number of <tt>MAX_STEPS</tt> steps or if convergence is achieved.\n Check for convergence every <tt>EMSTEP</tt> steps.\n\nHint: To obtain the initial forces one has to initialize the integrator using <tt>integ_steps=0</tt>, i.e. call <tt>system.integrator.run(0)</tt> before the force array can be accessed.\n```python\nSet up steepest descent integration\nsystem.integrator.set_steepest_descent(f_max=0, # use a relative convergence criterion only\n gamma=DAMPING,\n max_displacement=MAX_DISPLACEMENT)\nInitialize integrator to obtain initial forces\nsystem.integrator.run(0)\nold_force = np.max(np.linalg.norm(system.part[:].f, axis=1))\nwhile system.time / system.time_step < MAX_STEPS:\n system.integrator.run(EM_STEP)\n force = np.max(np.linalg.norm(system.part[:].f, axis=1))\n rel_force = np.abs((force - old_force) / old_force)\n print(f'rel. 
force change:{rel_force:.2e}')\n if rel_force < F_TOL:\n break\n old_force = force\n```", "# check that after the exercise the total energy is negative\nassert system.analysis.energy()['total'] < 0\n# reset clock\nsystem.time = 0.", "Choosing the thermodynamic ensemble, thermostat\nSimulations can be carried out in different thermodynamic ensembles such as NVE (particle __N__umber, __V__olume, __E__nergy), NVT (particle __N__umber, __V__olume, __T__emperature) or NpT-isotropic (particle __N__umber, __p__ressure, __T__emperature).\nIn this tutorial, we use the Langevin thermostat.", "# Parameters for the Langevin thermostat\n# reduced temperature T* = k_B T / LJ_EPS\nTEMPERATURE = 0.827 # value from Tab. 1 in [6]\nGAMMA = 1.0", "Exercise:\n\nUse <tt>system.integrator.set_vv()</tt> to use a Velocity Verlet integration scheme and\n <tt>system.thermostat.set_langevin()</tt> to turn on the Langevin thermostat.\n\nSet the temperature to TEMPERATURE and damping coefficient to GAMMA.\nFor details see the online documentation.\npython\nsystem.integrator.set_vv()\nsystem.thermostat.set_langevin(kT=TEMPERATURE, gamma=GAMMA, seed=42)\nIntegrating equations of motion and taking manual measurements\nNow, we integrate the equations of motion and take measurements of relevant quantities.", "# Integration parameters\nSTEPS_PER_SAMPLE = 20\nN_SAMPLES = 1000\n\ntimes = np.zeros(N_SAMPLES)\ne_total = np.zeros_like(times)\ne_kin = np.zeros_like(times)\nT_inst = np.zeros_like(times)", "Exercise:\n\nIntegrate the system and measure the total and kinetic energy. Take N_SAMPLES measurements every STEPS_PER_SAMPLE integration steps.\nCalculate the total and kinetic energies using the analysis method <tt>system.analysis.energy()</tt>.\nUse the containers times, e_total and e_kin from the cell above to store the time series.\nFrom the simulation results, calculate the instantaneous temperature $T_{\\mathrm{inst}} = 2/3 \\times E_\\mathrm{kin}$/<tt>N_PART</tt>.\n\npython\nfor i in range(N_SAMPLES):\n times[i] = system.time\n energy = system.analysis.energy()\n e_total[i] = energy['total']\n e_kin[i] = energy['kinetic']\n system.integrator.run(STEPS_PER_SAMPLE)\nT_inst = 2. / 3. * e_kin / N_PART", "plt.figure(figsize=(10, 6))\nplt.plot(times, T_inst, label='$T_{\\\\mathrm{inst}}$')\nplt.plot(times, [TEMPERATURE] * len(times), label='$T$ set by thermostat')\nplt.legend()\nplt.xlabel('t')\nplt.ylabel('T')\nplt.show()", "Since the ensemble average $\\langle E_\\text{kin}\\rangle=3/2 N k_B T$ is related to the temperature,\nwe may compute the actual temperature of the system via $k_B T= 2/(3N) \\langle E_\\text{kin}\\rangle$.\nThe temperature is fixed and does not fluctuate in the NVT ensemble! The instantaneous temperature is\ncalculated via $2/(3N) E_\\text{kin}$ (without ensemble averaging), but it is not the temperature of the system.\nIn the first simulation run we picked STEPS_PER_SAMPLE arbitrary. 
To ensure proper statistics at short total run time, we will now calculate the steps_per_uncorrelated_sample which we will use for the rest of the tutorial.", "# Use only the data after the equilibration period in the beginning\nwarmup_time = 15\ne_total = e_total[times > warmup_time]\ne_kin = e_kin[times > warmup_time]\ntimes = times[times > warmup_time]\ntimes -= times[0]\n\ndef autocor(x):\n x = np.asarray(x)\n mean = x.mean()\n var = np.var(x)\n xp = x - mean\n corr = analyze.autocorrelation(xp) / var\n return corr\n\n\ndef fit_correlation_time(data, ts):\n data = np.asarray(data)\n data /= data[0]\n\n def fitfn(t, t_corr): return np.exp(-t / t_corr)\n popt, pcov = optimize.curve_fit(fitfn, ts, data)\n return popt[0]", "Exercise\n* Calculate the autocorrelation of the total energy (store in e_total_autocor). Calculate the correlation time (corr_time) and estimate a number of steps_per_uncorrelated_sample for uncorrelated sampling\nHint\n* we consider samples to be uncorrelated if the time between them is larger than 3 times the correlation time\npython\ne_total_autocor = autocor(e_total)\ncorr_time = fit_correlation_time(e_total_autocor[:100], times[:100])\nsteps_per_uncorrelated_sample = int(np.ceil(3 * corr_time / system.time_step))", "print(steps_per_uncorrelated_sample)", "We plot the autocorrelation function and the fit to visually confirm a roughly exponential decay", "plt.figure(figsize=(10, 6))\nplt.plot(times, e_total_autocor, label='data')\nplt.plot(times, np.exp(-times / corr_time), label='exponential fit')\nplt.plot(2 * [steps_per_uncorrelated_sample * system.time_step],\n [min(e_total_autocor), 1], label='sampling interval')\nplt.xlim(left=-2, right=50)\nplt.ylim(top=1.2, bottom=-0.15)\nplt.legend()\nplt.xlabel('t')\nplt.ylabel('total energy autocorrelation')\nplt.show()", "For statistical analysis, we only want uncorrelated samples.\nExercise:\n* Calculate the mean and standard error of the mean potential energy per particle from uncorrelated samples (define mean_pot_energy and SEM_pot_energy).\nHint\n* you know how many steps are between samples in e_total and how many steps are between uncorrelated samples. So you have to figure out how many samples to skip\npython\nuncorrelated_sample_step = int(np.ceil(steps_per_uncorrelated_sample / STEPS_PER_SAMPLE))\npot_energies = (e_total - e_kin)[::uncorrelated_sample_step] / N_PART\nmean_pot_energy = np.mean(pot_energies)\nSEM_pot_energy = np.std(pot_energies) / np.sqrt(len(pot_energies))", "print(f'mean potential energy = {mean_pot_energy:.2f} +- {SEM_pot_energy:.2f}')", "For comparison to literature values we need to account for the error made by the LJ truncation.\nFor an isotropic system one can assume that the density is homogeneous behind the cutoff, which allows to calculate the so-called long-range corrections to the energy and pressure,\n$$V_\\mathrm{lr} = 1/2 \\rho \\int_{r_\\mathrm{c}}^\\infty 4 \\pi r^2 g(r) V(r) \\,\\mathrm{d}r,$$\nUsing that the radial distribution function $g(r)=1$ for $r>r_\\mathrm{cut}$ one obtains\n$$V_\\mathrm{lr} = -\\frac{8}{3}\\pi \\rho \\varepsilon \\sigma^3 \\left[\\frac{1}{3} (\\sigma/r_{cut})^9 - (\\sigma/r_{cut})^3 \\right].$$\nSimilarly, a long-range contribution to the pressure can be derived <a href='#[5]'>[5]</a>.", "tail_energy_per_particle = 8. / 3. * np.pi * DENSITY * LJ_EPS * \\\n LJ_SIG**3 * (1. / 3. 
* (LJ_SIG / LJ_CUT)**9 - (LJ_SIG / LJ_CUT)**3)\nmean_pot_energy_corrected = mean_pot_energy + tail_energy_per_particle\nprint(f'corrected mean potential energy = {mean_pot_energy_corrected:.2f}')", "This value differs quite strongly from the uncorrected one but agrees well with the literature value $U^i = -5.38$ given in Table 1 of Ref. <a href='#[6]'>[6]</a>.\nAutomated data collection\nAs we have seen, it is easy to manually extract information from an ESPResSo simulation, but it can get quite tedious. Therefore, ESPResSo provides a number of data collection tools to make life easier (and less error-prone). We will now demonstrate those with the calculation of the radial distribution function.\nObservables extract properties from the particles and calculate some quantity with it, e.g. the center of mass, the total energy or a histogram.\nAccumulators allow the calculation of observables while running the system and then doing further analysis. Examples are a simple time series or more advanced methods like correlators.\nFor our purposes we need an accumulator that calculates the average of the RDF samples.", "# Parameters for the radial distribution function\nN_BINS = 100\nR_MIN = 0.0\nR_MAX = system.box_l[0] / 2.0", "Exercise\n* Instantiate a RDF observable\n* Instantiate a MeanVarianceCalculator accumulator to track the RDF over time. Samples should be taken every steps_per_uncorrelated_sample steps.\n* Add the accumulator to the auto_update_accumulators of the system for automatic updates\npython\nrdf_obs = observables.RDF(ids1=system.part[:].id, min_r=R_MIN, max_r=R_MAX, n_r_bins=N_BINS)\nrdf_acc = accumulators.MeanVarianceCalculator(obs=rdf_obs, delta_N=steps_per_uncorrelated_sample)\nsystem.auto_update_accumulators.add(rdf_acc)\nNow we don't need an elaborate integration loop anymore, instead the RDFs are calculated and accumulated automatically", "system.integrator.run(N_SAMPLES * steps_per_uncorrelated_sample)", "Exercise\n* Get the mean RDF (define rdf) from the accmulator\n* Get the histogram bin centers (define rs) from the observable\npython\nrdf = rdf_acc.mean()\nrs = rdf_obs.bin_centers()", "fig, ax = plt.subplots(figsize=(10, 7))\nax.plot(rs, rdf, label='simulated')\nplt.legend()\nplt.xlabel('r')\nplt.ylabel('RDF')", "We now plot the experimental radial distribution.\nEmpirical radial distribution functions have been determined for pure fluids <a href='#[7]'>[7]</a>, mixtures <a href='#[8]'>[8]</a> and confined fluids <a href='#[9]'>[9]</a>. 
We will compare our distribution $g(r)$ to the theoretical distribution $g(r^, \\rho^, T^*)$ of a pure fluid <a href='#[7]'>[7]</a>.", "# comparison to literature\ndef calc_literature_rdf(rs, temperature, density, LJ_eps, LJ_sig):\n T_star = temperature / LJ_eps\n rho_star = density * LJ_sig**3\n\n # expression of the factors Pi from Equations 2-8 with coefficients qi from Table 1\n # expression for a,g\n def P(q1, q2, q3, q4, q5, q6, q7, q8, q9): return \\\n q1 + q2 * np.exp(-q3 * T_star) + q4 * np.exp(-q5 * T_star) + q6 / rho_star + q7 / rho_star**2 \\\n + q8 * np.exp(-q3 * T_star) / rho_star**3 + q9 * \\\n np.exp(-q5 * T_star) / rho_star**4\n a = P(9.24792, -2.64281, 0.133386, -1.35932, 1.25338,\n 0.45602, -0.326422, 0.045708, -0.0287681)\n g = P(0.663161, -0.243089, 1.24749, -2.059, 0.04261,\n 1.65041, -0.343652, -0.037698, 0.008899)\n\n # expression for c,k\n def P(q1, q2, q3, q4, q5, q6, q7, q8): return \\\n q1 + q2 * np.exp(-q3 * T_star) + q4 * rho_star + q5 * rho_star**2 + q6 * rho_star**3 \\\n + q7 * rho_star**4 + q8 * rho_star**5\n c = P(-0.0677912, -1.39505, 0.512625, 36.9323, -\n 36.8061, 21.7353, -7.76671, 1.36342)\n k = P(16.4821, -0.300612, 0.0937844, -61.744,\n 145.285, -168.087, 98.2181, -23.0583)\n\n # expression for b,h\n def P(q1, q2, q3): return q1 + q2 * np.exp(-q3 * rho_star)\n b = P(-8.33289, 2.1714, 1.00063)\n h = P(0.0325039, -1.28792, 2.5487)\n\n # expression for d,l\n def P(q1, q2, q3, q4): return q1 + q2 * \\\n np.exp(-q3 * rho_star) + q4 * rho_star\n d = P(-26.1615, 27.4846, 1.68124, 6.74296)\n l = P(-6.7293, -59.5002, 10.2466, -0.43596)\n\n # expression for s\n def P(q1, q2, q3, q4, q5, q6, q7, q8): return \\\n (q1 + q2 * rho_star + q3 / T_star + q4 / T_star**2 + q5 / T_star**3) \\\n / (q6 + q7 * rho_star + q8 * rho_star**2)\n s = P(1.25225, -1.0179, 0.358564, -0.18533,\n 0.0482119, 1.27592, -1.78785, 0.634741)\n\n # expression for m\n def P(q1, q2, q3, q4, q5, q6): return \\\n q1 + q2 * np.exp(-q3 * T_star) + q4 / T_star + \\\n q5 * rho_star + q6 * rho_star**2\n m = P(-5.668, -3.62671, 0.680654, 0.294481, 0.186395, -0.286954)\n\n # expression for n\n def P(q1, q2, q3): return q1 + q2 * np.exp(-q3 * T_star)\n n = P(6.01325, 3.84098, 0.60793)\n\n # fitted expression (=theoretical curve)\n # slightly more than 1 to smooth out the discontinuity in the range [1.0, 1.02]\n theo_rdf_cutoff = 1.02\n\n theo_rdf = 1 + 1 / rs**2 * (np.exp(-(a * rs + b)) * np.sin(c * rs + d)\n + np.exp(-(g * rs + h)) * np.cos(k * rs + l))\n theo_rdf[np.nonzero(rs <= theo_rdf_cutoff)] = \\\n s * np.exp(-(m * rs + n)**4)[np.nonzero(rs <= theo_rdf_cutoff)]\n return theo_rdf\n\ntheo_rdf = calc_literature_rdf(rs, TEMPERATURE, DENSITY, LJ_EPS, LJ_SIG)\n\nax.plot(rs, theo_rdf, label='literature')\nax.legend()\nfig", "Further Exercises\nBinary Lennard-Jones Liquid\nA two-component Lennard-Jones liquid can be simulated by placing particles of two types (0 and 1) into the system. Depending on the Lennard-Jones parameters, the two components either mix or separate.\n\nModify the code such that half of the particles are of <tt>type=1</tt>. Type 0 is implied for the remaining particles.\nSpecify Lennard-Jones interactions between type 0 particles with other type 0 particles, type 1 particles with other type 1 particles, and type 0 particles with type 1 particles (set parameters for <tt>system.non_bonded_inter[i,j].lennard_jones</tt> where <tt>{i,j}</tt> can be <tt>{0,0}</tt>, <tt>{1,1}</tt>, and <tt>{0,1}</tt>. 
Use the same Lennard-Jones parameters for interactions within a component, but use a different <tt>lj_cut_mixed</tt> parameter for the cutoff of the Lennard-Jones interaction between particles of type 0 and particles of type 1. Set this parameter to $2^{\\frac{1}{6}}\\sigma$ to get de-mixing or to $2.5\\sigma$ to get mixing between the two components.\nRecord the radial distribution functions separately for particles of type 0 around particles of type 0, type 1 around particles of type 1, and type 0 around particles of type 1. This can be done by changing the <tt>ids1</tt>/<tt>ids2</tt> arguments of the <tt>espressomd.observables.RDF</tt> command. You can record all three radial distribution functions in a single simulation. It is also possible to write them as several columns into a single file.\nPlot the radial distribution functions for all three combinations of particle types. The mixed case will differ significantly, depending on your choice of <tt>lj_cut_mixed</tt>. Explain these differences.\n\nReferences\n<a id='[1]'></a>[1] <a href=\"http://espressomd.org\">http://espressomd.org</a>\n<a id='[2]'></a>[2] HJ Limbach, A. Arnold, and B. Mann. ESPResSo: An extensible simulation package for research on soft matter systems. Computer Physics Communications, 174(9):704–727, 2006.\n<a id='[3]'></a>[3] A. Arnold, O. Lenz, S. Kesselheim, R. Weeber, F. Fahrenberger, D. Rohm, P. Kosovan, and C. Holm. ESPResSo 3.1 — molecular dynamics software for coarse-grained models. In M. Griebel and M. A. Schweitzer, editors, Meshfree Methods for Partial Differential Equations VI, volume 89 of Lecture Notes in Computational Science and Engineering, pages 1–23. Springer Berlin Heidelberg, 2013.\n<a id='[4]'></a>[4] A. Arnold, BA Mann, HJ Limbach, and C. Holm. ESPResSo–An Extensible Simulation Package for Research on Soft Matter Systems. Forschung und wissenschaftliches Rechnen, 63:43–59, 2003.\n<a id='[5]'></a>[5] W. P. Allen & D. J. Tildesley. Computer Simulation of Liquids. Oxford University Press, 2017.\n<a id='[6]'></a>[6] L. Verlet, “Computer ‘Experiments’ on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules, Phys. Rev., 159(1):98–103, 1967. <small>DOI:</small><a href=\"https://doi.org/10.1103/PhysRev.159.98\">10.1103/PhysRev.159.98</a> \n<a id='[7]'></a>[6] Morsali, Goharshadi, Mansoori, Abbaspour. An accurate expression for radial distribution function of the Lennard-Jones fluid. Chemical Physics, 310(1–3):11–15, 2005. <small>DOI:</small><a href=\"https://doi.org/10.1016/j.chemphys.2004.09.027\">10.1016/j.chemphys.2004.09.027</a>\n<a id='[8]'></a>[7] Matteoli. A simple expression for radial distribution functions of pure fluids and mixtures. The Journal of Chemical Physics, 103(11):4672, 1995. <small>DOI:</small><a href=\"https://doi.org/10.1063/1.470654\">10.1063/1.470654</a>\n<a id='[9]'></a>[8] Abbaspour, Akbarzadeha, Abroodia. A new and accurate expression for the radial distribution function of confined Lennard-Jones fluid in carbon nanotubes. RSC Advances, 5(116): 95781–95787, 2015. <small>DOI:</small><a href=\"https://doi.org/10.1039/C5RA16151G\">10.1039/C5RA16151G</a>" ]
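A possible starting point for the binary Lennard-Jones exercise above is sketched below, as a hint rather than a full solution. It assumes the system object and the constants LJ_EPS, LJ_SIG and LJ_CUT from this tutorial; choosing shift='auto' for the mixed pair is a deliberate choice that makes the $2^{\frac{1}{6}}\sigma$ cutoff purely repulsive, and the particular way of assigning types is arbitrary.

```python
import numpy as np

# Turn half of the particles into type 1 (the rest stay type 0)
ids = system.part[:].id
for pid in ids[1::2]:
    system.part[pid].type = 1

# Same Lennard-Jones parameters within each component ...
for i, j in [(0, 0), (1, 1)]:
    system.non_bonded_inter[i, j].lennard_jones.set_params(
        epsilon=LJ_EPS, sigma=LJ_SIG, cutoff=LJ_CUT, shift=0)

# ... but a shorter, purely repulsive cutoff between the two components -> de-mixing
lj_cut_mixed = 2**(1. / 6.) * LJ_SIG
system.non_bonded_inter[0, 1].lennard_jones.set_params(
    epsilon=LJ_EPS, sigma=LJ_SIG, cutoff=lj_cut_mixed, shift='auto')
```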
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
prody/ProDy-website
_static/ipynb/workshop2020/prody_prs.ipynb
mit
[ "Prody Perturb Response Scanning (PRS): Evaluation of Sites Acting as Sensors and Effectors of Allosteric Signals\nThis tutorial demonstrates how to use perturbation response scanning (PRS) to determine sensors and effectors, which are important for allosteric signal transduction. The PRS approach is derived from linear response theory where perturbation forces are applied via a covariance matrix, which can be derived from elastic network models or MD simulations.\nThe example used in this tutorial is the Hsp70 chaperone, which we studied using this method in General et al. 2014, PLOS Comput. Biol. 10(5):e1003624. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4022485/\nFirst we need to import required packages.", "from prody import *\nimport numpy as np\nfrom pylab import *\n%matplotlib inline\nconfProDy(auto_show=False)", "1. Load in the starting structure and apply the anisotropic network model to it\nFirst, we parse a structure that we want to analyse with PRS. For this tutorial we will use the Hsp70 structure 4jne from the PDB. We first select the relevant residues and exclude flexible ends that may influence the results. In the same step, we can create a selection containing the calpha atoms (hsp70_ca), which we will use in downstream steps.", "hsp70_aa = parsePDB('4jne', chain='A')\nselection = hsp70_aa.select('resnum 4 to 530')\nhsp70_ca = selection.select('name CA')", "Next, create an GNM instance and calculate modes from which the covariance matrix can be calculated. We could alternatively apply the PRS to another model from which a covariance matrix could be derived such as PCA, GNM or an MD simulation.", "gnm = GNM('hsp70')\ngnm.buildKirchhoff(hsp70_ca)\ngnm.calcModes(n_modes='all')", "2. Calculate the normalized PRS matrix\nThe PRS matrix is then calculated from the covariance matrix from the GNM, which is symmetric and does not allow differentiation of sensors and effectors. We therefore normalize it by dividing each row by its diagonal element. This is handled by the function calcPerturbResponse, which also returns effectiveness and sensitivity profiles, which are the averages over the rows and columns of the normalized PRS matrix, respectively.", "prs_mat, eff, sens = calcPerturbResponse(gnm)", "These profiles can also be calculated during analysis steps as shown later.\n4. Identifying the effectors and sensors, and making a figure\nEffectors are the most effective residues whose perturbation has large effects on the structure and dynamics. Conversely, sensors are the most sensitive residues, which respond most strongly to perturbations of effectors and themselves undergo structural changes. We can identify these two sets of residues by (a) viewing the profiles as graphs and deciding upon a reasonable cutoff or (b) coloring residues by effectiveness and sensitivity on the structure and looking at them in a molecular graphics program. For the latter approach, we write the profiles into new PDB files in place of the b-factor or occupancy field.\na. Making graphs and plotting them alongside the matrix\nThis approach is implemented in the showPerturbResponse function. This function will calculate the PRS matrix (normalized covariance) as well as the effectiveness and sensitivity profiles. In order to do this, we provide a model from which the covariance matrix can be retrieved or calculated (in this case gnm). 
We also provide hsp70_ca so that atom information can be used too.", "showPerturbResponse(prs_mat, hsp70_ca, \n cmap=cm.inferno, \n norm=Normalize(0,np.max(prs_mat)/5));", "The last two options make the matrix color map match the paper and normalize the scale to make weaker signals more apparent. There are usually a few very strong signals, which otherwise drown out everything else. The current choice of capping at 1/5 of the max value looks reasonable for seeing the structure of the matrix.\n5. Plotting or visualizing effectiveness and sensitivity profiles\nWe can show the effectiveness and sensitivity profiles separately to aid in identifying the effectors and sensors, which would be the residues with the highest values for these quantities, respectively. This can be shown in a plot or on a structure.\na. Plotting profiles\nWe can plot profiles from showPerturbResponse by setting show_matrix=False. The default is to show the overall effectiveness and sensitivity profiles, which are the averages across the rows and columns. You can also select residues and show individual rows and columns corresponding to them as I will demonstrate later.", "showPerturbResponse(prs_mat, atoms=hsp70_ca, show_matrix=False);", "b. Writing profiles to PDB files for visualization (Figures 6B-C)\nIn order to visualize profiles on the structure, we write new PDB files with these values in the beta-factor or occupancy column. This can be done on the all-atom structure by making use of the function extendAtomicData as follows:", "writePDB('4jne_effectiveness.pdb', hsp70_ca, beta=eff)", "To visualize this data, load the files into the graphics program of your choice and color by b-factor. In VMD, you would do this through the Graphical Representation window (from Graphics > Representations menu). The window that comes up gives various Color Method options from which you would pick Beta. The residues with high b-factor are shown in blue followed by white and ending at red for low b-factor. You can change this via the Color Controls window (Graphics > Color); this has a Color Scale tab with a Method dropdown from which you can pick other options.\n5. Plotting or visualizing effectiveness and sensitivity profiles\nTo look at the effectiveness that perturbing a residue has in eliciting a response in individual residues (instead of its overall effectiveness) or to look at the sensitivity of a residue to perturbations of individual residues (instead of its overall sensitivity), we read out rows or columns from the perturbation response matrix.\na. Plotting\nFor this purpose, use the showPerturbResponse function with the option show_matrix=False, which makes it create line graphs. By default this gives plots for the average effectiveness and sensitivity. \nYou can also show plots for individual residues by slicing out rows or columns of the PRS matrix. 
By default, a row is selected (axis=0), which corresponds to the effectiveness of the selected residue(s).", "showPerturbResponse(prs_mat, atoms=hsp70_ca, show_matrix=False,\n                    select='resnum 389');", "To slice a column and show a sensitivity profile instead, we provide option axis=1.", "showPerturbResponse(prs_mat, atoms=hsp70_ca, show_matrix=False,\n                    select='resnum 389', axis=1);", "When multiple residues are selected, the lines are overlaid.", "showPerturbResponse(prs_mat, atoms=hsp70_ca, show_matrix=False,\n                    select='resnum 389 to 392', axis=1);", "To show individual plots, you can provide figure names or numbers as follows:", "for i, resnum in enumerate(range(389,392)):\n    showPerturbResponse(prs_mat, atoms=hsp70_ca, show_matrix=False,\n                        select='resnum {0}'.format(resnum), figure=i);", "b. Visualization\nWe can extract residue-specific profiles by slicing the PRS matrix with a residue selection using sliceAtomicData. We then use writePDB again. \nWe set axis=0 to read out a row from the PRS matrix, which is a residue-specific effectiveness profile. For example, you could use the following command:", "V389_effectiveness = sliceAtomicData(prs_mat, hsp70_ca, 'resnum 389', \n                                     axis=0)\n\nwritePDB('4jne_V389_row.pdb', hsp70_ca, betas=V389_effectiveness)", "You can also ask for columns (axis=1) rather than rows to get residue-specific sensitivity:", "V389_sensitivity = sliceAtomicData(prs_mat, hsp70_ca, 'resnum 389', \n                                   axis=1)\n\nwritePDB('4jne_V389_col.pdb', hsp70_ca, betas=V389_sensitivity)", "We can also extend the data to include all atoms using the function extendAtomicData as follows. We need to apply the flatten method of V389_sensitivity because it has two dimensions (one of them telling us that it is just one row).", "V389_sensitivity.shape\n\next_V389_sensitivity = extendAtomicData(V389_sensitivity.flatten(), \n                                        hsp70_ca, selection)\n\nwritePDB('4jne_aa_V389_col.pdb', selection, betas=ext_V389_sensitivity)", "The same applies for V389_effectiveness, where one of the two dimensions tells us that it is just one column.", "V389_effectiveness.shape\n\next_V389_effectiveness = extendAtomicData(V389_effectiveness.flatten(), \n                                          hsp70_ca, selection)\n\nwritePDB('4jne_aa_V389_row.pdb', selection, betas=ext_V389_effectiveness)" ]
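To pick effectors and sensors programmatically rather than by eye, one can simply rank residues by the averaged profiles. The following small numpy sketch assumes prs_mat and hsp70_ca from above; the row/column averaging convention follows the description at the beginning of this tutorial, and the shortlist of ten residues is an arbitrary choice (the eff and sens arrays returned by calcPerturbResponse can be used in exactly the same way).

```python
import numpy as np

# Row averages of the normalized PRS matrix -> overall effectiveness of each residue,
# column averages -> overall sensitivity of each residue
effectiveness = prs_mat.mean(axis=1)
sensitivity = prs_mat.mean(axis=0)

# Rank residues to shortlist candidate effectors and sensors (top 10 of each)
resnums = hsp70_ca.getResnums()
print('candidate effectors:', resnums[np.argsort(effectiveness)[::-1][:10]])
print('candidate sensors:  ', resnums[np.argsort(sensitivity)[::-1][:10]])
```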
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cliburn/sta-663-2017
homework/06_Making_Python_Faster_2.ipynb
mit
[ "%matplotlib inline\n\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Making Python Faster Part 2\n1. (25 points) Accelerating network bound procedures.\n\nPrint the names of the first 5 PNG images on the URL http://people.duke.edu/~ccc14/misc/. (10 points)\nWrite a function that uses a for loop to download all images and time how long it takes (5 points)\nWrite a function that uses concurrent.futures and a thread pool to download all images and time how long it takes (5 points)\nWrite a function that uses multiprocessing and a process pool to download all images and time how long it takes (5 points)", "import requests\nfrom bs4 import BeautifulSoup\n\ndef listFD(url, ext=''):\n page = requests.get(url).text\n soup = BeautifulSoup(page, 'html.parser')\n return [url + node.get('href') for node in soup.find_all('a') \n if node.get('href').endswith(ext)]\n\nsite = 'http://people.duke.edu/~ccc14/misc/'\next = 'png'\nfor i, file in enumerate(listFD(site, ext)):\n if i == 5:\n break\n print(file)\n\ndef download_one(url, path):\n r = requests.get(url, stream=True)\n img = r.raw.read()\n with open(path, 'wb') as f:\n f.write(img) \n\n%%time\n\nfor url in listFD(site, ext):\n filename = os.path.split(url)[-1]\n download_one(url, filename)\n\n%%time\n\nfrom concurrent.futures import ThreadPoolExecutor\n\nargs = [(url, os.path.split(url)[-1]) \n for url in listFD(site, ext)]\nwith ThreadPoolExecutor(max_workers=4) as pool:\n pool.map(lambda x: download_one(x[0], x[1]), args)\n\n%%time\n\nfrom multiprocessing import Pool\n\nargs = [(url, os.path.split(url)[-1]) \n for url in listFD(site, ext)]\nwith Pool(processes=4) as pool:\n pool.starmap(download_one, args)", "2. (25 points) Accelerating CPU bound procedures\n\nUse the insanely slow Buffon's needle algorithm to estimate $\\pi$. Suppose the needle is of length 1, and the lines are also 1 unit apart. Write a function to simulate the dropping of a pin with a random position and random angle, and return 0 if it does not cross a line and 1 if it does. Since the problem is periodic, you can assume that the bottom of the pin falls within (0, 1) and check if it crosses the line y=0 or y=1. (10 points)\nCalculate pi from dropping n=10^6 pins and time it (10 points)\nUse concurrent.futures and a process pool to parallelize your solution and time it.", "n = 100\np = 10\nxs = np.random.random((n, p))\n\ndef dist(x, y):\n return np.sqrt(np.sum((x - y)**2))\n\ndef pdist(xs):\n m = np.empty((len(xs), len(xs)))\n for i, x in enumerate(xs):\n for j, y in enumerate(xs):\n m[i, j] = dist(x, y)\n return m\n\n%timeit pdist(xs)", "3. (25 points) Use C++ to\n\nGenerate 10 $x$-coordinates linearly spaced between 10 and 15\nGenerate 10 random $y$-values as $y = 3x^2 − 7x + 2 + \\epsilon$ where $\\epsilon∼10N(0,1)$\nFind the norm of $x$ and $y$ considered as length 10-vectors\nFind the Euclidean distance between $x$ and $y$\nSolve the linear system to find a quadratic fit for this data\n\nYou may wish to use armadillo or eigen to solve this exercise.\n4. (25 points) 4. Write a C++ function that uses the eigen library to solve the least squares linear problem\n$$\n\\beta = (X^TX)^{-1}X^Ty\n$$\nfor a matrix $X$ and vector $y$ and returns the vector of coefficients $\\beta$. Wrap the function for use in Python and call it like so\nbeta &lt;- least_squares(X, y)\nwhere $X$ and $y$ are given below. 
\nWrap the function so that it can be called from Python and compare with the np.linalg.lstsq solution shown.", "n = 10\nx = np.linspace(0, 10, n)\ny = 3*x**2 - 7*x + 2 + np.random.normal(0, 10, n)\nX = np.c_[np.ones(n), x, x**2]\n\nbeta = np.linalg.lstsq(X, y)[0]\n\nbeta\n\nplt.scatter(x, y)\nplt.plot(x, X @ beta, 'red')\npass" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ocelot-collab/ocelot
demos/ipython_tutorials/6_coupler_kick.ipynb
gpl-3.0
[ "This notebook was created by Sergey Tomin (sergey.tomin@desy.de). Source and license info is on GitHub. April 2020.\nTutorial N6. Coupler Kick.\nSecond order tracking with coupler kick in TESLA type cavity of the 200k particles.\nAs an example, we will use linac L1 of the European XFEL Injector. \nThe input coupler and the higher order mode couplers of the RF cavities distort the axial symmetry of the electromagnetic (EM) field and affect the electron beam. This effect can be calculated by direct tracking of the particles in the asymmetric (due to the couplers) 3D EM field using a tracking code (e.g. ASTRA). For fast estimation of the coupler effect a discrete coupler model (as described, for example in M. Dohlus et al, Coupler Kick for Very Short Bunches and its Compensation, Proc. of EPAC08, MOPP013 or T.Hellert and M.Dohlus, Detuning related coupler kick variation of a superconducting nine-cell 1.3 GHz cavity) was implemented in OCELOT. Coefficients for 1.3 GHz modules are given in M.Dohlus, Effects of RF coupler kicks in L1 of EXFEL. The 1st order part of the model includes time and offset dependency; the offset dependency has a skew component. To include effect of all couplers, the kicks are applied at the entrance and the exit of each cavity. \nThe zeroth and first order kick $\\vec k$ on a bunch induced by a coupler can be expressed as\n\\begin{equation} \n\\vec k(x, y) \\approx \\frac{eV_0}{E_0} \\Re \\left{ \\left( \n\\begin{matrix}\nV_{x0}\\\nV_{y0}\n\\end{matrix} \\right) + \\left( \n\\begin{matrix}\n V_{xx} & V_{xy} \\\n V_{yx} & V_{yy}\n\\end{matrix}\\right)\n\\left( \n\\begin{matrix}\nx\\\ny\n\\end{matrix} \\right) e^{i \\phi}\\right}\n\\end{equation}\nwith $E_0$ being the beam energy at the corresponding coupler region, $V_0$ and $\\phi$ the amplitude and phase of the accelerating field, respectively, $e$ the elementary charge and $x$ and $y$ the transverse beam position at the coupler location. From Maxwell equations it follows that $V_{yy} = −V_{xx}$ and $V_{xy} = V_{yx}$. Thus, coupler kicks are up to first order well described with four normalized coupler kick coefficients $[V_{0x}, V_{0y}, V_{xx}, V_{xy}]$.\nIn OCELOT one can define copler kick coefficients for upstream and downstream coplers. \npython\nCavity(l=0., v=0., phi=0., freq=0., vx_up=0, vy_up=0, vxx_up=0, vxy_up=0,\n vx_down=0, vy_down=0, vxx_down=0, vxy_down=0, eid=None)\nThis example will cover the following topics:\n\nDefining the coupler coefficients for Cavity\ntracking of second order with Coupler Kick effect.\n\nDetails of implementation in the code\nNew in version 20.04.0\nThe coupler kicks are implemented in the code the same way as it was done for Edge elements. At the moment of inizialisation of MagneticLattice around Cavity element are created elemnents CouplerKick, the coupler kick before Cavity use coefficents with suffix \"_up\" (upstream) and after Cavity is placed CouplerKick with coefficent \"_down\" (downstream). 
The Coupler Kick elements are created even though coupler kick coefficennts are zeros.", "# the output of plotting commands is displayed inline within frontends, \n# directly below the code cell that produced it\n\n%matplotlib inline\n\nfrom time import time \n\n# this python library provides generic shallow (copy) \n# and deep copy (deepcopy) operations \nfrom copy import deepcopy\n\n# import from Ocelot main modules and functions\nfrom ocelot import *\n# extra function to track the Particle though a lattice\nfrom ocelot.cpbd.track import lattice_track\n\n# import from Ocelot graphical modules\nfrom ocelot.gui.accelerator import *\n\n# import lattice\nfrom xfel_l1 import *\n\ntws0 = Twiss()\ntws0.E = 0.005\n\ntws0.beta_x = 7.03383607232\ntws0.beta_y = 4.83025657816\ntws0.alpha_x = 0.981680481977\ntws0.alpha_y = -0.524776086698\ntws0.E = 0.1300000928\n\nlat = MagneticLattice(cell_l1, start=bpmf_103_i1, stop=qd_210_b1)\n\n# twiss parameters without coupler kick\ntws1 = twiss(lat, tws0)\n\n# adding coupler coefficients in [1/m]\nfor elem in lat.sequence:\n if elem.__class__ == Cavity:\n if not(\".AH1.\" in elem.id):\n # 1.3 GHz cavities\n elem.vx_up = (-56.813 + 10.751j) * 1e-6\n elem.vy_up = (-41.091 + 0.5739j) * 1e-6\n elem.vxx_up = (0.99943 - 0.81401j) * 1e-3\n elem.vxy_up = (3.4065 - 0.4146j) * 1e-3\n elem.vx_down = (-24.014 + 12.492j) * 1e-6\n elem.vy_down = (36.481 + 7.9888j) * 1e-6\n elem.vxx_down = (-4.057 - 0.1369j) * 1e-3\n elem.vxy_down = (2.9243 - 0.012891j) * 1e-3\n else:\n # AH1 cavity (3.9 GHz) module names are 'C3.AH1.1.1.I1', 'C3.AH1.1.2.I1', ...\n # Modules with odd and even number X 'C3.AH1.1.X.I1' have different coefficients\n \n module_number = float(elem.id.split(\".\")[-2])\n \n if module_number % 2 == 1:\n\n elem.vx_up = -0.00057076 - 1.3166e-05j\n elem.vy_up = -3.5079e-05 + 0.00012636j\n elem.vxx_up = -0.026045 - 0.042918j\n elem.vxy_up = 0.0055553 - 0.023455j\n\n elem.vx_down = -8.8766e-05 - 0.00024852j\n elem.vy_down = 2.9889e-05 + 0.00014486j\n elem.vxx_down = -0.0050593 - 0.013491j\n elem.vxy_down = 0.0051488 + 0.024771j\n else:\n\n elem.vx_up = 0.00057076 + 1.3166e-05j\n elem.vy_up = 3.5079e-05 - 0.00012636j\n elem.vxx_up = -0.026045 - 0.042918j\n elem.vxy_up = 0.0055553 - 0.023455j\n\n elem.vx_down = 8.8766e-05 + 0.00024852j\n elem.vy_down = -2.9889e-05 - 0.00014486j\n elem.vxx_down = -0.0050593 - 0.013491j\n elem.vxy_down = 0.0051488 + 0.024771j\n\n# update transfer maps\nlat.update_transfer_maps()\ntws = twiss(lat, tws0)", "Twiss parameters with and without coupler kick", "bx0 = [tw.beta_x for tw in tws1]\nby0 = [tw.beta_y for tw in tws1]\ns0 = [tw.s for tw in tws1]\n\nbx = [tw.beta_x for tw in tws]\nby = [tw.beta_y for tw in tws]\ns = [tw.s for tw in tws]\n\nfig, ax = plot_API(lat, legend=False)\nax.plot(s0, bx0, \"b\", lw=1, label=r\"$\\beta_x$\")\nax.plot(s, bx, \"b--\", lw=1, label=r\"$\\beta_x$, CK\")\nax.plot(s0, by0, \"r\", lw=1, label=r\"$\\beta_y$\")\nax.plot(s, by, \"r--\", lw=1, label=r\"$\\beta_y$, CK\")\nax.set_ylabel(r\"$\\beta_{x,y}$, m\")\nax.legend()\nplt.show()", "Trajectories with Coupler Kick", "def plot_trajectories(lat):\n f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)\n for a in np.arange(-0.6, 0.6, 0.1):\n cix_118_i1.angle = a*0.001\n lat.update_transfer_maps()\n p = Particle(px=0, E=0.130)\n plist = lattice_track(lat, p)\n s = [p.s for p in plist]\n x = [p.x for p in plist]\n y = [p.y for p in plist]\n px = [p.px for p in plist]\n py = [p.py for p in plist]\n ax1.plot(s, x)\n ax2.plot(s, y)\n plt.xlabel(\"z [m]\")\n plt.show()\n 
\nplot_trajectories(lat)", "Horizantal and vertical emittances\nBefore start we remove zero order terms (dipole kicks) from coupler kicks coefficients. \nAnd check if we have any asymmetry.", "for elem in lat.sequence:\n if elem.__class__ == Cavity:\n if not(\".AH1.\" in elem.id):\n # 1.3 GHz cavities\n elem.vx_up = 0.\n elem.vy_up = 0.\n elem.vxx_up = (0.99943 - 0.81401j) * 1e-3\n elem.vxy_up = (3.4065 - 0.4146j) * 1e-3\n elem.vx_down = 0.\n elem.vy_down = 0.\n elem.vxx_down = (-4.057 - 0.1369j) * 1e-3\n elem.vxy_down = (2.9243 - 0.012891j) * 1e-3\n\n# update transfer maps\nlat.update_transfer_maps()\n\n# plot the trajectories \nplot_trajectories(lat)", "Tracking of the particles though lattice with coupler kicks\nSteps:\n* create ParticleArray with zero length and zero energy spread and chirp\n* track the Particle array through the lattice\n* plot the emittances", "# create ParticleArray with \"one clice\"\nparray = generate_parray(sigma_tau=0., sigma_p=0.0, chirp=0.0)\nprint(parray)\n\n# track the beam though the lattice\nnavi = Navigator(lat)\ntws_track, _ = track(lat, parray, navi)\n\n# plot emittances\nemit_x = np.array([tw.emit_x for tw in tws_track])\nemit_y = np.array([tw.emit_y for tw in tws_track])\ngamma = np.array([tw.E for tw in tws_track])/m_e_GeV\n\ns = [tw.s for tw in tws_track]\n\nfig, ax = plot_API(lat, legend=False)\nax.plot(s, emit_x * gamma * 1e6, \"b\", lw=1, label=r\"$\\varepsilon_x$ [mm $\\cdot$ mrad]\")\nax.plot(s, emit_y * gamma * 1e6, \"r\", lw=1, label=r\"$\\varepsilon_y$ [mm $\\cdot$ mrad]\")\nax.set_ylabel(r\"$\\varepsilon_{x,y}$ [mm $\\cdot$ mrad]\")\nax.legend()\nplt.show()", "Eigenemittance\nAs can we see, the projected emittances are not preserved, although all matrices are symplectic. The reason is the coupler kicks inroduce coupling between $X$ and $Y$ planes while the projected emittances are invariants under linear uncoupled (with respect to the laboratory coordinate system) symplectic transport. \nHowever, there are invariants under arbitrary (possibly coupled) linear symplectic transformations - eigenemittances. Details can be found here V. Balandin and N. Golubeva \"Notes on Linear Theory of Coupled Particle Beams with Equal Eigenemittances\" and V.Balandin et al \"Twiss Parameters of Coupled Particle Beams with Equal Eigenemittances\"", "# plot emittances\nemit_x = np.array([tw.eigemit_1 for tw in tws_track])\nemit_y = np.array([tw.eigemit_2 for tw in tws_track])\ngamma = np.array([tw.E for tw in tws_track])/m_e_GeV\n\ns = [tw.s for tw in tws_track]\n\nfig, ax = plot_API(lat, legend=False)\nax.plot(s, emit_x * gamma * 1e6, \"b\", lw=1, label=r\"$\\varepsilon_x$ [mm $\\cdot$ mrad]\")\nax.plot(s, emit_y * gamma * 1e6, \"r\", lw=1, label=r\"$\\varepsilon_y$ [mm $\\cdot$ mrad]\")\nax.set_ylabel(r\"$\\varepsilon_{x,y}$ [mm $\\cdot$ mrad]\")\nax.legend()\nplt.show()" ]
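As a side note, the eigenemittances discussed above can also be obtained directly from a 4x4 second-moment matrix: the eigenvalues of $\Sigma J$, with $J$ the unit symplectic form, come in pairs $\pm i\varepsilon_{1,2}$. Below is a small standalone numpy sketch with a made-up, uncoupled beam matrix for illustration; the Twiss analysis used above remains the recommended way within OCELOT.

```python
import numpy as np

def eigenemittances(sigma):
    """Eigenemittances of a 4x4 second-moment matrix sigma in (x, x', y, y')."""
    # unit symplectic form for the phase-space ordering (x, x', y, y')
    J = np.array([[0., 1., 0., 0.],
                  [-1., 0., 0., 0.],
                  [0., 0., 0., 1.],
                  [0., 0., -1., 0.]])
    # eigenvalues of sigma @ J are +/- i*eps_1 and +/- i*eps_2
    eps = np.sort(np.abs(np.linalg.eigvals(sigma @ J).imag))[::-1]
    return eps[0], eps[2]  # each eigenemittance appears twice

# uncoupled example: the eigenemittances coincide with the projected emittances
ex, ey = 1e-6, 2e-6
sigma = np.diag([ex, ex, ey, ey])  # beta = 1 m, alpha = 0 in both planes
print(eigenemittances(sigma))
```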
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Mdround/fastai-deeplearning1
deeplearning1/nbs/statefarm-sample.ipynb
apache-2.0
[ "Enter State Farm", "from theano.sandbox import cuda\ncuda.use('gpu1')\n\n%matplotlib inline\nfrom __future__ import print_function, division\n#path = \"data/state/\"\npath = \"data/state/sample/\"\nimport utils; reload(utils)\nfrom utils import *\nfrom IPython.display import FileLink\n\nbatch_size=64", "Create sample\nThe following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.", "%cd data/state\n\n%cd train\n\n%mkdir ../sample\n%mkdir ../sample/train\n%mkdir ../sample/valid\n\nfor d in glob('c?'):\n os.mkdir('../sample/train/'+d)\n os.mkdir('../sample/valid/'+d)\n\nfrom shutil import copyfile\n\ng = glob('c?/*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(1500): copyfile(shuf[i], '../sample/train/' + shuf[i])\n\n%cd ../valid\n\ng = glob('c?/*.jpg')\nshuf = np.random.permutation(g)\nfor i in range(1000): copyfile(shuf[i], '../sample/valid/' + shuf[i])\n\n%cd ../../..\n\n%mkdir data/state/results\n\n%mkdir data/state/sample/test", "Create batches", "batches = get_batches(path+'train', batch_size=batch_size)\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)\n\n(val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames,\n test_filename) = get_classes(path)", "Basic models\nLinear model\nFirst, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(10, activation='softmax')\n ])", "As you can see below, this training is going nowhere...", "model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Let's first check the number of parameters to see that there's enough parameters to find some useful relationships:", "model.summary()", "Over 1.5 million parameters - that should be enough. Incidentally, it's worth checking you understand why this is the number of parameters in this layer:", "10*3*224*224", "Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and there is likely to be where we would end up with a high learning rate. So let's check:", "np.round(model.predict_generator(batches, batches.N)[:10],2)", "Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate:", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(10, activation='softmax')\n ])\nmodel.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Great - we found our way out of that hole... 
Now we can increase the learning rate and see where we can get to.", "model.optimizer.lr=0.001\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "We're stabilizing at validation accuracy of 0.39. Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results:", "rnd_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=True)\n\nval_res = [model.evaluate_generator(rnd_batches, rnd_batches.nb_sample) for i in range(10)]\nnp.round(val_res, 2)", "Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples.\nL2 regularization\nThe previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (i.e. add the sum of squares of the weights to our loss function):", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(10, activation='softmax', W_regularizer=l2(0.01))\n ])\nmodel.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n\nmodel.optimizer.lr=0.001\n\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Looks like we can get a bit over 50% accuracy this way. This will be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach.\nSingle hidden layer\nThe next simplest model is to add a single hidden layer.", "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Flatten(),\n Dense(100, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\nmodel.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n\nmodel.optimizer.lr = 0.01\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Not looking very encouraging... which isn't surprising since we know that CNNs are a much better choice for computer vision problems. 
So we'll try one.\nSingle conv layer\n2 conv layers with max pooling followed by a simple dense network is a good simple CNN to start with:", "def conv1(batches):\n model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n\n model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])\n model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n model.optimizer.lr = 0.001\n model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n return model\n\nconv1(batches)", "The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable result.\nSo, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.\nData augmentation\nTo find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best result we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.\nWidth shift: move the image left and right -", "gen_t = image.ImageDataGenerator(width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Height shift: move the image up and down -", "gen_t = image.ImageDataGenerator(height_shift_range=0.05)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Random shear angles (max in radians) -", "gen_t = image.ImageDataGenerator(shear_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Rotation: max in degrees -", "gen_t = image.ImageDataGenerator(rotation_range=15)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "Channel shift: randomly changing the R,G,B colors -", "gen_t = image.ImageDataGenerator(channel_shift_range=20)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "And finally, putting it all together!", "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)", "At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decisions.", "model.optimizer.lr = 0.0001\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Lucky we tried that - we starting to make progress! 
Let's keep going.", "model.fit_generator(batches, batches.nb_sample, nb_epoch=25, validation_data=val_batches, \n                    nb_val_samples=val_batches.nb_sample)", "Amazingly, using nothing but a small sample, a simple (not pre-trained) model with no dropout, and data augmentation, we're getting results that would get us into the top 50% of the competition! This looks like a great foundation for our further experiments.\nTo go further, we'll need to use the whole dataset, since dropout and data volumes are very related, so we can't tweak dropout without using all the data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/federated
docs/tutorials/federated_learning_for_text_generation.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Federated Learning for Text Generation\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/federated_learning_for_text_generation.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/federated_learning_for_text_generation.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/federated_learning_for_text_generation.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNOTE: This colab has been verified to work with the latest released version of the tensorflow_federated pip package, but the Tensorflow Federated project is still in pre-release development and may not work on main.\nThis tutorial builds on the concepts in the Federated Learning for Image Classification tutorial, and demonstrates several other useful approaches for federated learning.\nIn particular, we load a previously trained Keras model, and refine it using federated training on a (simulated) decentralized dataset. This is practically important for several reasons . The ability to use serialized models makes it easy to mix federated learning with other ML approaches. Further, this allows use of an increasing range of pre-trained models --- for example, training language models from scratch is rarely necessary, as numerous pre-trained models are now widely available (see, e.g., TF Hub). Instead, it makes more sense to start from a pre-trained model, and refine it using Federated Learning, adapting to the particular characteristics of the decentralized data for a particular application.\nFor this tutorial, we start with a RNN that generates ASCII characters, and refine it via federated learning. 
We also show how the final weights can be fed back to the original Keras model, allowing easy evaluation and text generation using standard tools.", "#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow-federated\n!pip install --quiet --upgrade nest-asyncio\n\nimport nest_asyncio\nnest_asyncio.apply()\n\nimport collections\nimport functools\nimport os\nimport time\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\nnp.random.seed(0)\n\n# Test the TFF is working:\ntff.federated_computation(lambda: 'Hello, World!')()", "Load a pre-trained model\nWe load a model that was pre-trained following the TensorFlow tutorial\nText generation using a RNN with eager execution. However,\nrather than training on The Complete Works of Shakespeare, we pre-trained the model on the text from the Charles Dickens'\n A Tale of Two Cities\n and\n A Christmas Carol.\nOther than expanding the vocabulary, we didn't modify the original tutorial, so this initial model isn't state-of-the-art, but it produces reasonable predictions and is sufficient for our tutorial purposes. The final model was saved with tf.keras.models.save_model(include_optimizer=False).\nWe will use federated learning to fine-tune this model for Shakespeare in this tutorial, using a federated version of the data provided by TFF.\nGenerate the vocab lookup tables", "# A fixed vocabularly of ASCII chars that occur in the works of Shakespeare and Dickens:\nvocab = list('dhlptx@DHLPTX $(,048cgkoswCGKOSW[_#\\'/37;?bfjnrvzBFJNRVZ\"&*.26:\\naeimquyAEIMQUY]!%)-159\\r')\n\n# Creating a mapping from unique characters to indices\nchar2idx = {u:i for i, u in enumerate(vocab)}\nidx2char = np.array(vocab)", "Load the pre-trained model and generate some text", "def load_model(batch_size):\n urls = {\n 1: 'https://storage.googleapis.com/tff-models-public/dickens_rnn.batch1.kerasmodel',\n 8: 'https://storage.googleapis.com/tff-models-public/dickens_rnn.batch8.kerasmodel'}\n assert batch_size in urls, 'batch_size must be in ' + str(urls.keys())\n url = urls[batch_size]\n local_file = tf.keras.utils.get_file(os.path.basename(url), origin=url) \n return tf.keras.models.load_model(local_file, compile=False)\n\ndef generate_text(model, start_string):\n # From https://www.tensorflow.org/tutorials/sequences/text_generation\n num_generate = 200\n input_eval = [char2idx[s] for s in start_string]\n input_eval = tf.expand_dims(input_eval, 0)\n text_generated = []\n temperature = 1.0\n\n model.reset_states()\n for i in range(num_generate):\n predictions = model(input_eval)\n predictions = tf.squeeze(predictions, 0)\n predictions = predictions / temperature\n predicted_id = tf.random.categorical(\n predictions, num_samples=1)[-1, 0].numpy()\n input_eval = tf.expand_dims([predicted_id], 0)\n text_generated.append(idx2char[predicted_id])\n\n return (start_string + ''.join(text_generated))\n\n# Text generation requires a batch_size=1 model.\nkeras_model_batch1 = load_model(batch_size=1)\nprint(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))", "Load and Preprocess the Federated Shakespeare Data\nThe tff.simulation.datasets package provides a variety of datasets that are split into \"clients\", where each client corresponds to a dataset on a particular device that might participate in federated learning.\nThese datasets provide realistic non-IID data distributions that replicate in simulation the challenges of training on real decentralized data. 
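For orientation, the per-client structure can be inspected directly once the data is loaded in the next code cell; every simulated client is identified by a key such as the play/character combination described below. A minimal sketch, to be run after the load_data() call:

```python
# Quick look at the simulated client population (assumes `train_data` from the
# shakespeare.load_data() call in the next code cell)
print('number of clients:', len(train_data.client_ids))
print('example ids:', train_data.client_ids[:3])
```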
Some of the pre-processing of this data was done using tools from the Leaf project (github).", "train_data, test_data = tff.simulation.datasets.shakespeare.load_data()", "The datasets provided by shakespeare.load_data() consist of a sequence of\nstring Tensors, one for each line spoken by a particular character in a\nShakespeare play. The client keys consist of the name of the play joined with\nthe name of the character, so for example MUCH_ADO_ABOUT_NOTHING_OTHELLO corresponds to the lines for the character Othello in the play Much Ado About Nothing. Note that in a real federated learning scenario\nclients are never identified or tracked by ids, but for simulation it is useful\nto work with keyed datasets.\nHere, for example, we can look at some data from King Lear:", "# Here the play is \"The Tragedy of King Lear\" and the character is \"King\".\nraw_example_dataset = train_data.create_tf_dataset_for_client(\n 'THE_TRAGEDY_OF_KING_LEAR_KING')\n# To allow for future extensions, each entry x\n# is an OrderedDict with a single key 'snippets' which contains the text.\nfor x in raw_example_dataset.take(2):\n print(x['snippets'])", "We now use tf.data.Dataset transformations to prepare this data for training the char RNN loaded above.", "# Input pre-processing parameters\nSEQ_LENGTH = 100\nBATCH_SIZE = 8\nBUFFER_SIZE = 100 # For dataset shuffling\n\n# Construct a lookup table to map string chars to indexes,\n# using the vocab loaded above:\ntable = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(\n keys=vocab, values=tf.constant(list(range(len(vocab))),\n dtype=tf.int64)),\n default_value=0)\n\n\ndef to_ids(x):\n s = tf.reshape(x['snippets'], shape=[1])\n chars = tf.strings.bytes_split(s).values\n ids = table.lookup(chars)\n return ids\n\n\ndef split_input_target(chunk):\n input_text = tf.map_fn(lambda x: x[:-1], chunk)\n target_text = tf.map_fn(lambda x: x[1:], chunk)\n return (input_text, target_text)\n\n\ndef preprocess(dataset):\n return (\n # Map ASCII chars to int64 indexes using the vocab\n dataset.map(to_ids)\n # Split into individual chars\n .unbatch()\n # Form example sequences of SEQ_LENGTH +1\n .batch(SEQ_LENGTH + 1, drop_remainder=True)\n # Shuffle and form minibatches\n .shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)\n # And finally split into (input, target) tuples,\n # each of length SEQ_LENGTH.\n .map(split_input_target))", "Note that in the formation of the original sequences and in the formation of\nbatches above, we use drop_remainder=True for simplicity. This means that any\ncharacters (clients) that don't have at least (SEQ_LENGTH + 1) * BATCH_SIZE\nchars of text will have empty datasets. A typical approach to address this would\nbe to pad the batches with a special token, and then mask the loss to not take\nthe padding tokens into account.\nThis would complicate the example somewhat, so for this tutorial we only use full batches, as in the\nstandard tutorial.\nHowever, in the federated setting this issue is more significant, because many\nusers might have small datasets.\nNow we can preprocess our raw_example_dataset, and check the types:", "example_dataset = preprocess(raw_example_dataset)\nprint(example_dataset.element_spec)", "Compile the model and test on the preprocessed data\nWe loaded an uncompiled keras model, but in order to run keras_model.evaluate, we need to compile it with a loss and metrics. 
We will also compile in an optimizer, which will be used as the on-device optimizer in Federated Learning.\nThe original tutorial didn't have char-level accuracy (the fraction\nof predictions where the highest probability was put on the correct\nnext char). This is a useful metric, so we add it.\nHowever, we need to define a new metric class for this because \nour predictions have rank 3 (a vector of logits for each of the \nBATCH_SIZE * SEQ_LENGTH predictions), and SparseCategoricalAccuracy\nexpects only rank 2 predictions.", "class FlattenedCategoricalAccuracy(tf.keras.metrics.SparseCategoricalAccuracy):\n\n def __init__(self, name='accuracy', dtype=tf.float32):\n super().__init__(name, dtype=dtype)\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n y_true = tf.reshape(y_true, [-1, 1])\n y_pred = tf.reshape(y_pred, [-1, len(vocab), 1])\n return super().update_state(y_true, y_pred, sample_weight)", "Now we can compile a model, and evaluate it on our example_dataset.", "BATCH_SIZE = 8 # The training and eval batch size for the rest of this tutorial.\nkeras_model = load_model(batch_size=BATCH_SIZE)\nkeras_model.compile(\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[FlattenedCategoricalAccuracy()])\n\n# Confirm that loss is much lower on Shakespeare than on random data\nloss, accuracy = keras_model.evaluate(example_dataset.take(5), verbose=0)\nprint(\n 'Evaluating on an example Shakespeare character: {a:3f}'.format(a=accuracy))\n\n# As a sanity check, we can construct some completely random data, where we expect\n# the accuracy to be essentially random:\nrandom_guessed_accuracy = 1.0 / len(vocab)\nprint('Expected accuracy for random guessing: {a:.3f}'.format(\n a=random_guessed_accuracy))\nrandom_indexes = np.random.randint(\n low=0, high=len(vocab), size=1 * BATCH_SIZE * (SEQ_LENGTH + 1))\ndata = collections.OrderedDict(\n snippets=tf.constant(\n ''.join(np.array(vocab)[random_indexes]), shape=[1, 1]))\nrandom_dataset = preprocess(tf.data.Dataset.from_tensor_slices(data))\nloss, accuracy = keras_model.evaluate(random_dataset, steps=10, verbose=0)\nprint('Evaluating on completely random data: {a:.3f}'.format(a=accuracy))", "Fine-tune the model with Federated Learning\nTFF serializes all TensorFlow computations so they can potentially be run in a\nnon-Python environment (even though at the moment, only a simulation runtime implemented in Python is available). Even though we are running in eager mode, (TF 2.0), currently TFF serializes TensorFlow computations by constructing the\nnecessary ops inside the context of a \"with tf.Graph.as_default()\" statement.\nThus, we need to provide a function that TFF can use to introduce our model into\na graph it controls. We do this as follows:", "# Clone the keras_model inside `create_tff_model()`, which TFF will\n# call to produce a new copy of the model inside the graph that it will \n# serialize. 
Note: we want to construct all the necessary objects we'll need\n# _inside_ this method.\ndef create_tff_model():\n  # TFF uses an `input_spec` so it knows the types and shapes\n  # that your model expects.\n  input_spec = example_dataset.element_spec\n  keras_model_clone = tf.keras.models.clone_model(keras_model)\n  return tff.learning.from_keras_model(\n      keras_model_clone,\n      input_spec=input_spec,\n      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n      metrics=[FlattenedCategoricalAccuracy()])", "Now we are ready to construct a Federated Averaging iterative process, which we will use to improve the model (for details on the Federated Averaging algorithm, see the paper Communication-Efficient Learning of Deep Networks from Decentralized Data).\nWe use a compiled Keras model to perform standard (non-federated) evaluation after each round of federated training. This is useful for research purposes when doing simulated federated learning and there is a standard test dataset.\nIn a realistic production setting this same technique might be used to take models trained with federated learning and evaluate them on a centralized benchmark dataset for testing or quality assurance purposes.", "# This command builds all the TensorFlow graphs and serializes them:\nfed_avg = tff.learning.build_federated_averaging_process(\n    model_fn=create_tff_model,\n    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.5))", "Here is the simplest possible loop, where we run federated averaging for one round on a single client, using just a few batches of that client's data:", "state = fed_avg.initialize()\nstate, metrics = fed_avg.next(state, [example_dataset.take(5)])\ntrain_metrics = metrics['train']\nprint('loss={l:.3f}, accuracy={a:.3f}'.format(\n    l=train_metrics['loss'], a=train_metrics['accuracy']))", "Now let's write a slightly more interesting training and evaluation loop.\nSo that this simulation still runs relatively quickly, we train on the same two clients each round, considering only the first five minibatches for each.", "def data(client, source=train_data):\n  return preprocess(source.create_tf_dataset_for_client(client)).take(5)\n\n\nclients = [\n    'ALL_S_WELL_THAT_ENDS_WELL_CELIA', 'MUCH_ADO_ABOUT_NOTHING_OTHELLO',\n]\n\ntrain_datasets = [data(client) for client in clients]\n\n# We concatenate the test datasets for evaluation with Keras by creating a\n# Dataset of Datasets, and then identity flat mapping across all the examples.\ntest_dataset = tf.data.Dataset.from_tensor_slices(\n    [data(client, test_data) for client in clients]).flat_map(lambda x: x)",
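"The two client keys above were chosen by hand. If you would like to experiment with other speakers, the full set of keys is available from train_data.client_ids; the snippet below is a small, optional check (not needed for the rest of the tutorial) showing how you might inspect it.", "# Optional: inspect the available client keys to pick other characters.\nprint('Number of training clients:', len(train_data.client_ids))\nprint('A few example keys:', train_data.client_ids[:3])",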
"The initial state of the model produced by fed_avg.initialize() is based\non the random initializers for the Keras model, not the weights that were loaded,\nsince clone_model() does not clone the weights. To start training\nfrom a pre-trained model, we set the model weights in the server state\ndirectly from the loaded model.", "NUM_ROUNDS = 5\n\n# The state of the FL server, containing the model and optimization state.\nstate = fed_avg.initialize()\n\n# Load our pre-trained Keras model weights into the global model state.\nstate = tff.learning.state_with_new_model_weights(\n    state,\n    trainable_weights=[v.numpy() for v in keras_model.trainable_weights],\n    non_trainable_weights=[\n        v.numpy() for v in keras_model.non_trainable_weights\n    ])\n\n\ndef keras_evaluate(state, round_num):\n  # Take our global model weights and push them back into a Keras model to\n  # use its standard `.evaluate()` method.\n  keras_model = load_model(batch_size=BATCH_SIZE)\n  keras_model.compile(\n      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n      metrics=[FlattenedCategoricalAccuracy()])\n  state.model.assign_weights_to(keras_model)\n  loss, accuracy = keras_model.evaluate(example_dataset, steps=2, verbose=0)\n  print('\\tEval: loss={l:.3f}, accuracy={a:.3f}'.format(l=loss, a=accuracy))\n\n\nfor round_num in range(NUM_ROUNDS):\n  print('Round {r}'.format(r=round_num))\n  keras_evaluate(state, round_num)\n  state, metrics = fed_avg.next(state, train_datasets)\n  train_metrics = metrics['train']\n  print('\\tTrain: loss={l:.3f}, accuracy={a:.3f}'.format(\n      l=train_metrics['loss'], a=train_metrics['accuracy']))\n\nprint('Final evaluation')\nkeras_evaluate(state, NUM_ROUNDS + 1)", "With the default settings, we haven't done enough training to make a big difference, but if you train longer on more Shakespeare data, you should see a difference in the style of the text generated with the updated model:", "# Set our newly trained weights back in the originally created model.\nkeras_model_batch1.set_weights([v.numpy() for v in keras_model.weights])\n# Text generation requires batch_size=1\nprint(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))", "Suggested extensions\nThis tutorial is just the first step! Here are some ideas for how you might try extending this notebook:\n * Write a more realistic training loop where you sample clients to train on randomly (a minimal sketch of this is given in the final cell below).\n * Use \".repeat(NUM_EPOCHS)\" on the client datasets to try multiple epochs of local training (e.g., as in McMahan et al.). See also Federated Learning for Image Classification which does this.\n * Change the compile() command to experiment with using different optimization algorithms on the client.\n * Try the server_optimizer_fn argument to build_federated_averaging_process to apply different algorithms for the model updates on the server.\n * Try the client_weight_fn argument to build_federated_averaging_process to use different weightings of the clients. The default weights client updates by the number of examples on the client, but you can do e.g. client_weight_fn=lambda _: tf.constant(1.0).",
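"As a starting point for the first suggestion above, here is a minimal sketch of a training loop that samples clients at random each round. It is only a sketch: CLIENTS_PER_ROUND and the use of np.random.choice are illustrative assumptions rather than part of the original tutorial, and a real experiment would also fix random seeds and evaluate on held-out clients.", "# Illustrative sketch of random client sampling (see 'Suggested extensions').\nCLIENTS_PER_ROUND = 2  # Hypothetical value; tune for your experiment.\n\nsampling_state = fed_avg.initialize()\nfor round_num in range(NUM_ROUNDS):\n  sampled_ids = np.random.choice(\n      train_data.client_ids, size=CLIENTS_PER_ROUND, replace=False)\n  # Note: with drop_remainder=True in preprocess(), some sampled clients may\n  # yield empty datasets; a real loop might filter those out first.\n  sampled_datasets = [data(client_id) for client_id in sampled_ids]\n  sampling_state, metrics = fed_avg.next(sampling_state, sampled_datasets)\n  print('Round {r}: loss={l:.3f}'.format(\n      r=round_num, l=metrics['train']['loss']))" ]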
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "markdown", "code" ]