Export as Edge model You can export an AutoML image object detection model as an Edge model, which you can then deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters: artifact_destination: The Cloud Storage location t...
response = model.export_model( artifact_destination=BUCKET_NAME, export_format_id="tflite", sync=True ) model_package = response["artifactOutputUri"]
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Download the TFLite model artifacts Now that you have an exported TFLite version of your model, you can test the exported model locally, but you must first download it from Cloud Storage.
! gsutil ls $model_package # Download the model artifacts ! gsutil cp -r $model_package tflite tflite_path = "tflite/model.tflite"
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Instantiate a TFLite interpreter The TFLite version of the model is not in the TensorFlow SavedModel format, so you cannot directly use methods like predict(). Instead, you use the TFLite interpreter. You must first set up the interpreter for the TFLite model as follows: Instantiate a TFLite interpreter for the TFLite model....
import tensorflow as tf interpreter = tf.lite.Interpreter(model_path=tflite_path) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]["shape"] print("input tensor shape", input_shape)
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
test_items = ! gsutil cat $IMPORT_FILE | head -n1 test_item = test_items[0].split(",")[0] with tf.io.gfile.GFile(test_item, "rb") as f: content = f.read() test_image = tf.io.decode_jpeg(content) print("test image shape", test_image.shape) test_image = tf.image.resize(test_image, (224, 224)) print("test image shap...
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make a prediction with TFLite model Finally, you do a prediction using your TFLite model, as follows: Convert the test image into a batch of a single image (np.expand_dims) Set the input tensor for the interpreter to your batch of a single image (data). Invoke the interpreter. Retrieve the softmax probabilities for th...
import numpy as np data = np.expand_dims(test_image, axis=0) interpreter.set_tensor(input_details[0]["index"], data) interpreter.invoke() softmax = interpreter.get_tensor(output_details[0]["index"]) label = np.argmax(softmax) print(label)
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Classes Everything is an object in Python, including native types. You define class names with camel casing. You define the constructor with the special name __init__(). Private fields are denoted with a _variable_name prefix, and properties are decorated with the @property decorator. Fields and properties are accesse...
# Define a class to hold a satellite or aerial imagery file. Its properties give information # such as location of the ground, area, dimensions, spatial and spectral resolution etc. class ImageryObject: _default_gsd = 5.0 def __init__(self, file_path): self._file_path = file_path self._gps...
python_crash_course/python_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
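The conventions above can be sketched with a small class of my own (a hypothetical `SensorReading`, not the notebook's `ImageryObject`): CamelCase class name, `__init__()` constructor, an underscore-prefixed field, and a `@property` accessor.

```python
# Minimal sketch of the class conventions described above.
class SensorReading:
    _default_unit = 'C'  # class-level field shared by all instances

    def __init__(self, value):
        self._value = value  # private-by-convention instance field

    @property
    def value(self):
        """Read-only access to the underlying field."""
        return self._value

r = SensorReading(21.5)
print(r.value)            # properties are accessed without parentheses
print(r._default_unit)
```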
Exception handling Exceptions are classes. You can define your own by inheriting from Exception class. try: statements except Exception_type1 as e1: handling statements except Exception_type2 as e2: specific handling statements except Exception as generic_ex: generic handling statements else: so...
try: img2 = ImageryObject(r"user\img\file2.img") img2.display() except: print("something bad happened") try: img2 = ImageryObject(r"user\img\file2.img") img2.display() except: print("something bad happened") else: print("else block") finally: print("finally block") try: img2 = Imager...
python_crash_course/python_cheat_sheet_2.ipynb
AtmaMani/pyChakras
mit
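A runnable sketch of the pattern above: a user-defined exception inheriting from `Exception`, handled with `except`/`else`/`finally`. `InvalidPathError` and `open_imagery` are hypothetical names for illustration, not part of the notebook's code.

```python
# A user-defined exception: just inherit from Exception.
class InvalidPathError(Exception):
    pass

def open_imagery(path):
    if not path.endswith('.img'):
        raise InvalidPathError('not an imagery file: %s' % path)
    return path

try:
    f = open_imagery('file2.img')
except InvalidPathError as e:
    print('something bad happened:', e)
else:
    print('else block: opened', f)   # runs only if no exception was raised
finally:
    print('finally block')           # always runs
```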
This downloads the dataset and automatically pre-processes it into sparse matrices suitable for further calculation. In particular, it prepares the sparse user-item matrices, containing positive entries where a user interacted with a product, and zeros otherwise. We have two such matrices, a training and a testing set....
print(repr(data['train'])) print(repr(data['test']))
examples/quickstart/quickstart.ipynb
paoloRais/lightfm
apache-2.0
We need to import the model class to fit the model:
from lightfm import LightFM
examples/quickstart/quickstart.ipynb
paoloRais/lightfm
apache-2.0
We're going to use the WARP (Weighted Approximate-Rank Pairwise) model. WARP is an implicit feedback model: all interactions in the training matrix are treated as positive signals, and products that users did not interact with are treated as ones they implicitly do not like. The goal of the model is to score these implicit positives highl...
model = LightFM(loss='warp') %time model.fit(data['train'], epochs=30, num_threads=2)
examples/quickstart/quickstart.ipynb
paoloRais/lightfm
apache-2.0
Done! We should now evaluate the model to see how well it's doing. We're most interested in how good the ranking produced by the model is. Precision@k is one suitable metric, expressing the percentage of top k items in the ranking the user has actually interacted with. lightfm implements a number of metrics in the eval...
from lightfm.evaluation import precision_at_k
examples/quickstart/quickstart.ipynb
paoloRais/lightfm
apache-2.0
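To make the metric concrete, here is a toy precision@k computation in plain Python (an illustration of the definition, not lightfm's implementation): the fraction of the top-k ranked items the user actually interacted with.

```python
# precision@k: how many of the top k ranked items are relevant?
def toy_precision_at_k(ranked_items, relevant_items, k=5):
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    return hits / k

ranking = ['a', 'b', 'c', 'd', 'e', 'f']   # model's ranking, best first
relevant = {'a', 'c', 'f'}                  # items the user interacted with
print(toy_precision_at_k(ranking, relevant, k=5))  # 2 of the top 5 -> 0.4
```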
We'll measure precision in both the train and the test set.
print("Train precision: %.2f" % precision_at_k(model, data['train'], k=5).mean()) print("Test precision: %.2f" % precision_at_k(model, data['test'], k=5).mean())
examples/quickstart/quickstart.ipynb
paoloRais/lightfm
apache-2.0
Unsurprisingly, the model fits the train set better than the test set. For an alternative way of judging the model, we can sample a couple of users and get their recommendations. To make predictions for a given user, we pass the id of that user and the ids of all products we want predictions for into the predict method.
def sample_recommendation(model, data, user_ids): n_users, n_items = data['train'].shape for user_id in user_ids: known_positives = data['item_labels'][data['train'].tocsr()[user_id].indices] scores = model.predict(user_id, np.arange(n_items)) top_items = data['item_label...
examples/quickstart/quickstart.ipynb
paoloRais/lightfm
apache-2.0
CONTENTS Overview Graph Coloring N-Queens AC-3 Backtracking Search Tree CSP Solver Graph Coloring Visualization N-Queens Visualization OVERVIEW CSPs are a special kind of search problem. Here we don't treat the state space as a black box; the state has a particular form, and we use that to our advantage to tweak our alg...
psource(CSP)
csp.ipynb
jo-tez/aima-python
mit
The __init__ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict (dictionary datatype) whose keys are the variables and whose values are the domains. If the variables are passed as an empty list, they are extracted from the keys of the doma...
s = UniversalDict(['R','G','B']) s[5]
csp.ipynb
jo-tez/aima-python
mit
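Judging from the usage above, `UniversalDict` is a mapping that returns the same value for every key, which is handy when all CSP variables share one domain. A minimal sketch of that behavior (an assumption based on the cell above, not the module's code):

```python
# A dict-like object that maps every key to one shared value.
class UniversalDict:
    def __init__(self, value):
        self.value = value

    def __getitem__(self, key):
        return self.value  # any key yields the shared domain

s = UniversalDict(['R', 'G', 'B'])
print(s[5])    # the same list for any key
print(s[42])
```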
For our CSP we also need to define a constraint function f(A, a, B, b). In this, we need to ensure that the neighbors don't have the same color. This is defined in the function different_values_constraint of the module.
psource(different_values_constraint)
csp.ipynb
jo-tez/aima-python
mit
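A constraint in the f(A, a, B, b) form for map coloring can be as simple as the following sketch, along the lines of what `different_values_constraint` does (neighbors must take different colors):

```python
# f(A, a, B, b): variables A and B with values a and b satisfy the
# constraint iff their values differ.
def different_values_constraint(A, a, B, b):
    return a != b

print(different_values_constraint('WA', 'R', 'NT', 'G'))  # True
print(different_values_constraint('WA', 'R', 'NT', 'R'))  # False
```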
The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows us to take input in the form of strings and return a Dict of a form that is compatible with the CSP Class.
%pdoc parse_neighbors
csp.ipynb
jo-tez/aima-python
mit
The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. Australia, USA and France are three CSPs that have been created using MapColoringCSP...
psource(MapColoringCSP) australia, usa, france
csp.ipynb
jo-tez/aima-python
mit
N-QUEENS The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard in a way such that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some mod...
psource(queen_constraint)
csp.ipynb
jo-tez/aima-python
mit
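The queens constraint in the same f(A, a, B, b) form can be sketched as follows (my reading of the idea, hedged as an assumption about the module's `queen_constraint`): queens in columns A and B, placed in rows a and b, must not share a row or a diagonal.

```python
# Two queens (column A, row a) and (column B, row b) are non-attacking iff
# they are the same queen, or differ in row and in both diagonals.
def queen_constraint(A, a, B, b):
    return A == B or (a != b and A + a != B + b and A - a != B - b)

print(queen_constraint(0, 0, 1, 2))  # different row/diagonals
print(queen_constraint(0, 0, 1, 1))  # same diagonal: attacking
```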
The NQueensCSP class implements methods that support solving the problem via min_conflicts, one of the many popular techniques for solving CSPs. Because min_conflicts hill climbs on the number of conflicts, the CSP assign and unassign methods are modified to record conflicts. More details about the structures: r...
psource(NQueensCSP)
csp.ipynb
jo-tez/aima-python
mit
The __init__ method takes only one parameter, n, i.e. the size of the problem. To create an instance, we just pass the required value of n into the constructor.
eight_queens = NQueensCSP(8)
csp.ipynb
jo-tez/aima-python
mit
We have defined our CSP. Now, we need to solve it. Min-conflicts As stated above, the min_conflicts algorithm is an efficient method for solving such a problem. <br> At the start, all the variables of the CSP are randomly initialized. <br> The algorithm then randomly selects a variable that has conflicts and violates ...
psource(min_conflicts)
csp.ipynb
jo-tez/aima-python
mit
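The loop described above can be sketched generically for N-queens (a toy illustration of min-conflicts, not the module's implementation): start from a random complete assignment, then repeatedly pick a conflicted column and move its queen to the row with the fewest conflicts.

```python
import random

def min_conflicts_queens(n, max_steps=10000):
    # assignment: rows[col] = row of the queen in that column
    rows = {col: random.randrange(n) for col in range(n)}

    def conflicts(col, row):
        # count other queens sharing the row or a diagonal
        return sum(1 for c, r in rows.items() if c != col and
                   (r == row or abs(r - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in rows if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows  # no conflicts left: a valid solution
        col = random.choice(conflicted)
        rows[col] = min(range(n), key=lambda r: conflicts(col, r))
    return None  # may fail on an unlucky run; just retry

print(min_conflicts_queens(8))
```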
Let's use this algorithm to solve the eight_queens CSP.
solution = min_conflicts(eight_queens)
csp.ipynb
jo-tez/aima-python
mit
This is indeed a valid solution. <br> notebook.py has a helper function to visualize the solution space.
plot_NQueens(solution)
csp.ipynb
jo-tez/aima-python
mit
Let's see if we can find a different solution.
eight_queens = NQueensCSP(8) solution = min_conflicts(eight_queens) plot_NQueens(solution)
csp.ipynb
jo-tez/aima-python
mit
The solution is a bit different this time. Running the above cell several times should give you different valid solutions. <br> In the search.ipynb notebook, we will see how NQueensProblem can be solved using a heuristic search method such as uniform_cost_search and astar_search. Helper Functions We will now implement...
import copy class InstruCSP(CSP): def __init__(self, variables, domains, neighbors, constraints): super().__init__(variables, domains, neighbors, constraints) self.assignment_history = [] def assign(self, var, val, assignment): super().assign(var,val, assignment) se...
csp.ipynb
jo-tez/aima-python
mit
Next, we define make_instru which takes an instance of CSP and returns an instance of InstruCSP.
def make_instru(csp): return InstruCSP(csp.variables, csp.domains, csp.neighbors, csp.constraints)
csp.ipynb
jo-tez/aima-python
mit
We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their values are the corresponding nodes they are connected to.
neighbors = { 0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4], 1: [12, 12, 14, 14], 2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14], 3: [20, 8, 19, 12, 20, 19, 8, 12], 4: [11, 0, 18, 5, 18, 5, 11, 0], 5: [4, 4], 6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14], 7: [13, 16, 13, 16], 8: [19, 15...
csp.ipynb
jo-tez/aima-python
mit
Now we are ready to create an InstruCSP instance for our problem. We are doing this for a CSP created by MapColoringCSP, which builds on the CSP class. This means that our make_instru function will work perfectly for it.
coloring_problem = MapColoringCSP('RGBY', neighbors) coloring_problem1 = make_instru(coloring_problem)
csp.ipynb
jo-tez/aima-python
mit
CONSTRAINT PROPAGATION Algorithms that solve CSPs have a choice between searching and doing constraint propagation, a specific type of inference. The constraints can be used to reduce the number of legal values for a variable, which in turn can reduce the legal values for some other variable, and so on. <br...
psource(AC3)
csp.ipynb
jo-tez/aima-python
mit
AC3 also employs a helper function revise.
psource(revise)
csp.ipynb
jo-tez/aima-python
mit
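The interplay of the two functions can be sketched compactly over plain dict domains (an illustration of the idea, not the module's code): revise(Xi, Xj) prunes values of Xi that have no supporting value in Xj, and AC3 re-queues Xi's other neighbors whenever Xi's domain shrinks.

```python
def revise(domains, Xi, Xj, constraint):
    revised = False
    for x in domains[Xi][:]:
        if not any(constraint(Xi, x, Xj, y) for y in domains[Xj]):
            domains[Xi].remove(x)  # x has no support in Xj's domain
            revised = True
    return revised

def ac3(domains, neighbors, constraint):
    queue = [(Xi, Xj) for Xi in neighbors for Xj in neighbors[Xi]]
    while queue:
        Xi, Xj = queue.pop(0)
        if revise(domains, Xi, Xj, constraint):
            if not domains[Xi]:
                return False  # a domain was wiped out: inconsistent
            queue.extend((Xk, Xi) for Xk in neighbors[Xi] if Xk != Xj)
    return True

# Hypothetical toy problem: values of A and B must differ by exactly 2.
domains = {'A': [0, 1, 2], 'B': [0, 1, 2]}
neighbors = {'A': ['B'], 'B': ['A']}
print(ac3(domains, neighbors, lambda X, x, Y, y: abs(x - y) == 2))
print(domains)  # 1 is pruned from both domains
```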
AC3 maintains a queue of arcs to consider which initially contains all the arcs in the CSP. An arbitrary arc $(X_i, X_j)$ is popped from the queue and $X_i$ is made arc-consistent with respect to $X_j$. <br> If in doing so, $D_i$ is left unchanged, the algorithm just moves to the next arc, but if the domain $D_i$ is r...
neighbors = parse_neighbors('A: B; B: ') domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4]} constraints = lambda X, x, Y, y: x % 2 == 0 and (x + y) == 4 and y % 2 != 0 removals = []
csp.ipynb
jo-tez/aima-python
mit
We'll now define a CSP object.
csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints) AC3(csp, removals=removals)
csp.ipynb
jo-tez/aima-python
mit
This configuration is inconsistent.
constraints = lambda X, x, Y, y: (x % 2) == 0 and (x + y) == 4 removals = [] csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints) AC3(csp,removals=removals)
csp.ipynb
jo-tez/aima-python
mit
This configuration is consistent. BACKTRACKING SEARCH The main issue with using naive search algorithms to solve a CSP is that they can continue to expand obviously wrong paths; in backtracking search, we check the constraints as we go and deal with only one variable at a time. Backtracking Search is implem...
result = backtracking_search(coloring_problem1) result # A dictionary of assignments.
csp.ipynb
jo-tez/aima-python
mit
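A bare-bones backtracking search looks like the following sketch (the module's version additionally supports variable/value ordering and inference): assign one variable at a time and check constraints against already-assigned neighbors as we go.

```python
def backtrack(assignment, variables, domains, neighbors, constraint):
    if len(assignment) == len(variables):
        return assignment  # every variable assigned consistently
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        # check the new value only against assigned neighbors
        if all(constraint(var, val, n, assignment[n])
               for n in neighbors[var] if n in assignment):
            assignment[var] = val
            result = backtrack(assignment, variables, domains,
                               neighbors, constraint)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None

# Tiny hypothetical 3-coloring instance (not coloring_problem1).
neighbors = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
domains = {v: ['R', 'G', 'B'] for v in neighbors}
print(backtrack({}, list(neighbors), domains, neighbors,
                lambda X, x, Y, y: x != y))
```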
Let us also check the number of assignments made.
coloring_problem1.nassigns
csp.ipynb
jo-tez/aima-python
mit
Now, let us check the total number of assignments and unassignments, which would be the length of our assignment history. We can see it by using the command below.
len(coloring_problem1.assignment_history)
csp.ipynb
jo-tez/aima-python
mit
Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out the methods in the CSP class that help to make this work. The first one is select_unassigned_variable. It takes in, as a...
psource(mrv) psource(num_legal_values) psource(CSP.nconflicts)
csp.ipynb
jo-tez/aima-python
mit
Another ordering related parameter order_domain_values governs the value ordering. Here we select the Least Constraining Value which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it al...
psource(lcv)
csp.ipynb
jo-tez/aima-python
mit
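Least-constraining-value ordering can be sketched like this (illustrative, not the module's `lcv`): sort a variable's values by how many options each one eliminates from its unassigned neighbors, fewest first.

```python
def lcv(var, assignment, domains, neighbors, constraint):
    def rules_out(val):
        # how many neighbor values does choosing val eliminate?
        return sum(1 for n in neighbors[var] if n not in assignment
                   for nval in domains[n]
                   if not constraint(var, val, n, nval))
    return sorted(domains[var], key=rules_out)

neighbors = {'A': ['B'], 'B': ['A']}
domains = {'A': ['R', 'G'], 'B': ['G']}
# 'R' rules out nothing for B; 'G' would eliminate B's only value.
print(lcv('A', {}, domains, neighbors, lambda X, x, Y, y: x != y))
```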
Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in Section 6.3.2 of the book. In short, the idea of inference is to detect possible failure before it occurs and to look ahead so as not to make mistake...
solve_simple = copy.deepcopy(usa) solve_parameters = copy.deepcopy(usa) backtracking_search(solve_simple) backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac) solve_simple.nassigns solve_parameters.nassigns
csp.ipynb
jo-tez/aima-python
mit
TREE CSP SOLVER The tree_csp_solver function (Figure 6.11 in the book) can be used to solve problems whose constraint graph is a tree. Given a CSP, with neighbors forming a tree, it returns an assignment that satisfies the given constraints. The algorithm works as follows: First it finds the topological sort of the tre...
psource(tree_csp_solver)
csp.ipynb
jo-tez/aima-python
mit
We will now use the above function to solve a problem. More specifically, we will solve the problem of coloring Australia's map. We have two colors at our disposal: Red and Blue. As a reminder, this is the graph of Australia: "SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: " Unfortunately, as you can see, the above is not a...
australia_small = MapColoringCSP(list('RB'), 'NT: WA Q; NSW: Q V')
csp.ipynb
jo-tez/aima-python
mit
We will input australia_small to the tree_csp_solver and print the given assignment.
assignment = tree_csp_solver(australia_small) print(assignment)
csp.ipynb
jo-tez/aima-python
mit
WA, Q and V got painted with the same color and NT and NSW got painted with the other. GRAPH COLORING VISUALIZATION Next, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The readers need not concern themselves with the code that immediately follows as it is the usa...
%matplotlib inline import networkx as nx import matplotlib.pyplot as plt import matplotlib import time
csp.ipynb
jo-tez/aima-python
mit
The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which returns such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. The example below will e...
def make_update_step_function(graph, instru_csp): #define a function to draw the graphs def draw_graph(graph): G=nx.Graph(graph) pos = nx.spring_layout(G,k=0.15) return (G, pos) G, pos = draw_graph(graph) def update_step(iteration): # here iteratio...
csp.ipynb
jo-tez/aima-python
mit
Finally let us plot our problem. We first use the function below to obtain a step function.
step_func = make_update_step_function(neighbors, coloring_problem1)
csp.ipynb
jo-tez/aima-python
mit
Next, we set the canvas size.
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
csp.ipynb
jo-tez/aima-python
mit
Finally, our plot using the ipywidget slider and matplotlib. You can move the slider to experiment and see the colors change. It is also possible to move the slider using arrow keys or to jump to a value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you....
import ipywidgets as widgets from IPython.display import display iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assignment_history)-1, step=1, value=0) w=widgets.interactive(step_func,iteration=iteration_slider) display(w) visualize_callback = make_visualize(iteration_slider) visualize_button ...
csp.ipynb
jo-tez/aima-python
mit
N-QUEENS VISUALIZATION Just like the Graph Coloring Problem, we will start with defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similar to the make_update_step_function introduced earlier. It initializes a chess board in the form o...
def label_queen_conflicts(assignment,grid): ''' Mark grid with queens that are under conflict. ''' for col, row in assignment.items(): # check each queen for conflict conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items() if (temp_row == row and temp_col != ...
csp.ipynb
jo-tez/aima-python
mit
Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
twelve_queens_csp = NQueensCSP(12) backtracking_instru_queen = make_instru(twelve_queens_csp) result = backtracking_search(backtracking_instru_queen) backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
csp.ipynb
jo-tez/aima-python
mit
Now, finally, we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to ju...
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0) matplotlib.rcParams['font.family'].append(u'Dejavu Sans') iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assignment_history)-1, step=1, value=0) w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider) display(w) visualize...
csp.ipynb
jo-tez/aima-python
mit
Now let us finally repeat the above steps for min_conflicts solution.
conflicts_instru_queen = make_instru(twelve_queens_csp) result = min_conflicts(conflicts_instru_queen) conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
csp.ipynb
jo-tez/aima-python
mit
This visualization has the same features as the one above; however, it also highlights the conflicts by labeling the conflicted queens with a red background.
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assignment_history)-1, step=1, value=0) w=widgets.interactive(conflicts_step,iteration=iteration_slider) display(w) visualize_callback = make_visualize(iteration_slider) visualize_button = widgets.ToggleButton(description = "Visualize", value ...
csp.ipynb
jo-tez/aima-python
mit
Pivot Tables w/ pandas http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/
YouTubeVideo("ZbrRrXiWBKc", width=400, height=300) !conda install pivottablejs -y df = pd.read_csv("../data/mps.csv", encoding="ISO-8859-1") df.head(10) from pivottablejs import pivot_ui
notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb
AstroHackWeek/AstroHackWeek2016
mit
Enhanced Pandas Dataframe Display
pivot_ui(df) # Province, Party, Average, Age, Heatmap
notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb
AstroHackWeek/AstroHackWeek2016
mit
Keyboard shortcuts For help, ESC + h
# in select mode, shift j/k (to select multiple cells at once) # split cell with ctrl shift - first = 1 second = 2 third = 3
notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb
AstroHackWeek/AstroHackWeek2016
mit
You can also get syntax highlighting if you tell it the language that you're including: ```bash mkdir toc cd toc wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css cd .. jupyter-nbextensi...
%%bash pwd for i in *.ipynb do wc $i done echo echo "break" echo du -h *ipynb def silly_absolute_value_function(xval): """Takes a value and returns the value.""" xval_sq = xval ** 2.0 xval_abs = np.sqrt(xval_sq) return xval_abs silly_absolute_value_function? silly_absolute_value_function?? # s...
notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb
AstroHackWeek/AstroHackWeek2016
mit
Stop here for now R pyRserve rpy2
import numpy as np # !conda install -c r rpy2 -y import rpy2 %load_ext rpy2.ipython X = np.array([0,1,2,3,4]) Y = np.array([3,5,4,6,7]) %%R? %%R -i X,Y -o XYcoef XYlm = lm(Y~X) XYcoef = coef(XYlm) print(summary(XYlm)) par(mfrow=c(2,2)) plot(XYlm) XYcoef
notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb
AstroHackWeek/AstroHackWeek2016
mit
Table 4 - Low Resolution Analysis
tbl4 = ascii.read("http://iopscience.iop.org/0004-637X/794/1/36/suppdata/apj500669t4_mrt.txt") tbl4[0:4] Na_mask = ((tbl4["f_EWNaI"] == "Y") | (tbl4["f_EWNaI"] == "N")) print("There are {} sources with Na I line detections out of {} sources in the catalog".format(Na_mask.sum(), len(tbl4))) tbl4_late = tbl4[['Name', '...
notebooks/Hernandez2014.ipynb
BrownDwarf/ApJdataFrames
mit
Vertex SDK: AutoML training image object detection model for online prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb"> <img src="https://cloud.goog...
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters: display_name: The human readable name for the T...
dag = aip.AutoMLImageTrainingJob( display_name="salads_" + TIMESTAMP, prediction_type="object_detection", multi_label=False, model_type="CLOUD", base_model=None, ) print(dag)
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for tra...
model = dag.run( dataset=dataset, model_display_name="salads_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, budget_milli_node_hours=20000, disable_early_stopping=False, )
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
endpoint = model.deploy()
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send an online prediction request Send an online prediction request to your deployed model. Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
test_items = !gsutil cat $IMPORT_FILE | head -n1 cols = str(test_items[0]).split(",") if len(cols) == 11: test_item = str(cols[1]) test_label = str(cols[2]) else: test_item = str(cols[0]) test_label = str(cols[1]) print(test_item, test_label)
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the prediction Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource. Request Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.GFile(). T...
import base64 import tensorflow as tf with tf.io.gfile.GFile(test_item, "rb") as f: content = f.read() # The format of each instance should conform to the deployed model's prediction input schema. instances = [{"content": base64.b64encode(content).decode("utf-8")}] prediction = endpoint.predict(instances=instan...
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Undeploy the model When you are done making predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
endpoint.undeploy_all()
notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Collapse all posts of the same job title into a single document
by_job_title = jobs.groupby('title') job_title_df = by_job_title.agg({'job_id': lambda x: ','.join(x), 'doc': lambda x: 'next_doc'.join(x)}) job_title_df = job_title_df.add_prefix('agg_').reset_index() job_title_df.head() n_job_title = by_job_title.ngroups print('# job titles: %d' %n_job_title) reload(clust...
.ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Concat matrices doc_unigram, doc_bigram and doc_trigram to get occurrences of all skills:
from scipy.sparse import hstack jobtitle_skill = hstack([doc_unigram, doc_bigram, doc_trigram]) with(open(SKILL_DIR + 'jobtitle_skill.mtx', 'w')) as f: mmwrite(f, jobtitle_skill) jobtitle_skill.shape jobtitle_skill = jobtitle_skill.toarray()
.ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
Most popular skills by job title
job_title_df.head(1) idx_of_top_skill = np.apply_along_axis(np.argmax, 1, jobtitle_skill) # skill_df = skills skills = skill_df['skill'] top_skill_by_job_title = pd.DataFrame({'job_title': job_titles, 'idx_of_top_skill': idx_of_top_skill}) top_skill_by_job_title['top_skill'] = top_skill_by_job_title['idx_of_top_skill...
.ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb
musketeer191/job_analytics
gpl-3.0
...this is where I learned to not use pip install with scikit-learn... To upgrade scikit-learn: conda update scikit-learn
import sklearn.cluster #from sklearn.cluster import KMeans silAverage = [0.4227, 0.33299, 0.354, 0.3768, 0.3362, 0.3014, 0.3041, 0.307, 0.313, 0.325, 0.3109, 0.2999, 0.293, 0.289, 0.2938, 0.29, 0.288, 0.3, 0.287] import matplotlib.pyplot as plt %matplotlib inline
.ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb
halexand/NB_Distribution
mit
OK...can I get a simple scatter plot?
plt.scatter(range(0,len(silAverage)), silAverage) plt.grid() #put on a grid plt.xlim(-1,20) #get list of column names in pandas data frame list(my_dataframe.columns.values) for i in range(0,len(ut)): if i == 10: break p = ut.iloc[i,:] n = p.name if n[0] == 'R': #do the plotting, ...
.ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb
halexand/NB_Distribution
mit
Write a function to match RI number and cNumbers
def findRInumber(dataIn, KEGGin): # find possible RI numbers for a given KEGG number. for i, KEGG in enumerate(dataIn['KEGG']): if KEGG == KEGGin: t = dataIn.index[i] print(t) # For example: this will give back one row, C18028 will be multiple m = findRInumber(forRelatedness, 'C00031...
.ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb
halexand/NB_Distribution
mit
Also note that this exercise assumes you've already populated a malicious/ and a benign/ directory with samples that you consider malicious and benign, respectively. How many samples? In this notebook, I'm using 50K of each for demonstration purposes. Sadly, you must bring your own. If you don't populate these subd...
from classifier import common # this will take a LONG time the first time you run it (and cache features to disk for next time) # it's also chatty. Parts of feature extraction require LIEF, and LIEF is quite chatty. # the output you see below is *after* I've already run feature extraction, so that # X and sample_in...
BSidesLV -- your model isn't that special -- (1) MLP.ipynb
endgameinc/youarespecial
mit
Multilayer perceptron We'll use the features we extracted to train a multilayer perceptron (MLP). An MLP is an artificial neural network with at least one hidden layer. Is a multilayer perceptron "deep learning"? Well, it's a matter of semantics, but "deep learning" may imply that the features and model are optimize...
# StandardScaling the data can be important to multilayer perceptron from sklearn.preprocessing import StandardScaler scaler = StandardScaler().fit(X_train) # Note that we're using scaling info from X_train to transform both X_train = scaler.transform(X_train) # scale for multilayer perceptron X_test = scaler.transfor...
BSidesLV -- your model isn't that special -- (1) MLP.ipynb
endgameinc/youarespecial
mit
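To make "at least one hidden layer" concrete, here is a pure-Python forward pass through a tiny 2-3-1 MLP with toy weights (a sketch of the architecture only, not the notebook's trained model):

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    # hidden layer: ReLU(W1 @ x + b1)
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # output layer: sigmoid(W2 @ h + b2) -> a probability in (0, 1)
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for a 2-input, 3-hidden-unit, 1-output network.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [0.7, -0.5, 0.2]
b2 = 0.05
print(mlp_forward([1.0, 2.0], W1, b1, W2, b2))
```

In practice the weights come from training (e.g. by backpropagation); the point here is only the layered structure of the computation.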
Sanity check: random forest classifier Alright. Is that good? Let's compare to another model. We'll reach for the simple and reliable random forest classifier. One nice thing about tree-based classifiers like a random forest classifier is that they are invariant to linear scaling and shifting of the dataset (the mode...
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# you can increase performance by increasing n_estimators, and removing the restriction on max_depth
# I've kept those in there because I want a quick-and-dirty look at how the MLP above
rf = RandomForestClassifier(
    n_estimators=40, ...
BSidesLV -- your model isn't that special -- (1) MLP.ipynb
endgameinc/youarespecial
mit
The file object is already implemented in Python, just like thousands of other classes, so we do not have to implement reading and writing files ourselves. Therefore, let's have a look at defining our own classes. A class can be defined using the <span style="color: green">class</span> statement followed by a ...
# define class
class Car:
    pass

# create two instances
vw = Car()
audi = Car()
print('vw: ', type(vw), 'audi: ', type(audi))
print('vw: ', vw.__class__, 'audi: ', audi.__class__)
print('vw: ', str(vw), 'audi: ', str(audi))
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Methods The shown class <span style='color: blue'>Car</span> is not really useful. But we can define functions inside the class namespace. These functions are called methods. To be correct here: they are called instance methods and should not be confused with class methods, which will not be covered here. Although, we...
# redefine class
class Car:
    def __init__(self):
        self.speed = 0
        self.max_speed = 100

# create two instances
vw = Car()
audi = Car()
print('vw: speed: %d max speed: %d' % (vw.speed, vw.max_speed))
print('audi: speed: %d max speed: %d' % (audi.speed, audi.max_speed))
audi.max_speed = 250
audi.speed =...
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
This is better, but still somehow wrong. A car should not be allowed to drive faster than the maximum possible speed. A Volkswagen might not be the best car in the world, but it can do definitely better than negative speeds. A better approach would be to define some methods for accelerating and decelerating the car.<br...
# redefine class
class Car:
    def __init__(self):
        self.speed = 0
        self.max_speed = 100

    def accelerate(self, amount):
        # never exceed the maximum speed
        self.speed = min(self.speed + amount, self.max_speed)

    def decelerate(self, amount):
        # never fall below zero
        self.speed = max(self.speed - amount, 0)

vw = Car()
print(vw.speed)
vw.accelerate(60)
print(vw.speed)
vw.accelerate(45)
print(vw.speed)
vw.decelerate(10)
print(vw.speed)
vw.decelerate(2000)
print(vw.speed)
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Magic Methods Maybe you recognized the two underscores in the __init__ method. A defined set of function names following this name pattern are called magic methods in Python, because they influence the object behaviour using magic. Beside __init__, two other very important magic methods are __repr__ and __str__. ...
print('str(vw) old:', str(vw))

# redefine class with a __str__ method (the message format is illustrative)
class Car:
    def __init__(self):
        self.speed = 0
        self.max_speed = 100

    def accelerate(self, amount):
        self.speed = min(self.speed + amount, self.max_speed)

    def __str__(self):
        return 'Car at %d km/h' % self.speed

vw = Car()
vw.accelerate(45)
print('str(vw) new:', str(vw))
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Using these functions, almost any behaviour of the <span style='color: blue'>Car</span> instance can be influenced. Imagine you are using it in a conditional statement and test two instances for equality or if one instance is bigger than the other one.<br> Are these two variables equal if they reference exactly the s...
class Car:
    def __init__(self, name):
        self.name = name
        self.speed = 0

    def __eq__(self, other):
        # two cars are considered equal if they are the same model
        return isinstance(other, Car) and self.name == other.name

vw = Car('vw')
vw2 = Car('vw')
audi = Car('audi')
print('vw equals vw2? ', vw == vw2)
print('vw equals vw? ', vw == vw)
print('vw equals audi? ', vw == audi)
print('is vw exactly 9? ', vw == 9)
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
private methods and attributes The <span style='color: blue'>Car</span> class has two methods which are meant to be used for manipulating the actual speed. Nevertheless, one could directly assign new values, even of other types than integers, to the speed and max_speed attribute. Thus, one would call these attributes p...
vw = Car('audi')
print('Speed: ', vw.speed)
vw.speed = 900
print('Speed: ', vw.speed)
vw.speed = -11023048282
print('Speed: ', vw.speed)
vw.speed = Car('vw')
print('Speed: ', vw.speed)
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Consequently, we want to protect this attribute from access from outside the class itself. Other languages use the keyword <span style="color: blue">private</span> to achieve this. Here, Python is not very explicit, as it does not define a keyword or statement for this. You'll have to prefix your attribute or method na...
class Car:
    def __init__(self, name):
        self.name = name
        self.__speed = 0  # double underscore: name-mangled, "private"

    def accelerate(self, amount):
        self.__speed = min(self.__speed + amount, 100)

    def decelerate(self, amount):
        self.__speed = max(self.__speed - amount, 0)

    def getSpeed(self):
        return self.__speed

    def __str__(self):
        # illustrative message format
        return '%s at %d km/h' % (self.name, self.__speed)

vw = Car('vw')
vw.accelerate(45)
print(vw)
vw.decelerate(20)
print(vw)
print(vw.getSpeed())
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
class attributes All attributes and methods defined so far have one thing in common. They are bound to the instance. That means you can only access or invoke them using a reference to this instance. In most cases this is exactly what you want and would expect, as altering one instance won't influence other class instan...
class Car:
    count = 0        # class attribute, shared by all instances
    max_speed = 100  # class attribute used as a default

    def __init__(self, name):
        self.name = name
        Car.count += 1

vw = Car('vw')
print(vw.count)
audi = Car('audi')
print(audi.count)
bmw = Car('bmw')
print('BMW:', bmw.max_speed)
print('VW:', vw.max_speed)
print('Audi:', audi.max_speed)
print(vw.count)
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Inheritance As a proper OOP language, Python also implements inheritance. This means that one can define a class which inherits the attributes and methods from another class. You can put other classes into the parenthesis of your class signature and the new class will inherit from these classes. One would call this ...
class VW(Car):
    def __init__(self):
        super(VW, self).__init__('vw')

class Audi(Car):
    def __init__(self):
        super(Audi, self).__init__('audi')

vw = VW()
audi = Audi()
vw.accelerate(40)
audi.accelerate(400)
print(vw)
print(audi)
print(vw == audi)
print(isinstance(vw, VW))
print(isinstance(v...
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Property Sometimes it would be really handy if an attribute could be altered or calculated before returning it to the user. Or even better: if one could make a function behave like an attribute. That's exactly what a property does. These are methods with no argument other than self, and they can therefore be executed without ...
class MyInt(int):
    def as_string(self):
        return 'The value is %s' % self

i = MyInt(5)
print(i.as_string())

class MyInt(int):
    @property
    def as_string(self):
        return 'The value is %s' % self

x = MyInt(7)
print(x.as_string)

class Car:
    pass

class VW(Car):
    def __init__(self):
        ...
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
Property.setter Obviously, the protected __speed attribute cannot be changed and the speed property is a function and thus cannot be set. In the example of the Car, this absolutely makes sense, but nevertheless, setting a property is also possible. This time the property function is defined again accepting an addition...
class Model(object):
    def __init__(self, name):
        self.__model = self.check_model(name)

    def check_model(self, name):
        if name.lower() not in ('vw', 'audi'):
            return 'VW'
        else:
            return name.upper()

    @property
    def model(self):
        return self.__mo...
felis_python1/lectures/06_Classes.ipynb
mmaelicke/felis_python1
mit
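The full getter/setter round trip can be condensed into one small class. This sketch (the `Speedometer` name and the clamping rule are illustrative) shows a property whose setter validates the value before storing it:

```python
class Speedometer:
    def __init__(self):
        self.__speed = 0

    @property
    def speed(self):
        return self.__speed

    @speed.setter
    def speed(self, value):
        # validate before storing: clamp into [0, 100]
        self.__speed = max(0, min(int(value), 100))

s = Speedometer()
s.speed = 250
print(s.speed)  # clamped to 100
s.speed = -5
print(s.speed)  # clamped to 0
```

From the outside, `s.speed` reads and writes like a plain attribute, while the class keeps full control over what actually gets stored.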
Grabbing Current Data data.current() data.current() can be used to retrieve the most recent value of a given field(s) for a given asset(s). data.current() requires two arguments: the asset or list of assets, and the field or list of fields being queried. Possible fields include 'price', 'open', 'high', 'low', 'close', ...
def initialize(context):
    # Reference to Tech Stocks
    context.techies = [sid(16841), sid(24), sid(1900)]

def handle_data(context, data):
    # Position our portfolio optimization!
    tech_close = data.current(context.techies, 'close')
    print(type(tech_close))
    # P...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
Note! You can use data.is_stale(sid(#)) to check if the results of data.current() were generated at the current bar (the timeframe) or were forward filled from a previous time. Checking for trading data.can_trade() data.can_trade() is used to determine if an asset(s) is currently listed on a supported exchange and can...
def initialize(context):
    # Reference to AMZN
    context.amzn = sid(16841)

def handle_data(context, data):
    # This ensures we don't hit an exception!
    if data.can_trade(sid(16841)):
        order_target_percent(context.amzn, 1.0)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
Checking Historical Data When your algorithm calls data.history on equities, the returned data is adjusted for splits, mergers, and dividends as of the current simulation date. In other words, when your algorithm asks for a historical window of prices, and there is a split in the middle of that window, the first part o...
def initialize(context):
    # AAPL, MSFT, and SPY
    context.assets = [sid(24), sid(1900), sid(16841)]

def before_trading_start(context, data):
    price_history = data.history(context.assets,
                                 fields="price",
                                 bar_count=5,
                                 ...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
The bar_count field specifies the number of days or minutes to include in the pandas DataFrame returned by the history function. This parameter accepts only integer values. The frequency field specifies how often the data is sampled: daily or minutely. Acceptable inputs are ‘1d’ or ‘1m’. For other frequencies, use the ...
def initialize(context):
    context.appl = sid(49051)

    # At beginning of trading week
    # At Market Open, set 10% of portfolio to be apple
    schedule_function(open_positions, date_rules.week_start(), time_rules.market_open())

    # At end of trading week
    #...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
Portfolio Information You can get portfolio information and record it!
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)

    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), ti...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
Slippage and Commission Slippage Slippage is where a simulation estimates the impact of orders on the fill rate and execution price they receive. When an order is placed for a trade, the market is affected. Buy orders drive prices up, and sell orders drive prices down; this is generally referred to as the price_impact o...
set_slippage(slippage.VolumeShareSlippage(volume_limit = 0.025, price_impact = 0.1))
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
Using the default model, if an order of 60 shares is placed for a given stock, then 1000 shares of that stock trade in each of the next several minutes and the volume_limit is 0.025, then our trade order will be split into three orders (25 shares, 25 shares, and 10 shares) that execute over the next 3 minutes. At the e...
set_commission(commission.PerShare(cost = 0.0075, min_trade_cost = 1))
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb
arcyfelix/Courses
apache-2.0
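The split described above is easy to verify: 2.5% of 1,000 shares is 25 shares per bar, so a 60-share order fills as 25 + 25 + 10. A sketch of that arithmetic (not the actual Quantopian fill logic, which also models price impact):

```python
def split_order(order_size, bar_volume, volume_limit=0.025):
    """Split an order into per-bar fills capped at volume_limit * bar_volume."""
    per_bar = int(bar_volume * volume_limit)
    fills = []
    remaining = order_size
    while remaining > 0:
        fill = min(per_bar, remaining)
        fills.append(fill)
        remaining -= fill
    return fills

print(split_order(60, 1000))  # [25, 25, 10]
```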
The data were taken from the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/datasets/banknote+authentication. The dataset was constructed by applying a wavelet transform to grayscale images of forged and genuine banknotes.
df = pd.read_csv('data_banknote_authentication.txt', sep=",", decimal=".", header=None,
                 names=["variance", "skewness", "curtosis", "entropy", "class"])
y = df.xs("class", axis=1)
X = df.drop("class", axis=1)
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
The data under study contain the following number of points:
print(len(X))
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
We split the loaded data into two samples: a training sample ($\text{_train}$) and a test sample ($\text{_test}$), which will not be used during training. The split between training and test is in the ratio 2:3.
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    X, y, test_size=0.60, random_state=random_state)
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
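The 2:3 split can be sketched without sklearn: shuffle the indices with a fixed seed and cut at 40% (a minimal illustration of the idea, not the library's exact algorithm; the 1000-sample size is a toy value):

```python
import random

def simple_split(n, test_size=0.60, seed=0):
    """Return (train_idx, test_idx) index lists for an n-sample dataset."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    cut = int(n * (1 - test_size))
    return idx[:cut], idx[cut:]

train_idx, test_idx = simple_split(1000)
print(len(train_idx), len(test_idx))  # 400 600
```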
The training sample contains this many observations:
print(len(X_train))
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Consider the SVM in the linearly non-separable case with an $L^1$ norm on the slacks $(\xi_i)_{i=1}^n$: $$ \frac{1}{2} \|\beta\|^2 + C \sum_{i=1}^n \xi_i \to \min_{\beta, \beta_0, (\xi_i)_{i=1}^n} \,, $$ subject to the constraints: for every $i=1,\ldots,n$ we require $\xi_i \geq 0$ and $$ \bigl( \beta' \phi(x_i) + \beta_0 \bigr) y_i \geq 1 - \xi_i \...
svm_clf_ = svm.SVC( probability = True, max_iter = 100000 )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
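At the optimum each slack equals $\xi_i = \max(0, 1 - y_i(\beta'\phi(x_i) + \beta_0))$, so the criterion is hinge loss plus an $L^2$ penalty. A pure-Python sketch with toy 1-D data (the data and parameter values are illustrative):

```python
def soft_margin_objective(beta, beta0, xs, ys, C):
    """0.5 * ||beta||^2 + C * sum of hinge slacks, for 1-D inputs."""
    slacks = [max(0.0, 1.0 - y * (beta * x + beta0)) for x, y in zip(xs, ys)]
    return 0.5 * beta ** 2 + C * sum(slacks)

# toy data: positives at x >= 1, negatives at x <= -1
xs = [2.0, 1.0, -1.0, -2.0]
ys = [1, 1, -1, -1]
print(soft_margin_objective(1.0, 0.0, xs, ys, C=1.0))  # all points on/outside the margin -> 0.5
```

With beta = 1, beta0 = 0 every point satisfies its margin constraint, so all slacks vanish and only the penalty term 0.5 remains.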
We will tune the kernel type (and hence the feature maps $\phi:\mathcal{X}\to\mathcal{H}$) and the regularization parameter $C$ by exhaustive grid search with $5$-fold cross-validation on the training sample $\text{X_train}$. We consider three kernels: the Gaussian kernel $$ K( x, y ) = \exp\bigl( -\frac{1}{2...
## Kernel type: Gaussian kernel
grid_rbf_ = grid_search.GridSearchCV(svm_clf_, param_grid={
    ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
    "C": np.logspace(-4, 1, num=6),
    "kernel": ["rbf"],
    ## "Concentration" parameter of the Gaussian kernel
    "gamma": np.logspace(-2, 2, num=...
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
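The Gaussian kernel being tuned here is easy to compute by hand. A sketch in the sklearn parameterization, where `gamma` plays the concentration role searched over in the grid:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))       # 1.0 at zero distance
print(rbf_kernel([0.0, 0.0], [1.0, 1.0], 0.5))  # exp(-1)
```

Larger `gamma` makes the kernel decay faster with distance, which is why the grid sweeps it over several orders of magnitude.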
the polynomial kernel $$ K( x, y ) = \bigl( 1 + \langle x, y\rangle\bigr)^p \,, $$
## Kernel type: polynomial kernel
grid_poly_ = grid_search.GridSearchCV(
    svm.SVC(probability=True, max_iter=20000, kernel="poly"),
    param_grid={
        ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
        "C": np.logspace(-4, 1, num=6),
        "kernel": ["poly"],
        ## Degree of the polynomial...
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
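The polynomial kernel from the formula above can be checked on a small example (the vectors are arbitrary):

```python
def poly_kernel(x, y, p=2):
    """K(x, y) = (1 + <x, y>)^p."""
    dot = sum(a * b for a, b in zip(x, y))
    return (1 + dot) ** p

print(poly_kernel([1.0, 2.0], [3.0, 1.0], p=2))  # (1 + 5)^2 = 36
```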