Flag photos for deletion by clicking 'Delete', then click 'Next Batch' to delete the flagged photos and keep the rest in that row. ImageCleaner will show you a new row of images until there are no more to show. In this case, the widget will keep showing you images until there are none left from `top_losses`.

You can also find duplicates in your dataset and delete them! To do this, run `.from_similars` to get the ids of the potential duplicates and then run ImageCleaner with `duplicates=True`. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.

Make sure to recreate the databunch and `learn_cln` from the cleaned.csv file. Otherwise the file will be overwritten from scratch, losing all the results from cleaning the data with `from_toplosses`.
ds, idxs = DatasetFormatter().from_similars(learn_cln)
ImageCleaner(ds, idxs, path, duplicates=True)
zh-nbs/Lesson2_download.ipynb
fastai/course-v3
apache-2.0
Remember to recreate your ImageDataBunch from your cleaned.csv to include the changes you made in your data!

Putting your model in production

First things first, let's export the content of our Learner object for production:
learn.export()
This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights, but also some metadata like the classes or the transforms/normalization used).

You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real time). If you don't have a GPU, that happens automatically. You can test your model on CPU like so:
defaults.device = torch.device('cpu')
img = open_image(path/'black'/'00000021.jpg')
img
We create our Learner in the production environment like this; just make sure that path contains the file 'export.pkl' from before.
learn = load_learner(path)
pred_class, pred_idx, outputs = learn.predict(img)
pred_class
So you might create a route something like this (thanks to Simon Willison for the structure of this code):

```python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
    bytes = await get_bytes(request.query_params["url"])
    img = open_image(BytesIO(bytes))
    _, _, losses = learner.predict(img)
    return JSONResponse({
        "predictions": sorted(
            zip(learner.data.classes, map(float, losses)),
            key=lambda p: p[1],
            reverse=True
        )
    })
```

(This example is for the Starlette web app toolkit.)

Things that can go wrong

- Most of the time things will train fine with the defaults
- There's not much you really need to tune (despite what you've heard!)
- The most likely culprits are:
  - Learning rate
  - Number of epochs

Learning rate (LR) too high
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1, max_lr=0.5)
Learning rate (LR) too low
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
Previously we had this result:

Total time: 00:57
epoch  train_loss  valid_loss  error_rate
1      1.030236    0.179226    0.028369   (00:14)
2      0.561508    0.055464    0.014184   (00:13)
3      0.396103    0.053801    0.014184   (00:13)
4      0.316883    0.050197    0.021277   (00:15)
learn.fit_one_cycle(5, max_lr=1e-5)
learn.recorder.plot_losses()
As well as taking a really long time, the model gets too many looks at each image, so it may overfit.

Too few epochs
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
Too many epochs
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
        ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1,
                               max_lighting=0, max_warp=0),
        size=224, num_workers=4).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0)
learn.unfreeze()
learn.fit_one_cycle(40, slice(1e-6, 1e-4))
Estimation
O.par['n_cond'] = 3  # For estimation, the number of conditionals never needs to be higher than the number of actual conditional hard and/or soft data
O.par['n_real'] = 1
O.par['do_estimation'] = 1
O.par['do_entropy'] = 1  # when using mps_genesim, we need to compute a conditional
O.par['n_max_cpdf_count'] = 1000000  # We need to be able to compute the conditional
O.par['n_max_ite'] = 1000000
O.delete_local_files()  # make sure no old data are floating around
O.remove_gslib_after_simulation = 1
O.run()
print('Time used to perform MPS estimation: %4.1fs' % (O.time))

# Get P(m_i==1)
P1 = O.est[1][:, :, 0].T
# Get H(m_i)
H = O.Hcond[:, :, 0].transpose()

plt.figure()
plt.subplot(121)
plt.imshow(P1)
plt.colorbar()
plt.title('P(m_i=1)')
plt.subplot(122)
plt.imshow(H)
plt.colorbar()
plt.title('Conditional Entropy')
plt.show()
scikit-mps/examples/ex_mpslib_estimation.ipynb
ergosimulation/mpslib
lgpl-3.0
Simulation
O.par['n_real'] = 30
O.par['n_cond'] = 25  # For simulation, one typically needs a higher n_cond to obtain realizations with reasonable pattern reproduction
O.par['do_estimation'] = 0
O.par['max_cpdf_count'] = 1  # Direct sampling mode
O.par['n_max_ite'] = 1000  # Typically not set to a very high number, to avoid scanning the whole TI
O.par['n_real'] = 100  # overrides the value set above
O.delete_local_files()  # make sure no old data are floating around
O.remove_gslib_after_simulation = 1
O.run_parallel()
print('Time used to perform MPS simulation: %4.1fs' % (O.time))
O.plot_etype()
1. What is the output of the commands above?

Now each time we call a function that's in a library, we use the syntax:

```python
LibraryName.FunctionName
```

Adding the library name with a `.` before the function name tells Python where to find the function. In the example above, we have imported Pandas as pd. This means we don't have to type out pandas each time we call a Pandas function.

We will begin by locating and reading our data, which are in a table format. We can use Pandas' read_table function to pull the file directly into a DataFrame.

Getting data into the notebook

What's a DataFrame? A DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers, floating point values, factors and more) in columns. It is similar to a spreadsheet, an SQL table, or the data.frame in R. A DataFrame always has an index (0-based since it's Python). An index refers to the position of an element in the data structure and is super useful for some of the pandas methods.

pandas can also take apart DataFrames into columns by making a specialized data structure called a Series (a 1D structure that still has an index). This allows many pandas methods to work on one column at a time, or on the whole DataFrame column by column. pandas can also perform what are called vectorized operations, which means the iteration is already built in: you don't have to loop through values in a list, you can just call a method on a Series. But enough of that for now, let's get to some data...

Load data

The commands below load the data directly from the GitHub repository where the data is stored. This particular data set is tab separated since the columns in the data set are separated by a TAB. We need to tell the read_table function in Pandas that this is the case with the sep = '\t' argument.
(this \t for tab is kind of like the \n character that we have seen for a new line in a text file)

The fact that this data is tab separated, as opposed to the more common comma separated (.csv) data, is just a quirk of how we came by the data; databases and instruments often have complex or even proprietary data formats, so get used to it.

Run the code below to bring in the data and assign it to the identifier gapminder.
# url where data is stored
url = "https://raw.githubusercontent.com/Reproducible-Science-Curriculum/data-exploration-RR-Jupyter/master/gapminderDataFiveYear_superDirty.txt"

# assigns the identifier 'gapminder' to the entire dataset
gapminder = pd.read_table(url, sep = "\t")
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
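As a minimal sketch of what the sep='\t' argument does, here is a toy example using a made-up in-memory table (not the gapminder data):

```python
import io

import pandas as pd

# a tiny tab-separated table held in a string
tsv = "name\tvalue\nA\t1\nB\t2\n"

# read_table parses it into a DataFrame; sep='\t' says columns are TAB-separated
df = pd.read_table(io.StringIO(tsv), sep="\t")
print(df.shape)          # (2, 2)
print(list(df.columns))  # ['name', 'value']
```

The same call works with a URL or a file path in place of the StringIO object, which is exactly what the gapminder cell below does.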
2. In the command above, why did we use pd.read_table, not just read_table or pandas.read_table?

Now we have a DataFrame called gapminder with all of our data in it! One of the first things we do after importing data is to have a look and make sure it looks like what we think it should. In this case, this is your first look, so type the name of the DataFrame and run the code cell below to see what happens...
## DataFrame name here
There are usually too many rows to print to the screen, and you don't really want to find out by printing them all and regretting it. By default, when you type the name of the DataFrame and run a cell, pandas knows not to print the whole thing if there are a lot of rows. Instead, you will see the first and last few rows with dots in between.

A neater way to look at a preview of the dataset is the head() method. Calling DataFrame.head() will display the first 5 rows of the data (this is also an example of the "dot" notation: we want the head of gapminder, so we write gapminder.head()). You can specify how many rows you want to see as an argument, like DataFrame.head(10). The tail() method does the same with the last rows of the DataFrame.

Use these methods below to get an idea of what the gapminder DataFrame looks like.
## head and tail methods
3. Make a few observations about what you see in the data so far (you don't have to go too deep since we are going to use pandas to look next)

Assess structure and cleanliness

How many rows and columns are in the data? We often want to know how many rows and columns are in the data -- what is called the "shape" of the DataFrame. Pandas has a convenient way of getting that information: the shape attribute, accessed as DataFrame.shape (using DataFrame as a generic name for any DataFrame). This returns a tuple (values separated by commas) representing the dimensions of the DataFrame (rows, columns).

Write code to use shape to get the shape of the gapminder DataFrame:
## shape
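As a tiny illustration on an invented DataFrame (so as not to spoil the exercise):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# shape is an attribute, not a method -- no parentheses; it returns (rows, columns)
print(df.shape)  # (3, 2)
```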
4. How many rows and columns are there?

The info() method gives a few useful pieces of information quickly, including the shape of the DataFrame, the variable type of each column, and the amount of memory used.

Run the code in the cell below.
# get info
gapminder.info()
There are several problems with this data set as it is. BTW, this is super common; data nerd slang for fundamentally useful data with issues is "dirty data", and the process for un-dirtying it is called "data cleaning". (Believe it or not, tidy data is yet again a different thing; we'll get to that later.) The first step in data cleaning is identifying the problems. The info above has already revealed one major problem and one minor/annoying problem. Let's investigate!

5. What types of variables are in the gapminder DataFrame and what values can they take? (region is a little tricky)

6. Some of your data types should seem odd to you (hint - party like it's 1999.9?). Which ones and why?

7. Look at the number of entries in the entire DataFrame at the top (RangeIndex) and compare that to each column. Explain what you think is happening.

Let's keep looking

There are other fast, easy, and informative ways to get a sense of what your data might look like and any issues it has. The describe() method will take the numeric columns and give a summary of their values. This is useful for getting a sense of the ranges of values and seeing if there are any unusual or suspicious numbers.

Use the describe() method on the gapminder DataFrame in the cell below.
## describe
Uh-oh, what happened to the percentiles? Also notice that you get an error, but it still gives you some output. The DataFrame method describe() just blindly looks at all numeric variables. First, we wouldn't actually want to take the mean year. Additionally, we obtain 'NaN' (not a number) values for our quartiles. This suggests we might have missing data, which we can (and will) deal with shortly when we begin to clean our data.

For now, let's pull out only the columns that are truly continuous numbers (i.e. ignore the description for 'year'). This is a preview of selecting columns of the data; we'll talk more about how to do it later in the lesson, but many methods that work on DataFrames work in this way if you use DataFrame.method(['column_name']) -- the brackets should remind you of indexing a list or a string, and 'column_name' needs to be in quotes since it is itself a string.

Run the code in the cell below.
# use describe but only on columns with continuous values
gapminder[['pop', 'life Exp', 'gdpPercap']].describe()
8. We haven't really nailed this problem down yet. Reflect (or look) back on your answer to Question #7 and think about what might be happening (don't worry too much about the mathematical specifics).

Let's get after it. The method value_counts() gives you a quick idea of what kinds of names are in your categorical data, such as strings. In this case our categorical data is in the region column and represents the names of the regions where the data came from. Important info: the data set covers 12 years, so each region should appear 12 times.

Use value_counts() on gapminder to see if all regions have 12 rows/entries as expected.
## use value_counts to find out how many times each unique region occurs
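To see why value_counts() is so good at surfacing problems, here is a toy Series (invented values, not from gapminder) where stray whitespace and capitalization split what should be a single category:

```python
import pandas as pd

s = pd.Series(["africa_congo", "africa_congo", " africa_congo", "Africa_congo"])
counts = s.value_counts()

# the leading-space and capitalized variants are counted as separate "regions"
print(counts["africa_congo"])  # 2
print(len(counts))             # 3 distinct names instead of 1
```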
Uh-oh! The table reveals several problems.

9. Describe and explain the (several) problems that you see.

Data cleaning

Handling Missing Data

Missing data is an important issue to handle. As we've seen, ignoring it doesn't make it go away; in our example, it has been giving us problems in our describe() results -- remember the NaN values? There are different ways of dealing with missing data; all of them have advantages and disadvantages, so choose carefully and with good reasons!
* analyze only the available data (i.e. ignore the missing data)
* replace the missing data with replacement values and treat these as though they were observed (danger!)
* replace the missing data and account for the fact that these values were imputed with uncertainty (e.g. create a new boolean variable as a flag so you know that these values were not actually observed)
* use statistical models to allow for missing data -- make assumptions about their relationships with the available data as necessary

For our purposes with the dirty gapminder data set, we know our missing data is excess (and unnecessary), and we are going to choose to analyze only the available data. To do this, we will simply remove rows with missing values.

10. Wait, why do we think it's extra data? Justify! Also include from what regions you expect to lose observations.

In large tabular data that is used for analysis, missing data is usually coded NA (stands for Not Available and various other things), although there are other possibilities such as NaN (as we saw above) if a function or import method expects a number. NA can mean several things under the hood, but using NA for missing values in your data is your best bet, and most of the methods will expect it (and even have it in their name).

Let's find out how many NAs there are. We are going to chain 2 steps together to determine the number of NA/null values in the gapminder DataFrame.
+ first, isnull() returns a boolean for each cell in the DataFrame - True if the value in the cell is NA, False if it is not.
+ then, sum() adds up the values in each column.

"Hold on", you say, "you can't add booleans!" Oh yes you can! True == 1 and False == 0 in Python, so if you sum() the results of isnull() you get the number of NA/null values in each column -- awesome!
# isnull results in T/F for each cell,
# sum adds them up by column since F=0 and T=1, so the total = # of NAs
gapminder.isnull().sum()
Yikes! There are NA values in each column except region. Removing NAs from a DataFrame is incredibly easy to do because pandas allows you to either remove all instances with NA/null data or replace them with a particular value. (Sometimes too easy, so make sure you are careful!) The method df = df.dropna() (here df is short for DataFrame) drops rows in which any column has NA/null data. df = df.fillna(value) replaces all NA/null data with the argument value. You have to use the assignment to the same (or a different) identifier since dropna() does not work "in place": if you don't assign the result to something, it just prints and nothing happens to the actual DataFrame.

Use dropna() to remove the NAs from the gapminder DataFrame.
## before you rip into removing, it's a good idea to know how many rows you have
## you know a few ways to get this info, write some code here to get # of rows

## now use dropna()

## now get the number of rows again

## use isnull() again to confirm
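On a made-up three-row DataFrame (invented values), the two simplest approaches look like this:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": ["x", "y", None]})

dropped = df.dropna()  # keeps only rows with no missing values
filled = df.fillna(0)  # replaces every NA/null with 0 instead

print(len(df), len(dropped))        # 3 1
print(filled.isnull().sum().sum())  # 0
```

Note that df itself is unchanged by either call; you have to assign the result to keep it.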
11. How many rows were removed? Make sure the code immediately above justifies this.

12. Which regions lost rows? Run code to see if there are still NAs, and also to identify regions with too many observations. (Hint: it won't be perfect yet.)
## your code
Are we done yet? Oh, no, we certainly are not...

One more nice trick. If you want to examine a subset (more later) of the rows, for example to examine the region with too many observations, you can use code like this:

```python
gapminder[gapminder.column == 'value_in_column']
```

where column is the name of a column and 'value_in_column' is what you're interested in, for example the region.

Write code in the cell below to look at all of the rows from regions that have a problematically high number of observations.
## code to examine rows from regions with too many obs
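A minimal sketch of this boolean filtering on an invented DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"region": ["asia_japan", "asia_china", "europe_france"],
                   "pop": [1, 2, 3]})

# the comparison produces a boolean Series; indexing with it keeps only the True rows
subset = df[df.region == "asia_japan"]
print(len(subset))  # 1
```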
Handling strange variable types

Remember above where we had some strange/annoying variable types? Now we can fix them. The methods below would have failed if there were NAs in the data, since they look at the type of the data and try to change it. Inconsistencies in data types and NAs cause problems for a lot of methods, so we often deal with them first when cleaning data.

We can change (or type cast) the inappropriate data types with the method astype(), like so:

```python
DataFrame.astype(dtype)
```

A few important things to consider!
+ dtype is the data type that you want to cast to
+ if you just use DataFrame, it will try to cast the entire DataFrame to the same type, and you do not want to go there! So we need to specify which columns we want to cast; see below...

There are several ways to select only some columns of a DataFrame in pandas, but the easiest and most intuitive is usually to just use the name. It is very much like indexing a list or a string and looks like DataFrame['column_name'], where column_name is the column name. So in the context of our astype() method call we would want:

```python
DataFrame['column_name'].astype(dtype)
```

to cast the type of a single column only.

Run the code in the cell below to see an example:
## type casts the year column data from float to int
gapminder['year'] = gapminder['year'].astype(int)
Now use astype() to type cast the other problematic column, and use info() to make sure these two operations worked.
## fix other column and make sure all is ok
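On an invented column, the cast looks like this:

```python
import pandas as pd

df = pd.DataFrame({"year": [1992.0, 1997.0, 2002.0]})
print(df["year"].dtype)  # float64

# cast and reassign -- astype() returns a new Series, it does not work in place
df["year"] = df["year"].astype(int)

print(df["year"].dtype.kind)  # 'i' (an integer dtype)
print(df["year"].tolist())    # [1992, 1997, 2002]
```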
Progress! Your data types should make more sense now, and since we removed the NA values above, the total number of rows or entries is the same as the number of non-null rows. Nice!

Handling (Unwanted) Repetitive Data

Sometimes observations can end up in the data set more than once, creating a duplicate row. Luckily, pandas has methods that allow us to identify which observations are duplicates. The method df.duplicated() will return boolean values for each row in the DataFrame telling you whether or not a row is an exact repeat. Run the code below to see how it works.
## get T/F output for each row if it is a duplicate (only look at top 5)
gapminder.duplicated().head()
Wow, there is a repeat in the first 5 rows! Write code below that allows you to confirm this by examining the DataFrame.
## confirm duplicate in top 5 rows of df
## you can use .sum() to count the number of duplicated rows in the DataFrame
In cases where you don't want exactly repeated rows (we don't -- we only want each country to be represented once for every relevant year), you can easily drop such duplicate rows with the method df.drop_duplicates().

Note: drop_duplicates() is another method that does not work in place; if you want the DataFrame to no longer have the duplicates, you need to assign it again: df = df.drop_duplicates()

Warning: you always want to be cautious about dropping rows from your data.

13. Justify why it's ok for us to drop these rows (a mechanical reason that has to do with the structure of the data itself; the "experimental" reason is above).

Write code below to remove the duplicated rows and confirm that the duplicate in the first 5 rows is gone and/or that all of the duplicates are gone.
## remove duplicates and confirm
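On an invented DataFrame, the duplicated()/drop_duplicates() pair works like this:

```python
import pandas as pd

df = pd.DataFrame({"country": ["peru", "peru", "chad"],
                   "year": [1952, 1952, 1952]})

# the second row exactly repeats the first, so only it is flagged
print(df.duplicated().tolist())  # [False, True, False]

df = df.drop_duplicates()  # reassign, since this does not work in place
print(len(df))             # 2
```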
More Progress!

Reindexing with reset_index()

Now we have 1704 rows, but our index is off. Look at the index values in gapminder just above; one is missing! We can reset our indices easily with the method reset_index(drop=True). Remember, Python is 0-indexed, so our indices will be valued 0-1703. The drop=True parameter drops the old index (as opposed to placing it in a new column, which is useful sometimes, but not here).

The concept of reindexing is important. When we remove some of the messier, unwanted data, we end up with "gaps" in our index values. By correcting this, we improve our search functionality and our ability to perform iterative functions on our cleaned data set.

Note: reset_index() is yet another method that does not work in place; if you want the DataFrame to keep the new index, you need to assign it again...

Write code that resets the index of gapminder and shows you the top 5 so you can see it has changed.
## reset index and check top 5
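A small sketch of the gap-then-reset behavior on an invented DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30]})
df = df.drop(index=1)     # dropping a row leaves a gap in the index
print(df.index.tolist())  # [0, 2]

df = df.reset_index(drop=True)  # renumber 0..n-1, discarding the old index
print(df.index.tolist())        # [0, 1]
```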
Handling Inconsistent Data

The region column still has issues that will affect our analysis. We used the value_counts() method above to examine some of these issues...

Write code to look at the number of observations for each unique region.
## get value counts for the region column
14. Describe and explain...
14a. What is better now that we have done some data cleaning?
14b. What is still in need of being fixed?

String manipulations

Very common problems with string variables are:
+ lingering white space
+ upper case vs. lower case

These issues are problematic since upper and lower case characters are considered different characters by Python (consider that 'abc' == 'ABC' evaluates to False) and any extra character in a string makes it different to Python (consider that 'ABC' == ' ABC' evaluates to False). The following three pandas string methods (hence the str) remove such trailing spaces (left and right) and put everything in lowercase, respectively:

```python
df.str.lstrip()  # Strip white space on left
df.str.rstrip()  # Strip white space on right
df.str.lower()   # Convert to lowercase
```

Note: none of these methods work in place, so if you want the DataFrame to keep the cleaned strings, you need to assign the result again (df = df.method())...

Write code that strips the white space from both sides of the values in the region column, makes all of the values lower case, and shows that it has been accomplished (to make sure you've reassigned).
## write code to strip white space on both left and right of region names ## convert region names to lowercase, and print out df to confirm
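On a few invented strings, the three .str methods behave like this:

```python
import pandas as pd

s = pd.Series([" Africa", "ASIA ", " eurOpe "])

# each .str method returns a new Series, so reassign (or chain) the result
s = s.str.lstrip().str.rstrip().str.lower()
print(s.tolist())  # ['africa', 'asia', 'europe']
```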
As a side note, one of the coolest things about pandas is an idea called chaining. If you prefer, the three commands can be written in one single line!

```python
df['column_name'] = df['column_name'].str.lstrip().str.rstrip().str.lower()
```

You may be wondering about the order of operations: it starts closest to the data (the DataFrame name, df here) and moves away, performing the methods in order. This is a bit advanced, but can save a lot of typing once you get used to it. The downside is that it can be harder to troubleshoot -- it can leave you asking "which command in my long, slick-looking chain produced the weird error?"

Are we there yet???

15. What is still wrong with the region column?

regex + replace()

A regular expression, aka regex, is a powerful search technique that uses a sequence of characters to define a search pattern. In a regular expression, the symbol "*" matches the preceding character 0 or more times, whereas "+" matches the preceding character 1 or more times. "." matches any single character. Writing "x|y" means to match either "x" or "y". For more regex shortcuts (cheatsheet): https://www.shortcutfoo.com/app/dojos/regex/cheatsheet

Pandas allows you to use regex in its replace() method -- when a regex term is found in an element, the element is replaced with the specified replacement term. In order for it to appropriately correct elements, both the regex and inplace arguments need to be set to True (their defaults are False). This ensures that the initial input string is read as a regular expression and that the elements will be modified in place.
For more documentation on the replace method: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html

Using regex and replace() is easy, powerful, and potentially dangerous.

Here's an incorrect regex example: we create a temporary DataFrame (notice the temp = assignment) in which a regex statement inside replace() identifies all values that contain the term "congo" and replaces them with "africa_dem rep congo". Unfortunately, this creates 24 instances of the Democratic Republic of the Congo (and there should only be 12) -- this is an error in our cleaning!

Run the code below to go through the incorrect example (and let it be a warning to ye).
# gives a problem -- 24 values of the congo!
temp = gapminder['region'].replace(".*congo.*", "africa_dem rep congo", regex=True)
temp.value_counts()
16. What happened?
## this should help
# shows all the rows that have 'congo' in the name
gapminder[gapminder['region'].str.contains('congo')]
We can revert back to working on the non-temporary DataFrame and correctly modify our regex to isolate only the Democratic Republic of the Congo instances (as opposed to including the Republic of the Congo as well).

Using regex to fix the Dem. Rep. Congo...

As noted above, regular expressions (regex) provide a powerful tool for fixing errors that arise in strings. In order to correctly label the two different countries that include the word "congo", we need to design and use (via df.replace()) a regex that correctly differentiates between the two countries. Recall that "." is the wildcard (matching any single character); combining this with "*" allows us to match any single character an unspecified number of times. By combining these characters with substrings corresponding to variations in the naming of the Democratic Republic of the Congo, we can correctly normalize the name.

If you feel that the use of regex is not particularly straightforward, you are absolutely correct -- appropriately using these tools takes a great deal of time to master. When designing regex for these sorts of tasks, you might find the following prototyper helpful: https://regex101.com/

Run the code below to fix the rows for the Democratic Republic of the Congo.
# fix the two different versions of the incorrect name to the same correct version
gapminder['region'].replace(".*congo, dem.*", "africa_dem rep congo", regex=True, inplace=True)
gapminder['region'].replace(".*_democratic republic of the congo", "africa_dem rep congo", regex=True, inplace=True)

## you write code to check to make sure it's fixed
Exercise (regex): Now that we've taken a close look at how to properly design and use regex to clean string entries in our data, let's try to normalize the naming of a few other countries. Using the pandas code we constructed above as a template, write similar code to what we used above (using df.replace()) to fix the names of all of the rows/entries for the Ivory Coast and Canada to "africa_cote d'ivoire" and "americas_canada", respectively.
## code to fix other incorrect names
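A minimal sketch of the replace() pattern on invented names (not the real gapminder fix, which is your exercise):

```python
import pandas as pd

s = pd.Series(["africa_kongo", "africa_congo republic"])

# both invented variants contain "ongo"; ".*ongo.*" matches each whole string,
# so the entire value is replaced with the normalized name
s = s.replace(".*ongo.*", "africa_congo", regex=True)
print(s.tolist())  # ['africa_congo', 'africa_congo']
```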
Are we there yet??!??!

17. Looks like we successfully cleaned our data! Justify that we are done.

Tidy data

Having what is called a Tidy data set can make cleaning and analyzing your data much easier. Two of the important aspects of Tidy data are:
* every variable has its own column
* every observation has its own row

There are other aspects of Tidy data; here is a good blog post about Tidy data in Python. Currently the dataset has a single column for continent and country (the 'region' column).

18. Why is having a single column for continent and country not Tidy?

Let's make gapminder Tidy

We can split the region column into two by using the underscore that separates continent from country. We can create a new column in the DataFrame by naming it before the = sign:

```python
gapminder['country'] =
```

The following commands use the string method split() to split the string at the underscore (the first argument), which results in a list of two elements: before and after the _. The second argument, in this case 1, tells split() that the split should take place only at the first occurrence of the underscore. Then str[] specifies which item in the resulting lists to return.
# split region into country and continent to tidy data gapminder['country']=gapminder['region'].str.split('_', 1).str[1] gapminder['continent']=gapminder['region'].str.split('_', 1).str[0] gapminder.head()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
19. Describe what happens for each of the lines of code that contains split(). Be sure to explain why the 2 columns end up with different values.
## mess around with it if you need to, but make a 'temp' version of gapminder to play with!
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Removing and renaming columns We have now added the columns country and continent, but we still have the old region column as well. In order to remove that column we use the drop() command. The first argument of the drop() command is the name of the element to be dropped. The second argument is the axis number: 0 for row, 1 for column. Note: any time we are getting rid of stuff, we want to make sure that we are doing it for a good reason and that we know our data will be ok after. You might want to double check your new columns before you drop the old one
## check new columns

# drop old region column
gapminder = gapminder.drop('region', axis=1)
gapminder.head()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Finally, it is a good idea to look critically at your column names themselves. Try to be as consistent as possible when naming columns. We often use all lowercase for all column names to avoid accidentally making names that can be confusing -- gdppercap and gdpPercap and GDPpercap are all the same but gdpercap is different. Avoid spaces in column names to simplify manipulating your data. Also look out for lingering white space at the beginning or end of your column names. Run the following code that turns all column names to lowercase.
# turns all column names to lowercase # yes you need the .columns on the left side too gapminder.columns = gapminder.columns.str.lower() gapminder.head()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
We also want to remove the space from the life exp column name. We can do that with Pandas rename method. It takes a dictionary as its argument, with the old column names as keys and new column names as values. If you're unfamiliar with dictionaries, they are a very useful data structure in Python. You can read more about them here.
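As a quick standalone reminder of the dictionary syntax that rename() expects (a toy example, not tied to our DataFrame):

```python
# keys are the old column names, values are the new ones
rename_map = {'life exp': 'lifeexp'}
print(rename_map['life exp'])  # lifeexp
```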
# rename column gapminder = gapminder.rename(columns={'life exp' : 'lifeexp'}) gapminder.head()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Tidy data wrap-up

21. explain why the data set at this point is Tidy or at least much tidier than it was before.

Export clean and tidy data file

Now that we have a clean and tidy data set in a DataFrame, we want to export it and give it a new name so that we can refer to it and use it:

```python
df.to_csv('file name for export', index=False)  # index=False keeps index out of columns
```

For more info on this method, check out the docs for to_csv()
# exports gapminder_CandT.csv gapminder.to_csv('gapminder_CandT.csv', index=False)
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Subsetting and sorting

There are many ways in which you can manipulate a Pandas DataFrame - here we will discuss only two: subsetting and sorting. Sometimes you only want part of a larger data set; then you would subset your DataFrame. Other times you want to sort the data into a particular order (year most recent to oldest, GDP lowest to highest, etc.).

Subsetting

We can subset (or slice) by giving the numbers of the rows you want to see between square brackets. Run the code below for an example:
# first 15 rows gapminder[0:15] # could also be gapminder[:15]
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
write code to take a different slice
## your slice
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
21. Predict what the slice below will do.

```python
gapminder[-10:]
```
## run the code here to test prediction.
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
22a. What does the negative number (in the cell above) mean? 22b. What happens when you leave the space before or after the colon empty? write code to take another different slice
## your 'nother slice
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
More Subsetting

Subsetting can also be done by selecting for a particular value in a column. For instance, to select all of the rows that have 'africa' in the column 'continent':

```python
gapminder_africa = gapminder[gapminder['continent']=='africa']
```

Note the double equal sign: Remember that single equal signs are used in Python to assign something to a variable. The double equal sign is a comparison: in this case, the value from the column on the left has to be exactly equal to the string to the right. Also note that we made a new DataFrame to contain our subset of the data from Africa.
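To see what the comparison itself produces, here is a small self-contained sketch on toy data (not the gapminder set):

```python
import pandas as pd

toy = pd.DataFrame({'continent': ['africa', 'asia', 'africa']})
mask = toy['continent'] == 'africa'  # a boolean Series, one True/False per row
print(mask.tolist())   # [True, False, True]
print(toy[mask])       # keeps only the rows where mask is True
```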
# makes a new DataFrame with just data from Africa gapminder_africa = gapminder[gapminder['continent']=='africa'] ## write code to quickly check that this worked
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Even more... There are several other fancy ways to subset rows and columns from DataFrames, the .loc and .iloc methods are particularly useful. If you're interested, look them up on the cheatsheet or in the docs. Sorting Sorting may help to further organize and inspect your data. The method sort_values() takes a number of arguments; the most important ones are by and ascending. The following command will sort your DataFrame by year, beginning with the most recent.
# sort by year, from most recent to oldest gapminder.sort_values(by='year', ascending = False)
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
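The `.loc` and `.iloc` accessors mentioned above can be sketched on a small toy DataFrame (label-based vs. position-based selection):

```python
import pandas as pd

toy = pd.DataFrame({'country': ['angola', 'benin', 'chad'],
                    'pop': [100, 200, 300]})
# .loc selects by label (here the default integer index) and column name
print(toy.loc[0, 'country'])  # angola
# .iloc selects purely by integer position: row 2, column 1
print(toy.iloc[2, 1])         # 300
```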
Note: the sort_values() method does not sort in place. write code to prove it.
## your code, sort() not in place
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
23. Make a new variable with a sorted version of the DataFrame organized by country, from ‘Afghanistan’ to ‘Zimbabwe’. Also include code to show that it is sorted correctly.
## alphabetical by country
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Summarize and plot

Summaries and statistics are very useful for initial examination of data as well as in depth analysis. Here we will only scratch the surface. Plots/graphs/charts are also great visual ways to examine data and illustrate patterns. Exploring your data is often iterative - summarize, plot, summarize, plot, etc. - sometimes it branches, sometimes there are more cleaning steps to be discovered... Let's try it!

Summarizing data

Remember that the info() method gives a few useful pieces of information, including the shape of the DataFrame, the variable type of each column, and the amount of memory stored. We can see many of our changes (continent and country columns instead of region, higher number of rows, etc.) reflected in the output of the info() method.
# review info() gapminder.info()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
We also saw above that the describe() method will take the numeric columns and give a summary of their values. We have to remember that we changed the column names, and this time it shouldn't have NAs.
# review describe gapminder[['pop', 'lifeexp', 'gdppercap']].describe()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
More summaries What if we just want a single value, like the mean of the population column? We can call mean on a single column this way:
# population mean gapminder['pop'].mean()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
What if we want to know the mean population by continent? Then we need to use the Pandas groupby() method and tell it which column we want to group by.
# population mean by continent gapminder[['continent', 'pop']].groupby(by='continent').mean()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
What if we want to know the median population by continent? The method that gives you the median is called median(). write code below to get the median population by continent
## population median by continent
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Or the number of entries (rows) per continent?
# count number of rows gapminder[['continent', 'country']].groupby(by='continent').count()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Sometimes we don't want a whole DataFrame. Here is another way to get the number of entries (rows) per group, which produces a Series instead of a DataFrame.
# get size by continent gapminder[['continent', 'country']].groupby(by='continent').size()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
We can also look at the mean GDP per capita of each country: write code below to get the mean GDP per capita of each country
## mean GDP per capita by country
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
What if we wanted a new DataFrame that just contained these summaries? This could be a table in a report, for example.
# pretty slick, right?!
continent_mean_pop = gapminder[['continent', 'pop']].groupby(by='continent').mean()
continent_mean_pop = continent_mean_pop.rename(columns = {'pop':'meanpop'})
continent_row_ct = gapminder[['continent', 'country']].groupby(by='continent').count()
continent_row_ct = continent_row_ct.rename(columns = {'country':'nrows'})
continent_median_pop = gapminder[['continent', 'pop']].groupby(by='continent').median()
continent_median_pop = continent_median_pop.rename(columns = {'pop':'medianpop'})
gapminder_summs = pd.concat([continent_row_ct,continent_mean_pop,continent_median_pop], axis=1)
gapminder_summs
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Visualization with matplotlib Recall that matplotlib is Python's main visualization library. It provides a range of tools for constructing plots, and numerous high-level and add on plotting libraries are built with matplotlib in mind. When we were in the early stages of setting up our analysis, we loaded these libraries like so:
# import again just in case import matplotlib.pyplot as plt ## generate some toy data and plot it # required to generate toy data import numpy as np # magic to plot straight into notebook, probably no longer needed. # %matplotlib inline # evenly sampled time at 200ms intervals t = np.arange(0., 5., 0.2) # red dashes, blue squares and green triangles plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^') plt.show()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Single variable plots

Histograms provide a quick way of visualizing the distribution of numerical data, or the frequencies of observations for categorical variables. Run the code below to generate a sample histogram.
# example histogram # generate some random numbers from a normal distribution data = 100 + np.random.randn(500) # make a histogram with 20 bins plt.hist(data, 20) plt.xlabel('x axis') plt.ylabel('y axis') plt.show()
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
write code below to make a histogram of the distribution of life expectancies of the countries in the gapminder DataFrame.
## histogram of lifeexp
lesson14/Lesson14_team_improved.ipynb
nerdcommander/scientific_computing_2017
mit
Target configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb.
# Root path of the gem5 workspace base = "/home/vagrant/gem5" conf = { # Only 'linux' is supported by gem5 for now, 'android' is a WIP "platform" : 'linux', # Preload settings for a specific target "board" : 'gem5', # Devlib modules to load - "gem5stats" is required to use the power instrument "modules" : ["cpufreq", "bl", "gem5stats"], # Host that will run the gem5 instance "host" : "workstation-lin", "gem5" : { # System to simulate "system" : { # Platform description "platform" : { # Gem5 platform description # LISA will also look for an optional gem5<platform> board file # located in the same directory as the description file. "description" : os.path.join(base, "juno.py"), "args" : [ "--power-model", # Platform-specific parameter enabling power modelling "--juno-revision 2", # Resume simulation from a previous checkpoint # Checkpoint must be taken before Virtio folders are mounted #"--checkpoint-indir /data/tmp/Results_LISA/gem5", #"--checkpoint-resume 1", ] }, # Kernel compiled for gem5 with Virtio flags "kernel" : os.path.join(base, "product/", "vmlinux"), # DTB of the system to simulate "dtb" : os.path.join(base, "product/", "armv8_juno_r2.dtb"), # Disk of the distrib to run "disk" : os.path.join(base, "aarch64-ubuntu-trusty-headless.img") }, # gem5 settings "simulator" : { # Path to gem5 binary "bin" : os.path.join(base, "gem5/build/ARM/gem5.fast"), # Args to be given to the binary "args" : [ # Zilch ], } }, # Tools required by the experiments "tools" : ['trace-cmd', 'sysbench', 'rt-app'], # Output directory on host "results_dir" : "gem5_res", # Energy Meters configuration based on Gem5 stats "emeter" : { "instrument" : "gem5", "conf" : { # Zilch }, # Each channel here must refer to a specific **power** field in the stats file. 
'channel_map' : { 'Core0S' : 'system.cluster0.cores0.power_model.static_power', 'Core0D' : 'system.cluster0.cores0.power_model.dynamic_power', 'Core1S' : 'system.cluster0.cores1.power_model.static_power', 'Core1D' : 'system.cluster0.cores1.power_model.dynamic_power', 'Core2S' : 'system.cluster0.cores2.power_model.static_power', 'Core2D' : 'system.cluster0.cores2.power_model.dynamic_power', 'Core3S' : 'system.cluster0.cores3.power_model.static_power', 'Core3D' : 'system.cluster0.cores3.power_model.dynamic_power', 'Core4S' : 'system.cluster1.cores0.power_model.static_power', 'Core4D' : 'system.cluster1.cores0.power_model.dynamic_power', 'Core5S' : 'system.cluster1.cores1.power_model.static_power', 'Core5D' : 'system.cluster1.cores1.power_model.dynamic_power', }, }, } # This can take a lot of time ... te = TestEnv(conf, wipe=True) target = te.target
ipynb/deprecated/examples/energy_meter/EnergyMeter_Gem5.ipynb
ARM-software/lisa
apache-2.0
Workload execution For this example, we will investigate the energy consumption of the target while running a random workload. Our observations will be made using the RT-App decreasing ramp workload defined below. With such a workload, the system load goes from high to low over time. We expect to see a similar pattern in power consumption.
# Create an RTApp RAMP workload
rtapp = RTA(target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
           params={
                'ramp1' : Ramp(
                    start_pct =  95,
                    end_pct   =   5,
                    delta_pct =  10,
                    time_s    =   0.1).get(),
                'ramp2' : Ramp(
                    start_pct =  90,
                    end_pct   =  30,
                    delta_pct =  20,
                    time_s    =   0.2).get(),
            })
ipynb/deprecated/examples/energy_meter/EnergyMeter_Gem5.ipynb
ARM-software/lisa
apache-2.0
Energy estimation The gem5 energy meters feature two methods: reset and report. * The reset method will start sampling the channels defined in the target configuration. * The report method will stop the sampling and produce a CSV file containing power samples together with a JSON file summarizing the total energy consumption of each channel.
# Start emeters & run workload te.emeter.reset() rtapp.run(out_dir=te.res_dir) nrg_rep = te.emeter.report(te.res_dir)
ipynb/deprecated/examples/energy_meter/EnergyMeter_Gem5.ipynb
ARM-software/lisa
apache-2.0
Data analysis
logging.info("Measured channels energy:")
print(json.dumps(nrg_rep.channels, indent=4))

logging.info("DataFrame of collected samples (only first 5)")
nrg_rep.data_frame.head()

# Obtain system level energy by ...
df = nrg_rep.data_frame

# ... summing the dynamic power of all cores to obtain total dynamic power, ...
df["total_dynamic"] = df[('system.cluster0.cores0.power_model.dynamic_power', 'power')] + \
                      df[('system.cluster0.cores1.power_model.dynamic_power', 'power')] + \
                      df[('system.cluster0.cores2.power_model.dynamic_power', 'power')] + \
                      df[('system.cluster0.cores3.power_model.dynamic_power', 'power')] + \
                      df[('system.cluster1.cores0.power_model.dynamic_power', 'power')] + \
                      df[('system.cluster1.cores1.power_model.dynamic_power', 'power')]

# ... summing the static power of all cores to obtain total static power and ...
df["total_static"] = df[('system.cluster0.cores0.power_model.static_power', 'power')] + \
                     df[('system.cluster0.cores1.power_model.static_power', 'power')] + \
                     df[('system.cluster0.cores2.power_model.static_power', 'power')] + \
                     df[('system.cluster0.cores3.power_model.static_power', 'power')] + \
                     df[('system.cluster1.cores0.power_model.static_power', 'power')] + \
                     df[('system.cluster1.cores1.power_model.static_power', 'power')]

# ... summing the dynamic and static powers
df["total"] = df["total_dynamic"] + df["total_static"]

logging.info("Plot of collected power samples")
axes = df[('total')].plot(figsize=(16,8), drawstyle='steps-post');
axes.set_title('Power samples');
axes.set_xlabel('Time [s]');
axes.set_ylabel('Output power [W]');
ipynb/deprecated/examples/energy_meter/EnergyMeter_Gem5.ipynb
ARM-software/lisa
apache-2.0
We can see on the above plot that the system level power consumption decreases over time (on average). This is coherent with the expected behaviour given the decreasing ramp workload under consideration.
logging.info("Power distribution") axes = df[('total')].plot(kind='hist', bins=32, figsize=(16,8)); axes.set_title('Power Histogram'); axes.set_xlabel('Output power [W] buckets'); axes.set_ylabel('Samples per bucket'); logging.info("Plot of collected power samples") nrg_rep.data_frame.describe(percentiles=[0.90, 0.95, 0.99]).T # Don't forget to stop Gem5 target.disconnect()
ipynb/deprecated/examples/energy_meter/EnergyMeter_Gem5.ipynb
ARM-software/lisa
apache-2.0
Here we upload the data obtained from Brownian Dynamics simulations of isotropic diffusion on a 2D potential.
h5file = "data/cossio_kl1.3_Dx1_Dq1.h5" f = h5py.File(h5file, 'r') data = np.array(f['data']) f.close()
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Trajectory analysis
fig, ax = plt.subplots(2,1,figsize=(12,3), sharex=True,sharey=False) ax[0].plot(data[:,0],data[:,1],'.', markersize=1) ax[1].plot(data[:,0],data[:,2],'g.', markersize=1) ax[0].set_ylim(-10,10) ax[1].set_xlim(0,25000) ax[0].set_ylabel('x') ax[1].set_ylabel('q') ax[1].set_xlabel('Time') plt.tight_layout(h_pad=0) fig, ax = plt.subplots(figsize=(6,4)) hist, bin_edges = np.histogram(data[:,1], bins=np.linspace(-7,7,20), \ density=True) bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \ for i in range(len(bin_edges)-1)] ax.plot(bin_centers, -np.log(hist),label="x") hist, bin_edges = np.histogram(data[:,2], bins=np.linspace(-7,7,20), \ density=True) bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \ for i in range(len(bin_edges)-1)] ax.plot(bin_centers, -np.log(hist),label="q") ax.set_xlim(-7,7) ax.set_ylim(1,9) #ax.set_xlabel('x') ax.set_ylabel('PMF ($k_BT$)') ax.legend()
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Representation of the bistable 2D free energy surface as a function of the measured q and molecular x extensions:
H, x_edges, y_edges = np.histogram2d(data[:,1],data[:,2], \ bins=[np.linspace(-7,7,20), np.linspace(-7,7,20)]) fig, ax = plt.subplots(figsize=(6,5)) pmf = -np.log(H.transpose()) pmf -= np.min(pmf) cs = ax.contourf(pmf, extent=[x_edges.min(), x_edges.max(), \ y_edges.min(), y_edges.max()], \ cmap=cm.rainbow, levels=np.arange(0, 6,0.5)) cbar = plt.colorbar(cs) ax.set_xlim(-7,7) ax.set_ylim(-7,7) ax.set_yticks(range(-5,6,5)) ax.set_xlabel('$x$', fontsize=20) ax.set_ylabel('$q$', fontsize=20) plt.tight_layout()
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Assignment Now we discretize the trajectory using the states obtained from partitioning the 2D free energy surface of the diffusion of the molecule. We first need to import the function that makes the grid.
from scipy.stats import binned_statistic_2d statistic, x_edge, y_edge, binnumber = \ binned_statistic_2d(data[:,1],data[:,2],None,'count', \ bins=[np.linspace(-7,7,20), np.linspace(-7,7,20)]) fig, ax = plt.subplots(figsize=(6,5)) grid = ax.imshow(-np.log(statistic.transpose()),origin="lower",cmap=plt.cm.rainbow) cbar = plt.colorbar(grid) ax.set_yticks(range(0,20,5)) ax.set_xticks(range(0,20,5)) ax.set_xlabel('$x_{bin}$', fontsize=20) ax.set_ylabel('$q_{bin}$', fontsize=20) plt.tight_layout() fig,ax=plt.subplots(3,1,figsize=(12,6),sharex=True) plt.subplots_adjust(wspace=0, hspace=0) ax[0].plot(range(0,len(data[:,1])),data[:,1]) ax[1].plot(range(0,len(data[:,2])),data[:,2],color="g") ax[2].plot(binnumber) ax[0].set_ylabel('x') ax[1].set_ylabel('q') ax[2].set_ylabel("s") ax[2].set_xlabel("time (ps)") ax[2].set_xlim(0,2000)
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Master Equation Model
from mastermsm.trajectory import traj from mastermsm.msm import msm distraj = traj.TimeSeries(distraj=list(binnumber), dt=1) distraj.find_keys() distraj.keys.sort() msm_2D = msm.SuperMSM([distraj])
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Convergence Test
for i in [1, 2, 5, 10, 20, 50, 100]: msm_2D.do_msm(i) msm_2D.msms[i].do_trans(evecs=True) msm_2D.msms[i].boots() fig, ax = plt.subplots() for i in range(5): tau_vs_lagt = np.array([[x,msm_2D.msms[x].tauT[i],msm_2D.msms[x].tau_std[i]] \ for x in sorted(msm_2D.msms.keys())]) ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10) #ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray') ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16) ax.set_ylabel(r'$\tau$ [ps]', fontsize=16) ax.set_xlim(0.8,200) ax.set_ylim(1,1000) ax.set_yscale('log') _ = ax.set_xscale('log')
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
There is no dependency of the relaxation times $\tau$ on the lag time $\Delta$t. Estimation
lt=2 plt.figure() plt.imshow(msm_2D.msms[lt].trans, interpolation='none', \ cmap='viridis_r',origin="lower") plt.ylabel('$\it{i}$') plt.xlabel('$\it{j}$') plt.colorbar() plt.figure() plt.imshow(np.log(msm_2D.msms[lt].trans), interpolation='none', \ cmap='viridis_r',origin="lower") plt.ylabel('$\it{i}$') plt.xlabel('$\it{j}$') plt.colorbar() fig, ax = plt.subplots() ax.errorbar(range(1,12),msm_2D.msms[lt].tauT[0:11], fmt='o-', \ yerr= msm_2D.msms[lt].tau_std[0:11], ms=10) ax.set_xlabel('Eigenvalue') ax.set_ylabel(r'$\tau_i$ [ns]')
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
The first mode, captured by $\lambda_1$, is significantly slower than the others. That mode, described by the right eigenvector $\psi^R_1$, corresponds to the transition of the protein between the folded and unfolded states.
fig, ax = plt.subplots(figsize=(10,4)) ax.plot(msm_2D.msms[2].rvecsT[:,1]) ax.fill_between(range(len(msm_2D.msms[lt].rvecsT[:,1])), 0, \ msm_2D.msms[lt].rvecsT[:,1], \ where=msm_2D.msms[lt].rvecsT[:,1]>0,\ facecolor='c', interpolate=True,alpha=.4) ax.fill_between(range(len(msm_2D.msms[lt].rvecsT[:,1])), 0, \ msm_2D.msms[lt].rvecsT[:,1], \ where=msm_2D.msms[lt].rvecsT[:,1]<0,\ facecolor='g', interpolate=True,alpha=.4) ax.set_ylabel("$\Psi^R_1$") plt.show()
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
The projection of $\psi^R_1$ on the 2D grid shows the transitions between the two conformational states (red and blue).
fig,ax = plt.subplots(1,2,figsize=(10,5),sharey=True,sharex=True) rv_mat = np.zeros((20,20), float) for i in [x for x in zip(msm_2D.msms[lt].keep_keys, \ msm_2D.msms[lt].rvecsT[:,1])]: unr_ind=np.unravel_index(i[0],(21,21)) rv_mat[unr_ind[0]-1,unr_ind[1]-1] = -i[1] ax[0].imshow(rv_mat.transpose(), interpolation="none", \ cmap='bwr',origin="lower") ax[1].imshow(-np.log(statistic.transpose()), \ cmap=plt.cm.rainbow,origin="lower") ax[1].set_yticks(range(0,20,5)) ax[1].set_xticks(range(0,20,5))
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
where DEEPNUC_PATH specifies the path to the DeepNuc directory. All examples in this manual assume usage of this directory structure. Although this manual was written using Jupyter notebook, I would not recommend running training within a notebook environment.

Example 1: Training and testing a model from a bed file

For our first example, we will train and test a binary classifier using the commonly used bed format (Example 2 covers fasta input). First import the appropriate classes.
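One way to set this up — a sketch only, with a placeholder path — is to export DEEPNUC_PATH and put it on PYTHONPATH before launching Python (the notebook cells below instead use sys.path.append):

```shell
# /path/to/DeepNuc is a placeholder -- point it at your clone
export DEEPNUC_PATH=/path/to/DeepNuc
export PYTHONPATH="$DEEPNUC_PATH${PYTHONPATH:+:$PYTHONPATH}"
echo "$PYTHONPATH"
```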
import tensorflow as tf import sys import os.path sys.path.append('../../') import numpy as np from deepnuc.nucdata import * from deepnuc.nucbinaryclassifier import NucBinaryClassifier from deepnuc.databatcher import DataBatcher from deepnuc.modelparams import * from deepnuc.logger import Logger import deepnuc.nucconvmodel as nucconvmodel from deepnuc.crossvalidator import CrossValidator
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Then specify model parameters. * seq_len: Sequence length. * num_epochs: Number of epochs. An epoch is a single pass through every item in a training set. * learning_rate: Learning rate for gradient descent * batch_size: Mini-batch size. * keep_prob: The keep probability for Dropout. A keep prob of .6 will randomly drop 40% of weights on each training cycle. * beta1: A parameter used by the AdamOptimizer * concat_revcom_input: If set to true, the reverse complemented sequence nucleotide sequence for a item will be concatenated to the input vector. * inference_method_key: A string value key for different methods found in nucconvmodel.py. Currently can be set to "inferenceA" or "inferenceD"
params = ModelParams( seq_len=600, num_epochs=35, learning_rate=1e-4, batch_size=24, keep_prob=0.5, beta1=0.9, concat_revcom_input=False, inference_method_key="inferenceA" )
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Now create a NucData type object from your bed file. NucData type objects are described in the nucdata.py module. There are many different types of NucData objects for different filetypes. We will use NucDataBedMem which reads all nucleotide sequences into the computer's memory. Note that since most bed files will have irregularly sized entries (end-start will differ for most lines), the NucDataBedMem constructor has an option to only pull objects within a window around the start coordinate to make things nice and even. This constructor also has an option to automatically generate a dinucleotide shuffled negative class fasta file from each bed file entry. More information can be found in the API docs.
bed_file = "kruesi_bed_0_4.bed" genome_file="WS230_genome.fa" chr_sizes_file="WS230_genome.chr.sizes" seq_len=600 skip_first=False nuc_data = NucDataBedMem( bed_file, genome_file, chr_sizes_file, seq_len=600, skip_first= False, #skip header line if true gen_dinuc_shuffle=True, neg_data_reader=None, start_window=[-300,300] )
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
NucData objects encapsulate data from different file formats, but aren't used for training and testing directly. DataBatcher objects are useful for slicing NucData objects into training and testing sets by index. DataBatcher objects also shuffle records by index every epoch and pull randomized mini-batches for training without replacement.
#First scramble the indices you want to use seed = 12415 perm_indices = np.random.RandomState(seed).\ permutation(range(nuc_data.num_records)) #Slice out 20% of indices for testing and 80% for training test_frac = .2 test_size = int(nuc_data.num_records*test_frac) train_size = nuc_data.num_records - test_size test_indices = perm_indices[0:int(nuc_data.num_records*test_frac)] train_indices = np.setdiff1d(perm_indices,test_indices) #Create training and testing batchers train_batcher = DataBatcher(nuc_data,train_indices) test_batcher = DataBatcher(nuc_data,test_indices)
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
With the training and testing batchers, we can now run training from within a Tensorflow session. Note: Training parameters can be passed directly to NucBinaryClassifier, but in many situations, using the ModelParams object can be useful for writing methods which train on different datasets using the same training parameters.
save_dir="test_train1"
os.makedirs(save_dir)

with tf.Session() as sess:
    nc_test = NucBinaryClassifier(sess,
                                  train_batcher,
                                  test_batcher,
                                  params.num_epochs,
                                  params.learning_rate,
                                  params.batch_size,
                                  params.seq_len,
                                  save_dir,
                                  params.keep_prob,
                                  params.beta1,
                                  params.concat_revcom_input,
                                  params.inference_method_key)
    nc_test.build_model()
    nc_test.train()
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
nc_test.build_model() will also load trained models. For example, after running training, the following lines can be used to evaluate metrics on a different test or evaluation batcher.
nc_test.build_model() nc_test.eval_model_metrics(different_batcher)
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Generating a dinucleotide shuffled file manually

Making dinucleotide shuffled files can also be done manually using the BedReader class.

Example 2: Training and testing a model from a fasta file

Training and testing from a fasta file is identical to training and testing from a bed file. The only difference is changing the nuc_data object from a NucDataBedMem to a NucDataFastaMem. NucDataFastaMem takes a list of fasta files as an argument; all entries of an input fasta file need to be the same length. For binary classification this list should have length 2: the first fasta file passed in the list is the negative class and the second file passed is the positive class.
fasta_neg = "kruesi_tss_-300_300_dinuc_shuffle.fa" fasta_pos = "kruesi_tss_-300_300.fa" nuc_data = NucDataFastaMem([fasta_neg,fasta_pos],seq_len=600)
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Example 3: Performing k-folds cross-validation using a bed file Getting a model trained and evaluating a test set is nice, but in a real world situation, we want to have a way to know how well our model performed. This is where k-folds cross-validation is useful. Cross validation can be done in the following manner.
cv_save_dir = "cv_test1"
cv = CrossValidator(params,
                    nuc_data,
                    cv_save_dir,
                    seed=124155,
                    k_folds=3,
                    test_frac=0.15)
cv.run()

#Calculate average of k-fold metrics and display in terminal
cv.calc_avg_k_metrics()

#Plot curves
cv.plot_auroc_auprc("cv test curves")
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Note that cv.run() does not need to be called from within a tf.Session() Example 4: Performing a grid search with cross-validation using a bed file On top of k-folds cross-validation, we want to have cross-validation metrics for different hyperparameter combinations. This is where a grid search is useful. Performing a grid search is one of the simplest and most common ways of using DeepNuc A grid search requires initialization of a GridParams object. This object is similar to ModelParams except all arguments except seq_len should be passed as lists
gsave_dir="gridtest1"

gparams = GridParams(
                seq_len= seq_len,
                num_epochs=[55],
                learning_rate=[1e-4],
                batch_size=[24],
                keep_prob=[0.5],
                beta1=[0.9],
                concat_revcom_input = [True,False],
                inference_method_key=["inferenceA","inferenceD"])

nuc_data = NucDataBedMem(bed_file,
                         genome_file,
                         chr_sizes_file,
                         seq_len=600,
                         skip_first= False, #skip header line if true
                         gen_dinuc_shuffle=True,
                         start_window=[-300,300])

gsearch = GridSearch(gparams,
                     nuc_data,
                     save_dir=gsave_dir,
                     seed = 29517,
                     k_folds=3,
                     test_frac=.2,
                     fig_title_prefix = "")

gsearch.run() #Does not need to be run within tf.Session()
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
After a grid search is performed, the model can be loaded and final training performed on the best classifier:
with tf.Session() as sess:
    best_classifier, best_model_params = gsearch.best_binary_classifier(sess, "auroc")
    print "\n\n\nBest model params for {}".format(tss_bed)
    best_model_params.print_params()
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
The best_classifier can be treated as any other NucBinaryClassifier object.
best_classifier.eval_model_metrics(different_batcher)
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Note that if running back to back grid search or training runs from the same script, you should reset the default graph with the following command.
tf.reset_default_graph()
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Example 5: Using a fasta or bed file as a negative class
To use a particular bed or fasta file as the negative class, pass a BedReader or FastaReader to the neg_data_reader parameter of NucDataBedMem when making a NucData object.
seq_len = 600
specific_neg_file = "my_specific_file.fa"
my_neg_reader = FastaReader(specific_neg_file, seq_len)

bed_file = "kruesi_bed_0_4.bed"
genome_file = "WS230_genome.fa"
chr_sizes_file = "WS230_genome.chr.sizes"
skip_first = False

nuc_data = NucDataBedMem(bed_file,
                         genome_file,
                         chr_sizes_file,
                         seq_len=600,
                         skip_first=False, #skip header line if true
                         gen_dinuc_shuffle=False,
                         neg_data_reader=my_neg_reader,
                         start_window=[-300, 300])
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Since training should be performed with balanced data, you may want to set a limit on the number of negative items you pull from a specified file. This is available as an argument to the FastaReader object.
seq_len = 600
specific_neg_file = "my_specific_file.fa"
my_neg_reader = FastaReader(specific_neg_file,
                            seq_len,
                            pull_limit=4121)
docs/manual/DeepNuc Instructions.ipynb
LarsDu/DeepNuc
gpl-3.0
Task 1(a). Maximum irradiance
Since we are considering the beam of a laser pointer emitting in the visible, we will take the time it takes to close the eyelid as the exposure time. With this exposure time, estimate from the graph the maximum irradiance the eye can withstand. Write down the exposure time used and the corresponding irradiance value.
Exposure time (blink) =  s
Maximum permissible irradiance =  W/cm$^2$
Task 1(b). Maximum power
We will assume that the beam reaching the eye is collimated, with a size equal to that of the pupil. Using that size, compute the maximum power the eye can receive without causing damage. Write down the pupil size used, the intermediate calculations, and the final power (in mW).
Pupil diameter or radius =  mm
Intermediate calculations
Maximum permissible power =  mW
Task 2. Choosing the laser pointer
Search the internet for a high-power visible laser pointer. Check that this pointer can cause eye damage (taking into account the result of Task 1(b)).
Write here the technical specifications of that laser (power, wavelength, etc.) and its price. Include a reference to the web page http:\
We will embed the web page used in the notebook. To do so, we write the address of the page in the following code cell.
####
# Parameters to modify. START
####

web_laser = 'http://www.punterolaser.com' # Enter the web page address

web_anchura = '1100' # Width in pixels of the embedded web page. Only change it if the page does not display well
web_altura = '800'   # Height in pixels of the embedded web page. Only change it if the page does not display well

####
# Parameters to modify. END
####
##############################################################################################################################

texto_web_laser = '<iframe src= '+web_laser+' width='+web_anchura+'px, height='+web_altura+'px>'
from IPython.display import HTML
HTML(texto_web_laser)
Trabajo Filtro Interferencial/.ipynb_checkpoints/TrabajoFiltrosweb-checkpoint.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
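The Task 1(b) calculation can be sketched in Python. The numbers below are placeholders, not prescribed values: use the irradiance you actually read from the graph in Task 1(a) and the pupil size you chose.

```python
import math

E_max = 1e-3           # maximum permissible irradiance in W/cm^2 (placeholder: use the value read from the graph)
diametro_pupila = 0.7  # pupil diameter in cm (7 mm, a typical dark-adapted pupil; placeholder)

# Power = irradiance x pupil area, then convert W to mW
area_pupila = math.pi * (diametro_pupila / 2)**2  # pupil area in cm^2
P_max_mW = E_max * area_pupila * 1e3              # maximum permissible power in mW

print("Pupil area: %.3f cm^2" % area_pupila)
print("Maximum permissible power: %.3f mW" % P_max_mW)
```

Any pointer whose power exceeds `P_max_mW` (here a fraction of a milliwatt) can cause eye damage, which is why even cheap high-power pointers found in Task 2 are hazardous.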
Task 3. Choosing the interference filter
We will search the internet for a commercial interference filter that removes the risk of eye damage from the selected laser pointer. It will be a filter that blocks the wavelength of the laser pointer.
Task 3(a). Finding and documenting the interference filter
We will use the information available from Semrock ( http://www.semrock.com/filters.aspx ). Select a suitable filter on that page. Click on each filter (on the transmittance curve, on the Part Number, or on Show Product Detail) to get more information.
Write here the most relevant characteristics of the selected filter: transmittance T, absorbance or optical density OD, wavelength range, price, etc.
We will embed the web page with the detailed information on the selected filter in the notebook. To do so, we write the address of that page in the following code cell.
####
# Parameters to modify. START
####

web_filtro = 'http://www.semrock.com/FilterDetails.aspx?id=LP02-224R-25' # Enter the web page address

web_anchura = '1100' # Width in pixels of the embedded web page. Only change it if the page does not display well
web_altura = '800'   # Height in pixels of the embedded web page. Only change it if the page does not display well

####
# Parameters to modify. END
####
##############################################################################################################################

texto_web_filtro = '<iframe src= '+web_filtro+' width='+web_anchura+'px, height='+web_altura+'px>'
from IPython.display import HTML
HTML(texto_web_filtro)
Trabajo Filtro Interferencial/.ipynb_checkpoints/TrabajoFiltrosweb-checkpoint.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
Task 3(b). Checking the filter
Using the transmittance (T) at the wavelength of the laser pointer, verify that this filter will prevent injury. For this we will use the transmittance data of the selected filter that appear on the Semrock web page. To load these data into our notebook we use the code that identifies the filter, which is obtained automatically from the web address selected in the previous section (Task 3(a)).
The following code cell plots the filter transmittance on a logarithmic scale as a function of wavelength (in nm). The original Semrock curve is shown on the left, and a zoom of a region of our choice on the right. This plot lets us read the transmittance at the wavelength of our laser pointer; we must therefore enter that wavelength in the code, and the notebook will compute the transmittance. The result appears above the plots.
%pylab inline

####
# Parameters to modify. START
####

longitud_de_onda_laser = 530 # Wavelength of the selected laser pointer (in nm)

# Plot the original curve and a zoom over the following range (to see the region of interest better)
longitud_de_onda_minina = 500 # Minimum wavelength (in nm) for the zoom
longitud_de_onda_maxima = 670 # Maximum wavelength (in nm) for the zoom
transmitancia_minina = 1e-8   # Minimum transmittance for the zoom
transmitancia_maxima = 1      # Maximum transmittance for the zoom

####
# Parameters to modify. END
####
##############################################################################################################################

indice_igual = web_filtro.find('=')
codigoID = web_filtro[indice_igual+1:-3]
Codigo_Filtro = codigoID
filename = 'http://www.semrock.com/_ProductData/Spectra/'+Codigo_Filtro+'_Spectrum.txt' # Data file address
data = genfromtxt(filename, dtype=float, skip_header=4) # Load the data
longitud_de_onda = data[:,0]; transmitancia = data[:,1]

figure(figsize=(13,6))
subplot(1,2,1)
semilogy(longitud_de_onda, transmitancia)
xlabel('$\lambda$ (nm)'); ylabel('T'); title('Original curve')
subplot(1,2,2)
semilogy(longitud_de_onda, transmitancia)
xlabel('$\lambda$ (nm)'); ylabel('T'); title('Zoom')
axis([longitud_de_onda_minina, longitud_de_onda_maxima, transmitancia_minina, transmitancia_maxima]);

from scipy.interpolate import interp1d
f_transm = interp1d(data[:,0], data[:,1])
transm_para_laser = f_transm(longitud_de_onda_laser)
print "Transmittance at the laser pointer wavelength"
print transm_para_laser
Trabajo Filtro Interferencial/.ipynb_checkpoints/TrabajoFiltrosweb-checkpoint.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
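With the transmittance in hand, the check of Task 3(b) is a single comparison: the power behind the filter is the pointer power times the transmittance, and it must stay below the maximum permissible power from Task 1(b). The numbers below are placeholders: use your own pointer power, the `transm_para_laser` computed above, and your Task 1(b) result.

```python
P_laser_mW = 50.0  # laser pointer power in mW (placeholder)
T_filtro = 1e-6    # filter transmittance at the laser wavelength (placeholder: use transm_para_laser)
P_max_mW = 0.4     # maximum permissible power in mW, from Task 1(b) (placeholder)

# Power transmitted through the filter
P_transmitida_mW = P_laser_mW * T_filtro

print("Power transmitted by the filter: %g mW" % P_transmitida_mW)
if P_transmitida_mW < P_max_mW:
    print("The filter prevents eye damage")
else:
    print("WARNING: this filter is NOT sufficient for this laser")
```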
Load the file, and from the file pull out the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis).
# note that this log will overwrite the log from the previous notebook
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)

%%time
flexible = paths.AnalysisStorage("ad_tps.nc")
# opening as AnalysisStorage is a little slower, but speeds up the move_summary
engine = flexible.engines[0]
flex_scheme = flexible.schemes[0]

print("File size: {0} for {1} steps, {2} snapshots".format(
    flexible.file_size_str,
    len(flexible.steps),
    len(flexible.snapshots)
))
examples/alanine_dipeptide_tps/AD_tps_3a_analysis_flex.ipynb
dwhswenson/openpathsampling
mit
Replica history tree and decorrelated trajectories
The PathTree object gives us both the history tree (often called the "move tree") and the number of decorrelated trajectories. A PathTree is made for a certain set of Monte Carlo steps. First, we make a tree of only the first 25 steps in order to visualize it. (All 10000 steps would be unwieldy.) After the visualization, we make a second PathTree of all the steps. We won't visualize that; instead we use it to count the number of decorrelated trajectories.
replica_history = ops_vis.ReplicaEvolution(replica=0)
tree = ops_vis.PathTree(
    flexible.steps[0:25],
    replica_history
)
tree.options.css['scale_x'] = 3
SVG(tree.svg())

# can write to svg file and open with programs that can read SVG
with open("flex_tps_tree.svg", 'w') as f:
    f.write(tree.svg())

print("Decorrelated trajectories:", len(tree.generator.decorrelated_trajectories))

%%time
full_history = ops_vis.PathTree(
    flexible.steps,
    ops_vis.ReplicaEvolution(
        replica=0
    )
)
n_decorrelated = len(full_history.generator.decorrelated_trajectories)
print("All decorrelated trajectories:", n_decorrelated)
examples/alanine_dipeptide_tps/AD_tps_3a_analysis_flex.ipynb
dwhswenson/openpathsampling
mit
Path length distribution
Flexible-length TPS gives a distribution of path lengths. Here we calculate the length of every accepted trajectory, then histogram those lengths, and calculate the maximum and average path lengths. We also use engine.snapshot_timestep to convert the count of frames to time, including correct units.
path_lengths = [len(step.active[0].trajectory) for step in flexible.steps]
plt.hist(path_lengths, bins=40, alpha=0.5);
print("Maximum:", max(path_lengths),
      "("+(max(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
print("Average:", "{0:.2f}".format(np.mean(path_lengths)),
      "("+(np.mean(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
examples/alanine_dipeptide_tps/AD_tps_3a_analysis_flex.ipynb
dwhswenson/openpathsampling
mit
Path density histogram
Next we will create a path density histogram. Calculating the histogram itself is quite easy: first we reload the collective variables we want to plot it in (we choose the phi and psi angles). Then we create the empty path density histogram, telling it which CVs to use and how to build the histogram (bin sizes, etc.). Finally, we build the histogram by giving it the list of active trajectories to histogram.
from openpathsampling.numerics import HistogramPlotter2D

psi = flexible.cvs['psi']
phi = flexible.cvs['phi']
deg = 180.0 / np.pi

path_density = paths.PathDensityHistogram(cvs=[phi, psi],
                                          left_bin_edges=(-180/deg, -180/deg),
                                          bin_widths=(2.0/deg, 2.0/deg))

path_dens_counter = path_density.histogram([s.active[0].trajectory
                                            for s in flexible.steps])
examples/alanine_dipeptide_tps/AD_tps_3a_analysis_flex.ipynb
dwhswenson/openpathsampling
mit
Convert to MDTraj for analysis by external tools
The trajectory can be converted to an MDTraj trajectory, and then used anywhere that MDTraj can be used. This includes writing it to a file (in any number of file formats) or visualizing the trajectory using, e.g., NGLView.
ops_traj = flexible.steps[1000].active[0].trajectory
traj = ops_traj.to_mdtraj()
traj

# Here's how you would then use NGLView:
#import nglview as nv
#view = nv.show_mdtraj(traj)
#view

flexible.close()
examples/alanine_dipeptide_tps/AD_tps_3a_analysis_flex.ipynb
dwhswenson/openpathsampling
mit
Make plot
# Create the general plot and the "subplots" i.e. the bars
f, ax1 = plt.subplots(1, figsize=(10,5))

# Set the bar width
bar_width = 0.75

# positions of the left bar-boundaries
bar_l = [i+1 for i in range(len(df['pre_score']))]

# positions of the x-axis ticks (center of the bars as bar labels)
tick_pos = [i+(bar_width/2) for i in bar_l]

# Create a bar plot, in position bar_l
ax1.bar(bar_l,
        # using the pre_score data
        df['pre_score'],
        # set the width
        width=bar_width,
        # with the label pre score
        label='Pre Score',
        # with alpha 0.5
        alpha=0.5,
        # with color
        color='#F4561D')

# Create a bar plot, in position bar_l
ax1.bar(bar_l,
        # using the mid_score data
        df['mid_score'],
        # set the width
        width=bar_width,
        # with pre_score on the bottom
        bottom=df['pre_score'],
        # with the label mid score
        label='Mid Score',
        # with alpha 0.5
        alpha=0.5,
        # with color
        color='#F1911E')

# Create a bar plot, in position bar_l
ax1.bar(bar_l,
        # using the post_score data
        df['post_score'],
        # set the width
        width=bar_width,
        # with pre_score and mid_score on the bottom
        bottom=[i+j for i,j in zip(df['pre_score'], df['mid_score'])],
        # with the label post score
        label='Post Score',
        # with alpha 0.5
        alpha=0.5,
        # with color
        color='#F1BD1A')

# set the x ticks with names
plt.xticks(tick_pos, df['first_name'])

# Set the label and legends
ax1.set_ylabel("Total Score")
ax1.set_xlabel("Test Subject")
plt.legend(loc='upper left')

# Set a buffer around the edge
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
python/matplotlib_stacked_bar_plot.ipynb
tpin3694/tpin3694.github.io
mit
1. What is the voxelwise threshold?
# From smoothness + mask to ReselCount
FWHM = 3
ReselSize = FWHM**3
MNI_mask = nib.load("MNI152_T1_2mm_brain_mask.nii.gz").get_data()
Volume = np.sum(MNI_mask)
ReselCount = Volume/ReselSize

print("ReselSize: "+str(ReselSize))
print("Volume: "+str(Volume))
print("ReselCount: "+str(ReselCount))
print("------------")

# From ReselCount to FWE threshold
FweThres_cmd = 'ptoz 0.05 -g %s' % ReselCount
FweThres = os.popen(FweThres_cmd).read()

print("FWE voxelwise GRF threshold: "+str(FweThres))
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
jokedurnez/RequiredEffectSize
mit
Detect 3 regions
What if there are three regions and we want to detect all three? Assuming the regions are independent, what should the power be in each region so that the omnibus power (the probability of finding all 3) is 80%?
regions = 3
PowerPerRegion = Power**(1.0/regions)
print("The power in each region should be: "+str(PowerPerRegion))
Figure1_Power/.ipynb_checkpoints/RequiredEffectSizeForSufficientPower-checkpoint.ipynb
jokedurnez/RequiredEffectSize
mit
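The same argument generalizes to k independent regions: to detect all k with omnibus power P, each region needs power P**(1/k), which grows quickly toward 1 as k increases. A quick sketch (assuming independence between regions; the list of k values is just illustrative):

```python
P_omni = 0.80  # desired omnibus power (find ALL regions)

# Per-region power required so that the joint probability of k
# independent detections equals P_omni
for k in [1, 2, 3, 5, 10]:
    per_region = P_omni**(1.0/k)
    print("k = %2d regions -> per-region power %.4f" % (k, per_region))
```

For k = 3 this reproduces the value computed above (about 0.93 per region); by k = 10 each region already needs roughly 98% power.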