First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
import numpy as np

with open('chalo.txt', 'r') as f:
    text = f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
intro-to-rnns/RNN Albert Camus.ipynb
javoweb/deep-learning
mit
Making training and validation batches

Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.

Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.

The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """
    Split character data into training and validation sets,
    inputs and targets for each set.

    Arguments
    ---------
    chars: character array
    batch_size: Number of sequences in each batch
    num_steps: Number of sequence steps to keep in the input and pass to the network
    split_frac: Fraction of batches to keep in the training set

    Returns train_x, train_y, val_x, val_y
    """
    slice_size = batch_size * num_steps
    n_batches = int(len(chars) / slice_size)

    # Drop the last few characters to make only full batches
    x = chars[: n_batches*slice_size]
    y = chars[1: n_batches*slice_size + 1]

    # Split the data into batch_size slices, then stack them into a 2D matrix
    x = np.stack(np.split(x, batch_size))
    y = np.stack(np.split(y, batch_size))

    # Now x and y are arrays with dimensions batch_size x n_batches*num_steps

    # Split into training and validation sets, keep the first split_frac batches for training
    split_idx = int(n_batches * split_frac)
    train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
    val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]

    return train_x, train_y, val_x, val_y
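To make the shapes concrete, here is the same slicing on a toy integer stream (a standalone sketch, not the notebook's data): with a batch_size of 2 and num_steps of 3, twenty "characters" become two rows, with the targets shifted one position relative to the inputs.

```python
import numpy as np

# Toy "character" stream: integers 0..19
chars = np.arange(20, dtype=np.int32)

batch_size, num_steps = 2, 3
slice_size = batch_size * num_steps          # 6
n_batches = len(chars) // slice_size         # 3
x = chars[:n_batches * slice_size]           # first 18 characters
y = chars[1:n_batches * slice_size + 1]      # same, shifted one character over

x = np.stack(np.split(x, batch_size))        # shape (2, 9)
y = np.stack(np.split(y, batch_size))

print(x)
# [[ 0  1  2  3  4  5  6  7  8]
#  [ 9 10 11 12 13 14 15 16 17]]
print(y[0])   # [1 2 3 4 5 6 7 8 9]
```

Note the last two characters (18, 19) are dropped so that every batch is full.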
Building the model

Below is a function where I build the graph for the network.
import tensorflow as tf
from collections import namedtuple

def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128,
              num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False):
    # When we're using this network for sampling later, we'll be passing in
    # one character at a time, so provide an option for that
    if sampling == True:
        batch_size, num_steps = 1, 1

    tf.reset_default_graph()

    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')

    # Keep probability placeholder for dropout layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    # One-hot encode the input and target characters
    x_one_hot = tf.one_hot(inputs, num_classes)
    y_one_hot = tf.one_hot(targets, num_classes)

    ### Build the RNN layers
    # Use a basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)

    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
    initial_state = cell.zero_state(batch_size, tf.float32)

    ### Run the data through the RNN layers
    # This makes a list where each element is one step in the sequence
    rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]

    # Run each sequence step through the RNN and collect the outputs
    outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
    final_state = state

    # Reshape output so it's a bunch of rows, one output row for each step for each batch
    seq_output = tf.concat(outputs, axis=1)
    output = tf.reshape(seq_output, [-1, lstm_size])

    # Now connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(num_classes))

    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and batch
    logits = tf.matmul(output, softmax_w) + softmax_b

    # Use softmax to get the probabilities for predicted characters
    preds = tf.nn.softmax(logits, name='predictions')

    # Reshape the targets to match the logits
    y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    cost = tf.reduce_mean(loss)

    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))

    # Export the nodes
    # NOTE: I'm using a namedtuple here because I think they are cool
    export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
                    'keep_prob', 'cost', 'preds', 'optimizer']
    Graph = namedtuple('Graph', export_nodes)
    local_dict = locals()
    graph = Graph(*[local_dict[each] for each in export_nodes])

    return graph
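The one piece of bookkeeping worth pausing on is the reshape of the RNN outputs: the per-step outputs are concatenated along the feature axis and then flattened to one row per step per sequence, so the softmax layer sees a plain 2D matrix. This can be illustrated with plain NumPy; the sketch below is standalone and not part of the graph itself.

```python
import numpy as np

batch_size, num_steps, lstm_size = 2, 3, 4

# Simulate the list of per-step RNN outputs, each (batch_size, lstm_size);
# fill each step's output with its step index so rows are easy to trace
outputs = [np.full((batch_size, lstm_size), step, dtype=np.float32)
           for step in range(num_steps)]

seq_output = np.concatenate(outputs, axis=1)   # (2, 12): steps side by side
output = seq_output.reshape(-1, lstm_size)     # (6, 4): one row per step per sequence

print(output.shape)     # (6, 4)
# Row order: sequence 0 steps 0..2, then sequence 1 steps 0..2
print(output[:, 0])     # [0. 1. 2. 0. 1. 2.]
```

Each row of `output` then gets multiplied by `softmax_w`, so the logits share this row ordering, which is why the targets are flattened the same way.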
Hyperparameters

Here I'm defining the hyperparameters for the network.

batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use.
learning_rate - Learning rate for training.
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.

Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.

Tips and Tricks

Monitoring Validation Loss vs. Training Loss

If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data, by default every 1000 iterations). In particular:

If your training loss is much lower than your validation loss, the network might be overfitting. Solutions are to decrease your network size, or to increase dropout. For example, you could try a dropout of 0.5 and so on.
If your training and validation losses are about equal, your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer).

Approximate number of parameters

The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:

The number of parameters in your model. This is printed when you start training.
The size of your dataset. A 1MB file is approximately 1 million characters.

These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:

I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try increasing dropout a bit and see if that helps the validation loss.

Best models strategy

The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.

It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set, or otherwise the validation performance will be noisy and not very informative.
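Karpathy's parameter-counting heuristic can be made concrete with a rough count for this architecture. The sketch below assumes the BasicLSTMCell parameterization (four gates, with weights applied to the concatenated input and hidden state); treat the result as an order-of-magnitude estimate, not the exact number TensorFlow would report.

```python
def approx_params(num_classes, lstm_size, num_layers):
    """Rough parameter count for a stacked-LSTM + softmax char-RNN."""
    total = 0
    input_dim = num_classes                    # one-hot input to the first layer
    for _ in range(num_layers):
        # Each LSTM layer has 4 gates, each with a weight matrix and a bias
        total += 4 * ((input_dim + lstm_size) * lstm_size + lstm_size)
        input_dim = lstm_size                  # later layers take the previous layer's output
    total += lstm_size * num_classes + num_classes   # softmax projection
    return total

# Roughly 3.4M parameters for an 80-character vocabulary with the settings below
print(approx_params(num_classes=80, lstm_size=512, num_layers=2))
```

Compared against a dataset of, say, 1M characters, this model is on the large side, which is where the dropout advice above comes in.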
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.3
Training

Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. Every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.

Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
import time

epochs = 300
# Save every N iterations
save_every_n = 100

train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)

model = build_rnn(len(vocab),
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)

saver = tf.train.Saver(max_to_keep=100)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')

    n_batches = int(train_x.shape[1]/num_steps)
    iterations = n_batches * epochs
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
            iteration = e*n_batches + b
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
                                                feed_dict=feed)
            loss += batch_loss
            end = time.time()
            print('Epoch {}/{} '.format(e+1, epochs),
                  'Iteration {}/{}'.format(iteration, iterations),
                  'Training loss: {:.4f}'.format(loss/b),
                  '{:.4f} sec/batch'.format((end-start)))

            if (iteration % save_every_n == 0) or (iteration == iterations):
                # Check performance; notice dropout has been set to 1
                val_loss = []
                new_state = sess.run(model.initial_state)
                for x, y in get_batch([val_x, val_y], num_steps):
                    feed = {model.inputs: x,
                            model.targets: y,
                            model.keep_prob: 1.,
                            model.initial_state: new_state}
                    batch_loss, new_state = sess.run([model.cost, model.final_state],
                                                     feed_dict=feed)
                    val_loss.append(batch_loss)

                print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!')
                saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
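The training loop relies on a get_batch generator that isn't shown in this excerpt. A minimal sketch of how it likely works, slicing num_steps-wide windows along the second axis of the arrays returned by split_data (this is an assumption about the missing helper, not the notebook's own code):

```python
import numpy as np

def get_batch(arrs, num_steps):
    """Yield successive windows of width num_steps along axis 1.

    arrs is a list like [train_x, train_y], each with shape
    (batch_size, n_batches * num_steps).
    """
    n = arrs[0].shape[1] // num_steps
    for i in range(0, n * num_steps, num_steps):
        yield tuple(a[:, i:i + num_steps] for a in arrs)

# Quick check on a toy pair of arrays
tx = np.arange(12).reshape(2, 6)
ty = tx + 1
batches = list(get_batch([tx, ty], num_steps=3))
print(len(batches))          # 2
print(batches[0][0].shape)   # (2, 3)
```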
Here, pass in the path to a checkpoint and sample from the network.
checkpoint = "checkpoints/i3000_l512_v2.497.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Cuando en")
print(samp)
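The sample function itself isn't defined in this excerpt. Its core step is choosing the next character from the softmax output; here is a hedged NumPy sketch of that helper (the name pick_top_n and the top-n strategy are assumptions about how the missing code works):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    """Sample the next character index from the top_n most likely predictions.

    preds is the softmax output for a single step, shape (1, vocab_size).
    """
    p = np.squeeze(preds).copy()
    p[np.argsort(p)[:-top_n]] = 0   # zero out all but the top_n probabilities
    p = p / np.sum(p)               # renormalize so they sum to 1
    return np.random.choice(vocab_size, 1, p=p)[0]

preds = np.array([[0.7, 0.1, 0.1, 0.05, 0.05]])
print(pick_top_n(preds, 5, top_n=1))   # always 0: only the top class survives
```

Restricting the draw to the top-n classes keeps the generated text from wandering into very unlikely characters while still allowing some randomness.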
Copy over the example files to the working directory
import os
import glob
import shutil

path = 'data'
gpth = os.path.join('..', 'data', 'mf2005_test', 'test1ss.*')
for f in glob.glob(gpth):
    shutil.copy(f, path)
examples/Notebooks/flopy3_sfrpackage_example.ipynb
mrustl/flopy
bsd-3-clause
Load example dataset, skipping the SFR package
m = flopy.modflow.Modflow.load('test1ss.nam', version='mf2005', exe_name=exe_name,
                               model_ws=path,
                               load_only=['ghb', 'evt', 'rch', 'dis', 'bas6', 'oc', 'sip', 'lpf'])
Read pre-prepared reach and segment data into numpy recarrays using numpy.genfromtxt()

Reach data (Item 2 in the SFR input instructions) are input and stored in a numpy record array: http://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html This allows reach data to be indexed by their variable names, as described in the SFR input instructions. For more information on Item 2, see the Online Guide to MODFLOW: http://water.usgs.gov/nrp/gwsoftware/modflow2000/MFDOC/index.html?sfr.htm
rpth = os.path.join('..', 'data', 'sfr_examples', 'test1ss_reach_data.csv')
reach_data = np.genfromtxt(rpth, delimiter=',', names=True)
reach_data
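To see how names=True produces a field-indexable record array, here is a self-contained toy example (the column names are illustrative, mirroring standard SFR reach fields rather than the actual contents of the example CSV):

```python
import numpy as np
from io import StringIO

# In-memory stand-in for a reach-data CSV with a header row
csv = StringIO("krch,irch,jrch,iseg,ireach\n1,1,1,1,1\n1,1,2,1,2\n")

data = np.genfromtxt(csv, delimiter=',', names=True)

print(data.dtype.names)    # ('krch', 'irch', 'jrch', 'iseg', 'ireach')
print(data['ireach'])      # [1. 2.]  -- columns are addressable by name
```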
Segment Data structure

Segment data are input and stored in a dictionary of record arrays, keyed by stress period, which allows segment data to be indexed by variable name.
spth = os.path.join('..', 'data', 'sfr_examples', 'test1ss_segment_data.csv')
ss_segment_data = np.genfromtxt(spth, delimiter=',', names=True)
segment_data = {0: ss_segment_data}
segment_data[0][0:1]['width1']
Define dataset 6e (channel flow data) for segment 1. Dataset 6e is stored in a nested dictionary keyed by stress period and segment, with a list of the following lists defined for each segment with icalc == 4:

FLOWTAB(1) FLOWTAB(2) ... FLOWTAB(NSTRPTS)
DPTHTAB(1) DPTHTAB(2) ... DPTHTAB(NSTRPTS)
WDTHTAB(1) WDTHTAB(2) ... WDTHTAB(NSTRPTS)
channel_flow_data = {0: {1: [[0.5, 1.0, 2.0, 4.0, 7.0, 10.0, 20.0, 30.0, 50.0, 75.0, 100.0],
                             [0.25, 0.4, 0.55, 0.7, 0.8, 0.9, 1.1, 1.25, 1.4, 1.7, 2.6],
                             [3.0, 3.5, 4.2, 5.3, 7.0, 8.5, 12.0, 14.0, 17.0, 20.0, 22.0]]}}
Define dataset 6d (channel geometry data) for segments 7 and 8. Dataset 6d is stored in a nested dictionary keyed by stress period and segment, with a list of the following lists (the 8-point channel cross section) defined for each segment with icalc == 2:

XCPT(1) XCPT(2) ... XCPT(8)
ZCPT(1) ZCPT(2) ... ZCPT(8)
channel_geometry_data = {0: {7: [[0.0, 10.0, 80.0, 100.0, 150.0, 170.0, 240.0, 250.0],
                                 [20.0, 13.0, 10.0, 2.0, 0.0, 10.0, 13.0, 20.0]],
                             8: [[0.0, 10.0, 80.0, 100.0, 150.0, 170.0, 240.0, 250.0],
                                 [25.0, 17.0, 13.0, 4.0, 0.0, 10.0, 16.0, 20.0]]}}
Define SFR package variables
nstrm = len(reach_data)       # number of reaches
nss = len(segment_data[0])    # number of segments
nsfrpar = 0                   # number of parameters (not supported)
nparseg = 0
const = 1.486                 # constant for Manning's equation, units of cfs
dleak = 0.0001                # closure tolerance for stream stage computation
istcb1 = 53                   # flag for writing SFR output to cell-by-cell budget (on unit 53)
istcb2 = 81                   # flag for writing SFR output to text file
dataset_5 = {0: [nss, 0, 0]}  # dataset 5 (see online guide)
Instantiate SFR package Input arguments generally follow the variable names defined in the Online Guide to MODFLOW
sfr = flopy.modflow.ModflowSfr2(m, nstrm=nstrm, nss=nss, const=const, dleak=dleak,
                                istcb1=istcb1, istcb2=istcb2,
                                reach_data=reach_data,
                                segment_data=segment_data,
                                channel_geometry_data=channel_geometry_data,
                                channel_flow_data=channel_flow_data,
                                dataset_5=dataset_5)
sfr.reach_data[0:1]
Plot the SFR segments. Any column in the reach_data array can be plotted using the key argument.
sfr.plot(key='iseg');
Check the SFR dataset for errors
chk = sfr.check()
m.external_fnames = [os.path.split(f)[1] for f in m.external_fnames]
m.external_fnames
m.write_input()
m.run_model()
Look at results
sfr_outfile = os.path.join('..', 'data', 'sfr_examples', 'test1ss.flw')
names = ["layer", "row", "column", "segment", "reach", "Qin", "Qaquifer", "Qout",
         "Qovr", "Qprecip", "Qet", "stage", "depth", "width", "Cond", "gradient"]
Read results into numpy array using genfromtxt
sfrresults = np.genfromtxt(sfr_outfile, skip_header=8, names=names, dtype=None)
sfrresults[0:1]
Read results into a pandas dataframe (requires the pandas library)
import pandas as pd

df = pd.read_csv(sfr_outfile, delim_whitespace=True, skiprows=8, names=names, header=None)
df
Plot streamflow and stream/aquifer interactions for a segment
inds = df.segment == 3
# .ix was removed from pandas; .loc does the same label/boolean indexing
ax = df.loc[inds, ['Qin', 'Qaquifer', 'Qout']].plot(x=df.reach[inds])
ax.set_ylabel('Flow, in cubic feet per second')
ax.set_xlabel('SFR reach')
Look at stage, model top, and streambed top
import matplotlib.pyplot as plt

streambed_top = m.sfr.segment_data[0][m.sfr.segment_data[0].nseg == 3][['elevup', 'elevdn']][0]
streambed_top
df['model_top'] = m.dis.top.array[df.row.values - 1, df.column.values - 1]

fig, ax = plt.subplots()
plt.plot([1, 6], list(streambed_top), label='streambed top')
ax = df.loc[inds, ['stage', 'model_top']].plot(ax=ax, x=df.reach[inds])
ax.set_ylabel('Elevation, in feet')
plt.legend()
Get SFR leakage results from cell budget file
bpth = os.path.join('data', 'test1ss.cbc')
cbbobj = bf.CellBudgetFile(bpth)
cbbobj.list_records()

sfrleak = cbbobj.get_data(text=' STREAM LEAKAGE')[0]
sfrleak[sfrleak == 0] = np.nan  # remove zero values
Plot leakage in plan view
im = plt.imshow(sfrleak[0], interpolation='none',
                cmap='coolwarm', vmin=-3, vmax=3)
cb = plt.colorbar(im, label='SFR Leakage, in cubic feet per second');
Plot total streamflow
sfrQ = sfrleak[0].copy()
sfrQ[sfrQ == 0] = np.nan
sfrQ[df.row.values - 1, df.column.values - 1] = df[['Qin', 'Qout']].mean(axis=1).values
im = plt.imshow(sfrQ, interpolation='none')
plt.colorbar(im, label='Streamflow, in cubic feet per second');
The first function we will use is aop_h5refl2array. This function is loaded into the cell below, we encourage you to look through the code to understand what it is doing -- most of these steps should look familiar to you from the first lesson. This function can be thought of as a wrapper to automate the steps required to read AOP hdf5 reflectance tiles into a Python format. This function also cleans the data: it sets any no data values within the reflectance tile to nan (not a number) and applies the reflectance scale factor so the final array that is returned represents unitless scaled reflectance, with values ranging between 0 and 1 (0-100%).
def aop_h5refl2array(refl_filename):
    """aop_h5refl2array reads in a NEON AOP reflectance hdf5 file and returns
    1. reflectance array (with the no data value and reflectance scale factor applied)
    2. dictionary of metadata including spatial information, and wavelengths of the bands
    --------
    Parameters
        refl_filename -- full or relative path and name of reflectance hdf5 file
    --------
    Returns
    --------
    reflArray:
        array of reflectance values
    metadata:
        dictionary containing the following metadata:
            bad_band_window1 (tuple)
            bad_band_window2 (tuple)
            bands: # of bands (float)
            data ignore value: value corresponding to no data (float)
            epsg: coordinate system code (float)
            map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string)
            reflectance scale factor: factor by which reflectance is scaled (float)
            wavelength: wavelength values (float)
            wavelength unit: 'm' (string)
    --------
    NOTE: This function applies to the NEON hdf5 format implemented in 2016, and should
    be used for data acquired 2016 and after. Data in the earlier NEON hdf5 format
    (collected prior to 2016) is expected to be re-processed after the 2018 flight season.
    --------
    Example Execution:
    --------
    sercRefl, sercRefl_metadata = aop_h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
    """
    import h5py
    import numpy as np

    # Read in reflectance hdf5 file
    hdf5_file = h5py.File(refl_filename, 'r')

    # Get the site name
    file_attrs_string = str(list(hdf5_file.items()))
    file_attrs_string_split = file_attrs_string.split("'")
    sitename = file_attrs_string_split[1]

    # Extract the reflectance & wavelength datasets
    refl = hdf5_file[sitename]['Reflectance']
    reflData = refl['Reflectance_Data']
    reflRaw = refl['Reflectance_Data'].value

    # Create dictionary containing relevant metadata information
    metadata = {}
    metadata['map info'] = refl['Metadata']['Coordinate_System']['Map_Info'].value
    metadata['wavelength'] = refl['Metadata']['Spectral_Data']['Wavelength'].value

    # Extract no data value & scale factor
    metadata['data ignore value'] = float(reflData.attrs['Data_Ignore_Value'])
    metadata['reflectance scale factor'] = float(reflData.attrs['Scale_Factor'])
    #metadata['interleave'] = reflData.attrs['Interleave']

    # Apply no data value
    reflClean = reflRaw.astype(float)
    arr_size = reflClean.shape
    if metadata['data ignore value'] in reflRaw:
        print('% No Data: ', np.round(np.count_nonzero(reflClean == metadata['data ignore value'])*100/(arr_size[0]*arr_size[1]*arr_size[2]), 1))
        nodata_ind = np.where(reflClean == metadata['data ignore value'])
        reflClean[nodata_ind] = np.nan

    # Apply scale factor
    reflArray = reflClean/metadata['reflectance scale factor']

    # Extract spatial extent from attributes
    metadata['spatial extent'] = reflData.attrs['Spatial_Extent_meters']

    # Extract bad band windows
    metadata['bad band window1'] = (refl.attrs['Band_Window_1_Nanometers'])
    metadata['bad band window2'] = (refl.attrs['Band_Window_2_Nanometers'])

    # Extract projection information
    #metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
    metadata['epsg'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)

    # Extract map information: spatial extent & resolution (pixel size)
    mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value

    hdf5_file.close()

    return reflArray, metadata
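The cleaning steps inside the function (replace the no-data value with nan, then divide by the scale factor) can be demonstrated on a tiny array. The scale factor and ignore value below are illustrative stand-ins, typical of NEON reflectance tiles:

```python
import numpy as np

scale_factor = 10000.0   # illustrative scale factor
nodata = -9999.0         # illustrative data-ignore value

raw = np.array([[5000.0, -9999.0],
                [2500.0,  7500.0]])

clean = raw.astype(float)
clean[clean == nodata] = np.nan        # apply the no-data value
refl = clean / scale_factor            # scale to unitless reflectance (0-1)

print(refl)
# [[0.5   nan]
#  [0.25 0.75]]
```

The nan values propagate through later computations, which is why the plotting helper below filters them out before computing percentiles.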
tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
If you forget what this function does, or don't want to scroll up to read the docstrings, remember you can use help or ? to display the associated docstrings.
help(aop_h5refl2array)
aop_h5refl2array?
Now that we have an idea of how this function works, let's try it out. First, define the path where the reflectance data is stored and use os.path.join to create the full path to the data file. Note that if you want to run this notebook later on a different reflectance tile, you just have to change this variable.
# Note you will need to update this filepath for your local machine
serc_h5_tile = ('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
Now that we've specified our reflectance tile, we can call aop_h5refl2array to read in the reflectance tile as a Python array called sercRefl, and the associated metadata into a dictionary sercMetadata.
sercRefl,sercMetadata = aop_h5refl2array(serc_h5_tile)
We can use the shape method to see the dimensions of the array we read in. NEON tiles are (1000 x 1000 x # of bands); the number of bands may vary depending on the hyperspectral sensor used, but should be in the vicinity of 426.
sercRefl.shape
plot_aop_refl: plot a single band

Next we'll use the function plot_aop_refl to plot a single band of reflectance data. Read the Parameters section of the docstring to understand the required inputs & data type for each of these. Only the band and spatial extent are required inputs; the rest are optional inputs that, if specified, allow you to set the range of color values, specify the axis, add a title, colorbar, colorbar title, and change the colormap (default is to plot in greyscale).
def plot_aop_refl(band_array, refl_extent, colorlimit=(0, 1), ax=plt.gca(),
                  title='', cbar='on', cmap_title='', colormap='Greys'):
    '''plot_aop_refl reads in and plots a single band or 3 stacked bands of a reflectance array
    --------
    Parameters
    --------
        band_array: array of reflectance values, created from aop_h5refl2array
        refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax);
                     use metadata['spatial extent'] from the aop_h5refl2array function
        colorlimit: optional, range of values to plot (min, max)
                    - helpful to look at the histogram of reflectance values before
                      plotting to determine colorlimit
        ax: optional, default = current axis
        title: optional; plot title (string)
        cmap_title: optional; colorbar title
        colormap: optional (string, see
                  https://matplotlib.org/examples/color/colormaps_reference.html
                  for a list of colormaps)
    --------
    Returns
    --------
        plots array of a single band of reflectance data
    --------
    Examples:
    --------
    plot_aop_refl(sercb56,
                  sercMetadata['spatial extent'],
                  colorlimit=(0, 0.3),
                  title='SERC Band 56 Reflectance',
                  cmap_title='Reflectance',
                  colormap='Greys_r')
    '''
    import matplotlib.pyplot as plt

    plot = plt.imshow(band_array, extent=refl_extent, clim=colorlimit);
    if cbar == 'on':
        cbar = plt.colorbar(plot, aspect=40);
        plt.set_cmap(colormap);
        cbar.set_label(cmap_title, rotation=90, labelpad=20)
    plt.title(title);
    ax = plt.gca();
    ax.ticklabel_format(useOffset=False, style='plain');  # do not use scientific notation for ticklabels
    rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90);  # rotate x tick labels 90 degrees
Now that we have loaded this function, let's extract a single band from the SERC reflectance array and plot it:
sercb56 = sercRefl[:, :, 55]

plot_aop_refl(sercb56,
              sercMetadata['spatial extent'],
              colorlimit=(0, 0.3),
              title='SERC Band 56 Reflectance',
              cmap_title='Reflectance',
              colormap='Greys_r')
RGB Plots - Band Stacking

It is often useful to look at several bands together. We can extract and stack three reflectance bands in the red, green, and blue (RGB) spectrums to produce a color image that looks like what we see with our eyes; this is your typical camera image. In the next part of this tutorial, we will learn to stack multiple bands and make a geotif raster from the compilation of these bands. We can see that different combinations of bands allow for different visualizations of the remotely-sensed objects and also convey useful information about the chemical makeup of the Earth's surface. We will select bands that fall within the visible range of the electromagnetic spectrum (400-700 nm) and at specific points that correspond to what we see as red, green, and blue. <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/spectrum_RGBcombined.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/spectrum_RGBcombined.png"></a> <figcaption> NEON Imaging Spectrometer bands and their respective nanometers. Source: National Ecological Observatory Network (NEON) </figcaption> </figure> For this exercise, we'll first use the neon_aop_module function stack_rgb to extract the bands we want to stack. This function uses slicing to extract the nth band from the reflectance array, and then uses the numpy function stack to create a new 3D array (1000 x 1000 x 3) consisting of only the three bands we want.
def stack_rgb(reflArray, bands):
    import numpy as np

    red = reflArray[:, :, bands[0]-1]
    green = reflArray[:, :, bands[1]-1]
    blue = reflArray[:, :, bands[2]-1]

    stackedRGB = np.stack((red, green, blue), axis=2)

    return stackedRGB
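The slicing-and-stacking that stack_rgb performs can be checked on a small synthetic cube. This standalone sketch confirms that the 1-based band numbers map to the right 0-based slices and that the result has three bands in the last dimension:

```python
import numpy as np

# Synthetic "reflectance" cube: 4 x 4 pixels, 6 bands
refl = np.random.rand(4, 4, 6)
bands = (5, 3, 1)   # 1-based band numbers, as stack_rgb expects

rgb = np.stack((refl[:, :, bands[0]-1],    # band 5 -> index 4
                refl[:, :, bands[1]-1],    # band 3 -> index 2
                refl[:, :, bands[2]-1]),   # band 1 -> index 0
               axis=2)

print(rgb.shape)   # (4, 4, 3)
```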
First, we will look at red, green, and blue bands, whose indices are defined below. To confirm that these band indices correspond to wavelengths in the expected portion of the spectrum, we can print out the wavelength values stored in metadata['wavelength']:
rgb_bands = (58, 34, 19)

print('Band 58 Center Wavelength = %.2f' %(sercMetadata['wavelength'][57]), 'nm')
print('Band 34 Center Wavelength = %.2f' %(sercMetadata['wavelength'][33]), 'nm')
print('Band 19 Center Wavelength = %.2f' %(sercMetadata['wavelength'][18]), 'nm')
Below we use stack_rgb to create an RGB array. Check that the dimensions of this array are as expected.

Data Tip: Checking the shape of arrays with .shape is a good habit to get into when creating your own workflows, and can be a handy tool for troubleshooting.
SERCrgb = stack_rgb(sercRefl, rgb_bands)
SERCrgb.shape
plot_aop_refl: plot an RGB band combination

Next, we can use the function plot_aop_refl, even though we have more than one band. This function only works for a single-band or 3-band array, so make sure the array you use has the proper dimensions before using it. You do not need to specify the color limits, as matplotlib.pyplot automatically scales 3-band arrays to 8-bit color (256).
plot_aop_refl(SERCrgb,
              sercMetadata['spatial extent'],
              title='SERC RGB Image',
              cbar='off')
You'll notice that this image is very dark; it is possible to make out some of the features (roads, buildings), but it is not ideal. Since color limits don't apply to 3-band images, we have to use some other image processing tools to enhance the visibility of this image.

Image Processing -- Contrast Stretch & Histogram Equalization

We can also try out some image processing routines to better visualize the reflectance data using the scikit-image package. Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image by eliminating very high or low reflectance values that skew the display of the image. <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"></a> <figcaption> Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image, as we will show how to do below. Source: <a href="https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png"> Wikipedia - Public Domain </a> </figcaption> </figure> The following tutorial section is adapted from scikit-image's tutorial <a href="http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py" target="_blank"> Histogram Equalization</a>. Let's see what the image looks like using a 5% linear contrast stretch using the skimage module's exposure function.
from skimage import exposure

def plot_aop_rgb(rgbArray, ext, ls_pct=5, plot_title=''):
    pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (ls_pct, 100 - ls_pct))
    img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow, pHigh))
    plt.imshow(img_rescale, extent=ext)
    plt.title(plot_title + '\n Linear ' + str(ls_pct) + '% Contrast Stretch')
    ax = plt.gca()
    ax.ticklabel_format(useOffset=False, style='plain')  # do not use scientific notation
    # rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90)  # rotate x tick labels 90 degrees

plot_aop_rgb(SERCrgb, sercMetadata['spatial extent'], plot_title='SERC RGB')
tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
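As a companion to the linear stretch above, full histogram equalization remaps every pixel through the image's empirical cumulative distribution function, which spreads a crowded histogram across the full display range. Here is a minimal NumPy-only sketch of the idea; the `equalize_hist` function and the synthetic "dark" image are illustrative, not part of the NEON data:

```python
import numpy as np

def equalize_hist(img, nbins=256):
    # Build the empirical CDF of the pixel values, then map each
    # pixel through it; the output always spans [0, 1]
    flat = img.ravel()
    hist, bin_edges = np.histogram(flat, bins=nbins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalize CDF to [0, 1]
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return np.interp(flat, centers, cdf).reshape(img.shape)

# A dark, low-contrast image: all values crowded into [0.1, 0.2]
rng = np.random.default_rng(0)
dark = rng.uniform(0.1, 0.2, size=(32, 32))
eq = equalize_hist(dark)
```

`skimage.exposure.equalize_hist` implements the same transform (with extra masking support), and the adaptive variant `equalize_adapthist` used later applies it in local tiles.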
False Color Image - Color Infrared (CIR) We can also create an image from bands outside of the visible spectrum. An image containing one or more bands outside of the visible range is called a false-color image. Here we'll use the green and blue bands as before, but we replace the red band with a near-infrared (NIR) band. For more information about non-visible wavelengths, false color images, and some frequently used false-color band combinations, refer to <a href="https://earthobservatory.nasa.gov/Features/FalseColor/" target="_blank">NASA's Earth Observatory page</a>.
CIRbands = (90, 34, 19)
print('Band 90 Center Wavelength = %.2f' % (sercMetadata['wavelength'][89]), 'nm')
print('Band 34 Center Wavelength = %.2f' % (sercMetadata['wavelength'][33]), 'nm')
print('Band 19 Center Wavelength = %.2f' % (sercMetadata['wavelength'][18]), 'nm')

SERCcir = stack_rgb(sercRefl, CIRbands)
plot_aop_rgb(SERCcir, sercMetadata['spatial extent'], ls_pct=2, plot_title='SERC CIR')
tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
Demo: Exploring Band Combinations Interactively Now that we have made a couple different band combinations, we can demo a Python widget to explore different combinations of bands in the visible and non-visible portions of the spectrum.
from ipywidgets import interact  # IPython.html.widgets is deprecated; use ipywidgets

array = copy.copy(sercRefl)
metadata = copy.copy(sercMetadata)

def RGBplot_widget(R, G, B):
    # Pre-allocate array size
    rgbArray = np.zeros((array.shape[0], array.shape[1], 3), 'uint8')

    Rband = array[:, :, R - 1].astype(float)  # np.float was removed in NumPy 1.24
    Gband = array[:, :, G - 1].astype(float)
    Bband = array[:, :, B - 1].astype(float)

    rgbArray[..., 0] = Rband * 256
    rgbArray[..., 1] = Gband * 256
    rgbArray[..., 2] = Bband * 256

    # Apply Adaptive Histogram Equalization to improve contrast:
    img_nonan = np.ma.masked_invalid(rgbArray)  # first mask the image
    img_adapteq = exposure.equalize_adapthist(img_nonan, clip_limit=0.10)

    plot = plt.imshow(img_adapteq, extent=metadata['spatial extent'])
    plt.title('Bands: \nR:' + str(R) + ' (' + str(round(metadata['wavelength'][R - 1])) + 'nm)'
              + '\n G:' + str(G) + ' (' + str(round(metadata['wavelength'][G - 1])) + 'nm)'
              + '\n B:' + str(B) + ' (' + str(round(metadata['wavelength'][B - 1])) + 'nm)')
    ax = plt.gca()
    ax.ticklabel_format(useOffset=False, style='plain')
    rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90)

interact(RGBplot_widget, R=(1, 426, 1), G=(1, 426, 1), B=(1, 426, 1))
tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
Demo: Interactive Linear Stretch & Equalization Here is another widget to play with, demonstrating how to interactively visualize linear contrast stretches with a variable percentile.
rgbArray = copy.copy(SERCrgb)

def linearStretch(percent):
    pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (percent, 100 - percent))
    img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow, pHigh))
    plt.imshow(img_rescale, extent=sercMetadata['spatial extent'])
    plt.title('SERC RGB \n Linear ' + str(percent) + '% Contrast Stretch')
    ax = plt.gca()
    ax.ticklabel_format(useOffset=False, style='plain')
    rotatexlabels = plt.setp(ax.get_xticklabels(), rotation=90)

interact(linearStretch, percent=(0, 20, 1))
tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
Load a well from LAS Use the from_las() method to load a well by passing a filename as a str. This is really just a wrapper for lasio but instantiates a Header, Curves, etc.
from welly import Well

w = Well.from_las('data/P-129_out.LAS')
tutorial/06_Welly_and_LAS.ipynb
agile-geoscience/welly
apache-2.0
Save LAS file We can write out to LAS with a simple command, passing the file name you want:
w.to_las('data/out.las')
tutorial/06_Welly_and_LAS.ipynb
agile-geoscience/welly
apache-2.0
Let's just check we get the same thing out of that file as we put in:
w.plot()

z = Well.from_las('data/out.las')
z.plot()
z.data['CALI'].plot()
tutorial/06_Welly_and_LAS.ipynb
agile-geoscience/welly
apache-2.0
We don't get the striplog back (right hand side), but everything else looks good. Header Maybe should be called 'meta' as it's not really a header...
w.header
w.header.name
w.uwi
tutorial/06_Welly_and_LAS.ipynb
agile-geoscience/welly
apache-2.0
What?? OK, we need to load this file more carefully... Coping with messy LAS Some file headers are a disgrace: # LAS format log file from PETREL # Project units are specified as depth units #================================================================== ~Version information VERS. 2.0: WRAP. YES: #================================================================== ~WELL INFORMATION #MNEM.UNIT DATA DESCRIPTION #---- ------ -------------- ----------------------------- STRT .M 1.0668 :START DEPTH STOP .M 1939.13760 :STOP DEPTH STEP .M 0.15240 :STEP NULL . -999.25 :NULL VALUE COMP . Elmworth Energy Corporation :COMPANY WELL . Kennetcook #2 :WELL FLD . Windsor Block :FIELD LOC . Lat = 45* 12' 34.237" N :LOCATION PROV . Nova Scotia :PROVINCE UWI. Long = 63* 45'24.460 W :UNIQUE WELL ID LIC . P-129 :LICENSE NUMBER CTRY . CA :COUNTRY (WWW code) DATE. 10-Oct-2007 :LOG DATE {DD-MMM-YYYY} SRVC . Schlumberger :SERVICE COMPANY LATI .DEG :LATITUDE LONG .DEG :LONGITUDE GDAT . :GeoDetic Datum SECT . 45.20 Deg N :Section RANG . PD 176 :Range TOWN . 63.75 Deg W :Township
import welly
import re

def transform_ll(text):
    def callback(match):
        d = match.group(1).strip()
        m = match.group(2).strip()
        s = match.group(3).strip()
        c = match.group(4).strip()
        if c.lower() in ('w', 's') and d[0] != '-':
            d = '-' + d
        return ' '.join([d, m, s])
    pattern = re.compile(r""".+?([-0-9]+?).? ?([0-9]+?).? ?([\.0-9]+?).? +?([NESW])""", re.I)
    text = pattern.sub(callback, text)
    return welly.utils.dms2dd([float(i) for i in text.split()])

print(transform_ll("""Lat = 45* 12' 34.237" N"""))
print(transform_ll("""Long = 63* 45'24.460 W"""))

remap = {
    'LATI': 'LOC',   # Use LOC for the parameter LATI.
    'LONG': 'UWI',   # Use UWI for the parameter LONG.
    'SECT': None,    # Use nothing for the parameter SECT.
    'RANG': None,    # Use nothing for the parameter RANG.
    'TOWN': None,    # Use nothing for the parameter TOWN.
}

funcs = {
    'LATI': transform_ll,  # Pass LATI through this function before loading.
    'LONG': transform_ll,  # Pass LONG through it too.
    'UWI': lambda x: "No name, oh no!"
}

w = Well.from_las('data/P-129_out.LAS', remap=remap, funcs=funcs)

w.location
w.location.crs  # Should be empty.
w.uwi
tutorial/06_Welly_and_LAS.ipynb
agile-geoscience/welly
apache-2.0
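The `welly.utils.dms2dd` call above just collapses a degrees/minutes/seconds triple into decimal degrees. A standalone sketch of that arithmetic (this local `dms2dd` is illustrative, not welly's own signature):

```python
def dms2dd(d, m, s):
    # Degrees/minutes/seconds -> decimal degrees; a negative degrees field
    # (south latitude or west longitude) carries its sign to the result
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

lat = dms2dd(45, 12, 34.237)    # Lat = 45* 12' 34.237" N
lon = dms2dd(-63, 45, 24.460)   # Long = 63* 45' 24.460" W
```

These match the two header values parsed by `transform_ll` above: roughly 45.2095 and -63.7568 decimal degrees.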
Trajectory path After the slash, enter the path of the selected trajectory with the lowest RMSD.
ruta = os.getcwd()

c = input('Name of the trajectory to analyze... Example: run001....')
if os.path.isdir(c):
    indir = '/' + c
    print(indir)
    ruta_old_traj = ruta + indir
    print(ruta)
    print(ruta_old_traj)
else:
    print('The folder ' + c + ' does not exist...')

ruta_scripts = ruta + '/scripts_fimda'
print(ruta_scripts)
if os.path.exists(ruta_scripts):
    print('Path identified for looking up additional scripts ===>', ruta_scripts)
else:
    print('The additional-scripts folder does not exist; copy it to ' + ruta_scripts + ' ..!!!')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Converting the trajectory DCD -> XTC The following commands convert the DCD trajectory contained in the selected folder to XTC format. Create the new path where the converted trajectories will be placed.
# Check whether the new folder for the converted trajectories exists
nuevaruta = ruta + indir + '_Dinamica'
print(nuevaruta)
if not os.path.exists(nuevaruta):
    os.makedirs(nuevaruta)
    print('The path has been created ===>', nuevaruta)
else:
    print('The path ' + nuevaruta + ' already exists..!!!')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Performing the trajectory conversion
print('Getting the files to convert')

# Find the DCD, PDB and PSF files needed for the operations
for filename in os.listdir(ruta_old_traj):
    if filename.endswith('.dcd'):
        dcd_file = filename
    if filename.endswith('.psf'):
        psf_file = filename
    if filename.endswith('.pdb'):
        pdb_file = filename

print('pdb file =>', pdb_file)
print('psf file =>', psf_file)
print('dcd file =>', dcd_file)

print('Moving to ....', ruta_old_traj)
os.chdir(ruta_old_traj)

print('\nRunning CATDCD to convert the trajectory....')
output_catdcd = !catdcd -otype trr -o output.trr $dcd_file
print(output_catdcd.n)

print('\nRunning TRJCONV to convert the trajectory....')
output_trjconv = !trjconv -f output.trr -o output.xtc -timestep 20
#print(output_trjconv.n)

print('\nDeleting temporary conversion files...')
output_rm = !rm output.trr

print('\nMoving the output files to ' + nuevaruta)
source_file = ruta_old_traj + '/output.xtc'
dest_file = nuevaruta + '/output.xtc'
shutil.move(source_file, dest_file)

print('\nCopying the ionized.pdb file to ' + nuevaruta)
source_file = ruta_old_traj + '/ionized.pdb'
dest_file = nuevaruta + '/ionized.pdb'
shutil.copy(source_file, dest_file)

print('\nCopying the ionized.psf file to ' + nuevaruta)
source_file = ruta_old_traj + '/ionized.psf'
dest_file = nuevaruta + '/ionized.psf'
shutil.copy(source_file, dest_file)

print('\nTrajectory converted, returning to ' + ruta)
os.chdir(ruta)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Loading the new trajectory in VMD for review
print('Visualizing the new trajectory')
file_psf = nuevaruta + '/' + psf_file
traj = nuevaruta + '/output.xtc'
!vmd $file_psf $traj
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Computing the RMSD with Gromacs 5 The following script obtains the trajectory RMSD using Gromacs 5. Creating the RMSD folder
# Create the directory for the RMSD analysis
ruta_rmsd = nuevaruta + '/rmsd'
print(ruta_rmsd)
if not os.path.exists(ruta_rmsd):
    os.makedirs(ruta_rmsd)
    print('The path has been created ===>', ruta_rmsd)
else:
    print('The path ' + ruta_rmsd + ' already exists..!!!')

print('Moving to ....', ruta_rmsd)
os.chdir(ruta_rmsd)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Computing the RMSD with option 3 'C-alpha' Select group for least squares fit Group 3 ( C-alpha) Select a group: 3 Selected 3: 'C-alpha' Select group for RMSD calculation Group 3 ( C-alpha) Select a group: 3 Selected 3: 'C-alpha'
print('Running the rmsd analysis...')
# Write the per-residue average to aver.xvg, which the cells below read
!echo 3 3 | g_rms -f ../output.xtc -s ../ionized.pdb -a aver.xvg
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
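What g_rms reports for each frame is the root-mean-square deviation of the fitted coordinates from the reference structure. A small sketch of that formula on toy coordinates (NumPy only; the shapes and values here are made up for illustration):

```python
import numpy as np

def rmsd(a, b):
    # RMSD between two (n_atoms, 3) coordinate sets, assuming they
    # have already been least-squares fitted onto each other
    diff = a - b
    return np.sqrt((diff * diff).sum(axis=1).mean())

ref = np.zeros((5, 3))
moved = ref + np.array([0.3, 0.0, 0.4])   # rigid shift of length 0.5
```

Every atom deviates by exactly sqrt(0.3^2 + 0.4^2) = 0.5, so the RMSD is 0.5 as well.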
Creating the rmsd.dat file for visualization in XMGRACE The output file rmsd.dat is generated; it should be opened with Xmgrace and saved in PNG format.
#Inicializando vector rmsd=[] try: archivo = open( 'rmsd.xvg' ) except IOError: print ('No se pudo abrir el archivo o no existe·..') i=0 for linea in archivo.readlines(): fila = linea.strip() sl = fila.split() cadena=sl[0] if (not '#' in cadena) and (not '@' in cadena): num=float(sl[0]) #num2=float(sl[1]) num=num/1000 rmsd.append(repr(num)+'\t'+sl[1]+'\n') i=i+1 #Escribiendo el archivo RMSD f = open('rmsd.dat', 'w') #f.write('@ title "RMSD" \n') f.write('@ xaxis label " Time (ns)" \n') f.write('@ xaxis label char size 1.480000\n') f.write('@ xaxis bar linewidth 3.0\n') f.write('@ xaxis ticklabel char size 1.480000\n') f.write('@ yaxis label " RMSD (nm)" \n') f.write('@ yaxis label char size 1.480000\n') f.write('@ yaxis bar linewidth 3.0\n') f.write('@ yaxis ticklabel char size 1.480000\n') f.write('@ s0 line linewidth 1.5\n') f.write('@TYPE xy \n') #f.write('@ subtitle "C-alpha after lsq fit to C-alpha" \n') f.write("".join(rmsd)) f.close() #Cargando el archivo para visualizar en xmgrace !xmgrace rmsd.dat #Cargando la imagen generada en xmgrace Image(filename='rmsd.png')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Creating the rmsd_residue.dat file for viewing with xmgrace The rmsd_residue.dat file is generated, formatted for viewing in Xmgrace, where it should be saved as a PNG image.
#Inicializando vector rmsd_residue=[] try: archivo_rmsd = open( 'aver.xvg' ) except IOError: print ('No se pudo abrir el archivo o no existe·..') i=1 for linea in archivo_rmsd.readlines(): fila = linea.strip() sl = fila.split() cadena=sl[0] if (not '#' in cadena) and (not '@' in cadena): num=int(sl[0]) print ('Residuo =>',num+1) rmsd_residue.append(repr(num+1)+'\t'+sl[1]+'\n') i=i+1 #Escribiendo el archivo RMSD_RESIDUE f = open('rmsd_residue.dat', 'w') #f.write('@ title "C-alpha" \n') f.write('@ xaxis label "Residue" \n') f.write('@ xaxis label char size 1.480000\n') f.write('@ xaxis bar linewidth 3.0\n') f.write('@ xaxis ticklabel char size 1.480000\n') f.write('@ yaxis label " RMSD (nm)" \n') f.write('@ yaxis label char size 1.480000\n') f.write('@ yaxis bar linewidth 3.0\n') f.write('@ yaxis ticklabel char size 1.480000\n') f.write('@ s0 line linewidth 2.5\n') f.write('@ s0 symbol 1\n') f.write('@ s0 symbol size 1.000000\n') f.write('@ s0 symbol color 1\n') f.write('@ s0 symbol pattern 1\n') f.write('@ s0 symbol fill color 2\n') f.write('@ s0 symbol fill pattern 1\n') f.write('@ s0 symbol linewidth 1.0\n') f.write('@TYPE xy \n') f.write("".join(rmsd_residue)) f.close() !xmgrace rmsd_residue.dat #Cargando la imagen generada en xmgrace Image(filename='rmsd_residue.png')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Plotting the RMSD with Matplotlib The output plot for matplotlib is generated directly from rmsd.xvg.
data_rmsd = np.loadtxt('rmsd.xvg', comments=['#', '@'])

# Thicker frame
fig = pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top', 'bottom', 'left', 'right']:
    ax.spines[axis].set_linewidth(4)

# Format the axis values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

pl.plot(data_rmsd[:, 0] / 1000, data_rmsd[:, 1], linewidth=2, markeredgewidth=3, color='black')
pl.xlabel("Time (ns)", fontsize=40)
pl.ylabel('RMSD (nm)', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Plotting the per-residue RMSD with Matplotlib The output plot for matplotlib is generated directly from aver.xvg.
data_rmsd_res = np.loadtxt('aver.xvg', comments=['#', '@'])

# Thicker frame
fig = pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top', 'bottom', 'left', 'right']:
    ax.spines[axis].set_linewidth(4)

# Format the axis values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

pl.plot(data_rmsd_res[:, 0] + 1, data_rmsd_res[:, 1], '-o', color='black',
        markersize=25, markerfacecolor='red', markeredgecolor='black',
        markeredgewidth=3, linewidth=4)
pl.xlabel("Residue", fontsize=40)
pl.ylabel('RMSD (nm)', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_rmsd_res[:, 1]))
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
RMSF An rmsf folder is created to store the generated files.
# Create the directory for the RMSF analysis
ruta_rmsf = nuevaruta + '/rmsf'
print(ruta_rmsf)
if not os.path.exists(ruta_rmsf):
    os.makedirs(ruta_rmsf)
    print('The path has been created ===>', ruta_rmsf)
else:
    print('The path ' + ruta_rmsf + ' already exists..!!!')

print('Moving to ....', ruta_rmsf)
os.chdir(ruta_rmsf)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Computing the RMSF with option 3 'C-alpha'
print('Running the rmsf analysis...')
!echo 3 | g_rmsf -f ../output.xtc -s ../ionized.pdb -oq bfac.pdb -o rmsf.xvg -res
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
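Whereas the RMSD is one number per frame, g_rmsf gives one number per atom (or residue): the fluctuation of that atom about its average position over the trajectory. A toy sketch of the definition; the trajectory shapes and values are illustrative only:

```python
import numpy as np

def rmsf(traj):
    # traj has shape (n_frames, n_atoms, 3); the RMSF of each atom is the
    # root-mean-square displacement from its time-averaged position
    mean_pos = traj.mean(axis=0)
    sq_disp = ((traj - mean_pos) ** 2).sum(axis=-1)   # per frame, per atom
    return np.sqrt(sq_disp.mean(axis=0))              # one value per atom

# Atom 0 never moves; atom 1 oscillates +/-1 along x around a mean of 0
traj = np.zeros((4, 2, 3))
traj[:, 1, 0] = [1.0, -1.0, 1.0, -1.0]
vals = rmsf(traj)
```

The fixed atom gets RMSF 0 and the oscillating one gets RMSF 1, which is the kind of per-atom contrast the rmsf.xvg plot below shows along the chain.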
Creating the rmsf.dat file for visualization in XMGRACE The output file rmsf.dat is generated; it should be opened with Xmgrace and saved in PNG format.
#Inicializando vector rmsf=[] rmsf_x=[] rmsf_y=[] try: file_rmsf = open( 'rmsf.xvg' ) except IOError: print ('No se pudo abrir el archivo o no existe·..') i=0 for linea in file_rmsf.readlines(): fila = linea.strip() sl = fila.split() cadena=sl[0] if (not '#' in cadena) and (not '@' in cadena): print ('Residue =>',cadena) rmsf.append(sl[0]+'\t'+sl[1]+'\n') rmsf_x.append(int(sl[0])) rmsf_y.append(float(sl[1])) i=i+1 file_rmsf.close() #Escribiendo el archivo RMSD f = open('rmsf.dat', 'w') #f.write('@ title "RMSF fluctuation" \n') f.write('@ xaxis label " Residue" \n') f.write('@ xaxis label char size 1.480000\n') f.write('@ xaxis bar linewidth 3.0\n') f.write('@ xaxis ticklabel char size 1.480000\n') f.write('@ yaxis label "RMSF (nm)" \n') f.write('@ yaxis label char size 1.480000\n') f.write('@ yaxis bar linewidth 3.0\n') f.write('@ yaxis ticklabel char size 1.480000\n') f.write('@ s0 line linewidth 2.5\n') f.write('@ s0 symbol 1\n') f.write('@ s0 symbol size 1.000000\n') f.write('@ s0 symbol color 1\n') f.write('@ s0 symbol pattern 1\n') f.write('@ s0 symbol fill color 2\n') f.write('@ s0 symbol fill pattern 1\n') f.write('@ s0 symbol linewidth 1.0\n') f.write('@TYPE xy \n') f.write("".join(rmsf)) f.close() !xmgrace rmsf.dat #Cargando la imagen generada en xmgrace Image(filename='rmsf.png')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Plotting the RMSF with Matplotlib The output plot for matplotlib is generated directly from rmsf.xvg.
data_rmsf = np.loadtxt('rmsf.xvg', comments=['#', '@'])

# Thicker frame
fig = pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top', 'bottom', 'left', 'right']:
    ax.spines[axis].set_linewidth(4)

# Format the axis values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

pl.plot(data_rmsf[:, 0], data_rmsf[:, 1], '-o', color='black',
        markersize=25, markerfacecolor='red', markeredgecolor='black',
        markeredgewidth=3, linewidth=4)
pl.xlabel("Residue", fontsize=40)
pl.ylabel('RMSF (nm)', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_rmsf[:, 1]))
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
B-factors Generating the file for visualization with XMGRACE
#Inicializando vector bfactors=[] try: file_bfactor = open( 'bfac.pdb' ) except IOError: print ('No se pudo abrir el archivo o no existe·..') i=0 for linea in file_bfactor.readlines(): fila = linea.strip() sl = fila.split() if (sl[0]=='ATOM'): #print (sl[0]) idresidue=fila[23:26] bfactor=fila[60:66] print (idresidue + '\t'+bfactor) bfactors.append(idresidue+'\t'+bfactor+'\n') #i=i+1 #Escribiendo el archivo BFACTOR.dat f = open('bfactor.dat', 'w') #f.write('@ title "B-factors" \n') foo = 'baz "\\"' f.write('@ xaxis label " Residue" \n') f.write('@ xaxis label char size 1.480000\n') f.write('@ xaxis bar linewidth 3.0\n') f.write('@ xaxis ticklabel char size 1.480000\n') f.write('@ yaxis label "B-factors (' +"\\"+'cE'+"\\"+'C)"\n') f.write('@ yaxis label char size 1.480000\n') f.write('@ yaxis bar linewidth 3.0\n') f.write('@ yaxis ticklabel char size 1.480000\n') f.write('@ s0 line linewidth 2.5\n') f.write('@ s0 symbol 1\n') f.write('@ s0 symbol size 1.000000\n') f.write('@ s0 symbol color 1\n') f.write('@ s0 symbol pattern 1\n') f.write('@ s0 symbol fill color 2\n') f.write('@ s0 symbol fill pattern 1\n') f.write('@ s0 symbol linewidth 1.0\n') f.write('@TYPE xy \n') f.write("".join(bfactors)) f.close() !xmgrace bfactor.dat #Cargando la imagen generada en xmgrace Image(filename='bfactor.png')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
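The B-factor column that g_rmsf writes into bfac.pdb is the isotropic temperature factor derived from the mean-square fluctuation, B = (8&pi;&sup2;/3)&#10216;u&sup2;&#10217;. A quick check of that relation; units are assumed to be &Aring;ngstr&ouml;m here (the RMSF plots above are in nm, so a factor of 10 applies before comparing):

```python
import math

def bfactor_from_rmsf(rmsf_A):
    # Isotropic B-factor (in A^2) from an RMSF value (in A):
    # B = (8 * pi^2 / 3) * <u^2>
    return (8.0 * math.pi ** 2 / 3.0) * rmsf_A ** 2

b = bfactor_from_rmsf(0.5)   # an atom fluctuating 0.5 A about its mean
```

An RMSF of 0.5 &Aring; corresponds to a B-factor of about 6.58 &Aring;&sup2;.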
Generating the file for visualization with Matplotlib
#Inicializando vector bfactors=[] try: file_bfactor = open( 'bfac.pdb' ) except IOError: print ('No se pudo abrir el archivo o no existe·..') i=0 print ('Residuo' + '\t'+'bfactor') for linea in file_bfactor.readlines(): fila = linea.strip() sl = fila.split() if (sl[0]=='ATOM'): #print (sl[0]) idresidue=fila[23:26] bfactor=fila[60:66] print (idresidue + '\t'+bfactor) bfactors.append(idresidue+'\t'+bfactor+'\n') #i=i+1 #Escribiendo el archivo BFACTOR.dat f = open('bfactor.dat', 'w') f.write("".join(bfactors)) f.close() data_bfactor=np.loadtxt('bfactor.dat',comments=['#', '@']) #Engrosar marco fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0) ax = fig.add_subplot(111) for axis in ['top','bottom','left','right']: ax.spines[axis].set_linewidth(4) #Formateando los valores de los ejes #ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f')) pl.plot(data_bfactor[:,0], data_bfactor[:,1], '-o', color='black', markersize=25, markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, ) pl.xlabel('Residue', fontsize = 40) pl.ylabel('B-factors ('+ r'$\AA$'+')' , fontsize = 40) #pl.title('B-Factors', fontsize=40) pl.xticks(fontsize=30) pl.yticks(fontsize=30) pl.xlim(0, len(data_bfactor[:,1]))
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Secondary Structure The folder for the secondary-structure calculation is created.
# Create the directory for the secondary-structure analysis
ruta_ss = nuevaruta + '/estructura'
print(ruta_ss)
if not os.path.exists(ruta_ss):
    os.makedirs(ruta_ss)
    print('The path has been created ===>', ruta_ss)
else:
    print('The path ' + ruta_ss + ' already exists..!!!')

print('Moving to ....', ruta_ss)
os.chdir(ruta_ss)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Computing the secondary structure The dssp program must be available at /usr/local/bin, where Gromacs 5 links to it.
print('Running the secondary-structure analysis...')
!echo 5 | do_dssp -f ../output.xtc -s ../ionized.pdb -o sec_est.xpm -tu ns

print('\nConverting the file to eps...')
!xpm2ps -f sec_est.xpm -by 6 -bx .1 -o est_sec.eps

print('\nConverting to png...')
!convert -density 600 est_sec.eps -resize 1024x1024 est_sec.png

print('Loading the file...')
Image(filename='est_sec.png', width=1024)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
R-GYRATE An rgyro folder is created to store the generated files.
# Create the directory for the radius-of-gyration analysis
ruta_rgyro = nuevaruta + '/rgyro'
print(ruta_rgyro)
if not os.path.exists(ruta_rgyro):
    os.makedirs(ruta_rgyro)
    print('The path has been created ===>', ruta_rgyro)
else:
    print('The path ' + ruta_rgyro + ' already exists..!!!')

print('Moving to ....', ruta_rgyro)
os.chdir(ruta_rgyro)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Computing the radius of gyration with option (3) - C-alpha It is computed over the alpha carbons.
print('Running the rgyro analysis...')
!echo 3 | g_gyrate -f ../output.xtc -s ../ionized.pdb -o gyrate.xvg
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
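g_gyrate computes the mass-weighted radius of gyration, Rg = sqrt( &Sigma;&#7522; m&#7522; |r&#7522; &minus; r_cm|&sup2; / &Sigma;&#7522; m&#7522; ), a measure of how spread out the selected atoms are around their centre of mass. A toy sketch with unit masses and made-up coordinates:

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    # Mass-weighted Rg about the centre of mass
    coords = np.asarray(coords, dtype=float)
    masses = np.ones(len(coords)) if masses is None else np.asarray(masses, dtype=float)
    com = (coords * masses[:, None]).sum(axis=0) / masses.sum()
    sq_dist = ((coords - com) ** 2).sum(axis=1)
    return np.sqrt((masses * sq_dist).sum() / masses.sum())

# Four unit-mass points on the corners of a unit square in the xy-plane
square = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
rg = radius_of_gyration(square)
```

Every corner sits sqrt(0.5) from the centre (0.5, 0.5, 0), so Rg = sqrt(0.5), about 0.707.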
Generating the rgyro.dat file for analysis with XMGRACE
#Inicializando vector rgyro=[] try: file_rmsf = open( 'gyrate.xvg' ) except IOError: print ('No se pudo abrir el archivo o no existe·..') i=0 for linea in file_rmsf.readlines(): fila = linea.strip() sl = fila.split() cadena=sl[0] if (not '#' in cadena) and (not '@' in cadena): num=float(sl[0]) #num2=float(sl[1]) num=num/1000 rgyro.append(repr(num)+'\t'+sl[1]+'\n') i=i+1 #Escribiendo el archivo RGYRO.DAT f = open('rgyro.dat', 'w') #f.write('@ title "Radius of gyration" \n') f.write('@ xaxis label " Time (ns)" \n') f.write('@ xaxis label char size 1.480000\n') f.write('@ xaxis bar linewidth 3.0\n') f.write('@ xaxis ticklabel char size 1.480000\n') f.write('@ yaxis label "Rg (nm)" \n') f.write('@ yaxis label char size 1.480000\n') f.write('@ yaxis bar linewidth 3.0\n') f.write('@ yaxis ticklabel char size 1.480000\n') f.write('@ s0 line linewidth 2.5\n') f.write('@TYPE xy \n') f.write("".join(rgyro)) f.close() !xmgrace rgyro.dat #Cargando la imagen generada en xmgrace Image(filename='rgyro.png')
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Plotting the gyrate.xvg file with matplotlib
data_rgyro = np.loadtxt('gyrate.xvg', comments=['#', '@'])

# Thicker frame
fig = pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top', 'bottom', 'left', 'right']:
    ax.spines[axis].set_linewidth(4)

# Format the axis values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

pl.plot(data_rgyro[:, 0] / 1000, data_rgyro[:, 1], linewidth=2, color='black')
pl.xlabel("Time (ns)", fontsize=40)
pl.ylabel('Rg (nm)', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
RMSD of the Alpha Helices To run this analysis, load the protein's original PDB, found in the 01_BUILD folder. Open it with VMD and go to the menu EXTENSIONS -> ANALYSIS -> SEQUENCE VIEWER, from which you take the residue range of the Struct (H) field, given in the form "resid X1 to X2", where X1 is the first residue of the helix and X2 the last residue of the helix.
# Create the directory for the helix RMSD analysis
ruta_helix = nuevaruta + '/rmsd_helix'
print(ruta_helix)
if not os.path.exists(ruta_helix):
    os.makedirs(ruta_helix)
    print('The path has been created ===>', ruta_helix)
else:
    print('The path ' + ruta_helix + ' already exists..!!!')

print('Moving to ....', ruta_helix)
os.chdir(ruta_helix)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
Data input The input must be given in the form "resid X to X".
num = int(input('Number of helices in the protein:'))
print(num)

# Ask for the residue range of each helix, e.g. "resid 5 to 20"
indices = []
for h in range(1, num + 1):
    rng = input('Provide the index range of Helix %d:' % h)
    print(rng)
    indices.append(rng)

psf = ruta_old_traj + '/' + psf_file
dcd = ruta_old_traj + '/' + dcd_file

# Write one VMD/Tcl RMSD script per helix (the original cell repeated this
# block verbatim for up to four helices; a loop does the same work)
for h, sel_text in enumerate(indices, start=1):
    f = open('ha%d.tcl' % h, 'w')
    f.write('set psfFile ' + psf + ' \n')
    f.write('set dcdFile ' + dcd + ' \n')
    f.write('\nmol load psf $psfFile dcd $dcdFile\n')
    f.write('set outfile [open rmsd_ha%d.dat w]\n' % h)
    f.write('set nf [molinfo top get numframes]\n')
    f.write('\n#RMSD calculation loop\n')
    f.write('set f1 [atomselect top "' + sel_text + ' " frame 0]\n')
    f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
    f.write('  set sel [atomselect top "' + sel_text + ' " frame $i]\n')
    f.write('  $sel move [measure fit $sel $f1]\n')
    f.write('  set time [expr {$i +1}]\n')
    f.write('  puts -nonewline $outfile "[measure rmsd $sel $f1]"\n')
    # Leading space so the RMSD and time columns do not fuse on one line
    f.write('  puts $outfile " $time $time"\n')
    f.write('}\n')
    f.write('close $outfile')
    f.close()

# Run each script through VMD in text mode
for h in range(1, num + 1):
    !vmd -dispdev text < ha{h}.tcl

# Plot all helices on one figure: thicker frame, formatted axes
fig = pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top', 'bottom', 'left', 'right']:
    ax.spines[axis].set_linewidth(4)
ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))

colors = ['black', 'red', 'green', 'blue']
for h in range(1, num + 1):
    data = np.loadtxt('rmsd_ha%d.dat' % h, comments=['#', '@'])
    # Column 1 is the frame number (20 ps/frame -> ns); column 0 is the RMSD
    # in Angstrom (-> nm)
    pl.plot(data[:, 1] * 0.02, data[:, 0] / 10, linewidth=3, color=colors[h - 1])

pl.xlabel("Time (ns)", fontsize=40)
pl.ylabel('RMSD (nm)', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
dinamica-2puentes.ipynb
lguarneros/fimda
gpl-3.0
SASA

Creating the folder structure for the calculation
### Creating the directory for the SASA analysis
### NOTE: computed with gromacs4, since it gives correct results compared with gromacs5
ruta_sasa = nuevaruta+'/sasa'
print ( ruta_sasa )
if not os.path.exists(ruta_sasa):
    os.makedirs(ruta_sasa)
    print ('Se ha creado la ruta ===>',ruta_sasa)
else:
    print ("La ruta "+ruta_sasa+" existe..!!!")
print ( 'Nos vamos a ....', ruta_sasa )
os.chdir( ruta_sasa )
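As an aside, the exists/makedirs pattern used throughout these cells can be condensed into one helper. This is a minimal, self-contained sketch; the `ensure_dir` name and the temporary path are illustrative, not part of the notebook:

```python
import os
import tempfile

def ensure_dir(path):
    """Create `path` (and any missing parents) if it does not exist yet.
    Returns True when the directory was created, False when it was already there."""
    if os.path.isdir(path):
        return False
    os.makedirs(path)
    return True

# Demo on a throwaway location, mirroring how `ruta_sasa` is handled above.
base = tempfile.mkdtemp()
target = os.path.join(base, "sasa")
first = ensure_dir(target)   # created now
second = ensure_dir(target)  # already present
```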
Running the SASA analysis with Gromacs4
print ('Ejecutando el análisis de sasa con Gromacs 4 utilizando la opción 1 (protein)...')
!echo 1 1 | /opt/gromacs4/bin/g_sas -f ../output.xtc -s ../ionized.pdb -o solven-accessible-surface.xvg -oa atomic-sas.xvg -or residue-sas.xvg
Creating the sasa-residuo.dat file for XMGRACE output
#Initializing the vector
sasa_residuo=[]
try:
    residue_sas = open( 'residue-sas.xvg' )
except IOError:
    print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in residue_sas.readlines():
    fila = linea.strip()
    sl = fila.split()
    cadena=sl[0]
    if (not '#' in cadena) and (not '@' in cadena):
        print ('Residue =>',cadena)
        sasa_residuo.append(sl[0]+'\t'+sl[1]+'\n')
        i=i+1

#Writing the per-residue SASA file
f = open('sasa-residuo.dat', 'w')
#f.write('@ title "Area per residue over the trajectory" \n')
f.write('@ xaxis label " Residue " \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "Area (nm' +"\\"+'S2'+"\\N"+')"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
f.write('@ s0 line linewidth 2.5\n')
f.write('@ s0 symbol 1\n')
f.write('@ s0 symbol size 1.000000\n')
f.write('@ s0 symbol color 1\n')
f.write('@ s0 symbol pattern 1\n')
f.write('@ s0 symbol fill color 2\n')
f.write('@ s0 symbol fill pattern 1\n')
f.write('@ s0 symbol linewidth 1.0\n')
f.write('@TYPE xy \n')
f.write("".join(sasa_residuo))
f.close()

!xmgrace sasa-residuo.dat
#Loading the image generated in xmgrace
Image(filename='sasa-residuo.png')
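The '#'/'@' filtering loop above is the core of reading any xmgrace-style `.xvg` file. A minimal stand-alone sketch of the same idea; the `read_xvg` helper and the sample lines are illustrative:

```python
def read_xvg(lines):
    """Parse xmgrace-style .xvg content: skip '#' comment lines and
    '@' formatting directives, return the data rows as lists of floats."""
    rows = []
    for line in lines:
        s = line.strip()
        if not s or s.startswith(('#', '@')):
            continue
        rows.append([float(tok) for tok in s.split()])
    return rows

sample = [
    '# produced by g_sas',
    '@ xaxis label "Residue"',
    '1 0.52',
    '2 1.07',
]
data = read_xvg(sample)
```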
Loading residue-sas.xvg for visualization in Matplotlib

The output plot is generated with matplotlib.
data_sasa_residue=np.loadtxt('residue-sas.xvg',comments=['#', '@'])

#Thicken the frame
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
    ax.spines[axis].set_linewidth(4)

#Formatting the axis values
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))

pl.plot(data_sasa_residue[:,0], data_sasa_residue[:,1], '-o', color='black', markersize=25,
        markerfacecolor='red', markeredgecolor='black', markeredgewidth=3, linewidth = 4)
pl.xlabel("Residue", fontsize = 30)
#pl.ylabel('Area (nm2)', fontsize = 30)
pl.ylabel('Area ( nm'+ r'$\ ^2$'+')' , fontsize = 40)
#pl.title('Area per residue over the trajectory', fontsize=40)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)
pl.xlim(0, len(data_sasa_residue[:,1]))
Creating the sasa.dat file for XMGRACE output
#Initializing the vector
sasa=[]
try:
    sasafile = open( 'solven-accessible-surface.xvg' )
except IOError:
    print ('No se pudo abrir el archivo o no existe·..')
i=0
for linea in sasafile.readlines():
    fila = linea.strip()
    sl = fila.split()
    cadena=sl[0]
    if (not '#' in cadena) and (not '@' in cadena):
        #print (cadena)
        num=float(sl[0])
        num=num/1000
        sasa.append(repr(num)+'\t'+sl[1]+'\t'+sl[2]+'\t'+sl[3]+'\n')
        i=i+1
        cel2=float(sl[2])
        print(cel2)

#Writing the SASA file
f = open('sasa.dat', 'w')
#f.write('@ title "Solven Accessible Surface" \n')
f.write('@ xaxis label " Time (ns) " \n')
f.write('@ xaxis label char size 1.480000\n')
f.write('@ xaxis bar linewidth 3.0\n')
f.write('@ xaxis ticklabel char size 1.480000\n')
f.write('@ yaxis label "Area (nm' +"\\"+'S2'+"\\N"+')"\n')
f.write('@ yaxis label char size 1.480000\n')
f.write('@ yaxis bar linewidth 3.0\n')
f.write('@ yaxis ticklabel char size 1.480000\n')
#f.write('@ s0 legend "Hydrophobic"\n')
#if (cel2>0):
#    f.write('@ s1 legend "Hydrophilic"\n')
f.write('@TYPE xy \n')
f.write("".join(sasa))
f.close()

!xmgrace sasa.dat
#Loading the image generated in xmgrace
Image(filename='sasa.png')
Loading solven-accessible-surface.xvg to plot with Matplotlib
data_sasa=np.loadtxt('solven-accessible-surface.xvg',comments=['#', '@'])

#Thicken the frame
fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)
ax = fig.add_subplot(111)
for axis in ['top','bottom','left','right']:
    ax.spines[axis].set_linewidth(4)

#Formatting the axis values
#ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))

pl.xlabel("Time (ns)", fontsize = 40)
pl.ylabel('Area ( nm'+ r'$\ ^2$'+')' , fontsize = 40)
#pl.title('Solvent Accessible Surface', fontsize=50)
pl.xticks(fontsize=30)
pl.yticks(fontsize=30)

dato=data_sasa[:,2]
dato2=dato[0]
if (dato2>0):
    pl.plot(data_sasa[:,0]/1000, data_sasa[:,1], linewidth = 2, color='black')
    pl.plot(data_sasa[:,0]/1000, data_sasa[:,2], linewidth = 2, color='red')
else:
    pl.plot(data_sasa[:,0]/1000, data_sasa[:,1], linewidth = 2, color='black')
RMSD MATRIX
### Creating the directory for the RMSD matrix analysis
ruta_m_rmsd = nuevaruta+'/matriz'
print ( ruta_m_rmsd )
if not os.path.exists(ruta_m_rmsd):
    os.makedirs(ruta_m_rmsd)
    print ('Se ha creado la ruta ===>',ruta_m_rmsd)
else:
    print ("La ruta "+ruta_m_rmsd+" existe..!!!")
print ( 'Nos vamos a ....', ruta_m_rmsd )
os.chdir( ruta_m_rmsd )

print ('\nCopiando el archivo rmsd_matrix.tcl a '+ruta_m_rmsd)
source_file=ruta_scripts+'/rmsd_matriz/rmsd_matrix.tcl'
dest_file=ruta_m_rmsd+'/rmsd_matrix.tcl'
shutil.copy(source_file,dest_file)

#print ( 'Nos vemos a ....', ruta_old_traj )
#os.chdir( ruta_old_traj )
file_dcd=ruta_old_traj+'/'+dcd_file
file_psf=ruta_old_traj+'/'+psf_file
print (file_dcd)

print ('\nEjecutando CATDCD para obtener 100 frames de la trayectoria original....')
output_catdcd=!catdcd -o 100.dcd -stride 50 $file_dcd
print (output_catdcd.n)
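The quantity `rmsd_matrix.tcl` later produces is a symmetric frame-vs-frame RMSD matrix. A minimal numpy sketch of that calculation, without the rotational fit that VMD's `measure fit` performs; the `rmsd_matrix` helper and random frames are illustrative:

```python
import numpy as np

def rmsd(a, b):
    """RMSD between two (n_atoms, 3) coordinate arrays (no fitting)."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def rmsd_matrix(frames):
    """Symmetric matrix of pairwise RMSDs between trajectory frames."""
    n = len(frames)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            m[i, j] = m[j, i] = rmsd(frames[i], frames[j])
    return m

rng = np.random.default_rng(0)
frames = [rng.random((5, 3)) for _ in range(4)]  # 4 toy frames, 5 atoms each
M = rmsd_matrix(frames)
```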
Load the rmsd_matrix script with VMD on the new trajectory

Start VMD, go to the Extensions -> Tk Console menu, then copy and run the following command sequence:

```tcl
source rmsd_matrix.tcl
rmsd_matrix -mol top -seltext "name CA" -frames all -o salida.dat
exit
```
#Starting VMD to load the rmsd_matrix.tcl script
!vmd 100.dcd $file_psf

ruta_matriz=os.getcwd()
if os.path.isfile('salida.dat'):
    print ('El archivo salida.dat existe')
else:
    print ('El archivo salida.dat no existe.. ejecutar desde MATRIZ DE RMSD...')
Plotting the output file
#Creating the plot
data_matriz=np.loadtxt('salida.dat',comments=['#', '@'])
print(data_matriz.shape)

pl.figure(figsize=(20, 12), dpi=100)
imgplot = pl.imshow(data_matriz, origin='lower', cmap=pl.cm.Greens, interpolation='nearest')
#imgplot = pl.imshow(data_matriz, origin='lower', cmap=pl.cm.coolwarm, interpolation='nearest')
pl.xlabel("Time (ns)", fontsize = 30)
pl.ylabel('Time (ns)', fontsize = 30)
#pl.suptitle('RMSD', fontsize=50)
#pl.title('C-Alpha RMSD matrix', fontsize=40)
pl.xticks(fontsize=20)
pl.yticks(fontsize=20)
pl.xlim(0, 100)
pl.ylim(0, 100)
pl.colorbar()
Minimum distance matrix
### Creating the directory for the minimum distance matrix analysis
ruta_matriz_dm = nuevaruta+'/matriz_dm'
print ( ruta_matriz_dm )
if not os.path.exists(ruta_matriz_dm):
    os.makedirs(ruta_matriz_dm)
    print ('Se ha creado la ruta ===>',ruta_matriz_dm)
else:
    print ("La ruta "+ruta_matriz_dm+" existe..!!!")
print ( 'Nos vamos a ....', ruta_matriz_dm )
os.chdir( ruta_matriz_dm )
Computing the minimum distance matrix

Select the backbone (option 4).
!echo 4 | g_mdmat -f ../output.xtc -s ../ionized.pdb -mean average -frames frames -dt 10000
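`g_mdmat` reports, for each pair of residues, the minimum distance between any two of their atoms. A toy numpy sketch of that definition; the `min_distance_matrix` helper and the random coordinates are illustrative:

```python
import numpy as np

def min_distance_matrix(residues):
    """For every pair of residues (each a (n_atoms, 3) array), compute
    the minimum atom-atom distance — the quantity g_mdmat reports."""
    n = len(residues)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # all atom-atom distances between residue i and residue j
            diff = residues[i][:, None, :] - residues[j][None, :, :]
            d[i, j] = np.sqrt((diff ** 2).sum(axis=2)).min()
    return d

rng = np.random.default_rng(1)
residues = [rng.random((3, 3)) + k for k in range(3)]  # 3 toy residues
D = min_distance_matrix(residues)
```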
Generating the files for visualization
!xpm2ps -f frames.xpm -o frames.eps
!xpm2ps -f average.xpm -o average.eps

print('\nConvirtiendo a png...')
!convert -density 600 frames.eps -resize 1024x1024 frames.png
!convert -density 600 average.eps -resize 1024x1024 average.png

print ('Cargando el archivo average...')
Image(filename='average.png', width=800)
Free Energy

To compute the free energy, the minimum and maximum values of the RMSD and of the radius of gyration are required, as well as the temperature at which the simulation was run. These values are the inputs for the calculation script.
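The internals of the calculation script are not shown here; the standard way such a surface is obtained is Boltzmann inversion of a 2D histogram, G = -kB·T·ln(P/Pmax), so the most populated bin sits at G = 0. A minimal sketch under that assumption; the `free_energy_surface` helper and the toy data are illustrative:

```python
import numpy as np

KB = 0.008314  # gas constant in kJ/(mol*K), the usual MD unit choice

def free_energy_surface(x, y, temperature, bins=50):
    """Boltzmann-invert a 2D histogram of two collective variables:
    G = -kB*T*ln(P/Pmax). Empty bins come out as +inf."""
    H, xedges, yedges = np.histogram2d(x, y, bins=bins)
    P = H / H.sum()
    with np.errstate(divide='ignore'):
        G = -KB * temperature * np.log(P / P.max())
    return G, xedges, yedges

rng = np.random.default_rng(2)
rg = rng.normal(1.2, 0.05, 5000)     # toy radius of gyration (nm)
rmsd = rng.normal(0.3, 0.04, 5000)   # toy RMSD (nm)
G, _, _ = free_energy_surface(rg, rmsd, temperature=300)
```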
### Creating the directory for the free-energy analysis
ruta_f_energy = nuevaruta+'/free_energy'
print ( ruta_f_energy )
if not os.path.exists(ruta_f_energy):
    os.makedirs(ruta_f_energy)
    print ('Se ha creado la ruta ===>',ruta_f_energy)
else:
    print ("La ruta "+ruta_f_energy+" existe..!!!")
print ( 'Nos vamos a ....', ruta_f_energy )
os.chdir( ruta_f_energy )

#Requesting the temperature
t=input('Temperatura a la cual se realizó la simulación:')
temperatura=int(t)
print ('Temperatura=>',temperatura)
Computing the rmsd and the r-gyro to obtain the minimum and maximum of each.
print ('Ejecutando el análisis de rmsd...')
!echo 3 3 | g_rms -f ../output.xtc -s ../ionized.pdb -a avgrp.xvg

print ('Ejecutando el análisis de rgyro...')
!echo 3 | g_gyrate -f ../output.xtc -s ../ionized.pdb -o gyrate.xvg
Copying the generateFES.py script into the analysis folder for the calculation
print ('\nCopiando el archivo generateFES.py a '+ruta_f_energy)
source_file=ruta_scripts+'/free_energy/generateFES.py'
dest_file=ruta_f_energy+'/generateFES.py'
shutil.copy(source_file,dest_file)

#Setting execute permissions
!chmod +x generateFES.py
Performing the Free Energy calculations
#Loading the RMSD values
data_rmsd=np.loadtxt('rmsd.xvg',comments=['#', '@'])
#Loading the R-GYRO values
data_rgyro=np.loadtxt('gyrate.xvg',comments=['#', '@'])

#Getting the minimum and maximum of the rmsd
min_rmsd=np.amin(data_rmsd[:,1])
max_rmsd=np.amax(data_rmsd[:,1])
print ('Minimo RMSD=>',min_rmsd)
print ('Máximo RMSD=>',max_rmsd)

#Getting the minimum and maximum of the r-gyro
min_rgyro=np.amin(data_rgyro[:,1])
max_rgyro=np.amax(data_rgyro[:,1])
print ('Minimo RGYRO=>',min_rgyro)
print ('Máximo RGYRO=>',max_rgyro)

#Creating the input files for the script
np.savetxt('rmsd.dat',data_rmsd[:,1], fmt='%1.7f')
np.savetxt('rgyro.dat',data_rgyro[:,1], fmt='%1.7f')
!paste rgyro.dat rmsd.dat > fes.dat

#Running the FES script
!python generateFES.py fes.dat $min_rgyro $max_rgyro $min_rmsd $max_rmsd 200 200 $temperatura FEES.dat

#Loading the generated file to plot with matplotlib
data_fes=np.loadtxt('FEES.dat',comments=['#', '@'])
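The `paste rgyro.dat rmsd.dat > fes.dat` step simply pairs the i-th value of each series into one row. The same pairing in pure Python, with an illustrative `paste_columns` helper:

```python
def paste_columns(col_a, col_b):
    """Python equivalent of the shell `paste` step: zip two equal-length
    columns into rows of (a_i, b_i) pairs."""
    return [(a, b) for a, b in zip(col_a, col_b)]

# Toy radius-of-gyration and RMSD values standing in for the .dat files.
rows = paste_columns([1.20, 1.21], [0.30, 0.33])
```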
Plotting with GNUplot
# This loads the magics for gnuplot
%load_ext gnuplot_kernel

#Configuring the GNUplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;

%%gnuplot
set output "free_energy.png"
set palette model RGB
set palette defined ( 0 '#000090',\
                      1 '#000fff',\
                      2 '#0090ff',\
                      3 '#0fffee',\
                      4 '#90ff70',\
                      5 '#ffee00',\
                      6 '#ff7000',\
                      7 '#ee0000',\
                      8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "Rg (nm)"
set ylabel "RMSD (nm)"
##Uncomment the following line if the scale starts at a value of 1 and run again
#set cbrange[8:10]
splot "FEES.dat" with pm3d
PCA
### Creating the directory for the PCA analysis
ruta_pca = nuevaruta+'/pca'
print ( ruta_pca )
if not os.path.exists(ruta_pca):
    os.makedirs(ruta_pca)
    print ('Se ha creado la ruta ===>',ruta_pca)
else:
    print ("La ruta "+ruta_pca+" existe..!!!")
print ( 'Nos vamos a ....', ruta_pca )
os.chdir( ruta_pca )

#Computing the covariance matrix
!echo 1 1 | g_covar -s ../ionized.pdb -f ../output.xtc -o eigenvalues.xvg -v eigenvectors.trr -xpma covar.xpm
Once the matrix has been computed, the eigenvalues and eigenvectors serve as input to generate the PCA. The following command represents the motion along the first and second eigenvectors.
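What `g_covar` plus `g_anaeig` do can be sketched in a few lines of numpy: diagonalize the covariance of the flattened coordinates, then project each frame onto the leading eigenvectors. The `pca_project` helper and random trajectory are illustrative; the real tools also least-squares-fit the frames first:

```python
import numpy as np

def pca_project(coords, n_components=2):
    """PCA of a trajectory: coords has shape (n_frames, n_atoms, 3).
    Returns each frame projected on the top `n_components` eigenvectors
    of the coordinate covariance matrix."""
    X = coords.reshape(coords.shape[0], -1)      # flatten atoms*3 per frame
    Xc = X - X.mean(axis=0)                      # remove the mean structure
    cov = np.cov(Xc, rowvar=False)
    w, v = np.linalg.eigh(cov)                   # ascending eigenvalues
    order = np.argsort(w)[::-1]                  # largest variance first
    top = v[:, order[:n_components]]
    return Xc @ top

rng = np.random.default_rng(3)
traj = rng.random((20, 4, 3))  # 20 toy frames, 4 atoms
proj = pca_project(traj)
```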
!echo 1 1 | g_anaeig -s ../ionized.pdb -f ../output.xtc -v eigenvectors.trr -eig eigenvalues.xvg -first 1 -last 2 -2d 2dproj_1_2.xvg

#pcaX, pcaY=np.loadtxt('2dproj_1_2.xvg',comments=['#', '@'], unpack=True)
data_pca=np.loadtxt('2dproj_1_2.xvg',comments=['#', '@'])

#Getting the minimum and maximum of the pca
min_pcaX=np.amin(data_pca[:,0])
max_pcaX=np.amax(data_pca[:,0])
print ('Minimo PCA_X=>',min_pcaX)
print ('Máximo PCA_X=>',max_pcaX)
min_pcaY=np.amin(data_pca[:,1])
max_pcaY=np.amax(data_pca[:,1])
print ('Minimo PCA_Y=>',min_pcaY)
print ('Máximo PCA_Y=>',max_pcaY)

#Creating the input files for the script
np.savetxt('PCA.dat',data_pca, fmt='%1.5f')

#Copying the generateFES script from the Free_energy folder
print ('\nCopiando el archivo generateFES.py a '+ruta_pca+ ' desde '+ ruta_f_energy)
source_file=ruta_f_energy+'/generateFES.py'
dest_file=ruta_pca+'/generateFES.py'
shutil.copy(source_file,dest_file)

#Running the FES script
!python generateFES.py PCA.dat $min_pcaX $max_pcaX $min_pcaY $max_pcaY 200 200 $temperatura FEES_PCA.dat
Plotting the file with gnuplot
#Reload the gnuplot kernel to clear its buffer
%reload_ext gnuplot_kernel

#Configuring the GNUplot output
%gnuplot inline pngcairo transparent enhanced font "arial,20" fontscale 1.0 size 1280,960; set zeroaxis;;

%%gnuplot
set output "pca.png"
set palette model RGB
set palette defined ( 0 '#000090',\
                      1 '#000fff',\
                      2 '#0090ff',\
                      3 '#0fffee',\
                      4 '#90ff70',\
                      5 '#ffee00',\
                      6 '#ff7000',\
                      7 '#ee0000',\
                      8 '#7f0000')
set view map
set dgrid3d
set pm3d interpolate 0,0
set xlabel "projection on eigenvector 1 (nm)"
set ylabel "projection on eigenvector 2 (nm)"
set title " "
##Uncomment the following line if the scale starts at a value of 1 and run again
#set cbrange[8:10]
splot "FEES_PCA.dat" with pm3d
Disulfide bridge analysis

This applies to 2 bridges, and uses the HTMD software.
from htmd import *
Creating the path

Directory for the data analysis.
### Creating the directory for the bridge RMSD analysis
ruta_rmsd_diedros = nuevaruta+'/rmsd_diedros'
print ( ruta_rmsd_diedros )
if not os.path.exists(ruta_rmsd_diedros):
    os.makedirs(ruta_rmsd_diedros)
    print ('Se ha creado la ruta ===>',ruta_rmsd_diedros)
else:
    print ("La ruta "+ruta_rmsd_diedros+" existe..!!!")
print ( 'Nos vamos a ....', ruta_rmsd_diedros)
os.chdir( ruta_rmsd_diedros )
Loading the disulfide bridges

For this analysis, review the psf_charmm.tcl file in the 01_BUILD folder, which contains the bridge definitions, such as:

patch DISU A:4 A:22
patch DISU A:8 A:18

The bridge number is determined by the order in which the bridges are defined in that file; for the example above:

DB1 4-22
DB2 8-18

The input will be the left- and right-hand indices of each bridge, from which the complete structure of each one is built, taking the index values for their respective analysis.
# Loading the molecule
mol = Molecule('../ionized.pdb')

# Requesting the input data
px1l=input('Índice del DB1 izquierdo:')
px1r=input('Índice del DB1 derecho:')
px2l=input('Índice del DB2 izquierdo:')
px2r=input('Índice del DB2 derecho:')
revisa1=1
revisa2=1
Getting the bridge indices
if (revisa1>0): #Obteniendo lado izquierdo del DB1 x1l_name=mol.get('name','resname CYS and noh and resid '+px1l) x1l_index=mol.get('index','resname CYS and noh and resid '+px1l) x1l_resid=mol.get('resid','resname CYS and noh and resid '+px1l) #Obteniendo lado derecho del DB1 x1r_name=mol.get('name','resname CYS and noh and resid '+px1r) x1r_index=mol.get('index','resname CYS and noh and resid '+px1r) x1r_resid=mol.get('resid','resname CYS and noh and resid '+px1r) if (revisa2>0): #Obteniendo el lado izquierdo del DB2 x2l_name=mol.get('name','resname CYS and noh and resid '+px2l) x2l_index=mol.get('index','resname CYS and noh and resid '+px2l) x2l_resid=mol.get('resid','resname CYS and noh and resid '+px2l) #Obteniendo el lado derecho del DB2 x2r_name=mol.get('name','resname CYS and noh and resid '+px2r) x2r_index=mol.get('index','resname CYS and noh and resid '+px2r) x2r_resid=mol.get('resid','resname CYS and noh and resid '+px2r) #Obteniendo la lista de índices de los puentes print ('Generando la lista de los índices para enviarlos') db1x1l=[] db1x2l=[] db1x3m=[] db1x2r=[] db1x1r=[] db1l_name_l=[] db1l_index_l=[] db1r_name_l=[] db1r_index_l=[] db2l_name_l=[] db2l_index_l=[] db2r_name_l=[] db2r_index_l=[] db3l_name_l=[] db3l_index_l=[] db3r_name_l=[] db3r_index_l=[] if (revisa1>0): #Obteniendo los índices del DB1 for i in range(len(x1l_name)): if (x1l_name[i]=='N' or x1l_name[i]=='CA' or x1l_name[i]=='CB' or x1l_name[i]=='SG'): db1l_name_l.append(str(x1l_name[i])) db1l_index_l.append(str(x1l_index[i])) for i in range(len(x1r_name)): if (x1r_name[i]=='N' or x1r_name[i]=='CA' or x1r_name[i]=='CB' or x1r_name[i]=='SG'): db1r_name_l.append(str(x1r_name[i])) db1r_index_l.append(str(x1r_index[i])) print ('DB1 X1L =>',db1l_name_l) print (db1l_index_l) print ('DB1 X1R =>',db1r_name_l) print (db1r_index_l) if (revisa2>0): #Obteniendo los índices del DB2 for i in range(len(x2l_name)): if (x2l_name[i]=='N' or x2l_name[i]=='CA' or x2l_name[i]=='CB' or x2l_name[i]=='SG'): 
db2l_name_l.append(str(x2l_name[i])) db2l_index_l.append(str(x2l_index[i])) for i in range(len(x2r_name)): if (x2r_name[i]=='N' or x2r_name[i]=='CA' or x2r_name[i]=='CB' or x2r_name[i]=='SG'): db2r_name_l.append(str(x2r_name[i])) db2r_index_l.append(str(x2r_index[i])) print ('DB2 X1L =>',db2l_name_l) print (db2l_index_l) print ('DB2 X1R =>',db2r_name_l) print (db2r_index_l)
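The `mol.get(...)` calls above filter atoms by residue name, residue id, and atom name. A tiny dictionary-based stand-in showing the same filtering logic; the `select` helper and the atom records are illustrative, not the HTMD API:

```python
def select(atoms, resname, resid, names):
    """Keep the indices of atoms matching the given residue name/id
    whose atom name is in `names` — the shape of the selections above."""
    return [a['index'] for a in atoms
            if a['resname'] == resname and a['resid'] == resid
            and a['name'] in names]

# Hypothetical atom records standing in for a loaded structure.
atoms = [
    {'index': 40, 'resname': 'CYS', 'resid': 4, 'name': 'N'},
    {'index': 41, 'resname': 'CYS', 'resid': 4, 'name': 'HN'},
    {'index': 47, 'resname': 'CYS', 'resid': 4, 'name': 'SG'},
    {'index': 90, 'resname': 'ALA', 'resid': 5, 'name': 'CA'},
]
picked = select(atoms, 'CYS', 4, {'N', 'CA', 'CB', 'SG'})
```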
Ordering the bridges in the form ['N', 'CA', 'CB', 'SG', 'SG', 'CB', 'CA', 'N']
#Generando el DB1 completo ordenado filas=8 col=2 DB1_i=[] DB1_N=[] DB2_i=[] DB2_N=[] DB3_i=[] DB3_N=[] for i in range(0,filas): DB1_N.append([' ']) DB1_i.append(['0']) DB2_N.append([' ']) DB2_i.append(['0']) DB3_N.append([' ']) DB3_i.append(['0']) if (revisa1>0): #Cargando índices para el puente 1 for i in range(len(db1l_name_l)): if db1l_name_l[i]=='N': DB1_N[0] = db1l_name_l[i] DB1_i[0]='index '+db1l_index_l[i] if db1l_name_l[i]=='CA': DB1_N[1] = db1l_name_l[i] DB1_i[1]='index '+db1l_index_l[i] if db1l_name_l[i]=='CB': DB1_N[2] = db1l_name_l[i] DB1_i[2]='index '+db1l_index_l[i] if db1l_name_l[i]=='SG': DB1_N[3] = db1l_name_l[i] DB1_i[3]='index '+db1l_index_l[i] for i in range(len(db1r_name_l)): if db1r_name_l[i]=='SG': DB1_N[4] = db1r_name_l[i] DB1_i[4]='index '+db1r_index_l[i] if db1r_name_l[i]=='CB': DB1_N[5] = db1r_name_l[i] DB1_i[5]='index '+db1r_index_l[i] if db1r_name_l[i]=='CA': DB1_N[6] = db1r_name_l[i] DB1_i[6]='index '+db1r_index_l[i] if db1r_name_l[i]=='N': DB1_N[7] = db1r_name_l[i] DB1_i[7]='index '+db1r_index_l[i] print ('Puente DB1 = resid '+px1l+':'+px1r) print ('Names DB1=>',DB1_i) print ('Index DB1=>',DB1_N) print ('\n') if (revisa2>0): #Cargando índices para el puente 2 for i in range(len(db2l_name_l)): if db2l_name_l[i]=='N': DB2_N[0] = db2l_name_l[i] DB2_i[0]='index '+db2l_index_l[i] if db2l_name_l[i]=='CA': DB2_N[1] = db2l_name_l[i] DB2_i[1]='index '+db2l_index_l[i] if db2l_name_l[i]=='CB': DB2_N[2] = db2l_name_l[i] DB2_i[2]='index '+db2l_index_l[i] if db2l_name_l[i]=='SG': DB2_N[3] = db2l_name_l[i] DB2_i[3]='index '+db2l_index_l[i] for i in range(len(db2r_name_l)): if db2r_name_l[i]=='SG': DB2_N[4] = db2r_name_l[i] DB2_i[4]='index '+db2r_index_l[i] if db2r_name_l[i]=='CB': DB2_N[5] = db2r_name_l[i] DB2_i[5]='index '+db2r_index_l[i] if db2r_name_l[i]=='CA': DB2_N[6] = db2r_name_l[i] DB2_i[6]='index '+db2r_index_l[i] if db2r_name_l[i]=='N': DB2_N[7] = db2r_name_l[i] DB2_i[7]='index '+db2r_index_l[i] print ('Puente DB2 = resid '+px2l+':'+px2r) 
print ('Names DB2=>',DB2_i) print ('Index DB2=>',DB2_N) print ('\n')
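The name-by-name if-chains above place each atom into a fixed slot; the same ordering can be sketched with a simple lookup. The `order_bridge` helper and the index values are illustrative:

```python
def order_bridge(left, right):
    """Arrange the heavy atoms of a disulfide bridge's two cysteines into
    the canonical ['N','CA','CB','SG','SG','CB','CA','N'] order.
    `left` and `right` map atom name -> atom index, in any input order."""
    seq = ['N', 'CA', 'CB', 'SG']
    return [left[n] for n in seq] + [right[n] for n in reversed(seq)]

# Hypothetical indices for the two cysteines of a DB1-style bridge.
db1 = order_bridge({'N': 40, 'CA': 42, 'CB': 44, 'SG': 47},
                   {'SG': 310, 'CB': 307, 'CA': 305, 'N': 303})
```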
Creating the tcl files for the bridge RMSD calculation

The output scripts are created in tcl format.
if (revisa1>0): #Creando script para DB1_x1l psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f1 = open('DB1_x1l.tcl', 'w') print(f1) f1.write('set psfFile '+ psf+' \n') f1.write('set dcdFile '+ dcd+' \n') f1.write('\nmol load psf $psfFile dcd $dcdFile\n') f1.write('set outfile ' +'[open ' +'db1_x1l.dat'+' w]\n') f1.write('set nf [molinfo top get numframes]\n') f1.write('\n#RMSD calculation loop\n') f1.write('set f1 [atomselect top "'+DB1_i[0]+' or '+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' " frame 0]\n') f1.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f1.write(' set sel [atomselect top "'+DB1_i[0]+' or '+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' " frame $i]\n') f1.write(' $sel move [measure fit $sel $f1]\n') f1.write(' set time [expr {$i +1}]\n') f1.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f1.write(' puts $outfile " $time"\n') f1.write('}\n') f1.write('close $outfile') f1.close() #Creando script para DB1_x2l psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f2 = open('DB1_x2l.tcl', 'w') print(f2) f2.write('set psfFile '+ psf+' \n') f2.write('set dcdFile '+ dcd+' \n') f2.write('\nmol load psf $psfFile dcd $dcdFile\n') f2.write('set outfile ' +'[open ' +'db1_x2l.dat'+' w]\n') f2.write('set nf [molinfo top get numframes]\n') f2.write('\n#RMSD calculation loop\n') f2.write('set f1 [atomselect top "'+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' " frame 0]\n') f2.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f2.write(' set sel [atomselect top "'+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' " frame $i]\n') f2.write(' $sel move [measure fit $sel $f1]\n') f2.write(' set time [expr {$i +1}]\n') f2.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f2.write(' puts $outfile " $time"\n') f2.write('}\n') f2.write('close $outfile') f2.close() #Creando script para DB1_x3m psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f3 = 
open('DB1_x3m.tcl', 'w') print(f3) f3.write('set psfFile '+ psf+' \n') f3.write('set dcdFile '+ dcd+' \n') f3.write('\nmol load psf $psfFile dcd $dcdFile\n') f3.write('set outfile ' +'[open ' +'db1_x3m.dat'+' w]\n') f3.write('set nf [molinfo top get numframes]\n') f3.write('\n#RMSD calculation loop\n') f3.write('set f1 [atomselect top "'+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' " frame 0]\n') f3.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f3.write(' set sel [atomselect top "'+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' " frame $i]\n') f3.write(' $sel move [measure fit $sel $f1]\n') f3.write(' set time [expr {$i +1}]\n') f3.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f3.write(' puts $outfile " $time"\n') f3.write('}\n') f3.write('close $outfile') f3.close() #Creando script para DB1_x2r psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f4 = open('DB1_x2r.tcl', 'w') print(f4) f4.write('set psfFile '+ psf+' \n') f4.write('set dcdFile '+ dcd+' \n') f4.write('\nmol load psf $psfFile dcd $dcdFile\n') f4.write('set outfile ' +'[open ' +'db1_x2r.dat'+' w]\n') f4.write('set nf [molinfo top get numframes]\n') f4.write('\n#RMSD calculation loop\n') f4.write('set f1 [atomselect top "'+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' " frame 0]\n') f4.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f4.write(' set sel [atomselect top "'+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' " frame $i]\n') f4.write(' $sel move [measure fit $sel $f1]\n') f4.write(' set time [expr {$i +1}]\n') f4.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f4.write(' puts $outfile " $time"\n') f4.write('}\n') f4.write('close $outfile') f4.close() #Creando script para DB1_x1r psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f5 = open('DB1_x1r.tcl', 'w') print(f5) f5.write('set psfFile '+ psf+' \n') f5.write('set dcdFile '+ dcd+' \n') f5.write('\nmol load psf 
$psfFile dcd $dcdFile\n') f5.write('set outfile ' +'[open ' +'db1_x1r.dat'+' w]\n') f5.write('set nf [molinfo top get numframes]\n') f5.write('\n#RMSD calculation loop\n') f5.write('set f1 [atomselect top "'+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' or '+DB1_i[7]+' " frame 0]\n') f5.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f5.write(' set sel [atomselect top "'+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' or '+DB1_i[7]+' " frame $i]\n') f5.write(' $sel move [measure fit $sel $f1]\n') f5.write(' set time [expr {$i +1}]\n') f5.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f5.write(' puts $outfile " $time"\n') f5.write('}\n') f5.write('close $outfile') f5.close() if (revisa2>0): ########################################################################################## ## Creando los archivos para DB2 ####################################################################################### #Creando script para DB2_x1l psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f6 = open('DB2_x1l.tcl', 'w') print(f6) f6.write('set psfFile '+ psf+' \n') f6.write('set dcdFile '+ dcd+' \n') f6.write('\nmol load psf $psfFile dcd $dcdFile\n') f6.write('set outfile ' +'[open ' +'db2_x1l.dat'+' w]\n') f6.write('set nf [molinfo top get numframes]\n') f6.write('\n#RMSD calculation loop\n') f6.write('set f1 [atomselect top "'+DB2_i[0]+' or '+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' " frame 0]\n') f6.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f6.write(' set sel [atomselect top "'+DB2_i[0]+' or '+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' " frame $i]\n') f6.write(' $sel move [measure fit $sel $f1]\n') f6.write(' set time [expr {$i +1}]\n') f6.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f6.write(' puts $outfile " $time"\n') f6.write('}\n') f6.write('close $outfile') f6.close() #Creando script para DB1_x2l psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f7 = open('DB2_x2l.tcl', 'w') print(f7) 
f7.write('set psfFile '+ psf+' \n') f7.write('set dcdFile '+ dcd+' \n') f7.write('\nmol load psf $psfFile dcd $dcdFile\n') f7.write('set outfile ' +'[open ' +'db2_x2l.dat'+' w]\n') f7.write('set nf [molinfo top get numframes]\n') f7.write('\n#RMSD calculation loop\n') f7.write('set f1 [atomselect top "'+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' " frame 0]\n') f7.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f7.write(' set sel [atomselect top "'+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' " frame $i]\n') f7.write(' $sel move [measure fit $sel $f1]\n') f7.write(' set time [expr {$i +1}]\n') f7.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f7.write(' puts $outfile " $time"\n') f7.write('}\n') f7.write('close $outfile') f7.close() #Creando script para DB1_x3m psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f8 = open('DB2_x3m.tcl', 'w') print(f8) f8.write('set psfFile '+ psf+' \n') f8.write('set dcdFile '+ dcd+' \n') f8.write('\nmol load psf $psfFile dcd $dcdFile\n') f8.write('set outfile ' +'[open ' +'db2_x3m.dat'+' w]\n') f8.write('set nf [molinfo top get numframes]\n') f8.write('\n#RMSD calculation loop\n') f8.write('set f1 [atomselect top "'+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' " frame 0]\n') f8.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f8.write(' set sel [atomselect top "'+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' " frame $i]\n') f8.write(' $sel move [measure fit $sel $f1]\n') f8.write(' set time [expr {$i +1}]\n') f8.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f8.write(' puts $outfile " $time"\n') f8.write('}\n') f8.write('close $outfile') f8.close() #Creando script para DB1_x2r psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f9 = open('DB2_x2r.tcl', 'w') print(f9) f9.write('set psfFile '+ psf+' \n') f9.write('set dcdFile '+ dcd+' \n') f9.write('\nmol load psf $psfFile dcd $dcdFile\n') f9.write('set 
outfile ' +'[open ' +'db2_x2r.dat'+' w]\n') f9.write('set nf [molinfo top get numframes]\n') f9.write('\n#RMSD calculation loop\n') f9.write('set f1 [atomselect top "'+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' " frame 0]\n') f9.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f9.write(' set sel [atomselect top "'+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' " frame $i]\n') f9.write(' $sel move [measure fit $sel $f1]\n') f9.write(' set time [expr {$i +1}]\n') f9.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f9.write(' puts $outfile " $time"\n') f9.write('}\n') f9.write('close $outfile') f9.close() #Creando script para DB1_x1r psf=ruta_old_traj+'/'+psf_file dcd=ruta_old_traj+'/'+dcd_file print(psf) f10 = open('DB2_x1r.tcl', 'w') print(f10) f10.write('set psfFile '+ psf+' \n') f10.write('set dcdFile '+ dcd+' \n') f10.write('\nmol load psf $psfFile dcd $dcdFile\n') f10.write('set outfile ' +'[open ' +'db2_x1r.dat'+' w]\n') f10.write('set nf [molinfo top get numframes]\n') f10.write('\n#RMSD calculation loop\n') f10.write('set f1 [atomselect top "'+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' or '+DB2_i[7]+' " frame 0]\n') f10.write('for {set i 0} {$i < $nf} {incr i 1} {\n') f10.write(' set sel [atomselect top "'+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' or '+DB2_i[7]+' " frame $i]\n') f10.write(' $sel move [measure fit $sel $f1]\n') f10.write(' set time [expr {$i +1}]\n') f10.write(' puts -nonewline $outfile "[measure rmsd $sel $f1]"\n') f10.write(' puts $outfile " $time"\n') f10.write('}\n') f10.write('close $outfile') f10.close()
Running the tcl RMSD scripts with VMD
if (revisa1>0):
    # Computing the DB1 RMSDs with VMD
    !vmd -dispdev text < DB1_x1l.tcl
    !vmd -dispdev text < DB1_x2l.tcl
    !vmd -dispdev text < DB1_x3m.tcl
    !vmd -dispdev text < DB1_x2r.tcl
    !vmd -dispdev text < DB1_x1r.tcl
if (revisa2>0):
    # Computing the DB2 RMSDs with VMD
    !vmd -dispdev text < DB2_x1l.tcl
    !vmd -dispdev text < DB2_x2l.tcl
    !vmd -dispdev text < DB2_x3m.tcl
    !vmd -dispdev text < DB2_x2r.tcl
    !vmd -dispdev text < DB2_x1r.tcl
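The `!vmd ...` cell magics above ignore failures silently. As an alternative sketch (assuming `vmd` is on the PATH; the helper name and its `vmd` parameter are illustrative), `subprocess` can run each script and raise on a non-zero exit status:

```python
import subprocess

def run_vmd_script(tcl_path, vmd=('vmd', '-dispdev', 'text')):
    """Feed a tcl script to text-mode VMD on stdin; raise if VMD exits non-zero."""
    with open(tcl_path) as f:
        result = subprocess.run(list(vmd), stdin=f, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError('VMD failed on %s:\n%s' % (tcl_path, result.stderr))
    return result.stdout
```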
Generating the RMSD plots with matplotlib
escale_y = []
fig = pl.figure(figsize=(25, 8))
fig.subplots_adjust(hspace=.4, wspace=0.3)

# Create the 2x5 panels once and thicken every frame
panels = []
for i in range(1, 11):
    ax = fig.add_subplot(2, 5, i)
    for side in ['top', 'bottom', 'left', 'right']:
        ax.spines[side].set_linewidth(3)
    panels.append(ax)
panels[9].yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

used = []  # (axes, x1, x2, y1) of the panels that were actually plotted

def plot_rmsd(sub, dat_file, label, color):
    data = np.loadtxt(dat_file, comments=['#', '@'])
    sub.set_xlabel('Time (ns)')
    sub.set_ylabel('RMSD (nm)')
    # column 1: frame number -> ns (0.02 ns/frame); column 0: RMSD in Angstrom -> nm
    sub.plot(data[:, 1]*0.02, data[:, 0]/10, color=color, linewidth=1, label=label)
    x1, x2, y1, y2 = sub.axis()
    escale_y.append(y2)
    used.append((sub, x1, x2, y1))

names = ['X1L', 'X2L', 'X3M', 'X2R', 'X1R']
if (revisa1>0):
    # DB1 data on the top row, in black
    for k, name in enumerate(names):
        plot_rmsd(panels[k], 'db1_%s.dat' % name.lower(), 'DB1_'+name, 'black')
if (revisa2>0):
    # DB2 data on the bottom row, in red
    for k, name in enumerate(names):
        plot_rmsd(panels[5+k], 'db2_%s.dat' % name.lower(), 'DB2_'+name, 'red')

# Rescale every plotted panel to the largest y-limit so all share one scale
escale_y.sort(reverse=True)
for sub, x1, x2, y1 in used:
    sub.axis((x1, x2, y1, escale_y[0]))
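Alongside the visual comparison, a short numeric summary of each trace can be useful. This sketch (a hypothetical helper, reusing the same unit conversions as the plots: 0.02 ns per frame, Ångström to nm) reports the mean and spread of one RMSD file:

```python
import numpy as np

def rmsd_summary(dat_file):
    """Return (mean_nm, std_nm, total_ns) for one VMD RMSD .dat file."""
    data = np.loadtxt(dat_file, comments=['#', '@'])
    rmsd_nm = data[:, 0] / 10.0      # column 0: RMSD in Angstrom
    total_ns = data[-1, 1] * 0.02    # column 1: frame number, 0.02 ns/frame
    return rmsd_nm.mean(), rmsd_nm.std(), total_ns
```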
FREE ENERGY DIHEDRAL INTRAMOLECULAR The dihedral angles are measured for the intramolecular free-energy calculation.
### Creating the directory for the intramolecular dihedral analysis
ruta_diedros = nuevaruta+'/diedros_intra'
print(ruta_diedros)
if not os.path.exists(ruta_diedros):
    os.makedirs(ruta_diedros)
    print('Created path ===>', ruta_diedros)
else:
    print("Path "+ruta_diedros+" already exists..!!!")
print('Changing to ....', ruta_diedros)
os.chdir(ruta_diedros)
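The check-then-create pattern above can be collapsed with `exist_ok` (Python >= 3.2); a small sketch of the idiom, with `ensure_dir` a hypothetical name:

```python
import os

def ensure_dir(path):
    """Create `path` (and any missing parents) if needed, then return it."""
    os.makedirs(path, exist_ok=True)
    return path
```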
Creating the tcl files for the dihedral-angle calculation
psf = ruta_old_traj+'/'+psf_file
dcd = ruta_old_traj+'/'+dcd_file

def write_dihed_tcl(tcl_name, dat_name, sels):
    """Write a VMD tcl script that measures one dihedral (4 selections) over all frames."""
    f = open(tcl_name, 'w')
    print(tcl_name)
    f.write('set psfFile '+psf+' \n')
    f.write('set dcdFile '+dcd+' \n')
    f.write('\nmol load psf $psfFile dcd $dcdFile\n')
    f.write('set outfile [open '+dat_name+' w]\n')
    f.write('set nf [molinfo top get numframes]\n')
    f.write(' \n')
    for n, sel in enumerate(sels, start=1):
        f.write('set selatoms%d [[atomselect top "protein and chain A and %s"] get index]\n' % (n, sel))
    f.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\n')
    f.write('for {set i 0} {$i < $nf} {incr i 1} {\n')
    f.write('    set x [measure dihed $dihed frame $i]\n')
    f.write('    set time [expr {$i +1}]\n')
    f.write('    puts $outfile "$time $x"\n')
    f.write('}\n')
    f.write('close $outfile\n')  # flush the .dat file before VMD exits
    f.close()

# Five overlapping dihedral windows along each bridge: x1l, x2l, x3m, x2r, x1r
names = ['x1l', 'x2l', 'x3m', 'x2r', 'x1r']
if (revisa1>0):
    # Scripts for bridge 1
    for k, name in enumerate(names):
        write_dihed_tcl('dihed_DB1_%s.tcl' % name, 'dihed_db1_%s.dat' % name, DB1_i[k:k+4])
if (revisa2>0):
    # Scripts for bridge 2
    for k, name in enumerate(names):
        write_dihed_tcl('dihed_DB2_%s.tcl' % name, 'dihed_db2_%s.dat' % name, DB2_i[k:k+4])
Running the generated dihedral-angle tcl files with VMD
if (revisa1>0):
    # Computing the DB1 dihedrals with VMD
    !vmd -dispdev text < dihed_DB1_x1l.tcl
    !vmd -dispdev text < dihed_DB1_x2l.tcl
    !vmd -dispdev text < dihed_DB1_x3m.tcl
    !vmd -dispdev text < dihed_DB1_x2r.tcl
    !vmd -dispdev text < dihed_DB1_x1r.tcl
if (revisa2>0):
    # Computing the DB2 dihedrals with VMD
    !vmd -dispdev text < dihed_DB2_x1l.tcl
    !vmd -dispdev text < dihed_DB2_x2l.tcl
    !vmd -dispdev text < dihed_DB2_x3m.tcl
    !vmd -dispdev text < dihed_DB2_x2r.tcl
    !vmd -dispdev text < dihed_DB2_x1r.tcl

print('\nCopying generateFES.py to '+ruta_diedros)
source_file = ruta_f_energy+'/generateFES.py'
dest_file = ruta_diedros+'/generateFES.py'
shutil.copy(source_file, dest_file)
# Making the script executable
!chmod +x generateFES.py
Computing the intramolecular free energy for bridge 1
if (revisa1>0):
    # Loading the DB1_X1L values
    data_db1_x1l = np.loadtxt('dihed_db1_x1l.dat', comments=['#', '@'])
    # Loading the DB1_X1R values
    data_db1_x1r = np.loadtxt('dihed_db1_x1r.dat', comments=['#', '@'])

    # Minimum and maximum of DB1_X1L
    min_x1l = np.amin(data_db1_x1l[:, 1])
    max_x1l = np.amax(data_db1_x1l[:, 1])
    print('Min DB1_X1L =>', min_x1l)
    print('Max DB1_X1L =>', max_x1l)

    # Minimum and maximum of DB1_X1R
    min_x1r = np.amin(data_db1_x1r[:, 1])
    max_x1r = np.amax(data_db1_x1r[:, 1])
    print('Min DB1_X1R =>', min_x1r)
    print('Max DB1_X1R =>', max_x1r)

    # Writing the input files for the FES script
    np.savetxt('db1_x1l.dat', data_db1_x1l[:, 1], fmt='%1.14f')
    np.savetxt('db1_x1r.dat', data_db1_x1r[:, 1], fmt='%1.14f')
    !paste db1_x1l.dat db1_x1r.dat > DB1_x1_lr.dat
    # Running the FES script
    !python generateFES.py DB1_x1_lr.dat $min_x1l $max_x1l $min_x1r $max_x1r 200 200 $temperatura XL1_XR1.dat

    ###################################################################
    # Loading the DB1_X2L values
    data_db1_x2l = np.loadtxt('dihed_db1_x2l.dat', comments=['#', '@'])
    # Loading the DB1_X2R values
    data_db1_x2r = np.loadtxt('dihed_db1_x2r.dat', comments=['#', '@'])

    # Minimum and maximum of DB1_X2L
    min_x2l = np.amin(data_db1_x2l[:, 1])
    max_x2l = np.amax(data_db1_x2l[:, 1])
    print('Min DB1_X2L =>', min_x2l)
    print('Max DB1_X2L =>', max_x2l)

    # Minimum and maximum of DB1_X2R
    min_x2r = np.amin(data_db1_x2r[:, 1])
    max_x2r = np.amax(data_db1_x2r[:, 1])
    print('Min DB1_X2R =>', min_x2r)
    print('Max DB1_X2R =>', max_x2r)

    # Writing the input files for the FES script
    np.savetxt('db1_x2l.dat', data_db1_x2l[:, 1], fmt='%1.14f')
    np.savetxt('db1_x2r.dat', data_db1_x2r[:, 1], fmt='%1.14f')
    !paste db1_x2l.dat db1_x2r.dat > DB1_x2_lr.dat
    # Running the FES script
    !python generateFES.py DB1_x2_lr.dat $min_x2l $max_x2l $min_x2r $max_x2r 200 200 $temperatura XL2_XR2.dat

    ######################################################################################
    # Generating the files for X3M
    data_db1_x3m = np.loadtxt('dihed_db1_x3m.dat', comments=['#', '@'])
    # Minimum and maximum of DB1_X3M
    min_x3m = np.amin(data_db1_x3m[:, 1])
    max_x3m = np.amax(data_db1_x3m[:, 1])
    print('Min DB1_X3M =>', min_x3m)
    print('Max DB1_X3M =>', max_x3m)
    print('Min DB1_X1L =>', min_x1l)
    print('Max DB1_X1L =>', max_x1l)
    print('Min DB1_X2L =>', min_x2l)
    print('Max DB1_X2L =>', max_x2l)
    print('Min DB1_X1R =>', min_x1r)
    print('Max DB1_X1R =>', max_x1r)
    print('Min DB1_X2R =>', min_x2r)
    print('Max DB1_X2R =>', max_x2r)

    # Writing the input files for the FES script
    np.savetxt('db1_x3m.dat', data_db1_x3m[:, 1], fmt='%1.14f')
    !paste db1_x3m.dat db1_x1l.dat > DB1_x3m_x1l.dat
    !paste db1_x3m.dat db1_x2l.dat > DB1_x3m_x2l.dat
    !paste db1_x3m.dat db1_x1r.dat > DB1_x3m_x1r.dat
    !paste db1_x3m.dat db1_x2r.dat > DB1_x3m_x2r.dat
    # Running the FES script for each pair against X3M
    !python generateFES.py DB1_x3m_x1l.dat $min_x3m $max_x3m $min_x1l $max_x1l 200 200 $temperatura XM3_XL1.dat
    !python generateFES.py DB1_x3m_x2l.dat $min_x3m $max_x3m $min_x2l $max_x2l 200 200 $temperatura XM3_XL2.dat
    !python generateFES.py DB1_x3m_x1r.dat $min_x3m $max_x3m $min_x1r $max_x1r 200 200 $temperatura XM3_XR1.dat
    !python generateFES.py DB1_x3m_x2r.dat $min_x3m $max_x3m $min_x2r $max_x2r 200 200 $temperatura XM3_XR2.dat
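The internals of generateFES.py are not shown in the notebook. As a sketch of the standard recipe such scripts usually implement (this is an assumption about generateFES.py, not its actual code), a 2D free-energy surface can be built by histogramming the two dihedral series and taking F = -kT ln P:

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def free_energy_2d(x, y, bins=200, temperature=300.0):
    """2D free-energy surface F = -kT ln P (kcal/mol) from two coordinate series."""
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins, density=True)
    with np.errstate(divide='ignore'):
        fes = -KB * temperature * np.log(hist)   # empty bins become +inf
    fes -= fes[np.isfinite(fes)].min()           # shift the global minimum to zero
    return fes, xedges, yedges
```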