A compact Python toolbox for anomaly detection.

Project description

anomatools

anomatools is a small Python package containing recent anomaly detection algorithms. Anomaly detection strives to detect abnormal or anomalous data points in a given (large) dataset. The package contains two state-of-the-art (2018 and 2020) semi-supervised and two unsupervised anomaly detection algorithms.

Installation

Install the package directly from PyPI with the following command:

pip install anomatools

OR install the package using the setup.py file:

python setup.py install

OR install it directly from GitHub itself:

pip install git+

Contents and usage

Semi-supervised anomaly detection

Given a dataset with attributes X and labels Y, indicating whether a data point is normal or anomalous, semi-supervised anomaly detection algorithms are trained using all the instances X and some of the labels Y. Semi-supervised approaches to anomaly detection generally outperform unsupervised approaches, because they can use the label information to correct the assumptions on which the unsupervised detection process is based. The anomatools package implements two recent semi-supervised anomaly detection algorithms:

- The SSDO (semi-supervised detection of outliers) algorithm first computes an unsupervised prior anomaly score and then corrects this score with the known label information [1].
- The SSkNNO (semi-supervised k-nearest neighbor anomaly detection) algorithm is a combination of the well-known kNN classifier and the kNNO (k-nearest neighbor outlier detection) method [2].
Given a training dataset X_train with labels Y_train, and a test dataset X_test, the algorithms are applied as follows:

from anomatools.models import SSkNNO, SSDO

# train
detector = SSDO()
detector.fit(X_train, Y_train)

# predict
labels = detector.predict(X_test)

Similarly, the probability of each point in X_test being normal or anomalous can also be computed:

probabilities = detector.predict_proba(X_test, method='squash')

Sometimes we are interested in detecting anomalies in the training data (e.g., when we are doing a post-mortem analysis):

# train
detector = SSDO()
detector.fit(X_train, Y_train)

# predict
labels = detector.labels_

Unsupervised anomaly detection

Unsupervised anomaly detectors do not make use of label information (user feedback) when detecting anomalies in a dataset. Given a dataset with attributes X and labels Y, the unsupervised detectors are trained using only X. The anomatools package implements two recent unsupervised anomaly detection algorithms:

- The kNNO (k-nearest neighbor outlier detection) algorithm computes each data point's anomaly score as the distance to its k-nearest neighbor in the dataset [3].
- The iNNE (isolation nearest neighbor ensembles) algorithm computes each data point's anomaly score roughly based on how isolated the point is from the rest of the data [4].
Given a training dataset X_train with labels Y_train, and a test dataset X_test, the algorithms are applied as follows:

from anomatools.models import kNNO, iNNE

# train
detector = kNNO()
detector.fit(X_train, Y_train)

# predict
labels = detector.predict(X_test)

Package structure

The anomaly detection algorithms are located in: anomatools/models/

For further examples of how to use the algorithms, see the notebooks: anomatools/notebooks/

Dependencies

The anomatools package requires the following python packages to be installed:

Contact

Contact the author of the package: vincent.vercruyssen@kuleuven.be

References

[1] Vercruyssen, V., Meert, W., Verbruggen, G., Maes, K., Bäumer, R., Davis, J. (2018) Semi-Supervised Anomaly Detection with an Application to Water Analytics. IEEE International Conference on Data Mining (ICDM), Singapore. p527--536.

[2] Vercruyssen, V., Meert, W., Davis, J. (2020) Transfer Learning for Anomaly Detection through Localized and Unsupervised Instance Selection. AAAI Conference on Artificial Intelligence, New York.
https://pypi.org/project/anomatools/
Nationality predictor using LSTM

INTRODUCTION

In this tutorial, we will learn how to build a nationality predictor using an LSTM with the help of the Keras TensorFlow API in Python. The model processes a person's name and predicts their nationality.

GETTING THE DATASET

We will use a dataset from Kaggle that contains a set of persons' names and the nations they belong to. Download the dataset from Kaggle and upload it to your drive. If you work on Google Colab or use a Jupyter notebook, move the file to the home location of the Jupyter environment. I recommend using Colab because an LSTM takes a really long time to train.

IMPORTING THE LIBRARIES

import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.utils import plot_model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, RNN, Dense, Embedding

Here we have used the Label Encoder and One Hot Encoder for converting words into vectors. On their own these are not recommended, because they do not learn the relationship between words and the context of sentences. Embedding is an advanced technique that also learns the relationship between words, along with the model. Other than these, all other libraries are well known to you.

ANALYSING THE DATA

f = open('name2lang.txt', 'r')
text = f.read().split('\n')

ltrs = "abcdefghijklmnopqrstuvwxyz"
ltrs = list(ltrs)
ltrs.append('##pad##')

Here we have created a list that contains all the letters of the alphabet, plus a padding token.

names = []
lbl = []
freq = {}
for i in text:
    i = i.split(',')
    name = ''
    t = i[0].lower()
    for l in t:
        if l in ltrs:
            name += l
    names.append(name)
    lbl.append(i[1].strip())
    if lbl[-1] not in freq.keys():
        freq[lbl[-1]] = 1
    else:
        freq[lbl[-1]] += 1

We created two lists and a dictionary for storing the names, the labels, and the frequency of each label.
plt.bar(list(freq.keys()), list(freq.values()), color='r')
plt.xticks(rotation=90)

This is why we created the frequency dictionary: we can visualise each label's frequency in a bar chart.

plt.pie(freq.values(), labels=freq.keys(), radius=2, autopct='%1.1f%%')
plt.show()

We also visualised the label frequencies using a pie chart.

lengths = [len(name) for name in names]
print('Average number of letters: ', np.mean(lengths))
print('Median no. of ltrs: ', np.median(lengths))
print('standard dev: ', np.std(lengths))

Average number of letters: 7.133665835411471
Median no. of ltrs: 7.0
standard dev: 2.0706207047158274

We analysed the lengths of the names and printed some standard statistics for them.

plt.hist(lengths, range=(2, 20))
plt.show()

This plots a histogram of the lengths list.

WORD TO VECTOR

max_len = 9
output_len = 18

le = LabelEncoder()
int_enc = le.fit_transform(ltrs)
ohe = OneHotEncoder(sparse=False)
ohe.fit(int_enc.reshape(-1, 1))

Here, first, we used the Label Encoder to assign a numerical index to each of the letters. Next, we fitted the One Hot Encoder to convert an assigned index into a binary array: it takes the index as input and outputs a binary array with a 1 at that index and 0s in the remaining entries.

TRAINING THE DATASET

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, shuffle=True, stratify=y)
x_train.shape
(16040, 9)

We split the data into a training and a validation set with a validation split of 20 percent. Let's build our LSTM model and train it on our dataset.

hidden_size = 256
model = Sequential([
    Embedding(input_dim=len(ltrs), output_dim=hidden_size, mask_zero=True),
    LSTM(hidden_size, return_sequences=True),
    LSTM(50, return_sequences=False),
    Dense(18, activation='softmax')
])

It looks simple, right? But don't underestimate the model: it is one of the most powerful models in Natural Language Processing. We built our model using the Keras Sequential API.
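One step is missing from the tutorial: the arrays x and y passed to train_test_split are never constructed. A minimal pure-Python sketch of one plausible construction, using hypothetical sample names, which mirrors the letter-to-index mapping plus zero-padding logic that reappears in process_ip() later in the article:

```python
# Hypothetical reconstruction of the x/y preparation the tutorial skips.
# Assumes: each name is mapped letter-by-letter to an integer index and
# zero-padded (or truncated) to max_len = 9.
max_len = 9
letters = "abcdefghijklmnopqrstuvwxyz"
letter_to_idx = {l: i for i, l in enumerate(letters)}  # stand-in for le.transform

names = ["sherlock", "watson"]   # hypothetical sample names
labels = ["English", "English"]  # hypothetical sample labels

def encode_name(name, max_len=9):
    """Map letters to indices, then pad with 0 (or truncate) to max_len."""
    idx = [letter_to_idx[l] for l in name.lower() if l in letter_to_idx]
    idx = idx[:max_len]
    return idx + [0] * (max_len - len(idx))

x = [encode_name(n) for n in names]
classes = sorted(set(labels))
y = [classes.index(lab) for lab in labels]  # integer class labels

print(x[0])  # "sherlock" -> 8 letter indices plus one pad: [18, 7, 4, 17, 11, 14, 2, 10, 0]
```

In the real pipeline, pad_sequences (imported at the top but never used in the article) would do this padding, and the integer labels would be one-hot encoded to match the categorical_crossentropy loss.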
First, we included an Embedding layer that learns the word-to-vector representation along with the model. Next, we added an LSTM layer whose number of hidden units is hidden_size. Next, we stacked another LSTM block on top of the first one; the output of the first LSTM block is the input of the second block. Finally, we used a Dense layer with softmax activation that takes the final LSTM block's output as input and outputs the probability of each class. The number of nationalities is 18, so the final output size is set to 18.

Now, let us train our model on the dataset.

model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, None, 256)         6912
_________________________________________________________________
lstm (LSTM)                  (None, None, 256)         525312
_________________________________________________________________
lstm_1 (LSTM)                (None, 50)                61400
_________________________________________________________________
dense (Dense)                (None, 18)                918
=================================================================
Total params: 594,542
Trainable params: 594,542
Non-trainable params: 0
_________________________________________________________________

model.fit(x_train, y_train, epochs=50, batch_size=512, validation_split=0.2)

Epoch 35/50
26/26 [==============================] - 1s 25ms/step - loss: 0.4434 - accuracy: 0.8667 - val_loss: 0.6332 - val_accuracy: 0.8074
Epoch 36/50
26/26 [==============================] - 1s 23ms/step - loss: 0.4201 - accuracy: 0.8743 - val_loss: 0.6445 - val_accuracy: 0.8089
Epoch 37/50
26/26 [==============================] - 1s 23ms/step - loss: 0.3965 - accuracy: 0.8807 - val_loss: 0.6426 - val_accuracy: 0.8086
Epoch 38/50
26/26
[==============================] - 1s 24ms/step - loss: 0.3793 - accuracy: 0.8852 - val_loss: 0.6319 - val_accuracy: 0.8117
Epoch 39/50
26/26 [==============================] - 1s 23ms/step - loss: 0.3689 - accuracy: 0.8923 - val_loss: 0.6330 - val_accuracy: 0.8083
Epoch 40/50
26/26 [==============================] - 1s 23ms/step - loss: 0.3505 - accuracy: 0.8975 - val_loss: 0.6406 - val_accuracy: 0.8052
Epoch 41/50
26/26 [==============================] - 1s 24ms/step - loss: 0.3291 - accuracy: 0.8987 - val_loss: 0.6299 - val_accuracy: 0.8092
Epoch 42/50
26/26 [==============================] - 1s 23ms/step - loss: 0.3190 - accuracy: 0.9059 - val_loss: 0.6307 - val_accuracy: 0.8074
Epoch 43/50
26/26 [==============================] - 1s 23ms/step - loss: 0.2995 - accuracy: 0.9128 - val_loss: 0.6395 - val_accuracy: 0.8099
Epoch 44/50
26/26 [==============================] - 1s 23ms/step - loss: 0.2958 - accuracy: 0.9151 - val_loss: 0.6272 - val_accuracy: 0.8086
Epoch 45/50
26/26 [==============================] - 1s 23ms/step - loss: 0.2768 - accuracy: 0.9175 - val_loss: 0.6265 - val_accuracy: 0.8102
Epoch 46/50
26/26 [==============================] - 1s 24ms/step - loss: 0.2637 - accuracy: 0.9225 - val_loss: 0.6395 - val_accuracy: 0.8058
Epoch 47/50
26/26 [==============================] - 1s 24ms/step - loss: 0.2499 - accuracy: 0.9252 - val_loss: 0.6560 - val_accuracy: 0.7986
Epoch 48/50
26/26 [==============================] - 1s 23ms/step - loss: 0.2466 - accuracy: 0.9288 - val_loss: 0.6262 - val_accuracy: 0.8089
Epoch 49/50
26/26 [==============================] - 1s 23ms/step - loss: 0.2334 - accuracy: 0.9318 - val_loss: 0.6364 - val_accuracy: 0.8099
Epoch 50/50
26/26 [==============================] - 1s 23ms/step - loss: 0.2226 - accuracy: 0.9356 - val_loss: 0.6377 - val_accuracy: 0.8111
<tensorflow.python.keras.callbacks.History at 0x7f719fc4e8d0>

We trained the model for 50 epochs, and finally, our validation accuracy is about 81 percent.
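As a sanity check, the parameter counts in the model summary can be verified by hand. An Embedding with 27 input tokens and 256 dimensions has 27 * 256 weights, and an LSTM layer has 4 gates, each with (input_dim + units + 1) * units weights:

```python
# Verifying the parameter counts reported by model.summary().
def lstm_params(input_dim, units):
    # 4 gates, each with input weights, recurrent weights, and a bias per unit
    return 4 * (input_dim + units + 1) * units

embedding = 27 * 256            # 27 tokens (26 letters + pad) x 256 dims
lstm_0 = lstm_params(256, 256)  # first LSTM, fed by the embedding
lstm_1 = lstm_params(256, 50)   # second LSTM, fed by the first
dense = (50 + 1) * 18           # weights plus biases of the output layer

print(embedding, lstm_0, lstm_1, dense)     # 6912 525312 61400 918
print(embedding + lstm_0 + lstm_1 + dense)  # 594542
```

These match the summary exactly: 6912, 525312, 61400, and 918 parameters, for 594,542 in total.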
Now let us test the model on our test set and see how accurately it performs.

model.evaluate(x_test, y_test)
126/126 [==============================] - 1s 6ms/step - loss: 0.6686 - accuracy: 0.8105
[0.6685804128646851, 0.8104737997055054]

model.save_weights('model_weights.h5')

We saved the weights of our model for further use.

def process_ip(str):
    str = str.lower()
    ip = list(str)
    ip = le.transform(ip)
    pad_ip = []
    for i in range(max_len):
        if i < len(ip):
            pad_ip.append(ip[i])
        else:
            pad_ip.append(0)
    pad_ip = np.asarray(pad_ip)
    pad_ip = np.expand_dims(pad_ip, axis=0)
    return pad_ip

def evaluate(str):
    model_ip = process_ip(str)
    model_op = model.predict(model_ip)
    ans = np.asarray([np.argmax(model_op)])
    ans = nat.inverse_transform(ans)
    ans = ans[0]
    return ans

These are the helper functions for testing. process_ip preprocesses the input name, converting it into an index vector with the Label Encoder and padding it to max_len. evaluate feeds that vector to our model and converts the resulting class probabilities into a nationality: it takes the maximum class probability, identifies its index, and uses the label encoder for nationalities (nat) to output the respective nationality name.

test_name = input()
print()
ans = evaluate(test_name)
print('Given Name : ', test_name)
print('Nationality : ', ans)

sherlock
Given Name : sherlock
Nationality : English

The input is a particular person's name; we called the helper function on the input, and finally it printed the predicted nationality as the output.

CONCLUSION

In this tutorial, we learned how to build a nationality predictor using an LSTM. This is just a basic NLP application; as you go on, you will have to learn how to process sentences instead of words and represent them as vectors. There are many NLP applications, and things will really get interesting as you dig deeper into this subject.
https://valueml.com/nationality-predictor-using-lstm/
Monitoring Performance Using ^mgstat

This chapter describes the ^mgstat utility, a tool for collecting basic performance data. This utility may be updated between releases; contact the InterSystems Worldwide Response Center (WRC) for information about downloading a new mgstat.xml.

You must call ^mgstat from the %SYS namespace, and can use the following positional arguments:

For example, if running ^mgstat as a background job, to specify that file samples be obtained every 5 seconds until 17280 samplings are obtained, enter the following in the Terminal, from the %SYS namespace:

%SYS>JOB ^mgstat(5,17280)

Alternatively, if running ^mgstat interactively, to take the same samplings and redisplay the headings after every 10 rows of data, enter the following:

%SYS>DO ^mgstat(5,17280,,10)

By default, ^mgstat generates a filename based on the server name, configuration name, and date and time, with the "mgst" extension, which is recognized by an analyzer tool written in Microsoft Excel that aids graphing of the data. By default, the file is located in the install-dir\mgr directory of the InterSystems IRIS® data platform instance; however, if the output directory has been changed through the ^SystemPerformance utility (see Change Output Directory in the "Monitoring Performance Using ^SystemPerformance" chapter in this guide), ^mgstat uses that output directory. The mgst file is also generated when you run the ^SystemPerformance utility and is included in the HTML performance report (see the "Monitoring Performance Using ^SystemPerformance" chapter in this guide).

To ensure minimal impact on system performance, the ^mgstat utility extracts various counter information from shared memory.
If the utility is running and an apparent performance issue occurs, data is available to help you investigate the problem; for assistance with your analysis, contact the InterSystems Worldwide Response Center (WRC), which can provide tasks that automate both the running of ^mgstat and the purging of files.

Most of the reported data is averaged into per-second values, except as noted in the table below. The generated output file is in a readable, comma-separated value (CSV) format, which is most easily interpreted with a spreadsheet tool such as Microsoft Excel. The first line of the file is a header line which includes the filename and the utility version, as well as information about buffer allocation and the version of the product being monitored. The number of columns of data depends on the version of the product: the first two columns are the date and time; the remaining columns are:

* 0 is displayed unless this is an ECP configuration.
† Seize statistics are not included in the report unless the CollectResourceStats parameter in the CPF is enabled.

Considering Seizes, ASeizes, and NSeizes

A Seize occurs whenever a job needs exclusive access to a given resource to guarantee that an update occurs without interference from other processes. If the Seize is not immediately satisfied, the update is postponed until it is satisfied. On a single-CPU system, the process immediately hibernates (because it cannot do anything until the process holding the resource relinquishes it, which does not occur until after its own update completes). On a multiple-CPU system, the process enters a holding loop in the "hope" that it will gain the resource in a reasonable time, thus avoiding the expense of hibernating.
If the process gets access to the resource during the hold loop, the loop immediately exits and the process continues with its update; upon completing the update, the process relinquishes the resource for other processes that may be waiting for it; this is an ASeize. If, at the end of the hold loop, the resource is still held by another process, the process hibernates and waits to be woken up when the resource is released; this is an NSeize.

NSeizes are a natural consequence of running multiple processes on a single-CPU system; ASeizes are a natural consequence of running multiple processes on a multi-CPU system. The difference is that NSeizes incur system, or privileged, CPU time because the operating system must change the context of the running process, whereas an ASeize incurs user time on the CPU because the process continues to run until the resource is gained and released, or until it gives up and hibernates. In general, on multi-CPU systems it is more expensive for the operating system to do the context switch than to loop a few times to avoid this operation, because there is both CPU overhead and memory latency associated with context switching on multi-CPU systems.
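Because the generated file is plain CSV (as described above), a run can also be loaded for ad-hoc analysis outside Excel. A minimal sketch in Python, using a hypothetical miniature file; the actual header layout and column set depend on your product version, so treat the column names here as assumptions:

```python
import csv
import io

# Hypothetical miniature of an mgstat CSV: a column row followed by
# per-second samples. A real file also carries a version/header line
# before the columns (skipped here) and many more columns.
sample = io.StringIO(
    "Date,Time,Glorefs,PhyRds\n"
    "01/01/2021,00:00:01,120000,35\n"
    "01/01/2021,00:00:02,118500,41\n"
)

rows = list(csv.DictReader(sample))
glorefs = [int(r["Glorefs"]) for r in rows]
print(max(glorefs))  # peak global references per second -> 120000
```

For a real file, you would open it with open(path) in place of the StringIO stand-in and skip the leading header line before handing the rest to csv.DictReader.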
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_MGSTAT
A general game setting or variable (like fraglimit). More...

#include <serverstructs.h>

A general game setting or variable (like fraglimit). This object is safe to copy.
Definition at line 144 of file serverstructs.h.

Command-line argument that sets this GameCVar. When launching a game, this command() is passed as one of the command-line arguments, and the value() is what follows directly after.
Definition at line 156 of file serverstructs.cpp.

Is any value assigned to this GameCVar?
Definition at line 161 of file serverstructs.cpp.

'Null' objects are invalid.
Definition at line 166 of file serverstructs.cpp.

Nice name to display to the user in the Create Game dialog and in other widgets.
Definition at line 171 of file serverstructs.cpp.

Assign value() to this GameCVar.
Definition at line 176 of file serverstructs.cpp.

Passed as the second argument, following command().
Definition at line 181 of file serverstructs.cpp.
https://doomseeker.drdteam.org/docs/doomseeker_1.0/classGameCVar.php
Tool to convert Seabird cnv text files

Project description

Python toolbox to read and process Seabird cnv files. These text files are the standard output files of the Seabird CTD software; the main purpose of the package is to read and process them.

Install

The package was developed using Python 3.5+. It might work with earlier versions, but they are not supported. The newest Gibbs Sea Water Toolbox (gsw) also depends on Python 3.5+, and pycnv heavily depends on the gsw toolbox. It is therefore strongly recommended to use Python 3.5+.

User

Install as a user using pip:
pip install pycnv

Install as a user from the repository:
python setup.py install --user

Uninstall as a user:
pip uninstall pycnv

Developer

Install as a developer:
python setup.py develop --user

Uninstall as a user:
pip uninstall pycnv

FEATURES

The data can be accessed by the original names defined in the cnv file in the named array called data. E.g. the header name "# name 11 = oxsatML/L: Oxygen Saturation, Weiss [ml/l]" can be accessed like this: data['oxsatML/L'].

Standard parameters (temperature, conductivity, pressure, oxygen) are mapped to standard names, e.g. data['T0'] for the first temperature sensor and data['C1'] for the second conductivity sensor.

If the standard parameters (C0,T0,p), (C1,T1,p) are available, the Gibbs Sea Water Toolbox is used to calculate absolute salinity, SA, conservative temperature, CT, and potential temperature, pt. The data is stored in a second field called computed data: cdata. E.g. cdata['SA00'].
The code used to compute the properties is:

SP = gsw.SP_from_C(data['C' + isen], T, data['p'])
SA = gsw.SA_from_SP(SP, data['p'], lon=lon, lat=lat)
if(baltic == True):
    SA = gsw.SA_from_SP_Baltic(SA, lon=lon, lat=lat)
PT = gsw.pt0_from_t(SA, T, data['p'])
CT = gsw.CT_from_t(SA, T, data['p'])
pot_rho = gsw.pot_rho_t_exact(SA, T, data['p'], p_ref)

The cnv object provides standard entries for pressure (cnv.p), temperature (cnv.T), conservative temperature (cnv.CT), practical salinity (cnv.SP), absolute salinity (cnv.SA), potential density (cnv.pot_rho), and oxygen (cnv.oxy). The units have the extension _units, i.e. cnv.p_units.

The module checks if the cast was made in the Baltic Sea; if so, the modified Gibbs Sea Water functions are automatically used.

The package provides scripts to search a given folder for cnv files and can create a summary of the folder in a csv format easily readable by Python or office programs. The search can be refined by a location or a predefined station.

It is possible to provide your own function for parsing custom header information.

Plotting of the profile using matplotlib.

USAGE

The package installs the executables:
- pycnv
- pycnv_sum_folder

EXAMPLES

Plot the absolute salinity and oxygen of a CTD cast:

import pycnv
import pylab as pl

fname = 'test.cnv' # Some CTD cast
cnv = pycnv.pycnv(fname)

print('Test if we are in the Baltic Sea (usage of different equation of state): ' + str(cnv.baltic))
print('Position of cast is: Longitude:', cnv.lon, 'Latitude:', cnv.lat)
print('Time of cast was:', cnv.date)
print('Number of sensor entries (len(cnv.data.keys())):', len(cnv.data.keys()))
print('Names of sensor entries (cnv.data.keys()):', cnv.data.keys())

# Get data of entry
key0 = list(cnv.data.keys())[0]
data0 = cnv.data[key0]

# Get derived data:
keyd0 = list(cnv.cdata.keys())[0]
datad0 = cnv.cdata[keyd0]
# Get unit of derived data
datad0_unit = cnv.cunits[keyd0]

# Standard names are mapped to
# cnv.p, cnv.CT, cnv.T, cnv.SP, cnv.oxy
# units are _unit, e.g.
cnv.p_unit

# Plot standard parameters
pl.figure(1)
pl.clf()
pl.subplot(1,2,1)
pl.plot(cnv.SA, cnv.p)
pl.xlabel('Absolute salinity [' + cnv.SA_unit + ']')
pl.ylabel('Pressure [' + cnv.p_unit + ']')
pl.gca().invert_yaxis()
pl.subplot(1,2,2)
pl.plot(cnv.oxy, cnv.p)
pl.plot(cnv.cdata['oxy0'], cnv.p)
pl.plot(cnv.cdata['oxy1'], cnv.p)
pl.xlabel('Oxygen [' + cnv.oxy_unit + ']')
pl.ylabel('Pressure [' + cnv.p_unit + ']')
pl.gca().invert_yaxis()
pl.show()

List all predefined stations (in a terminal):

pycnv_sum_folder --list_stations

Make a summary of the folder called cnv_data of all casts around station TF0271 with a radius of 5000 m, print it to the terminal, and save it into the file TF271.txt (in a terminal):

pycnv_sum_folder --data_folder cnv_data --station TF0271 5000 -p -f TF271.txt

Show and plot conservative temperature, salinity, and potential density of a cnv file into a pdf:

pycnv --plot show,save,CT00,SA00,pot_rho00 ctd_cast.cnv

Interpolate all CTD casts on station TF0271 onto the same pressure axis and make a netCDF out of it: see the code in pycnv/test/make_netcdf.py

Devices tested
- SEACAT (SBE16) V4.0g
- MICROCAT (SBE37)
- SBE 11plus V 5.1e
- SBE 11plus V 5.1g
- Sea-Bird SBE 9 Software Version 4.206
https://pypi.org/project/pycnv/
Bert Huijben wrote on Sat, Sep 26, 2015 at 10:06:15 +0200:
> A few comments inline.

Thanks for the review.

> > + /* ### This is currently not parsed out of "index" lines (where it
> > + * ### serves as an assertion of the executability state, without
> > + * ### changing it). */
> > + svn_tristate_t old_executable_p;
> > + svn_tristate_t new_executable_p;
> > } svn_patch_t;
>
> I'm not sure if we should leave this comment here in the long term...

*shrug* We could put it elsewhere.

> > +/* Return whether the svn:executable proppatch and the out-of-band
> > + * executability metadata contradict each other, assuming both are present.
> > + */
> > +static svn_boolean_t
> > +contradictory_executability(const svn_patch_t *patch,
> > +                            const prop_patch_target_t *target)
> > +{
> > +  switch (target->operation)
> > +    {
> > +      case svn_diff_op_added:
> > +        return patch->new_executable_p == svn_tristate_false;
> > +
> > +      case svn_diff_op_deleted:
> > +        return patch->new_executable_p == svn_tristate_true;
> > +
> > +      case svn_diff_op_unchanged:
> > +        /* ### Can this happen? */
> > +        return (patch->old_executable_p != svn_tristate_unknown
> > +                && patch->new_executable_p != svn_tristate_unknown
> > +                && patch->old_executable_p != patch->new_executable_p);
> > +
> > +      case svn_diff_op_modified:
> > +        /* Can't happen: the property should only ever be added or deleted,
> > +         * but never modified from one valid value to another. */
> > +        return (patch->old_executable_p != svn_tristate_unknown
> > +                && patch->new_executable_p != svn_tristate_unknown
> > +                && patch->old_executable_p == patch->new_executable_p);
>
> These can happen when the svn:executable property just changes value (E.g. from '*' to 'x'). That should not happen, but it can.

Right.
So _if_ somebody had broken the "value is SVN_PROP_BOOLEAN_VALUE" invariant, _and_ generated a diff that contains a 'old mode 0755\n new mode 0755\n' sequence — which, although syntactically valid, wouldn't be output by any diff program — then they will get a false positive "Contradictory executability" hard error.

The place to handle broken invariants is at the entry point, which for patches is parse-diff.c. Would make sense, then, for parse-diff.c to normalize the 'old value' and 'new value' of svn_prop_is_boolean() properties to SVN_PROP_BOOLEAN_VALUE, and for patch.c to canonicalize the ACTUAL/WORKING property value to SVN_PROP_BOOLEAN_VALUE before trying to apply the patch?

(Example: if the property value in the wc is "yes", and the patch says 'change the property value from "sí" to unset', parse-diff.c will fold "yes" to "*", patch.c will fold "sí" to "*", and the patch will apply successfully.)

> > +add_or_delete_single_line(svn_diff_hunk_t **hunk_out,
> > +                          const char *line,
> > +                          svn_patch_t *patch,
> > +                          svn_boolean_t add,
> > +                          apr_pool_t *result_pool,
> > +                          apr_pool_t *scratch_pool)
> > +{
> > +  svn_diff_hunk_t *hunk = apr_palloc(result_pool, sizeof(*hunk));
>
> As pcalloc() would be safer here...
...
> > +  hunk->leading_context = 0;
> > +  hunk->trailing_context = 0;
>
> As this patch forgets to set the new booleans added on trunk to signal no EOL markers here.
>
> You might need to set one of them to true though, as I think you are trying to add a property with no end of line.

It's just a merge artifact. I'd rather just initialize the newly-added members and leave the non-initializing allocation so cc/valgrind can catch bugs. I'll initialize them to FALSE for now, to plug the uninitialized memory access. I'm not sure we need to change that to TRUE; see discussion under the patch_tests.py hunks, below.

> > +/* Parse the 'old mode ' line of a git extended unidiff.
> > + */
> > +static svn_error_t *
> > +git_old_mode(enum parse_state *new_state, char *line, svn_patch_t *patch,
> > +             apr_pool_t *result_pool, apr_pool_t *scratch_pool)
> > +{
> > +  SVN_ERR(parse_bits_into_executability(&patch->old_executable_p,
> > +                                        line + STRLEN_LITERAL("old mode ")));
> > +
> > +#ifdef SVN_DEBUG
> > +  /* If this assert trips, the "old mode" is neither ...644 nor ...755 . */
> > +  SVN_ERR_ASSERT(patch->old_executable_p != svn_tristate_unknown);
> > +#endif
>
> I don't think we should leave these enabled even in maintainer mode.
> This will break abilities to apply diffs generated from other tools.

The purpose of the warning is to alert us that there is a syntax we aren't handling. That's why it's for maintainers only. The assumption is that maintainers can switch to a release build if they run into this warning unexpectedly.

I don't think we can convert this to a warning, can we? This API doesn't have a way to report warnings to its caller.

> (Perhaps even from older Subversion versions as I think you recently
> changed what Subversion generates itself on trunk)

You mean r1695384? Personally, I think rejecting pre-r1695384 patches is _good_, since those patches were simply invalid and would have gotten in the way of assigning a meaning to the 010000 bit in the future. If we ever find out that somebody is generating such patches, we can then relax our parser to accept them.
> > +def patch_ambiguous_executability_contradiction(sbox):
> > +  """patch ambiguous svn:executable, bad"""
> > +
> > +  sbox.build(read_only=True)
> > +  wc_dir = sbox.wc_dir
> > +
> > +  unidiff_patch = (
> > +    "Index: iota\n"
> > +    "===================================================================\n"
> > +    "diff --git a/iota b/iota\n"
> > +    "old mode 100755\n"
> > +    "new mode 100644\n"
> > +    "Property changes on: iota\n"
> > +    "-------------------------------------------------------------------\n"
> > +    "Added: svn:executable\n"
> > +    "## -0,0 +1 ##\n"
> > +    "+*\n"
> > +  )
>
> This specifies the addition of the svn:executable property with the non canonical "*\n" value. Is this really what you want to test?
> I think the actual property set code will change it to the proper value, but I would say you need the " \ No newline at end of property" marker here.
>
> The generated patchfile doesn't need it, but it does need the boolean to note the same thing.

I don't understand. If the generated patchfile doesn't need the \ line, then that is a great reason not to have it here, so that we test parsing the output svn currently generates.

In any case, patch_dir_properties() and patch_add_symlink() are written without a \ line. I'll make one of them use a \ line so both variants are tested.

Cheers,

Daniel

Received on 2015-09-27 02:47:04 CEST

This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2015-09/0341.shtml
Classification of Celestial Bodies using CNN in Python

Do you also wish to know how we classify objects from an image? So, here it is! In this article, we will build a fundamental image classification model using TensorFlow and Keras. Our main aim is to predict whether an image contains a star or a galaxy, with the help of Python programming.

To implement our model efficiently using neural networks, we need to know about certain libraries beforehand. The libraries we will be using are TensorFlow and Keras.

- TensorFlow: This is a free and open-source software library developed by researchers and engineers at Google. It can be used for fast computation of Deep Learning models. It has gained this level of popularity because it supports multiple languages for building deep learning models, for instance, Python, C++, JavaScript, and R. The latest stable released version is 2.3.1, 24th September 2020.
- Keras: This is also an open-source library that provides a Python interface for humans. Keras supports multiple backend neural network computation engines like TensorFlow and Theano. The latest released version is 2.4.0, 18th June 2020.

Now, as we are aware of the basic yet important libraries, we should get into our model-building task. Hey! C'mon, don't worry about the installation of libraries, we'll be doing that in this tutorial. I would request you to open any Python IDE, say Google Colab, enable the GPU from the runtime, and practice this tutorial with me.

Steps involved in Model Building

Let's now move forward and start stepwise.

Step 1: Understanding the DataSet

Before directly jumping into model-building, one should always get insights into the dataset. For this tutorial, you can download the dataset for the respective operations. For the dataset, you can click here (the size of the dataset was large, so the drive link was posted). You'll find 2 folders named as:

- training_set: required for training our model.
- test_set: required for testing our model.
Inside both of the above folders you'll find 2 more folders containing images of STARS and GALAXY. So, here's your dataset, all set. Before getting ahead you need to install the required libraries (TensorFlow, Keras, matplotlib); for that, you can run the following command in your ".ipynb" file:

pip install package-name

Step 2: Importing the libraries and packages

Libraries are the fundamental building blocks of any programming language; we have them to make our task easier. In this section, we will import all the necessary libraries.

import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import Activation
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
import matplotlib.pyplot as plt

We can also check the version of TensorFlow:

print(tf.__version__)

Output: 2.3.0

Step 3: Building the Convolutional Neural Network model

While building a model we follow a series of steps, or you could say the model itself is built up of several layers. The layers are as follows:

Layer 1: Convolution (the first layer, where feature extraction is done on the input data)
Layer 2: Max Pooling (this layer selects the maximum value from each area it convolves over and passes that data onward)
Layer 3: Flattening (the pooled feature map is converted to a single column and passed to a fully connected layer)
Layer 4: Full Connection (here we find multiple layers of neurons that contribute to the decision of our model)

Here, 'relu' and 'sigmoid' are the activation functions. Are you excited to see the implementation of these layers? Let's do it then.
# Initialising the CNN
my_model = Sequential()

# Step 1 - Convolution
my_model.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))

# Step 2 - Pooling
my_model.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a second convolutional layer
my_model.add(Conv2D(64, (3, 3)))
my_model.add(Activation('relu'))
my_model.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a third convolutional layer
my_model.add(Conv2D(64, (3, 3)))
my_model.add(Activation('relu'))
my_model.add(MaxPooling2D(pool_size = (2, 2)))

# Step 3 - Flattening
my_model.add(Flatten())

# Step 4 - Full connection
my_model.add(Dense(128, activation = 'relu'))
my_model.add(Dense(1, activation = 'sigmoid'))

my_model.summary()

Output:

Step 4: Compile the model.

Now, for compiling our model, we use the 'adam' optimizer with the 'binary_crossentropy' loss function along with the 'accuracy' metric.

# Compiling the CNN
my_model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

Step 5: Fitting the CNN to the images.

In this step, you'll be introduced to the ImageDataGenerator class of Keras. It helps you build powerful CNN models with a very small dataset – just a few hundred or thousand pictures from each category. After this, we'll train our model by calling the 'fit' method on it (the older 'fit_generator' method and its Keras 1 argument names are deprecated in TensorFlow 2.x); the returned History object is kept for the plots later.

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('ENTER THE LOCATION OF TRAINING_SET FOLDER',
                                                 target_size = (64, 64),
                                                 batch_size = 25,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('ENTER THE LOCATION OF TEST_SET FOLDER',
                                            target_size = (64, 64),
                                            batch_size = 25,
                                            class_mode = 'binary')

# The training_set holds 790 images and the test_set 270;
# fit works through the whole generator each epoch.
Analysis = my_model.fit(training_set,
                        epochs = 25,
                        validation_data = test_set)

Output:

As we can see in the output above, the 1st epoch yields a test_set accuracy of only 70%, but as the training goes along, the final epoch yields an accuracy of 97.66% for the test_set, which is pretty good. We are good to go now. Do you know a trick?
Let me tell you, YOU CAN SAVE YOUR MODEL AS WELL FOR FUTURE USE!

Step 6: Saving the model.

my_model.save('Finalmodel.h5')

Now, let's visualize our analysis using the most popular plotting library – matplotlib.

Step 7: Visualization.

Visualization is a tool that always lets you draw better and more useful insights from your model. Here, Analysis is the History object returned by fitting the model.

1. Plot for train_set loss and test_set loss:

import matplotlib.pyplot as plt

# Plot the Loss
plt.plot(Analysis.history['loss'], label = 'loss')
plt.plot(Analysis.history['val_loss'], label = 'val_loss')
plt.legend()
plt.show()

Output:

2. Plot for train_set accuracy and test_set accuracy:

# Plot the Accuracy
plt.plot(Analysis.history['accuracy'], label = 'acc')
plt.plot(Analysis.history['val_accuracy'], label = 'val_acc')
plt.legend()
plt.show()

This completes our tutorial on image classification of celestial bodies in Python. So, what are you waiting for? Open your Python notebook and get your hands dirty working with the layers of neurons. We all know machine learning is the future technology, and I hope this tutorial has helped you gain an understanding of it. For more such tutorials on ML and DL using Keras and TensorFlow, do check out the valueml blog page. For any queries about the above article, you can ask in the comment section. Hope you learned something valuable today. Good luck ahead! Thank you for reading!
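One detail worth spelling out before reusing the saved model: the final Dense(1, activation='sigmoid') layer emits a single probability, and flow_from_directory assigns class indices alphabetically, so with folders named GALAXY and STARS the mapping below is an assumption (check training_set.class_indices on your own run to be sure). Turning a prediction into a label is then a simple threshold:

```python
# Assumed mapping: flow_from_directory indexes class folders alphabetically,
# so GALAXY -> 0 and STARS -> 1. Verify with training_set.class_indices.
CLASS_NAMES = {0: "GALAXY", 1: "STARS"}

def to_label(p, threshold=0.5):
    """Map the sigmoid output of the final Dense(1) layer to a class name."""
    return CLASS_NAMES[int(p >= threshold)]

print(to_label(0.92))  # STARS
print(to_label(0.08))  # GALAXY
```

With the model loaded via keras.models.load_model('Finalmodel.h5'), you would pass my_model.predict(...) outputs through to_label.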
https://valueml.com/classification-of-celestial-bodies-using-cnn-in-python/
In my project I'm handling all HTTP requests with the python requests library. Now, I need to query the HTTP server using a specific DNS – there are two environments, each using its own DNS, and changes are made independently. So, when the code is running, it should use the DNS specific to the environment, and not the DNS specified by my internet connection. Has anyone tried this using python-requests? I've only found a solution for urllib2.

requests uses urllib3, which ultimately uses httplib.HTTPConnection as well, so the techniques from the urllib2 solution still apply, to a certain extent. The requests.packages.urllib3.connection module subclasses httplib.HTTPConnection under the same name, having replaced the .connect() method with one that calls self._new_conn. It is perhaps easiest to patch that method:

import socket

from requests.packages.urllib3.connection import HTTPConnection

def patched_new_conn(self):
    """
    Establish a socket connection and set nodelay settings on it

    :return: a new socket connection
    """
    # Resolve the hostname to an ip address; use your own
    # resolver here, as otherwise the system resolver will be used.
    hostname = your_dns_resolver(self.host)
    try:
        conn = socket.create_connection(
            (hostname, self.port),
            self.timeout,
            self.source_address,
        )
    except AttributeError:  # Python 2.6
        conn = socket.create_connection(
            (hostname, self.port),
            self.timeout,
        )
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, self.tcp_nodelay)
    return conn

HTTPConnection._new_conn = patched_new_conn

and you'd provide your own code to resolve self.host into an ip address, instead of relying on the socket.create_connection() call resolving the hostname for you. Like all monkeypatching, be careful that the code hasn't significantly changed in later releases; the patch here was created against requests 2.3.0.
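The your_dns_resolver placeholder in the patch above is deliberately left to the reader; a minimal sketch is a per-environment override table that falls back to the system resolver for everything else (the hostname and address here are made up for illustration):

```python
import socket

# Hypothetical per-environment override table; anything not listed
# falls back to the normal system resolver.
DNS_OVERRIDES = {
    "api.internal.example.com": "10.0.0.42",
}

def your_dns_resolver(host):
    """Return an ip address for host, preferring the override table."""
    if host in DNS_OVERRIDES:
        return DNS_OVERRIDES[host]
    return socket.gethostbyname(host)

print(your_dns_resolver("api.internal.example.com"))  # 10.0.0.42
```

In a real deployment you would populate DNS_OVERRIDES from per-environment configuration, or query the environment's DNS server directly with a resolver library instead of a static table.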
https://codedump.io/share/JWuiO4CTf8rD/1/python-39requests39-library---define-specific-dns
Understanding KNN algorithm using Iris Dataset with Python

Hey folks, let's learn about a lazy algorithm that can be used for both classification and regression. You know, it is the K-Nearest Neighbor algorithm.

Instance-Based Learning

The knn algorithm is known by many names, such as lazy learning, instance-based learning, case-based learning, or locally-weighted regression. This is because it does not build a model from the data while training; in other words, it keeps all the data around until prediction time. Another property of the knn algorithm is that it is non-parametric, meaning it makes no prior assumptions about the distribution of the data. It is called instance-based learning because we do not process the training examples to train a model. Instead, we store them, and whenever we need to classify a new example, we retrieve a set of similar instances to generate the result. To find the result, the algorithm follows the observation that "similar things exist in close proximity". Now that we know a few things about the knn algorithm, let's dive into the code.

Implementation using Iris Dataset in Python

This dataset contains three classes of the iris flower. Among these three classes, the first is linearly separable from the others, whereas the other two classes aren't linearly separable from each other. For the implementation, we will use the scikit-learn library. Let's import the needed Python libraries.

import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
import matplotlib.pyplot as plt

Now we will read the contents of the dataset to check that they are in the required format.

df = pd.read_csv(r"D:\swapnali\Engineering\Third Year\sem 5\Machine Learning\Practical\iris.csv")
df.head()

The above lines will display the first 5 entries of the dataset, which contains text values, as shown below. We need to convert those values to numbers for easy calculation.
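Mapping text labels to integers is simple enough to sketch by hand, which shows what the LabelEncoder used next is doing under the hood (like LabelEncoder, this assigns codes in sorted order of the class names):

```python
def encode_labels(values):
    """Map each distinct label to an integer code, assigned in sorted order."""
    classes = sorted(set(values))
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[v] for v in values], mapping

codes, mapping = encode_labels(
    ["Iris-setosa", "Iris-virginica", "Iris-setosa", "Iris-versicolor"]
)
print(mapping)  # {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2}
print(codes)    # [0, 2, 0, 1]
```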
To do this we use LabelEncoder.

iris_fl = LabelEncoder()
df['iris_fl_n'] = iris_fl.fit_transform(df['iris_fl'])
X = df.drop('iris_fl', axis='columns')
X = X.drop('iris_fl_n', axis='columns')
print(X.head())

Now if we look at the first five entries, we will see a new column added which holds the numeric values (0-2) for each class of iris flower, and the column holding the text values has been dropped. After this, we will store our target values in a separate variable and split the data into train and test sets.

y = df['iris_fl_n']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)
k_range = range(1, 26)
scores = {}
scores_list = []

To check how the accuracy of the model changes with changing values of k, we use this loop and store the accuracy score of the model for each value of k. This is just to check the accuracy and can be omitted.

for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    scores[k] = metrics.accuracy_score(y_test, y_pred)
    scores_list.append(metrics.accuracy_score(y_test, y_pred))

Now we plot the accuracy of the model with respect to the changing values of k.

plt.plot(k_range, scores_list)
plt.xlabel('Value of k for kNN')
plt.ylabel('Testing Accuracy')

Let us go ahead and actually implement the most important part of our program. We will store our model in a variable named knn.

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)

Now, let's introduce new values to the model to see whether it gives the expected results. For this, we will make a list of test values and pass them to the model to generate results.

x_test = [[5, 4, 3, 4], [5, 4, 4, 5]]
y_predict = knn.predict(x_test)

Now we will print the class into which our model has classified these values.
print("\n\nprediction for values:", x_test[0], "is: ", y_predict[0])
print("prediction for values:", x_test[1], "is: ", y_predict[1])

We get the output as:

prediction for values: [5, 4, 3, 4] is: 1
prediction for values: [5, 4, 4, 5] is: 2

Conclusion

We can cross-check that our model correctly classifies new instances into their respective classes. This algorithm can be used in different scenarios, like detecting patterns or classifying handwritten digits. Thank you.

Further reading: Learning to classify wines using scikit-learn
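The "similar things exist in close proximity" rule that the article leans on can also be sketched from scratch in a few lines of NumPy, which makes the distance-and-vote mechanics behind KNeighborsClassifier explicit (toy 2-D data here, not the iris set):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    """Classify x_new by majority vote among its k nearest training points."""
    # Euclidean distance from x_new to every training point.
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x_new), axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest points
    votes = Counter(np.asarray(y_train)[nearest])
    return votes.most_common(1)[0][0]

# Toy 2-D data: class 0 clustered near the origin, class 1 near (5, 5).
X = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, [0.5, 0.5], k=3))  # 0
print(knn_predict(X, y, [5.5, 5.5], k=3))  # 1
```

This is exactly the lazy-learning behaviour described earlier: all the work (distance computation and voting) happens at prediction time, and nothing is learned up front.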
https://valueml.com/understanding-knn-algorithm-using-iris-dataset-with-python/
Convolutional Neural Networks Using Keras

In this tutorial, we will learn how to implement Convolutional Neural Networks using Keras in the Python language.

CNN: A Brief Introduction

A Convolutional Neural Network works on the principle of shared weights. There are many reasons why CNNs are used so extensively: they reduce the required processing power without losing important features. The main difference from an ordinary network is the use of a kernel layer, also known as a filter. Its size is smaller than the input layer, and it is used for the extraction of important features (it convolves over the input). When the spatial dimension is reduced by the convolution, it is known as valid padding; if it remains the same, it is known as same padding. The final layers are just like a normal fully connected neural network. Keras has made it a whole lot easier to implement these networks with good optimization.

Implementation of Keras Convolutional Neural Networks

Tools Required
- NumPy
- Keras
- Python
- Spyder or any Python IDE

Keras has some built-in datasets which can be loaded easily. You can use any of the datasets from Datasets-Keras. I have used the MNIST digit classification dataset for this tutorial. So let's start coding!

Step-1 Loading the Libraries

Not many libraries are required; we will use Keras' built-in functions.

#Import Libraries
import numpy as np
import pandas as pd
import keras

Step-2 Loading the Dataset

The layers required for the CNN are:
- MaxPooling2D: Selects the max value in each region of the convolved matrix. Usually works better than average pooling.
- Conv2D: The convolution layer.
- Dense: The fully connected layer, as in normal neural networks.
- Flatten: Converts the data into a 1-D array for the fully connected layers.
- Dropout: Prevents overfitting.
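The valid-vs-same padding rule mentioned above can be checked numerically. This helper mirrors the usual output-size formula for a convolution or pooling layer along one spatial axis (the kernel and stride values below are just illustrative):

```python
import math

def conv_out(size, kernel, stride=1, padding="valid"):
    """Spatial output size of a conv/pooling layer along one axis."""
    if padding == "same":
        return math.ceil(size / stride)           # 'same' keeps the size (for stride 1)
    return math.floor((size - kernel) / stride) + 1  # 'valid' shrinks it

# A 28x28 MNIST input through a 3x3 valid convolution, then 2x2 max pooling:
s = conv_out(28, 3)            # 26  (valid padding shrinks the map)
s = conv_out(s, 2, stride=2)   # 13
print(s)  # 13
print(conv_out(28, 3, padding="same"))  # 28  (same padding keeps it)
```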
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical
from keras.layers import Dropout
from keras.layers.advanced_activations import LeakyReLU

#Load the Dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
#x_train has 60k samples of size 28x28

Now the data is loaded, but it still needs to be preprocessed into a single channel. We will use NumPy's reshape function.

Step-3 Preprocessing the Data

Steps for preprocessing:
- Reshape into a single channel
- Change the datatype to "float32"
- Scale the x_test and x_train values to vary from 0-1

We will also one-hot encode the output values. The given dataset has 10 classes (0-9). We will change each label into a binary vector with 1 for the correct digit and 0 for the others.

#Preprocess
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
num_classes = np.unique(y_train)
tot_classes = len(num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train/255
x_test = x_test/255
y_train = to_categorical(y_train, tot_classes)
y_test = to_categorical(y_test, tot_classes)

Step-4 Defining the Model

We build a "Sequential" model using the Keras add function. Batch size is the number of elements picked at once, and one complete pass over the dataset is known as an epoch. Here I have trained for 10 epochs with a batch size of 128. You can increase these if the dataset is large, although it will require more computational power. If you want to learn more about layers in Keras, you can visit Layers in Keras.

#Model
batch_size = 128
num_classes = 10
epochs = 10

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

Step-5 Training and Saving the Model

Verbose 0 will not show the training steps; verbose 1 will show the accuracy after each step.
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

hist = model.fit(x_train, y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 verbose=1,
                 validation_data=(x_test, y_test))

model.save('hand_wr.h5')
print("Saved Successfully")

Training will take approx. 15 minutes on an Intel(R) Core(TM) i7 CPU.

Step-6 Finding the Accuracy

As this is not a very large dataset, the model achieves very high accuracy: it is about 99% accurate and not overfitting. Use "load_model" from Keras.

#Model Accuracy
from keras.models import load_model

model = load_model('hand_wr.h5')
values = model.evaluate(x_test, y_test, verbose=1)
print('Test Loss: ', values[0])
print('Test Accuracy: ', values[1])

So we understood the basics of a convolutional network. You can ask any doubt in the comments section. Machine Learning is ❤️.
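As a side note, the to_categorical call from the preprocessing step is easy to replicate in plain NumPy, which makes the one-hot layout explicit:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Plain-NumPy equivalent of keras.utils.to_categorical."""
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((labels.size, num_classes), dtype="float32")
    out[np.arange(labels.size), labels] = 1.0  # set the matching column per row
    return out

print(one_hot([0, 3, 1], 4))
# [[1. 0. 0. 0.]
#  [0. 0. 0. 1.]
#  [0. 1. 0. 0.]]
```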
https://valueml.com/convolutional-neural-networks-using-keras/
ASP.NET Whidbey Overview

Date Published: 29 March 2004

Introduction

ASP.NET 2.0 -- code-named Whidbey, alongside Visual Studio 2005 -- brings a long list of new features to web developers.

New Feature List

There are a LOT of new features in ASP.NET 2.0 / VS 2005. This is just a simple list of features -- we'll look at some of the more notable ones in detail further in this article.

- Master Pages
- New Web Controls:
  - BulletedList
  - FileUpload
  - HiddenField
  - Panel
  - DynamicImage
  - MultiView / View
  - Wizard
  - Login / LoginName / LoginStatus / LoginView / PasswordRecovery
  - AdRotator (improved)
  - ImageMap
  - Buttons/HyperLinks can record impressions/clicks
  - TreeView
  - GridView (better DataGrid)
  - DetailsView (single-record vertical display control)
- New Data Controls
- No Code DataBinding
- New Visual Studio IDE (a la Web Matrix)
- Language Enhancements (generics, partial classes, anonymous methods)
- Precompilation and no-source deployment
- Better codebehind model
- /Code folder
- More configuration support
- Built-in connection string encryption
- Better ASP.NET tracing
- Themes
- Skins
- Site Counters
- ADO.NET PageReader
- ADO.NET SqlResultSet
- XML XQuery Support
- Client focus / Client Scrolling support
- Cross-Page Postbacks (with full event and intellisense support)
- SQL Cache Invalidation
- Validation Regions
- Better Mobile Support
- ObjectSpaces (object-relational mapping)
- Web Parts
- Personalization Engine
- Membership Engine
- Role Manager
- Site Directory Definition (and data sources)
- URL Mapping

Master Pages

With ASP.NET 1.x, it's too hard to create a common look for a site. The standard options include creating header and footer (at least) user controls and adding them to every page, or using a base page class, or some combination of these two.
A couple of good articles on these techniques can be found here:

- Page Templates: Introduction, Paul Wilson
- Page Templates: Designer Issues, Paul Wilson
- Page Templates: Server Forms, Paul Wilson
- Base Page and User Control Classes, Robert Boedigheimer

The downside to these options is that they're not natively supported by the framework, so any solution is going to be a custom solution, and thus not something you're likely to be able to reuse between organizations. Further, there's not much IDE support for any of these techniques.

With Master Pages, it is very easy to create a template page for the site (a 'master' page) and save it with its own extension (.master). Any page can inherit its visual formatting from a master page through a page directive. Master pages can inherit from other master pages. Content can be placed in one or more regions within a master page, not just a single place around which all other UI is placed (as is common with today's techniques). Lastly, there is excellent IDE support for master pages, as this screenshot demonstrates:

This is a page that has specified a master page. As you can see, some of it is greyed out and some of it is enabled. The grey portion is the master page content, and can be edited only by right clicking and selecting "Edit Master". The other section of the page is a content area that can be manipulated on this page. In this case, this is the home page of an "Internet Site" that can be created as a sample project in either VB or C#. More on these sample projects later.

Wizard Control

A common requirement in data entry or registration form pages is to collect user data across a series of pages. For instance, a bank account application might require personal information from one or more applicants, business information, financial information, etc., each on separate forms. Or an ecommerce application may collect information about billing address, shipping address, and payment on separate forms.
In ASP.NET 1.x, this can be done, but there is no built-in support for it, so a lot of code must be written to support updating the page state, or many separate forms must be written and state managed between them. It's messy.

The Wizard web control is designed to make these scenarios easier. The Wizard control allows the developer to create templated "steps" within the control, and manages the state throughout the steps. Validation within individual steps can be completed without incomplete steps' RequiredFieldValidators firing (as is a problem in 1.x). Users can navigate forward, backward, or jump to a specific step within the wizard. At design time, it's very easy to manipulate the Wizard and its step templates, as this screenshot demonstrates:

Here's an example of a very simple Wizard control in action, using the March 2004 Preview build of VS2005:

<%@ page %>

<script runat="server">
void Wizard1_FinishButtonClick (object sender, WizardNavigationEventArgs e)
{
    Response.Write ("Finish clicked!");
}
</script>

<html>
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Wizard ID="Wizard1" runat="server" OnFinishButtonClick="Wizard1_FinishButtonClick">
            <WizardSteps>
                <asp:WizardStep runat="server">
                    This is content for STEP 1.
                </asp:WizardStep>
                <asp:WizardStep runat="server">
                    This is content for STEP 2.
                </asp:WizardStep>
            </WizardSteps>
        </asp:Wizard>
    </div>
    </form>
</body>
</html>

GridView / DetailsView

If you've done any heavy lifting with ASP.NET, you've probably at least played with the DataGrid control. This is the "Mother of All Controls" in ASP.NET 1.x, being one of the biggest honkin' web controls around. However, despite its wealth of functionality, it still requires a ton of code to get to do anything beyond displaying data in a single fashion. Since one of the goals of Whidbey was to reduce total code written by 2/3, the DataGrid was targeted for some major overhaul work to help address this issue.
The most common things that are added to the standard DataGrid, each of which requires more code than the last, are paging, sorting, and editing. The answer is the GridView. Want to add paging? Just turn it on and it works. Want sorting? Same deal. Bidirectional sorting? No problem. Editing? Again, it just works. Even databinding, which for the DataGrid requires at least a line of code to call .DataBind(), has been streamlined so that it can be done without any code by using the new fooDataSource controls (where foo is the type of Data Source). An example GridView from the Pubs database with paging and sorting enabled (and no code required) is shown here:

With a minimal amount of code, this GridView can be modified to include support for Master-Detail viewing, where selecting an author will update a separate GridView showing that author's titles. Further, selecting a title will bring it up in a DetailsView for editing, again with little or no code. Here's a completed page, built on the PDC Preview build of Whidbey, which rolls these features together and uses fewer than 10 lines of code.

Although the object model for the GridView is very similar to the DataGrid, its Columns collection is populated with fooField objects, rather than fooColumn objects. For example, while a DataGrid has a BoundColumn control, the GridView has a BoundField.

Authentication Controls

A very common task in web applications revolves around determining who the current user is and whether or not they are authorized to access a particular resource. To support these tasks, a suite of security controls is included in ASP.NET 2.0. These controls include:

Login
- Prompts the user for login name and password
- Supports the usual options such as a Forgot Password link and a Remember Password? checkbox
- Integrates with the Membership provider

LoginName
- A simple control that displays the current logged-in user's name (or nothing if the user is not authenticated).
LoginStatus
- Another simple but very common control; this one displays a link to the Login page if the user is not authenticated, otherwise it displays a link to the Logout page.

LoginView
- Allows custom content to be displayed based on whether the user is logged in.
- Content can be customized for individual user roles.

PasswordRecovery
- Displays a Forgot Password? dialog and allows you to easily choose one of several common mechanisms for sending or resetting a user's password.

Validation Groups

One of the coolest features of ASP.NET is its suite of Validation Controls. The difference between validating user input in ASP.NET vs. ASP 3.0 with JavaScript is like night and day. That said, the validation controls do have some severe limitations in ASP.NET 1.0. One of the largest ones is that validation occurs strictly at the page level, so a page is either valid or it isn't. This makes it impossible to use these controls to validate separate parts of one page (including panel-based wizard style pages where all the panels are in one page and visibility is controlled by the user's button clicks).

With ASP.NET 2.0, this problem is addressed through the introduction of Validation Groups. All validation controls and all controls that cause a postback support this new property. The ValidationGroup property is just a simple text field. When a postback occurs, only those validators whose ValidationGroup matches the postback control's ValidationGroup (even if it is blank, the default) are fired. This simple yet elegant solution solves a significant problem with the 1.x implementation of validation.

Validation groups are also supported in code. A particular group's validity can be checked by calling Page.Validate("GroupName") and then testing Page.IsValid. Also, the standard Page.IsValid now evaluates the ValidationGroup associated with the last PostBack.

Image Generation

ASP.NET 2.0 will have native support for dynamic image generation.
This is often done today using an ASPX page, setting the content type, and using a bit of GDI+ from the System.Drawing namespace. In ASP.NET 2.0, there will be some new base classes that will make this much easier, and a new file extension dedicated to this type of content, *.asix. Coupled with these features will be a new web control, the DynamicImage control, which will make it easy to place generated images on a page. Images may be stored in memory or on disk using these features, and will support automatic detection of supported formats for mobile devices and can convert the image on-the-fly to a supported type if necessary.

URL Mapping

Another common problem with ASP.NET sites today is long, ugly URLs (look at any MSDN article for example). While URL mapping can certainly be done today using either Application_BeginRequest in the global.asax or a custom HttpHandler, ASP.NET 2.0 makes common mappings simple by adding support for them to the web.config file. A new configuration section, urlMappings, will allow individual 'fake' URLs to be mapped to their actual (probably much longer) counterparts. A very common use for this would be to eliminate navigation-specific querystring data, such as that used by the IBuySpy or Rainbow portal applications. Instead of having users navigate to /Default.aspx?tabid=0, you can point them to /Home.aspx, and specify the mapping in the config like so:

<urlMappings enabled="true">
    <add url="~/Home.aspx" mappedUrl="~/Default.aspx?tabid=0" />
</urlMappings>

Site Counters

Let's say you have a button on your page and you want to know how often it's clicked. Or you just want to know how often users are viewing the pages in your application, but you're not a sysadmin and you don't want to deal with web logs and a log analyzer. Well then, ASP.NET 2.0's site counters are for you. These counters are built into the existing link and button controls, and use a built-in service or your own custom provider (the built-in service works with Sql Server or Access).
The counters simply track impressions and/or clicks over whatever timespan you specify (e.g. impressions/hour vs impressions/day -- one row of data will exist per timespan, so more granularity equals a larger database). The main properties used for individual controls are CountClicks (bool), CounterGroup (string), and CounterName (string). An example hyperlink:

<asp:hyperlink Text="Click for Partner Details"
    NavigateUrl=""
    CountClicks="true"
    CounterGroup="PartnerClicks"
    CounterName="Partner1"
    runat="server" />

Before these counters will work, the site counters need to be set up using the ASP.NET configuration tool (which unfortunately appears to be broken in the build I'm using at the moment). You can also specify site-level counters through a section in web.config. The config section is <siteCounters> but the exact syntax seems to be still up in the air at this point. Suffice to say, you will be able to specify individual page paths or wildcard paths of page groups/folders you would like to track activity on.

Client-Side Features

ASP.NET 2.0 has some frequently requested client side features, as well. First, it was way too hard to add client side click handlers in ASP.NET 1.x (using the Attributes collection), so now there is an OnClientClick property that can be used.

Another frequent request was to improve support for client-side focus. Several features have been added to address this, including a Page.SetFocus(control) method which works today in the alpha builds of VS2005. You can also declaratively set the default focus of a form by specifying a DefaultFocus in the <form> tag. This also supports a DefaultButton property, which will ensure that when the user hits enter, that button is clicked. I know these features will handle a LOT of frequent questions on several lists and forums I frequent.
Example syntax:

<form DefaultFocus="textbox1" DefaultButton="button1" runat="server">

You'll also be able to specify an individual control and call its Focus() method to set focus to that control. With the current build (March 2004 Preview) I'm not having much luck with the <form> and .Focus() techniques, but the Page.SetFocus(control) method has worked since before the PDC03 preview bits. The validation controls also have a new property, SetFocusOnError (boolean), which will set the focus to the first control on the page that had a validator fail. This seems to be working currently.

Visual Studio 2005

Visual Studio 2005 has a number of new improvements over VS.NET 2003. One of the big ones is that FrontPage Server Extensions are no longer required. In fact, Microsoft has embraced a new Internet standard -- they're really leading the pack on this one -- called File Transfer Protocol, or FTP. Ok, so maybe it's not that new, but it is supported (finally) by VS2005, along with a number of other methods for connecting to webs.

No More Project Files (for Websites)

VS2K5 uses a directory-based model that no longer requires project files. You can edit any web anywhere. The performance impact of this change is pretty dramatic for big sites.

No More Single DLL Per Website

It's no longer necessary to build the entire web application into a single DLL, which made team development of a single web app a nightmare. In fact, by default you're never building web pages at all anymore. It's very much just like it was back in the 'good old' ASP 3.0 days, where to build a web form you just saved the file, uploaded it, and to test you viewed it in your browser. Everything is compiled server-side. Now you can modify a single page in the application and not worry that you'll have to rebuild the whole thing (and restart the whole app due to changes in the bin folder) just to touch one page.
HTML Source Preservation

In theory, and so far the theory is proving to be true from my experience, the tool will never, ever, ever reformat or mangle your HTML code. It does support some reformatting features, but you specify when and if you use these. There is also going to be built-in support for a variety of schemas against which the HTML can be validated, including XHTML.

Intellisense Everywhere

No longer limited to codebehind files, you'll now see intellisense in HTML view, including for HTML elements. In page directives. In <script runat="server"> blocks within HTML, etc. All of these work in the March04 preview build. A big one that's not there yet but should be before release is web.config intellisense (but there should also be a web.config GUI editor, too, by then).

HTML Tag Navigator

Displays nesting depth at the bottom of the editor (e.g. <html><body><form><table> to show that you're inside a <table> tag). Will also support collapsing/expanding of HTML elements, similar to #regions in code.

Built-In Test Web Server

No longer requires IIS, so XP Home users can develop ASP.NET pages. The test web server is built on Cassini, which ships with the free Web Matrix tool (and for that matter, a lot of the ideas that went into VS2K5 came from Web Matrix). Only responding to requests from localhost, and not optimized at all for any significant traffic, this is purely a test web server.

Sample Web Projects

There are two powerful sample web projects built into VS2K5 which provide a head start toward building two common types of applications. By release, there may be more such samples. These two samples are an Intranet 'portal' site and an Internet site. The main difference between these two sites is their target audience -- internal users versus anybody on the 'net. The Intranet Portal is a great application to use to get up to speed with how to build sites with ASP.NET 2.0.
It includes support for master pages, a sitemap file, web parts, some security controls, and a bunch of web controls. Web parts in particular are an awesome way to provide users with customization options. Writing about them doesn't do them justice -- if you haven't seen them you really have to try them out to believe how cool they are. Using web parts, users can customize the layout of the page to suit them, using a drag-and-drop interface. The user literally just clicks a link to 'customize page' and from there they can drag controls from one column or header or section to another. Controls can be deleted or added from a 'catalog' as well. When they're done, they just click another link to finish customizing the page, and from that point on whenever they come back, the page will be as they left it.

The Site Map file bears mentioning simply because I think almost all ASP.NET 2.0 developers will end up using this. It can be used along with the SiteMapDataSource control to bind site navigation data to controls like the TreeView or the SiteMapPath (breadcrumb) control, allowing for easy navigation updates.

The Internet site is very similar to the Intranet site except that instead of relying on windows authentication, it has support for authentication using forms authentication. It also has a photo album built into it and some reports for site counters. Naturally the authentication pieces use the new built-in security web controls, and everything is tied into the membership provider by default. The site also uses standard features like master pages and sitemap files to good effect. It's a great starting point for an Internet site, or a great way to learn how to use many of the new tools and techniques available in ASP.NET 2.0.

Summary

Whidbey, or ASP.NET 2.0 / Visual Studio 2005, should be available as a full release in about a year, give or take a couple of quarters. The first beta is due in June 2004, and a second beta will likely follow before the end of the year.
When the second beta comes, it is widely anticipated that it will include a 'Go Live' license like the second beta of ASP.NET 1.0 carried, and many organizations will begin hosting production sites on the beta bits (as many did on ASP.NET 1.0 Beta 2). With the wealth of new features, many of which eliminate effort that must be expended on virtually every web application built using 1.x code, it will definitely provide a competitive advantage for organizations to move to 2.0 quickly. The learning curve is nothing compared to moving to 1.0 from legacy ASP, and the productivity enhancements are huge. In this article, I've only scratched the surface of what is coming in 2.0, and hopefully my enthusiasm for this new release of ASP.NET is evident. Originally published on ASPAlliance.com.
https://ardalis.com/aspnet-whidbey/
How to create a table with spring data cassandra? I have created my own repository like that:

public interface MyRepository extends TypedIdCassandraRepository<MyEntity, String> {
}

So the question: how do I automatically create the cassandra table for that? Currently Spring injects MyRepository, which tries to insert the entity into a non-existent table. So is there a way to create cassandra tables (if they do not exist) during spring container start up? P.S. It would be very nice if there were just a boolean config property, without adding lines of xml and creating something like a BeanFactory etc. :-)

Override the getSchemaAction property on the AbstractCassandraConfiguration class:

@Configuration
@EnableCassandraRepositories(basePackages = "com.example")
public class TestConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "test_config";
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.RECREATE_DROP_UNUSED;
    }

    @Bean
    public CassandraOperations cassandraOperations() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}

Create the keyspace and tables dynamically via the CassandraClusterFactoryBean. For generating the application's tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra @Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH.

Using Cassandra, I want to create the keyspace and tables dynamically from a Spring Boot application. I am using Java-based configuration. I have an entity annotated with @Table whose schema I want created before application start-up, since it has fixed fields that are known beforehand.
You can use this config in the application.properties:

spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS

From Spring Data for Apache Cassandra: Update.create().increment("balance", 50.00) applies an update to a selection of objects in the Apache Cassandra table.

– @Table: identifies a domain object to be persisted to Cassandra as a table.
– @PrimaryKey: identifies the primary key field of the entity.

4. Create a Cassandra repository – open application.properties and configure spring.data.cassandra.

You'll also need to override the getEntityBasePackages() method in your AbstractCassandraConfiguration implementation. This will allow Spring to find any classes that you've annotated with @Table, and create the tables:

@Override
public String[] getEntityBasePackages() {
    return new String[]{"com.example"};
}

Getting started with Spring Data Cassandra: when using Spring Boot, spring-boot-starter-data-cassandra can be used to allow Spring to create tables that do not exist if there is an entity with @Table.

Today we've built a Rest CRUD API using Spring Boot, Spring Data Cassandra and Spring Web MVC to create, retrieve, update and delete documents in a Cassandra database. We also see that CassandraRepository supports a great way to make CRUD operations and custom finder methods without the need for boilerplate code.

- You'll need to include the spring-data-cassandra dependency in your pom.xml file.
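For reference, a minimal pom.xml entry for the Boot starter might look like the following (version omitted on purpose: in a Spring Boot project it is managed by the Boot parent/BOM):

```xml
<!-- Assumes a Spring Boot project; the Boot parent/BOM supplies the version. -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-cassandra</artifactId>
</dependency>
```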
Configure your config class as below (it needs to extend AbstractCassandraConfiguration for the @Override methods to apply):

@Configuration
@PropertySource(value = { "classpath:Your .properties file here" })
@EnableCassandraRepositories(basePackages = { "base-package name of your Repositories" })
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Autowired
    private Environment environment;

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("contactpoints from your properties file"));
        cluster.setPort(Integer.parseInt(environment.getProperty("ports from your properties file")));
        return cluster;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("keyspace from your properties file"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.CREATE_IF_NOT_EXISTS);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }

    @Bean
    public CassandraMappingContext mappingContext() throws ClassNotFoundException {
        CassandraMappingContext mappingContext = new CassandraMappingContext();
        mappingContext.setInitialEntitySet(getInitialEntitySet());
        return mappingContext;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[]{"base-package name of all your entities annotated with @Table"};
    }

    @Override
    protected Set<Class<?>> getInitialEntitySet() throws ClassNotFoundException {
        return CassandraEntityClassScanner.scan(getEntityBasePackages());
    }
}

This last getInitialEntitySet method might be an optional one; try without it too. Make sure your keyspace, contact points and port are set in the .properties file.
Like:

cassandra.contactpoints=localhost,127.0.0.1
cassandra.port=9042
cassandra.keyspace='Your Keyspace name here'

Identifier.java: note the use of the @UserDefinedType annotation.

Entities: an entity is a Java class that is mapped to a Cassandra table. The class includes all of the table's columns. Spring Data for Apache Cassandra approaches data access with mapped entity classes that fit your data model. You can use these entity classes to create Cassandra table specifications and user type definitions.

INSERT, SELECT, DELETE DATA IN CASSANDRA USING SPRING DATA: in this article we will cover in detail how we can use Spring Data Cassandra for manipulation of the data in a spring boot application, while keeping Spring as our Cassandra connection manager and data manager.

- I'm having the exact same issue and would love to know. So far I've created a custom repository method called "createIfExists" where I've instantiated a CreateTableSpecification with ifNotExists() called on it. I manually call the method after spring-data-cassandra initialization.
- I'm creating the SessionBeanFactory manually (to override spring-boot's) and setting the schema action. However, it doesn't appear to have made any difference and I still get "InvalidQueryException: unconfigured table mytable".
- I suppose presently (spring-data-cassandra 1.5M1) it is not working. Can you confirm?
- @AllahbakshAsadullah yes, it's not working with version 1.5.1. What version did you use instead?
https://thetopsites.net/article/56107676.shtml
Container registry offerings

AWS provides the Elastic Container Registry (ECR), Azure has Container Registry, and Google has its Container Registry. Each provider has associated services unique to their offering, but all support Docker or OCI compliant images.

Build it

Let's examine how to create a registry with the provider of your choice. In these examples, we create a registry, build a Docker image, and push the image to the registry. The application used for the image is NGINX. Choose your cloud provider to learn how to build a registry.

In this example, we create an ECR repository configured to scan an image's Operating System components. Scanning for vulnerabilities in an application is currently out of scope. We also set a policy for the repository that controls the actions allowed, and a lifecycle policy that expires an image after a set time. With Pulumi, it's possible to build an image locally using Docker and push it to your repository. To push the image, we obtain the credentials required to push from the registry. Finally, we export the credentials and the URL for the registry. Read more about ECR in the API Reference.
import * as docker from "@pulumi/docker";
import * as aws from "@pulumi/aws";

// Create a repository and configure it to scan the image on push
const repo = new aws.ecr.Repository("myrepository", {
    imageScanningConfiguration: {
        scanOnPush: true,
    },
    imageTagMutability: "MUTABLE",
});

// Set a use policy for the repository
// (the statement grants a representative set of pull/push actions;
// tighten the Principal and the Action list to suit your needs)
const repositoryPolicy = new aws.ecr.RepositoryPolicy("myrepositorypolicy", {
    repository: repo.id,
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Sid: "new policy",
            Effect: "Allow",
            Principal: "*",
            Action: [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeRepositories",
                "ecr:GetRepositoryPolicy",
                "ecr:ListImages",
                "ecr:DeleteRepository",
                "ecr:BatchDeleteImage",
                "ecr:SetRepositoryPolicy",
                "ecr:DeleteRepositoryPolicy"
            ]
        }]
    })
});

// Set a policy to control the lifecycle of an image
const lifecyclePolicy = new aws.ecr.LifecyclePolicy("mylifecyclepolicy", {
    repository: repo.id,
    policy: JSON.stringify({
        rules: [{
            rulePriority: 1,
            description: "Expire images older than 14 days",
            selection: {
                tagStatus: "untagged",
                countType: "sinceImagePushed",
                countUnit: "days",
                countNumber: 14
            },
            action: {
                type: "expire"
            }
        }]
    })
});

// Get the repository credentials we use to push to the repository
const repoCreds = repo.registryId.apply(async (registryId) => {
    const credentials = await aws.ecr.getCredentials({
        registryId: registryId,
    });
    const decodedCredentials = Buffer.from(credentials.authorizationToken, "base64").toString();
    const [username, password] = decodedCredentials.split(":");
    return { server: credentials.proxyEndpoint, username, password };
});

// Create a new image and push it to the repository
const image = new docker.Image("myapp", {
    imageName: repo.repositoryUrl,
    build: "./app",
    registry: repoCreds,
});

// Export credentials and URL to the repository
export const credentials = repoCreds;
export const repoEndpoint = repo.repositoryUrl;

In this example, we create an Azure Resource Group to contain the resources for the registry, such as the App Service that hosts the registry. We instantiate a registry with the containerservice module and use the Image module in the Docker package to build and push the image to the registry.
We export the registry URL and the username and password in case we should want to push or pull an image using the Docker CLI.

import * as azure from "@pulumi/azure";
import * as docker from "@pulumi/docker";
import * as pulumi from "@pulumi/pulumi";

// Create an Azure Resource Group
const resourceGroup = new azure.core.ResourceGroup("examples");

// Create a dedicated App Service Plan for Linux App Services
const plan = new azure.appservice.Plan("linux-apps", {
    resourceGroupName: resourceGroup.name,
    kind: "Linux",
    reserved: true,
    sku: {
        tier: "Basic",
        size: "B1",
    },
});

const repo = new azure.containerservice.Registry("myrepository", {
    resourceGroupName: resourceGroup.name,
    sku: "Basic",
    adminEnabled: true,
});

const myImage = new docker.Image("myapp", {
    imageName: pulumi.interpolate`${repo.loginServer}/${"myapp"}:v1.0.0`,
    build: {
        context: `./${"app"}`,
    },
    registry: {
        server: repo.loginServer,
        username: repo.adminUsername,
        password: repo.adminPassword,
    },
});

export const server = repo.loginServer;
export const username = repo.adminUsername;
export const password = repo.adminPassword;

In this example, we'll use the configuration and credentials from the gcloud CLI to build an image and push it into the GCP registry. Make sure the GCP project is set, you are logged into GCP, and Docker is configured to use GCR:

$ gcloud init
$ gcloud auth login
$ gcloud auth configure-docker

We use the Image module in the Docker package to build and push the image to the registry. We export the registry URL and the username and password in case we should want to push or pull an image using the Docker CLI.
import * as docker from "@pulumi/docker";
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";

// Build and push image to gcr repository
const imageName = "myapp";
const myImage = new docker.Image(imageName, {
    imageName: pulumi.interpolate`gcr.io/${gcp.config.project}/${imageName}:latest`,
    build: {
        context: "./app",
    },
});

// Export the repository end point
export const repoEndpoint = pulumi.interpolate`gcr.io/${gcp.config.project}`;

Learn more

Container registries are just one of the many resources used for deploying modern applications. Implementations among cloud service providers differ by the functionality they offer and how they are deployed. The commonality among them is that they provide a secure place to store and retrieve Docker or OCI compliant container images. Explore how to create and manage resources for the cloud service provider of your choice with Pulumi.
https://www.pulumi.com/blog/how-to-registries/
Most operators are actually just methods, so x + y is calling the + method of x with argument y, which would be written x.+(y). If you write a method of your own having the semantic meaning of a given operator, you can implement your variant in the class. As a silly example:

# A class that lets you operate on numbers by name.
class NamedInteger
  NAME_TO_VALUE = { 'one' => 1, 'two' => 2 } # ...

  def initialize(name)
    @name = name
  end

  # define the plus method; binary + receives only the right addend,
  # since the left addend is `self`
  def +(right_addend)
    NAME_TO_VALUE[@name] + NAME_TO_VALUE[right_addend]
  end
end

&& vs. and, || vs. or

Note that there are two ways to express booleans, either && or and, and || or or -- they are often interchangeable, but not always. We'll refer to these as "character" and "word" variants. The character variants have higher precedence, which reduces the need for parentheses in more complex statements and helps avoid unexpected errors.

The word variants were originally intended as control flow operators rather than boolean operators. That is, they were designed to be used in chained method statements:

raise 'an error' and return

While they can be used as boolean operators, their lower precedence makes them unpredictable. Secondly, many rubyists prefer the character variant when creating a boolean expression (one that evaluates to true or false) such as x.nil? || x.empty?. On the other hand, the word variants are preferred in cases where a series of methods are being evaluated, and one may fail. For example, a common idiom using the word variant for methods that return nil on failure might look like:

def deliver_email
  # If the first fails, try the backup, and if that works, all good
  deliver_by_primary or deliver_by_backup and return
  # error handling code
end

From highest to lowest, this is the precedence table for Ruby. High precedence operations happen before low precedence operations.

╔═══════════════════════╦════════════════════════════════════════╦═════════╗
║ Operators             ║ Operations                             ║ Method? ║
╠═══════════════════════╬════════════════════════════════════════╬═════════╣
║ .                     ║ Method call (e.g. foo.bar)             ║         ║
║ [] []=                ║ Bracket Lookup, Bracket Set            ║ ✓¹      ║
║ ! ~ +                 ║ Boolean NOT, complement, unary plus    ║ ✓²      ║
║ **                    ║ Exponentiation                         ║ ✓       ║
║ -                     ║ Unary minus                            ║ ✓²      ║
║ * / %                 ║ Multiplication, division, modulo       ║ ✓       ║
║ + -                   ║ Addition, subtraction                  ║ ✓       ║
║ << >>                 ║ Bitwise shift                          ║ ✓       ║
║ &                     ║ Bitwise AND                            ║ ✓       ║
║ | ^                   ║ Bitwise OR, Bitwise XOR                ║ ✓       ║
║ < <= >= >             ║ Comparison                             ║ ✓       ║
║ <=> == != === =~ !~   ║ Equality, pattern matching, comparison ║ ✓³      ║
║ &&                    ║ Boolean AND                            ║         ║
║ ||                    ║ Boolean OR                             ║         ║
║ .. ...                ║ Inclusive range, Exclusive range       ║         ║
║ ? :                   ║ Ternary operator                       ║         ║
║ rescue                ║ Modifier rescue                        ║         ║
║ = += -=               ║ Assignments                            ║         ║
║ defined?              ║ Defined operator                       ║         ║
║ not                   ║ Boolean NOT                            ║         ║
║ or and                ║ Boolean OR, Boolean AND                ║         ║
║ if unless while until ║ Modifier if, unless, while, until      ║         ║
║ { }                   ║ Block with braces                      ║         ║
║ do end                ║ Block with do end                      ║         ║
╚═══════════════════════╩════════════════════════════════════════╩═════════╝

Unary + and unary - are for +obj, -obj or -(some_expression). Modifier-if, modifier-unless, etc. are for the modifier versions of those keywords. For example, this is a modifier-unless expression:

a += 1 unless a.zero?

Operators with a ✓ may be defined as methods. Most methods are named exactly as the operator is named, for example:

class Foo
  def **(x)
    puts "Raising to the power of #{x}"
  end

  def <<(y)
    puts "Shifting left by #{y}"
  end

  def !
    puts "Boolean negation"
  end
end

Foo.new ** 2 #=> "Raising to the power of 2"
Foo.new << 3 #=> "Shifting left by 3"
!Foo.new     #=> "Boolean negation"

¹ The Bracket Lookup and Bracket Set methods ([] and []=) have their arguments defined after the name, for example:

class Foo
  def [](x)
    puts "Looking up item #{x}"
  end

  def []=(x, y)
    puts "Setting item #{x} to #{y}"
  end
end

f = Foo.new
f[:cats] = 42 #=> "Setting item cats to 42"
f[17]         #=> "Looking up item 17"

² The "unary plus" and "unary minus" operators are defined as methods named +@ and -@:

class Foo
  def -@
    puts "unary minus"
  end

  def +@
    puts "unary plus"
  end
end

f = Foo.new
+f #=> "unary plus"
-f #=> "unary minus"

³ In early versions of Ruby the inequality operator != and the non-matching operator !~ could not be defined as methods. Instead, the method for the corresponding equality operator == or matching operator =~ was invoked, and the result of that method was boolean inverted by Ruby. If you do not define your own != or !~ operators, the above behavior is still true. However, as of Ruby 1.9.1, those two operators may also be defined as methods:

class Foo
  def ==(x)
    puts "checking for EQUALITY with #{x}, returning false"
    false
  end
end

f = Foo.new
x = (f == 42) #=> "checking for EQUALITY with 42, returning false"
puts x        #=> "false"
x = (f != 42) #=> "checking for EQUALITY with 42, returning false"
puts x        #=> "true"

class Foo
  def !=(x)
    puts "Checking for INequality with #{x}"
  end
end

f != 42 #=> "checking for INequality with 42"

Also known as the triple equals. This operator does not test equality, but rather tests if the right operand has an IS A relationship with the left operand. As such, the popular name case equality operator is misleading. This SO answer describes it thus: the best way to describe a === b is "if I have a drawer labeled a, does it make sense to put b in it?" In other words, does the set a include the member b?
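Since case/when dispatches through ===, the "drawer" intuition can be seen directly. The small sketch below is illustrative and not from the original article:

```ruby
# `case` calls `when_value === candidate` for each branch, so the clauses
# below exercise Range#===, Regexp#===, and Module#=== respectively.
def drawer_for(thing)
  case thing
  when 1..5    then "small numbers"   # (1..5) === thing
  when /ell/   then "ell words"       # /ell/ === thing
  when Integer then "other integers"  # Integer === thing
  else "misc"
  end
end

puts drawer_for(3)       # => small numbers
puts drawer_for("Hello") # => ell words
puts drawer_for(42)      # => other integers
```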
(1..5) === 3            # => true
(1..5) === 6            # => false
Integer === 42          # => true
Integer === 'fourtytwo' # => false
/ell/ === 'Hello'       # => true
/ell/ === 'Foobar'      # => false

Classes that override ===

Many classes override === to provide meaningful semantics in case statements. Some of them are:

╔═════════════════╦════════════════════╗
║ Class           ║ Synonym for        ║
╠═════════════════╬════════════════════╣
║ Array           ║ ==                 ║
║ Date            ║ ==                 ║
║ Module          ║ is_a?              ║
║ Object          ║ ==                 ║
║ Range           ║ include?           ║
║ Regexp          ║ =~                 ║
╚═════════════════╩════════════════════╝

Recommended practice

Explicit use of the case equality operator === should be avoided. It doesn't test equality but rather subsumption, and its use can be confusing. Code is clearer and easier to understand when the synonym method is used instead.

# Bad
Integer === 42
(1..5) === 3
/ell/ === 'Hello'

# Good, uses synonym method
42.is_a?(Integer)
(1..5).include?(3)
/ell/ =~ 'Hello'

Ruby 2.3.0 added the safe navigation operator, &.. This operator is intended to shorten the paradigm of object && object.property && object.property.method in conditional statements. For example, you have a House object with an address property, and you want to find the street_name from the address. To program this safely to avoid nil errors in older Ruby versions, you'd use code something like this:

if house && house.address && house.address.street_name
  house.address.street_name
end

The safe navigation operator shortens this condition. Instead, you can write:

if house&.address&.street_name
  house.address.street_name
end

Caution: The safe navigation operator doesn't have exactly the same behavior as the chained conditional. Using the chained conditional (first example), the if block would not be executed if, say, address was false. The safe navigation operator only recognises nil values, but permits values such as false.
If address is false, using the SNO will yield an error:

house&.address&.street_name # => NoMethodError: undefined method `street_name' for false:FalseClass
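That difference can be reproduced with a small self-contained sketch; the House and Address structs below are stand-ins for the article's objects, not classes taken from it:

```ruby
# Minimal stand-ins for the house/address objects discussed above.
Address = Struct.new(:street_name)
House   = Struct.new(:address)

house = House.new(Address.new("Main St"))
puts house&.address&.street_name # => Main St

empty = House.new(nil)
p empty&.address&.street_name    # => nil, &. short-circuits at the nil address

# A && chain would simply evaluate to false here, but &. only guards
# against nil, so calling the next method on false raises NoMethodError:
broken = House.new(false)
begin
  broken&.address&.street_name
rescue NoMethodError
  puts "NoMethodError raised"
end
```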
https://sodocumentation.net/ruby/topic/3764/operators
Let's create an Enumerator for Fibonacci numbers.

fibonacci = Enumerator.new do |yielder|
  a = b = 1
  loop do
    yielder << a
    a, b = b, a + b
  end
end

We can now use any Enumerable method with fibonacci:

fibonacci.take 10 # => [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

If an iteration method such as each is called without a block, an Enumerator should be returned. This can be done using the enum_for method:

def each
  return enum_for :each unless block_given?
  yield :x
  yield :y
  yield :z
end

This enables the programmer to compose Enumerable operations:

each.drop(2).map(&:upcase).first # => :Z

Use rewind to restart the enumerator.

ℕ = Enumerator.new do |yielder|
  x = 0
  loop do
    yielder << x
    x += 1
  end
end

ℕ.next # => 0
ℕ.next # => 1
ℕ.next # => 2
ℕ.rewind
ℕ.next # => 0
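Because an Enumerator mixes in Enumerable, the same infinite Fibonacci enumerator also composes with lazy; a quick sketch:

```ruby
# Rebuild the infinite Fibonacci enumerator from above.
fibonacci = Enumerator.new do |yielder|
  a = b = 1
  loop do
    yielder << a
    a, b = b, a + b
  end
end

# `lazy` lets us filter an infinite stream without looping forever.
evens = fibonacci.lazy.select(&:even?).first(4)
p evens # => [2, 8, 34, 144]

# The enumerator block is re-run on each enumeration, so it can be reused.
p fibonacci.take(5) # => [1, 1, 2, 3, 5]
```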
https://sodocumentation.net/ruby/topic/4985/enumerators
Hey everyone. All the examples I see for getting the FCM token in Xamarin use FirebaseInstanceIdService, which gives a warning that it is deprecated and obsolete. The code snippet below is an example of how I can get the FCM token, but since this is deprecated, what is the recommended way?

public class MyFirebaseIIDService : FirebaseInstanceIdService
{
    public override void OnTokenRefresh()
    {
        var refreshedToken = FirebaseInstanceId.Instance.Token;
    }
}

I have seen Android-specific code in Java that looks similar to this as the "new way", but I can't seem to translate this into Xamarin.

FirebaseInstanceId.getInstance().getInstanceId().addOnSuccessListener(
    MyActivity.this,
    new OnSuccessListener<InstanceIdResult>() {
        @Override
        public void onSuccess(InstanceIdResult instanceIdResult) {
            String newToken = instanceIdResult.getToken();
            Log.e("newToken", newToken);
        }
    });

Any suggestions, or are people just continuing to use the deprecated method in Xamarin? Thanks! Jeremy

Override OnNewToken in a FirebaseMessagingService subclass:

class TestService : FirebaseMessagingService
{
    public override void OnNewToken(string p0)
    {
        base.OnNewToken(p0);
    }
}

Answers

@jezh Thank you, that gave me the new token as it was assigned.

As "FirebaseInstanceId.Instance.Token" became obsolete, for getting the token you can use the following:

1. Implement Android.Gms.Tasks.IOnSuccessListener in your class - if you also extend Java.Lang.Object, you can remove the Dispose() and Handle { get; }

var token = result.Class.GetMethod("getToken").Invoke(result).ToString();

How exactly can I use the token after OnSuccess?

It works perfectly. But does it just ignore or bypass the warning? getToken is obsolete, like FirebaseInstanceId.Instance.Token. So, even calling getToken could be a problem, right?

I found another way (but since this is not documented, probably it is just luck and it doesn't work :P). Anyway, the problem is that OnNewToken is not called on each app start, and sometimes you lose the token and want to know what the token is at the moment.
Can't understand why Google has to obsolete everything they write after a few months... Why? But in the documentation you can find this:

public String getToken (String senderId, String scope)

At this point it is easy! Just port it to Xamarin, use the C# format and... Wait!?!? Sender ID? Scope? I don't know if it is correct, but it works, so please write me if it is not correct. The sender ID is the number in your google-services.json, for example in client-info->mobilesdk_app_id: in the string 1:888888888:android:999999999999999, the sender id is 888888888. Or you can find it in the dashboard, under Settings > Cloud Messaging.

Now you can put the code somewhere in your code, for example in FirebaseService.cs (where you also have OnNewToken()). Maybe in an Init function, so you can retrieve this token at the beginning. It can not run on the main thread, since the call needs 1-2 seconds to be executed; this would give you an exception. So, you can write something like this:

In case you lost your token and want to get it again, here is a different take on the solution by @AntonioStan.2954. (getToken on the InstanceId object is not deprecated.)

Posting this to help avoid using the obsoleted API. To request a token, you can create an Activity to handle the Android Promise like this:

Use the code below to actually request the token:
https://forums.xamarin.com/discussion/136542/fcm-token-for-push-notifications-firebaseinstanceid-intance-token-obsolete
Distributed Autograd Design

This note will present the detailed design for distributed autograd and walk through the internals of the same. Make sure you're familiar with Autograd mechanics and the Distributed RPC Framework before proceeding.

Background

Let's say you have two nodes and a very simple model partitioned across two nodes. This can be implemented using torch.distributed.rpc as follows:

import torch
import torch.distributed.rpc as rpc

def my_add(t1, t2):
    return torch.add(t1, t2)

# On worker 0:
# ...

The main motivation behind distributed autograd is to enable running a backward pass on such distributed models with the loss that we've computed and record appropriate gradients for all tensors that require gradients.

Autograd recording during the forward pass

PyTorch builds the autograd graph during the forward pass and this graph is used to execute the backward pass. For more details see How autograd encodes the history. For distributed autograd, we need to keep track of all RPCs during the forward pass to ensure the backward pass is executed appropriately. For this purpose, we attach send and recv functions to the autograd graph when we perform an RPC.

- The send function is attached to the source of the RPC and its output edges point to the autograd function for the input tensors of the RPC. The input for this function during the backward pass is received from the destination as the output of the appropriate recv function.
- The recv function is attached to the destination of the RPC and its inputs are retrieved from operators executed on the destination using the input tensors. The output gradients of this function are sent to the source node to the appropriate send function during the backward pass.
- Each send-recv pair is assigned a globally unique autograd_message_id to uniquely identify the pair. This is useful to look up the corresponding function on a remote node during the backward pass.
- For RRef, whenever we call torch.distributed.rpc.RRef.to_here() we attach an appropriate send-recv pair for the tensors involved.

As an example, this is what the autograd graph for our example above would look like (t5.sum() excluded for simplicity):

Distributed Autograd Context

Each forward and backward pass that uses distributed autograd is assigned a unique torch.distributed.autograd.context and this context has a globally unique autograd_context_id. This context is created on each node as needed. This context serves the following purposes:

1. Multiple nodes running distributed backward passes might accumulate gradients on the same tensor, and as a result the .grad field of the tensor would have gradients from a variety of distributed backward passes before we have the opportunity to run the optimizer. This is similar to calling torch.autograd.backward() multiple times locally. In order to provide a way of separating out the gradients for each backward pass, the gradients are accumulated in the torch.distributed.autograd.context for each backward pass.
2. During the forward pass we store the send and recv functions for each autograd pass in this context. This ensures we hold references to the appropriate nodes in the autograd graph to keep it alive. In addition to this, it is easy to look up the appropriate send and recv functions during the backward pass.
3. In general we also use this context to store some metadata for each distributed autograd pass.

From the user's perspective, the autograd context is set up as follows:

import torch.distributed.autograd as dist_autograd

with dist_autograd.context() as context_id:
    loss = model.forward()
    dist_autograd.backward(context_id, loss)

It is important to note that your model's forward pass must be invoked within the distributed autograd context manager, as a valid context is needed in order to ensure that all send and recv functions are stored properly to run the backward pass across all participating nodes.
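To make the bookkeeping concrete, here is a toy, torch-free sketch of a context that records send/recv pairs under unique message ids. All class and method names below are illustrative only; the real implementation lives in PyTorch's C++ core:

```python
# Toy model of a distributed autograd context: it hands out globally unique
# autograd_message_ids and remembers the send/recv function for each pair.
import itertools

_next_msg_id = itertools.count()

class ToyDistAutogradContext:
    def __init__(self, context_id):
        self.context_id = context_id
        self.send_functions = {}  # autograd_message_id -> send function
        self.recv_functions = {}  # autograd_message_id -> recv function

    def record_send(self, fn):
        msg_id = next(_next_msg_id)  # unique per send-recv pair
        self.send_functions[msg_id] = fn
        return msg_id

    def record_recv(self, msg_id, fn):
        self.recv_functions[msg_id] = fn

ctx = ToyDistAutogradContext(context_id=0)
mid = ctx.record_send(lambda grad: grad)  # source side of the RPC
ctx.record_recv(mid, lambda grad: grad)   # destination side of the RPC
print(mid in ctx.send_functions and mid in ctx.recv_functions)  # True
```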
Distributed Backward Pass

In this section we outline the challenge of computing dependencies accurately during a distributed backward pass and describe a couple of algorithms (with tradeoffs) on how we can execute a distributed backward pass.

Computing dependencies

Consider the following piece of code being run on a single machine:

import torch

a = torch.rand((3, 3), requires_grad=True)
b = torch.rand((3, 3), requires_grad=True)
c = torch.rand((3, 3), requires_grad=True)
d = a + b
e = b * c
d.sum().backward()

This is what the autograd graph for the code above would look like:

The first step the autograd engine performs as part of the backward pass is computing the number of dependencies for each node in the autograd graph. This helps the autograd engine know when a node in the graph is ready for execution. The numbers in brackets for add(1) and mul(0) denote the number of dependencies. As you can see, this means during the backward pass the add node needs 1 input and the mul node doesn't need any inputs (in other words, doesn't need to be executed). The local autograd engine computes these dependencies by traversing the graph from the root nodes (d in this case).

The fact that certain nodes in the autograd graph might not be executed in the backward pass poses a challenge for distributed autograd. Consider this piece of code which uses RPC:

import torch
import torch.distributed.rpc as rpc

a = torch.rand((3, 3), requires_grad=True)
b = torch.rand((3, 3), requires_grad=True)
c = torch.rand((3, 3), requires_grad=True)
d = rpc.rpc_sync("worker1", torch.add, args=(a, b))
e = rpc.rpc_sync("worker1", torch.mul, args=(b, c))
loss = d.sum()

The associated autograd graph for the code above would be:

Computing dependencies of this distributed autograd graph is much more challenging and requires some overhead (either in terms of computation or network communication).
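The dependency counting step can be sketched in plain Python on a toy graph mirroring the d = a + b; e = b * c example (node names here are invented; no torch involved):

```python
# Count, for each function node, how many incoming edges are reachable from
# the backward roots; nodes with no reachable consumers (mul here) are never
# executed, matching the add(1) / mul(0) annotations in the text.
from collections import defaultdict, deque

# Edges point from a node to the functions that produced its inputs.
graph = {"d_sum": ["add"], "add": [], "mul": []}

def count_dependencies(graph, roots):
    deps = defaultdict(int)
    seen, queue = set(roots), deque(roots)
    while queue:
        node = queue.popleft()
        for producer in graph[node]:
            deps[producer] += 1  # one more gradient this node will wait for
            if producer not in seen:
                seen.add(producer)
                queue.append(producer)
    return dict(deps)

print(count_dependencies(graph, ["d_sum"]))  # {'add': 1} -- mul is absent (0 deps)
```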
For performance-sensitive applications we can avoid a lot of overhead by assuming every send and recv function is valid as part of the backward pass (most applications don't perform RPCs that aren't used). This simplifies the distributed autograd algorithm and is much more efficient, but at the cost that the application needs to be aware of the limitations. This algorithm is called the FAST mode algorithm and is described in detail below.

In the general case it might not be necessary that every send and recv function is valid as part of the backward pass. To address this, we have proposed a SMART mode algorithm which is described in a later section. Please note that currently only the FAST mode algorithm is implemented.

FAST mode algorithm

The key assumption of this algorithm is that each send function has a dependency of 1 when we run a backward pass. In other words, we assume we'll receive a gradient over RPC from another node. The algorithm is as follows:

1. We start from the worker which has the roots for the backward pass (all roots must be local).
2. Look up all the send functions for the current Distributed Autograd Context.
3. Compute dependencies locally, starting from the provided roots and all the send functions we retrieved.
4. After computing dependencies, kick off the local autograd engine with the provided roots.
5. When the autograd engine executes the recv function, the recv function sends the input gradients via RPC to the appropriate worker. Each recv function knows the destination worker id since it is recorded as part of the forward pass. The recv function also sends the autograd_context_id and autograd_message_id over to the remote host.
6. When this request is received on the remote host, we use the autograd_context_id and autograd_message_id to look up the appropriate send function.
7. If this is the first time a worker has received a request for the given autograd_context_id, it will compute dependencies locally as described in points 1-3 above.
The send function retrieved in 6 is then enqueued for execution on the local autograd engine for that worker. Finally, instead of accumulating the gradients on the .grad field of the Tensor, we accumulate the gradients separately per Distributed Autograd Context. The gradients are stored in a Dict[Tensor, Tensor], which is basically a map from Tensor to its associated gradient, and this map can be retrieved using the get_gradients() API.

As an example, the complete code with distributed autograd would be as follows (the body of the forward pass was clipped in the source and is restored here from the surrounding walkthrough, which describes t1, t2, a remote my_add, a local mul producing t5, and loss = t5.sum()):

import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc

def my_add(t1, t2):
    return torch.add(t1, t2)

# On worker 0:

# Setup the autograd context. Computations that take
# part in the distributed backward pass must be within
# the distributed autograd context manager.
with dist_autograd.context() as context_id:
    t1 = torch.rand((3, 3), requires_grad=True)
    t2 = torch.rand((3, 3), requires_grad=True)

    # Perform some computation remotely.
    t3 = rpc.rpc_sync("worker1", my_add, args=(t1, t2))

    # Perform some computation locally based on the remote result.
    t4 = torch.rand((3, 3), requires_grad=True)
    t5 = torch.mul(t3, t4)

    # Compute some loss.
    loss = t5.sum()

    # Run the backward pass.
    dist_autograd.backward(context_id, [loss])

    # Retrieve the gradients from the context.
    dist_autograd.get_gradients(context_id)

The distributed autograd graph with dependencies would be as follows (t5.sum() excluded for simplicity):

The FAST mode algorithm applied to the above example would be as follows:

- On Worker 0 we start from the roots loss and send1 to compute dependencies. As a result, send1 is marked with a dependency of 1 and mul on Worker 0 is marked with a dependency of 1.
- Now, we kick off the local autograd engine on Worker 0. We first execute the mul function and accumulate its output in the autograd context as the gradient for t4. Then, we execute recv2, which sends the gradients to Worker 1.
- Since this is the first time Worker 1 has heard about this backward pass, it starts dependency computation and marks the dependencies for send2, add and recv1 appropriately.
- Next, we enqueue send2 on the local autograd engine of Worker 1, which in turn executes add and recv1. When recv1 is executed, it sends the gradients over to Worker 0.
Since Worker 0 has already computed dependencies for this backward pass, it just enqueues and executes send1 locally. Finally, gradients for t1, t2 and t4 are accumulated in the Distributed Autograd Context.

Distributed Optimizer

The DistributedOptimizer operates as follows:

- Takes a list of remote parameters (RRef) to optimize. These could also be local parameters wrapped within a local RRef.
- Takes an Optimizer class as the local optimizer to run on all distinct RRef owners.
- The distributed optimizer creates an instance of the local Optimizer on each of the worker nodes and holds an RRef to them.
- When torch.distributed.optim.DistributedOptimizer.step() is invoked, the distributed optimizer uses RPC to remotely execute all the local optimizers on the appropriate remote workers. A distributed autograd context_id must be provided as input to torch.distributed.optim.DistributedOptimizer.step(). This is used by local optimizers to apply gradients stored in the corresponding context.
- If multiple concurrent distributed optimizers are updating the same parameters on a worker, these updates are serialized via a lock.

Simple end to end example

Putting it all together, the following is a simple end to end example using distributed autograd and the distributed optimizer. If the code is placed into a file called "dist_autograd_simple.py", it can be run with the command MASTER_ADDR="localhost" MASTER_PORT=29500 python dist_autograd_simple.py:

import torch
import torch.multiprocessing as mp
import torch.distributed.autograd as dist_autograd
from torch.distributed import rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer

def random_tensor():
    return torch.rand((3, 3), requires_grad=True)

def _run_process(rank, dst_rank, world_size):
    name = "worker{}".format(rank)
    dst_name = "worker{}".format(dst_rank)

    # Initialize RPC.
    rpc.init_rpc(
        name=name,
        rank=rank,
        world_size=world_size
    )

    # Use a distributed autograd context.
    with dist_autograd.context() as context_id:
        # Forward pass (create references on remote nodes).
        rref1 = rpc.remote(dst_name, random_tensor)
        rref2 = rpc.remote(dst_name, random_tensor)
        loss = rref1.to_here() + rref2.to_here()

        # Backward pass (run distributed autograd).
        dist_autograd.backward(context_id, [loss.sum()])

        # Build DistributedOptimizer.
        dist_optim = DistributedOptimizer(
            optim.SGD,
            [rref1, rref2],
            lr=0.05,
        )

        # Run the distributed optimizer step.
        dist_optim.step(context_id)

def run_process(rank, world_size):
    dst_rank = (rank + 1) % world_size
    _run_process(rank, dst_rank, world_size)
    rpc.shutdown()

if __name__ == '__main__':
    # Run world_size workers
    world_size = 2
    mp.spawn(run_process, args=(world_size,), nprocs=world_size)
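The per-context gradient accumulation described earlier (gradients stored in a per-context map rather than on .grad) can be mimicked with a small toy class. This is a plain-Python illustration, not PyTorch's implementation; the class and tensor names are made up:

```python
from collections import defaultdict

class ToyAutogradContext:
    """Toy stand-in: each backward pass accumulates gradients in its
    own context-local map instead of on a shared .grad field."""
    def __init__(self, context_id):
        self.context_id = context_id
        self._grads = defaultdict(float)

    def accumulate_grad(self, tensor_name, grad):
        # Partial gradients for the same tensor within one pass add up.
        self._grads[tensor_name] += grad

    def get_gradients(self):
        return dict(self._grads)

# Two concurrent backward passes touch the same tensor "t1"
# without clobbering each other:
ctx_a, ctx_b = ToyAutogradContext(1), ToyAutogradContext(2)
ctx_a.accumulate_grad("t1", 0.5)
ctx_a.accumulate_grad("t1", 0.5)   # second partial grad, same pass
ctx_b.accumulate_grad("t1", 2.0)
print(ctx_a.get_gradients(), ctx_b.get_gradients())
# → {'t1': 1.0} {'t1': 2.0}
```

Had both passes written to a single .grad field instead, the 1.0 and 2.0 would have been summed together before the optimizer could run, which is exactly the interference the real context is designed to avoid.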
https://pytorch.org/docs/master/rpc/distributed_autograd.html
CC-MAIN-2021-21
en
refinedweb
How do I make Python wait for a pressed key?

On Windows, you can use msvcrt:

import msvcrt as m

def wait():
    m.getch()

This should wait for a keypress.

Additional info:
- In Python 3 raw_input() does not exist.
- In Python 2 input(prompt) is equivalent to eval(raw_input(prompt)).

Alternatively, use the third-party keyboard module:

import keyboard
keyboard.wait()
https://www.edureka.co/community/102175/how-do-i-make-python-wait-for-a-pressed-key
How to make a chain of function decorators

How can I make two decorators in Python that would do the following?

@makebold
@makeitalic
def say():
    return "Hello"

...which should return:

"<b><i>Hello</i></b>"

I'm not trying to make HTML this way in a real application - just trying to understand how decorators and decorator chaining work.

Hello @kartik, hope this works!! Here is what you asked for:

from functools ...READ MORE
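The accepted answer is cut off after `from functools`. A conventional completion (one of many possible ones, presumably what the truncated answer continued with) uses functools.wraps so the decorated function keeps its name and docstring:

```python
from functools import wraps

def makebold(fn):
    @wraps(fn)  # preserve fn.__name__, fn.__doc__, etc.
    def wrapper(*args, **kwargs):
        return "<b>" + fn(*args, **kwargs) + "</b>"
    return wrapper

def makeitalic(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return "<i>" + fn(*args, **kwargs) + "</i>"
    return wrapper

@makebold
@makeitalic
def say():
    return "Hello"

print(say())  # → <b><i>Hello</i></b>
```

Decorators apply bottom-up: `@makeitalic` wraps `say` first, then `@makebold` wraps the result, so the `<b>` tags end up outermost.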
https://www.edureka.co/community/66620/how-to-make-a-chain-of-function-decorators
Published by Jacob Crowley, modified over 7 years ago

1. FpML Versioning - An AWG Discussion Document

2. Namespace URIs & Versions
An XML parser locates the schema for a document based on its namespace URI. To be backwards or forwards compatible, the namespace used to reference the schema MUST be the same across several versions
- Otherwise you MUST edit the instance to change the namespace before reprocessing
You cannot use resolution to map several namespaces to the same schema
- The parser detects that the namespace used within the schema is different from the one being referenced
- Neither can you resolve to a chameleon schema with no target namespace

3. Versioning in FpML To Date
Based on major.minor numbering
- Major increments to indicate a breaking change
- Minor increments with each non-breaking change
Instance documents can only be processed against the specific schema they reference
- Must be edited to use another
Numbering rules have not been rigorously implemented
- 4.1 EQD model incompatible with 4.0
- 4.3 CD model technically incompatible with 4.2

4. Technical vs. Marketing Numbering
The standards committee prefers to minimise major number changes
- Makes it seem that differences between all releases are minor - historically not true
- Must read the release notes to discover the details of the changes
Goes against FpML rules of operation

5. Why Change?
To implement document backwards compatibility
- Compatible schemas must share a common namespace
Most FpML releases are structurally backwards compatible BUT documents must be altered before processing
- Change would eliminate the need for document alterations
- Can reduce the maintenance required to update implementations

6. Build Numbers
Can be considered as an additional part of the version number
- e.g. Major.Minor.Build
Useful to implementers using working drafts
No changes to build numbers are considered

8. Option 1: Continue Ad-hoc Numbering
Continue with the current system for 5 and later
Pros:
- More control over version numbering
Cons:
- Not possible to implement backwards compatibility
- Versions give no indication of technical difference since the last release

9. Option 2: Rigorous Two-Part Numbering
Implement versioning rigorously according to the FpML operating rules
- Current FpML 4.3 should be 6.1!
Pros:
- Version numbers reflect technical difference and ease of implementation
- Allows backwards compatibility through a namespace URI containing only the major number
Cons:
- Major version numbers would increment more frequently

10. Option 3: Two Versions
Each schema given two identifiers
- A marketing name used in publications, e.g. FpML 5-A
- A technical major.minor number in the schema
Pros:
- Marketing identifier changes less frequently than the technical version
Cons:
- Not easy to determine compatibility from the marketing name
- Not easy to map between marketing and technical numbering
- A namespace based on the marketing name would not support backwards compatibility

11. Option 4: Three-Part Version Number
Version number becomes
- Architecture.Major.Minor (e.g. 5.0.0, 5.0.1, 5.1.0)
Schema namespace URI predictably derived from the version number
- Architecture.Major (e.g. 5.0, 5.1)
Pros:
- Leading digit remains consistent across many releases
- Supports document backwards compatibility
Cons:
- Architecture number out of synchronisation with documentation. Come up with a better name?
- Changes the version attribute
- Version is not a number - may affect some implementations

12. Option 5: Three Separate Values
The version number in Option 4 is not a syntactically valid number
- Has two decimal points
Could use three different attributes
Derive the namespace from the Architecture and Major numbers

13. Option 6: Backwards Compatibility Attribute
Define additional attributes to define the minimum compatible version number
[AJ] How does this help define a series of compatible namespace URLs?

14. Observations
Implementers need to see how similar or different releases are
- The current scheme disguises the amount of change
Backwards compatibility can only be achieved if version numbers are technically defined
- Breaking changes MUST change the URI

15. Other Thoughts
Should we have a standard root element attribute that defines schema status?
- status=wd or tr or rec
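Option 4's derivation rule ("schema namespace URI predictably derived from the version number, using only Architecture.Major") can be sketched in a few lines. The base URI below is purely illustrative, not an actual FpML namespace:

```python
def namespace_uri(version, base="http://example.org/fpml"):
    """Derive a schema namespace from an Architecture.Major.Minor
    version string by dropping the Minor part (Option 4's rule).
    `base` is a hypothetical URI prefix for illustration only."""
    architecture, major, _minor = version.split(".")
    return f"{base}/{architecture}.{major}"

# Non-breaking (Minor) releases share a namespace; a breaking
# (Major) release gets a new one:
print(namespace_uri("5.0.0"))  # → http://example.org/fpml/5.0
print(namespace_uri("5.0.1"))  # → http://example.org/fpml/5.0
print(namespace_uri("5.1.0"))  # → http://example.org/fpml/5.1
```

This is exactly why the scheme supports document backwards compatibility: 5.0.0 and 5.0.1 instances resolve to the same namespace, while the breaking 5.1.0 change is forced to a new URI.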
https://slideplayer.com/slide/708283/
AttributeError: 'Plot_OldSync' object has no attribute 'mpyplot'

Planet Winter:

Hi everyone, I am new to backtrader and the forum. After looking at several platforms I came back to backtrader. I am not a professional programmer. However I attempted to import csv data and plot it, which fails with AttributeError: 'Plot_OldSync' object has no attribute 'mpyplot'. Can someone point me to the right direction? pudb debugging didn't get me further.

EDIT: setting todate and fromdate to the same values for both data feeds does not help.

Thanks in advance, Daniel

import datetime
import backtrader as bt
import backtrader.feeds as btfeeds
import os
import sys

class GdaxCSVLHOC(btfeeds.GenericCSVData):
    """
    bundle loading of gdax CSV files
    pass dataname as parameter with the name of the CSV file
    """
    params = (
        # reversed=True takes into account that the CSV data has already been reversed
        # and has the standard expected date ascending order
        ('reverse', False),
        ('headers', True),
        ('separator', ","),
        # defaults to min
        #fromdate=datetime.datetime(2000, 1, 1),
        # defaults to max
        #todate=datetime.datetime(2000, 12, 31),
        ('nullvalue', 0.0),
        # Format used to parse the datetime CSV field
        ('dtformat', '%Y-%m-%d %H:%M:%S'),
        # Format used to parse the time CSV field if "present"
        #('tmformat', )
        # Fields order
        ('datetime', 0),
        ('high', 2),
        ('low', 1),
        ('open', 3),
        ('close', 4),
        ('volume', 5),
        ('openinterest', -1)
    )

if __name__ == '__main__':
    # Create a cerebro entity
    cerebro = bt.Cerebro()

    # Data is in an extra folder
    modpath = os.path.dirname(os.path.abspath(sys.argv[0]))
    datapath = os.path.join(modpath, '..', 'data')

    eth_eur = GdaxCSVLHOC(dataname=os.path.join(datapath, 'ETH-EUR_gdax.csv'),
                          # Do not pass values before this date
                          fromdate=datetime.datetime(2017, 6, 3),
                          # Do not pass values after this date
                          todate=datetime.datetime(2017, 8, 10)
                          )

    eth_usd = GdaxCSVLHOC(dataname=os.path.join(datapath, 'ETH-USD_gdax.csv'),
                          # Do not pass values before this date
                          fromdate=datetime.datetime(2017, 6, 3),
                          # Do not pass values after this date
                          todate=datetime.datetime(2017, 8, 10)
                          )

    cerebro.adddata(eth_eur)
    cerebro.adddata(eth_usd)

    cerebro.run()
    cerebro.plot()

I am running this in an up to date virtualenv. Backtrace:

(venv) [noisy:~/devel/planet-trader]: python strategies/bt_strat_gdax_csv.py
Traceback (most recent call last):
  File "strategies/bt_strat_gdax_csv.py", line 76, in <module>
    cerebro.plot()
  File "/home/daniel/devel/planet-trader/venv/lib/python2.7/site-packages/backtrader/cerebro.py", line 943, in plot
    plotter.show()
  File "/home/daniel/devel/planet-trader/venv/lib/python2.7/site-packages/backtrader/plot/plot.py", line 777, in show
    self.mpyplot.show()
AttributeError: 'Plot_OldSync' object has no attribute 'mpyplot'

Python 2.7.13

my pip packages:
backtrader==1.9.54.122
certifi==2017.7.27.1
chardet==3.0.4
cycler==0.10.0
DateTime==4.2
functools32==3.2.3.post2
idna==2.5
matplotlib==2.0.2
numpy==1.13.0
pyparsing==2.2.0
python-dateutil==2.6.0
pytz==2017.2
requests==2.18.3
six==1.10.0
subprocess32==3.2.7
urllib3==1.22
zope.interface==4.4.2

dasch:

Hi, try to update to a recent version of backtrader, or try to add a strategy. The plot method will exit without a strategy before creating the mpyplot object, which is what the show method later tries to use. HTH

Planet Winter:

Hi dasch, thanks! I did a pip install --upgrade backtrader in my virtualenv, which helped! I was sure I had already done that, but apparently not. This now gives me an IndexError; I will look into that. Best, Daniel

backtrader administrators:

Notwithstanding that it is unclear if you added a strategy, the addition of the data feeds is missing the specifics about timeframe and compression. Cryptocurrency traders apparently tend to look at timeframes smaller than 1-day, which is the default when adding a data feed. See: Community - FAQ
https://community.backtrader.com/topic/569/attributeerror-plot_oldsync-object-has-no-attribute-mpyplot
Problem

You need to present floating-point output in a well-defined format, either for the sake of precision (scientific versus fixed-point notation) or simply to line up decimal points vertically for easier reading.

Solution

Use the standard manipulators provided in <iostream> and <iomanip> to control the format of floating-point values that are written to the stream. There are too many combinations of ways to cover here, but Example 10-3 offers a few different ways to display the value of pi.

Example 10-3. Formatting pi

#include <iostream>
#include <iomanip>

using namespace std;

int main( ) {

   ios_base::fmtflags flags =  // Save old flags
      cout.flags( );

   double pi = 3.14285714;

   cout << "pi = " << setprecision(5)  // Normal (default) mode; only
        << pi << '\n';                 // show 5 digits, including both
                                       // sides of decimal point.

   cout << "pi = " << fixed            // Fixed-point mode;
        << showpos                     // show a "+" for positive nums,
        << setprecision(3)             // show 3 digits to the *right*
        << pi << '\n';                 // of the decimal.

   cout << "pi = " << scientific       // Scientific mode;
        << noshowpos                   // don't show plus sign anymore
        << pi * 1000 << '\n';

   cout.flags(flags);  // Set the flags to the way they were
}

This will produce the following output:

pi = 3.1429
pi = +3.143
pi = 3.143e+003

Discussion

Manipulators that specifically manipulate floating-point output divide into two categories. There are those that set the format, which, for the purposes of this recipe, set the general appearance of floating-point and integer values, and there are those that fine-tune the display of each format. The formats are as follows:

Normal (the default)
In this format, the number of digits displayed is fixed (with a default of six) and the decimal is displayed such that only a set number of digits are displayed at one time. So, by default, pi would be displayed as 3.14286, and pi times 100 would display 314.286.

Fixed
In this format, the number of digits displayed to the right of the decimal point is fixed, while the number of those displayed to the left is not.
In this case, again with a default precision of six, pi would be displayed as 3.142857, and pi times 100 would be 314.285714. In both cases, the number of digits displayed to the right of the decimal point is six, while the total number of digits can grow indefinitely.

Scientific
The value is shown as a single digit, followed by a decimal point, followed by a number of digits determined by the precision setting, followed by the letter "e" and the power of ten to raise the preceding value to. In this case, pi times 1,000 would display as 3.142857e+003.

Table 10-2 shows all manipulators that affect floating-point output (and sometimes numeric output in general). See Table 10-1 for general manipulators you can use together with the floating-point manipulators.

In all three formats, all manipulators have the same effects except setprecision. In the default mode, "precision" refers to the number of digits on both sides of the decimal point. For example, to display pi in the default format with a precision of 2, do this:

cout << "pi = " << setprecision(2) << pi << '\n';

Your output will look like this:

pi = 3.1

By comparison, consider if you want to display pi in fixed-point format instead:

cout << "pi = " << fixed << setprecision(2) << pi << '\n';

Now the output will look like this:

pi = 3.14

This is because, in fixed-point format, the precision refers to the number of digits to the right of the decimal point. If we multiply pi by 1,000 in the same format, the number of digits to the right of the decimal remains unchanged:

cout << "pi = " << fixed << setprecision(2) << pi * 1000 << '\n';

produces:

pi = 3142.86

This is nice, because you can set your precision, set your field width with setw, right-justify your output with right (see Recipe 10.1), and your decimal points will all be lined up vertically.
Since a manipulator is just a convenient way of setting a format flag on the stream, remember that the settings stick around until you undo them or until the stream is destroyed. Save the format flags (see Example 10-3) before you start making changes, and restore them when you are done.

See Also

Recipe 10.3
https://flylib.com/books/en/2.131.1/formatting_floating_point_output.html
Introduction

This tutorial takes you through a step by step method to set up your Dash Button hardware from RAK Wireless to work with the Node-RED software. In this tutorial we will use our OpenHAB setup to control a bunch of appliances via the on-board buttons on the RAK Dash Button.

Dash button hardware:

Here is a glimpse of the Dash Button from RAK Wireless. The RAK Wireless Dash Button consists of the following:

1) A RAK473 Ameba SDK based wifi module
2) 4 physical buttons
3) 4 RGB LEDs

The button also provides a JTAG interface for programming, although you can program this with a RAK Creator/Creator Pro base board (without the on-board RAK473 module).

Node Red:

According to its website, Node-RED is a programming tool for wiring together hardware devices, APIs and online services. Node-RED is based on Node.js and is a versatile GUI based tool to define your workflows for connecting physical hardware to web services/software operations. It is completely browser based and provides tons of expandability options like creating custom nodes, importing nodes by other developers etc. We will be looking into more on that in just a second. First let's get our base setup, shall we :)

Setup your OpenHAB environment.

To set up your OpenHAB environment and connect your wifi-enabled smart switches (Sonoff, Belkin Wemo etc.), please follow the extensive tutorial here:

By the end of the tutorial you will have your wifi enabled switch connected to a central server. What's more, you can even connect the switches to Alexa. Take the project for a spin and let me know your thoughts....on with the setup --->>>

Setup your node-red server

We will install node-red on the same Raspberry Pi where we installed the OpenHAB server, though this is not a necessary step.
You can have both of them run on completely different machines (PCs or Raspberry Pis); just make sure that in the places where I caution about the service IP or URL, you make the necessary change to use the IP of the node-red or OpenHAB server, as appropriate.

There are two ways to get started with Node-RED on a Raspberry Pi:
- use the version preinstalled in the full Raspbian image since November 2015,
- or a manual install using an install script.

You can then start using the editor. For the Raspbian pre-installed image please follow the tutorial here:

I'll cover the manual method below for better clarity and flexibility in setup. While installing manually, I would recommend using a more recent version of Node.js such as v8.x. The simplest way to install Node.js and the other dependencies is:

sudo apt-get install build-essential python-rpi.gpio
bash <(curl -sL

Accessing GPIO

Starting Node-RED

The start command also sets the memory available to Node-RED to 256MB by default. If you do want to change that, the file you need to edit (as sudo) is /lib/systemd/system/nodered.service

Adding Autostart capability using SystemD

Once you install, point your browser (PC or device) to the IP of the Raspberry Pi and port 1880 to access the node-red UI like below:

Now that you have node-red also set up, let's dig into the Node-red IDE.

Understanding Node-red operation:

Node-red is all about defining a flow of your actions as boxes and connecting lines. Each box, or specifically a 'Node', is a block of JS code that runs in the background to achieve some output based on some input. Nodes ideally have one input and one output. Nodes can also have no input at all, like the timer node that we will see shortly. Such nodes just generate events based on some internal operations. In node-red a node is 'triggered' based on its input or internal operations. Once a node is triggered, it can send what is called a message to the next node it connects to in its output.
Messages are nothing but JavaScript JSON objects that can be sent to other nodes as input objects for further processing. This way each node corresponds to one step in a chain of operations that act on the original input data and return processed data after doing their operations.

Nodes:

From version 0.15 you can install nodes directly using the editor. To do this, select Manage Palette from the menu (top right), and then select the install tab in the palette. You can now search for new nodes to install, update, and enable and disable existing nodes.

Installing npm packaged nodes

To install an npm-packaged node, you can install it locally within your user data directory (by default, $HOME/.node-red):

cd $HOME/.node-red
npm install <npm-package-name>

You will then need to stop and restart Node-RED for it to pick up the new nodes.

Installing individual node files

During development it is also possible to install nodes by copying their .js and .html files into a nodes directory within your user data directory. If these nodes have any npm dependencies, they must also be installed within the user data directory. This is only really recommended for development purposes.

Adding the openhab2 node

OpenHAB provides a convenient node for controlling OpenHAB Items via the node-red IDE. Go to the Palette and search for OpenHab2 and install it. Make sure you install the OpenHab2 version if you installed OpenHAB2 (which is the recommended version). The node will look like this:

You would have to scroll down to the bottom of the left pane to see these nodes. Nodes are arranged alphabetically.
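The node/message model described above — each node receives a message object, transforms it, and hands it to the next node — can be mimicked in a few lines of plain Python. This is not Node-RED's runtime (which is JavaScript); the node names are made up for illustration:

```python
# Each "node" is just a function that takes a msg dict and returns a
# (possibly transformed) msg dict, like a Node-RED message payload.
def inject_node(msg):
    msg["payload"] = "togglelights"   # seed the flow with an event
    return msg

def uppercase_node(msg):
    msg["payload"] = msg["payload"].upper()   # transform the payload
    return msg

def debug_node(msg):
    print(msg["payload"])             # like the sidebar debug node
    return msg

def run_flow(nodes, msg=None):
    """Wire nodes in sequence: each node's output message becomes
    the next node's input message."""
    msg = msg or {}
    for node in nodes:
        msg = node(msg)
    return msg

result = run_flow([inject_node, uppercase_node, debug_node])
# prints TOGGLELIGHTS
```

Each function only sees the message handed to it, which is why Node-RED flows compose so freely: any node that accepts a msg object can be dropped into the middle of a chain.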
We will be concerned with a flow like this:

DASH BUTTON pressed ----> HTTP GET REQUEST ---> OPENHAB ---> SWITCH OFF LIGHT 1

This part:

HTTP GET REQUEST ---> OPENHAB ---> SWITCH OFF LIGHT 1

would be taken care of by node-red, as it defines a flow of actions which can be explained as: "When we get an incoming HTTP GET request to node-red, access the OpenHAB server and switch off a connected appliance."

This part:

DASH BUTTON pressed

will be programmed onto the RAK Dash button. That will be covered shortly. Sooooo now lets design our workflow.

Design the workflow:

For our workflow we need two main nodes:
- HTTP node
- OpenHAB2 node

The http node will handle the incoming GET request and will pass on the request to the openhab2 node, which will toggle the lights based on the request parameters given to the http node.

Drag an http node to the work area and an openhab2 output node as well. Your work area would look like this:

Connect them. Connect the output grey square of the http node to the input grey square of the openhab2 out node. Like so:

Now this shows that an http GET request would in turn trigger an openhab2 based output request to a connected device.

Lets attach a dummy response object to the http request node. This is mandatory, as any HTTP request that a client makes should result in an HTTP response. Like so:

Now, to make sure you're able to debug messages coming from the HTTP request object, add a debug node to the HTTP request node's output like so:

Now your test flow is almost complete. We just need to configure the nodes:

1) HTTP lights node (http request node)

Every request node needs to declare what kind of HTTP action it can support. Lets say our HTTP request object can handle the HTTP GET operation. Double click the http node and it will show a menu like so:
Lets see what this means: The url section tells the node-red flow that this node can accept an HTTP GET request @ http://<ip of node-red server>:<port of server>/togglelights url. Neat huh. with a few nodes, you have officially created a small HTTP service !!! Now lets configure the openhab node. Double click on the node. It will show up like so: Here again give a name to the node. But now you see a config called Controller with a pencil icon in it. This is called a configuration node. These nodes save a global configuration object related to the service you want to activate/trigger from your node. In our case it will be our OpenHAB2 server configuration. Click on the pencil icon. It will show a menu like so: Give a name to your configuration so that you can access it in other openhab2 controller nodes. Set the protocol to http. In the host section. Either give the OPENHAB2 server IP or a host name if you have DNS setup. in port set 8080 or the port you assigned in your OpenHab2 setup config file. Click update. You will return to the previous node config menu, but this time, if your openhab config was right. You will be able to see your devices in the Item drop down like so: Now You can select the switch you want to toggle when you press the DASH button. in the Topic section select ItemCommand as you want to send a command to the Item (ON, OFF etc). And in the PAYLOAD section put ON or OFF or TOGGLE based on the requirement. ON will turn the appliance on, OFF with switch off and TOGGLE will toggle on/off. Lets choose TOGGLE for now. Now deploy this flow by clicking on deploy on the far top right of the tool. The flow is now deployed. Note: If node-red complains about you mis-configuring any of the above nodes. Please review your node configuration and set them up as explained above. You shouldn't get an error during deployment to ensure proper workflow deployment Now to test this flow. 
Try to send a GET request to the URL like shown below: this should show up as a request in the Debug panel in the right hand side panel. and you should see your appliance switching on/off on successive requests Note: For sending/testing REST services. I would recommend the POSTMAN tool Setting up you Dashbutton firmware: Here is the fun part. Now you need to setup your dash button hardware to Try and call the above URL to control your OpenHAB lights. We will see how quickly this seemingly trivial flow can be enhanced to include more interesting stuff. Without much ado, setup you RAM Ameba SDK based Arduino IDE by following the instruction below: - Download the Arduino IDE.see the link - Download the CREATOR Arduino library. Arduino IDE 1.6.5 support third party hardware by providing hardware configuration. You need add CREATOR's configuration in "File" -> "Preferences". And fill below URL in "Additional Boards Manager URLs:" - - I also suggest to enable "Show verbose output" options on "compilation" and "upload" in Preference. - Open "Device Manager" in "Tools" -> "Board" -> "Board Manager". Wait for IDE update core configuration. Scroll down the menu, you will see RAK CREATOR in the list. Press "Install" at the right side. - Select CREATOR in "Tools" -> "Board" -> "CREATOR RTL8711". Now you are able to develop Arduino code and upload image onto CREATOR. 
Now upload the program below to your board:

#include <HttpClient.h>
#include <WiFi.h>
#include <WiFiClient.h>

#define LED1 0
#define LED2 1
#define LED3 2
#define LED4 3

#define RED 0
#define GREEN 1
#define BLUE 2
#define OFF 3

void printWifiStatus();
void led_off();
void http_get();
void led_ctrl(uint8_t led_num, uint8_t rgb);

char ssid[] = "RAK_2.4GHz_1";    // your network SSID (name)
char pass[] = "rakwireless205";  // your network password (use for WPA, or use as key for WEP)
int keyIndex = 0;                // your network key Index number (needed only for WEP)

// Name of the server we want to connect to
const char kHostname[] = "sonoff_switch_ip_here:port";
const char kPath[] = "/togglelights";
const int kHttpPort = 80;
// Number of milliseconds to wait without receiving any data before we give up
const int kNetworkTimeout = 30 * 1000;
// Number of milliseconds to wait if no data is available before trying again
const int kNetworkDelay = 1000;

int status = WL_IDLE_STATUS;

/* power enable */
int pwr_en = 15;

/* leds */
int led1_r = 25;
int led1_g = 24;
int led1_b = 19;
int led2_r = 0;
int led2_g = 2;
int led2_b = 6;
int led4_r = 12;
int led4_g = 11;
int led4_b = 13;
int led3_r = 22;
int led3_g = 21;
int led3_b = 1;

/* keys */
int key1 = 23;
int key2 = 14;
int key3 = 10;
int key4 = 20;

void setup() {
  Serial.begin(9600);
  pinMode(pwr_en, OUTPUT);
  digitalWrite(pwr_en, 1);
  pinMode(led2_r, OUTPUT);
  pinMode(led2_g, OUTPUT);
  pinMode(led2_b, OUTPUT);
  pinMode(led1_b, OUTPUT);
  pinMode(led3_r, OUTPUT);
  pinMode(led3_g, OUTPUT);
  pinMode(led3_b, OUTPUT);
  pinMode(led4_b, OUTPUT);
  pinMode(key2, INPUT_PULLUP);
  pinMode(key3, INPUT_PULLUP);
  pinMode(key4, INPUT_PULLUP);
  led_off();
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);
    // wait 10 seconds for connection:
    delay(10000);
  }
  Serial.println("Connected to wifi");
  printWifiStatus();
}

void loop() {
  led_off();
  checkButtonStatus(key1);
  delay(100);
}

void checkButtonStatus(int key) {
  if (digitalRead(key) == 0) {
    delay(50);  // debounce
    if (digitalRead(key) == 0) {
      led_ctrl(LED4, BLUE);
      delay(500);
      Serial.print(key);
      Serial.println(" was pressed");
      http_get();
    }
  }
}
");
  led_ctrl(LED1, RED);
  led_ctrl(LED2, RED);
  led_ctrl(LED3, RED);
  led_ctrl(LED4, RED);
  delay(500);
}

void led_off() {
  led_ctrl(LED1, OFF);
  led_ctrl(LED2, OFF);
  led_ctrl(LED3, OFF);
  led_ctrl(LED4, OFF);
}

void http_get() {
  int err = 0;
  WiFiClient c;
  HttpClient http(c);
  err = http.get(kHostname, kHttpPort, kPath);
  if (err == 0) {
    Serial.println("startedRequest ok");
    err = http.responseStatusCode();
    if (err >= 0) {
      Serial.print("Got status code: ");
      Serial.println(err);
    } else {
      Serial.print("Getting response failed: ");
      Serial.println(err);
    }
  } else {
    Serial.print("Connect failed: ");
    Serial.println(err);
  }
  http.stop();
}

void http_post(char* data) {
  int err = 0;
  WiFiClient client;
  while (1) {
    if (!client.connect("xxxxxxx", 8080)) {  // port can be 8080 or as configured in the openhab config
      Serial.println("Connect to server failed. Retry after 1s.");
      client.stop();
      delay(1000);
      continue;
    } else {
      break;
    }
  }
  Serial.println("connected to server");
  // Make a HTTP request:
  client.print("POST /rest/items/My_Item/");
  client.print("OFF");
  client.print("Host: ");
  client.println("xxxxxxx");  // your openhab url in the form abc.xyz.com
  client.println();
  while (!client.available())
    delay(100);
  while (client.available()) {
    char c = client.read();
    Serial.write(c);
  }
  client.stop();
}

void led_ctrl(uint8_t led_num, uint8_t rgb) {
  switch (led_num) {
    case LED1:
      if (rgb == RED)        { digitalWrite(led1_r, 0); digitalWrite(led1_g, 1); digitalWrite(led1_b, 1); }
      else if (rgb == GREEN) { digitalWrite(led1_r, 1); digitalWrite(led1_g, 0); digitalWrite(led1_b, 1); }
      else if (rgb == BLUE)  { digitalWrite(led1_r, 1); digitalWrite(led1_g, 1); digitalWrite(led1_b, 0); }
      else if (rgb == OFF)   { digitalWrite(led1_r, 1); digitalWrite(led1_g, 1); digitalWrite(led1_b, 1); }
      break;
    case LED2:
      if (rgb == RED)        { digitalWrite(led2_r, 0); digitalWrite(led2_g, 1); digitalWrite(led2_b, 1); }
      else if (rgb == GREEN) { digitalWrite(led2_r, 1); digitalWrite(led2_g, 0); digitalWrite(led2_b, 1); }
      else if (rgb == BLUE)  { digitalWrite(led2_r, 1); digitalWrite(led2_g, 1); digitalWrite(led2_b, 0); }
      else if (rgb == OFF)   { digitalWrite(led2_r, 1); digitalWrite(led2_g, 1); digitalWrite(led2_b, 1); }
      break;
    case LED3:
      if (rgb == RED)        { digitalWrite(led3_r, 0); digitalWrite(led3_g, 1); digitalWrite(led3_b, 1); }
      else if (rgb == GREEN) { digitalWrite(led3_r, 1); digitalWrite(led3_g, 0); digitalWrite(led3_b, 1); }
      else if (rgb == BLUE)  { digitalWrite(led3_r, 1); digitalWrite(led3_g, 1); digitalWrite(led3_b, 0); }
      else if (rgb == OFF)   { digitalWrite(led3_r, 1); digitalWrite(led3_g, 1); digitalWrite(led3_b, 1); }
      break;
    case LED4:
      if (rgb == RED)        { digitalWrite(led4_r, 0); digitalWrite(led4_g, 1); digitalWrite(led4_b, 1); }
      else if (rgb == GREEN) { digitalWrite(led4_r, 1); digitalWrite(led4_g, 0); digitalWrite(led4_b, 1); }
      else if (rgb == BLUE) {
digitalWrite(led4_r, 1); digitalWrite(led4_g, 1); digitalWrite(led4_b, 0); } else if (rgb == OFF) { digitalWrite(led4_r, 1); digitalWrite(led4_g, 1); digitalWrite(led4_b, 1); } break; default: break; } } A short explanation of the important parts: 1) Setup the buttons as digital input pins. These are defined like so: Now that you have setup key1. We proceed to define what happens when you press Key1. This happens in the checkButtonStatus function void checkButtonStatus(int key){ if (digitalRead(key) == 0) { delay(50); if (digitalRead(key) == 0) { led_ctrl(LED4,BLUE); delay(500); Serial.print(key + " was pressed"); http_get(); } } } Here we check if the button is digital HIGH, trigger a HTTP GET function. Here we use the following function: err = http.get(kHostname, kHttpPort, kPath); where kHostName and KhttpPort are your node red host and port and kPath is nothing but /togglelights. Now you should be all setup. Upload this code to your Ameba dash button. Once uploaded, click the button 1 on the Dash Button. This will trigger the node-red workflow you defined, and send a HTTP request object to trigger the OpenHab node. The OpenHab node will in turn hit your OpenHab2 server with the HTTP POST request to the item you specified in you configuration toggle it. NEATTTTTT !!!!! For some server modifications: Now you may want to do the following small changes based on your setup: - Node-red and Openhab on different servers. If your OpenHab and node-red are on same box you can specify the OpenHab IP in node-red configuration node as 127.0.0.1. But if they are on a different IP all together, make sure you specify the IP of DNS resolved name correctly in the configuration node. - Making your setup secure: Its good to make sure your Node-red and Openhab 2 setups are behind https endpoints and not plain http as this is more secure. 
For this, I would advise putting node-red and openhab2 behind an nginx/apache webserver proxy setup: Openhab: Node-red: When you put OpenHab behind an https endpoint, make sure you change the protocol in the OpenHab configuration node to HTTPS.

- Securing access by adding Basic auth: You can put your openhab2 server behind a Basic auth setup as explained here:

Workflow modification: Since Node-red provides drag-drop-draw functionality, you can literally change the workflow in any way you want. Just draw your flow !!! For eg: here is a simple tutorial to add twitter access to your node: After creating the twitter config, just connect your twitter node to your http request like so: Now whenever you click the button, you send out a tweet as well as switch your appliance off/on. COOOL !!!! There are tonnes of other integrations available via plugins in Node-red. You can mix and match these integrations to your heart's desire and share your nodes as well.

Sharing your nodes/workflows: When you have finished your workflow and want to share it with your buddies, it is quite easy. Select your node flow in its entirety by clicking and dragging the mouse to form a selection of your flow. Then in the top right menu, select Export -> Clipboard.
This will give you a json like so:

[{"id":"55809f53.af20a","type":"openhab2-out","z":"7a5d96b3.1b2978","name":"ToggleOHLights","controller":"94b7105b.cc478","itemname":"Switch1","topic":"ItemCommand","payload":"TOGGLE","x":654,"y":874,"wires":[]},{"id":"fbf368e2.404b38","type":"http in","z":"7a5d96b3.1b2978","name":"toggle lights","url":"/togglelights","method":"get","upload":false,"swaggerDoc":"","x":416,"y":883,"wires":[["3493d584.f5ae0a","55809f53.af20a","ad2cd216.6c8ff","34da0c09.bcac54"]]},{"id":"3493d584.f5ae0a","type":"debug","z":"7a5d96b3.1b2978","name":"","active":true,"console":"false","complete":"payload","x":624,"y":789,"wires":[]},{"id":"ad2cd216.6c8ff","type":"http response","z":"7a5d96b3.1b2978","name":"","statusCode":"","headers":{},"x":639,"y":934,"wires":[]},{"id":"34da0c09.bcac54","type":"twitter out","z":"7a5d96b3.1b2978","name":"Tweet","x":692,"y":1001,"wires":[]},{"id":"94b7105b.cc478","type":"openhab2-controller","z":"","name":"MyOpenHab","protocol":"http","host":"ttn-gateway.local","port":"8080","path":"","username":"","password":""}]

Your buddies can then import this json using the Settings -> Import menu.

Creating your own nodes: For the hard-core maker, Node-red provides a way to create your own nodes via the Node-red APIs. Check a sample tutorial here:

API reference for Node-red: Node-red provides a clean set of HTTP APIs for the following components: 1) Admin 2) Runtime 3) Storage. For details please visit:

For the impatient: And for the frantic maker in you, the Node-red Cookbook provides prebuilt nodes/workflows for you to download and use with a few changes:
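The clipboard export is plain JSON, so you can inspect or post-process a flow outside the editor. A small Python sketch, using a trimmed-down two-node flow in the same shape as the export above (ids and names here are made up), lists each node and where its wires lead:

```python
import json

# A trimmed-down Node-RED flow export with the same shape as the
# clipboard JSON above (ids and names here are made up).
flow_json = """[
  {"id": "a1", "type": "http in",      "name": "toggle lights",  "wires": [["b2"]]},
  {"id": "b2", "type": "openhab2-out", "name": "ToggleOHLights", "wires": []}
]"""

nodes = json.loads(flow_json)
by_id = {n["id"]: n for n in nodes}  # index by id to resolve wire targets

for node in nodes:
    targets = [by_id[t]["name"] for wire in node["wires"] for t in wire]
    print(f'{node["type"]} "{node["name"]}" -> {targets}')
# http in "toggle lights" -> ['ToggleOHLights']
# openhab2-out "ToggleOHLights" -> []
```

The same loop works on a full export; every node carries `id`, `type`, and `wires`, so only the fields you print need to exist.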
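As a side note, the POST that the flow ultimately fires at OpenHab's REST items endpoint (the request the sketch's http_post function assembles by hand) can be sketched in Python. The host and item names below are placeholders, not real endpoints:

```python
# Sketch of the raw HTTP POST that switches an OpenHab item via its REST
# API. "openhab.local" and "My_Item" are placeholders, not real endpoints.

def build_openhab_post(host: str, item: str, command: str) -> str:
    """Build the request the dash-button flow ultimately triggers."""
    return (
        f"POST /rest/items/{item} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: text/plain\r\n"
        f"Content-Length: {len(command)}\r\n"
        "\r\n"
        f"{command}"
    )

request = build_openhab_post("openhab.local", "My_Item", "OFF")
print(request.splitlines()[0])  # POST /rest/items/My_Item HTTP/1.1
```

Writing the request out once like this makes it easy to check the header/body framing before burning it into firmware.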
https://www.hackster.io/naresh-krish/node-red-integration-with-rak-wireless-dash-button-4d2efe
CC-MAIN-2019-18
en
refinedweb
Programming Errors

Find answers and solutions to various general programming errors you encounter while developing applications using .NET and other technologies. You may share your knowledge on common errors and their solutions here.

Articles

Unable to build Unit Test project due to build error in referencing dependent project
How to resolve a build error in a Visual Studio 2013 Unit Test project that references a dependent project. Error: The type or namespace name 'xxx' could not be found (are you missing a using directive or an assembly reference?) Warning: the primary reference "xxxx\bin\Debug\xyz.dll" could not be resolved because it was built against the ".NETFramework, Version=v4.5.1" framework. This is a higher version than the currently targeted framework ".NETFramework, Version=v4.5".

Aspx web page error on Page Postback
When a web page posts back a large amount of form data, an error occurs on the server side because ASP.NET limits the number of form keys in an HTTP request to 1000 by default and rejects requests that exceed that limit. To accept more, the limit has to be raised manually in web.config; see the "Resolution" section for the web.config change.

Microsoft JScript runtime error: 'Sys' is undefined
This error is raised when working with partial page rendering in an Ajax-enabled website, when the Sys namespace is called before the JavaScript is loaded.
Attempt by security transparent method 'System.Web.Http.GlobalConfiguration.get_Configuration()'
How to resolve the error: Attempt by security transparent method 'System.Web.Http.GlobalConfiguration.get_Configuration()' to access security critical type 'System.Web.Http.HttpConfiguration' failed.

Error: Could not load type 'WebApplication1.WebForm2'.
In this article we will see how to resolve the following error when trying to browse a web page in Visual Studio 2012: Error: Could not load type 'WebApplication1.WebForm2'. We will also see the difference between CodeFile and CodeBehind.

Error: Invalid code syntax for BindItem
In this article we will see how to resolve the error "Invalid code syntax for BindItem" when binding a DataBound control such as a FormView in a web page. I got this error when binding a FormView to its data source.

Routing does not work in MVC4
In this article we will explore the updates and configuration required on the Windows 7 operating system in case routing does not work (is not supported) in your MVC4 application.

Error when deploying a Windows Service: An exception occurred during the Install phase.
Error when deploying a Windows Service: An exception occurred during the Install phase. System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security. at System.Diagnostics.EventLog.FindSourceRegistration(String source, String machineName, Boolean readOnly)

Common Oracle Errors and Solutions
Hello Friends, many times we come across Oracle-related errors which are very annoying, and we spend hours or days finding a solution. I have aggregated solutions to commonly occurring problems for an Oracle developer, which are as follows:
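For the Page Postback error above, the fix referred to — raising the 1000-form-key limit — is an appSettings entry in web.config. The value 2000 here is just an example ceiling; raise it only as far as your page actually needs:

```xml
<configuration>
  <appSettings>
    <!-- ASP.NET rejects requests with more form keys than this (default 1000). -->
    <add key="aspnet:MaxHttpCollectionKeys" value="2000" />
  </appSettings>
</configuration>
```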
http://www.dotnetspider.com/resources/Category5018-Programming-Errors.aspx
Question: I need to check if a certain property exists within a class. Please refer to the LINQ query in question. For the life of me I cannot make the compiler happy.

class Program {
    static void Main(string[] args) {
        ModuleManager m = new ModuleManager();
        IModule module = m.FindModuleForView(typeof(HomeView));
        Console.WriteLine(module.GetType().ToString());
        Console.ReadLine();
    }
}

public class ModuleManager {
    [ImportMany]
    public IEnumerable<Lazy<IModule>> Modules { get; set; }

    [ImportMany]
    public IEnumerable<Lazy<View>> Views { get; set; }

    public ModuleManager() {
        // The MEF composition code was garbled in the original post;
        // a typical setup looks like this:
        var catalog = new AssemblyCatalog(typeof(ModuleManager).Assembly);
        _container = new CompositionContainer(catalog);
        _container.ComposeParts(this);
    }

    public IModule FindModuleForView(Type view) {
        // THIS IS THE PROBLEM
        var module = from m in Modules
                     where (
                         from p in m.Value.GetType().GetProperties()
                         where p.GetType().Equals(view)
                         select p
                     )
                     select m;
    }

    public CompositionContainer _container { get; set; }
}

public interface IModule { }

[Export]
public class HomeModule : IModule {
    public HomeModule() { }

    [Export]
    public HomeView MyHomeView {
        get { return new HomeView(); }
        set { }
    }
}

public class HomeView : View { }

public class View { }

Solution 1:

The inner query should return bool, but returns PropertyInfo. I haven't tested this, but I think you want something like:

var module = (from m in Modules
              where m.Value.GetType().GetProperties()
                     .Select(p => p.PropertyType).Contains(view)
              select m).FirstOrDefault();

Edit: Incorporating the Enumerable.Any suggestion from another answer:

var module = (from m in Modules
              where m.Value.GetType().GetProperties()
                     .Any(p => p.PropertyType.Equals(view))
              select m).FirstOrDefault();

Solution 2:

GetProperties() returns an array of PropertyInfo objects. Your call to p.GetType() is always going to return typeof(PropertyInfo) - never the "view" type you've passed in. You probably want this instead:

from p in m.Value.GetType().GetProperties()
where p.PropertyType.Equals(view)
select p

Edit: As Robert pointed out, your logic to determine if the above query returns any properties is also wrong.
An easy way around that is to see if anything came back from the subquery:

var module = from m in Modules
             where (
                 from p in m.Value.GetType().GetProperties()
                 where p.PropertyType.Equals(view)
                 select p
             ).Any()
             select m;

Keep in mind that that query might return more than one module. You will probably want to return .First() from the results.

Solution 3:

The where keyword expects a predicate that returns a boolean condition, but you are providing a subquery that returns an IEnumerable. Can you rework your subquery so that it returns an actual boolean condition? You can convert it to a boolean result by using the FirstOrDefault() extension method, which will return null if there are no records. So this should work (untested):

where (
    from p in m.Value.GetType().GetProperties()
    where p.PropertyType.Equals(view)
    select p
).FirstOrDefault() != null

Solution 4:

Even if you can get your query working, I don't think this is a good way to link your model with your view. I'd recommend creating a new question with more detail about what you are trying to do (and why), asking how you can create the association/link between the model and the view.
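For readers who want to play with the same reflection idea outside C#, here is a rough Python analogue of the fix above — find the first "module" class exposing an attribute of the requested view type. The class and function names are illustrative, not from the question:

```python
# Rough Python analogue of the C# fix: find the first module class with
# an annotated attribute of the requested view type. Names here are
# illustrative, not part of the original question.

class HomeView: ...
class OtherView: ...

class HomeModule:
    my_home_view: HomeView

class SettingsModule:
    main_view: OtherView

def find_module_for_view(modules, view_type):
    """Return the first class exposing an attribute annotated as view_type."""
    return next(
        (m for m in modules
         if view_type in getattr(m, "__annotations__", {}).values()),
        None,
    )

print(find_module_for_view([SettingsModule, HomeModule], HomeView).__name__)
# HomeModule
```

Like the `.Any()` version in C#, the generator stops at the first match instead of materializing every property list.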
http://www.toontricks.com/2019/04/tutorial-c-help-with-linq.html
A library for Dart developers. Created from templates made available by Stagehand under a BSD-style license.

A simple usage example:

import 'package:prompter/prompter.dart';

main() {
  var awesome = new Awesome();
}

Please file feature requests and bugs at the issue tracker.

Add this to your package's pubspec.yaml file:

dependencies:
  prompter_mrm:

and import it with:

import 'package:prompter_mrm/prompter_mrm.dart';

We analyzed this package on Apr 4, 2019, and provided a score, details, and suggestions below. Detected platforms: Flutter, other. Primary library: package:prompter_mrm/prompter_mrm.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
https://pub.dartlang.org/packages/prompter_mrm
Microcontroller Programming » Trouble with my first program! help

Hello, I just wanted some help with step 10c of the NerdKits guide. I got the kit today and started building it; everything went fine, and I even got the "Congratulations" screen right off the bat. Now I am trying the next step, loading a new program (the initialload). When I turn the switch on so that I can program the chip, I get black marks across the 1st and 3rd rows of the LCD screen, which I already looked up on the forums, and it's apparently normal when the switch is on. The USB cable is connected and my MacBook seems to recognize it (since I already downloaded the driver). I opened "makefile" in TextEdit and changed the "AVRDUDEFLAGS" to the port as instructed. Then from Terminal, inside the "initialload" folder, I typed "make" and got this:

DAVID-ALVAREZs-MacBook:initialload davidsantiagoalvarez$ make
make -C ../libnerdkits
make[1]: Entering directory `/Users/davidsantiagoalvarez/Documents/Nerdkits/Code/libnerdkits'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/Users/davidsantiagoalvarez/Documents/Nerdkits/Code/libnerdkits'
-//cclO7kVC.o: In function `main':
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:29: undefined reference to `lcd_init'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:30: undefined reference to `lcd_home'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:35: undefined reference to `lcd_line_one'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:36: undefined reference to `lcd_write_string'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:37: undefined reference to `lcd_line_two'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:38: undefined reference to `lcd_write_string'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:39: undefined reference to `lcd_line_three'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:40: undefined reference to `lcd_write_string'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:41: undefined reference to `lcd_line_four'
/Users/davidsantiagoalvarez/documents/nerdkits/code/initialload/initialload.c:42: undefined reference to `lcd_write_string'
make: *** [initialload.hex] Error 1

Can someone walk me through this problem? Note that when I unplug the USB, turn the programming switch off, and re-boot the circuit on the breadboard, I get the initial congratulations message saying that my hardware is OK.

It sounds like your include command is not right. Do you have the following line in your code?

#include "../libnerdkits/lcd.h"

I'm not sure what is causing your problem. There seems to be an issue with avr-gcc creating the object file from the initialload.c source.
I've seen similar errors when includes weren't in the program file, but the original initialload file should compile fine if the computer and toolchain are configured correctly. It appears you maintained the original code download folder structure, which is good. However, I don't know if there is a special place the files have to be for a Mac to compile them (I'm a Windows guy), or if there is something else going on. We have a few Mac users here, one I know is quite regular (Ralph), I'm sure one of them will pop in to help out. Rick

Hi Nerd_notyet, I am (once again) working on that exact same problem at the moment. Next to the "not a butterfly" question, "undefined reference to `lcd_init'" must be the second most asked question. In fact you can just Google "lcd_init" and spend a year reading the responses. I say once again because I have had this problem in the past, but I cannot remember the solution. This is not a Mac thing. I think Humberto usually jumps in and gives the answer. Wouldn't it be nice to have a FAQ? Ralph

"Undefined reference" is a linker error message. It means the linker can't find the missing functions in any of the object files supplied. Either you are not telling the linker where lcd.o is, or lcd.o is not there. I don't see an lcd.o on your avr-gcc command line. Usually it is passed via the LINKOBJECTS variable in the makefile:

LINKOBJECTS=../libnerdkits/delay.o ../libnerdkits/lcd.o ../libnerdkits/uart.o

which is used on the avr-gcc command line:

avr-gcc ${GCCFLAGS} ${LINKFLAGS} -o initialload.o initialload.c ${LINKOBJECTS}

and then lcd.o must exist in ../libnerdkits/

Hello again, I just fixed this problem and don't know exactly how. All I did was download the sample code again from the NerdKits website. I do not know why, but there were 2 files missing from the "initialload" folder that I initially downloaded (initialload.hex and initialload.o). I don't know how they were not in the initial download. Thanks everyone for the quick response.

Hello again!
I just went on to set up my temperature sensor and did everything (installed the temp sensor on the breadboard, changed the /dev/ file), but when I go to execute the program this is what I get (btw, can you guys break down the steps for writing the code a little more? I am not sure if indentation is critical while you write the code, and if it is, can I get some pointers?):

DAVID-ALVAREZs-MacBook:tempsensor davidsantiagoalvarez$ make
done. Thank you.
DAVID-ALVAREZs-MacBook:tempsensor davidsantiagoalvarez$

From the looks of it, the computer is communicating with the chip OK, but after this, when I unplug the USB and 9V battery and reboot, all that happens is that I get the initial message from my previous program. Am I missing a step? Note: I am able to go back, change the message in "initialload", and upload it to the chip without any problem.

Why do you have "-U flash:w:tempsensor.hex:a" in LINKOBJECTS and LINKOBJECTS on the avrdude command line? Here is what your makefile should look like:

I am simply looking up the "tempsensor" file and executing it. The file is the one I downloaded from the NerdKits site and I have not touched it.

Noter, that is the exact same code that I have. I just erased all files containing the code that I downloaded from the site and downloaded it again, and guess what, the initialload file is missing the .hex and .o files. I tried executing it again and it gave me the initial problem that I was having. Also, in the "tempsensor" folder both the .hex and .o are missing. Is this what's causing all my problems? And why is this happening?

The .hex and the .o files are created when the C source is compiled and linked, so they should be created when you run the makefile. So, it's OK if they are missing from the download. If you have the correct makefile, then I wonder why that line in your output looks the way it does, because it looks like things are mixed up. Maybe a Mac thing? I don't know, I've never had a Mac. Here's what I get from running the makefile.
You can see the "-U flash:w:tempsensor.hex:a" is on the avrdude command line, but without the LINKOBJECTS stuff. The "-U flash:w:tempsensor.hex:a" is what tells avrdude to upload the program to the chip.

> "make.exe" all
make -C ../libnerdkits
make[1]: Entering directory `C:/Documents and Settings/Administrator/My Documents/NerdKits/Sample Source Code/Code/libnerdkits'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `C:/Documents and Settings/Administrator/My Documents/NerdKits/Sample Source Code/Code/libnerdkits'
avr-objcopy -j .text -O ihex tempsensor.o tempsensor.hex
avrdude -c avr109 -p m328p -b 115200 -P COM2 -U flash:w:tempsensor.hex:a

Connecting to programmer: .
Found programmer: Id = "AVR ISP"; type = S
Software Version = 3.0; Hardware Version = 0.1
Programmer supports auto addr increment.
Programmer supports buffered memory access with buffersize=128 bytes.
Programmer supports the following devices:
Device code: 0x03

"tempsensor.hex"
avrdude: input file tempsensor.hex auto detected as Intel Hex
avrdude: writing flash (10306 bytes):
Writing | ################################################## | 100% 2.78s
avrdude: 10306 bytes of flash written
avrdude: verifying flash memory against tempsensor.hex:
avrdude: load data flash data from input file tempsensor.hex:
avrdude: input file tempsensor.hex auto detected as Intel Hex
avrdude: input file tempsensor.hex contains 10306 bytes
avrdude: reading on-chip flash data:
Reading | ################################################## | 100% 2.47s
avrdude: verifying ...
avrdude: 10306 bytes of flash verified

avrdude done. Thank you.

> Process Exit Code: 0
> Time Taken: 00:07

OK, for some reason I just executed "tempsensor" and it ran like you showed in the last post. Then on my LCD I got "ADCR 178 of 1024" on the first row, and just the letter "F" on the second row. The 178 number apparently changes with temperature (but is this how it's supposed to be displayed on the screen?).
Also, I just tried to program the previous "initialload" again, just to get a hand at doing it, and it gave me this:

DAVID-ALVAREZs-MacBook:initialload davidsantiagoalvarez$ make
make -C ../libnerdkits
make[1]: Nothing to be done for `all'.
-//ccUa0QaO.o: In function `main':
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:29: undefined reference to `lcd_init'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:30: undefined reference to `lcd_home'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:35: undefined reference to `lcd_line_one'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:36: undefined reference to `lcd_write_string'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:37: undefined reference to `lcd_line_two'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:38: undefined reference to `lcd_write_string'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:39: undefined reference to `lcd_line_three'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:40: undefined reference to `lcd_write_string'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:41: undefined reference to `lcd_line_four'
/Users/davidsantiagoalvarez/desktop/code/initialload/initialload.c:42: undefined reference to `lcd_write_string'
make: *** [initialload.hex] Error 1

What can be causing all this? And how do I fix it? Thank you.

Deja vu, same as the first time. Re-read the answer I gave above and fix the makefile. The only difference between the makefiles for these projects would be the program name.

Hi Nerd_notyet, let's take a step back here and make sure everything looks good in your Makefile before digging deeper to find the issue. Please post your Makefile for the tempsensor program you are trying to upload (and go ahead and post your initialload Makefile too while you are at it).
Like Noter spotted earlier, it seems that your LINKOBJECTS line is either showing up in weird places or not showing up at all. That is strange, and leads me to believe you either have a corrupted Makefile (unlikely, since you downloaded twice) or you are altering it when you edit the Makefile. Which text editor are you using to edit your Makefile? Humberto

This is the makefile for initialload:

and this one for tempsensor:

The only thing that I am doing after downloading this code from the website is going into it to change /dev/ and nothing more. I am using TextEdit on my MacBook to do the editing. Thank you.

Nerd_notyet, what does your libnerdkits folder look like? What files are there?

Reboot!!

You should probably use Xcode instead of TextEdit. Xcode is a real IDE, so it gives you colored syntax, which is nice. It is kind of a pain to get used to, but once you do, it is nice.

I suspect Ralph may be onto something here. TextEdit might be adding rich text format things that are confusing make. You can try Xcode as an editor like Ralph suggested. I'm not a Mac person, but I have heard plenty of people love TextMate for coding; they have a 30 day free trial. TextEdit will work, but it is like Notepad in Windows and defaults to doing stupid things to the end of line. At least with TextEdit you can override the default behavior. I compared your initialload Makefile to mine and did not see anything different, so possibly it is something TextEdit is doing.

When I try to download the Sample Source Code I get a message that says it cannot be opened because the associated helper application does not exist. It says, "change the association in your preferences". How do I do that?

I have an iMac and I use a free program that you can get from the App Store called TextWrangler; also, I am using Xcode and it works just like Ralph says. Hope this can help.
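Tying this back to Noter's LINKOBJECTS explanation above, the whole build boils down to a stripped-down Makefile like the sketch below (the flags and the -mmcu value are assumptions — use the ones from your kit's makefile, not these). Dropping lcd.o from LINKOBJECTS is exactly what reproduces the "undefined reference to `lcd_init'" errors:

```make
# Minimal sketch of a NerdKits-style build, not the shipped makefile.
# GCCFLAGS and the -mmcu value are assumptions; use the ones from your kit.
GCCFLAGS    = -g -Os -Wall -mmcu=atmega328p
LINKOBJECTS = ../libnerdkits/delay.o ../libnerdkits/lcd.o ../libnerdkits/uart.o

initialload.hex: initialload.o
	avr-objcopy -j .text -O ihex initialload.o initialload.hex

# Removing lcd.o from LINKOBJECTS on this line reproduces
# "undefined reference to `lcd_init'".
initialload.o: initialload.c
	avr-gcc $(GCCFLAGS) -o initialload.o initialload.c $(LINKOBJECTS)
```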
Tinkerer - "When I try to download the Sample Source Code I get a message that says it cannot be opened because the associated helper application does not exist." That sounds like your browser doesn't know what to run to open the file. Instead, right-click and choose Save As (or Save Link As), save the file wherever you want (desktop or wherever), and then go double-click on it or open it with the proper application. BM

Yes. I finally got it to download. Thanks.

I am still having problems with initialload. After messing around with the files and with the Makefile, I thought that I had it. The DOS routine looked good: it erased the old program, wrote the new flash data, read the flash data, verified it, and wrote "avrdude done. Thank you." The problem is that the LCD only displays two lines of black squares. No letters or numbers. The position of the switch doesn't matter. It worked before, so there probably is no problem with the wiring or the hardware. I pulled out all the LCD wires and rewired it, hoping that there was a loose connection, but it still does the same thing. Is there some way that I can diagnose the LCD?

Hi Tinkerer, if you got the avrdude done message, then the new program definitely made it onto the chip. Did you make sure to reset power to the chip after you flipped the switch out of programming mode? You might want to remove the switch entirely and then reset power to make absolutely sure you are not booting into programming mode.

I tried removing the switch and resetting the power, but I still get two lines of squares on the LCD. I tried using a really big 9 volt battery from a toy machine gun to eliminate any possibility of a battery-related problem. The potential across the rails is exactly 5 volts. I switched all of the LCD leads to different holes in the breadboard. Page 30 of the Guide tells what the LCD pins do.
Would it do any good to check the voltage on some of these pins?

Tinkerer, make sure that you turn the programming switch to "run mode" — that is, with the switch not connecting ground to pin 14 — and reset the power after you load the program onto the chip, to get the program to display on the LCD and run on the NerdKit. Hope this helps. -Dan

Dan, the switch is removed, so I know that I am in run mode. Are you suggesting that I go back to program mode with the switch connecting pin 14 to ground, then reinstall the initialload program and try again to go into run mode? Tinkerer

I am still stuck. Let me summarize.
1. The original setup went well and I got the Congratulations message.
2. I got the avrdude done message, which means that the new program made it into the chip.
3. When I flipped the switch out of programming mode and reset the power, I got two lines of black squares on the LCD.
4. I replaced the small 9V battery with a much larger one and verified that the potential across the rails is exactly 5 volts.
5. I removed the switch entirely to make sure I was not rebooting into programming mode. Still no change.
6. I rewired everything and still got the two lines of squares.
7. I replaced the switch, flipped it up, and tried to reprogram the chip. Now I get the following message: "make is not recognized as an internal or external command, operable program or batch file". For some reason it won't compile even though it did before.

Any suggestions?

1st question - I don't think I saw you state: are you running Windows or Mac?
2) Can you take a couple of overhead photos showing your wiring and post them? (Instructions for adding photos are below in yellow where it says Supported Markup.) Extra eyes can often see something we overlook. You also stated you rewired after your congratulations message, so there is a possibility of something there.

Thanks for the reply. I am using Windows Vista. Let's see if I can send the pictures.
I noticed some of your wires either aren't pushed in or are not in all the way. Double check the connections. It looks like they are going to the correct places from what I see, other than the one (LCD pin 6) that doesn't look connected. As for your software issue, uninstall WinAVR, then right-click the install program and select install as administrator. You should see your make issue go away, and hopefully it'll work again. Rick.

Your advice on how to solve the software problem worked. After reinstalling WinAVR, I was able to reprogram the chip. It went through the whole routine and gave the avrdude done message. However, when I switched back to run mode I still get only two lines of blanks on the LCD. It's hard to tell if the chip failed to run the program or if it did run the program but the LCD failed to display the result. LCD pin #6 was reconnected and all the wires are pushed all the way in. The same thing happens when the switch is removed. Any ideas? Al

Try building for and uploading the blink program. That way you can verify the program is loading and running. That will at least narrow it down to the LCD and its connections. Rick.

I followed your recommendations but did not get the blinking light when I switched to run mode and rebooted. The blink program uploaded with no problem and I got the avrdude done message. I am sure that the LED is installed in the correct place with the correct polarity. The LED itself is OK, since it lights up when placed across the rails. We have to conclude that the chip won't run for some reason. It's strange, since it ran fine when I first ran the welcome message. I guess the LCD is OK.

What's odd is that if it is uploading the program, it is working. The only way AVRDUDE will upload is if the chip is communicating with it. There has to be something going awry after the program is loaded. Do me a favor and try to program the chip with the programming switch in the wrong (run) position, just to see what happens.
The odds of your chip being able to program itself (which we know it can, since you get the avrdude done message) and then not succeeding in running its program are extremely unlikely. Follow Rick_S's suggestion, and also post one more picture of your current setup; perhaps there is something weird now that we missed earlier.

Rick: You are onto something. The chip uploads the program in both positions of the switch, so we know that it is always in programming mode. Looking at page 33 of the Guide, we see that this should only happen when pin 14 (row 24) is grounded. With the switch removed there is nothing left to ground pin 14, and yet it stays in programming mode. It is tempting to say that the board itself grounds row 24 because of some flaw, or that the chip has some kind of wiring defect, but if either one of these flaws existed, the chip would not have been successful in running the welcome message when the setup was first assembled. It worked in run mode at that time but it doesn't now. What do you think?

Humberto: I am attaching two more pictures of the current setup which emphasize the area near pin 14. The switch is removed for clarity. I hope that this helps.

Tinkerer, strip everything off the breadboard and start over. That has been my solution for persistent unexplained problems; you'll be amazed how rapidly you can re-assemble the breadboard after your fourth or fifth try. Sorry Rick, I couldn't resist using your icon, hope I didn't commit a copyright violation :-) Ralph

Ralph: OK, I'll strip the board. However, there is something I would like to try first. It would be interesting to wire 5 volts onto pin 14 to see if the chip will run. Is this safe, or will I blow the chip?

Does it load the program with the switch removed altogether? Maybe you have a bad switch. I would think that more likely than a bad MCU or breadboard. There is definitely something odd going on there, though.
Ralph: All of those smileys I use are either free off the web or came bundled with forum software. It doesn't bother me if you use them. I made up a text file so I can just copy/paste them in when I want.

Rick: Yes. I knew it would, but I tried it again just to be sure. I asked Ralph if it would be safe to put 5 volts on pin 14 to see if the chip would run, but he hasn't answered yet. I have not tried this yet because I am afraid of blowing the chip.

I wouldn't do that. Try moving the chip up to a different row so that row is no longer used. Or, if you can, try a different breadboard.

Rick: I have been busy. First I completely stripped the board and the LCD and reassembled everything. Same result: the chip reprograms initialload in any position of the switch (or with the switch removed) but will not run the program. I then bought a similar board at Radio Shack and reassembled everything for the third time. Same result. It just has to be the chip. I just ordered a replacement chip from your store and look forward to its delivery.

Not my store; I don't work here or receive any compensation. I'm just another hobbyist like you who tends to hang out here way more than my wife likes. I do think at this point you are correct and that it has to be the MCU. Makes me wonder if the bootloader code somehow glitched or if somehow that leg got electrically damaged. Have you sent an e-mail to support at nerd kits dot com to tell them directly about this? I don't know how long you've had your kit, but they are very good about making sure the product is good for you. Their support is excellent.

Rick: Wow, all this time I thought I was communicating with a NerdKits person. It's nice of you to help neophytes like me. My wife doesn't complain about my hobbies; I think she is glad to be rid of me. Anyway, I figured that I have nothing to lose by putting 5 volts on pin 14 since I have already ordered a new chip. Presto. The chip runs the program.
I repositioned the switch so that it grounds pin 14 in one position and puts 5 volts on it in the other position. Now I can switch from program mode to run mode at will. When the new chip arrives, I think I'll just set it aside, thinking, "If it ain't broke, don't fix it." As far as making a claim with NerdKits, I'll just leave everything alone. It's not worth the trouble. Besides, the chip worked fine when I first assembled the circuit. I might have done some dumb thing that damaged it.

I had a chip where I damaged one of the outputs by accidentally shorting it to 12V. It did the same as you are describing, except I was using it for an output and the internal pullup no longer worked. So I used an external one and it worked fine. So while the chips are pretty durable, we can do things that make them malfunction. I'm glad you figured out a workaround. And thanks for the compliments. I can't think of many things better than being considered one of the NK crew.
http://www.nerdkits.com/forum/thread/1533/
From: Gennadiy Rozental (gennadiy.rozental_at_[hidden])
Date: 2002-12-09 01:07:51

Hi, everybody. I started to work on Boost.Test issues and feature requests. Today I committed the first post-release revision. Here is the cumulative list of changes:

* Facility for automatic registration of unit tests is introduced. It was requested during the original Boost.Test review, and it now supports automatic registration for free-function-based test cases. Here is an example:

#include <boost/test/auto_unit_test.hpp>

BOOST_AUTO_UNIT_TEST( test1 )
{
    BOOST_CHECK( true );
}

BOOST_AUTO_UNIT_TEST( test2 )
{
    BOOST_CHECK( true );
}

* Library file names changed to:

boost_prg_exec_monitor
boost_test_exec_monitor
boost_unit_test_framework

* Added building of dynamic libraries into the Jamfile. Unfortunately it does not work as expected on the Windows platform. I would greatly appreciate any input. See:

* Catch system errors switch introduced. This will work along the lines described in:

Unfortunately nobody replied, so I presume that everybody agrees. Anyway, I will be expecting user responses on how well this is working.

Environment variable name: BOOST_TEST_CATCH_SYSTEM_ERRORS[="no"]
UTF cla: --catch_system_errors[="no"]

* MS C runtime debug hooks introduced. This allows me to catch _ASSERT-based assertions for MSVC.

* Switched to csignal and csetjmp in execution_monitor.cpp.

* SIGABRT catch added.

* Switched to using the typedef c_string_literal instead of char const*, and c_string_literal() instead of NULL. Eliminated NULLs all over the place; differing definitions of the NULL symbol were causing small problems for some compilers.

* class wrapstrstream separated into a standalone file and renamed to wrap_stringstream. For now it will be located in test/detail.
Once I prepare a doc page for it, I will present it for addition into utility.

* unit_test_result_saver introduced to properly manage reset_current_test_set calls in case of exceptions.

* Switched back to using scoped_ptr instead of a raw test_suite pointer in unit_test_main.cpp.

* BOOST_CPP_MAIN_CONFIRMATION renamed to BOOST_PRG_MAN_CONFIRM, and its logic changed a bit. It now should have the value "no" to turn off pass confirmation.

* Added tests for the auto unit test facility and for catching assert statements.

* Jamfile added into the examples directory.

* Added example input for unit_test_example5.

* Other minor code/doc fixes.

Let me know if you have any problems.

Gennadiy.

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2002/12/40829.php
1 from django.shortcuts import render
2 from projects.models import Project
3 
4 def project_index(request):
5     projects = Project.objects.all()
6     context = {
7         'projects': projects
8     }
9     return render(request, 'project_index.html', context)

There's quite a lot going on in this code block, so let's break it down. To hook the view up, the app's urls.py starts with the following imports:

from django.urls import path
from . import views

Share Your Knowledge With a Blog

A blog is a great addition to any personal portfolio site. Whether you update it monthly or weekly, it's a great place to share your knowledge as you learn. In this section, you're going to build a fully functioning blog that will allow you to perform the following tasks:

- Create, update, and delete blog posts
- Display posts to the user as either an index view or a detail view
- Assign categories to posts
- Allow users to comment on posts

You'll also learn how to use the Django Admin interface, which is where you'll create, update, and delete posts and categories as necessary.

Before you get into building out the functionality of this part of your site, create a new Django app named blog. Don't delete projects. You'll want both apps in your Django project:

$ python manage.py startapp blog

This may start to feel familiar to you, as it's your third time doing this. Don't forget to add blog to your INSTALLED_APPS in personal_porfolio/settings.py:

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "projects",
    "blog",
]

Hold off on hooking up the URLs for now. As with the projects app, you'll start by adding your models.

Blog App: Models

The models.py file in this app is much more complicated than in the projects app. You're going to need three separate database tables for the blog:

- Post
- Category
- Comment

These tables need to be related to one another.
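Before defining the models, it can help to picture those relationships in plain Python. The dict-based layout below is an assumption for illustration only, not how Django stores data:

```python
# Toy stand-ins for the three tables; Django's ORM will manage the real links.
posts = {1: {"title": "First post", "categories": {"python", "django"}}}
comments = {10: {"post_id": 1, "author": "Ada", "body": "Nice post!"}}

# Many-to-many: a post can carry several categories, and a category
# can appear on several posts.
posts[1]["categories"].add("tutorials")

# One-to-many: each comment points at exactly one post, so finding a
# post's comments is a simple filter.
comments_for_post_1 = [c for c in comments.values() if c["post_id"] == 1]
print([c["author"] for c in comments_for_post_1])
```

Django's ManyToManyField and ForeignKey fields express exactly these two shapes, as you'll see next.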
This is made easier because Django models come with fields specifically for this purpose. Below is the code for the Category and Post models:

 1 from django.db import models
 2 
 3 class Category(models.Model):
 4     name = models.CharField(max_length=20)
 5 
 6 class Post(models.Model):
 7     title = models.CharField(max_length=255)
 8     body = models.TextField()
 9     created_on = models.DateTimeField(auto_now_add=True)
10     last_modified = models.DateTimeField(auto_now=True)
11     categories = models.ManyToManyField('Category', related_name='posts')

The Category model is very simple. All that's needed is a single CharField in which we store the name of the category.

The title and body fields on the Post model are the same field types as you used in the Project model. We only need a CharField for the title, as we only want a short string for the post title. The body needs to be a long-form piece of text, so we use a TextField.

The next two fields, created_on and last_modified, are Django DateTimeFields. These store a datetime object containing the date and time when the post was created and modified, respectively. On line 9, the DateTimeField takes the argument auto_now_add=True. This assigns the current date and time to this field whenever an instance of this class is created. On line 10, the DateTimeField takes the argument auto_now=True. This assigns the current date and time to this field whenever an instance of this class is saved. That means whenever you edit an instance of this class, last_modified is updated.

The final field, on line 11, is the most interesting. We want to link our models for categories and posts in such a way that many categories can be assigned to many posts. Luckily, Django makes this easier for us by providing a ManyToManyField field type. This field links the Post and Category models and allows us to create a relationship between the two tables.

The ManyToManyField takes two arguments.
The first is the model the relationship is with, in this case Category. The second allows us to access the relationship from a Category object, even though we haven't added a field there. By adding a related_name of posts, we can access category.posts to give us a list of posts with that category.

The third and final model we need to add is Comment. We'll use another relationship field, similar to the ManyToManyField that relates Post and Category. However, we only want the relationship to go one way: one post should have many comments. You'll see how this works after we define the Comment class:

16 class Comment(models.Model):
17     author = models.CharField(max_length=60)
18     body = models.TextField()
19     created_on = models.DateTimeField(auto_now_add=True)
20     post = models.ForeignKey('Post', on_delete=models.CASCADE)

The first three fields on this model should look familiar. There's an author field for users to add a name or alias, a body field for the body of the comment, and a created_on field that is identical to the created_on field on the Post model.

On line 20, we use another relational field, the ForeignKey field. This is similar to the ManyToManyField but instead defines a many-to-one relationship. The reasoning behind this is that many comments can be assigned to one post, but you can't have a comment that corresponds to many posts.

The ForeignKey field takes two arguments. The first is the other model in the relationship, in this case, Post. The second tells Django what to do when a post is deleted. If a post is deleted, then we don't want the comments related to it hanging around. We therefore want to delete them as well, so we add the argument on_delete=models.CASCADE.

Once you've created the models, you can create the migration files with makemigrations:

$ python manage.py makemigrations blog

The final step is to migrate the tables. This time, don't add the app-specific flag.
Later on, you’ll need the User model that Django creates for you: $ python manage.py migrate Now that you’ve created the models, we can start to add some posts and categories. You won’t be doing this from the command line as you did with the projects, as typing out a whole blog post into the command line would be unpleasant to say the least! Instead, you’ll learn how to use the Django Admin, which will allow you to create instances of your model classes in a nice web interface. Don’t forget that you can check out the source code for this section on GitHub before moving onto the next section. Blog App: Django Admin The Django Admin is a fantastic tool and one of the great benefits of using Django. As you’re the only person who’s going to be writing blog posts and creating categories, there’s no need to create a user interface to do so. On the other hand, you don’t want to have to write blog posts in the command line. This is where the admin comes in. It allows you to create, update, and delete instances of your model classes and provides a nice interface for doing so. Before you can access the admin, you need to add yourself as a superuser. This is why, in the previous section, you applied migrations project-wide as opposed to just for the app. Django comes with built-in user models and a user management system that will allow you to login to the admin. To start off, you can add yourself as superuser using the following command: $ python manage.py createsuperuser You’ll then be prompted to enter a username followed by your email address and password. Once you’ve entered the required details, you’ll be notified that the superuser has been created. Don’t worry if you make a mistake since you can just start again: Username (leave blank to use 'jasmine'): jfiner Email address: jfiner@example.com Password: Password (again): Superuser created successfully. Navigate to localhost:8000/admin and log in with the credentials you just used to create a superuse. 
You’ll see a page similar to the one below: The User and Groups models should appear, but you’ll notice that there’s no reference to the models you’ve created yourself. That’s because you need to register them inside the admin. In the blog directory, open the file admin.py and type the following lines of code: 1 from django.contrib import admin 2 from blog.models import Post, Category 3 4 class PostAdmin(admin.ModelAdmin): 5 pass 6 7 class CategoryAdmin(admin.ModelAdmin): 8 pass 9 10 admin.site.register(Post, PostAdmin) 11 admin.site.register(Category, CategoryAdmin) On line 2, you import the models you want to register on the admin page. Note: We’re not adding the comments to the admin. That’s because it’s not usually necessary to edit or create comments yourself. If you wanted to add a feature where comments are moderated, then go ahead and add the Comments model too. The steps to do so are exactly the same! On line 5 and line 9, you define empty classes CategoryAdmin. For the purposes of this tutorial, you don’t need to add any attributes or methods to these classes. They are used to customize what is shown on the admin pages. For this tutorial, the default configuration is enough. The last two lines are the most important. These register the models with the admin classes. If you now visit localhost:8000/admin, then you should see that the Category models are now visible: If you click into Posts or Categorys, you should be able to add new instances of both models. I like to add the text of fake blog posts by using lorem ipsum dummy text. Create a couple of fake posts and assign them fake categories before moving onto the next section. That way, you’ll have posts you can view when we create our templates. Don’t forget to check out the source code for this section before moving on to building out the views for our app. 
Blog App: Views

You'll need to create three view functions in the views.py file in the blog directory:

- blog_index will display a list of all your posts.
- blog_detail will display the full post as well as comments and a form to allow users to create new comments.
- blog_category will be similar to blog_index, but the posts viewed will only be of a specific category chosen by the user.

The simplest view function to start with is blog_index(). This will be very similar to the project_index() view from your project app. You'll just query the Post model and retrieve all its objects:

1 from django.shortcuts import render
2 from blog.models import Post
3 
4 def blog_index(request):
5     posts = Post.objects.all().order_by('-created_on')
6     context = {
7         "posts": posts,
8     }
9     return render(request, "blog_index.html", context)

On line 2, you import the Post model, and on line 5, inside the view function, you obtain a Queryset containing all the posts in the database. order_by() orders the Queryset according to the argument given. The minus sign tells Django to start with the largest value rather than the smallest. We use this because we want the posts to be ordered with the most recent post first.

Finally, you define the context dictionary and render the template. Don't worry about creating the template yet; you'll get to that in the next section.

Next, you can start to create the blog_category() view. The view function will need to take a category name as an argument and query the Post database for all posts that have been assigned the given category:

13 def blog_category(request, category):
14     posts = Post.objects.filter(
15         categories__name__contains=category
16     ).order_by(
17         '-created_on'
18     )
19     context = {
20         "category": category,
21         "posts": posts
22     }
23     return render(request, "blog_category.html", context)

On line 14, you've used a Django Queryset filter. The argument of the filter tells Django what conditions need to be met for an object to be retrieved.
In this case, we only want posts whose categories contain the category with the name corresponding to that given in the argument of the view function. Again, you're using order_by() to order posts starting with the most recent. We then add these posts and the category to the context dictionary and render our template.

The last view function to add is blog_detail(). This is more complicated, as we are going to include a form. Before you add the form, just set up the view function to show a specific post with the comments associated with it. This function will be almost equivalent to the project_detail() view function in the projects app:

21 def blog_detail(request, pk):
22     post = Post.objects.get(pk=pk)
23     comments = Comment.objects.filter(post=post)
24     context = {
25         "post": post,
26         "comments": comments,
27     }
28 
29     return render(request, "blog_detail.html", context)

The view function takes a pk value as an argument and, on line 22, retrieves the object with the given pk. On line 23, we retrieve all the comments assigned to the given post, using Django filters again. Lastly, we add both to the context dictionary and render the template.

To add a form to the page, you'll need to create another file in the blog directory named forms.py. Django forms are very similar to models. A form consists of a class where the class attributes are form fields. Django comes with some built-in form fields that you can use to quickly create the form you need. For this form, the only fields you'll need are author, which should be a CharField, and body, which can also be a CharField.

Note: If the CharField of your form corresponds to a model CharField, make sure both have the same max_length value.
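Stepping back, the queryset pattern shared by these views (filter to matching rows, then order newest-first) can be imitated on plain Python lists. This is only an analogy using hypothetical data; the ORM translates the same idea into SQL:

```python
from datetime import date

# Hypothetical in-memory stand-ins for Post rows.
posts = [
    {"title": "Old python post", "categories": ["python"], "created_on": date(2019, 1, 1)},
    {"title": "New python post", "categories": ["python"], "created_on": date(2019, 3, 1)},
    {"title": "Travel post", "categories": ["travel"], "created_on": date(2019, 2, 1)},
]

# Roughly filter(categories__name__contains="python").order_by('-created_on'):
python_posts = sorted(
    (p for p in posts if any("python" in name for name in p["categories"])),
    key=lambda p: p["created_on"],
    reverse=True,  # plays the role of the leading minus sign
)
print([p["title"] for p in python_posts])  # newest first
```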
blog/forms.py should contain the following code:

from django import forms

class CommentForm(forms.Form):
    author = forms.CharField(
        max_length=60,
        widget=forms.TextInput(attrs={
            "class": "form-control",
            "placeholder": "Your Name"
        })
    )
    body = forms.CharField(widget=forms.Textarea(
        attrs={
            "class": "form-control",
            "placeholder": "Leave a comment!"
        })
    )

You'll also notice that a widget argument has been passed to both fields. The author field has the forms.TextInput widget. This tells Django to load this field as an HTML text input element in the templates. The body field uses a forms.Textarea widget instead, so that the field is rendered as an HTML text area element. These widgets also take an argument attrs, which is a dictionary that allows us to specify some CSS classes, which will help with formatting the template for this view later. It also allows us to add some placeholder text.

When a form is posted, a POST request is sent to the server. So, in the view function, we need to check whether a POST request has been received. We can then create a comment from the form fields. Django comes with a handy is_valid() method on its forms, so we can check that all the fields have been entered correctly.

Once you've created the comment from the form, you'll need to save it using save() and then query the database for all the comments assigned to the given post. Your view function should contain the following code:

21 def blog_detail(request, pk):
22     post = Post.objects.get(pk=pk)
23 
24     form = CommentForm()
25     if request.method == 'POST':
26         form = CommentForm(request.POST)
27         if form.is_valid():
28             comment = Comment(
29                 author=form.cleaned_data["author"],
30                 body=form.cleaned_data["body"],
31                 post=post
32             )
33             comment.save()
34 
35     comments = Comment.objects.filter(post=post)
36     context = {
37         "post": post,
38         "comments": comments,
39         "form": CommentForm(),
40     }
41     return render(request, "blog_detail.html", context)

On line 24, we create an instance of our form class.
Don’t forget to import your form at the beginning of the file: from . import CommentForm We then go on to check if a POST request has been received. If it has, then we create a new instance of our form, populated with the data entered into the form. The form is then validated using is_valid(). If the form is valid, a new instance of Comment is created. You can access the data from the form using form.cleaned_data, which is a dictionary. They keys of the dictionary correspond to the form fields, so you can access the author using form.cleaned_data['author']. Don’t forget to add the current post to the comment when you create it. Note: The life cycle of submitting a form can be a little complicated, so here’s an outline of how it works: - When a user visits a page containing a form, they send a GETrequest to the server. In this case, there’s no data entered in the form, so we just want to render the form and display it. - When a user enters information and clicks the Submit button, a POSTrequest, containing the data submitted with the form, is sent to the server. At this point, the data must be processed, and two things can happen: - The form is valid, and the user is redirected to the next page. - The form is invalid, and empty form is once again displayed. The user is back at step 1, and the process repeats. The Django forms module will output some errors, which you can display to the user. This is beyond the scope of this tutorial, but you can read more about rendering form error messages in the Django documentation. On line 34, save the comment and go on to add the form to the context dictionary so you can access the form in the HTML template. The final step before you get to create the templates and actually see this blog up and running is to hook up the URLs. You’ll need create another urls.py file inside blog/ and add the URLs for the three views: from django.urls import path from . 
import views urlpatterns = [ path("", views.blog_index, name="blog_index"), path("<int:pk>/", views.blog_detail, name="blog_detail"), path("<category>/", views.blog_category, name="blog_category"), ] Once the blog-specific URLs are in place, you need to add them to the projects URL configuration using include(): from django.contrib import admin from django.urls import path, include urlpatterns = [ path("admin/", admin.site.urls), path("projects/", include("projects.urls")), path("blog/", include("blog.urls")), ] With this set up, all the blog URLs will be prefixed with blog/, and you’ll have the following URL paths: localhost:8000/blog: Blog index localhost:8000/blog/1: Blog detail view of blog with pk=1 localhost:8000/blog/python: Blog index view of all posts with category python These URLs won’t work just yet as you still need to create the templates. In this section, you created all the views for your blog application. You learned how to use filters when making queries and how to create Django forms. It won’t be long now until you can see your blog app in action! As always, don’t forget that you can check out the source code for this section on GitHub. Blog App: Templates The final piece of our blog app is the templates. By the end of this section, you’ll have created a fully functioning blog. You’ll notice there are some bootstrap elements included in the templates to make the interface prettier. These aren’t the focus of the tutorial so I’ve glossed over what they do but do check out the Bootstrap docs to find out more. The first template you’ll create is for the blog index in a new file blog/templates/blog_index.html. This will be very similar to the projects index view. You’ll use a for loop to loop over all the posts. For each post, you’ll display the title and a snippet of the body. 
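The snippet is produced with Django's slice template filter, which behaves like ordinary Python string slicing. A quick illustration:

```python
body = "word " * 200  # a long post body, 1000 characters

# {{ post.body | slice:":400" }} keeps the first 400 characters,
# just like body[:400] in Python; the template then appends "...".
snippet = body[:400] + "..."
print(len(snippet))  # 403
```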
As always, you’ll extend the base template personal_porfolio/templates/base.html, which contains our navigation bar and some extra formatting: 1 {% extends "base.html" %} 2 {% block page_content %} 3 <div class="col-md-8 offset-md-2"> 4 <h1>Blog Index</h1> 5 <hr> 6 {% for post in posts %} 7 <h2><a href="{% url 'blog_detail' post.pk%}">{{ post.title }}</a></h2> 8 <small> 9 {{ post.created_on.date }} | 10 Categories: 11 {% for category in post.categories.all %} 12 <a href="{% url 'blog_category' category.name %}"> 13 {{ category.name }} 14 </a> 15 {% endfor %} 16 </small> 17 <p>{{ post.body | slice:":400" }}...</p> 18 {% endfor %} 19 </div> 20 {% endblock %} On line 7, we have the post title, which is a hyperlink. The link is a Django link where we are pointing to the URL named blog_detail, which takes an integer as its argument and should correspond to the pk value of the post. Underneath the title, we’ll display the created_on attribute of the post as well as its categories. On line 11, we use another for loop to loop over all the categories assigned to the post. On line 17, we use a template filter slice to cut off the post body at 400 characters so that the blog index is more readable. Once that’s in place, you should be able to access this page by visiting localhost:8000/blog: Next, create another HTML file blog/templates/blog_category.html where your blog_category template will live. 
This should be identical to blog_index.html, except with the category name inside the h1 tag instead of Blog Index:

{% extends "base.html" %}
{% block page_content %}
<div class="col-md-8 offset-md-2">
    <h1>{{ category | title }}</h1>
    <hr>
    {% for post in posts %}
    <h2><a href="{% url 'blog_detail' post.pk%}">{{ post.title }}</a></h2>
    <small>
        {{ post.created_on.date }} |
        Categories:
        {% for category in post.categories.all %}
        <a href="{% url 'blog_category' category.name %}">
            {{ category.name }}
        </a>
        {% endfor %}
    </small>
    <p>{{ post.body | slice:":400" }}...</p>
    {% endfor %}
</div>
{% endblock %}

Most of this template is identical to the previous template. The only difference is on line 4, where we use another Django template filter, title. This applies titlecase to the string and makes words start with an uppercase character.

With that template finished, you'll be able to access your category view. If you defined a category named python, you should be able to visit localhost:8000/blog/python and see all the posts with that category:

The last template to create is the blog_detail template. In this template, you'll display the title and full body of a post. Between the title and the body of the post, you'll display the date the post was created and any categories. Underneath that, you'll include a comments form so users can add a new comment.
Under this, there will be a list of comments that have already been left:

 1 {% extends "base.html" %}
 2 {% block page_content %}
 3 <div class="col-md-8 offset-md-2">
 4     <h1>{{ post.title }}</h1>
 5     <small>
 6         {{ post.created_on.date }} |
 7         Categories:
 8         {% for category in post.categories.all %}
 9         <a href="{% url 'blog_category' category.name %}">
10             {{ category.name }}
11         </a>
12         {% endfor %}
13     </small>
14     <p>{{ post.body | linebreaks }}</p>
15     <h3>Leave a comment:</h3>
16     <form action="/blog/{{ post.pk }}/" method="post">
17         {% csrf_token %}
18         <div class="form-group">
19             {{ form.author }}
20         </div>
21         <div class="form-group">
22             {{ form.body }}
23         </div>
24         <button type="submit" class="btn btn-primary">Submit</button>
25     </form>
26     <h3>Comments:</h3>
27     {% for comment in comments %}
28     <p>
29         On {{ comment.created_on.date }}
30         <b>{{ comment.author }}</b> wrote:
31     </p>
32     <p>{{ comment.body }}</p>
33     <hr>
34     {% endfor %}
35 </div>
36 {% endblock %}

The first few lines of the template, in which we display the post title, date, and categories, follow the same logic as the previous templates. This time, when rendering the post body, we use the linebreaks template filter. This filter renders line breaks as new paragraphs, so the body doesn't appear as one long block of text.

Underneath the post, on line 16, you'll display your form. The form action points to the URL path of the page to which you're sending the POST request. In this case, it's the same as the page that is currently being visited. You then add a csrf_token, which provides security, and render the body and author fields of the form, followed by a submit button.

To get the Bootstrap styling on the author and body fields, you need to add the form-control class to the text inputs. Because Django renders the inputs for you when you include {{ form.body }} and {{ form.author }}, you can't add these classes in the template. That's why you added the attributes to the form widgets in the previous section.
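As a rough picture of what those widget attrs become, here is a simplified renderer. This is a sketch only; Django's real widgets also emit name, id, type, and maxlength attributes:

```python
# Simplified: turn a widget's attrs dict into an HTML attribute string.
def render_input(tag, attrs):
    rendered = " ".join(f'{key}="{value}"' for key, value in attrs.items())
    return f"<{tag} {rendered}>"

html = render_input("input", {"class": "form-control", "placeholder": "Your Name"})
print(html)  # <input class="form-control" placeholder="Your Name">
```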
Underneath the form, there's another for loop that loops over all the comments on the given post. The comment's body, author, and created_on attributes are all displayed. Once that template is in place, you should be able to visit localhost:8000/blog/1 and view your first post: You should also be able to access the post detail pages by clicking on their title in the blog_index view.

The final finishing touch is to add a link to the blog_index to the navigation bar in base.html. This way, when you click on Blog in the navigation bar, you'll be able to visit the blog. Check out the updates to base.html in the source code to see how to add that link. With that now in place, your personal portfolio site is complete, and you've created your first Django site. The final version of the source code containing all the features can be found on GitHub, so check it out!

Click around the site a bit to see all the functionality and try leaving some comments on your posts! You may find a few things here and there that you think need polishing. Go ahead and tidy them up. The best way to learn more about this web framework is through practice, so try to extend this project and make it even better! If you're not sure where to start, I've left a few ideas for you in the conclusion below! Stay tuned for Part 2 of this series!
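For reference, the Blog link added to base.html can be as small as a single anchor. This is a sketch of what that addition might look like; the class names assume the Bootstrap navbar used earlier in the tutorial, and `blog_index` is the URL name defined in the blog app's urls.py:

```html
<a class="nav-item nav-link" href="{% url 'blog_index' %}">Blog</a>
```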
A package of pre-built TextInputFormatter objects to use with Flutter's TextField or TextFormField widgets.

- UppercaseInputFormatter, example 'THISISMYTEXT'
- LowercaseInputFormatter, example 'thisismytext'
- AlternatingCapsInputFormatter, example 'ThIsIsMyTeXt'

new TextField(
  inputFormatters: [
    UppercaseInputFormatter(),
  ],
),

This widget set relies on external third-party components. If you want to discuss this project, please join the Discord chat.

example/lib/main.dart

import 'package:flutter/material.dart';
import 'package:text_formatters/text_formatters.dart';

void main() => runApp(new MyApp());

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      title: 'Text Formatters Demo',
      home: new MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key}) : super(key: key);

  @override
  _MyHomePageState createState() => new _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      appBar: new AppBar(
        title: new Text('Text Formatters Example'),
      ),
      body: new Column(
        mainAxisAlignment: MainAxisAlignment.center,
        children: <Widget>[
          new TextField(
            decoration: InputDecoration(
              labelText: "(no formatter)",
            ),
          ),
          new TextField(
            decoration: InputDecoration(
              labelText: "UppercaseInputFormatter",
            ),
            inputFormatters: [
              UppercaseInputFormatter(),
            ],
          ),
          new TextField(
            decoration: InputDecoration(
              labelText: "LowercaseInputFormatter",
            ),
            inputFormatters: [
              LowercaseInputFormatter(),
            ],
          ),
          new TextField(
            decoration: InputDecoration(
              labelText: "AlternatingCapsInputFormatter",
            ),
            inputFormatters: [
              AlternatingCapsInputFormatter(),
            ],
          ),
        ],
      ),
    );
  }
}

Add this to your package's pubspec.yaml file:

dependencies:
  text_formatters:

import 'package:text_formatters/text_formatters.dart';

We analyzed this package on Apr 16, 2019, and provided a score, details, and suggestions below.
Analysis was completed with status completed using: Detected platforms: Error(s) prevent platform classification: Fix dependencies in pubspec.yaml. Fix dependencies in pubspec.yaml. Running flutter packages pub upgrade failed with the following output: ERR: The current Flutter SDK version is 1.4.7. Because text_formatters depends on groovin_string_masks any which requires Flutter SDK version ^0.5.1, version solving failed. Fix platform conflicts. (-20 points) Error(s) prevent platform classification: Fix dependencies in pubspec.yaml. Make sure dartdoc successfully runs on your package's source files. (-10 points) Dependencies were not resolved.
In the function below, the compiler would like to replace the final load of *x with the constant 0, but that cannot happen because x and y might refer to the same location:

int foo(int *x, int *y) {
  *x = 0;
  *y = 1;
  return *x;
}

Generated code typically looks like this:

foo:
        movl $0, (%rdi)
        movl $1, (%rsi)
        movl (%rdi), %eax
        ret

Failure to optimize is particularly frustrating when we know for sure that x and y are not aliases. The "restrict" keyword in C was introduced to help solve this sort of problem but we're not going to talk about that today. Rather, we're going to talk about an orthogonal solution: the aliasing rules in C and C++ that permit the compiler to assume that an object will not be aliased by a pointer to a different type. Often this is called "strict aliasing" although that term does not appear in the standards. Consider, for example, this variant of the program above:

int foo(int *x, long *y) {
  *x = 0;
  *y = 1;
  return *x;
}

Since a pointer-to-int and a pointer-to-long may be assumed not to alias each other, the function can be compiled to return zero:

foo2:
        movl $0, (%rdi)
        xorl %eax, %eax
        movq $1, (%rsi)
        ret

As we see here, the aliasing rules give C and C++ compilers leverage that they can use to generate better code. On the other hand, since C is a low-level language and C++ can be used as a low-level language, both permit casting between pointer types, which can end up creating aliases that violate the compiler's assumptions.
For example, we might naively write code like this to access the representation of a floating point value:

unsigned long bogus_conversion(double d) {
  unsigned long *lp = (unsigned long *)&d;
  return *lp;
}

This function is undefined under the aliasing rules and, while it happens to be compiled into the same code that would be emitted without the strict aliasing rules, it is easy to write incorrect code that looks like it is getting broken by the optimizer:

#include <stdio.h>

long foo(int *x, long *y) {
  *x = 0;
  *y = 1;
  return *x;
}

int main(void) {
  long l;
  printf("%ld\n", foo((int *)&l, &l));
}

$ gcc-5 strict.c ; ./a.out
1
$ gcc-5 -O2 strict.c ; ./a.out
0
$ clang strict.c ; ./a.out
1
$ clang -O2 strict.c ; ./a.out
0

An exception to the strict aliasing rules is made for pointers to character types, so it is always OK to inspect an object's representation via an array of chars. This is necessary to make memcpy-like functions work properly. So far, this is very well known. Now let's look at a few consequences of strict aliasing that are perhaps not as widely known.

Physical Subtyping is Broken

An old paper that I like uses the term "physical subtyping" to refer to the struct-based implementation of inheritance in C. Searching for "object oriented C" returns quite a few links. Additionally, many large C systems (the Linux kernel for example) implement OO-like idioms. Any time this kind of code casts between pointer types and dereferences the resulting pointers, it violates the aliasing rules. Many aliasing rule violations can be found in this book about object oriented C. Some build systems, such as Linux's, invoke GCC with its -fno-strict-aliasing flag to avoid problems.

Update based on some comments from Josh Haberman and Sam Tobin-Hochstadt: It looks like the specific case where the struct representing the derived type includes its parent as its first member should not trigger UB. The language in this part of the standard is very hard to parse out.
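The bogus_conversion function above has a defined-behavior equivalent: copy the object representation with memcpy instead of dereferencing a differently-typed pointer. A minimal sketch (the function name is mine; it assumes a 64-bit double, which the _Static_assert checks):

```c
#include <stdint.h>
#include <string.h>

/* Defined-behavior alternative to bogus_conversion: inspect the object
   representation by copying it, rather than by dereferencing a pointer
   of the wrong type. */
uint64_t bits_of_double(double d) {
    uint64_t u;
    _Static_assert(sizeof u == sizeof d, "assumes 64-bit double");
    memcpy(&u, &d, sizeof u); /* optimizers typically lower this to one move */
    return u;
}
```

At -O2, GCC and Clang typically compile this to the same single register move the cast would have produced, without the undefined behavior.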
This program from the Cerberus project illustrates the problem with changing the type of a pointer to struct:

#include <stdio.h>

typedef struct { int i1; } s1;
typedef struct { int i2; } s2;

void f(s1 *s1p, s2 *s2p) {
  s1p->i1 = 2;
  s2p->i2 = 3;
  printf("%i\n", s1p->i1);
}

int main() {
  s1 s = {.i1 = 1};
  f(&s, (s2 *)&s);
}

$ gcc-5 effective.c ; ./a.out
3
$ gcc-5 -O2 effective.c ; ./a.out
2
$ clang-3.8 effective.c ; ./a.out
3
$ clang-3.8 -O2 effective.c ; ./a.out
3

Chunking Optimizations Are Broken

Code that processes bytes one at a time tends to be slow. While an optimizing compiler can sometimes make a naive character-processing loop much faster, in practice we often need to help the compiler out by explicitly processing word-sized chunks of bytes at a time. Since the data reinterpretation is generally done by casting to a non-character-typed pointer, the resulting accesses are undefined. Search the web for "fast memcpy" or "fast memset": many of the hits will return erroneous code. Example 1, example 2, example 3. Although I have no evidence that it is being miscompiled, OpenSSL's AES implementation uses chunking and is undefined. One way to get chunking optimizations without UB is to use GCC's may_alias attribute, as seen here in Musl. This isn't supported even by Clang, as far as I know.
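For illustration, here is roughly what a may_alias-based chunked fill looks like. This is my own sketch under the GCC extension, not Musl's actual code (and, as a comment below notes, recent Clang does accept the attribute). Note that may_alias addresses aliasing only, not alignment, so the unaligned head and tail are handled bytewise:

```c
#include <stddef.h>
#include <stdint.h>

/* GCC extension: loads/stores through this type may alias objects of any
   type, much like a character type, so the word-sized stores below are OK. */
typedef uint64_t __attribute__((__may_alias__)) word_alias;

/* Sketch of a chunked memset-like fill. */
void *fill_bytes(void *dst, unsigned char c, size_t n) {
    unsigned char *p = dst;
    /* bytewise until p is word-aligned: may_alias does not waive alignment */
    while (n && ((uintptr_t)p % sizeof(word_alias)) != 0) {
        *p++ = c;
        n--;
    }
    uint64_t pattern = c * 0x0101010101010101ull; /* replicate byte 8 times */
    while (n >= sizeof(word_alias)) {
        *(word_alias *)p = pattern; /* defined thanks to may_alias */
        p += sizeof(word_alias);
        n -= sizeof(word_alias);
    }
    while (n--) {
        *p++ = c; /* bytewise tail */
    }
    return dst;
}
```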
Offset Overlap is Bad

Here is a devilish little program by Richard Biener and Robbert Krebbers that I found via the Cerberus report:

#include <stdio.h>
#include <stdlib.h>

struct X {
  int i;
  int j;
};

int foo(struct X *p, struct X *q) {
  q->j = 1;
  p->i = 0;
  return q->j;
}

int main() {
  unsigned char *p = malloc(3 * sizeof(int));
  printf("%i\n", foo((struct X *)(p + sizeof(int)), (struct X *)p));
}

It is ill-formed according to LLVM and GCC:

$ clang-3.8 krebbers.c ; ./a.out
0
$ clang-3.8 -O2 krebbers.c ; ./a.out
1
$ gcc-5 krebbers.c ; ./a.out
0
$ gcc-5 -O2 krebbers.c ; ./a.out
1

int8_t and uint8_t Are Not Necessarily Character Types

This bug (and some linked discussions) indicate that compiler developers don't necessarily consider int8_t and uint8_t to be character types for aliasing purposes. Wholesale replacement of character types with standard integer types — as advocated here, for example — would almost certainly lead to interesting strict aliasing violations when the resulting code was run through a compiler that doesn't think int8_t and uint8_t are character types. Happily, no compiler has done this yet (that I know of).

Summary

A lot of C code is broken under strict aliasing. Separate compilation is probably what protects us from broader compiler exploitation of the brokenness, but it is a very poor kind of protection. Static and dynamic checking tools are needed. If I were writing correctness-oriented C that relied on these casts I wouldn't even consider building it without -fno-strict-aliasing.

Pascal Cuoq provided feedback on a draft of this piece.

Nice writeup! One thing that works well in practice in cases where you really don't want to process single bytes at a time is to use a union for going from one type to another. Granted, writing one member of a union and reading from a different one also isn't allowed per the standard, but every compiler I've come across allows it. It's useful in cases where you need to work with a type-punned pointer.
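The union idiom endorsed in the comment above can be sketched as follows (function names are mine). A footnote to 6.5.2.3 in C99/C11 says the bytes are reinterpreted as the other member's type, and mainstream C compilers implement exactly that; C++ formally disallows it, so prefer memcpy there:

```c
#include <stdint.h>

/* Union-based type punning: store one member, read another.
   Blessed by a C99/C11 footnote (6.5.2.3); not valid in C++. */
float float_from_bits(uint32_t bits) {
    union { uint32_t u; float f; } pun;
    pun.u = bits;
    return pun.f; /* reinterprets the stored bytes as a float */
}

uint32_t bits_from_float(float f) {
    union { uint32_t u; float f; } pun;
    pun.f = f;
    return pun.u; /* reinterprets the stored bytes as a uint32_t */
}
```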
The section on physical subtyping doesn’t quite make clear if the standard approach to subtyping in C, where structs have their parent as their first field, is also broken. The code snippet there has unrelated struct types, but now I’m worried about related ones. Of course, disallowing that would break every serious C program I’ve ever seen, in fundamental ways, but that hasn’t stopped anyone yet. It is possible to violate the strict aliasing rule even without casting pointers, by taking pointers to different members of a union. Sam, see here: > … compiler developers don’t necessarily consider int8_t and uint8_t to be character types for aliasing purposes It makes sense. What about compilers for systems where CHAR_BIT is not 8? Regarding the rest of the post, thank you; this is eye-opening. I’m particularly concerned with where compilers are going with subtyping and optimizations. It’s almost as if the compiler folks and language designers want C to have immutable types. (I see a new type qualifier in the future.) I have yet to find a way to create an “opaque shell type” suitable for static allocation (hence, not incomplete) which would then be used as argument to functions using the real private type which size and alignment are <= to opaque shell type. It works, but it violates strict aliasing. And the problem is, there is simply no other way… tried to recompile C code that I Wrote in 1996 at university. (a monte carlo simulation of multi agents). in 2013 it was still compiling. And now it is not. I have written a drivers for linux that worked in 2000, python C extension 3 years ago for fun. And I always wrote some C code once in a while, because C was fun. C code used to do exactly what I was thinking it was doing. I also tried to rewrite code 2 months ago, it was a major pain. As a former physicist I limit myself to $CC [-O2] code.c I have written enough code that worked to have been pretty confident in my code. 
It is still the same syntax, but it is not the same language anymore. A lot of new sybilin limitations and flags that make no sense appeared in the compilerS. If it has the same syntax, but not the same behaviour. I would call modern C appellation of what llvm and gcc implements a lie. I am not confident anymore, not of my code, but of the output of the compilers. It is ++C– for me. Like the notation implies the new stuffs have some hidden side effects and incorrect assumptions that don’t make it idempotent to my plain old C. And stackoverflow proposed me no sets of flags to set recent C compilers to the old behaviour state. I think the geniuses behind the new compiler theory have lost track in their delirium of the need of users. They seems to kind of force a non backward compatible C behaviour. C users want a boring C compiler and don’t want any kind of bullshit crap à la Java to prevent poor coders to make mistakes and C compilers to map the code in anything other than expected in order to change poorly performing C by construction in extremely fast C. llvm and gcc are painfully changing C in a new beast I hate. I’m not sure about C, but C++ specifically supports converting a pointer to a standard-layout struct to a pointer to its initial member and vice versa in order to support structural subtyping. There are also related special cases around the common initial sequence of standard-layout structs inside a union. Could someone plz provide an explanation why the “long l” piece of code above produces different results based on the optimization level?.. If x and y don’t alias, the stores to x and y can be executed in any order (this is useful because it gives the compiler more freedom to reorder or schedule code): int foo(int *x, long *y) { *x = 0; *y = 1; return *x; } Dave5: The function foo() receives two pointers to the same memory location. Without optimization is does as the code says: write 0, write 1, read back (1) and return 1. 
With optimization, it assumes that x and y point to different memory locations because they have different types. Because it just wrote 0 to *x, it skips reading back from *x and just returns 0. You are 100% correct that separate compilation is saving us from a lot of bugs. I have personally seen code break due to aliasing issues when cross-module inlining was enabled. My thought is that we should throw out the aliasing rules and take a subset; add a keyword that is the inverse of restrict, and say only that arguments to the same function with different types implicitly have the restrict keyword. I predict that this will yield a significant fraction of the fortran-like performance optimizations that the aliasing rules currently allow, while fixing a lot of breakage that already happens. > Separate compilation is probably what protects us from broader compiler exploitation of the brokenness, but it is a very poor kind of protection.. > Granted, writing one member of a union and reading from a different one also isn’t allowed per the standard, but every compiler I’ve come across allows it. Why do you think the standard forbids it? The C committee apparently disagrees; their response to DR 283 added a footnote clarifying that the standard allowed this behavior. IIRC, the older C90 standard had labeled the behavior “implementation-defined,” because it was supposed to be defined by the data representation. I still remember the surprise I felt when I realized that (at least in C++) ‘signed char’, ‘unsigned char’ and ‘char’ are all different type, can have templates specialized for each, overloaded functions etc… And of course the only type that is considered to may-alias any other type is ‘char’. Thank you all for the answer. 🙂 Chunking is still possible to implement without aliasing violations, provided the compiler is good at optimizing regular memcpy() calls. (So this example is sort of moot, but it illustrates the point.) 
void *chunking_memcpy(void *dest, const void *src, size_t n) { char *cdest = dest; const char *csrc = src; const char *cend = csrc + n; unsigned long chunk; while (csrc + sizeof(chunk) <= cend) { memcpy(&chunk, csrc, sizeof(chunk)); memcpy(cdest, &chunk, sizeof(chunk)); cdest += sizeof(chunk); csrc += sizeof(chunk); } while (csrc < cend) { *cdest++ = *csrc++; } return dest; } With gcc 5.3, the inner loop becomes .L3: movq -8(%rcx), %r9 addq $8, %rcx addq $8, %rsi movq %r9, -8(%rsi) cmpq %rcx, %r8 jnb .L3 (Then it does some crazy stuff to unroll the tail fixup loop, but still.) @Ryan Prichard: I may be wrong, but the C standard says: “6.5.2.3 Structure and union members.” To me, the last sentence (“This might be a trap representation.”) means that this might very well blow up in my face. Even in the newest version of the standard it still is unspecified behaviour, right? The C++11 standard says the following: “9.5 Unions [class.union] In a union, at most one of the non-static data members can be active at any time, that is, the value of at most one of the non-static data members can be stored in a union at any time.” Digging a bit deeper into the standard doesn’t really clear things up for me. See this thread for more info: Tavian, you’re right, I should have mentioned memcpy() in the post, it’s a very clear way to implement chunking. The only problem is that it may generate non-optimized builds that perform quite poorly. Stefan and Ryan: I wanted to steer clear of unions in this post, but basically they are a reliable way to do type-punning in modern C and C++. The standards really are a mess, but the compiler implementors have done the right thing here. Jason, that’s great to hear that things aren’t breaking! Even so, I’m somewhat worried that we’ll be seeing a slow series of time bombs going off as people tweak and tune LTO to increase its reach. Obviously nobody should highly optimize code unless they know it UB sane. My laptop’s a space monkey. 
I agree, there are some tricky cases where strict aliasing can bite. The attribute approach definitely seems like reasonable one. Static analyzers and instrumentation are also definitely necessary, particularly if you optimize across translation units. @regehr Yep. And because of course nobody can keep all the undefined behaviours straight at once, the loop’s test should be “cend – csrc >= sizeof(chunk)” instead of “csrc + sizeof(chunk) <= cend" to avoid making a pointer point out of bounds. Tavian I was just working on a different post that talks about avoiding pointers that are out of bounds! As far as I can tell, Clang does handle `may_alias`: . I would expect this to change if you apply LTO across libc as well. > Chunking is still possible to implement without aliasing violations, provided the > compiler is good at optimizing regular memcpy() calls. Nice trick! Though this doesn’t help you if the chunking function you’re implementing *is* `memcpy`. >> Chunking is still possible to implement without aliasing violations, provided the >> compiler is good at optimizing regular memcpy() calls. > Nice trick! Though this doesn’t help you if the chunking >function you’re implementing *is* `memcpy`. This crystallizes the situation, completely. C is originally a self-hosting system programming language, but has been turned into something that can no longer self-host. That says pretty clearly to me, You Compiler Folks Are Doing It Wrong. Bytes are bytes, and I write statements in the order they are intended to execute. Whenever you break that ‘as if’ constraint, your compiler is broken. Correctness must always come first. None of this shit would have flown back when I was a gcc2 maintainer. (Having written a link time optimizer for M68K, I know pretty well what works and what doesn’t.) Seems to me that gcc’s downhill progress correlates with the growth of x86. 
The natural outcome from compiler writers desperate to wring performance out of an intractably non-designed instruction set. Instruction reordering wasn’t even an issue on SPARC, MIPS, i860, etc. It’s only due to x86’s byzantine architecture that most of this was ever needed. Sorry but that way things are going in C, if you think correctness is more important than speed, you better not use C or C++ if you can. > Devon H. O’Dell says: > March 15, 2016 at 7:47 am > It makes sense. What about compilers for systems where CHAR_BIT is not 8? AFAIK `int8_t` cannot exist on such a system, since `char` is the smallest type has at least 8 bits. The clang 3.8 release notes say they are even more aggressive now with alignment optimisations: I’m confused by the memcpy() notes there… Is this not safe anymore: int64_t foo; memcpy(&foo, “12345678”, 8); I thought memcpy was the safe way to move things between different alignments. > I’m confused by the memcpy() notes there… Is this not safe anymore: > > int64_t foo; > memcpy(&foo, “12345678”, 8); No, I believe this is safe. I think what the LLVM notes are claiming is something similar to what ARM CC has done for some time: Memcpy can move between things of different alignments, but it is allowed to make assumptions about the alignment of the source and destination based on their types. For example, in your code the compiler is allowed to assume `&foo` is 8-byte aligned. Having said that, I’m having a hard time parsing the odd char array example the LLVM docs use. Jason ” add a keyword that is the inverse of restrict” As I understand it, the C++ folks have gone the other way ???? and have added a “reference” type, which is sort of ??? equivilant to a FORTRAN or Pascal or BASIC var type. That allowed them to have FORTRAN / Pascal /BASIC levels of optimisation, at a time when they hadn’t yet broken aliasing??? Forgive my ignorance. It seems to me that people like language extensions better than they like language restrictions. 
Perhaps we should be lobbying for an extension to C that would allow optimisable pointers, like Var or Reference, so that pointers could be aliased the way programmers intend, and compiler writers could achieve optimisations. >> I’m confused by the memcpy() notes there… Is this not safe anymore: >> >> int64_t foo; >> memcpy(&foo, “12345678”, 8); >No, I believe this is safe. I don’t see how that is safe. Is there anything in the language standard that says sizeof(int64_t) is at least eight? Aren’t implementations allowed to use 16-bits, or more, for char? Oh, the good old aliasing 🙂 I keep maintaining a memory manager for my SAT solver, and I had to dig into aliasing for the reasons explained. It’s fun. For highly-optimized math libraries (thinking of Gaussian Elimination here) the judicious use of the “restrict” keyword can make huge differences, I’ve heard. Martin Albrecht could probably do a nice writeup on that. See — arith library for dense matrices over F2
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a popular unsupervised learning method utilized in model building and machine learning algorithms. Before we go any further, we need to define what an “unsupervised” learning method is. Unsupervised learning methods are when there is no clear objective or outcome we are seeking to find. Instead, we are clustering the data together based on the similarity of observations. To help clarify, let’s take Netflix as an example. Based on previous shows you have watched in the past, Netflix will recommend shows for you to watch next. Anyone who has ever watched or been on Netflix has seen the screen below with recommendations (Yes, this image is taken directly from my Netflix account and if you have never watched Shameless before I suggest you get on that ASAP). Because I watched ‘Shameless’, Netflix recommends several other similar shows to watch. But where is Netflix gathering those recommendations from? Considering it is trying to predict the future with what show I am going to watch next, Netflix has nothing to base the predictions or recommendations on (no clear definitive objective). Instead, Netflix looks at other users who have also watched ‘Shameless’ in the past, and looks at what those users watched in addition to ‘Shameless’. By doing so, Netflix is clustering its users together based on similarity of interests. This is exactly how unsupervised learning works. Simply clustering observations together based on similarity, hoping to make accurate conclusions based on the clusters. Back to DBSCAN. DBSCAN is a clustering method that is used in machine learning to separate clusters of high density from clusters of low density. Given that DBSCAN is a density based clustering algorithm, it does a great job of seeking areas in the data that have a high density of observations, versus areas of the data that are not very dense with observations. 
DBSCAN can sort data into clusters of varying shapes as well, another strong advantage. DBSCAN works as such: - Divides the dataset into n dimensions - For each point in the dataset, DBSCAN forms an n dimensional shape around that data point, and then counts how many data points fall within that shape. - DBSCAN counts this shape as a cluster. DBSCAN iteratively expands the cluster, by going through each individual point within the cluster, and counting the number of other data points nearby. Take the graphic below for an example: Going through the aforementioned process step-by-step, DBSCAN will start by dividing the data into n dimensions. After DBSCAN has done so, it will start at a random point (in this case lets assume it was one of the red points), and it will count how many other points are nearby. DBSCAN will continue this process until no other data points are nearby, and then it will look to form a second cluster. As you may have noticed from the graphic, there are a couple parameters and specifications that we need to give DBSCAN before it does its work. The two parameters we need to specify are as such: What is the minimum number of data points needed to determine a single cluster? How far away can one point be from the next point within the same cluster? Referring back to the graphic, the epsilon is the radius given to test the distance between data points. If a point falls within the epsilon distance of another point, those two points will be in the same cluster. Furthermore, the minimum number of points needed is set to 4 in this scenario. When going through each data point, as long as DBSCAN finds 4 points within epsilon distance of each other, a cluster is formed. IMPORTANT: In order for a point to be considered a “core” point, it must contain the minimum number of points within epsilon distance. The point itself is included in the count. View the documentation and look at the min_samples parameter in particular. 
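The procedure just described fits in a short sketch of plain Python. This is a toy illustration of the algorithm (not the scikit-learn implementation, and the names are mine):

```python
import math

def dbscan(points, eps, min_pts):
    """Toy DBSCAN: returns a cluster id per point, or -1 for noise."""
    def neighbors(i):
        # all points within eps of point i, including point i itself
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:        # not a core point: mark as noise for now
            labels[i] = -1
            continue
        cluster += 1                   # start a new cluster from this core point
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                   # iteratively expand the cluster
            j = seeds.pop()
            if labels[j] == -1:        # noise reachable from a core point
                labels[j] = cluster    # becomes a border point (not expanded)
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:     # j is itself a core point: keep growing
                seeds.extend(jn)
    return labels
```

On two tight groups of points plus one far-away point, this assigns each group its own cluster id and labels the stray point -1.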
You will also notice that the blue point in the graphic is not contained within any cluster. DBSCAN does NOT necessarily categorize every data point, and is therefore terrific at handling outliers in the dataset. Let's examine the graphic below:

The left image depicts a more traditional clustering method, such as K-means, that does not account for multi-dimensionality, whereas the right image shows how DBSCAN can contort the data into different shapes and dimensions in order to find similar clusters. We also notice in the right image that the points along the outer edge of the dataset are not classified, suggesting they are outliers amongst the data.

Advantages of DBSCAN:
- Is great at separating clusters of high density versus clusters of low density within a given dataset.
- Is great at handling outliers within the dataset.

Disadvantages of DBSCAN:
- While DBSCAN is great at separating high density clusters from low density clusters, it struggles with clusters of similar density.
- Struggles with high dimensionality data. I know, this entire article I have stated how DBSCAN is great at contorting the data into different dimensions and shapes. However, DBSCAN can only go so far; if given data with too many dimensions, DBSCAN suffers.

Below I have included how to implement DBSCAN in Python, after which I explain the metrics for evaluating your DBSCAN model.

DBSCAN Implementation in Python

1. Assigning the data as our X values

# setting up data to cluster
X = data

# scale and standardize data
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(X)

2. Instantiating our DBSCAN model. In the code below, eps = 3 and min_samples is the minimum number of points needed to constitute a cluster.

# instantiating DBSCAN
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps=3, min_samples=4)

# fitting model
model = dbscan.fit(X)

3. Storing the labels formed by DBSCAN

labels = model.labels_

4.
Identifying which points make up our "core points"

import numpy as np
from sklearn import metrics

# identify core samples
core_samples = np.zeros_like(labels, dtype=bool)
core_samples[dbscan.core_sample_indices_] = True
print(core_samples)

5. Calculating the number of clusters

# declare the number of clusters
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)

6. Computing the Silhouette Score

print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))

Metrics for Measuring DBSCAN's Performance:

Silhouette Score: The silhouette score is calculated utilizing the mean intra-cluster distance between points AND the mean nearest-cluster distance. For instance, a cluster with a lot of data points very close to each other (high density) AND far away from the next nearest cluster (suggesting the cluster is very unique in comparison to the next closest) will have a strong silhouette score. A silhouette score ranges from -1 to 1, with -1 being the worst score possible and 1 being the best score. Silhouette scores of 0 suggest overlapping clusters.

Inertia: Inertia measures the internal cluster sum of squares (sum of squares is the sum of all residuals). Inertia is utilized to measure how related clusters are amongst themselves; the lower the inertia score the better. HOWEVER, it is important to note that inertia heavily relies on the assumption that the clusters are convex (of spherical shape). DBSCAN does not necessarily divide data into spherical clusters, therefore inertia is not a good metric to use for evaluating DBSCAN models (which is why I did not include inertia in the code above). Inertia is more often used in other clustering methods, such as K-means clustering.
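The numbered steps above can be run end-to-end on a tiny synthetic dataset. This assumes scikit-learn and NumPy are installed; the data here is made up for illustration (two dense groups plus one outlier):

```python
# Consolidated, runnable version of the numbered steps above.
import numpy as np
from sklearn import metrics
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# two dense groups of four points each, plus one far-away outlier
data = np.array([[0, 0], [0, 0.1], [0.1, 0], [0.1, 0.1],
                 [5, 5], [5, 5.1], [5.1, 5], [5.1, 5.1],
                 [30, 30]])

# scale and standardize the data
X = StandardScaler().fit_transform(data)

# instantiate and fit DBSCAN
dbscan = DBSCAN(eps=0.3, min_samples=4)
model = dbscan.fit(X)
labels = model.labels_

# number of clusters, ignoring the noise label -1
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters:", n_clusters)  # should find the two dense groups
print("labels:", labels)        # the outlier should be labeled -1
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))
```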
This guide should allow you to learn how to create a new port or simply fix a port that you need. There are three target demographics listed below:

- binary packages user with pkgin or pkg_add (you should be confident here)
- build from source, use options (you will know this after reading the guide)
- port developers (you should be able to get started here)

## pkgsrc tree

You should have a copy of the pkgsrc tree sitting somewhere on your disk, already bootstrapped. The tree contains a `Makefile`, a `README`, distfiles, packages, category directories containing the ports, the bootstrap directory and some documentation.

The `mk/*` directory contains the pkgsrc framework Makefiles but also shell and Awk scripts.

`pkglocate` is a script to find port names in the tree, though `pkgtools/pkgfind` is much faster.

## use the right tools

If you want to get started working on ports like creating new ones or simply fixing ones you need, you should know about these tools:

- install package developer utilities:

        pkgin -y in pkg_developer

  It contains very useful programs like:

  - checkperms: verify file permissions
  - createbuildlink: create buildlink3.mk files, which I'll explain later
  - digest: create hashes for messages with crypto algorithms such as sha512 and many others
  - lintpkgsrc: checks the whole pkgsrc tree, listing all explicitly broken packages for example
  - pkg_chk: checks package versions and updates if necessary
  - pkg_tarup: create archives of installed programs for later use on other machines or backups
  - pkgdiff: show diffs of patched files
  - pkglint: verify the port you're creating for common mistakes (very useful!)
  - revbump: bump the package revision by increasing PKGREVISION
  - url2pkg: create a blank port from the software download link; it saves you some time by filling out a few basic Makefile settings
  - verifypc: sanity check for pkg-config in ports


## port contents

A pkgsrc port should at least contain:

- `Makefile` : a comment, developer info, the software download site
  and lots of other possibilities
- `DESCR` : a paragraph containing the description of the software
  the port is packaging
- `PLIST` : the list of files to install; pkgsrc will only install
  the files listed here to your prefix
- `distinfo` : hashes of the software archive and of the patches or
  files in the port


Here's how they look for a small port I submitted not long ago to
pkgsrc-wip.

Makefile:

[[!format make """
# [[!paste id=rcsid1]][[!paste id=rcsid2]]

PKGNAME=	osxinfo-0.1
CATEGORIES=	misc
GHCOMMIT=	de74b8960f27844f7b264697d124411f81a1eab6
DISTNAME=	${GHCOMMIT}
MASTER_SITES=

MAINTAINER=	youri.mout@gmail.com
HOMEPAGE=
COMMENT=	Small Mac OS X Info Program
LICENSE=	isc

ONLY_FOR_PLATFORM=	Darwin-*-*

DIST_SUBDIR=	osxinfo
WRKSRC=		${WRKDIR}/osxinfo-${GHCOMMIT}

.include "../../databases/sqlite3/buildlink3.mk"
.include "../../mk/bsd.pkg.mk"
"""]]

DESCR:

	Small and fast Mac OS X info program written in C.


PLIST:

	@comment [[!paste id=rcsid1]][[!paste id=rcsid2]]
	bin/osxinfo

distinfo:

	[[!paste id=rcsid1]][[!paste id=rcsid2]]

	Size (osxinfo/de74b8960f27844f7b264697d124411f81a1eab6.tar.gz) = 5981 bytes


## make

Now you know what kind of files you can see when you're in a port
directory.
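Nothing stops you from laying these four files out by hand before filling them in. A throwaway sketch, done in a temporary directory rather than a real pkgsrc tree, with placeholder contents:

```shell
# Lay out the bare skeleton of a port by hand. This runs in a temporary
# directory; a real port lives under a category in the pkgsrc tree, and
# the file contents here are placeholders.
port=$(mktemp -d)/osxinfo
mkdir -p "$port/patches"
cat > "$port/Makefile" <<'EOF'
PKGNAME=	osxinfo-0.1
CATEGORIES=	misc
EOF
echo 'Small and fast Mac OS X info program written in C.' > "$port/DESCR"
printf 'bin/osxinfo\n' > "$port/PLIST"
: > "$port/distinfo"    # 'make mdi' fills this in later
ls "$port"
```

In practice `url2pkg`, shown later, generates most of this for you.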
The command used to compile it is the NetBSD `make`, although it is
often called `bmake` on non-NetBSD systems to avoid Makefile errors.
Typing `make` alone will only compile the program, but you can also
pass other targets to make, such as extract, patch, configure,
install, package, ...

I'll try to list them and explain them in logical order. You can run them together.

- `make clean` will remove the source files from the work directory
  so you can restart with new options, new patches, ...
- `make fetch` will simply fetch the distfile and check whether its
  hash corresponds. It will throw an error if it doesn't.
- `make distinfo` or `make mdi` will update the file hashes in the
  `distinfo` file mentioned above, for example when you have a new
  distfile or patch
- `make extract` extracts the program source files from their archive
  into the work directory
- `make patch` applies the local pkgsrc patches to the source
- `make configure` runs the GNU configure script
- `make`, `make build` or `make all` will stop after the program
  is compiled
- `make stage-install` will install into the port destdir, where
  pkgsrc first installs the program files to check that they correspond
  with the `PLIST` contents before installing to your prefix.
  For `wget`, if you have a default WRKOBJDIR (I'll explain later), the
  program files will first be installed in
  `<path>/pkgsrc/net/wget/work/.destdir`, then, after a few checks,
  in your actual prefix such as `/usr/pkg`
- `make test` runs the package's tests, if it has any
- `make package` creates a package without installing it; it will
  install dependencies though
- `make replace` upgrades or reinstalls the port if it is already installed
- `make deinstall` deinstalls the program
- `make install` installs from the aforementioned `work/.destdir`
  to your prefix
- `make bin-install` installs a package for the port, locally if
  previously built or remotely, as defined by BINPKG_SITES in
  `mk.conf`. You can make a port install dependencies from packages
  rather than building them with DEPENDS_TARGET= bin-install
  in `mk.conf`
- `make show-depends` shows the port's dependencies
- `make show-options` shows the port's options, as defined by `options.mk`
- `make clean-depends` cleans all port dependencies
- `make distclean` removes the source archive
- `make package-clean` removes the package
- `make print-PLIST` generates a `PLIST` file from the files found
  in `work/.destdir`

You should be aware that there are many make variables that go along
with these targets, like

- `PKG_DEBUG_LEVEL`
- `CHECK_FILES`
- and many others described in the NetBSD pkgsrc guide


## pkgsrc configuration

The framework uses an `mk.conf` file, usually found in /etc.
Here's how mine looks:

[[!format make """
# Tue Oct 15 21:21:46 CEST 2013

.ifdef BSD_PKG_MK # begin pkgsrc settings

DISTDIR=	/pkgsrc/distfiles
PACKAGES=	/pkgsrc/packages
WRKOBJDIR=	/pkgsrc/work
ABI=		64
PKGSRC_COMPILER=	clang
CC=		clang
CXX=		clang++
CPP=		${CC} -E

PKG_DBDIR=	/var/db/pkg
LOCALBASE=	/usr/pkg
VARBASE=	/var
PKG_TOOLS_BIN=	/usr/pkg/sbin
PKGINFODIR=	info
PKGMANDIR=	man
BINPKG_SITES=
DEPENDS_TARGET=	bin-install
X11_TYPE=	modular
TOOLS_PLATFORM.awk?=	/usr/pkg/bin/nawk
TOOLS_PLATFORM.sed?=	/usr/pkg/bin/nbsed
ALLOW_VULNERABLE_PACKAGES=	yes
MAKE_JOBS=	8
SKIP_LICENSE_CHECK=	yes
PKG_DEVELOPER=	yes
SIGN_PACKAGES=	gpg
PKG_DEFAULT_OPTIONS+=	-pulseaudio -x264 -imlib2-amd64 -dconf
.endif # end pkgsrc settings
"""]]

- I use `DISTDIR`, `PACKAGES` and `WRKOBJDIR` to move distfiles,
  packages and source files somewhere else, to keep my pkgsrc tree
  clean
- `PKGSRC_COMPILER`, `CC`, `CXX`, `CPP` and `ABI` are my compiler
  options. I'm using clang to create 64-bit binaries here
- `PKG_DBDIR`, `VARBASE`, `LOCALBASE` and `PKG_TOOLS_BIN` are my
  prefix, package database path and package tools settings
- `PKGINFODIR` and `PKGMANDIR` are the info and man directories
- `BINPKG_SITES` is the remote site to get packages from with the
  `bin-install` make target
- `DEPENDS_TARGET` is the way port dependencies should be installed.
  `bin-install` will simply install a package instead of building
  the port
- `X11_TYPE` should be `native` or `modular`, the latter meaning we
  want the X11 libraries from pkgsrc instead of the `native` ones,
  usually in `/usr/X11R7` on Linux or BSD systems and `/opt/X11`
  on Mac OS X with XQuartz
- `TOOLS_PLATFORM.*` points to specific programs used by pkgsrc;
  here I use the ones generated by the pkgsrc bootstrap for
  maximum compatibility
- `ALLOW_VULNERABLE_PACKAGES` lets you allow or disallow the
  installation of vulnerable packages, which matters in critical
  environments like servers
- `MAKE_JOBS` is the number of concurrent make jobs. I set it to 8
  but it breaks some ports
- `SKIP_LICENSE_CHECK` will skip the license check. If disabled you
  will have to define a list of licenses you find acceptable with
  `ACCEPTABLE_LICENSES`
- `PKG_DEVELOPER` will show more details during the port build
- `SIGN_PACKAGES` allows you to `gpg`-sign packages. More info in
  my [blog post]() about it
- `PKG_DEFAULT_OPTIONS` allows you to enable or disable specific
  options for all ports (as defined in the ports' options.mk files).
  I disabled a few options so fewer ports would break: pulseaudio
  doesn't build on Mac OS X, for example, and neither do x264 and dconf

Keep in mind that there are many other available options.


## creating a simple port

Let's create a little port using the tools we've talked about above.
I will use a little window manager called 2bwm.

- We need a URL for the program's source archive. It can be a
  direct link to a tar or xz archive.
  Mine's ``

- Now that we have a proper link for our program source, create a
  directory for your port:

      $ mkdir ~/pkgsrc/wm/2bwm

- Use `url2pkg` to create the needed files automatically:

      $ url2pkg

  You'll be presented with a text editor like `vim` to enter basic
  Makefile options:

  - `DISTNAME`, `CATEGORIES` and `MASTER_SITES` should be set automatically
  - enter your mail address for `MAINTAINER` so users know whom to
    contact if the port is broken
  - make sure `HOMEPAGE` is set right; for 2bwm it is a GitHub page
  - write a `COMMENT`; it should be a one-line description of the program
  - find out which license the program uses; in my case it is the
    `isc` license. You can find a list of licenses in `pkgsrc/mk/licenses.mk`.
  - you will see `.include "../../mk/bsd.pkg.mk"` at the end
    of the Makefile, and above this line go the dependencies the port
    needs to build. We'll leave that empty for the moment and
    try to figure out what 2bwm needs
  - exit vim and it should fetch the distfile and update the file
    hashes for you. If it says `permission denied` you can just run
    `make mdi` to fetch and update the `distinfo` file

So now you have valid `Makefile` and `distinfo` files, but you still
need to write a paragraph in `DESCR`. You can usually find inspiration
on the program's homepage.
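The `distinfo` entries that `url2pkg` and `make mdi` write are ordinary digests plus a byte count, so you can always recompute one by hand. A sketch with a stand-in file (the real distfile URL is elided above, so this only demonstrates the format):

```shell
# Recompute distinfo-style lines by hand. The archive here is a stand-in
# written locally; a real port would hash the fetched distfile instead.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/2bwm-0.1.tar.gz"
sha1=$(sha1sum "$dir/2bwm-0.1.tar.gz" | awk '{print $1}')
size=$(( $(wc -c < "$dir/2bwm-0.1.tar.gz") ))
printf 'SHA1 (2bwm-0.1.tar.gz) = %s\n' "$sha1"
printf 'Size (2bwm-0.1.tar.gz) = %s bytes\n' "$size"
```

In pkgsrc itself the `digest` tool from pkg_developer computes these hashes.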

Here's how they look at the moment:

Makefile:
[[!format make """
# [[!paste id=rcsid1]][[!paste id=rcsid2]]

DISTNAME=	2bwm-0.1
CATEGORIES=	wm
MASTER_SITES=

MAINTAINER=	yrmt@users.sourceforge.net
HOMEPAGE=
COMMENT=	Fast floating WM written over the XCB library and derived from mcwm
LICENSE=	isc

.include "../../mk/bsd.pkg.mk"
"""]]

distinfo:

	[[!paste id=rcsid1]][[!paste id=rcsid2]]

	SHA1 (2bwm-0.1.tar.gz) = e83c862dc1d9aa198aae472eeca274e5d98df0ad
	RMD160 (2bwm-0.1.tar.gz) = d9a93a7d7ae7183f5921f9ad76abeb1401184ef9
	Size (2bwm-0.1.tar.gz) = 38419 bytes

DESCR:

	A fast floating WM, with the particularity of having 2 borders,
	written over the XCB library and derived from mcwm written by
	Michael Cardell. In 2bwm everything is accessible from the
	keyboard, but a pointing device can be used for move, resize
	and raise/lower.

But our PLIST file is still empty.


#### build stage

Let's try to build the port to see if things work, but as soon as
the build stage starts, we get this error:

> 2bwm.c:26:10: fatal error: 'xcb/randr.h' file not found

Let's find out which port provides this file!

    $ pkgin se xcb

returns these possible packages:

	xcb-util-wm-0.3.9nb1         Client and window-manager helpers for ICCCM and EWMH
	xcb-util-renderutil-0.3.8nb1 Convenience functions for the Render extension
	xcb-util-keysyms-0.3.9nb1    XCB Utilities
	xcb-util-image-0.3.9nb1      XCB port of Xlib's XImage and XShmImage
	xcb-util-0.3.9nb1 =          XCB Utilities
	xcb-proto-1.9 =              XCB protocol descriptions (in XML)
	xcb-2.4nb1                   Extensible, multiple cut buffers for X

Inspecting package contents allowed me to find the right port:

    $ pkgin pc libxcb | grep randr.h

So we can add the libxcb `buildlink3.mk` file to the Makefile, above
the bsd.pkg.mk include:

    .include "../../x11/libxcb/buildlink3.mk"

This allows the port to link 2bwm against the libxcb port. Let's
try to build the port again!

    $ make clean
    $ make

This reports another error!

> 2bwm.c:27:10: fatal error: 'xcb/xcb_keysyms.h' file not found

It looks like this file is provided by xcb-util-keysyms, so let's add:

    .include "../../x11/xcb-util-keysyms/buildlink3.mk"

to our Makefile.

Clean, build again, and add more dependencies until the port passes
the build stage. Here's how my Makefile ends up looking:

[[!format make """
# [[!paste id=rcsid1]][[!paste id=rcsid2]]

DISTNAME=	2bwm-0.1
CATEGORIES=	wm
MASTER_SITES=

MAINTAINER=	yrmt@users.sourceforge.net
HOMEPAGE=
COMMENT=	Fast floating WM written over the XCB library and derived from mcwm
LICENSE=	isc

.include "../../x11/libxcb/buildlink3.mk"
.include "../../x11/xcb-util-wm/buildlink3.mk"
.include "../../x11/xcb-util-keysyms/buildlink3.mk"
.include "../../x11/xcb-util/buildlink3.mk"
.include "../../mk/bsd.pkg.mk"
"""]]


#### install phase

Great! We got our program to compile in pkgsrc.
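The clean/build/add-buildlink loop we just went through is mechanical enough to partly script. As an illustrative helper (not a pkgsrc tool), the missing header can be pulled straight out of the compiler error before searching package contents:

```shell
# Pull the missing header name out of a cc "fatal error" line so it can
# be fed to a package-content search (the pkgin call is left commented,
# since it needs an installed pkgsrc).
err="2bwm.c:27:10: fatal error: 'xcb/xcb_keysyms.h' file not found"
hdr=$(printf '%s\n' "$err" | sed -n "s/.*fatal error: '\([^']*\)' file not found.*/\1/p")
echo "$hdr"
# pkgin pc xcb-util-keysyms | grep "$hdr"   # confirm which port ships it
```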
Now we must generate the PLIST file so we can actually install the
program, but first we must run `make stage-install` to make sure
that it installs in the right place.

    $ find /pkgsrc/work/wm/2bwm/work/.destdir/

returns:

	/pkgsrc/work/wm/2bwm/work/.destdir/
	/pkgsrc/work/wm/2bwm/work/.destdir//usr
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/bin
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/bin/2bwm
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/bin/hidden
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man/man1
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man/man1/2bwm.1
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man/man1/hidden.1
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg

This doesn't look right, since our `LOCALBASE` is `/usr/pkg`.

    $ make print-PLIST

returns nothing, because 2bwm installs its files in the wrong place,
so we need to fix 2bwm's own Makefile to use the `DESTDIR` and
`PREFIX` that pkgsrc sets to the right place. Let's inspect how
2bwm installs.

From 2bwm's Makefile:

[[!format make """
install: $(TARGETS)
	test -d $(DESTDIR)$(PREFIX)/bin || mkdir -p $(DESTDIR)$(PREFIX)/bin
	install -pm 755 2bwm $(DESTDIR)$(PREFIX)/bin
	install -pm 755 hidden $(DESTDIR)$(PREFIX)/bin
	test -d $(DESTDIR)$(MANPREFIX)/man1 || mkdir -p $(DESTDIR)$(MANPREFIX)/man1
	install -pm 644 2bwm.man $(DESTDIR)$(MANPREFIX)/man1/2bwm.1
	install -pm 644 hidden.man $(DESTDIR)$(MANPREFIX)/man1/hidden.1
"""]]

This looks fine, since it installs into a `DESTDIR`/`PREFIX`, but it sets

> PREFIX=/usr/local

and

> MANPREFIX=$(PREFIX)/share/man

at the beginning of the Makefile.
We should remove the first line and edit the man prefix:

> MANPREFIX=${PKGMANDIR}

so pkgsrc can install the program's files in the right place. We
have two ways of modifying this file: either patch the Makefile, or
use `sed` substitution, a builtin pkgsrc feature that allows you to
change lines in files with a sed command before building the port.

I will show both ways, so you get an introduction to generating
patch files for pkgsrc.

#### patching the Makefile

- edit the file you need to modify with `pkgvi`:

      $ pkgvi /pkgsrc/work/wm/2bwm/work/2bwm-0.1/Makefile

  which should return:

  > pkgvi: File was modified. For a diff, type:
  > pkgdiff "/Volumes/Backup/pkgsrc/work/wm/2bwm/work/2bwm-0.1/Makefile"

  and this returns our diff.

- create the patch with `mkpatches`. It should create a `patches`
  directory in the port containing the patch; the leftover original
  files can be removed with `mkpatches -c`.

      $ find patches/*
      patches/patch-Makefile

- now that the patch has been created, we need to add its hash to
  distinfo, otherwise pkgsrc won't pick it up:

      $ make mdi

  You should get this new line:

  > SHA1 (patch-Makefile) = 9f8cd00a37edbd3e4f65915aa666ebd0f3c04e04

- you can now clean, `make patch` and `make stage-install
  CHECK_FILES=no`, since we still haven't generated a proper PLIST.
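Under the hood, `pkgvi` saves a pristine copy of the file before you edit it, and `pkgdiff`/`mkpatches` produce a unified diff against that copy. A rough stand-alone equivalent of the same workflow, with demo-only paths and file contents:

```shell
# Rough stand-alone equivalent of the pkgvi/pkgdiff/mkpatches workflow:
# keep a pristine .orig copy, apply the edit, then emit the unified diff
# that would become patches/patch-Makefile. All paths are demo-only.
demo=$(mktemp -d)
printf 'PREFIX=/usr/local\nMANPREFIX=$(PREFIX)/share/man\n' > "$demo/Makefile.orig"
# Drop the PREFIX line and point the man prefix at ${PKGMANDIR}:
sed -e '/^PREFIX=/d' -e 's,$(PREFIX)/share/man,${PKGMANDIR},' \
    "$demo/Makefile.orig" > "$demo/Makefile"
diff -u "$demo/Makefile.orig" "$demo/Makefile" > "$demo/patch-Makefile" || true
grep '^[+-]MANPREFIX' "$demo/patch-Makefile"
```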

Let's see whether the 2bwm files were installed in the right place
this time:

    $ find /pkgsrc/work/wm/2bwm/work/.destdir/

	/pkgsrc/work/wm/2bwm/work/.destdir/
	/pkgsrc/work/wm/2bwm/work/.destdir//usr
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg/bin
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg/bin/2bwm
	/pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg/bin/hidden

It looks like it is alright! Let's generate the PLIST:

    $ make print-PLIST > PLIST

containing:

	@comment [[!paste id=rcsid1]][[!paste id=rcsid2]]
	bin/2bwm
	bin/hidden

There you have a working port you can install normally with

    $ make install


#### using the sed substitution framework

You should be able to fix the prefix error much quicker than with
the patching explained above, thanks to the sed substitution
framework. Here's how it looks in my port Makefile:

[[!format make """
SUBST_CLASSES+=		makefile
SUBST_STAGE.makefile=	pre-build
SUBST_MESSAGE.makefile=	Fixing makefile
SUBST_FILES.makefile=	Makefile
SUBST_SED.makefile=	-e 's,/usr/local,${PREFIX},g'
SUBST_SED.makefile+=	-e 's,share/man,${PKGMANDIR},g'
"""]]

As you can see, you can run multiple commands on multiple files; it
is very useful for small fixes like this.


#### pkglint

Now that we have a working port, we must make sure it complies with
the pkgsrc rules.

    $ pkglint

returns:

	ERROR: DESCR:4: File must end with a newline.
	ERROR: patches/patch-Makefile:3: Comment expected.
	2 errors and 0 warnings found. (Use -e for more details.)

Fix the things pkglint tells you about until you get the glorious:

> looks fine.

Then you should do some testing of the program itself on at least
two platforms, such as NetBSD and Mac OS X.
Other platforms supported by pkgsrc can be found at [pkgsrc.org]().
If you would like to submit your port upstream you can either
subscribe to pkgsrc-wip or ask a NetBSD developer to add it for you.

You can find the 2bwm port I submitted in
[pkgsrc-wip]().


## pkgsrc and wip

If you want to submit your port for others to use you can either
subscribe to pkgsrc-wip or ask a NetBSD developer to add it for you.

pkgsrc-wip is hosted on
[sourceforge]() and you
can easily get cvs access to it if you create an account there
and send an email to NetBSD developer `@wiz` (Thomas Klausner)
asking nicely for commit access.


## the options framework

You can create port options with the `options.mk` file, as in `wm/dwm`:

[[!format make """
# [[!paste id=rcsid1]][[!paste id=rcsid2]]

PKG_OPTIONS_VAR=	PKG_OPTIONS.dwm
PKG_SUPPORTED_OPTIONS=	xinerama
PKG_SUGGESTED_OPTIONS=	xinerama

.include "../../mk/bsd.options.mk"

#
# Xinerama support
#
# If we don't want the Xinerama support we delete the XINERAMALIBS and
# XINERAMAFLAGS lines, otherwise the Xinerama support is the default.
#
.if !empty(PKG_OPTIONS:Mxinerama)
.  include "../../x11/libXinerama/buildlink3.mk"
.else
SUBST_CLASSES+=		options
SUBST_STAGE.options=	pre-build
SUBST_MESSAGE.options=	Toggle the Xinerama support
SUBST_FILES.options=	config.mk
SUBST_SED.options+=	-e '/^XINERAMA/d'
.  include "../../x11/libX11/buildlink3.mk"
.endif
"""]]

This file should be included in the Makefile:

    .include "options.mk"

If you type `make show-options`, you should see this:

	Any of the following general options may be selected:
		xinerama	 Enable Xinerama support.

	These options are enabled by default:
		xinerama

	These options are currently enabled:
		xinerama

	You can select which build options to use by setting PKG_DEFAULT_OPTIONS
	or PKG_OPTIONS.dwm.

Running `make PKG_OPTIONS=""` should build without the `xinerama`
option that dwm enables by default.

The options.mk file must contain these variables:

- `PKG_OPTIONS_VAR` sets the options variable name
- `PKG_SUPPORTED_OPTIONS` lists all available options
- `PKG_SUGGESTED_OPTIONS` lists the options enabled by default

It allows you to change configure arguments, include other
buildlinks, and adjust various other settings.


## hosting a package repo

Now that you've created a few ports, you might want to make
precompiled packages available for testing. You will need pkgsrc's
`pkg_install` on the host system. I host my [packages]()
on a FreeBSD server with a bootstrapped pkgsrc.

I use this shell function to publish a package:

[[!format sh """
add () {
    # upload the package to the remote server
    scp $1 youri@saveosx.org:/usr/local/www/saveosx/packages/Darwin/2013Q4/x86_64/All/ 2> /dev/null

    # update the package summary
    ssh youri@saveosx.org 'cd /usr/local/www/saveosx/packages/Darwin/2013Q4/x86_64/All/;
    rm pkg_summary.gz;
    /usr/pkg/sbin/pkg_info -X *.tgz | gzip -9 > pkg_summary.gz'

    # pkgin update
    sudo pkgin update
}
"""]]

It does three things:

- upload a package
- update the package summary, an archive containing information
  about all present packages, which will be picked up by
  pkg_install and pkgin.
  It looks like this for one package:

	PKGNAME=osxinfo-0.1
	DEPENDS=sqlite3>=3.7.16.2nb1
	COMMENT=Small Mac OS X Info Program
	SIZE_PKG=23952
	BUILD_DATE=2014-06-29 12:45:08 +0200
	CATEGORIES=misc
	HOMEPAGE=
	LICENSE=isc
	MACHINE_ARCH=x86_64
	OPSYS=Darwin
	OS_VERSION=14.0.0
	PKGPATH=wip/osxinfo
	PKGTOOLS_VERSION=20091115
	REQUIRES=/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
	REQUIRES=/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
	REQUIRES=/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
	REQUIRES=/usr/lib/libSystem.B.dylib
	REQUIRES=/usr/pkg/lib/libsqlite3.0.dylib
	FILE_NAME=osxinfo-0.1.tgz
	FILE_SIZE=9710
	DESCRIPTION=Small and fast Mac OS X info program written in C.
	DESCRIPTION=
	DESCRIPTION=Homepage:
	DESCRIPTION=

- update pkgin


I also use this shell alias to upload all my built packages, but I
still need to run the `add()` function mentioned above to update the
pkg_summary:

[[!format sh """
up='rsync -avhz --progress /pkgsrc/packages/ youri@saveosx.org:/usr/local/www/saveosx/packages/Darwin/2013Q4/x86_64/'
"""]]

Then you should be able to set the URL in repositories.conf to use
your packages with pkgin. You can also install them directly with
something like `pkg_add`, of course.


## build all packages

See jperkin's excellent blog
[posts]()
about this.


## faq

#### what if the port I'm making is a dependency for another one?

You should just generate the buildlink3.mk file we've talked about
earlier, like this:

    $ createbuildlink > buildlink3.mk

#### what if the program is only hosted on GitHub?

pkgsrc supports fetching archives from specific git commits on
GitHub, like this:

[[!format make """
PKGNAME=	2bwm-0.1
CATEGORIES=	wm
GHCOMMIT=	52a097ca644eb571b22a135951c945fcca57a25c
DISTNAME=	${GHCOMMIT}
MASTER_SITES=
DIST_SUBDIR=	2bwm
WRKSRC=		${WRKDIR}/2bwm-${GHCOMMIT}
"""]]

You can then easily update the git commit and the distinfo to update
the program.

#### what if the program doesn't have a Makefile?

You can do all Makefile operations directly from the port's
Makefile, like this:

[[!format make """
post-extract:
	${CHMOD} a-x ${WRKSRC}/elementary/apps/48/internet-mail.svg

do-install:
	${INSTALL_DATA_DIR} ${DESTDIR}${PREFIX}/share/icons
	cd ${WRKSRC} && pax -rw -pe . ${DESTDIR}${PREFIX}/share/icons/
"""]]

for installing, but you can also build programs from the port's
Makefile. This is what qt4-sqlite3 uses:

[[!format make """
do-build:
	cd ${WRKSRC}/src/tools/bootstrap && env ${MAKE_ENV} ${GMAKE}
	cd ${WRKSRC}/src/tools/moc && env ${MAKE_ENV} ${GMAKE}
	cd ${WRKSRC}/src/plugins/sqldrivers/sqlite && env ${MAKE_ENV} ${GMAKE}
"""]]

You can install the following types of files:

- `INSTALL_PROGRAM_DIR` : directories that contain binaries
- `INSTALL_SCRIPT_DIR` : directories that contain scripts
- `INSTALL_LIB_DIR` : directories that contain shared and static libraries
- `INSTALL_DATA_DIR` : directories that contain data files
- `INSTALL_MAN_DIR` : directories that contain man pages
- `INSTALL_PROGRAM` : binaries that can be stripped of debugging symbols
- `INSTALL_SCRIPT` : binaries that cannot be stripped
- `INSTALL_GAME` : game binaries
- `INSTALL_LIB` : shared and static libraries
- `INSTALL_DATA` : data files
- `INSTALL_GAME_DATA` : data files for games
- `INSTALL_MAN` : man pages

`INSTALLATION_DIRS` : a list of directories relative to PREFIX that
are created by pkgsrc at the beginning of the install phase. The
package is supposed to create all the directories it needs itself
before installing files to them, and list all other directories here.

#### common errors

- > Makefile:19: *** missing separator. Stop.

  This means you're not using the right `make`. On most systems, the
  make installed by the pkgsrc bootstrap is called `bmake`.

- If you have a feeling a port is stuck in the build stage, disable
  make jobs in your mk.conf.


[[!cut id=rcsid1 text="$Net"]]
[[!cut id=rcsid2 text="BSD$"]]
[[!meta title="An introduction to packaging"]]
[[!meta author="Youri Mouton"]]
provisional patent application Ser. No. 61/235,216 filed on Aug. 19, 2009, entitled “SYSTEMS AND METHODS FOR USING NEGOTIABLE INSTRUMENTS WITH PUBLISH AND SUBSCRIBE”. The entire contents of the aforementioned applications are herein expressly incorporated by reference. The present invention is directed generally to apparatuses, methods, and systems of processing and exchanging financial deposit information, and more particularly, to APPARATUSES, METHODS AND SYSTEMS FOR A PUBLISHING AND SUBSCRIBING PLATFORM OF DEPOSITING NEGOTIABLE INSTRUMENTS. Negotiable instruments such as checks, money orders, banknotes etc., have been widely used to make payments and purchases. For instance, a payor may tender a negotiable instrument to a payee to satisfy the payor's obligation to the payee. For example, an employer may provide salary paychecks to an employee (e.g., payee) in satisfaction of obligations owed for the employee's work. In order to obtain the payment amount, the payee may need to deposit the check in an account at the payee's bank, and have the bank process the check. In some cases, the payee may take the paper check to a branch of the payee's bank, and cash the check at the bank counter. Once the check is approved and all appropriate accounts involved have been credited, the check may be stamped with a cancellation mark by a bank clerk, such as a “paid” stamp. The payor's bank and payee's bank may then keep a record of the deposit information associated with the deposited negotiable instrument. The APPARATUSES, METHODS AND SYSTEMS FOR A PUBLISHING AND SUBSCRIBING PLATFORM OF DEPOSITING NEGOTIABLE INSTRUMENTS (hereinafter “PS-PLATFORM”) provides a negotiable instrument data publish and subscribe framework, whereby financial institutions may exchange negotiable instrument deposit data and/or validation information within the publish and subscribe framework. 
In one embodiment, the PS-PLATFORM may register a financial institution as a subscriber and provide financial transaction information to the financial institution based on the subscription. In one implementation, a method is disclosed, comprising: receiving a negotiable instrument deposit information subscription request and a subscription rule from a financial institution; registering the financial institution as a negotiable instrument deposit information subscriber based on the received subscription rule; and in response to receiving information with regard to a proposed deposit of a negotiable instrument, sending a status notification of the negotiable instrument to the registered financial institution. This disclosure details a negotiable instrument data publish and subscribe framework, whereby financial institutions may exchange negotiable instrument deposit data and/or validation information within the publish and subscribe framework. PS-PLATFORMs may, in one embodiment, implement a remote deposit application on a secured network system, whereby the remote deposit application may obtain, process and store images of a negotiable instrument, and generate virtualized negotiable instruments for deposit. In one embodiment, the PS-PLATFORM may register a financial institution as a subscriber and provide deposit information of a negotiable instrument to the financial institution based on the subscription characteristics. For example, in one embodiment, a user (e.g., the payee) who wants to deposit a check, may capture an image of the check by a user image capture device, e.g., a scanner connected to a computer, a mobile device having a built-in camera, a digital camera, and/or the like. In another implementation, the user may take a video clip of the check and submit the video file. In one embodiment, the user may send the captured check image to a financial institution, e.g., a payee's bank with PS-PLATFORM service. 
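The subscribe-and-notify flow described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the class, method names, and rule representation are hypothetical.

```python
# Hypothetical sketch of the PS-PLATFORM subscribe/notify flow:
# institutions register a subscription rule, and a proposed deposit
# triggers a status notification to every subscriber whose rule matches.
class PSPlatform:
    def __init__(self):
        # institution id -> subscription rule (a predicate over deposit data)
        self.subscribers = {}

    def register_subscriber(self, institution_id, rule):
        """Register a financial institution with its subscription rule."""
        self.subscribers[institution_id] = rule

    def on_deposit(self, deposit):
        """On a proposed deposit, collect notifications for matching subscribers."""
        notices = []
        for institution_id, rule in self.subscribers.items():
            if rule(deposit):
                notices.append((institution_id,
                                {"check": deposit["check_number"],
                                 "status": "presented"}))
        return notices

platform = PSPlatform()
# Example rule: notify bank_a about deposits of $500 or more.
platform.register_subscriber("bank_a", lambda d: d["amount"] >= 500)
notices = platform.on_deposit({"check_number": "0042", "amount": 1000.00})
```

In practice the notification would be pushed over a network channel; returning the list here simply keeps the sketch self-contained.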
The PS-PLATFORM server receiving the check image may then process the check image and extract deposit data from the digital check image. For example, in one implementation, the PS-PLATFORM may perform a series of image analysis procedures to enhance the received check image and extract deposit information such as payee's name, payee's bank, account number, bank routing number, deposit amount, and/or the like. In one implementation, after initial verification of the extracted deposit data, the PS-PLATFORM may post the deposit through direct banking and save the check image and the associated check deposit information in a transaction depository. In one implementation, the PS-PLATFORM may generate a substitute check (e.g., an X9.37 cash letter file) based on the received check image and send it to a clearinghouse bank (e.g., a regional branch of the Federal Reserve) for check clearance. In one implementation, the PS-PLATFORM may confirm or cancel the soft posting of deposit funds based on the result of check clearance. It is to be understood that, depending on the particular needs and/or characteristics of a PS-PLATFORM application, associated IDE, associated operating system, user interface, object, administrator, server, hardware configuration, network framework, and/or the like, various embodiments of the PS-PLATFORM may be implemented that enable a great deal of flexibility and customization. The instant disclosure discusses embodiments of the PS-PLATFORM primarily within the context of remote deposit of "checks" from a payee to a bank. However, it is to be understood that the system described herein may be readily adopted for deposits of other types of negotiable instruments, such as a money order, a bank note, and/or the like, and configured/customized for a wide range of other applications or implementations.
It is to be understood that the PS-PLATFORM may be further adapted to other implementations or communications and/or data transmission applications, such as but not limited to a general entity-to-entity payment system. For example, in some embodiments, the PS-PLATFORM may allow a payee to apply the deposit, or a portion thereof, to the payment of one or more bills, such as a credit card payment, insurance bill payment, car payment, etc. As another example, in some embodiments, the PS-PLATFORM may allow users to apply the deposit (or a portion thereof) to a non-US Dollar denominated account. For example, in one implementation, a user may wish to apply a deposit of a $100 (USD) check into a Euro-denominated account. In one implementation, if the user selects a "USD to Euro" deposit option, the PS-PLATFORM may determine and notify the user of the exchange rate that will be used for the transaction (i.e., how much, in Euros, will be deposited into the user's account). In some embodiments, the PS-PLATFORM may prompt the user to approve the conversion, while in other embodiments, the conversion and deposit will occur automatically. In one embodiment, a negotiable instrument may include a type of contract that obligates one party to pay a specified sum of money to another party. A negotiable instrument, as used herein, is an unconditional writing that promises or orders payment of a fixed amount of money. For example, negotiable instruments may be money orders, cashier's checks, drafts, bills of exchange, promissory notes, and/or the like. In one implementation, the check may be presented from a first person to a second person to effect the transfer of money from the first person to the second person. It may also include a check that is presented from a company or business to a person. In either case, the check may be taken by the receiving party and deposited into an account at a financial institution of the receiving party.
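The cross-currency deposit step described above can be sketched as a quote-then-approve sequence. This is an illustrative sketch only; the function names and the exchange rate value are made up for the example.

```python
# Sketch of the "USD to Euro" deposit option: the platform quotes the
# converted amount at the current exchange rate, and the deposit proceeds
# only if the user approves (or automatically, in other embodiments).
def quote_conversion(amount_usd, usd_to_eur_rate):
    """Return the amount, in euros, that would be credited for a USD check."""
    return round(amount_usd * usd_to_eur_rate, 2)

def deposit_foreign(amount_usd, usd_to_eur_rate, approve):
    """Convert and credit the deposit only if the quoted amount is approved."""
    quoted_eur = quote_conversion(amount_usd, usd_to_eur_rate)
    if approve(quoted_eur):
        return {"credited_eur": quoted_eur}
    return {"credited_eur": 0.0}

# A $100 (USD) check into a Euro-denominated account at an assumed 0.92 rate:
result = deposit_foreign(100.00, 0.92, approve=lambda eur: True)
```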
The receiving party may endorse the check and then present it for deposit. For example, in one implementation, a payee may carry the paper check to a local bank branch and deposit the check with a bank representative at the counter. In another implementation, the payee may carry the paper check to an automated teller machine (ATM) and deposit the check via the ATM's automatic deposit service. In another implementation, the payee may employ a remote deposit capture scheme, e.g., by capturing an image of the check and sending the image to a remote deposit server for deposit. While "banks" and "checks" are referred to as examples, the scope of the invention encompasses financial institutions and financial instruments involved in a payment system. In one embodiment, the financial institutions 125 a-c may obtain deposit data related to the negotiable instrument, e.g., a check 118. For example, in one implementation, a financial institution may receive a deposit request from a user (also referred to as "payee" or "depositor" hereinafter) 105, such as an individual or entity who owns an account held at that financial institution. In one implementation, the financial institution may receive deposit data from the user by collecting deposit information over the counter at a local bank branch, at an ATM, and/or the like. In an alternative implementation, the user 105 may send an image of the check 118 together with user-specified deposit data to the financial institution server. In one implementation, the obtained data (which may be referred to herein as "check data") may be stored in storage maintained by or affiliated with the financial institution.
For example, check data obtained from a check received at financial institutions 125 a-c may be stored in check data storage 132, check data obtained from a check received at financial institution 140 may be stored in check data storage 142, and check data obtained from a check received at financial institution 150 may be stored in check data storage 152. The information of the check data may be arranged in any format. In one implementation, the information may be provided as a string of data in the format of field (label:value), such as account number (label:value), amount (label:value), name (label:value), etc. The check data may be stored as any type of electronic file in any type of file format, for example. The check data storage may be any type of memory or storage device and may comprise a database, for example. In one embodiment, a PS-PLATFORM server 120 may communicate with financial institutions 125 a-c. For example, in one implementation, the PS-PLATFORM server 120 may be housed at an intermediary financial service institution. For another example, the PS-PLATFORM server may be associated with one or more of the financial institutions 125 a-c. In one implementation, the PS-PLATFORM server 120 may comprise a computer system, such as a server, that comprises one or more processors and/or software modules that are capable of receiving and servicing requests from the financial institutions 125 a-c, accessing check data storage 122, 132, 142, and/or 152, and storing data in the check data storage 122. 
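The "field (label:value)" string format described above can be sketched with a pair of helpers. These helper names and the `;` separator are assumptions for illustration; the patent leaves the exact format open.

```python
# Sketch of check data arranged as a string of label:value pairs,
# as described for the check data storages. Separator choice is assumed.
def serialize_check_data(fields):
    """Render check data as a string of label:value pairs."""
    return ";".join(f"{label}:{value}" for label, value in fields.items())

def parse_check_data(data):
    """Parse the label:value string back into a dict."""
    return dict(pair.split(":", 1) for pair in data.split(";"))

record = serialize_check_data({"account number": "12345678",
                               "amount": "1000.00",
                               "name": "Joe Dow"})
fields = parse_check_data(record)
```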
An example computing environment in which the PS-PLATFORM server 120 may be embodied is described below. In one implementation, the PS-PLATFORM server 120 may receive a request for information from one of the financial institutions (e.g., financial institutions 125 a-c) regarding a check that has been presented for deposit at that financial institution and may determine whether the check has already been presented for deposit to any of the other financial institutions (e.g., financial institutions 140, 150). This may be performed by the PS-PLATFORM server 120 querying the other financial institutions or their associated check data storages. For example, in one implementation, a user 105 may present a check 118 for deposit to financial institutions 125 a-c. The financial institutions 125 a-c may obtain check data from the check 118 and store the check data in the check data storage 132. Additionally, the financial institutions 125 a-c may send the check data to the PS-PLATFORM server 120 in the form of a data file. In an alternative implementation, the PS-PLATFORM server 120 may receive an image of the check, obtain deposit information from the image, and store the obtained information for validation. In one implementation, responsive to receiving the check data from financial institutions 125 a-c, the PS-PLATFORM server 120 may access the check data storages 142 and 152, either directly or via a request to the financial institutions 140 and 150, respectively, to determine whether the check 118 has already been presented to the financial institution 140 or 150. Such a determination may be made by comparing identifying information (e.g., account number, check number, routing number, account name, etc.) of the check 118 to information stored in the check data storages 142, 152.
If there is a match of such identifying information with information stored in one of the check data storages 142, 152, then it may be determined that the check 118 has previously been presented for processing and the check being currently presented to the financial institutions 125 a-c is invalid. Moreover, in one implementation, the PS-PLATFORM server 120 may access the check data storage 132, either directly or via a request to the financial institutions 125 a-c to determine whether the check 118 has already been presented to the financial institutions 125 a-c. In one embodiment, if the check 118 has already been presented to the financial institutions 125 a-c, 140, or 150, the PS-PLATFORM 120 may advise the financial institutions 125 a-c and the financial institutions 125 a-c may deny the processing (e.g., deposit or cashing) of the check 118. In this way, real time validation may be provided to the financial institutions 125 a-c regarding whether the check 118 was deposited earlier elsewhere. In one implementation, the PS-PLATFORM server 120 may comprise check data storage 122 (e.g., a database) and may receive and store check data from the financial institutions 125 a-c for the checks the financial institutions 125 a-c receive for processing from users. Thus, when the PS-PLATFORM server 120 receives a request for information from one of the financial institutions regarding a check that has been presented for deposit at that financial institution, the PS-PLATFORM server 120 may query its own check data storage 122 to determine whether the check has already been presented for deposit to any of the other financial institutions. For example, the user 105 may present the check 118 for processing (e.g., deposit, cashing, etc.) to financial institutions 125 a-c. The financial institutions 125 a-c may obtain check data from the check 118 and send it to the PS-PLATFORM server 120 for storage in the check data storage 122. 
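The duplicate-presentment determination above can be sketched as a comparison of identifying fields against previously stored check data. The choice of key fields and the list-based storage layout are assumptions for illustration.

```python
# Sketch of the validation step: a presented check is invalid if its
# identifying information matches a record in any check data storage.
def check_key(check):
    """Identifying tuple used to match a check across presentments."""
    return (check["routing_number"], check["account_number"], check["check_number"])

def validate_presentment(check, *storages):
    """Return False (deny processing) if the check was already presented."""
    key = check_key(check)
    for storage in storages:
        if any(check_key(stored) == key for stored in storage):
            return False  # match found: check previously presented elsewhere
    return True

# Check 118 was already recorded in storage 142, so re-presentment is denied.
storage_142 = [{"routing_number": "111", "account_number": "222", "check_number": "0042"}]
storage_152 = []
presented = {"routing_number": "111", "account_number": "222", "check_number": "0042"}
valid = validate_presentment(presented, storage_142, storage_152)
```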
The PS-PLATFORM server 120 may access the check data storage 122 to determine whether the check 118 has already been presented to a financial institution for processing. If the check 118 has already been presented to a financial institution, the PS-PLATFORM server 120 may advise the financial institutions 125 a-c and the financial institutions 125 a-c may deny the deposit of the check 118. The PS-PLATFORM server 120 may thus act as a clearinghouse that stores the information provided by each of the financial institutions 125 a-c. In such an implementation, the check data storages of the financial institutions (e.g., the check data storages 132, 142, 152) may not be used to store the check data and/or may not be queried by the PS-PLATFORM server 120 to determine whether the check has already been presented for deposit to any of the other financial institutions. In this manner, the PS-PLATFORM server 120 does not request or access the check data storage of the financial institutions, such as check data storages 132, 142, 152. In one implementation, the PS-PLATFORM server 120 may generate and transmit a notification to the financial institution that the check 118 is written on (i.e., the paying financial institution) that the check 118 has been presented (e.g., re-presented) to a financial institution for deposit. In this manner, the financial institution receiving the notification may cancel the check 118 or otherwise flag the check 118 or data pertaining to the check 118, such as in storage (e.g., a database) that maintains a record and information pertaining to the check 118. This may allow the paying financial institution to avoid providing funds upon a representment of the check 118 by the user 105. Thus, after the check 118 has been presented once for deposit or cashing, the check 118 can no longer be presented, even if the check 118 is not physically altered or damaged ("franked") after the initial presentment.
In one implementation, the PS-PLATFORM server 120 may be considered to be channel agnostic, as it may provide real time check validation information to a financial institution regardless of the channel(s) through which the check is presented to the financial institution(s) for the initial processing and each subsequent presentment for processing. Channels may include, for example, remote deposit including check scanning and imaging, bank teller, ATM, and ACH (Automated Clearinghouse). In an alternative implementation, the PS-PLATFORM server 120 may provide check validation information to a financial institution in batch files, e.g., periodically, intermittently and/or continuously. In an alternative implementation, the PS-PLATFORM server 120 may provide validation data upon request from a financial institution. For example, in one implementation, the financial institution may send a request to the PS-PLATFORM server 120 at the time of check presentment to determine if the check has already been presented. Such a request may be generated via telephone calls, emails, instant messages, and/or the like. Information provided to the PS-PLATFORM server 120, checked by the PS-PLATFORM server 120, and/or provided by the PS-PLATFORM server 120 may include, but is not limited to, timestamp, MICR (magnetic ink character recognition) data, amount, check number, routing number, account number, signature line, endorsement, payee, and/or the like. In one embodiment, the user 105 may be a payee who may deposit a check into an account at the payee's bank 160 by converting the check into electronic data (e.g., digital check images, etc.) and sending the data to the bank via a communication network 113. In one implementation, a secured transport protocol, such as SSL and/or the like, may be employed for the communication between the user 105 and the PS-PLATFORM server 120.
In one embodiment, the user 105 may deposit the check on different occasions and through a variety of different devices and technologies of generating electronic check data. For example, in one implementation, the user 105 may deposit the check at home 106 by obtaining a check image via an image capture device (e.g., a camera, a scanner, etc.) controlled by a home computer. In another implementation, the user 105 may use a mobile device with a built-in camera (e.g., iPhone, BlackBerry, etc.) to take a picture of the check. In another implementation, the user 105 may deposit the check at a retail Point of Sale (POS) terminal 108, a kiosk or a Check 21 ATM 109, etc. by submitting the paper check to the deposit facility to generate images of the check for deposit. In a further implementation, the user 105 may take live video of the check via a device with built-in video camera (e.g., Apple iPhone, etc.) and send the video clip of the check to the PS-PLATFORM server 120. In one embodiment, the electronic data sent from the user 105 may include extracted data information from the check. For example, in one implementation, the user 105 may use a Magnetic Ink Character Recognition (MICR) device to scan and translate the MICR information (e.g., account number, routing number, check number, etc.) located on the check and transmit the data to the PS-PLATFORM server 120 along with digital image files or video clip files of the check. In one implementation, the electronic data may include a user entered value indicating an amount to be deposited, and/or other user submitted information. The PS-PLATFORM facilitates connections through the communication network 113 based on a broad range of protocols that include WiFi, Bluetooth, 3G cellular, Ethernet, ATM, and/or the like. 
In one embodiment, the communication network 113 may be the Internet, a Wide Area Network (WAN), a public switched telephone network (PSTN), a cellular network, a voice over internet protocol (VoIP) network, a Local Area Network (LAN), a Peer-to-Peer (P2P) connection, an ATM network and/or the like. In one implementation, the user 105 may communicate with financial institutions 125 by phone, email, instant messaging, facsimile, and/or the like. In one embodiment, the financial institutions 125 may be any type of entity capable of processing a transaction involving a check deposit. For example, the financial institution 125 may be a retail bank, investment bank, investment company, regional branch of the Federal Reserve, clearinghouse bank, correspondent bank, and/or the like. In one embodiment, the financial institution 125 may include a PS-PLATFORM server 120, the payee's bank 160 and the payer's bank 165. In one implementation, the PS-PLATFORM server 120 may be housed within the payee's bank 160 as a built-in facility of the payee's bank for processing remote check deposits. In another implementation, the PS-PLATFORM server 120 may be associated with an entity outside the payee's bank, as a remote deposit service provider. In one embodiment, the PS-PLATFORM server 120 may receive and process electronic data of deposit information from the user 105 via the communication network. For example, in one implementation, the PS-PLATFORM server 120 may generate a check image in compliance with deposit formats (e.g., a Check 21 compliant check image file, an X9.37 cash letter check image, and/or the like), based on the received electronic data from the user 105. In one implementation, the PS-PLATFORM server may analyze metadata associated with the received check image/video files, such as GPS information, time stamp of image capture, IP address, MAC address, system identifier (for retail POS/kiosk deposits) and/or the like.
In a further implementation, the PS-PLATFORM server 120 may receive and process biometrics data from the user 105. For example, in one implementation, a payee may be instructed to submit an image or video clip of himself/herself. In such cases, the PS-PLATFORM may perform face recognition procedures for user authentication, obtaining payee information for check clearance, and/or the like. In one implementation, upon receipt and approval of the electronic deposit data, the payee's bank 160 may credit the corresponding funds to the payee's account. In one implementation, the PS-PLATFORM server 120 may clear the check by presenting the electronic check information to an intermediary bank 170, such as a regional branch of the Federal Reserve, a correspondent bank and/or a clearinghouse bank. In one embodiment, the payer's account at the payer's bank 165 may be debited the corresponding funds. In one embodiment, the PS-PLATFORM entities, such as the PS-PLATFORM server 120 and/or the like, may also communicate with a PS-PLATFORM database 119. In some embodiments, distributed PS-PLATFORM databases may be integrated in-house with the PS-PLATFORM server 120, and/or the payee's bank 160. In other embodiments, the PS-PLATFORM entities may access a remote PS-PLATFORM database 119 via the communication network 113. In one embodiment, the PS-PLATFORM entities may send data to the database 119 for storage, such as, but not limited to, user account information, application data, transaction data, check image data, user device data, and/or the like. In one embodiment, the PS-PLATFORM database 119 may be one or more online databases connected to a variety of vendors and financial institutions, such as hardware vendors (e.g., Apple Inc., Nokia, Sony Ericsson, etc.), deposit banks (e.g., Bank of America, Wells Fargo, etc.), service vendors (e.g., clearinghouse banks, etc.) and/or the like, and may obtain updated hardware driver information and software updates from such vendors.
In one embodiment, the PS-PLATFORM server 120 may constantly, intermittently, and/or periodically download updates, such as updated user profiles, updated software programs, updated command instructions, and/or the like, from the PS-PLATFORM database 119 via a variety of connection protocols, such as Telnet, FTP, HTTP transfer, P2P transmission and/or the like. In one embodiment, a system administrator 140 may communicate with the PS-PLATFORM entities for regular maintenance, service failure, system updates, database renewal, security surveillance and/or the like via the communication network 113. For example, in one implementation, the system administrator 140 may be a system manager at the payee's bank, who may operate directly with the PS-PLATFORM server 120 via a user interface to configure system settings, inspect system operations, and/or the like. In one embodiment, the PS-PLATFORM controller 203 may be housed separately from other modules and/or databases within the PS-PLATFORM system, while in another embodiment, some or all of the other modules and/or databases may be housed within and/or configured as part of the PS-PLATFORM controller. Further detail regarding implementations of PS-PLATFORM controller operations, modules, and databases is provided below. In one implementation, the PS-PLATFORM controller 203 may further be coupled to a plurality of modules configured to implement PS-PLATFORM functionality and/or services. The plurality of modules may, in one embodiment, be configurable to establish a secured communications channel with a remote image capture device and implement a remote deposit service application. In some embodiments, the remote deposit service application may obtain and analyze check images, and generate virtual checks (e.g., Check 21 X9.37 cash letter files, etc.) for deposit.
In one embodiment, the daemon application may comprise modules such as, but not limited to, an Image Upload module 205, a Virus Scan module 206, a Check Image Persistence module 210, an Image Analysis module 212, a TRA & EWS Service module 214, a Soft Post module 215, a MICR Extraction module 218, a TIFF Generation module 220, a Check Information Extraction module 222, an Endorsement Detection module 224, a Cash Letter Generation module 225, and/or the like. In one embodiment, the Image Upload module 205 may establish a secured communications channel with a user image capture device and receive submitted check images. In one embodiment, the Image Upload module 205 may initialize an image upload application which may remotely control the image capture device to obtain and upload check images via the secured communications channel, as will be further illustrated below. In one embodiment, the Check Image Persistence module 210 may check the persistence of the received check image files. For example, in one implementation, the Check Image Persistence module 210 may check the image file format, file storage pattern, and/or the like. In one implementation, the Check Image Persistence module 210 may check the storage format of the metadata associated with the check image file. In one embodiment, the Image Analysis module 212 may process the received check digital file, performing tasks such as image usability and quality checks, video frame image grabs, and/or the like, as will be further illustrated below. In one embodiment, the Magnetic Ink Character Recognition (MICR) Extraction module 218 may perform an optical character recognition (OCR) procedure on the processed check image and extract the MICR line on the check. Checks typically contain MICR information (e.g., routing number, account number and check number) on the bottom left-hand corner of the check.
In one embodiment, the Check Information Extraction module 222 may perform an optical character recognition (OCR) procedure to extract information from the check, including the payee's name, the deposit amount, check number, and/or the like. In one embodiment, the Endorsement Detection module 224 may detect whether the check image contains a depositor's signature. In another embodiment, the MICR information may consist of characters written in magnetic ink. The MICR information may be read electronically by passing the check through the MICR device, which may translate the characters by magnetizing the ink. If a user converts the check into electronic data by scanning the check using a MICR device, the MICR module may directly parse the information contained in the MICR data submitted by the user. In one embodiment, the Soft Post module 215 may provisionally credit the payee's account with the deposit amount after processing the received check image. In one embodiment, the Cash Letter Generation module 225 may generate and submit an X9.37 cash letter check image file to a clearinghouse bank (e.g., a regional branch of the Federal Reserve Bank, etc.) to clear the transaction and/or implement a representment check after the soft post, as will be further illustrated below. In one implementation, the PS-PLATFORM controller 203 may further be coupled to one or more databases configured to store and/or maintain PS-PLATFORM data. A user database 226 may contain information pertaining to account information, contact information, profile information, identities of hardware devices, Customer Premise Equipments (CPEs), and/or the like associated with users, device configurations, system settings, and/or the like. A hardware database 228 may contain information pertaining to hardware devices with which the PS-PLATFORM and/or PS-PLATFORM-affiliated entities interact. A transaction database 230 may contain data pertaining to check deposit transactions.
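The daemon modules described above can be sketched as sequential stages of a deposit-processing pipeline. This is a hypothetical sketch: the stage functions are trivial stand-ins for the real image-processing modules, and the input layout is assumed.

```python
# Sketch of chaining daemon modules (virus scan, MICR extraction, soft post)
# as sequential processing stages over a deposit record.
def virus_scan(deposit):
    # Stand-in for the Virus Scan module: mark the upload as scanned.
    deposit["scanned"] = True
    return deposit

def extract_micr(deposit):
    # Stand-in for the MICR Extraction module: read the MICR line
    # (routing number, account number, check number) from the image data.
    deposit["micr"] = deposit["image"].get("micr_line", "")
    return deposit

def soft_post(deposit):
    # Stand-in for the Soft Post module: provisionally credit the deposit
    # amount pending check clearance.
    deposit["status"] = "soft_posted"
    return deposit

PIPELINE = [virus_scan, extract_micr, soft_post]

def process_deposit(deposit):
    """Run a deposit record through each pipeline stage in order."""
    for stage in PIPELINE:
        deposit = stage(deposit)
    return deposit

result = process_deposit({"image": {"micr_line": "111 222 0042"}})
```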
In one embodiment, the transaction database 230 may include fields such as, but not limited to: check deposit timestamp, payee's name, payee's bank name, account number, bank routing number, deposit amount, deposit method, deposit device, check image index, check clearance information, and/or the like. A check image database 235 may contain a repository of processed check images associated with a transaction. The PS-PLATFORM database may be implemented using various standard data-structures, such as an array, hash, (linked) list, struct, structured text file (e.g., XML), table, and/or the like. For example, in one embodiment, the XML for a transaction in the transaction database 230 may take a form similar to the following example:

<Transaction>
  . . .
  <ID>MyTransaction1_0008</ID>
  <Receive Time>5/12/2009 11:30:00</Receive Time>
  <Device_ID>iPhone 6HS8D</Device_ID>
  <Payee_Name>Joe Dow</Payee_Name>
  <Payee_Bank>First Regional Bank</Payee_Bank>
  <Deposit_amount>1000.00</Deposit_amount>
  <Post_Time>5/12/2009 11:31:23</Post_Time>
  <Image_ID>MyImage3214232</Image_ID>
  <Clearance_bank>Clearinghouse Bank</Clearance_bank>
  <Deposit_status>confirmed</Deposit_status>
  . . .
</Transaction>

In another embodiment, for mobile deposit 250, the check image may be captured and submitted via a mobile device, as illustrated in the accompanying figures. In one embodiment, the account 2060 may be any type of deposit account for depositing funds, such as a savings account, a checking account, a brokerage account, and the like. The user 105 may deposit the check 118 or other negotiable instrument in the account 2060 either electronically or physically. The financial institutions 125 a-c may process and/or clear the check 118 or other negotiable instrument.
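By way of non-limiting illustration, a transaction record of the kind shown above may be read back with a standard XML parser. In the sketch below, tag names containing spaces in the example are normalized to underscores so that the snippet is well-formed XML; the record content is otherwise taken from the example.

```python
import xml.etree.ElementTree as ET

# Illustrative transaction record (tag names normalized to underscores).
RECORD = """
<Transaction>
  <ID>MyTransaction1_0008</ID>
  <Receive_Time>5/12/2009 11:30:00</Receive_Time>
  <Payee_Name>Joe Dow</Payee_Name>
  <Payee_Bank>First Regional Bank</Payee_Bank>
  <Deposit_amount>1000.00</Deposit_amount>
  <Deposit_status>confirmed</Deposit_status>
</Transaction>
"""

def load_transaction(xml_text: str) -> dict:
    """Parse a transaction record into a flat field-name -> value mapping."""
    root = ET.fromstring(xml_text)
    return {child.tag: (child.text or "").strip() for child in root}
```

Such a mapping may then be stored in, or compared against, the transaction database 230 fields listed above.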
The user 105 may communicate with financial institutions 125 a-c by way of a communications network such as an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless fidelity (WiFi) network, a public switched telephone network (PSTN), a cellular network, a voice over Internet protocol (VoIP) network, and the like. The user 105 may communicate with financial institutions 125 a-c by phone, email, instant messaging, text messaging, web chat, facsimile, mail, and the like. Financial institutions 125 a-c also may communicate with each other and the PS-PLATFORM server 120 by way of a communications network. In an implementation, the user 105 may receive payment from another individual such as a payor in the form of the check 118 or other negotiable instrument that is drawn from an account at one of the financial institutions 125 a-c. The user 105 may endorse the check 118 (e.g., sign the back of the check 118) and indicate an account number on the check 118 for depositing the funds. The user 105 may present the check 118 to the financial institutions 125 a-c for processing (e.g., to deposit or cash the check 118) using any channel, such as presenting the physical check 118 to the financial institution 125 a (e.g., via a teller), providing the check 118 in an ATM associated with the financial institution 125 a, or remotely depositing the check 118 by providing an image of the check 118 to the financial institution 125 a. It is noted that although examples described herein may refer to a check, the techniques and systems described herein are contemplated for, and may be used for, checking the validity of any negotiable instrument such as a money order, a cashier's check, a check guaranteed by a bank, or the like. In an implementation, the user 105 may access the financial institutions 125 a-c via the institution system 2005 by opening a communication pathway via a communications network 113 using a user computing device. 
There may be several ways in which the communication pathway may be established, including, but not limited to, an Internet connection via a website 2018 of the institution system 2005. The user 105 may access the website 2018 and log into the website 2018 using credentials, such as, but not limited to, a username and a password. Each financial institution 125 a-c may receive or generate a digital image representing a check it receives for deposit and may use any known image processing software or other application(s) to obtain the check data of the check from the digital image. For example, each financial institution 125 a-c may include any combination of systems and subsystems to obtain the check data and generate a data file to be sent to the PS-PLATFORM server 120. The electronic devices may receive the digital image and may parse or otherwise obtain the check data from the check (e.g., the bank from which the check is drawn, place of issue, check number, date of issue, payee, amount of currency, signature of the payor, routing/account number in MICR format, transit number, etc.). A data file may be generated that comprises some or all of the check data. The data file may be any type of data file comprising data in any type of format from which data pertaining to the check may be stored and retrieved. The data file may be stored in check data storage of the financial institution or the PS-PLATFORM server 120. The data file may be provided to the PS-PLATFORM server 120. After the financial institution that has received the check for deposit is advised by the PS-PLATFORM server 120 that the check has not been previously deposited or cashed (e.g., has received a notification from the PS-PLATFORM server 120 that the check is valid), the financial institution may clear the check using known techniques.
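By way of non-limiting illustration, the data-file generation step described above may be sketched as follows; the JSON format and the field names are assumptions for illustration, not a prescribed schema.

```python
import json

def generate_data_file(check_data: dict,
                       required=("routing", "account", "check_number")) -> str:
    """Package parsed check fields into a data file (here, a JSON document)
    that could be stored or provided to the PS-PLATFORM server."""
    missing = [f for f in required if f not in check_data]
    if missing:
        raise ValueError("incomplete check data, missing: %s" % ", ".join(missing))
    return json.dumps(check_data, sort_keys=True)
```

The required-field check stands in for whatever completeness validation a financial institution applies before transmitting check data.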
Each financial institution may comprise a check clearing module (e.g., the check clearing module 2025 of the institution system 2005) that may communicate with a check clearinghouse such that a Check 21 compliant data file, for example, may be delivered to the check clearinghouse and funds may be received by the financial institution. In an implementation, the user 105 may use an imaging device (e.g., scanner, camera, etc.) to generate a digital image of the check 118. The digital image may be used to create a digital image file that may be sent to the institution system 2005 and used by the financial institutions 125 a-c, in conjunction with the institution system 2005, to process a deposit or cashing of the check 118 whose image is comprised within the digital image file. In an implementation, the digital image file may be augmented by secondary data which may be information relating to the deposit of the check 118, such as an account number and a deposit amount, for example. In one embodiment, the user 105 may place the check 118 on a background and generate a digital image comprising an image of the check 118 (e.g., a check image) and a portion of the background (e.g., a background image) using the imaging device or other device such as a camera that may be standalone or part of a phone or other user computing device. Any background may be used. It is noted that although examples and implementations described herein may refer to a check image, the term “check image” may refer to any foreground image in a digital image (as opposed to the background image). Thus, the “check image” may refer to the foreground image in implementations involving any negotiable instrument, form, or document. In one embodiment, a user computing device may be integral with the device used to make the digital image of the check 118 and/or the digital image file or separate from the device used to make the digital image of the check 118 and/or the digital image file. 
As shown and discussed above, the digital image file comprising an image of the check 118 may be transmitted to the institution system 2005. The user 105 may send the digital image file and any secondary data to the institution system 2005 along with a request to deposit the check 118 into an account, such as the account 2060. Any technique for sending a digital image file or digital image to the institution system 2005 may be used, such as providing a digital image file from storage to the website 2018 associated with the institution system 2005. In one embodiment, the financial institutions 125 a-c in conjunction with the institution system 2005 may process the deposit request (or check cashing request, for example) using the digital image of the check 118 received from the user 105 or the actual check 118 that may be presented by the user 105 to the financial institutions 125 a-c. In an implementation, upon receiving the actual check 118, the financial institutions 125 a-c may use an imaging device or other computing device to create a digital image of the check 118. Thus, when the check is presented to the financial institutions 125 a-c for processing, such as deposit or cashing, an image of the check may be made (if the check is received physically as opposed to electronically) and passed to an image processor 2022. In an implementation, the institution system 2005 may comprise an image processor 2022 that processes the digital image of the check 118. In an implementation, the institution system 2005 may retrieve the image of the check 118 from the digital image file and process the check 118 from the image for deposit. Any image processing technology, software, or other application(s) may be used to retrieve the image of the check 118 from the digital image file and to obtain the check data of the check 118 from the digital image file.
The institution system 2005 may determine whether the financial information associated with the check 118 is valid. In an implementation, the image of the check 118 may be operated on by the image processor 2022. These operations, at a high level, are intended to ensure that the image of the check 118 is suitable for one or more subsequent processing tasks. These operations may include any of the following: deskewing, dewarping, magnetic ink character recognition, cropping (either automatically, or having the user 105 manually identify the corners and/or edges of the check 118, for example), reducing the resolution of the image, number detection, character recognition, and the like. For example, the image processor 2022 may deskew and/or dewarp the image using known techniques such that the image is properly rotated and aligned. The image processor 2022 may additionally perform any of the following operations, in further examples: convert from JPEG to TIFF, detect check information, perform signature detection on the image of the check, and the like. Alternatively or additionally, edge detection may be used to detect the check. Edge detection techniques are well known and any suitable method may be used herein. The institution system 2005 may optically recognize the characters on the MICR line. The institution system 2005 may perform the operations and generate a data file comprising the data from the check. In an implementation, in addition to being used to determine whether the check has been previously processed, for example, such data may be used by a financial institution to generate a Check 21 compliant format or file or substitute check. In an implementation, the image processor 2022 may process multiple frames of the image if the image is comprised of multiple frames (e.g., the front side and the back side of the check 118).
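By way of non-limiting illustration, the automatic-cropping operation among those listed above may be sketched on a grayscale image represented as rows of 0-255 intensities. Real deskewing and dewarping would require an imaging library, so this sketch only locates the foreground (the dark check pixels against a light background) and crops to its bounding box.

```python
def crop_to_foreground(image, threshold=128):
    """Crop a grayscale image (list of rows of 0-255 values) to the
    bounding box of pixels darker than the threshold."""
    rows = [r for r, row in enumerate(image) if any(p < threshold for p in row)]
    cols = [c for row in image for c, p in enumerate(row) if p < threshold]
    if not rows:
        return []  # blank image: no foreground to crop to
    top, bottom = min(rows), max(rows)
    left, right = min(cols), max(cols)
    return [row[left:right + 1] for row in image[top:bottom + 1]]
```

The threshold and the dark-foreground assumption are illustrative; a production system would instead rely on edge detection as noted above.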
For example, after receiving the digital image file, the image processor 2022 may retrieve the image(s) of the check 118 and process the image or an image based on the image for deposit. The image processor 2022 may use any known image processing software or other application(s) to obtain the image and any relevant data of the check 118 from the digital image file. The image processor 2022 has access to data, files, and documents pertaining to the user 105 as well as any other data, files, and documents that are internal or external to the institution system 2005 that may be useful in processing the digital image file and/or the data contained therein. The image processor 2022 may extract data from the image of the check 118 and provide the data to a check data generator 2024. The check data generator 2024 may generate a data file that comprises the appropriate check data from the check 118. In an implementation, the data file may be stored in storage of the financial institutions 125 a-c, such as check data storage 132 or storage 2008, for example. The data file may be sent to the PS-PLATFORM server 120 either upon request from the PS-PLATFORM server 120 (e.g., pursuant to another financial institution requesting validation of a check presented to that financial institution for processing) or upon generation of the data file (e.g., to be stored in storage of the PS-PLATFORM server 120, to be analyzed for validation of the check for the financial institution providing the data file, etc.). Thus, in an implementation, the check data generator 2024 may generate a data file and provide it to the PS-PLATFORM server 120. In response, the PS-PLATFORM server 120 may advise the financial institutions 125 a-c whether to accept the check or not, or may provide real time validation information that may be analyzed by the financial institutions 125 a-c in determining whether or not to accept the check. 
If the presented check is determined to be valid, the check may then be sent for clearing. If a digital image of the check 118 is not used, a representative of the financial institution 103 may manually obtain data from the check 118 and provide that data to the check data generator 2024, e.g., via a computing device associated with the institution system 2005, such as one of the computing devices 2006. After receiving validation of the check 118 from the PS-PLATFORM server 120, the check 118 (or data from the check 118) may be provided to a check clearing module 2025 of the institution system 2005. The check clearing module 2025 may perform known check clearing processes on the check 118 in order to credit the funds of the check 118 to the account 160. In an implementation, the check clearing module 2025 may provide the image of the check 118 or data from the check 118 to a clearinghouse to perform the check clearing operations. Check clearing operations are used by banks to perform the final settlement of the check 118, such as removing funds from the account of the payor and transferring those funds to the user's bank. The user's bank may choose to make the funds available to the user 105 immediately and take on the risk that the check 118 does not clear. However, for various reasons, the bank may only make those funds available to the user 105 after the check 118 finally clears. In an implementation, to credit funds to the account, the financial institution may generate an ACH debit entry and/or a substitute check. ACH transactions typically include payment instructions to debit and/or credit an account. Banks often employ ACH service providers to settle ACH transactions. Examples of ACH service providers include regional branches of the Federal Reserve and the Electronic Payments Network (EPN). It will be appreciated that the examples herein are for purposes of illustration and explanation only, and that an embodiment is not limited to such examples.
In one embodiment, the institution system 2005 may include a user interface module 2020 and a data source access engine 2027. The user interface module 2020 may generate and format one or more pages of content 2019 as a unified graphical presentation that may be provided to a user computing device associated with the user 105 (e.g., if the user 105 is depositing the check electronically via a user computing device). In an implementation, the page(s) of content 2019 may be provided to the user computing device via a secure website 2018 associated with the institution system 2005. In one embodiment, the institution system 2005 has the ability to retrieve information from one or more data sources 2029 via the data source access engine 2027. Data pertaining to the user 105 and/or the account 160 and/or validating, processing, and clearing of a check may be retrieved from data source(s) 2029 and/or external data sources. The retrieved data may be stored centrally, perhaps in storage 2008. Other information may be provided to the institution system 2005 from the user 105 or the PS-PLATFORM server 120. In one embodiment, database(s) 2029 may contain data, metadata, email, files, and/or documents that the institution system 2005 maintains pertaining to the user 105, such as personal data such as name, physical address, email address, etc. and financial data such as credit card numbers and deposit account numbers. Such data may be useful for validating and/or processing the check 118 or a digital image of the check 118. Additionally or alternatively, the financial institutions 125 a-c or the institution system 2005 may access this information when clearing a check. In one embodiment, the institution system 2005 may comprise one or more computing devices 2006. The computing device(s) 2006 may have one or more processors 2007, storage 2008 (e.g., storage devices, memory, etc.), and software modules 2009. 
The computing device(s) 2006, including processor(s) 2007, storage 2008, and software modules 2009, may be used in the performance of the techniques and operations described herein. In one embodiment, examples of software modules 2009 may include modules that may be used in conjunction with receiving and processing a digital image or digital image file comprising an image of the check 118, retrieving data from the digital image or digital image file, generating a data file for use by the framework 120 and/or other financial institutions in validating and/or processing the check 118, and requesting and receiving validation information from the framework 120 pertaining to a check, for example. While specific functionality is described herein as occurring with respect to specific modules, the functionality may likewise be performed by more, fewer, or other modules. In one embodiment, communication models are used to handle receiving messages from and distributing messages to multiple nodes in a distributed computing environment. An example of a communication model is the publish and subscribe model. Entities that produce the messages or information are "publishers" and entities that are interested in the messages are "subscribers". The publish and subscribe model involves an asynchronous messaging capability and is event-driven because communication between the producer of information and the consumer of information is triggered by business events, such as the presentment of a check, such as the check 118, to a financial institution, such as the financial institution 125 a. In one implementation, a variety of data structures and file formats may be utilized to generate a message, which may include a numeric ID uniquely identifying the message.
For example, in one implementation, the message ID may be a 4-digit number where the first 2 digits indicate a type of the message, e.g., 09xx indicates the message is related to remote deposit, etc., and the next two digits may further indicate the type of the deposit message, e.g., 0901—Deposit Initiation, 0902—Deposit Reversal, 0903—Deposit Response, 0904—Deposit Notification, 0905—Deposit Reversal Notification, and/or the like. In one implementation, an example XML implementation of the remote deposit message may take a form similar to the following:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<RDC>
  <Message ID>0901</Message ID>
  <CAPTURE METADATA>
    <Bank ID>0011913102</Bank ID>
    <Deposit Timestamp>01212009:12:20:35</Deposit Timestamp>
    <Location>0731460: 3744795</Location>
  </CAPTURE METADATA>
  <DEPOSIT SLIP>
    <DEPOSIT METADATA>
      <Check ID>CHQ0017</Check ID>
      <Deposit Transit Number>35706112</Deposit Transit Number>
      <Deposit Account Number>01123456</Deposit Account Number>
      <Amount>55.37</Amount>
    </DEPOSIT METADATA>
    <Security Token>
      TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSBOaGlz
      IHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2Yg=
    </Security Token>
    . . .
  </DEPOSIT SLIP>
  <CHECK>
    <Check ID>CHQ0017</Check ID>
    <CHECK METADATA>
      <Payor Transit Number>01136822</Payor Transit Number>
      <Payor Account Number>112234</Payor Account Number>
      <MICR>A021001208A7000609C11051</MICR>
      <Check Number>11051</Check Number>
      <Check Date>12252008</Check Date>
      <CAR>55.37</CAR>
      <Memo>Christmas Ornaments Purchase</Memo>
    </CHECK METADATA>
    <Front Image>FFD8FFE000104A4 . . . 6494600010</Front Image>
    <Back Image>7020332E3000384 . . . 494D040407</Back Image>
    . . .
  </CHECK>
  . . .
</RDC>

In one embodiment, publish and subscribe systems comprise publishers, which generate messages, and subscribers, which receive the messages. Depending on the implementation, the framework 2320 may act as a publisher of a message (e.g., comprising a notification and/or check data) or as a message broker which receives a message from one of the financial institutions and sends the message to one or more subscribers (e.g., financial institutions) that have previously subscribed to receive some or all of these notifications. In one embodiment, a messaging system uses a set of rules to ensure that a particular message is provided to the proper subscriber(s). A rule is a condition that describes the message or messages that are desired by a subscriber. As described herein, a rules engine 2340 may be used to apply the rules to the check data for determining which entities to send the messages to. There are a variety of standards governing the expression of the rules and the structure of the messages, and any standard(s) may be used with the techniques and operations described herein. For example, in one implementation, a financial institution, e.g., a payor's bank, may request to subscribe to deposit information for all negotiable instruments issued by the payor's bank. In one implementation, the PS-PLATFORM may send a message comprising deposit information to the subscribed payor's bank when a negotiable instrument issued by the payor's bank is processed. In another implementation, the PS-PLATFORM may generate a batch of messages comprising information about negotiable instruments deposited within a period of time. In one implementation, the PS-PLATFORM may generate messages to the subscriber periodically. In an alternative implementation, the PS-PLATFORM may send subscribers notifications upon request, e.g., in response to receiving a clearinghouse request for a proposed deposit of a negotiable instrument.
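By way of non-limiting illustration, the 4-digit message ID scheme described earlier may be decoded as follows; the mapping reproduces the deposit message types listed in the description.

```python
# Message family "09" denotes remote deposit; the last two digits select
# the deposit message type per the scheme described in the text.
DEPOSIT_TYPES = {
    "01": "Deposit Initiation",
    "02": "Deposit Reversal",
    "03": "Deposit Response",
    "04": "Deposit Notification",
    "05": "Deposit Reversal Notification",
}

def decode_message_id(message_id: str) -> str:
    """Classify a 4-digit message ID such as the <Message ID> field above."""
    if len(message_id) != 4 or not message_id.isdigit():
        raise ValueError("message ID must be 4 digits")
    if message_id[:2] != "09":
        return "non-deposit message"
    return DEPOSIT_TYPES.get(message_id[2:], "unknown deposit message")
```

For instance, the example remote deposit message above carries Message ID 0901, a Deposit Initiation.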
For another example, a subscription rule from a financial institution may specify a time range of desired information, requesting that the PS-PLATFORM provide deposit information within the specified period of time, e.g., the past 6 months. In another implementation, the subscription rule may further specify a variety of parameters for the deposit information, such as a range of deposit amounts (e.g., an amount greater than 1000 USD, etc.), a set of payor's banks (e.g., information with regard to negotiable instruments issued by the set of payor's banks may be monitored, etc.), a range of deposit occurrences (e.g., the most recent 10 deposits associated with one account may be monitored, etc.), and/or the like. In a further implementation, any combination of the discussed example parameters may be employed by the subscription rule. In one implementation, the subscription rule may be implemented with a variety of data-structures, such as an array, hash, (linked) list, struct, structured text file (e.g., XML), table, and/or the like. For example, in one embodiment, an example XML for a subscription rule may take a form similar to the following:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<RDC>
  <Message ID>0916</Message ID>
  <Subscription Rule>
    <Rule ID>0009314</Rule ID>
    <Subscriber ID>0011913102</Subscriber ID>
    <Subscriber Name>ABC Bank</Subscriber Name>
    <Notification Trigger>
      <Trigger 1>Payor Bank=ABC Bank</Trigger 1>
      <Trigger 2>Payee Bank=ABC Bank</Trigger 2>
    </Notification Trigger>
    <Notification Condition>
      <Condition 1>MICR Transit field matches any of 011312239 011331365 011665344</Condition 1>
      <Condition 2>Check amount exceeds 50000 and MICR Transit starts with 01131</Condition 2>
    </Notification Condition>
    <CALLBACK SERVICE>
      <URL> . . . </URL>
      . . .
    </CALLBACK SERVICE>
  </Subscription Rule>
</RDC>

In an implementation, the publish and subscribe framework 2320 is content-based.
In a content-based system, messages are only delivered to a subscriber if the attributes or content of those messages (pursuant to the check data) match constraints defined by the subscriber. Thus, a subscriber registers to receive messages based on particular data, such as the account on which the check is drawn, the routing number, the name, etc. The framework 2320 sends the message to the subscriber only if the subscription matches the data. In one embodiment, a subscriber may subscribe by submitting one or more rules to the framework 2320. Each rule may describe the message or messages that are desired by the subscriber using the check data. The rules engine 2340 may be used to apply the stored subscription rules to messages published or otherwise provided by the framework 2320 pursuant to the check data. If an incoming message or check data satisfies a subscription rule, then the message, the check data, and/or a notification is published to the particular subscriber that submitted the subscription rule. Incoming check data is evaluated against a set of subscription rules to determine which subscribers are to receive a message. In an implementation, the messages that are delivered to subscribers who are qualified to receive them may include not only messages generated based on the check data of the presented check, but also messages that have been published at an earlier time and that have been stored in storage of the framework 2320 such as a publish and subscribe database 2324. Previously published messages that were stored in the publish and subscribe database 2324 and that satisfy the subscription rule(s) may be delivered to the subscriber. Having been stored in the publish and subscribe database 2324, the previously published messages can be retrieved for publication to the subscriber using known data retrieval methods used by database servers. 
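By way of non-limiting illustration, content-based matching may be sketched as follows; a subscription rule is modeled as a set of attribute constraints that the incoming check data must all satisfy, and the attribute names are illustrative only.

```python
def matches(rule: dict, check_data: dict) -> bool:
    """A rule matches only if every constrained attribute equals the
    value required by the subscriber."""
    return all(check_data.get(field) == wanted for field, wanted in rule.items())

def route(rules_by_subscriber: dict, check_data: dict) -> list:
    """Return the subscribers whose rules match the incoming check data."""
    return [s for s, rule in rules_by_subscriber.items() if matches(rule, check_data)]
```

A production rules engine would also support range and prefix conditions (e.g., "check amount exceeds 50000", "MICR Transit starts with 01131"), but equality constraints suffice to show the routing idea.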
In an implementation, a subscriber may specify the delivery mechanism, the destination, and the notification protocol of the message as part of the subscription requesting the message. For example, the subscriber may specify when registering that they would like to receive messages by means of email. The subscriber may then specify an email address at which to receive the message. Alternatively, the subscriber may specify that they would like their messages delivered to a remote database by means of a database link. In one embodiment, the database link can be created using SQL and may define a database name, a path, a logon account, and a protocol used to connect to the remote database. In addition, the subscriber may specify how quickly they would like to have a message delivered, such as within one second, one minute, etc. of the creation of the message. In one embodiment, the publish and subscribe framework 2320 may operate to provide message brokering services between the financial institutions involved in an exchange of messages. The framework 2320 may be implemented in a server or other computing device, and may comprise a receive check data module 2322, a notification generator 2324, a registration engine 2330, a rules engine 2340, and a dispatch engine 2350. In one embodiment, the receive check data module 2322 may receive check data 2315 from a financial institution when the financial institution receives a check for processing (e.g., deposit or cashing) from a user. The check data 2315 comprises data pertaining to the check (e.g., the check 118) that has been presented from the user to the financial institution (e.g., the financial institution 125 a) for processing. In one embodiment, the registration engine 2330 may receive preferences from financial institutions 125 a-c and may use the preferences to register the financial institutions 125 a-c as subscribers. 
Rules may be established using the preferences of the financial institution that allow message (e.g., check data and/or notification) routing decisions to be made based on, for example, user identification, paying bank, amount of check, time of day, transaction type, presentment method, level of authentication, transaction profile, and/or other criteria. In one embodiment, the rules engine 2340 checks the received check data against the rules. If a rule is satisfied, a notification is generated by the notification generator 2324. The notification may be sent to the dispatch engine 2350. In an implementation, the dispatch engine 2350 sends the notification and/or check data over a subscribed feed to the financial institution whose rules match the received check data. The notification with check data may be sent by any known communication channel, such as a communication channel previously selected by the financial institution when registering as a subscriber. In an implementation, the notification with check data may be sent by email, text message, instant message, facsimile, or telephone, for example. In an implementation, each financial institution may subscribe to a feed of its own checks and may be apprised of a check that is drawn on it and that is being presented to any financial institution associated with the publish and subscribe framework 2320. This gives the financial institution the ability to determine, from a database or other storage of check processing data accessible by the financial institution, whether a payment has previously been made on the check (i.e., whether the check has been processed previously). If the payment has previously been made on the check, then payment of the check due to representment of the check may be disallowed and avoided. If the payment has not been made already on the check, the financial institution may allow payment to be made on the check.
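By way of non-limiting illustration, the duplicate-presentment determination described above may be sketched as a registry of already-paid checks; keying the registry on (routing number, account number, check number) is an assumption for illustration.

```python
class PresentmentRegistry:
    """Records checks on which payment has been made, so that a
    re-presented check can be recognized and disallowed."""

    def __init__(self):
        self._paid = set()

    def allow_payment(self, routing: str, account: str, check_number: str) -> bool:
        """Record and allow the first presentment; disallow re-presentment."""
        key = (routing, account, check_number)
        if key in self._paid:
            return False  # payment previously made: representment disallowed
        self._paid.add(key)
        return True
```

In practice, the registry would be the financial institution's check processing database rather than an in-memory set.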
In an implementation, if the payment has not been made already on the check, the financial institution may memo post the check, deduct money from the account holder, and leverage the float on the money between that moment in time and when the check is presented for clearing. In another embodiment, for kiosk/ATM/retail deposit, a user may be instructed from an ATM/kiosk screen to place or insert the check into a check deposit slot for scanning, and/or the like. In another embodiment, for mobile deposit, a user operating a mobile device may access the PS-PLATFORM website via the mobile device, or may launch a PS-PLATFORM component pre-installed on the mobile device and connect to the PS-PLATFORM server to submit deposit requests via the PS-PLATFORM component. In one embodiment, in response to the user request, the PS-PLATFORM server may initialize a remote deposit component 302. For example, in one implementation, the PS-PLATFORM may retrieve and load a user interface page for remote deposit. In one embodiment, the PS-PLATFORM may instruct the user to capture and submit an image or video stream of the check 305-306, as will be further illustrated below. For another example, for mobile deposit via a mobile device, the user may launch a "remote deposit" application on a menu of the mobile device to send a request for mobile deposit (e.g., via SMS, etc.), and the PS-PLATFORM may determine whether the mobile device has been registered based on its physical (MAC) address. In a further implementation, the PS-PLATFORM may instruct the user to submit biometric information for authentication. For example, if the user is operating a video camera, video files and/or live video streaming of the user may be transmitted to the PS-PLATFORM to authenticate the user by face recognition procedures.
In one embodiment, the PS-PLATFORM may process the received check image or video file 310, as will be further illustrated below. In one implementation, the PS-PLATFORM may determine whether the check is a duplicate or representment 315, as will be further illustrated below. In one embodiment, the PS-PLATFORM may verify the check image based on the extracted information, e.g., determining whether the extracted check information is valid 317. For example, in one implementation, the extracted information from the check image may be compared with the user-submitted deposit information, e.g., payee's name, deposit amount, etc. For another example, the PS-PLATFORM may check if a proper endorsement is contained on the back side of the check image. In one embodiment, if the check is not valid 320, the bank may reject the check image and abort check deposit 321. In another embodiment, if the check is valid, the PS-PLATFORM may provisionally credit the indicated amount of funds into the user's account 325. For example, in one implementation, the PS-PLATFORM may post the deposit to the payee's bank, and the payee's bank may provisionally credit the deposit amount to the payee's account. In one embodiment, the PS-PLATFORM may perform a check clearing procedure to control fraudulent items 330, as will be further illustrated below. In an alternative embodiment, the digital image may be presented directly to the payer's bank. The bank also may convert the digital image into a substitute check and present the substitute check to an intermediary bank (e.g., a regional branch of the Federal Reserve) to complete the check clearing process. In one embodiment, upon completion of the deposit, the PS-PLATFORM may further instruct the user to void the physical check 340 to avoid representment. For example, in one embodiment, the PS-PLATFORM may instruct the user to place the physical check into certain equipment such that a stimulus may be applied to the physical check to permanently mark the check as "void".
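The validity check 317 described above, comparing extracted data against the user-submitted deposit information and requiring an endorsement, might be sketched as follows; the field names are illustrative assumptions:

```python
def validate_check_data(extracted, submitted):
    """Compare data parsed from the check image with user-submitted
    deposit info; return a list of validation errors (empty if valid)."""
    errors = []
    if extracted["payee_name"].strip().lower() != submitted["payee_name"].strip().lower():
        errors.append("payee name mismatch")
    # Compare amounts with a small tolerance for parsing/rounding noise.
    if abs(extracted["amount"] - submitted["amount"]) > 0.005:
        errors.append("amount mismatch")
    if not extracted.get("endorsed"):
        errors.append("missing endorsement on back side")
    return errors
```

An empty error list would correspond to the "valid" branch at 320; any entry would trigger the reject/abort path at 321.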
For another example, if the deposit takes place at a kiosk or ATM, the deposit facility may print "ELECTRONICALLY PRESENTED" across the front of the original check when the check is scanned by equipment designed to convert the check to an electronic image for further processing. For another example, in one implementation, the MICR information of a check may be printed using a magnetic ink or toner containing iron oxide, which may be magnetically voided. For another example, the physical check may contain a radio frequency identification (RFID) tag. When the RFID tag receives a particular radio signal, the RFID tag may be modified as "void". For further examples, if the physical checks contain tags sensitive to heat, the check may be voided by the heat generated by the application of a bright light source, such as one that may be found in a scanner. In an alternative implementation, the PS-PLATFORM may instruct the user to physically destroy the check and submit digital evidence of check destruction. For example, the user may tear the check, capture an image of the torn check pieces, and submit the captured image to the PS-PLATFORM for verification. In one embodiment, the PS-PLATFORM may create a record of the deposited check and store the deposited check record in a repository to prevent check re-presentment. For example, in one implementation, the PS-PLATFORM may store an image of the check associated with extracted information, such as payee's name, deposit date, bank name, MICR information including the bank routing number and account number, and/or the like. In another implementation, the PS-PLATFORM may create a record of the check based on a portion of the check image which may be unique to represent the check. For example, in one implementation, the created check record may only include an image of the front side of a check. At 349, the payee may void the check. For example, the payee may write and/or stamp "void" on the check.
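A minimal sketch of the deposited-check record used to prevent re-presentment, assuming a fingerprint derived from MICR fields and the amount (the exact record contents are left open in the text above):

```python
import hashlib

def check_fingerprint(routing_number, account_number, check_number, amount):
    """Derive a stable fingerprint for a deposited check (assumed key fields)."""
    key = f"{routing_number}|{account_number}|{check_number}|{amount:.2f}"
    return hashlib.sha256(key.encode()).hexdigest()

class DepositRegistry:
    """Repository of deposited-check records, as described above."""
    def __init__(self):
        self._seen = set()

    def record(self, fingerprint):
        """Return True if this is a first deposit; False on representment."""
        if fingerprint in self._seen:
            return False
        self._seen.add(fingerprint)
        return True
```

A production repository would of course persist the records (and likely the check image) rather than hold them in memory; the set here only illustrates the duplicate test.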
At 350, the payee may send the check to the financial institution associated with the account for depositing funds. The check may be sent via a common carrier, such as the United States Post Office, FedEx®, United Parcel Service®, and the like. The process may then proceed to 351. It will be appreciated that 349 and 350 may be performed to provide additional security features. For example, by removing the check from circulation, it may be less likely that the check will be deposited more than once. At 351, the bank may receive the electronic data representative of the check along with information pertaining to the account for depositing funds. At 352, the bank may credit funds to the account. The credit may be a provisional credit, enabling the payee to access the funds while the check is being cleared. A provisional credit may be voided if the bank determines that the transaction is erroneous and/or fraudulent. At 353, in an embodiment, a payee may receive a check in return for the sale of goods, such as a used car, for example. The payee may endorse the check and/or send electronic data representative of the check to the payee's bank along with information pertaining to the account for depositing funds. Upon receipt of the MICR information and account information, the payee's bank may credit funds to the payee's account and generate an ACH debit entry to the payer's account, which may be presented to the ACH service provider for processing. The ACH service provider may process the debit entry by identifying the account and bank from which the check is drawn. The bank from which the check is drawn (i.e., the payer's bank) may be referred to as a receiving depository financial institution (RDFI). If the payer's bank verifies the transaction, the ACH service provider may settle the transaction by debiting the payer's bank and crediting the payee's bank. The payer's bank may then debit the payer's account.
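The provisional-credit behavior described above (credit on deposit, void on error or fraud, finalize on clearing) can be sketched as follows; the class and method names are illustrative assumptions:

```python
class Account:
    """Toy account model illustrating provisional credits."""
    def __init__(self, balance=0.0):
        self.balance = balance
        self.provisional = {}  # deposit_id -> provisionally credited amount

    def provisional_credit(self, deposit_id, amount):
        """Credit funds the payee may access while the check clears."""
        self.provisional[deposit_id] = amount
        self.balance += amount

    def void_provisional(self, deposit_id):
        """Reverse the credit if the transaction is erroneous or fraudulent."""
        self.balance -= self.provisional.pop(deposit_id)

    def settle(self, deposit_id):
        """The check cleared: the provisional credit becomes final."""
        self.provisional.pop(deposit_id)
```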
A substitute check is typically a paper reproduction of the original check. If a bank does not have a voluntary agreement and/or refuses to accept an electronic image, the financial institution is required under Check 21 to accept a substitute check in lieu of the original check. In an embodiment, a payee may receive a check as a birthday gift, for example. The payee may endorse the check and/or send electronic data representative of the check to the payee's bank. The payee's bank may credit funds to the payee's account. If the payee's bank and the payer's bank have a voluntary agreement for accepting electronic images of checks, the payee's bank may generate an electronic image of the check and/or simply forward the digital images received from the payee to the payer's bank. If there is no agreement between the banks, the payee's bank may convert the digital images into a substitute check and present the substitute check to the payer's bank and/or a check clearing service provider (e.g., a regional branch of the Federal Reserve) to clear the check. Returning to the process, at 363, in an embodiment, the payee may send the check information and account information to the bank. For example, as noted above, the payee may use a phone, email, instant messaging, and/or fax machine. At 364, the payee may void the check and/or send the check to the bank. The process may then proceed to 365. It will be appreciated that 364 may be performed to provide additional security features. At 365, the bank may receive the check information and account information. At 366, the bank may credit funds to the account. As noted above, the credit may be a provisional credit, enabling the payee to access the funds while the transaction is being processed. At 367, the bank may void the provisional credit if the original check is not sent and/or received within a predetermined period of time. At 368, the bank may receive the check. At 369, the bank may generate an ACH debit entry, substitute check, and/or electronic image. At 370, the bank may process the ACH debit entry, substitute check, and/or electronic image.
It will be appreciated that 369 and 370 may be performed to provide additional security features. In some embodiments, the PS-PLATFORM may impose limitations on the deposit amount and/or the availability of the deposit. For example, the PS-PLATFORM may place a daily, weekly, and/or other periodic limit for the total amount of remote deposits for a given period. The PS-PLATFORM may additionally or alternatively limit the number of periodic transactions for a given user. Depending on the implementation, such limits may be pre-specified for users (such as a default of limiting users to 3 remote deposits per day, and limiting total daily deposits to $10,000) and/or determined based on risks associated with a user and/or a transaction. For example, a user may have a pre-specified deposit limit of $10,000 per day, but if the user requests and/or attempts to deposit an amount greater than that (e.g., a check for $15,000), rather than simply rejecting the deposit, the PS-PLATFORM may notify the user that the amount is greater than their specified deposit limit. In some such embodiments, the PS-PLATFORM may allow the user to request that the deposit limit be raised for this transaction, in some embodiments for an associated fee, and the PS-PLATFORM may notify a pre-specified bank or financial institution administrator to approve or reject the request. In one implementation, the PS-PLATFORM may determine whether the deposit amount has exceeded a maximum one-time remote deposit amount defined by the payee's bank. If so, the PS-PLATFORM may notify the user via a user interface and provide options for the user to proceed. For example, the user may select to submit a request to raise the deposit limit 371, cancel the remote deposit and exit 372, or to only deposit the maximum available amount for next business day availability and send the deposit information to a closest branch for in-person deposit service 373.
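The periodic deposit limits discussed above, using the example defaults of 3 remote deposits per day and a $10,000 daily total, might be screened as follows (a sketch; the actual PS-PLATFORM logic is not specified at this level of detail):

```python
DEFAULT_LIMITS = {"max_deposits_per_day": 3, "max_daily_total": 10_000.00}

def screen_deposit(amount, todays_deposits, limits=DEFAULT_LIMITS):
    """Return (allowed, reason); todays_deposits lists amounts already
    deposited today by this user."""
    if len(todays_deposits) >= limits["max_deposits_per_day"]:
        return False, "daily transaction count reached"
    if sum(todays_deposits) + amount > limits["max_daily_total"]:
        return False, "daily total limit exceeded"
    return True, "ok"
```

A rejection here would not necessarily end the flow: as described above, the platform may instead offer to raise the limit, cancel, or split the deposit.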
In one embodiment, the PS-PLATFORM server may authenticate the user login, and then retrieve user profile 383. In one implementation, the user profile may record information including the user name, user contact information, user credit history, user account information, and/or the like. In one implementation, to assist the depositor in determining which accounts may be available for deposit, the PS-PLATFORM may determine a list of available accounts associated with the user for deposit 384. For example, the PS-PLATFORM may retrieve a list of user authorized accounts for remote deposits. For another example, if the PS-PLATFORM is affiliated with a payee's bank, the PS-PLATFORM may only retrieve a list of user accounts associated with the payee's bank. For another example, the PS-PLATFORM may determine that, based upon the types of the accounts, checking, savings, and investment accounts may be available for deposit of the negotiable instrument. In an alternative implementation, if an indication of deposit amount is available at 384, for example, the user has submitted an amount of deposit to the PS-PLATFORM, or the account selection 381-391 takes place after the user has submitted a check image and the PS-PLATFORM has processed the check image to obtain deposit data, the PS-PLATFORM may determine a list of available accounts for deposit based on the requirements of each account. For example, the PS-PLATFORM may filter out accounts that have a maximum allowable deposit amount lower than the deposit amount. For another example, to assist the depositor in determining which accounts may be available for deposit, a financial institution may display a list of financial accounts to the depositor. In a further implementation, if the PS-PLATFORM is affiliated with a financial institution, the PS-PLATFORM may generate a list of accounts, wherein the PS-PLATFORM is granted access to the account by the account owner even if the account is at a different financial institution.
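The account filtering at 384 can be sketched as follows; the account record fields (type, max_deposit, remote_deposit_authorized) are assumptions for illustration:

```python
def available_accounts(accounts, deposit_amount=None):
    """Filter the user's accounts down to those eligible for this deposit."""
    eligible = [
        a for a in accounts
        if a["type"] in {"checking", "savings", "investment"}
        and a.get("remote_deposit_authorized", True)
    ]
    if deposit_amount is not None:
        # Drop accounts whose maximum allowable deposit is below the amount.
        eligible = [
            a for a in eligible
            if a.get("max_deposit", float("inf")) >= deposit_amount
        ]
    return eligible
```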
For example, a user may submit a remote deposit request to the PS-PLATFORM server at Bank A, but Bank A may provide an option for the user to directly deposit the check into his/her account at Bank B, if Bank A is authorized by the user to access his/her account at Bank B. In one embodiment, the PS-PLATFORM may display a user interface for account selection 385, e.g., a dropdown list, a click-and-choose list, etc., and the user may submit a selection of account 386. The PS-PLATFORM may then determine whether the PS-PLATFORM is granted permission to access the selected account 387. For example, in one implementation, as discussed above, if the PS-PLATFORM is associated with a first payee's bank, but the selected account is associated with a different payee's bank, then the first bank needs to be granted permission by the account owner to access the account at the different bank in order to proceed with check deposit. For another example, if the PS-PLATFORM is a remote deposit service agency, then the PS-PLATFORM may access an account at a payee's bank only with authorization of the account owner. In one embodiment, if the permission is granted 390, the PS-PLATFORM may proceed to determine whether the submitted selection of accounts includes more than one account 392; otherwise, the PS-PLATFORM may notify the user that the selected account is unavailable 391. In one embodiment, if there are multiple accounts selected 392, the PS-PLATFORM may display a user interface for amount allocation 393 to the user and request that the user submit amount allocations 394 for each selected account. For example, in one implementation, if the user selected to deposit into both a checking account and a savings account, the user may then split the deposit amount and enter the portions of the amount associated with each account for deposit processing.
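The amount-allocation step 393-394 implies a simple consistency check: the user-entered portions must be positive and sum to the deposit amount. A sketch, with a small tolerance for rounding:

```python
def validate_allocation(total, allocations):
    """allocations: mapping of account id -> portion of the deposit.
    Valid when every portion is positive and the portions sum to total."""
    if any(amount <= 0 for amount in allocations.values()):
        return False
    return abs(sum(allocations.values()) - total) < 0.005
```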
In one embodiment, at 3410, a financial institution, such as the financial institutions 125 a-c, may register with the publish and subscribe framework 2320 as a subscriber. The financial institution may provide the framework 2320 with notification preferences, constraints, and/or rules as to the messages and/or data (e.g., notifications, check data, etc.) the financial institution is to receive. For example, a rule may be that a notification is to be provided to the financial institution regarding every check whose data is received by the framework 2320 having a routing number assigned to that financial institution. This may be accomplished using the nine-digit routing number located in the bottom left-hand corner of the check. A unique routing number may be assigned to every financial institution in the United States. In one embodiment, the financial institution may also select one or more delivery channel preferences and/or protocols for receiving the messages and/or data. For example, the financial institution may select that a notification is to be delivered via a delivery channel such as EJB (Enterprise JavaBeans), SOAP (Simple Object Access Protocol), or email (SMTP (Simple Mail Transfer Protocol)), for example. In one embodiment, at 3420, the framework 2320 may create the subscription for the financial institution using the registration engine 2330 and may store the subscriber rules in storage, such as in the publish and subscribe database 2324. Known subscription creation techniques may be used. In one implementation, at 3430, the framework 2320 may receive and/or generate a notification regarding a check that has been presented for processing at another financial institution. The receive check data module 2322 may receive the check data and the notification generator 2324 may generate the notification. The notification may comprise check data.
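The registration step 3410 can be sketched as follows. The ABA nine-digit routing number carries a standard 3-7-1 weighted check digit, which a registration engine could use to reject malformed numbers; the registry class itself is an illustrative assumption:

```python
def valid_routing_number(rn):
    """ABA routing number check: nine digits with a 3-7-1 weighted
    checksum that must be divisible by 10."""
    if len(rn) != 9 or not rn.isdigit():
        return False
    d = [int(c) for c in rn]
    total = 3 * (d[0] + d[3] + d[6]) + 7 * (d[1] + d[4] + d[7]) + (d[2] + d[5] + d[8])
    return total % 10 == 0

class SubscriptionRegistry:
    """Stores each subscriber's routing-number rule and delivery channel."""
    def __init__(self):
        self.subs = {}

    def register(self, institution, routing_number, channel="SMTP"):
        if not valid_routing_number(routing_number):
            raise ValueError("invalid routing number")
        self.subs[institution] = {"routing_number": routing_number,
                                  "channel": channel}
```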
At 3440, the framework may send the notification to the subscriber financial institution via the dispatch engine 2350 in accordance with the subscriber rules as checked by the rules engine. Thus, the notification is sent via a subscribed feed. If the check data meets the rules provided by the subscriber financial institution, then the notification may be sent to the subscriber financial institution. At 3450, the subscriber financial institution receives the notification via the subscribed feed over the delivery channel that the subscriber financial institution had selected during registration. At 3460, the subscriber financial institution may act on the notification. For example, the subscriber financial institution may check its storage to determine if the check has already been processed and funded. In such a case, it may be determined that the check is being represented and the subscriber financial institution may deny payment of the represented check. Additionally or alternatively, the subscriber financial institution may send an instruction to the financial institution that has received the check from a user for processing to discontinue processing the check and not allow the check to be deposited or cashed. As another example, upon receiving the notification, the subscriber financial institution may post the check and deduct the funds from the account that the check is drawn against. In one embodiment, after the user endorses the check, they may present the check to a financial institution (e.g., the payee's bank) for deposit using any known technique, such as remote deposit, via an ATM, or presenting the check to a teller at the financial institution, for example. The financial institution receives the check (or an image of the check) at 3510. In an implementation, with respect to remote deposit, a request for access may be received from the user who wishes to deposit a check remotely.
The user may request access to a deposit system operated by a financial institution as described above by way of a computing device, such as a PC or a mobile device, such as a cellular phone, a PDA, a handheld computing device, etc., operated by the user. The access may be through some sort of user login, in some examples. In one implementation, the user may transmit an image file of the check to a financial institution that may be associated with an account for depositing funds, where it is received. The user may send the image file and any secondary data to the financial institution, using any technique, along with a request to deposit the check into a particular user account. At 3520, the financial institution obtains check data from the check, e.g., by processing one or more images of the check with a computing device (e.g., the image processor 2222) or by a representative of the financial institution obtaining data from the check. In an implementation, the financial institution may open an image file of the check 118 and parse financial information from the image file to obtain the check data of the check 118. The image file may be processed using any known technology to retrieve the check data. In an implementation, the financial institution may capture the check information by a scan to create an image of the front and back of the check and may process the image to obtain information such as the payee name, bank, payee's account number, the amount of the check, and the MICR data. The obtained check data may be stored in storage of the financial institution, in an implementation. In one embodiment, the image may be cleaned by the institution system using cleaning operations described herein (e.g., deskewed, dewarped, cropped, etc.).
In an implementation, the image of the check on a background may be processed using techniques to remove warping or dewarp the image, to crop the image, to deskew the image (e.g., rotate the image to horizontal), to identify the corners, etc. Any similar image processing technology may be used, such as edge detection, filtering to remove imagery except the check image or check data in the received digital image file, image sharpening, and technologies to distinguish between the front and the back sides of the check. The financial institution may identify and/or remove at least a portion of data that is extraneous to the check, such as background data. In one implementation, cleaning operations may be augmented by detecting operations, e.g., performed after the cleaning operations, that obtain data from the image. In an implementation, the detection operations may include any of the following: optically reading the MICR line, courtesy amount recognition (CAR), legal amount recognition (LAR), signature block recognition, and payee recognition. Such operations obtain data pertaining to the check, for example. In an implementation, a data file comprising some or all of the data obtained may be generated by the institution system. Any data file format may be used. The data pertains to the check being processed for deposit or cashing, for example, and may also comprise secondary information, which may be information relating to the check, such as an account number, a deposit amount, or a routing number associated with the check, and/or relating to the account for depositing funds, such as the account number and/or the name on the account. In an implementation, the signature from the check may be copied and stored as an image in the data file.
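MICR-line reading, mentioned among the detection operations above, can be illustrated with a simplified parser. Real MICR lines use the E-13B transit and on-us symbols; those are represented here by the placeholder letters T and U, an assumed encoding for illustration only:

```python
import re

def parse_micr(micr_line):
    """Parse a simplified MICR line of the assumed form
    'T<routing>T<account>U<check_no>', where T and U stand in for the
    MICR transit and on-us symbols. Returns None if the line is unreadable."""
    m = re.match(r"T(\d{9})T(\d+)U(\d+)$", micr_line)
    if not m:
        return None
    routing, account, check_no = m.groups()
    return {"routing_number": routing,
            "account_number": account,
            "check_number": check_no}
```

An unreadable MICR line (a None result here) corresponds to the rejection case discussed later for images whose characters cannot be verified.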
In such an implementation, the information from the check other than the signature may be recreated by the institution system (e.g., machine-recreated) and stored in the data file, whereas the signature may be copied, but not machine-recreated, and also stored in the data file. The data file may be stored in storage, such as a database, associated with the institution system. At 3530, the financial institution may send the check data (e.g., as a data file) to a framework, such as the publish and subscribe framework 2320, via a communications network. The data file may be transmitted to the framework 2320 using various means, including, but not limited to, an Internet connection (e.g., via a website) or a cellular transmission. The framework 2320 may receive the check data and, at 3540, may compare the check data with the subscriber rules that have been previously stored in storage, such as the publish and subscribe database 2324. At 3550, the framework 2320 may send the check data to subscriber financial institutions that meet the subscriber rules. In an implementation, a subset of the check data may be sent and/or a notification may be generated and provided to the subscriber financial institutions that meet the subscriber rules. The framework 2320 may generate and transmit an advisory to one or more financial institutions regarding the check whose data is represented in the data file. The advisory may contain information pertaining to the check, such as a bank identifier, the account number, and the check number, for example. The receiving financial institution(s) may use the information in the advisory to flag or otherwise cancel the check 118 so that the check 118 may not be fraudulently or mistakenly represented. At 3560, the framework 2320 may store the check data in storage of the framework, such as in the publish and subscribe database 2324 or other storage.
In one embodiment, for different devices (e.g., scanners, digital camera, mobile devices, etc.), certain permissions may be used in order to allow the browser component to remotely control the image capture device. For example, in one implementation, a user image device may require certificate authentication to allow secure remote control from the PS-PLATFORM server. Such certificates may be digital signatures interfacing an image capture device driver, a Secure Socket Layer (SSL) encrypted certificate, and/or the like. In one embodiment, if the image capture device is a scanner, the drivers of the scanner may be different for different operating environments—e.g., the same scanner may use a different driver depending on whether it is being operated from an environment based on one of the WINDOWS operating systems, an environment based on one of the operating systems used in APPLE computers, an environment based on a version of the LINUX operating system, etc. For example, a Canon imageFORMULA DR-7580 Production Scanner requires a DR-7580 ISIS/Twain Driver version 1.7 for Windows 2000 SP4, XP 32 bit SP3, XP 64 bit SP3, Vista 32 bit SP2, Vista 64 bit SP2, Windows 7 32 bit and Windows 7 64 bit, and Canon LiDE 50 drivers for Mac OS X. In that case, each driver may use different certificates, and different environments may use various different environment-specific technologies to allow the scanner to be controlled from a remote web server. In such cases, the PS-PLATFORM may obtain a large number of certificates, and may interface with a large number of different technologies, to support a large number of scanner-environment combinations and/or other image capture device environments, in order to allow its software to control scanners for a variety of customers.
As such, in one embodiment, if the user image capture device is remotely controllable by the PS-PLATFORM server via 410, the PS-PLATFORM may retrieve a security certificate for the corresponding image capture device 435, and control the image capture device to obtain the check images. For example, in one implementation, the browser component, which may be a security JavaScript running on the web browser application, may create a device digital certificate to enable HTTPS on the image capture device. In one implementation, the JavaScript may download a certificate from the remote PS-PLATFORM if available to interface with the driver of the image capture device, and create a public key for the certificate to be used in SSL encryption to establish an encrypted channel between the PS-PLATFORM server and the image capture device. In one implementation, the PS-PLATFORM may instruct a user to place the front/back sides of the check in front of the image capture device to create images of the check 440. For example, in one implementation, if a scanner connected to a computer is used, the browser component running on a home computer connected to a scanner may control the home scanner to start upon user request and automatically collect the images in an appropriate format and resolution, and can then upload the image for deposit. In such cases, the user may place the physical check in the scanner bed and click "start" on the browser interface. In one implementation, the browser component may instruct the user to flip the physical check and repeat the process for both sides of the check via a web page interface, in order to obtain images of the front and the back.
For another example, in one implementation, a mobile device, such as an Apple iPhone, may initiate a pre-installed program, or download and install a software package instantly from the PS-PLATFORM server, which may facilitate the PS-PLATFORM controlling the mobile device (e.g., the iPhone) to obtain and upload check images. In such cases, a user may position the mobile device and take pictures or videos of both sides of the check, as illustrated below. For another example, in one implementation, for kiosk/ATM deposit, a user may be instructed from the screen of a kiosk/ATM machine to place or insert the check into a check deposit slot for scanning, and/or the like. In one implementation, the PS-PLATFORM may also instruct the user to enter an amount of the check to be deposited. For example, in one implementation, the user may enter a deposit amount on a PS-PLATFORM website, on a kiosk/ATM machine, or send an amount number to the PS-PLATFORM server from a mobile device, and/or the like. In an alternative implementation, the PS-PLATFORM may implement an "atomic deposit" without requesting the user to input deposit information in addition to the check image submission. In this case, the user device (e.g., the mobile device, the home scanner, etc.) may be decoupled from the transaction once the submission of the digital image file for deposit of the check is made. The transaction is thereafter managed by the PS-PLATFORM server. In this manner, incomplete transactions are avoided by moving the transaction processing to the PS-PLATFORM server side at a financial institution (e.g., payee's bank, etc.) after the user submits the digital image file. Any loss or severing of a communications connection between the user computing device and the PS-PLATFORM server, such as due to browser navigation away from the PS-PLATFORM web page, communication failures, user logouts, etc. on the user side, will not affect the processing and the deposit of the check in the digital image file.
Thus, the transaction will not be left in an orphaned state. In another embodiment, if the image capture device is not controllable by the browser application component, the PS-PLATFORM may load an instruction user interface page 415, and instruct the user to manually upload the check images. For example, in one implementation, the PS-PLATFORM server may not have certificates for scanner drivers for a Macintosh computer. In one implementation, the PS-PLATFORM may instruct the user to enter a deposit amount 420, as illustrated in a schematic user interface 450. In one embodiment, the PS-PLATFORM may receive check digital files 445 from the remote user device. In one implementation, the user may send the obtained check images to the PS-PLATFORM server via email, mobile MMS, PS-PLATFORM browser uploading, and/or the like. In one embodiment, if the user image capture device is video-enabled, the PS-PLATFORM may receive video clips of the check. In one implementation, video files may be saved in a series of compliant formats (e.g., AVI, MPEG4, RM, DIVX, etc.) and submitted to the PS-PLATFORM server in a manner similar to that of submitting check image files as discussed above. In one implementation, the PS-PLATFORM may instruct or assist the user to compress one or more video files into a package (e.g., a WinZip package, etc.) and submit the package to the PS-PLATFORM. In another implementation, live video streaming of a check may be captured and transmitted to the PS-PLATFORM server. For example, a user may request to establish a real-time video streaming link with the PS-PLATFORM server when submitting the remote deposit request. In such cases, the user device and the PS-PLATFORM server may employ video streaming software packages such as Apple QuickTime Streaming Server, Macromedia Communication Server, and/or the like.
In one implementation, the user may create a video in a common streaming media format such as MPEG4, 3GPP, and/or the like, and upload an encrypted HTTP streaming video to the PS-PLATFORM web server. For example, in one implementation, a user may employ an Apple iPhone to establish HTTP live streaming to the PS-PLATFORM server via QuickTime software. In one embodiment, if a video file is received or live video streaming is detected 506, the PS-PLATFORM may generate still check images from the video streaming 510. For example, in one implementation, the PS-PLATFORM may utilize video screen capture software packages to generate screen frame grabs and save the grabbed image files. In one implementation, software such as Apple QuickTime, WM Capture, CamStudio, and/or the like, may be employed to obtain frame grabs of the check video streaming. In one embodiment, if the received digital deposit file is an image file, or at least one check image file has been grabbed from the received video clip, the PS-PLATFORM may determine whether the check image is valid 515. In one implementation, the PS-PLATFORM may determine the usability and quality of the check image. For example, in one implementation, the PS-PLATFORM may check whether the check image is in compliance with the image format requirement, the resolution requirement (e.g., at least 200 dpi), and/or the like. In a further implementation, the PS-PLATFORM may perform an Optical Character Recognition (OCR) procedure on the generated check image to determine whether the characters on the check image are legible, and/or whether an endorsement is contained on the back side image of the check. Depending upon the standards imposed by the Check 21 Act and the payee's bank, the PS-PLATFORM may examine the MICR line at the bottom of the digital image. If the MICR line is unreadable or the characters identified do not correspond to known and verifiable information, the bank may reject the image.
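The format and resolution requirements above (e.g., at least 200 dpi) might be screened as follows; the allowed-format set and the function signature are illustrative assumptions:

```python
MIN_DPI = 200
ALLOWED_FORMATS = {"tiff", "jpeg", "png"}

def image_meets_requirements(width_px, height_px, width_in, height_in, fmt):
    """Check a captured check image against assumed format and resolution
    rules: an allowed file format and at least 200 dpi in both dimensions."""
    if fmt.lower() not in ALLOWED_FORMATS:
        return False
    return (width_px / width_in) >= MIN_DPI and (height_px / height_in) >= MIN_DPI
```

A failure here would map to the resubmission request at 522 in the flow below.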
In one implementation, if the check image fails to meet the system requirements 520, the PS-PLATFORM may send a request to the user for resubmission of a check image 522. In another implementation, if the check image is determined to be valid 520, the PS-PLATFORM may proceed to process the check deposit image 525, including large image file compression 530, image quantumization and enhanced edge/corner detection 532, and dewarping/cropping the check image 534 for presentment, as will be further illustrated below. In one embodiment, the PS-PLATFORM may convert the processed check image for presentment and deposit 540. For example, in one implementation, the PS-PLATFORM may save the check image in compliance with the requirements of the payee's bank for substitute checks, such as a Check 21 X9.37 cash letter file, and/or the like. For example, in one implementation, a grayscale conversion of the check image may take a form similar to:

public Bitmap ConvertToGrayscale(Bitmap source)
{
    Bitmap bm = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < bm.Height; y++)
    {
        for (int x = 0; x < bm.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            // standard luma weights for RGB-to-grayscale conversion
            int luma = (int)(c.R * 0.3 + c.G * 0.59 + c.B * 0.11);
            bm.SetPixel(x, y, Color.FromArgb(luma, luma, luma));
        }
    }
    return bm;
}

In one embodiment, the PS-PLATFORM may determine and divide the check image into a number of tiles/sub-images 608. For example, a sub-image may be parsed from the original check image at pixel (100,350) with a width of 100 pixels and height of 50 pixels. In one implementation, the number of tiles/sub-images may be pre-determined by a system operator as a constant. In another implementation, the number may be determined by a formula in proportion to the size of the image. In one embodiment, for each tile/sub-image, a histogram may be generated 610.
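The tiling and per-tile histogram steps 608-610 above may be sketched in Java as follows; the class and method names are illustrative assumptions, and the image is modeled as a simple array of gray values:

```java
// Illustrative sketch of dividing a grayscale image into sub-images and
// building a 256-bin histogram for each sub-image.
class TileHistogram {
    // Builds a histogram of gray values for the sub-image at (x0, y0) with the
    // given width and height; pixels holds one gray value (0-255) per pixel.
    public static int[] histogram(int[][] pixels, int x0, int y0, int w, int h) {
        int[] bins = new int[256];
        for (int y = y0; y < y0 + h; y++)
            for (int x = x0; x < x0 + w; x++)
                bins[pixels[y][x]]++;
        return bins;
    }
}
```

Each tile's histogram then feeds the threshold determination described next.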
In one embodiment, grayscale threshold values for each histogram may be determined 613, using a variety of algorithms such as, but not limited to, statistical analysis, as will be further illustrated below. In one embodiment, the PS-PLATFORM may apply a convolution filter matrix to the quantumized image 618. The convolution filter matrix may be designed to sharpen and enhance edges of the quantumized check image. For example, in one implementation, the PS-PLATFORM may employ the Java Advanced Imaging (JAI) package to apply a sample edge-enhancing convolution filter matrix, which may take a form similar to:

float[] data = {
     0f,  0f,  0f, -1f, -1f, -1f,  0f,  0f,  0f,
     0f, -1f, -1f, -3f, -3f, -1f, -1f,  0f,  0f,
    -1f, -3f, -3f, -1f, -3f, -3f, -1f,  0f,  0f,
    -1f, -3f, -3f, -6f, 20f, -6f, -3f, -3f, -1f,
    -1f, -3f, -1f, 40f, 20f, 40f, -1f, -3f, -1f,
    -1f, -3f, -3f, -6f, 20f, -6f, -3f, -3f, -1f,
    -1f, -1f, -3f, -3f, -1f, -3f, -3f, -1f,  0f,
     0f, -1f, -1f, -3f, -3f, -3f, -1f, -1f,  0f,
     0f,  0f,  0f, -1f, -1f, -1f,  0f,  0f,  0f
};
KernelJAI kernel = new KernelJAI(9, 9, data);
PlanarImage temp = JAI.create("convolve", img, kernel);

In one embodiment, the PS-PLATFORM may detect edges/corners of the check image 620, as will be further illustrated below. In one implementation, “first_high” may be located by going from left to right on the histogram and comparing the number of counts (Y) of each indexed value (X) to the previous value until reaching a right X limit. The search may start with the maximum gray level count initialized to the count Y at point [0][0] of the histogram. The right X limit going from left to right is set to the mode value unless the “second_high” X value is less than the mode value. In that case, the right traversing limit becomes “second_high”. The “first_high” gray index X value is then obtained 635.
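The “first_high” search just described may be sketched as follows; this is a hedged reading of the prose (scan left to right up to the mode, or up to “second_high” if that index is smaller, keeping the gray index with the largest count), with illustrative names:

```java
// Sketch of locating "first_high" on a histogram: scan bins[0..limit] left to
// right and return the index holding the maximum count seen.
class HistogramPeaks {
    public static int firstHigh(int[] bins, int mode, int secondHigh) {
        int limit = Math.min(mode, secondHigh); // right traversing limit per the text
        int bestIndex = 0;
        int bestCount = bins[0]; // start from the count at point [0][0]
        for (int x = 1; x <= limit; x++) {
            if (bins[x] > bestCount) {
                bestCount = bins[x];
                bestIndex = x;
            }
        }
        return bestIndex;
    }
}
```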
For discrete distributions, in one implementation, the mode is the value with the greatest frequency, and for continuous distributions, it is the point where the probability density is at a maximum. It is possible for a distribution to have two or more modes. In one embodiment, the lowest Y value traversing from left to right on the histogram may be located, denoted as “first_min” 637. Also, the lowest Y value traversing from right to left on the histogram may be located, denoted as “second_min” 638. In one implementation, the procedures for locating “first_min” and “second_min” may be similar to those for finding “first_high” and “second_high” on the histogram, within the interval bounded by “first_high” and the mode value. The resulting point found is denoted as “first_min”. In one implementation, “first_min” may be set to 0 by default; if nothing is found, the index gray value is at point [0][0], i.e., the gray value count for gray value zero (black). In one implementation, the PS-PLATFORM may then locate “second_min” by traversing from right to left on the histogram within the interval bounded by “first_min” and “second_high.” The resulting minimum value located is denoted as “second_min.” In one embodiment, “first_min” and “second_min” may be adjusted 640 in special cases. In one implementation, if “first_min” and “second_min” are the same, then the PS-PLATFORM may check whether “second_min” is greater than a boundary value “B1”, wherein boundary values “B1” and “B2” are defined such that B1 is the upper bound of gray value encompassing significant magnitudes of order in gray value counts, and B2 is the lower bound of the gray value on the histogram such that magnitudes of order in gray value counts converge to a sufficiently small count value or zero from the histogram. For example, in one implementation, the image boundaries may be 0 and 255 if there exists full gray value usage for a given image.
In one implementation, if “second_min” is greater than “B1,” then “second_min” is reset to be the resulting value of “second_min” minus the standard deviation times a scaling factor “k”; e.g., a suggested scaling factor in such cases is k=0.3. In that case, the adjusted “second_min” would be (second_min−(standard deviation*0.3)), and “first_min” may then be set to B1. In another implementation, if the determined “first_min” as of 637 is greater than zero but the determined “second_min” as of 638 returns empty or by default is 0, then “second_min” may be reset to be “first_min” subtracted by the standard deviation multiplied by a scaling constant k. In this case, a suggested scaling constant is k=1. For example, in one implementation, the algorithm of locating “first_min” and “second_min” on a given histogram by statistical analysis 630˜640 may be implemented in Java. In one embodiment, threshold values may then be determined using the determined and adjusted “first_min” and “second_min” as lower and upper bounds 642. In one implementation, an image processing clamp method may be adopted, which requires input parameters such as the boundary gray level values “first_min” and “second_min”, and returns the threshold value. For example, in one implementation, a Java implementation of a clamp function may take a form similar to:

int clamp(int x, int low, int high) {
    return (x < low) ? low : ((x > high) ? high : x);
}

In another embodiment, the PS-PLATFORM may define a fuzzy membership function and move the membership function pixel by pixel on the X-axis of the histogram over the range of gray values 663 (as shown in 675). At every position of the membership movement, a measure of fuzziness may be calculated 665 based on a variety of measure definitions, such as but not limited to linear index of fuzziness, quadratic index of fuzziness, logarithmic fuzzy entropy, fuzzy correlation, fuzzy expected value, weighted fuzzy expected value, fuzzy divergence, hybrid entropy, and/or the like.
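A hedged sketch of the minimum-location and clamp-based thresholding steps 637-642 follows; the interval bounds and tie-breaking behavior are assumptions based on the description above, and the names are illustrative:

```java
// Sketch of locating "first_min" (left-to-right) and "second_min"
// (right-to-left) on a histogram interval, plus the clamp step used when
// deriving the threshold from those bounds.
class MinThreshold {
    // Lowest-count index scanning bins[from..to] left to right.
    public static int firstMin(int[] bins, int from, int to) {
        int best = from;
        for (int x = from; x <= to; x++)
            if (bins[x] < bins[best]) best = x;
        return best;
    }

    // Lowest-count index scanning bins[to..from] right to left.
    public static int secondMin(int[] bins, int from, int to) {
        int best = to;
        for (int x = to; x >= from; x--)
            if (bins[x] < bins[best]) best = x;
        return best;
    }

    // Clamp a candidate threshold into [low, high], as in the clamp method above.
    public static int clamp(int x, int low, int high) {
        return (x < low) ? low : ((x > high) ? high : x);
    }
}
```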
For example, in one implementation, the measure of fuzziness may be calculated via the logarithmic fuzzy entropy, which may be defined as E(X) = (1/(M·N·ln 2)) ΣmΣn Gn(Smn), where Gn(Smn)=−Smn ln Smn−(1−Smn)ln(1−Smn), and Smn is the membership function value at pixel (m,n) for an image of size M×N. In one embodiment, the PS-PLATFORM may determine the position with a minimum value of the calculated fuzziness measure 668, and then define the grayscale threshold as the gray value corresponding to the minimum fuzziness position 670, as shown in 680. In one embodiment, if the grayscale check image has vague corners or edges, the enhanced quantumized image may contain reflection at the edges/corners, as shown in 693-694. In one embodiment, grayscale bin values N1 and N2 may be determined based on a predetermined bin count limit L 684 satisfying: (i) N1 and N2 are greater than or equal to the corresponding gray value of the bin count limit L; (ii) N1 is less than the minimum fuzziness threshold T of the histogram; and (iii) N2 is greater than the minimum fuzziness threshold T of the histogram. In one embodiment, the PS-PLATFORM may determine a minimum bin count value M and an average bin count value AVG within the histogram window defined by the range [N1+1, N2−1] 685 (as illustrated by the red circle 695). In one embodiment, if the calculated reflection score is less than a predetermined minimum score P 687 (e.g., P=0.4), then it may indicate a corner sub-image without reflection. The PS-PLATFORM may proceed to implement a corner detection algorithm for the quadrant 688.
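The membership sweep of steps 663-670 may be sketched as follows. The simple clipped-linear membership function and its bandwidth parameter are assumptions (the document does not fix a particular membership function); the entropy term follows the Gn definition above:

```java
// Sketch of fuzzy-entropy thresholding: slide a membership position over the
// gray range, compute the logarithmic fuzzy entropy of the histogram at each
// position, and take the position of minimum fuzziness as the threshold.
class FuzzyThreshold {
    // Shannon-style term Gn(s) = -s ln s - (1-s) ln(1-s).
    static double shannon(double s) {
        double eps = 1e-9;
        s = Math.min(1 - eps, Math.max(eps, s)); // avoid log(0)
        return -s * Math.log(s) - (1 - s) * Math.log(1 - s);
    }

    // Assumed membership of gray value g for position t with bandwidth w.
    static double membership(int g, int t, int w) {
        double s = 0.5 + (g - t) / (2.0 * w);
        return Math.min(1.0, Math.max(0.0, s));
    }

    // Fuzziness of the histogram at position t, normalized per the E(X) formula.
    public static double entropy(int[] bins, int t, int w) {
        double sum = 0;
        long n = 0;
        for (int g = 0; g < bins.length; g++) {
            sum += bins[g] * shannon(membership(g, t, w));
            n += bins[g];
        }
        return sum / (n * Math.log(2));
    }

    // Gray value whose membership position minimizes the fuzziness measure.
    public static int threshold(int[] bins, int w) {
        int best = 0;
        double bestE = Double.MAX_VALUE;
        for (int t = 0; t < bins.length; t++) {
            double e = entropy(bins, t, w);
            if (e < bestE) { bestE = e; best = t; }
        }
        return best;
    }
}
```

For a bimodal histogram, the minimum-fuzziness position falls between the two modes, which is the intended behavior of step 670.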
For example, the PS-PLATFORM may implement a detection algorithm such as the Moravec corner detection algorithm, the multi-scale Harris operator, the Shi and Tomasi corner detection algorithm, the level curve curvature approach, LoG, DoG, and DoH feature detection, the Wang and Brady corner detection algorithm, the SUSAN corner detector, the Trajkovic and Hedley corner detector, the FAST feature detector, automatic synthesis of point detectors with Genetic Programming, affine-adapted interest point operators, and/or the like. In one embodiment, if the reflection score is not less than P, then the corner sub-image is considered to contain reflection, and the corner detection implementation may be skipped to avoid a false or misleading corner. In this case, the determined corners may be fewer than four. In one embodiment, the PS-PLATFORM may determine whether the determined corners are sufficient to project all four corners by symmetry 689. For example, in one implementation, if there are three reflection-free corners and one corner with reflection, or two diagonal corners without reflection, then the position(s) of the corner(s) with reflection may be determined by symmetric projection 692. In another implementation, if there is only one reflection-free corner, or two reflection-free corners on a horizontal/vertical line, the PS-PLATFORM may determine that there is not sufficient information to project all four corners. In that case, the PS-PLATFORM may implement a corner detection algorithm for a quadrant with reflection 690 (e.g., a corner with a relatively higher reflection score), and provide additional information to determine all four corners of the check image 692. In one embodiment, a received check image may contain a skewed, distorted or warped image of a check. In such cases, the check image needs to be “dewarped” prior to information extraction.
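The symmetric projection of step 692 has a compact form: with three reliable corners of a (possibly rotated) rectangular check, the missing corner equals the sum of its two neighbors minus the corner diagonally opposite it. A minimal sketch, with illustrative names:

```java
// Project the fourth corner of a rectangle (or parallelogram) from three
// known corners: a and c are the neighbors of the missing corner, b is the
// corner diagonal to it. Each corner is an {x, y} pair.
class CornerProjection {
    public static int[] projectFourth(int[] a, int[] b, int[] c) {
        return new int[]{ a[0] + c[0] - b[0], a[1] + c[1] - b[1] };
    }
}
```

This is why two diagonal reflection-free corners alone are insufficient, as the text notes: the diagonal pair does not determine the other two corners without further information.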
In another embodiment, the digital image may be distorted to a degree such that the shape of the check image is not rectangular, but rather trapezoidal, as shown in the figure. The image may be distorted in other manners as well; the present disclosure is not limited to the non-limiting and exemplary trapezoidal distortion or to any one type of distortion. For example, in one implementation, check 700 has sides 740 a-d and corners 750 a-d. The check image may appear not to be rectangular, as sides 742 a and 742 b, while parallel to each other, are shown to not be equal in length; sides 742 c and 742 d, while also parallel to each other, are shown to not be equal in length; and angles 762 and 760 are not 90 degrees. To deskew an image, the angle of a reference line within the digital image is determined. More than one reference line may be used as well. The angle of the line or lines is determined using a coordinate system as a reference. For example, the angle of the line is determined in reference to a set of axes, such as y-axis and x-axis 720. The image is then digitally rotated so that the angle is zero (0), and another attempt at OCR is performed on the image to determine if the rotated digital image is acceptable for use. In one embodiment, if the four corners of the check image have been determined, the PS-PLATFORM may determine whether the check image is skewed, warped or distorted. For example, in one implementation, the slope of a line between two corners may be used to determine the amount of distortion. If the line is strictly horizontal or vertical, then the image is considered to be without distortion. Otherwise, the image may be modified to remove the distortion.
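The skew test just described may be sketched as follows; the tolerance parameter and class name are illustrative assumptions:

```java
// Sketch of the deskew test: take two detected corners as a reference line,
// measure its angle against the x-axis, and flag the image as skewed when the
// line is neither horizontal nor vertical within a tolerance.
class SkewDetector {
    // Angle of the line through (x1,y1)-(x2,y2) in degrees, relative to the x-axis.
    public static double angleDegrees(double x1, double y1, double x2, double y2) {
        return Math.toDegrees(Math.atan2(y2 - y1, x2 - x1));
    }

    // The line counts as straight if it is horizontal or vertical within tolDeg.
    public static boolean isSkewed(double x1, double y1, double x2, double y2, double tolDeg) {
        double a = Math.abs(angleDegrees(x1, y1, x2, y2)) % 90.0;
        return Math.min(a, 90.0 - a) > tolDeg;
    }
}
```

A skewed image would then be rotated by the negative of the measured angle before the OCR retry described above.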
In one implementation, the PS-PLATFORM may implement techniques such as spatial transformation (also known as image warping) and/or the like, based on the determined four corners of the check image, to remove distortion and skew. For example, a Java implementation using the JAI packages for dewarping a check image may take a form similar to:

. . .
ParameterBlockJAI pb = new ParameterBlockJAI("Warp");
pb.addSource(image);
pb.setParameter("warp", new WarpPerspective(pt));
pb.setParameter("interpolation", new InterpolationBilinear());
RenderedOp rop = null;
/* perform the Warp operation */
rop = new RenderedOp("Warp", pb, null);
if (rop.getWidth() > image.getWidth() * 3
        || rop.getHeight() > image.getHeight() * 3) {
    return cropped;
}
holder = rop.getAsBufferedImage();
ParameterBlockJAI pb2 = new ParameterBlockJAI("Warp");
pb2.addSource(virtimage);
pb2.setParameter("warp", new WarpPerspective(pt));
pb2.setParameter("interpolation", new InterpolationBilinear());
RenderedOp rop2 = null;
/* perform the Warp operation */
rop2 = new RenderedOp("Warp", pb2, null);
virtimage = rop2.getAsBufferedImage();
// ImageAnalysis.saveImage(holder,
//     "c:\\temp\\tester\\output\\afterwarp-Gray.jpg");
// virtimage = ImageManipulation.virtualEdgeEnhance2(holder);
. . .

In another embodiment, if no such “void check” indication is found, the PS-PLATFORM may compare check identification data with the check identification data within a limited subset of previously deposited checks 810, e.g., checks deposited to the same payee's account within the past ten months, the last one hundred deposits to the same payee's account, etc. Check identification data is any data that identifies the financial instrument in a relatively unique fashion.
For example, check identification data may comprise MICR data such as the payer account number, payer bank routing number, and check number; payer information such as the payer name and address; the deposit amount of the check; and/or the like. In one implementation, check image characteristics, such as a combination of characteristics from a payer signature line, may also be used as check identification data. In one implementation, log files containing check identification data may be kept for all deposits. Transaction logs may be maintained for customer accounts, or for the financial institution as a whole. In one implementation, if there is a match 815 with any previously deposited check, the PS-PLATFORM may flag, delay or terminate the deposit transaction 820. For example, in one implementation, flagging the transaction may involve setting a flag that will cause some further scrutiny 832 of the transaction at a later time. The subsequent transaction analysis 832 may automatically analyze further aspects of the check and compare it to a suspected duplicate check, or may report to a system operator (e.g., bank clerks, etc.). For another example, the PS-PLATFORM may delay the transaction by causing the transaction to be processed at a later time. In one implementation, delaying and flagging may be used together to provide adequate time for the additional scrutiny required by the flag. In a further implementation, the PS-PLATFORM may terminate and abort the transaction. In one embodiment, when the transaction is flagged, delayed, or terminated, a notification may be sent to the customer advising them of the action, and optionally of the reason for the action as well. In one embodiment, the customer may be notified that the check has been identified as a possible duplicate and may be instructed to provide additional check identification data, such as re-capturing and submitting a new check image.
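For illustration, the duplicate test of steps 810-820 may be sketched as matching an identification tuple against a log of prior deposits; the tuple fields and class name are illustrative assumptions based on the identification data listed above:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of duplicate detection against a deposit log keyed by check
// identification data (routing number, account number, check number, amount).
class DuplicateDetector {
    private final Set<String> depositLog = new HashSet<>();

    private static String key(String routing, String account, String checkNo, String amount) {
        return routing + "|" + account + "|" + checkNo + "|" + amount;
    }

    // Returns true (a match) when the same identification data was seen before;
    // otherwise records the deposit and returns false.
    public boolean isDuplicate(String routing, String account, String checkNo, String amount) {
        return !depositLog.add(key(routing, account, checkNo, amount));
    }
}
```

A true result here corresponds to the flag/delay/terminate branch at 820.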
In one embodiment, if there is no match detected in the comparison of 810, the PS-PLATFORM may determine whether to enlarge the comparison range of previously deposited checks 825. If so 827, the PS-PLATFORM may enlarge the comparison range to compare check identification data within a full set of previously deposited checks 830. For example, in one implementation, the PS-PLATFORM may further search and compare the check identification data with all the stored deposit information in the database. In one implementation, the limited subset comparison may be performed in real time, and the remainder of the comparison may be conducted at a later time. For example, in one implementation, the PS-PLATFORM may perform a multiple representment check within a 6-month database in real time on all PS-PLATFORM channels and records. In another implementation, the PS-PLATFORM may call a service to handle bulk transactions at batch time and to obtain a consolidated re-presentment report response, as will be further illustrated below. After a full-set comparison, if a match is located 833, the PS-PLATFORM may perform subsequent transaction analysis 832 of the check deposit. Otherwise, the check is considered to be valid 835, and the PS-PLATFORM may proceed to deposit funds to the payee's account. In one embodiment, when the PS-PLATFORM receives a check presentment to a payee's bank 836, the PS-PLATFORM may process the request and send extracted deposit information and check images to a centralized warning system for presentment services 840. In one embodiment, the presentment services 840 may include a registration service 841, a real-time detection service 842 and a batch detection service 843. In one implementation, the payee's bank may subscribe via the registration service 841 in order to receive presentment notifications published by the warning system.
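The two-stage search above (a limited recent subset in real time, then an optional full-set fallback at 830) may be sketched as follows; the window size parameter and the names are illustrative assumptions:

```java
import java.util.List;

// Sketch of tiered representment search: scan the most recent deposits first,
// then optionally enlarge the comparison range to the full history.
class TieredSearch {
    // history is ordered oldest to newest; recentWindow bounds the first pass.
    public static boolean matches(List<String> history, String id, int recentWindow, boolean enlarge) {
        int start = Math.max(0, history.size() - recentWindow);
        for (int i = history.size() - 1; i >= start; i--)   // limited subset, newest first
            if (history.get(i).equals(id)) return true;
        if (!enlarge) return false;
        for (int i = start - 1; i >= 0; i--)                // full-set fallback
            if (history.get(i).equals(id)) return true;
        return false;
    }
}
```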
In one implementation, attributes associated with the payee's bank may be maintained in a registration repository, such as a list of the transit routing numbers the payee's bank owns, a callback service URL for presentment notification, callback credentials, and/or the like. For example, in one implementation, when a payee's bank sends check information and check images to the centralized warning system for a representment check, the system may send check presentment notifications to a variety of subscribed banks 838. In one embodiment, the real-time detection service 842 may implement a representment search within a centralized repository 845 of previous deposits. In one implementation, the real-time detection service may take a form similar to the process described above. In one embodiment, the batch detection service 843 may process representment detection for bulk transactions in an “off-line” manner. For example, in one implementation, a payee's bank 839 may use this service to send all remote-deposit transactions at the end of each day for re-presentment validation and expect a next-day response. In one implementation, an X9.37 image cash letter containing the remote deposit transactions may be submitted to the centralized warning system, and a response report containing presentment/re-presentment information may be published to the payee's bank 839. In one embodiment, the clearinghouse bank may provide a check identification service 870 to identify a type of the check 873. For example, in one implementation, a check type identifier 873 at the clearinghouse bank may determine whether the deposited check is a U.S. Treasury check 874, a U.S. Postal money order 875, a cashier's check 876, a Federal Reserve Bank check 877, a certified check 878, a Federal Home Loan Bank check 879, a teller's check 880, a state/local government check 881, an on-us check 882, and/or the like, based on the received check information.
In one implementation, the check type identifier 873 may inspect the MICR information on the check to determine its type. In one embodiment, if the check type identifier 873 is unable to determine the type of the check, the identification service 870 may proceed to other inspection procedures 885 to determine whether the deposited check is fraudulent. For example, in one implementation, the identification service 870 may send alerts and present the check data to fraud experts. In one embodiment, the external agency or clearinghouse bank may return an indication of the check type to the payee's bank. In one implementation, the payee's bank may determine whether the check is Regulation CC compliant based on the received check deposit data and the check type indication from the clearinghouse 857. For example, in one implementation, U.S. Treasury checks, certified checks and/or the like may be considered Regulation CC safe and eligible for next-business-day availability of deposit confirmation 860. In another implementation, if the check is not Regulation CC safe, the payee's bank may inspect the payee's account to determine whether funds available in the account would cover the deposited amount in the event of fraud 862, and apply appropriate holds and limits on the deposit amount 865. The deposit may render limited fund availability 868 from the payee's bank. For example, in one implementation, a user may request to deposit a $1000.00 non-Regulation CC safe check, but only has $500.00 in the account. In such cases, the PS-PLATFORM may receive and verify remote deposit data from the user, and the payee's bank may provisionally credit $1000.00 to the user's account. In one implementation, the payee's bank may generate a substitute check and send the substitute check for clearinghouse clearance.
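The funds-availability example above ($1000.00 non-Regulation CC safe check against a $500.00 balance) may be sketched as follows; the specific hold policy shown is an assumption, not the PS-PLATFORM's actual rule:

```java
// Sketch of a funds-availability policy for a provisionally credited check:
// a Regulation CC safe check is fully available, while a non-safe check is
// limited to the funds already in the account until it clears.
class AvailabilityPolicy {
    // Immediately available funds (in cents) for a provisional credit of
    // depositAmount, given the existing account balance (in cents).
    public static long availableNow(boolean regCcSafe, long depositAmount, long balance) {
        if (regCcSafe) return depositAmount;       // e.g., U.S. Treasury checks
        return Math.min(depositAmount, balance);   // hold the uncovered remainder
    }
}
```

Under this sketch, the $1000.00 deposit against a $500.00 balance yields $500.00 of immediate availability, matching the limited fund availability 868 described above.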
In one implementation, if the payee's bank receives a clearinghouse result indicating that the deposited check is not Regulation CC compliant, the payee's bank/PS-PLATFORM may notify the user via a user interface to provide a variety of deposit options, e.g., display messages on the RDC Deposit website, send emails/instant messages to the user device, and/or the like. For example, the user may choose to deposit the maximum allowable funds at the moment, or to cancel the deposit, or to provisionally post the check but mail the physical check to the bank for clearance, and/or the like. In one implementation, the clearinghouse 8004 may process the received check deposit information, and send the data to a representment data store 8005 for representment detection. For example, in one implementation, a query may be formed based on the received check deposit information within the stored check data at 8005. In an alternative implementation, if the payee's bank 8001 is a subscriber with the PS-PLATFORM, e.g., registered for the deposit data publishing service, the PS-PLATFORM may query a “listener registration” data store 8007 for the payee bank's subscription, and send published messages associated with the subscription to the payee's bank 8002. For example, the payee's bank 8002 may subscribe for information regarding a payee's most recent 50 deposits. In that case, the clearinghouse may inspect the subscribed information regarding the payee's most recent 50 deposits to detect check representment. In one embodiment, the PS-PLATFORM may update registration information periodically, intermittently, and/or in response to client requests. In one implementation, a financial institution may provide information for registration with the subscription service such as, but not limited to, a list of the transit routing numbers it owns, a callback service URL for presentment notification, callback credentials, and/or the like.
In another implementation, if a funding bank 8010, e.g., a payor's bank, has subscribed with the PS-PLATFORM, the clearinghouse may send a presentment notification published by a warning system to the payor's bank. In one implementation, the PS-PLATFORM may detect representment by using the check item metadata information, and perform a search and compare on the check item repository to detect re-presentment. If a match is not found, the PS-PLATFORM may record the check item transaction information and add it to the previously presented check repository. For example, in one implementation, the PS-PLATFORM may use the transit routing number from the check item metadata and publish the presentment/re-presentment information (“checkDeposited”) to the subscriber bank (e.g., the payee's bank) that the check item was drawn on. In the case of a reversal of presentment, the PS-PLATFORM removes the record of the check item and publishes the reversal of presentment information (“checkDepositReversed”) to the subscriber bank that the check item was drawn on. In an alternative embodiment, if the payee's bank provides an X9.37 image cash letter containing the remote deposit transactions for the clearinghouse check, the PS-PLATFORM may utilize information contained in the cash letter, and publish presentment/representment notifications to the subscriber bank. For example, in one implementation, the check metadata received from a financial institution may comprise a number of predefined fields. In one embodiment, the account number misread characters threshold 8120 and the MICR misread characters threshold 8122 may be specified by the PS-PLATFORM system administrator. If the received check metadata does not violate any of the criteria, the PS-PLATFORM may proceed with a MICR match 8115 to detect representment and send the result to the financial institution.
In one implementation, a MICR reader may use different characters to indicate the 4 MICR symbols on the extracted MICR string. For example, some readers may use the ‘A’ ‘B’ ‘C’ ‘D’ format, e.g., A021001208A7000609C11051, and some readers may use ‘;’ ‘:’ ‘<’ ‘-’, e.g., :083000593:6901526<1672. For another example, MICR misreads may be represented by characters like ‘*’ or ‘@’, e.g., A131476231A62**125C10302, and blank spaces in the MICR line may be preserved. In one implementation, at MICR comparison 8210, the MICR string from the check metadata may be modified to replace all non-numeric characters except misreads with spaces, which preserves blank spaces in the original MICR string. For example, the obtained MICR may be a string with numerals, misreads and blank spaces. For the previously discussed examples, the modified MICR strings may take forms similar to “021001208 7000609 11051,” “083000593 6901526 1672,” “131476231 62**125 10302,” and/or the like. In one implementation, a MICR string retrieved from a previously stored record may also be modified as above. In one implementation, a string comparison of the two MICRs may be made to determine a match 8212. Persisted MICR strings in the previously stored records may retain the original MICR characters used to represent the symbols so as to preserve the original data. In one implementation, if there are misreads in either of the MICRs, a regular expression (RegEx) comparison is performed; this returns a successful match only if the non-misread characters between the two MICRs match exactly. The list is further traversed to determine the number of MICR strings that matched on the RegEx comparison.
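The normalization and misread-tolerant comparison of steps 8210-8212 may be sketched as follows. The trim of leading/trailing spaces and the misread set {‘*’, ‘@’} are assumptions based on the examples above, and the wildcard comparison is written character-wise (equivalent to the RegEx comparison the text describes, in which each misread matches any single character):

```java
// Sketch of MICR normalization and comparison: replace every character that
// is neither a digit nor a misread marker with a space, then compare; misread
// positions in either string match any character.
class MicrMatcher {
    static boolean hasMisread(String s) {
        return s.indexOf('*') >= 0 || s.indexOf('@') >= 0;
    }

    public static String normalize(String micr) {
        StringBuilder sb = new StringBuilder();
        for (char c : micr.toCharArray()) {
            if (Character.isDigit(c) || c == '*' || c == '@') sb.append(c);
            else sb.append(' '); // symbol characters become spaces
        }
        return sb.toString().trim();
    }

    public static boolean matches(String a, String b) {
        String na = normalize(a), nb = normalize(b);
        if (!hasMisread(na) && !hasMisread(nb)) return na.equals(nb); // exact comparison
        if (na.length() != nb.length()) return false;
        for (int i = 0; i < na.length(); i++) {
            char x = na.charAt(i), y = nb.charAt(i);
            boolean wild = x == '*' || x == '@' || y == '*' || y == '@';
            if (!wild && x != y) return false; // non-misread characters must match exactly
        }
        return true;
    }
}
```

Applied to the examples above, A021001208A7000609C11051 normalizes to “021001208 7000609 11051”, and the misread string 62**125 matches 6255125 position by position.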
For example, in one implementation, if the PS-PLATFORM detects that the MICR in the received check data is the same as a MICR in the generated list 8216, then a representment occurrence is detected and a notification may be generated. For another example, if the PS-PLATFORM has traversed the list of MICRs and does not find a match 8220, in one implementation, the PS-PLATFORM may send a notification to the financial institution indicating no re-presentment has been found 8220. In an alternative implementation, the PS-PLATFORM may include a re-presentment probability factor and a number of potential matches 8222 in the notification to indicate an estimate of potential re-presentment. In one implementation, a subscriber bank may reach each distributed clearinghouse and exchange information via routing between clearinghouses. In one implementation, a central data repository of check data may be employed for each distributed clearinghouse to access. In an alternative implementation, a distributed database may be associated with each clearinghouse. In an alternative embodiment, a dead-check repository may be employed to store records of negotiable instruments, such as checks, that have been processed by a network of financial institutions. The records, accessible when preparing to deposit, either remotely or physically, a negotiable instrument, serve as an indication to a financial institution as to whether the negotiable instrument has been previously presented. Thus, accessing the record before confirming the deposit secures deposits by preventing re-presentment of negotiable instruments. Alternately, at 904, the record of the check may be created by the financial institution and may include forming a representation of the check from information contained on the check. The representation may be an image. For example, if the electronic data representative of the check is in the form of a digital image, the digital image may be used as the record of the check.
Alternatively, the representation may be a data string including one or more identifying characteristics of the check organized in a predefined order. Alternate to 903 or 904, at 902, if the electronic data representative of the check is a digital image of the check, the record of the check may be created by financial institution and may include forming an image of one or more portions of the electronic data representative of the check. In this manner, one or more portions of the check deemed to include identifying characteristics may be used as the record of the check. If more than one portion is used to create the record, the portions may be subsequent portions, with each portion containing one or more identifying characteristics. Or the image may comprise non-subsequent portions of the check. For example, the portion of the check containing the date and check number may be placed next to the portion of the check containing the signature of the payer to form the record of the check. Other means of creating a record of the check may be employed. Furthermore, financial institution may not be responsible for creating the record of the check and may instead provide, by way of network for example, the electronic data representative of the check to an outside source (not shown) or to another financial institution capable of creating the record. The outside source or other financial institution may create the record by forming a representation of the check from information contained on the check, by forming an image of one or more portions of the check, or by other means. The record may then be sent to financial institution through network for example. An analysis may be performed to determine whether the record of the check, created by one of financial institution or an outside source, is unique or has already been created and stored in the repository. The analysis may include comparing the created record to a plurality of records stored in dead-check repository. 
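The data-string form of a record described above (identifying characteristics organized in a predefined order, suitable for direct comparison against the dead-check repository) may be sketched as follows; the particular fields and their order are illustrative assumptions:

```java
// Sketch of a dead-check record built as a data string of identifying
// characteristics in a predefined order, so records can be compared directly.
class CheckRecord {
    public static String asDataString(String date, String checkNumber,
                                      String accountNumber, String payerSignatureHash) {
        // Predefined order: date | check number | account number | signature hash
        return String.join("|", date, checkNumber, accountNumber, payerSignatureHash);
    }
}
```

Because the order is fixed, two records for the same check produce identical strings, so the repository comparison reduces to string equality.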
If the record of the check has not already been stored in the repository, the record may serve as an indication that the check attempted to be deposited by payee has not previously been deposited. Accordingly, financial institution may proceed with the remote deposit as desired by payee. Additionally, financial institution may perform actions to assist in subsequent record assessments used to determine if subsequent remote and/or physical deposits may be processed. At 888, the unique record of the check is stored in dead-check repository. At 907, following the storing of the check in a dead-check repository 906, the remote deposit is processed. At 908, funds associated with the check are credited to account held with financial institution. At 909 the funds identified by the check are deducted from the account of payer, for example, account associated with the check. If it is determined that the record of the check already exists in dead-check repository, then the remote deposit is desirably rejected and/or other predetermined actions may be taken (e.g., notify the payer, the payee, the financial institution, other financial institutions). Alternatively, the dead-check repository may detect similarities but not an exact match between the record being verified and a record stored in dead-check repository. For example, one inconsistency between the record being verified and a stored record may exist, while multiple portions of the record being verified may match the stored record. If such an inconsistency leads to an uncertainty in the unique record determination, at 905, financial institution may proceed depending upon predetermined rules developed by financial institution. At 912, financial institution, upon receipt of the check, obtains identifying characteristics of the check. 
Several identifying characteristics may include, but are not limited to, the signature of payer, the date as indicated by payer, and the account number associated with account from which the funds identified on the check will be deducted. At 913, a digital image comprising some or all the portions of the electronic data representative of the check and/or the identifying characteristics may be formed. The digital image may be a smaller sized image than the electronic data representation of the check, for example. At 914, a record of the check in a format consistent with the format of the records stored in dead-check repository is created. The creation of the record in a format consistent with the stored records may include, for example, forming a composite digital image comprising each digital image of each portion. Each digital image may be arranged in a predefined manner to form the composite digital image consistent with the plurality of records stored in the dead-check repository. At 915, a confirmation process is implemented to determine if the record of the check is unique or has already been stored in the repository. As the record has been formed to be consistent with the format of the other records stored in dead-check repository, the confirmation process may include a direct comparison between the created record and the stored records. Optionally at 916, the record of the check is stored with the plurality of records currently stored in the database. If the record matches a record already stored in the repository, the record may be stored with the previously stored record to serve as an indication that the check associated with the record was presented for re-presentment, for example. At 921, payee is provided with a notification related to the uniqueness or verification of the negotiable instrument, based on a comparison of a record created based on the negotiable instrument and previously generated records stored in a repository, for example. 
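The uniqueness check described above, creating a canonical record from a check's identifying characteristics and comparing it against the dead-check repository, can be sketched in Python. This is purely an illustration of the algorithm; the field names, the hashing scheme, and both function names are our assumptions, not part of the patent:

```python
import hashlib

def make_record(check):
    """Form a canonical record from a check's identifying characteristics.

    The fields are arranged in a predefined order, as the text describes,
    then hashed so records can be compared directly.
    """
    canonical = "|".join([check["payer_signature"], check["date"],
                          check["account_number"], check["check_number"]])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def try_deposit(check, repository):
    """Process a deposit only if the check's record is not already stored."""
    record = make_record(check)
    if record in repository:
        return False          # re-presentment detected: reject the deposit
    repository.add(record)    # unique: store the record and proceed
    return True

repository = set()
check = {"payer_signature": "J. Doe", "date": "2009-01-15",
         "account_number": "123456789", "check_number": "1001"}
print(try_deposit(check, repository))  # → True (first deposit is accepted)
print(try_deposit(check, repository))  # → False (duplicate is rejected)
```

A real implementation would compare image-derived records rather than clean text fields, and would handle the near-match case the text mentions, but the repository lookup itself reduces to this membership test.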
The notification may be provided electronically through email or text message over network, for example. If the notification is that the negotiable instrument is unique or does not already exist in the repository, then at 922, payee receives indication that the appropriate funds have been deposited into account owned by payee. If, on the other hand, the notification indicates that the negotiable instrument is not unique or already exists in the repository, at 923 payee receives an indication and, e.g., a notification to visit or otherwise contact financial institution 120 a in order to attempt to deposit the negotiable instrument. At 924, payee may be required by, for example, financial institution, to void the negotiable instrument. This may include sending a copy of the negotiable instrument to financial institution or to an entity as indicated by financial institution. Payee may be required to perform the void process within a predetermined period of time. In one embodiment, check 1000 also has unmodified modifiable section 1002. Unmodified modifiable section 1002 may use various types of modifiable inks or other mechanisms so that when a stimulus is applied to unmodified modifiable section 1002, section 1002 is modified to convey information on check 1000. In an alternative embodiment, check 1000 has modified modifiable section 1008, which was formerly unmodified modifiable section 1002. After the application of a stimulus to unmodified modifiable section 1002, the ink or mechanism within unmodified modifiable section 1002 may be changed to show modified modifiable section 1008. Check 1000 now shows the term, “VOID” within modified modifiable section 1008. The change may be used to provide information to prevent a second or subsequent presentment of check 1000 for deposit. In an alternative embodiment, an ink sensitive to various stimuli that modifies after removal of a protective cover may be employed. Check 1000 has removable coating 1010 a which seals modifiable ink section 1010 b from exposure to stimuli. 
The stimuli may be of various types, including, but not limited to, air or light. For example, prior to deposit of the negotiable instrument, there may be a requirement to remove coating 1010 a to indicate the underlying code, which is shown as “VO”. Upon removal of coating 1010 a, ink section 1010 b is exposed to light, causing the ink to modify to show “VO”. Thus, coating 1010 a may be of a range of materials that blocks the exposure of section 1010 b to a stimulus. Once exposed, ink section 1010 b may be permanently or temporarily modified. Ink section 1010 b indicia may also be encoded to increase the difficulty of defeating the void process. For example, ink section 1010 b may be a bar code unique to check 1000 itself and may be designed in a manner that is difficult to determine what the bar code will be. In order to deposit check 1000, coating 1010 a may be removed to show the encoded indicia. If the indicia is encoded, the user may be forced to remove coating 1010 a because of the difficulty of determining the code without exposing the indicia by removing coating 1010 a. After the deposit is processed, bank 10430 may wish to prevent the use of check 10414 in another deposit operation. In one exemplary and non-limiting embodiment, bank 10430 may cause the modification of check 10414 to prevent a subsequent presentment of check 10414. Bank 10430 may send a communication to account owner 10410 to void the check. The communication may be directed at scanner 10412 with or without the knowledge of account owner 10410. In other words, bank 10430 may not complete the deposit operation if account owner 10410 intercedes in the void operation. Bank 10430 may send a signal to scanner 10412 to scan a surface of check 10414 at a speed to cause the modification of an ink section on check 10414, as described above. 
Once the scan operation is completed, bank 10430 may wait to complete the deposit operation until a communication or notice is received that check 10414 was voided. The notice may include the slow scanned image showing the modification of check 10414. In an alternative embodiment, a user receives 10500 a check from another individual, for example, if the user is owed money or the check is used as payment for a good or service. The user endorses 10502 the check by signing the check, thus indicating the intent to deposit the check into an account. The user generates 10504 a digital image file by scanning at least one surface of the check using a scanner. The user sends 10506 the digital image file to the bank which controls the user's account. After processing the deposit request, a communication is generated and transmitted to void 10508 the check. The communication may be directed to the user and/or may be directed to another mechanism. For example, the communication may be directed to the user's scanner with or without the knowledge of the user. The communication may contain instructions to re-scan the check at a certain speed to cause the application of a stimulus to modify the check. In an alternative embodiment, the bank receives 10600 a deposit request from a user. After acknowledging the deposit request, the bank then receives 10602 a digital image of the check. The digital image may be used by the bank to process the deposit request. The digital image may be used alone or in conjunction with additional information such as MICR information. After verifying 10604 the digital information, the bank processes 10606 the deposit request. The verification may include, but is not limited to, the verification of the quality of the digital image, the verification of any data retrieved from the digital image, the verification of additional information received along with the digital image, and/or the verification that the check has not been deposited before. 
After the bank verifies 10604 the digital information received and processes 10606 the deposit request, the bank then may transmit 10608 a void signal to void the check. As described earlier, there may be various manners in which to void the check, including, but not limited to, the application of a stimulus such as light, heat or sound. Upon application of the stimulus, the check is voided 10610. In an alternative embodiment, a scanner is used to apply the stimulus. A bank receives 10700 a deposit request. The bank then receives 10702 a digital image of the check and account information. The bank verifies 704 the information and processes 10706 the deposit request. After the deposit is in process, to complete the process, the bank transmits 10708 a void signal to the user's scanner. The void signal may contain instructions to rescan a surface of the check at a certain speed to cause the application of a stimulus. The ink may be modified based upon the application of a certain magnitude or brightness of light, or heat may be generated by that brightness of light, for a certain amount of time, which may correspond to a scan speed. After the stimulus is applied, the bank deposits 10710 the funds into the user's account. The present disclosure may incorporate a check modifiable by various stimuli. In an alternative embodiment, a system may use radio waves to modify a check. Check 10814 has embedded RFID tag 10804. RFID tag 10804 is an object that is sensitive to radio signals and can be incorporated into check 10814. RFID tag 10804 can be read and modified at various distances. Typically, an RFID tag, such as RFID tag 10804, has two parts: an integrated circuit for storing and processing information as well as receiving instructions via radio waves and an antenna for receiving and transmitting a signal. Some RFID tags do not have the integrated circuit, thus reducing cost and bulk of using an RFID tag. 
The RFID tag may be programmed to initially indicate that check 10814 has not been deposited. Account owner 10802 may use scanner 10812 to deposit check 10814 into account 10860 of bank 10830 using communication pathway 10820. After check 10814 is deposited into account 10860, bank 10830 may wish to modify RFID tag 10804 to indicate that check 10814 has been deposited. Thus, when the information contained by RFID tag 10804 is subsequently read, RFID tag 10804 may indicate that check 10814 has previously been deposited. Bank 10830 may cause radio transmitter 10806 to transmit a radio communication, through communication connection 10840, to RFID tag 10804 of check 10814. The radio signal may cause RFID tag 10804 to modify its information to indicate that check 10814 has been previously deposited. Communication connection 10840 may be of various types, including, but not limited to, a wireless cellular connection or an internet connection. Additionally, radio transmitter 10806 may be of various types, including, but not limited to, a local internet access point and a cellular transceiver. The scanner used may also be of various types. In an alternative embodiment, a scanner designed for the deposit and voiding of checks through remote means may be employed. Deposit machine 10912 is configured to provide deposit services. Deposit machine 10912 may be an integrated machine or a system having various parts, including a scanner to create a digital image of a check, such as check 10914, and a stimulus generator to cause the application of a stimulus to check 10914. Account owner 10902 initiates deposit machine 10912 to generate a digital image of check 10914, the image being transmitted to bank 10930 via communication connection 10920 for deposit into account 10960. After the bank processes the deposit of check 10914, bank 10930 may transmit a void signal to deposit machine 10912 to initiate a void process. 
The void signal may be transmitted using various communication methods, including, but not limited to, an internet connection, a telephone connection such as a wireless telephone, or a facsimile transmission if deposit machine 10912 is configured to receive facsimile messages. Deposit machine 10912 may void check 10914 according to the configuration of deposit machine 10912 and/or the void message received. For example, deposit machine 10912 may be configured to apply an ultraviolet light in response to a void signal. Deposit machine 10912 may also be configured to rescan check 10914 and send the rescanned digital image to bank 10930 to show that the void stimulus has been applied and that check 10914 has been voided. In one embodiment, the PS-PLATFORM may prompt the user to log in with an online ID and password 1204. Upon successful login, the PS-PLATFORM may provide deposit account information to the user, and allow the user to input a deposit amount 1205. In one embodiment, the PS-PLATFORM may provide details for the user on digital signature verification of the website 1208, and instruct the user to scan a paper check 1210. In one implementation, the PS-PLATFORM may remotely control a scanner connected to the personal computer of the user, and let the user choose the scanner from a drop-down list 1212. In one embodiment, the PS-PLATFORM may then instruct the user to place the paper check in the scanner bed. If the paper check is not properly positioned, the PS-PLATFORM may display an incomplete, skewed or improperly positioned check image to the user 1215 and 1218 such that the user may choose to rescan. 
In one implementation, if the user has properly positioned the paper check in a rectangle area as instructed via the PS-PLATFORM interface 1220, the PS-PLATFORM may request the user to select a bottom right corner of the scanned check image and then detect an area of the image of the check from its background 1222. In one embodiment, the PS-PLATFORM may instruct the user to endorse the check and scan the back side of the check 1225. If the PS-PLATFORM detects that the check is not endorsed 1228, the image may be denied and an error message may be displayed 1230. In one embodiment, if both sides of the check have been successfully scanned and the PS-PLATFORM verifies the uploaded images, the PS-PLATFORM may deposit the funds into the user account and provide deposit details to the user, including the scanned check images. In one implementation, the PS-PLATFORM may instruct the user to void and dispose of the deposited paper check 1235. In one embodiment, an image of the ripped check may be submitted to verify the check has been voided in a similar manner. Typically, users, which may be people and/or other systems, may engage information technology systems (e.g., computers) to facilitate information processing. In turn, computers employ processors to process information; such processors 1303 may be referred to as central processing units (CPUs). In one embodiment, the PS-PLATFORM controller 1301 may be connected to and/or communicate with entities such as, but not limited to: one or more users from user input devices 1311; peripheral devices 1312; an optional cryptographic processor device 1328; and/or a communications network 1313. PS-PLATFORM controller 1301 may be based on computer systems that may comprise, but are not limited to, components such as: a computer systemization 1302 connected to memory 1329. Optionally, the computer systemization may be connected to an internal power source 1386. 
Optionally, a cryptographic processor 13 PS-PLATFORM controller and beyond through various interfaces. Should processing requirements dictate a greater amount speed and/or capacity, distributed processors (e.g., Distributed PS-PLATFORM), mainframe, multi-core, parallel, and/or super-computer architectures may similarly be employed. Alternatively, should deployment requirements dictate greater portability, smaller Personal Digital Assistants (PDAs) may be employed. Depending on the particular implementation, features of the PS-PLATFORM may be achieved by implementing a microcontroller such as CAST's R8051XC2 microcontroller; Intel's MCS 51 (i.e., 8051 microcontroller); and/or the like. Also, to implement certain features of the PS-PLATFORM, some feature implementations may rely on embedded components, such as: Application-Specific Integrated Circuit (“ASIC”), Digital Signal Processing (“DSP”), Field Programmable Gate Array (“FPGA”), and/or the like embedded technology. For example, any of the PS-PLATFORM component collection (distributed or otherwise) and/or features may be implemented via the microprocessor and/or via embedded components; e.g., via ASIC, coprocessor, DSP, FPGA, and/or the like. Alternately, some implementations of the PS-PLATFORM may be implemented with embedded components that are configured and used to achieve a variety of features or signal processing. Depending on the particular implementation, the embedded components may include software solutions, hardware solutions, and/or some combination of both hardware/software solutions. For example, PS-PLATFORM PS-PLATFORM features. A hierarchy of programmable interconnects allow logic blocks to be interconnected as needed by the PS-PLATFORM PS-PLATFORM may be developed on regular FPGAs and then migrated into a fixed version that more resembles ASIC implementations. Alternate or coordinating implementations may migrate PS-PLATFORM controller features to a final ASIC instead of or in addition to FPGAs. 
Depending on the implementation all of the aforementioned embedded components and microprocessors may be considered the “CPU” and/or “processor” for the PS-PLATFORM. The power source 13 1386 is connected to at least one of the interconnected subsequent components of the PS-PLATFORM 1309 may accept, communicate, and/or connect to a number of storage devices such as, but not limited to: storage devices 13 1310 may accept, communicate, and/or connect to a communications network 1313. Through a communications network 1313, the PS-PLATFORM controller is accessible through remote clients 1333 b (e.g., computers with web browsers) by users 1333 a. PS-PLATFORM), architectures may similarly be employed to pool, load balance, and/or otherwise increase the communicative bandwidth required by the PS-PLATFORM 1310 may be used to engage with various communications network types 1313. For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and/or unicast networks. Input Output interfaces (I/O) 1308 may accept, communicate, and/or connect to user input devices 1311, peripheral devices 1312, cryptographic processor devices 13 1311 may be card readers, dongles, finger print readers, gloves, graphics tablets, joysticks, keyboards, mouse (mice), remote controls, retina readers, trackballs, trackpads, and/or the like. Peripheral devices PS-PLATFORM controller may be embodied as an embedded, dedicated, and/or monitor-less (i.e., headless) device, wherein access would be provided over a network interface connection. Cryptographic units such as, but not limited to, microcontrollers, processors 1326, interfaces 1327, and/or devices 1328 may be attached, and/or communicate with the PS-PLATFORM. PS-PLATFORM controller and/or a computer systemization may employ various forms of memory 1329 will include ROM 1306, RAM 1305, and a storage device 1314. A storage device 13. 
PS-PLATFORM component(s) 13 1314, they may also be loaded and/or stored in memory such as: peripheral devices, RAM, remote storage facilities through a communications network, ROM, various forms of memory, and/or the like. The operating system component 1315 is an executable program component facilitating the operation of the PS-PLATFORM controller. Typically, the operating system facilitates PS-PLATFORM controller to communicate with other entities through a communications network 1313. Various communication protocols may be used by the PS-PLATFORM controller as a subcarrier transport mechanism for interaction, such as, but not limited to: multicast, TCP/IP, UDP, unicast, and/or the like. An information server component 13−) PS-PLATFORM PS-PLATFORM database 1319, operating systems, other program components, user interfaces, Web browsers, and/or the like. Access to the PS-PLATFORM PS-PLATFORM. PS-PLATFORM. A user interface component 13. A Web browser component 13 PS-PLATFORM enabled nodes. The combined application may be nugatory on systems employing standard Web browsers. A mail server component 1321 is a stored program component that is executed by a CPU 1303. The mail server may be a conventional Internet mail server such as, but not limited to sendmail, Microsoft Exchange, and/or the like. The mail server may allow for the execution of program components through facilities such as ASP, ActiveX, (ANSI) (Objective−) PS-PLATFORM. Access to the PS-PLATFORM 1322 is a stored program component that is executed by a CPU 130. A cryptographic server component 1320 is a stored program component that is executed by a CPU 1303, cryptographic processor 1326, cryptographic processor interface 1327, cryptographic processor device 13 PS-PLATFORM PS-PLATFORM component to engage in secure transactions if so desired. 
The cryptographic component facilitates the secure accessing of resources on the PS-PLATFORM PS-PLATFORM database component 13 PS-PLATFORM PS-PLATFORM database is implemented as a data-structure, the use of the PS-PLATFORM database 1319 may be integrated into another component such as the PS-PLATFORM component 13 1319 includes several tables 1319 a-h. A users table 1319 a includes fields such as, but not limited to: a user_ID, user_name, user_password, user_bank_ID, account_ID, transaction_ID, user_accountNo, user_transaction, user_hardware, and/or the like. The user table may support and/or track multiple entity accounts on a PS-PLATFORM. A hardware table 1319 b includes fields such as, but not limited to: hardware_ID, hardware_type, hardware_name, data_formatting_requirements, bank_ID, protocols, addressing_info, usage_history, hardware_requirements, user_ID, and/or the like. A transaction table 1319 c includes fields such as, but not limited to transaction_ID, transaction_time, transaction_account, transaction_payee, transaction_bank, transaction_payer, transaction_status, transaction_clearance, and/or the like. A check table 1319 d includes fields such as check_ID, account_ID, transaction_ID, image_timestamp, image_MICR, image_status, image_user, image_device, image_account and/or the like. An accounts table 1319 e includes fields such as, but not limited to account_ID, user_ID, bank_ID, account_number, routing_number, account_type, account_amount, account_limit, and/or the like. A message table 1019 f includes fields such as, but not limited to user_ID, bank_ID, message_ID, message_title, message_body, message_timestamp, message_checkID, message_ruleID, message_subID, and/or the like. A subscription table 1019 g includes fields such as, but not limited to subscription ID, subscription_bank, subscription_rule, subscription_user, subscription level, and/or the like. 
A rules table 1019 h includes fields such as, but not limited to rule_ID, rule_bank, rule_user, rule_subscription, rule_title, rule_body, rule_transaction, and/or the like. In one embodiment, the PS-PLATFORM database may interact with other database systems. For example, employing a distributed database system, queries and data access by search PS-PLATFORM component may treat the combination of the PS-PLATFORM database, an integrated data security layer database as a single database entity. In one embodiment, user programs may contain various user interface primitives, which may serve to update the PS-PLATFORM. Also, various accounts may require custom database tables depending upon the environments and the types of clients the PS-PLATFORM 1319 a-d. The PS-PLATFORM may be configured to keep track of various settings, inputs, and parameters via database controllers. The PS-PLATFORM database may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the PS-PLATFORM database communicates with the PS-PLATFORM component, other program components, and/or the like. The database may contain, retain, and provide information regarding other nodes and data. The PS-PLATFORM component 1335 is a stored program component that is executed by a CPU. In one embodiment, the PS-PLATFORM component incorporates any and/or all combinations of the aspects of the PS-PLATFORM that was discussed in the previous figures. As such, the PS-PLATFORM affects accessing, obtaining and the provision of information, services, transactions, and/or the like across various financial institutions and communications networks. 
The PS-PLATFORM component enabling access of information between nodes may be developed by employing standard development tools and languages such as, but not limited to: Apache components, Assembly, ActiveX, binary executables, (ANSI) (Objective−) PS-PLATFORM server employs a cryptographic server to encrypt and decrypt communications. The PS-PLATFORM component may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the PS-PLATFORM component communicates with the PS-PLATFORM database, operating systems, other program components, and/or the like. The PS-PLATFORM may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses. The structure and/or operation of any of the PS-PLATFORM PS-PLATFORM.: - w3c-post http:// . . . Value1. The entirety of this application (including the Cover Page, Title, Headings, Field, Background, Summary, Brief Description of the Drawings, Detailed Description, Claims, Abstract, Figures,.
https://patents.google.com/patent/US9779392B1/en
This section describes how to use the Python SDK to submit a job. The job counts the number of times INFO, WARN, ERROR, and DEBUG appear in a log file.

Note: Make sure that you have signed up for the Batch Compute service in advance.

Contents:
- Prepare a job
- Upload data file to OSS
- Upload task program to OSS
- Use SDK to submit job
- Check result

1. Prepare a job

The job counts the number of times INFO, WARN, ERROR, and DEBUG appear in a log file. This job contains the following tasks:
- The split task divides the log file into three parts.
- The count task counts the number of times INFO, WARN, ERROR, and DEBUG appear in each part of the log file. In the count task, InstanceCount must be set to 3, indicating that three count tasks are started concurrently.
- The merge task merges all the count results.

(Figure: the DAG formed by the split, count, and merge tasks.)

1.1. Upload data file to OSS

Download the data file used in this example: log-count-data.txt

Upload the log-count-data.txt file to oss://your-bucket/log-count/log-count-data.txt. your-bucket indicates the bucket created by yourself. In this example, it is assumed that the region is cn-shenzhen.
- To upload the file to the OSS, see Upload files to the OSS.

1.2. Upload task program to OSS

The job program used in this example is written in Python. Download the program: log-count.tar.gz. In this example, it is unnecessary to modify the sample code. You can directly upload log-count.tar.gz to the OSS, for example oss://your-bucket/log-count/log-count.tar.gz. The upload method has been described earlier.
- Batch Compute supports only compressed packages with the extension tar.gz. Make sure that you use the preceding method (gzip) for packaging; otherwise, the package cannot be parsed.

If you must modify the code, decompress the file, modify it, and then follow these steps to repackage the modified code:

> cd log-count # Switch to the directory. 
> tar -czf log-count.tar.gz * # Pack all files under this directory into log-count.tar.gz.

You can run the following command to check the content of the compressed package:

$ tar -tvf log-count.tar.gz

The following files are listed:

conf.py
count.py
merge.py
split.py

2. Use SDK to submit job

For more information about how to download and install the Python SDK, click here. If the SDK version is v20151111, you must specify a cluster ID or use the AutoCluster parameters when submitting a job. In this example, AutoCluster is used. You must configure the following parameters for the AutoCluster:
- Available image ID. You can use the image provided by the system or create a custom image. For more information about how to create a custom image, see Use an image.
- InstanceType. For more information about the instance type, see Currently supported instance types.
- Create a path for storing the StdoutRedirectPath (program outputs) and StderrRedirectPath (error logs) in the OSS. In this example, the created path is oss://your-bucket/log-count/logs/.

To run the program in this example, modify the variables with comments in the program based on the previously described variables and OSS path variables. The following provides a program submission template when the Python SDK is used. For the specific meanings of parameters in the program, click here. 
#encoding=utf-8
import sys
from batchcompute import Client, ClientError
from batchcompute import CN_SHENZHEN as REGION
from batchcompute.resources import (
    JobDescription, TaskDescription, DAG, AutoCluster
)

ACCESS_KEY_ID = ''       # Enter your AccessKeyID
ACCESS_KEY_SECRET = ''   # Enter your AccessKeySecret

IMAGE_ID = 'img-ubuntu'            # Enter your image ID
INSTANCE_TYPE = 'ecs.sn1.medium'   # Enter the instance type based on the region
WORKER_PATH = ''  # 'oss://your-bucket/log-count/log-count.tar.gz', the OSS storage path of the uploaded log-count.tar.gz
LOG_PATH = ''     # 'oss://your-bucket/log-count/logs/', the OSS storage path of the error feedback and task outputs
OSS_MOUNT = ''    # 'oss://your-bucket/log-count/', mounted onto '/home/input' and '/home/output'

client = Client(REGION, ACCESS_KEY_ID, ACCESS_KEY_SECRET)

def main():
    try:
        job_desc = JobDescription()

        # Create auto cluster.
        cluster = AutoCluster()
        cluster.InstanceType = INSTANCE_TYPE
        cluster.ResourceType = "OnDemand"
        cluster.ImageId = IMAGE_ID

        # Create split task.
        split_task = TaskDescription()
        split_task.Parameters.Command.CommandLine = "python split.py"
        split_task.Parameters.Command.PackagePath = WORKER_PATH
        split_task.Parameters.StdoutRedirectPath = LOG_PATH
        split_task.Parameters.StderrRedirectPath = LOG_PATH
        split_task.InstanceCount = 1
        split_task.AutoCluster = cluster
        split_task.InputMapping[OSS_MOUNT] = '/home/input'
        split_task.OutputMapping['/home/output'] = OSS_MOUNT

        # Create count (map) task.
        count_task = TaskDescription(split_task)
        count_task.Parameters.Command.CommandLine = "python count.py"
        count_task.InstanceCount = 3
        count_task.InputMapping[OSS_MOUNT] = '/home/input'
        count_task.OutputMapping['/home/output'] = OSS_MOUNT

        # Create merge task.
        merge_task = TaskDescription(split_task)
        merge_task.Parameters.Command.CommandLine = "python merge.py"
        merge_task.InstanceCount = 1
        merge_task.InputMapping[OSS_MOUNT] = '/home/input'
        merge_task.OutputMapping['/home/output'] = OSS_MOUNT

        # Create task DAG. 
        task_dag = DAG()
        task_dag.add_task(task_name="split", task=split_task)
        task_dag.add_task(task_name="count", task=count_task)
        task_dag.add_task(task_name="merge", task=merge_task)
        task_dag.Dependencies = {
            'split': ['count'],
            'count': ['merge']
        }

        # Create job description.
        job_desc.DAG = task_dag
        job_desc.Priority = 99   # 0-1000
        job_desc.Name = "log-count"
        job_desc.Description = "PythonSDKDemo"
        job_desc.JobFailOnInstanceFail = True

        job_id = client.create_job(job_desc).Id
        print('job created: %s' % job_id)
    except ClientError as e:
        print(e.get_status_code(), e.get_code(), e.get_requestid(), e.get_msg())

if __name__ == '__main__':
    sys.exit(main())

3. Check job status

You can view the job status by referring to Obtain the job information.

jobInfo = client.get_job(job_id)
print(jobInfo.State)

A job may be in one of the following states: Waiting, Running, Finished, Failed, and Stopped.

4. Check job execution result

You can log on to the OSS console and check the following file under your bucket: /log-count/merge_result.json. The expected result is as follows:

{"INFO": 2460, "WARN": 2448, "DEBUG": 2509, "ERROR": 2583}

Alternatively, you can use the OSS SDK to obtain the results.
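The merged totals in merge_result.json are produced by logic along the following lines. This is an illustrative pure-Python approximation of what count.py and merge.py do; the function names and details are our assumptions, not the actual code shipped in log-count.tar.gz:

```python
from collections import Counter

LEVELS = ("INFO", "WARN", "ERROR", "DEBUG")

def count_part(lines):
    """What each of the three concurrent count tasks does for its part."""
    counts = Counter()
    for line in lines:
        for level in LEVELS:
            if level in line:
                counts[level] += 1
    return counts

def merge(partial_counts):
    """What the merge task does: sum the per-part results."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return dict(total)

# Three "parts" standing in for the three pieces the split task produces.
parts = [["INFO start", "ERROR disk full"],
         ["WARN low memory", "INFO ok"],
         ["DEBUG probe"]]
print(merge([count_part(p) for p in parts]))
```

Because each count task only ever sees its own part, the counts can be computed concurrently and the merge step is a simple sum, which is why the DAG orders split before count and count before merge.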
https://www.alibabacloud.com/help/doc-detail/42411.htm
CC-MAIN-2021-10
en
refinedweb
kt_dart 0.7.0-dev.1

kt_dart: ^0.7.0-dev.1

Packages

annotation

import 'package:kt_dart/annotation.dart';

Annotations such as @nullable or @nonNull giving hints about method return and argument types.

collection

import 'package:kt_dart/collection.dart';

Collection types, such as KtIterable, KtCollection, KtList, KtSet, and KtMap, with over 150 methods, as well as related top-level functions. The collections are immutable by default but offer a mutable counterpart, i.e. KtMutableList.

Planned

Planned modules for the future are async, tuples, comparison, range, sequence, and text.
https://pub.dev/packages/kt_dart/versions/0.7.0-dev.1
CC-MAIN-2021-10
en
refinedweb
Source JythonBook / SimpleWebApps.rst

Chapter 13: Simple Web Applications - Final v1.0

One of the major benefits of using Jython is the ability to make use of Java platform capabilities while programming in the Python programming language instead of Java. In the Java world today, the most widely used web development technique is the Java servlet. Now in Java EE, there are techniques and frameworks that let us essentially code HTML or other markup languages as opposed to writing pure Java servlets. However, sometimes writing a pure Java servlet still has its advantages. We can use Jython to write servlets, and this adds many more advantages above and beyond what Java has to offer, because now we can make use of Python language features as well. Similarly, we can code web start applications using Jython instead of pure Java to make our lives easier. Coding these applications in pure Java has sometimes proven to be a difficult and grueling task. We can use some of the techniques available in Jython to make our lives easier. We can even code WSGI applications with Jython, making use of the modjy integration in the Jython project.

In this chapter, we will cover three techniques for coding simple web applications using Jython: servlets, web start, and WSGI. We'll get into details on using each of these different techniques here, but we will discuss deployment of such solutions in Chapter 17.

Servlets

Servlets are a Java platform technology for building web-based applications. They are a platform- and server-independent technology for serving content to the web. If you are unfamiliar with Java servlets, it would be worthwhile to learn more about them. An excellent resource is Wikipedia (); however, there are a number of other great places to find out more about Java servlets. Writing servlets in Jython is a very productive and easy way to make use of Jython within a web application. Java servlets are rarely written using straight Java anymore.
Most Java developers make use of Java Server Pages (JSP), Java Server Faces (JSF), or some other framework so that they can use a markup language to work with web content as opposed to only working with Java code. However, in some cases it is still quite useful to use a pure Java servlet. For these cases we can make our lives easier by using Jython instead. There are also great use-cases for JSP; similarly, we can use Jython for implementing the logic in our JSP code. The latter technique allows us to apply a model-view-controller (MVC) paradigm to our programming model, where we separate our front-end markup from any implementation logic. Either technique is rather easy to implement, and you can even add this functionality to any existing Java web application without any trouble.

Another feature offered to us by Jython servlet usage is dynamic testing. Because Jython compiles at runtime, we can make code changes on the fly without recompiling and redeploying our web application. This can make it very easy to test web applications, because usually the most painful part of web application development is the wait time between deployment to the servlet container and testing.

Configuring Your Web Application for Jython Servlets

Very little needs to be done in any web application to make it compatible for use with Jython servlets. Jython contains a built-in class named PyServlet that facilitates the creation of Java servlets using Jython source files. We can make use of PyServlet quite easily in our application by adding the necessary XML configuration into the application's web.xml descriptor, such that the PyServlet class gets loaded at runtime and any file that contains the .py suffix will be passed to it. Once this configuration has been added to a web application, and jython.jar has been added to the CLASSPATH, the web application is ready to use Jython servlets. See Listing 13-1.

Listing 13-1.
Making a Web Application Compatible with Jython

<servlet>
    <servlet-name>PyServlet</servlet-name>
    <servlet-class>org.python.util.PyServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>PyServlet</servlet-name>
    <url-pattern>*.py</url-pattern>
</servlet-mapping>

Any servlet that is going to be used by a Java servlet container also needs to be added to the web.xml file, since this allows for the correct mapping of the servlet via the URL. For the purposes of this book, we will code a servlet named NewJythonServlet in the next section, so the following XML configuration will need to be added to the web.xml file. See Listing 13-2.

Listing 13-2. Coding a Jython Servlet

<servlet>
    <servlet-name>NewJythonServlet</servlet-name>
    <servlet-class>NewJythonServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>NewJythonServlet</servlet-name>
    <url-pattern>/NewJythonServlet</url-pattern>
</servlet-mapping>

Writing a Simple Servlet

In order to write a servlet, we must have the javax.servlet.http.HttpServlet abstract Java class within our CLASSPATH so that it can be extended by our Jython servlet to help facilitate the code. This abstract class, along with the other servlet implementation classes, is part of the servlet-api.jar file. According to the abstract class, there are two methods that we should override in any Java servlet, those being doGet and doPost. The former performs the HTTP GET operation while the latter performs the HTTP POST operation for a servlet. Other commonly overridden methods include doPut, doDelete, and getServletInfo. The first performs the HTTP PUT operation, the second performs the HTTP DELETE operation, and the last provides a description for a servlet. In the following example, and in most use-cases, only doGet and doPost are used.

Let's first show the code for an extremely simple Java servlet. This servlet contains no functionality other than printing its name along with its location in the web application to the screen. Following that code we will take a look at the same servlet coded in Jython for comparison (Listing 13-3).

Listing 13-3.
NewJavaServlet.java

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class NewJavaServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head>");
        out.println("<title>Servlet NewJavaServlet Test</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h1>Servlet NewJavaServlet at " + request.getContextPath() + "</h1>");
        out.println("</body>");
        out.println("</html>");
        out.close();
    }

    @Override
    public String getServletInfo() {
        return "Short description";
    }
}

All commenting has been removed from the code in an attempt to make the code a bit shorter. Now, Listing 13-4 is the equivalent servlet code written in Jython.

Listing 13-4.

from javax.servlet.http import HttpServlet

class NewJythonServlet(HttpServlet):

    def doGet(self, request, response):
        self.doPost(request, response)

    def doPost(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        toClient.println("<html><head><title>Jython Servlet Test</title>" +
                         "<body><h1>Servlet Jython Servlet at " +
                         request.getContextPath() +
                         "</h1></body></html>")

    def getServletInfo(self):
        return "Short Description"

Not only is the concise code an attractive feature, but so is the easy development lifecycle for working with dynamic servlets. As stated previously, there is no need to redeploy each time you make a change, because of the compile-at-runtime nature of Jython. Simply change the Jython servlet, save, and reload the webpage to see the update. If you begin to think about the possibilities, you'll realize that the code above is just a basic example; you can do anything in a Jython servlet that you can with Java, and even most of what can be done using the Python language as well.

To summarize the use of Jython servlets: simply include jython.jar and servlet-api.jar in your CLASSPATH, add the necessary XML to web.xml, and then finally code the servlet by extending the javax.servlet.http.HttpServlet abstract class.

Using JSP with Jython

Harnessing Jython servlets allows for a more productive development lifecycle, but in certain situations Jython code may not be the most convenient way to deal with front-facing web code.
Sometimes using a markup language such as HTML works better for developing sophisticated front-ends. For instance, it is easy enough to include JavaScript code within a Jython servlet. However, all of the JavaScript code would be written within the context of a String. Not only does this eliminate the usefulness of an IDE for features such as semantic code coloring and auto completion, but it also makes code harder to read and understand. Cleanly separating such code from Jython or Java makes code clearer to read and easier to maintain in the long run. One possible solution would be to choose from one of the Python template languages such as Django, but using Java Server Pages (JSP) technology can also be a nice solution. Using a JSP allows one to integrate Java code into HTML markup in order to generate dynamic page content.

We are not fans of JSP. There, we said it: JSP can make code a living nightmare if the technology is not used correctly. Although JSP can make it very easy to mix JavaScript, HTML, and Java into one file, it can make maintenance very difficult. Mixing Java code with HTML or JavaScript is a bad idea. The same would also be true for mixing Jython and HTML or JavaScript. The Model-View-Controller (MVC) paradigm allows for clean separation between logic code, such as Java or Jython, and markup code such as HTML. JavaScript always gets grouped into the same arena as HTML because it is a client-side scripting language. In other words, JavaScript code should also be separated from the logic code. In thinking about MVC, the controller code would be the markup and JavaScript code used to capture data from the end-user. Model code would be the business logic that manipulates the data. Model code is contained within our Jython or Java. The view would be the markup and JavaScript displaying the result. Clean separation using MVC can be achieved successfully by combining JSP with Jython servlets.
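As a rough, framework-free illustration of this split (all names here are hypothetical, not taken from the book's example), the three MVC roles can be sketched in plain Python:

```python
# Hypothetical sketch of the MVC roles described above; not the book's code.

def model_add(x_raw, y_raw):
    # Model: business logic that validates input and computes a result.
    try:
        return float(x_raw) + float(y_raw)
    except (TypeError, ValueError):
        return "Please enter numbers only"

def view_render(result):
    # View: purely concerned with presenting the result as markup.
    return "<p>%s</p>" % result

def handle_request(params):
    # Controller: extracts request data, calls the model, hands off to the view.
    return view_render(model_add(params.get("x"), params.get("y")))

print(handle_request({"x": "2", "y": "3"}))
# → <p>5.0</p>
```

The point is only the separation of concerns: swapping the view (say, JSP markup for the string template) leaves the model untouched.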
In this section we will take a look at a simple example of how to do so. As with many of the other examples in this text, it will only brush the surface of the great features that are available. Once you learn how to make use of JSP and Jython servlets, you can explore further into the technology.

Configuring for JSP

There is no real configuration above and beyond that of configuring a web application to make use of Jython servlets. Add the necessary XML to the web.xml deployment descriptor, include the correct JARs in your application, and begin coding. What is important to note is that the .py files that will be used for the Jython servlets must reside within your CLASSPATH. It is common for the Jython servlets to reside in the same directory as the JSP web pages themselves. This can make things easier, but it can also be frowned upon because this approach does not make use of packages for organizing code. For simplicity's sake, we will place the servlet code into the same directory as the JSP, but you can do it differently.

Coding the Controller/View

The view portion of the application will be coded using markup and JavaScript code. Obviously, this technique utilizes JSP to contain the markup, and the JavaScript can either be embedded directly into the JSP or reside in separate .js files as needed. The latter is the preferred method in order to keep things clean, but many web applications embed small amounts of JavaScript within the pages themselves.

The JSP in this example is rather simple: there is no JavaScript in the example, and it only contains a couple of input text areas. This JSP will include two forms because we will have two separate submit buttons on the page. Each of these forms will redirect to a different Jython servlet, which will do something with the data that has been supplied within the input text.
In our example, the first form contains a small textbox in which the user can type any text that will be redisplayed on the page once the corresponding submit button has been pressed. Very cool, eh? Not really, but it is of good value for learning the correlation between the JSP and the servlet implementation. The second form contains two text boxes in which the user will place numbers; hitting the submit button in this form will cause the numbers to be passed to another servlet that will calculate and return the sum of the two numbers. Listing 13-5 is the code for this simple JSP.

Listing 13-5. JSP Code for a Simple Controller/Viewer Application

testJSP.jsp

<%@page ... %>
<%@ taglib ... %>
<html>
<head>
<title>Jython JSP Test</title>
</head>
<body>
<form method="GET" action="add_to_page.py">
    <input type="text" name="p">
    <input type="submit">
</form>
<br/>
<p>${page_text}</p>
<br/>
<form method="GET" action="add_numbers.py">
    <input type="text" name="x"> +
    <input type="text" name="y"> = ${sum}
    <br/>
    <input type="submit" title="Add Numbers">
</form>
</body>
</html>

In this JSP example, you can see that the first form redirects to a Jython servlet named add_to_page.py, which plays the role of the controller. In this case, the text that is contained within the input textbox named p will be passed into the servlet and redisplayed on the page. The text to be redisplayed is stored in a request attribute named page_text, and you can see that it is referenced within the JSP page using the ${} notation. Listing 13-6 is the code for add_to_page.py.

Listing 13-6. A Simple Jython Controller Servlet

#######################################################################
# add_to_page.py
#
# Simple servlet that takes some text from a web page and redisplays
# it.
#######################################################################
from javax.servlet.http import HttpServlet

class add_to_page(HttpServlet):

    def doGet(self, request, response):
        self.doPost(request, response)

    def doPost(self, request, response):
        # Obtain the text supplied in the "p" parameter, defaulting to "".
        addtext = request.getParameter("p")
        if not addtext:
            addtext = ""
        request.setAttribute("page_text", addtext)
        dispatcher = request.getRequestDispatcher("testJSP.jsp")
        dispatcher.forward(request, response)

Quick and simple, the servlet takes the request and obtains the value contained within the parameter p. It then assigns that value to a variable named addtext.
This variable is then assigned to an attribute in the request named page_text and forwarded back to the testJSP.jsp page. The code could just as easily have forwarded to a different JSP, which is how we'd go about creating a more in-depth application.

The second form in our JSP takes two values and returns the resulting sum to the page. If someone were to enter text instead of numerical values into the text boxes, then an error message would be displayed in place of the sum. While very simplistic, this servlet demonstrates that any business logic can be coded in the servlet, including database calls, and so on. See Listing 13-7.

Listing 13-7. Jython Servlet Business Logic

#######################################################################
# add_numbers.py
#
# Calculates the sum for two numbers and returns it.
#######################################################################
from javax.servlet.http import HttpServlet

class add_numbers(HttpServlet):

    def doGet(self, request, response):
        self.doPost(request, response)

    def doPost(self, request, response):
        # Sum the two values, or report an error for non-numeric input.
        try:
            x = float(request.getParameter("x"))
            y = float(request.getParameter("y"))
            request.setAttribute("sum", x + y)
        except (TypeError, ValueError):
            request.setAttribute("sum", "Please enter numbers only")
        dispatcher = request.getRequestDispatcher("testJSP.jsp")
        dispatcher.forward(request, response)

If we add the JSP and the servlets to the web application we created in the previous Jython Servlet section, then this example should work out of the box. It is also possible to embed code into Java Server Pages by using various template tags known as scriptlets to enclose the code. In such cases, the JSP must contain Java code unless a special framework such as the Bean Scripting Framework () is used along with JSP. For more details on using Java Server Pages, please take a look at the Sun Microsystems JSP documentation () or pick up a book such as Beginning JSP, JSF and Tomcat Web Development: From Novice to Professional from Apress.

Applets and Java Web Start

At the time of this writing, applets in Jython 2.5.0 are not yet an available option. This is because applets must be statically compiled and available for embedding within a webpage using the <applet> or <object> tag. The static compiler known as *jythonc* has been removed in Jython 2.5.0 in order to make way for better techniques.
Jythonc was good for performing certain tasks, such as static compilation of Jython applets, but it created a disconnect in the development lifecycle, as it was a separate compilation step that should not be necessary in order to perform simple tasks such as Jython and Java integration. In a future release of Jython, namely 2.5.1 or another release in the near future, a better way to perform static compilation for applets will be included. For now, in order to develop Jython applets, you will need to use a previous distribution including jythonc and then associate them with the webpage via the <applet> or <object> tag.

In Jython, applets are coded in much the same fashion as a standard Java applet. However, the resulting lines of code are significantly fewer in Jython because of its sophisticated syntax. GUI development in general with Jython is a big productivity boost compared to developing a Java Swing application, for much the same reason. This is why coding applets in Jython is a viable solution and one that should not be overlooked.

In this section we will develop a small web start application to demonstrate how it can be done using the object factory design pattern, and also using pure Jython along with the standalone Jython JAR file for distribution. Note that there are probably other ways to achieve the same result and that these are just a couple of possible implementations for such an application.

Coding a Simple GUI-Based Web Application

The web start application that we will develop in this demonstration is very simple, but such applications can be as advanced as you'd like in the end. The purpose of this section is not to show you how to develop a web-based GUI application, but rather the process of developing such an application. You can actually take any of the Swing-based applications that were discussed in the GUI chapter and deploy them using web start technology quite easily.
As stated in the previous section, there are many different ways to deploy a Jython web start application. We prefer to make use of the object factory design pattern to create simple Jython Swing applications. However, it can also be done using all .py files and then distributed using the Jython stand-alone JAR file. We will discuss each of those techniques in this section. We find that if you are mixing Java and Jython code, then the object factory pattern works best. The JAR method may work best for you if developing a strictly Jython application.

Object Factory Application Design

The application we'll be developing in this section is a simple GUI that takes a line of text and redisplays it in a JTextArea. We used Netbeans 6.7 to develop the application, so some of this section may reference particular features that are available in that IDE. To get started with creating an object factory web start application, we first need to create a project. We created a new Java application in Netbeans named JythonSwingApp and then added jython.jar and plyjy.jar to the classpath.

First, create the Main.java class, which will really be the driver for the application. The goal for Main.java is to use the Jython object factory pattern to coerce a Jython-based Swing application into Java. This class will be the starting point for the application, and then the Jython code will perform all of the work under the covers. Using this pattern, we also need a Java interface that can be implemented via the Jython code, so this example also uses a very simple interface that defines a start() method, which will be used to make our GUI visible. Below is the code for our Main.java driver, the Java interface, and the Jython class that implements it. The directory structure of this application is as shown in Listing 13-8.

Listing 13-8.
Object Factory Application Code

JythonSwingApp
    JythonSimpleSwing.py
    jythonswingapp
        Main.java
        interfaces
            JySwingType.java

Main.java

package jythonswingapp;

import jythonswingapp.interfaces.JySwingType;
import org.plyjy.factory.JythonObjectFactory;

public class Main {

    JythonObjectFactory factory;

    public static void invokeJython() {
        JySwingType jySwing = (JySwingType) JythonObjectFactory
                .createObject(JySwingType.class, "JythonSimpleSwing");
        jySwing.start();
    }

    public static void main(String[] args) {
        invokeJython();
    }
}

As you can see, Main.java doesn't do much else except coerce the Jython module and invoke the start() method. In Listing 13-9, you will see the JySwingType.java interface along with the implementation class, which is obviously coded in Jython.

Listing 13-9. JySwingType.java Interface and Implementation

JySwingType.java

package jythonswingapp.interfaces;

public interface JySwingType {
    public void start();
}

JythonSimpleSwing.py

import javax.swing as swing
import java.awt as awt
from jythonswingapp.interfaces import JySwingType

class JythonSimpleSwing(JySwingType, object):

    def __init__(self):
        self.frame = swing.JFrame(title="Jython Simple Swing", size=(400, 250))
        self.frame.setDefaultCloseOperation(swing.JFrame.EXIT_ON_CLOSE)
        self.text = swing.JTextField(20)
        self.area = swing.JTextArea(6, 20)
        self.button = swing.JButton("Display", actionPerformed=self.display)
        panel = swing.JPanel()
        panel.add(self.text)
        panel.add(self.button)
        self.frame.add(panel, awt.BorderLayout.NORTH)
        self.frame.add(self.area, awt.BorderLayout.CENTER)

    def display(self, event):
        # Redisplay the entered line of text in the JTextArea.
        self.area.append(self.text.getText() + "\n")

    def start(self):
        self.frame.visible = True

If you are using Netbeans, when you clean and build your project a JAR file is automatically generated for you. However, you can easily create a JAR file at the command line or terminal by ensuring that the JythonSimpleSwing.py module resides within your classpath and using the java -jar option. Another nice feature of using an IDE such as Netbeans is that you can turn this into a web start application by going into the project properties and checking a couple of boxes. Specifically, if you go into the project properties, select Application - Web Start from the left-hand menu, and check the Enable Web Start option, then the IDE will take care of generating the necessary files to make this happen.
Netbeans also has the option to self sign the JAR file, which is required to run most applications on another machine via web start. Go ahead and try it out; just ensure that you clean and build your project again after making the changes.

To manually create the necessary files for a web start application, you'll need to generate two additional files that will be placed outside of the application JAR. Create the JAR for your project as you would normally do, and then create a corresponding JNLP file, which is used to launch the application, and an HTML page that will reference the JNLP. The HTML page obviously is where you'd open the application if running it from the web. Listing 13-10 is some example code for generating a JNLP as well as embedding it in HTML.

Listing 13-10. JNLP Code for Web Start

launch.jnlp

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<jnlp codebase="file:/path-to-jar/" href="launch.jnlp" spec="1.0+">
    <information>
        <title>JythonSwingApp</title>
        <vendor>YourName</vendor>
        <homepage href=""/>
        <description>JythonSwingApp</description>
        <description kind="short">JythonSwingApp</description>
    </information>
    <security>
        <all-permissions/>
    </security>
    <resources>
        <j2se version="1.5+"/>
        <jar eager="true" href="JythonSwingApp.jar" main="true"/>
        <jar href="lib/PlyJy.jar"/>
        <jar href="lib/jython.jar"/>
    </resources>
    <application-desc>
    </application-desc>
</jnlp>

launch.html

<html>
    <head>
        <title>Test page for launching the application via JNLP</title>
    </head>
    <body>
        <h3>Test page for launching the application via JNLP</h3>
        <a href="launch.jnlp">Launch the application</a>
        <!-- Or use the following script element to launch with the Deployment Toolkit -->
        <!-- Open the deployJava.js script to view its documentation -->
        <!--
        <script src=""></script>
        <script>
            var url="http://[fill in your URL]/launch.jnlp"
            deployJava.createWebStartLaunchButton(url, "1.6")
        </script>
        -->
    </body>
</html>

In the end, Java web start is a very good way to distribute Jython
applications via the web.

Distributing via Standalone JAR

It is possible to distribute a web start application using the Jython standalone JAR option. To do so, you must have a copy of the Jython standalone JAR file, explode it, add your code into the file, then JAR it back up to deploy. The only drawback to using this method is that you may need to ensure files are in the correct locations in order to make it work correctly, which can sometimes be tedious.

In order to distribute your Jython applications via a JAR, first download the Jython standalone distribution. Once you have this, you can extract the files from the jython.jar using a tool to expand the JAR such as Stuffit or 7zip. Once the JAR has been exploded, you will need to add any of your .py scripts into the Lib directory, and any Java classes into the root. For instance, if you have a Java class named org.jythonbook.Book, you would place it into the appropriate directory according to the package structure. If you have any additional JAR files to include with your application, then you will need to make sure that they are in your classpath. Once you've completed this setup, package your manipulated standalone Jython JAR back up into a ZIP format using a tool such as those noted before. You can then rename the ZIP to a JAR. The application can now be run using the java "-jar" option from the command line, with an optional external .py file to invoke your application.

$ java -jar newStandaloneJar.jar {optional .py file}

This is only one such technique used to make a JAR file for containing your applications. There are other ways to perform such techniques, but this seems to be the most straightforward and easiest.

WSGI and Modjy

This section will show you how to utilize WSGI to create a very simple "Hello Jython" application by utilizing modjy. Taken from the modjy website (), modjy is characterized as an implementation of a WSGI-compliant gateway/server for Jython, built on Java/J2EE servlets.
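Before wiring modjy into a servlet container, it helps to see how small a WSGI application is on its own. The sketch below (the names are ours, not from the modjy demo) defines a "Hello Jython" callable and drives it directly with a stub start_response, which is essentially how a gateway like modjy invokes it:

```python
def handler(environ, start_response):
    # A WSGI application: a callable taking the CGI-style environ dict
    # and the start_response callback supplied by the gateway.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["Hello Jython from %s\n" % environ.get("PATH_INFO", "/")]

# Drive the callable directly with a stub gateway; no server required.
captured = {}

def start_response(status, response_headers):
    captured["status"] = status
    captured["headers"] = response_headers

body = handler({"PATH_INFO": "/demo"}, start_response)
print("".join(body))
# → Hello Jython from /demo
```

A real gateway also passes an optional exc_info argument to start_response and iterates the returned body; the stub above skips that machinery to show only the contract.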
Running a Modjy Application in Glassfish

To run a modjy application in any Java servlet container, the first step is to create a Java web application that will be packaged up as a WAR file. You can create an application from scratch or use an IDE such as Netbeans 6.7 to assist. Once you've created your web application, ensure that jython.jar resides in the CLASSPATH, as modjy is part of Jython as of 2.5.0. Lastly, you will need to configure the modjy servlet within the application deployment descriptor (web.xml). In this example, we took the modjy sample application for Google App Engine and deployed it in our local Glassfish environment.

To configure the application deployment descriptor with modjy, we simply configure the modjy servlet, provide the necessary parameters, and then provide a servlet mapping. In the configuration file shown in Listing 13-11, note that the modjy servlet class is com.xhaus.modjy.ModjyJServlet. The first parameter you will need to set for the servlet is named python.home; set the value of this parameter equal to your Jython home. Next, set the parameter python.cachedir.skip equal to true. The app_filename parameter provides the name of the application callable. Other parameters will be set up the same for each modjy application you configure. The last piece of the web.xml that needs to be set up is the servlet mapping. In the example, we set up all URLs to map to the modjy servlet.

Listing 13-11.
Configuring the Modjy Servlet

web.xml

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "">
<web-app>
    <display-name>modjy demo application</display-name>
    <description>modjy WSGI demo application</description>
    <servlet>
        <servlet-name>modjy</servlet-name>
        <servlet-class>com.xhaus.modjy.ModjyJServlet</servlet-class>
        <init-param>
            <param-name>python.home</param-name>
            <param-value>/Applications/jython/jython2.5.0/</param-value>
        </init-param>
        <init-param>
            <param-name>python.cachedir.skip</param-name>
            <param-value>true</param-value>
        </init-param>
        <!--
        There are two different ways you can specify an application to modjy
        1. Using the app_import_name mechanism
        2. Using a combination of app_directory/app_filename/app_callable_name
        Examples of both are given below
        See the documentation for more details.
        -->
        <!--
        This is the app_import_name mechanism. If you specify a value for this
        variable, then it will take precedence over the other mechanism
        <init-param>
            <param-name>app_import_name</param-name>
            <param-value>my_wsgi_module.my_handler_class().handler_method</param-value>
        </init-param>
        -->
        <!--
        And this is the app_directory/app_filename/app_callable_name combo
        The defaults for these three variables are ""/application.py/handler
        So if you specify no values at all for any of app_* variables, then
        modjy will by default look for "handler" in "application.py" in the
        servlet context root.
        <init-param>
            <param-name>app_directory</param-name>
            <param-value>some_sub_directory</param-value>
        </init-param>
        -->
        <init-param>
            <param-name>app_filename</param-name>
            <param-value>demo_app.py</param-value>
        </init-param>
        <!--
        Supply a value for this parameter if you want your application callable
        to have a different name than the default.
        <init-param>
            <param-name>app_callable_name</param-name>
            <param-value>my_handler_func</param-value>
        </init-param>
        -->
        <!-- Do you want application callables to be cached? -->
        <init-param>
            <param-name>cache_callables</param-name>
            <param-value>1</param-value>
        </init-param>
        <!-- Should the application be reloaded if its .py file changes? -->
        <!-- Does not work with the app_import_name mechanism -->
        <init-param>
            <param-name>reload_on_mod</param-name>
            <param-value>1</param-value>
        </init-param>
        <init-param>
            <param-name>log_level</param-name>
            <param-value>debug</param-value>
            <!-- <param-value>info</param-value> -->
            <!-- <param-value>warn</param-value> -->
            <!-- <param-value>error</param-value> -->
            <!-- <param-value>fatal</param-value> -->
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>modjy</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>

The demo_app should be coded as shown in Listing 13-12. As part of the WSGI standard, the application provides a function that the server calls for each request. In this case, that function is named handler. The function must take two parameters, the first being a dictionary of CGI-defined environment variables. The second is a callback that returns the HTTP headers. The callback function should be called as follows: start_response(status, response_headers, exc_info=None), where status is an HTTP status, response_headers is a list of HTTP headers, and exc_info is for exception handling. Let's take a look at the demo_app.py application and identify the features we've just discussed.

Listing 13-12.

import sys
import string

def escape_html(s):
    return s.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')

def cutoff(s, n=100):
    if len(s) > n:
        return s[:n] + '.. cut ..'
    return s

def handler(environ, start_response):
    writer = start_response("200 OK", [('content-type', 'text/html')])
    response_parts = "<html><head><title>modjy demo application</title></head><body>" \
        "<p>Modjy servlet running: jython $version on $platform</p>"
    environ_str = "\n<table border='1'>"
    keys = environ.keys()
    keys.sort()
    for ix, name in enumerate(keys):
        if ix % 2:
            background = '#ffffff'
        else:
            background = '#eeeeee'
        style = " style='background-color:%s;'" % background
        value = escape_html(cutoff(str(environ[name]))) or '&#160;'
        environ_str = "%s\n<tr><td%s>%s</td><td%s>%s</td></tr>" % \
            (environ_str, style, name, style, value)
    environ_str = "%s\n</table>" % environ_str
    response_parts = response_parts + environ_str + '</body></html>\n'
    response_text = string.Template(response_parts)
    return [response_text.substitute(version=sys.version, platform=sys.platform)]

This application returns the environment configuration for the server on which you run the application. As you can see, the page is quite simple to code and really resembles a servlet. Once the application has been set up and configured, simply compile the code into a WAR file and deploy it to the Java servlet container of your choice. In this case, we used Glassfish V2 and it worked nicely. However, this same application should be deployable to Tomcat, JBoss, or the like.

Summary

There are various ways that we can use Jython for creating simple web-based applications. Jython servlets are a good way to make content available on the web, and you can also utilize them along with a JSP page, which allows for a Model-View-Controller setup. This is a good technique to use for developing sophisticated web applications, especially those mixing some JavaScript into the action, because it really helps to organize things. Most Java web applications use frameworks or other techniques in order to help organize applications in such a way as to apply the MVC concept. It is great to have a way to do such work with Jython as well.

This chapter also discussed creation of WSGI applications in Jython making use of modjy.
This is a good low-level way to generate web applications as well, although modjy and WSGI are more often used for implementing web frameworks and the like. Solutions such as Django use WSGI in order to follow the standard put forth for all Python web frameworks in PEP 333. As the section in this chapter shows, WSGI is also a nice, quick way to write web applications, much like writing a servlet in Jython. In the next chapters, you will learn more about the web frameworks available to Jython, specifically Django and Pylons. These two frameworks can make any web developer's life much easier, and now that they are available on the Java platform via Jython, they are even more powerful. Using a framework such as Django can be really productive and is a good way to design a full-blown web application. The techniques discussed in this chapter can also be used for developing large web applications, but a standard framework such as those discussed in the following chapters should be considered first. There are many great ways to code Jython web applications today, and the options continue to grow!
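The calling convention that demo_app follows can be reduced to a minimal sketch. This is a hypothetical example, not a listing from the book; note too that on modern Python 3 WSGI servers the returned body must be bytes rather than str:

```python
# Minimal WSGI application: the server calls handler(environ, start_response)
# once per request.
def handler(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-Type', 'text/plain')]
    # exc_info is omitted here; it defaults to None and is only supplied
    # when re-raising an error after headers have already been sent.
    start_response(status, response_headers)
    path = environ.get('PATH_INFO', '/')
    return ['You requested %s\n' % path]
```

Any WSGI-compliant server, modjy included, can host a function shaped like this.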
https://bitbucket.org/javajuneau/jythonbook/src/8c33c08e533d63316dc4abe0d8e611acf48ee881/SimpleWebApps.rst
CC-MAIN-2015-27
en
refinedweb
Hi Jeff,

Jeff Peery wrote:
> Hello, I'm a newbie with py2exe and I'm trying to use it with InnoSetup
> and a simple script I wrote. I attached my program. When I run the thing
> it can't find the frame1.pyo file... I'm not sure why. any help would be
> much appreciated! Thanks!

I am not sure what your problem is. I did the following:

Frame1.py: commented out the import of serial as I don't have that (at least the import fails), and in OnButton1Button I just put a return in.

After this I ran py2exe from within Boa (menu File, last py2exe option for setup.py), but it had problems with some of the InnoSetup stuff. Removed the "self.compiled_files" from the following:

script = InnoScript(appname, lib_dir, dist_dir, self.windows_exe_files, self.lib_files) #, self.compiled_files)

At this point it compiles, but the Inno stuff is still not completing. If I go to the dist folder I can run "AgSetUnits.exe" and get the same display as when I run it from within Boa.

To get the InnoSetup stuff to complete I had to remove the Compression settings and all the "{cm" stuff. After that it generates an InnoSetup exe which I can run (I did not complete the install, but I guess that would work too).

Can you run the exe in the dist folder?

See you
Werner
http://sourceforge.net/p/py2exe/mailman/py2exe-users/?style=flat&viewmonth=200605&viewday=10
LimPy - Limited Python

LimPy parses a limited version of the Python grammar. It supports basic Python syntax, but is known not to support the following:

- classes
- function definition
- multi-variable assignment
- list and generator comprehensions

The goal is to be able to expose various Python objects to a scripting environment where non-professional programmers can write simple code to solve various problems.

Origins

LimPy originated in a survey system at YouGov, where it gave users scripting questionnaires the ability to include various bits of Python code that are executed during survey interviews. A previous version of the survey system allowed Python code, but it was not type checked, and non-syntactical bugs were only caught at run time. LimPy succeeded in still offering much of the power of Python at runtime while checking types, operations on types, and function/method call signatures before runtime, ensuring that an entire class of bugs was avoided.

Role

LimPy checks code. You supply it with a namespace of helper objects, another namespace of variables, and source code; it will raise various LimPy exceptions if there are problems, or return the parsed code and the updated namespace of variables if new variables were defined in the source. LimPy does not execute the code; that is up to your runtime system to handle. The returned variables namespace contains only types as values, not real runtime values. Deciding what to do with that namespace is up to your runtime code. Because LimPy is a strict subset of Python, it can typically be exec'd directly by Python.

Testing

LimPy includes some unit tests. To run them, simply invoke setup.py test or install the latest pytest. See the jenkins script for the routine used at YouGov to perform continuous integration testing on this project.

Changes

2.0

- Added limpy.types.Signature, which replaces the build_sig, SigInfo, and signature functions.
Any code that uses or references these deprecated functions will need to be updated.

- TypeSpecification.add_method now only accepts a Signature instance.
- LimPy now expects all dynamically-dispatched types to be classes that must provide IDynamicType (and need not necessarily be subclasses of DynamicDispatch).

Clients upgrading to LimPy 2.0 will typically just need to update their @limpy.types.signature decorators to instead use @limpy.types.Signature. Any calls to TypeSpecification.add_method will need to first construct a Signature instance (with the same parameters). For libraries that do more intimate things with the signatures, it will be necessary to update those references. See the repository changelog for details on how this was done within the LimPy project itself.

1.2.2

- Improved newline counting and tests.
- Empty source, or source containing only comments, is now valid LimPy.

1.2.1

- Restored Python 2.5 compatibility.

1.2

- Updated to PLY 3.4.
- By default, LimPy no longer writes files to the current directory.

1.1

- LimPy no longer allows assignment to Python reserved words.
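The before-runtime checking described in the Role section can be illustrated with a rough sketch. This is NOT LimPy's actual API, just the general idea of rejecting unsupported constructs (a few from the list at the top) before any code runs, using Python's standard ast module:

```python
import ast

# Constructs this sketch rejects; LimPy's real unsupported list is longer.
FORBIDDEN = (ast.ClassDef, ast.FunctionDef, ast.Lambda,
             ast.ListComp, ast.GeneratorExp)

def check_subset(source):
    """Parse source and raise SyntaxError on any forbidden construct."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, FORBIDDEN):
            raise SyntaxError('construct not allowed: %s'
                              % type(node).__name__)
    return tree
```

Like LimPy, this only checks and returns the parsed code; executing it is left to the caller.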
https://bitbucket.org/yougov/limpy/src/a7bfee50e6e5?at=2.0b9
I have a problem in C# that I have faced a few times before, but I have always managed to work around it. Basically, I need to, from one class, access another class that has not been created in the current class. Both classes are in the same namespace. Example: Code://Form1.cs ClassTwo testClass; public void Initialize() { testClass = new ClassTwo(); testClass.doTest(); } public void aTest(string text) { this.Text=text; } //End of Form1.cs //ClassTwo.cs public void doTest() { Form1.aTest("This is a test"); } //End of ClassTwo.cs This is basically what I want to do. But it won't allow me to access the method(s) in Form1. How can I do this without creating a second instance of Form1?
http://cboard.cprogramming.com/csharp-programming/99111-accessing-classes.html
#include <coherence/net/AbstractPriorityTask.hpp>

Inherits Object, PortableObject, and PriorityTask.

Inherited by PriorityAggregator, PriorityFilter, and PriorityProcessor.

List of all members.

AbstractPriorityTask implements all PriorityTask interface methods and is intended to be extended for concrete uses. This implementation is a no-op. Implements PriorityTask.

Specify this task's scheduling priority. Valid values are one of the SCHEDULE_* constants.

Specify the maximum amount of time a calling thread is willing to wait for a result of the request execution.
http://docs.oracle.com/cd/E15357_01/coh.360/e18813/classcoherence_1_1net_1_1_abstract_priority_task.html
RDF::Query::Algebra::NamedGraph - Algebra class for NamedGraph patterns

This document describes RDF::Query::Algebra::NamedGraph version 2.914.

qualify_uris ( \%namespaces, $base_uri )

Returns a new algebra pattern where all referenced Resource nodes representing QNames (ns:local) are qualified using the supplied %namespaces.
http://search.cpan.org/dist/RDF-Query/lib/RDF/Query/Algebra/NamedGraph.pm
#include <sys/sem.h>

int semctl(int semid, int semnum, int cmd, ...);

The semctl() function operates on XSI semaphores (see the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.15, Semaphore). It is unspecified whether this function interoperates with the realtime interprocess communication facilities defined in Realtime.

The following semaphore control operations as specified by cmd are executed with respect to the semaphore specified by semid and semnum. The level of permission required for each operation is shown with each command; see XSI Interprocess Communication. The symbolic names for the values of cmd are defined in the <sys/sem.h> header.

The following values of cmd operate on each semval in the set of semaphores:

The following values of cmd are also available:

sem_perm.uid
sem_perm.gid
sem_perm.mode

The mode bits specified in IPC General Description are copied into the corresponding bits of the sem_perm.mode associated with semid. The stored values of any other bits are unspecified. This command can only be executed by a process that has an effective user ID equal to either that of a process with appropriate privileges or to the value of sem_perm.cuid or sem_perm.uid in the semid_ds data structure associated with semid.

If successful, the value returned by semctl() depends on cmd as follows:

Otherwise, semctl() shall return -1 and set errno to indicate the error.

The semctl() function shall fail if:

The following sections are informative.

None.

The fourth parameter in the SYNOPSIS section is now specified as "..." in order to avoid a clash with the ISO C standard when referring to the union semun (as defined in Issue 3) and for backwards-compatibility.

See also: semget(), semop(), sem_close(), sem_destroy(), sem_getvalue(), sem_init(), sem_open(), sem_post(), sem_unlink(), sem_wait(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/sem.h>
http://www.makelinux.net/man/3posix/S/semctl
. This document is an editor's draft without any normative standing. RDDL 1.0 4 RDDL 2.0 5 Using GRDDL 6 Natures 7 Purposes 8 References. A user, encountering a namespace “in the wild”10].. For the resource identified by a namespace URI, there may exist other resources related to it. Borrowing on the terminology defined by [rddl10], ash.]. @@describe grddl@@ @@incorporate Dan's USPS example: @@note that grddl doesn't actually require xslt except in the general case@@ @@note that the preceding examples use grddl too.@@ @@note that this technique would allow for non-human readable namespace documents but that's counter to the spirit of the webarch good practice.@@
http://www.w3.org/2001/tag/doc/nsDocuments-2005-11-07/
Melrose SDK - license manager

Hi Subhashini,

This has already been discussed earlier. Please refer to this link for more information.

Regards
Rooven
Intel AppUp(SM) Center
Intel AppUp(SM) Developer Program
Intel Technical Support

Thanks for a valuable reply.. my process is below:

1. Make the .air application using Flash Builder (Flex SDK 4.1 and AIR 2.0).
2. Import the Melrose SDK (licensing.swc).
3. Write the below code in my application:

import com.adobe.licensing.LicenseManager;

private static const MY_UNIQUE_32_HEX_NUM:String = "0xD2315FC6-0xB309425F-0xB3085EE2-0xBD4D7748";
private static var UPDATE_MODE: Boolean = false;
private static var DEBUG_MODE: Boolean = true;

protected function initApp():void {
    var licenseManager:LicenseManager = new LicenseManager();
    licenseManager.checkLicense(this, MY_UNIQUE_32_HEX_NUM, UPDATE_MODE, DEBUG_MODE);
}

Here the "MY_UNIQUE_32_HEX_NUM" value is my GUID (example: 0xD2315FC6,0xB309425F,0xB3085EE2,0xBD4D7748).

4. Publish my .air application.

I'm getting the following error: "Your license can not be validated". Are all the above steps correct or wrong? If something is wrong, please guide me through the correct steps. Awaiting for your awesome reply!!!

Hi Subhashini,

It appears that you need to change Debug mode to False to match up with Praveen's example. Also, Adobe states "Once the application is ready to release, you must set debug to false to publish your application or it will be rejected." Please see

Regards
Hal G.
Technical Support Team
Intel AppUp(SM) Developer Program
Intel AppUp(SM) Center

Hi Hal,

Thx for your reply.. if I set DEBUG_MODE to false I'm getting the same error "Your license can not be validated", and if I set DEBUG_MODE to true the check license pop-up has 3 buttons: "Use Current License", "Skip license check" and "Delete Current License". What do I do now? Awaiting for reply!!

Hi SubhashiniBalaji,

Thank you for your reply.

Based on the code that you provided above, you should also use null or leave the MY_UNIQUE_32_HEX_NUM string empty as shown below, in addition to what Hal said above.

private static const MY_UNIQUE_32_HEX_NUM:String = "";

Please let us know if this helps.

Regards
Rooven
Intel AppUp(SM) Center
Intel AppUp(SM) Developer Program
Intel Technical Support

Were you finally able to succeed, or do you still need some help?
https://software.intel.com/fr-fr/forums/topic/322018
C# If Statements, Switch Statements, For Loops, and While Loops

So far in our series, we've covered downloading and installing the free Visual C# 2005 Express Edition IDE, basic variable types, methods (including the crucial 'Main' method), properties, classes, objects, comments, encapsulation, instantiation, namespaces, assemblies, the 'static' keyword, using references, syntax errors, the 'private' versus 'public' keywords, brace pairing, and the Console class. We've also looked at code folding in the […]
http://www.csharphelp.com/tag/loops/
Agenda See also: IRC log <scribe> Scribe: Addison Phillips <scribe> ScribeNick: aphillip Richard: respond on our behalf to CSS on ruby issue not yet Addison: convert widgets comments to a table and forward to webapps@ ... set up document edit transition with dan for ws-i18n richard: ask IanJ if we can publish charreq as a historical Note not yet richard: look at framework for i18n guidelines and suggest actions to take on document on agenda ichard: publish qa-bidi-unicode-controls s/^ichard/richard/ <scribe> DONE richard: ready bp-html-bidi for publication, announce a review, and publish for review <scribe> DONE <r12a> richard: working on test suite, particulary on text direction ... at link above ... can run in four mode ... also added section on vertical text ... need more of those ... xhtml 1.1 still works without need for css, even though served as xml I propose that we leave as a Working Draft and set the metadata that will be available on the w3c beta site to say that this is obsolete-retired. It will then appear under Obsolete Specifications at (and only on that page). richard: had a look at this... way out of date ... doesn't make sense to work on it any more ... would be weird since it the ideas captured are out of date ... spoke to IanJ and he suggested we leave it as-is <r12a> richard: and set metadata on beta site (above link) ... to make it obsolete/retired ... or say it is superseded ... but then we need something to supercede it with ... so propose: leave as WD and mark as retired <scribe> chair: anyone object to publishing as retired WD ? no objections <scribe> ACTION: richard: publish i18n-guide-frameword as (retired, obsolete) WD [recorded in] <r12a> addison: note their concern about ITS, our responses richard: possibly <widget> element should have dir? ... some confusion about scope of 'dir' ... 
they are talking about support for ITS, when they should talk about bidi support in widget packaging ack <fsasaki> felix: marcos happier to have dir without its namespace ... can we porpose that? addison: yes, can do that felix: could also map from their namespace to the its rules ... want to avoid control characters yves: part of response is still weird propose putting it on <widget> and removing its namespace? <fsasaki> +1 any objections? <r12a> addison: good start... wants to be more than just a BP? ... some in xml BP from ITS <r12a> <r12a> <r12a> addison: not so much in UTR#20 richard: needs some rationales, etc. ... very bare-bones at moment, so please send comments or comment in wiki <r12a> richard: charmodnorm wiki available ... (link above) ... conformance criteria listed in first section ... and then have a new section for writing new conformance criteria ... put proposals in wiki or send via email ... have thought off-and-on about it ... gonna have to go back to CSS with late normalization addison: concerned that we've already lost that battle richard: clarify position and state it ... how far to push? andrew: how far to push vs. what can we do? ... put together BP for what is broken, what developers have to do, etc. ... 
may be no point pushing it, but they might react to "css is broken, do this to work around" richard: changed the write up about internationalization <r12a> typo: This is widely used abbreviation (should have 'a' in there)

Present: aphillip, Richard, +1.303.945.aaaa, Felix, [Oracle], AndrewC, YvesS, dchiba
Date: 15 Jul 2009
People with action items: richard
http://www.w3.org/2009/07/15-core-minutes.html
Feature #8626 - Add a Set coercion method to the standard lib: Set(possible_set)

Description

=begin
I'd like to be able to take an object that may already be a Set (or not) and ensure that it is one. For example:

set1 = Set.new
set2 = Set(set1)
set3 = Set(nil)
assert set1.equal?(set2)
assert_instance_of(Set, set3)

This is different from the behavior of Set.new in that it will return the same set rather than creating a new one:

set1 = Set.new
set2 = Set.new(set1) # <--- completely new object in memory
set2 = Set(set1)     # <--- same object from memory

My thoughts about the implementation are simple:

def Set(possible_set)
  possible_set.is_a?(Set) ? possible_set : Set.new(possible_set)
end

I'm not sure if there are edge cases to unexpected behavior that I haven't thought of, and I'm wondering if it ought to have a Set.try_convert as well.
=end

History

#1 Updated by Jim Gay almost 2 years ago
I've created a pull request for MRI here

#2 Updated by Jim Gay over 1 year ago
This has been merged into ruby trunk in
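For comparison, the identity-preserving coercion proposed here looks like this in Python. This is an illustrative analogue only, not part of either language's standard library:

```python
def as_set(possible_set):
    # Return the argument unchanged if it is already a set; otherwise
    # build one, treating None as empty (like Set(nil) in the proposal).
    if possible_set is None:
        return set()
    return possible_set if isinstance(possible_set, set) else set(possible_set)

s1 = set()
assert as_set(s1) is s1   # same object back, no copy
assert set(s1) is not s1  # the copying constructor makes a new object
assert as_set(None) == set()
```

The point of the proposal is exactly that `is`/`equal?` identity: a coercion function skips the copy that the ordinary constructor always makes.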
https://bugs.ruby-lang.org/issues/8626
Fun with Robot

In my last blog I mentioned we're nearly done with the baseline API for mustang. In verifying the baseline API work I initially took the approach of grabbing screenshots, dumping them in Paint and zooming in. Painful to say the least! There must be a better way.

Enter Robot

In addition to being able to inject mouse and keyboard events into the system, java.awt.Robot gives you the ability to render part of the desktop into an image. Exactly what I want. The following code grabs a screen shot, returning a BufferedImage:

Robot robot = new Robot();
BufferedImage image = robot.createScreenCapture(new Rectangle(x, y, width, height));

So Robot gives me an image of part of the desktop, but I don't really want to continually do a keyboard gesture to create the image. I want the image to update as I move the mouse around.

Enter AWTEventListener

AWTEventListener can be used to listen to all events. Think of it as a way to globally install a *Listener. Using AWTEventListener I can determine when the mouse moves over any component. Here's the code for that:

Toolkit toolkit = Toolkit.getDefaultToolkit();
AWTEventListener listener = new AWTEventListener() {
    public void eventDispatched(AWTEvent event) {
        if (event.getID() == MouseEvent.MOUSE_MOVED) {
            mouseMoved(event);
        }
    }
};
toolkit.addAWTEventListener(listener, AWTEvent.MOUSE_MOTION_EVENT_MASK);

Combining AWTEventListener with Robot, I have a way to zoom in on part of my window as the cursor is being moved. Excellent! The following screen shot shows this in effect. The bottom area is the zoomed-in component.
Here's the code:

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

public class MagnifyComponent extends JComponent {
    private int factor;
    private Image image;
    private int w;
    private int h;
    private Robot robot;

    MagnifyComponent() {
        try {
            robot = new Robot();
        } catch (AWTException e) {
        }
        this.factor = 6;
        Toolkit.getDefaultToolkit().addAWTEventListener(
            new EventHandler(), AWTEvent.MOUSE_MOTION_EVENT_MASK);
    }

    public void reshape(int x, int y, int w, int h) {
        super.reshape(x, y, w, h);
        updateImageSize();
    }

    private void updateImageSize() {
        int w = getWidth();
        int h = getHeight();
        image = null;
        this.w = w / factor / 2;
        this.h = h / factor / 2;
    }

    public void paintComponent(Graphics g) {
        if (isOpaque()) {
            g.setColor(getBackground());
            g.fillRect(0, 0, getWidth(), getHeight());
        }
        if (image != null) {
            g.drawImage(image, 0, 0, w * factor * 2, h * factor * 2,
                        0, 0, w + w, h + h, null);
        }
    }

    private void updateImage(int x, int y) {
        if (w > 0 && h > 0) {
            image = robot.createScreenCapture(new Rectangle(
                x - w, y - h, w + w, h + h));
        }
        repaint();
    }

    private class EventHandler implements AWTEventListener {
        public void eventDispatched(AWTEvent event) {
            if (event.getID() == MouseEvent.MOUSE_MOVED) {
                MouseEvent me = (MouseEvent)event;
                Component source = me.getComponent();
                int x = me.getX();
                int y = me.getY();
                Point location = source.getLocationOnScreen();
                x += location.x;
                y += location.y;
                updateImage(x, y);
            }
        }
    }
}

I'm aware of at least one accessibility helper in Windows that zooms in on the desktop as you move the mouse, but I've found it a bit intrusive, insofar as the app always takes up a big chunk of the desktop, whereas I really only wanted the zooming feature for this window. Enjoy!

Warning: I'm sure there are performance issues with this code on slower/older machines. As this isn't for production and was only needed in testing, I did no testing other than on my machine. If you're going to incorporate something like this in a real app, do performance analysis first!
https://weblogs.java.net/blog/zixle/archive/2005/05/fun_with_robot_1.html
java.lang.Object
  oracle.ide.model.Recognizer

public abstract class Recognizer

The Recognizer class is the IDE mechanism by which URLs are mapped to Node types. A Recognizer is also responsible for creating a Node, once its URL is recognized. The IDE framework provides two built-in ways to recognize a URL; to map a file extension to a Node type, see mapExtensionToClass(String,Class). To configure the XML recognizer to map an XML document type to a Node type, see XMLRecognizer. There are methods in XMLRecognizer for recognizing based on the doctype declaration, the root element name (with or without namespace URI), and schema instance URI.

If the mapping of a URL to a Node type is more complex than what the IDE provides by default, there is an API for registering custom Recognizer implementations via one of the following methods:

registerRecognizer(String,Recognizer)
registerRecognizer(String[],Recognizer)
registerLowPriorityRecognizer(Recognizer)

When registered via registerLowPriorityRecognizer(Recognizer), the custom Recognizer is not evaluated until after the IDE's built-in XML and URL recognizers. When none of the custom, XML, or URL recognizers is able to map a URL to a Node type, the IDE may make a final attempt to recognize the URL based on byte signatures. If that fails, then the URL is mapped to UnrecognizedTextNode. See registerConversion(Class,Class) for details.

public Recognizer()

public abstract java.lang.Class<? extends Node> recognize(java.net.URL url)

Returns the Node type. Efficient implementation of this method is essential for the IDE's performance. In particular, I/O operations should be avoided or deferred as much as possible. If a Recognizer does not recognize a particular URL, this method must return null. The returned Class representing the Node type is later passed to the create(URL,Class) method which creates the Node instance.

url - unique URL identifying the document.

Returns the Node type as a Class object.
public Node create(java.net.URL url, java.lang.Class nodeType) throws java.lang.IllegalAccessException, java.lang.InstantiationException

Creates a Node instance of the specified type with the specified URL. If the type is null, then the returned Node is also null.

url - unique URL identifying the document.
nodeType - the type of the Node. The specified Class object must represent a type of Node, or else a ClassCastException will occur.

Returns the Node.

Throws:
java.lang.IllegalAccessException
java.lang.InstantiationException

public java.net.URL validate(java.net.URL newURL, java.net.URL oldURL) throws RecognizerException

Returns null if the name does not validate. This method may modify the URL to make it valid, such as adding a file extension. The old URL is used as the validation base. For example, if the new URL does not have the correct file extension, the old URL extension may be added to the new URL during the validation process.

newURL - the new URL to validate.
oldURL - the old URL used as the validation base.

Returns the validated URL, or null if the name does not validate.

Throws:
RecognizerException - if validation fails. The reason why validation failed is in the exception's message. The message should be suitably formatted so that it can be displayed to the user.

public boolean canConvert(java.net.URL oldURL, java.net.URL newURL)

Determines whether the old URL can be converted to the new URL. This method is called on the new URL's Recognizer. It is called when a Node is being renamed and the new name causes a Node type conversion. If the call to canConvert(URL, URL) returns true, the Node conversion will go through; otherwise, it will not. The base implementation looks at the two URLs to see what the corresponding Node class will be. If the conversion is compatible, then it is allowed. Compatibility is tested by checking the Node class types that have been registered through registerConversion(Class, Class). If a mapping from the oldURL's recognized Node class can be found to the newURL's recognized Node class, then this method returns true.
Otherwise, this method returns false. Of course, if the Node class for the oldURL and the newURL are identical, then conversion is allowed.

oldURL - the url of the node being renamed.
newURL - the new url for the node.

public static void mapExtensionToClass(java.lang.String extension, java.lang.Class cls)

Maps the given extension, which is a file extension, to the given Class, which must be a Node class. Addins that extend this class must provide their own static implementation of this method. Using the default implementation for any other purpose than to map extensions defined by end users to registered node types will break current IDE behavior. In other words, this method is only for internal IDE purposes. The extension passed in is allowed to contain or omit the leading "."; if it is omitted, it will be added automatically.

public static void mapExtensionToXML(java.lang.String extension)

public static void registerRecognizer(java.lang.String fileExtension, Recognizer recognizer)

fileExtension - The file extension to recognize. The extension may or may not begin with '.' but an extension is presumed to follow a '.' in the URL. So, for example, calling registerRecognizer("txt", MyRecognizer); will cause "file:/C:/readme.txt" to be recognized by MyRecognizer, but not "file:/C:/readmetxt".
recognizer - The Recognizer that handles the recognition of a URL with the specified file extension.

public static void registerRecognizer(java.lang.String[] fileExtensions, Recognizer recognizer)

fileExtensions - The file extensions to recognize. The extensions may or may not begin with '.' but an extension is presumed to follow a '.' in the URL. So, for example, calling registerRecognizer(new String[]{"txt"}, MyRecognizer); will cause "file:/C:/readme.txt" to be recognized by MyRecognizer, but not "file:/C:/readmetxt".
recognizer - The Recognizer that handles the recognition of a URL with the specified file extension.
public static void registerLowPriorityRecognizer(Recognizer recognizer)

Registers a custom Recognizer that is evaluated only after the recognizers registered with registerRecognizer(String,Recognizer) or registerRecognizer(String[],Recognizer).

recognizer - The low-priority Recognizer to register.

public static Recognizer getDefaultRecognizer()

public static void setDefaultRecognizer(Recognizer recognizer)

public static java.lang.Class<? extends Node> getDefaultNodeType()

Returns the Node type used when no Recognizer recognizes a given URL.

protected static boolean isXmlExtension(java.lang.String extension)

public static final void registerConversion(java.lang.Class<? extends Node> oldNodeType, java.lang.Class<? extends Node> newNodeType)

Registers a conversion between Node classes that is to be considered valid.

oldNodeType - The Node class that the conversion is occurring from.
newNodeType - The Node class that the oldNodeType is being converted to.

public static Recognizer findRecognizer(java.net.URL url)

Returns the Recognizer that is able to specify the Node class that should be instantiated for the given URL. If no Recognizer can determine the Node class, then null is returned.

public static java.lang.Class<? extends Node> recognizeURL(java.net.URL url)

Returns the Class of the Node that should be instantiated for the specified url. If no Recognizer can determine the Node class, then the value of getDefaultNodeType() is returned.

public static java.lang.Class<? extends Node> recognizeURL(java.net.URL url, java.lang.Class<? extends Node> defaultNodeType)

public static final java.util.Map getExtensionToClassMap()

public static final java.util.Map getExtensionToContentTypeMap()

public static final DocumentInfo getDocumentInfo(java.lang.Class nodeClass)

public static final java.lang.Class getClassForExtension(java.lang.String extension)

public static final ContentType getContentTypeForExtension(java.lang.String extension)

Returns the ContentType associated with the given extension. If no ContentType association has been registered, this method returns null.
public static final void mapExtensionToContentType(java.lang.String extension, ContentType contentType)

Maps the given extension, which is a file extension, to the given ContentType.

public static final void registerDocumentInfo(java.lang.Class nodeClass, DocumentInfo info)

public static final java.io.File sanitizeExtension(java.lang.String extension)

Checks whether extension begins with a '.'. If it does, the extension is just returned as-is. If it doesn't, one is prepended to the extension, and the result is returned. The extension is returned as a File to make the extension follow the case-sensitivity rules of the local file system.
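The extension-to-class mapping and recognition flow described above can be sketched in miniature. This is an illustrative Python sketch, not the oracle.ide API; the node classes and helper names are invented for the example:

```python
_ext_to_class = {}

class TextNode: pass   # stand-in default node type
class XmlNode: pass    # stand-in for an XML node type

def map_extension_to_class(extension, cls):
    # The leading '.' is added automatically if omitted, mirroring the
    # mapExtensionToClass contract described above.
    if not extension.startswith('.'):
        extension = '.' + extension
    _ext_to_class[extension.lower()] = cls

def recognize_url(url, default=TextNode):
    # Recognize purely by the extension after the last '.', falling back
    # to a default node type, like recognizeURL/getDefaultNodeType.
    path = url.split('?', 1)[0]
    dot = path.rfind('.')
    if dot == -1:
        return default
    return _ext_to_class.get(path[dot:].lower(), default)
```

As in the real API, an extension is presumed to follow a '.', so "file:/C:/readme.txt" can be recognized but "file:/C:/readmetxt" falls through to the default.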
http://docs.oracle.com/cd/E14571_01/apirefs.1111/e13403/oracle/ide/model/Recognizer.html
Feb 25, 2011 12:53 PM|Rom9854|LINK

Hello,

I'm currently using IIS7 to run a perl cgi script which calls a ".exe" file. My other perl cgi files work properly under my IIS configuration, and this particular file works fine in command line mode. Actually, when I run this file from my cgi, I can see the process in the Task Manager, but it just doesn't stop as it should, and my cgi also doesn't deliver a complete set of HTTP headers (error 502.2 when I manually shut the process down or after the cgi timeout). I basically tried a lot of different IIS configurations, such as enabling Load User Profile, allowing net.tcp, editing the file permissions, and everything else I found. Has someone already experienced such an issue?

Thanks.
Romain

Feb 25, 2011 03:28 PM|HCamper|LINK

Hello,

Could you look at this post in the IIS Net Forum guide for perl set-up using the ActiveState perl engine? If you follow the steps it may fix your problems. Try this in your scripts; it must be the first line: #!/usr/bin/perl -w . For this issue "doesn't deliver a complete set of HTTP headers, error 502.2": the problem with perl is when the engine gets its first script command.

Thank You, Cheers Martin :)

Feb 28, 2011 08:19 AM|Rom9854|LINK

Thank you for your quick answer,

Unfortunately, the information provided at your link was not so helpful. The first line of my perl script is already as you show. Basically it looks like the executable called by my perl script is blocked in the middle of its execution, which prevents the perl script from continuing and finishing the HTTP header set. I only get the 502.2 error if I manually kill the process or wait for the cgi timeout. I can already assume that it's not a .exe compilation error or a malfunction of my perl script, because everything works well on the command line. Maybe it's a kind of IIS restriction which prevents the executable from finishing....
Another thing that you might want to know: I'm creating a new IO::Socket in my Perl script with this configuration:

  sub new {
      my ($class, $prefix) = @_;
      my $self = {};
      bless $self, $class;
      $self->{'prefix'} = $prefix;
      $self->{'consoleSocket'} = new IO::Socket::INET (
          PeerAddr => 'localhost',
          PeerPort => '8081',
          Proto    => 'tcp',
      );
      return $self;
  }

Of course I enabled net.tcp on my web site, but I may have also missed something here ;)

Feb 28, 2011 04:55 PM|HCamper|LINK Hello, A general note on command-line runs: when you run code from the command line, it uses your permissions, not those set at the web server. The error 502.2 is "Bad Gateway", which is from the server status sub-status list here. The error would make sense in these cases:

A) The user executing the code has limited permissions to execute the code or use networking.
B) The information for the address or port is incorrect or not accessible.
C) One possible resolution is to add the Network Service account to the allowed accounts for the location where the code resides.
D) Change the execution time limit for the CGI process here.

Thank you, Martin

Mar 01, 2011 08:01 AM|Rom9854|LINK Yes, this is also not helping. I made a small program with the same behavior; maybe it could help to solve the issue. This is the Perl code calling the .exe through a pipe via open2; the executable swaps the case of all 'e' characters on STDIN, prints to STDOUT, and stops on 'q'.
  use strict;
  use IPC::Open2;
  use POSIX ":sys_wait_h";

  $| = 1;
  my $in;
  my $out;
  my $filter = "demo.exe";
  my $pid = open2($out, $in, $filter);
  $| = 1;

  print "Content-Type:text/html\n\n";
  print "<html><body>\n";
  print "PID: $pid In: $in Out: $out\n";

  print "In: Whatever\n";
  print $in "Whatever\n";
  #getIO($out);

  print "In: Q\n";
  print $in "Q\n";
  getIO($out);

  close($in);
  close($out);
  print "Waitpid\n";
  waitpid($pid, &WNOHANG);
  print "done\n";
  print "</body></html>\n";
  exit(0);

  sub getIO {
      my $fh = shift;
      my $data = '';
      print "Out: ";
      while ( !eof($fh) ) {
          printf("%s", getc($fh));
      }
      print "\nOutput done\n";
  }

Then here is the C++ code to compile the executable:

  #ifdef WINCONSOLE
  #include <iostream>
  int main(int argc, char *argv[])
  #else
  #error Console only program
  #endif
  {
      char c = 0;
      while ( !std::cin.eof() ) {
          c = (std::cin.get()) & 0xff;
          if (c == 'e') c = 'E';
          else if (c == 'E') c = 'e';
          std::cout << c;
          if (c == 'q' || c == 'Q') break;
      }
      return 0;
  }

This program also runs fine from the command line and shows the same stuck behavior under IIS. Maybe it can enlighten you about the issue I experienced.

Mar 07, 2011 11:04 AM|HCamper|LINK Hello, The problem is with the coding: the open2 library is throwing an error, always at this point:

  my $pid = open2($out, $in, $filter);

I found where the block was by creating a dump of the execution and then using IIS Diag Trace. The error has been a combination of required values and not finding the Perl library's location. I tested on ActiveState Perl version 5.8 and Linux Perl version 5.8, with code errors in both. Thank you, Martin

Mar 08, 2011 08:35 AM|HCamper|LINK Hello, I do not have a fix, but some suggestions. Get a simple example, like a file read/write sample that uses open2, so that the library is exercised fully. Set up an environment variable that points to the Perl libraries, and use that variable in the Perl include section. Use IIS Diag Trace, a debugger, and Process Monitor to track problems. The rest of the fix is working on the coding.
Thanks, Martin

Mar 10, 2011 01:46 PM|Rom9854|LINK Hello, Anyway, the bug is quite tricky. I made it run on Apache and also from the command line, but still no improvement under IIS... Always the same kind of warning, which I think prevents the ".exe" from ending:

  Can't call method "close" on an undefined value at C:/Perl/lib/IPC/Open3.pm line 370.

And of course there is no such warning under Apache or from the command line :'( Is there no way the issue is coming from IIS? Thanks, Romain

Mar 10, 2011 02:53 PM|HCamper|LINK Hello, Yes, this problem is a tough one. The error "Can't call method "close" on an undefined value at C:/Perl/lib/IPC/Open3.pm line 370" is not in the hands of IIS. To the question of whether IIS is the source of the error: no, IIS is responding to the error from Perl. You might try the ASPN forum at ActiveState for advice on their version of the libraries. You might also install the Strawberry Perl version for Windows 7 and see if it has a better implementation of the Perl libraries; I have heard they fixed some issues. I hope this helps, Martin :)

Mar 15, 2011 09:01 AM|Rom9854|LINK Hello, I'm getting a little bit further into the issue, so maybe it will be easier now. Basically, I changed the way my Perl script uses the pipe, and it appears that the program still gets stuck under IIS. After some research, I found that IIS security is preventing my Perl script from accessing the pipe once it's started. After digging deep into the Perl library, I also saw that Perl wasn't the issue after all. So now I'm quite sure the issue comes from some kind of IIS security restriction or something like that (which would also explain why it works under Apache and from the command line ^^). I tried to enable the net.pipe binding, but it's still not helping... Does anyone have an idea?
Cheers, Romain

Apr 27, 2011 06:17 PM|HCamper|LINK Hello, Some considerations for coding server-side scripts: what a standards-based web server expects for requests and responses, and how this affects server-side code and scripting engines. The following content-delivery rules apply to all scripting engines, whether on Linux or Windows:

A) The headers are used to identify and communicate between client and server.
B) The headers must be correctly formed between server and client.
C) The first output returned from any script should be the headers.
D) Failures in the headers or their delivery will generate an error message or description, which may differ between types of servers.

Given the requirements in A) and B), the first content must be the headers. A generic header for scripting purposes is:

  "Content-type: text/plain\n\n"

A specific header for HTML hypertext is:

  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">

Note: Case and quoting will affect how the scripting engine, server, and client process the information.

References: See the IETF's HTTP 1.1 description in RFC 2616 for content-coding = token. See Wikipedia for HTML, for XHTML, and for Quirks mode.

Hope this helps, Martin

12 replies Last post Apr 27, 2011 06:17 PM by HCamper
http://forums.iis.net/p/1175955/1974195.aspx
Last updated: June 16, 2005 (accessibility update)

This paper presents a basic overview of EMF and its code generator patterns. For a more complete description of all the features of EMF, refer to EMF: Eclipse Modeling Framework, Second Edition (Addison-Wesley Professional, 2008) or to the Javadoc for the framework classes themselves.

Contents: Introduction; Defining an EMF Model; Generating a Java Implementation; Using the Generated EMF Classes; Advanced Topics.

EMF is a Java framework and code generation facility for building tools and other applications based on a structured model. For those of you that have bought into the idea of object-oriented modeling, EMF helps you rapidly turn your models into efficient, correct, and easily customizable Java code. For those of you that aren't necessarily sold on the value of formal models, EMF is intended to provide you with the same benefits at a very low cost of entry.

So, what do we mean when we say model? When talking about modeling, we generally think about things like class diagrams, collaboration diagrams, state diagrams, and so on. UML (Unified Modeling Language) defines a (the) standard notation for these kinds of diagrams. Using a combination of UML diagrams, a complete model of an application can be specified. This model may be used purely for documentation or, given appropriate tools, it can be used as the input from which to generate part of or, in simple cases, all of an application.

Given that this kind of modeling typically requires expensive Object-Oriented Analysis and Design (OOA/D) tools, you might be questioning our assertion, above, that EMF provides a low cost of entry. The reason we can say that is that an EMF model requires just a small subset of the kinds of things you can model in UML, specifically simple definitions of the classes and their attributes and relations, for which a full-scale graphical modeling tool is unnecessary.
While EMF uses XMI (XML Metadata Interchange) as its canonical form of a model definition[1], you have several ways of getting your model into that form: you can create the XMI document directly, export it from a modeling tool, annotate Java interfaces with model properties, or describe the model with an XML Schema. The first approach is the most direct, but generally only appeals to XML gurus. The second choice is the most desirable if you are already using full-scale modeling tools. The third approach provides pure Java programmers a low-cost way to get the benefits of EMF and its code generator using just a basic Java development environment (for example, Eclipse's Java Development Tools). The last approach is most applicable when creating an application that must read or write a particular XML file format.

Once you specify an EMF model, the EMF generator can create a corresponding set of Java implementation classes. You can edit these generated classes to add methods and instance variables and still regenerate from the model as needed: your additions will be preserved during the regeneration. If the code you added depends on something that you changed in the model, you will still need to update the code to reflect those changes; otherwise, your code is completely unaffected by model changes and regeneration.

In addition to simply increasing your productivity, building your application using EMF provides several other benefits, like model change notification, persistence support (including default XMI and schema-based XML serialization), a framework for model validation, and a very efficient reflective API for manipulating EMF objects generically. Most important of all, EMF provides the foundation for interoperability with other EMF-based tools and applications.

EMF consists of two fundamental frameworks: the core framework and EMF.Edit. The core framework provides basic generation and runtime support to create Java implementation classes for a model.
EMF.Edit extends and builds on the core framework, adding support for generating adapter classes that enable viewing and command-based (undoable) editing of a model, and even a basic working model editor. The following sections describe the main features of the core EMF framework. EMF.Edit is described in a separate paper, EMF.Edit Overview. For instructions on how to run the EMF and EMF.Edit generator, refer to Tutorial: Generating an EMF Model.

For those of you that are familiar with OMG (Object Management Group) MOF (Meta Object Facility), you may be wondering how EMF relates to it. Actually, EMF started out as an implementation of the MOF specification but evolved from there based on the experience we gained from implementing a large set of tools using it. EMF can be thought of as a highly efficient Java implementation of a core subset of the MOF API. However, to avoid any confusion, the MOF-like core meta model in EMF is called Ecore. In the current proposal for MOF 2.0, a similar subset of the MOF model, which it calls EMOF (Essential MOF), is separated out. There are small, mostly naming differences between Ecore and EMOF; however, EMF can transparently read and write serializations of EMOF.

To help describe EMF, we'll start by assuming we have a trivial, one-class model like this: the model shows a single class called Book with two attributes: title of type String and pages of type int. Our model definition, trivial as it is, can be provided to the EMF code generator in a number of ways. If you have a modeling tool that works with EMF[2], you can simply draw the class diagram as shown above. Alternatively, we could describe the model directly in an XMI document that would look something like this:

  <ecore:EPackage xmi:version="2.0"
      xmlns:xmi="http://www.omg.org/XMI"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore"
      name="library">
    <eClassifiers xsi:type="ecore:EClass" name="Book">
      <eStructuralFeatures xsi:type="ecore:EAttribute" name="title"
          eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString"/>
      <eStructuralFeatures xsi:type="ecore:EAttribute" name="pages"
          eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EInt"/>
    </eClassifiers>
  </ecore:EPackage>

The XMI document contains all the same information as the class diagram, but a little less compactly.
Every class and attribute in a diagram has a corresponding class or attribute definition in the XMI document. For those of you that have neither a graphical modeling tool nor an interest in trying to enter all the XMI syntax by hand, a third option is available for describing your model. Since the EMF generator is a code-merging generator, by providing partial Java interfaces (annotated with model information) ahead of time, the generator can use the interfaces as its generation metadata and merge the generated code with the rest of the implementation. We could have defined our Book model class in Java like this:

  /**
   * @model
   */
  public interface Book
  {
    /**
     * @model
     */
    String getTitle();

    /**
     * @model
     */
    int getPages();
  }

With this approach, we provide all the model information in the form of Java interfaces with standard get methods[3] to identify the attributes and references. The @model tag is used to identify to the code generator which interfaces, and which parts of those interfaces, correspond to model elements and therefore require code generation. For our simple example, all of our model information is actually available through Java introspection of this interface, so no additional model information is needed. In the general case, however, the @model tag may be followed by additional details about the model element. If, for example, we wanted the pages attribute to be read-only (that is, no generation of a set method), we would need to add the following to the annotation:

  /**
   * @model changeable="false"
   */
  int getPages();

Because only information that differs from the default needs to be specified, annotations can be kept simple and concise.

Sometimes, you might want to describe a model with a schema that specifies how instance serializations should look. This can be useful for writing an application that must use XML to integrate with an existing application or to comply with a standard.
Here is how we would specify a schema that's equivalent to our simple book model:

  <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:complexType name="Book">
      <xsd:sequence>
        <xsd:element name="title" type="xsd:string"/>
        <xsd:element name="pages" type="xsd:int"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:schema>

This approach differs somewhat from the other three, mainly because EMF must apply certain restrictions to the serialization that it eventually uses, to ensure compliance with the schema. As a result, the model that is created from a schema looks slightly different from one specified in one of the other ways. The details of these differences are beyond the scope of this overview.

In the remainder of this paper, we'll use UML diagrams for their clarity and conciseness. All of the modeling concepts we'll illustrate could also be expressed using annotated Java or directly with XMI, and most have XML Schema equivalents. Regardless of how the information is provided, the code generated by EMF will be the same.

For each class in the model, a Java interface and corresponding implementation class will be generated. In our example, the generated interface for Book looks like this:

  public interface Book extends EObject
  {
    String getTitle();
    void setTitle(String value);

    int getPages();
    void setPages(int value);
  }

Each generated interface contains getter and setter methods for each attribute and reference of the corresponding model class. Interface Book extends the base interface EObject. EObject is the EMF equivalent of java.lang.Object; that is, it is the base of every EMF class. EObject and its corresponding implementation class EObjectImpl (which we will look at later) provide a relatively lightweight base class that lets Book participate in the EMF notification and persistence frameworks. Before we start looking at what exactly EObject brings into the mix, let's continue looking at how EMF generates Book.
Each generated implementation class includes implementations of the getters and setters defined in the corresponding interface, plus some other methods required by the EMF framework. Class BookImpl will include, among other things, implementations of the title and pages accessors. The pages attribute, for example, has the following generated implementation:

  public int getPages()
  {
    return pages;
  }

  public void setPages(int newPages)
  {
    int oldPages = pages;
    pages = newPages;
    if (eNotificationRequired())
      eNotify(new ENotificationImpl(this, Notification.SET,
        LibraryPackage.BOOK__PAGES, oldPages, pages));
  }

The generated get method is optimally efficient: it simply returns an instance variable representing the attribute. The set method, although a little more complicated, is also quite efficient. In addition to setting the instance variable pages, the set method needs to send change notification to any observers that may be listening to the object, by calling the eNotify() method. To optimize the case where there are no observers (for example, in a batch application), construction of the notification object (ENotificationImpl) and the call to eNotify() are guarded by a call to eNotificationRequired(). The default implementation of eNotificationRequired() simply checks whether there are any observers (adapters) attached to the object. Therefore, when EMF objects are used without observers, the call to eNotificationRequired() amounts to nothing more than an efficient null pointer check, which is inlined when using a JIT compiler. The generated accessor patterns for other types of attributes, like the String-typed title attribute, have some minor differences but are fundamentally the same as those shown for pages[4].

The generated accessors for references, especially two-way ones, are a little more complicated and start to show the real value of the EMF generator. Let's expand our example model with another class, Writer, that has an association with class Book. The association between a book and its writer is, in this example, a single one-way reference. The reference (role) name used to access the Writer from a Book is author.
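Before moving on to references, the guard-then-notify shape of the generated setter can be illustrated with a small, self-contained sketch. SimpleObservable and Observer are made-up names standing in for the roles of EObjectImpl and Adapter; this is not EMF code:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the guarded-notification idea behind the generated
// setPages(): the state change always happens, but notification work is
// only done when at least one observer is attached.
class SimpleObservable {
    interface Observer {
        void notifyChanged(String feature, Object oldValue, Object newValue);
    }

    private final List<Observer> observers = new ArrayList<>();
    private int pages;

    void addObserver(Observer o) {
        observers.add(o);
    }

    // Plays the role of eNotificationRequired(): a cheap emptiness check.
    boolean notificationRequired() {
        return !observers.isEmpty();
    }

    int getPages() {
        return pages;
    }

    void setPages(int newPages) {
        int oldPages = pages;
        pages = newPages;                      // the state change is unconditional
        if (notificationRequired()) {          // notification work only if someone listens
            for (Observer o : observers) {
                o.notifyChanged("pages", oldPages, newPages);
            }
        }
    }
}

class NotificationDemo {
    public static void main(String[] args) {
        SimpleObservable book = new SimpleObservable();
        book.setPages(100);                    // no observers: no notification overhead
        book.addObserver((feature, oldV, newV) ->
            System.out.println(feature + ": " + oldV + " -> " + newV));
        book.setPages(250);                    // prints "pages: 250" transition line
    }
}
```

The point of the guard is visible in main(): the first setPages() call does no notification work at all, because nothing is attached yet.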
Running this model through the EMF generator will, in addition to generating the new interface Writer and implementation class WriterImpl, generate additional get and set methods in interface Book:

  Writer getAuthor();
  void setAuthor(Writer value);

Since the author reference is one-way, the implementation of the setAuthor() method looks much like a simple data setter, like the earlier one for setPages():

  public void setAuthor(Writer newAuthor)
  {
    Writer oldAuthor = author;
    author = newAuthor;
    if (eNotificationRequired())
      eNotify(new ENotificationImpl(this, ...));
  }

The only difference is that here we're setting an object pointer instead of just a simple data field. Because we're dealing with an object reference, however, the getAuthor() method is a little more complicated. This is because the get method for some types of references, including the type of author, needs to deal with the possibility that the referenced object (in this case a Writer) may persist in a different resource (document) from the source object (in this case a Book). Because the EMF persistence framework uses a lazy loading scheme, an object pointer (in this case author) may at some point in time be a proxy for the object, instead of the actual referenced object[5]. As a result, the getAuthor() method looks like this:

  public Writer getAuthor()
  {
    if (author != null && author.eIsProxy())
    {
      Writer oldAuthor = author;
      author = (Writer)eResolveProxy((InternalEObject)author);
      if (author != oldAuthor)
      {
        if (eNotificationRequired())
          eNotify(new ENotificationImpl(this, Notification.RESOLVE, ...));
      }
    }
    return author;
  }

Instead of simply returning the author instance variable, we first call the inherited framework method eIsProxy() to check whether the reference is a proxy, and then call eResolveProxy() if it is. The latter method calls EcoreUtil.resolve(), a static utility method that attempts to load the target object's document, and consequently the object, using the proxy's URI.
If successful, it will return the resolved object. If, however, the document fails to load, it will just return the proxy again[6].

Now that we understand how proxy resolution affects the get pattern for certain types of references, we can look at how the set pattern changes when an association is made two-way. Let's change our one-way author association to this: the association is now two-way, as indicated by the lack of an arrowhead on the Writer end of the association line. The role name used to access Books from a Writer is books. If we regenerate our model, the getAuthor() method will be unaffected, but setAuthor() will now look like this:

  public void setAuthor(Writer newAuthor)
  {
    if (newAuthor != author)
    {
      NotificationChain msgs = null;
      if (author != null)
        msgs = ((InternalEObject)author).eInverseRemove(this, ..., msgs);
      if (newAuthor != null)
        msgs = ((InternalEObject)newAuthor).eInverseAdd(this, ..., msgs);
      msgs = basicSetAuthor(newAuthor, msgs);
      if (msgs != null) msgs.dispatch();
    }
    else if (eNotificationRequired())
      eNotify(new ENotificationImpl(this, ...)); // send "touch" notification
  }

As you can see, when setting a two-way reference like author, the other end of the reference needs to be set as well (by calling eInverseAdd()). We also need to remove the inverse of any previous author (by calling eInverseRemove()), because in our model the author reference is singular (that is, a book can only have one author)[7] and therefore this book cannot be in more than one Writer's books reference.
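Stripped of notifications, proxies, and the NotificationChain machinery, the inverse handshaking described above amounts to the following self-contained sketch. WriterSketch and BookSketch are illustrative names, not EMF-generated classes:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of two-way reference maintenance: setting one end of
// the author/books association also updates the other end, on both the old
// and the new Writer, so the two ends can never disagree.
class WriterSketch {
    final List<BookSketch> books = new ArrayList<>();
}

class BookSketch {
    private WriterSketch author;

    WriterSketch getAuthor() {
        return author;
    }

    void setAuthor(WriterSketch newAuthor) {
        if (newAuthor == author) {
            return;                                   // nothing to change ("touch")
        }
        if (author != null) {
            author.books.remove(this);                // like eInverseRemove on the old writer
        }
        if (newAuthor != null) {
            newAuthor.books.add(this);                // like eInverseAdd on the new writer
        }
        author = newAuthor;                           // like basicSetAuthor
    }
}
```

Moving a book from one writer to another through this setter leaves the first writer's books list without the book and the second writer's list with it, which is exactly the invariant the generated code preserves.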
Finally, we set the author reference by calling another generated method (basicSetAuthor()), which looks like this:

  public NotificationChain basicSetAuthor(Writer newAuthor, NotificationChain msgs)
  {
    Writer oldAuthor = author;
    author = newAuthor;
    if (eNotificationRequired())
    {
      ENotificationImpl notification = new ENotificationImpl(this, ...);
      if (msgs == null)
        msgs = notification;
      else
        msgs.add(notification);
    }
    return msgs;
  }

This method looks very similar to the one-way reference set method, except that if the msgs argument is non-null, the notification gets added to it instead of being fired directly[8]. Because of all the forward/reverse adding/removing during a two-way reference set operation, as many as four (three in this particular example) different notifications may be generated. A NotificationChain is used to collect all these individual notifications so that their firing can be deferred until after all the state changes have been made. The queued-up notifications are sent by calling msgs.dispatch(), as shown in the setAuthor() method above.

You may have noticed in our example that the books association (from Writer to Book) is multiplicity-many (that is, 0..*). In other words, one writer may have written many books. Multiplicity-many references (that is, any reference where the upper bound is greater than 1) are manipulated in EMF using a collection API, so only a get method is generated in the interface:

  public interface Writer extends EObject
  {
    ...
    EList getBooks();
  }

Notice that getBooks() returns an EList as opposed to a java.util.List. Actually, they are almost the same. EList is an EMF subinterface of java.util.List that adds two move methods to the API. Other than that, from a client perspective, you can consider it a standard Java List.
For example, to add a book to the books association, you can simply call:

  aWriter.getBooks().add(aBook);

or to iterate over them you would do something like this:

  for (Iterator iter = aWriter.getBooks().iterator(); iter.hasNext(); )
  {
    Book book = (Book)iter.next();
    ...
  }

As you can see, from a client perspective, the API for manipulating multiplicity-many references is nothing special. However, because the books reference is part of a two-way association (it's the inverse of Book.author), we still need to do all the fancy inverse handshaking that we showed for the setAuthor() method. Looking at the implementation of the getBooks() method in WriterImpl shows us how the multiplicity-many case gets handled:

  public EList getBooks()
  {
    if (books == null)
    {
      books = new EObjectWithInverseResolvingEList(Book.class, this,
        LibraryPackage.WRITER__BOOKS, LibraryPackage.BOOK__AUTHOR);
    }
    return books;
  }

The getBooks() method returns a special implementation class, EObjectWithInverseResolvingEList, which is constructed with all the information it needs to do the reverse handshaking during add and remove calls. EMF actually provides 20 different specialized EList implementations[9] to efficiently implement all types of multiplicity-many features. For one-way associations (that is, those with no inverse) we use EObjectResolvingEList; if the reference doesn't need proxy resolution we'd use EObjectWithInverseEList or EObjectEList; and so on. So, for our example, the list used to implement the books reference is created with the argument LibraryPackage.BOOK__AUTHOR (a generated static int constant representing the inverse feature). This will be used during the add() call to call eInverseAdd() on the Book, similar to the way eInverseAdd() was called on the Writer during setAuthor().
Here's what eInverseAdd() looks like in class BookImpl:

  public NotificationChain eInverseAdd(InternalEObject otherEnd, int featureID,
      Class baseClass, NotificationChain msgs)
  {
    if (featureID >= 0)
    {
      switch (eDerivedStructuralFeatureID(featureID, baseClass))
      {
        case LibraryPackage.BOOK__AUTHOR:
          if (author != null)
            msgs = ((InternalEObject)author).eInverseRemove(this, ..., msgs);
          return basicSetAuthor((Writer)otherEnd, msgs);
        default:
          ...
      }
    }
    ...
  }

It first calls eInverseRemove() to remove any previous author (as we described previously when we looked at the setAuthor() method), and then it calls basicSetAuthor() to actually set the reference. Although our particular example only has one two-way reference, eInverseAdd() uses a switch statement that includes a case for every two-way reference available on class Book[10].

Let's add a new class, Library, which will act as the container for Books. The containment reference is indicated by the black diamond on the Library end of the association. In full, the association indicates that a Library aggregates, by value, zero or more Books. By-value aggregation (containment) associations are particularly important because they identify the parent or owner of a target instance, which implies the physical location of the object when persisted.

Containment affects the generated code in several ways. First of all, because a contained object is guaranteed to be in the same resource as its container, proxy resolution isn't needed. Therefore, the generated get method in LibraryImpl will use a non-resolving EList implementation class:

  public EList getBooks()
  {
    if (books == null)
    {
      books = new EObjectContainmentEList(Book.class, this, ...);
    }
    return books;
  }

In addition to not performing proxy resolution, an EObjectContainmentEList also implements the contains() operation very efficiently (that is, in constant time, versus linear time in the general case).
This is particularly important because duplicate entries are not allowed in EMF reference lists, so contains() is called during add() operations as well. Because an object can only have one container, adding an object to a containment association also means removing the object from any container it's currently in, regardless of the actual association. For example, adding a Book to a Library's books list may involve removing it from some other Library's books list. That's no different from any other two-way association where the inverse has multiplicity 1. Let's assume, however, that the Writer class also had a containment association to Book, called ownedBooks. Then, if a given book instance is in the ownedBooks list of some Writer, when we add it to a Library's books reference, it would need to be removed from the Writer first. To implement this kind of thing efficiently, the base class EObjectImpl has an instance variable (eContainer) of type EObject that it uses to store the container generically. As a result, containment references are always implicitly two-way. To access the Library from a Book, you can write something like this:

  EObject container = book.eContainer();
  if (container instanceof Library)
    library = (Library)container;

If you want to avoid the downcast, you can change the association to be explicitly two-way instead, and let EMF generate a nice typesafe get method for you:

  public Library getLibrary()
  {
    if (eContainerFeatureID != LibraryPackage.BOOK__LIBRARY) return null;
    return (Library)eContainer;
  }

Notice that the explicit get method uses the eContainer variable from EObjectImpl, instead of a generated instance variable as we saw previously for non-container references (like getAuthor(), above)[11].

So far, we've looked at how EMF handles simple attributes and various types of references. Another commonly used type of attribute is an enumeration. Enumeration-typed attributes are implemented using the Java typesafe enum pattern[12].
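Before moving on to enumerations, the single-container behavior just described can be reduced to a small sketch. ContainedItem and ContainmentList are made-up names; EMF's real implementation also handles notifications, feature IDs, and resources:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the single-container rule: each element stores its
// container generically (like EObjectImpl's eContainer field), and adding it
// to a new containment list first removes it from wherever it currently
// lives, regardless of which association that was.
class ContainedItem {
    ContainmentList container;                 // at most one container, ever
}

class ContainmentList {
    final List<ContainedItem> items = new ArrayList<>();

    void add(ContainedItem item) {
        if (item.container == this) {
            return;                            // duplicates are not allowed
        }
        if (item.container != null) {
            item.container.items.remove(item); // detach from the old container
        }
        items.add(item);
        item.container = this;                 // the implicit inverse
    }
}
```

Adding an item to a second list silently removes it from the first, which mirrors how adding a Book to a Library pulls it out of a Writer's ownedBooks.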
If we add an enumeration attribute, category, to class Book, and regenerate the implementation classes, interface Book will now include a getter and setter for category:

  BookCategory getCategory();
  void setCategory(BookCategory value);

In the generated interface, the category methods use a typesafe enumeration class called BookCategory. This class defines static constants for the enumeration's values and other convenience methods, like this:

  public final class BookCategory extends AbstractEnumerator
  {
    public static final int MYSTERY = 0;
    public static final int SCIENCE_FICTION = 1;
    public static final int BIOGRAPHY = 2;

    public static final BookCategory MYSTERY_LITERAL =
      new BookCategory(MYSTERY, "Mystery");
    public static final BookCategory SCIENCE_FICTION_LITERAL =
      new BookCategory(SCIENCE_FICTION, "ScienceFiction");
    public static final BookCategory BIOGRAPHY_LITERAL =
      new BookCategory(BIOGRAPHY, "Biography");

    public static final List VALUES = Collections.unmodifiableList(...);

    public static BookCategory get(String name) { ... }
    public static BookCategory get(int value) { ... }

    private BookCategory(int value, String name)
    {
      super(value, name);
    }
  }

As shown, the enumeration class provides static int constants for the enumeration's values, as well as static constants for the enumeration's singleton literal objects themselves. The int constants have the same names as the model's literal names[13]; the literal constants have the same names with _LITERAL appended. The constants provide convenient access to the literals when, for example, setting the category of a book:

  book.setCategory(BookCategory.SCIENCE_FICTION_LITERAL);

The BookCategory constructor is private, and therefore the only instances of the enumeration class that will ever exist are the ones used for the statics MYSTERY_LITERAL, SCIENCE_FICTION_LITERAL, and BIOGRAPHY_LITERAL. As a result, equality comparisons (that is, .equals() calls) are never needed.
Literals can always be reliably compared using the simpler and more efficient == operator, like this:

  book.getCategory() == BookCategory.MYSTERY_LITERAL

When comparing against many values, a switch statement using the int values is better yet:

  switch (book.getCategory().value())
  {
    case BookCategory.MYSTERY:
      // do something ...
      break;
    case BookCategory.SCIENCE_FICTION:
      ...
  }

For situations where only the literal name (String) or value (int) is available, convenience get() methods, which can be used to retrieve the corresponding literal object, are also generated in the enumeration class.

In addition to the model interfaces and implementation classes, EMF generates at least two more interfaces (and implementation classes): a factory and a package. The factory, as its name implies, is used for creating instances of your model classes, while the package provides some static constants (for example, the feature constants used by the generated methods) and convenience methods for accessing your model's metadata[14]. Here is the factory interface for the book example:

  public interface LibraryFactory extends EFactory
  {
    LibraryFactory eINSTANCE = new LibraryFactoryImpl();

    Book createBook();
    Writer createWriter();
    Library createLibrary();

    LibraryPackage getLibraryPackage();
  }

As shown, the generated factory provides a factory method (create) for each class defined in the model, an accessor for your model's package, and a static constant reference (that is, eINSTANCE) to the factory singleton. The LibraryPackage interface provides convenient access to all the metadata of our model:

  public interface LibraryPackage extends EPackage
  {
    ...
    LibraryPackage eINSTANCE = LibraryPackageImpl.init();

    static final int BOOK = 0;
    static final int BOOK__TITLE = 0;
    static final int BOOK__PAGES = 1;
    static final int BOOK__CATEGORY = 2;
    static final int BOOK__AUTHOR = 3;
    ...
    static final int WRITER = 1;
    static final int WRITER__NAME = 0;
    ...
  EClass getBook();
  EAttribute getBook_Title();
  EAttribute getBook_Pages();
  EAttribute getBook_Category();
  EReference getBook_Author();
  ...
}

As you can see, the metadata is available in two forms: int constants and the Ecore meta objects themselves. The int constants provide the most efficient way to pass around meta information. You may have noticed that the generated methods use these constants in their implementations. Later, when we look at how EMF adapters can be implemented, you'll see that the constants also provide the most efficient way to determine what has changed when handling notifications. Also, just like the factory, the generated package interface provides a static constant reference to its singleton implementation. Let's say we want to create a subclass, SchoolBook, of our Book model class, like this: The EMF generator handles single inheritance as you'd expect: the generated interface extends the super interface:

public interface SchoolBook extends Book

and the implementation class extends the super implementation class:

public class SchoolBookImpl extends BookImpl implements SchoolBook

As in Java itself, multiple interface inheritance is supported, but each EMF class can only extend one implementation base class. Therefore, when we have a model with multiple inheritance, we need to identify which of the multiple bases should be used as the implementation base class. The others will then be simply treated as mixin interfaces, with their implementations generated into the derived implementation class. Consider the following example: Here we've made SchoolBook derive from two classes: Book and Asset. We've identified Book as the implementation base (extended) class as shown[15].
If we regenerate the model, interface SchoolBook will now extend the two interfaces:

public interface SchoolBook extends Book, Asset

The implementation class looks the same as before, only now it includes implementations of the mixed-in methods getValue() and setValue():

public class SchoolBookImpl extends BookImpl implements SchoolBook {
  public float getValue() { ... }
  public void setValue(float newValue) { ... }
  ...
}

You can add behavior (methods and instance variables) to the generated Java classes without having to worry about losing your changes if you later decide to modify the model and then regenerate. For example, let's add a method, isRecommended(), to class Book. To do this you simply go ahead and add the new method signature to the Java interface Book:

public interface Book ... {
  boolean isRecommended();
  ...
}

and its implementation in class BookImpl:

public boolean isRecommended() {
  return getAuthor().getName().equals("William Shakespeare");
}

The EMF generator won't wipe out this change because it isn't a generated method to begin with. Every method generated by EMF includes a Javadoc comment that contains an @generated tag, like this:

/**
 * ...
 * @generated
 */
public String getTitle() {
  return title;
}

Any method in the file that doesn't contain this tag (like isRecommended()) will be left untouched whenever we regenerate. In fact, if we want to change the implementation of a generated method, we can do that by removing the @generated tag from it[16]:

/**
 * ...
 */
public String getTitle() {
  // our custom implementation
  ...
}

Now, because of the missing @generated tag, the getTitle() method is considered to be user code; if we regenerate the model, the generator will detect the collision and simply discard the generated version of the method. Actually, before discarding a generated method, the generator first checks if there is another generated method in the file with the same name, but with Gen appended.
If it finds one, then instead of discarding the newly generated version of the method it redirects the output to it. For example, if we want to extend the generated getTitle() implementation, instead of completely discarding it, then we can do that by simply renaming it like this:

/**
 * ...
 * @generated
 */
public String getTitleGen() {
  return title;
}

and then adding our override as a user method that does whatever we want:

public String getTitle() {
  String result = getTitleGen();
  if (result == null) result = ...
  return result;
}

If we regenerate now, the generator will detect the collision with our user version of getTitle(), but because we also have the @generated getTitleGen() method in the class, it will redirect the newly generated implementation to it, instead of discarding it. In addition to attributes and references, you can add operations to your model classes. If you do, the EMF generator will generate their signature into the interface and a method skeleton into the implementation class. EMF does not model behavior, so the implementation must be provided by user-written Java code. This may be done by removing the @generated tag from the generated implementation, as described above, and adding the code right there. Alternatively, the Java code can be included right in the model. In Rose, you can enter it in the text box on the Semantics tab of an Operation Specification dialog. The code will then be stored in the EMF model as an annotation on the operation[17], and will be generated into its body. Using the generated classes, a client program can create and initialize a Book with the following simple Java statements:

LibraryFactory factory = LibraryFactory.eINSTANCE;

Book book = factory.createBook();
Writer writer = factory.createWriter();

writer.setName("William Shakespeare");
book.setTitle("King Lear");
book.setAuthor(writer);

Because the Book to Writer association (author) is two-way, the inverse reference (books) is automatically initialized.
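The inverse handshaking can be sketched in plain Java. The following illustrative classes (not the generated EMF code, which routes through eInverseAdd() and eInverseRemove()) show the essential bookkeeping that keeps both ends of a two-way reference in sync:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of two-way reference maintenance: setting the single
// end (author) also updates the many end (books). Names are ours, not EMF's.
class Writer {
    private final List<Book> books = new ArrayList<>();
    List<Book> getBooks() { return books; }
}

class Book {
    private Writer author;

    Writer getAuthor() { return author; }

    void setAuthor(Writer newAuthor) {
        if (author != null) {
            author.getBooks().remove(this);    // detach from the old inverse
        }
        author = newAuthor;
        if (newAuthor != null) {
            newAuthor.getBooks().add(this);    // attach to the new inverse
        }
    }
}
```

After book.setAuthor(writer), the writer's books list contains the book without any explicit call on the writer side, which is the behavior the generated classes provide.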
We can verify this by iterating over the books reference like this:

System.out.println("Shakespeare books:");
for (Iterator iter = writer.getBooks().iterator(); iter.hasNext(); ) {
  Book shakespeareBook = (Book)iter.next();
  System.out.println("  title: " + shakespeareBook.getTitle());
}

Running this program would produce output something like this:

Shakespeare books:
  title: King Lear

To create a document named mylibrary.xmi containing the above model, all we need to do is create an EMF resource at the beginning of the program, put the book and writer into the resource, and call save() at the end:

// Create a resource set.
ResourceSet resourceSet = new ResourceSetImpl();

// Register the default resource factory -- only needed for stand-alone!
resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put(
    Resource.Factory.Registry.DEFAULT_EXTENSION, new XMIResourceFactoryImpl());

// Get the URI of the model file.
URI fileURI = URI.createFileURI(new File("mylibrary.xmi").getAbsolutePath());

// Create a resource for this file.
Resource resource = resourceSet.createResource(fileURI);

// Add the book and writer objects to the contents.
resource.getContents().add(book);
resource.getContents().add(writer);

// Save the contents of the resource to the file system.
try {
  resource.save(Collections.EMPTY_MAP);
} catch (IOException e) {}

Notice that a resource set (interface ResourceSet) is used to create the EMF resource. A resource set is used by the EMF framework to manage resources that may have cross document references. Using a registry (interface Resource.Factory.Registry), it creates the right type of resource for a given URI based on its scheme, file extension, or other possible criteria. Here, we register the XMI resource implementation as the default for this resource set[18]. During load, the resource set also manages the demand-loading of cross document references. Running this program will produce the file mylibrary.xmi with contents something like this:

<xmi:XMI xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:library="http:///library.ecore">
  <library:Book title="King Lear" author="/1"/>
  <library:Writer name="William Shakespeare" books="/0"/>
</xmi:XMI>

To load the document mylibrary.xmi, as saved above, we set up a resource set, and then simply demand-load the resource into it, as follows:

// Create a resource set.
ResourceSet resourceSet = new ResourceSetImpl();

// Register the default resource factory -- only needed for stand-alone!
resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put(
    Resource.Factory.Registry.DEFAULT_EXTENSION, new XMIResourceFactoryImpl());

// Register the package -- only needed for stand-alone!
LibraryPackage libraryPackage = LibraryPackage.eINSTANCE;

// Get the URI of the model file.
URI fileURI = URI.createFileURI(new File("mylibrary.xmi").getAbsolutePath());

// Demand load the resource for this file.
Resource resource = resourceSet.getResource(fileURI, true);

// Print the contents of the resource to System.out.
try {
  resource.save(System.out, Collections.EMPTY_MAP);
} catch (IOException e) {}

Again, we create a resource set and, for the stand-alone case, register a default resource implementation. Also, we need to ensure that our package is registered in the package registry, which the resource uses to obtain the appropriate metadata and factory for the model it is loading. Simply accessing the eINSTANCE field of a generated package interface is sufficient to ensure that it is registered. This example uses the second form of save(), which takes an OutputStream, to print the serialization to the console. Splitting a model into multiple documents, with cross references between them, is simple. If we wanted to serialize the books and writers, in the save example above, into separate documents, all we need to do is create a second resource:

Resource anotherResource = resourceSet.createResource(anotherFileURI);

and add the writer to it, instead of the first:

anotherResource.getContents().add(writer);

This would produce two resources, each containing one object, with a cross document reference to the other. Note that a containment reference necessarily implies that the contained object is in the same resource as its container. So, for example, suppose that we had created an instance of Library containing our Book via the books containment reference. That would have automatically removed the Book from the contents of the resource, which in this sense, also behaves like a containment reference.
If we then added the Library to the resource, the book would implicitly belong to the resource as well, and its details would again be serialized in it. If you want to serialize your objects in a format other than XMI, that can be arranged as well. You will need to supply your own serialization and parsing code. Create your own resource class (as a subclass of ResourceImpl) that implements your preferred serialization format, and then either register it locally with your resource set, or with the global factory registry if you want it to always be used with your model. Previously, when we looked at set methods in generated EMF classes, we saw that notifications are always sent when an attribute or reference is changed. For example, the BookImpl.setPages() method included the following line:

eNotify(new ENotificationImpl(this, ..., oldPages, pages));

Every EObject can maintain a list of observers (also referred to as adapters), which will be notified whenever a state change occurs. The framework eNotify() method iterates through this list and forwards the notification to the observers. An observer can be attached to any EObject (for example, book) by adding to the eAdapters list like this:

Adapter bookObserver = ...
book.eAdapters().add(bookObserver);

More commonly, however, adapters are added to EObjects using an adapter factory. In addition to their observer role, adapters are more generally used as a way to extend the behavior of the object they're attached to. A client generally attaches such extended behavior by asking an adapter factory to adapt an object with an extension of the required type. Typically it looks something like this:

EObject someObject = ...;
AdapterFactory someAdapterFactory = ...;
Object requiredType = ...;

if (someAdapterFactory.isFactoryForType(requiredType)) {
  Adapter theAdapter = someAdapterFactory.adapt(someObject, requiredType);
  ...
}

Usually, the requiredType represents some interface supported by the adapter.
For example, the argument might be the actual java.lang.Class for an interface of the chosen adapter. The returned adapter could then be downcast to the requested interface like this:

MyAdapter theAdapter = (MyAdapter)someAdapterFactory.adapt(someObject, MyAdapter.class);

Adapters are often used this way to extend the behavior of an object without subclassing. To handle notifications in an adapter we need to override the notifyChanged() method, which is called on every registered adapter by eNotify(). A typical adapter implements notifyChanged() to perform some action for some or all of the notifications, based on the notification's type. Sometimes adapters are designed to adapt a specific class (for example, Book). In this case, the notifyChanged() method might look something like this:

public void notifyChanged(Notification notification) {
  Book book = (Book)notification.getNotifier();
  switch (notification.getFeatureID(Book.class)) {
    case LibraryPackage.BOOK__TITLE:
      // book title changed
      doSomething();
      break;
    case LibraryPackage.BOOK__CATEGORY:
      // book category changed
      ...
    case ...
  }
}

The call to notification.getFeatureID() is passed the argument Book.class to handle the possibility that the object being adapted is not an instance of class BookImpl, but is instead an instance of a multiple-inheritance subclass where Book is not the primary (first) interface. In that case, the feature ID passed in the notification will be a number relative to the other class and therefore needs to be adjusted before we can switch using the BOOK__ constants. In single-inheritance situations, this argument is ignored. Another common type of adapter is not bound to any specific class, but instead uses the reflective EMF API to perform its function. Instead of calling getFeatureID() on the notification, it might call getFeature() instead, which returns the actual Ecore feature (that is, the object in the metamodel that represents the feature).
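Stripped of the EMF types, the adapter mechanism is the classic observer pattern. Here is a self-contained sketch, with illustrative names standing in for the EMF API, of an object that forwards feature changes to its registered observers:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal notification object, standing in for EMF's Notification.
class Change {
    final Object notifier;
    final int featureID;
    final Object oldValue;
    final Object newValue;

    Change(Object notifier, int featureID, Object oldValue, Object newValue) {
        this.notifier = notifier;
        this.featureID = featureID;
        this.oldValue = oldValue;
        this.newValue = newValue;
    }
}

// Observer interface, standing in for EMF's Adapter.
interface Observer {
    void notifyChanged(Change change);
}

class ObservedBook {
    static final int BOOK__TITLE = 0;

    private final List<Observer> observers = new ArrayList<>(); // like eAdapters()
    private String title;

    List<Observer> observers() { return observers; }

    void setTitle(String newTitle) {
        String oldTitle = title;
        title = newTitle;
        // Like eNotify(): forward the change to every registered observer.
        Change change = new Change(this, BOOK__TITLE, oldTitle, newTitle);
        for (Observer o : observers) {
            o.notifyChanged(change);
        }
    }
}
```

An observer attached with observers().add(...) receives a Change carrying the feature ID and the old and new values, which is exactly what a notifyChanged() implementation switches on.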
Every generated model class can also be manipulated using the reflective API defined in interface EObject:

public interface EObject ... {
  ...
  Object eGet(EStructuralFeature feature);
  void eSet(EStructuralFeature feature, Object newValue);
  boolean eIsSet(EStructuralFeature feature);
  void eUnset(EStructuralFeature feature);
}

Using the reflective API, we could set the name of an author like this:

writer.eSet(LibraryPackage.eINSTANCE.getWriter_Name(), "William Shakespeare");

or get the name like this:

String name = (String)writer.eGet(LibraryPackage.eINSTANCE.getWriter_Name());

Notice that the feature being accessed is identified by metadata obtained from the singleton instance of the library package. Using the reflective API is slightly less efficient than calling the generated getName() and setName() methods directly[19], but opens up the model for completely generic access. For example, the reflective methods are used by the EMF.Edit framework to implement a full set of generic commands (for example, AddCommand, RemoveCommand, SetCommand) that can be used with any model. See the EMF.Edit Overview for details. In addition to eGet() and eSet(), the reflective API includes two other related methods: eIsSet() and eUnset(). The eIsSet() method can be used to find out if an attribute is set or not[20], while eUnset() can be used to unset (or reset) it. The generic XMI serializer, for example, uses eIsSet() to determine which attributes need to be serialized during a resource save operation. There are several flags that can be set on a model feature to control the generated code pattern for that feature. Typically, the default settings of these flags will be fine, so you shouldn't need to change them very often.

Unsettable (default is false)

A feature that is declared to be unsettable has a notion of an explicit unset or no-value state. For example, a boolean attribute that is not unsettable can take on one of two values: true or false.
If, instead, the attribute is declared to be unsettable, it can then have any of three values: true, false, or unset. The get method on a feature that is not set will return its default value, but for an unsettable feature, there is a distinction between this state and when the feature has been explicitly set to the default value. Since the unset state is outside of the set of allowed values, we need to generate additional methods to put a feature in the unset state and to determine if it is in that state. For example, if the pages attribute in class Book is declared to be unsettable, then we'll get two more generated methods:

boolean isSetPages();
void unsetPages();

in addition to the original two:

int getPages();
void setPages(int value);

The isSet method returns true if the feature has been explicitly set. The unset method changes an attribute that has been set back to its unset state. When unsettable is false, we don't get the generated isSet or unset methods, but we still get implementations of the reflective versions: eIsSet() and eUnset() (which every EObject must implement). For non-unsettable attributes, eIsSet() returns true if the current value is different from the default value, and eUnset() sets the feature to the default value (more like a reset).

ResolveProxies (default is true)

ResolveProxies only applies to non-containment references. ResolveProxies implies that the reference may span documents, and therefore needs to include proxy checking and resolution in the get method, as described earlier in this paper. You can optimize the generated get pattern for references that you know will never be used in a cross document scenario by setting resolveProxies to false. In that case, the generated get method will be optimally efficient[21].

Unique (default is true)

Unique only applies to multiplicity-many attributes, indicating that such an attribute may not contain multiple equal objects. References are always treated as unique.
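Returning to the unsettable flag for a moment: the generated pattern boils down to one extra boolean per feature. The class below is an illustrative, self-contained sketch of an unsettable pages attribute, not the actual generated code:

```java
// Illustrative sketch of the unsettable-attribute pattern: an extra flag
// distinguishes "never set" from "explicitly set to the default value".
class UnsettableBook {
    private static final int PAGES_EDEFAULT = 0; // the attribute's default

    private int pages = PAGES_EDEFAULT;
    private boolean pagesESet = false;           // tracks the unset state

    int getPages() { return pages; }             // returns the default when unset

    void setPages(int value) {
        pages = value;
        pagesESet = true;  // setting the default value still counts as "set"
    }

    boolean isSetPages() { return pagesESet; }

    void unsetPages() {
        pages = PAGES_EDEFAULT;
        pagesESet = false;
    }
}
```

Note that setPages(0) leaves the object in a different observable state than unsetPages(), even though getPages() returns 0 in both cases; this is precisely the distinction the unsettable flag buys.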
Changeable (default is true)

A feature that is not changeable will not include a generated set method, and the reflective eSet() method will throw an exception if you try to set it. Declaring one end of a bi-directional relationship to be not changeable is a good way to force clients to always set the reference from the other end, but still provide convenient navigation methods from either end. Declaring one-way references or attributes to be not changeable usually implies that the feature will be set or changed by some other (user-written) code.

Volatile (default is false)

A feature that is declared volatile is generated without storage fields and with empty implementation method bodies, which you are required to fill in. Volatile is commonly used for a feature whose value is derived from some other feature, or for a feature that is to be implemented by hand using a different storage and implementation pattern.

Derived (default is false)

The value of a derived feature is computed from other features, so it doesn't represent any additional object state. Framework classes, such as EcoreUtil.Copier, that copy model objects will not attempt to copy such features. The generated code is unaffected by the value of the derived flag, except for the package implementation class, which initializes the metadata for the model. Derived features are typically also marked volatile and transient.

Transient (default is false)

Transient features are used to declare (modeled) data whose lifetime never spans application invocations and therefore doesn't need to be persisted. The (default XMI) serializer will not save features that are declared to be transient. Like derived, transient's only effect on the generated code is the metadata initialization in the package implementation class. As mentioned previously, all the classes defined in a model (for example, Book, Writer) implicitly derive from the EMF base class EObject.
However, not all the classes that a model uses are necessarily EObjects. For example, assume we want to add an attribute of type java.util.Date to our model. Before we can do so, we need to define an EMF DataType to represent the external type. In UML, we use a class with the datatype stereotype for this purpose: As shown, a data type is simply a named element in the model that acts as a proxy for some Java class. The actual Java class is provided as an attribute with the javaclass stereotype, whose name is the fully qualified class being represented. With this data type defined, we can now declare attributes of type java.util.Date like this: If we regenerate, the publicationDate attribute will now appear in the Book interface:

import java.util.Date;

public interface Book extends EObject {
  ...
  Date getPublicationDate();
  void setPublicationDate(Date value);
}

As you can see, this Date-typed attribute is handled pretty much like any other. In fact, all attributes, including ones of type String, int, and so on, have a data type as their type. The only thing special about the standard Java types is that their corresponding data types are predefined in the Ecore model, so they don't need to be redefined in every model that uses them. A data type definition has one other effect on the generated model. Since data types represent some arbitrary class, a generic serializer and parser (for example, the default XMI serializer) has no way of knowing how to save the state of an attribute of that type. Should it call toString()?
That's a reasonable default, but the EMF framework doesn't want to require that, so it generates two more methods in the factory implementation class for every data type defined in the model:

/**
 * @generated
 */
public Date createJavaDateFromString(EDataType eDataType, String initialValue) {
  return (Date)super.createFromString(eDataType, initialValue);
}

/**
 * @generated
 */
public String convertJavaDateToString(EDataType eDataType, Object instanceValue) {
  return super.convertToString(eDataType, instanceValue);
}

By default, these methods simply invoke the superclass implementations, which provide reasonable, but inefficient, defaults: convertToString() simply calls toString() on the instanceValue, but createFromString() tries, using Java reflection, to call a String constructor or, failing that, a static valueOf() method, if one exists. Typically you should take over these methods (by removing the @generated tags) and change them to appropriate custom implementations:

/**
 */
public String convertJavaDateToString(EDataType eDataType, Object instanceValue) {
  return instanceValue.toString();
}

Here is the complete class hierarchy of the Ecore model (shaded boxes are abstract classes): This hierarchy includes the classes that represent the EMF model elements described in this paper: classes (and their attributes, references, and operations), data types, enumerations, packages, and factories. EMF's implementation of Ecore is itself generated using the EMF generator and as such has the same lightweight and efficient implementation as described in the previous sections of this paper.

[1] Actually, the EMF meta model is itself an EMF model, the default serialized form of which is XMI.

[2] Currently, EMF supports import from Rational Rose, but the generator architecture can easily accommodate other modeling tools as well.

[3] EMF uses a subset of the JavaBean simple property accessor naming patterns.
[4] There are several user-specifiable options that can be used to change the generated patterns. We'll describe some of them later (see Generation control flags, later in this document).

[5] Containment references, which we'll describe later (see Containment references), cannot span documents. There is also a flag that users can set in a reference's metadata to indicate that resolve does not need to be called because the reference will never be used in a cross document scenario (see Generation control flags). In these cases, the generated get method simply returns the pointer.

[6] Applications that need to deal with and handle broken links should call eIsProxy() on the object returned by a get method to see if it is resolved or not (for example, book.getAuthor().eIsProxy()).

[7] This clearly fails to allow for multiple authors, but it keeps the example model simple.

[8] The reason we bother to delegate to a basicSet() method at all is because it's also needed by the eInverseAdd() and eInverseRemove() methods, which we'll look at a little later.

[9] Actually, all of the concrete EList implementations are simple subclasses of one very functional and efficient base implementation class, EcoreEList.

[10] In eInverseAdd(), instead of simply switching on the supplied feature id, it first calls eDerivedStructuralFeatureID(featureID, baseClass). For simple single inheritance models, this method has a default implementation that ignores the second argument and returns the featureID passed in. For models that use multiple inheritance, eDerivedStructuralFeatureID() may have a generated override that adjusts a feature ID relative to a mixin class (that is, baseClass) to a feature ID relative to the concrete derived class of the instance.

[11] EObjectImpl also has an int-typed eContainerFeatureID instance variable to keep track of which reference is currently used for the eContainer.

[12] See Replace Enums with Classes.
[13] To conform to proper Java programming style, the static constant names are converted to upper case if the modeled enumeration's literal names are not already upper case.

[14] While your program isn't strictly required to use the Factory or Package interfaces, EMF does encourage clients to use the factory to create instances by generating protected constructors on the model classes, thereby preventing you from simply calling new to create your instances. You can, however, change the access to public in the generated classes manually, if that's what you really want. Your preferences will not be overwritten if you later decide to regenerate the classes.

[15] Actually, the first base class in the Ecore model is the one used as the implementation base class. In the UML diagram, the <<extend>> stereotype is needed to indicate that Book should be first in the Ecore representation.

[16] If you know ahead of time that you're going to want to provide your own custom implementation for some feature, then a better way of doing this is to model the attribute as volatile, which instructs the generator to only generate a skeleton method body in the first place, which you are then expected to implement.

[17] EMF includes a generic mechanism for annotating metamodel objects with additional information. This mechanism can also be used to attach user documentation to elements of the model, and when a model is created from XML Schema, EMF relies on it to capture serialization details that cannot be expressed directly using Ecore.

[18] The second line of the above code is only required when run stand-alone (that is, directly invoked in a JVM, with the required EMF JAR files on the class path). The same registration is automatically made in the global resource factory registry when EMF is run within Eclipse.

[19] Implementations of the reflective methods are also generated for each model class.
They switch on the feature type, and simply call the appropriate generated typesafe methods.

[20] See the Unsettable flag under Generation control flags for what constitutes a set attribute.

[21] Think carefully before declaring a feature to not resolve proxies. Just because you don't need to use the reference in a cross document situation doesn't mean that someone else who wants to use your model may not. Declaring a feature to not resolve proxies is kind of like declaring a Java class to be final.
http://help.eclipse.org/juno/topic/org.eclipse.emf.doc/references/overview/EMF.html
Section 9.1, "About Using the Oracle JDeveloper 11g Migration Wizard for Oracle SOA Suite Applications"
Section 9.2, "Upgrade Tasks Associated with All Java Applications"
Section 9.3, "Upgrade Tasks Associated with All Oracle SOA Suite Applications"

When you open an Oracle Application Server 10g Oracle SOA Suite application in Oracle JDeveloper 11g, the Oracle JDeveloper Migration Wizard attempts to upgrade your application automatically to Oracle Fusion Middleware 11g. However, there are some limitations to what the Oracle JDeveloper Migration Wizard can perform automatically. Refer to specific sections of this chapter for information about the types of manual tasks you might have to perform on your Oracle SOA Suite applications before or after using the Migration Wizard.

Applying the Latest Patch Sets

For best results, Oracle recommends that you apply the most recent patch sets to your Oracle SOA Suite environment and that you use the latest 10g Release 3 (10.1.3) Oracle JDeveloper before upgrading to 11g.

Keeping Oracle JDeveloper and Oracle SOA Suite at the Same Version Level

As a general rule, you should always update your Oracle SOA Suite and Oracle JDeveloper installations at the same time and run the same version of both these Oracle products.

Verifying That You Have the Required SOA Composite Editor Oracle JDeveloper Extension

To upgrade your Oracle SOA Suite applications to 11g, you must have the Oracle SOA Composite Editor extension for Oracle JDeveloper 11g. To verify that the extension is installed, select About from the Oracle JDeveloper Help menu, and click the Version tab. You should see an entry in the list of components called SOA Composite Editor. If this component does not appear on the Version tab of the About dialog box, close the About dialog and select Check for Updates from the Help menu. Use the Check for Updates wizard to locate and install the latest version of the SOA Composite Editor extension.
Before you begin upgrading your Oracle SOA Suite applications, be sure to review the Oracle Fusion Middleware Upgrade Guide for Java EE, which contains information about upgrading standard Java EE applications to Oracle WebLogic Server. If your applications contain any custom Java code, you should review the Java code against the procedures and recommendations available in the Oracle Fusion Middleware Upgrade Guide for Java EE. The following information should be reviewed when you are upgrading any Oracle SOA Suite application to Oracle Fusion Middleware 11g:

Understanding Oracle SOA Suite API Changes for Oracle Fusion Middleware 11g
Reviewing Your Projects for Dependent JAR Files
Upgrading Applications That Require Proxy Settings for Web Services
Recreating build.xml and build.properties Files Not Upgraded by the Migration Wizard
Upgrading Projects That Use UDDI-Registered Resources
Using the Oracle SOA Suite Command-Line Upgrade Tool

Table 9-1 describes the APIs you can use in an Oracle SOA Suite application. For each Oracle Application Server 10g API, it provides a summary of the changes for Oracle Fusion Middleware 11g and where you can get more information about upgrading your applications that use the API. The following sections introduce the Oracle Business Rules 11g SDK and API and provide instructions for upgrading to the Oracle Business Rules API:

Overview of the Oracle Business Rules SDK and API Changes for 11g
Accessing a Dictionary in the Development Environment
Accessing a Repository in a Production Environment

In Oracle Fusion Middleware 11g, the Oracle Business Rules SDK and API have been significantly improved. In Oracle Application Server 10g, developers were required to manually manage accessing the repository and creating and using RuleSession instances. In Oracle Fusion Middleware 11g, the Decision Point API and decision function features provide an interface and implementation that simplifies the definition and execution of rules.
When upgrading to Oracle Business Rules 11g, look first at how you can use these new features, as documented in the Oracle Fusion Middleware User's Guide for Oracle Business Rules. However, if you want to continue to use the Oracle Business Rules SDK in the same way as you did for 10g, then the following sections describe how to directly translate the Oracle Business Rules 10g SDK to the 11g SDK. All of the classes discussed in this section are in the rulesdk2.jar file and under the oracle.rules.sdk2 package. In Oracle Business Rules 10g, it was common for an application developer to use a file-based repository while developing the application, and then switch to using a WebDAV repository for production. In Oracle Business Rules 11g, you can instead access the rule dictionary file directly before it is packaged into a format which can be deployed to MDS. In the following examples, compare the code required to access a dictionary in development mode in 10g (Example 9-1) with the code required in 11g (Example 9-2). Note that, in general, you should use the Decision Point API rather than continuing to access the RuleRepository directly.

Example 9-1 Accessing a Dictionary with Oracle Business Rules 10g in a Development Environment

String path; // the path to the file repository
Locale locale; // the desired Locale

// The following code assumes that the path and locale have been set appropriately
RepositoryType rt = RepositoryManager.getRegisteredRepositoryType("oracle.rules.sdk.store.jar");
RuleRepository repos = RepositoryManager.createRuleRepositoryInstance(rt);
RepositoryContext rc = new RepositoryContext();
rc.setLocale(locale);
rc.setProperty("oracle.rules.sdk.store.jar.path", path);
repos.init(rc);

Example 9-2 Accessing a Dictionary with Oracle Business Rules 11g in a Development Environment

protected static final String DICT_LOCATION = "C:\\scratch\\CarRental.rules";
...
RuleDictionary dict = null;
Reader reader = null;
try {
    reader = new FileReader(new File(DICT_LOCATION));
    dict = RuleDictionary.readDictionary(reader, new DecisionPointDictionaryFinder(null));
} finally {
    if (reader != null) {
        try {
            reader.close();
        } catch (IOException ioe) {
            // log and ignore
        }
    }
}
In Oracle Business Rules 10g, WebDAV was the recommended production repository. In Oracle Business Rules 11g, WebDAV is no longer supported and Metadata Services (MDS) is the recommended repository. Also, the dictionary "name" and "version" have been replaced with a "package" and "name" (similar to the Java class naming scheme). In Oracle Business Rules 10g, the version did not provide true versioning. In Oracle Business Rules 11g, the equivalent to specifying a version is to simply change the name. For example, a 10g dictionary with the name foo.bar.MyDict and version 2 would in 11g have the package foo.bar and the name MyDict2. In the following examples, compare the code required to access a dictionary in production mode in 10g (Example 9-3) with the code required in 11g (Example 9-4).
Example 9-3 Accessing a Dictionary with Oracle Business Rules 10g in a Production Environment
String url; // the URL for the WebDAV repository
Locale locale; // the desired Locale
// The following code assumes that the url and locale have been set appropriately
RepositoryType rt = RepositoryManager.getRegisteredRepositoryType("oracle.rules.sdk.store.webdav");
RuleRepository repos = RepositoryManager.createRuleRepositoryInstance(rt);
RepositoryContext rc = new RepositoryContext();
rc.setLocale(locale);
rc.setProperty("oracle.rules.sdk.store.webdav.url", url);
repos.init(rc);
RuleDictionary dictionaryWithInitialVersion = repos.loadDictionary(dictionaryName);
RuleDictionary dictionarySpecificVersion = repos.loadDictionary(dictionaryName, dictionaryVersion);
Example 9-4 Accessing a Dictionary with Oracle Business Rules 11g in a Production Environment
import static oracle.rules.sdk2.repository.RepositoryManager.createRuleRepositoryInstance;
import static oracle.rules.sdk2.repository.RepositoryManager.getRegisteredRepositoryType;
import static 
oracle.rules.sdk2.store.mds.Keys.CONNECTION;
...
private static final String DICT_PKG = "oracle.middleware.rules.demo";
private static final String DICT_NAME = "CarRental";
private static final DictionaryFQN DICT_FQN = new DictionaryFQN(DICT_PKG, DICT_NAME);
...
RuleRepository repo = createRuleRepositoryInstance(getRegisteredRepositoryType(CONNECTION));
repo.init(new RepositoryContext() {{
    setDictionaryFinder(new DecisionPointDictionaryFinder(null));
}});
RuleDictionary dict = repo.load(DICT_FQN);
Oracle SOA Suite 11g introduces a new infrastructure management API that replaces several other APIs previously available in Oracle SOA Suite 10g. For more information about the new API, refer to the Oracle Fusion Middleware Release Notes, which are available as part of the Oracle Fusion Middleware 11g documentation library. If you are upgrading an Oracle SOA Suite application that depends upon references to custom JAR file libraries, note that these references may not get upgraded automatically when you open and upgrade your application in Oracle JDeveloper 11g. As a result, you should review your projects for these types of dependencies and, after the upgrade, make sure that you add any missing references by selecting the Libraries and Classpath link in the Oracle JDeveloper 11g Project Properties dialog. If you are upgrading an application that uses Web services resources outside your company firewall, you must modify a configuration file so that the upgrade process can access the Web services through your proxy server. To configure Oracle JDeveloper 11g to use the proxies during the upgrade process: Locate the following file in the JDEV_HOME/bin directory: ant-sca-upgrade.xml Edit the file and modify the following settings to identify the proxy server and port required to resolve the Web services addresses for the applications you are upgrading. 
One setting is used when you are upgrading an ESB project to Oracle Mediator 11g; the other is used when you are upgrading Oracle BPEL Process Manager projects. Stop and start Oracle JDeveloper 11g so your changes can take effect, and then open and upgrade your application in Oracle JDeveloper 11g. When you open and upgrade an application in Oracle JDeveloper 11g, the build.xml and build.properties files associated with your application projects are not upgraded. Instead, you must recreate these files in Oracle JDeveloper 11g after the upgrade. The following information is important if any of your Oracle BPEL Process Manager or Oracle Enterprise Service Bus 10g projects use remote resources that are registered in a UDDI (Universal Description, Discovery and Integration) registry, such as Oracle Service Registry (OSR). Refer to the following for more information: Verifying that serviceKey Endpoints Are Available Before Upgrade Changing to the orauddi Protocol If you have a 10g Release 3 (10.1.3) project that references an endpoint URL that uses a serviceKey from OSR, then you must be sure that the endpoint URL is up and available when you upgrade the application. Otherwise, the upgrade of the application will fail. To prevent such a failure, verify that the endpoint URLs are available, and if necessary, modify the endpoint URLs in bpel.xml or the routing service to point to a new URL that is accessible. In Oracle Application Server 10g Release 3 (10.1.3), Oracle BPEL Process Manager and Oracle Enterprise Service Bus projects used the uddi protocol to obtain resource references from OSR. In Oracle Fusion Middleware 11g, Oracle BPEL Process Manager and Oracle Mediator projects use the orauddi protocol. As a result, prior to upgrading your Oracle BPEL Process Manager or Oracle Enterprise Service Bus projects, you must do the following: Use the service registry to modify the registered service so it uses the new bindings supported by Oracle Fusion Middleware 11g. 
For example, in OSR, do the following: Log in to your Oracle Service Registry. On the Search tab, click Businesses. Click Add Name to search for a business by name. From the search results, click the name of the business you want to modify. In the left pane of the View Business page, right-click the service you want to modify and select Add Binding from the context menu. From the Type drop-down menu on the Add Binding page, select wsdlDeployment. In the Access Point field, enter the URL. For example: Click Add Binding. Click Save Changes. Open and upgrade the application in Oracle JDeveloper 11g. In Oracle JDeveloper, edit the composite.xml file for the upgraded project and configure the endpoint URL using the UDDI Registry option on the Resource Palette. For more information about the UDDI registry, visit the following URL: The following sections describe how to use the Oracle SOA Suite command-line upgrade tool: Benefits of Using the Oracle SOA Suite Command-Line Upgrade Tool Using the Oracle SOA Suite Command-Line Upgrade Tool with Oracle JDeveloper 11g Limitations When Upgrading Human Workflow Applications with the Oracle SOA Suite Command-Line Upgrade Tool Upgrading BPEL or ESB Projects with the Oracle SOA Suite Command-Line Upgrade Tool Combining Multiple BPEL Projects Into a Single Composite with the Oracle SOA Suite Command-Line Upgrade Tool Upgrading Oracle Enterprise Service Bus (ESB) Projects with the Oracle SOA Suite Command-Line Upgrade Tool Upgrading Domain Value Maps (DVMs) and Cross References with the Oracle SOA Suite Command-Line Upgrade Tool The Oracle SOA Suite Command-Line Upgrade Tool has the following benefits: You can use the command-line tool to automate your application upgrade work, using scripts or other command-line automation tools. The command-line tool upgrades both Oracle BPEL Process Manager projects and Oracle Enterprise Service Bus 10g projects (the latter to Oracle Mediator 11g). 
With the command-line tool, you can merge BPEL projects together and create a single composite out of multiple BPEL projects. This is not possible when you use the Oracle JDeveloper Migration Wizard. The functionality is exactly the same as the JDeveloper mode when it comes to dealing with SOA project contents, because the same codebase is used. The Oracle SOA Suite Command-Line Upgrade Tool is compatible (with restrictions) with the Oracle JDeveloper Migration Wizard. In other words, you can choose to remain in command-line mode all the way through the upgrade process (upgrade, compile, package, and deploy), you can choose to move to Oracle JDeveloper, or you can use both tools, with no functionality loss. However, it is important to note that the command-line tool upgrades SOA project artifacts only. Other Oracle JDeveloper artifacts (for example, the .jpr and .jws files) are ignored. To work around this restriction, note the following: The Oracle SOA Suite Command-Line Upgrade Tool copies files from the BPEL suitcase directory (the BPEL subdirectory or the directory hosting the BPEL files) to the target directory specified on the command line. This copying action does not copy the .jpr or .jws files. After the upgrade, the target directory contains only the upgraded SOA project contents. To remedy this problem in Oracle JDeveloper, you can create a new application or new project, and then define the project directory to be the newly upgraded composite directory. If you attempt to use the Oracle SOA Suite Command-Line Upgrade Tool to upgrade an application that contains Oracle Human Workflow projects, note that the tool will create a separate project for each upgraded task form. Each resulting project is an Oracle ADF project, and Oracle ADF does not support command-line deployment. As a result, after using the Oracle SOA Suite Command-Line Upgrade Tool, you must open the upgraded projects in Oracle JDeveloper 11g and deploy them from JDeveloper. 
For information about how to open the upgraded project in Oracle JDeveloper, see Chapter 9, "Using the Oracle SOA Suite Command-Line Upgrade Tool with Oracle JDeveloper 11g". The files required to run the Oracle SOA Suite Command-Line Upgrade Tool are installed automatically when you install Oracle JDeveloper 11g and when you install Oracle SOA Suite 11g: When you install Oracle SOA Suite 11g, the required XML files are installed in the bin directory of the Oracle SOA Suite Oracle home. When you install Oracle JDeveloper 11g, the required XML files are installed in the bin directory of the Oracle JDeveloper home. You can use the files in this directory, together with Apache Ant, to migrate your 10g Release 3 (10.1.3) SOA projects to 11g. Note: For the purposes of this procedure, it is assumed you are running the Oracle SOA Suite Command-Line Upgrade Tool from the Oracle home of an Oracle SOA Suite installation. If you are running it from an Oracle JDeveloper home, replace ORACLE_HOME with JDEV_HOME in the following procedures. To use the Ant project: Set the ORACLE_HOME environment variable so it is defined as the path to the Oracle SOA Suite Oracle home. Set the other environment variables that are required by the command-line tool by running the following script: On UNIX systems: . ORACLE_HOME/bin/soaversion.sh On Windows systems: ORACLE_HOME\bin\soaversion.cmd Run the Ant project, as follows: ant -f ORACLE_HOME/bin/ant-sca-upgrade.xml -Dsource sourceDir -Dtarget targetDir -DappName app_name bpel_or_mediator_identifier For examples of using the Oracle SOA Suite Command-Line Upgrade Tool, see Example 9-5 and Example 9-6. For a description of each command-line argument, see Table 9-2. After the upgrade is complete, run the SCA compiler (ant-sca-compile.xml), which verifies the migrated sources. Because the upgrade does not generate complete artifacts in all cases, you may see errors or warnings from the SCA compiler with information on how to fix them. 
Check the SCA compiler output for reference. After you get a clean pass from the SCA compiler, use the ant-sca-package.xml tool to package your application. You can then deploy the application using ant-sca-deploy.xml. After deployment, your project will be available on the server for testing. Note, however, that in most cases you will likely want to open the upgraded project in Oracle JDeveloper 11g. From there, you can easily review, verify, and make any necessary updates to the application projects. For more information, see Section 9.3.6.2, "Using the Oracle SOA Suite Command-Line Upgrade Tool with Oracle JDeveloper 11g".
Example 9-5 Command-Line Example on the UNIX Operating System
ant -f /Disk03/Oracle/Middleware/SOA_home/bin/ant-sca-upgrade.xml -Dsource /disk03/myProjects/my_bpel_app/ -Dtarget /disk03/my11gProjects/my_bpel_app/ -DappName my_bpel_app bpel
Example 9-6 Command-Line Example on the Windows Operating System
ant -f C:\Oracle\Middleware\SOA_home\bin\ant-sca-upgrade.xml -Dsource C:\myProjects\my_bpel_app\ -Dtarget C:\my11gProjects\my_bpel_app\ -DappName my_bpel_app bpel
When you run the Oracle SOA Suite Command-Line Upgrade Tool, note the following: If the sourceDir and the targetDir directories are the same directory, then the command-line upgrade tool will automatically create a backup of the directory before migration; if you must re-run the upgrade, you can restore the source files using the backup directory. You can identify the backup directory by the ".backup" suffix in its name. If the sourceDir and targetDir directories are different, a backup directory is not created and is not necessary, because the files in the sourceDir are not modified during the migration. After the upgrade, the logs can be found in the following output directory: ORACLE_HOME/upgrade/logs Using the Oracle SOA Suite Command-Line Upgrade Tool, you can merge BPEL projects together and create a single composite. 
This is not possible when you use the Migration Wizard in Oracle JDeveloper. To combine multiple BPEL projects into a single composite, provide multiple source directories as part of the -Dsource property on the command line. Path separators can be a colon (:) or a semicolon (;); Ant will convert the separator to the platform's local conventions. As a guideline, also use double quotes around the multiple source directories to prevent Ant from parsing the input in an unexpected manner. The first source directory specified is considered the root of the 11g project and determines the composite name. For example: ant -f ORACLE_HOME/bin/ant-sca-upgrade.xml -Dsource "sourceDir1:sourceDir2" -Dtarget targetDir -DappName app_name The first project in the source list is considered the root project, and only its services are exposed as composite services. Any time you use the merge feature, it is recommended that the projects be related. Merging of projects is supported for BPEL projects only; ESB projects cannot be merged with other BPEL or ESB projects. The Oracle SOA Suite Command-Line Upgrade Tool can also be used to upgrade ESB projects to Oracle Mediator 11g. To upgrade an ESB project, use the instructions in Section 9.3.6.4, "Upgrading BPEL or ESB Projects with the Oracle SOA Suite Command-Line Upgrade Tool", but be sure to use the mediator argument, as follows: ant -f ORACLE_HOME/bin/ant-sca-upgrade.xml -Dsource sourceDir -Dtarget targetDir -DappName app_name mediator If you use domain value maps (DVMs) or cross references in your Oracle BPEL Process Manager 10g or Oracle Enterprise Service Bus 10g projects, then note the following: The XPath functions you use to access the domain value maps or cross references are upgraded automatically to Oracle BPEL Process Manager and Oracle Mediator 11g when you open and upgrade your applications in Oracle JDeveloper 11g. 
However, you must perform a manual upgrade task to upgrade the domain value maps and cross references that are saved in the Oracle Enterprise Service Bus repository. The upgrade process moves the domain value maps from the ESB repository to the Oracle Fusion Middleware 11g Metadata Services (MDS) repository. For more information, see "Managing the MDS Repository" in the Oracle Fusion Middleware Administrator's Guide. To upgrade your 10g domain value maps in the ESB repository, perform the following tasks: Change directory to the Oracle Enterprise Service Bus Oracle home. Use the export script to export the metadata to a ZIP file. For example, on UNIX systems: ORACLE_HOME/export.sh metadata10g.zip Use Apache Ant and the upgrade-xrefdvm target in the ant-sca-upgrade.xml file to use the metadata ZIP file to generate an Oracle SOA Suite archive JAR file: Change directory to the Oracle SOA Suite 11g Oracle home. Start Oracle JDeveloper 11g and create a new application. Import the Oracle SOA Suite archive into a new SOA project: From the Oracle JDeveloper 11g File menu, select Import, then SOA Archive into SOA Project. In the Create SOA Project from SOA Archive dialog box, select JAR Options in the navigation tree on the left, and then click Browse to locate the sca_XrefDvmFiles10g_rev1.0.jar file that you created previously in this procedure. Select File Groups > Project Output from the navigation tree on the left, and enter XrefDvmFiles10g in the Target Directory in Archive field. Click OK to create the new SOA project, which is called XrefDvmFiles10g. The new project consists of an empty composite, along with the upgraded XRef and DVM files. Create a JAR file for the XRef and DVM metadata, and then deploy the JAR file to the Oracle SOA Infrastructure. For more information, see "Deploying and Using Shared Metadata Across SOA Composite Applications," which describes the same deployment process.
http://docs.oracle.com/cd/E14571_01/upgrade.1111/e10127/upgrade_soa_apps.htm
Jeroen Breedveld wrote: > Hi all, > > I posted this to the ant-user list but haven't got any reaction so far. > Since I built custom ant tasks it seems to me that this question can > also be posted here. > Sorry, I'm looking at it and was going to respond on ant-user. The namespace used for loaders is the "reference" namespace, and a reference to the project object is added to this namespace for the project name. I'm going to add some code to make this into an error rather than a ClassCastException. For now, use distinct names. Conor
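Conor's explanation can be sketched without Ant itself. The toy model below (plain Java; the class and map here are hypothetical stand-ins, not Ant's actual API) shows why a single shared reference namespace causes the clash: the build system already stores the project object under the project's own name, so a loader lookup under that same id finds the wrong kind of object.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a single shared "reference" namespace (hypothetical, not
// Ant's real API). The project object is registered under the project's
// name, so registering or looking up a class loader under that same id
// collides with it.
public class ReferenceClashDemo {
    static final Map<String, Object> references = new HashMap<>();

    static class FakeProject {} // stands in for the real Project instance

    public static void main(String[] args) {
        String projectName = "mytask";
        references.put(projectName, new FakeProject()); // added by the build system

        // Later, code expecting a ClassLoader under the same name finds the project:
        Object ref = references.get(projectName);
        if (!(ref instanceof ClassLoader)) {
            // Blindly casting here is the ClassCastException Conor describes;
            // his fix reports a proper error, and the workaround is distinct names.
            System.out.println("clash: found " + ref.getClass().getSimpleName());
        }
    }
}
```

Running it prints the clash message, which is the same collision Conor's suggested workaround (distinct names) avoids.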
http://mail-archives.apache.org/mod_mbox/ant-dev/200301.mbox/%3C3E19775D.6030801@cortexebusiness.com.au%3E
You must have seen a lot of very fancy status bars in different samples and commercial applications, with progress bars, animation, images, and so on. Here, I present a technique for making a text-only status bar with many text-only panes and its own tool tips, extracted from the status bar panes themselves. You can easily replace the standard status bar in an existing SDI/MDI app by including: #include "TextualStatusBar.h" at the top. For a dialog-based app you can create it in OnCreate(). Although this might not be the best status bar around, I've shown you the way to deal with a status bar and tool tip control as a child window. Furthermore, there are a couple of other (read, better!) ways for adding tool tips to any control. The technique I used in the sample is the same one that I used in an app because it was a requirement. I caught WM_NCHITTEST over the status bar and updated the tool tip text. This example also teaches how to get to the individual panes of the status bar and perform an operation on them. It also illustrates the tight connection between the MFC CStatusBar and CStatusBarCtrl classes. Please do not hesitate to mail me any bug, suggestion, clarification, or query.
http://www.codeproject.com/Articles/497/Text-Only-Status-Bar?fid=625&df=90&mpp=10&sort=Position&spc=None&tid=77680
Member 140 Points Nov 08, 2006 09:18 PM|john@seedprod.com|LINK I just installed AJAX Beta 2 and the updated AjaxControlToolkit. I'm getting this error when I try to access the site. Any ideas? Stack Trace: AjaxControlToolkit Member 140 Points Nov 09, 2006 02:41 PM|john@seedprod.com|LINK Thanks! The solution that worked for me was that I had to change my permissions on the C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files folder. That corrected the issue! Star 8728 Points Microsoft Nov 14, 2006 09:59 PM|David Anson|LINK Jan 23, 2007 09:45 PM|eappell|LINK <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %> I've got the AjaxControlToolkit.dll file in my Bin folder, and I've re-added it a couple of times, and then updated the reference, but no joy. I'm not sure what else to do. One interesting thing that's happening is, I've deployed this app to our development server and it runs fine - no errors, even on the page that references the assembly. However, the development server has NOT yet been upgraded to ASP.NET AJAX 1.0. This error only happens on my local instance... So it appears to be related to the new AJAX release, not the AjaxControlToolkit assembly (I do have the most recent release of the ACT on the dev server in my Bin directory)... If anyone has any other ideas for how I might get this working, please reply! Thanks, eddie Member 8 Points Jan 24, 2007 04:12 PM|eappell|LINK I tried what you suggested, but I still get the same error. I did try installing it to the GAC, but I get an Access denied error there as well: "Failure adding assembly to the cache: Access denied." This is an app that I am constantly updating and re-publishing to the site, so it's got me very concerned since I have to launch a new version of it next week... PLEASE, if anyone can tell me how to resolve this, let me know - I am DESPERATE. Thanks, eddie Jan 24, 2007 04:33 PM|eappell|LINK One more thing... 
If I open up the Sample website from the AjaxControlToolkit and try to compile I get the same error: Error 1 Could not load file or assembly 'AjaxControlToolkit' or one of its dependencies. Access is denied. So this seems to be a system-wide error. I re-ran the installation of the 1.0 extensions (repair) and the CTP, but no luck... Jan 24, 2007 06:10 PM|kirtid|LINK Jan 25, 2007 01:37 AM|eappell|LINK Well, I didn't really change anything, but when I came back from lunch everything seems to be working fine now. I'm really stumped as to what may have caused this, or what may have fixed it, but I'm glad it's working. Now I've got to install ASP.NET AJAX 1.0 on my server and hope the same thing doesn't happen there... BTW, I did have the release of ASP.NET AJAX installed - in fact I uninstalled it a couple of times and reinstalled it, so I'm pretty sure the installation was successful. Also, I am using the most recent version of AjaxControlToolkit.dll, which came out yesterday... Thanks for all your help; this one, I'm afraid, is a mystery... Eddie Member 4 Points Jul 13, 2007 05:59 PM|bishpop999|LINK Oct 15, 2007 07:46 AM|skqi|LINK copy to bin folder will do. Check Member 6 Points May 21, 2008 11:44 AM|javikiller|LINK I have almost the same problem. I have VS2005 installed on 3 computers in the office, and 4 days ago one of them stopped working with AJAX. I tried removing the folder inside Visual Studio 8 that I named AjaxToolkit and unzipping AjaxControlToolkit.zip again with the same name, but now I have build errors like this: Warning 9 Resolved file has a bad image, no metadata, or is otherwise inaccessible. Could not load file or assembly 'C:\Archivos de programa\Microsoft Visual Studio 8\AjaxToolkit\SampleWebSite\Bin\AjaxControlToolkit.dll' or one of its dependencies. The module was expected to contain an assembly manifest. 
PayUp.Web When I tried to add the reference in my web project, a warning icon shows on the AjaxControlToolkit reference, and the properties don't fill in the culture, description, version, etc., while on the other computers the AjaxControlToolkit reference's properties fill in all the fields correctly. I tried to remove the temporary files in C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727, but the error is still there, and I have tried to uninstall VS2005, but the error is still there. I don't know what to do! Jun 09, 2008 04:46 PM|lasstlos|LINK Here's what I did to resolve this under VS2008 (it's probably very similar under VS2005): First, on any project where I'm using dlls, I always create a dll folder to hold them. I NEVER rely on the bin folder for references (since that would be cleared out when running a clean operation and most source control programs do not include the bin folder when checking in/out) or on local copies of the dll (c:\ajax\2.0\bin\ajaxcontroltoolkit.dll). Sure, the build operation puts a copy there, but that's its job; leave the bin folder to the build script. If you ever work on your code somewhere else, publish it to another machine, or hand it off to someone else, you don't have to worry about whether the new machine has its ajaxcontroltoolkit.dll installed in the same location on its hard drive as yours. As a side note, under the properties -> advanced for the dll, I have a build action set to content and a copy to output directory set to do not copy. Second, I go into the references, remove any current ajaxcontroltoolkit references (or any other special dlls that I was using and am moving to the dll folder), and add the dll references back in pointing to the copies in my dll folder. 
Thirdly, if I'm going to be using something like the ajaxcontroltoolkit on most of my pages, I go ahead and throw the following into the web.config file so I don't have register statements on every page: <configuration> <system.web> <pages> <controls> <add tagPrefix="cc1" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit"/> </controls> </pages> <compilation> <assemblies> <add assembly="AjaxControlToolkit"/> </assemblies> </compilation> </system.web> </configuration> Hope this helps; it took me a while to get it right. If this is an intermittent problem for you (i.e., it works for a bit after a build and then you get a parser error), try stopping and starting IIS. In my case, the ajaxcontroltoolkit that the build script had placed in the bin folder was being replaced with an earlier version of the toolkit. Restarting IIS seems to have fixed this. Since doing this, my version in the bin folder has not reverted to 1.16xxx, it stays at 1.19xxx, and I don't have any more frustrating errors. Sep 03, 2008 06:18 PM|awarberg|LINK I too have a website which emits these errors at random. ASP.NET is a great idea and has worked well for me for many years. It appears, however, that Microsoft is now losing its grip on its product, as the sporadic nature of this issue can only attest. It seems that an overall goal of Microsoft is to weave their users/developers into a net from which it is difficult to migrate. Now that JavaScript libraries such as jQuery have reached maturity, it would seem a good occasion to look for alternatives to the ASP.NET heavyweight and adopt more to-the-point methods such as jQuery + web services. Best regards Andreas Sep 09, 2008 02:57 PM|jwfoster|LINK <identity impersonate="true" userName="UserName" password="Password" /> In my case, the fix was changing the user that was being impersonated on my local machine. 
I was able to pinpoint it to this issue because the site was working on my local machine with my development web.config, but not with my live web.config, and this was one of the main differences. I didn't make it as far as determining what specific security access was different between the two users. I tried granting my live web.config impersonated user full access to the bin folder on my local machine, to no avail. Sep 18, 2008 12:23 PM|Siva_V|LINK Thank you. I followed your advice and got my issue fixed. Thanks a lot chris poter Member 186 Points Sep 22, 2008 04:43 PM|lividsquirrel|LINK I was not able to solve this problem until I finally granted access to "Everyone" on the .../Temporary ASP.NET Files folder. However, when I looked into this, I found this worked because my application was impersonating a specific service account. I would encourage everyone to clearly identify the user that the process is running as! Does your web app impersonate the current user? If so, it would likely work on your development machine, where you are an admin, but you would see this error on a server where you didn't have privileges to the temporary asp.net folder. Also consider what identity your app pool is running under. Member 34 Points Sep 23, 2009 11:08 AM|vasireddybharath|LINK bishpop999, You are right... It worked for me also. Thanks man! Member 34 Points Sep 23, 2009 11:12 AM|vasireddybharath|LINK bishpop999, It worked for me. Thanks a lot. Member 3 Points Dec 14, 2009 03:38 PM|sajeelmunir|LINK The given solution on the following site solves this problem: "AJAX .NET 2.0" "Ajax Control Toolkit" Could not load file or assembly 'AjaxControlToolkit' or one of its dependencies Member 4 Points Nov 10, 2011 01:18 PM|nicolas.fortin|LINK You may receive this error if you are running your site with impersonation. 
Check your web.config impersonation tag and/or the security context that the particular user is running under: <identity impersonate="true" userName="UserName" password="Password" /> That solved my problem. Thanks for the answer. Jan 30, 2014 02:36 PM|Crazytw|LINK I got this error for no reason and I came across this post. None of the replies helped me, but I looked at my error message and went to C:\Users\"loginUserID"\AppData\Local\Temp\Temporary ASP.NET Files, added my credentials to the Temporary ASP.NET Files folder, and that solved my problem. I thought this may help somebody. AjaxControlToolkit 30 replies Last post Jan 30, 2014 02:36 PM by Crazytw
http://forums.asp.net/t/1043427.aspx?Could+not+load+file+or+assembly+AjaxControlToolkit+or+one+of+its+dependencies+Access+is+denied+
Hussein B wrote: (snip) > Thank you both for your kind help and patience :) > Built-in modules are compiled For which definition of "compiled"? > but even if they are so, when importing > them (sys for example), Python will run their code in order to create > bindings and objects, right? As the name implies, built-in modules are built in the interpreter - IOW, they are part of the interpreter *exposed* as modules[1]. Unless you have a taste for gory implementation details, just don't worry about this. Other "ordinary" modules need of course to be executed once when first loaded - IOW, the first time they are imported. All statements[2] at the top level of the module are then sequentially executed, so that any relevant objects (functions, classes, whatever) are created and bound in the module's namespace. [1] There's an ambiguity with the term "module", which is used for both the module object (instance of class 'module') that exists at runtime and the .py source file a module object is usually - but not necessarily - built from. [2] 'def' and 'class' are executable statements that create resp. a function or class object and bind them to the function/class name in the containing namespace.
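The point about top-level statements executing once, at first import, is easy to demonstrate. The module name demo_mod and its body below are invented for this demo; it is written to a temporary directory and then imported twice.

```python
import os
import sys
import tempfile

# Demo: a module's top-level statements run exactly once, at first import.
# The module name "demo_mod" and its contents are made up for this sketch.
src = (
    "print('top-level code running')   # executed during import\n"
    "def greet():                      # 'def' is itself an executable statement:\n"
    "    return 'hello'                # it builds and binds the function object\n"
)

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_mod.py"), "w") as f:
    f.write(src)

sys.path.insert(0, tmpdir)
import demo_mod   # first import: prints 'top-level code running'
import demo_mod   # second import: found in sys.modules, body is NOT re-run
print(demo_mod.greet())   # uses the binding created at import time; prints hello
```

The second import is satisfied from the sys.modules cache, which is why the top-level print fires only once.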
https://mail.python.org/pipermail/python-list/2008-August/474097.html
Visual GC
Most of us are aware of Java tuning using heap sizing and GC settings, but there are several more options which one can use. Please refer to the following websites for more details. I was curious to know how the PermGen, OldGen, and Eden spaces look (I can think of Hell or Heaven, as no one has experienced them), so I wrote a small piece of code to test the Visual GC output from JVisualVM, and following are a few results which you can see. The details need to be written up, as it would consume some more time for me to understand/digest.

public class StudentBean {
    private long sId;
    private String fName;
    private String sName;
    private String division;
    private String address;

    public StudentBean(long sId, String fName, String sName, String division, String address) {
        super();
        this.sId = sId;
        this.fName = fName;
        this.sName = sName;
        this.division = division;
        this.address = address;
    }

    @Override
    public String toString() {
        return "StudentBean [sId=" + sId + ", fName=" + fName + ", sName=" + sName
                + ", division=" + division + ", address=" + address + "]";
    }

    public long getsId() { return sId; }
    public void setsId(long sId) { this.sId = sId; }
    public String getfName() { return fName; }
    public void setfName(String fName) { this.fName = fName; }
    public String getsName() { return sName; }
    public void setsName(String sName) { this.sName = sName; }
    public String getDivision() { return division; }
    public void setDivision(String division) { this.division = division; }
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
}

import java.util.Random;

public class StudentTest {
    public static void main(String[] args) throws InterruptedException {
        Random randomize = new Random();
        for (;;) {
            StudentBean sBean = new StudentBean(randomize.nextLong(),
                    Integer.toString(randomize.nextInt()),
                    Integer.toString(randomize.nextInt()),
                    Integer.toString(randomize.nextInt()),
                    Integer.toString(randomize.nextInt()));
            System.out.println(sBean);
            Thread t = Thread.currentThread();
            synchronized (t) {
                t.wait(3000);
            }
        }
    }
}

The results were captured with the following option sets:
Default
-Xms128M -Xmx256M
-Xms128M -Xmx256M -Xmn64M
-Xms128M -Xmx256M -Xmn64M -XX:+UseConcMarkSweepGC
-Xms128M -Xmx256M -Xmn64M -XX:+UseParallelGC
-Xms128M -Xmx256M -Xmn64M -XX:+UseSerialGC

I would like to experiment with the other options as well. The number of permutations and combinations would be high, but I wanted to have a fair idea of how the Java memory model changes based on the Java options which are given by the user while invoking the JRE. Will write a more detailed report(?) based on my understanding/grasping power. (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
http://java.dzone.com/articles/visual-gc
CC-MAIN-2014-15
I am taking columns from a CSV file. I don't know the exact number of columns in advance, so I wish to use a two-dimensional array. Here is what I am doing:

sub column_segregation {
    my ($file, $col) = @_;
    use Text::CSV;
    my @array_A2 = ();
    open my $io, '<', $file or die "$!";
    my $csv = Text::CSV->new({ binary => 1 });
    while (my $row = $csv->getline($io)) {
        push @array_A2, $row->[$col];
    }
    close $io;
    return (@array_A2);
}

This routine returns an array. Now I want each column of the file to go to the corresponding column of the two-dimensional array, i.e. column 1 should go to column 1 of the 2D array, and so on. I have never worked with a 2D array, so how to put the data into the 2D array columns is what I need help with.

G'day torres09,

You asked about this less than 48 hours ago: indefinite number of columns. You were provided with code for doing this and links to documentation explaining it. How about you go back and read what's already been provided? If there was something you didn't understand, either in the code examples posted or the documentation linked to, then ask a specific question about that issue.

-- Ken

I think I don't understand the problem completely. $csv->getline($io) already returns a reference to an array where each entry "n" is the cell of column "n+1". So this should already give you the correct structure:

while (my $row = $csv->getline($io)) {
    push @DD, $row;
}

$DD[$r][$c] should give you the entry in row "r+1" and column "c+1".

"So I have never worked with DD array so how to put it in the DD array column is what I need help with"

Hmm, some of the answers in indefinite number of columns show that. Follow the basic debugging checklist and use Data::Dump::dd on some data to find out which structure is the one you want. Another example of working with arrays of arrays is in "How to traverse a two dimensional array in a constructor?", including a link to a free book which teaches this. Also, you could google it.
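For comparison outside Perl, the same rows-as-inner-arrays idea can be sketched in C++ (a deliberately naive illustration that splits on commas only and ignores quoted fields, which Text::CSV handles properly):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse CSV text into rows of cells, so table[r][c] is row r, column c.
// Naive sketch: splits on ',' only, with no support for quoting.
std::vector<std::vector<std::string>> parseCsv(const std::string& text) {
    std::vector<std::vector<std::string>> table;
    std::istringstream lines(text);
    std::string line;
    while (std::getline(lines, line)) {
        std::vector<std::string> row;
        std::istringstream cells(line);
        std::string cell;
        while (std::getline(cells, cell, ','))
            row.push_back(cell);
        table.push_back(row);   // each row becomes one inner array
    }
    return table;
}
```

The point is the same as in the Perl answer: push each whole row into the outer container and index it as table[row][column] afterwards; there is no need to build the structure column by column.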
I hope that helps. -Michael
http://www.perlmonks.org/index.pl?node_id=1044753
#include "RTOp.h"
#include "RTOp_obj_null_vtbl.h"

Include dependency graph for RTOp_TOp_max_vec_scalar.h (graph omitted).

Element-wise transformation: z0 = max(z0, min_ele);

This operator class implementation was created automatically by 'new_rtop.pl'. ToDo: Write the documentation for this class!

Definition in file RTOp_TOp_max_vec_scalar.h.
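The transformation this header documents, z0 = max(z0, min_ele), amounts to clipping each element of a vector from below. A minimal stand-alone sketch (not the actual RTOp implementation, which goes through the RTOp operator machinery):

```cpp
#include <algorithm>
#include <vector>

// Element-wise z0 = max(z0, min_ele): clip every entry of z0 from
// below at min_ele. Returns the transformed vector for convenience;
// the real operator transforms its argument vector in place.
std::vector<double> max_vec_scalar(std::vector<double> z0, double min_ele) {
    for (double& z : z0)
        z = std::max(z, min_ele);
    return z0;
}
```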
http://trilinos.sandia.gov/packages/docs/r10.2/packages/moocho/src/RTOpPack/doc/html/RTOp__TOp__max__vec__scalar_8h.html
Hi,

I'm trying to compile this project: on my Mac OS X 10.8.5 (Mountain Lion), with command line tools from Xcode 5.0. The project includes cl.hpp version 1.2.4 from here:

CMake runs cleanly and finds the OpenCL framework and headers, but when I 'make' the project I get:

In file included from /Users/nyholku/https:/github.com/smistad/GPU-Marching-Cubes/OpenCLUtilities/openCLUtilities.cpp:1:
In file included from /Users/nyholku/https:/github.com/smistad/GPU-Marching-Cubes/OpenCLUtilities/openCLUtilities.hpp:9:
/Users/nyholku/https:/github.com/smistad/GPU-Marching-Cubes/OpenCLUtilities/OpenCL/cl.hpp:678:1: error: expected unqualified-id
{
^

and a lot of other errors. Looking at the code that includes cl.hpp, I see these definitions before the include:

#define __NO_STD_VECTOR // Use cl::vector instead of STL version
#define __CL_ENABLE_EXCEPTIONS
#define __USE_GL_INTEROP
#include <iostream>
#include <fstream>
#include <utility>
#include <string>
#include <GL/glew.h>
#include <GLUT/glut.h>
#include "OpenCLUtilities/OpenCL/cl.hpp"

If I undef __NO_STD_VECTOR, the compilation gets further (but still not completely clean), but I expect it would not work. So I'm guessing this has something to do with the vector template, but I'm not sure how to approach this problem, as presumably this project builds just fine on Windows and Linux, i.e. this is likely some setup or version incompatibility with Mac OS / tools. The compiler is:

Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin12.5.0
Thread model: posix

br Kusti
http://www.khronos.org/message_boards/showthread.php/9206-cl-hpp-on-Mac-OS-X-fails-with-expected-unqualified-id-error?mode=hybrid
I've learned LZ77 in theory; now I want to practice. My first attempt was to code a C++ function that finds the best match when comparing the sliding window with the look-ahead buffer. This is what I came up with (btw, it must return an array of int: [0] contains the distance to run backwards from the look-ahead buffer, [1] contains how long the match is, which can't be less than 2). Somebody help me make this function work, it's awful so far, and I can't call it in main without bumping into an error report.

Code:
#include <iostream>
#include <string>
using namespace std;

int * find_match(char * sliding_window, char * look_ahead)
{
    int distance = 0;
    int lenght = 0;
    // for loop runs every character pointed in var sliding_window
    // it searches for a character match longer than 2
    for(int x = 0; *(sliding_window+x) != '\0'; x++)
    {
        if(sliding_window[x] == look_ahead[0])
            if(sliding_window[x+1] == look_ahead[1])
                if(sliding_window[x+2] == look_ahead[2])
                {
                    // if reached here, then it means a match longer than 2 bytes has been found
                    // distance = ???? dont know how to calculate distance
                    lenght++; // previously 2, but 2 has already been checked if it matches, and it does
                    while(sliding_window[x+lenght] == look_ahead[lenght])
                        lenght++;
                }
        if(lenght > 0) // if it's bigger than 0, then it means it's already found a match
            break;
    }
    if(distance != 0 && lenght != 0)
    {
        int return_array[] = {distance, lenght};
        return return_array;
    }
    else
    {
        return 0;
    }
}

int main()
{
    char * str1 = "Just testing my function.";
    char * str2 = "I'm just testing.";
    int lenght_distance[] = find_match(str1, str2);
    cout << "distance: " << lenght_distance[0];
    return 0;
}
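A corrected sketch (an editorial addition, not from the original thread): the two bugs to fix are that distance is never computed, although it is simply the offset of the match measured back from the end of the window, and that returning a pointer to a local array is undefined behaviour; returning a std::pair sidesteps the latter. Like the original triple comparison, this version requires at least three matching characters:

```cpp
#include <string>
#include <utility>

// Find the longest match (length >= 3) of the start of `look_ahead`
// inside `sliding_window`. Returns {distance, length}, where distance
// is counted backwards from the end of the window; {0, 0} = no match.
std::pair<int, int> find_match(const std::string& sliding_window,
                               const std::string& look_ahead) {
    int best_dist = 0, best_len = 0;
    int wsize = (int)sliding_window.size();
    for (int x = 0; x < wsize; ++x) {
        int len = 0;
        while (x + len < wsize &&
               len < (int)look_ahead.size() &&
               sliding_window[x + len] == look_ahead[len])
            ++len;
        if (len >= 3 && len > best_len) {
            best_len = len;
            best_dist = wsize - x;   // how far back the match starts
        }
    }
    return {best_dist, best_len};
}
```

For example, find_match("abcabcab", "abcd") finds the three-character match "abc" starting eight characters back from the end of the window.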
http://cboard.cprogramming.com/cplusplus-programming/139030-lz77-match-searching-help.html
On Wed, Feb 07, 2007 at 10:40:44PM +0100, Udo Richter wrote:
> Marko Mäkelä wrote:
> do understand this right: If your softdevice detects IsUserInactive,
> then it sends a 'stop playback' to VDR? Weird...

It doesn't actually send anything to VDR; it'll just return 0 from cDevice::PlayVideo() and cDevice::PlayAudio(). It will also have to sleep a little, because otherwise VDR would use 100% of CPU.

> I guess the most natural behavior would be to pretend that you're
> playing back normally, eg. make softdevice eat the incoming data stream
> at normal speed without decoding it.

Before writing my previous message, I added #if 0 #endif around the NowReplaying() check in shutdown.c, and it seems to work. VDR will be powered off during playback if there is no timed recording going on.

Marko
http://www.linuxtv.org/pipermail/vdr/2007-February/012067.html
Hello World,

I'm currently trying to get OpenGL to work using C++, CMake and Qt Creator, and I am now trying to create a shader program using external vertex and fragment shaders saved separately from the main source file. I have already managed to create a simple 2D triangle using glBegin() and glEnd(), but I didn't use any custom shader program. I have configured CMake to add the shaders to the executable in the following line:

add_executable(4d main.cpp Vertexshader.vs Fragmentshader.fs)

where Vertexshader.vs and Fragmentshader.fs have been saved in the source directory. I am puzzled how I should now access the information I stored in these files to compile it at runtime. I have found several scripts that somehow use CMake and GLSL shaders, but they are either a massive avalanche of code, or they don't seem to work in the first place. What I am looking for is the simplest piece of code implementing both CMake and GLSL shaders possible and/or available. Is there any basic tutorial/source code material available?
Just for clarity, here is the entire CMakeLists.txt code:

Code :
cmake_minimum_required(VERSION 2.8)
PROJECT(4D)

find_package(GLUT REQUIRED)
include_directories(${GLUT_INCLUDE_DIRS})
link_directories(${GLUT_LIBRARY_DIRS})
add_definitions(${GLUT_DEFINITIONS})

find_package(OpenGL REQUIRED)
include_directories(${OpenGL_INCLUDE_DIRS})
link_directories(${OpenGL_LIBRARY_DIRS})
add_definitions(${OpenGL_DEFINITIONS})

add_executable(4d main.cpp Vertexshader.vs Fragmentshader.fs)
target_link_libraries(4d ${OPENGL_LIBRARIES} ${GLUT_LIBRARY})

And the main.cpp code:

Code :
#include <iostream>

// ensure Apple compatibility
#ifdef __APPLE__
# include <GLUT/glut.h>
# include <OpenGL/OpenGL.h>
#else
# include <GL/glew.h>
# include <GL/freeglut.h>
#endif

using namespace std;

void render(void)
{
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5, -0.5);
    glVertex2f(0.5, -0.5);
    glVertex2f(0.0, 0.5);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE|GLUT_DEPTH);
    glutInitWindowSize(1280, 720);
    glutCreateWindow("Test");
    glutDisplayFunc(render);
    glutMainLoop();
}

Best Regards,
Jan Heemstra

Ps: English is not my native language. I apologise for any language errors.
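One simple way to get at the shader text at runtime is to read each file into a std::string and hand it to glShaderSource. A minimal sketch (note that listing the .vs/.fs files in add_executable does not embed them in the binary; the files must be next to the executable or referenced by full path, and the GL calls shown in the comment require a current GL context):

```cpp
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Read an entire text file (e.g. "Vertexshader.vs") into a string.
std::string loadShaderSource(const std::string& path) {
    std::ifstream in(path);
    if (!in)
        throw std::runtime_error("cannot open shader file: " + path);
    std::ostringstream ss;
    ss << in.rdbuf();          // slurp the whole file
    return ss.str();
}

// With a GL context current, the string would then be compiled roughly as:
//   GLuint shader = glCreateShader(GL_VERTEX_SHADER);
//   std::string src = loadShaderSource("Vertexshader.vs");
//   const char* p = src.c_str();
//   glShaderSource(shader, 1, &p, nullptr);
//   glCompileShader(shader);
//   (then check GL_COMPILE_STATUS and glGetShaderInfoLog on failure)
```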
http://www.opengl.org/discussion_boards/showthread.php/182354-How-to-Calculate-tangent?goto=nextoldest
Mastering LOB Development for Silverlight 5: A Case Study in Action

Validation

One of the most important parts of a Silverlight application is the correct implementation of validations in our business logic. These can be simple rules, such as the fact that the client must provide their name and e-mail address to sign up, or that before selling a book, it must be in stock. In RIA Services, validations can be defined on two levels:

- In entities, via DataAnnotations.
- In our Domain Service, as server or asynchronous validations via Invoke.

DataAnnotations

The namespace System.ComponentModel.DataAnnotations implements a series of attributes allowing us to add validation rules to the properties of our entities. The following table shows the most outstanding ones:

The following code shows us how to mark a field as "required":

[Required()]
public string Name
{
    get { return this._name; }
    set { (...) }
}

In the UI layer, the control bound to this field (a TextBox, in this case) automatically detects and displays the error. It can be customized as follows:

These validations are based on throwing exceptions. They are captured by the user controls bound to the data elements and, if there are errors, these are shown in a friendly way. When executing the application in debug mode with Visual Studio, you may find that the IDE breaks on these exceptions. To avoid this, refer to the following link, where the IDE configuration is explained:.

Where can validations be added? The answer is in the metadata definition of the entities in our Domain Service, within the server project. Going back to our example, the server project is SimpleDB.Web and the Domain Service metadata file is MyDomainService.metadata.cs.
These validations are automatically copied to the entity definitions and the context found on the client side. When the hidden Generated_Code folder is opened and you look at the Simple.DB.Web.g.cs file, you will be surprised to find that some validations are already implemented, for example, the required field, the field length, and so on. These are inferred from the Entity Framework model.

Simple validations

For validations that are already generated, let's see a simple example of how to implement those of the "required" field and "maximum length":

[Required()]
[StringLength(60)]
public string Name
{
    get { return this._name; }
    set { (...) }
}

Now, we will implement the syntactic validation for credit cards (format dddd-dddd-dddd-dddd). To do so, use the regular expression validator and add the following to the server file MyDomainService.metadata.cs:

[RegularExpression(@"\d{4}-\d{4}-\d{4}-\d{4}",
    ErrorMessage="Credit card not valid format should be: 9999-9999-9999-9999")]
public string CreditCard { get; set; }

To learn how regular expressions work, refer to the following link: and refer to this free tool to try them in a quick way:.

Custom and shared validations

Basic validations are acceptable for 70 percent of validation scenarios, but there are still 30 percent of validations which do not fit these patterns. What do you do then? RIA Services offers CustomValidationAttribute. It permits the creation of a method which performs a validation defined by the developer. The benefits are listed below:

- It is your code: the necessary logic can be implemented to perform the validations.
- It can be written so that the validation is reusable in other modules (for instance, the validation of an IBAN, International Bank Account Number).
- You can choose whether a validation is executed only on the server side (for example, a validation requiring database reads) or whether it is also copied to the client.
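As a language-neutral aside, the two checks built in this section, the dddd-dddd-dddd-dddd format and the card checksum (presumably the standard Luhn algorithm; the book elides that part of the method body), can be tried stand-alone. C++ is used here purely for illustration, since the book's own code is C#:

```cpp
#include <cctype>
#include <regex>
#include <string>

// Format check: four groups of four digits separated by dashes,
// the same pattern as the RegularExpression attribute above.
bool validCardFormat(const std::string& s) {
    static const std::regex pattern(R"(\d{4}-\d{4}-\d{4}-\d{4})");
    return std::regex_match(s, pattern);
}

// Checksum (assumed to be Luhn): walking right to left, double every
// second digit, subtract 9 when the doubled value exceeds 9, and
// require the total to be divisible by 10.
bool luhnValid(const std::string& cardNumber) {
    std::string digits;                  // strip the "-" separators
    for (char c : cardNumber)
        if (std::isdigit(static_cast<unsigned char>(c)))
            digits += c;
    if (digits.empty()) return false;
    int sum = 0;
    bool doubleIt = false;
    for (int i = (int)digits.size() - 1; i >= 0; --i) {
        int d = digits[i] - '0';
        if (doubleIt) { d *= 2; if (d > 9) d -= 9; }
        sum += d;
        doubleIt = !doubleIt;
    }
    return sum % 10 == 0;
}
```

The widely used MasterCard test number 5555-5555-5555-4444 passes both checks.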
To validate the checksum of the CreditCard field, follow these steps:

- Add to the SimpleDB.Web project the class named ClientCustomValidation. Within this class, define a static method, returning a ValidationResult, which accepts the value of the field to evaluate as a parameter:

public class ClientCustomValidation
{
    public static ValidationResult ValidMasterCard(string strcardNumber)
}

- Implement the summarized validation method (the part computing the checksum is elided):

public static ValidationResult ValidMasterCard(string strcardNumber)
{
    // Let us remove the "-" separator
    string cardNumber = strcardNumber.Replace("-", "");
    // We need to keep track of the entity fields that are
    // affected, so the UI controls that have this property
    // bound can display the error message when it applies
    List<string> AffectedMembers = new List<string>();
    AffectedMembers.Add("CreditCard");
    (...)
    // Validation succeeded returns success
    // Validation failed provides error message and indicates
    // the entity fields that are affected
    return (sum % 10 == 0) ? ValidationResult.Success :
        new ValidationResult("Failed to validate", AffectedMembers);
}

To keep the validation simpler, only MasterCard has been covered. To know more and cover more card types, refer to the page. In order to find examples of valid numbers, go to.

- Go to the file MyDomainService.metadata.cs and, in the client entity, add the following to the CreditCard field:

[CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")]
public string CreditCard { get; set; }

If the application is executed now and you try to enter an invalid value in the CreditCard field, it won't be marked as an error. What happens? The validation is only executed on the server side. If it is intended to be executed on the client side as well, rename the file from ClientCustomValidation.cs to ClientCustomValidation.shared.cs.
In this way, the validation will be copied to the Generated_Code folder and executed on the client as well. In the code generated on the client side, the validation is associated with the entity:

/// <summary>
/// Gets or sets the 'CreditCard' value.
/// </summary>
[CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")]
[DataMember()]
[RegularExpression("\\d{4}-\\d{4}-\\d{4}-\\d{4}",
    ErrorMessage="Credit card not valid format should be: 9999-9999-9999-9999")]
[StringLength(30)]
public string CreditCard {

This is quite interesting. However, what happens if more than one field has to be checked in the validation? In this case, one more parameter is added to the validation method: ValidationContext. Through this parameter, the instance of the entity we are dealing with can be accessed.

public static ValidationResult ValidMasterCard(
    string strcardNumber, ValidationContext validationContext)
{
    client currentClient = (client)validationContext.ObjectInstance;

Entity-level validations

Field validation is quite interesting, but sometimes rules have to be applied at a higher level, that is, the entity level. RIA Services implements some machinery to perform this kind of validation; only a custom validation has to be defined in the appropriate entity class declaration. Following the sample we're working on, let us implement a validation which checks that at least one of the two payment methods (PayPal or credit card) is informed. To do so, go to ClientCustomValidation.shared.cs (in the SimpleDB.Web project) and add the following static function to the ClientCustomValidation class:

public static ValidationResult ValidatePaymentInformed(client CurrentClient)
{
    bool atLeastOnePaymentInformed =
        ((CurrentClient.PayPalAccount != null &&
          CurrentClient.PayPalAccount != string.Empty) ||
         (CurrentClient.CreditCard != null &&
          CurrentClient.CreditCard != string.Empty));
    return (atLeastOnePaymentInformed) ?
        ValidationResult.Success :
        new ValidationResult("One payment method must be informed at least");
}

Next, open the MyDomainService.metadata file and add, at the class level, the following annotation to enable that validation:

[CustomValidation(typeof(ClientCustomValidation), "ValidatePaymentInformed")]
[MetadataTypeAttribute(typeof(client.clientMetadata))]
public partial class client

When executing and trying the application, you will realize that the validation is not performed. This is due to the fact that, unlike validations at the field level, entity validations are only launched client-side when calling EndEdit or TryValidateObject. The logic is to first check whether the fields are well informed and then perform the appropriate validations. In this case, a button will be added, triggering the validation at the entity level. To know more about validation on entities, go to.

Define the command launching the validation on the current entity in the ViewModel, as in the following code:

private RelayCommand _validateCommand;
public RelayCommand ValidateCommand
{
    get
    {
        if (_validateCommand == null)
        {
            _validateCommand = new RelayCommand(() =>
            {
                // Let us clear the current validation list
                CurrentSelectedClient.ValidationErrors.Clear();
                var validationResults = new List<ValidationResult>();
                ValidationContext vcontext = new ValidationContext(
                    CurrentSelectedClient, null, null);
                // Let us run the validation
                Validator.TryValidateObject(CurrentSelectedClient,
                    vcontext, validationResults);
                // Add the errors to the entity's validation error list
                foreach (var res in validationResults)
                {
                    CurrentSelectedClient.ValidationErrors.Add(res);
                }
            }, (() => (CurrentSelectedClient != null)));
        }
        return _validateCommand;
    }
}

Define the button in the window and bind it to the command:

<Button Content="Validate" Command="{Binding Path=ValidateCommand}" />

While executing, it will be seen that the fields remain blank, even if we click the button.
Nonetheless, when adding a breakpoint, you can see that the validation runs. What happens is that an element showing the result of that validation is missing. In this case, the choice will be to add a header whose DataContext points to the current entity; if entity validations fail, they will be shown in this element. For more information on how to show errors, check the link.

The TextBox added will show the entity validation errors. The final result will look as shown in the following screenshot:

Domain Services validations

All validations made so far could be replicated on the client side. However, there are scenarios where validation must only be executed on the server side, either because it needs to access local resources, such as a database lookup, or because intermediate data used for validations cannot be exposed on the client side for security reasons. Let us see how to execute validations on the server side only, and how to perform, from our Silverlight application, calls to asynchronous validations.

Server validations

In order to implement a server-side validation, we have the option to define a custom validation (as previously seen) without renaming the file from .cs to .shared.cs. In this way, the validation won't be copied to the client side and will only be executed on the server side. This approach is not wrong, but sometimes it is advisable to be more explicit; that is, before performing an insertion or an update, it may be adequate to execute some validations. RIA Services allows us to throw validation exceptions from an operation in our Domain Service. To see how this works, add a validation to the UpdateClient method to check that the credit card number is not used by another user.
The following are the steps:

- Define a server-side function to check whether the card number is duplicated, as in the following code:

public bool CreditCardNumberAlreadyExists(client currentclient)
{
    List<client> cliensWithSameCreditCard = null;
    if (currentclient.CreditCard != null &&
        currentclient.CreditCard != string.Empty)
    {
        cliensWithSameCreditCard =
            (from c in this.ObjectContext.clients
             where c.CreditCard == currentclient.CreditCard
                && c.ID != currentclient.ID
             select c).ToList<client>();
    }
    return (cliensWithSameCreditCard != null &&
            cliensWithSameCreditCard.Count > 0);
}

- Update the server-side update method. It reads the database, checks for a clash and, if there is one, throws an exception, as in the following code:

public void UpdateClient(client currentclient)
{
    // Is there a collision? Throw the exception
    if (CreditCardNumberAlreadyExists(currentclient))
    {
        // Let us mark the field affected (it will show up the
        // error on the UI bound element)
        ValidationResult error = new ValidationResult(
            "Credit card already exists for another account",
            new string[] { "CreditCard" });
        throw new ValidationException(error, null, currentclient);
    }
    // If no error, just perform the update
    this.ObjectContext.clients.AttachAsModified(currentclient,
        this.ChangeSet.GetOriginal(currentclient));
}

- Handle the error on the client side when the call is made to SubmitChanges, showing an error message if the error occurs:

_context.SubmitChanges(s =>
{
    if (s.HasError)
    {
        foreach (var validationError in
                 CurrentSelectedClient.ValidationErrors)
        {
            MessageBox.Show(validationError.ErrorMessage);
        }
        s.MarkErrorAsHandled();
    }
}, null);

To keep the sample as easy as possible, the message is shown from the ViewModel. If automatic unit testing is going to be added later, or if the ViewModel is reused in a WP7 application, you should use one of the mechanisms described in the previous chapter (IDialogService or Messenger) and decouple the UI from the ViewModel.
Asynchronous validations

The server validation defined in the previous section is very interesting. Nevertheless, wouldn't it be interesting to perform the validation without submitting changes? Yes, it would. The method previously defined, CreditCardNumberAlreadyExists, can be reused and invoked in an asynchronous way. In this case, add the invocation of the validation to the validation command. When you get the result, check it and, if an error occurred, add it to the errors displayed in the UI:

// Let us perform as well the server invoke (credit card validation)
// We will use this to get the result of the operation
InvokeOperation<bool> inv;
inv = _context.CreditCardNumberAlreadyExists(CurrentSelectedClient);
inv.Completed += ((s, e) =>
{
    if (inv.Value == true)
    {
        ValidationResult creditcardExists = new ValidationResult(
            "Credit Card already registered",
            new string[] { "CreditCard" });
        CurrentSelectedClient.ValidationErrors.Add(creditcardExists);
    }
});

Advanced topics

Now that we have covered the basics, let's check some advanced topics that we will come across in live project developments.

Cancelling changes

When working with RIA Services, something of a data island is brought to the client side. We can work with it and, once we are ready, send it to the server. What happens if we want to cancel changes and start again? For instance, a user is modifying a client file and realizes that they are working on the wrong client, so they want to cancel the changes made. The entities with which we are working implement IRevertibleChangeTracking. This interface defines a method named RejectChanges, which restores the affected entity, and the associated ones (if applicable), to their original values.
In this case, if changes are to be cancelled, it will only be necessary to implement the following code lines (to see it working, press the Cancel Changes button in the sample application):

// Let us clear the current validation list
CurrentSelectedClient.ValidationErrors.Clear();
// Let us cast the entity to IRevertibleChangeTracking
IRevertibleChangeTracking revertible =
    CurrentSelectedClient as IRevertibleChangeTracking;
// Reject changes
revertible.RejectChanges();

The entity also implements a method called GetOriginal. Why shouldn't it be used? Because it returns a disconnected entity. If the entity had data associated from other entities, they will not be reflected (see association later).

Transactions

It may also happen that one of the entities returns an error when trying to save it. In this case, WCF RIA Services calls the SubmitChanges function, which internally wraps all those changes (a unit of work) in a transaction; that is, if any of them fail, none of them will be saved. What if we want to configure the transaction in detail? What if we are not using the ADO.NET Entity Framework? We can override the Submit method and configure the transaction at our convenience (see). For more information in this area, refer to the link. Another interesting topic is adding auditing and saving a change log. For more information, check the link.

Domain Service and partial classes

At the beginning of this chapter, we pointed out that the Visual Studio wizards generate a Domain Service class, which is to be taken as a starting point and customized according to our needs. What happens if changes are introduced in the ADO.NET Entity Framework model? For instance, when adding a new table or changing a field type, is it necessary to regenerate the Domain Service and manually enter the customizations again? No, it is not. Partial classes can be used to implement our customized methods. Therefore, we can refresh our Domain Service without the fear of losing all the changes made.
Let us now see, step by step, how to add a partial class to the sample:

- Go to the server project, Packt.Booking.Server.Web, open the MyDomainService.cs file and add partial to the class definition:

public partial class MyDomainService :
    LinqToEntitiesDomainService<SimpleClientEntities>

- Add a new class called MyDomainServiceP (Add | New Class).

- Change the header of that class to the same one used previously, to create an extension of it:

public partial class MyDomainService :
    LinqToEntitiesDomainService<SimpleClientEntities>

- Add the using namespaces, as in the following code:

using System.ServiceModel.DomainServices.EntityFramework;
using System.ComponentModel.DataAnnotations;

- Now, open the MyDomainService.cs file and cut the customized methods to paste them into the new file, MyDomainServiceP.cs:

public partial class MyDomainService :
    LinqToEntitiesDomainService<SimpleClientEntities>
{
    public bool CreditCardNumberAlreadyExists(client currentclient)
    {
        (...)

What about the entities file? If the changes are not many, it is worthwhile to enter them manually, so as to avoid losing the information we have entered by hand (that is, validations).

Include

When having a look at the entities that have been created so far, it can be seen that they have links to other related entities. For instance, in the client entity, apart from the country identifier, we can find a property of the country type. If a breakpoint is added when loading these data, the property will be null. What is happening then? By default, the queries generated by RIA Services do not include those bound entities. Careless use of this technique could make our application consume too much bandwidth, as well as resources. What to do then? In the cases where it is justified (for instance, when loading the entity of the associated country, or a master-detail association), these entities can be added to our queries. What if queries return a lot of records?
Associating more entities means more load. The ideal thing to do here is to use pagination (bear in mind that few users are capable of processing more than 100 records at one time). To see pagination solutions, check these two links: ly/90ZNtA and (server paging).

Let us see how to include the country entity when loading every client record:

- First, edit the Domain Service in the server project and include the entity in the query bringing the clients, as in the following code:

public IQueryable<client> GetClients()
{
    return this.ObjectContext.clients.Include("country");
}

- Still in the server project, open the file containing the entities' metadata, MyDomainService.metadata.cs, and search for the client entity. In the nested class, the country property will be found. Add the Include annotation to it, using the following code:

public partial class client
{
    internal sealed class clientMetadata
    {
        (...)
        [Include]
        public country country { get; set; }

- In doing so, the country entity of each client will be brought when loading. In the sample, the DataGrid can be modified to add a column showing the Name field:

<sdk:DataGrid.Columns>
    <sdk:DataGridTextColumn

Composition

RIA Services also knows a special type of association. In a hierarchy of entities, one entity is referred to as the parent entity and the other related entities are referred to as descendant entities. The child entities cannot exist without the master entity, and usually these entities are always displayed or modified together. A typical example is a shop system where we have one entity for each order; this entity also has a list of items, each one with a reference to a product and a quantity. In our sample application, we decided to make a composition between a floor and the rooms of this floor.
Most readers probably know compositions from UML class diagrams, as in the following image:

These data classes typically have the following characteristics:

- The relationship between the entities can be represented as a tree, with the descendant entities connected to a single parent entity. The descendant entities can extend for any number of levels. This means that we could also decide to configure a composition between a building and its floors. This can be a suitable approach if, for instance, we develop a graphic designer where all the information is needed to render a map of a given building.
- The lifetime of a descendant entity is contained within the lifetime of the parent entity. This means that if you delete the floor, all rooms will be deleted as well.
- The descendant entity does not have a meaningful identity outside of the context of the parent entity.
- Data operations on the entities require the entities to be treated as a single unit. For example, adding, deleting, or updating a record in the descendant entity requires a corresponding change in the parent entity.

Let us move from theory to practice. Defining a composition with RIA Services is very simple: just apply the CompositionAttribute attribute to the property that defines the association in the metadata of the entity, as in the following code:

[MetadataTypeAttribute(typeof(Floor.FloorMetadata))]
public partial class Floor
{
    internal sealed class FloorMetadata
    {
        [Include]
        [Composition]
        public EntityCollection<Room> Rooms { get; set; }
    }
}

When applying the CompositionAttribute attribute to a property, the data from the descendant entity is not automatically retrieved with the parent entity. To include the descendant entity in the query results, you must apply the IncludeAttribute attribute, as previously described.
Compositions in RIA services gain the following behaviors:

- Hierarchical change tracking: When a child entity is modified, the parent also transitions to the Modified state. When a parent is in the Modified state, all of its children are included in the change-set that is sent to the server, including any unmodified children. Therefore, our update method must be modified, as we will see shortly.
- Public entity sets for child types are not generated on the code-generated DomainContext. Children are only accessible via their parent relationship.

Effectively, this means that you do not have to manage your child entities manually. Just create a new room, add this room to the floor where it belongs, and call the SaveChanges method of the DomainContext. The client will send all entities, including the child entities, to the server, which updates them in the correct order, for example, first updating the parent entity and then inserting the child entities.

In addition to this code, our update method must also be changed for the Floor. The problem is that the Entity Framework makes a so-called deep attach, which means that it also adds all child entities of this composition to the DataContext object. If more than one room is added in one Unit of Work in your client context, the system will send all the entities to the server, but two of them will have the same primary key. Therefore, if you do not change the update method, an exception will be thrown with the following message:

InvalidOperationException was unhandled by user code: An entity with the same identity already exists in this EntitySet.

public void UpdateFloor(Floor currentFloor)
{
    currentFloor.Rooms.Clear();
    if (currentFloor.EntityState == EntityState.Detached)
    {
        Floor original = ChangeSet.GetOriginal(currentFloor);
        if (original != null)
        {
            ObjectContext.Floors.AttachAsModified(currentFloor, original);
        }
        else
        {
            ObjectContext.Floors.Attach(currentFloor);
        }
    }
    foreach (Room change in ChangeSet.GetAssociatedChanges(currentFloor, p => p.Rooms))
    {
        ChangeOperation changeOperation = ChangeSet.GetChangeOperation(change);
        switch (changeOperation)
        {
            case ChangeOperation.Insert:
                if (change.EntityState == EntityState.Added)
                    break;
                if (change.EntityState != EntityState.Detached)
                {
                    ObjectContext.ObjectStateManager.ChangeObjectState(change, EntityState.Added);
                }
                else
                {
                    ObjectContext.Rooms.AddObject(change);
                }
                break;
            case ChangeOperation.Update:
                ObjectContext.Rooms.AttachAsModified(change, ChangeSet.GetOriginal(change));
                break;
            case ChangeOperation.Delete:
                if (change.EntityState == EntityState.Detached)
                {
                    ObjectContext.Rooms.AttachAsModified(change);
                }
                ObjectContext.DeleteObject(change);
                break;
        }
    }
}

The code is not as complicated as it probably seems. The following steps must be followed:

- Remove all child entities from the collection. Do not worry, the changes do not get lost, because the Entity Framework tracks all changes and they will be taken care of later.
- If the parent entity is detached, attach it. It is important to check whether the original entity is null. If no property has been changed directly, but only some of the child entities were added, removed, or changed, the original entity will be null and the AttachAsModified method fails.
- Get all changed child entities from the change tracker and handle them, depending on their change operation.

You probably recognized that this code snippet will look the same for all compositions you may have in your application, and that it is not a good idea to just copy and paste it. It is good practice to follow the DRY principle (Don't Repeat Yourself), so we provide a generic solution in the demo application, which can be found in the UpdateFloor method of the BookingDomainService. More information about composition in general can be found at the MSDN website: en-us/library/ee707346%28v=VS.91%29.aspx.
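A generic version of this update logic might look like the following sketch. The helper name (AttachComposition) and its exact shape are our own illustration of the DRY idea; the demo application's generic solution may differ in detail.

```csharp
// A generic sketch of the composition update pattern shown above.
// TParent is the composition root (e.g. Floor), TChild the composed
// entity (e.g. Room). The ChangeSet must be passed in because it is a
// protected member of the DomainService.
public static class DomainServiceExtensions
{
    public static void AttachComposition<TParent, TChild>(
        this ObjectContext context,
        ChangeSet changeSet,
        ObjectSet<TParent> parentSet,
        ObjectSet<TChild> childSet,
        TParent parent,
        Expression<Func<TParent, EntityCollection<TChild>>> childrenOf)
        where TParent : EntityObject
        where TChild : EntityObject
    {
        // Step 1: clear the children; the change tracker still knows about them.
        childrenOf.Compile()(parent).Clear();

        // Step 2: attach the parent, handling the "original is null" case.
        if (parent.EntityState == EntityState.Detached)
        {
            TParent original = changeSet.GetOriginal(parent);
            if (original != null)
                parentSet.AttachAsModified(parent, original);
            else
                parentSet.Attach(parent);
        }

        // Step 3: replay every child change recorded in the change-set.
        foreach (TChild change in changeSet.GetAssociatedChanges(parent, childrenOf))
        {
            switch (changeSet.GetChangeOperation(change))
            {
                case ChangeOperation.Insert:
                    if (change.EntityState == EntityState.Added) break;
                    if (change.EntityState != EntityState.Detached)
                        context.ObjectStateManager.ChangeObjectState(change, EntityState.Added);
                    else
                        childSet.AddObject(change);
                    break;
                case ChangeOperation.Update:
                    childSet.AttachAsModified(change, changeSet.GetOriginal(change));
                    break;
                case ChangeOperation.Delete:
                    if (change.EntityState == EntityState.Detached)
                        childSet.AttachAsModified(change);
                    context.DeleteObject(change);
                    break;
            }
        }
    }
}
```

With such a helper, each composition's update method reduces to a single call, for example ObjectContext.AttachComposition(ChangeSet, ObjectContext.Floors, ObjectContext.Rooms, currentFloor, f => f.Rooms).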
Some information about why a custom update method is required can be found at ruminations/archive/2009/11/18/compositionsupport-in-ria-services.aspx.

Solving the many-to-many relationship issue

A limitation of RIA services is that it does not support many-to-many relationships (for more information, refer to). What workarounds are available?

- The easiest one consists of adding a dummy field to our table and regenerating the model. The linked table will then be shown correctly. Once the update has been made, the additional column can be deleted.
- Another option, although a little more complicated, consists of adding the entity to the model yourself, mapping it to the linking table, and then adding the foreign key relationships to the other two tables.
- As a third option, there is a CodePlex project that solves the many-to-many problem in RIA services ().

RIA services and MVVM

RIA services is a great technology, but how does it fit into the MVVM pattern? Can we easily encapsulate it in a Model? How can we isolate RIA services in the model definition in order to allow developers to implement automated unit testing?

Encapsulating RIA services in a model

When RIA services came onto the market, it was promoted as a RAD (Rapid Application Development) technology. It was praised so much for this that many members of the community have the wrong perception that it cannot be used in applications built on an architecture (that is, one based on the MVVM pattern). On the contrary, RIA services can be encapsulated in a model by using one of the following approaches:

- Database first: take advantage of the entities extracted from the database and use them as the transport layer.
- Use POCO objects and T4 templates to generate the code. This means hard work ().
- Use Code First and POCO objects (at the time of printing, the RTM version of RIA services SP2 was not available yet).

Another wrong perception is that RIA services only works with the ADO.NET Entity Framework.
In fact, it can be combined with NHibernate and other technologies, although it means more work on our part ().

Which approach should be taken to implement our model layer? Define the operations in a contract (interface). The contract will only expose the entities we are dealing with (nothing about the context or RIA services particularities). Implement a model which fulfills this contract, so that:

- We will work internally with RIA services and instantiate a context to work with.
- For it to be consumed by one or several ViewModels, we will only deal with the contract previously defined (keeping the RIA services particularities hidden).
- The model which will be created, unlike other models, will have a state. That is to say, it will keep a record of the elements that have been modified or inserted, for instance. It will track the 'island' of objects or, in Unit of Work terms, anything we have brought from the server.

As it has a state, it must be decided how to instantiate it. A singleton for the whole application? A model instance for every ViewModel? In the following section, this issue will be dealt with in depth and we will provide a solution based on the Factory pattern.

In the previous chapter, a contract and a model were defined. In this one, we will see how, apart from some refactoring to use RIA services entities, we will be able to reuse it almost entirely and replace the mock model implementation with the real one based on RIA services.

Two interesting entries about this can be found on the Internet: by Shawn Wildermuth, "RIA services and MVVM" (), and by John Papa, "MVVM: why and how" ().

Context lifetime discussion and model factory

The RIA services context is inspired by the Entity Framework and other O/R mapping tools. Most of them implement the Unit of Work pattern, which is described by Martin Fowler on his site, eaaCatalog/unitOfWork.html.
Martin Fowler, a well-known author and software architect, published a list of patterns for enterprise applications on his site as a short summary of his book Patterns of Enterprise Application Architecture. This is how it works:

- A Unit of Work is started, typically the first time data is retrieved or queried from the database. The entities themselves are stored in the session object and the changes are tracked by the framework. In RIA services, you start it by instantiating a new context object.
- The user manipulates the data: entities are added to or removed from this context, and single properties or even complex relationships are updated and changed. Often, there is also an in-memory caching system to ensure that only one entity exists for each record of the database.
- When the job is done and it's time to commit, the framework decides what to do. It can open a new transaction, handle concurrency, and write all changes to the database. In RIA services, a commit is done by the SaveChanges method.

This pattern is great and provides a lot of advantages. For example, by writing changes to the database at one point in time, the system is able to make optimizations; when a lot of entities are added to the session object, it is more efficient to make a bulk insert than a lot of single insert operations. The change tracking system also allows updating only the changed fields, instead of sending the whole entity to the database.

Furthermore, we are free to define what a Unit of Work is in our context. For a normal web application, this is very easy: typically, you define that one request is a Unit of Work. In a desktop or RIA application, it is more complicated and we have multiple options. For the sake of simplicity, we decided to use only one domain context in our sample application and, to avoid losing changes, the user has to be asked.
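The three Unit of Work steps above map onto RIA services roughly as follows. This is an illustrative sketch only: the BookingDomainContext, Floor, and GetFloorsQuery names are assumed from the sample application's domain, and the generated DomainContext exposes the commit as SubmitChanges (which a model wrapper can surface as SaveChanges).

```csharp
// Sketch of one Unit of Work against a RIA services domain context.
var context = new BookingDomainContext();            // 1. start the Unit of Work

context.Load(context.GetFloorsQuery(), loadOp =>     // fetch entities into the context
{
    Floor floor = loadOp.Entities.First();
    floor.Name = "Ground floor";                     // 2. manipulate tracked entities

    context.SubmitChanges(submitOp =>                // 3. commit everything in one round-trip
    {
        if (submitOp.HasError)
        {
            // Handle concurrency or validation failures here.
        }
    }, null);
}, null);
```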
Whenever we start editing in another context, the changes are lost if they are not saved. But this is not the best approach, especially when we have some background processes (the Domain Context is not thread-safe), and also for scenarios where the user should be able to modify multiple entities in parallel, for example, when they edit their notes or other documents.

Because the Managed Extensibility Framework (MEF) is being used, there is the option to configure our model implementation without using shared instances, which means that each view model gets its own model object. Because this means that the smallest Unit of Work is equal to the lifetime of a view model, a better approach is necessary. Therefore, a model factory that has a method to create a new model must be defined, as in the following code:

public interface IModelFactory
{
    IModel CreateModel();
}

This factory is injected as a shared instance into each view model, and whenever they want to start a Unit of Work, they can use the factory to create a new model object.

In our test scenario, person entities must be edited. The main requirement is that each person can be edited and saved without affecting the other items in our list. Therefore, we implement the following approach:

- Use a main model to load all person objects from the RIA service.
- Whenever a person is changed, create a new model object for this person only, to start a new Unit of Work. Now we must detach this person from its old model and attach it to the new model; but because the changes would be lost if we did so, we instead get a copy of the person from the new model and copy the changes from the old person to the new person.
- Replace the old person in the list with the new person object. When it is changed again, we do not have to do anything because there is already a separate model for this person.
- If the user wants to save the person, the SaveChanges method of our model can be used to finish the Unit of Work.
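The steps above can be sketched as follows. This is an illustrative fragment, not the book's exact code: the Person type, the GetPersonById contract method, and the view model shape are assumptions of ours; IModel and IModelFactory are the contracts discussed above.

```csharp
// Illustrative sketch of the factory-per-Unit-of-Work idea with MEF.
[Export(typeof(IModelFactory))]
[PartCreationPolicy(CreationPolicy.Shared)]   // one shared factory...
public class ModelFactory : IModelFactory
{
    public IModel CreateModel()
    {
        // ...that hands out a fresh model (and therefore a fresh domain
        // context, i.e. a fresh Unit of Work) on every call.
        return new RiaModel();
    }
}

public class PersonListViewModel
{
    private readonly IModelFactory factory;
    private readonly Dictionary<Person, IModel> editSessions = new Dictionary<Person, IModel>();

    [ImportingConstructor]
    public PersonListViewModel(IModelFactory factory)
    {
        this.factory = factory;
    }

    public Person BeginEdit(Person person)
    {
        // Start a new Unit of Work for this person only: fetch a private
        // copy from a fresh model and remember which model owns it.
        IModel model = factory.CreateModel();
        Person copy = model.GetPersonById(person.Id);   // assumed contract method
        editSessions[copy] = model;
        return copy;   // caller replaces the old object in the list
    }

    public void Save(Person person)
    {
        // Finish this person's Unit of Work without touching the others.
        editSessions[person].SaveChanges();
    }
}
```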
This domain context can still be reused in case the person is edited again. We provide a full example for a very simple scenario, which can be extended following the same approach for more advanced applications.

Summary

In this article, we took a look at validation, advanced topics, RIA services, and MVVM in Silverlight 5.

Further resources related to this subject:

- Animation in Silverlight 4 [article]
- Integrating Silverlight 4 with SharePoint 2010 [article]
- Creating a WCF Service, Business Object and Data Submission with Silverlight 4 [article]

About the Author: Braulio Díez

Braulio Díez is a freelancer specializing in Microsoft technologies who has more than 15 years of experience working on international projects. He is also a Silverlight MVP, consultant, technical writer, open source developer, trainer, and speaker.
http://www.packtpub.com/article/silverlight-5-lob-development-validation-advanced-topics-mvvm
CC-MAIN-2014-15
en
refinedweb
substream

Volatile namespaces for Primus.

npm install substream

SubStream

SubStream is a simple stream multiplexer for Primus. It allows you to create simple message channels which only receive the information you send to them. These channels are, in fact, streams, which is why this module is called SubStream: it adds small streams on top of the main stream and intercepts them.

Installation

npm install --save substream

The module can only be used in conjunction with Primus, so make sure that your application is using it as its real-time backend.

Getting started

In all the code examples we assume that the following code is present:

'use strict';

var Primus = require('primus')
  , http = require('http');

var server = http.createServer()
  , primus = new Primus(server);

//
// Custom code here, just above the listen call.
//

server.listen(8080);

This is the most minimal bootstrapping code required to create a Primus powered server. Once you've set up the server you need to add SubStream as a plugin to Primus:

//
// The `primus.use` method adds the plugin to primus. It requires a name in
// order to easily retrieve it again. The name needs to be unique, but for
// the sake of clarity we're using substream as the name.
//
primus.use('substream', require('substream'));

After you've added the plugin, you might want to re-compile the client library that Primus serves, as it automatically adds the client-side plugin to the framework as well as the custom substream.js library needed to create the actual namespaces. To save the client just run:

primus.save(__dirname + '/primus.js');

But this is only needed if you serve the file manually and not through the automatically generated /primus/primus.js path.

Now that we've set everything up correctly we can start creating some substreams.
The client

To create or access a substream in the Primus client, start off by making a connection:

var primus = new Primus('http://<your url here:whateverportnumber>');
var foo = primus.substream('foo');

The substream method automatically creates a namespaced stream if you didn't create it before, or returns your previously created stream when you call it again. So now that we have a foo stream we can just write to it:

foo.write('data');

Awesome, all works as intended. But this was just one single substream; we can add more:

var bar = primus.substream('bar')
  , baz = primus.substream('baz');

You can create an infinite amount of substreams on top of one single base stream. The data is not leaked between streams; it's all "sandboxed". As the returned substreams are streams/eventemitters, we can just listen to the data, end, or close events. But a substream also proxies all the other events that Primus emits, such as the reconnect and offline events (the full list is in the Primus README.md). So for receiving and writing data you can just do:

bar.on('data', function () { });
bar.write('hello from bar');

foo.on('data', function (data) {
  console.log('received data', data);
}).on('end', function () {
  console.log('connection has closed or substream was closed');
});

The server

The server portion of this module isn't that different from the client portion. It follows the same stream/eventemitter API:

var fs = require('fs');

primus.on('connection', function (spark) {
  var foo = spark.substream('foo')
    , bar = spark.substream('bar')
    , baz = spark.substream('baz');

  foo.on('data', function (data) {
    console.log('foo received:', data);
  });

  //
  // You can even pipe data.
  //
  fs.createReadStream(__dirname + '/example.js').pipe(bar, { end: false });

  //
  // To stop receiving data, simply end the substream:
  //
  baz.end();
});

License

MIT
https://www.npmjs.org/package/substream
Taskbar Extensions

- Unified Launching and Switching
- Jump Lists
- Destinations
- Tasks
- Customizing Jump Lists
- Thumbnail Toolbars
- Icon Overlays
- Progress Bars
- Deskbands
- Notification Area
- Thumbnails
- Related topics

Unified Launching and Switching

As of the Windows 7 taskbar, Quick Launch is no longer a separate toolbar. The launcher shortcuts that Quick Launch typically contained are now pinned to the taskbar itself, mingled with buttons for currently running applications. When a user starts an application from a pinned launcher shortcut, the icon transforms into the application's taskbar button for as long as the application is running. When the user closes the application, the button reverts to the icon. However, both the launcher shortcut and the button for the running application are just different forms of the Windows 7 taskbar button.

While the application is running, its taskbar button becomes the single place to access all of the following features, each discussed in detail below.

- Tasks: common application commands, present even when the application is not running.
- Destinations: recently and frequently accessed files specific to the application.
- Thumbnails: window switching, including switch targets for individual tabs and documents.
- Thumbnail Toolbars: basic application control from the thumbnail itself.
- Progress Bars and Icon Overlays: status notifications.

The taskbar button can represent a launcher, a single application window, or a group. An identifier known as an Application User Model ID (AppUserModelID) is assigned to each group. An AppUserModelID can be specified to override standard taskbar grouping, which allows windows to become members of the same group when they might not otherwise be seen as such. Each member of a group is given a separate preview in the thumbnail flyout that is shown when the mouse hovers over the group's taskbar button. Note that grouping itself remains optional.
As of Windows 7, taskbar buttons can be rearranged by the user through drag-and-drop operations.

Note: The Quick Launch folder (FOLDERID_QuickLaunch) is still available for backward compatibility although there is no longer a Quick Launch UI. However, new applications should not ask to add an icon to Quick Launch during installation.

For more information, see Application User Model IDs (AppUserModelIDs).

Jump Lists

A user typically launches a program with the intention of accessing a document or performing tasks within the program. The user of a game program might want to get to a saved game or launch as a specific character rather than restart a game from the beginning. To get users more efficiently to their final goal, a list of destinations and common tasks associated with an application is attached to that application's taskbar button (as well as to the equivalent Start menu entry). This is the application's Jump List. The Jump List is available whether the taskbar button is in a launcher state (the application isn't running) or whether it represents one or more windows.

Right-clicking the taskbar button shows the application's Jump List, as shown in the following illustration.

By default, a standard Jump List contains two categories: recent items and pinned items, although because only categories with content are shown in the UI, neither of these categories is shown on first launch. Always present are an application launch icon (to launch more instances of the application), an option to pin or unpin the application from the taskbar, and a Close command for any open windows.

Destinations

The Recent and Frequent categories are considered to contain destinations. A destination, usually a file, document, or URL, is something that can be edited, browsed, viewed, and so on. Think of a destination as a thing rather than an action. Typically, a destination is an item in the Shell namespace, represented by an IShellItem or IShellLink.
These portions of the destination list are analogous to the Start menu's recently used documents list (no longer shown by default) and frequently used application list, but they are specific to an application and therefore more accurate and useful to the user. The results used in the destination list are calculated through calls to SHAddToRecentDocs. Note that when the user opens a file from Windows Explorer or uses the common file dialog to open, save, or create a file, SHAddToRecentDocs is called for you automatically, which results in many applications getting their recent items shown in the destination list without any action on their part. Launching a destination is much like launching an item using the Open With command. The application launches with that destination loaded and ready to use. Items in the destination list can also be dragged from the list to a drop destination such as an email message. By having these items centralized in a destination list, it gets users where they want to go that much faster, which is the goal. As items appear in a destination list's Recent category (or the Frequent category or a custom category as discussed in a later section), a user might want to ensure that the item is always in the list for quick access. To accomplish this, he or she can pin that item to the list, which adds the item to the Pinned category. When a user is actively working with a destination, he or she wants it easily at hand and so would pin it to the application's destination list. After the user's work there is done, he or she simply unpins the item. This user control keeps the list uncluttered and relevant. A destination list can be regarded as an application-specific version of the Start menu. A destination list is not a shortcut menu. Each item in a destination list can be right-clicked for its own shortcut menu. 
APIs

- IApplicationDestinations::RemoveDestination
- IApplicationDestinations::RemoveAllDestinations
- IApplicationDocumentLists::GetList
- SHAddToRecentDocs

Tasks

Another built-in portion of a Jump List is the Tasks category. While a destination is a thing, a task is an action, and in this case it is an application-specific action. Put another way, a destination is a noun and a task is a verb. Typically, tasks are IShellLink items with command-line arguments that indicate particular functionality that can be triggered by an application. Again, the idea is to centralize as much information related to an application as is practical.

APIs

Customizing Jump Lists

An application can define its own categories and add them in addition to or in place of the standard Recent and Frequent categories in a Jump List. The application can control its own destinations in those custom categories based on the application's architecture and intended use. The following screen shot shows a custom Jump List with a History category.

If an application decides to provide a custom category, that application assumes responsibility for populating it. The category contents should still be user-specific and based on user history, actions, or both, but through a custom category an application can determine what it wants to track and what it wants to ignore, perhaps based on an application option. For example, an audio program might elect to include only recently played albums and ignore recently played individual tracks. If a user has removed an item from the list, which is always a user option, the application must honor that. The application must also ensure that items in the list are valid or that they fail gracefully if they have been deleted. Individual items or the entire contents of the list can be programmatically removed.

The maximum number of items in a destination list is determined by the system based on various factors such as display resolution and font size.
If there isn't enough space for all items in all categories, they are truncated from the bottom up.

APIs

Thumbnail Toolbars

To provide access to a particular window's key commands without making the user restore or activate the application's window, an active toolbar control can be embedded in that window's thumbnail preview. For example, Windows Media Player might offer standard media transport controls such as play, pause, mute, and stop. The UI displays this toolbar directly below the thumbnail, as shown in the following illustration; it does not cover any part of it.

Because there is limited room to display thumbnails and a variable number of thumbnails to display, applications are not guaranteed a given toolbar size. If space is restricted, buttons in the toolbar are truncated from right to left. Therefore, when you design your toolbar, you should prioritize the commands associated with your buttons and ensure that the most important come first and are least likely to be dropped because of space issues.

Note: When an application displays a window, its taskbar button is created by the system. When the button is in place, the taskbar sends a TaskbarButtonCreated message to the window. Its value is computed by calling RegisterWindowMessage(L"TaskbarButtonCreated"). That message must be received by your application before it calls any ITaskbarList3 method.

API

- ITaskbarList3::ThumbBarAddButtons
- ITaskbarList3::ThumbBarSetImageList
- ITaskbarList3::ThumbBarUpdateButtons
- THUMBBUTTON

Icon Overlays

To display an overlay icon, the taskbar must be in the default large icon mode, as shown in the following screen shot.

Because a single overlay is overlaid on the taskbar button and not on the individual window thumbnails, this is a per-group feature rather than per-window. Requests for overlay icons can be received from individual windows in a taskbar group, but they do not queue. The last overlay received is the overlay shown.
APIs

Progress Bars

A taskbar button can be used to display a progress bar. This enables a window to provide progress information to the user without that user having to switch to the window itself. The user can stay productive in another application while seeing at a glance the progress of one or more operations occurring in other windows. It is intended that a progress bar in a taskbar button reflects a more detailed progress indicator in the window itself. This feature can be used to track file copies, downloads, installations, media burning, or any operation that's going to take a period of time. This feature is not intended for use with normally peripheral actions such as the loading of a webpage or the printing of a document. That type of progress should continue to be shown in a window's status bar.

APIs

Notification Area

When a notification balloon is displayed, the icon becomes temporarily visible, but even then a user can choose to silence it. An icon overlay on a taskbar button therefore becomes an attractive choice when you want your application to communicate that information to your users.

Thumbnails

Note: As in Windows Vista, Aero must be active to view thumbnails.

API

- ITaskbarList3::RegisterTab
- ITaskbarList3::SetTabActive
- ITaskbarList3::SetTabOrder
- ITaskbarList3::UnregisterTab
- ITaskbarList4::SetTabProperties

Thumbnail representations for windows are normally automatic, but in cases where the result isn't optimal, the thumbnail can be explicitly specified. By default, only top-level windows have a thumbnail automatically generated for them, and the thumbnails for child windows appear as a generic representation. This can result in a less than ideal (and even confusing) experience for the end user. A specific switch target thumbnail for each child window, for instance, provides a much better user experience.
API

- DwmSetWindowAttribute
- DwmSetIconicThumbnail
- DwmSetIconicLivePreviewBitmap
- DwmInvalidateIconicBitmaps
- WM_DWMSENDICONICTHUMBNAIL
- WM_DWMSENDICONICLIVEPREVIEWBITMAP

You can select a particular area of the window to use as the thumbnail. This can be useful when an application knows that its documents or tabs will appear similar when viewed at thumbnail size. The application can then choose to show just the part of its client area that the user can use to distinguish between thumbnails. However, hovering over any thumbnail brings up a view of the full window behind it so the user can quickly glance through them as well. If there are more thumbnails than can be displayed, the preview reverts to the legacy thumbnail or a standard icon.

API

Adding Pin to Taskbar to an item's shortcut menu, which is normally required only for file types that include the IsShortCut entry, is done by registering the appropriate context menu handler. This also applies to Pin to Start Menu. See Registering Shell Extension Handlers for more information.

Related topics
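The progress bar feature described above is exposed natively through ITaskbarList3::SetProgressState and ITaskbarList3::SetProgressValue. As a sketch in managed code, the Windows API Code Pack wrapper (an assumption here; the article itself only covers the native interfaces) can drive the same APIs; the copy loop stands in for any long-running operation:

```csharp
// Sketch: mirroring a long-running operation on the taskbar button via
// the Windows API Code Pack wrapper around ITaskbarList3.
using System.IO;
using Microsoft.WindowsAPICodePack.Taskbar;

public static class CopyWithProgress
{
    public static void CopyFiles(string[] sources, string targetDir)
    {
        TaskbarManager taskbar = TaskbarManager.Instance;
        taskbar.SetProgressState(TaskbarProgressBarState.Normal);

        for (int i = 0; i < sources.Length; i++)
        {
            File.Copy(sources[i],
                      Path.Combine(targetDir, Path.GetFileName(sources[i])),
                      true);

            // Reflect the detailed in-window progress on the taskbar button.
            taskbar.SetProgressValue(i + 1, sources.Length);
        }

        // Clear the indicator when the operation finishes.
        taskbar.SetProgressState(TaskbarProgressBarState.NoProgress);
    }
}
```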
http://msdn.microsoft.com/en-us/library/dd378460(v=vs.85).aspx
* Installation Changes in Emacs 24.4

** Emacs can be compiled with POSIX ACL support. This happens by default if a suitable support library is found at build time, like libacl on GNU/Linux. To prevent this, use the configure option `--without-acl'.

* Startup Changes in Emacs 24.4

* Changes in Emacs 24.4

** New option `scroll-bar-adjust-thumb-portion'.

* Editing Changes in Emacs 24.4

** New commands `toggle-frame-fullscreen' and `toggle-frame-maximized', bound to <f11> and M-<f10>, respectively.

* Changes in Specialized Modes and Packages in Emacs 24.4

** cl-lib

*** New macro cl-tagbody.

+++
*** letf is now just an alias for cl-letf.

** ERC

*** New option `erc-accidental-paste-threshold-seconds'. If set to a number, this can be used to avoid accidentally pasting large amounts of data into the ERC input.

** Icomplete is a bit more like IDO.

*** New key bindings to navigate through and select the completions.

*** The icomplete-separator is customizable, and its default has changed.

*** Removed icomplete-show-key-bindings.

** Image mode

** Isearch

*** `C-x 8 RET' in Isearch mode reads a character by its Unicode name and adds it to the search string.

** MH-E has been updated to MH-E version 8.

** The new command `delete-duplicate-lines' has two types of operation: when its arg ADJACENT is non-nil (when called interactively with C-u C-u) it works like the utility `uniq'. Otherwise by default it deletes duplicate lines everywhere in the region without regard to adjacency.

** Tramp

+++
*** New connection method "adb", which allows to access Android devices by the Android Debug Bridge. The variable `tramp-adb-sdk-dir' must be set to the Android SDK installation directory.

+++
*** Handlers for `file-acl' and `set-file-acl' for remote machines which support POSIX ACLs.

** Woman

*** The commands `woman-default-faces' and `woman-monochrome-faces' are obsolete. Customize the `woman-*' faces instead.

** Obsolete packages:

*** longlines.el is obsolete; use visual-line-mode instead.

*** terminal.el is obsolete; use term.el instead.
* New Modes and Packages in Emacs 24.4

** New nadvice.el package.

* Incompatible Lisp Changes in Emacs 24.4

* Lisp changes in Emacs 24.4

** Support for filesystem notifications. Emacs now supports notifications of filesystem changes, such as creation, modification, and deletion of files. This requires the 'inotify' API on GNU/Linux systems. On MS-Windows systems, this is supported for Windows XP and newer versions.

** Face changes

* Changes in Emacs 24.4 on non-free operating systems

+++
** The "generate a backtrace on fatal error" feature now works on MS Windows. The backtrace is written to the 'emacs_backtrace.txt' file in the directory where Emacs was running.

* Installation Changes in Emacs 24.3

** The default X toolkit is now Gtk+ version 3, used if you don't pass `--with-x-toolkit' to configure.

** The configure option `--enable-gcc-warnings' enables compile-time checks that warn about possibly-questionable C code. On a recent GNU system there should be no warnings; on older and on non-GNU systems the generated warnings may be useful.

** The `rcs-checkin' and `vcdiff' scripts have been removed (from the bin and libexec directories, respectively). The former is no longer relevant; the latter is replaced by Lisp (in vc-sccs.el).

*** `C-h f' now reports previously-autoloaded functions as "autoloaded", even after their associated libraries have been loaded (and the autoloads have been redefined as functions).

*** Setting `imagemagick-types-inhibit' to t now disables the use of ImageMagick to view images. (You must call `imagemagick-register-types' afterwards if you do not use customize to change this.)

*** The new variable `imagemagick-enabled-types' also affects which ImageMagick types are treated as images. The function `imagemagick-filter-types' returns the list of types that will be treated as images.

** Internationalization

*** New language environment: Persian.

*** New input method `vietnamese-vni'.

** Nextstep (GNUstep / Mac OS X) port

*** Support for fullscreen and the frame parameter fullscreen.
*** CL's main entry is now (require 'cl-lib). `cl-lib' is like the old
`cl' except that it uses the namespace cleanly; i.e., all its
definitions have the "cl-" prefix (and internal definitions use the
"cl--" prefix). If `cl' provided a feature under the name `foo', then
`cl-lib' provides it under the name `cl-foo' instead, with a few
exceptions.

** Diff mode

*** Changes are now highlighted using the same color scheme as in
modern VCSes. Deletions are displayed in red (new faces
`diff-refine-removed' and `smerge-refined-removed', and a new
definition of `diff-removed'), insertions in green (new faces
`diff-refine-added' and `smerge-refined-added', and a new definition
of `diff-added').

*** The variable `diff-use-changed-face' defines whether to use the
face `diff-changed', or `diff-removed' and `diff-added', to highlight
changes in context diffs.

** Dired

*** `dired-do-touch' yanks the attributes of the file at point.

*** When the region is active, `m' (`dired-mark'), `u' (`dired-unmark')
and `DEL' (`dired-unmark-backward') act on all files in the region.

** Compile has a new option `compilation-always-kill'.

** Customize

*** `custom-reset-button-menu' now defaults to t.

*** Non-option variables are never matched in `customize-apropos' and
`customize-apropos-options' (i.e., the prefix argument does nothing
for these commands now).

** Tramp

*** Remote processes are now also supported on remote MS-Windows hosts.

** notifications.el now supports version 1.2 of the Notifications API.
The function `notifications-get-capabilities' returns the supported
server properties.

** Flymake uses fringe bitmaps to indicate errors and warnings.
See `flymake-fringe-indicator-position', `flymake-error-bitmap' and
`flymake-warning-bitmap'.

** The FFAP option `ffap-url-unwrap-remote' can now be a list of
strings, specifying URL types that should be converted to remote file
names at the FFAP prompt. The default is now '("ftp").

** New Ibuffer `derived-mode' filter, bound to `/ M'.
The old binding for `/ M' (filter by used-mode) is now bound to `/ m'.

** New option `mouse-avoidance-banish-position' specifies where the
`banish' mouse avoidance setting moves the mouse.

** In Perl mode, new option `perl-indent-parens-as-block' causes
non-block closing brackets to be aligned with the line of the opening
bracket.

** In Proced mode, new command `proced-renice' renices marked
processes.

** New option `async-shell-command-buffer' specifies the buffer to use
for a new asynchronous `shell-command' when the default output buffer
`*Async Shell Command*' is already in use.

** `S' in Tabulated List mode (and modes that derive from it) sorts
the column at point, or the Nth column if a numeric prefix argument
is given.

** `which-func-modes' now defaults to t, so Which Function mode, when
enabled, applies to all applicable major modes.

** `winner-mode-hook' now runs when the mode is disabled, as well as
when it is enabled.

** Follow mode no longer works by using advice.
The option `follow-intercept-processes' has been removed.

** `javascript-generic-mode' is now an obsolete alias for `js-mode'.

** `random' by default now returns a different random sequence in
each Emacs run.

** If the replacement text contains a substring "\?", that substring
is inserted literally even if the LITERAL arg is non-nil, instead of
causing an error to be signaled.

** `select-window' now always makes the window's buffer current.
It does so even if the window was selected before.

** The function `x-select-font' can return a font spec, instead of a
font name as a string. Whether it returns a font spec or a font name
depends on the graphical library.

** `face-spec-set' no longer sets frame-specific attributes when the
third argument is a frame (that usage was obsolete since Emacs 22.2).

** `set-buffer-multibyte' now signals an error in narrowed buffers.

** CL-style generalized variables are now in core Elisp.
`setf' is autoloaded; `push' and `pop' accept generalized variables.

** `defun' also accepts a (declare DECLS) form, like `defmacro'.
The interpretation of the DECLS is determined by
`defun-declarations-alist'.

** New macros `setq-local' and `defvar-local'.

** Face underlining can now use a wave.
https://emba.gnu.org/emacs/emacs/-/blame/21cd50b803cb63b66f81db0a18dbaac6d7269348/etc/NEWS
How to import folders?

Hey everyone, I'm new here and I just started exploring Pythonista yesterday. I looked for some more examples to play around with and I found tons of examples here - thanks Taha Dhiaeddine Amdouni for sharing. I downloaded all the stuff and now I wonder how to get the folders into Pythonista 3. I figured out how to import files, but I'm not able to get a folder import to work. Does anyone have an idea or a hint for me? Thanks a lot, Rownn. I work on iPad.

@rownn, would you be ok placing them in the Pythonista iCloud folder? E.g. using the Files app. Then you can open and run things from Pythonista.

Hi mikael, thx for your reply. You mean copying the folders into a Pythonista folder directly? That would be great, but I cannot find any Pythonista folder, neither on iCloud Drive nor On My iPad. I guess there is a lot of stuff hidden somehow - honestly, I'm not really familiar with iDevices.

@rownn, open the Pythonista hamburger menu. At the top there are iPad/iPhone and then iCloud. Go to iCloud. If there is nothing there, create a folder (e.g. Tdamdouni). Then go to the Files app and iCloud. You should see a Pythonista 3 directory in the root.

Really? It is that easy? Many thx!!!

Or run a script from the sharing extension's default view:

    from PIL import Image  # imported in the original snippet, though unused here
    import appex
    import ui
    import shutil

    def main():
        if not appex.is_running_extension():
            print('This script is intended to be run from the sharing extension.')
            return
        path = appex.get_file_path()
        if path and len(path) > 5:
            shutil.move(path, '/private/var/mobile/Containers/Shared/AppGroup/29427C70-DDD5-4356-A6C7-0FBAD22F6230/Pythonista3/Documents/')

    if __name__ == '__main__':
        main()
https://forum.omz-software.com/topic/6099/how-to-import-folders
Parsing configuration from environment variables as typing.NamedTuple

Project description

This project allows you to describe the configuration of your application as a typing.NamedTuple, like so:

    import typing

    class MyAppConfig(typing.NamedTuple):
        some_string: str
        some_int: int
        some_float: float
        some_bool: bool = True

Then, gather the configuration from the environment variables:

    import env_var_config

    config = env_var_config.gather_config_for_class(MyAppConfig)

The code will look for variables that are named like the fields of the tuple, but uppercase. As you might have noticed, you can set default values on the fields. If fields with defaults are not found in the environment, they're set to their default value (which is quite unsurprising). If a value without a default is missing, though, an error will be raised. That is, unless you set the allow_empty option to True in the call to gather_config_for_class. Then all missing values will be initialized with the default value for their type, such as an empty string for str, 0 for int, etc.

Installation

    pip install env_var_config
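The package's internals aren't shown on this page, but the behavior it describes can be sketched with a small, illustrative reimplementation. Note that `gather_config` and `_CASTS` below are names invented for this sketch, not the package's actual API:

```python
import os
import typing

class MyAppConfig(typing.NamedTuple):
    some_string: str
    some_int: int
    some_float: float
    some_bool: bool = True

# How to turn the raw environment string into each supported field type
_CASTS = {
    str: str,
    int: int,
    float: float,
    bool: lambda s: s.lower() in ("1", "true", "yes"),
}

def gather_config(cls, allow_empty=False):
    """Build a NamedTuple config from UPPERCASE environment variables."""
    values = {}
    for field, ftype in typing.get_type_hints(cls).items():
        raw = os.environ.get(field.upper())
        if raw is not None:
            values[field] = _CASTS[ftype](raw)
        elif field in cls._field_defaults:
            pass  # absent from the environment: fall back to the tuple's default
        elif allow_empty:
            values[field] = ftype()  # zero value: '', 0, 0.0, False
        else:
            raise KeyError(f"missing environment variable {field.upper()}")
    return cls(**values)
```

This mirrors the documented behavior: defaults survive, missing required values raise, and `allow_empty` falls back to each type's zero value.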
https://pypi.org/project/env-var-config/0.1.0/
Transcript Calin: Hello, everyone, and thank you very much for having me here, first of all, because I've been upgraded to this massive room because of the interest that has been registered so I wasn't expecting this so I'm both humbled and grateful for all of you to come here. Imagine that you are the systems engineer at the fintech and you have worked on the infrastructure of a payment services provider product for more than a year and within half an hour into a planned penetration test, your whole production cluster gets compromised. We are talking about firewall rules being bypassed and we're talking about Google Cloud services that you shouldn't have access to also being bypassed. In a day and age in which no security control is completely impenetrable, I am here today to talk to you about our hurdles as a financial-regulated institution and some of the things we've learned and what you can take away as well. So who am I? I am Ana Calin and I am the systems engineer in that little story I was telling you about and I work for Paybase. The things I'm going to cover today. First of all, I'm going to give you a tiny bit of context as who is Paybase and what we do, then we'll look at the things we've achieved so far and some of our tech stacks, so not the complete tech stack but some of it, then I'll give you proper details about this compromise that I keep on talking about. Some of the things you can take away from security and resilience point of view and then challenges we've encountered on our road to compliance, specifically PCI DSS compliance and challenges we've managed to avoid specifically because of our cloud native setup. Paybase Paybase is an API-driven payments services provider. Our business model is B2B, so business to business, and our customers are marketplaces, gig/sharing economies, cryptocurrencies or, in more general terms, any businesses that connect a buyer with the seller. 
Our offering is very important because platform businesses, such as the ones I just mentioned, face a very tough challenge. Not only do they have to solve very complex technical issues, they also now, with the updated regulation, have to either become a regulated payments institution themselves or they have to integrate with a third party. The current solutions, well, they are very costly and they are very inflexible, so we came up with a product that is less expensive, very flexible, and also less complex. In other words, we just make regulation easier for our customers. What We Have Achieved so Far We are under two years old, one and a half, about the same thing. We have built our own processing platform from scratch, completely cloud native and with more than 90% open source software and we're currently onboarding our first seven clients and we are in no shortage of demand. We are FCA authorized, we have an electronic money institution license and in 2017, we have received an Innovate UK grant worth 700,000 pounds in order to democratize eMoney infrastructure for startups and SMEs. We are also PCI DSS Level 1 compliant, this is the highest level of compliance and in my opinion, having worked directly with it, it's a huge achievement given that the current state of this standard is one that most very large institutions that need to comply choose to actually pay annual fines rather than become compliant. This makes sense financially to them because of the technical complexity of their systems, the technical debt and many of them because of the hundreds of legacy applications that they are still running. Some fun facts about PCI DSS. Between 2012 and 2017, there has been a 167% increase in companies that have become compliant and yet in 2017, more than 80% of the businesses that should be compliant remain non-compliant. 
Our Tech Stack We follow a microservices-based infrastructure but our actual application is a distributed monolith that is separated by entities and the reason why we do this is because that way, we get flexibility in how we separate concerns and we scale. We have built everything on top of Google Cloud Platform and the reason why we chose Google Cloud is because at the time, we needed a Kubernetes-managed service and Google Cloud had the best offering and if I am to give my own personal opinion, I still think that, from a Kubernetes-managed-service point of view, they still have the best offering in the market today. Our infrastructure is built with Terraform as infrastructure as code and our applications are deployed with Kubernetes, wrapped in Helm. From an observability stack, which is really important, just in general to have a good view of what's happening in your cluster, we use EFK, so Elasticsearch, Fluentd, and Kibana for log collection and Prometheus and Grafana for metric aggregation. If you are thinking, "Why do we need the different observability stacks?" first of all, although they are complementary to each other, they give us different things. EFK gives us events about errors and what happens in the application and Prometheus and Grafana give us metrics, so for example, the error rate of a service, the CPU usage, whether a service is up or down. From an actual application point of view, we have more than 50 microservices. Our application is written mainly in JavaScript but we use Protobuf as an interface definition language so that in the event we decide that we want to rewrite parts of our application into a more pragmatic language, then Protobuf, gRPC in this case, would give us that, so we're not locked into our choice of programming language.
The communication between services is done synchronously via gRPC with HTTP/2 as a transport layer and we also use NSQ as a distributed messaging broker which helps us asynchronously process event messaging. Details about the Compromise Now that you know a bit of our tech stack I'm going to tell you a bit about our compromise, what happened. First of all, a bit of context. This happened in the scope of a planned internal infrastructure penetration test, this happened in our production cluster, but at the time, the production cluster wasn't actively used by customers. The pentester, let's call him Mark because, well, that's his actual name, had access to a privileged container within the cluster, so it wasn't a 100% complete compromise from the outside. There were a few weak links. First of all, GKE, the Kubernetes managed service from Google Cloud, comes insecure by default - or it did at the end of last year. What do I mean by insecure by default? If you are to provision a cluster, you would get the compute engine scope with the cluster, you would get the compute engine default service account and you would get legacy metadata endpoints enabled. The compute engine scope refers to an OAuth scope, so an access scope that allows different Google Cloud services to talk to each other. In this case, GKE speaking to GCE and it was a read/write access. I'm going to be naughty and I'm going to take the blame and put it on someone else, in this case, Terraform. The first bit on the screen, it's a part of the documentation taken yesterday from how to provision a container cluster with Terraform. As you can read, it says, "The following scopes are necessary to ensure the current functionality of the cluster." and the very first one is compute/read/write, which is this engine scope that I'm talking about.
I can tell you that this is a complete lie, ever since we did that penetration test, we realized that you don't need this, we took it out and our clusters are functioning just right. Compute engine service account. When you provision a cluster in GKE, if you don't specify a specific service account, it uses the default service account associated with your project. You might think that this is not necessarily a problem, well, it's a big problem because this service account is associated with the editor role in Google Cloud Platform, which has a lot of permissions. I'm going to talk to you about the legacy metadata endpoints in a second. Metadata Endpoints My next point was about querying the metadata endpoints, so within Google Cloud you can query the metadata endpoints of a node and if this is enabled, this gives you details about the Kubernetes API via the kubelet. What that means is that if you run this particular command or query the kubelet directly you will be able to get access to certain secrets that relate to the Kubernetes API from within any pod or node within a GKE cluster and from there on, well, the world is your oyster. What can you do to disable this? First of all, a very quick disclaimer. The latest version of GKE, so 1.12, which is the very current one, comes with this metadata endpoint disabled, but if you're not on that version, you can disable it either by building your cluster with the gcloud CLI and specifying the `--metadata disable-legacy-endpoints=true` flag or you can do it in Terraform by adding the following workload metadata config block into your node config block. The result, it should be something like this and I've added some resources for you guys if you have GCP and you want to check this particular issue at the very end. The weak link, number two, Tiller is the server side component of Helm. If you read the documentation of Helm, it says that you should never provision Helm or Tiller in a production cluster with all of the default options.
We did this and we said we were going to change it later on, but when it came to the penetration test, we decided to live life on the edge and leave it on and see how far a penetration tester can go. The default options mean that Tiller comes with mTLS disabled, it performs no authentication by default and this is all very bad because Tiller has permissions such that it can create any Kubernetes API resource in the cluster, so it can do anything or remove anything. How would you go about getting access to Tiller from a non-privileged pod? I have taken a random cluster that has Tiller installed and I have deployed fluentd in a pod in a different namespace than the kube-system namespace where Tiller lives. It also has the Helm client installed, so you can see that the very first command is Helm version, and this gives me nothing, but if I telnet directly to the address of Tiller and the port then all of a sudden I'm connected to Tiller and, well, I can do everything I want from here. What can you do in terms of mitigations? Ideally, you would enable mTLS or run Tillerless Tiller, but if you are unable to do that, you should at the very minimum bind Tiller to localhost so that it only listens for requests that come from the pod IP that it lives in and the result, it should look like this bit here: "unable to connect to remote host". Security and Resilience Let's have a look at some of the security and resilience notes that we learned and we picked up along the way. A secure Kubernetes cluster should, at the minimum, use a dedicated service account with minimal permissions. Here, I'm not talking about service account in the context of Kubernetes but service account in the context of GKE. If I am to translate this to an Azure world that's called the service principal. A secure Kubernetes cluster should also use minimal scopes, so least privilege principle.
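Pulling those first two points together with the metadata fix mentioned earlier, a hedged Terraform sketch might look like the following. The resource names, region, and exact scope list are illustrative, and the block layout varies between Google provider versions, so check the provider documentation for yours:

```hcl
# Dedicated, minimally-privileged service account for the nodes,
# instead of the project-wide compute engine default (editor role)
resource "google_service_account" "gke_nodes" {
  account_id   = "gke-nodes"
  display_name = "GKE node service account"
}

resource "google_container_cluster" "primary" {
  name     = "prod-cluster"
  location = "europe-west2"

  node_config {
    service_account = google_service_account.gke_nodes.email

    # Minimal scopes: note there is no compute read/write scope here
    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    # Disable the legacy metadata endpoints (the default from GKE 1.12)
    workload_metadata_config {
      node_metadata = "SECURE"
    }
  }
}
```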
It should use network policies, and network policies are useful for restricting access towards certain pods that run in a more privileged mode. Or it should use Istio with authorization rules enabled and the authorization rules, if they are set up properly, achieve the same thing as network policies. It should also provide some pod security policies, and these are useful for not allowing any pods that don't meet certain criteria to be built within your cluster, so in the event an attacker gets into your cluster, they can only deploy pods with certain criteria. You should use scanned images because using untrusted images is counterintuitive and you don't want to install vulnerabilities within your cluster without knowing. You should always have RBAC enabled, a note on RBAC. GKE, AKS and EKS, they all come with RBAC enabled today but AKS only had RBAC enabled recently so you should look into this if you don't have RBAC. A resilient Kubernetes cluster should - here we're talking about resilience rather than security - first of all, be architected with failure and elasticity in mind by default. In other words, no matter what you're building, always assume that it's going to fail or someone is going to break it. You should have a stable observability stack so you get visibility into what's happening and you should be testing your clusters with a Chaos Engineering tool. I know that Chaos Engineering can be very intimidating for most of us, I personally was intimidated as well by it, but after I played with it, and it's just a matter of running that particular command on the screen to actually install it in a cluster, it's really easy. It's a great way of testing how resilient your applications are, especially if you are running JVM-based beasts such as Elasticsearch. This particular command says install chaoskube which is a flavor of Chaos Engineering and make it run every 15 minutes, so every 15 minutes, a random pod in your cluster will be killed.
You don't necessarily have to install it straight away into your production cluster, start with other clusters, see what the behavior is and move on from there. Challenges The last part of my talk is mostly around challenges in terms of compliance, please come and talk to me after if you've had similar challenges or if you are looking at moving in the same direction. Challenge number one, as a PCI compliant payment services provider with many types of databases, I want to be able to query data sets in a secure and database agnostic manner so that engineers and customers can use it easily and so that we are not prone to injections. This particular challenge has two different points to it. Especially when you're using more than one type of database, you want to make it easy for your customers, especially if you are API driven, to be able to query your data sets, so this is about customer experience. The second part of it is about security, so PCI DSS requirement number 6.5.1 requires you to make sure that your applications are not vulnerable to any injection flaws whether they are SQL or any other types. "So how did we approach this?" I hear you say. Meet PQL. PQL is a domain-specific language that our head of engineering wrote, it is inspired by SQL. It is injection resistant because it doesn't allow for mutative syntax, it is database agnostic and it adheres to logical operator precedence. How does it look? This is an example of how it can look. The way we achieved this was through lexical analysis and syntactical analysis, that is, by parsing tokenized input into an AST. Challenge number two. As a PCI compliant payment services provider, I am required to implement only one primary function per server to prevent functions that require different security levels from coexisting on the same server and this is requirement 2.2.1 of PCI DSS. This was a difficult one because we don't have any servers.
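PQL itself is proprietary, but the lexical-analysis step of the technique just described can be sketched in miniature: user input is matched against a fixed token grammar, and anything outside that grammar (such as SQL's mutative syntax) is rejected outright. The token names and grammar below are illustrative, not Paybase's actual implementation:

```python
import re
from typing import NamedTuple

# Fixed token grammar: values are typed literals, so user input can
# never smuggle in extra statements or mutative syntax.
TOKEN_SPEC = [
    ("NUMBER", r"\d+(?:\.\d+)?"),
    ("STRING", r"'[^']*'"),
    ("OP",     r"==|!=|>=|<=|>|<"),
    ("LOGIC",  r"\b(?:and|or)\b"),
    ("IDENT",  r"[A-Za-z_][A-Za-z0-9_]*"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

class Token(NamedTuple):
    kind: str
    value: str

def lex(query: str) -> list:
    """Tokenize a query; characters outside the grammar raise an error."""
    tokens, pos = [], 0
    while pos < len(query):
        match = MASTER.match(query, pos)
        if not match:
            # Unknown characters (';', '--', etc.) are rejected, not passed through.
            raise ValueError(f"illegal character at position {pos}")
        if match.lastgroup != "SKIP":
            tokens.append(Token(match.lastgroup, match.group()))
        pos = match.end()
    return tokens
```

A parser would then consume these tokens into an AST, applying logical operator precedence; since the grammar has no mutative constructs at all, there is nothing an injected payload could parse into.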
Yes, Google Cloud has some servers that we use through association, but we don't really have any servers, we run everything in containers and note that the actual standard doesn't even say anything about virtual machines, never mind containers. The way we approached this was by trying to understand what the requirement is, and the requirement is to prevent functions that require different security levels from coexisting in the same space or otherwise accessing each other. We said that we're going to treat a server as a deployable unit, so a pod or a container, and in that case, we meet the requirement because we're using network policies which restrict traffic between the different services and we're using other bits of security as well. We're also using pod security policies to make sure that only pods that are allowed, that meet certain criteria, can come into the cluster and a very important one, we're using only internal, trusted and approved images, scanned as well, and I'll come back to the scanning in a second. Those were specific challenges, some examples, this is not an exhaustive list but those were challenges for which we had to think outside the box. Now let's look at challenges that we actually didn't have to deal with because of our setup. As a PCI compliant payment services provider, I am required to remove all test data and accounts from system components before the system becomes active or goes into production, this is requirement 6.4.4, and it makes sense to have this requirement. The normal way of organizations splitting their Google Cloud infrastructure - I've taken Google Cloud as an example but this sort of applies to other cloud providers - it's by having one massive organization and then you'd have a project and under that project you'd have all of the services, and companies like AWS actually suggest that you should have two different accounts, one for billing and one for the rest of the things you are doing.
If you are to do this, then you can split your environments within a GKE cluster at the namespace level, so you'd have a production namespace, you would have a quality assurance namespace, and a staging namespace all living within the same cluster, then you'd be able to access all of the different services. From a PCI DSS point of view, you have the concept of a cardholder data environment, which basically is the scope of the audit that you have to perform. If you are to do it this way then your scope would be everything within the VPC, but we did it in a different way. This is the way in which we split our environments, for each environment, we have a specific project and then we have a few extra projects for the more important things that we need to do. For example, we have a dedicated project for our image repo, we have a dedicated project for our Terraform state, and we have one for our backups, this is very important from a PCI but also from a compromise point of view. From a PCI point of view, it's important because we managed to reduce the scope to just the GKE cluster within the production project, and from a compromise point of view, because most of the compromise happened with that compute engine default service account that only had the editor role within the project, so yes, it managed to bypass our firewall rules, yes, it managed to get access to the buckets within the production project. It didn't manage to get access to our image repo, it didn't manage to get access to our backups or any of the other environments, so from that point of view, it was quite contained. What do you get from this? You get increased security, you get separation of concerns, you get a reduction of the scope and of course easier to organize RBAC. I'm not saying that without doing this you won't be able to get it as secure, I'm sure you will, but it's going to take much more work. Removing test data and accounts from system components before the system goes live.
This actually doesn't apply to us because the test data would only ever be in all of the other environments, never in the production environment to begin with. Challenge number four that we've avoided. As a PCI compliant payment services provider, I am required to perform quarterly internal vulnerability scans, address vulnerabilities and perform rescans to verify all high-risk vulnerabilities are resolved in accordance with the entity's vulnerability ranking and this is requirement 11.2.1. This is a very interesting one. First of all, random comment, that's just the image that I liked so I decided to put it there because it's my talk, but this particular challenge is about interpretation. When you are running everything within containers, you don't really have the concept of internal infrastructure, unless you're going to say, "Well, the way to meet this is by doing very frequent penetration tests," which is not necessarily viable for any organization. What we said is that we make sure that all of the images that ever reach our cluster have been scanned and no image that hasn't been scanned or hasn't passed certain vulnerability ranking requirements will ever get into our cluster. I've made a diagram. When a developer pushes code into GitHub, some integration tests run, and if those integration tests have passed then an image build starts and if the image build has been successful then there's another step that inherits from the tag of the first image and takes that image and it scans everything in it. If the scan passes, so it doesn't have any vulnerabilities higher than low, then the initial image is retrieved and then it's successfully pushed into GCR. If the image scan doesn't pass, then the build fails and our developers have no other way of getting that code into an environment. That's a way of dealing with it. I hear you ask, "Well, yes. But what if you find a vulnerability that wasn't there three months ago when you last updated your image?".
This is both a proactive and a reactive measure. That doesn't happen because we deploy multiple times a day and because we are running a monolith. Every time any change to our code happens, a new image is pushed and all of our services, although they have different configurations, they will always be running on the same image. From a database and other application point of view, we ensure on a regular basis to check our images and also update versions and so on. Summary Security is not a point in time but an ongoing journey, a never-ending ongoing journey and that's important to keep in mind. All of the things that we've done, that doesn't mean that we're all of a sudden secure, it just means we are more secure. We should all think of security as a way to make it really hard for attackers to get in so that they have to use so many resources that they won't bother rather than, "Yes, we're secure. We can sleep quietly at night." You can never sleep at night in this job. You can use open source software and achieve a good level of security. It's just a certain amount of work but it can be done, we like a challenge because we're all engineers. I want to leave you with the fact that we really need to challenge the PCI DSS status quo. It's really hard for different organizations to become compliant, especially organizations that are not in the fintech environment and that can sometimes make them fail to reach market. For us, specifically, it was really hard to become compliant because one of the things that we dealt with was the educational process that we had to undertake. Our auditor initially said that he had knowledge of containers and Google Cloud services and it turned out that he didn't know that much, so we spent a lot of time educating him and this shouldn't be our job. This should be the job of the people who are training the QSAs.
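Stepping back, the scan-and-gate step in that pipeline can be sketched as follows. The "nothing higher than low" threshold mirrors the talk, while the function name, severity scheme, and CVE identifiers are illustrative, not Paybase's actual tooling:

```python
# Severity ranking; the build passes only if nothing above "low" is found.
SEVERITY_RANK = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}
MAX_ALLOWED = SEVERITY_RANK["low"]

def scan_gate(findings):
    """findings: list of (cve_id, severity) pairs reported by the image scanner."""
    blockers = [(cve, sev) for cve, sev in findings
                if SEVERITY_RANK[sev.lower()] > MAX_ALLOWED]
    if blockers:
        # Fail the CI step: the image never reaches the registry.
        raise SystemExit(f"build failed, blocking vulnerabilities: {blockers}")
    return "push to GCR"
```

Running this as the final CI step means an image with any medium-or-worse finding can never be pushed, which is exactly the property the requirement's interpretation relies on.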
If you have a similar setup or if you're looking to go in this direction, please come and talk to me and hopefully together we can find a way to challenge PCI DSS and make it better for everyone. I have, as I promised, some resources if you are interested to read more or check your clusters, and I will make sure to make the slides available.
https://www.infoq.com/presentations/paybase-microservices-arch/
In this chapter, we'll cover the following recipes:

Building with windows and views
Adding a tabgroup to your app
Creating and formatting labels
Creating textfields for user input
Working with keyboards and keyboard toolbars
Enhancing your app with sliders and switches
Passing custom variables between windows
Creating buttons and capturing click events
Informing your users with dialogs and alerts
Creating charts using Raphael JS
Building an actionbar in Android

The ability to create user-friendly layouts with rich, intuitive controls is an important factor in successful app designs. With mobile apps and their minimal screen real estate, this becomes even more important. Titanium leverages a huge quantity of native controls found in both the iOS and Android platforms, allowing a developer to create apps just as rich in functionality as those created by native language developers. How does this compare to the mobile Web? When it comes to HTML/CSS-only mobile apps, savvy users can definitely tell the difference between them and a platform such as Titanium, which allows you to use platform-specific conventions and access your iOS or Android device's latest and greatest features. An application written in Titanium feels and operates like a native app, because all the UI components are essentially native. This means crisp, responsive UI components utilizing the full capabilities and power of your device. Most other books at this point would start off by explaining the fundamental principles of Titanium and, maybe, give you a rundown of the architecture and expand on the required syntax. Yawn...! We're not going to do that, but if you want to find out more about the differences between Titanium and PhoneGap, check out. Instead, we'll be jumping straight into the fun stuff: building our user interface and making a real-world app!
In this chapter, you'll learn:

- How to build an app using windows and views, and the differences between the two
- How to put together a UI using all the common components, including textfields, labels, and switches
- Just how similar the Titanium components' properties are to CSS when it comes to formatting your UI

You can pick and choose techniques, concepts, and code from any recipe in this chapter to add to your own applications or, if you prefer, you can follow each recipe from beginning to end to put together a real-world app that calculates loan repayments, which we'll call LoanCalc from here on. The complete source code for this chapter can be found in the /Chapter 1/LoanCalc folder. We're going to start off with the very basic building blocks of all Titanium applications: windows and views. By the end of this recipe, you'll understand how to implement a window and add views to it, as well as the fundamental differences between the two, which are not as obvious as they may seem at first glance. If you are intending to follow the entire chapter and build the LoanCalc app, then pay careful attention to the first few steps of this chapter, as you'll need to perform these steps again for every subsequent app in the book.

Note: We are assuming that you have already downloaded and installed Appcelerator Studio, along with Xcode and the iOS SDK, or Google's Android SDK, or both. To follow along with this recipe, you'll need Titanium installed, plus the appropriate SDKs. All the examples generally work on either platform unless specified explicitly at the start of a particular recipe. The quickest way to get started is by using Appcelerator Studio, a full-fledged Integrated Development Environment (IDE) that you can download from the Appcelerator website. If you prefer, you can use your favorite IDE, such as TextMate, Sublime Text, Dashcode, Eclipse, and so on.
Combined with the Titanium CLI, you can build, test, deploy, and distribute apps from the command line or terminal. However, for the purposes of this book, we're assuming that you'll be using Appcelerator Studio, which you can download from. To prepare for this recipe, open Appcelerator Studio and log in if you have not already done so. If you need to register a new account, you can do so for free from within the application. Once you are logged in, navigate to File | New | Mobile App Project and select the Classic category on the left (we'll come back to Alloy later on), then select Default Project and click on Next. The details window for creating a new project will appear. Enter LoanCalc as the name of the app, and fill in the rest of the details with your own information, as shown in the following screenshot. We can also uncheck the iPad and Mobile Web options, as we'll be building our application for the iPhone and Android platforms only:

Note: Pay attention to the app identifier, which is normally written in reverse domain notation (for example, com.packtpub.loancalc). This identifier cannot be changed easily after the project has been created, and you'll need to match it exactly when creating provisioning profiles to distribute your apps later on. Don't panic, however: you can change it.

First, open the Resources/app.js file in Appcelerator Studio. If this is a new project, the studio creates a sample app by default, containing a couple of windows inside a tabgroup; certainly useful, but we'll cover tabgroups in a later recipe, so go ahead and remove all of the generated code. Now, let's create a Window object, to which we'll add a view object. This view object will hold all our controls, such as textfields and labels.
In addition to creating our base window and view, we'll also create an imageview component to display our app logo before adding it to our view (you can get the images we have used from the source code for this chapter; be sure to place them in the Resources folder). Finally, we'll call the open() method on the window to launch it:

//create a window that will fill the screen
var win1 = Ti.UI.createWindow({
    backgroundColor: '#BBB'
});

//create the view; this will hold all of our UI controls
//note the height of this view is the height of the window
//minus 20 points for the status bar and padding
var view = Ti.UI.createView({
    top: 20,
    bottom: 10,
    left: 10,
    right: 10,
    backgroundColor: '#fff',
    borderRadius: 2
});

//now let's add our logo to an imageview and add that to our
//view object. By default it'll be centered.
var logo = Ti.UI.createImageView({
    image: 'logo.png',
    width: 253,
    height: 96,
    top: 10
});
view.add(logo);

//add the view to our window
win1.add(view);

//finally, open the window to launch the app
win1.open();

Firstly, it's important to explain the differences between windows and views, as there are a few fundamental differences that may influence your decision to use one over the other. Unlike views, windows have some additional abilities, including the open() and close() methods. If you are coming from a desktop development background, you can imagine a window as the equivalent of a form or screen; if you prefer web analogies, then a window is more like a page, whereas a view is more like a div. In addition to these methods, windows have display properties such as fullscreen and modal; these are not available in views. You'll also notice that while creating a new object, the create keyword is used, such as Ti.UI.createView() to create a view object. This naming convention is used consistently throughout the Titanium API, and almost all components are instantiated in this way.
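The create-and-add pattern above can be illustrated with a tiny plain-JavaScript mock. Note that createComponent below is a stand-in invented for demonstration; the real Ti.UI factories return native proxy objects with far more behavior:

```javascript
// Mock of the Ti.UI.create* convention: each factory takes one dictionary of
// properties and returns a component object that children can be added to.
function createComponent(props) {
  var component = Object.assign({ children: [] }, props);
  component.add = function (child) {
    component.children.push(child);
  };
  return component;
}

// mirrors the recipe: a view holding the logo imageview
var view = createComponent({ top: 20, backgroundColor: '#fff' });
var logo = createComponent({ image: 'logo.png', width: 253, height: 96 });
view.add(logo);

console.log(view.children.length); // 1
```

The single creation dictionary is the design choice to notice: every property is optional, so components can be configured in one expression rather than through dozens of setter calls.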
Windows and views can be thought of as the building blocks of your Titanium application. All your UI components are added to either a window or a view (which is itself the child of a window). There are a number of formatting options available for both of these objects, the properties and syntax of which will be very familiar to anyone who has used CSS in the past. Note that these aren't exactly like CSS, and the naming conventions differ. font, color, borderWidth, borderRadius, width, height, top, and left are all properties that function much as you would expect them to in CSS, and apply to windows and almost all views.

Note: It's important to note that your app requires at least one window to function, and that window must be called from within your entry point (the app.js file). You may have also noticed that we have sometimes instantiated objects or called methods using Titanium.UI.createXXX, and at other times, we have used Ti.UI.createXXX. Ti is simply a shorthand namespace designed to save time during coding, and it will execute your code in exactly the same manner as the full Titanium namespace does.

Tabgroups are one of the most commonly used UI elements and form the basis of the layout for many iOS and Android apps in the market today. A tabgroup consists of a sectioned set of tabs, each containing an individual window, which in turn contains a navigation bar and title. On iOS devices, these tabs appear in a horizontal list at the bottom of the screen, whereas they appear as upside-down tabs at the top of the screen on Android devices by default, as shown in the following image: We are going to create two separate windows. One of these will be defined inline, and the other will be loaded from an external CommonJS JavaScript module. Before you write any code, create a new JavaScript file called window2.js and save it in your Resources directory, the same folder in which your app.js file currently resides.
Now open the window2.js file you just created and add the following code:

//create an instance of a window
module.exports = (function(){
    var win = Ti.UI.createWindow({
        backgroundColor: '#BBB',
        title: 'Settings'
    });
    return win;
})();

If you have been following along with the LoanCalc app so far, then delete the current code in the app.js file that you created and replace it with the following source. Note that you can refer to the Titanium SDK as Titanium or Ti; in this book, I'll be using Ti:

//create tab group
var tabGroup = Ti.UI.createTabGroup();

//create the window
var win1 = Ti.UI.createWindow({
    backgroundColor: '#BBB',
    title: 'Loan Calculator'
});

//create the view, this will hold all of our UI controls
var view = Ti.UI.createView({
    top: 10,
    bottom: 10,
    left: 10,
    right: 10,
    backgroundColor: '#fff',
    borderRadius: 2,
    layout: 'vertical'
});

//now let's add our logo to an imageview and add that to our
//view object
var logo = Ti.UI.createImageView({
    image: 'logo.png',
    width: 253,
    height: 96,
    top: 10
});
view.add(logo);

//add the view to our window
win1.add(view);

//add the first tab and attach our window object (win1) to it
var tab1 = Ti.UI.createTab({
    icon: 'calculator.png',
    title: 'Calculate',
    window: win1
});

//create the second window for the settings tab
var win2 = require("window2");

//add the second tab and attach our external window object
//(win2 / window2) to it
var tab2 = Ti.UI.createTab({
    icon: 'settings.png',
    title: 'Settings',
    window: win2
});

//now add the tabs to our tabGroup object
tabGroup.addTab(tab1);
tabGroup.addTab(tab2);

//finally, open the tabgroup to launch the app
tabGroup.open();

It's important to realize that the tabgroup, when used, is the root of the application; it cannot be included inside any other UI component. Each tab within the tabgroup is essentially a wrapper for a single window. Windows should be created and assigned to the window property.
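The window2.js file relies on CommonJS module semantics, which can be sketched in plain JavaScript. fakeModule and the plain window object below are stand-ins for the module object and the Ti.UI.createWindow() proxy that Titanium supplies:

```javascript
// Stand-in for the module object that Titanium's require() supplies
var fakeModule = { exports: {} };

// The immediately-invoked function runs exactly once, when the module is
// first required; every later require('window2') returns the cached result,
// so the whole app shares a single Settings window instance.
fakeModule.exports = (function () {
  var win = { backgroundColor: '#BBB', title: 'Settings' };
  return win;
})();

var win2 = fakeModule.exports; // what require("window2") hands back
console.log(win2.title); // Settings
```

This caching is why assigning the module's export to a tab's window property is safe: the tab and any other code that requires window2 all reference the same object.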
At the time of writing this book, it may still be possible to use the url property (depending on the SDK you are using), but do not use it, as it will be removed in later SDKs. Instead, we'll be creating windows using a CommonJS pattern, which is considered the proper way of developing modular applications. The tab's icon is loaded from an image file, generally a PNG file. It's important to note that on both Android and the iPhone, all icons will be rendered in grayscale with alpha transparency; any color information will be discarded when you run the application. You'll also notice in the Resources folder of the project that we have two files for each image, for example, one named settings.png and one named settings@2x.png. These represent normal and high-resolution retina images, which some iOS devices support. It's important to note that while specifying image filenames, we never use the @2x part of the name; iOS will take care of using the relevant image, if it's available. We also specify all positional and size properties (width, height, top, bottom, and so on) in non-retina dimensions. This is also similar to how we interact with images in Android: we always use the normal filename (settings.png), despite the fact that there may be different versions of the file available for different device densities on Android. Finally, notice that the view is using a vertical layout. This means that elements will be laid out down the screen one after another. This is useful in avoiding having to specify the top values for all elements and then, if you need to change one position, having to change all the elements. With a vertical layout, as you modify one element's top or height value, all others shift with it. Apple can be particularly picky when it comes to using icons in your apps; wherever a standard icon has been defined by Apple (such as the gears icon for settings), you should use the same. A great set of 200 free tab bar icons is available at.
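The vertical layout behavior described above can be modeled in a few lines of plain JavaScript. This is a simplified illustration of the stacking rule, not Titanium's actual layout engine:

```javascript
// In a 'vertical' layout, each child's absolute top is the previous child's
// bottom edge plus the child's own top offset.
function stackVertically(children) {
  var bottomOfPrevious = 0;
  return children.map(function (child) {
    var absoluteTop = bottomOfPrevious + (child.top || 0);
    bottomOfPrevious = absoluteTop + child.height;
    return { name: child.name, absoluteTop: absoluteTop };
  });
}

// the logo followed by two hypothetical 30-point rows
var placed = stackVertically([
  { name: 'logo', top: 10, height: 96 },
  { name: 'row1', top: 10, height: 30 },
  { name: 'row2', top: 10, height: 30 }
]);
console.log(placed[1].absoluteTop); // 116
```

Change the logo's height in the array and rerun: every later row's absolute position shifts with it, which is exactly the maintenance benefit the recipe describes.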
Whether they are for presenting text content on the screen, identifying an input field, or displaying data within a tablerow, labels are one of the cornerstone UI elements that you'll find yourself using all the time with Titanium. Through them, you'll display the majority of your information to the user, so it's important to know how to create and format them properly. In this recipe, we'll create three different labels, one for each of the input components that we'll be adding to our app later on. Using these examples, we'll explain how to position your label, give it a text value, and format it. Open up your app.js file, and put these two variables at the top of your code file, directly under the tabgroup creation declaration. These are going to be the default values for our interest rate and loan length for the app:

//application variables
var numberMonths = 36; //loan length
var interestRate = 6.0; //interest rate

Let's create labels to identify the input fields that we'll be implementing later on. Type the following source code into your app.js file.
If you are following along with the LoanCalc sample app, this code should go after your imageview logo, added to the view from the previous recipe:

var amountRow = Ti.UI.createView({
    top: 10,
    left: 0,
    width: Ti.UI.FILL,
    height: Ti.UI.SIZE
});

//create a label to identify the textfield to the user
var labelAmount = Ti.UI.createLabel({
    width: Ti.UI.SIZE,
    height: 30,
    top: 0,
    left: 20,
    font: { fontSize: 14, fontFamily: 'Helvetica', fontWeight: 'bold' },
    text: 'Loan amount: $'
});
amountRow.add(labelAmount);
view.add(amountRow);

var interestRateRow = Ti.UI.createView({
    top: 10,
    left: 0,
    width: Ti.UI.SIZE,
    height: Ti.UI.SIZE
});

//create a label to identify the textfield to the user
var labelInterestRate = Ti.UI.createLabel({
    width: Ti.UI.SIZE,
    height: 30,
    top: 0,
    left: 20,
    font: { fontSize: 14, fontFamily: 'Helvetica', fontWeight: 'bold' },
    text: 'Interest Rate: %'
});
interestRateRow.add(labelInterestRate);
view.add(interestRateRow);

var loanLengthRow = Ti.UI.createView({
    top: 10,
    left: 0,
    width: Ti.UI.FILL,
    height: Ti.UI.SIZE
});

//create a label to identify the textfield to the user
var labelLoanLength = Ti.UI.createLabel({
    width: 100,
    height: Ti.UI.SIZE,
    top: 0,
    left: 20,
    font: { fontSize: 14, fontFamily: 'Helvetica', fontWeight: 'bold' },
    text: 'Loan length (' + numberMonths + ' months):'
});
loanLengthRow.add(labelLoanLength);
view.add(loanLengthRow);

By now, you should notice a trend in the way in which Titanium instantiates objects and adds them to views and windows, as well as a trend in the way formatting is applied to most basic UI elements using JavaScript object properties. Margins and padding are added using the absolute positioning values of top, left, bottom, and right, while font styling is done with the standard font properties, which are fontSize, fontFamily, and fontWeight in the case of our example code.
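The Ti.UI.FILL and Ti.UI.SIZE constants used for the rows above can be modeled roughly as follows; this is an illustrative simplification that ignores padding and nested layouts:

```javascript
// SIZE shrinks a dimension to fit the content; FILL expands it to the
// parent's dimension; a number is used as-is.
var SIZE = 'SIZE';
var FILL = 'FILL';

function resolveWidth(spec, contentWidth, parentWidth) {
  if (spec === SIZE) { return contentWidth; }
  if (spec === FILL) { return parentWidth; }
  return spec; // explicit point value
}

console.log(resolveWidth(FILL, 120, 300)); // 300, like amountRow
console.log(resolveWidth(SIZE, 120, 300)); // 120, like labelAmount
console.log(resolveWidth(100, 120, 300)); // 100, like labelLoanLength
```

The same three-way rule applies to heights, which is why the rows above can declare height: Ti.UI.SIZE and grow to fit whatever the labels inside them need.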
Here are a couple of important points to note:

- The width property of our first two labels is set to Ti.UI.SIZE, which means that Titanium will automatically calculate the width of the label depending on the content inside (a string value in this case). The Ti.UI.SIZE property can be used for both the width and height of many other UI elements as well, as you can see in the third label that we created, which has a dynamic height for matching the label's text. When no height or width property is specified, the UI component will expand to fit the exact dimensions of the parent view or window that encloses it.
- You'll notice that we're creating views that each contain a label. There's a good reason for this. To avoid using absolute positioning, we're using a vertical layout on the main view, and to ensure that our textfields appear next to our labels, we're creating a row as a view, which is then spaced vertically. Inside the row, we add the label and, in the next recipes, we will add the textfields next to the labels.
- The textAlign property of labels works the same way as you'd expect it to in HTML. However, you'll notice the alignment of the text only if the width of your label isn't set to Ti.UI.SIZE, unless that label happens to spread over multiple lines.

Textfields in Titanium are single-line textboxes used to capture user input via the keyboard, and, along with labels and buttons, they form the most common UI elements for user input in any application. In this section, we'll show you how to create a textfield, add it to your application's view, and use it to capture user input. We'll style our textfield component using a constant value for the first time. Type the following code after the view has been created but before adding that view to your window.
If you've been following along from the previous recipe, this code should be entered after your labels have been created:

//create the textfield for our loan amount input
var tfAmount = Ti.UI.createTextField({
    width: 140,
    height: 30,
    right: 20,
    borderStyle: Ti.UI.INPUT_BORDERSTYLE_ROUNDED,
    returnKeyType: Ti.UI.RETURNKEY_DONE,
    hintText: '1000.00'
});
amountRow.add(tfAmount);

//create the textfield for our interest rate input
var tfInterestRate = Ti.UI.createTextField({
    width: 140,
    height: 30,
    right: 20,
    borderStyle: Ti.UI.INPUT_BORDERSTYLE_ROUNDED,
    returnKeyType: Ti.UI.RETURNKEY_DONE
});
interestRateRow.add(tfInterestRate);

In this example, we created a couple of basic textfields with a rounded border style, and introduced some new property types that don't appear in labels and imageviews, including hintText. The hintText property displays a value in the textfield, which disappears when that textfield has focus (for example, when a user taps it to enter some data using their keyboard). The user input is available in the textfield property called value; accessing this value is simply a matter of assigning it to a variable (for example, var myName = txtFirstName.value), or using the value property directly. Textfields are one of the most common components in any application, and in Titanium there are a couple of points and options to consider whenever you use them. It's important to note that when you want to retrieve the text that a user has typed into a textfield, you need to reference the value property and not text, unlike many of the other string-based controls! Try experimenting with other textfield border styles to give your app a different appearance. Other possible values are the following:

Ti.UI.INPUT_BORDERSTYLE_BEZEL
Ti.UI.INPUT_BORDERSTYLE_LINE
Ti.UI.INPUT_BORDERSTYLE_NONE
Ti.UI.INPUT_BORDERSTYLE_ROUNDED

When a textfield or textarea control gains focus on either an iPhone or an Android phone, the default keyboard is what you see spring up on the screen.
There will be times, however, when you wish to change this behavior; for example, you may want the user to input only numeric characters into a textfield when they are providing a numerical amount (such as their age or a monetary value). Additionally, keyboard toolbars can be created to appear above the keyboard itself, which allow you to provide the user with other options, such as removing the keyboard from the window, or allowing copy and paste operations via a simple button tap. In the following recipe, we're going to create a toolbar that contains both a system button and another system component called flexiblespace. These will be added at the top of our numeric keyboard, which will appear whenever the textfield for amount or interest rate gains focus. Note that in this example, we have updated the tfAmount and tfInterestRate textfield objects to contain the keyboardType and keyboardToolbar properties. Note that toolbars are iOS-specific, and currently they may not be available for Android in the Titanium SDK. Open your app.js file and type the following code.
If you have been following along from the previous recipe, this code should replace the previous recipe's code for adding the amount and interest rate textfields:

//flexible space for button bars
var flexSpace = Ti.UI.createButton({
    systemButton: Ti.UI.iPhone.SystemButton.FLEXIBLE_SPACE
});

//done system button
var buttonDone = Ti.UI.createButton({
    systemButton: Ti.UI.iPhone.SystemButton.DONE,
    bottom: 0
});

//add the 'click' event listener to our done button
buttonDone.addEventListener('click', function(e){
    tfAmount.blur();
    tfInterestRate.blur();
    interestRate = tfInterestRate.value;
});

//creating the textfield for our loan amount input
var tfAmount = Ti.UI.createTextField({
    width: 140,
    height: 30,
    right: 20,
    borderStyle: Ti.UI.INPUT_BORDERSTYLE_ROUNDED,
    returnKeyType: Ti.UI.RETURNKEY_DONE,
    hintText: '1000.00',
    keyboardToolbar: [flexSpace, buttonDone],
    keyboardType: Ti.UI.KEYBOARD_PHONE_PAD
});
amountRow.add(tfAmount);

//creating the textfield for our interest rate input
var tfInterestRate = Ti.UI.createTextField({
    width: 140,
    height: 30,
    right: 20,
    borderStyle: Ti.UI.INPUT_BORDERSTYLE_ROUNDED,
    returnKeyType: Ti.UI.RETURNKEY_DONE,
    keyboardToolbar: [flexSpace, buttonDone],
    keyboardType: Ti.UI.KEYBOARD_PHONE_PAD
});
interestRateRow.add(tfInterestRate);

In this recipe, we created textfields and added them to our view. You should have noticed by now how many properties are universal among the different UI components: width, height, top, and right are just four of the properties used in our textfield called tfAmount that were also used in previous recipes for other components. Many touchscreen phones do not have physical keyboards; instead, we use a touchscreen keyboard to gather our input data. Depending on the data you require, you may not need a full keyboard with all the QWERTY keys, and you may want to just display a numeric keyboard, for example, if you were using the telephone dialing features on your iPhone or Android device. Additionally, you may require the QWERTY keys, but in a specific format.
A custom keyboard makes input quicker and less frustrating for the user by presenting custom options, such as keyboards for inputting web addresses and e-mails with the www and @ symbols in convenient touch locations. In this example, we're setting keyboardType to Ti.UI.KEYBOARD_PHONE_PAD, which means that whenever the user taps that field, they see a numeric keypad. In addition, we are specifying the keyboardToolbar property to be an array containing our Done button as well as the flexSpace button, so we get a toolbar with the Done button. The event listener added to the Done button ensures that we can pick up the click, capture the values, and blur the field, essentially hiding the keypad.

Tip: Downloading the example code. You can download the example code files from your account at for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.

Try experimenting with other keyboard styles in your Titanium app! Sliders and switches are two UI components that are simple to implement and can bring that extra level of interactivity into your apps. Switches, as the name suggests, have only two states, on and off, which are represented by boolean values (true and false). Sliders, on the other hand, take two float values, a minimum value and a maximum value, and allow the user to select any number between and including these two values. In addition to its default styling, the slider API also allows you to use images for both sides of the track and the slider thumb image that runs along it. This allows you to create some truly customized designs. We are going to add a switch to indicate an on/off state and a slider to hold the loan length, with values ranging from a minimum of 12 months to a maximum of 60 months.
Also, we'll add some event handlers to capture the changed value from each component, and in the case of the slider, we will update an existing label with the new slider value. Don't worry if you aren't yet 100 percent sure about how event handlers work, as we'll cover them in further detail in Chapter 6, Getting to Grips with Properties and Events. If you're following along with the LoanCalc app, the next code should replace the code in your window2.js file. We'll also add a label to identify what the switch component does and a view component to hold it all together:

//create an instance of a window
module.exports = (function(){
    var win = Ti.UI.createWindow({
        backgroundColor: '#BBB',
        title: 'Settings'
    });

    //create the view, this will hold all of our UI controls
    var view = Ti.UI.createView({
        width: 300,
        height: 70,
        left: 10,
        top: 10,
        backgroundColor: '#fff',
        borderRadius: 5
    });

    //create a label to identify the switch control to the user
    var labelSwitch = Ti.UI.createLabel({
        width: Ti.UI.SIZE,
        height: 30,
        top: 20,
        left: 20,
        font: { fontSize: 14, fontFamily: 'Helvetica', fontWeight: 'bold' },
        text: 'Auto Show Chart?'
    });
    view.add(labelSwitch);

    //create the switch object
    var switchChartOption = Ti.UI.createSwitch({
        right: 20,
        top: 20,
        value: false
    });
    view.add(switchChartOption);

    win.add(view);
    return win;
})();

Now let's write the slider code; go back to your app.js file and type the following code underneath the interestRateRow.add(tfInterestRate); line:

//create the slider to change the loan length
var lengthSlider = Ti.UI.createSlider({
    width: 140,
    top: 200,
    right: 20,
    min: 12,
    max: 60,
    value: numberMonths,
    thumbImage: 'sliderThumb.png',
    highlightedThumbImage: 'sliderThumbSelected.png'
});

lengthSlider.addEventListener('change', function(e){
    //output the value to the console for debug
    console.log(lengthSlider.value);

    //update our numberMonths variable
    numberMonths = Math.round(lengthSlider.value);

    //update the label
    labelLoanLength.text = 'Loan length (' + numberMonths + ' months):';
});
loanLengthRow.add(lengthSlider);

In this recipe, we added two new components to two separate views within two separate windows. The first component, a switch, is fairly straightforward and, apart from the standard layout and positioning properties, takes one main boolean value to determine its on or off status. It also has only one event, change, which is executed whenever the switch changes from the on to off position or vice versa. On the Android platform, the switch can be altered to appear as a toggle button (the default) or a checkbox. Additionally, Android users can display a text label using the title property, which can be changed programmatically by using the titleOff and titleOn properties. The slider component is more interesting and has many more properties than a switch. Sliders are useful for instances where we want to allow the user to choose from a range of values; in this case, it is a numeric range of months from 12 to 60.
This is a much more effective method of choosing a number from a range than listing all the possible options in a picker, and is much safer than letting a user enter possibly invalid values via a textfield or textarea component. Pretty much all of the slider can be styled using the default properties available in the Titanium API, including thumbImage and highlightedThumbImage, as we did in this recipe. The highlightedThumbImage property allows you to specify the image that is used while the slider is being selected and dragged, allowing you to have a default and an active state. You'll often find a need to pass variables and objects between different screen objects, such as windows, in your apps. One example is between a master and a child view. If you have a tabular list of data that shows only a small amount of information per row, and you wish to view the full description, you might pass that description data as a variable to the child window. In this recipe, we're going to apply this very principle to a variable on the settings window (in the second tab of our LoanCalc app), by setting the variable in one window and then passing it back for use in our main window. Under the declaration for your second window, win2, in the app.js file, include the following additional property called autoShowChart and set it to false. This is a custom property, that is, a property that is not already defined by the Titanium API. Often, it's handy to include additional properties in your objects if you require certain parameters that the API doesn't provide by default:

//set the initial value of win2's custom property
win2.autoShowChart = false;

Now, in the window2.js file, which holds all the subcomponents for your second window, replace the code that you created earlier to add the switch with the following code.
This will update the window's autoShowChart variable whenever the switch is changed:

//create the switch object
var switchChartOption = Ti.UI.createSwitch({
    right: 20,
    top: 20,
    value: false
});

//add the event listener for the switch when it changes
switchChartOption.addEventListener('change', function(e){
    win.autoShowChart = switchChartOption.value;
});

//add the switch to the view
view.add(switchChartOption);

How this code works is actually pretty straightforward. When an object is created in Titanium, all the standard properties are accessible in a dictionary object of key-value pairs; all that we're doing here is extending that dictionary object to add a property of our own. We can do this in two ways: as shown in our recipe's source code, after the instantiation of the window object, or immediately within the instantiation code. In the source code of the second window, we are simply referencing the same object, so all of its properties are already available for us to read from and write to. In any given app, you'll notice that creating buttons and capturing their click events is one of the most common tasks you perform. This recipe will show you how to declare a button control in Titanium and attach a click event to it. Within that click event, we'll perform a task and log it to the info window in Appcelerator Studio. This recipe will also demonstrate how to implement some of the default styling mechanisms available to you via the API. Open your app.js file and type the following code.
If you're following along with the LoanCalc app, the following code should go after you have created and added the textfield controls:

//calculate the interest for this loan button
var buttonCalculateInterest = Ti.UI.createButton({
    title: 'Calculate Total Interest',
    id: 1,
    top: 10
});

//add the event listener
buttonCalculateInterest.addEventListener('click', calculateAndDisplayValue);

//add the first button to our view
view.add(buttonCalculateInterest);

//calculate the repayments for this loan button
var buttonCalculateRepayments = Ti.UI.createButton({
    title: 'Calculate Total Repayment',
    id: 2,
    top: 10
});

//add the event listener
buttonCalculateRepayments.addEventListener('click', calculateAndDisplayValue);

//add the second and final button to our view
view.add(buttonCalculateRepayments);

Now that we've created our two buttons and added the event listeners, let's create the calculateAndDisplayValue() function to do some simple fixed-interest mathematics and produce the results, which we'll log to the Appcelerator Studio console:

//the event handler which will be executed when either of
//our calculation buttons is tapped
function calculateAndDisplayValue(e) {
    //log the button id so we can debug which button was tapped
    console.log('Button id = ' + e.source.id);

    if (e.source.id == 1) {
        //Interest (I) = Principal (P) times Rate Per Period
        //(r) times Number of Periods (n) / 12
        var totalInterest = (tfAmount.value * (interestRate / 100) * numberMonths) / 12;

        //log result to console
        console.log('Total interest = ' + totalInterest);
    } else {
        //Interest (I) = Principal (P) times Rate Per Period (r)
        //times Number of Periods (n) / 12
        var totalInterest = (tfAmount.value * (interestRate / 100) * numberMonths) / 12;
        var totalRepayments = Math.round(tfAmount.value) + totalInterest;

        //log result to console
        console.log('Total repayments = ' + totalRepayments);
    }
} //end function

Most controls in Titanium are capable of firing one or more events, such as focus, onload, or (as in our
recipe) click. The click event is undoubtedly the one you'll use more often than any other. In the preceding source code, you will notice that, in order to execute code from this event, we added an event listener to our button with a signature of click. This signature is a string and forms the first part of our event listener; the second part is the function to execute for the event. It's important to note that other component types can also be used in a similar manner; for example, an imageview can be declared, contain a custom button image, and have a click event attached to it in exactly the same way as a regular button. There are a number of dialogs available for you to use in the Titanium API, but for the purposes of this recipe, we'll be concentrating on the two main ones: the alert dialog and the option dialog. These two simple components perform two similar roles, but with a key difference. The alert dialog is normally used only to show the user a message, while the option dialog asks the user a question and can accept a response in the form of a number of options. Generally, an alert dialog allows a maximum of two responses from the user, whereas the option dialog can contain many more. There are also key differences in the layout of these two dialog components, which will become obvious in the following recipe. First, we'll create an alert dialog that simply notifies the user of an action that cannot be completed due to missing information; in our case, the user has not provided a value for the loan amount in the tfAmount textfield. Add the following code to the calculateAndDisplayValue() function, just under the initial console.log command:

if (tfAmount.value === '' || tfAmount.value === null) {
    var errorDialog = Ti.UI.createAlertDialog({
        title: 'Error!',
        message: 'You must provide a loan amount.'
    });
    errorDialog.show();
    return;
}

Now let's add the option dialog.
This is going to display the result from our calculation and then give the user the choice of viewing the results as a pie chart (in a new window), or of canceling and staying on the same screen. We need to add a couple of lines of code to define the optionsMessage variable that will be used in the option dialog, so add this code below the line calculating totalRepayments: console.log('Total repayments = ' + totalRepayments); var optionsMessage = "Total repayments on this loan equates to $" + totalRepayments; Then add the following code just below the line of code defining totalInterest: console.log('Total interest = ' + totalInterest); var optionsMessage = "Total interest on this loan equates to $" + totalInterest; Finally, at the end of the function, add this code: //check our win2 autoShowChart boolean value first (coming //from the switch on window2.js) if (win2.autoShowChart == true) { // openChartWindow(); } else { var resultOptionDialog = Ti.UI.createOptionDialog({ title: optionsMessage + '\n\nDo you want to view this in a chart?', options: ['Okay', 'No'], cancel: 1 }); //add the click event listener to the option dialog resultOptionDialog.addEventListener('click', function(e){ console.log('Button index tapped was: ' + e.index); if (e.index == 0) { // openChartWindow(); } }); resultOptionDialog.show(); } //end if The alert dialog, in particular, is a very simple component that presents the user with a message as a modal, and it has only one possible response, which closes the alert. Note that you should be careful not to call an alert dialog more than once while a pending alert is still visible, for example, if you're calling that alert from within a loop. The option dialog is a much larger modal component that presents a series of buttons with a message at the bottom of the screen. It is generally used to let the user pick one item from a larger selection of options.
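The click handler above branches purely on the numeric index of the tapped button. Stripped of the Titanium API, that dispatch logic can be modelled and exercised in plain JavaScript (the function name and return values here are illustrative, not part of Titanium):

```javascript
// A minimal model of option-dialog click dispatch: the handler
// receives the index of the tapped option and decides what to do.
function makeDialogHandler(options, cancelIndex, onOkay) {
  return function (e) {
    // The cancel index dismisses the dialog without further action.
    if (e.index === cancelIndex) return 'cancelled';
    // In this recipe, index 0 corresponds to the 'Okay' option.
    if (options[e.index] === 'Okay') {
      onOkay();
      return 'okay';
    }
    return 'ignored';
  };
}

var opened = false;
var handler = makeDialogHandler(['Okay', 'No'], 1, function () { opened = true; });

console.log(handler({ index: 0 })); // 'okay' — the chart window would open
console.log(handler({ index: 1 })); // 'cancelled' — the dialog is dismissed
console.log(opened);                // true
```

The real dialog works the same way: the click event's e.index tells you which button was tapped, and the cancel index is the one you treat as "do nothing".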
In our code, resultOptionDialog presents the user with a choice of two options—Okay and No. One interesting property of this dialog is cancel, which dismisses the dialog without firing the click event, and also styles the button at the requested index in a manner that differentiates it from the rest of the group of buttons. Note that we've commented out the openChartWindow() function because we haven't created it yet. We'll be doing that in the next recipe. Just like the Window object, these dialogs are not added to another View, but are presented by calling the show() method instead. You should call the show() method only after the dialog has been properly instantiated and any event listeners have been created. The following images show the difference between the alert dialog and the option dialog: Let's finish off our calculations visually by displaying charts and graphs. Titanium lacks a native charting API. However, there are some open source options for implementing charts, such as Google Charts. While the Google solution is free, it requires your apps to be online every time you need to generate a chart. This might be okay for some circumstances, but it is not the best solution for an application that is meant to be usable offline. Plus, Google Charts returns a generated JPG or PNG file at the requested size and in rasterized format, which is not great for zooming in when viewing on an iPhone or iPad. A better solution is to use the open source and MIT-licensed Raphael library, which (luckily for us) has a charting component! It is not only free but also completely vector-based, which means any charts that you create will look great in any resolution, and can be zoomed in to without any loss of quality. Note: this recipe may not work on all Android devices. This is because the current version of Raphael isn't supported by non-WebKit mobile browsers. However, it will work as described here for iOS. Download the main Raphael JS library from.
The direct link is. Download the main Charting library from (the direct link is), and any other charting libraries that you wish to use. Download the Pie Chart library, which is at. If you're following along with the LoanCalc example app, then open your project directory and put your downloaded files into a new folder called charts under the Resources directory. You can put them into the root folder if you wish, but bear in mind that you will have to ensure that your references in the following steps are correct. To use the library, we'll be creating a webview in our app, referencing a variable that holds the HTML code to display a Raphael chart, which we'll call chartHTML. A webview is a UI component that allows you to display web pages or HTML in your application. It does not include any features of a full-fledged browser, such as navigation controls or address bars. Create a new file called chartwin.js in the Resources directory and add the following code to it: //create an instance of a window module.exports = (function() { var chartWin = Ti.UI.createWindow({ title : 'Loan Pie Chart' }); chartWin.addEventListener("open", function() { //create the chart title using the variables we passed in from //app.js (our first window) var chartTitleInterest = 'Total Interest: $' + chartWin.totalInterest; var chartTitleRepayments = 'Total Repayments: $' + chartWin.totalRepayments; //create the chart using the sample html from the //raphaeljs.com website var chartHTML = '<html><head> <title>RaphaelJS Chart</title><meta name="viewport" content="width=device-width, initial-scale=1.0"/> <script src="charts/raphael-min.js" type="text/javascript" charset="utf-8"></script> <script src="charts/g.raphael-min.js" type="text/javascript" charset="utf-8"></script> <script src="charts/g.pie-min.js" type="text/javascript" charset="utf-8"></script> <script type="text/javascript" charset="utf-8"> window.onload = function () { var r = Raphael("chartDiv"); r.text(150, 10, "' + chartTitleInterest + '"); r.text(150, 30, "' + chartTitleRepayments + '"); r.piechart(150, 180, 130, [' + chartWin.totalInterest + ', ' + chartWin.principalRepayments + ']); }; </script> </head> <body> <div id="chartDiv"></div> </body></html>'; //add
a webview to contain our chart var webview = Ti.UI.createWebView({ width : Ti.UI.FILL, height : Ti.UI.FILL, top : 0, html : chartHTML }); chartWin.add(webview); }); return chartWin; })(); Now, back in your app.js file, create a new function at the end of the file, called openChartWindow(). This function will be executed when the user chooses Okay from the previous recipe's option dialog. It will create a new window object based on the chartwin.js file and pass to it the values needed to show the chart: //we'll call this function if the user opts to view the loan //chart function openChartWindow() { //Interest (I) = Principal (P) times Rate Per Period (r) //times Number of Periods (n) / 12 var totalInterest = (tfAmount.value * (interestRate / 100) * numberMonths) / 12; var totalRepayments = Math.round(tfAmount.value) + totalInterest; var chartWindow = require("chartwin"); chartWindow.numberMonths = numberMonths; chartWindow.interestRate = interestRate; chartWindow.totalInterest = totalInterest; chartWindow.totalRepayments = totalRepayments; chartWindow.principalRepayments = (totalRepayments - totalInterest); tab1.open(chartWindow); } Finally, remember to uncomment the two // openChartWindow() lines that you added in the previous recipe. Otherwise, you won't see anything! Essentially, what we're doing here is wrapping the Raphael library, something that was originally built for the desktop browser, into a format that can be consumed and displayed using the iOS's WebKit browser. You can find out more about Raphael at and, and learn how it renders charts via its JavaScript library. We'll not be explaining this in detail; rather, we will cover the implementation of the library to work with Titanium. Our implementation consists of creating a webview component that (in this case) will hold the HTML data that we constructed in the chartHTML variable. 
This HTML data contains all of the code that is necessary to render the charts, including the scripts listed in item #2 of the Getting Ready section of this recipe. If you have a chart with static data, you can also reference the HTML from a file using the url property of the webview object, instead of passing all the HTML as a string. The chart itself is created using some simple JavaScript embedded in the r.piechart(150, 180, 130, n1, n2) HTML data string, where n1 and n2 are the two values we wish to display as slices in the pie chart. The other values define the center point of the chart from the left and top, respectively, followed by the chart radius. All of this is wrapped up in a new module file defined by the chartwin.js file, which accesses the properties passed from the first tab's window in our LoanCalc app. This data is passed using exactly the same mechanism as explained in a previous recipe, Passing custom variables between Windows. Finally, the chart window is passed back to the app.js file, within the openChartWindow() function, and from there, we use tab1.open() to open a new window within tab1. This has the effect of sliding the new window in, similar to the way in which many iOS apps work (in Android, the new window would open normally). The following screenshot shows the Raphael JS Library being used to show a pie chart based on our loan data: In Android 3.0, Google introduced the actionbar, a tab-style interface that sits under the title bar of an application. The actionbar behaves a lot like the tabgroup, which we're used to in iOS, and coincidentally it can be created in the same way as we created a TabGroup previously, which makes it very easy to create one! All that we need to do is make some minor visual tweaks in our application to get it working on Android. You will be running this recipe on Android 4.x, so make sure you're running an emulator or device that runs 4.x or higher.
I'd recommend using GenyMotion, available at, to emulate Android. It's fast and way more flexible than the built-in Android SDK emulators. It's also fully supported in Titanium and in Appcelerator Studio. The complete source code for this chapter can be found in the /Chapter 1/LoanCalc folder. There's not much to do to get the actionbar working, as we've already created a tabgroup for our main interface. We just need to make a few tweaks to our app views, buttons, and labels. First, let's make sure that all our labels are rendering correctly. Add the following attribute to any label that you've created: color: '#000' Now we need to fix our buttons. Let's add a tweak to them after we've created them (for Android only). Add the following code after your buttons. To do this, we're going to use .applyProperties, which allows us to make multiple changes to an element at the same time: if (Ti.Platform.osname.toLowerCase() === 'android') { buttonCalculateRepayments.applyProperties({ color : '#000', height : 45 }); buttonCalculateInterest.applyProperties({ color : '#000', height : 45 }); } This block checks whether we're running Android and makes some changes to the buttons. Let's add some more code to the block to adjust the textfield height as well, as follows: if (Ti.Platform.osname.toLowerCase() === 'android') { buttonCalculateRepayments.applyProperties({ color : '#000', height : 45 }); buttonCalculateInterest.applyProperties({ color : '#000', height : 45 }); tfAmount.applyProperties({ color : '#000', height : 35 }); tfInterestRate.applyProperties({ color : '#000', height : 35 }); } Finally, we're going to make a tweak to our settings window to make it play nicely on Android devices with different widths.
Edit the window2.js file and remove the width of the view variable, changing it to the following: var view = Ti.UI.createView({ height : 70, left : 10, right: 10, top : 10, backgroundColor : '#fff', borderRadius : 5 }); We'll need to update the labelSwitch variable too, by adding this line: color: '#000' Now let's run the app in the Android emulator or on a device, and we should see the following: We've not done much here to get an actionbar working. That's because Titanium takes care of the heavy lifting for us. You must have noticed that the only changes we made were visual tweaks to the other elements on the screen; the actionbar just works! This is a really nice feature of Titanium, wherein you can create one UI element, a tabgroup, and have it behave differently for iOS and Android using the same code. Having said that, there are some additional tweaks that you can do to your actionbar using the Ti.Android.ActionBar API. This gives specific access to properties and events associated with the actionbar. More information can be found at. So, for example, you can change the properties of actionBar by accessing it via the current window: actionBar = win.activity.actionBar; if (actionBar) { actionBar.backgroundImage = "/bg.png"; actionBar.title = "New Title"; } As you can see, it's really easy to create an actionbar using a tabgroup and alter its properties in Android.
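The applyProperties calls used in this recipe are essentially batched property assignments guarded by a platform check; on a real Titanium proxy the point is roughly to apply several native-side updates in one go. Using a stand-in plain object instead of a real Titanium proxy, the pattern reduces to:

```javascript
// Stand-in for a Titanium UI proxy: here it is just a bag of properties.
var buttonCalculateInterest = { title: 'Calculate Total Interest', color: '#fff', height: 40 };

// applyProperties sets several properties in one call, much like Object.assign.
function applyProperties(proxy, props) {
  return Object.assign(proxy, props);
}

var osname = 'android'; // in a real app this would come from Ti.Platform.osname
if (osname.toLowerCase() === 'android') {
  applyProperties(buttonCalculateInterest, { color: '#000', height: 45 });
}

console.log(buttonCalculateInterest.color);  // '#000'
console.log(buttonCalculateInterest.height); // 45
```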
https://www.packtpub.com/product/appcelerator-titanium-smartphone-app-development-cookbook-second-edition/9781849697705
CC-MAIN-2021-17
en
refinedweb
chrono::vehicle::ChVehicleGeometry Class Reference Description Utility class defining geometry (visualization and collision) and contact materials for a rigid vehicle body. Holds vectors of primitive shapes (any one of which may be empty) and a list of contact materials. Each shape defines its position and orientation relative to the parent body, geometric dimensions, and an index into the list of contact materials. #include <ChSubsysDefs.h> Collaboration diagram for chrono::vehicle::ChVehicleGeometry: The documentation for this class was generated from the following files: - /builds/uwsbel/chrono/src/chrono_vehicle/ChSubsysDefs.h - /builds/uwsbel/chrono/src/chrono_vehicle/ChSubsysDefs.cpp
http://api.projectchrono.org/classchrono_1_1vehicle_1_1_ch_vehicle_geometry.html
Data Conversion in Swift. Even operations which don’t look like data processing, for example, showing the user interface or handling button clicks, are actually sending packets of data to an API and receiving responses. Now, in 2020, we still work with algorithms and data. What became more complex are the data structures and, as a result, the conversions between them. Let’s have a look at the most popular Swift data types, structures and classes and find (or write) functions converting one into another. A note about Swift types Swift inherited its types from Objective-C, but Swift has a different naming convention for complex types. Classes from the Foundation framework typically start with the prefix NS (from NextStep). In Swift the NS is usually dropped and instead of NSString you have String and Data instead of NSData. In some cases it’s kind of renaming, but not everywhere. The String class doesn’t have access to all NSString methods. It’s not an alias, it’s a wrap around NSString. But Swift allows to make a fast conversion if you need to access properties and methods of the original NSString class: let str: String = "Some String" let nsstr = str as NSString Such conversions never fail, that’s why you don’t need to use ! or ? after as. Data to String and back In the C programming language Data and String were the same data type, and they were the same as arrays of bytes. Modern Strings are not just buffers of data, they also have information about encoding and many useful methods. At the same time, Data (aka NSData) can logically contain text, which means there should be a way to convert it to String (aka NSString) and back. I use exclamation marks for the sake of simplicity. The data method returns optional Data objects. Read here how to deal with optionals in a proper way. Numbers to String and back Another popular conversion is numbers ( Int, Double and others) to String and back. 
The easiest (but not always the best) way to convert a number to String is String interpolation. In other words, inserting variables (or constants) into strings using \() constructions. The disadvantage of this method is that you can’t format it. For example, if you want to use currency formatting, it's better to use String.format: Format specifiers in Swift are inherited from Objective-C, and format specifiers in Objective-C are inherited from the legendary printf function family in C. You can find a list of Swift format specifiers here. Another important note: when you turn String into Int or Double you get an optional value. It makes sense, because Int("a") should return nil. Int("1.5") will also return nil, not 1 or 2. String to Dictionary and Array and back This is a big topic, because it depends on the String format. The 3 most popular formats are JSON, XML and YAML. - JSON — JavaScript Object Notation. A variation of the JavaScript language, or more precisely of the part of JavaScript responsible for describing objects and arrays. An interesting fact about JSON is that it can be pasted into JavaScript code and parsed by web browsers or other JavaScript engines. - XML — eXtensible Markup Language. It's a popular language for creating layouts of web pages and UIs. It's the main format of Universal Windows Platform layouts and Android layouts, a close relative of HTML (HyperText Markup Language), and kind of a parent of the PLIST (Property List) format actively used for iOS and macOS development. - YAML — originally Yet Another Markup Language, now YAML Ain't Markup Language. Another popular language to represent data. Often used for configuration files, for example, in Flutter. These 3 languages look very different, but they have some things in common: - They are text strings. - Swift dictionaries and arrays can be serialised into them and deserialised from them as long as each data type inside dictionaries and arrays is serialisable. JSON The easiest case is JSON.
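As the JSON bullet above notes, JSON is a subset of JavaScript's own literal syntax, so the dictionary round trip described in this section has a direct analogue in any JavaScript engine:

```javascript
// A dictionary with a nested array, mirroring the shape used in this section.
var dict = { key1: 'val1', key2: 'val2', arr: [1, 2, 3] };

// Serialise: the third argument enables pretty printing,
// playing the same role as the .prettyPrinted option in Swift.
var pretty = JSON.stringify(dict, null, 2);
var compact = JSON.stringify(dict); // the optimised form you'd send to an API

// Deserialise back into an object.
var parsed = JSON.parse(pretty);

console.log(compact);       // {"key1":"val1","key2":"val2","arr":[1,2,3]}
console.log(parsed.arr[1]); // 2
```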
All modern languages support JSON as it's the most common way of information exchange with server APIs. This example shows how to convert a JSON String to Dictionary and back. In the end it prints this: { "key1" : "val1", "arr" : [ 1, 2, 3 ], "key2" : "val2" } It's easy to see that it's the same data structure as we provided, but formatted differently. If you need to prepare data for sending to the API, you'd better remove the .prettyPrinted option to get a more optimised (but less human readable) output. XML The next is XML. Swift has the internal class XMLParser, but it doesn't make conversions. It's very powerful, but very uncomfortable to use. For example, that's how we can parse a simple XML String: The output of this code is: Found element parent with attributes [:] Found element child with attributes ["attr": "attr"] As you can see, it parses XML correctly, but it doesn't create a Dictionary or an Array object. To get a dictionary, you need an external library. For example, SWXMLHash. This library is a wrapper around XMLParser (aka NSXMLParser), but it also makes conversion, which is exactly what we need. You can add it with Cocoapods: pod 'SWXMLHash' If you're not familiar with Cocoapods or you want to find another useful libraries for your project, you can read this article. Then you parse an XML document with just one line of code: let xmlDict = SWXMLHash.parse(xml) How to write XML? The easiest way is to generate String with String.format, but you may have problems with some symbols… you'll need to escape them. Has anyone done it for us? I didn't find any modern solution. All the libraries are either deprecated or do something different. But I found a post here. It's in Objective-C and doesn't handle all the cases, so I wrote an updated version: XML can have only one root element, that's why we need to provide it specifically.
See the example below: let dict: [String: Any] = [ "child1": [1, 2, 3], "child2": "Hello, world", "child3": [ ["a": "b"], "cd", 5 ] ] let xml = convertDictionaryToXML(dictionary: dict, startElement: "root", isFirstElement: true) print(xml) The output will be the following: <?xml version="1.0" encoding="utf-8"?> <root> <child2>Hello, world</child2> <child3> <a>b</a> </child3> <child3>cd</child3> <child3>5</child3> <child1>1</child1> <child1>2</child1> <child1>3</child1> </root> This is valid XML. As a side note I will mention that the elements of arrays are ordered, but the keys in the dictionary are not. This is a common rule for all programming languages. YAML Swift doesn't provide an integrated way to create or parse YAML. But there are good libraries for it. For example, Yams. Include it with Cocoapods: pod 'Yams' Import it: import Yams And use: let strYAML: String? = try? Yams.dump(object: dictionary) let loadedDictionary = try? Yams.load(yaml: mapYAML) as? [String: Any] Numeric conversions Swift makes it easy to convert different numeric types. The problem is that conversion from Double (and Float) to Int will always ignore the decimal part. To fix this issue and get a mathematically correct rounding, use the round function on Double before converting it to Int: Converting Bool Bool or Boolean is a logical type, which can have only two values: true and false. true in Objective-C is also known as YES and false is NO. Even if you program in Swift, you may see YES and NO in PLIST files and Objective-C libraries. Usually true is represented in memory as 1 and false as 0. In C, for example, you can write if (1) { ... } and while (1) { ... } makes an infinite loop unless you terminate it with break or return. JavaScript offers the concept of truthy and falsy values. For example, 1 is truthy and null is falsy. Swift needs an explicit conversion to Bool for these cases. Constructions like let b = Bool(1) will cause a warning.
And in my opinion it's the right thing. Because I don't know if -1 is truthy or falsy. Different programmers will give different answers. And different programming languages will interpret it differently. Many APIs return 0 for success and a non-0 value for errors. The WIN32 API defines a constant S_OK, which is 0. And if you write: if (some_call()) { // ... } the inner code will never be executed if some_call returns S_OK. I'm writing all this to say that all conversions between boolean values and numeric values should be forbidden to avoid all this mess. On the other hand, conversions between Bool and String make sense. As you can see from the example above, Swift considers only “true” and “false” as valid boolean strings. That's why, working in Swift, you should serialise Bool this way. If you use external APIs, for example, a REST API, you should check how boolean values are presented in the documentation and make explicit conversions. For example: Date and Time For both date and time Swift offers the data type Date. It's easy to confuse with Data, but it's totally different. Technically, Date is a combination of two values: - Timestamp - Time zone When we talk about dates, it makes sense to convert it to (from) two data types — Double and String. Double (aka TimeInterval) is the way Swift represents timestamps (the number of seconds since the Epoch, 1 January 1970). And String has many purposes, starting with showing date and/or time to the user and ending with sending them to the server in ISO-8601 format. The example above shows how Date can be converted into a timestamp or one of the possible strings. The opposite conversion is also possible. For example: As you can probably guess, objects date1, date2 and date3 refer to the same date and time (except milliseconds, they will be different). Conclusion Swift is a strictly typed programming language, that's why even simple data conversions need to be made explicitly.
We reviewed conversions between the most common data types and data formats used in modern programming. Happy coding and see you next time!
https://alex-nekrasov.medium.com/data-conversion-in-swift-e2ba90a55748?source=---------4----------------------------
WAIT(2) NetBSD System Calls Manual WAIT(2)

Powered by man-cgi (2021-03-02). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias.

NAME
     wait, waitpid, wait4, wait3 -- wait for process termination

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <sys/wait.h>

     pid_t wait(int *status);

     pid_t waitpid(pid_t wpid, int *status, int options);

     #include <sys/resource.h>

     pid_t wait3(int *status, int options, struct rusage *rusage);

     pid_t wait4(pid_t wpid, int *status, int options, struct rusage *rusage);

DESCRIPTION
     The wait4() call provides a more general interface for programs that need to wait for certain child processes, that need resource utilization statistics accumulated by child processes, or that require options. The other wait functions are implemented using wait4().

     The wpid parameter specifies the set of child processes for which to wait. The options parameter may contain WNOHANG, indicating that the call should not block when there are no processes that wish to report status.

     The status of a child process is examined with the macros defined in <sys/wait.h>, such as WIFEXITED() and WEXITSTATUS(). Note that these macros expect the status value itself, not a pointer to the status value.

HISTORY
     A wait() function call appeared in Version 6 AT&T UNIX.

NetBSD 5.0.1 May 24, 2004 NetBSD 5.0.1
http://man.netbsd.org/NetBSD-5.0.1/wait.2
A modern Python Framework for microboard automation and control applications development Project description Rackio Framework A modern Python Framework for microboard automation and control applications development. Github-Rackio Framework Documentation The complete Rackio documentation can be found in Read the Docs Rackio Framework Documentation Requirements - Python 3.6+ - falcon - pyBigParser Installation pip install Rackio Examples Basic Setup from rackio import Rackio, TagEngine app = Rackio() tag_engine = TagEngine() # Tags definitions tag_engine.set_tag("RAND1", "float") tag_engine.set_tag("RAND2", "float") tag_engine.set_tag("T1", "float") tag_engine.set_tag("T2", "float") tag_engine.set_tag("T3", "float") if __name__ == "__main__": app.run() Rackio comes with some built-in features that let you start creating prototypes quickly. Adding controls Controls are objects that interact with the tags, changing their values according to a condition. Value Actions These actions only change tag values with a defined constant value. from rackio.controls import Condition, ValueAction, Control # Conditions definitions cond1 = Condition("T1",">=", "T2") cond2 = Condition("T1","<", "T2") # Actions definitions act1 = ValueAction("T3", 40) act2 = ValueAction("T3", 80) # Controls Definitions control1 = Control("C1", cond1, act1) control2 = Control("C2", cond2, act2) app.append_control(control1) app.append_control(control2) Math Actions These actions change tag values with a defined mathematical expression, and defined tags can be used inside these expressions.
from rackio.controls import MathAction # Conditions definitions cond1 = Condition("T1",">=", "T2") cond2 = Condition("T1","<", "T2") # Actions definitions act1 = MathAction("T3", "T1 + T2") act2 = MathAction("T3", "T2 - T1") # Controls Definitions control1 = Control("C1", cond1, act1) control2 = Control("C2", cond2, act2) app.append_control(control1) app.append_control(control2) Once Rackio is up and running, it will trigger some actions if the associated conditions are met, by continuously observing all tag values for changes. Supported functions within expressions You can define your mathematical expression following the same arithmetic rules that python can handle, but only a set of math functions and constants are supported. cos sin abs log10 log exp tan pi e Adding continuous tasks Rackio can be extended to add custom continuous tasks and operations @app.rackit(1) def writer1(): tag_engine.write_tag("T1", 15) tag_engine.write_tag("T2", 40) direction = 1 while True: time.sleep(0.5) value = 24 + 2 * random() tag_engine.write_tag("RAND1", value) T1 = tag_engine.read_tag("T1") T1 += direction tag_engine.write_tag("T1", T1) if T1 >= 60: direction *= -1 if T1 <= 5: direction *= -1 You can register a defined function as a continuous task to be performed by Rackio. You can also provide functions as task lists @app.rackit_on(period=1) def reader(): rand1 = tag_engine.read_tag("RAND1") rand2 = tag_engine.read_tag("RAND2") T1 = tag_engine.read_tag("T1") T2 = tag_engine.read_tag("T2") T3 = tag_engine.read_tag("T3") print("") print("RAND1: {}".format(rand1)) print("RAND2: {}".format(rand2)) print("T1 : {}".format(T1)) print("T2 : {}".format(T2)) print("T3 : {}".format(T3)) By specifying its period, you can keep control of the execution timing of these tasks. Testing the RESTful API Once your application is up and running, it will deploy a RESTful API with falcon, and the json format is the standard supported by this API.
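Because the API exchanges plain JSON, any HTTP client can consume it. As an illustration (not part of Rackio itself), here is how a browser-side JavaScript view could turn the /api/tags response shown in the next section, a list of {tag, value} objects, into a simple lookup map:

```javascript
// Sample body in the shape returned by GET /api/tags.
var body = JSON.stringify([
  { tag: 'T1', value: 57 },
  { tag: 'T2', value: 40 }
]);

// Convert the list of {tag, value} objects into a tag -> value map.
function tagsToMap(jsonText) {
  var map = {};
  JSON.parse(jsonText).forEach(function (entry) {
    map[entry.tag] = entry.value;
  });
  return map;
}

var tags = tagsToMap(body);
console.log(tags.T1); // 57
console.log(tags.T2); // 40
```

In a real page, body would come from an AJAX call (e.g. fetch('/api/tags')) instead of a literal.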
Reading tags with httpie Once your application is up and running you can access it through the API, if you want to try with httpie, you can install it with the following command: pip install httpie Now execute the next command in your terminal http localhost:8000/api/tags you will get the following HTTP/1.0 200 OK Date: Tue, 11 Jun 2019 23:54:55 GMT Server: WSGIServer/0.2 CPython/3.7.1 content-length: 177 content-type: application/json [ { "tag": "RAND1", "value": 25.597755601381692 }, { "tag": "RAND2", "value": 49.12890172456638 }, { "tag": "T1", "value": 57 }, { "tag": "T2", "value": 40 }, { "tag": "T3", "value": 97 } ] if you want to access a specific tag, for example tag T2 http localhost:8000/api/tags/T2 you will get the following HTTP/1.0 200 OK Date: Tue, 11 Jun 2019 23:58:40 GMT Server: WSGIServer/0.2 CPython/3.7.1 content-length: 26 content-type: application/json { "tag": "T2", "value": 40 } Writing tags with httpie You can change this tag value by executing http POST localhost:8000/api/tags/T2 value=50 And you will get the following HTTP/1.0 200 OK Date: Wed, 12 Jun 2019 00:01:21 GMT Server: WSGIServer/0.2 CPython/3.7.1 content-length: 16 content-type: application/json { "result": true } Reading tags history You can read tags history using the API also http localhost:8000/api/tags/history/RAND1 And you will get the following HTTP/1.0 200 OK Date: Tue, 18 Jun 2019 02:52:43 GMT Server: WSGIServer/0.2 CPython/3.7.1 content-length: 4917 content-type: application/json { "tag": "RAND1", "value": [ 0.0, 24.628376069489793, 25.757258388362462, 25.55412553374292, 24.555658954786043, 25.06933481716872, 25.40130983961439, 25.689521224514724, 25.81125032707667, 25.639558206736673, 25.349485473327377, 24.799801913324295, 25.227466610598572, 25.27254049615728, 25.105421823573916, 24.82832764778826, 24.65831512999663, 25.26014559203846, 25.216187451359872, 25.151243977491735 ] } This way you can create your custom HTML and JavaScript Views to perform AJAX requests on
Rackio. Things to do Rackio is a work-in-progress framework; some features are still in development and will be released soon for better applications. These features are listed below: - Finish RESTful API - Capability for users to add custom HTML files for HMI - Token Based Authentication for API access - Web Based Monitoring and Admin - Alarms definitions - Modbus and MQTT protocols - Automatic Datalogging - Trends and Historical data
https://pypi.org/project/Rackio/0.9.0/
§Comet §Using chunked responses with Comet A common use of chunked responses is to create a Comet socket. A Comet socket is a chunked text/html response containing only <script> elements. Because Ok.chunked leverages Akka Streams to take a Flow[ByteString], we can send a Flow of elements and transform it so that each element is escaped and wrapped in the Javascript method. The Comet helper automates Comet sockets, pushing an initial blank buffer data for browser compatibility, and supporting both String and JSON messages. §Comet Imports To use the Comet helper, import the following classes: import akka.stream.scaladsl.Source import play.api.libs.Comet A source of String or JSON elements can then be served as a Comet response, for example: def comet() = Action { val source = Source(List("kiki", "foo", "bar")) Ok.chunked(source via Comet.string("parent.cometMessage")).as(ContentTypes.HTML) } §Using Comet with iframe The comet helper should typically be used with a forever-iframe technique, with an HTML page like: <script type="text/javascript"> var cometMessage = function(event) { console.log('Received event: ' + event) } </script> <iframe src="/comet"></iframe> Note: add the following config to your application.conf and also ensure that you have route mappings set up in order to see the above comet in action. play.filters.headers { frameOptions = "SAMEORIGIN" contentSecurityPolicy = "connect-src 'self'" } For an example of a Comet helper, see the Play Streaming Example. §Debugging Comet The easiest way to debug a Comet stream that is not working is to use the log() operation to show any errors involved in mapping data through the stream. Next: WebSockets
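Each Comet chunk is essentially a small <script> element that calls the named JavaScript callback (here parent.cometMessage) in the page hosting the iframe. A rough JavaScript sketch of that wrapping — illustrative only, not Play's exact escaping logic — looks like this:

```javascript
// Wrap one message in a <script> chunk that invokes the callback
// when the browser parses it inside the forever-iframe.
function toCometChunk(callbackName, message) {
  // Escape backslashes and quotes so the message is a valid JS string literal.
  var escaped = String(message)
    .replace(/\\/g, '\\\\')
    .replace(/'/g, "\\'");
  return '<script>' + callbackName + "('" + escaped + "');</script>";
}

var chunk = toCometChunk('parent.cometMessage', 'kiki');
console.log(chunk); // <script>parent.cometMessage('kiki');</script>
```

Streaming a sequence of such chunks in one chunked text/html response is exactly what the Comet helper automates on the server side.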
https://www.playframework.com/documentation/2.8.2/ScalaComet
Exploring Kotlin’s hidden costs — Part 2

Local functions, null safety and varargs

This is part 2 of an ongoing series about the Kotlin programming language. Don’t forget to read part 1 if you haven’t already. Let’s take a new look behind the curtain and discover the implementation details of more Kotlin features.

Local functions

A local function is a function declared inside another function, with access to the local variables of the outer function:

fun someMath(a: Int): Int {
    fun sumSquare(b: Int) = (a + b) * (a + b)
    return sumSquare(1) + sumSquare(2)
}

Let’s begin by mentioning their biggest limitation: local functions cannot be declared inline (yet?) and a function containing a local function cannot be declared inline either. There is no magical way to avoid the cost of function calls in this case. After compilation, these local functions are converted to Function objects, just like lambdas, and with most of the same limitations described in the previous article regarding non-inline functions. The Java representation of the compiled code looks like this:

public static final int someMath(final int a) {
    Function1 sumSquare$ = new Function1(1) {
        // $FF: synthetic method
        // $FF: bridge method
        public Object invoke(Object var1) {
            return Integer.valueOf(this.invoke(((Number)var1).intValue()));
        }

        public final int invoke(int b) {
            return (a + b) * (a + b);
        }
    };
    return sumSquare$.invoke(1) + sumSquare$.invoke(2);
}

There is however one less performance hit compared to lambdas: because the actual instance of the function is known from the caller, its specific method will be called directly instead of its generic synthetic method from the Function interface. This means that no casting or boxing of primitive types will occur when calling a local function from the outer function. We can verify this by looking at the bytecode:

ALOAD 1
ICONST_1
INVOKEVIRTUAL be/myapplication/MyClassKt$someMath$1.invoke (I)I
ALOAD 1
ICONST_2
INVOKEVIRTUAL be/myapplication/MyClassKt$someMath$1.invoke (I)I
IADD
IRETURN

We can see that the method being invoked twice is the one accepting an int and returning an int, and that the addition is performed immediately without any intermediate unboxing operation.
Of course there is still the cost of creating a new Function object during each call of the outer function. This can be avoided by rewriting the local function to be non-capturing:

fun someMath(a: Int): Int {
    fun sumSquare(a: Int, b: Int) = (a + b) * (a + b)
    return sumSquare(a, 1) + sumSquare(a, 2)
}

Now the same Function instance will be reused, and still no casting or boxing will occur. The only penalty of this local function compared to a classic private function will be the generation of an extra class with a few methods.

Local functions are an alternative to private functions, with the added benefit of being able to access local variables of the outer function. That benefit comes with the hidden cost of the creation of a Function object for each call of the outer function, so non-capturing local functions are preferred.

Null safety

One of the best features of the Kotlin language is that it makes a clear distinction between nullable and non-null types. This enables the compiler to effectively prevent unexpected NullPointerExceptions at runtime by forbidding any code from assigning a null or nullable value to a non-null variable.

Non-null argument runtime checks

Let’s declare a public function taking a non-null String as argument:

fun sayHello(who: String) {
    println("Hello $who")
}

And now take a look at the Java representation of the compiled code:

public static final void sayHello(@NotNull String who) {
    Intrinsics.checkParameterIsNotNull(who, "who");
    String var1 = "Hello " + who;
    System.out.println(var1);
}

Notice that the Kotlin compiler is a good Java citizen and adds the @NotNull annotation to the argument, so Java tools can use this hint to show a warning when a null value is passed. But an annotation is not enough to enforce null safety for external callers. That’s why the compiler also adds, at the very beginning of the function, a static method call that will check the argument and throw an IllegalArgumentException if it’s null.
The function will fail early and consistently rather than failing randomly later with a NullPointerException, in order to make the unsafe caller code easier to fix. In practice, every public function gets one static call to Intrinsics.checkParameterIsNotNull() for each of its non-null reference arguments. These checks are not added to private functions because the compiler guarantees that the code inside a Kotlin class is null safe.

The performance impact of these static calls is negligible, and they are really useful when debugging and testing an app. That being said, you may see them as an unnecessary extra cost for release builds. In that case, it’s possible to disable runtime null checks by using the -Xno-param-assertions compiler option or by adding the following ProGuard rule:

-assumenosideeffects class kotlin.jvm.internal.Intrinsics {
    static void checkParameterIsNotNull(java.lang.Object, java.lang.String);
}

Note that this ProGuard rule will only take effect with optimizations enabled. Optimizations are disabled in the default Android ProGuard configuration.

Nullable primitive types

This seems obvious but needs to be repeated: a nullable type is always a reference type. Declaring a variable of a primitive type as nullable prevents Kotlin from using the Java primitive value types like int or float; the boxed reference types like Integer or Float will be used instead, involving the extra cost of boxing and unboxing operations. Contrary to Java, which allows you to be sloppy and use an Integer variable almost exactly like an int variable thanks to autoboxing and its disregard of null safety, Kotlin forces you to write safe code when using nullable types, so the benefits of using non-null types become clearer:

fun add(a: Int, b: Int): Int {
    return a + b
}

fun add(a: Int?, b: Int?): Int {
    return (a ?: 0) + (b ?: 0)
}

Use non-null primitive types whenever possible for more readable code and better performance.
About arrays

There are 3 types of arrays in Kotlin:
- IntArray, FloatArray and others: arrays of primitive values. These compile to int[], float[] and so on.
- Array<T>: a typed array of non-null object references. This involves boxing for primitive types.
- Array<T?>: a typed array of nullable object references. This also involves boxing for primitive types, obviously.

If you need an array of a non-null primitive type, prefer IntArray over Array<Int>, for example, to avoid boxing.

Varargs

Kotlin allows you to declare functions with a variable number of arguments, like Java. The declaration syntax is a bit different:

fun printDouble(vararg values: Int) {
    values.forEach { println(it * 2) }
}

Just like in Java, the vararg argument actually gets compiled to an array argument of the given type. You can then call these functions in three different ways:

1. Passing multiple arguments

printDouble(1, 2, 3)

The Kotlin compiler will transform this code into the creation and initialization of a new array, exactly like the Java compiler does:

printDouble(new int[]{1, 2, 3});

So there is the overhead of the creation of a new array, but this is nothing new compared to Java.

2. Passing a single array

This is where things differ. In Java, you can directly pass an existing array reference as a vararg argument. In Kotlin, you need to use the spread operator:

val values = intArrayOf(1, 2, 3)
printDouble(*values)

In Java, the array reference is passed “as-is” to the function, with no extra array allocation. However, the Kotlin spread operator compiles differently, as you can see in this Java representation:

int[] values = new int[]{1, 2, 3};
printDouble(Arrays.copyOf(values, values.length));

The existing array always gets copied when calling the function. The benefit is safer code: it allows the function to modify the array without impacting the caller code. But it allocates extra memory. Note that calling a Java method with a variable number of arguments from Kotlin code has the same effect.

3.
Passing a mix of arrays and arguments

The main benefit of the spread operator is that it also allows mixing arrays with other arguments in the same call.

val values = intArrayOf(1, 2, 3)
printDouble(0, *values, 42)

How does this get compiled? The resulting code is quite interesting:

int[] values = new int[]{1, 2, 3};
IntSpreadBuilder var10000 = new IntSpreadBuilder(3);
var10000.add(0);
var10000.addSpread(values);
var10000.add(42);
printDouble(var10000.toArray());

In addition to the creation of a new array, a temporary builder object is used to compute the final array size and populate it. This adds another small cost to the method call.

Calling a function with a variable number of arguments in Kotlin adds the cost of creating a new temporary array, even when using values from an existing array. For performance-critical code where the function is called repeatedly, consider adding a method with an actual array argument instead of vararg.
https://bladecoder.medium.com/exploring-kotlins-hidden-costs-part-2-324a4a50b70?responsesOpen=true&source=---------9----------------------------
Java Tutorial – A Complete Comprehensive Guide for Java Beginners

Do you want to learn the Java programming language and become an expert in Java? And are you still looking for the best Java tutorial? This TechVidvan Java tutorial is designed for beginners as well as Java professionals who want to learn the Java programming language from scratch, and it covers all the essentials of the language. Let’s have a quick look at what we will learn in this Java tutorial.

1. Introduction to Java programming language
2. Features of Java
3. History of Java
4. Java Architecture
5. Advantages of Java
6. Disadvantages of Java
7. Applications of Java
8. C++ Vs Java
9. Java Support systems
10. Java and the Internet
11. Java and WWW
12. Editions of the Java platform
13. Top Companies Using Java

So let us start the tutorial.

Introduction to Java

Java is a general-purpose, object-oriented programming language originally developed for distributed environments and for software in consumer electronic devices such as TVs, VCRs, and toasters. Java is a platform-independent language, which means it is not tied to any particular hardware or operating system. It gives users the ability to ‘write once, run anywhere’. Many operating systems, such as Sun Solaris, RedHat Linux, and Windows, support Java. Java is a concurrent, class-based, object-oriented language. It is freely available, and we can run it on all major platforms and operating systems. Java is simple and easy to learn. If we want to print “HelloWorld!”, we would type:

Java Hello World Example

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("HelloWorld!");
    }
}

Features of Java

Java consistently ranks among the top two languages in the TIOBE index because of its robustness and security features.
Let us discuss the features of Java that make it so popular among programming languages:

1. Simple
Java is a simplified version of the C++ language, so it feels familiar. It eliminates redundant and unreliable constructs: there is no support for pointers, preprocessor header files, operator overloading, or multiple inheritance in Java. This makes Java easier to use than C++.

2. Object-oriented
Java is an object-oriented language and mainly focuses on objects rather than processes. Java follows the Object-Oriented Programming (OOP) concepts:
- Objects
- Classes
- Inheritance
- Encapsulation / Data hiding
- Abstraction
- Polymorphism
Note: Java is not a pure object-oriented language, as it allows the use of primitive data types.

3. Platform-independent
Java is a platform-independent language, as compiled Java code can run on multiple operating systems. A Java program can run on any machine without any special software installed, although the JVM must be present on the machine. Java code compiles into bytecode (a .class file), which is platform-independent. We can run this bytecode on Windows, Linux, Mac OS, etc.

4. Portable
Java is portable because Java bytecode is executable on all major platforms. Once we compile Java source code to bytecode, we can use it on any Java-supported platform without modification, unlike languages that require recompiling the code for each platform.

5. Robust
The following features make Java robust and powerful:
- There is no use of explicit pointers in Java.
- Java provides strong memory management.
- It supports automatic garbage collection, so there is no need to delete unreferenced objects manually.
- Java also provides exception-handling and type-checking mechanisms.

6.
Secure
Java is a secure language for the following reasons:
- Java does not support explicit pointers, which makes it more robust and secure.
- All Java programs run inside a virtual machine sandbox.
- The Java Runtime Environment (JRE) has a classloader that dynamically loads classes into the Java Virtual Machine.
- The bytecode verifier inspects code for illegal instructions that could bypass access rights.
- The security manager decides what resources a class may access, such as reading and writing files.
- Together, these mechanisms help us develop tamper-resistant, virus-free systems.

7. Multithreaded and Interactive
Java is a multithreaded language, which means it can handle different tasks simultaneously. In a multithreaded program, there is no need to wait for one task to finish before another starts. This feature significantly improves the interactive performance of graphical applications.

History of Java
Java grew out of Sun Microsystems’ Green Project, where it was first called Oak. It was renamed Java (after the Indonesian island famous for its coffee) and first appeared in 1995.
- The first version, Java 1.0, came in 1996, when Sun Microsystems promised the ‘Write Once, Run Anywhere’ (WORA) principle.
- Java 2 (J2SE 1.2) was introduced in December 1998. J2EE (Java 2 Enterprise Edition) targeted enterprise applications.
- In 2006, Sun renamed the J2 editions as Java EE, Java ME, and Java SE.
- September 2018 marked the release of Java SE 11 (LTS).
- March 2019 marked the release of Java SE 12.
- On September 10th, 2019, Java SE 13 was released.
- Java SE 14 came in March 2020, the latest version at the time of writing.

Java Architecture – Java Environment
Now, we will learn the architecture of Java and its main components: JVM, JRE, and JDK. The following diagram shows the architecture of Java:

1.
JVM (Java Virtual Machine)
The Java Virtual Machine (JVM) provides a runtime environment in which bytecode executes. The JVM itself is platform-dependent, even though the bytecode it runs is not. The JVM performs the following tasks:
- Verifying the code
- Executing the code
- Providing a runtime environment

2. JRE (Java Runtime Environment)
The JRE provides the runtime environment needed to run Java programs. The JVM is a part of the JRE, and the JRE is also platform-dependent. It comprises the JVM, runtime class libraries, user-interface toolkits, deployment technologies, the Java plugin, etc.

3. JDK (Java Development Kit)
The Java Development Kit provides the environment to develop and execute Java programs. The JDK includes development tools as well as the JRE that runs your Java code. It also contains other resources like the interpreter/loader, the compiler (javac), an archiver (jar), and a documentation generator (Javadoc). Together, these components help you build Java programs. The Java Development Kit includes:
- appletviewer (for viewing Java applets)
- javac (Java compiler)
- java (Java interpreter)
- javap (Java disassembler)
- javah (for C header files)
- javadoc (for creating HTML documentation)
- jdb (Java debugger)

Proceeding ahead in this Java tutorial, let us see the advantages and limitations of Java.

Advantages of Java
- It is a platform-independent language: we can run Java code on any machine with a JVM, without any other special software.
- It is an object-oriented language built on classes and objects. Object-oriented programming eases code development and increases efficiency.
- It is a secure language, partly because it does not use explicit pointers.
- It supports multithreading, so we can execute many tasks simultaneously.
- Java is a robust language, with features like automatic garbage collection, no explicit pointers, and exception handling.
- Java is a high-level programming language, which makes it easy to learn and understand.
- It provides efficient memory management.

Disadvantages of Java
- Java is a high-level language, so it must deal with the compilation and abstraction layers of a virtual machine.
- Java can exhibit poor performance because of garbage collection, incorrect caching configuration, and deadlocks between processes.
- Java has relatively few GUI (Graphical User Interface) toolkits, such as Swing, SWT, JSF, and JavaFX.
- We can end up writing long, complicated code to carry out a simple set of activities, which affects the readability of the code.

Difference Between C++ and Java
The main difference between C++ and Java is that Java was designed as an object-oriented language from the start, while C++ adds object-oriented features to C. Let’s see what makes Java different from C++:
- No support for operator overloading in Java, unlike C++.
- Java does not provide template classes, as C++ does.
- Java does not support explicit pointers, but C++ does.
- There is no support for global variables in Java, unlike C++.
- Java uses a finalize() method; C++ uses destructors.
- There are no header files in Java, unlike C++.
- Java does not support “goto” statements, unlike C++.
- C++ supports multiple inheritance through classes, but Java supports it only through interfaces.
- Java does not support “call-by-reference”; it only supports “call-by-value.”
- There is no support for structures and unions in Java, as there is in C++.
- Java does not support the “virtual” keyword.

Let’s compare a hello world program in C++ and Java.
Example of C++ Programming Language:

#include <iostream>
using namespace std;

int main() {
    cout << "HelloWorld!";
    return 0;
}

Example of Java Programming Language:

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

Java Language and the Internet
Now, we will discuss how the Internet and Java are related to each other. The diagram below shows the relationship between Java and the Internet.

Java is often called the language of the Internet, as HotJava was the first application program written in Java. HotJava is a web browser that runs applets on the Internet. Internet users can create applets using Java and run them locally using HotJava. Java applets make the Internet a true extension of the storage system of the local computer.

Java Programming and the World Wide Web
Do you know how Java and the World Wide Web (WWW) are related? The World Wide Web is an information system in which every piece of information or file has a Uniform Resource Locator (URL), and hypertext links interlink documents. We access the WWW with the help of the Internet. The Internet and Java shared the same philosophy, and therefore they were easily incorporated with each other. Java made this possible by providing features like graphics, animation, games, and a wide range of special effects on the WWW.

Java uses applets to communicate with a web page. The steps involved are:
1. The user sends a request for a hyperlinked document to the web server of a remote computer.
2. The hyperlinked document contains the applet tag that identifies the applet.
3. The applet's Java source code is compiled to bytecode, which is transferred to the user's computer.
4. A Java-enabled browser interprets the bytecode and provides the output to the user.
Java Support Systems
The operation of Java and Java-enabled browsers on the Internet requires a variety of support systems:
- Internet connection
- Web server
- Web browser
- HTML (HyperText Markup Language), a language for creating hypertext for the web
- The APPLET tag
- Java code
- Bytecode
- A proxy server that acts as an intermediary between the client workstation and the origin server
- Mail server

Applications of Java Programming
Java is a widely used language, and there are many uses of Java. The following are some of the application areas that use Java:
1. Desktop applications
2. Web applications
3. Mobile applications (Android)
4. Cloud computing
5. Enterprise applications
6. Cryptography
7. Smart cards
8. Computer games
9. Web servers and application servers
10. Scientific applications
11. Operating systems
12. Embedded systems
13. Real-time software

Java Platform Editions
Let us now discuss the editions of the Java platform:
1. Java ME (Micro Edition – J2ME)
Java Micro Edition is useful for developing software for small devices like mobile phones. The Java Micro Edition (ME) API is a subset of the Java Standard Edition (SE) API.
2. Java SE (Standard Edition – J2SE)
The Standard Edition of Java (SE) holds the core functionality of Java. It includes everything from basic types and objects to high-level classes for GUIs, database access, networking, and security.
3. Java EE (Enterprise Edition – J2EE)
Java Enterprise Edition builds on top of Java SE. It provides an API and runtime environment for developing and running large-scale, secure network applications.
4. Java Card
The Java Card edition lets us build smart cards using Java.

Top Companies Using Java
When we say that Java is an immensely popular language, we are not merely talking about learners or developers. Many big companies use Java to build or improve their products and services.

Conclusion
Java is the king of all the programming languages.
Java has held the first position in the TIOBE index for the last two years. Java is useful for developing applications, but we can also use it in big data, networking, data science, and more. Its outstanding features keep it evergreen and make it a popular language to learn and build a career in. In this Java tutorial, we briefly discussed the Java programming language. We discussed its features, advantages, and disadvantages, and compared Java with C++. We also learned about its applications and the top companies that are using Java. We hope you found this tutorial useful. Do not forget to share your TechVidvan experience with us.
https://techvidvan.com/tutorials/java-introduction/
Machine Learning for Cybersecurity In this chapter, we will cover the fundamental techniques of machine learning. We will use these throughout the book to solve interesting cybersecurity problems. We will cover both foundational algorithms, such as clustering and gradient boosting trees, and solutions to common data challenges, such as imbalanced data and false-positive constraints. A machine learning practitioner in cybersecurity is in a unique and exciting position to leverage enormous amounts of data and create solutions in a constantly evolving landscape. This chapter covers the following recipes: - Train-test-splitting your data - Standardizing your data - Summarizing large data using principal component analysis (PCA) - Generating text using Markov chains - Performing clustering using scikit-learn - Training an XGBoost classifier - Analyzing time series using statsmodels - Anomaly detection using Isolation Forest - Natural language processing (NLP) using hashing vectorizer and tf-idf with scikit-learn - Hyperparameter tuning with scikit-optimize Technical requirements In this chapter, we will be using the following: - scikit-learn - Markovify - XGBoost - statsmodels The installation instructions and code can be found at. Train-test-splitting your data In machine learning, our goal is to create a program that is able to perform tasks it has never been explicitly taught to perform. The way we do that is to use data we have collected to train or fit a mathematical or statistical model. The data used to fit the model is referred to as training data. The resulting trained model is then used to predict future, previously-unseen data. In this way, the program is able to manage new situations without human intervention. One of the major challenges for a machine learning practitioner is the danger of overfitting – creating a model that performs well on the training data but is not able to generalize to new, previously-unseen data. 
In order to combat the problem of overfitting, machine learning practitioners set aside a portion of the data, called test data, and use it only to assess the performance of the trained model, as opposed to including it as part of the training dataset. This careful setting aside of testing sets is key to training classifiers in cybersecurity, where overfitting is an omnipresent danger. One small oversight, such as using only benign data from one locale, can lead to a poor classifier. There are various other ways to validate model performance, such as cross-validation. For simplicity, we will focus mainly on train-test splitting.

Getting ready

Preparation for this recipe consists of installing the scikit-learn and pandas packages in pip. The command for this is as follows:

pip install scikit-learn pandas

In addition, we have included the north_korea_missile_test_database.csv dataset for use in this recipe.

How to do it...

The following steps demonstrate how to take a dataset, consisting of features X and labels y, and split these into training and testing subsets:

- Start by importing the train_test_split module and the pandas library, and read your features into X and labels into y:

from sklearn.model_selection import train_test_split
import pandas as pd

df = pd.read_csv("north_korea_missile_test_database.csv")
y = df["Missile Name"]
X = df.drop("Missile Name", axis=1)

- Next, randomly split the dataset and its labels into a training set consisting of 80% of the original dataset and a testing set consisting of the remaining 20%:

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=31
)

- We apply the train_test_split method once more to obtain a validation set, X_val and y_val:

X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=31
)

- We end up with a training set that's 60% of the size of the original data, a validation set of 20%, and a testing set of 20%.
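The arithmetic of this two-stage split (80/20, then 75/25 of the training portion) can be checked with a small standard-library sketch. The `split` helper below is a hypothetical stand-in for scikit-learn's train_test_split (shuffle a copy, then slice), not the library's actual implementation:

```python
import random

def split(rows, test_size, seed):
    """Hypothetical stand-in for train_test_split: shuffle a copy, then slice."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

rows = list(range(100))                    # pretend dataset of 100 samples
train, test = split(rows, 0.2, seed=31)    # 80 / 20
train, val = split(train, 0.25, seed=31)   # 0.25 of 80% = 20% of the original
print(len(train), len(val), len(test))     # -> 60 20 20
```

The second split's test_size of 0.25 looks odd in isolation; the sketch makes it clear that it is chosen precisely so that 25% of the remaining 80% equals 20% of the original data.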
The following screenshot shows the output:

How it works...

We start by reading in our dataset, consisting of historical and continuing missile experiments in North Korea. We aim to predict the type of missile based on the remaining features, such as facility and time of launch. This concludes step 1. In step 2, we apply scikit-learn's train_test_split method to subdivide X and y into a training set, X_train and y_train, and a testing set, X_test and y_test. The test_size = 0.2 parameter means that the testing set consists of 20% of the original data, while the remainder is placed in the training set. The random_state parameter allows us to reproduce the same randomly generated split.

Next, concerning step 3, it is important to note that, in applications, we often want to compare several different models. The danger of using the testing set to select the best model is that we may end up overfitting the testing set. This is similar to the statistical sin of data fishing. In order to combat this danger, we create an additional dataset, called the validation set. We train our models on the training set, use the validation set to compare them, and finally use the testing set to obtain an accurate indicator of the performance of the model we have chosen. So, in step 3, we choose our parameters so that, mathematically speaking, the end result consists of a training set of 60% of the original dataset, a validation set of 20%, and a testing set of 20%. Finally, we double-check our assumptions by employing the len function to compute the length of the arrays (step 4).

Standardizing your data

For many machine learning algorithms, performance is highly sensitive to the relative scale of features. For that reason, it is often important to standardize your features. To standardize a feature means to shift all of its values so that their mean = 0 and to scale them so that their variance = 1. One instance in which standardizing is useful is when working with features derived from the PE header of a file.
The PE header contains extremely large values (for example, the SizeOfInitializedData field) and also very small ones (for example, the number of sections). For certain ML models, such as neural networks, the large discrepancy in magnitude between features can reduce performance.

Getting ready

Preparation for this recipe consists of installing the scikit-learn and pandas packages in pip. Perform the following steps:

pip install scikit-learn pandas

In addition, you will find a dataset named file_pe_headers.csv in the repository for this recipe.

How to do it...

In the following steps, we utilize scikit-learn's StandardScaler method to standardize our data:

- Start by importing the required libraries and gathering a dataset, X:

import pandas as pd

data = pd.read_csv("file_pe_headers.csv", sep=",")
X = data.drop(["Name", "Malware"], axis=1).to_numpy()

Dataset X looks as follows:

- Next, standardize X using a StandardScaler instance:

from sklearn.preprocessing import StandardScaler

X_standardized = StandardScaler().fit_transform(X)

The standardized dataset looks like the following:

How it works...

We begin by reading in our dataset (step 1), which consists of the PE header information for a collection of PE files. The feature values vary greatly, with some columns reaching into the hundreds of thousands and others staying in the single digits. Consequently, certain models, such as neural networks, will perform poorly on such unstandardized data. In step 2, we instantiate StandardScaler() and then apply it to rescale X using .fit_transform(X). As a result, we obtain a rescaled dataset whose columns (corresponding to features) have a mean of 0 and a variance of 1.

Summarizing large data using principal component analysis

Suppose that you would like to build a predictor for an individual's expected net fiscal worth at age 45.
There are a huge number of variables to be considered: IQ, current fiscal worth, marriage status, height, geographical location, health, education, career state, age, and many others you might come up with, such as number of LinkedIn connections or SAT scores. The trouble with having so many features is several-fold. First, the sheer amount of data incurs high storage costs and computational time for your algorithm. Second, with a large feature space, it is critical to have a large amount of data for the model to be accurate; that is to say, it becomes harder to distinguish the signal from the noise. For these reasons, when dealing with high-dimensional data such as this, we often employ dimensionality-reduction techniques, such as PCA. More information on the topic can be found at.

PCA allows us to take our features and return a smaller number of new features, formed from our original ones, with maximal explanatory power. In addition, since the new features are linear combinations of the old features, this allows us to anonymize our data, which is very handy when working with financial information, for example.

Getting ready

The preparation for this recipe consists of installing the scikit-learn and pandas packages in pip. The command for this is as follows:

pip install scikit-learn pandas

In addition, we will be utilizing the same dataset, file_pe_headers.csv, as in the previous recipe.

How to do it...
In this section, we'll walk through a recipe showing how to use PCA on data: - Start by importing the necessary libraries and reading in the dataset: from sklearn.decomposition import PCA import pandas as pd data = pd.read_csv("file_pe_headers.csv", sep=",") X = data.drop(["Name", "Malware"], axis=1).to_numpy() - Standardize the dataset, as is necessary before applying PCA: from sklearn.preprocessing import StandardScaler X_standardized = StandardScaler().fit_transform(X) - Instantiate a PCA instance and use it to reduce the dimensionality of our data: pca = PCA() pca.fit_transform(X_standardized) - Assess the effectiveness of your dimensionality reduction: print(pca.explained_variance_ratio_) The following screenshot shows the output: How it works... We begin by reading in our dataset and then standardizing it, as in the recipe on standardizing data (steps 1 and 2). (It is necessary to work with standardized data before applying PCA). We now instantiate a new PCA transformer instance, and use it to both learn the transformation (fit) and also apply the transform to the dataset, using fit_transform (step 3). In step 4, we analyze our transformation. In particular, note that the elements of pca.explained_variance_ratio_ indicate how much of the variance is accounted for in each direction. The sum is 1, indicating that all the variance is accounted for if we consider the full space in which the data lives. However, just by taking the first few directions, we can account for a large portion of the variance, while limiting our dimensionality. In our example, the first 40 directions account for 90% of the variance: sum(pca.explained_variance_ratio_[0:40]) This produces the following output: 0.9068522354673663 This means that we can reduce our number of features to 40 (from 78) while preserving 90% of the variance. 
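To see what scikit-learn is doing for us here, the variance-ratio computation can be sketched with plain NumPy: explained_variance_ratio_ comes from the squared singular values of the standardized data matrix. The synthetic data below is a stand-in for illustration only, not the recipe's PE dataset — its five columns are built from just two independent sources, so most of the variance should concentrate in two principal directions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 5 columns generated from only 2 sources,
# so the variance lives almost entirely in 2 principal directions.
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3))])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize, as in the recipe

# PCA via SVD: squared singular values are proportional to explained variance
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained_variance_ratio = S**2 / np.sum(S**2)

# smallest number of components whose cumulative ratio reaches 90%
k = int(np.searchsorted(np.cumsum(explained_variance_ratio), 0.90)) + 1
X_reduced = X @ Vt[:k].T  # project onto the first k principal directions
print(k, X_reduced.shape)
```

The same selection can be delegated to scikit-learn by passing a float to the constructor, as in PCA(n_components=0.90), which keeps exactly as many components as are needed to preserve 90% of the variance.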
The implications of this are that many of the features of the PE header are closely correlated, which is understandable, as they are not designed to be independent. Generating text using Markov chains Markov chains are simple stochastic models in which a system can exist in a number of states. To know the probability distribution of where the system will be next, it suffices to know where it currently is. This is in contrast with a system in which the probability distribution of the subsequent state may depend on the past history of the system. This simplifying assumption allows Markov chains to be easily applied in many domains, often surprisingly fruitfully. In this recipe, we will utilize Markov chains to generate fake reviews, which is useful for pen-testing a review system's spam detector. In a later recipe, you will upgrade the technology from Markov chains to RNNs. Getting ready Preparation for this recipe consists of installing the markovify and pandas packages in pip. The command for this is as follows: pip install markovify pandas In addition, the directory in the repository for this chapter includes a CSV dataset, airport_reviews.csv, which should be placed alongside the code for the chapter. How to do it... Let's see how to generate text using Markov chains by performing the following steps: - Start by importing the markovify library and a dataset whose style we would like to imitate: import markovify import pandas as pd df = pd.read_csv("airport_reviews.csv") As an illustration, I have chosen a collection of airport reviews as my text: "The airport is certainly tiny! ..." - Next, join the individual reviews into one large text string and build a Markov chain model using the airport review text: N = 100 review_subset = df["content"][0:N] text = " ".join(review_subset) markov_chain_model = markovify.Text(text) Note that we join the reviews with a space so that the last word of one review does not merge into the first word of the next. Behind the scenes, the library computes the transition word probabilities from the text.
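The transition table that gets built behind the scenes can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not markovify's actual implementation: each state_size-word state maps to the list of words observed to follow it, and sampling from that list reproduces the observed transition probabilities.

```python
import random
from collections import defaultdict

def build_chain(text, state_size=2):
    """Map each state_size-word tuple to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - state_size):
        state = tuple(words[i:i + state_size])
        chain[state].append(words[i + state_size])
    return chain

def make_sentence(chain, length=10, seed=0):
    """Random-walk the chain; duplicate entries make common words likelier."""
    random.seed(seed)
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        choices = chain.get(tuple(out[-len(state):]))
        if not choices:
            break  # reached a state never seen mid-text
        out.append(random.choice(choices))
    return " ".join(out)

chain = build_chain("the airport is clean the airport is tiny the staff is friendly")
print(make_sentence(chain))
```

In this toy corpus, the state ("airport", "is") is followed once by "clean" and once by "tiny", so the walk picks between them with equal probability — exactly the behavior markovify scales up to a full review corpus.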
- Generate five sentences using the Markov chain model: for i in range(5): print(markov_chain_model.make_sentence()) - Since we are using airport reviews, we will have the following as the output after executing the previous code: On the positive side it's a clean airport transfer from A to C gates and outgoing gates is truly enormous - but why when we arrived at about 7.30 am for our connecting flight to Venice on TAROM. The only really bother: you may have to wait in a polite manner. Why not have bus after a short wait to check-in there were a lots of shops and less seating. Very inefficient and hostile airport. This is one of the time easy to access at low price from city center by train. The distance between the incoming gates and ending with dirty and always blocked by never ending roadworks. Surprisingly realistic! Although the reviews would have to be filtered down to the best ones. - Generate 3 sentences with a length of no more than 140 characters: for i in range(3): print(markov_chain_model.make_short_sentence(140)) With our running example, we will see the following output: However airport staff member told us that we were put on a connecting code share flight. Confusing in the check-in agent was friendly. I am definitely not keen on coming to the lack of staff . Lack of staff . Lack of staff at boarding pass at check-in. How it works... We begin the recipe by importing the Markovify library, a library for Markov chain computations, and reading in text, which will inform our Markov model (step 1). In step 2, we create a Markov chain model using the text. The following is a relevant snippet from the text object's initialization code: class Text(object): reject_pat = re.compile(r"(^')|('$)|\s'|'\s|[\"(\(\)\[\])]") def __init__(self, input_text, state_size=2, chain=None, parsed_sentences=None, retain_original=True, well_formed=True, reject_reg=''): """ input_text: A string. state_size: An integer, indicating the number of words in the model's state. 
chain: A trained markovify.Chain instance for this text, if pre-processed. parsed_sentences: A list of lists, where each outer list is a "run" of the process (e.g. a single sentence), and each inner list contains the steps (e.g. words) in the run. If you want to simulate an infinite process, you can come very close by passing just one, very long run. retain_original: Indicates whether to keep the original corpus. well_formed: Indicates whether sentences should be well-formed, preventing unmatched quotes, parenthesis by default, or a custom regular expression can be provided. reject_reg: If well_formed is True, this can be provided to override the standard rejection pattern. """ The most important parameter to understand is state_size = 2, which means that the Markov chains will be computing transitions between consecutive pairs of words. For more realistic sentences, this parameter can be increased, at the cost of making sentences appear less original. Next, we apply the Markov chains we have trained to generate a few example sentences (steps 3 and 4). We can see clearly that the Markov chains have captured the tone and style of the text. Finally, in step 5, we create a few tweets in the style of the airport reviews using our Markov chains. Performing clustering using scikit-learn Clustering is a collection of unsupervised machine learning algorithms in which parts of the data are grouped based on similarity. For example, clusters might consist of data that is close together in n-dimensional Euclidean space. Clustering is useful in cybersecurity for distinguishing between normal and anomalous network activity, and for helping to classify malware into families. Getting ready Preparation for this recipe consists of installing the scikit-learn, pandas, and plotly packages in pip. The command for this is as follows: pip install sklearn plotly pandas In addition, a dataset named file_pe_headers.csv is provided in the repository for this recipe. How to do it...
In the following steps, we will see a demonstration of how scikit-learn's K-means clustering algorithm performs on a toy PE malware classification: - Start by importing and plotting the dataset: import pandas as pd import plotly.express as px df = pd.read_csv("file_pe_headers.csv", sep=",") fig = px.scatter_3d( df, x="SuspiciousImportFunctions", y="SectionsLength", z="SuspiciousNameSection", color="Malware", ) fig.show() The following screenshot shows the output: - Extract the features and target labels: y = df["Malware"] X = df.drop(["Name", "Malware"], axis=1).to_numpy() - Next, import scikit-learn's clustering module and fit a K-means model with two clusters to the data: from sklearn.cluster import KMeans estimator = KMeans(n_clusters=len(set(y))) estimator.fit(X) - Predict the cluster using our trained algorithm: y_pred = estimator.predict(X) df["pred"] = y_pred df["pred"] = df["pred"].astype("category") - To see how the algorithm did, plot the algorithm's clusters: fig = px.scatter_3d( df, x="SuspiciousImportFunctions", y="SectionsLength", z="SuspiciousNameSection", color="pred", ) fig.show() The following screenshot shows the output: The results are not perfect, but we can see that the clustering algorithm captured much of the structure in the dataset. How it works... We start by importing our dataset of PE header information from a collection of samples (step 1). This dataset consists of two classes of PE files: malware and benign. We then use plotly to create a nice-looking interactive 3D graph (step 1). We proceed to prepare our dataset for machine learning. Specifically, in step 2, we set X as the features and y as the classes of the dataset. Based on the fact that there are two classes, we aim to cluster the data into two groups that will match the sample classification. We utilize the K-means algorithm (step 3), about which you can find more information in the scikit-learn documentation. With a trained clustering algorithm, we are ready to assign a cluster label to each sample.
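K-means cluster IDs are arbitrary — cluster 0 is not guaranteed to correspond to the benign class — so a label-invariant measure such as the adjusted Rand index is a convenient way to compare the predicted clusters against the true Malware labels. A minimal sketch with hypothetical labels:

```python
from sklearn.metrics import adjusted_rand_score

# hypothetical ground truth and a clustering that is perfect
# except that the arbitrary cluster IDs happen to be flipped
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [1, 1, 1, 0, 0, 0]

# ARI ignores the naming of clusters: a perfect grouping scores 1.0
print(adjusted_rand_score(y_true, y_pred))  # 1.0
```

In the recipe itself, adjusted_rand_score(y, df["pred"]) would summarize in one number how well the two clusters line up with the malware/benign split.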
We apply our clustering algorithm to predict to which cluster each of the samples should belong (step 4). Observing our results in step 5, we see that clustering has captured a lot of the underlying information, as it was able to fit the data well. Training an XGBoost classifier Gradient boosting is widely considered the most reliable and accurate algorithm for generic machine learning problems. We will utilize XGBoost to create malware detectors in future recipes. Getting ready The preparation for this recipe consists of installing the scikit-learn, pandas, and xgboost packages in pip. The command for this is as follows: pip install sklearn xgboost pandas In addition, a dataset named file_pe_headers.csv is provided in the repository for this recipe. How to do it... In the following steps, we will demonstrate how to instantiate, train, and test an XGBoost classifier: - Start by reading in the data: import pandas as pd df = pd.read_csv("file_pe_headers.csv", sep=",") y = df["Malware"] X = df.drop(["Name", "Malware"], axis=1).to_numpy() - Next, train-test-split the dataset: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) - Create an instance of an XGBoost model and train it on the training set: from xgboost import XGBClassifier XGB_model_instance = XGBClassifier() XGB_model_instance.fit(X_train, y_train) - Finally, assess its performance on the testing set: from sklearn.metrics import accuracy_score y_test_pred = XGB_model_instance.predict(X_test) accuracy = accuracy_score(y_test, y_test_pred) print("Accuracy: %.2f%%" % (accuracy * 100)) The following screenshot shows the output: How it works... We begin by reading in our data (step 1). We then create a train-test split (step 2). We proceed to instantiate an XGBoost classifier with default parameters and fit it to our training set (step 3). Finally, in step 4, we use our XGBoost classifier to predict on the testing set.
We then produce the measured accuracy of our XGBoost model's predictions. Analyzing time series using statsmodels A time series is a series of values obtained at successive times. For example, a stock's price sampled every minute forms a time series. In cybersecurity, time series analysis can be very handy for predicting a cyberattack, such as an insider employee exfiltrating data, or a group of hackers colluding in preparation for their next hit. Let's look at several techniques for making predictions using time series. Getting ready Preparation for this recipe consists of installing the matplotlib, statsmodels, and scipy packages in pip. The command for this is as follows: pip install matplotlib statsmodels scipy How to do it... In the following steps, we demonstrate several methods for making predictions using time series data: - Begin by generating a time series: from random import random time_series = [2 * x + random() for x in range(1, 100)] - Plot your data: %matplotlib inline import matplotlib.pyplot as plt plt.plot(time_series) plt.show() The following screenshot shows the output: - There is a large variety of techniques we can use to predict the subsequent value of a time series: - Autoregression (AR): from statsmodels.tsa.ar_model import AR model = AR(time_series) model_fit = model.fit() y = model_fit.predict(len(time_series), len(time_series)) - Moving average (MA): from statsmodels.tsa.arima_model import ARMA model = ARMA(time_series, order=(0, 1)) model_fit = model.fit(disp=False) y = model_fit.predict(len(time_series), len(time_series)) - Simple exponential smoothing (SES): from statsmodels.tsa.holtwinters import SimpleExpSmoothing model = SimpleExpSmoothing(time_series) model_fit = model.fit() y = model_fit.predict(len(time_series), len(time_series)) (Note that in recent versions of statsmodels, the AR and ARMA classes have been removed; their successors are statsmodels.tsa.ar_model.AutoReg and statsmodels.tsa.arima.model.ARIMA.) The resulting predictions are as follows: How it works... In the first step, we generate a simple toy time series. The series consists of values on a line sprinkled with some added noise.
Next, we plot our time series in step 2. You can see that it is very close to a straight line and that a sensible prediction for the value of the time series at time $t$ is $2t$. To create a forecast of the value of the time series, we consider three different schemes (step 3) for predicting the future values of the time series. In an autoregressive model, the basic idea is that the value of the time series at time $t$ is a linear function of the values of the time series at the previous times. More precisely, there are some constants $c, \varphi_1, \ldots, \varphi_p$ and a number $p$ such that: $X_t = c + \varphi_1 X_{t-1} + \cdots + \varphi_p X_{t-p} + \varepsilon_t$. As a hypothetical example, $p$ may be 3, meaning that the value of the time series can be easily computed from knowing its last 3 values. In the moving-average model, the time series is modeled as fluctuating about a mean. More precisely, let $\varepsilon_1, \varepsilon_2, \ldots$ be a sequence of i.i.d. normal variables and let $\mu$ be a constant. Then, the time series is modeled by the following formula: $X_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$ (with the order=(0, 1) used above, $q = 1$). For that reason, it performs poorly in predicting the noisy linear time series we have generated. Finally, in simple exponential smoothing, we propose a smoothing parameter $\alpha$, with $0 < \alpha < 1$. Then, our model's estimate, $s_t$, is computed from the following equations: $s_0 = x_0$ and $s_t = \alpha x_t + (1 - \alpha) s_{t-1}$ for $t > 0$. In other words, we keep track of an estimate, $s_t$, and adjust it slightly using the current time series value, $x_t$. How strongly the adjustment is made is regulated by the $\alpha$ parameter. Anomaly detection with Isolation Forest Anomaly detection is the identification of events in a dataset that do not conform to the expected pattern. In applications, these events may be of critical importance. For instance, they may be occurrences of a network intrusion or of fraud. We will utilize Isolation Forest to detect such anomalies. Isolation Forest relies on the observation that it is easy to isolate an outlier, while more difficult to describe a normal data point. Getting ready The preparation for this recipe consists of installing the matplotlib, pandas, and scipy packages in pip.
The command for this is as follows: pip install matplotlib pandas scipy How to do it... In the next steps, we demonstrate how to apply the Isolation Forest algorithm to detecting anomalies: - Import the required libraries and set a random seed: import numpy as np import pandas as pd random_seed = np.random.RandomState(12) - Generate a set of normal observations, to be used as training data: X_train = 0.5 * random_seed.randn(500, 2) X_train = np.r_[X_train + 3, X_train] X_train = pd.DataFrame(X_train, columns=["x", "y"]) - Generate a testing set, also consisting of normal observations: X_test = 0.5 * random_seed.randn(500, 2) X_test = np.r_[X_test + 3, X_test] X_test = pd.DataFrame(X_test, columns=["x", "y"]) - Generate a set of outlier observations. These are generated from a different distribution than the normal observations: X_outliers = random_seed.uniform(low=-5, high=5, size=(50, 2)) X_outliers = pd.DataFrame(X_outliers, columns=["x", "y"]) - Let's take a look at the data we have generated: %matplotlib inline import matplotlib.pyplot as plt p1 = plt.scatter(X_train.x, X_train.y, c="white", s=50, edgecolor="black") p2 = plt.scatter(X_test.x, X_test.y, c="green", s=50, edgecolor="black") p3 = plt.scatter(X_outliers.x, X_outliers.y, c="blue", s=50, edgecolor="black") plt.xlim((-6, 6)) plt.ylim((-6, 6)) plt.legend( [p1, p2, p3], ["training set", "normal testing set", "anomalous testing set"], loc="lower right", ) plt.show() The following screenshot shows the output: - Now train an Isolation Forest model on our training data: from sklearn.ensemble import IsolationForest clf = IsolationForest() clf.fit(X_train) y_pred_train = clf.predict(X_train) y_pred_test = clf.predict(X_test) y_pred_outliers = clf.predict(X_outliers) - Let's see how the algorithm performs. 
Append the labels to X_outliers: X_outliers = X_outliers.assign(pred=y_pred_outliers) X_outliers.head() The following is the output: - Let's plot the Isolation Forest predictions on the outliers to see how many it caught: p1 = plt.scatter(X_train.x, X_train.y, c="white", s=50, edgecolor="black") p2 = plt.scatter( X_outliers.loc[X_outliers.pred == -1, ["x"]], X_outliers.loc[X_outliers.pred == -1, ["y"]], c="blue", s=50, edgecolor="black", ) p3 = plt.scatter( X_outliers.loc[X_outliers.pred == 1, ["x"]], X_outliers.loc[X_outliers.pred == 1, ["y"]], c="red", s=50, edgecolor="black", ) plt.xlim((-6, 6)) plt.ylim((-6, 6)) plt.legend( [p1, p2, p3], ["training observations", "detected outliers", "incorrectly labeled outliers"], loc="lower right", ) plt.show() The following screenshot shows the output: - Now let's see how it performed on the normal testing data. Append the predicted label to X_test: X_test = X_test.assign(pred=y_pred_test) X_test.head() The following is the output: - Now let's plot the results to see whether our classifier labeled the normal testing data correctly: p1 = plt.scatter(X_train.x, X_train.y, c="white", s=50, edgecolor="black") p2 = plt.scatter( X_test.loc[X_test.pred == 1, ["x"]], X_test.loc[X_test.pred == 1, ["y"]], c="blue", s=50, edgecolor="black", ) p3 = plt.scatter( X_test.loc[X_test.pred == -1, ["x"]], X_test.loc[X_test.pred == -1, ["y"]], c="red", s=50, edgecolor="black", ) plt.xlim((-6, 6)) plt.ylim((-6, 6)) plt.legend( [p1, p2, p3], [ "training observations", "correctly labeled test observations", "incorrectly labeled test observations", ], loc="lower right", ) plt.show() The following screenshot shows the output: Evidently, our Isolation Forest model performed quite well at capturing the anomalous points. There were quite a few false positives (instances where normal points were classified as outliers), but by tuning our model's parameters, we may be able to reduce these. How it works...
The first step involves simply loading the necessary libraries that will allow us to manipulate data quickly and easily. In steps 2 and 3, we generate a training and testing set consisting of normal observations. These have the same distributions. In step 4, on the other hand, we generate the remainder of our testing set by creating outliers. This anomalous dataset has a different distribution from the training data and the rest of the testing data. Plotting our data, we see that some outlier points look indistinguishable from normal points (step 5). This guarantees that our classifier will have a significant percentage of misclassifications, due to the nature of the data, and we must keep this in mind when evaluating its performance. In step 6, we fit an instance of Isolation Forest with default parameters to the training data. Note that the algorithm is fed no information about the anomalous data. We use our trained instance of Isolation Forest to predict whether the testing data is normal or anomalous, and similarly to predict whether the anomalous data is normal or anomalous. To examine how the algorithm performs, we append the predicted labels to X_outliers (step 7) and then plot the predictions of the Isolation Forest instance on the outliers (step 8). We see that it was able to capture most of the anomalies. Those that were incorrectly labeled were indistinguishable from normal observations. Next, in step 9, we append the predicted label to X_test in preparation for analysis and then plot the predictions of the Isolation Forest instance on the normal testing data (step 10). We see that it correctly labeled the majority of normal observations. At the same time, there was a significant number of incorrectly classified normal observations (shown in red). Depending on how many false alarms we are willing to tolerate, we may need to fine-tune our classifier to reduce the number of false positives. 
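The visual comparison above can also be quantified. The sketch below regenerates data similar to the recipe's and computes a detection rate on the uniform outliers and a false-alarm rate on the normal training points. The exact numbers vary with the random seed, and the setup mirrors — but is not identical to — the recipe's.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(12)
# two normal blobs, as in the recipe, plus uniform outliers
X_train = 0.5 * rng.randn(500, 2)
X_train = np.r_[X_train + 3, X_train]
X_outliers = rng.uniform(low=-5, high=5, size=(50, 2))

clf = IsolationForest(random_state=12)
clf.fit(X_train)

# predict() returns +1 for inliers and -1 for outliers
detection_rate = (clf.predict(X_outliers) == -1).mean()
false_alarm_rate = (clf.predict(X_train) == -1).mean()
print(f"detected {detection_rate:.0%} of outliers, "
      f"flagged {false_alarm_rate:.0%} of normal points")
```

Tightening the contamination parameter of IsolationForest trades these two rates against each other, which is exactly the fine-tuning the recipe alludes to.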
Natural language processing using a hashing vectorizer and tf-idf with scikit-learn We often find in data science that the objects we wish to analyze are textual. For example, they might be tweets, articles, or network logs. Since our algorithms require numerical inputs, we must find a way to convert such text into numerical features. To this end, we utilize a sequence of techniques. A token is a unit of text. For example, we may specify that our tokens are words, sentences, or characters. A count vectorizer takes textual input and then outputs a vector consisting of the counts of the textual tokens. A hashing vectorizer is a variation on the count vectorizer that sets out to be faster and more scalable, at the cost of interpretability and hashing collisions. Though it can be useful, just having the counts of the words appearing in a document corpus can be misleading. The reason is that, often, unimportant words, such as the and a (known as stop words) have a high frequency of occurrence, and hence little informative content. For reasons such as this, we often give words different weights to offset this. The main technique for doing so is tf-idf, which stands for Term-Frequency, Inverse-Document-Frequency. The main idea is that we account for the number of times a term occurs, but discount it by the number of documents it occurs in. In cybersecurity, text data is omnipresent; event logs, conversational transcripts, and lists of function names are just a few examples. Consequently, it is essential to be able to work with such data, something you'll learn in this recipe. Getting ready The preparation for this recipe consists of installing the scikit-learn package in pip. The command for this is as follows: pip install sklearn In addition, a log file, anonops_short.log, consisting of an excerpt of conversations taking place on the IRC channel, #Anonops, is included in the repository for this chapter. 
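Before running the recipe, the tf-idf weighting itself can be illustrated with a dependency-free toy computation. This sketch uses the classic unsmoothed log formula; scikit-learn's TfidfTransformer uses a smoothed variant plus normalization, so its numbers differ, but the intuition is the same: a term frequent in one document but rare across documents gets a high weight, while a ubiquitous stop word gets weight zero.

```python
import math
from collections import Counter

docs = [
    "the attack is planned for the weekend",
    "the server is down",
    "ddos attack on the server",
]

def tf_idf(term, doc, docs):
    """Classic tf-idf: term frequency times log inverse document frequency."""
    words = doc.split()
    tf = Counter(words)[term] / len(words)
    df = sum(term in d.split() for d in docs)
    idf = math.log(len(docs) / df)  # unsmoothed variant
    return tf * idf

# "the" occurs in every document, so its idf -- and hence its weight -- is zero
print(tf_idf("the", docs[0], docs))        # 0.0
print(tf_idf("attack", docs[0], docs) > 0) # True
```

This is precisely why the recipe applies TfidfTransformer on top of the raw counts: without it, stop words would dominate the feature vectors.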
How to do it… In the next steps, we will convert a corpus of text data into numerical form, amenable to machine learning algorithms: - First, import the textual dataset: with open("anonops_short.log", encoding="utf8") as f: anonops_chat_logs = f.readlines() - Next, count the words in the text using the hashing vectorizer and then perform weighting using tf-idf: from sklearn.feature_extraction.text import HashingVectorizer from sklearn.feature_extraction.text import TfidfTransformer my_vector = HashingVectorizer(input="content", ngram_range=(1, 2)) X_train_counts = my_vector.fit_transform(anonops_chat_logs) tf_transformer = TfidfTransformer(use_idf=True).fit(X_train_counts) X_train_tf = tf_transformer.transform(X_train_counts) - The end result is a sparse matrix with each row being a vector representing one of the texts: X_train_tf <180830x1048576 sparse matrix of type '<class 'numpy.float64'>' with 3158166 stored elements in Compressed Sparse Row format> print(X_train_tf) The following is the output: How it works... We started by loading in the #Anonops text dataset (step 1). The Anonops IRC channel has been affiliated with the Anonymous hacktivist group. In particular, chat participants have in the past planned and announced their future targets on Anonops. Consequently, a well-engineered ML system would be able to predict cyber attacks by training on such data. In step 2, we instantiated a hashing vectorizer. The hashing vectorizer gave us counts of the 1- and 2-grams in the text, in other words, singleton and consecutive pairs of words (tokens) in the articles. We then applied a tf-idf transformer to give appropriate weights to the counts that the hashing vectorizer gave us. Our final result is a large, sparse matrix representing the occurrences of 1- and 2-grams in the texts, weighted by importance. Finally, we examined the front end of the SciPy sparse matrix representation of our featurized data.
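The hashing trick underlying HashingVectorizer can itself be sketched in a few lines. Here crc32 stands in for the MurmurHash that scikit-learn actually uses, and the bucket count is kept deliberately tiny so collisions are visible — the price paid for not storing a vocabulary, and the reason hashed features lose interpretability.

```python
import zlib
from collections import Counter

def hashing_vectorize(text, n_features=16):
    """Map each token to a hash bucket; counts of colliding tokens merge."""
    vec = Counter()
    for token in text.split():
        bucket = zlib.crc32(token.encode("utf8")) % n_features
        vec[bucket] += 1
    return vec

vec = hashing_vectorize("lol lol hello world")
print(dict(vec))  # four tokens spread across at most three buckets
```

Because the mapping is a fixed hash rather than a learned vocabulary, new documents can be vectorized in one streaming pass with constant memory — the scalability advantage mentioned above — but there is no way to recover which word a bucket index came from.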
Hyperparameter tuning with scikit-optimize In machine learning, a hyperparameter is a parameter whose value is set before the training process begins. For example, the choice of learning rate of a gradient boosting model and the size of the hidden layer of a multilayer perceptron are both examples of hyperparameters. By contrast, the values of other parameters are derived via training. Hyperparameter selection is important because it can have a huge effect on the model's performance. The most basic approach to hyperparameter tuning is called a grid search. In this method, you specify a range of potential values for each hyperparameter, and then try them all out until you find the best combination. This brute-force approach is comprehensive but computationally intensive. More sophisticated methods exist. In this recipe, you will learn how to use Bayesian optimization over hyperparameters using scikit-optimize. In contrast to a basic grid search, in Bayesian optimization, not all parameter values are tried out, but rather a fixed number of parameter settings is sampled from specified distributions. More details can be found in the scikit-optimize documentation. Getting ready The preparation for this recipe consists of installing a specific version of scikit-learn, as well as xgboost and scikit-optimize, in pip. The command for this is as follows: pip install scikit-learn==0.20.3 xgboost scikit-optimize pandas How to do it...
In the following steps, you will load the standard wine dataset and use Bayesian optimization to tune the hyperparameters of an XGBoost model: - Load the wine dataset from scikit-learn: from sklearn import datasets wine_dataset = datasets.load_wine() X = wine_dataset.data y = wine_dataset.target - Import XGBoost and stratified K-fold: import xgboost as xgb from sklearn.model_selection import StratifiedKFold - Import BayesSearchCV from scikit-optimize and specify the number of parameter settings to test: from skopt import BayesSearchCV n_iterations = 50 - Specify your estimator. In this case, we select XGBoost and set it to be able to perform multi-class classification: estimator = xgb.XGBClassifier( n_jobs=-1, objective="multi:softmax", eval_metric="merror", verbosity=0, num_class=len(set(y)), ) - Specify a parameter search space: search_space = { "learning_rate": (0.01, 1.0, "log-uniform"), "min_child_weight": (0, 10), "max_depth": (1, 50), "max_delta_step": (0, 10), "subsample": (0.01, 1.0, "uniform"), "colsample_bytree": (0.01, 1.0, "log-uniform"), "colsample_bylevel": (0.01, 1.0, "log-uniform"), "reg_lambda": (1e-9, 1000, "log-uniform"), "reg_alpha": (1e-9, 1.0, "log-uniform"), "gamma": (1e-9, 0.5, "log-uniform"), "n_estimators": (5, 5000), "scale_pos_weight": (1e-6, 500, "log-uniform"), } - Specify the type of cross-validation to perform: cv = StratifiedKFold(n_splits=3, shuffle=True) - Define BayesSearchCV using the settings you have defined: bayes_cv_tuner = BayesSearchCV( estimator=estimator, search_spaces=search_space, scoring="accuracy", cv=cv, n_jobs=-1, n_iter=n_iterations, verbose=0, refit=True, ) - Define a callback function to print out the progress of the parameter search: import pandas as pd import numpy as np def print_status(optimal_result): """Shows the best parameters found and accuracy attained by the search so far.""" models_tested = pd.DataFrame(bayes_cv_tuner.cv_results_) best_parameters_so_far = pd.Series(bayes_cv_tuner.best_params_) print( "Model #{}\nBest accuracy so far: {}\nBest parameters so far: {}\n".format( len(models_tested), np.round(bayes_cv_tuner.best_score_, 3), bayes_cv_tuner.best_params_, ) ) clf_type = bayes_cv_tuner.estimator.__class__.__name__ models_tested.to_csv(clf_type + "_cv_results_summary.csv") - Perform the parameter search: result = bayes_cv_tuner.fit(X, y, callback=print_status) As you can see, the following shows the (abbreviated) output: <snip> Model #50 Best accuracy so far: 0.989 Best parameters so far: {'colsample_bylevel': 0.013417868502558758, 'colsample_bytree': 0.463490250419848, 'gamma': 2.2823050161337873e-06, 'learning_rate': 0.34006478878384533, 'max_delta_step': 9, 'max_depth': 41, 'min_child_weight': 0, 'n_estimators': 1951, 'reg_alpha': 1.8321791726476395e-08, 'reg_lambda': 13.098734837402576, 'scale_pos_weight': 0.6188077759379964, 'subsample': 0.7970035272497132} How it works... In steps 1 and 2, we import a standard dataset, the wine dataset, as well as the libraries needed for classification. A more interesting step follows, in which we specify how long we would like the hyperparameter search to be, in terms of a number of combinations of parameters to try. The longer the search, the better the results, at the risk of overfitting and extending the computational time. In step 4, we select XGBoost as the model, and then specify the number of classes, the type of problem, and the evaluation metric. This part will depend on the type of problem. For instance, for a regression problem, we might set eval_metric = 'rmse' and drop num_class altogether. Other models than XGBoost can be selected with the hyperparameter optimizer as well. In the next step (step 5), we specify a probability distribution over each parameter that we will be exploring. This is one of the advantages of using BayesSearchCV over a simple grid search, as it allows you to explore the parameter space more intelligently.
Next, we specify our cross-validation scheme (step 6). Since we are performing a classification problem, it makes sense to specify a stratified fold. However, for a regression problem, StratifiedKFold should be replaced with KFold. Also note that a larger splitting number is preferred for the purpose of measuring results, though it will come at a computational price. In step 7, you can see additional settings that can be changed. In particular, n_jobs allows you to parallelize the task. The verbosity and the method used for scoring can be altered as well. To monitor the search process and the performance of our hyperparameter tuning, we define a callback function to print out the progress in step 8. The results of the grid search are also saved in a CSV file. Finally, we run the hyperparameter search (step 9). The output allows us to observe the parameters and the performance of each iteration of the hyperparameter search. In this book, we will refrain from tuning the hyperparameters of classifiers. The reason is in part brevity, and in part because hyperparameter tuning here would be premature optimization, as there is no specified requirement or goal for the performance of the algorithm from the end user. Having seen how to perform it here, you can easily adapt this recipe to the application at hand. Another prominent library for hyperparameter tuning to keep in mind is hyperopt.
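For comparison, the brute-force grid search described at the start of this recipe looks as follows in scikit-learn. To keep the sketch self-contained, it tunes a decision tree rather than XGBoost — the GridSearchCV mechanics are identical for any estimator — and uses a deliberately small grid, since every combination is evaluated on every fold.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

# exhaustive grid: every combination is tried (3 x 3 = 9 candidates)
param_grid = {"max_depth": [2, 5, 10], "min_samples_leaf": [1, 5, 10]}
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid,
    cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0),
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

With only 9 candidates this is cheap, but the cost grows multiplicatively with each added hyperparameter — which is exactly the problem BayesSearchCV sidesteps by sampling a fixed number of settings instead.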
https://www.packtpub.com/product/machine-learning-for-cybersecurity-cookbook/9781789614671
This project will actually be a little different than our previous projects. Rather than taking advantage of the Blockly interface and emulation plug-in, we'll be bringing the Raspberry Pi into the mix. Follow the steps below to get started on this advanced project that Thomas Macpherson-Pope pulled together from CodeBug.org.uk.

What you need:
- Micro USB cable
- Raspberry Pi computer

The Project

To learn how to put the special project file for tethering onto the CodeBug, please see 10 CodeBug Projects in 10 Days: Tethering CodeBug with Python.

Next we will need to fit the CodeBug onto the Raspberry Pi. CodeBug connects straight to the Raspberry Pi's GPIO header through CodeBug's expansion connector. While the Pi is disconnected from power, align CodeBug to the pins shown in the diagrams below and gently push CodeBug's connector onto the GPIO pins. Make sure to NEVER connect the Micro USB or use a battery with CodeBug while it is fitted to a Raspberry Pi.

Make sure you select the appropriate Raspberry Pi model; the Raspberry Pi expanded its GPIO header from 26 pins to 40 with the Raspberry Pi Model B+. If you are still unsure which Raspberry Pi GPIO pins to connect CodeBug to, note the pin labels on the back of CodeBug, and connect these to the corresponding pins on your Raspberry Pi.

Now it's time to power up the Raspberry Pi! First, let's make sure to enable I2C by opening a Terminal window and running:

sudo raspi-config

Next let's choose:
- Advanced Options > Would you like the I2C interface to be enabled? > Yes
- Would you like the I2C kernel module to be loaded by default? > Yes

Next we will install the Python libraries that will talk to the CodeBug using the I2C GPIO pins.
sudo apt-get update
sudo apt-get install python3-codebug-i2c-tether

Download this example to your Raspberry Pi (right click > Save Link As…). We'll run the example in the Terminal with the following command:

python3 example.py

We should see an arrow pointing up-left on the CodeBug's LED display!

We can write our own tethered CodeBug programs using Python and a few simple commands to control CodeBug. In the next steps we will start an interactive Python session and enter commands to interact with the tethered CodeBug. Open a Terminal and type:

python3

A Python prompt (>>>) will appear. Let's import the CodeBug I2C library by entering:

import codebug_i2c_tether
cb = codebug_i2c_tether.CodeBug()
cb.open()
cb.set_pixel(2, 2, 1)

We should see the centre LED light up on CodeBug. Next we'll try setting a whole row of CodeBug's LEDs at the same time:

cb.set_row(3, 0b10100)

We should see the third row of LEDs light up in the given pattern. Now it's time to write text on the CodeBug's LEDs, using the command:

cb.write_text(0, 0, 'A')

An A will appear on the CodeBug LEDs. You can also write scrolling text on the CodeBug's LEDs. To access the full list of commands available, type the following in the interactive Python shell with the codebug_i2c_tether library imported:

help(codebug_i2c_tether.CodeBug)

We can write longer programs for tethered-mode CodeBug by writing commands in a text editor, then saving and running the file the way we did with the example earlier (python3 yourfile.py). Tethered mode gives CodeBug access to the full computing power, functionality and network connectivity of the Raspberry Pi! A variety of powerful yet easy-to-use Python modules allows CodeBug to generate random numbers, create email alerts or even post to Twitter!

See more CodeBug projects and learn how you can get one of your own by visiting: 10 CodeBug Projects in 10 Days
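The 0b10100 value passed to set_row above is a bitmask with one bit per LED in the row. A quick way to visualise which of CodeBug's five LEDs a mask lights is to render it as text (this is just an illustration of the binary pattern; the bit order — highest bit as the leftmost LED — is an assumption for the sketch):

```python
def row_pattern(mask, width=5):
    """Render a row bitmask as '#' (lit) and '.' (dark), leftmost bit first."""
    return "".join(
        "#" if mask & (1 << (width - 1 - i)) else "."
        for i in range(width)
    )

print(row_pattern(0b10100))  # -> #.#..
```

So cb.set_row(3, 0b10100) lights the first and third LEDs of the row while leaving the others dark.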
https://www.element14.com/community/community/stem-academy/codebug/blog/2015/10/07/10-codebug-projects-in-10-days-raspberry-pi-controlled-codebug-with-i2c
lukesmith123's posts on GameDev.net

lukesmith123 posted a topic in 2D and 3D Art:
I am converting some files to DDS and I'm a little confused about which DXT versions of DDS I should be using. For diffuse textures with no alpha, should I use DXT1 (no alpha)? Also, I have normal maps which contain specular maps in the alpha channel. Will DXT3 be poorer quality but smaller in file size than DXT5? Also, should I create mipmaps for normal map textures or not? Thanks so much!

lukesmith123 posted a topic in General and Gameplay Programming:
When doing additive blending of two separate animations that run at the same time, should any unused bones of one animation simply remain in the bind pose? For instance, when blending a shooting animation with a running animation, should the shooting animation's lower body remain in the bind pose?

lukesmith123 replied to lukesmith123's topic in Math and Physics:
Thanks for the replies. The area-of-intersection sum is really useful. I found some good stuff in a book and I think the fastest method when using min/max is the separating axis test:

[CODE]
if (Max.X < B.Min.X || Min.X > B.Max.X) return false;
if (Max.Y < B.Min.Y || Min.Y > B.Max.Y) return false;
return true;
[/CODE]

lukesmith123 posted a topic in Math and Physics:
What's the fastest way to test containment between two 2D AABBs? Also, I'm testing for intersections like this:

[CODE]
return (Abs(Corners[0].X - B.GetCorners[0].X) * 2 < (Corners[1].X + B.GetCorners[1].X)) &&
       (Math.Abs(Corners[0].Y - B.GetCorners[0].Y) * 2 < (Corners[1].Y + B.GetCorners[1].Y));
[/CODE]

If I always need to check for intersections as well as containment, is there a better method I could use to check both at the same time more cheaply?

lukesmith123 replied to lukesmith123's topic in General and Gameplay Programming:
Ah I see. Great answers thanks!
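The min/max representation discussed in these posts lets intersection and containment share the same handful of comparisons. A small illustrative sketch (in Python for clarity, even though the thread's snippets are C-family; the AABB class and field names are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def intersects(self, b):
        # Separating axis test: the boxes are disjoint if they are
        # separated along either the X or the Y axis.
        return not (self.max_x < b.min_x or self.min_x > b.max_x or
                    self.max_y < b.min_y or self.min_y > b.max_y)

    def contains(self, b):
        # True if b lies entirely inside self.
        return (self.min_x <= b.min_x and self.max_x >= b.max_x and
                self.min_y <= b.min_y and self.max_y >= b.max_y)
```

Since containment implies intersection, a combined query can test containment first and fall back to the separating axis test only when containment fails.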
lukesmith123 replied to lukesmith123's topic in General and Gameplay Programming:
One thing, with the last point you made: would that not be a much slower method than just creating a frustum from the four corners in the first place? Also, do you have any opinion on whether projecting to screen space is faster than creating frustums from the portals?

lukesmith123 replied to lukesmith123's topic in General and Gameplay Programming:
Fantastic, thanks so much!

lukesmith123 replied to lukesmith123's topic in General and Gameplay Programming:
OK, so would you check the player's bounds against all of the cell bounding boxes each frame? Or would you need to keep track of the player's movement through portals to speed this up? When you project the corners of the portal into screen space and build a 2D bounding box from them, how do you project the objects to screen space and get a 2D box from those? Would you take the corners of the object's bounding box, project them to screen space, and then create the smallest 2D box from the eight corners? Thanks so much for the tips!

lukesmith123 posted a topic in General and Gameplay Programming:
I have a couple of questions about portal systems for visibility determination that I couldn't find any info on, and I hope somebody here can answer. How do you keep track of which cell the player is currently in? Do you contain each cell in a bounding box? If so, what about cells that aren't box shaped? Also, I have read that a good method is to project the portal into screen space and check which objects in the connecting cell are within the 2D bounds of the portal. Could anybody explain how to do this? I don't really understand how to project the portal from world to screen space and test collisions with objects this way. Thanks.

lukesmith123 replied to lukesmith123's topic in General and Gameplay Programming:
Wow, very nice, thanks!
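The screen-space portal test being discussed can be sketched with a simple perspective divide: project the portal's corners and an object's corners to 2D, take the bounding box of each set, and test those 2D boxes for overlap. This is a schematic version only — it assumes a camera at the origin looking down +z with all points in front of the camera, where a real engine would use its full view-projection matrix and handle clipping:

```python
def project(point, f=1.0):
    """Perspective-project a camera-space point (x, y, z > 0) to 2D."""
    x, y, z = point
    return (f * x / z, f * y / z)

def screen_bbox(points, f=1.0):
    """2D AABB (min_x, min_y, max_x, max_y) of projected 3D points,
    e.g. the 4 corners of a portal or the 8 corners of an object's box."""
    pts = [project(p, f) for p in points]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

def boxes_overlap(a, b):
    """Overlap test for two (min_x, min_y, max_x, max_y) boxes."""
    return not (a[2] < b[0] or a[0] > b[2] or a[3] < b[1] or a[1] > b[3])

portal = [(-1, -1, 2), (1, -1, 2), (1, 1, 2), (-1, 1, 2)]
portal_box = screen_bbox(portal)
```

An object in the connecting cell is then a visibility candidate when the 2D box of its projected bounding-box corners overlaps the portal's 2D box.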
lukesmith123 posted a topic in General and Gameplay Programming:
Are there any good books on the subject of portals for visibility determination? I'm looking for something that is fairly in-depth with example code. I have a few books that mention the subject, for example the Morgan Kaufmann collision book, but it only skims the topic and doesn't go into detail. Thanks.

lukesmith123 replied to lukesmith123's topic in Graphics and GPU Programming:
They both look excellent, thank you.

lukesmith123 posted a topic in Graphics and GPU Programming:
Could anybody recommend any books that cover the subject of calculating lightmaps as well as radiosity calculation? I've read a few brief articles but I'm really hoping there are some more comprehensive books on the subject with code examples etc. Thanks.

lukesmith123 posted a topic in Engines and Middleware:
Hi, I followed a tutorial in the book 'Programming Game AI by Example' which teaches an introduction to scripting with Lua. I'm getting some unexpected memory leaks and I can't understand how they can happen. But as I am very new to Lua and Luabind, I'm guessing that I might be missing something obvious here.
I register functions from the state machine and bot class in the bot class constructor (lua_close is called in the bot class destructor):

[CODE]
lua = luaL_newstate();
luabind::open(lua);
luaL_openlibs(lua);

luabind::module(lua)
[
    luabind::class_<StateMachine<Bot>>("StateMachine")
        .def("ChangeState", &StateMachine<Bot>::ChangeState)
        .def("CurrentState", &StateMachine<Bot>::CurrentState)
        .def("SetCurrentState", &StateMachine<Bot>::SetCurrentState)
];

luabind::module(lua)
[
    luabind::class_<Bot>("Bot")
        .def("one", &Bot::one)
        .def("two", &Bot::two)
        .def("GetStateMachine", &Bot::GetStateMachine)
];

luaL_dofile(lua, "lua.lua");

luabind::object states = luabind::globals(lua);
if (luabind::type(states) == LUA_TTABLE)
{
    stateMachine->SetCurrentState(states["State_one"]);
}
[/CODE]

And the state machine class looks like this:

[CODE]
template <class entity_type>
class StateMachine
{
private:
    entity_type* Owner;
    luabind::object currentState;

public:
    StateMachine(entity_type* owner) : Owner(owner) {}

    void SetCurrentState(const luabind::object& s) { currentState = s; }

    void UpdateStateMachine()
    {
        if (currentState.is_valid())
        {
            (currentState)["Execute"](Owner);
        }
    }

    void ChangeState(const luabind::object& new_state)
    {
        (currentState)["Exit"](Owner);
        currentState = new_state;
        (currentState)["Enter"](Owner);
    }

    const luabind::object& CurrentState() const { return currentState; }
};
[/CODE]

Everything works as expected, but I am left with 6 memory leaks when I exit the program. When I remove the luabind::module code where the classes are registered, the memory leaks disappear. Does anybody have any ideas? Thanks.

lukesmith123 replied to lukesmith123's topic in Math and Physics:
EDIT: Ah, I realised I had made a stupid mistake elsewhere and the code that I originally posted was fine. Sorry!
https://www.gamedev.net/profile/180866-lukesmith123/?tab=reputation
Ok, this seems to be working rather well so far. I'm now using a DefaultSecurityManager and a DefaultAccessManager with a customised LoginModule (for password validation) and a custom PrincipalProvider (for LDAP access). I've also extended the servlet to wrap all resources in a custom AclResource, which delegates to the original resource but takes care of providing WebDAV's ACL properties in addition to those of the original resource. For this, I've created several properties which evaluate their values lazily, so that I don't have to build the entire ACL property when it's not even requested by the client. These properties are also invisible in allprop requests, as recommended by the DAV spec. (I should note that evaluating all this stuff seems to take a lot of typecasts on faith, i.e. casting interfaces to their Jackrabbit implementation counterparts, so I expect to have some work to do when the next version comes out and some of this changes.)

So far, I've taken care of DAV:supported-privilege-set and DAV:acl. On the server side, I take the privileges from the privilege registry, convert most of them to their DAV counterparts (those that seem to be exact matches, that is) and use a "JCR:" namespace for the rest of them. This ensures that the client sees the actual privileges used by the server. Setting these seems to have the desired effect on the server side.

I'm now worrying about two things:

1) DAV:current-user-privilege-set should return the ACL for the current user, the idea apparently being that regardless of whether the user is allowed to read the resource's full ACL, he should at least have access to his own privileges. But as far as I understand, I need JCR_READ_ACCESS_CONTROL permission to read any part of the ACL. Does that mean that a user is either allowed to read the full ACL or nothing, not even his own privileges?

2) I'm also trying to support the DAV:owner property, denoting the owner of a resource.
This will be a simple string property containing the principal's qualified name*. Querying it should therefore be simple. Setting it should be allowed either (1) only for the owner and the admins, or (2) alternatively be controlled through a custom privilege "modify-owner". As far as I can tell, I have to provide my own ACLProvider so I can take care of compiling different permissions when DAV:owner is accessed (I have to handle the SET_PROPERTY permission manually in the grants() method). Is this the correct way to do this, or am I getting myself in too much trouble because I missed a simpler way? Also, in case of (2), how would I go about creating and registering a custom privilege? Obviously, I'd have to put it in the privilege registry, but where can I do that, and how would I make it an aggregated privilege of JCR_ALL?

*) It's a bit of a hack, but for now I'm using the qualified LDAP name to identify principals. The DAV spec says principals "should" be referenced by an HTTP or HTTPS URL, and obviously this would allow any compliant DAV client to browse the users, but I don't see a way to mirror the LDAP directory into a JCR collection (certainly not within a short implementation time and with good performance), so I have my client access the LDAP on its own and just use the qualified names in DAV requests. Not portable for a generic implementation, but good enough for us, for now at least.

(Next up, once this works, is versioning, which will probably mess up my AclResource delegate and cause some more work there, too.)

Thanks as always,
Marian.
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200904.mbox/%3C22947463.post@talk.nabble.com%3E
Defining Content

Resource is the term that Substance D uses to describe an object placed in the resource tree. Ideally, all resources in your resource tree will be content. "Content" is the term that Substance D uses to describe resource objects that are particularly well-behaved when they appear in the SDI management interface. The Substance D management interface (aka SDI) is a set of views imposed upon the resource tree that allow you to add, delete, change and otherwise manage resources.

You can convince the management interface that your particular resources are content. To define a resource as content, you need to associate a resource with a content type.

Registering Content

In order to add new content to the system, you need to associate a resource factory with a content type. A resource factory that generates content must have these properties:

- It must be a class, or a factory function that returns an instance of a resource class.
- Instances of the resource class must be persistent (it must derive from the persistent.Persistent class or a class that derives from Persistent, such as substanced.folder.Folder).
- The resource class or factory must be decorated with the @content decorator, or must be added at configuration time via config.add_content_type.
- It must have a type. A type acts as a globally unique categorization marker, and allows the content to be constructed, enumerated, and introspected by various Substance D UI elements such as "add forms", and queried by the management interface for the icon class of a resource. A type can be any hashable Python object, but it's most often a string.
Here's an example which defines a content resource factory as a class:

```python
# in a module named blog.resources
from persistent import Persistent
from substanced.content import content

@content('Blog Entry')
class BlogEntry(Persistent):
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
```

Here's an example of defining a content resource factory using a function instead:

```python
# in a module named blog.resources
from persistent import Persistent
from substanced.content import content

class BlogEntry(Persistent):
    def __init__(self, title, body):
        self.title = title
        self.body = body

@content('Blog Entry')
def make_blog_entry(title='', body=''):
    return BlogEntry(title, body)
```

When a resource factory is not a class, Substance D will wrap the resource factory in something that changes the resource object returned from the factory. In the above case, the BlogEntry instance returned from make_blog_entry will be changed; its __factory_type__ attribute will be mutated.

Notice that when we decorate a resource factory class with @content, and the class' __init__ function takes arguments, we provide those arguments with default values. This is mandatory if you'd like your content objects to participate in a "dump". Dumping a resource requires that the resource be creatable without any mandatory arguments. It's a similar story if our factory is a function; the function decorated by the @content decorator should provide defaults for every argument. In general, a resource factory can take arguments, but each parameter of the factory's callable should be given a default value. This also means that all arguments to a resource factory should be keyword arguments, and not positional arguments.
In order to activate a @content decorator, it must be scanned using the Pyramid config.scan() machinery:

```python
# in a module named blog.__init__
from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator()
    config.include('substanced')
    config.scan('blog.resources')
    # .. and so on ...
```

Instead of using the @content decorator, you can alternately add a content resource imperatively at configuration time using the add_content_type method of the Configurator:

```python
# in a module named blog.__init__
from pyramid.config import Configurator
from .resources import BlogEntry

def main(global_config, **settings):
    config = Configurator()
    config.include('substanced')
    config.add_content_type('Blog Entry', BlogEntry)
```

This does the same thing as using the @content decorator, but you don't need to scan() your resources if you use add_content_type instead of the @content decorator.

Once a content type has been defined (and scanned, if it's been defined using a decorator), an instance of the resource can be constructed from within a view that lives in your application:

```python
# in a module named blog.views
from pyramid.httpexceptions import HTTPFound
from pyramid.view import (
    view_config,
    view_defaults,
    )

@view_config(name='add_blog_entry', request_method='POST')
def add_blogentry(context, request):
    title = request.POST['title']
    body = request.POST['body']
    entry = request.registry.content.create('Blog Entry', title, body)
    context[title] = entry
    return HTTPFound(request.resource_url(entry))
```

The arguments passed to request.registry.content.create must start with the content type, and must be followed with whatever arguments are required by the resource factory.

Creating an instance of content this way isn't particularly more useful than creating an instance of the resource object by calling its class' __init__ directly unless you're building a highly abstract system. But even if you're not building a very abstract system, types can be very useful.
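Conceptually, the create() and all() calls used above are lookups in a registry that maps content type markers to factories. A minimal illustrative sketch of that idea (this is not Substance D's actual implementation, just the shape of the mechanism):

```python
class ContentRegistry:
    """Toy registry mapping content types to resource factories."""

    def __init__(self):
        self._factories = {}

    def add_content_type(self, content_type, factory):
        # A type can be any hashable object, most often a string.
        self._factories[content_type] = factory

    def create(self, content_type, *args, **kw):
        # Look up the factory registered for this type and call it.
        return self._factories[content_type](*args, **kw)

    def all(self):
        # Enumerate every registered type.
        return list(self._factories)

registry = ContentRegistry()
registry.add_content_type(
    'Blog Entry',
    lambda title='', body='': {'title': title, 'body': body},
)
entry = registry.create('Blog Entry', 'Hello', 'First post')
```

Keeping creation behind a type lookup is what lets the SDI enumerate addable types, build "Add" forms, and construct content generically without importing each resource class directly.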
For instance, types can be enumerated:

```python
# in a module named blog.views
@view_config(name='show_types', renderer='show_types.pt')
def show_types(request):
    all_types = request.registry.content.all()
    return {'all_types': all_types}
```

request.registry.content.all() will return all the types you've defined and scanned.

Metadata

A content's type can be associated with metadata about that type, including the content type's name, its icon in the SDI management interface, an add view name, and other things. Pass arbitrary keyword arguments to the @content decorator or config.add_content_type to specify metadata.

Names

You can associate a content type registration with a name that shows up when someone attempts to add such a piece of content using the SDI management interface "Add" tab by passing a name keyword argument to @content or config.add_content_type.

```python
# in a module named blog.resources
from persistent import Persistent
from substanced.content import content

@content('Blog Entry', name='Cool Blog Entry')
class BlogEntry(Persistent):
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
```

Once you've done this, the "Add" tab in the SDI management interface will show your content as addable using this name instead of the type name.

Icons

You can associate a content type registration with a management view icon class by passing an icon keyword argument to @content or add_content_type.

```python
# in a module named blog.resources
from persistent import Persistent
from substanced.content import content

@content('Blog Entry', icon='glyphicon glyphicon-file')
class BlogEntry(Persistent):
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
```

Once you've done this, content you add to a folder in the system will display the icon next to it in the contents view of the management interface and in the breadcrumb list. The available icon class names are listed at .
For glyphicon icons, you'll need to use two class names: glyphicon and glyphicon-foo, separated by a space.

You can also pass a callback as an icon argument:

```python
from persistent import Persistent
from substanced.content import content

def blogentry_icon(context, request):
    if context.body:
        return 'glyphicon glyphicon-file'
    else:
        return 'glyphicon glyphicon-gift'

@content('Blog Entry', icon=blogentry_icon)
class BlogEntry(Persistent):
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
```

A callable used as icon must accept two arguments: context and request. context will be an instance of the type and request will be the current request; your callback will be called at the time the folder view is drawn. The callable should return either an icon class name or None. For example, the above blogentry_icon callable tells the SDI to use an icon representing a file if the blog entry has a body, and otherwise to show an icon representing a gift.

Add Views

You can associate a content type with a view that will allow the type to be added by passing the name of the add view as a keyword argument to @content or add_content_type.

```python
# in a module named blog.resources
from persistent import Persistent
from substanced.content import content

@content('Blog Entry', add_view='add_blog_entry')
class BlogEntry(Persistent):
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
```

Once you've done this, if the button is clicked in the "Add" tab for this content type, the related view will be presented to the user.
You can also pass a callback as an add_view argument:

```python
from persistent import Persistent
from substanced.content import content
from substanced.folder import Folder

def add_blog_entry(context, request):
    if request.registry.content.istype(context, 'Blog'):
        return 'add_blog_entry'

@content('Blog')
class Blog(Folder):
    pass

@content('Blog Entry', add_view=add_blog_entry)
class BlogEntry(Persistent):
    def __init__(self, title='', body=''):
        self.title = title
        self.body = body
```

A callable used as add_view must accept two arguments: context and request. context will be the potential parent object of the content (when the SDI folder view is drawn), and request will be the current request at the time the folder view is drawn. The callable should return either a view name or None if the content should not be addable in this circumstance. For example, the above add_blog_entry callable asserts that Blog Entry content should only be addable if the context we're adding to is of type Blog; it returns None otherwise, signifying that the content is not addable in this circumstance.

Obtaining Metadata About a Content Object's Type

Return the icon class name for the blogentry's content type, or None if it does not exist:

```python
request.registry.content.metadata(blogentry, 'icon')
```

Return the icon for the blogentry's content type, or glyphicon glyphicon-file if it does not exist:

```python
request.registry.content.metadata(blogentry, 'icon', 'glyphicon glyphicon-file')
```

Affecting Content Creation

In some cases you might want your resource to perform some actions that can only take place after it has been seated in its container, but before the creation events have fired. The @content decorator and add_content_type method both support an after_create argument, pointed at a callable.
For example:

```python
@content(
    'Document',
    icon='glyphicon glyphicon-align-left',
    add_view='add_document',
    propertysheets = (
        ('Basic', DocumentPropertySheet),
        ),
    after_create='after_creation'
    )
class Document(Persistent):

    name = renamer()

    def __init__(self, title, body):
        self.title = title
        self.body = body

    def after_creation(self, inst, registry):
        pass
```

If the value provided for after_create is a string, it's assumed to be a method of the created object. If it's a sequence, each value should be a string or a callable, which will be called in turn. The callable(s) are passed the instance being created and the registry. Afterwards, substanced.event.ContentCreatedEvent is emitted.

Construction of the root folder in Substance D is a special case. Most Substance D applications will start with:

```python
from substanced.db import root_factory

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application. """
    config = Configurator(settings=settings, root_factory=root_factory)
```

The substanced.db.root_factory() callable contains the following line:

```python
app_root = registry.content.create('Root')
```

In many cases you want to perform some extra work on the Root. For example, you might want to create a catalog with indexes. Substance D emits an event when the root is created, so you can subscribe to that event and perform some actions:

```python
from substanced.root import Root
from substanced.event import subscribe_created
from substanced.catalog import Catalog

@subscribe_created(Root)
def root_created(event):
    root = event.object
    catalog = Catalog()
    catalogs = root['catalogs']
    catalogs.add_service('catalog', catalog)
    catalog.update_indexes('system', reindex=True)
    catalog.update_indexes('sdidemo', reindex=True)
```

Names and Renaming

A resource's "name" (__name__) is important to the system in Substance D. For example, traversal uses the value in URLs and paths to walk through hierarchy. Containers need to know when a resource's __name__ changes.
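The string-or-sequence rule for after_create described above can be sketched as a small dispatcher. This is purely illustrative (the function name and structure are invented for the example, not Substance D's internal code), but it matches the documented contract: a string names a method on the created object, a sequence mixes strings and callables invoked in order, and each hook receives the instance and the registry:

```python
def run_after_create(spec, inst, registry):
    """Dispatch after_create hooks per the documented rules (sketch)."""
    if spec is None:
        return
    if isinstance(spec, str) or callable(spec):
        hooks = [spec]          # a single string or callable
    else:
        hooks = list(spec)      # a sequence of strings/callables
    for hook in hooks:
        if isinstance(hook, str):
            # A string names a method of the created object.
            getattr(inst, hook)(inst, registry)
        else:
            # A plain callable is invoked directly.
            hook(inst, registry)
```

In the real framework, this dispatch happens after the object is seated in its container and before ContentCreatedEvent is emitted.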
To help support this, Substance D provides substanced.util.renamer(). You use it as a class attribute wrapper on resources that want "managed" names. These resources then gain a name attribute with a getter/setter from renamer. Getting the name returns the __name__. Setting name grabs the container and calls the rename method on the folder. For example:

```python
class Document(Persistent):
    name = renamer()
```

Special Colander Support

Forms and schemas for resources become pretty easy in Substance D. To make it easier for forms to interact with the Substance D machinery, it includes some special Colander schema nodes you can use on your forms.

NameSchemaNode

If you want your form to affect the __name__ of a resource, certain constraints become applicable. These constraints might be different, so you might want to know if you are on an add form versus an edit form. substanced.schema.NameSchemaNode provides a schema node and default widget that bundles up the common rules for this. For example:

```python
class BlogEntrySchema(Schema):
    name = NameSchemaNode()
```

The above provides the basics of support for editing a name property, especially when combined with the renamer() utility mentioned above. By default the name is limited to 100 characters. NameSchemaNode accepts an argument that can set a different limit:

```python
class BlogEntrySchema(Schema):
    name = NameSchemaNode(max_len=20)
```

You can also provide an editing argument, either as a boolean or a callable which returns a boolean, which determines whether the form is rendered in "editing" mode. For example:

```python
class BlogEntrySchema(Schema):
    name = NameSchemaNode(
        editing=lambda c, r: r.registry.content.istype(c, 'BlogEntry')
        )
```

PermissionSchemaNode

A form might want to allow selection of zero or more permissions from the site's defined list of permissions. PermissionSchemaNode collects the possible state from the system, the currently-assigned values, and presents a widget that manages the values.
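Returning to the renamer() behaviour described at the top of this section — reading name returns __name__, while assigning name delegates to the parent folder's rename method — that getter/setter pattern can be sketched as a plain Python descriptor. This is a simplified stand-in (the toy Folder and the no-parent handling are invented for the example; Substance D's real renamer lives in substanced.util and does more):

```python
class renamer:
    """Descriptor sketch: .name proxies __name__, renames via the parent."""

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__name__

    def __set__(self, obj, newname):
        parent = getattr(obj, '__parent__', None)
        if parent is not None:
            # Let the container do the rename so its index stays correct.
            parent.rename(obj.__name__, newname)

class Folder:
    """Toy container standing in for substanced.folder.Folder."""
    def __init__(self):
        self.items = {}

    def add(self, name, obj):
        self.items[name] = obj
        obj.__name__ = name
        obj.__parent__ = self

    def rename(self, oldname, newname):
        obj = self.items.pop(oldname)
        self.items[newname] = obj
        obj.__name__ = newname

class Document:
    name = renamer()

folder = Folder()
doc = Document()
folder.add('draft', doc)
doc.name = 'final'   # delegates to folder.rename('draft', 'final')
```

The point of routing the assignment through the container is exactly what the docs note above: containers must know when a resource's __name__ changes, or traversal and their internal mappings fall out of sync.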
MultireferenceIdSchemaNode

References are a very powerful facility in Substance D. Naturally you'll want your application's forms to assign references. MultireferenceIdSchemaNode gives a schema node and widget that allows multiple selections of possible values in the system for references, including the current assignments. As an example, the built-in substanced.principal.UserSchema uses this schema node:

```python
class UserSchema(Schema):
    """ The property schema for :class:`substanced.principal.User` objects."""
    groupids = MultireferenceIdSchemaNode(
        choices_getter=groups_choices,
        title='Groups',
        )
```

Overriding Existing Content Types

Perhaps you would like to slightly adjust an existing content type, such as Folder, without re-implementing it. For example, perhaps you would like to override just the add_view and provide your own view, such as:

```python
@mgmt_view(
    context=IFolder,
    name='my_add_folder',
    tab_condition=False,
    permission='sdi.add-content',
    renderer='substanced.sdi:templates/form.pt'
    )
class MyAddFolderView(AddFolderView):
    def before(self, form):
        # Perform some custom work before validation
        pass
```

With this you can override any of the view predicates (such as permission) and override any part of the form handling (such as adding a before method that performs some custom processing).

To make this happen, you can re-register, so to speak, the content type during startup:

```python
from substanced.folder import Folder
from .views import MyAddFolderView

config.add_content_type('Folder', Folder,
                        add_view='my_add_folder',
                        icon='glyphicon glyphicon-folder-close')
```

This, however, keeps the same content type class. You can also go further by overriding the content type definition itself:

```python
@content(
    'Folder',
    icon='glyphicon glyphicon-folder-close',
    add_view='my_add_folder',
    )
@implementer(IFolder)
class MyFolder(Folder):
    def send_email(self):
        pass
```

The class for the Folder content type has now been replaced. Instead of substanced.folder.Folder it is MyFolder.
Note: Overriding a content type is a pain-free way to make a custom Root object. You could supply your own root_factory to the Configurator, but that means replicating all of its rather complicated goings-on. Instead, provide your own content type factory, as above, for Root.

Adding Automatic Naming for Content

On some sites you don't want to set the name for every piece of content you create. Substance D provides support for this with a special kind of folder. You can configure your site to use the autonaming folder by overriding the standard folder:

```python
from substanced.folder import SequentialAutoNamingFolder
from substanced.interfaces import IFolder
from zope.interface import implementer

@content(
    'Folder',
    icon='glyphicon glyphicon-folder-close',
    add_view='add_folder',
    )
@implementer(IFolder)
class MyFolder(SequentialAutoNamingFolder):
    """ Override Folder content type """
```

The add view for Documents can then be edited to no longer require a name:

```python
def add_success(self, appstruct):
    registry = self.request.registry
    document = registry.content.create('Document', **appstruct)
    self.context.add_next(document)
    return HTTPFound(
        self.request.sdiapi.mgmt_path(self.context, '@@contents')
        )
```

Note: This does not apply to the root object.

Affecting the Tab Order for Management Views

The tab_order parameter overrides the mgmt_view tab settings for a content type. Its value should be a sequence of view names, each corresponding to a tab that will appear in the management interface. Any registered view names that are omitted from this sequence will be placed after the other tabs.

Handling Content Events

Adding and modifying data related to content is, thanks to the framework, easy to do. Sometimes, though, you want to intervene and, for example, perform some extra work when content resources are added. Substance D has several framework events you can subscribe to using Pyramid events.
The substanced.events module imports these events as interfaces from substanced.interfaces and then provides a convenience subscriber decorator for each:

- substanced.interfaces.IObjectAdded, with subscriber @subscribe_added
- substanced.interfaces.IObjectWillBeAdded, with subscriber @subscribe_will_be_added
- substanced.interfaces.IObjectRemoved, with subscriber @subscribe_removed
- substanced.interfaces.IObjectWillBeRemoved, with subscriber @subscribe_will_be_removed
- substanced.interfaces.IObjectModified, with subscriber @subscribe_modified
- substanced.interfaces.IACLModified, with subscriber @subscribe_acl_modified
- substanced.interfaces.IContentCreated, with subscriber @subscribe_created

As an example, the substanced.principal.subscribers.user_added() function is a subscriber to the IObjectAdded event:

    @subscribe_added(IUser)
    def user_added(event):
        """ Give each user permission to change their own password."""
        if event.loading:  # fbo dump/load
            return
        user = event.object
        registry = event.registry
        set_acl(
            user,
            [(Allow, get_oid(user), ('sdi.view', 'sdi.change-password'))],
            registry=registry,
        )

As with the rest of Pyramid, you can use imperative configuration if you don't like decorator-based configuration, using config.add_content_subscriber. Both the declarative and imperative forms result in a call to substanced.event.add_content_subscriber().

Note

While the event subscriber is logically de-coupled from the action that triggers the event, both the action and the subscriber run in the same transaction.

The IACLModified event (and @subscribe_acl_modified subscriber) is used internally by Substance D to re-index information in the system catalog's ACL index. Substance D also uses this event to maintain references between resources and principals. Substance D applications can use it in different ways, for example recording a security audit trail of security changes.

Sometimes when you perform operations on objects you don't want to fire the standard events.
For example, in folder contents you can select a number of resources and move them to another folder. Normally this would fire content change events that re-index the files. This is fairly pointless: the content of the files hasn't changed.

If you looked at the interface for one of the content events, you would see some extra information supported. For example, in substanced.interfaces.IObjectWillBeAdded:

    class IObjectWillBeAdded(IObjectEvent):
        """ An event type sent before an object is added """
        object = Attribute('The object being added')
        parent = Attribute('The folder to which the object is being added')
        name = Attribute('The name which the object is being added to the folder '
                         'with')
        moving = Attribute('None or the folder from which the object being added '
                           'was moved')
        loading = Attribute('Boolean indicating that this add is part of a load '
                            '(during a dump load process)')
        duplicating = Attribute('The object being duplicated or ``None``')

moving and duplicating are flags that can be set on the event when certain actions are triggered. These help in cases such as the one above: certain subscribers might want "flavors" of the standard events and, in some cases, handle an event in a different way. This helps avoid lots of special-case events or the need for a hierarchy of events. Thus, in the case above, the catalog subscriber can see that the changes triggered by the event were in the special case of "moving". This can be seen in substanced.catalog.subscribers.object_added.
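The "flavored event" idea above can be sketched in plain Python: a single subscriber inspects the event's moving (and loading) flags and skips re-indexing when content is merely moved. The event class and subscriber below are illustrative stand-ins, not Substance D internals:

```python
# Minimal sketch of an event carrying `moving`/`loading` flags, so one
# subscriber can handle the plain and "moving" flavors differently.
# Class and function names here are illustrative, not Substance D's.

class ObjectAdded:
    def __init__(self, obj, moving=None, loading=False):
        self.object = obj
        self.moving = moving      # None, or the source folder on a move
        self.loading = loading    # True during a dump/load

reindexed = []

def index_on_add(event):
    if event.loading:             # fbo dump/load, as in the example above
        return
    if event.moving is not None:  # content unchanged on a move: skip
        return
    reindexed.append(event.object)

index_on_add(ObjectAdded('report.txt'))
index_on_add(ObjectAdded('report.txt', moving='old-folder'))
print(reindexed)  # -> ['report.txt']
```

The second event is ignored because its moving flag is set, which is exactly the special case the catalog subscriber checks for.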
http://substanced.readthedocs.io/en/latest/content.html
CC-MAIN-2017-30
en
refinedweb
How to serve multiple languages

If you used the django CMS installer to start your project, you'll find that it's already set up for serving multilingual content. Our How to install django CMS by hand guide also does the same. This guide specifically describes the steps required to enable multilingual support, in case you need to do it manually.

Multilingual URLs

If you use more than one language, django CMS URLs, including the admin URLs, need to be referenced via i18n_patterns(). For more information about this, see the official Django documentation on the subject.

Here's a full example of urls.py:

    from django.conf import settings
    from django.conf.urls import include, url
    from django.contrib import admin
    from django.conf.urls.i18n import i18n_patterns
    from django.contrib.staticfiles.urls import staticfiles_urlpatterns

    admin.autodiscover()

    urlpatterns = [
        url(r'^jsi18n/(?P<packages>\S+?)/$', 'django.views.i18n.javascript_catalog'),
    ]

    urlpatterns += staticfiles_urlpatterns()

    # note the django CMS URLs included via i18n_patterns
    urlpatterns += i18n_patterns('',
        url(r'^admin/', include(admin.site.urls)),
        url(r'^', include('cms.urls')),
    )

Monolingual URLs

Of course, if you want only monolingual URLs, without a language code, simply don't use i18n_patterns():

    urlpatterns += [
        url(r'^admin/', include(admin.site.urls)),
        url(r'^', include('cms.urls')),
    ]

Store the user's language preference

The user's preferred language is maintained through a browsing session. So that django CMS remembers the user's preference in subsequent sessions, it must be stored in a cookie. To enable this, cms.middleware.language.LanguageCookieMiddleware must be added to the project's MIDDLEWARE_CLASSES setting. See How django CMS determines which language to serve for more information about how this works.
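In essence, i18n_patterns() routes requests based on a leading language code in the URL path. A rough pure-Python sketch of that prefix matching (the language list and the function itself are assumptions for illustration, not Django or django CMS code):

```python
# Rough sketch of splitting a language prefix off a URL path, in the
# spirit of i18n_patterns (illustrative; not Django's implementation).

LANGUAGES = ['en', 'de', 'fr']

def split_language_prefix(path, default='en'):
    parts = path.lstrip('/').split('/', 1)
    if parts and parts[0] in LANGUAGES:
        rest = '/' + parts[1] if len(parts) > 1 else '/'
        return parts[0], rest
    # No recognized prefix: fall back to the default language.
    return default, path

print(split_language_prefix('/de/admin/'))  # -> ('de', '/admin/')
print(split_language_prefix('/admin/'))     # -> ('en', '/admin/')
```

This is why, with i18n_patterns() in place, /de/admin/ and /en/admin/ both resolve to the same admin URLconf, just under different active languages.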
Working in templates

- Display a language chooser in the page

Configuring language-handling behaviour

CMS_LANGUAGES describes all the options available for determining how django CMS serves content across multiple languages.
http://docs.django-cms.org/en/release-3.4.x/how_to/languages.html
Red Hat Bugzilla – Full Text Bug Listing

Created attachment 497172 [details]
Patch to fix the bug

Description of problem:
I found the bug in Ubuntu Natty. The progress_obj is documented as:

But it is called as:

Regression (worked fine in Ubuntu Maverick)

Version-Release number of selected component (if applicable):
Ubuntu version of python-urlgrabber: 3.9.1-4

How reproducible:
100% reproducible

Steps to Reproduce:
1. Pass a progress_obj to the grabber
2. Implement the documented API
3. Code fails when called incorrectly by grabber.py

Actual results:
Code fails with a KeyboardInterrupt

Expected results:
Should work as documented, and as it used to work

Additional info:
See details in the Ubuntu bug.

This code has been like this pretty much forever (the text param was added in 2005). I can see how you might be confused if you just looked at the documentation, though. Alas, we can't just remove the param name from the call, as the API has other params:

    def start(self, filename=None, url=None, basename=None,
              size=None, now=None, text=None):

...so our only sane option is to treat it as a documentation bug, and fix that. Sorry if that doesn't help you much. The documentation was fixed upstream.
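The callback shape at the center of this bug can be sketched as a minimal progress object. The start() signature below follows the call quoted above; the update()/end() methods and their signatures are assumptions added to round out the illustration:

```python
# Minimal progress object matching the start() signature quoted above.
# update()/end() are assumed here to illustrate the callback shape;
# this is not urlgrabber's actual progress implementation.

class TextMeter:
    def start(self, filename=None, url=None, basename=None,
              size=None, now=None, text=None):
        # `text` is one of several keyword parameters, which is why the
        # caller can't simply drop the parameter name, as noted above.
        self.text = text or basename or url
        self.size = size

    def update(self, amount_read):
        self.read = amount_read

    def end(self, amount_read):
        self.read = amount_read

meter = TextMeter()
meter.start(url='http://example.com/pkg.rpm', size=1024, text='pkg.rpm')
meter.update(512)
meter.end(1024)
print(meter.text, meter.read)  # -> pkg.rpm 1024
```

Implementing all parameters as keywords keeps a progress object compatible regardless of which names the caller passes positionally or by keyword.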
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=702457
Create better add-ins for Word with Office Open XML

Provided by: Stephanie Krieger, Microsoft Corporation | Juan Balmori Labra, Microsoft Corporation

If you're building Office Add-ins to run in Word, you might already know that the JavaScript API for Office (Office.js) offers several formats for reading and writing document content. These are called coercion types, and they include plain text, tables, HTML, and Office Open XML.

So what are your options when you need to add rich content to a document, such as images, formatted tables, charts, or even just formatted text? You can use HTML for inserting some types of rich content, such as pictures. Depending on your scenario, there can be drawbacks to HTML coercion, such as limitations in the formatting and positioning options available to your content. Because Office Open XML is the language in which Word documents (such as .docx and .dotx) are written, you can insert virtually any type of content that a user can add to a Word document, with virtually any type of formatting the user can apply. Determining the Office Open XML markup you need to get it done is easier than you might think.

Note

Office Open XML is also the language behind PowerPoint and Excel (and, as of Office 2013, Visio) documents. However, currently, you can coerce content as Office Open XML only in Office Add-ins created for Word. For more information about Office Open XML, including the complete language reference documentation, see Additional resources.

To begin, take a look at some of the content types you can insert using Office Open XML coercion. Download the code sample Word-Add-in-Load-and-write-Open-XML, which contains the Office Open XML markup and Office.js code required for inserting any of the following examples into Word.

Note

Throughout this article, the terms content types and rich content refer to the types of rich content you can insert into a Word document.

Figure 1. Text with direct formatting.
You can use direct formatting to specify exactly what the text will look like regardless of existing formatting in the user's document.

Figure 2. Text formatted using a style. You can use a style to automatically coordinate the look of text you insert with the user's document.

Figure 3. A simple image. You can use the same method for inserting any Office-supported image format.

Figure 4. An image formatted using picture styles and effects. Adding high-quality formatting and effects to your images requires much less markup than you might expect.

Figure 5. A content control. You can use content controls with your add-in to add content at a specified (bound) location rather than at the selection.

Figure 6. A text box with WordArt formatting. Text effects are available in Word for text inside a text box (as shown here) or for regular body text.

Figure 7. A shape. You can insert built-in or custom drawing shapes, with or without text and formatting effects.

Figure 8. A table with direct formatting. You can include text formatting, borders, shading, cell sizing, or any table formatting you need.

Figure 9. A table formatted using a table style. You can use built-in or custom table styles just as easily as using a paragraph style for text.

Figure 10. A SmartArt diagram. Office 2013 offers a wide array of SmartArt diagram layouts (and you can use Office Open XML to create your own).

Figure 11. A chart. You can insert Excel charts as live charts in Word documents, which also means you can use them in your add-in for Word.

As you can see from the preceding examples, you can use Office Open XML coercion to insert essentially any type of content that a user can insert into their own document. There are two simple ways to get the Office Open XML markup you need. Either add your rich content to an otherwise blank Word 2013 document and then save the file in Word XML Document format, or use a test add-in with the getSelectedDataAsync method to grab the markup.
Both approaches provide essentially the same result.

Note

An Office Open XML document is actually a compressed package of files that represent the document contents. Saving the file in the Word XML Document format gives you the entire Office Open XML package flattened into one XML file, which is also what you get when using getSelectedDataAsync to retrieve the Office Open XML markup. If you save the file to an XML format from Word, note that there are two options under the Save as Type list in the Save As dialog box for .xml format files. Be sure to choose Word XML Document and not the Word 2003 option.

Download the code sample named Word-Add-in-Get-Set-EditOpen-XML, which you can use as a tool to retrieve and test your markup.

So is that all there is to it? Well, not quite. Yes, for many scenarios, you could use the full, flattened Office Open XML result you see with either of the preceding methods and it would work. The good news is that you probably don't need most of that markup. If you're one of the many add-in developers seeing Office Open XML markup for the first time, trying to make sense of the massive amount of markup you get for the simplest piece of content might seem overwhelming, but it doesn't have to be.

In this topic, we'll use some common scenarios we've been hearing from the Office Add-ins developer community to show you techniques for simplifying Office Open XML for use in your add-in. We'll explore the markup for some types of content shown earlier along with the information you need for minimizing the Office Open XML payload. We'll also look at the code you need for inserting rich content into a document at the active selection and how to use Office Open XML with the bindings object to add or replace content at specified locations.
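Because an Office Open XML document is just a ZIP archive of XML parts, you can demonstrate the package structure with nothing but the standard library. The part names below form a fabricated, minimal package for illustration; a real Word document contains more parts:

```python
# A .docx file is a ZIP archive of XML "parts". Build a tiny fake package
# in memory and list its parts. Part contents here are placeholders; a
# real package has full XML in each part.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as docx:
    docx.writestr('[Content_Types].xml', '<Types/>')
    docx.writestr('_rels/.rels', '<Relationships/>')
    docx.writestr('word/document.xml', '<w:document/>')
    docx.writestr('word/_rels/document.xml.rels', '<Relationships/>')

# Listing the archive shows the document-part layout described above.
with zipfile.ZipFile(buf) as docx:
    for name in docx.namelist():
        print(name)
```

Renaming any real .docx to .zip and listing it the same way shows the actual parts (document.xml, styles.xml, theme1.xml, and so on) that the rest of this topic walks through.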
Exploring the Office Open XML document package

When you use getSelectedDataAsync to retrieve the Office Open XML for a selection of content (or when you save the document in Word XML Document format), what you're getting is not just the markup that describes your selected content; it's an entire document with many options and settings that you almost certainly don't need. In fact, if you use that method from a document that contains a task pane add-in, the markup you get even includes your task pane.

Even a simple Word document package includes parts for document properties, styles, theme (formatting settings), web settings, fonts, and then some, in addition to parts for the actual content. For example, say that you want to insert just a paragraph of text with direct formatting, as shown earlier in Figure 1. When you grab the Office Open XML for the formatted text using getSelectedDataAsync, you see a large amount of markup. That markup includes a package element that represents an entire document, which contains several parts (commonly referred to as document parts or, in the Office Open XML, as package parts), as you see listed in Figure 13. Each part represents a separate file within the package.

Tip

You can edit Office Open XML markup in a text editor like Notepad. If you open it in Visual Studio 2015, you can use Edit > Advanced > Format Document (Ctrl+K, Ctrl+D) to format the package for easier editing. Then you can collapse or expand document parts or sections of them, as shown in Figure 12, to more easily review and edit the content of the Office Open XML package. Each document part begins with a pkg:part tag.

Figure 12. Collapse and expand package parts for easier editing in Visual Studio 2015

Figure 13.
The parts included in a basic Word Office Open XML document package

With all that markup, you might be surprised to discover that the only elements you actually need to insert the formatted text example are pieces of the .rels part and the document.xml part.

Note

The two lines of markup above the package tag (the XML declarations for version and Office program ID) are assumed when you use the Office Open XML coercion type, so you don't need to include them. Keep them if you want to open your edited markup as a Word document to test it.

Several of the other types of content shown at the start of this topic require additional parts as well (beyond those shown in Figure 13), and we'll address those later in this topic. Meanwhile, since you'll see most of the parts shown in Figure 13 in the markup for any Word document package, here's a quick summary of what each of these parts is for and when you need it:

- Inside the package tag, the first part is the .rels file, which defines relationships between the top-level parts of the package (these are typically the document properties, thumbnail (if any), and main document body). Some of the content in this part is always required in your markup because you need to define the relationship of the main document part (where your content resides) to the document package.
- The document.xml.rels part defines relationships for additional parts required by the document.xml (main body) part, if any.

Important

The .rels files in your package (such as the top-level .rels, document.xml.rels, and others you may see for specific types of content) are an extremely important tool that you can use as a guide for helping you quickly edit down your Office Open XML package. To learn more about how to do this, see Creating your own markup: best practices later in this topic.

- The document.xml part is the content in the main body of the document. You need elements of this part, of course, since that's where your content appears.
But you don't need everything you see in this part. We'll look at that in more detail later.

Many parts are automatically ignored by the Set methods when inserting content into a document using Office Open XML coercion, so you might as well remove them. These include the theme1.xml file (the document's formatting theme), the document properties parts (core, add-in, and thumbnail), and setting files (including settings, webSettings, and fontTable).

In the Figure 1 example, text formatting is directly applied (that is, each font and paragraph formatting setting is applied individually). But if you use a style (such as if you want your text to automatically take on the formatting of the Heading 1 style in the destination document), as shown earlier in Figure 2, then you would need part of the styles.xml part as well as a relationship definition for it. For more information, see the section Adding objects that use additional Office Open XML parts.

Inserting document content at the selection

Let's take a look at the minimal Office Open XML markup required for the formatted text example shown in Figure 1 and the JavaScript required for inserting it at the active selection in the document.

Simplified Office Open XML markup

We've edited the Office Open XML example shown here, as described in the preceding section, to leave just the required document parts and only the required elements within each of those parts. We'll walk through how to edit the markup yourself (and explain a bit more about the pieces that remain here) in the next section of the topic.

            </w:p>
          </w:body>
        </w:document>
      </pkg:xmlData>
    </pkg:part>
    </pkg:package>

Note

If you add the markup shown here to an XML file along with the XML declaration tags for version and mso-application at the top of the file (shown in Figure 13), you can open it in Word as a Word document. Or, without those tags, you can still open it using File > Open in Word.
You'll see Compatibility Mode on the title bar in Word 2013, because you removed the settings that tell Word this is a 2013 document. Since you're adding this markup to an existing Word 2013 document, that won't affect your content at all.

JavaScript for using setSelectedDataAsync

Once you save the preceding Office Open XML as an XML file that's accessible from your solution, you can use the following function to set the formatted text content in the document using Office Open XML coercion. In this function, notice that all but the last line are used to get your saved markup for use in the setSelectedDataAsync method call at the end of the function. setSelectedDataAsync requires only that you specify the content to be inserted and the coercion type.

Note

Replace yourXMLfilename with the name and path of the XML file as you've saved it in your solution. If you're not sure where to include XML files in your solution or how to reference them in your code, see the Word-Add-in-Load-and-write-Open-XML code sample for examples of that and a working example of the markup and JavaScript shown here.

    function writeContent() {
        var myOOXMLRequest = new XMLHttpRequest();
        var myXML;
        myOOXMLRequest.open('GET', 'yourXMLfilename', false);
        myOOXMLRequest.send();
        if (myOOXMLRequest.status === 200) {
            myXML = myOOXMLRequest.responseText;
        }
        Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' });
    }

Creating your own markup: best practices

Let's take a closer look at the markup you need to insert the preceding formatted text example. For this example, start by simply deleting all document parts from the package other than .rels and document.xml. Then, we'll edit those two required parts to simplify things further.

Important

Use the .rels parts as a map to quickly gauge what's included in the package and determine what parts you can delete completely (that is, any parts not related to or referenced by your content).
Remember that every document part must have a relationship defined in the package, and those relationships appear in the .rels files. So you should see all of them listed in either .rels, document.xml.rels, or a content-specific .rels file.

The following markup shows the required .rels part before editing. Since we're deleting the add-in and core document property parts, and the thumbnail part, we need to delete those relationships from .rels as well. Notice that this will leave only the relationship (with the relationship ID "rId1" in the following example) for document.xml.

    <pkg:part pkg:
      <pkg:xmlData>
        <Relationships xmlns="">
          <Relationship Id="rId3" Type="" Target="docProps/core.xml"/>
          <Relationship Id="rId2" Type="" Target="docProps/thumbnail.emf"/>
          <Relationship Id="rId1" Type="" Target="word/document.xml"/>
          <Relationship Id="rId4" Type="" Target="docProps/app.xml"/>
        </Relationships>
      </pkg:xmlData>
    </pkg:part>

Important

Remove the relationships (that is, the Relationship tag) for any parts that you completely remove from the package. Including a part without a corresponding relationship, or excluding a part and leaving its relationship in the package, will result in an error.

The following markup shows the document.xml part, which includes our sample formatted text content before editing.

    <pkg:part pkg:
      <pkg:xmlData>
        <w:document mc:
          <w:body>
            <w:p>
              <w:bookmarkStart w:
              <w:bookmarkEnd w:
            </w:p>
            <w:p/>
            <w:sectPr>
              <w:pgSz w:
              <w:pgMar w:
              <w:cols w:
            </w:sectPr>
          </w:body>
        </w:document>
      </pkg:xmlData>
    </pkg:part>

Since document.xml is the primary document part where you place your content, let's take a quick walk through that part. (Figure 14, which follows this list, provides a visual reference to show how some of the core content and formatting tags explained here relate to what you see in a Word document.)
Many of those namespaces refer to specific types of content and you only need them if they're relevant to your content. Notice that the prefix for the tags throughout a document part refers back to the namespaces. In this example, the only prefix used in the tags throughout the document.xml part is w:, so the only namespace that we need to leave in the opening w:document tag is xmlns:w. Tip If you're editing your markup in Visual Studio 2015, after you delete namespaces in any part, look through all tags of that part. If you've removed a namespace that's required for your markup, you'll see a red squiggly underline on the relevant prefix for affected tags. If you remove the xmlns:mc namespace, you must also remove the mc:Ignorable attribute that precedes the namespace listings. Inside the opening body tag, you see a paragraph tag ( w:p ), which includes our sample content for this example. The w:pPr tag includes properties for directly-applied paragraph formatting, such as space before or after the paragraph, paragraph alignment, or indents. (Direct formatting refers to attributes that you apply individually to content rather than as part of a style.) This tag also includes direct font formatting that's applied to the entire paragraph, in a nested w:rPr (run properties) tag, which contains the font color and size set in our sample. Note You might notice that font sizes and some other formatting settings in Word Office Open XML markup look like they're double the actual size. That's because paragraph and line spacing, as well some section formatting properties shown in the preceding markup, are specified in twips (one-twentieth of a point). Depending on the types of content you work with in Office Open XML, you may see several additional units of measure, including English Metric Units (914,400 EMUs to an inch), which are used for some Office Art (drawingML) values and 100,000 times actual value, which is used in both drawingML and PowerPoint markup. 
PowerPoint also expresses some values as 100 times the actual value, and Excel commonly uses actual values.

Within a paragraph, any content with like properties is included in a run (w:r), such as is the case with the sample text. Each time there's a change in formatting or content type, a new run starts. (That is, if just one word in the sample text was bold, it would be separated into its own run.) In this example, the content includes just the one text run. Notice that, because the formatting included in this sample is font formatting (that is, formatting that can be applied to as little as one character), it also appears in the properties for the individual run.

Also notice the tags for the hidden "_GoBack" bookmark (w:bookmarkStart and w:bookmarkEnd), which appear in Word 2013 documents by default. You can always delete the start and end tags for the _GoBack bookmark from your markup.

The last piece of the document body is the w:sectPr tag, or section properties. This tag includes settings such as margins and page orientation. The content you insert using setSelectedDataAsync will take on the active section properties in the destination document by default. So, unless your content includes a section break (in which case you'll see more than one w:sectPr tag), you can delete this tag.

Figure 14. How common tags in document.xml relate to the content and layout of a Word document.

Tip

In markup you create, you might see another attribute in several tags that includes the characters w:rsid, which you don't see in the examples used in this topic. These are revision identifiers. They're used in Word for the Combine Documents feature and they're on by default. You'll never need them in markup you're inserting with your add-in, and turning them off makes for much cleaner markup. You can easily remove existing RSID tags or disable the feature (as described in the following procedure) so that they're not added to your markup for new content.
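The unit relationships described in the note above are easy to check with a few conversions. The helper names below are mine; the constants (2 half-points per point, 20 twips per point, 914,400 EMUs per inch) come from the text:

```python
# Quick converters for the Office Open XML units mentioned above.
# Function names are illustrative; the constants come from the text.

def pt_to_fontsize_units(points):
    """w:sz values look doubled because they count half-points."""
    return int(points * 2)

def pt_to_twips(points):
    """Spacing and indents use twips: one-twentieth of a point."""
    return int(points * 20)

def inches_to_emu(inches):
    """drawingML sizes use English Metric Units: 914,400 per inch."""
    return int(inches * 914400)

print(pt_to_fontsize_units(12))  # -> 24  (12pt text appears as w:sz 24)
print(pt_to_twips(12))           # -> 240 (12pt spacing appears as 240)
print(inches_to_emu(1.5))        # -> 1371600
```

Keeping these conversions in mind makes raw markup values much less surprising when you read or hand-edit a package.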
Be aware that if you use the co-authoring capabilities in Word (such as the ability to simultaneously edit documents with others), you should enable the feature again when finished generating the markup for your add-in.

To turn off RSID attributes in Word for documents you create going forward, do the following:

- In Word 2013, choose File and then choose Options.
- In the Word Options dialog box, choose Trust Center and then choose Trust Center Settings.
- In the Trust Center dialog box, choose Privacy Options and then disable the setting Store Random Number to Improve Combine Accuracy.

To remove RSID tags from an existing document, try the following shortcut with the document open in Word:

- With your insertion point in the main body of the document, press Ctrl+Home to go to the top of the document.
- On the keyboard, press Spacebar, Delete, Spacebar. Then, save the document.

After removing the majority of the markup from this package, we're left with the minimal markup that needs to be inserted for the sample, as shown in the preceding section.

Using the same Office Open XML structure for different content types

Several types of rich content require only the .rels and document.xml components shown in the preceding example, including content controls, Office drawing shapes and text boxes, and tables (unless a style is applied to the table). In fact, you can reuse the same edited package parts and swap out just the body content in document.xml for the markup of your content. To check out the Office Open XML markup for the examples of each of these content types shown earlier in Figures 5 through 8, explore the Word-Add-in-Load-and-write-Open-XML code sample referenced in the Overview section.

Before we move on, let's take a look at differences to note for a couple of these content types and how to swap out the pieces you need.

Understanding drawingML markup (Office graphics) in Word: What are fallbacks?
If the markup for your shape or text box looks far more complex than you would expect, there is a reason for it. With the release of Office 2007, we saw the introduction of the Office Open XML Formats as well as a new Office graphics engine that PowerPoint and Excel fully adopted. In the 2007 release, Word only incorporated part of that graphics engine, adopting the updated Excel charting engine, SmartArt graphics, and advanced picture tools. For shapes and text boxes, Word 2007 continued to use legacy drawing objects (VML). It was in the 2010 release that Word took the additional steps with the graphics engine to incorporate updated shapes and drawing tools.

So, to support shapes and text boxes in Office Open XML Format Word documents when opened in Word 2007, shapes (including text boxes) require fallback VML markup. Typically, as you see for the shape and text box examples included in the Word-Add-in-Load-and-write-Open-XML code sample, the fallback markup can be removed. Word 2013 automatically adds missing fallback markup to shapes when a document is saved. However, if you prefer to keep the fallback markup to ensure that you're supporting all user scenarios, there's no harm in retaining it.

If you have grouped drawing objects included in your content, you'll see additional (and apparently repetitive) markup, but this must be retained. Portions of the markup for drawing shapes are duplicated when the object is included in a group.

Important

When working with text boxes and drawing shapes, be sure to check namespaces carefully before removing them from document.xml. (Or, if you're reusing markup from another object type, be sure to add back any required namespaces you might have previously removed from document.xml.) A substantial portion of the namespaces included by default in document.xml are there for drawing object requirements.
About graphic positioning

In the code samples Word-Add-in-Load-and-write-Open-XML and Word-Add-in-Get-Set-EditOpen-XML, the text box and shape are set up using different types of text wrapping and positioning settings. (Also be aware that the image examples in those code samples are set up using in-line-with-text formatting, which positions a graphic object on the text baseline.)

The shape in those code samples is positioned relative to the right and bottom page margins. Relative positioning lets you more easily coordinate with a user's unknown document setup because it will adjust to the user's margins and run less risk of looking awkward because of paper size, orientation, or margin settings. To retain relative positioning settings when you insert a graphic object, you must retain the paragraph mark (w:p) in which the positioning (known in Word as an anchor) is stored. If you insert the content into an existing paragraph mark rather than including your own, you may be able to retain the same initial visual, but many types of relative references that enable the positioning to automatically adjust to the user's layout may be lost.

Working with content controls

Content controls are an important feature in Word 2013 that can greatly enhance the power of your add-in for Word in multiple ways, including giving you the ability to insert content at designated places in the document rather than only at the selection.

In Word, find content controls on the Developer tab of the ribbon, as shown here in Figure 15.

Figure 15. The Controls group on the Developer tab in Word.

Types of content controls in Word include rich text, plain text, picture, building block gallery, check box, dropdown list, combo box, date picker, and repeating section. Use the Properties command, shown in Figure 15, to edit the title of the control and to set preferences such as hiding the control container. Enable Design Mode to edit placeholder content in the control.
If your add-in works with a Word template, you can include controls in that template to enhance the behavior of the content. You can also use XML data binding in a Word document to bind content controls to data, such as document properties, for easy form completion or similar tasks. (Find controls that are already bound to built-in document properties in Word on the Insert tab, under Quick Parts.)

When you use content controls with your add-in, you can also greatly expand the options for what your add-in can do using a different type of binding. You can bind to a content control from within the add-in and then write content to the binding rather than to the active selection.

Note

Don't confuse XML data binding in Word with the ability to bind to a control via your add-in. These are completely separate features. However, you can include named content controls in the content you insert via your add-in using OOXML coercion and then use code in the add-in to bind to those controls. Also be aware that both XML data binding and Office.js can interact with custom XML parts in your app, so it is possible to integrate these powerful tools. To learn about working with custom XML parts in the Office JavaScript API, see the Additional resources section of this topic.

Working with bindings in your Word add-in is covered in the next section of the topic. First, let's take a look at an example of the Office Open XML required for inserting a rich text content control that you can bind to using your add-in.

Important

Rich text controls are the only type of content control you can use to bind to a content control from within your add-in.

    ="" xmlns:
    <w:body>
      <w:p/>
      <w:sdt>
        <w:sdtPr>
          <w:alias w:
          <w:id w:
          <w15:appearance w15:
          <w:showingPlcHdr/>
        </w:sdtPr>
        <w:sdtContent>
          <w:p>
            <w:r>
              <w:t>[This text is inside a content control that has its container hidden.
You can bind to a content control to add or interact with content at a specified location in the document.]</w:t> </w:r> </w:p> </w:sdtContent> </w:sdt> </w:body> </w:document> </pkg:xmlData> </pkg:part> </pkg:package> As already mentioned, content controls, like formatted text, don't require additional document parts, so only edited versions of the .rels and document.xml parts are included here. The w:sdt tag that you see within the document.xml body represents the content control. If you generate the Office Open XML markup for a content control, you'll see that several attributes have been removed from this example, including the tag and document part properties. Only essential (and a couple of best practice) elements have been retained, including the following: The alias is the title property from the Content Control Properties dialog box in Word. This is a required property (representing the name of the item) if you plan to bind to the control from within your add-in. The unique id is a required property. If you bind to the control from within your add-in, the ID is the property the binding uses in the document to identify the applicable named content control. The appearance attribute is used to hide the control container, for a cleaner look. This is a new feature in Word 2013, as you see by the use of the w15 namespace. Because this property is used, the w15 namespace is retained at the start of the document.xml part. The showingPlcHdr attribute is an optional setting that sets the default content you include inside the control (text in this example) as placeholder content. So, if the user clicks or taps in the control area, the entire content is selected rather than behaving like editable content in which the user can make changes. 
Although the empty paragraph mark (w:p/) that precedes the sdt tag is not required for adding a content control (and it will add vertical space above the control in the Word document), it ensures that the control is placed in its own paragraph. This may be important, depending upon the type and formatting of content that will be added in the control. Also, if you intend to bind to the control, the default content for the control (what's inside the sdtContent tag) must include at least one complete paragraph (as in this example) in order for your binding to accept multi-paragraph rich content.

Note: The document part attribute that was removed from this sample w:sdt tag may appear in a content control to reference a separate part in the package where placeholder content information can be stored (parts located in a glossary directory in the Office Open XML package). Although document part is the term used for XML parts (that is, files) within an Office Open XML package, the term document parts as used in the sdt property refers to the term in Word that describes certain content types, including building blocks and document property quick parts (for example, built-in XML data-bound controls). If you see parts under a glossary directory in your Office Open XML package, you may need to retain them if the content you're inserting includes these features. For a typical content control that you intend to bind to from your add-in, they're not required. Just remember that, if you do delete the glossary parts from the package, you must also remove the document part attribute from the w:sdt tag.

The next section discusses how to create and use bindings in your Word add-in.

Inserting content at a designated location

We've already looked at how to insert content at the active selection in a Word document. If you bind to a named content control that's in the document, you can insert any of the same content types into that control.
So when might you want to use this approach?

- When you need to add or replace content at specified locations in a template, such as to populate portions of the document from a database
- When you want the option to replace content that you're inserting at the active selection, such as to provide design element options to the user
- When you want the user to add data in the document that you can access for use with your add-in, such as to populate fields in the task pane based upon information the user adds in the document

Download the code sample Word-Add-in-JavaScript-AddPopulateBindings, which provides a working example of how to insert and bind to a content control, and how to populate the binding.

Add and bind to a named content control

As you examine the JavaScript that follows, consider these requirements:

- As previously mentioned, you must use a rich text content control in order to bind to the control from your Word add-in.
- The content control must have a name (this is the Title field in the Content Control Properties dialog box, which corresponds to the Alias tag in the Office Open XML markup). This is how the code identifies where to place the binding.
- You can have several named controls and bind to them as needed. Use a unique content control name, a unique content control ID, and a unique binding ID.
function addAndBindControl() {
    Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' }, function (result) {
        if (result.status == "failed") {
            if (result.error.message == "The named item does not exist.") {
                var myOOXMLRequest = new XMLHttpRequest();
                var myXML;
                myOOXMLRequest.open('GET', '../../Snippets_BindAndPopulate/ContentControl.xml', false);
                myOOXMLRequest.send();
                if (myOOXMLRequest.status === 200) {
                    myXML = myOOXMLRequest.responseText;
                }
                Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' }, function (result) {
                    Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' });
                });
            }
        }
    });
}

The code shown here takes the following steps:

Attempts to bind to the named content control, using addFromNamedItemAsync. Take this step first if there is a possible scenario for your add-in where the named control could already exist in the document when the code executes. For example, you'll want to do this if the add-in was inserted into and saved with a template that's been designed to work with the add-in, where the control was placed in advance. You also need to do this if you need to bind to a control that was placed earlier by the add-in.

The callback in the first call to the addFromNamedItemAsync method checks the status of the result to see whether the binding failed because the named item doesn't exist in the document (that is, the content control named MyContentControlTitle in this example). If so, the code adds the control at the active selection point (using setSelectedDataAsync) and then binds to it.

Note: As mentioned earlier and shown in the preceding code, the name of the content control is used to determine where to create the binding. However, in the Office Open XML markup, the code adds the binding to the document using both the name and the ID attribute of the content control.
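The ContentControl.xml file retrieved in the preceding code contains a package like the rich text control example shown earlier. As a sketch of how such a w:sdt fragment could be assembled programmatically, the helper below builds one from a title, a numeric ID, and placeholder text. The function name buildContentControlOoxml and its parameters are hypothetical (not part of the code sample or Office.js); the element names mirror the minimal markup discussed above.

```javascript
// Hypothetical helper: builds the w:sdt fragment for a named rich text
// content control. The caller supplies the title (alias), the numeric ID,
// and the placeholder text; everything else mirrors the minimal markup
// shown earlier in this topic.
function buildContentControlOoxml(title, id, placeholderText) {
  return [
    '<w:sdt>',
    '  <w:sdtPr>',
    '    <w:alias w:val="' + title + '"/>',
    '    <w:id w:val="' + id + '"/>',
    '    <w15:appearance w15:val="hidden"/>',
    '    <w:showingPlcHdr/>',
    '  </w:sdtPr>',
    '  <w:sdtContent>',
    '    <w:p><w:r><w:t>' + placeholderText + '</w:t></w:r></w:p>',
    '  </w:sdtContent>',
    '</w:sdt>'
  ].join('\n');
}
```

The returned fragment would still have to be wrapped in the full pkg:package and document.xml envelope before being passed to setSelectedDataAsync with the ooxml coercion type.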
After code execution, if you examine the markup of the document in which your add-in created bindings, you'll see two parts to each binding. In the markup for the content control where a binding was added (in document.xml), you'll see the attribute <w15:webExtensionLinked/>. In the document part named webExtensions1.xml, you'll see a list of the bindings you've created. Each is identified using the binding ID and the ID attribute of the applicable control, such as the following, where the appref attribute is the content control ID: <we:binding id="myBinding" type="text" appref="1382295294"/>.

Important: You must add the binding at the time you intend to act upon it. Don't include the markup for the binding in the Office Open XML for inserting the content control, because the process of inserting that markup will strip the binding.

Populate a binding

The code for writing content to a binding is similar to that for writing content to a selection.

function populateBinding(filename) {
    var myOOXMLRequest = new XMLHttpRequest();
    var myXML;
    myOOXMLRequest.open('GET', filename, false);
    myOOXMLRequest.send();
    if (myOOXMLRequest.status === 200) {
        myXML = myOOXMLRequest.responseText;
    }
    Office.select("bindings#myBinding").setDataAsync(myXML, { coercionType: 'ooxml' });
}

As with setSelectedDataAsync, you specify the content to be inserted and the coercion type. The only additional requirement for writing to a binding is to identify the binding by ID. Notice how the binding ID used in this code (bindings#myBinding) corresponds to the binding ID established (myBinding) when the binding was created in the previous function.

Note: The preceding code is all you need whether you are initially populating or replacing the content in a binding. When you insert a new piece of content at a bound location, the existing content in that binding is automatically replaced.
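One practical detail when writing text into a binding (or the selection) as Office Open XML: any user-supplied string you splice into a w:t element must be XML-escaped first, or characters such as & and < will produce invalid markup. A minimal helper for this might look like the following; the function name escapeForWordXml is hypothetical and not part of Office.js.

```javascript
// Hypothetical helper: XML-escapes user-supplied text before it is spliced
// into a w:t element of the Office Open XML passed to setDataAsync or
// setSelectedDataAsync. Ampersand must be replaced first so that the
// escapes themselves are not double-escaped.
function escapeForWordXml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```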
Check out an example of this in the previously referenced code sample Word-Add-in-JavaScript-AddPopulateBindings, which provides two separate content samples that you can use interchangeably to populate the same binding.

Adding objects that use additional Office Open XML parts

Many types of content require additional document parts in the Office Open XML package, meaning that they either reference information in another part or the content itself is stored in one or more additional parts and referenced in document.xml. For example, consider the following:

- Content that uses styles for formatting (such as the styled text shown earlier in Figure 2 or the styled table shown in Figure 9) requires the styles.xml part.
- Images (such as those shown in Figures 3 and 4) include the binary image data in one (and sometimes two) additional parts.
- SmartArt diagrams (such as the one shown in Figure 10) require multiple additional parts to describe the layout and content.
- Charts (such as the one shown in Figure 11) require multiple additional parts, including their own relationship (.rels) part.

You can see edited examples of the markup for all of these content types in the previously referenced code sample Word-Add-in-Load-and-write-Open-XML. You can insert all of these content types using the same JavaScript code shown earlier (and provided in the referenced code samples) for inserting content at the active selection and writing content to a specified location using bindings.

Before you explore the samples, let's take a look at a few tips for working with each of these content types.

Important: Remember, if you are retaining any additional parts referenced in document.xml, you will need to retain document.xml.rels and the relationship definitions for the applicable parts you're keeping, such as styles.xml or an image file.
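One way to catch a broken package early is to cross-check the relationship IDs that document.xml references against the ones document.xml.rels still defines. The sketch below is a hypothetical, regex-based helper rather than a real Office Open XML parser; it returns referenced IDs that no longer have a matching relationship entry.

```javascript
// Hypothetical sanity check: given the text of document.xml and of
// document.xml.rels, list relationship IDs that the content references
// (r:embed, r:id, r:link) but that the .rels part does not define.
// Regex-based and approximate: a sketch, not a full OOXML parser.
function findMissingRelationships(documentXml, relsXml) {
  var referenced = documentXml.match(/r:(?:embed|id|link)="([^"]+)"/g) || [];
  var missing = [];
  referenced.forEach(function (ref) {
    // Strip the attribute name and quotes to get the bare rId value.
    var id = ref.replace(/^.*="/, '').replace(/"$/, '');
    if (relsXml.indexOf('Id="' + id + '"') === -1 && missing.indexOf(id) === -1) {
      missing.push(id);
    }
  });
  return missing;
}
```

Running a check like this over trimmed markup before testing it in the add-in can save a round trip: a non-empty result means a part or relationship was deleted that the content still points at.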
Working with styles

The same approach to editing the markup that we looked at for the preceding example with directly formatted text applies when using paragraph styles or table styles to format your content. However, the markup for working with paragraph styles is considerably simpler, so that is the example described here.

Editing the markup for content using paragraph styles

The following markup represents the body content for the styled text example shown in Figure 2.

<w:body>
  <w:p>
    <w:pPr>
      <w:pStyle w:val="Heading1"/>
    </w:pPr>
    <w:r>
      <w:t>This text is formatted using the Heading 1 paragraph style.</w:t>
    </w:r>
  </w:p>
</w:body>

Note: As you see, the markup for formatted text in document.xml is considerably simpler when you use a style, because the style contains all of the paragraph and font formatting that you would otherwise need to reference individually. However, as explained earlier, you might want to use styles or direct formatting for different purposes: use direct formatting to specify the appearance of your text regardless of the formatting in the user's document; use a paragraph style (particularly a built-in paragraph style name, such as Heading 1 shown here) to have the text formatting automatically coordinate with the user's document.

Use of a style is a good example of how important it is to read and understand the markup for the content you're inserting, because it's not explicit that another document part is referenced here. If you include the style reference in this markup but don't include the styles.xml part, the style information in document.xml will be ignored, regardless of whether or not that style is in use in the user's document.

However, if you take a look at the styles.xml part, you'll see that only a small portion of this long piece of markup is required when editing markup for use in your add-in:

The styles.xml part includes several namespaces by default.
If you are only retaining the required style information for your content, in most cases you only need to keep the xmlns:w namespace.

The w:docDefaults tag content that falls at the top of the styles part will be ignored when your markup is inserted via the add-in and can be removed.

The largest piece of markup in a styles.xml part is for the w:latentStyles tag that appears after docDefaults, which provides information (such as appearance attributes for the Styles pane and Styles gallery) for every available style. This information is also ignored when inserting content via your add-in, so it can be removed.

Following the latent styles information, you see a definition for each style in use in the document from which your markup was generated. This includes some default styles that are in use when you create a new document and may not be relevant to your content. You can delete the definitions for any styles that aren't used by your content.

Note: Each built-in heading style has an associated Char style that is a character style version of the same heading format. Unless you've applied the heading style as a character style, you can remove it. If the style is used as a character style, it appears in document.xml in a run properties tag (w:rPr) rather than a paragraph properties (w:pPr) tag. This should only be the case if you've applied the style to just part of a paragraph, but it can occur inadvertently if the style was incorrectly applied.

If you're using a built-in style for your content, you don't have to include a full definition. You only need to include the style name, style ID, and at least one formatting attribute in order for the coerced Office Open XML to apply the style to your content upon insertion. However, it's a best practice to include a complete style definition (even if it's the default for built-in styles).
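When trimming styles.xml by hand, it can help to script the extraction of just the definitions you need. The following sketch (a hypothetical helper, not part of any sample; it assumes w:style elements are not nested, which holds for WordprocessingML) pulls a single style definition out of the full part by its style ID:

```javascript
// Hypothetical trimming aid: pulls one <w:style> definition out of the
// full styles.xml text by its w:styleId, so you can keep only the styles
// your content actually uses. Assumes style elements are not nested.
function extractStyleDefinition(stylesXml, styleId) {
  var marker = 'w:styleId="' + styleId + '"';
  var at = stylesXml.indexOf(marker);
  if (at === -1) return null; // style not defined in this part
  // Walk back to the opening tag and forward to the matching close.
  var open = stylesXml.lastIndexOf('<w:style ', at);
  var close = stylesXml.indexOf('</w:style>', at);
  return stylesXml.substring(open, close + '</w:style>'.length);
}
```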
If a style is already in use in the destination document, your content will take on the resident definition for the style, regardless of what you include in styles.xml. If the style isn't yet in use in the destination document, your content will use the style definition you provide in the markup.

So, for example, the only content we needed to retain from the styles.xml part for the sample text shown in Figure 2, which is formatted using the Heading 1 style, is the following.

Note: A complete Word 2013 definition for the Heading 1 style has been retained in this example.

<pkg:part pkg:name="/word/styles.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml">
  <pkg:xmlData>
    <w:styles xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
      <w:style w:type="paragraph" w:styleId="Heading1">
        <w:name w:val="heading 1"/>
        <w:basedOn w:val="Normal"/>
        <w:next w:val="Normal"/>
        <w:link w:val="Heading1Char"/>
        <w:uiPriority w:val="9"/>
        <w:qFormat/>
        <w:pPr>
          <w:keepNext/>
          <w:keepLines/>
          <w:spacing w:before="240" w:after="0"/>
          <w:outlineLvl w:val="0"/>
        </w:pPr>
        <w:rPr>
          <w:rFonts w:asciiTheme="majorHAnsi" w:eastAsiaTheme="majorEastAsia" w:hAnsiTheme="majorHAnsi" w:cstheme="majorBidi"/>
          <w:color w:val="2E74B5" w:themeColor="accent1" w:themeShade="BF"/>
          <w:sz w:val="32"/>
          <w:szCs w:val="32"/>
        </w:rPr>
      </w:style>
    </w:styles>
  </pkg:xmlData>
</pkg:part>

Editing the markup for content using table styles

When your content uses a table style, you need the same relative part of styles.xml as described for working with paragraph styles. That is, you only need to retain the information for the style you're using in your content, and you must include the name, ID, and at least one formatting attribute, but you are better off including a complete style definition to address all potential user scenarios. However, when you look at the markup both for your table in document.xml and for your table style definition in styles.xml, you see far more markup than when working with paragraph styles.

In document.xml, formatting is applied by cell even if it's included in a style. Using a table style won't reduce the volume of markup. The benefit of using table styles for the content is easy updating and easily coordinating the look of multiple tables.
In styles.xml, you'll see a substantial amount of markup for a single table style as well, because table styles include several types of possible formatting attributes for each of several table areas, such as the entire table, heading rows, odd and even banded rows and columns (separately), the first column, and so on.

Working with images

The markup for an image includes a reference to at least one part that includes the binary data that describes your image. For a complex image, this can be hundreds of pages of markup and you can't edit it. Because you don't ever have to touch the binary part(s), you can simply collapse them if you're using a structured editor such as Visual Studio, so that you can still easily review and edit the rest of the package.

If you check out the example markup for the simple image shown earlier in Figure 3, available in the previously referenced code sample Word-Add-in-Load-and-write-Open-XML, you'll see that the markup for the image in document.xml includes size and position information as well as a relationship reference to the part that contains the binary image data. That reference is included in the a:blip tag, as follows:

<a:blip r:embed="rId4"/>

Be aware that, because a relationship reference is explicitly used (r:embed="rId4") and that related part is required in order to render the image, if you don't include the binary data in your Office Open XML package, you will get an error. This is different from styles.xml, explained previously, which won't throw an error if omitted, since the relationship is not explicitly referenced and the relationship is to a part that provides attributes to the content (formatting) rather than being part of the content itself.

Note: When you review the markup, notice the additional namespaces used in the a:blip tag. You'll see in document.xml that the xmlns:a namespace (the main drawingML namespace) is dynamically placed at the beginning of the use of drawingML references rather than at the top of the document.xml part.
However, the relationships namespace (r) must be retained where it appears at the start of document.xml. Check your picture markup for additional namespace requirements. Remember that you don't have to memorize which types of content require which namespaces; you can easily tell by reviewing the prefixes of the tags throughout document.xml.

Understanding additional image parts and formatting

When you use some Office picture formatting effects on your image, such as for the image shown in Figure 4, which uses adjusted brightness and contrast settings (in addition to picture styling), a second binary data part for an HD format copy of the image data may be required. This additional HD format is required for formatting considered a layering effect, and the reference to it appears in document.xml similar to the following:

<a14:imgLayer r:embed="rId5">

See the required markup for the formatted image shown in Figure 4 (which uses layering effects among others) in the Word-Add-in-Load-and-write-Open-XML code sample.

Working with SmartArt diagrams

A SmartArt diagram has four associated parts, but only two are always required. You can examine an example of SmartArt markup in the Word-Add-in-Load-and-write-Open-XML code sample. First, take a look at a brief description of each of the parts and why they are or are not required.

Note: If your content includes more than one diagram, they will be numbered consecutively, replacing the 1 in the file names listed here.

- layout1.xml: This part is required. It includes the markup definition for the layout appearance and functionality.
- data1.xml: This part is required. It includes the data in use in your instance of the diagram.
- drawing1.xml: This part is not always required, but if you apply custom formatting to elements in your instance of a diagram, such as directly formatting individual shapes, you might need to retain it.
- colors1.xml: This part is not required.
It includes color style information, but the colors of your diagram will coordinate by default with the colors of the active formatting theme in the destination document, based on the SmartArt color style you apply from the SmartArt Tools Design tab in Word before saving out your Office Open XML markup.

- quickStyles1.xml: This part is not required. Similar to the colors part, you can remove this, as your diagram will take on the definition of the applied SmartArt style that's available in the destination document (that is, it will automatically coordinate with the formatting theme in the destination document).

Tip: The SmartArt layout1.xml file is a good example of a place where you may be able to further trim your markup but where it might not be worth the extra time to do so (because it removes such a small amount of markup relative to the entire package). If you would like to get rid of every last line of markup you can, delete the dgm:sampData tag and its contents. This sample data defines how the thumbnail preview for the diagram will appear in the SmartArt styles galleries. However, if it's omitted, default sample data is used.

Be aware that the markup for a SmartArt diagram in document.xml contains relationship ID references to the layout, data, colors, and quick styles parts. You can delete the references in document.xml to the colors and styles parts when you delete those parts and their relationship definitions (and it's certainly a best practice to do so, since you're deleting those relationships), but you won't get an error if you leave them, since they aren't required for your diagram to be inserted into a document. Find these references in document.xml in the dgm:relIds tag. Regardless of whether or not you take this step, retain the relationship ID references for the required layout and data parts.

Working with charts

Similar to SmartArt diagrams, charts contain several additional parts.
However, the setup for charts is a bit different from SmartArt, in that a chart has its own relationship file. Following is a description of required and removable document parts for a chart.

Note: As with SmartArt diagrams, if your content includes more than one chart, they will be numbered consecutively, replacing the 1 in the file names listed here.

In document.xml.rels, you'll see a reference to the required part that contains the data that describes the chart (chart1.xml). You also see a separate relationship file for each chart in your Office Open XML package, such as chart1.xml.rels.

There are three files referenced in chart1.xml.rels, but only one is required. These include the binary Excel workbook data (required) and the color and style parts (colors1.xml and styles1.xml) that you can remove.

Charts that you can create and edit natively in Word 2013 are Excel 2013 charts, and their data is maintained on an Excel worksheet that's embedded as binary data in your Office Open XML package. Like the binary data parts for images, this Excel binary data is required, but there's nothing to edit in this part. So you can just collapse the part in the editor to avoid having to manually scroll through it all to examine the rest of your Office Open XML package.

However, similar to SmartArt, you can delete the colors and styles parts. If you've used the chart styles and color styles available in Word 2013 to format your chart, the chart will take on the applicable formatting automatically when it is inserted into the destination document.

See the edited markup for the example chart shown in Figure 11 in the Word-Add-in-Load-and-write-Open-XML code sample.

Editing the Office Open XML for use in your task pane add-in

You've already seen how to identify and edit the content in your markup.
If the task still seems difficult when you take a look at the massive Office Open XML package generated for your document, following is a quick summary of recommended steps to help you edit that package down quickly.

Note: Remember that you can use all .rels parts in the package as a map to quickly check for document parts that you can remove.

- Open the flattened XML file in Visual Studio 2015 and press Ctrl+K, Ctrl+D to format the file. Then use the collapse/expand buttons on the left to collapse the parts you know you need to remove. You might also want to collapse long parts that you need but know you won't need to edit (such as the base64 binary data for an image file), making the markup faster and easier to visually scan.
- There are several parts of the document package that you can almost always remove when you are preparing Office Open XML markup for use in your add-in. You might want to start by removing these (and their associated relationship definitions), which will greatly reduce the package right away. These include the theme1, fontTable, settings, webSettings, thumbnail, both the core and add-in properties files, and any taskpane or webExtension parts.
- Remove any parts that don't relate to your content, such as footnotes, headers, or footers that you don't require. Again, remember to also delete their associated relationships.
- Review the document.xml.rels part to see if any files referenced in that part are required for your content, such as an image file, the styles part, or SmartArt diagram parts. Delete the relationships for any parts your content doesn't require and confirm that you have also deleted the associated part. If your content doesn't require any of the document parts referenced in document.xml.rels, you can delete that file also.
- If your content has an additional .rels part (such as chart#.xml.rels), review it to see if there are other parts referenced there that you can remove (such as quick styles for charts), and delete both the relationship from that file as well as the associated part.
- Edit document.xml to remove namespaces not referenced in the part, section properties if your content doesn't include a section break, and any markup that's not related to the content that you want to insert. If you're inserting shapes or text boxes, you might also want to remove extensive fallback markup.
- Edit any additional required parts where you know that you can remove substantial markup without affecting your content, such as the styles part.

After you've taken the preceding seven steps, you've likely cut between about 90 and 100 percent of the markup you can remove, depending on your content. In most cases, this is likely to be as far as you want to trim. Regardless of whether you leave it here or choose to delve further into your content to find every last line of markup you can cut, remember that you can use the previously referenced code sample Word-Add-in-Get-Set-EditOpen-XML as a scratch pad to quickly and easily test your edited markup.

Tip: If you update an Office Open XML snippet in an existing solution while developing, clear temporary Internet files before you run the solution again to update the Office Open XML used by your code. Markup that's included in your solution in XML files is cached on your computer. You can, of course, clear temporary Internet files from your default web browser. To access Internet options and delete these settings from inside Visual Studio 2015, on the Debug menu, choose Options and Settings. Then, under Environment, choose Web Browser and then choose Internet Explorer Options.

Creating an add-in for both template and stand-alone use

In this topic, you've seen several examples of what you can do with Office Open XML in your add-ins for Word.
We've looked at a wide range of rich content type examples that you can insert into documents by using the Office Open XML coercion type, together with the JavaScript methods for inserting that content at the selection or at a specified (bound) location.

So, what else do you need to know if you're creating your add-in both for stand-alone use (that is, inserted from the Store or a proprietary server location) and for use in a pre-created template that's designed to work with your add-in? The answer might be that you already know all you need.

The markup for a given content type and the methods for inserting it are the same whether your add-in is designed to stand alone or to work with a template. If you are using templates designed to work with your add-in, just be sure that your JavaScript includes callbacks that account for scenarios where referenced content might already exist in the document (as demonstrated in the binding example shown in the section Add and bind to a named content control).

When using templates with your app, whether the add-in will be resident in the template at the time that the user creates the document or the add-in will be inserting a template, you might also want to incorporate other elements of the API to help you create a more robust, interactive experience. For example, you may want to include identifying data in a customXML part that you can use to determine the template type, in order to provide template-specific options to the user. To learn more about how to work with custom XML in your add-ins, see the additional resources that follow.

Additional resources

- JavaScript API for Office
- Standard ECMA-376: Office Open XML File Formats (access the complete language reference and related documentation on Open XML here)
- Exploring the JavaScript API for Office: Data Binding and Custom XML Parts
https://dev.office.com/docs/add-ins/word/create-better-add-ins-for-word-with-office-open-xml
#include <GMapAreas.h>

Definition at line 435 of file GMapAreas.h.

Definition of base map area classes. The currently supported areas can be rectangular (GMapRect), elliptical (GMapOval) and polygonal (GMapPoly). Every map area, besides the definition of its shape, contains information about display style and an optional URL, which it may refer to. If this URL is not empty, the map area will work like a hyperlink. The classes also implement some useful functions to ease geometry manipulations.
http://djvulibre.sourcearchive.com/documentation/3.5.14/classGMapOval.html
Tuesday, September 2, 2008

Google Chrome

I love their new 'start-page' concept, for instance. They've also created a comic where they explain the concepts and techniques behind Chrome. Very interesting as well. :)

Monday, August 25, 2008

On reading books .... D...

Tuesday, August 19, 2008

Locking system with aspect oriented programming

Intro

A few months ago, I had to implement a 'locking system' at work. I will not elaborate too much on this system, but its intention is to let users prevent certain properties of certain entities from being updated automatically. The software system in which I had to implement this functionality keeps a large database up to date by processing and importing lots of data files that we receive from external sources. Because of that, in certain circumstances, users want to avoid that data they've manually changed or corrected gets overwritten with wrong information the next time a file is processed. The application I'm talking about makes heavy use of DataSets, and I was able to create a rather elegant solution for it. At the same time, I've also been thinking about how I could solve this same problem in a system that is built around POCOs instead of DataSets, and that's what this post will be all about. :)

Enter Aspects

When the idea of implementing such a system first crossed my mind, I already realized that Aspect Oriented Programming could be very helpful in solving this problem. A while ago, I had already played with Aspect Oriented Programming using Spring.NET. AOP was very nice and interesting, but I found the runtime weaving a big drawback. Making use of runtime weaving meant that you could not directly create an instance using its constructor.
So, instead of:

MyClass c = new MyClass();

you had to instantiate instances via a ProxyFactory:

ProxyFactory f = new ProxyFactory(new TestClass());
f.AddAdvice(new MethodInvocationLoggingAdvice());
ITest t = (ITest)f.GetProxy();

I am sure you agree that this is quite a hassle just to create a simple instance. (Yes, I know, of course you can abstract this away by making use of a Factory...)

Recently however, I bumped into an article on Patrick De Boeck's weblog where he was talking about PostSharp. PostSharp is an aspect weaver for .NET which weaves at compile time! This means that the drawback I just described for runtime weaving disappears. So, I no longer had an excuse not to start implementing a similar locking system for POCOs.

Bring it on

I like the idea of Test-Driven Development, so I started out by writing a first simple test:

The advantage of writing your test first is that you start thinking about what the interface of your class should look like. This first test tells us that our class should have a Lock and an IsLocked method. The purpose of the Lock method is to put a 'lock' on a certain property, so that we can prevent this property from being modified at run time. The IsLocked method is there to tell us whether a property is locked or not. To define this contract, I've created an interface ILockable which contains these two methods. In order to get this first test working, I've created an abstract class LockableEntity which inherits from one of my base entity classes and implements this interface. This LockableEntity class looks like this:

This is not sufficient to get a green bar on my first test, since I still need an AuditablePerson class:

These pieces of code are sufficient to make my first test pass, so I continued with writing a second test:

As you can see, in this test case I define that it should be possible to unlock a property.
Unlocking a property means that the value of that property can be modified by the user at runtime. To implement this simple functionality, it was sufficient to add an UnLock method to the LockableEntity class. Simple, but now a more challenging feature is coming up. We can already 'lock' and 'unlock' properties, but nothing really prevents us from changing a locked property. It's about time to tackle this problem, so I've written a third test. Running this test obviously gives a red bar, since we haven't implemented any logic yet. The most straightforward way to implement this functionality would be to check, in the setter of the Name property, whether a lock exists on this property. If a lock exists, we should not change the value of the property; otherwise we allow the change. I think this is a fine opportunity to use aspects.

Creating the Lockable aspect
As I've mentioned earlier, I have used PostSharp to create the aspects. Once you've downloaded and installed PostSharp, you can create an aspect rather easily. There is plenty of documentation to be found on the PostSharp site, so I'm not going to elaborate here on the 'getting started' aspect (no pun intended). Instead, I'll dive directly into the Lockable aspect that I've created. Perhaps I should first explain how I would like to use this Lockable aspect: I'd like to be able to decorate the properties of a class that should be 'lockable' with an attribute. Decorating a property with the Lockable attribute means that the user should be able to 'lock' this property, that is, prevent it from being changed after it has been locked. To implement this, I've created a class which inherits from the OnMethodInvocationAspect class (which eventually inherits from Attribute). Why did I choose this class to inherit from?
Well, because there is no OnPropertyInvocationAspect class or anything similar. As you probably know, the getter and setter of a property are actually implemented as get_ and set_ methods, so it is perfectly possible to use the OnMethodInvocationAspect class to add extra 'concerns' to a property. This extra functionality is written in the OnInvocation method that I've overridden in the LockableAttribute class. In fact, it does nothing more than check whether we're in the setter method of the property, and if we are, check whether a lock exists on the property. If a lock exists, we won't allow the property value to be changed; otherwise, we just make sure that the implementation of the property itself is called. In the implementation, you can see that we use reflection to determine whether we're in the setter method or in the getter method of the property; we're only interested in whether the property is locked if we're about to change its value. Next, we need the name of the property whose setter method we're entering. This is done via the GetPropertyForSetterMethod method, which uses reflection as well to get the PropertyInfo object for the given setter method. Once that is done, I can use the IsLocked method to check whether the property is locked or not. Note that I haven't checked whether the conversion from eventArgs.Delegate.Target to ILockable has succeeded. More on that later... When the property is locked, I call the OnAttemptToModifyLockedProperty method (which is declared in ILockable), which just raises the LockedPropertyChangeAttempt event (also declared in the ILockable interface). By doing so, the programmer can decide what should happen when someone or something attempts to change a locked property. This gives a bit more control to the programmer and is much more flexible than throwing an exception. When the property is not locked, we let the setter method execute.
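Since the original listing is also missing here, the OnInvocation logic just described might be sketched roughly as follows. The PostSharp 1.x member names (MethodInvocationEventArgs, Proceed) are written from memory and may differ slightly from the real API; eventArgs.Delegate.Target is the only member the original text actually mentions:

```csharp
// Hypothetical sketch of the aspect described in the text.
[Serializable]
public sealed class LockableAttribute : OnMethodInvocationAspect
{
    public override void OnInvocation(MethodInvocationEventArgs eventArgs)
    {
        MethodBase method = eventArgs.Delegate.Method;

        // Only setters interest us: reading a locked property is fine.
        if (method.Name.StartsWith("set_"))
        {
            PropertyInfo property = GetPropertyForSetterMethod(method);
            ILockable target = eventArgs.Delegate.Target as ILockable;

            if (target != null && target.IsLocked(property.Name))
            {
                // Let the programmer decide what happens; don't throw.
                target.OnAttemptToModifyLockedProperty(property.Name);
                return; // swallow the assignment
            }
        }

        // Not locked (or a getter): run the original implementation.
        eventArgs.Proceed();
    }

    private static PropertyInfo GetPropertyForSetterMethod(MethodBase setter)
    {
        string propertyName = setter.Name.Substring("set_".Length);
        return setter.DeclaringType.GetProperty(propertyName);
    }
}
```

A property would then be decorated as [Lockable] public string Name { get; set; }, which is the usage the post describes.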
With the creation of this aspect, our third test finally gives a green bar.

Compile-time validation
As I said a bit earlier, I haven't checked in the OnInvocation method whether the Target really implements the ILockable interface before calling methods of the ILockable type. The reason for this is quite simple: the OnMethodInvocationAspect class has a method CompileTimeValidate which you can override to add compile-time validation logic (hm, obvious). I made use of this to check whether the types to which I've applied the Lockable attribute really are ILockable types. Note that it should be possible to make this code more concise, but I could not just call method.DeclaringType.GetInterface("ILockable"), since that gave a NotImplementedException while compiling. Strange, but true. Now, when I use the Lockable attribute on a type which is not ILockable, I'll get compiler errors. Pretty neat, huh? What's left is a way to persist the locks in a datastore, but that will be a story for some other time...

Monday, July 28, 2008
NHibernate in a remoting / WCF scenario

Saturday, July 5, 2008
NHibernate IInterceptor: an AuditInterceptor ...

Tuesday, July 1, 2008
NHibernate Session Management

Monday, June 30, 2008
New Layout
I've changed the layout of my weblog; I hope you like it. If you have any remarks regarding the layout, if you don't find it readable, or if you miss something, please let me know.

Friday, June 13, 2008
Setting Up Continuous Integration, Part II: configuring CruiseControl.NET
Now that we've created our buildscript in part I, it's time to set up the configuration file for CruiseControl.NET: the ccnet.config file and multiple project configurations. :) MSBuild doesn't support my sln file format. The MSBuild XmlLogger issue.

Saturday, June 7, 2008
Setting Up a Continuous Integration process using CruiseControl.NET and MSBuild. Part I: creating the MSBuild build script
Intro.
:)
Requirements:
- Make sure that the latest buildscript will be used
- Clean the source directory
- Get the latest version of the codebase out of Visual SourceSafe
- Build the entire codebase
- Execute the unit tests that I have using NUnit
- Perform a static code analysis using FxCop

The MSBuild build script
Skeleton of the buildscript.

Clean target.

Getlatest target: in order to get the latest version of the source out of SourceSafe, I've created a step that just makes use of the VssGet task that is part of the MSBuild Community Tasks project. Also, notice that this target depends on the createdirs target; this means that, when you execute the getlatest target, the createdirs target will be executed first.

BuildAll target. NUnit target. FxCop target ... Executing targets via MSBuild.

Sunday, April 20, 2008
using directives within namespaces
Sometimes I come across code examples where the programmer puts his using directives within the namespace declaration, like this:

    namespace MyNamespace
    {
        using System;
        using System.Data;
        using SomeOtherNamespace;

        public class MyClass
        {
        }
    }

I am used to putting my using directives outside the namespace block (no surprise, since VS.NET places them outside the namespace declaration by default when you create a new class):

    using System;
    using System.Data;

    namespace MyNamespace
    {
        public class MyClass
        {
        }
    }

So I'm wondering: what are the advantages of placing the using directives within the namespace declaration? I've googled a little, but I haven't found any clue as to why I should do it as well. Maybe you know a good reason, and can convince me to adapt my VS.NET templates?

Wednesday, March 12, 2008
VS.NET 2008: Form designer not working on Windows Vista

Monday, January 28, 2008
Debugging the .NET framework

Tuesday, January 15, 2008
Cannot open log for source {0} on Windows 2003 Server
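Going back to the MSBuild post above: the clean/createdirs/getlatest chain it describes might look roughly like the following skeleton. The paths and the VssGet attribute names are illustrative only; check the MSBuild Community Tasks documentation for the exact task signature:

```xml
<Project DefaultTargets="buildall"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Brings VssGet (and friends) into scope; install path may differ -->
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

  <PropertyGroup>
    <SourceDir>C:\build\source</SourceDir>
  </PropertyGroup>

  <!-- Remove the previous build's sources -->
  <Target Name="clean">
    <RemoveDir Directories="$(SourceDir)" />
  </Target>

  <!-- Recreate the working directories; depends on clean -->
  <Target Name="createdirs" DependsOnTargets="clean">
    <MakeDir Directories="$(SourceDir)" />
  </Target>

  <!-- Pull the latest sources out of SourceSafe; runs createdirs first -->
  <Target Name="getlatest" DependsOnTargets="createdirs">
    <VssGet DatabasePath="\\server\vss\srcsafe.ini"
            Path="$/MyProject"
            LocalPath="$(SourceDir)"
            UserName="builduser"
            Password="..."
            Recursive="true" />
  </Target>
</Project>
```

Running msbuild build.proj /t:getlatest would then execute clean, createdirs, and getlatest in that order, which is the dependency behavior the post describes.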
In December I blogged about a little tool that I wrote to analyze hangs in dumps, and I showed the following output, but I didn't really get into the details of why the process was stuck here...

____________________________________________________________________________________________________________________
GC RELATED INFORMATION
____________________________________________________________________________________________________________________
The following threads are GC threads: 18 19
The following threads are waiting for the GC to finish: 14 16 24 26 27 28 30 31 36 37 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 57 58 59 60 62 63 64 65 66 67 68 69 70 71 72 73 74 77 78
The GC was triggered by thread: 75
The GC is working on suspending threads to continue with garbage collection
The following threads can't be suspended because preemptive GC is disabled: 23 25 33 34 35 38 56 61
The Finalizer (Thread 20) is not blocked

The issue the customer is running into is a hang under heavy load. The only way to get out of the hang is to recycle the process (IISReset).

Debugging the issue:
I have seen this issue before on a few occasions, and although, as you will see later, it has since been fixed in the framework, in my earlier cases we ended up not needing a fix, since the customers I worked with made code changes that meant they were no longer subject to the issue. So what is going on here? Thread 75 triggered a garbage collection by making an allocation that would have pushed Gen 0 over its allocation budget.
0:075> kb 2000 ChildEBP RetAddr Args to Child 1124de6c 7c822124 77e6bad8 000002e8 00000000 ntdll!KiFastSystemCallRet 1124de70 77e6bad8 000002e8 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 1124dee0 79e718fd 000002e8 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 1124df24 79e718c6 000002e8 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 1124df74 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 1124df84 7a0d0d0f ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 1124dfa8 7a0d5289 ffffffff 000d4558 106cb970 mscorwks!SVR::gc_heap::wait_for_gc_done+0x99 1124dfcc 7a0d5fa2 00000000 00000000 00000020 mscorwks!SVR::GCHeap::GarbageCollectGeneration+0x267 1124e058 7a0d691f 106cb970 00000020 00000000 mscorwks!SVR::gc_heap::try_allocate_more_space+0x1c0 1124e078 7a0d7ecc 106cb970 00000020 00000000 mscorwks!SVR::gc_heap::allocate_more_space+0x2f 1124e098 7a08bd32 106cb970 00000020 00000002 mscorwks!SVR::GCHeap::Alloc+0x74 1124e0b4 79e7b43e 00000020 00000000 00080000 mscorwks!Alloc+0x60 1124e180 79e8f41c 79157f42 1124e230 00000001 mscorwks!AllocateArrayEx+0x1d1 1124e244 7937f5c2 064f60b8 064f60b8 064f60b8 mscorwks!JIT_NewArr1+0x167 1124e27c 5088a509 00000000 00000000 00000000 mscorlib_ni!System.Reflection.RuntimeMethodInfo.GetParameters()+0x4a ... Nothing strange there, allocation and garbage collection happens all the time... however a GC is usually extremely fast and in this case we have 45 threads waiting for the GC to finish... 
0:014> kb ChildEBP RetAddr Args to Child 01a0fc74 7c822124 77e6bad8 000002e4 00000000 ntdll!KiFastSystemCallRet 01a0fc78 77e6bad8 000002e4 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 01a0fce8 79e718fd 000002e4 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 01a0fd2c 79e718c6 000002e4 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 01a0fd7c 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 01a0fd8c 7a0851cb ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 01a0fd9c 79f40e96 00000000 13d43bb8 0e874858 mscorwks!SVR::GCHeap::WaitUntilGCComplete+0x32 01a0fdd8 79e7385b 00000001 7a0e607b 00000001 mscorwks!Thread::RareDisablePreemptiveGC+0x1a1 01a0fde0 7a0e607b 00000001 13d43838 00000102 mscorwks!GCHolder<1,0,0>::GCHolder<1,0,0>+0x2d 01a0fe2c 7a0e673e 00000000 7a393704 7a114dea mscorwks!Thread::OnThreadTerminate+0x53 01a0fe38 7a114dea 0e874858 13d4385c 00000000 mscorwks!DestroyThread+0x43 01a0fe94 79f79c4f 00000000 00000000 00000000 mscorwks!ThreadpoolMgr::CompletionPortThreadStart+0x33d 01a0ffb8 77e6608b 000c4c00 00000000 00000000 mscorwks!ThreadpoolMgr::intermediateThreadProc+0x49 01a0ffec 00000000 79f79c09 000c4c00 00000000 kernel32!BaseThreadStart+0x34 so for some reason the GC appears to be taking some time... There are 2 GC threads (one per logical processor), thread 18 and 19 in this case... 
Thread 19 is simply waiting for work:

0:019> kb
ChildEBP RetAddr Args to Child
01e2fd68 7c822124 77e6bad8 000002d0 00000000 ntdll!KiFastSystemCallRet
01e2fd6c 77e6bad8 000002d0 00000000 00000000 ntdll!NtWaitForSingleObject+0xc
01e2fddc 79e718fd 000002d0 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac
01e2fe20 79e718c6 000002d0 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199
01e2fe70 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117
01e2fe80 7a0d8898 ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17
01e2fea8 7a0d8987 01e2ff00 77e60eb5 01e2fec8 mscorwks!SVR::gc_heap::gc_thread_function+0x58
01e2ffb8 77e6608b 000d5050 00000000 00000000 mscorwks!SVR::gc_heap::gc_thread_stub+0x9b
01e2ffec 00000000 7a0d88eb 000d5050 00000000 kernel32!BaseThreadStart+0x34

But interestingly enough, thread 18 is waiting to suspend all managed threads in order to continue the GC (to prevent anyone from allocating more data while it is performing the GC):

0:018> kb
ChildEBP RetAddr Args to Child
01defb10 7c821524 77e98ef4 00000f64 01defb34 ntdll!KiFastSystemCallRet
01defb14 77e98ef4 00000f64 01defb34 01defe04 ntdll!NtGetContextThread+0xc
01defb24 7a0de046 00000f64 01defb34 00010002 kernel32!GetThreadContext+0x11
01defe04 7a0defc1 00000f64 1069d328 13aa3858 mscorwks!EnsureThreadIsSuspended+0x3f
01defe4c 7a0e290a 00000000 00000000 13aa3874 mscorwks!Thread::SuspendThread+0xd0
01defe9c 7a086e76 00000000 13aa399c 00000000 mscorwks!Thread::SysSuspendForGC+0x5a6
01deff88 7a0d867b 00000001 00000000 000d4368 mscorwks!SVR::GCHeap::SuspendEE+0x16c
01deffa8 7a0d8987 00000000 13aa39ac 01deffec mscorwks!SVR::gc_heap::gc_thread_function+0x3b
01deffb8 77e6608b 000d4368 00000000 00000000 mscorwks!SVR::gc_heap::gc_thread_stub+0x9b
01deffec 00000000 7a0d88eb 000d4368 00000000 kernel32!BaseThreadStart+0x34

Normally, suspending all managed threads happens in a matter of nanoseconds, and pretty much the only thing that could cause the process to be stuck while suspending is if some
thread has disabled preemptive GC, i.e. told the GC that it is in a state where it can't be disturbed... Yun Jin describes PreemptiveGC like this in one of his posts Preemptive GC: also very important. In Rotor, this is m_fPreemptiveGCDisabled field of C++ Thread class. It indicates what GC mode the thread is in: "enabled" in the table means the thread is in preemptive mode where GC could preempt this thread at any time; "disabled" means the thread is in cooperative mode where GC has to wait the thread to give up its current work (the work is related to GC objects so it can't allow GC to move the objects around). When the thread is executing managed code (the current IP is in managed code), it is always in cooperative mode; when the thread is in Execution Engine (unmanaged code), EE code could choose to stay in either mode and could switch mode at any time; when a thread are outside of CLR (e.g, calling into native code using interop), it is always in preemptive mode. In our case, as we can see from the output from my tool the following threads have preemptive GC disabled (which is very uncommon) 23 25 33 34 35 38 56 61 In the !threads output it looks like this (notice the PreEmptive GC column) 0:018> !threads ThreadCount: 59 UnstartedThread: 0 BackgroundThread: 59 PendingThread: 0 DeadThread: 0 Hosted Runtime: yes PreEmptive GC Alloc Lock ID OSID ThreadOBJ State GC Context Domain Count APT Exception 16 1 1008 000d0828 1808220 Enabled 06b79310:06b79db0 000f0728 1 MTA (Threadpool Worker) 20 2 1e7c 000d62d0 b220 Enabled 00000000:00000000 000cd190 0 MTA (Finalizer) 21 3 1d3c 000db120 1220 Enabled 00000000:00000000 000cd190 0 Ukn 22 4 1c38 000ed3c8 80a220 Enabled 00000000:00000000 000cd190 0 MTA (Threadpool Completion Port) 23 5 19ac 0013a520 180b222 Disabled 06c17754:06c18bd8 00158e70 2 MTA (Threadpool Worker) 24 6 1fd4 0014d1a0 b220 Enabled 00000000:00000000 000f0728 1 MTA 25 7 1568 0e81aae0 180b222 Disabled 02cea570:02cec4a8 00158e70 2 MTA (Threadpool Worker) 26 8 
1ad0 0e82bd58 b220 Enabled 00000000:00000000 00158e70 0 MTA 27 9 1f04 0e829310 b220 Enabled 032b0cb8:032b2938 00158e70 0 MTA 28 a ffc 0e820d18 b220 Enabled 070173e4:070193c0 00158e70 1 MTA 14 b 1ee8 0e874858 1800220 Enabled 02b34750:02b34c78 000cd190 0 Ukn (Threadpool Worker) 30 d 1080 0e8aff40 b220 Enabled 00000000:00000000 0e87c310 0 MTA 31 e 11ec 0e8d1f28 8801220 Enabled 02b418e4:02b42c78 000cd190 0 MTA (Threadpool Completion Port) 33 f 1f1c 0e8e6420 180b222 Disabled 06bc1fa0:06bc3dcc 00158e70 2 MTA (Threadpool Worker) 34 10 1a6c 0e8e8b20 180b222 Disabled 02c449b8:02c45d74 00158e70 2 MTA (Threadpool Worker) 35 11 710 0e8ee550 180b222 Disabled 06cdde68:06cdea48 00158e70 2 MTA (Threadpool Worker) 36 c 1ae0 0e875228 180b220 Enabled 072bd88c:072bf508 000cd190 0 MTA (Threadpool Worker) 37 12 19c8 00147290 180b220 Enabled 06bb4b70:06bb5dcc 000f0728 1 MTA (Threadpool Worker) 38 13 18f0 0e8901d8 180b222 Disabled 02c49ce8:02c49d74 00158e70 2 MTA (Threadpool Worker) 39 14 19b8 0e8f7338 180b220 Enabled 073c6fbc:073c6fe8 00158e70 1 MTA (Threadpool Worker) 40 15 1308 0e8f8610 180b220 Enabled 02af10a4:02af10ac 00158e70 3 MTA (Threadpool Worker) 41 16 1f38 0e8f99e8 180b220 Enabled 070c7154:070c761c 000cd190 0 MTA (Threadpool Worker) 42 17 1be0 0e8facc0 180b220 Enabled 06ad3a74:06ad4cc8 000cd190 0 MTA (Threadpool Worker) 43 18 1efc 0e8fbf98 180b220 Enabled 0321b11c:0321c938 000cd190 0 MTA (Threadpool Worker) 44 19 1470 0e8fd160 1801220 Enabled 06ef6770:06ef7178 000cd190 0 MTA (Threadpool Worker) 45 1a 150 0e8fe438 180b220 Enabled 02a15a34:02a161bc 000cd190 0 MTA (Threadpool Worker) 46 1b 1b10 0e8ff710 180b220 Enabled 06b216b8:06b22d00 000f0728 1 MTA (Threadpool Worker) 47 1c 1c8c 10690ac0 180b220 Enabled 02be2430:02be3b00 000f0728 1 MTA (Threadpool Worker) 48 1d 16c8 10691ad8 180b220 Enabled 06b31520:06b3189c 000f0728 1 MTA (Threadpool Worker) 49 1e 1a5c 10692e20 180b220 Enabled 02ab3a48:02ab50ac 000f0728 1 MTA (Threadpool Worker) 50 1f 1908 10694168 180b220 Enabled 
06b686fc:06b69db0 000f0728 1 MTA (Threadpool Worker) 51 20 284 106954d0 180b220 Enabled 0319d1e4:0319e900 000cd190 0 MTA (Threadpool Worker) 52 21 1d74 10696708 180b220 Enabled 06bac634:06baddcc 000f0728 1 MTA (Threadpool Worker) 53 22 58c 106979c8 180b220 Enabled 02c14784:02c15d58 000f0728 1 MTA (Threadpool Worker) 54 23 1860 10698d10 180b220 Enabled 072e1984:072e3508 00158e70 1 MTA (Threadpool Worker) 55 24 1c9c 1069ae00 180b220 Enabled 071a4784:071a5508 000cd190 0 MTA (Threadpool Worker) 56 25 15c4 1069d328 180b222 Disabled 06bc7174:06bc7dcc 00158e70 2 MTA (Threadpool Worker) 57 26 1968 106a09e0 180b220 Enabled 0321da48:0321e938 000cd190 0 MTA (Threadpool Worker) 58 27 1e10 106a3608 180b220 Enabled 070172c8:070173c0 000cd190 0 MTA (Threadpool Worker) 59 28 1a18 106a6418 180b220 Enabled 02ac036c:02ac10ac 000f0728 1 MTA (Threadpool Worker) 60 29 1d2c 106a9148 180b220 Enabled 06a9b5a4:06a9cc90 000cd190 0 MTA (Threadpool Worker) 61 2a 1b50 106ac0a8 180b222 Disabled 06cdaf94:06cdbda8 00158e70 2 MTA (Threadpool Worker) 62 2b 96c 106ae8d0 8801220 Enabled 02c00980:02c01d3c 000cd190 0 MTA (Threadpool Completion Port) 63 2c 1b18 106afa50 180b220 Enabled 0322214c:03222938 000cd190 0 MTA (Threadpool Worker) 64 2d 1d78 106b2d48 180b220 Enabled 06f26e78:06f26eb4 000cd190 0 MTA (Threadpool Worker) 65 2e 198c 106b5cb8 180b220 Enabled 06ed8418:06ed8c98 000cd190 0 MTA (Threadpool Worker) 66 2f 1fcc 106b8b28 180b220 Enabled 0728d400:0728d508 00158e70 1 MTA (Threadpool Worker) 67 30 1958 106bb998 180b220 Enabled 0304e7e8:03050420 000cd190 0 MTA (Threadpool Worker) 68 31 1b48 106beb58 180b220 Enabled 031a5b98:031a691c 000cd190 0 MTA (Threadpool Worker) 69 32 1d80 106c19c8 180b220 Enabled 06b13444:06b14d00 000f0728 1 MTA (Threadpool Worker) 70 33 17cc 106c4838 180b220 Enabled 0700d468:0700f3a4 000cd190 0 MTA (Threadpool Worker) 71 34 1424 106c8890 180b220 Enabled 06b6f8bc:06b6fdb0 000f0728 1 MTA (Threadpool Worker) 72 35 1e00 106bd870 180b220 Enabled 06b8dcbc:06b8ddb0 000f0728 1 MTA 
(Threadpool Worker) 73 36 1d08 106c95e0 180b220 Enabled 071a081c:071a1508 000cd190 0 MTA (Threadpool Worker) 74 37 1fdc 106ca680 180b220 Enabled 02c05810:02c05d3c 000cd190 0 MTA (Threadpool Worker) 75 38 1b8c 106cb930 180b220 Enabled 0335f888:0335f88c 00158e70 2 MTA (Threadpool Worker) 76 39 1928 106cc900 880b220 Enabled 072bc2f0:072bd508 000cd190 0 MTA (Threadpool Completion Port) 77 3a 1fd0 106d7270 8801220 Enabled 02c18ca4:02c19d58 000cd190 0 MTA (Threadpool Completion Port) 78 3b 1640 106d7908 180b220 Enabled 0294474c:02945a7c 000cd190 0 MTA (Threadpool Worker) Why have these threads disabled preemptive GC, not allowing the GC to suspend them, and ultimately blocking our process? All the blocked threads are sitting in this type of callstack... trying to enter a lock (JIT_MonTryEnter). Normally you would see a thread either owning the lock or waiting in an awarelock like in this post, but here it is just spinning trying to enter the lock... 0:056> kb 2000 ChildEBP RetAddr Args to Child 10b0f4d8 0eb65afc 06bc5448 06bc53f8 22a9c796 mscorwks!JIT_MonTryEnter+0xad WARNING: Frame IP not in any known module. Following frames may be wrong. 10b0f504 69918f30 10b0f560 06bc5448 06bc5448 0xeb65afc 00000000 00000000 00000000 00000000 00000000 System_Web_Services_ni+0x28f30 unfortunately the managed stack is not very helpful in telling us where the lock was taken 0:056> !clrstack OS Thread Id: 0x15c4 (56) ESP EIP 10b0f8f8 79e73eac [ContextTransitionFrame: 10b0f8f8] 10b0f948 79e73eac [GCFrame: 10b0f948] 10b0faa0 79e73eac [ComMethodFrame: 10b0faa0] and if we run !syncblk we have no active sync blocks, so no help there when it comes to finding out who owns this lock... 0:056> !syncblk Index SyncBlock MonitorHeld Recursion Owning Thread Info SyncBlock Owner ----------------------------- Total 165 CCW 4 RCW 1 ComClassFactory 0 Free 8 seems like we are stuck between a rock and a hard place here... I have shown the command !dumpstack before... 
it shows a mixture of a managed and native callstack, but it is a raw stack, meaning that it will pretty much just display any addresses on the stack that happen to point to code. This means that !dumpstack will not give a true stack trace (i.e. unlike !clrstack and kb, where if functionA calls functionB we will see functionA directly below functionB in the stack), and anything shown by !dumpstack may or may not be correct. Anyway, with that little warning, here comes !dumpstack :)

0:056> !dumpstack
OS Thread Id: 0x15c4 (56)
Current frame: mscorwks!JIT_MonTryEnter+0xad
ChildEBP RetAddr Caller,Callee
10b0f47c 638c7552 (MethodDesc 0x63a539a0 +0x62 System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)), calling mscorwks!JIT_MonTryEnter
10b0f480 793463bb (MethodDesc 0x7923bbc8 +0xdb System.Collections.Hashtable..ctor(Int32, Single)), calling mscorwks!JIT_Dbl2IntSSE2
10b0f498 638b36f6 (MethodDesc 0x63a505f0 +0xa6 System.Xml.Schema.SchemaInfo..ctor()), calling (MethodDesc 0x7923bbc8 +0 System.Collections.Hashtable..ctor(Int32, Single))
10b0f49c 638b3719 (MethodDesc 0x63a505f0 +0xc9 System.Xml.Schema.SchemaInfo..ctor()), calling mscorwks!JIT_Writeable_Thunks_Buf
10b0f4d8 0eb65afc (MethodDesc 0xeb756f0 +0x5c CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage(System.Web.Services.Protocols.SoapMessage)), calling (MethodDesc 0x63a539a0 +0 System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet))
10b0f504 69918f30 (MethodDesc 0x699c4798 +0x3c System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean))
10b0f518 699227b3 (MethodDesc 0x699c5250 +0x2cb System.Web.Services.Protocols.SoapServerProtocol.Initialize()), calling (MethodDesc 0x699c4798 +0 System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean))
...
!dumpstack is a bit tricky to deal with, but in the stack above, if we start from the bottom of the portion I've shown, we have SoapServerProtocol.Initialize(), which calls SoapMessage.RunExtensions... this we can trust, because !dumpstack is actually telling us that Initialize is calling RunExtensions in this case (assuming that we believe that Initialize was called). RunExtensions doesn't tell us what it is calling, but that is because it is calling into a custom component, CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage. This is in turn calling into XmlSchemaSet.Add, which is calling into JIT_MonTryEnter. So, long story short, the real stack trace should look like this:

mscorwks!JIT_MonTryEnter+0xad
System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)
CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage(System.Web.Services.Protocols.SoapMessage)
System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean)
System.Web.Services.Protocols.SoapServerProtocol.Initialize()
...
!sos.clrstack just wasn't able to rebuild it because of the state it was in Looking through the output from ~* e !clrstack I find one stack that is currently in XmlSchemaSet.Add and thus is probably the one holding the lock we are trying to enter here OS Thread Id: 0x1308 (40) ESP EIP 104ce878 7c82ed54 [HelperMethodFrame: 104ce878] 104ce8d0 638b46bf System.Xml.Schema.SchemaNames..ctor(System.Xml.XmlNameTable) 104cebd4 638ca6fb System.Xml.Schema.XmlSchemaSet.GetSchemaNames(System.Xml.XmlNameTable) 104cebe0 638c96fc System.Xml.Schema.XmlSchemaSet.PreprocessSchema(System.Xml.Schema.XmlSchema ByRef, System.String) 104cebf8 638c86b7 System.Xml.Schema.XmlSchemaSet.Add(System.String, System.Xml.Schema.XmlSchema) 104cec04 638c7667 System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet) 104cec60 0eb65afc CustomComponent.Tools.Web.Services.Extensions.ValidationExtension.ProcessMessage(System.Web.Services.Protocols.SoapMessage) 104cec8c 69918f30 System.Web.Services.Protocols.SoapMessage.RunExtensions(System.Web.Services.Protocols.SoapExtension[], Boolean) 104ceca4 699227b3 System.Web.Services.Protocols.SoapServerProtocol.Initialize() 104cece8 6990d904 System.Web.Services.Protocols.ServerProtocolFactory.Create(System.Type, System.Web.HttpContext, System.Web.HttpRequest, System.Web.HttpResponse, Boolean ByRef) 104ced28 699263ab System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(System.Type, System.Web.HttpContext, System.Web.HttpRequest, System.Web.HttpResponse) 104ced64 69926329 System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(System.Web.HttpContext, System.String, System.String, System.String) 104ced88 65fc057c System.Web.HttpApplication.MapHttpHandler(System.Web.HttpContext, System.String, System.Web.VirtualPath, System.String, Boolean) 104cedcc 65fd58cd System.Web.HttpApplication+MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() 104ceddc 65fc1610 System.Web.HttpApplication.ExecuteStep(IExecutionStep, 
Boolean ByRef) 104cee1c 65fd32e0 System.Web.HttpApplication+ApplicationStepManager.ResumeSteps(System.Exception) 104cee6c 65fc0225 System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(System.Web.HttpContext, System.AsyncCallback, System.Object) 104cee88 65fc550b System.Web.HttpRuntime.ProcessRequestInternal(System.Web.HttpWorkerRequest) 104ceebc 65fc5212 System.Web.HttpRuntime.ProcessRequestNoDemand(System.Web.HttpWorkerRequest) 104ceec8 65fc3587 System.Web.Hosting.ISAPIRuntime.ProcessRequest(IntPtr, Int32) 104cf078 79f35ee8 [ContextTransitionFrame: 104cf078] 104cf0c8 79f35ee8 [GCFrame: 104cf0c8] 104cf220 79f35ee8 [ComMethodFrame: 104cf220] 0:040> kb ChildEBP RetAddr Args to Child 104ce5d4 7c822124 77e6bad8 000002e8 00000000 ntdll!KiFastSystemCallRet 104ce5d8 77e6bad8 000002e8 00000000 00000000 ntdll!NtWaitForSingleObject+0xc 104ce648 79e718fd 000002e8 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac 104ce68c 79e718c6 000002e8 ffffffff 00000000 mscorwks!PEImage::LoadImage+0x199 104ce6dc 79e7187c ffffffff 00000000 00000000 mscorwks!CLREvent::WaitEx+0x117 104ce6ec 7a0d0d0f ffffffff 00000000 00000000 mscorwks!CLREvent::Wait+0x17 104ce710 7a0d5dfa ffffffff 00001037 000d4368 mscorwks!SVR::gc_heap::wait_for_gc_done+0x99 104ce788 7a0d691f 0e8f8650 00000014 00000000 mscorwks!SVR::gc_heap::try_allocate_more_space+0x17 104ce7a8 7a0d7ecc 0e8f8650 00000014 00000000 mscorwks!SVR::gc_heap::allocate_more_space+0x2f 104ce7c8 7a08bd32 0e8f8650 00000014 00000002 mscorwks!SVR::GCHeap::Alloc+0x74 104ce7e4 79e754ff 00000014 00000000 00080000 mscorwks!Alloc+0x60 104ce824 79e755c1 639f59e0 02382edc 02af0090 mscorwks!FastAllocateObject+0x38 104ce8c8 638b46bf 02af0e60 00000000 00000000 mscorwks!JIT_NewFast+0x9e 104ce8cc 02af0e60 00000000 00000000 00000000 System_Xml_ni+0x1146bf WARNING: Frame IP not in any known module. Following frames may be wrong. 
104ce8d0 00000000 00000000 00000000 00000000 0x2af0e60

But unfortunately it can't give up this lock and finish what it is doing, because it is waiting for the GC to finish, so effectively we are in a deadlock: thread 40 holds a lock, but to release it, it needs the GC to complete. The GC can't complete because it can't suspend thread 56 (and other threads) that have preemptive GC disabled. Thread 56 can't enable preemptive GC until it gets the lock owned by thread 40. I should add that the only times I have seen this, it has been in the lock in XmlSchemaSet.Add, under heavy load when multiple threads were trying to access the same XmlSchemaSet.

Solution:
So what can we do about this? Well, a hotfix just came out (KB946644) that will fix this problem, so if you are running into this issue you can call into support and ask for that hotfix. In my earlier cases, the customers locked around the XmlSchemaSet in the custom component, since according to the MSDN documentation, instance members of XmlSchemaSet are not guaranteed to be thread-safe. This resolved the issue...

Laters, Tess

[Comment] Tess, is that a CLR hotfix or an XML hotfix? We are seeing the same root problem (GC thread can't suspend due to other threads w/ preemptive disabled waiting on crit secs owned by GC thread), but XML is not in the stacks at all.

[Tess] This hotfix is specifically for this issue... you might be running into the GC/loaderlock deadlock described here, though. There are two versions of it: 1. you load up mixed-mode (C++) DLLs and you either have managed code in DllMain or other managed entry points like static arrays etc., or 2.
you are loading up a DLL that references a strong-named mixed-mode DLL and you block in the policy resolution, and the load is done using a native loading method like CreateObject (something that grabs the loader lock). In case #1 you have to recompile the DLL with /noentry and follow the articles referenced in that post to make sure you have no entry points. In case #2 you should manually load the referenced DLL using Assembly.Load, in Application_Start for example. Sorry for the convoluted answer, the space is a little short in the comments :) I can write a post on case #2 soon if you think you are running into that, but hopefully this helps you in the meantime...

[Comment] From looking at the stacks, I know 1 thread (#31) is in GC trying to suspend 2 other threads (#17, #30) that are in preemptive-disabled mode and trying to enter crit secs owned by the GC thread (#31), hence the deadlock. (Note: the non-GC threads waiting on the crit secs are in exception handlers at the time.) This is happening in Microsoft CRM 3.0 (which is an ASP.NET 1.1 app). I am trying to better understand if this can be caused by the CRM app code or if that hotfix was for the CLR itself. It looks like an EE crit sec, and my understanding of preemptive is that it can only be set from the CLR unmanaged code. So based on that I am guessing it's a CLR bug, but I could be missing something. There is a US MS case open; would love it if you could take a look at the dump I uploaded 🙂 Waiting for TAMs to get their acts together.

[Tess] Sounds interesting; the support engineer you are working with probably debugs as much as I do 🙂 and is probably specializing in CRM, but send me the case number and support engineer's name or email address via the contact me section and I'll talk to him or her to see if they need a second opinion...

[Comment] Tess, we are having exactly the same issue as described in your article and are also working with MS support to resolve it.
I'll send you the details and would appreciate if you could give the person we're working with your opinion, as they don't seem to be able to find the hotfix you mentioned in the article and we're running out of options to try on our side. This is how the stack looks in our case:
2f8384b4 0a06f23b System_Xml!System.Xml.Schema.SchemaNames..ctor(System.Xml.XmlNameTable)+0x6d1
0a45e7fc 0a043bf0 System_Xml!System.Xml.Schema.XmlSchemaSet.GetSchemaNames(System.Xml.XmlNameTable)+0x43
0a45e7fc 0a043b98 System_Xml!System.Xml.Schema.XmlSchemaSet.PreprocessSchema(System.Xml.Schema.XmlSchema ByRef, System.String)+0x20
0a45e840 0a04f35a System_Xml!System.Xml.Schema.XmlSchemaSet.Add(System.String, System.Xml.Schema.XmlSchema)+0x28
0a45e840 09f6209a System_Xml!System.Xml.Schema.XmlSchemaSet.Add(System.Xml.Schema.XmlSchemaSet)+0x19a
As you can see it's almost an exact replica of what you have in the article. Hi Paul, The article doesn't seem to be public yet but the hotfix is available if they search on KB 946644; if they can't find it they can contact me. Tess Hi Tess, It seems like our support person already talked to you and indeed, the hotfix for KB 946644 helped in our case (SP1 by itself didn't). I can't say that the situation is completely resolved though, as we're still getting strange errors related to schema validation (102 errors out of 33k+ requests) that look like the following:
System.Xml.Schema.XmlSchemaValidationException: The 'Service' element is not declared.
at System.Xml.Schema.XmlSchemaValidator.ThrowDeclNotFoundWarningOrError(Boolean declFound)
at System.Xml.Schema.XmlSchemaValidator.ValidateElement(String localName, String namespaceUri, XmlSchemaInfo schemaInfo, String xsiType, String xsiNil, String)
Surely enough, the 'Service' element IS declared and the same request is executed just fine right before and right after the one that fails.
The element may be different in different errors, but it's always the same error and it's just a few lines of code away from the place where we had problems with the XmlSchemaSet.Add() call previously. Can it possibly be related to the hotfix? It appears that the schema object gets corrupted in some rare cases (internally), which causes it to lose information about some of the elements that are properly declared in the schema. Also, we do see abnormal GC patterns that were not observed under 1.1 on the same box with the same application (last tested three weeks ago). This includes a large "% time in GC" (even without any external requests), the number of Gen 0 to Gen 1 collections is close to 2:1 rather than the recommended 10:1, the Gen 2 heap size is significantly larger than Gen 0 or Gen 1, the number of allocated bytes/sec and Gen 0 promoted bytes/sec is large even with zero users and no incoming requests, and so on. I sent all the details with graphs and correlations to Michael Noto, who is handling our case (SRX080xxxxx0861). This difference between 1.1 and 2.0 appears to be somewhat similar to what was reported a long time ago by "philippe" in one of the comments to your High CPU in GC article. Any insight as to what might be causing it? Thanks much. Paul. Hi Paul, off the top of my head I can't say that I've seen this before, and I am currently on vacation so I don't have access to my usual resources. Based on what I know about the hotfix I don't think it is related though, as the hotfix deals with blocking the thread for GCs, but of course you can never be sure if it causes side effects. You might want to set up a breakpoint and dump on the first occurrence of this System.Xml.Schema.XmlSchemaValidationException so that you can examine the structures, but I would suggest that it is set up in such a way that the breakpoint is disabled after it's been hit once if they are very frequent.
I very frequently get emails like the one I got this morning: "Tess, It sounds like the hotfix for KB946644 I tried to open a support case to get hotfix KB 946644, and was told that this hotfix has been discontinued. Do you know why this would be the case, or if the problem is being addressed in some other way in a future hotfix? It would be nice to try the hotfix, at least for experimental purposes, to see whether the problem it addresses really is the cause of our symptoms. Is there any other way for us to get a copy?" I had a look and I can't see that it has been discontinued; from the looks of it, it will also be included in SP1. I think the confusion might be that the KB is not released yet, but the hotfix still seems to be there. Unfortunately I can't post the hotfix here, but if you contact your support specialist again they can contact me if they want. Hi Tess, Our team faced a similar deadlock recently. After several weeks of headaches we finally found the reason. Since I didn't find any similar information elsewhere, I want to share it. Possibly it will help somebody and save time & nerves. It was reproduced on .NET framework v4.0. The information about the threads proves that we faced a similar situation.
0:000> !threads
ThreadCount: 21
UnstartedThread: 0
BackgroundThread: 21
PendingThread: 0
DeadThread: 0
Hosted Runtime: no
PreEmptive GC Alloc Lock ID OSID ThreadOBJ State GC Context Domain Count APT Exception
...
11 6 d94 0fa8f048 b220 Enabled 6ef8e9c8:6ef8e9c8 005c7b40 0 MTA (GC) System.ExecutionEngineException (03551120)
...
19 11 13d0 0f58ca78 b220 Disabled 6ef8d70c:6ef8e9b0 005c7b40 2 MTA
...
0:011> !threadpool
CPU utilization: 81%
Worker Thread: Total: 9 Running: 1 Idle: 3 MaxLimit: 2047 MinLimit: 4
Work Request in Queue: 1
AsyncTimerCallbackCompletion TimerInfo@0b7e9db8
--------------------------------------
Number of Timers: 1
--------------------------------------
Completion Port Thread: Total: 0 Free: 0 MaxFree: 8 CurrentLimit: 0 MaxLimit: 1000 MinLimit: 4
Dumps of threads #11 and #19 show that they use the same instance of one object, 3c762ae8. However, it is "impossible" because this object is born and dies in thread #19, and there is no way execution can be split across two threads. Except one. The hole is an event that is raised inside the thread: the framework sends execution of the handler to another thread. As a result, thread #19 raises the event. Then the handler in thread #11 does its work and allocates memory, but memory is fragmented and the GC starts working in this thread. The GC tries to stop all threads but can't do it because thread #19 is already blocked by #11. In our case the fix was quite simple – we replaced the event with a delegate and the hole was closed.
0:011> !dumpstack
OS Thread Id: 0xd94 (11)
Current frame: clr!MethodDesc::GetSig+0x3b
ChildEBP RetAddr Caller,Callee
...
2471efd8 73e0370b clr!Thread::StackWalkFrames+0xc1, calling clr!__security_check_cookie
2471efe0 73f69583 clr!MethodDesc::ReturnsObject+0x24, calling clr!MetaSig::MetaSig
2471f058 3c923948 (MethodDesc 3c762ae8 +0x158 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoSmth(...)), calling 3c8ba030
2471f080 73f6cbf6 clr!Thread::HandledJITCase+0xc4, calling clr!Thread::StackWalkFramesEx
...
0:019> !dumpstack
OS Thread Id: 0x13d0 (19)
Current frame: 3c8ba034
ChildEBP RetAddr Caller,Callee
2810e4c4 3c923948 (MethodDesc 3c762ae8 +0x158 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoSmth(...), calling 3c8ba030
2810e53c 3c923695 (MethodDesc 3c762aa8 +0x65 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoWork(...)), calling (MethodDesc 3c762ae8 +0 SomeNamespace.SomeAlgoObject`1[[System.Double, mscorlib]].DoSmth(...))
...
https://blogs.msdn.microsoft.com/tess/2008/02/11/hang-caused-by-gc-xml-deadlock/
Kubernetes Documentation¶ Note This Kubernetes driver will be subject to change from community feedback. How to map the core assets (pods, clusters) to API entities will be subject to testing and further community feedback. Authentication¶ Authentication is currently supported with the following methods: - Basic HTTP Authentication - No authentication (testing only) Instantiating the driver¶
from libcloud.container.types import Provider
from libcloud.container.providers import get_driver

cls = get_driver(Provider.KUBERNETES)
conn = cls(key='my_username',
           secret='THIS_IS)+_MY_SECRET_KEY+I6TVkv68o4H',
           host='126.32.21.4')

for container in conn.list_containers():
    print(container.name)

for cluster in conn.list_clusters():
    print(cluster.name)
Deploying a container from Docker Hub¶ Docker Hub Client HubClient is a shared utility class for interfacing to the public Docker Hub Service. You can use this class for fetching images to deploy to services like ECS.
from libcloud.container.types import Provider
from libcloud.container.providers import get_driver
from libcloud.container.utils.docker import HubClient

cls = get_driver(Provider.KUBERNETES)
conn = cls(key='my_username',
           secret='THIS_IS)+_MY_SECRET_KEY+I6TVkv68o4H',
           host='126.32.21.4')

hub = HubClient()
image = hub.get_image('ubuntu', 'latest')

for cluster in conn.list_clusters():
    print(cluster.name)
    if cluster.name == 'default':
        container = conn.deploy_container(
            cluster=cluster,
            name='my-simple-app',
            image=image)
API Docs¶
class libcloud.container.drivers.kubernetes.KubernetesContainerDriver(key=None, secret=None, secure=False, host='localhost', port=4243)[source]¶
deploy_container(name, image, cluster=None, parameters=None, start=True)[source]¶ Deploy an installed container image. In Kubernetes this deploys a single container Pod.
destroy_container(container)[source]¶ Destroy a deployed container. Because the containers are single-container pods, this will delete the pod.
https://libcloud.readthedocs.io/en/latest/container/drivers/kubernetes.html
MRAA ISR not working (PaulaK, Aug 15, 2016 6:24 PM) Hi community, I'm trying to implement a simple Python program that showcases the ISR functionality of the MRAA library. Unfortunately, I'm not able to make it work. I'm using version 1.2.3 of the MRAA library and I tried similar code with Node.js and did not make any progress. Here is my Python code (it's from a SparkFun tutorial):
import mraa
import time

switch_pin_number = 7
led_pin_number = 5

def interr_test(args):
    led.write(1)
    time.sleep(0.2)
    led.write(0)

switch = mraa.Gpio(switch_pin_number)
led = mraa.Gpio(led_pin_number)

# Configuring the switch to input & led to output respectively
switch.dir(mraa.DIR_IN)
led.dir(mraa.DIR_OUT)

# The command below enables the interrupt.
switch.isr(mraa.EDGE_RISING, interr_test, interr_test)

# The interrupt is going to be valid for as long as the program runs
# Therefore we setup a dummy "do-nothing" condition
try:
    while(1):
        pass  # "do-nothing" condition
except KeyboardInterrupt:
    led.write(0)
    exit
Not even the provided example from the MRAA Git repository is working for me:
import mraa
import time
import sys

class Counter:
    count = 0

c = Counter()

# inside a python interrupt you cannot use 'basic' types so you'll need to use
# objects
def test(gpio):
    print("pin " + repr(gpio.getPin(True)) + " = " + repr(gpio.read()))
    c.count += 1

pin = 7
if (len(sys.argv) == 2):
    try:
        pin = int(sys.argv[1], 10)
    except ValueError:
        print("Invalid pin " + sys.argv[1])

try:
    x = mraa.Gpio(pin)
    print("Starting ISR for pin " + repr(pin))
    x.dir(mraa.DIR_IN)
    x.isr(mraa.EDGE_BOTH, test, x)
    var = raw_input("Press ENTER to stop")
    x.isrExit()
except ValueError as e:
    print(e)
I've also tried with Node.js, but with no success:
var mraa = require('mraa');

// Set up digital input on MRAA pin 36 (GP14)
var buttonPin = new mraa.Gpio(7);
buttonPin.dir(mraa.DIR_IN);

// Global counter
var num = 0;

// Our interrupt service routine
function serviceRoutine() {
    num++;
    console.log("BOOP " + num);
}

// Assign the ISR function to the button push
buttonPin.isr(mraa.EDGE_FALLING, serviceRoutine);

// Do nothing while we wait for the ISR
periodicActivity();
function periodicActivity() {
    setTimeout(periodicActivity, 1000);
}
Could somebody help with the MRAA ISR in Python, please? P.S. Are there specific pins to which we can attach an interrupt? Because I've tested one of the snippets above with pin 6 and it worked. Thanks.
1. Re: MRAA ISR not working (Aug 16, 2016 3:22 PM, in response to PaulaK) Hi PaulaK, We tested your code and it didn't work specifically with pins 7 and 8, and we got the same result with the GitHub example. We recommend you use other pins for these tasks; we tested your code with pins 10, 9, 6, 4, 3, and 2 and it worked perfectly. We only tried with these pins, so there is the possibility that your code runs with another pin, but we can assure you that your code works. I hope you find this information useful. Regards, -Leonardo
2. Re: MRAA ISR not working (Aug 19, 2016 7:52 AM, in response to PaulaK) Hi PaulaK, Was the information useful? Did your code work with other pins? Let us know if you have another problem. Regards, -Leonardo
3. Re: MRAA ISR not working (PaulaK, Aug 19, 2016 8:01 PM, in response to Intel Corporation) Hi Leonardo, The information was very useful. I ran some tests and got different results than yours. I'm using MRAA version 1.2.3, and the pins on which I can implement an ISR are 4, 5, 6, 9 and 11. Do you think the different results have something to do with my library version? Best, Paula
4. Re: MRAA ISR not working (Aug 22, 2016 3:15 PM, in response to PaulaK) Hi PaulaK, It is not the library version, because I am using the same version of MRAA (1.2.3). I tested your code on 2 different Galileo Gen2 boards and it worked perfectly with all the digital pins that I told you about in the first reply, plus pin 11, which you used too.
I found this document and it says that not all the pins support the different modes of interrupts; you can check the column "Interrupt Modes", where I can confirm that pins 7 and 8 don't work with interrupts. So it is weird that your code doesn't run on the other pins, so try to test them again. If they don't work, it could be a hardware problem with your Galileo board. Let me know if you have another problem. Regards, -Leonardo
5. Re: MRAA ISR not working (PaulaK, Aug 23, 2016 8:48 AM, in response to Intel Corporation) Hi Leonardo, Thank you so much for the document, it helped a lot. All pins worked as expected. Best, Paula
6. Re: MRAA ISR not working (Aug 24, 2016 3:02 PM, in response to PaulaK) Hi PaulaK, It is nice to see that the document helped you. If you need something else let us know. Regards, -Leonardo
https://communities.intel.com/thread/105370
Developers’ guide¶ These instructions are for developing on a Unix-like platform, e.g. Linux or Mac OS X, with the bash shell. If you develop on Windows, please get in touch. Mailing lists¶ General discussion of Neo development takes place in the NeuralEnsemble Google group. Discussion of issues specific to a particular ticket in the issue tracker should take place on the tracker. Using the issue tracker¶ If you find a bug in Neo, create a ticket. If you have an idea for an improvement to Neo, create a ticket with type "enhancement". If you already have an implementation of the idea, create a patch (see below) and attach it to the ticket. To keep track of changes to the code and to tickets, you can register for a GitHub account and then set to watch the repository on GitHub. Requirements¶
- Python 2.6, 2.7, 3.3-3.5
- numpy >= 1.7.1
- quantities >= 0.9.0
- if using Python 2.6, unittest2 >= 0.5.1
- Setuptools >= 0.7
- nose >= 0.11.1 (for running tests)
- Sphinx >= 0.6.4 (for building documentation)
- (optional) tox >= 0.9 (makes it easier to test with multiple Python versions)
- (optional) coverage >= 2.85 (for measuring test coverage)
- (optional) scipy >= 0.12 (for MatlabIO)
- (optional) h5py >= 2.5 (for KwikIO, NeoHdf5IO)
We strongly recommend you develop within a virtual environment (from virtualenv, venv or conda). It is best to have at least one virtual environment with Python 2.7 and one with Python 3.x. Getting the source code¶ We use the Git version control system. The best way to contribute is through GitHub. You will first need a GitHub account, and you should then fork the repository on GitHub. To get a local copy of the repository:
$ cd /some/directory
$ git clone git@github.com:<username>/python-neo.git
Now you need to make sure that the neo package is on your PYTHONPATH.
You can do this either by installing Neo:
$ cd python-neo
$ python setup.py install
$ python3 setup.py install
(if you do this, you will have to re-run setup.py install any time you make changes to the code) or by creating symbolic links from somewhere on your PYTHONPATH, for example:
$ ln -s python-neo/neo
$ export PYTHONPATH=/some/directory:${PYTHONPATH}
An alternative solution is to install Neo with the develop option; this avoids reinstalling when there are changes in the code:
$ sudo python setup.py develop
or using the "-e" option to pip:
$ pip install -e python-neo
To update to the latest version from the repository:
$ git pull
Running the test suite¶ Before you make any changes, run the test suite to make sure all the tests pass on your system:
$ cd neo/test
With Python 2.7 or 3.3: $. To run tests from an individual file:
$ python test_analogsignal.py
$ python3 test_analogsignal.py
To measure test coverage (with nose):
$ nosetests --with-coverage --cover-package=neo --cover-erase
Working on the documentation¶ All modules, classes, functions, and methods (including private and subclassed builtin methods) should have docstrings. Please see PEP257 for a description of docstring conventions. Module docstrings should explain briefly what functions or classes are present. Detailed descriptions can be left for the docstrings of the respective functions or classes. Private functions do not need to be explained here. Class docstrings should include an explanation of the purpose of the class and, when applicable, how it relates to standard neuroscientific data. They should also include at least one example, which should be written so it can be run as-is from a clean newly-started Python interactive session (that means all imports should be included). Finally, they should include a list of all arguments, attributes, and properties, with explanations. Properties that return data calculated from other data should explain what calculation is done. A list of methods is not needed, since documentation will be generated from the method docstrings.
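As a concrete illustration of these docstring conventions, here is a short, entirely hypothetical function (name and behavior invented for this sketch) whose docstring carries a runnable example; the standard doctest module can check such examples automatically:

```python
import doctest

def rescale(values, factor=2.0):
    """Return a new list with every value multiplied by *factor*.

    The example below is written so it runs as-is from a clean,
    newly-started interactive session, as the guide recommends:

    >>> rescale([1.0, 2.0, 3.0])
    [2.0, 4.0, 6.0]
    >>> rescale([1.0], factor=0.5)
    [0.5]
    """
    return [v * factor for v in values]

# doctest re-runs the examples embedded in the docstrings of this module
results = doctest.testmod(verbose=False)
print(results.failed)  # 0 when every docstring example passes
```

Running such checks alongside the normal test suite keeps the documented examples from drifting out of date.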
Method and function docstrings should include an explanation of what the method or function does. If this may not be clear, one or more examples may be included. Examples that are only a few lines do not need to include imports or setup, but more complicated examples should have them. Examples can be tested easily using the IPython %doctest_mode magic. This will strip >>> and ... from the beginning of each line of the example, so the example can be copied and pasted as-is. The documentation is written in reStructuredText, using the Sphinx documentation system. Any mention of another Neo module, class, attribute, method, or function should be properly marked up so automatic links can be generated. The same goes for quantities or numpy. To build the documentation:
$ cd python-neo/doc
$ make html
Then open some/directory/python-neo/doc/build/html/index.html in your browser. Committing your changes¶ Once you are happy with your changes, run the test suite again to check that you have not introduced any new bugs. It is also recommended to check your code with a code checking program, such as pyflakes or flake8. Once your changes are ready, push your branch to your fork of the Neo repository and open a pull request on GitHub. Python 3¶ Neo core should work with both recent versions of Python 2 (versions 2.6 and 2.7) and Python 3 (version 3.3 or newer). Neo IO modules should ideally work with both Python 2 and 3, but certain modules may only work with one or the other (see Installation). Using virtual environments makes this very straightforward. Coding standards and style¶ All code should conform as much as possible to PEP 8, and should run with Python 2.6, 2.7, and 3.3 or newer. You can use the pep8 program to check the code for PEP 8 conformity. You can also use flake8, which combines pep8 and pyflakes. However, the pep8 and flake8 programs do not check for all PEP 8 issues. In particular, they do not check that the import statements are in the correct order. Also, please do not use from xyz import *.
This is slow, can lead to conflicts, and makes it difficult for code analysis software. Making a release¶ Add a section in /doc/source/whatisnew.rst for the release. First check that the version string (in neo/version.py, setup.py, doc/conf.py and doc/install.rst) is correct. To build a source package:
$ python setup.py sdist
To upload the package to PyPI (currently Samuel Garcia and Andrew Davison have the necessary permissions to do this):
$ python setup.py sdist upload
$ python setup.py upload_docs --upload-dir=doc/build/html
Finally, tag the release in the Git repository and push it:
$ git tag <version>
$ git push --tags origin
If you want to develop your own IO module¶ See IO developers’ guide for implementation of a new IO.
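Returning to the version-string step in the release checklist above: the "same version everywhere" check is easy to automate. The sketch below is hypothetical (the file names come from the guide, but the regular expression and the in-memory file contents are assumptions for illustration):

```python
import re

# Assumed shape of a version line, e.g. version = '0.5.0' or version='0.5.0'
VERSION_PATTERN = re.compile(r"""version\s*=\s*['"]([^'"]+)['"]""")

def extract_version(text):
    """Return the first version string found in *text*, or None."""
    match = VERSION_PATTERN.search(text)
    return match.group(1) if match else None

def versions_consistent(file_texts):
    """file_texts maps filename -> contents; True if all versions agree."""
    found = {name: extract_version(text) for name, text in file_texts.items()}
    values = set(found.values())
    return len(values) == 1 and None not in values

# In-memory stand-ins for the files named in the guide:
sample = {
    "neo/version.py": "version = '0.5.0'\n",
    "setup.py": "setup(name='neo', version='0.5.0')\n",
}
print(versions_consistent(sample))  # True
```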
http://neo.readthedocs.io/en/neo-0.5.0alpha1/developers_guide.html
Introducing In my previous blog I've shown you how easily we can build a simple Workflow with the new SAP Cloud Platform Workflow service. In this blog I will start this WF periodically to get an overview of my current alerts every day. The setup is also a quite simple process, as described in this blog. Let's start… Enter your SAP CP Integration Web UI and switch to the "Monitor" view: In this view we must define/deploy our "User Credential" for the user which is allowed to start the WF. Now enter the "Security Material" tile and add the credential. Define the following information: - Name - Description - User And click Deploy. You should now see the new credential on top of your security materials. Now we can switch to the "Design" view of SAP CP Integration to define our new artifact (Integration Flow). Design the Integration Flow Select an existing package or create a new one. Create a new Integration Flow by providing a name; I call the Flow "timer_based_wf_instance_start". If we click Ok we switch over to the design perspective, where we can see an empty iflow. My first task is now to delete the "Start" event and the "Sender". The final Integration Flow should look like this: Integration Flow objects in detail The following objects are added/changed: - Timer: The Start Event is now a "Time based Event", because I want to schedule my Integration Flow once a day at 9:00, from Monday to Friday. - Content Modifier #1: This "Message Header" is required to fetch the XSRF token from the SAP Cloud Platform Workflow API.
The HTTP(S) connection between our Request-Reply step and the Receiver contains the following details: Address: the endpoint from the API documentation "https://<WFRUNTIME_HOST>hana.ondemand.com/workflow-service/rest/v1/xsrf-token" Method: GET Authentication: Basic Credential Name: the credential name which we created earlier, "WFAAS_USER" - Content Modifier #2: This contains, in the "Message Body" section, our "definitionid" for the SAP CP WF; this is required for the API call to start our WF instance. - Groovy Script: This script is used to store the cookie in our message header. You can also find the script in this blog, but there are two little errors; the script which works for me is this one:
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap
import java.util.ArrayList
import java.util.Map
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import groovy.xml.*
import java.io.*;

def Message processData(Message message){
    def headers = message.getHeaders();
    def cookie = headers.get("set-cookie");
    StringBuffer bufferedCookie = new StringBuffer();
    for(Object item : cookie){
        bufferedCookie.append(item + ";");
    }
    message.setHeader("cookie", bufferedCookie.toString());
    Logger log = LoggerFactory.getLogger(this.getClass());
    log.error("cookie" + bufferedCookie);
    return message;
}
- Finally we need the connection between the Message End and the Receiver, which contains the API call to start the WF: The HTTP(S) connection has the following Address: the endpoint from the API documentation "https://<WFRUNTIME_HOST>hana.ondemand.com/workflow-service/rest/v1/workflow-instances" Method: POST Authentication: Basic Credential Name: the credential name which we created earlier, "WFAAS_USER" Test the Integration Flow To test our newly created Integration Flow we just change the "Timer Start Event" to the option "Run Once".
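To see what the Groovy script's cookie handling boils down to, here is the same string-folding step as a tiny Python sketch (purely illustrative; the real logic runs inside SAP CPI as the Groovy script above, and the sample values are made up):

```python
def build_cookie_header(set_cookie_values):
    """Fold every "set-cookie" value returned by the token call into one
    "cookie" header value for the follow-up POST, separated by ";"."""
    return "".join(value + ";" for value in set_cookie_values)

# Made-up example values standing in for what the xsrf-token call returns:
tokens = ["XSRF-TOKEN=abc123; Path=/", "JSESSIONID=xyz"]
print(build_cookie_header(tokens))
```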
Finally, click on Deploy and check the "Monitor" view of SAP CP Integration. And yes, no error. So we should now have a new SAP CP WF instance running?… Enter the FLP and open the "Workflow Instances" tile: …and whoot, there it is… ;o) Conclusion I hope you can see how easily you can start an SAP Cloud Platform Workflow from SAP Cloud Platform Integration. For me, this is a (the) perfect combination. And it shows once more (hopefully) how simply you can combine different SAP Cloud Platform services. Finally, check my next blog for how we go ahead with the combination. cheers, fabian That's one more great post Fabian!!!
https://blogs.sap.com/2017/05/05/using-sap-cloud-platform-integration-to-start-a-sap-cloud-platform-workflow/
The most complicated thing to port to JavaScript is the pointer-juggling while parsing a string. You can translate it with the String object and handle it natively, that is, native to JavaScript, but you can instead build it, with some ado, with the new typed Arrays. To get an ECMAScript string into such an Array we need to allocate memory with ArrayBuffer. How much? As much as the String has characters, of course, but that is not as easy as it looks. ECMAScript uses two-byte Unicode, so every character occupies two bytes, except when it doesn't. So if you are really sure that you have only one-byte characters in the string, use string.length, otherwise use string.length * 2. That doubles the memory needed, so be careful if you want to pass the results over the net or store them in a local storage like localStorage. Example:
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

int main(int argc, char **argv){
    char *s = "ajshrbfgtd123.234e-2numberend";
    char *string;
    char *endptr;
    double result;

    printf("string before = \"%s\"\n", s);
    string = s;
    while(!isdigit(*(string++)));
    string--;
    printf("string after = \"%s\"\n", string);
    result = strtod(string, &endptr);
    printf("number = \"%f\"\n", result);
    printf("rest of string = \"%s\"\n", endptr);
    exit(EXIT_SUCCESS);
}
This is in JavaScript (one of many ways, of course, and almost all with fewer unnecessary complications) for one-byte-long characters:
"use strict";

function isdigit(c) {
    return ((c - 48) < 10);
}

function buftostr(mem) {
    return String.fromCharCode.apply(null, mem);
}

var s = 'ajshrbfgtd123.234e-2numberend';
var slen = s.length;
var buffer = new ArrayBuffer(slen);
var string = new Uint8Array(buffer);

// truncate anything to one byte, just in case
for (var i = 0; i < slen; i++) {
    string[i] = s.charCodeAt(i) & 0xff;
}

var stringp = 0;
var endptr = 0;

console.log('string before = "' + buftostr(string.subarray(stringp)) + '"');

// ignores a leading sign
while (!isdigit(string[stringp++]));
stringp--;  // step back onto the first digit, like string-- in the C version

// repair leading sign. Very inelegant
if(string[stringp-1] == 0x2B || string[stringp-1] == 0x2D){
    stringp--;
}

console.log('string after = "' + buftostr(string.subarray(stringp)) + '"');

var result = parseFloat(buftostr(string.subarray(stringp)));
console.log('number = "' + result + '"');

// finding endptr is more difficult, parseFloat doesn't set one
// one would need to parse manually
console.log('rest of string = "' + buftostr(string.subarray(stringp + 10)) + '"');

// But we can show how to snip a part of the string out
console.log('number as a string = "' + buftostr(string.subarray(stringp, stringp + 10)) + '"');
Here TypedArray.subarray(start[,end]) returns a new TypedArray. If you want to do it in-place you can use offsets: snippet = new TypedArray(buffer,start[,end]). The offset is kept in snippet.byteOffset, and the length of the snippet in bytes is kept in snippet.byteLength. Such a typed array can have different sizes of elements, but the number of elements is kept in typedarray.length. The exact size (in bytes) of a single element can be read from TypedArray.BYTES_PER_ELEMENT. Access to the raw buffer (read-only) is by way of the property snippet.buffer. Juggling with the elements inside of the typed arrays is possible: TypedArray.copyWithin(destination, start[, end]) copies the elements from start to end (or the end of the typed array if omitted) to the place in the typed array starting at the index destination. A shortcut is also available: you can place an Array (typed or not) into the typed array starting at a specific index (or zero if omitted) with TypedArray.set(array[,start]). This method overwrites any values that have been there before. The method subarray is the same as the Array.slice method. I don't know who was the one who decided that. A committee perhaps? Please be aware that the whole thing is defined in the next ECMAScript standard 6, which is still a draft, although almost all current browsers and other ECMAScript engines offer basic support at least.
Basic methods are the methods I intentionally restricted myself to in this post. Let's use this knowledge to build our own parseFloat: a function that takes a typed array that got set to the beginning of a number, returns a number, and sets endptr accordingly. An IEEE-754 compliant base-10 number has a sign at the start or not; an integer part or not; a period (radix point, decimal point) or not; a fractional part or not; an exponent part or not. The exponent part starts with the letter "E" or "e"; has a sign or not; and has at least one digit.
"use strict";

function strtod(str, strptr, endptr){
    // these are all flags and should be done with a bit-mask instead
    var decimal_point = 0;
    var sign_mantissa = 0;
    var mantissa = 0;
    var sign_exponent = 0;
    var exponent_flag = 0;
    var exponent = 0;
    var goto_out_of_for_flag = 0;

    // set endptr to start value
    endptr[0] = strptr;

    // we restrict to base 10, so no prefix
    for (var i = 0; i < (str.length - strptr); i++) {
        // get the next character
        var c = str[strptr + i];
        // to avoid using numbers only
        switch (String.fromCharCode(c)) {
            case '+':
            case '-':
                // only two signs max
                if (sign_exponent != 0) {
                    return Number.NaN;
                } else if (exponent_flag != 0) {
                    sign_exponent = 1;
                } else {
                    sign_mantissa = 1;
                }
                break;
            case '.':
                // only one decimal point allowed
                if (decimal_point != 0) {
                    return Number.NaN;
                } else {
                    decimal_point = 1;
                }
                break;
            case 'e':
            case 'E':
                // only one exponent allowed
                if (exponent_flag != 0) {
                    return Number.NaN;
                } else {
                    exponent_flag = 1;
                }
                break;
            case '0': case '1': case '2': case '3': case '4':
            case '5': case '6': case '7': case '8': case '9':
                if (exponent_flag != 0) {
                    exponent = 1;
                } else {
                    mantissa = 1;
                }
                break;
            default:
                // the first non-number character. For lack of goto set a flag
                goto_out_of_for_flag = 1;
                break;
        }
        if (goto_out_of_for_flag != 0) {
            break;
        }
        // keep pace
        endptr[0]++;
    }

    // we need a mantissa or a mantissa and an exponent
    // if all is still zero after the first character, we have a problem
    if (decimal_point == 0 && sign_mantissa == 0 && mantissa == 0 &&
        sign_exponent == 0 && exponent_flag == 0 && exponent == 0) {
        return Number.NaN;
    }
    // single exponent not allowed
    else if (mantissa == 0 && exponent_flag != 0 && exponent != 0) {
        return Number.NaN;
    }
    // decimal point without digits not allowed
    else if (mantissa == 0 && decimal_point != 0) {
        return Number.NaN;
    }
    // the string can be declared clean by now. If the parsing failed
    // at any point causing the return of NaN, endptr[0] is set to the index
    // of the mishap.
    return parseFloat(buftostr(str.subarray(strptr)));
}
It is a bit of a silly example because all of that can be done in native JavaScript with not much more than a regular expression. A regular expression is quite expensive, admitted, but we are doing the same work twice here because parseFloat() will do the very same, only better and faster.
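The regular-expression alternative mentioned here can be sketched briefly. The pattern is the interesting part and drops into a JavaScript RegExp unchanged; it is shown in Python below only so it can be checked quickly (the helper name find_float is invented for this sketch):

```python
import re

# Optional sign, then an integer and/or fractional part, then an
# optional exponent: the base-10 shape strtod() accepts.
FLOAT_RE = re.compile(r'[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?')

def find_float(s):
    """Return (value, rest) for the first number in s, loosely like strtod."""
    m = FLOAT_RE.search(s)
    if m is None:
        return None, s
    return float(m.group(0)), s[m.end():]

value, rest = find_float("ajshrbfgtd123.234e-2numberend")
print(value)  # 1.23234
print(rest)   # numberend
```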
https://deamentiaemundi.wordpress.com/2014/10/11/ctojs-handling-pointers/
Namespace from server?

My SOAP::Lite server is returning

    <namesp1:sayHelloResponse xmlns:

How can I get access to this element and set the namespace to something more descriptive? (Or does it matter what the namespace is?)

I know how to get rid of "namesp1" on the client: I make the method a SOAP::Data object and set "->attr({xmlns => 'urn:Hello'})". But I don't see how this can be done on the server, where it is apparently generating the "response" element.

Thanks, Mark
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/3794?var=1
02-04-2016 03:29 PM
I would like to code out an import I was able to perform using the Import Wizard in SAS Enterprise Guide 9.4. It allowed me to:
- import a CSV file
- set the first observation row
- get the names for use as column names
- specify where the next observation will start
- define how many observations to get thereafter

My data is really fun and looks something like the following:

    01/01/2016, Start Stats,
    type, location, value, quanity value,
    ,-----,-----------,--------,-------------,
    ,new, Central, 100, 500,
    End Stats,
    type, location, value, quantiy value,
    ,-----,-----------,--------,-------------,
    ,used, Central, 50, 10,

The end result is to:
- populate the column names using each Stats line
- omit the dash lines
- populate the data portion into table values

Later I will merge all the items into one table, with one line item for each file. I have several tables and the column names can change, so getting the column names is very important. I'm unable to get the Wizard to provide its import method, to understand what's going on there to springboard my project.

02-04-2016 03:37 PM
If you're looking for the Import Data task to give you reusable code, check the following options. The resulting code should be something that will run as-is in its own program, as long as the CSV file that you reference is in a path that the SAS session can get to.

Chris

02-04-2016 04:49 PM
Yes it does and no it doesn't.... The resulting code doesn't show the method used to get the column names; it only shows hard-coded outcomes of the evaluation for column names. This is great but doesn't lend itself well to a dynamic process. It's still a step forward, though; previously the file was only represented by datalines4.

02-04-2016 05:07 PM
Well... no, the column names are part of the code. If you want SAS to guess at the column names you can use PROC IMPORT instead, but it can guess incorrectly, which sets up issues down the road.
You may also run into issues with where EG is located versus where your files are located, i.e. the server not seeing your local directory.

02-04-2016 05:13 PM
Yes... I would like to use the PROC IMPORT method and have SAS guess at the names, and I've tried without success: my column names continue to populate as VARx, and I'm not able to pinpoint where the data starts so I can omit the dash line. I was hoping to mimic the import wizard and modify as needed to produce the final outcome.

02-04-2016 05:20 PM
Have you specified GETNAMES=YES? The options are here:... I think you're also interested in the NAMEROW and DATAROW options as mentioned in the GUI.

02-04-2016 05:35 PM
Yes, so using code like

    proc import datafile='C:test.CSV'
        dbms=csv
        out=work.test
        replace;
        datarow=4;
        getnames=yes;
        guessingrows=500;
    run;

the data as provided before demonstrates VARx for all column names. My first guess is SAS is protecting itself from values that have spaces.

02-05-2016 08:29 AM
SAS can handle variable names with spaces if you specify OPTIONS VALIDVARNAME=ANY. Then you can reference the variable with a special literal notation in code, like this:

    length 'my variable'n 8;

But I'm not sure that's what is getting in your way here. If you can post a sample data file with a few records (beyond what you've already supplied), I'm sure that someone will help with an approach. I suspect the answer is not PROC IMPORT but a DATA step that reads and "fixes" the data lines on the way in. Or two passes: one to clean the file and produce a more typical CSV, and a second to run the PROC IMPORT that will determine column attributes.

Chris

02-05-2016 12:48 PM
Wow! That is very possible to read using a DATA step, but it will require a completely custom approach. It's a far stretch to call that a CSV file -- it does not comply with any variation of CSV I've encountered. I don't see how the Import tasks can help you at all.
Here's what I suggest: create another topic in the Base Programming board with a title like "Help reading in text file with custom layout". Post the example layout and ask for help with the technique. Who knows? Someone might solve it for you. More likely, people will point you to examples of using the INPUT statement and @ to control input position, and other necessary tricks.

Chris

02-05-2016 01:01 PM
You're attempting to fix a broken process with code. That output comes from a reporting system that probably stores the data in a better format. Go back to the source and ask for datasets, not text files. Fix the process instead of working around it. Yes, it's more work, and yes, it's not always possible.

02-05-2016 02:57 PM - last edited on 02-05-2016 03:05 PM by ChrisHemedinger
Unfortunately I'm working in an established process without wiggle room to request changes. The method I've come up with, by combining stuff from other posts, is something like:

    /* grab all directory information from the home folder where the CSV files are */
    %let dirname = C:\Test;
    filename DIRLIST pipe "dir /B &dirname\*.csv";

    data dirlist;
        length fname $256;
        infile dirlist length=reclen;
        input fname $varying256. reclen;
    run;

    /* Loop through each file grabbing only specific data lines.
       This does not allow for files where column counts may be more or
       less than what I've specified, so I've over-requested columns. */
    data rep_date;
        length myfilename $100;
        set dirlist;
        filepath = "&dirname\"||fname;
        /* change firstobs to assure that the column header is the first
           observation in the new table */
        infile dummy filevar=filepath dlm='2C0D'x dsd missover lrecl=10000
            firstobs=3 obs=3 end=done;
        do while(not done);
            myfilename = filepath;
            input F1 : $CHAR96. F2 : $CHAR33. F3 : $CHAR18. F4 : $CHAR55.
                  F5 : $CHAR11. F6 : $CHAR11. F7 : $CHAR11. F8 : $CHAR11.
                  F9 : $CHAR11. F10 : $CHAR11. F11 : $CHAR11. F12 : $CHAR11.
                  F13 : $CHAR11.;
            output;
        end;
    run;

    /* Assemble all the pieces. This method does not create variables for
       the column names, so it does not allow for varying column names. */
    proc sql;
        create table final as (
            select s.batch_id
                 , s.extract_date
                 , s.submit_date
                 , s.system_name
                 , s.record_count
                 , s.encounter_count
                 , s.svc_from_date
                 , s.svc_to_date
                 , a.F1 as rep_date
                 , case when c.f2 like '%SUB%'     then c1.f2 end as enc_SUBMIT
                 , case when c.f3 like '%PROC%'    then c1.f3 end as enc_PROCES
                 , case when c.f4 like '%CLOSE%'   then c1.f4 end as enc_CLOSED
                 , case when c.f5 like '%DUP%'     then c1.f5 end as enc_DUPLICATE
                 , case when c.f6 like '%ACCEPT%'  then c1.f6 end as enc_ACCEPTED
                 , case when c.f7 like '%INFOR%'   then c1.f7 end as enc_INFORMATION
                 , case when c.f8 like '%JECTED%'  then c1.f8 end as enc_REJECTED
                 , case when c.f9 like '%PART RE%' then c1.f9 end as enc_PART_REJECT
                 , case when d.f2 like '%SUB%'     then d1.f2 end as diag_SUBMIT
                 , case when d.f3 like '%PROC%'    then d1.f3 end as diag_PROCES
                 , case when d.f4 like '%CLOSE%'   then d1.f4 end as diag_CLOSED
                 , case when d.f5 like '%DUP%'     then d1.f5 end as diag_DUPLICATE
                 , case when d.f6 like '%ACCEPT%'  then d1.f6 end as diag_ACCEPTED
                 , case when d.f7 like '%INFOR%'   then d1.f7 end as diag_INFORMATION
                 , case when d.f8 like '%JECTED%'  then d1.f8 end as diag_REJECTED
                 , case when d.f9 like '%PART RE%' then d1.f9 end as diag_PART_REJECT
            from rep_date a
            inner join batch b
                on a.fname = b.fname
            inner join submitted s
                on b.f2 = s.batch_id
            inner join (select * from enc  where f1 is not null) c
                on a.fname = c.fname
            inner join (select * from enc  where f1 is null and f2 not like '%-%') c1
                on a.fname = c1.fname
            inner join (select * from diag where f1 is not null) d
                on a.fname = d.fname
            inner join (select * from diag where f1 is null and f2 not like '%-%') d1
                on a.fname = d1.fname
            where b.f2 not like '%BATCH%'
              and b.f2 not like '%-%'
        );
    quit;
https://communities.sas.com/t5/SAS-Enterprise-Guide/Import-Wizard-EG-9-4/td-p/248083?nobounce
hey. I'm not sure how to adapt cin.get() to stop my prog exiting. Here it is. If someone could point me in the correct direction that would be cool. thankz ;)

Code:

    #include <iostream>
    #include <string>
    using namespace std;

    int main()
    {
        string firstName;
        string lastName;
        int ID;
        int age;
        float salary;
        string occupation;

        cout << "Enter your first name ";
        cin >> firstName;
        cout << "Enter your last name ";
        cin >> lastName;
        cout << "Enter your ID number ";
        cin >> ID;
        cout << "Enter your age ";
        cin >> age;
        cout << "Enter your salary (Per Year) ";
        cin >> salary;
        cout << "Enter your occupation ";
        cin >> occupation;

        cout << "Hello " << firstName << " " << lastName << " " << age
             << " " << salary << " " << occupation;
        cout << " or should I say " << ID << endl;
        return 0;
    }
https://cboard.cprogramming.com/cplusplus-programming/56596-prog-prob-printable-thread.html
Windows Azure SDK Visual Studio Tooling Updates for Service Bus

Posted: Nov 15, 2012 at 12:33 PM

The Azure SDK 1.8 release delivers key Visual Studio tooling enhancements for Service Bus. We have added several new capabilities allowing you to develop and debug your applications. This is an overview of all the new Server Explorer capabilities and some messaging feature enhancements. Features include importing your namespaces, new properties for creating and monitoring entities, updating current entities, and lots more. Learn about these new features and scenarios that help you in developing applications with Service Bus.
http://channel9.msdn.com/Series/Windows-Azure-Service-Bus-Tutorials/Windows-Azure-SDK-Visual-Studio-Tooling-Updates-for-Service-Bus?format=smooth
System.Speech.AudioFormat Namespace The System.Speech.AudioFormat namespace consists of a single class, SpeechAudioFormatInfo, which contains information about the format of the audio that is being input to the speech recognition engine, or being output from the speech synthesis engine. You can use the properties of SpeechAudioFormatInfo to obtain information about specific characteristics of an audio format. These properties are read-only, except for BlockAlign, which you can use to set the block alignment of incoming audio. To get all the properties of the incoming audio format, use the FormatSpecificData method. Using the constructors on the SpeechAudioFormatInfo class, you can specify any of the properties of the audio format when you initialize the object.
http://msdn.microsoft.com/en-us/library/system.speech.audioformat.aspx