text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
SYNOPSIS #include <sys/socket.h> int bind(int socket, const struct sockaddr *address, socklen_t address_len); DESCRIPTION .
http://www.linux-directory.com/man3/bind.shtml
crawl-003
en
refinedweb
#include <CbcHeuristicDiveVectorLength.hpp> Inheritance diagram for CbcHeuristicDiveVectorLength: Definition at line 11 of file CbcHeuristicDiveVectorLength.hpp. Clone. Implements CbcHeuristicDive. Assignment operator. Create C++ lines to get to current state. Reimplemented from CbcHeuristicDive. Returns true if all the fractional variables can be trivially rounded. Returns false, if there is at least one fractional variable that is not trivially roundable. In this case, the bestColumn returned will not be trivially roundable. Implements CbcHeuristicDive.
http://www.coin-or.org/Doxygen/Smi/class_cbc_heuristic_dive_vector_length.html
crawl-003
en
refinedweb
#include <CbcLinked.hpp> Inheritance diagram for OsiBiLinearEquality: This models x*y = b where both are continuous Definition at line 981 of file CbcLinked.hpp. Useful constructor - This Adds in rows and variables to construct Ordered Set for x*y = b So note not const solver. Clone. Reimplemented from OsiBiLinear. Possible improvement. change grid if type 0 then use solution and make finer if 1 then back to original returns mesh size Number of points. Definition at line 1018 of file CbcLinked.hpp. References numberPoints_. Definition at line 1020 of file CbcLinked.hpp. References numberPoints_. Number of points. Definition at line 1025 of file CbcLinked.hpp. Referenced by numberPoints(), and setNumberPoints().
http://www.coin-or.org/Doxygen/Smi/class_osi_bi_linear_equality.html
crawl-003
en
refinedweb
#include <CbcLinked.hpp> Inheritance diagram for OsiBiLinearBranchingObject: Definition at line 934 of file CbcLinked.hpp. Clone. Implements OsiBranchingObject. Does next branch and updates state. Implements OsiTwoWayBranchingObject. Print something about branch - only if log level high. Return true if branch should only bound variables. Reimplemented from OsiBranchingObject. data 1 means branch on x, 2 branch on y Definition at line 972 of file CbcLinked.hpp.
http://www.coin-or.org/Doxygen/Smi/class_osi_bi_linear_branching_object.html
crawl-003
en
refinedweb
#include <OsiAuxInfo.hpp> Inheritance diagram for OsiAuxInfo: 19 of file OsiAuxInfo.hpp. Clone. Reimplemented in OsiBabSolver. Assignment operator. Get application data. Definition at line 35 of file OsiAuxInfo.hpp. Pointer to user-defined data structure. Definition at line 39 of file OsiAuxInfo.hpp. Referenced by getApplicationData().
http://www.coin-or.org/Doxygen/Smi/class_osi_aux_info.html
crawl-003
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Returns a true-valued Integral Constant if T1 and T2 are equal. #include <boost/m equal_to<c1,c2>::type r; typedef equal_to<c1,c2> r; Amortized constant time. BOOST_MPL_ASSERT_NOT(( equal_to< int_<0>, int_<10> > )); BOOST_MPL_ASSERT_NOT(( equal_to< long_<10>, int_<0> > )); BOOST_MPL_ASSERT(( equal_to< long_<10>, int_<10> > ));
http://www.boost.org/doc/libs/1_37_0/libs/mpl/doc/refmanual/equal-to.html
crawl-003
en
refinedweb
On 02/02/2011 20:47, Jason Pringle wrote: > > Can a web application populate the global JNDI namespace? No. > I am looking for a possible workaround to create shared connection pools without modifying server.xml (ie placing entries in <GlobalNamingResources .../>). JMX is probably your best bet but I don't think the necessary API is exposed. As always, patches welcome. >. Mark --------------------------------------------------------------------- To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org For additional commands, e-mail: users-help@tomcat.apache.org
http://mail-archives.apache.org/mod_mbox/tomcat-users/201102.mbox/%3c4D49D3B2.7080308@apache.org%3e
crawl-003
en
refinedweb
contains functions vsl_b_write, vsl_b_read and vsl_print_summary More... #include <vgl/vgl_line_2d.h> #include <vsl/vsl_binary_io.h> Go to the source code of this file. contains functions vsl_b_write, vsl_b_read and vsl_print_summary Modifications 2001/03/16 Franck Bettinger Creation Definition in file vgl_io_line_2d.h. Binary load vgl_line_2d from stream. Definition at line 24 of file vgl_io_line_2d.txx. Binary save vgl_line_2d to stream. Definition at line 12 of file vgl_io_line_2d.txx. Print human readable summary of a vgl_line_2d object to a stream. Definition at line 52 of file vgl_io_line_2d.txx.
http://public.kitware.com/vxl/doc/release/core/vgl/html/vgl__io__line__2d_8h.html
crawl-003
en
refinedweb
contains functions vsl_b_write, vsl_b_read and vsl_print_summary More... #include <vgl/vgl_infinite_line_3d.h> #include <vsl/vsl_binary_io.h> Go to the source code of this file. contains functions vsl_b_write, vsl_b_read and vsl_print_summary Modifications 2001/03/16 Franck Bettinger Creation Definition in file vgl_io_infinite_line_3d.h. Binary load vgl_infinite_line_3d from stream. Definition at line 25 of file vgl_io_infinite_line_3d.txx. Binary save vgl_infinite_line_3d to stream. Definition at line 14 of file vgl_io_infinite_line_3d.txx. Print human readable summary of a vgl_infinite_line_3d object to a stream. Definition at line 53 of file vgl_io_infinite_line_3d.txx.
http://public.kitware.com/vxl/doc/release/core/vgl/html/vgl__io__infinite__line__3d_8h.html
crawl-003
en
refinedweb
Next: A note on namespaces, Previous: Editing IMAP ACLs, Up: IMAP If you're using the never setting of nnimap-expunge-on-close, you may want the option of expunging all deleted articles in a mailbox manually. This is exactly what G x does. Currently there is no way of showing deleted articles, you can just delete them.
http://www.gnu.org/software/emacs/manual/html_node/gnus/Expunging-mailboxes.html#Expunging-mailboxes
crawl-003
en
refinedweb
#include <itkComplexToImaginaryImageAdaptor.h> List of all members. ComplexToImaginaryPixelAccessor is templated over an internal type and an external type representation. The internal type is an std::complex<T> and the external part is a type T. This class cast the input applies the funtion to it and cast the result according to the types defined as template parameters Definition at line 39 of file itkComplexToImaginaryImageAdaptor.h. External typedef. It defines the external aspect that this class will exhibit. Definition at line 44 of file itkComplexToImaginaryImageAdaptor.h. Internal typedef. It defines the internal imaginary representation of data. Definition at line 48 of file itkComplexToImaginaryImageAdaptor.h. Definition at line 53 of file itkComplexToImaginaryImageAdaptor.h. Definition at line 50 of file itkComplexToImaginaryImageAdaptor.h.
http://www.itk.org/Doxygen36/html/classitk_1_1Accessor_1_1ComplexToImaginaryPixelAccessor.html
crawl-003
en
refinedweb
How do I enable users to use dtrace on Mac OS X. I am trying to do the equivalent of strace on Linux, and I don't like running applications with elevated privileges. UPDATE Ok, the best I can tell. The only way to keep a nefarious application from ruining the system by debugging it is to. So that: sudo dtruss sudo -u myusername potentially_harmful_app I verified this with this short program: #include <iostream> #include <unistd.h> int main() { std::cout << "effective euid " << geteuid() << "\n"; } See this discussion for more info: Please see my update above. This is a bad security hole if I've ever seen one. A proper implementation of dtruss should drop privileges of any program it invokes. Having several users on a system, one of them would be bound to mess this up and allow a badly written program to trash things. chmod 4755 dtrace as root any time you run the program will run with root privileges You can't have both. dtrace requires root privileges to talk to the kernel, so it either has to run with root privs (setuid) or
http://serverfault.com/questions/215510/enable-dtrace-without-sudo-on-mac-os-x
crawl-003
en
refinedweb
Virtual:Virtual: destructor" train the classifier method prepare tree branch with the method's discriminating variable test the method - not much is done here... mainly further initialization general method used in writing the header of the weight files where the used variables, variable transformation type etc. is specified Function to write options and weights to file read the header from the weight files of the different MVA methods set directory of weight file set the weight file name (depreciated) retrieve weight file name writes all MVA evaluation histograms to file write special monitoring histograms to file - not implemented for this method tree sanity checks } plot significance, S/Sqrt(S^2 + B^2), curve for given number of signal and background events; returns cut for maximum significance also returned via reference is the maximum significance prints out classifier-specific help method interface for RootFinder returns efficiency as function of cut classifier response create ranking {} {} ---------- public accessors ----------------------------------------------- classifier naming (a lot of names ... aren't they ;-) { return fMethodName; } { return fMethodTitle; } { return fMethodType; } { return GetMethodName().Data(); } { fMethodName = methodName; } { fMethodTitle = methodTitle; } { fMethodType = methodType; } { fTestvarPrefix = prefix; } { fTestvar = (v=="")?(fTestvarPrefix + GetMethodTitle()):v; } internal names and expressions of input variables { return Data().GetInternalVarName(i); } { return Data().GetExpression(i); } normalisation and limit accessors { return GetVarTransform().Variable(ivar).GetRMS(); } { return GetVarTransform().Variable(ivar).GetMin(); } { return GetVarTransform().Variable(ivar).GetMax(); } sets the minimum requirement on the MVA output to declare an event signal-like { return fSignalReferenceCut; } retrieve variable transformer { return *fVarTransform; } the TMVA versions can be checked using if (GetTrainingTMVAVersionCode()>TMVA_VERSION(3,7,2)) {...} or if (GetTrainingROOTVersionCode()>ROOT_VERSION(5,15,5)) {...} { return fTMVATrainingVersion; } { return fROOTTrainingVersion; } event reference and update { return GetVarTransform().GetEvent(); } event properties ---------- public auxiliary methods --------------------------------------- this method is used to decide whether an event is signal- or background-like the reference cut "xC" is taken to be where Int_[-oo,xC] { PDF_S(x) dx } = Int_[xC,+oo] { PDF_B(x) dx } { return GetMvaValue() > GetSignalReferenceCut() ? kTRUE : kFALSE; } ---------- protected acccessors ------------------------------------------- { return Data().LocalRootDir(); } are input variables normalised ? { return fNormalise; } { fNormalise = norm; } set number of input variables (only used by MethodCuts, could perhaps be removed) the type of the variable transformation required for the data set of this classifier { return fVariableTransform; } sets the minimum requirement on the MVA output to declare an event signal-like { fSignalReferenceCut = cut; } ---------- protected event and tree accessors ----------------------------- names of input variables (if the original names are expressions, they are transformed into regexps) { return (*fInputVars)[ivar]; } { return Data().GetExpression(ivar); } accessing training and test trees { return Data().GetTrainingTree() != 0; } {} header and auxiliary classes {} { return fgThisBase; } if TRUE, write weights only to text files { return fTxtWeightsOnly; } { return fCutOrientation; } ---------- private acccessors --------------------------------------------- reset required for RootFinder { fgThisBase = this; } { return fHasMVAPdfs; }
http://root.cern.ch/root/html522/TMVA__MethodBase.html
crawl-003
en
refinedweb
define the options (their key words) that can be set in the option string here the options valid for ALL MVA methods are declared. know options: NCycles=xx :the number of training cycles Normalize=kTRUE,kFALSe :if normalised in put variables should be used HiddenLayser="N-1,N-2" :the specification of the hidden layers NeuronType=sigmoid,tanh,radial,linar : the type of activation function used at the neuronn decode the options in the option string parse layout specification string and return a vector, each entry containing the number of neurons to go in each successive layer initialize ANNBase object destructor delete/clear network delete a network layer build network given a layout (number of neurons in each layer) and optional weights array build the network layers build a single layer with neurons and synapses connecting this layer to the previous layer add synapses connecting a neuron to its preceding layer initialize the synapse weights randomly force the synapse weights force the input values of the input neurons force the value for each input neuron calculate input values to each neuron print messages, turn off printing by setting verbose and debug flag appropriately wait for keyboard input, for debugging print network representation, for debugging print a single layer, for debugging print a neuron, for debugging get the mva value generated by the NN write the weights stream destroy/clear the network then read it back in from the weights file compute ranking of input variables by summing function of weights write histograms to file write specific classifier response setters for subclasses { return GetOutputNeuron()->GetActivationValue(); } { return (TNeuron*)fInputLayer->At(index); } { return fOutputNeuron; }
http://root.cern.ch/root/html522/TMVA__MethodANNBase.html
crawl-003
en
refinedweb
#include <libnal/nal.h> int NAL_decode_uint32(const unsigned char **bin, unsigned int *bin_len, unsigned long *val); int NAL_decode_uint16(const unsigned char **bin, unsigned int *bin_len, unsigned int *val); int NAL_decode_char(const unsigned char **bin, unsigned int *bin_len, unsigned char *val); int NAL_decode_bin(const unsigned char **bin, unsigned int *bin_len, unsigned char *val, unsigned int val_len); int NAL_encode_uint32(unsigned char **bin, unsigned int *bin_len, const unsigned long val); int NAL_encode_uint16(unsigned char **bin, unsigned int *bin_len, const unsigned int val); int NAL_encode_char(unsigned char **bin, unsigned int *bin_len, const unsigned char val); int NAL_encode_bin(unsigned char **bin, unsigned int *bin_len, const unsigned char *val, const unsigned int val_len); NAL_decode_bin() follows the semantics of the other decode functions except that it decodes a block of binary data of length val_len. NAL_encode_uint32(), NAL_encode_uint16(), and NAL_encode_char() attempt to encode different sized integer values to the located pointed to by *bin (again, both bin and bin_len are passed by reference). If bin_len indicates there is sufficient room to successfully encode a value, val will be stored at *bin, *bin will be incremented to point to the next unused byte of storage, and *bin_len will be decremented to indicate how much unused storage remains. NAL_encode_bin() follows the semantics of the other encode functions except that it encodes a block of binary data of length val_len. #define MAX_DATA_SIZE 4096 typedef struct st_some_data_t { unsigned char is_active; /* boolean */ unsigned char buffer[MAX_DATA_SIZE]; unsigned int buffer_used; } some_data_t; We could define two functions for encoding and decoding an object of this type such that they could be serialised and transferred over a connection. The most elegant way to build serialisation of objects is to create functions that use the same form of prototype as the libnal serialisation functions, this way serialisation of complex objects can be performed recursively by serialisation of aggregated types. Although the built-in libnal serialisation functions leave bin and bin_len unchanged on failure, it is generally not worth bothering to preserve this property at higher-levels - these examples do not attempt this. An encoding function would thus look like; int encode_some_data(unsigned char **bin, unsigned int *bin_len, const some_data_t *val) { if( /* Encode the "is_active" boolean */ !NAL_encode_char(bin, bin_len, val->is_active) || /* Encode the used data */ !NAL_encode_uint16(bin, bin_len, val->buffer_used) || ((val->buffer_used > 0) && !NAL_encode_bin(bin, bin_len, val->buffer, val->buffer_used))) return 0; return 1; } Note that other types that include some_data_t objects could implement serialisation using encode_some_data() in the same way that encode_some_data() uses the lower-level libnal functions. A corresponding decode function follows. int decode_some_data(const unsigned char **bin, unsigned int *bin_len, some_data_t *val) { if( /* Decode the "is_active" boolean */ !NAL_decode_char(bin, bin_len, &val->is_active) || /* Decode the used data */ !NAL_decode_uint16(bin, bin_len, &val->buffer_used) || /* [TODO: check 'val->buffer_used' is acceptable here] */ ((val->buffer_used > 0) && !NAL_decode_bin(bin, bin_len, val->buffer, val->buffer_used))) return 0; return 1; } The above examples would be simpler still if a wrapper function were first written to serialise length-prefixed blocks of data. Such functions are not included in libnal because they can vary on what range of sizes are appropriate, what size encoding to use for a length-prefix, whether dynamic allocation should be used on decoding, etc. The above examples use a static buffer and encode the length prefix as 16-bits. NAL_CONNECTION_new(2) - Functions for the NAL_CONNECTION type. NAL_LISTENER_new(2) - Functions for the NAL_LISTENER type. NAL_SELECTOR_new(2) - Functions for the NAL_SELECTOR type. distcache(8) - Overview of the distcache architecture. - Distcache home page.
http://www.linuxmanpages.com/man2/NAL_decode_uint32.2.php
crawl-003
en
refinedweb
#include <libnal/nal.h> NAL_SELECTOR *NAL_SELECTOR_new(void); void NAL_SELECTOR_free(NAL_SELECTOR *sel); int NAL_SELECTOR_select(NAL_SELECTOR *sel, unsigned long usec_timeout, int use_timeout); NAL_SELECTOR_free() destroys a NAL_SELECTOR object. NAL_SELECTOR_select() blocks until the selector sel receives notification of network events for which it has registered interest. This function blocks indefinitely until receipt of a network event, interruption by the system, or if use_timeout is non-zero, then the function will break if more than usec_timeout microseconds have passed. See ``NOTES''. NAL_SELECTOR_free() has no return value. NAL_SELECTOR_select() returns negative for an error, otherwise it returns the number of connections and/or listeners that the selector has detected have network events waiting (which can be zero). The behaviour of NAL_SELECTOR_select() is what one would normally expectblocked signal arrived. In such cases, subsequent calls to NAL_CONNECTION_io() and NAL_LISTENER_accept() will trivially return without performing any actions as the selector has no events registered for processing. As such, if NAL_SELECTOR_select() returns zero, it is generally advised to add the connections and listeners back to the selector object and call NAL_SELECTOR_select() again. As with other libnal functions, `errno' is not touched so that any errors in the system's underlying implementations can be investigated directly by the calling application. NAL_CONNECTION_new(2) - Functions for the NAL_CONNECTION type. NAL_LISTENER_new(2) - Functions for the NAL_LISTENER type. NAL_BUFFER_new(2) - Functions for the NAL_BUFFER type. distcache(8) - Overview of the distcache architecture. - Distcache home page.
http://www.linuxmanpages.com/man2/NAL_SELECTOR_new.2.php
crawl-003
en
refinedweb
Assembly.GetType Method (String, Boolean) Updated: July 2010 Assembly: mscorlib (in mscorlib.dll) Parameters - name - Type: System.String The full name of the type. - throwOnError - Type: System.Boolean true to throw an exception if the type is not found; false to return null. Return ValueType: System.Type An object that represents the specified class. Implements_Assembly.GetType(String, Boolean) This method only searches the current assembly instance. The name parameter includes the namespace but not the assembly. To search other assemblies for a type, use the Type.GetType(String) method overload, which can optionally include an assembly display name as part of the type name. The throwOnError parameter only affects what happens when the type is not found. It does not affect any other exceptions that might be thrown. In particular, if the type is found but cannot be loaded, TypeLoadException can be thrown even if throwOnError is.
http://msdn.microsoft.com/en-us/library/19y21115.aspx
crawl-003
en
refinedweb
java.lang.Object org.apache.commons.transaction.file.FileSequenceorg.apache.commons.transaction.file.FileSequence public class FileSequence Fail-Safe sequence store implementation using the file system. Works by versioning values of sequences and throwing away all versions, but the current and the previous one. protected final String storeDir public FileSequence(String storeDir) storeDir- directory where sequence information is stored public boolean exists(String sequenceName) sequenceName- the name of the sequence you want to check trueif the sequence already exists, falseotherwise public boolean create(String sequenceName, long initialValue) sequenceName- the name of the sequence you want to create trueif the sequence has been created, falseif it already existed public boolean delete(String sequenceName) sequenceName- the name of the sequence you want to delete trueif the sequence has been deleted, falseif not public long nextSequenceValueBottom(String sequenceName, long increment) sequenceName- the name of the sequence you want the next value for increment- the increment for the sequence, i.e. how much to add to the sequence with this call ResourceManagerException- if anything goes wrong while accessing the sequence protected long read(String sequenceName) protected void write(String sequenceName, long value) protected String getPathI(String sequenceName) protected String getPathII(String sequenceName) protected long readFromPath(String path) throws NumberFormatException, FileNotFoundException, IOException NumberFormatException FileNotFoundException IOException protected void writeToPath(String path, long value)
http://commons.apache.org/transaction/apidocs/org/apache/commons/transaction/file/FileSequence.html
crawl-003
en
refinedweb
The QWidgetAction class extends QAction by an interface for inserting custom widgets into action based containers, such as toolbars. More... #include <QWidgetAction> This class was introduced in Qt 4.2. The QWidgetAction class extends QComboBox in a QToolBar, presenting a range of different zoom levels. QToolBar provides QToolBar::insertWidget() as convenience function for inserting a single widget. However if you want to implement an action that uses custom widgets for visualization in multiple containers then you have to subclass QWidgetAction. If a QWidgetAction is added for example to a setDefaultWidget(). That widget will then be used if the action is added to a QToolBar, or in general to an action container that supports QWidgetAction. If a QWidgetAction with only a default widget is added to two toolbars at the same time then the default widget is shown only in the first toolbar the action was added to.().
http://doc.trolltech.com/main-snapshot/qwidgetaction.html#createWidget
crawl-003
en
refinedweb
a face for tracking with mutual information More... #include <vbl/vbl_ref_count.h> #include <vnl/vnl_matrix_fixed.h> #include <vil1/vil1_memory_image_of.h> #include <vtol/vtol_intensity_face.h> #include <bsta/bsta_histogram.h> #include <vgl/vgl_point_2d.h> #include "strk_tracking_face_2d_sptr.h" Go to the source code of this file. a face for tracking with mutual information The shape and intensity data for this class are maintained by the vtol_intensity_face member. Additional gradient information is maintained in order to support the formation of a gradient direction histogram. Local histogram structs are defined to collect the intensity and gradient statistics of the face. \author Joseph L. Mundy - October 29, 2003 Brown University \verbatim Modifications 10-sep-2004 Peter Vanroose Added copy ctor with explicit vbl_ref_count init 15-june-2005 Ozge Can Ozcanli Added methods to calculate mutual information with known pixel correspondences of two faces 13-july-2005 Ozge Can Ozcanli Added max_intensity_ variable to support images with larger range than 8 bits Definition in file strk_tracking_face_2d.h.
http://public.kitware.com/vxl/doc/release/contrib/brl/bseg/strk/html/strk__tracking__face__2d_8h.html
crawl-003
en
refinedweb
TGeoTrack - Class for user-defined tracks attached to a geometry. Tracks are 3D objects made of points and they store a pointer to a TParticle. The geometry manager holds a list of all tracks that will be deleted on destruction of gGeoManager. Constructor. Destructor. Add a daughter track to this. Add a daughter and return its index. Returns distance to track primitive for picking. Draw this track overimposed on a geometry, according to option. Options (case sensitive): default : track without daughters /D : track and first level descendents only /* : track and all descendents /Ntype : descendents of this track with particle name matching input type. Options can appear only once but can be combined : e.g. Draw("/D /Npion-") Time range for visible track segments can be set via TGeoManager::SetTminTmax() Event treatment. Get some info about the track. Get coordinates for point I on the track. Return the pointer to the array of points starting with index I. Return the index of point on track having closest TOF smaller than the input value. Output POINT is filled with the interpolated value. Return the number of points within the time interval specified by TGeoManager class and the corresponding indices. Search index of track point having the closest time tag smaller than TIME. Optional start index can be provided. Set drawing bits for this track Reset data for this track. {return (GetNdaughters()>0)?kTRUE:kFALSE;}
http://root.cern.ch/root/html520/TGeoTrack.html
crawl-003
en
refinedweb
Introducing. If you want to know what are new features in Java SE 7 for dealing with IO take a look at Introducing NIO.2 (JSR 203) Part 1: What are new features? Before NIO.2, dealing with file system was mainly done using the File class and no other base class was available. In NIO.2 it there are some new classes at our disposal to take advantage of their existence to do our job. FileSystems: Everything starts with this factory class. We use this class to get an instance of the FileSystem we want to work on. The nio.2 provides a SPI to developed support for new file systems. For example an in-memory file system, a ZIP file system and so on. Following two methods are most important methods in FileSystems class. - The getDefault() returns the default file system available to the JVM. Usually the operating system default files system. - The getFileSystem(URI uri) returns a file system from the set of available file system providers that match the given uir schema. Path: This is the abstract class which provides us with all File system functionalities we may need to perform over a file, a directory or a link. FileStore: This class represents the underplaying storage. For example /dev/sda2 in *NIX machines and I think c: in windows machines. We can access the storage attributes using FileStoreSpaceAttributes object. Available space, empty space and so on. Following two sample codes shows how to copy a file and then how to copy it. public class Main { public static void main(String[] args) { try { Path sampleFile = FileSystems.getDefault().getPath("/home/masoud/sample.txt"); sampleFile.deleteIfExists(); sampleFile.createFile(); // create an empty file sampleFile.copyTo(FileSystems.getDefault().getPath("/home/masoud/sample2.txt"), StandardCopyOption.COPY_ATTRIBUTES.REPLACE_EXISTING); // Creating a link Path dir = FileSystems.getDefault().getPath("/home/masoud/dir"); dir.deleteIfExists(); dir.createSymbolicLink(sampleFile); } catch (IOException ex) { Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex); } } And the next sample shows how we can use the FileStore class. In this sample we get the underlying store for a file and examined its attributes. We can an iterator over all available storages using FileSystem.getFileStores() method and examine all of them in a loop. public class Main { public static void main(String[] args) throws IOException { long aMegabyte = 1024 * 1024; FileSystem fs = FileSystems.getDefault(); Path sampleFile = fs.getPath("/home/masoud/sample.txt"); FileStore fstore = sampleFile.getFileStore(); FileStoreSpaceAttributes attrs = Attributes.readFileStoreSpaceAttributes(fstore); long total = attrs.totalSpace() / aMegabyte; long used = (attrs.totalSpace() - attrs.unallocatedSpace()) / aMegabyte; long avail = attrs.usableSpace() / aMegabyte; System.out.format("%-20s %12s %12s %12s%n", "Device", "Total Space(MiB)", "Used(MiB)", "Availabile(MiB)"); System.out.format("%-20s %12d %12d %12d%n", fstore, total, used, avail); } In next entry I will discuss how we can manage file attributes along with discussing the security features of the nio.2 file system. - Login or register to post comments - Printer-friendly version - kalali's blog - 3118 reads
http://weblogs.java.net/blog/kalali/archive/2010/06/01/introducing-nio2-jsr-203-part-1-basics
crawl-003
en
refinedweb
PIXresizer, as the name suggests is a software tool for resizing the photos. Ceate multimedia professional presentations with video, effects and audio. GeoGebra is a free powerful mathematics tool to build drawings. Blaze Media Pro is a very powerful multimedia player, converter and editor. "Everything" is an administrative tool that locates files and folders. Create multiple JPG/JPEG files from multiple bitmap files. FastStone Capture is a powerful, lightweight screen capture tool. View, print, measure DWG, DXF, DWF, and CSF (IGC Content Sealed Format) files. import about 400 graphic file formatsExport about 50 graphic file formats. InstantBurn is software solution for rewritable DVD disks. This software will help computer users out of all data loss problems. ArcSoft MediaConverter - Easily convert multimedia files. Xilisoft HD Video Converter can easily convert HD video formats. Xara 3D Maker 7 transforms any texts or shapes into high-quality 3D graphics.
http://ptf.com/como/como+convertir+rc2+en+jpg/index5.html
crawl-003
en
refinedweb
Run random simulations of the Monty Hall game. Show the effects of a strategy of the contestant always keeping his first guess so it can be contrasted with the strategy of the contestant always switching his guess. -). #include <iostream> #include <cstdlib> #include <ctime> int randint(int n) { return (1.0*n*std::rand())/(1.0+RAND_MAX); } int other(int doorA, int doorB) { int doorC; if (doorA == doorB) { doorC = randint(2); if (doorC >= doorA) ++doorC; } else { for (doorC = 0; doorC == doorA || doorC == doorB; ++doorC) { // empty } } return doorC; } int check(int games, bool change) { int win_count = 0; for (int game = 0; game < games; ++game) { int const winning_door = randint(3); int const original_choice = randint(3); int open_door = other(original_choice, winning_door); int const selected_door = change? other(open_door, original_choice) : original_choice; if (selected_door == winning_door) ++win_count; } return win_count; } int main() { std::srand(std::time(0)); int games = 10000; int wins_stay = check(games, false); int wins_change = check(games, true); std::cout << "staying: " << 100.0*wins_stay/games << "%, changing: " << 100.0*wins_change/games << "%\n"; } Sample output: staying: 33.73%, changing: 66.9% Content is available under GNU Free Documentation License 1.2.
https://tfetimes.com/c-monty-hall-problem/
CC-MAIN-2019-51
en
refinedweb
#include "std.h" #include "subsystems/sensors/baro.h" #include "mcu_periph/adc.h" #include "mcu_periph/dac.h" Go to the source code of this file. Definition at line 42 of file baro_board.h. Definition at line 53 of file baro_board.h. Definition at line 37 of file baro_board.h. 55 of file baro_board.h. References baro_board, DACSet(), and BaroBoard::offset. Definition at line 63 of file baro_board.c. Referenced by baro_board_calibrate(), baro_board_SetOffset(), baro_init(), baro_periodic(), and lisa_l_baro_event().
http://docs.paparazziuav.org/latest/booz_2baro__board_8h.html
CC-MAIN-2019-51
en
refinedweb
Robust Statistical Estimators¶ Robust statistics provide reliable estimates of basic statistics for complex distributions. The statistics package includes several robust statistical functions that are commonly used in astronomy. This includes methods for rejecting outliers as well as statistical description of the underlying distributions. In addition to the functions mentioned here, models can be fit with outlier rejection using FittingWithOutlierRemoval(). Sigma Clipping¶ Sigma clipping provides a fast method to identify outliers in a distribution. For a distribution of points, a center and a standard deviation are calculated. Values which are less or more than a specified number of standard deviations from a center value are rejected. The process can be iterated to further reject outliers. The astropy.stats package provides both a functional and object-oriented interface for sigma clipping. The function is called sigma_clip() and the class is called SigmaClip. By default, they both return a masked array where the rejected points are masked. First, let’s generate some data that has a mean of 0 and standard deviation of 0.2, but with outliers: >>> import numpy as np >>> import scipy.stats as stats >>> np.random.seed(0) >>> x = np.arange(200) >>> y = np.zeros(200) >>> c = stats.bernoulli.rvs(0.35, size=x.shape) >>> y += (np.random.normal(0., 0.2, x.shape) + ... c*np.random.normal(3.0, 5.0, x.shape)) Now, let’s use sigma_clip() to perform sigma clipping on the data: >>> from astropy.stats import sigma_clip >>> filtered_data = sigma_clip(y, sigma=3, maxiters=10) The output masked array then can be used to calculate statistics on the data, fit models to the data, or otherwise explore the data. To perform the same sigma clipping with the SigmaClip class: >>> from astropy.stats import SigmaClip >>> sigclip = SigmaClip(sigma=3, maxiters=10) >>> print(sigclip) <SigmaClip> sigma: 3 sigma_lower: None sigma_upper: None maxiters: 10 cenfunc: <function median at 0x108dbde18> stdfunc: <function std at 0x103ab52f0> >>> filtered_data = sigclip(y) Note that once the sigclip instance is defined above, it can be applied to other data, using the same, already-defined, sigma-clipping parameters. For basic statistics, sigma_clipped_stats() is a convenience function to calculate the sigma-clipped mean, median, and standard deviation of an array. As can be seen, rejecting the outliers returns accurate values for the underlying distribution: >>> from astropy.stats import sigma_clipped_stats >>> y.mean(), np.median(y), y.std() (0.86586417693378226, 0.03265864495523732, 3.2913811977676444) >>> sigma_clipped_stats(y, sigma=3, maxiters=10) (-0.0020337793767186197, -0.023632809025713953, 0.19514652532636906) sigma_clip() and SigmaClip can be combined with other robust statistics to provide improved outlier rejection as well. import numpy as np import scipy.stats as stats from matplotlib import pyplot as plt from astropy.stats import sigma_clip, mad_std # Generate fake data that has a mean of 0 and standard deviation of 0.2 with outliers np.random.seed(0) x = np.arange(200) y = np.zeros(200) c = stats.bernoulli.rvs(0.35, size=x.shape) y += (np.random.normal(0., 0.2, x.shape) + c*np.random.normal(3.0, 5.0, x.shape)) filtered_data = sigma_clip(y, sigma=3, maxiters=1, stdfunc=mad_std) # plot the original and rejected data plt.figure(figsize=(8,5)) plt.plot(x, y, '+', color='#1f77b4', label="original data") plt.plot(x[filtered_data.mask], y[filtered_data.mask], 'x', color='#d62728', label="rejected data") plt.xlabel('x') plt.ylabel('y') plt.legend(loc=2, numpoints=1) () Median Absolute Deviation¶ The median absolute deviation (MAD) is a measure of the spread of a distribution and is defined as median(abs(a - median(a))). The MAD can be calculated using median_absolute_deviation. For a normal distribution, the MAD is related to the standard deviation by a factor of 1.4826, and a convenience function, mad_std, is available to apply the conversion. Note A function can be supplied to the median_absolute_deviation to specify the median function to be used in the calculation. Depending on the version of numpy and whether the array is masked or contains irregular values, significant performance increases can be had by pre-selecting the median function. If the median function is not specified, median_absolute_deviation will attempt to select the most relevant function according to the input data. Biweight Estimators¶ A set of functions are included in the astropy.stats package that use the biweight formalism. These functions have long been used in astronomy, particularly to calculate the velocity dispersion of galaxy clusters 1. The following set of tasks are available for biweight measurements: astropy.stats.biweight Module¶ This module contains functions for computing robust statistics using Tukey’s biweight function. References¶ - 1 Beers, Flynn, and Gebhardt (1990; AJ 100, 32) (….100…32B)
https://docs.astropy.org/en/stable/stats/robust.html
CC-MAIN-2019-51
en
refinedweb
import "k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join" checketcd.go controlplanejoin.go controlplaneprepare.go data.go kubelet.go preflight.go NewCheckEtcdPhase is a hidden phase that runs after the control-plane-prepare and before the bootstrap-kubelet phase that ensures etcd is healthy NewControlPlaneJoinPhase creates a kubeadm workflow phase that implements joining a machine as a control plane instance NewControlPlanePreparePhase creates a kubeadm workflow phase that implements the preparation of the node to serve a control plane NewKubeletStartPhase creates a kubeadm workflow phase that start kubelet on a node. NewPreflightPhase creates a kubeadm workflow phase that implements preflight checks for a new node join type JoinData interface { CertificateKey() string Cfg() *kubeadmapi.JoinConfiguration TLSBootstrapCfg() (*clientcmdapi.Config, error) InitCfg() (*kubeadmapi.InitConfiguration, error) ClientSet() (*clientset.Clientset, error) IgnorePreflightErrors() sets.String OutputWriter() io.Writer KustomizeDir() string } JoinData is the interface to use for join phases. The "joinData" type from "cmd/join.go" must satisfy this interface. Package phases imports 33 packages (graph) and is imported by 4 packages. Updated 2019-11-15. Refresh now. Tools for package owners.
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join
CC-MAIN-2019-51
en
refinedweb
Difference between @Factory and @DataProvider Annotation: We have discussed different topics Of TestNG, But most of the peoples are confused when it comes to finding out what is the difference between @Factory and @DataProvider When to use DataProvider and When to Use @Factory Annotation. So in this post, we are going to discuss these two functionalities. Difference Between @Factory & @DataProvider Annotation The annotation like @Factory and @DataProvider are mainly used to reiterate the same test class with different test data. These annotations will help the user to use the same class seamlessly without duplicating the test class code. Here is the main difference between @Factory and @DataProvider annotation of TestNG: @DataProvider Annotation - A test method that uses DataProvider will be executed multiple numbers of times based on the data provided by the DataProvider. That means this annotation parametrizes the particular test method and executes the test number of times based on the data provided by the DataProvider method. - The condition that needs to be met here is that the method marked as @DataProvider must return a 2D Object array (Object[][]) where each Object[] will be used as the input parameter to an iteration of the test method which uses the data provider. - The Test method will be executed using the same instance of the test class to which the test method belongs. @Factory Annotation - It Can be used to execute all the test methods present inside a test class with multiple sets of data, using the separate instance of the class. - Using this, we can instantiate a class multiple times rather than just a method. - The factory method should return an Object[]. This can be an array of Method calls or class objects. - The Test method will be executed using the separate instance of the respective class. Let us take the help of Example to understand these topics more clearly: @DataProvider Annotation Example public class DataProviderClass { @BeforeClass public void beforeClass() { System.out.println("Before class executed"); } @DataProvider public Object[][] message() { return new Object [][]{{“Mayank” , new Integer (321)}, {“Dileep”, new Integer (282)}}; } @Test (dataProvider=”message”) public void PrintMsg(String name, Integer id) { System.out.println(“Names are: “+name+” “+id); } } Output: You can see there that the before class is executed one time, whereas the printing method executed two times because @DataProvider annotation passed two sets of data. @Factory Annotation Example Simple Program:") }; } } Output: Before the SimpleTest class executed. testMethod parameter value is: two Before the SimpleTest class executed. testMethod parameter value is: one PASSED: testMethod PASSED: testMethod If you see the output, we can find that the beforeClass method is executed before the execution of the test method, which represents that factory implementation executes the test method for each instance of the test class. Let’s go through with another example where we have implemented the @Factory and @DataProvider Ina Single program: public class TestFactory { @Factory public Object[] factorymethod() { return new Object[]{new DPandFactoryExaple(), new DPandFactoryExaple()}; }} public class DPandFactoryExaple { @DataProvider public Object[][] message() { return new Object [][]{{“Mayank” , new Integer (321)}, {“Dileep”, new Integer (282)}}; } @Test (dataProvider="message") public void PrintMsg(String name, Integer id) { System.out.println(“Names are: “+name+” “+id); } @Test public void PrintSuccessfullMessage() { System.out.println(“Print the successful message”); }} If we run the above program, then you will find that the test method, which is associated with the @DataProvider that was executed four times that means that it was executed two times for each instance, as we are passing two sets of data that’s why the count is 4. The other @test method is executed two times for both instances. So the total test case execution count is 6. @DataProvider gives you the power to run a test method with different sets of data, and @Factory gives you the power to run all methods inside a test class with different sets of data. Though you can also test methods with @Factory, it depends on your use case as to which approach fits it better.
https://www.softwaretestingo.com/factory-and-dataprovider-annotation/
CC-MAIN-2019-51
en
refinedweb
On 2009-04-05 10:49Z, Werner LEMBERG wrote: > > how can I remove a make goal? > > Consider that I have suite of targets `foo-a', `foo-b', ... Normally > I would say, for example, > > make foo-a > > Now imagine that I want to have a special build with a debug target, > and I want to say > > make debug foo-a > > The `debug' isn't a real target; I rather test with > > .PHONY major > ifneq ($(findstring debug,$(MAKECMDGOALS)),) > ... > endif > > to set some flags and the like. Would.
https://lists.gnu.org/archive/html/help-make/2009-04/msg00006.html
CC-MAIN-2019-51
en
refinedweb
#include <integral.h> 2D Gaussian integration class for linear triangles. Three integration points. This integration scheme can integrate up to second-order polynomials exactly and is therefore a suitable "full" integration scheme for linear (three-node) elements in which the highest-order polynomial is quadratic. Definition at line 867 of file integral.h. Default constructor (empty) Definition at line 881 of file integral.h. Broken copy constructor. Definition at line 884 of file integral.h. References oomph::BrokenCopy::broken_copy(). Return coordinate x[j] of integration point i. Implements oomph::Integral. Definition at line 899 of file integral.h. Number of integration points of the scheme. Implements oomph::Integral. Definition at line 896 of file integral.h. Broken assignment operator. Definition at line 890 of file integral.h. References oomph::BrokenCopy::broken_assign(). Return weight of integration point i. Implements oomph::Integral. Definition at line 903 of file integral.h. Array to hold the weights and knots (defined in cc file) Definition at line 875 of file integral.h. Number of integration points in the scheme. Definition at line 872 of file integral.h. Definition at line 875 of file integral.h.
http://oomph-lib.maths.man.ac.uk/doc/the_data_structure/html/classoomph_1_1TGauss_3_012_00_012_01_4.html
CC-MAIN-2019-51
en
refinedweb
import "gopkg.in/src-d/go-vitess.v1/vt/vtctl/vtctlclient" Package vtctlclient contains the generic client side of the remote vtctl protocol. RegisterFactory allows a client implementation to register itself. func RunCommandAndWait(ctx context.Context, server string, args []string, recv func(*logutilpb.Event)) error RunCommandAndWait executes a single command on a given vtctld and blocks until the command did return or timed out. Output from vtctld is streamed as logutilpb Factory func(addr string) (VtctlClient, error) Factory functions are registered by client implementations type VtctlClient interface { // ExecuteVtctlCommand will execute the command remotely ExecuteVtctlCommand(ctx context.Context, args []string, actionTimeout time.Duration) (logutil.EventStream, error) // Close will terminate the connection. This object won't be // used after this. Close() } VtctlClient defines the interface used to send remote vtctl commands func New(addr string) (VtctlClient, error) New allows a user of the client library to get its implementation. Package vtctlclient imports 9 packages (graph) and is imported by 10 packages. Updated 2019-06-13. Refresh now. Tools for package owners.
https://godoc.org/gopkg.in/src-d/go-vitess.v1/vt/vtctl/vtctlclient
CC-MAIN-2019-51
en
refinedweb
Generic interface to separable image reconstruction filters. More... #include <mitsuba/core/rfilter.h> Generic interface to separable image reconstruction filters. When resampling bitmaps or adding radiance-valued samples to a rendering in progress, Mitsuba first convolves them with a so-called image reconstruction filter. Various kinds are implemented as subclasses of this interface. Because image filters are generally too expensive to evaluate for each sample, the implementation of this class internally precomputes an discrete representation (resolution given by MTS_FILTER_RESOLUTION) When resampling data to a different resolution using Resampler::resample(), this enumeration specifies how lookups outside of the input domain are handled. Create a new reconstruction filter. Unserialize a filter. Virtual destructor. Configure the object (called once after construction) Reimplemented from mitsuba::ConfigurableObject. Evaluate the filter function. Perform a lookup into the discretized version. Return the block border size required when rendering with this filter. Retrieve this object's class. Reimplemented from mitsuba::ConfigurableObject. Return the filter's width. Serialize the filter to a binary data stream. Reimplemented from mitsuba::ConfigurableObject.
http://mitsuba-renderer.org/api/classmitsuba_1_1_reconstruction_filter.html
CC-MAIN-2019-51
en
refinedweb
Define resources in Azure Resource Manager templates When creating Resource Manager templates, you need to understand what resource types are available, and what values to use in your template. The Resource Manager template reference documentation simplifies template development by providing these values. If you are new to working with templates, see Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal for an introduction to working with templates. To determine locations that available for a resource type, see Set location in templates. To add tags to resources, see Tag resources in Azure Resource Manager templates. If you know the resource type, you can go directly to it with the following URL format:{provider-namespace}/{resource-type}. For example, the SQL database reference content is available at: The resource types are located under the Reference node. Expand the resource provider that contains the type you are looking for. The following image shows the types for Compute. Or, you can filter the resource types in navigation pane:
https://docs.microsoft.com/ja-jp/azure/templates/
CC-MAIN-2019-51
en
refinedweb
Introduction:Yes i am really happy now.Because now very nice built-in 'DrawerLayout' library is available from 'Nuget'.And i am really says very thanks to 'amarmesic' for providing this layout.Before in 8.0, we need to write lot of code to make 'Navigation Drawer'.But now we can make it with very few steps.Any way for my kudos visitors, i will be discuss it in this. Descritpion: Please see this to the understand sample Now i hope you understanding the sample 'what i am going to explain in this post'.Ok lets start to development Step 1: - Open Visual Studio 2013 - Create new project name is "DrawerLayout8.1" PM> Install-Package DrawerLayout After that you will be found 'DrawerLayout' dll in references like this Step 2: Open MainPage.xaml and add the namespace in xaml: XAML xmlns:drawerLayout="using:DrawerLayout" Step 3: XAML <Grid x: <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <!--Title bar --> <Grid x: <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Image Margin="5" x: <TextBlock Grid. </Grid> <!--DrawerLayout bar --> <drawerLayout:DrawerLayout Grid. <!--MainPage --> <Grid x: <TextBlock Name="DetailsTxtBlck" Text="No Item Selected..." Margin="10" HorizontalAlignment="Center" VerticalAlignment="Center" FontSize="25" Foreground="Black" /> </Grid> <!--Favorites List Section --> <Grid x: <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Border Grid. <TextBlock HorizontalAlignment="Center" Margin="0,5,0,5" Text="MyFavorites" FontSize="25"/> </Border> <ListView Grid. <ListView.ItemTemplate> <DataTemplate> <Grid Background="White" Margin="0,0,0,1"> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <TextBlock Grid. <Rectangle Grid. </Grid> </DataTemplate> </ListView.ItemTemplate> </ListView> </Grid> </drawerLayout:DrawerLayout> </Grid> Step 4: Initialize the Drawer Layout then add some items to our list. C# public MainPage() { this.InitializeComponent(); DrawerLayout.InitializeDrawerLayout(); //Intialize drawer string[] menuItems = new string[5] { "Favorite 1", "Faverote 2", "Favorite 3", "Favorite 4", "Favorite 5" }; ListMenuItems.ItemsSource = menuItems.ToList(); //Set Menu list this.NavigationCacheMode = NavigationCacheMode.Required; } Step 5: Open/Close the drawer when the user taps on the Menu icon. C# private void DrawerIcon_Tapped(object sender, TappedRoutedEventArgs e) { if (DrawerLayout.IsDrawerOpen) DrawerLayout.CloseDrawer();//Close drawer else DrawerLayout.OpenDrawer();//Open drawer } Step 6: Get selected list item value and showing it on main page section C# private void ListMenuItems_SelectionChanged(object sender, SelectionChangedEventArgs e) { if (ListMenuItems.SelectedItem != null) { //Get selected favorites item value var selecteditem = ListMenuItems.SelectedValue as string; DetailsTxtBlck.Text = "SelectedItem is: "+selecteditem; DrawerLayout.CloseDrawer(); ListMenuItems.SelectedItem = null; } } Step 7: Close the drawer when the user taps on back key press. C# protected override void OnNavigatedTo(NavigationEventArgs e) { Windows.Phone.UI.Input.HardwareButtons.BackPressed += HardwareButtons_BackPressed; }void HardwareButtons_BackPressed(object sender, Windows.Phone.UI.Input.BackPressedEventArgs e) { if (DrawerLayout.IsDrawerOpen) { DrawerLayout.CloseDrawer();//Close drawer on back press e.Handled = true; } else { Application.Current.Exit();//exist app when drawer close on back press } } :) Great! But is it corresponds to the WP guidelines? It looks like an android application. Yes , You are Correct .. WP design in Unique compare to iOS and Android can anyone able to port this to windows phone 8? No,this drawer layout is targeted for 8.1 i ported this to wp8. it works like charm Helpful Tutorial. I have a doubt, Why this drawerLayout not works in BlankPage(SilverLight) a.k.a basic page in windows phone 8.1 and this page doesn't accept Dispatcher.BeginInvoke method ? is this code sample work on windows phone 8? No! This sample is not work with WP8.0.And if you looking for wp8.0,you may visit this link when i open the drawer in landscape mode and my application supports page orientation and i rotate the phone into portrait mode the menu's width become bigger than the page so it covers all the width available. is this a bug or i should call something in code behind so the drawer adapts its dimensions Hi Its really helpful Thanks!. I need a solution that how can I raise the drawer item clicked event which should call My view-model method? For my cross platform project Im using MVVMCross pattern. Please help. How can I highlight the selected item in drawer layout. Please suggest? How can I highlight the selected item in drawer layout. Please suggest? This comment has been removed by the author. This comment has been removed by the author. Thank you for a wonderful example ! But there is a problem . How to scroll down the menu to the left ? For example , we have a menu that does not fit entirely in control I have a question and if I want go back instead of closing the App, what should I do? I tried this but it only goes back to the First Page and sometimes my previous one was other one. Thanks for your time. void HardwareButtons_BackPressed(object sender, Windows.Phone.UI.Input.BackPressedEventArgs e) { if (DrawerLayout.IsDrawerOpen) { DrawerLayout.CloseDrawer();//Close drawer on back press e.Handled = true; } else { Frame frame = Window.Current.Content as Frame; if (frame == null) return; if (frame.CanGoBack) { frame.GoBack(); e.Handled = true; } } } I think my main problem is because I have already override that event in the App.xaml but it doesn't matter if I do it later in other form because of the change is everywhere. Would you be so kind and give me an idea how to fix it? Thanks. Helpful Tutorial...!! But How to add images or icons in Menu Items..?? great control! but how can i change the width of the menu in Windows 8.1 app? when the drawer is opened, it occupied more than 2/3 of the screen in landscape mode. Why when on a page with a listview can I not open the drawer is their a problem I saw a post about the drawer loosing gestures if their is a listview on page?. Hi Raju, First a fall i would like to say thanks for sharing such a valuable support in Windows Phone 8/8.1. Well I have a question, I need to implement the same drawer layout through out the App. Meant i would like to add this drawer layout on the App root Frame so that it is available on each pages of App..Though for a time being i have implemented this drawer layout on each pages of the App...but i need some robust solution....@@@Please help me dear!!!!!!!! Thanks, but if i want to add icon also along with the string like, ("icon.png","name") like Gmail android app. then what would be the process for that. your support value like anything on this subject matter. *** thanks alot ..plz reply as soon as possible. whatever possibilities... use stack pannel in your xaml. thank you sir.... Sir can you help me to open sublist in drable after click one item from the main list. eg. their is category called sperts ,politics, music so if i click in sports so i need to open sublist just following sports item like cricket,football etc. Hello, how can we open drawer from Right to Left ? Its Not Working Sliver Light App..Please Help Me Hello! How to use this control with mvvm (Prism)? Please help. Thank you very Much, God give you more Knowledge... Wonderful Application . Please I have a question? What if you want to attach different click events to the different text string in the listview, how can it be done,
http://bsubramanyamraju.blogspot.com/2014/11/windowsphone-81-wow-now-its-very-simple.html
CC-MAIN-2017-34
en
refinedweb
Plex The plex platform allows you to connect a Plex Media Server to Home Assistant. It will allow you to control media playback and see the current playing item. Setup The preferred way to setup the Plex platform is by enabling the discovery component which requires GDM enabled on your Plex server. If your Plex server has local authentication enabled or multiple users defined, Home Assistant requires an authentication token to be entered in the frontend. Press “CONFIGURE” to do it. If you don’t know your token, see Finding your account token / X-Plex-Token. If your server enforces SSL connections, write “ on” or “ true” in the “Use SSL” field. If it does not have a valid SSL certificate available but you still want to use it, write “ on” or “ true” in the “Do not verify SSL” field as well. You can also enable the plex platform directly by adding the following lines to your configuration.yaml: # Example configuration.yaml entry media_player: - platform: plex In case discovery does not work (GDM disabled or non-local plex server), you can create ~/.homeassistant/plex.conf manually. {"IP_ADDRESS:PORT": {"token": "TOKEN", "ssl": false, "verify": true}} - IP_ADDRESS (Required): IP address of the Plex Media Server. - PORT (Required): Port where Plex is listening. Default is 32400. - TOKEN (Optional): Only if authentication is required. Set to null(without quotes) otherwise. - ssl (Optional): Whether to use SSL or not. (Boolean) - verify (Optional): Whether to allow invalid or self-signed SSL certificates or not. (Boolean) Customization You can customize the Plex component by adding any of the variables below to your configuration: # Example configuration.yaml entry media_player: - platform: plex entity_namespace: 'plex' include_non_clients: true scan_interval: 5 show_all_controls: false use_custom_entity_ids: true use_episode_art: true - entity_namespace (Optional): Prefix for entity ID’s. Defaults to null. Useful when using overlapping components (ex. Apple TV and Plex components when you have Apple TV’s you use as Plex clients). Go from media_player.playroom2 to media_player.plex_playroom - include_non_clients (Optional): Display non-recontrollable clients (ex. remote clients, PlexConnect Apple TV’s). Defaults to false. - scan_interval (Optional): Amount in seconds in between polling for device’s current activity. Defaults to 10seconds. - show_all_controls (Optional): Forces all controls to display. Defaults to false. Ignores dynamic controls (ex. show volume controls for client A but not for client B) based on detected client capabilities. This option allows you to override this detection if you suspect it to be incorrect. - use_custom_entity_ids (Optional): Name Entity ID’s by client ID’s instead of friendly names. Defaults to false. HA assigns entity ID’s on a first come first serve basis. When you have identically named devices connecting (ex. media_player.plex_web_safari, media_player.plex_web_safari2), you can’t reliably distinguish and or predict which device is which. This option avoids this issue by using unique client ID’s (ex. media_player.dy4hdna2drhn). - use_episode_art (Optional): Display TV episode art instead of TV show art. Defaults to false. Service play_media Plays a song, playlist, TV episode, or video on a connected client. Music Playlist TV Episode Video Compatibility Notes -, check the setting Server> Network> Secure connectionsin your Plex Media Server: if it is set to Preferredor Required, you may need to manually set the ssland verifybooleans in the plex.conffile to, respectively, trueand false. See the “Setup” section above for details.
https://home-assistant.io/components/media_player.plex/
CC-MAIN-2017-34
en
refinedweb
Opened 6 years ago Closed 4 years ago #16502 closed Bug (fixed) CreateView useless error message when template_name is not specified Description According to documentation CreateView should use %app_name%/%model_name%_form.html template by default. But if template_name is not specified it returns uninformative error: Traceback (most recent call last): File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 283, in run self.result = application(self.environ, self.start_response) File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py", line 68, in __call__ return self.application(environ, start_response) File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 273, in __call__ response = self.get_response(request) File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/core/handlers/base.py", line 169, in get_response response = self.handle_uncaught_exception(request, resolver, sys.exc_info()) File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/core/handlers/base.py", line 203, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/views/debug.py", line 59, in technical_500_response html = reporter.get_traceback_html() File "/home/kirill/workplace/projects/createview_test/lib/python2.7/site-packages/django/views/debug.py", line 89, in get_traceback_html for loader in template_source_loaders: TypeError: 'NoneType' object is not iterable There is not anything except traceback on white background. It will be cool if there will be default template for CreateView. If it is not a bug then note that template_name is required would be useful. Attachments (6) Change History (35) comment:1 Changed 6 years ago by Changed 6 years ago by Adds check for non-emptyness for list of template_names in django.template.loader.select_template. comment:2 Changed 6 years ago by aaugustin, through information from your comment I was able to find possible place in code where additional check could be perfomed. It is django.template.loader.select_template function. It expects list of template names and select first loadable. If there are some template names in list and no one from them couldn't be loaded then raising TemplateDoesNotExist exception is the right decision since. But if there isn't any template names in the list then >>Template<<DoesNotExist is wrong exception, I think. So maybe checks if list is empty and raise exception with message like "I can't load any template for you because you didn't gave me any possible variants"? comment:3 Changed 6 years ago by comment:4 Changed 6 years ago by Changed 6 years ago by add a get_model to SingleObjectMixin? comment:5 Changed 6 years ago by comment:6 Changed 6 years ago by function-based generic view, django.views.generic.create_update.create_object, use default template if no template_name is given. When you switch to class-based view, CreateView will not work as you has expected, since django.views.generic.edit.CreateView has no default template. This must be a bug, please give CreateView a default template in 1.3. comment:7 Changed 6 years ago by Changed 6 years ago by merging of patches and adding regression tests comment:8 Changed 6 years ago by I've merged my and bhustez 's patches and added regression tests for both. comment:9 Changed 6 years ago by comment:10 Changed 6 years ago by There are too many issues being addressed here in one patch. The ticket is valid and seeks the addition of a default template to CreateView Patches addressing how the template loader works, or a missing get_model method on SingleObjectMixin should have their own tickets opened and patches submitted. Changed 6 years ago by Separate get_model-patch. comment:11 Changed 6 years ago by According to this discussion patch for select_template was separated to ticket:16866. Patch with get_model was left here since there are multiple ways to define which model should be used (the self.model, self.queryset or self.form_class attributes) and it justifies having a utility function for it. Tests provided. comment:12 Changed 6 years ago by Just hit this issue, any chances this get attention for 1.4? Changed 6 years ago by update patch for revision 17517 comment:13 Changed 6 years ago by update Silver_Ghost's patch for revision 17517 Changed 5 years ago by update patch for revision 17904 comment:14 Changed 5 years ago by comment:15 Changed 5 years ago by my pull request: comment:16 follow-up: 17 Changed 5 years ago by I don't like the way this patch/pull request works with ModelForms - it magically extracts a model from a ModelForm, which already needs discussion as it's new behaviour, but even worse it then passes that model out and then makes a brand new ModelForm out of it - that shouldn't happen. comment:17 Changed 5 years ago by I don't like the way this patch/pull request works with ModelForms - it magically extracts a model from a ModelForm, which already needs discussion as it's new behaviour Yes, but it is not a new behaviour, the function-based generic view counterpart did the same magic. I did not know whether or not the design decision had been changed to not providing a default template_name. To provide a default template_name, I don't think there is a much less magical way , given current ModelForm API. but even worse it then passes that model out and then makes a brand new ModelForm out of it - that shouldn't happen. No, it does not. If form_class already exists, it will NOT make a new ModelForm from model extracted from that, it will just return form_class you defined. The function-based generic view counterpart did the same. comment:18 Changed 5 years ago by comment:19 Changed 4 years ago by comment:20 follow-up: 21 Changed 4 years ago by I was unable to duplicate this in 1.5.1. I created a basic model as below: #models.py from django.db import models class Author(models.Model): name = models.CharField(max_length=100) A basic view: #views.py from django.views.generic import CreateView from .models import Author class CreateAuthor(CreateView): model = Author The traceback I got back was: TemplateDoesNotExist at / test_16502/author_form.html Request Method: GET Request URL: Django Version: 1.5.1 Exception Type: TemplateDoesNotExist Exception Value: test_16502/author_form.html Exception Location: /home/vagrant/django/django/django/template/loader.py in select_template, line 194 Python Executable: /home/vagrant/.virtualenvs/django/bin/python Python Version: 2.7.3 I think this is the correct exception that should be raised and the exception is present in the regular debug view. Perhaps this was an issue with earlier versions of Django and it's been resolved in another ticket, though I can't hunt this down. Perhaps related to ticket:16866? Perhaps if this is still a bug provide more information on how to reproduce it. comment:21 Changed 4 years ago by set form_class rather than model comment:22 follow-up: 23 Changed 4 years ago by comment:23 Changed 4 years ago by comment:24 follow-up: 28 Changed 4 years ago by This is all a little convoluted. There is no longer one problem at hand, and a little clarification might be useful. Bug 1: get_template_names() (as defined in SingleObjectTemplateResponseMixin) is returning None, which is causing Django to throw a TemplateDoesNotExist. This should instead throw a ImproperlyConfigured error, as it does not have the information to determine the template file to load. This is more eloquently described in #18853 (marked as duplicate to this topic) Bug 2: The TemplateDoesNotExist exception is causing the server error message, as detailed (and solved) in #21058. Feature Request 1: Creating a CBGV by only overriding the form_class variable. The patch provided creates the ability to do so, but does not actually solve the bugs detailed. I spoke to Russel about the possibility of the new feature. Unfortunately, determining the model based off a form specified in form_class is not desirable, because this assumes the form is a ModelForm, which may not be the case. As such, this feature (and patch) will therefore not be approved for Django. This leaves only Bug 1 to be solved. comment:25 Changed 4 years ago by Here is a pull request to fix Bug 1, as described by jambonrose. comment:26 Changed 4 years ago by comment:27 Changed 4 years ago by comment:28 Changed 4 years ago by determining the model based off a form specified in form_classis not desirable, because this assumes the form is a ModelForm, which may not be the case. CreateView inherits ModelFormMixin. This already assumes form is a ModelForm. comment:29 Changed 4 years ago by @bhuztez I'm going to re-close this and ask that you please open a new ticket since we try to have one issue per ticket. I think the feature request and patch look reasonable, but it's a bit difficult to follow the conversation. Could you please a new ticket that summarizes the details and include the most recent patch? I think the existing patch also needs documentation. Thanks! Actually, this is a crash of the debug view itself. When Django encounters a TemplateDoesNotExistexception, the debug view attempts to gather information about available template loaders and templates. It relies on the fact that django.template.loaders.template_source_loadersis already populated (by django.template.loaders.find_template). But in your case, it isn't. So a new exception is raised, it overrides the initial exception, and — unfortunately — it makes it difficult to understand what really happens here.
https://code.djangoproject.com/ticket/16502?cversion=1&cnum_hist=24
CC-MAIN-2017-34
en
refinedweb
Recently, I have been working on an application that maintains a series of linked lists as part of its general operations. A while back, I needed to sort the lists at certain points within the program, so I derived a new class from CList (I was using MFC) and added a sort method: CList void Sort( ); At a later point, I required to search the list based upon a second search criteria, so I added another parameter to this function. However, it turns out that I in fact needed a third criteria as well, so I decided that instead of adding more and more constants to pass to the function I would create a general-purpose solution. The main motivation for this had been that I’d had to derive another class to sort a different kind of data. First, I defined a set of requirements that I believed were necessary in such a general-purpose solution The second point pretty much ruled out inheritance as an option, which I wanted to avoid in any case because I didn’t want to have to mess about altering already-defined class hierarchies. Instead, I decided to use templates. The solution that I have produced is completely general – it would work with any kind of collection, and any kind of data that could be stored in this collection. To this end, I have used several classes that work together to provide the sorting capability. All the functions are defined inline (i.e. in the header file) to increase speed. The classes that make up the sorting system can be classified as shown below: The main class that you will be using will be CCollectionSorter. This class provides the Quick Sort algorithm itself, and uses the other two categories of classes to assist. CCollectionSorter Here is how you can use CCollectionSorter in your application to sort a collection: Firstly, you need to declare a collection class. You may have already done this; here is the example we’ll be following through with. CList<int, int> MyIntegerList; Note that you could also simply further declarations of integer lists as follows: typedef CList<int, int> CIntList; Now, in the file where the sorting code will be, you’ll need to include the CollectionSorter.h file: #include "CollectionSorter.h" Next, you’ll need to declare a CCollectionSorter variable at the point where you wish to perform the sorting operation. This is a template with two arguments; TYPE and ARG_TYPE. These should be as specified in the declaration of the collection, and in the case of STL containers, they should both be the same: TYPE ARG_TYPE CCollectionSorter<int, int> IntSorter; Note that once again you can simplify further declarations of this as follows: typedef CCollectionSorter<int, int> CIntCollectionSorter; Further, I would recommend placing this new typedef along with the one above. typedef The next stage is to call one of the SortCollection methods. Which one you call will depend upon your situation. The two basic choices are SortCollection and SortPointerCollection. The difference between these is that SortPointerCollection will dereference the values stored in the collection before comparing them. This will produce unpredictable results if used incorrectly so please ensure you call the correct function. There are overloads of both functions for both CList and CArray collections. However, the CCollectionSorter class is much more flexible if you need it to be, so if these overloads don’t satisfy your requirements, please read on to the section entitled “Extending CCollectionSorter”. For the sake of our example, we will use the following function call: SortCollection SortPointerCollection CArray IntSorter.SortCollection(MyIntegerList); If the collection contains objects, it is required that operator >, operator < and operator == are defined in the classes contained. However, this can be avoided as described in the section “Extending CCollectionSorter”. operator > operator < operator == The example described above shows the use of this class at its simplest. However, it is possible to provide custom comparer classes, and custom collection accessor classes so that any type of ordered collection and data can be sorted. The purpose of the comparer class is to compare two values and return the result to the sorting routine. This function has been encapsulated into a class so that custom behaviour can be added as needed. For instance, you may wish to provide several comparer classes which all act on different fields of the same class. Then, the comparer class to be used is selected at runtime. The comparer classes should be derived from the IComparer interface. You must override the following functions: IComparer virtual bool IsABigger() const virtual bool IsBBigger() const virtual bool Equal() const The base class has two protected member variables of the type that was specified by the user of the CCollectionSorter class named m_cmpA and m_cmpB. These will be set by the sorting function, and then the three functions above will be used to compare the two values. You should derive a comparer class from IComparer, and if the class is for a specific type only, declare it as shown below: class CMyComparer : public IComparer<type> You should replace 'type' with the type of data that you are writing the class to compare. If it is to be a general-purpose comparer (like the two supplied), then use the following syntax: template<class TYPE> class CMyComparer: public IComparer<TYPE> You should then pass the new comparer to the SortCollection function as shown below: Sorter.SortCollection(myCollection, CNewComparer()); You may require that additional initialisation be done before the comparer class is used, in which case you must declare it as a variable, initialise it, then use it, as shown below: CNewComparer Comp; // *** initialise Comp here *** Sorter.SortCollection(myCollection, Comp); Here is the code for a class which sorts a series of values into descending (rather than ascending) order, derived from the CDefaultComparer class: CDefaultComparer #pragma once #include <span class="code-string">"DefaultComparer.h"</span> template<class TYPE> class CReverseComparer : public CDefaultComparer<TYPE> { public: CReverseComparer() {} virtual ~CReverseComparer() {} bool IsABigger() const { return CDefaultComparer<TYPE>::IsBBigger(); } bool IsBBigger() const { return CDefaultComparer<TYPE>::IsABigger(); } bool Equal() const { return (m_cmpA == m_cmpB); } }; Note that, for instance, in the IsABigger function, we cannot use the following: IsABigger return !CDefaultComparer<TYPE>::IsABigger(); This is because we must remember that we want m_cmpA < m_cmpB, and that, as shown by the equation below, the above line of code would result in the incorrect answer: !(m_cmpA > m_cmpB) == (m_cmpA <= m_cmpB) Now, to use the new comparer class, you'd do the following: CCollectionSorter<int, int> MyIntListSorter; MyIntListSorter.SortCollectino(myList, CReverseComparer<int>()); Another time when you might need to code your own comparer class is if you wanted to compare different members of the same object type in different parts of your program. Let’s assume you were creating a contact-management program: you might want to sort by either name, or by age. Here’s your CPerson class: class CPerson { public: // constructors here… UINT GetAge() { return m_nAge; } CString GetName() { return m_strName; } // etc… private: UINT m_nAge; CString m_strName; }; You’d declare the list as follows: typedef CList<CPerson *, CPerson *> CPersonList; typedef CCollectionSorter<CPerson *, CPerson *> CPersonCollectionSorter; CPersonList g_Contacts; Obviously this might not be a global variable in an actual application, but I’ve declared it as such here for simplicity. Now, to implement the two comparer classes: class CPersonAgeComparer : public IComparer<CPerson *> { public: CPersonAgeComparer() {} virtual ~CPersonAgeComparer() {} bool IsABigger() const { return (m_cmpA->GetAge() > m_cmpB->GetAge()); } bool IsBBigger() const { return (m_cmpB->GetAge() > m_cmpA->GetAge()); } bool Equal() const { return (m_cmpA->GetAge() == m_cmpB->GetAge()); } }; ///////////////////////////// class CPersonNameComparer : public IComparer<CPerson *> { public: CPersonNameComparer() {} virtual ~CPersonNameComparer() {} bool IsABigger() const { return (m_cmpA->GetName() > m_cmpB->GetName()); } bool IsBBigger() const { return (m_cmpB->GetName() > m_cmpA->GetName()); } bool Equal() const { return (m_cmpA->GetName() == m_cmpB->GetName()); } }; Now, you can use the two classes as follows: CPersonCollectionSorter sorter // By age: sorter.SortCollection(g_Contacts, CPersonAgeComparer()); // Or by Name: sorter.SortCollection(g_Contacts, CPersonNameComparer()); That’s it for the Comparer classes! Up to now, we’ve primarily used MFC collection classes. However, it is perfectly feasible to provide your own collection accessor class to access any kind of collection. The process is quite simple, and simply involves deriving a new class from the ICollectionAccessor interface and defining the three functions described below: ICollectionAccessor The interface does not provide a mechanism for storing the collection, as this would mean the sorter class would also have to know what kind of collection was being held. Therefore, it is up to you to provide such a method in your derived class. This also means that you can sort diverse collection types that cannot be stored in the normal way. For instance, in the example I’m about to demonstrate showing how to provide a custom accessor class, you’ll see that we need to store two pieces of data in order to provide access to the collection, not one. Should the base class have automatically handled storing a reference to collection, this wouldn’t have been as clean to implement. Further, the sorter class, which takes a reference to an ICollectionAccessor derived class, does not need to know anything about the collection that is being stored. It is usual to provide a method named SetCollection to handle passing a pointer or reference to the collection class. If you decide to do this using a reference, you will either have to store a pointer (taking the address of the reference), or pass a reference to the constructor instead. In order to clarify this discussion, I have provided an example below: SetCollection Now, let’s assume that we have an array of ten integers that need sorting: int nNumnbers[9]; // don’t forget nNumbers[0] is also valid Neither of the pre-defined collection accessor classes will provide for accessing this kind of collection. Therefore, we need to provide our own. Here are a summary of its responsibilities and requirements: As described above, we need to provide overrides for the three functions declared in the interface also. The CList and CArray accessor classes store a pointer to the collection. However, doing this would only provide half of the functionality required, because those classes have a built in function for determining the size of the collection. Because we are using a raw array, we must also take this information as a parameter. Note that, no matter what kind of elements we store in the array, the way it is accessed will be the same in 99% of cases. Therefore, we might as well write this as a template class so that we can use it for any kind of array in the future also. Okay, here’s the class that I’ve defined: template<class TYPE> class CStdArrayAccessor : public ICollectionAccessor<TYPE> { public: CStdArrayAccessor() { m_pArray = NULL; m_nSize = 0; } CStdArrayAccessor(TYPE *pArray, long nSize) { SetCollection(pArray, nSize); ASSERT(m_pArray); } void SetCollection(TYPE *pArray, long nSize) { m_pArray = pArray; m_nSize = nSize; ASSERT(m_pArray); } TYPE GetAt(long nIndex) { ASSERT(m_pArray); return m_pArray[nIndex]; } void Swap(long nFirst, long nSecond) { ASSERT(m_pArray); TYPE typeFirst = m_pArray[nFirst]; // set the item at the first position to equal the second: m_pArray[nFirst] = m_pArray[nSecond]; // now the second to equal the first: m_pArray[nSecond] = typeFirst; } long GetCount() { ASSERT(m_pArray); return m_nSize; } protected: TYPE *m_pArray; long m_nSize; }; Now, to use this accessor class, we must first initialise it: CStdArrayAccessor<int> saa; saa.SetCollection(nNumbers, 10); Notice that we provide 10 as the size because even though the upper bound of the array is 9, there are ten items because the index is always zero-based. Now, we can sort the array: CCollectionSorter<int, int> sorter; sorter.SortCollection(saa); Notice that we use the third overloaded version of the SortCollection method, which takes a collection accessor class as its first parameter rather than the collection itself. This eliminates the need for the sorting class to know about the collections it is sorting at all, and in fact you’ll see that the other overloaded version just end up calling this one anyway, but they are provided to add clarity to your code. Obviously, you could provide both a custom accessor and a custom comparer class, which the sorting function would then use instead of the defaults. The demo project is a very simple console application which takes a set of numbers, stores them in both a CArray and CList class, and then uses the methods of CCollectionSorter to sort both collections, then outputs them to the console. The aim of the project is purely to demonstrate the simplest usage of the class – hopefully after reading this article you will be able to easily implement sorting in your own application, providing custom accessor and/or comparer classes as needed. Okay, that’s it folks! I hope that you will find this collection-sorting mechanism useful; I know it’s been useful to me since I wrote it a few weeks back. If you have any problems with the code, or want to make suggestions, feel free to e-mail me (click on my name at the top of the article). This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here surme wrote:Do you really work with CArray. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/1288/General-Purpose-Collection-Sorter?msg=1207492
CC-MAIN-2017-34
en
refinedweb
John Gilbert <jgilbert01@yahoo.com> wrote on 14/10/2006 20:14:43: > I am trying to write an Ejb3Directory. It seems to work for index > writing but not for searching. > I get the EOF exception. I assume this means that either my > OutputStream or InputStream is doing > something wrong. It fails because the CSInputStream has a length of > zero when it reads the .fnm section > of the .cfs file. > > Does anyone have any suggestions? Seems flushBuffer() ignores its length param: > public class Ejb3OutputStream extends OutputStream { > protected void flushBuffer(byte[] b, int len) throws IOException { > os.write(b); > } Shouldn't it be like this? os.write(b, 0, len); --------------------------------------------------------------------- To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org For additional commands, e-mail: java-user-help@lucene.apache.org
http://mail-archives.apache.org/mod_mbox/lucene-java-user/200610.mbox/%3COFF3A93C54.90CF1E8D-ON88257208.001427EC-88257208.0014D4A3@il.ibm.com%3E
CC-MAIN-2017-34
en
refinedweb
Opened 10 years ago Closed 10 years ago Last modified 10 years ago #3838 closed (wontfix) stringformat filter allows trailing non-format characters but not leading characters Description This filter seems to provide an, intentionally simplified, implementation of python's format strings. I'm not sure what the intent was with this simplification, however it is inconsistent. Due to the requirement that the preceding '%' be left out, one cannot put leading constant characters into the format string (which may be intended). However, one _can_ put trailing constant characters into the format string and it will still work. For instance: {# fails #} {{ generic_key_type|stringformat:"course/%s.html" }} {# works #} {{ generic_key_type|stringformat:"s/course/index.html" }} The first way will not work, however the second way will work, which is inconsistent. Either the filter should ensure that no trailing constant characters are used or it should allow both ways. If the intent is to simplify it in some way, then it should consistently not allow either trailing or leading characters. In this case, the name is very misleading as the filter is not equivalent to python's format strings, and I would argue that it should be renamed after the subset of functionality that it is intended to provide. With its current name, I would argue that it should allow both trailing and leading characters, and the following is an implementation that allows this, and attempts to remain backwards compatible. def stringformat(value, arg): """ Formats the variable according to the argument, a string formatting specifier. This specifier uses Python string formating syntax with the exception that the leading "%" may be dropped. See for documentation of Python string formatting """ try: return ("%" + str(arg)) % value except (ValueError, TypeError): try: return str(arg) % value except (ValueError, TypeError): return "" Change History (7) comment:1 Changed 10 years ago by comment:2 Changed 10 years ago by I'm re-opening this report, as the resolution is not appropriate given the answer that was posted. I respectfully disagree with this being a "wontfix". If this filter ... is really more for alternate types of formatting (eg |stringfilter:".2f") than string concatenation. Then there is a bug ... Because it does allow for string concatenation. Hence there is a bug to be fixed. Mark it as low priority if you'd like, but it's still a bug. More importantly, I agree with the original poster that if this is the intent, then the filter is mis-named and mis-documented. It needs attention. The proposed method is no more "magic" than the original implementation, and it _will_ handle the case that you brought up: |stringformat:"settings/%s.html" comment:3 Changed 10 years ago by Ok, fair enough - it does handle the case I brought up because that raises a TypeError. I'll defer the decision to a core developer (and thanks to lakin's correction of my interpretation, my opinion changes is +0) comment:4 Changed 10 years ago by comment:5 Changed 10 years ago by comment:6 Changed 10 years ago by Going back to wontfix for the reasons Chris gave originally; the fact that you can, in some situations, accidentally use something in a way that's undocumented and unsupported does not mean we should be fixing bugs for that use. comment:7 Changed 10 years ago by but you should fix it so that it can't be used in ways that are undocumented and unsupported. I still find the naming ambiguous and frankly think that a filter should does what it name says, not what the documentation says that it currently does, especially if both have a big overlap as it is now. So... if stringformat can't be fixed so that it allows all possible uses (and adding documentation for those, as proposed here), then it probably should be changed in a way that raises an appropriate exception with a meaningful error message, or not? This filter is really more for alternate types of formatting (eg |stringfilter:".2f") than string concatenation. Your alternate method seems a bit too magic, and still isn't going to work with cases like |stringformat:"settings/%s.html".
https://code.djangoproject.com/ticket/3838
CC-MAIN-2017-34
en
refinedweb
Opened 9 years ago Closed 9 years ago #8564 closed (duplicate) newforms-admin doesn't support linking to it via {% url %} or reverse() Description In the old admin, that was possible to have a link to the admin area in the UI, like: {% if user.is_staff %} <a href="{% url django.contrib.admin.views.main.index %}">Admin area</a> {% endif %} or even: <a href="{% url django.contrib.admin.views.main.change_stage "app","model",instance.id %}">Edit instance</a> In newforms-admin, that crashes with: Reverse for 'src.django.contrib.admin.site.root' not found. I do realize that admin.site is now a Site instance, not a module, that's why {% url %} will never work for it. However, I can't see how to put a link to the admin area now? Currently I'm using the following hack: urlpatterns = # ... #(r'^admin/(.*)', admin.site.root), (r'^admin/(.*)', 'views.admin_site_root'), # ... def admin_site_root(request, url): return admin.site.root(request, url) and then: <a href="{% url views.admin_site_root "" %}">Admin</a> which is pretty ugly (since of complete redundancy) but solves the problem. I defenitely think there should be a standard way to pull the admin site urls. I can suggest two approaches: - Extend reverse() and {% url %} to recognize/understand bound methods, in particular django.contrib.admin.site.root - Under django.contrib.admin.templatetags, create a set of tags like {% admin django.contrib.admin.site %} and {% admin_change_list django.contrib.site "app","model" %} Change History (3) comment:1 Changed 9 years ago by comment:2 Changed 9 years ago by Thanks, I overlooked the named patterns. That partly solves the problem: with a named pattern, I actually can point to the admin root like {% url admin "" %} That is still not possible to link to a particular admin page, though, e.g. have something like {% url admin "/app/model/" + instance.id %}, which was possible in the old admin app. You can use named url patterns.
https://code.djangoproject.com/ticket/8564
CC-MAIN-2017-34
en
refinedweb
class OEOverlap OEOverlap calculates the static shape overlap between a reference molecule or grid and a fit molecule or grid. Note that this does not move the fit molecule(grid) nor does it optimize the overlap. It simply calculates the score for the provided orientation. OEOverlap() OEOverlap(const OEOverlap &rhs) OEOverlap(const OEChem::OEMolBase &refmol) OEOverlap(const OESystem::OEScalarGrid &refmol) Default and copy constructors. float GetCarbonRadius() const Return the current value for the carbon radius approximation. float GetGridSpacing() const Return the current value for the grid spacing to use for the OEOverlapMethod_Grid method of calculating overlaps. Is not used by any other overlap method. Defaults to 0.25. unsigned int GetRadiiApproximation() const Return the current value of the radii approximation. unsigned int GetRepresentationLevel() const Return the current representation level. bool Overlap(OEOverlapResults &res, float *atomOverlaps=0) bool Overlap(const OESystem::OEScalarGrid &fitgrid, OEOverlapResults . bool SetCarbonRadius(float cradius) Set the radius to use when using OEOverlapRadii_Carbon. By default this is set to 1.7 Angstroms. See OEBestOverlay.GetCarbonRadius. bool SetFitGrid(const OESystem::OEScalarGrid &fitgrid) Set grid to be used as fit object.. bool SetMethod(unsigned int m) Set the method used to calculate overlap. The default for OEOverlap is OEOverlapMethod_Exact. Alternatives are defined in the OEOverlapMethod namespace. bool SetRadiiApproximation(unsigned int type) Set the radius approximation used to calculate overlap. The default for OEOverlap is OEOverlapRadii_Carbon. Alternatives are defined in the OEOverlapRadii namespace. bool SetRefGrid(const OESystem::OEScalarGrid &refgrid) Set a reference grid for the calculation. An internal copy is made. Any previous reference molecule or grid is cleared. bool SetRefMol(const OEChem::OEMolBase &refmol) Set a reference molecule for the calculation. An internal copy is made. Any previous reference molecule or grid is cleared. bool SetRepresentationLevel(unsigned int type) Set the representation level for the Gaussians in OEOverlap. The default is OEOverlapRepresentation_Atomic. Alternatives are defined in the OEOverlapRepresentation namespace.
https://docs.eyesopen.com/toolkits/python/shapetk/OEShapeClasses/OEOverlap.html
CC-MAIN-2017-34
en
refinedweb
Dear Debianizers, NB. I have asked similar question at debian-python [1] but had no replies, so re-posting to -devel now I am ITPing python-scikits-learn and possibly few other python-scikits-* packages in the future. All of the packages would have 1 peculiarity, they all would rely on having $> cat /usr/share/pyshared/scikits/__init__.py __import__('pkg_resources').declare_namespace(__name__) As a resolution I am planing to package some silly Debian-native (there is no per se the upstream for this single file) package python-scikits-common which would provide that base directory with __init__.py Am I missing possible other alternative (I think that unpleasant and evil diverts, or inappropriate for this case alternatives aren't real choices here, right)? [1] -- .-. =------------------------------ /v\ ----------------------------= Keep in touch // \\ (yoh@|www.)onerussian.com Yaroslav Halchenko /( )\ ICQ#: 60653192 Linux User ^^-^^ [175555] Attachment: signature.asc Description: Digital signature
https://lists.debian.org/debian-devel/2010/03/msg01024.html
CC-MAIN-2017-34
en
refinedweb
0 The code executes as expected, but I am not sure it is considered good code:) I am thinking especially of the two streams on the socket, if the first one is successfully opened and the next one for some reason throws an exception, the finally block will never run. If someone would validate whether or not this code is okay, I would really appreciate it :) import java.io.*; import java.net.*; public class EchoClient { public static void main(String[] args) throws IOException { Socket clientSocket = null; PrintWriter out = null; BufferedReader in = null; try { clientSocket = new Socket("192.168.0.104", 7); out = new PrintWriter(clientSocket.getOutputStream(), true); in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream())); try { BufferedReader userInBuffer= new BufferedReader(new InputStreamReader(System.in)); String userInput; if ((userInput = userInBuffer.readLine()) != null) { out.println(userInput); System.out.println("echo: " + in.readLine()); } } finally { out.close(); in.close(); clientSocket.close(); } } catch (UnknownHostException e) { System.err.println("Host name unavailable."); System.exit(1); } catch (IOException e) { System.err.println("Couldn't get I/O for the connection."); System.exit(1); } } }
https://www.daniweb.com/programming/software-development/threads/306105/is-this-bad-code-streams-try-catch-finally-and-exceptions
CC-MAIN-2017-34
en
refinedweb
In Cerebral you connect state to components where you need it. This give some benefits: Cerebral supports numerous view layers. They conceptually work the same way, but has different implementation details. Choose the view layer that makes sense to you and your team. We will move on using React, but have a look at the API section to find more out about Inferno, AngularJS, Preact and Vue. When you render your application you use the Container component to expose the controller to the rest of your components… import React from 'react' import {render} from 'react-dom' import {Container} from 'cerebral/react' import controller from './controller' import App from './App' render(( <Container controller={controller}> <App /> </Container> ), document.querySelector('#app')) When you connect a component like this… import React from 'react' import {connect} from 'cerebral/react' import {state} from 'cerebral/tags' export default connect({ title: state`title` }, function MyComponent ({title}) { return ( <div> <h1>{title}</h1> </div> ) } ) …the component will be registered to Cerebral. Cerebral actually has a register of all connected components in your application. This information is passed to the debugger and whenever Cerebral flushes out changes made to different state paths, it will know what components should render. All connected components are automatically optimized, meaning that they will only render if a parent component passes a changed prop or Cerebral explicitly tells it to render. To get more in-depth information about connect, please visit the API chapter.
http://cerebraljs.com/docs/introduction/components.html
CC-MAIN-2017-34
en
refinedweb
3D Models and Hotspots Hi guys. I'm new to QML and having a few issues that I hope you guys can help me with. I have a 3D model in a .dae file that I am trying to display. I can display it in Qt 5.4 by: import QtQuick 2.0 import Qt3D 2.0 Rectangle { width: 1140 height: 700 color: "white" Viewport{ id: viewport anchors.fill: parent camera: Camera { eye: Qt.vector3d (400.0, 100.0, -400.0) fieldOfView: 90 } Item3D{ id: satellite mesh: Mesh{source: "ow.dae"} position: Qt.vector3d(0, -0, 0) } light: Light{ position: Qt.vector3d(1000, -1000, -1000) }}} I have tried to display the same model in Qt 5.7 but all my attempts have been unsuccessful. Would any of you guys happen to know how to do it? Also, I would like to put hotspots on the model, that display information about a particular point on the model. However the only text I can find online, mentioning hotspots, is outdated. Is it possible to put hotspots on a model and if so, how can it be done in either 5.4 or 5.7? Thanks for your help.
https://forum.qt.io/topic/71676/3d-models-and-hotspots
CC-MAIN-2017-34
en
refinedweb
Beginning PHP and MySQL 5.0 142 Ravi Kumar writes "PHP and MySQL use is so prevalent that nowadays it is hard to miss seeing a website on the net which has been built using these technologies. The beauty of PHP is in its open nature and the rich set of libraries and modules which imparts a lot of power and flexibility to the programmer. Similarly MySQL is a free database which is ideal for use as a backend for any website. And not surprisingly there are a plethora of books in the market which explains these two topics. One such book is Beginning PHP and MySQL 5 from Novice to Professional authored by W.Jason Gilmore published by Apress." Read the rest of Ravi's review. is solely dedicated to working with files and operating systems where the author explains in his inimitable style different ways of reading from and writing to files. All the frequently used file manipulation functions are explained in this chapter with the aid of examples. The first 12 chapters of the book solely at beginners too. But that is a minor detail and I guess there are limits to which a books of even this size can cram information. All in all an informative book which gives good value for money. The author of this book. Ravi Kumar is passionate about all things related to GPL and open source and likes to share his thoughts through his blog." You can purchase Beginning PHP and MySQL 5.0 - From Novice to Professional from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. Misleading Title (Score:5, Insightful) Re:Misleading Title (Score:3, Informative) What the hell is the title? (Score:4, Informative) Re:Misleading Title (Score:1) Incorrect data validation; Inappropriate use of resources; Inelegant design; Bad debugging methods; Using the wrong tools for the job. All of these and more are things that you really won't learn from a book. You learn these through experience. Any book that p Re:Misleading Title (Score:1) I agree infinitely. No, double that. I agree even more than infinitely. M Re:Misleading Title (Score:1) Re:Misleading Title (Score:1) Re:Misleading Title (Score:1) LAMP Rocks (Score:3, Interesting) You can do some incredible stuff with PHP/MySQL if you put your mind to it. One of my favorite projects (it wasn't the definitive or only one!) was a windows app that hooked keypresses. Every so often it would upload the number of keypresses to some PHP / MySQL code and update your user profile. The application potential is impressive, and not fully exploited the way I look at it. Re:LAMP Rocks (Score:5, Insightful) However, although I'm the first to brag about the power, simplicity, and performance that PHP and Apache offer when used by the right programmer, I do make a living off of ASP.NET/SQL Server applications, so please consider the following in the ensuing flamewar: 1.. 2. The 3. Say what you will about SQL Server, but if we could just replace the M in LAMP with PostreSQL, or, well, anything other than MySQL, I would be happy. SQL Server is not my favorite database, but it is very good. MySQL has its niche, but I expect a RDMS to have stored procedures and transactions as standard fare. (yes, I know 5.0 has SPs, and InnoDB gives you transactions, but I said "standard fare") 4. This is the most important point of all: There are just as many cookie-cutter, craptastic, insecure, bug-ridden PHP apps out there as there are ASP.NET apps. On the other hand, if you are smart and creative, and truly use the tools provided by either platform, you can create fantastic applications with either one, just as easily. How to put postgreSQL in LAMP (Score:1) LAMP: Linux, Apache, Most of our cool scripting languages start with a P and PostgreSQL Re:LAMP Rocks (Score:1) Re:LAMP Rocks (Score:2) VB.NET is a fine language, if a tad verbose for my taste. My problem is not with the language. My problem is with the swarms of morons churning out God-awful code using said language. Re:LAMP Rocks (Score:5, Insightful). Help me out here: you're saying that PHP is extremely flexible, as long as the programmer only tries to write one type of program with it? Hmmmm? I think we both recognize the truth: compared to Python, Ruby, or Lisp, PHP is not very flexible at all. It's a poorly designed, inflexible language that happens to have gained momentum at a critical era in the history of the WWW. Re:LAMP Rocks (Score:2) Re:LAMP Rocks (Score:2) Re:LAMP Rocks (Score:2) Re:LAMP Rocks (Score:2) When I got done though it was complete with all the Re:LAMP Rocks (Score:3, Insightful) What does this even mean? You know that they are available, but you don't want to use them? You don't support using the latest versions? You seem to want to imply something, but I can't figure out what it is. Re:LAMP Rocks (Score:2) As an example, I seem to remember that FOREIGN KEY REFERENCES was silently ignored in MySQL with the I don't want another PHP book (Score:2, Interesting) Whoa, look at the time. Next language/framework/ide please... Re:I don't want another PHP book (Score:4, Informative) I believe there's also a MySQL Cookbook, but my database use isn't so advanced that I need specific help on that just yet. I'm still learning proper programming technique, while trying to learn PHP and MySQL and the fine points of CSS AND crank out a new web site that won't require a massive rewrite in a year. Re:I don't want another PHP book (Score:2) but but (Score:1) Re:but but (Score:2) Re:but but (Score:2) Re:but but (Score:1) Only 3 articles below this very one, is a Monty Python themed technical book... SPOILER WARNING: it sucks. Henry's Python Programming Guide [slashdot.org] Good luck, oh, and I've added you to my newsletter. Tries to do too much (Score:2) A book like the one being reviewed tries to do too much. When you're starting out, you don't want a a lot of detailed library stuff getting in the way. Once you've got the basics done, you don't want a book that teaches it like a course, you want a reference. From the review's summary... (Score:1, Redundant) Forgot something else... (Score:2) Why is it most of these reviews sound like grade school current events reports? ROR (Score:2) Damn kids these days. Re:ROR (Score:1) I think R&R is more common for intranet and B-to-B apps rather than public sites. Thus, you wouldn't see it much browsing around public blogs etc. How does this book compare to.. (Score:3, Interesting) Re:How does this book compare to.. (Score:2) Php.org has got some great language resources. mySQL.com...eh, not as good, but decent if you have a basic grasp of SQL. Re:How does this book compare to.. (Score:2) I go a lot faster with a good reference book then I do hunting online. Re:How does this book compare to.. (Score:1) Re:How does this book compare to.. (Score:2) Re:How does this book compare to.. (Score:1, Interesting) Not Microsoft, and not a replacement, but it's indeed funny to see a hackish hobby project trash the hell out of PHP's performance even with all the Zend and Roadsend bells and whistles: [phpcompiler.net] [phpcompiler.net] Of course, once you venture into the native VB.NET/C# land, you can expect perform Owner of previous edition (Score:4, Informative) Re:Owner of previous edition (Score:2) Prevalent != Best (Score:2) Examine the options in the tools available to you, pick what works for you. I've tried MySQL and PHP and mod-perl and CGI and python, but my current favorites are PostgreSQL [postgresql.org] and Tomcat [apache.org] hosting Java Servlets. No books required, their included documentation is quite good. persistent problem (Score:5, Interesting) this is a persistent problem with all of these programming in ____________ books. They teach the language and sometimes get around to dealing with good programming. Learning PHP, or java, or python, or whatever is still not learning to program. Learning to program effectively should be the first priority. All the OOP features in PHP5 are of no use to someone without good knowledge of OOP. Likewise, I'd gather that most of the insecurities in PHP are the result of poor design. PHP is great for its templating features, the ability to separate content from design, and its speed of development. But, that still doesn't make it secure or effective. How many times does a programmer get in trouble becasue they don't escape double quotes in a TEXT field in mysql, or account for malformed URL's, html, bad javascript, etc.? No matter how good these books are, and I'm sure they do a good job of presenting all of PHP's features and strenghts, they still usually lack teaching how to design a web site/application, how to effectively use passwords, secure data queries, efficient programming, etc. That might be an altogether different beast, but there's a world of difference between using PHP in a web site and writing a good web app. I'd wish that the books would focus more on good programming techniques. I don't imagine everyone will buy the book otherwise, and not everyone will benefit the same, but I've not found too many books that put "programming" ahead of "programming in". Re:persistent problem (Score:2) This isn't idle for me - I want to contribute to a particular open source project (via programming; I currently do testing/documentation/etc.). While I work in IT, my background isn't in IT, and my programming education stopped at CS102, many years ago. I've taught myself plenty of BAD programming in PHP etc., but I'd like someone to suggest a book that teaches "good programming". To put it best, I learned my pro Re:persistent problem (Score:1) I'm currently having some fun with the How to Think Like A Computer Scientist series - [canonicalbooks.com] I'm reading the Python version right now, and it's pretty easy to follow. You might also want to take a look at MIT OpenCourseWare - [mit.edu] - I'm slowly working through their computer science courses. Emphasis on slowly. Re:persistent problem (Score:2) Um, sorry, could you show me which language more tightly couples content and design? Binding content and design is what those web template languages do. It's why they're better than traditional CGI scripts for quick projects and crash and burn for large projects, unless you add something to manage the separation. That some people have managed to assemble frameworks that do sort of separate cont Re:persistent problem (Score:1, Interesting) Excuse me? If anything, PHP actively promotes spaghetti code. It's PHP, HTML and SQL usually mixed together all over the place. How many times have you seen tr/td elements being output directly to the client from looping through the dataset returned by a query three lines above the table element? Because I've seen it a zillion times, and counting. There's only one "official" templating engine, Re:persistent problem (Score:2) Re:persistent problem (Score:1) Here's a tip for your .htaccess if you're using Apache: I would suggest you at least put together on the same server apps which have to communicate between each other. Then it's probably not worth the time. Only one chapter (Score:2) Re:Only one chapter (Score:2) WAMP kicks a considerable amount of ass (Score:4, Informative) [wampserver.com] Re:WAMP kicks a considerable amount of ass (Score:1) Re:WAMP kicks a considerable amount of ass (Score:2) Re:WAMP kicks a considerable amount of ass (Score:1) WAFP (Score:2) At least for a site with ten hits a day. All of which are from 127.0.0.1. Re:WISP kicks more (Score:2) Re:WAMP kicks a considerable amount of ass (Score:2, Informative) There's another similar project by the name of XAMPP [apachefriends.org]. XAMPP comes with quite a lot of other handy auxiliaries as well, such as eAccelerator, and it's available for Linux, Windows, Solaris and most recently OSX. The interesting thing is it supports both PHP 4 and 5, allowing easy testing of an application on both versions - and at least the Windows version comes with an automagical version switcher. I'd recommend giving both packages a look. Do note this, however (and I think it goes for WAMP too): Re:WAMP kicks a considerable amount of ass (Score:2) Too noisy for me. (Score:1) For me -- doing is better than reading (Score:3, Interesting) I learned the most I possibly could by downloading Wordpress (blog software), PHPBB (bulletin board software) and setting them up. I downloaded modifications and looked over the code in person. Over the past few months I've become really adept at writing my own PHP and MySQL-based software, to the point that I'm starting to design my own CMS interface. Not a single thing I've learned from a book has stuck, but everything I learn in chomping on code in Notepad or emacs seems to stick forever. Anyone else have problems with books on coding? Re:For me -- doing is better than reading (Score:2, Insightful) I use books only as a language reference. I find that no matter what I'm trying to accomplish in code, any book I own doesn't have examples that "fit" the pieces of my app that I'm struggling with. So what good is a book with 1000 pages of example codes and 300 pages of theory when 95% of the time it doesn't fit what I need anyways? I prefer the reference style book or snippet archive (TurboPascal days): "a Listbox has these properties, methods, and events and here is how they work" or "To make Already found a good one... (Score:1) Seriously though, as a relative n00b in the PHP world, I like the visual quickstart guides by Peachpit Press for PHP and Advanced PHP, where there is a practical example of what you might need to use PHP for, in addiion to a disection of the code being used. Both of these books deal with MySQL as well. While I wasn't exactly scripting my own Nuke system in ten minutes, after some casual reading I w power and flexibility, gee wiz (Score:2) The same could be said for python or perl. I think PHP's main "beauty" feature is how easy it is to install, nothing more. Re:power and flexibility, gee wiz (Score:2) Also, as it's just as easy to install Perl, your argument doesn't hold a lot of water... there must be another reason it's so popular ;) Re:power and flexibility, gee wiz (Score:2) Not mod_perl, it isn't. Well, at least not as easy to just drop in. It's easier these days, but I really think it held mod_perl back years ago. As for lack of explicit type definitions What can you do with PHP and MySQL ? (Score:3, Funny) The beauty of PHP (Score:1, Flamebait) beauty? PHP? Have you every looked at it? closely? Do you know any other solution? Just look at the naming of is_null, empty and isset Re:The beauty of PHP (Score:2) PHP is the worse language found on the web. The only reason it exists is because its easy. Java/ruby/python make a much better solution than PHP on any platform. Enjoy, Don't know the coding (Score:1) This. Book. Suxxors. (Score:1) Re:This. Book. Suxxors. (Score:1) I have found errors in every programming book I've ever read. In fact, I REFUSE to buy anything until the author accepts this and puts up an "errata" website somewhere. Plug for TinyButStrong (Score:2) I've no connection with either of these two projects, just a very impressed user (and the TinyButStrong promo "Libraries and modules" (Score:1) Too much information (Score:2) technologies. If you are an experienced programmer and want to learn PHP I would recommend reading O'Reilly's "PHP in Nutshell" book. You can read through the whole thing in less than a day and pick up most of what you will need to know. Also you cannot beat the online docs as a reference. A sorry situation (Score:1, Informative) The fact that PHP and MySQL are the most deployed tools for web development is a rather sorry situation, given the deep shortcomings of both tools. See these articles about the many PHP warts: Experiences of Using PHP in Large Websites [ukuug.org] Why PHP sucks [blogspot.com] The PHP Ghetto [ianbicking.org] You will be happier with a more mature and complete dynamic language like Python, or even (gasp ;-) ) Ruby. Similarly, see these other articles about the many MySQL warts: MySQL Hate [pythonmac.org] MySQL Gotchas [sql-info.de] Compare the last one with the one fo First sentence needs fixing (Score:1) There, that's better. Moo (Score:2) It doesn't look like a database, it doesn't smell like a database. It's doesn't even taste like a database. And only the really nascent to the db scene would say it looks like a database. It happens to have an language interface that on some level partially coincicdes with what many people think SQL should be. But, that's where it starts, and that's where it ends. I'm not saying MySQL is a bad product. It's a wonderful product for quick web development an You don't really need to buy a book for this (Score:2) <?php $connection = mysql_connect($location,$user,$pass) or die("Couldn't connect to DB server."); success_code = @mysql_select_db($db, $connection) or die("Couldn't select database."); $sql = "SELECT * FROM $table"; $result_set = mysql_query( $sql ); while ($row = mysql_fetch_array( $result_set )) { do_something_with( $row ); } ?> The only Re:Sec-exps already know PHP is the beginner's cho (Score:1) Re:Sec-exps already know PHP is the beginner's cho (Score:2) Hmmm...I thought that's what I said (although there have been some pretty bad holes in the core PHP bits themselves). On the other hand, "dumb entry-level programmers" was also one of the main knocks against IIS's ASP in its early days. (ASPX seems to have largely fixed this by being much less friendly than ASP to entry-level types.) ;) Re:Sec-exps already know PHP is the beginner's cho (Score:5, Interesting) Not blame the language? Why not? PHP is the only language that I know of that has like 6 or 7 functions just to escape strings to be injected in SQL queries and that still manages to get it wrong. I mean, first time you try to hit a DB, you've heard about SQL injection you want to escape your inputs, are you using addslashes? Nope, and you should stripslashes too, if magic_quotes are active, because even though they're built in they fucking fail. Oh, there's an sqlite_escape_string, but you're using mysql so you'd probably use this lil' mysql_escape_string... except that you were really supposed to use mysql_real_escape_string, cause it's the real one you know. And the best part of all that shit? there is not one of the unsafe function that's marked anything even remotely close to "deprecated" or "dangerous", they are unsafe and should never be used, that's old news, and you can still use them n/p Hell, PHP is the only language that I know of that does not feature any kind of prepared statement in it's standard DB interface. It only got prepared statements with the mysqli_ crapfest and that frigging piece of donkey poo requires you to create a prepared statement explicitely and then bind every single argument one by one to your statement. This thing is the most retarded standard DB interface that's ever been born in this world, and it's only taken like 4 years for the Zend retards to unleash this abortion on us! Developers rejoice, maybe in 4 more years we'll get a DB interface on par with Perl's DBI or Python's DBAPI2... And THIS is but one of the dozens of inherently stupid and/or insecure "features" PHP got built-in such as the good ol' REGISTER_GLOBALS, the hidden errors and notices, the lack of anything even remotely close to Perl's "use strict", the completely random and inconsistent function names and function outputs, the three-fucking-thousand functions all dumped into the global namespace (perl has 206, Python has 76 and ruby probably has less than a dozen)... I'm all for blaming the builder, as long as he's got usable tools. PHP is nothing that can be called "usable tool" with a straight face, the whole "language" is a gigantic hack built with feces and vomit, it IS to blame, and blame it I do. Re:Sec-exps already know PHP is the beginner's cho (Score:1) As a php programmer, let me be the first to say: good point Anyone coding PHP without expanding into Perl or Python is giving themselves undue stress. Re:Sec-exps already know PHP is the beginner's cho (Score:2) [php.net] ommended?rev=1.179.2.11 [php.net] t [php.net] I guess my post won't reach +5 because mine is not stupid enough Re:Sec-exps already know PHP is the beginner's cho (Score:2) No need to, it has been done. See python(turbogears, django, web.py, twisted nevow, probably about 10-20 others I have forgotten) or ruby(on rails, plus a few others) or perl(plus a lot of stuff I don't know) or a host of other, not stupid languages for details. Except they are not "web scripting languages" they are scripting languages with web platforms, which makes life really really easy when your website needs to talk to anything as Re:Sec-exps already know PHP is the beginner's cho (Score:2) Here is what I know of OSS philosophy: Write cool software, share it with the world, let people help you make it better. In a nutshell that is it. The OSS philosophy comment was directed at Zend taking patches to fix some of the glaring problems with php as a development platform, or rather the fact that they are pretty difficult about it. As for your comments about what a language is designed for, that is just foolish stereotyping. Python comes pretty close to Perl Re:Sec-exps already know PHP is the beginner's cho (Score:2, Insightful) Is it the fault of the language? I can point to a few things where I can say, Shame on You, PHP!, Re:Sec-exps already know PHP is the beginner's cho (Score:2) This very issue would appear to be at the heart of many existing C (or C++) vs. Java arguments. The claim is/was that newbie programmers are not as dangerous if given Java. Re:Sec-exps already know PHP is the beginner's cho (Score:2) Re:Sec-exps already know PHP is the beginner's cho (Score:2) Besides, the Java programmers are ju Re:Sec-exps already know PHP is the beginner's cho (Score:1) Re:Sec-exps already know PHP is the beginner's cho (Score:2) Re:Sec-exps already know PHP is the beginner's cho (Score:1) I might exchange newbie with ignorant. For example, a newbie is almost guaranteed to be ignorant, but someone who is not a newbie could possibly be ignorant and the mistakes are caused by not knowing any better. But, perhaps the biggest reason for poor apps is not newbies or ignorants, but rather is laziness. As you are aware, design, development and testing takes a lot longer than "getting something done". Re:Any website? (Score:2) Re:Any website? (Score:2) If the developers at "all those banks and stock exchanges handling vast loads" are using PHP and MySQL with the help of this book to develop their applications, then I'm going to stuff my money under my mattress. Lighten up a little. The problem is that too many beginners are shown easy software development languages and te Stats, please (Score:2) Or are you just repeating something you've heard? Postgres may have more features and better support of SQL standards like transactions, triggers, stored procedures, etc, but these are things that improve data integrity, not performance. MySQL has always been oriented to performance rather than features and its use as a backend for web sites has always been a Re:Stats, please (Score:2) Or are you just repeating something you've heard? Why are you questioning this? This has been common knowledge for years. Both MySQL and PostgreSQL have their relative strengths. From Wikipedia: "Critics find MySQL's popularity surprising in the light of the existence of other open source database projects with comparable performance and in closer compliance Wrong! (Score:2) Re:Wrong! (Score:2) Well, you've just restated my point. I'm not the one making the unverified claims about performance, I'm just asking for actual verification. Missing the point (Score:1) Data integrity is ABSOLUTELY CRITICAL. Without data integrity your data is next to worthless! Even if your data is disposable, like a blog, inconsistent data can cause applications to fail. If it's not disposable, like financial data of some sort, data integrity should be your number one concern. MySQL would have t Or what about? (Score:2) I have used RoR and am impressed with what it has to offer. Check back in a year and it might mature to the level for larger scale projects that aren't as vanilla boilerplate as is the case now. I have used Smalltalk as an OO language and am starting to teach myself Seaside since i
https://slashdot.org/story/06/05/22/1333207/beginning-php-and-mysql-50
CC-MAIN-2017-34
en
refinedweb
Hello, I worked with Python for about a year. Then I moved on to C/C++. Now I'm learning Java. I know the basics of programming really well, and I'm starting to get a good grasp of OOP. I'm learning the syntax of abstract classes and inheritance in Java. I thought it would be fun to make a small program to use the new syntax I learned, so decided to make a small RPG (I know--typical project). Say I have a Character class. public class Character { private String name; private int level; public void moveNorth() { // code to move north } // more methods for movement } So, it's rather simple. What if I'm going about my business in a world and talk to an NPC. The NPC asks me if I want to be a magician or a warrior. I want the character to acquire a set of methods and variables depending on his/her answer. For instance, a warrior might have DoubleSlash(), PowerStrike(), LungeAttack(), and a variable for keeping track of something like mana for warriors. And a magician might have FireBall(), LightningStrike(), Heal(), Weaken(), and a variable for mana. How could I have my character gain a certain set of skills? I haven't found anything for java on conditional inheritance. I'm trying to avoid giving all the skills to the character and having a million true/false flags for whether the character can use the skill or not. That would add a lot of hassle. Thanks for your input. Mouche
https://www.daniweb.com/programming/game-development/threads/99681/rpg-classes-conditional-inheritance
CC-MAIN-2017-34
en
refinedweb
Difference between revisions of "EMF/Recipes" Revision as of 06:31, 5 October 2007 This topic gathers recipe-style solutions for common needs developers using EMF have. Contents - 1 Code generation recipes - 2 Notification Framework Recipes - 3 Properties Recipes Code generation recipes. Related recipes None so far.). Related recipes None so far. References - Thread on the EMF newsgroup: "multiple namespaces in one editor = Notification")); } } } Related Recipes None so far.; } Related Recipes None so far. References
http://wiki.eclipse.org/index.php?title=EMF/Recipes&diff=53700&oldid=52689
CC-MAIN-2014-52
en
refinedweb
Rob Spoor wrote:Class Whatever needs to be public, which means it has to move into a file of its own. I tried that and it worked. Rob Spoor wrote:I think that what Marcus means is that why the class needs to be public. What I have experienced is that reflection has trouble finding non-public classes in general. This only applies to the classes themselves though. If a public class is nested in a non-public class (and is static) then it can be loaded through reflection. I've done this myself just quite recently. I was using some tool that automatically generated the public class, and I couldn't add a nested class to it. I could add code outside the class. What I did was create one non-public class with a public classes nested in it, and that worked just fine.
http://www.coderanch.com/t/580036/GUI/java/EventHandler-ActionListener-usage
CC-MAIN-2014-52
en
refinedweb
24 October 2012 04:53 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The country’s domestic consumption of ethylene is estimated at 32.5m tonnes for 2012, growing at a slower annual pace of 3.8% compared with 4.9% in the previous year, Sinopec chief analyst Shu Zhaoxia said at the Downstream Asia 2012 conference in In 2013, Sinopec’s ethylene production capacity will be increased by 800,000 tonnes/year in 2013 once its plant in The company is expected to run its crackers at full tilt in 2013, she said. Sinopec's overall ethylene production is expected to grow to 10.1m tonnes next year from 9.66m tonnes in 2012, according to Shu. "The demand increase from polyethylene (PE) and ethylene glycol is the main driver of ethylene equivalent growth in 2013," she said. Overall ethylene demand in By 2015, PE consumption in 2015 will account for 56.6% of overall ethylene demand, according to Shu. Downstream Asia 2012 runs on 24-25 October and is part of the Singapore International Energy
http://www.icis.com/Articles/2012/10/24/9606645/china-c2-output-to-fall-1.7-in-2012-on-weak-demand.html
CC-MAIN-2014-52
en
refinedweb
Results 1 to 1 of 1 Apologies in advance for the long post, but I thought I'd put it all in one post as it all ties back to my overall infrastructure/network goals. If what I ... - Join Date - Nov 2010 - 3 Network advice for virtual infrastructure If what I am asking is far too broad for one post, please let me know if I should split these questions up into individual threads. -------------------------- I am currently working on configuring a bunch of Linux based virtual machines for the following purposes: Ultimate Goals: - A mobile, virtual infrastructure running any services that are needed/wanted from time to time, all running within virtual guests on a laptop host initially, but any parts of which can later be ported to other hardware. For instance, once configured correctly, one or more virtual server could be ported onto other managed hardware and made available externally. - Automated installation and configuration of this internal network and any of my physical machines that interact with it (have been reading about and/or setting up tools like fai, cfengine, puppet, etc. on a Debian guest). - Run services securely on an internal network for learning/business purposes (eg wiki, media), with some services made available externally (web, collaboration server/services, media server). To achieve this, I am working with the following: Physical host: - 2 laptops. - Currently using Virtualbox for virtualization. - Configured to acquire IP address by DHCP. - Host #1: - Currently running Windows XP (32-bit). - Ultimately migrate base OS over to 64-bit Linux running as a virtual host (likely run Debian as the host OS, which I am currently learning using Virtualbox guests). - 2 nics - 1 ethernet, 1 wlan. - Host #2: - Currently running Debian Squeeze (32-bit). - 1 working wlan nic only (integrated ethernet faulty). Virtual guests: - Virtual Linux guests running on laptop host #1. - Run 1 or more virtual servers (www, mail, nfs, svn). - Run 1 or more virtual client/workstation guests (business, development, testing, different OS's, etc.). - All virtual machines must have at least one static IP so their installation/configuration can be automated off virtual install/config server. - All VMs can have 1 or more nics. The general idea is to use the laptops solely as secure virtual hosts giving access to self-contained virtual guests offering different functionality/services. The virtual machines can then be ported to any hardware of my choice and in any location. For example, VMs could be transferred between laptops or onto a managed server in a different location. To keep things simple in the early stages, I am aiming to set up: - one test virtual server running a whole bunch of services (Debian Squeeze), and - 2 test virtual clients (Debian Squeeze, Ubuntu). The main issue is that I am in the early stages of learning how to do all this, and am particularly confused about how I could/should configure the networking . At this stage, I am really looking for general ideas/suggestions/problems/obstacles given what I am trying to do. For example, could/should I: - Create separate subdomains for the virtual servers and virtual clients? - Maintain a /etc/hosts file centrally with automated distribution to all hosts using something like cfengine, or use something like BIND to configure a couple of nameservers? - Run the laptops as nameservers for the virtual guests? - Treat each laptop host as a DMZ and the virtual nodes as an internal network? - Have all internet traffic pass through to the virtual guests via a virtual firewall/gateway/nat router? - Configure the host laptops and/or virtual guests with 2 nics each - 1 dhcp for internet access, and 1 static for connecting to the install/config server and any internal network? What other considerations should I be thinking about? What would be the best way to deal with name resolution? If setting up nameservers, is it possible to configure it for the sole purpose of resolving IPs/hostnames on an internal network? Can a private/fictitious domain name be used with any internal network, or must any domain name used be registered even if only used to resolve for the 192.168.1/24 namespace? Thanks in advance. Last edited by cad1llac; 06-07-2011 at 02:09 PM. Reason: Give more appropriate title.
http://www.linuxforums.org/forum/networking/179437-network-advice-virtual-infrastructure.html
CC-MAIN-2014-52
en
refinedweb
i wrote this program, but somthenig is missing when i input -1 to finish. instead to have straight this :the overall average miles/gallons:.... my average consumption result, the program output this: Enter the miles driven and it follows:the overall average miles/gallons: itwil be kind, i had two days stugling. Code: #include <stdio.h> #include <stdlib.h> int main() { int miles; int gallons; int counter; int total; float average; float consumption; counter = 0; total = 0; printf ("Enter the gallons used, -1 to end :"); scanf("%d", &gallons); printf("Enter the miles driven :"); scanf("%d", &miles); while (gallons != -1){ consumption = (float) miles / gallons; printf("The miles / gallons for this thank was %.2f \n", consumption); total += consumption; counter += 1; printf ("\nEnter the gallons used, -1 to end :"); scanf("%d", &gallons); printf("Enter the miles driven :"); scanf("%d", &miles); } if (counter != 0){ average = (float) total / counter; printf ("\nthe overall average miles/gallons %.2f \n", average); } else{ printf ("\nNo No No "); } system("PAUSE"); return 0; }
http://cboard.cprogramming.com/c-programming/97016-consumption-printable-thread.html
CC-MAIN-2014-52
en
refinedweb
// Include_Lib_Test.ino#include "MyLib.h"#include <Button.h>void setup() { }void loop() { readbutton();} // MyLib.h#ifndef _MYLIB_H#define _MYLIB_H#include <Button.h>Button helloButton = Button(2, LOW);void readbutton();#endif // MyLib.cpp#include "MyLib.h"void readbutton() { helloButton.listen();} #include <Button.h>#include "MyLib.h" The Arduino software scans the files for function prototyping and include files before compiling.There is an error with that, it doesn't look at the "#ifndef", so you end up declaring "hello button" twice. Button helloButton = Button(2, LOW); extern Button helloButton; Variables should be defined in .cpp files. The definition means, in effect: allocate some memory to hold a Button, name it helloButton and initialise it to this value ...Code: [Select]Button helloButton = Button(2, LOW);If you need to access a variable from other files, you should add an external declaration in a header file and include that wherever you need to access it. The external declaration means, in effect: the symbol helloButton refers to a Button which is defined in some other file ...Code: [Select]extern Button helloButton;In this case you don't refer to helloButton from outside MyLib.cpp so you don't actually need the external declaration. extern Button helloButton;
http://forum.arduino.cc/index.php?topic=163624.msg1222671
CC-MAIN-2014-52
en
refinedweb
Data Structures for Parallel Programming The .NET Framework version 4 introduces several new types that are useful in parallel programming, including a set of concurrent collection classes, lightweight synchronization primitives, and types for lazy initialization. You can use these types with any multithreaded application code, including the Task Parallel Library and PLINQ. The collection classes in the System.Collections.Concurrent namespace provide thread-safe add and remove operations that avoid locks wherever possible and use fine-grained locking where locks are necessary. Unlike collections that were introduced in the .NET Framework versions 1.0 and 2.0, a concurrent collection class does not require user code to take any locks when it accesses items. The concurrent collection classes can significantly improve performance over types such as System.Collections.ArrayList and System.Collections.Generic.List<T> (with user-implemented locking) in scenarios where multiple threads add and remove items from a collection. The following table lists the new concurrent collection classes: For more information, see Thread-Safe Collections. The new synchronization primitives in the System.Threading namespace enable fine-grained concurrency and faster performance by avoiding expensive locking mechanisms found in legacy multithreading code. Some of the new types, such as System.Threading.Barrier and System.Threading.CountdownEvent have no counterparts in earlier releases of the .NET Framework. The following table lists the new synchronization types: For more information, see: With lazy initialization, the memory for an object is not allocated until it is needed. Lazy initialization can improve performance by spreading object allocations evenly across the lifetime of a program. You can enable lazy initialization for any custom type by wrapping the type Lazy<T>. The following table lists the lazy initialization types: For more information, see Lazy Initialization. The System.AggregateException type can be used to capture multiple exceptions that are thrown concurrently on separate threads, and return them to the joining thread as a single exception. The System.Threading.Tasks.Task and System.Threading.Tasks.Parallel types and PLINQ use AggregateException extensively for this purpose. For more information, see How to: Handle Exceptions Thrown by Tasks and How to: Handle Exceptions in a PLINQ Query.
http://msdn.microsoft.com/en-us/library/dd460718(v=vs.100)
CC-MAIN-2014-52
en
refinedweb
Reasons why Reiser4 is great for you: V3 of reiserfs is used as the default filesystem for SuSE, Lindows, FTOSX, Libranet, Xandros and Yoper.. Table of Contents Copyright and patent patent). Making the source code available to you is not enough by itself to bring you all of the possible benefits of software libre. Many file systems are so difficult to modify that only someone who has worked with the code for years finds it feasible to modify it, and even then small changes can take months of labor due to their ripple effects on the other code and the difficulties of dealing with disk format changes. This is why we have a plugin based architecture in Reiser4, so that it is not just possible, but easy, to improve the software. Imagine that you were an experimental physicist who had spent his life using only the tools that were in his local hardware store. Then one day you joined a major research lab with a machine shop and a whole bunch of other physicists. All of a sudden you are not using just whatever tools the large tool companies who have never heard of you have made for you. You are now part of a cooperative of physicists all making your own tools, swapping tools with each other, suddenly empowered to have tools that are exactly what you want them to be, or even merely exactly what your colleagues want them to be, rather than what some big tool company, that has to do a market analysis before giving you what you want, wants them to be. That is the transition you will make when you go from version 3 to version 4 of ReiserFS. The tools your colleagues and sysadmins (your machinists) make are going to be much better for what you need. You may wonder why the design we will present is so highly structured, why every object is allowed to control what is done to it by its providing a limited interface, and why we pass requests to objects to do things rather than doing things directly to the object? Surely we limit our functionality by doing so, yes? Indeed we do, but is there a reason why the price is worth paying? Is there something that becomes crucial as complexity grows? Chaos theory offers the answer. If you disturb one thing, and disturbing that thing inherently disturbs another thing, which in turn disturbs the first thing plus maybe a whole bunch of other things, and those things all disturb the first thing again, and...., etc., you get what chaos theory calls a feedback loop. These loops have a marvelous tendency for the end effect of the disturbance to be incalculable, and our inability to calculate such loops is perhaps a significant aspect of our being mere mortals. Of course, as you probably know most programmers want to be gods, and when they are unable to know what the effect will be of a change they make to their code, they dislike this. As a result, they go to great lengths to reduce the tendency of code changes to the design of one object to have ripple effects upon other objects. A vitaly important way to do this is to have very strictly defined interfaces to objects, and for the designer of each object to be able to know that the interface will never be violated when he writes it. This is called "object oriented design", or "structured programming", and if used well it can do a lot to reduce a type of chaotic behavior known as bugs.;-) Verifying the avoidance of interactions that violate the design for an object is a key task in security auditing (inspecting the code to see if it has security holes). The expressive power of an information system is proportional not to the number of objects that get implemented for it, but instead is proportional to the number of possible effective interactions between objects in it. (Reiser's Law Of Information Economics) This is similar to Adam Smith's observation that the wealth of nations is determined not by the number of their inhabitants, but by how well connected they are to each other. He traced the development of civilization throughout history, and found a consistent correlation between connectivity via roads and waterways, and wealth. He also found a correlation between specialization and wealth, and suggested that greater trade connectivity makes greater specialization economically viable. You can think of namespaces as forming the roads and waterways that connect the components of an operating system. The cost of these connecting namespaces is influenced by the number of interfaces that they must know how to connect to. That cost is, if they are not clever to avoid it, N times N, where N is the number of interfaces, since they must write code that knows how to connect every kind to every kind. One very important way to reduce the cost of fully connective namespaces is to teach all the objects how to use the same interface, so that the namespace can connect them without adding any code to the namespace. Very commonly, objects with different interfaces are segregated into different namespaces. If you have two namespaces, one with N objects, and another with M objects, the expressive power of the objects they connect is proportional to (N times N) plus (M times M), which is less than (N plus M) times (N plus M). Try it on a calculator for some arbitrary N and M. Usually the cost of inventing the namespaces is much less than the cost of the users creating all the objects. This is what makes namespaces so exciting to work with: you can have an enormous impact on the productivity of the whole system just by being a bit fanatical in insisting on simplicity and consistency in a few areas. Please remember this analysis later when we describe why we implement everything to support a "file" or "directory" interface, and why we aren't eager to support objects with unnecessarily different namespaces/interfaces --- such as "attributes" that cannot interact with files in all the same ways that files can interact with files. To interact with an object you name it, and you say what you want it to do. The filesystem takes the name you give, and looks through things we call directories to find the object, and then gives the object your request to do something. A file is something that tries to look like a sequence of bytes. You can read the bytes, and write the bytes. You can specify what byte to start to read/write from (the offset), and the number of bytes to read/write (the count). [Diagram needed]. You can also cut bytes off of the end of the file. Cutting bytes out of the middle or the beginning of a file, and inserting bytes into the middle of a file, are not permitted by any of our current file plugins, all of which implement fairly ancient Unix file semantics, but this is likely to change someday. Your interactions with a file are handled by the file's "plugin". These interactions are structured (in programming, such structures are generally called "interfaces") into a set of limited and defined interactions. (We are too lazy to perform the infinite work of programming plugins to handle infinite types of interactions.) Each way you can interact with a plugin is called a "method". A plugin is composed as a set of such methods. Among programmers, laziness is considered the highest art form, and we do our best to express our souls in this art. This is why we have layers and layers of laziness built into our plugin architecture. Each method is composed from a library of functions we thought would be useful in constructing plugin methods. Each plugin is composed from a library of methods used by plugins, and a plugin can be considered a one-to-one mapping (that's where you have two sets of things, and for every member of one set, you specify a member of the other set as its match) of every way of interacting with the plugin to a method handling it. For every file, there is a file pluginid. Whenever you attempt to interact with a file, we take the name of the file, find the pluginid for the file, and inside the kernel we have an array of plugins [diagram needed that is suitable for persons who don't know what an array or offset is], and we use the pluginid as the offset of that file's plugin within that array. (An offset is a position relative to something else, and in programming it is typically measured in bytes.) This implies that when you invent a new file plugin, you have to recompile (Programmers don't actually write programs, they got too lazy for that long ago, instead they write instructions for the computer on how to write the program, and when the computer follows these instructions ("source code"), it is called "compiling", which programmers usually pretend was done by them when they speak about it, as in "I recompiled the kernel for my exact CPU this time, and now playing pong is noticeably faster.".) the kernel, and you can only add plugins to the end of the list, and you can never reuse or change pluginids for a plugin, or else you will have to go through the whole filesystem changing all of the pluginids that are no longer correct. Someday in a later version we will revise this so that plugins are "dynamically loadable" (which is when you can add something to a program while it is running), and you can add support for new plugins to a running kernel. When we do that we will carefully benchmark and ensure that there is no loss of performance (or we won't do it) from using dynamic loading. Programs are often "layered", which is when the program is divided into layers, and each layer only talks to the layer immediately above it, or immediately below it, and never talks to a part of the program two levels below it, etc. This reduces the complexity of the interfaces for the various parts of the program, and most of the complexity of a program is in coding its interfaces. Reiser4 has a "semantic layer", and this semantic layer concerns itself with naming objects and specifying what to do to the objects, and doesn't concern itself with such things as how to pack objects into particular places on disk or in the tree. An IO to a file may affect more than one physical sequence of bytes, or no physical sequence of bytes, it may affect the sequences of bytes offered by other files to the semantic layer, and the file plugin may invoke other plugins and delegate work to them, but its interface is structured for offering the caller the ability to read and/or write what the caller sees as being a single sequence of bytes. Appearances are what is wanted. When we say that security attributes are implemented as files, we mean that security attributes look like a sequence of bytes, but the security attributes may be stored in some compressed form that perhaps might be of fixed length, or even be just a single bit. For the filesystem to offer the benefits of simplicity it need merely provide a uniform appearance that all things it stores are sequences of bytes, and there is nothing to prevent it from gaining efficiency through using many different storage implementations to offer this uniform appearance. For many files it is valuable for them to support efficient tree traversal to any offset in the sequence of bytes. It is not required though, and Unix/Gnu/Linux has traditionally supported some types of files which could not do this. A pipe will allow you take the output of one command, and connect it to the input of another command, and each of the commands will see the pipe as a file. This pipe is an example of a file for which you cannot simply jump to the middle of the file efficiently but instead you must go through it from beginning to end in sequential order. A name is a means of selecting an object. An object is anything that acts as though it is a single unified entity. What is an object is context dependent. For instance, if you tell an object to delete itself, many distinctly named entities (that are distinct objects in other ways such as reading) might well disappear as though they are a single object in response to the delete request. A namespace is a mapping of names to objects. Filesystems, databases, search engines, environment variable names within shells, are all examples of namespaces. The early papers using the term tended to seek to convey that namespaces have commonality in their structure, are not fundamentally different, should be based on common design principles, and should be unified. Such unification is a bit of a quest for a holy grail. In British mythology King Arthur sent his knights out on a quest for the holy grail, and if only they could become worthy of it, it would appear to them. None of them found it, and yet the quest made them what they became. Namespaces will never be unified, but the closer we can come to it, the more expressive power the OS will have. Reiser4 seeks to create a storage layer effective for such an eventually unified namespace, and gives it a semantic layer with some minor advantages over the state of the art. Later versions will add more and more expressive semantics to the storage layer. Finding objects is layered. The semantic layer takes names and converts them into keys (we call this "resolving" the name). The storage layer (which contains the tree traversing code) takes keys and finds the bytes that store the parts of the object. Keys are the fundamental name used by the Reiser4 tree. They are the name that the storage layer at the bottom of it all understands. They can be used to find anything in the tree, not just whole objects, but parts of objects as well. Everything in the tree has exactly one key. Duplicate keys are allowed, but their use usually means that all duplicates must be examined to see if they really contain what is sought, and so duplicates are usually rare if high performance is desired. Allowing duplicates can allow keys to be more compact in some circumstances (e.g. hashed directory entries). An objectid cannot be used for finding an object, only keys can. Objectids are used to compose keys so as to ensure that keys are unique. When designing the naming system described in the future vision whitepaper I broke names from human and computer languages into their pieces, and then looked at their pieces to see which ones differed from each other in meaningful ways vs. which pieces were different expressions that provided the same functionality. (In more formal language, I would say that I systematically decomposed the ways of naming things that we use in human and computer languages into orthogonal primitives, and then determined their equivalence classes.) I then selected one way of expression from each set of ways that provided equivalent functionality. (Since that whitepaper is focused on what is not yet implemented, the whitepaper does not list all of the equivalence classes for names, but instead describes those which I thought I could say something interesting to the reader about. For instance, the NOT operator is simply unmentioned in it, as I really have nothing interesting to say about NOT, though it is very useful and will be documented when implemented.) The ordering of two components of a name either has meaning, or it does not. If the resolution of one component of the name depends on what is named by another component, then that pair of name components forms a hierarchical name. Hierarchy can be indicated by means other than ordering. Many human languages indicate structure by use of suffix or tag mechanisms (e.g. Russian and Japanese). The syntactical mechanism one chooses to express hierarchy does not determine the possible semantics one can express so long as at least one effective method for expressing hierarchy is allowed. I choose to only offer one expression from each equivalence class of naming primitives, and here I chose the '/' separated file pathname expression traditional to Unix for pragmatic compatibility with existing operating systems. Reiser4 handles only hierarchical names, and non-hierarchical names are planned only for SSN Reiserfs. Hierarchical names are implemented in Reiser4 by use of directories. The first component of a hierarchical name is the name of the directory, and the components that follow are passed to the directory to interpret. We use `/' to separate the components of a hierarchical name. Directories may choose to delegate parts of their task to their sub-directories. The unix directory plugin when supplied with a name will use the part of the name before the first / to select a sub-directory (if there is a / in what it is resolving), and delegate resolving the part of the name after the first / to the sub-directory. A directory can employ any arbitrary method at all of resolving the name components passed to it, so long as it returns a set of keys of objects as the result. In Reiser4, this set of keys always contains exactly one member, but this is designed to change in SSN Reiserfs. (Reiser4 also needs to interact with a standard interface for Unix filesystems called VFS (Virtual File System), and directories are also designed to be able to return what VFS understands, which we won't go into here.) Directories will also return a list of names when asked. This list is not required to be a complete list of all names that they can resolve, and sometimes it is not desirable that it be so. Names can be hidden names in Reiser4. Directory plugins may be able to resolve more names than they can list, especially if they are written such that the number of names that they can resolve is infinite. In partuclar, such names can resolve to the objects behaving like ordinary files (with respect to standard file system interface: read, write, readdir, etc.), but not backed up by storage layer. Such objects are called "pseudo files". Here is a list of pseudo files currently implemented in Reiser4 with description of their semantics. The unix directory plugin implements directories by storing a set of directory entries per directory. These directory entries contain a name, and a key. When given a name to resolve, the unix directory plugin finds the directory entry containing that name, and then returns the key that is in the directory entry (more precisely, since a key selects not just the file but a particular byte within a file, it returns that part of the key which is sufficient to select the file, and which is sufficient to allow the code to determine what the full keys for those various parts when the byte offset and some other fields (like item type) are added to the partial key to form a whole key). The key can then be used by the tree storage layer to find all the pieces of that which was named. Unix differs from Multics, in that Multics defined a file to be a sequence of elements (the elements could be bytes, directory entries, or something else....), while Unix defines a file to be purely a sequence of bytes. In Multics directories were then considered to be a particular type of file which was a sequence of directory entries. For many years, all implementations of Unix directories were as sequences of bytes, and the notion of location within a Unix directory is tied not to a name as you might expect, but to a byte offset within the directory. The problem is that one is using a byte offset to represent a location whose true meaning is not a byte offset but a directory entry, and doing so for a particular file in a system which meaningfully names that file not by byte offset within the directory but by filename. Various efforts are being made in the Unix community to pretend that this byte offset is something more general than a byte offset, and they often try to do so without increasing the size used to store the thing which they pretend is not a byte offset. Since byte offsets are normally smaller than filenames are allowed to be, the result is ugliness and pathetic kludges. Trust me that you would rather not know about the details of those kludges unless you absolutely have to, and let me say no more. Unix/Linux makes no promises regarding the order of names within directories. The order in which files are created is not necessarily the order in which names will be listed in a directory, and the use of lexicographic (alphabetic) order is surprisingly rare. The unix utilities typically sort directory listings after they are returned by the filesystem, which is why it seems like the filesystem sorts them, and is why listing very large directories can be slow. (Our current default plugin sorts filenames that are less than 15 letters long lexicographically. For those that are more than 15 characters long it sorts them first by their first 8 letters then by the hash of the whole name.) There is value to allowing the user to specify an arbitrary order for names using an arbitrary ordering function the user supplies. This is not done in Reiser4, but is planned as a feature of later versions. Allowing the creation of a hash plugin is a limited form of this that is currently implemented. In Reiser4 (but not ReiserFS 3) an object can be both a file and a directory at the same time. If you access it as a file, you obtain the named sequence of bytes. If you use it as a directory you can obtain files within it, directory listings, etc. There was a lengthy discussion on the Linux Kernel Mailing List about whether this was technically feasible to do. I won't reproduce it here except to summarize that Linus showed that this was feasible without "breaking" VFS. Allowing an object to be both a file and a directory is one of the features necessary to to compose the functionality present in streams and attributes using files and directories. To implement a regular unix file with all of its metadata, we use a file plugin for the body of the file, a directory plugin for finding file plugins for each of the metadata, and particular file plugins for each of the metadata. We use a unix_file file plugin to access the body of the file, and a unix_file_dir directory plugin to resolve the names of its metadata to particular file plugins for particular metadata. These particular file plugins for unix file metadata (owner, permissions, etc.) are implemented to allow the metadata normally used by unix files to be quite compactly stored. A file can exist but not be visible when using readdir in the usual way. WAFL does this with the .snapshots directory; it works well for them without disturbing users. This is useful for adding access to a variety of new features and their applications without disturbing the user when they are not relevant. To a theoretician it is extremely important to minimize the number of primitives with which one achieves the desired functionality in an abstract construction. It is a bit hard to explain why this is so, but it is well accepted that breaking an abstract model into more basic primitives is very important. A not very precise explanation of why is to say that by breaking complex primitives into their more basic primitives, then recombining those basic primitives differently, you can usually express new things that the original complex primitives did not express. Let's follow this grand tradition of theoreticians and see what happens if we apply it to Gnu/Linux files and directories. In Gnu/Linux we have files, directories, and attributes. In NTFS they also have streams. Since Samba is important to Gnu/Linux, there frequently are requests that we add streams to ReiserFS. There are also requests that we add more and more different kinds of attributes using more and more different APIs. Can we do everything that can be done with {files, directories, attributes, streams} using just {files, directories}? I say yes--if we make files and directories more powerful and flexible. I hope that by the end of reading this you will agree. Let us have two basic objects. A file is a sequence of bytes that has a name. A directory is a name space mapping names to a set of objects "within" the directory. We connect these directory name spaces such that one can use compound names whose subcomponents are separated by a delimiter '/'. What is missing from files and directories now that attributes and streams offer? In ReiserFS 3, there exist file attributes. File attributes are out-of-band data describing the sequence of bytes which is the file. For example, the permissions defining who can access a file, or the last modification time, are file attributes. File attributes have their own API; creating new file attributes creates new code complexity and compatibility issues galore. ACLs are one example of new file attributes users want. Since in Reiser4 files can also be directories, we can implement traditional file attributes as simply files. To access a file attribute, one need merely name the file, followed by a '/', followed by an attribute name. That is: a traditional file will be implemented to possess some of the features of a directory; it will contains files within the directory corresponding to file attributes which you can access by their names; and it will contain a file body which is what you access when you name the "directory" rather than the file. Unix currently has a variety of attributes that are distinct from files (ACLS, permissions, timestamps, other mostly security related attributes, ...). This is because a variety of people needed this feature and that, and there was no infrastructure that would allow implementing the features as fully orthogonal features that could be applied to any file. Reiser4 will create that infrastructure. Each of these additional features is a feature that would benefit the filesystem. So we add them in v4. One way of organizing information is to put it into trees. When we organize information in a computer, we typically sort it into piles (nodes we call them), and there is a name (a pointer) for each pile that the computer will be able to use to find the pile. Figure 1. One Example Of A Tree. Some of the nodes can contain pointers, and we can go looking through the nodes to find those pointers to (usually other) nodes. We are particularly interested in how to organize so that we can find things when we search for them. A tree is an organization structure that has some useful properties for that purpose. Figure 2. The simplest tree. Figure 3. A trivial, linear tree. It is interesting to argue over whether finite should be a part of the definition of trees. There are many ways of defining trees, and which is the best definition depends on what your purpose is. Donald Knuth (a well known author of algorithm textbooks) supplies several definitions of tree. As his primary definition of tree he even supplies one which has no pointers/edges/lines in the definition, just sets of nodes. Reiser4 uses a finite tree (the number of nodes is limited). Knuth defines trees as being finite sets of nodes. There are papers on infinite trees on the Internet. I think it more appropriate to consider finite an additional qualifier on trees, rather than bundling finite into the definition. However, I personally only deal with finite trees in my storage layer research. It is interesting to consider whether storage layers are inherently more motivated than semantic layers to limit themselves to finite trees rather than infinite trees. This is where some writers would say ".... is left as an exercise for the reader". :-) Oh the temptation.... I will remind the reader of my explanation of why storage layer trees are more motivated to be acyclic, and, at the cost of some effort at honesty, constrain myself to saying that doing more than providing that hint is beyond my level of industry.;-) Edge is a term often used in tree definitions. A pointer is unidirectional (you can follow it from the node that has it to the node it points to, but you cannot follow it back from the node it points to to the node that has it). An edge is bidirectional (you can follow it in both directions). Here are three alternative tree definitions, which are interesting in how they are mathematically equivalent to each other, though they are not equivalent to the definition I supplied because edges are not equivalent to pointers: For all three of these definitions, let there be not more than one edge connecting the same two nodes. The. Please feel encouraged to read Knuth's writings for more discussions of these for when efficiency with minimal complexity is what is desired, and there is no need to reach a node by more than one route. Reiser4 has both graphs and trees, with trees used for when the filesystem chooses the organization (in what we call the storage layer, which tries to be simple and efficient), and graphs for when the user chooses the organization (in the semantic layer, which tries to be expressive so that the user can do whatever he wants). We assign everything stored in the tree a key. We find things by their keys. Use of keys gives us additional flexibility in how we sort things, and if the keys are small, it gives us a compact means of specifying enough to find the thing. It also limits what information we can use for finding things. This limit restricts its usefulness, and so we have a storage layer, which finds things by keys, and a semantic layer, which has a rich naming system. The storage layer chooses keys for things solely to organize storage in a way that will improve performance, and the semantic layer understands names that have meaning to users. As you read, you might want to think about whether this is a useful separation that allows freedom in adding improvements that aid performance in the storage layer, while escaping paying a price for the side effects of those improvements on the flexible naming objectives of the semantic layer. We start our search at the root, because from the root we can reach every other node. what subtree of that node contains the thing we are looking for. Duplicate keys are a topic for another time. For now I will just hint that when searching through objects with duplicate keys we find the first of them in the tree, and then we search through all duplicates one-by-one until we find what we are looking for. Allowing duplicate keys can allow for smaller keys, so there is sometimes a tradeoff between key size and the average frequency of such inefficient linear searches. Using duplicate keys can also allow, if one defines one's insertion algorithms such that they always insert at the end of a set of duplicate keys, ordering objects with the same key by creation time. The contents of each node in the tree are sorted within the node. So, the entire tree is sorted by key, and for a given key we know just where to go to find at least one thing with that key. Leaves are nodes that have no children. Internal nodes are nodes that have children. Figure 4. A height = 4, fanout = 3, balanced tree. A search will start with the root node, the sole level 4 internal node, traverse 2 more internal nodes, and end with a leaf node which holds the data and has no children. A node that contains items is called a formatted node. If an object is large, and is not compressed and doesn't need to support efficient insertions (compressed objects are special because they need to be able to change their space usage when you write to their middles because the compression might not be equally efficient for the new data), then it can be more efficient to store it in nodes without any use of items at all. We do so by default for objects larger than 16k.. Extent pointers point to unfleaves. An extent is a sequence of contiguous in block number order unfleaves that belong to the same object. An extent pointer contains the starting block number of the extent, and a length. [diagram needed] Because the extent belongs to just one object, we can store just one key for the extent, and then we can calculate the key of any byte within that extent. If the extent is at least 2 blocks long, extent pointers are more compact than regular node pointers would be. Node Pointers are pointers to formatted nodes. We do not yet have a compressed version of node pointers, but they are probably soon to come. Notice how with extent pointers we don't have to store the delimiting key of each node pointed to, and with node pointers we need to. We will probably introduce key compression at the same time we add compressed node pointers. One would expect keys to compress well since they are sorted into ascending order. We expect our node and item plugin infrastructure will make such features easy to add at a later date. Twigs are parents of leaves. Extent Pointers exist only in twigs. This is a very controversial design decision I will discuss a bit later. Branches are internal nodes that are not twigs. You might think we would number the root level 1, but since the tree grows at the top, it turns out to be more useful to number as 1 the level with the leaves where object data is stored. The height of the tree will depend upon how many objects we have to store and what the fanout rate (average number of children) of the internal and twig nodes will be. For reasons of code simplicity, we find it easiest to implement Reiser4 such that it has a minimum height of 2, and the root is always an internal node. There is nothing deeper than judicial laziness to this: it simplifies the code to not deal with one node trees, and nobody cares about the waste of space. An example of a Reiser4 tree: Figure 5. This Reiser4 tree is a 4 level, balanced tree with a fanout of 3. In practice Reiser4 fanout is much higher and varies from node to node, but a 4 level tree diagram with 16 million leaf nodes won't fit easily onto my monitor so I drew something smaller....;-) We choose to make the nodes equal in size. This makes it much easier to allocate the unused space between nodes, because it will be some multiple of node sized, and there are no problems of space being free but not large enough to store a node. Also, disk drives have an interface that assumes equal size blocks, which they find convenient for their error-correction algorithms. If having the nodes be equal in size is not very important, perhaps due to the tree fitting into RAM, then using a class of algorithms called skip lists is worthy of consideration. Reiser4 nodes are usually equal to the size of a page, which if you use Gnu/Linux on an Intel CPU is currently 4096 (4k) bytes. There is no measured empirical reason to think this size is better than others, it is just the one that Gnu/Linux makes easiest and cleanest to program into the code, and we have been too busy to experiment with other sizes. If nodes are of equal size, how do we store large objects? We chop them into pieces. We call these pieces items. Items, then the space wasted is much larger than the file. It is not effective to store such typical database objects as addresses and phone numbers in separately named files in a conventional filesystem because it will waste more than 90% of the space in the blocks it stores them in.'s directly accessible address space. Due to some implementation details mmap() needs file data to be 4k aligned, and if the data is already 4k aligned, it makes mmap() much more efficient. In Reiser4 the current default is that files that are larger than 16k are 4k aligned. We don't yet have enough empirical data and experience to know whether 16k is the precise optimal default value for this cutoff point, but so far it seems to at least be a decent choice. Nodes in the tree are smaller than some of the objects they hold, and larger than some of the objects they hold, so how do we store them? One way is to pour them into items. An item is a data container that is contained entirely within a single node, and it allows us to manage space within nodes. For the default 4.0 node format, every item has a key, an offset to where in the node the item body starts, a length of the item body, and a pluginid that indicates what type of item it is. Items allow us to not have to round up to 4k the amount of space required to store an object. Reiser4 includes many different kinds of items designed to hold different kinds of information. We call a unit that which we must place as a whole into an item, without splitting it across multiple items. When traversing an item's contents it is often convenient to do so in units: An unformatted leaf node (unfleaf node), which is the only node without a Node_Header, has the trivial structure: The Structure of an Item Aformatted leaf nodehas the structure: A twig node has the structure: A branch node has the structure: Height Balanced Trees are trees such that each possible search path from root node to leaf node has exactly the same length (Length = number of nodes traversed from root node to leaf node). For instance the height of the tree in Figure 1 is four while the height of the left hand tree in Figure 1.3 is three and of the single node in Figure 2 is 1. The term balancing is used for several very distinct purposes in the balanced tree literature. Two of the most common are: to describe balancing the height, and to describe balancing the space usage within the nodes of the tree. These quite different definitions are unfortunately a classic source of confusion for readers of the literature. Most algorithms for accomplishing height balancing do so by only growing the tree at the top. Thus the tree never gets out of balance. Figure 6. This is an unbalanced tree. Three of the principle considerations in tree design are: The fanout rate n refers to how many nodes may be pointed to by each level's nodes. (see Figure 7) be able to store in the tree, the larger you have to the fields in the key that first distinguish the objects (the objectids ), and then select parts of the object (the offsets). This means your keys must be larger, which decreases fanout (unless you compress your keys, but that will wait for our next version....). Figure 7. Three 4 level, height balanced trees with fanouts n = 1, 2, and 3. The first graph is a four level tree with fanout n = 1. It has just four nodes, starts with the (red) root node, traverses the (burgundy) internal and (blue) twig nodes, and ends with the (green) leaf node which contains the data. The second tree, with 4 levels and fanout n = 2, starts with a root node, traverses 2 internal nodes, each of which points to two twig nodes (for a total of four twig nodes), and each of these points to 2 leaf nodes for a total of 8 leaf nodes. Lastly, a 4 level, fanout n = 3 tree is shown which has 1 root node, 3 internal nodes, 9 twig nodes, and 27 leaf nodes. It is possible to store not just pointers and keys in internal nodes, but also to store the objects those keys correspond to in the internal nodes. This is what the original B-tree algorithms did. Then B+trees were invented in which only pointers and keys are stored in internal nodes, and all of the objects are stored at the leaf level. Figure 8. Figure 9. Warning! I found from experience that most persons who don't first deeply understand why B+trees are better than B-Trees won't later understand explanations of the advantages of putting extents on the twig level rather than using BLOBs. The same principles that make B+Trees better than B-Trees, also make Reiser4 faster than using BLOBs like most databases do. So make sure this section fully digests before moving on to the next section, ok?;-) Fanout is increased when we put only pointers and keys in internal nodes, and don't dilute them with object data. Increased fanout increases our ability to cache all of the internal nodes because there are fewer internal nodes. Often persons respond to this by saying, "but B-trees cache objects, and caching objects is just as valuable". It is not, on average, is the answer. Of course, discussing averages makes the discussion much harder. We need to discuss some cache design principles for a while before we can get to this. Tying the caching of things whose usage does not strongly correlate is bad. Suppose: Then B according to the LRU algorithm, then this might be worthwhile. If there is no such strong correlation, then it is bad. But wait, you might say, you need things from B also, so it is good that some of them were cached. Yes, you need some random subset of B. The problem is that without a correlation existing, the things from B that you need are not especially likely to be those same things from B that were tied to the things from A that were needed. the hot dog vendors: if you can only eat the hot dog produced by the best movie displayer on a particular night that you want to watch a movie, and you aren't allowed to bring in hot dogs from outside the movie theater, is it a socially optimum system? Optimal for you? Tying the uncorrelated is a very common error in designing caches, but it is still not enough to describe why B+Trees are better. With internal nodes, we store more than one pointer per node. That means that pointers are not separately cached. You could well argue that pointers and the objects they point to are more strongly correlated than the different pointers. We need another cache design principle. If two types of things that are cached and accessed, in units that are aggregates, have different average temperatures, then segregating the two types into separate units helps caching. For balanced trees, these units of aggregates are nodes. This principle applies to the situation where it may be necessary to tie things into larger units for efficient access, and guides what things should be tied together. Suppose you have R bytes of RAM for cache, and D bytes of disk. Suppose that 80% of accesses are to the most recently used things which are stored in H (hotset) bytes of nodes. Reducing the size of H to where it is smaller than R is very important to performance. If you evenly disperse your frequently accessed data, then a larger cache is required and caching is less effective. Pointers to nodes tend to be frequently accessed relative to the number of bytes required to cache them. Consider that you have to use the pointers for all tree traversals that reach the nodes beneath them and they are smaller than the nodes they point to. Putting only node pointers and delimiting keys into internal nodes concentrates the pointers. Since pointers tend to be more frequently accessed per byte of their size than items storing file bodies, a high average temperature difference exists between pointers and object data. According to the caching principles described above, segregating these two types of things with different average temperatures, pointers and object data, increases the efficiency of caching. Now you might say, well, why not segregate by actual temperature instead of by type which only correlates with temperature? We do what we can easily and effectively code, with not just temperature segregation in consideration. There are tree designs which rearrange the tree so that objects which have a higher temperature are higher in the tree than pointers with a lower temperature. The difference in average temperature between object data and pointers to nodes is so high that I don't find such designs a compelling optimization, and they add complexity. I could be wrong. If one had no compelling semantic basis for aggregating objects near each other (this is true for some applications), and if one wanted to access objects by nodes rather than individually, it would be interesting to have a node repacker sort object data into nodes by temperature. You would need to have the repacker change the keys of the objects it sorts. Perhaps someone will have us implement that for some application someday for Reiser4. BLOBs, Binary Large OBjects, are a method of storing objects larger than a node by storing pointers to nodes containing the object. These pointers are commonly stored in what is called the leaf nodes (level 1, except that the BLOBs are then sort of a basement "level B" :-\ ) of a "B*" tree. Figure 10. A Binary Large OBject (BLOB) has been inserted with, in a leaf node, pointers to its blocks. This is what a ReiserFS V3 tree looks like. BLOBs are a significant unintentional definitional drift, albeit one accepted by the entire database community. This placement of pointers into nodes containing data is a performance problem for ReiserFS V3 which uses BLOBs (Never accept that "let's just try it my way and see and we can change it if it doesn't work" argument. It took years and a disk format change to get BLOBs out of ReiserFS, and performance suffered the whole time (if tails were turned on.)). Because the pointers to BLOBs are diluted by data, it makes caching all pointers to all nodes in RAM infeasible for typical file sets. Reiser4 returns to the classical definition of a height balanced tree in which the lengths of the paths to all leaf nodes are equal. It does not try to pretend that all of the nodes storing objects larger than a node are somehow not part of the tree even though the tree stores pointers to them. As a result, the amount of RAM required to store pointers to nodes is dramatically reduced. For typical configurations, RAM is large enough to hold all of the internal nodes. Figure 11. A Reiser4, 4 level, height balanced tree with fanout = 3 and the data that was stored in BLOBs now stored in extents in the level 1 leaf nodes and pointed to by extent pointers stored in the level 2 twig nodes.." (1993, Transaction Processing: concepts and techniques, Morgan Kaufman Publishers, San Francisco, CA, p.834 ...) My problem with this explanation of why the height balanced approach is effective is that it does not convey that you can get away with having a moderately unbalanced tree provided that you do not significantly increase the total number of internal nodes. In practice, most trees that are unbalanced do have significantly more internal nodes. In practice, most moderately unbalanced trees have a moderate increase in the cost of in-memory tree traversals, and an immoderate increase in the amount of IO due to the increased number of internal nodes. But if one were to put all the BLOBs together in the same location in the tree, since the amount of internal nodes would not significantly increase, the performance penalty for having them on a lower level of the tree than all other leaf nodes would not be a significant additional IO. It might be undesirable to segregate objects by their size rather than just their semantics though. Perhaps someday someone will try it and see what results. Balanced trees have traditionally employed fixed criterion for determining whether nodes should be squeezed together into fewer nodes so as to save space. This criterion is traditionally satisfied at the end of every modification to the tree. A typical such criterion is to guarantee that after each modification to the tree the modified node cannot be squeezed together with its left and right neighbor into two or fewer nodes. ReiserFS V3 uses that criterion for its leaf nodes. The more neighboring nodes you consider for squeezing into one fewer nodes, the more memory bandwidth you consume on average per modification to the tree, and the more likely you are to need to read those nodes because they are not in memory. It is a typical pattern in memory management algorithm design that the more tightly packed memory is kept, the more overhead is added to the cost of changing what is stored where in it. This overhead can be significant enough that some commercial databases actually only delete nodes when they are completely empty, and they feel that in practice this works well. Trees that adhere to fixed space usage balancing criteria can have many things rigorously proven about their worst case performance in publishable papers. This is different from their being optimal. An algorithm can have worse bounds on its theoretical worst case performance and be a better algorithm. Just because one cannot rigorously define average usage patterns does not mean they are the slightest bit less important. Sorry mere mortal mathematicians, that is life. Maybe some might prefer to think about the questions that they can define and answer rigorously, but this does not in the slightest make them the right questions. Yes, I am a chaotic.... In Reiser4 we employ not balanced trees, but dancing trees. Dancing trees merge insufficiently full nodes, not with every modification to the tree, but instead: Let a slum be defined as a sequence of contiguous in the tree order, and dirty in this transaction, nodes. (In simpler words, a bunch of dirty nodes that are right next to each other.) A dancing tree responds to memory pressure by squeezing and flushing slums. It is possible that merely squeezing a slum might free up enough space that flushing is unnecessary, but the current implementation of Reiser4 always flushes the slums it squeezes. This is not necessarily the right approach, but we found it simpler and good enough for now. Another simplification we choose to engage in for now is that instead of trying to estimate whether squeezing a slum will save space before squeezing it, we just squeeze it and see. Balanced trees have an inherent tradeoff between balancing cost and space efficiency. If they consider more neighboring nodes, for the purpose of merging them to save a node, with every change to the tree, then they can pack the tree more tightly at the cost of moving more data with every change to the tree. By contrast, with a dancing tree, you simply take a large slum, shove everything in it as far to the left as it will go, and then free all the nodes in the slum that are left with nothing remaining in them, at the time of committing the slum's contents to disk in response to memory pressure. This gives you extreme space efficiency when slums are large, at a cost in data movement that is lower than it would be with an invariant balancing criterion because it is done less often. By compressing at the time one flushes to disk, one compresses less often, and that means one can afford to do it more thoroughly. By compressing dirty nodes that are in memory, one avoids performing additional I/O as a result of balancing. ReiserFS V3 assigns block numbers to nodes as it creates them. XFS is smarter, they wait until the last moment just before writing nodes to disk. I'd like to thank the XFS team for making an effort to ensure that I understood the merits of their approach. The easy way to see its merits is to consider a file that is deleted before it reaches disk. Such a file should have no effect on the disk layout. When a computer crashes there is data in RAM which has not reached disk that is lost. You might at first be tempted to think that we want to then keep all of the data that did reach disk. Suppose that you were performing a transfer of $10 from bank account A to bank account B, and this consisted of two operations 1) debit $10 from A, and 2) credit $10 to B. Suppose that 1) but not 2) reached disk and the computer crashed. It would be better to disregard 1) than to let 1) but not 2) take effect, yes? When there is a set of operations which we will ensure will all take effect, or none take effect, we call the set as a whole an atom. Reiser4 implements all of its filesystem system calls (requests to the kernel to do something are called system calls ) as fully atomic operations, and allows one to define new atomic operations using its plugin infrastructure. Why don't all filesystems do this? Performance. Reiser4 possesses employs new algorithms that allow it to make these operations atomic at little additional cost where other filesystems have paid a heavy, usually prohibitive, price to do that. We hope to share with you how that is done. Originally filesystems had filesystem checkers that would run after every crash. The problem with that was that 1) the checkers can not handle every form of damage well, and 2) the checkers run for a long time. The amount of data stored on hard drives increased faster than the transfer rate (the rate at which hard drives transfer their data from the platter spinning inside them into the computer's RAM when they are asked to do one large continuous read, or the rate in the other direction for writes), which means that the checkers took longer to run, and as the decades ticked by it became less and less reasonable for a mission critical server to wait for the checker. A solution to this was adopted of first writing each atomic operation to a location on disk called the journal or log, and then, only after each atom had fully reached the journal, writing it to the committed area of the filesystem. The problem with this is that twice as much data needs to be written. On the one hand, if the workload is dominated by seeks, this is not as much of a burden as one might think. On the other hand, for writes of large files, it halves performance because such writes are usually transfer time dominated. For this reason, meta-data journaling came to dominate general purpose usage. With meta-data journaling, the filesystem guarantees that all of its operations on its meta-data will be done atomically. If a file is being written to, the data in that file being written may be corrupted as a result of non-atomic data operations, but the filesystem's internals will all be consistent. The performance advantage was substantial. V3 of reiserfs offers both meta-data and data journaling, and defaults to meta-data journaling because that is the right solution for most users. Oddly enough, meta-data journaling is much more work to implement because it requires being precise about what needs to be journaled. As is so often the case in programming, doing less work requires more code. With fixed location data journaling, the overhead of making each operation atomic is too high for it to be appropriate for average applications that don't especially need it --- because of the cost of writing twice. Applications that do need atomicity are written to use fsync and rename to accomplish atomicity, and these tools are simply terrible for that job. Terrible in performance, and terrible in the ugliness they add to the coding of applications. Stuffing a transaction into a single file just because you need the transaction to be atomic is hardly what one would call flexible semantics. Also, data journaling, with all its performance cost, still does not necessarily guarantee that every system call is fully atomic, much less that one can construct sets of operations that are fully atomic. It usually merely guarantees that the files will not contain random garbage, however many blocks of them happen to get written, and however much the application might view the result as inconsistent data. I hope you understand that we are trying to set a new expectation here for how secure a filesystem should keep your data, when we provide these atomicity guarantees. One way to avoid having to write the data twice is to change one's definition of where the log area and the committed area are, instead of moving the data from the log to the committed area. There is an annoying complication to this though, in that there are probably a number of pointers to the data from the rest of the filesystem, and we need for them to point to the new data. When the commit occurs, we need to write those pointers so that they point to the data we are committing. Fortunately, these pointers tend to be highly concentrated as a result of our tree design. But wait, if we are going to update those pointers, then we want to commit those pointers atomically also, which we could do if we write them to another location and update the pointers to them, and.... up the tree the changes ripple. When we get to the top of the tree, since disk drives write sectors atomically, the block number of the top can be written atomically into the superblock by the disk thereby committing everything the new top points to. This is indeed the way WAFL, the Write Anywhere File Layout filesystem invented by Dave Hitz at Network Appliance, works. It always ripples changes all the way to the top, and indeed that works rather well in practice, and most of their users are quite happy with its performance. Suppose that a file is currently well laid out, and you write to a single block in the middle of it, and you then expect to do many reads of the file. That is an extreme case illustrating that sometimes it is worth writing twice so that a block can keep its current location while committing atomically. If one writes a node twice in this way, one also does not need to update its parent and ripple all the way to the top of the tree. Our code is a toolkit that can be used to implement different layout policies, and one of the available choices is whether to write over a block in its current place, or to relocate it to somewhere else. I don't think there is one right answer for all usage patterns. If a block is adjacent to many other dirty blocks in the tree, then this decreases the significance of the cost to read performance of relocating it and its neighbors. If one knows that a repacker will run once a week (a repacker is expected for V4.1, and is (a bit oddly) absent from WAFL), this also decreases the cost of relocation. After a few years of experimentation, measurement, and user feedback, we will say more about our experiences in constructing user selectable policies. Do we pay a performance penalty for making Reiser4 atomic? Yes, we do. Is it an acceptable penalty? We picked up a lot more performance from other improvements in Reiser4 than we lost to atomicity, and so it is not isolated in our measurements, but I am unscientifically confident that the answer is yes. If changes are either large or batched together with enough other changes to become large, the performance penalty is low and drowned out by other performance improvements. Scattered small changes threaten us with read performance losses compared to overwriting in place and taking our chances with the data's consistency if there is a crash, but use of a repacker will mostly alleviate this scenario. I have to say that in my heart I don't have any serious doubts that for the general purpose user the increase in data security is worthwhile. The users though will have the final say. A transaction preserves the previous contents of all modified blocks in their original location on disk until the transaction commits, and commit means the transaction has hereby reached a state where it will be completed even if there is a crash. The dirty blocks of an atom (which were captured and subsequently modified) are divided into two sets, relocate and overwrite, each of which is preserved in a different manner. The relocatable set is the set of blocks that have a dirty parent in the atom. The relocate set is those members of the relocatable set that will be written to a new or first location rather than overwritten. The overwrite set contains all dirty blocks in the atom that need to be written to their original locations, which is all those not in the relocate set. In practice this is those which do not have a parent we want to dirty, plus also those for which overwrite is the better layout policy despite the write twice cost. Note that the superblock is the parent of the root node and the free space bitmap blocks have no parent. By these definitions, the superblock and modified bitmap blocks are always part of the overwrite set. The wandered set is the set of blocks that the overwrite set will be written to temporarily until the overwrite set commits. An interesting definition is the minimum overwrite set, which uses the same definitions as above with the following modification. If at least two dirty blocks have a common parent that is clean then its parent is added to the minimum overwrite set. The parent's dirty children are removed from the overwrite set and placed in the relocate set. This policy is an example of what will be experimented with in later versions of Reiser4 using the layout toolkit. For space reasons, we leave out the full details on exactly when we relocate vs. overwrite, and the reader should not regret this because years of experimenting is probably ahead before we can speak with the authority necessary for a published paper on the effects of the many details and variations possible. When we commit we write a wander list which consists of a mapping of the wander set to the overwrite set. The wander list is a linked list of blocks containing pairs of block numbers. The last act of committing a transaction is to update the super block to point to the front of that list. Once that is done, if there is a crash, the crash recovery will go through that list and "play" it, which means to write the wandered set over the overwrite set. If there is not a crash, we will also play it. There are many more details of how we handle the deallocation of wandered blocks, the handling of bitmap blocks, and so forth. You are encouraged to read the comments at the top of our source code files (e.g. wander.c) for such details.... Suppose one wants to capture a node which belongs to an atom with stage >= ASTAGE_PRE_COMMIT. This capture request should wait (sleep in capture_fuse_wait()) when atom is committed. The copy-on-capture optimization allows to satisfy capture request by creating a copy of a node which is being captured. The commit process takes control on one copy of the node, the capturing process takes control over another copy. It does not lead to any node versions confilicts because it is guaranted that one copy below the commit process will not be modified. The idea of steal-on-capture optimization is that only the last committed transaction to modify an overwrite block actually needs to write that block. Other transactions can skip post-commit that block. This optimization, which is also present in ReiserFS version 3, means that frequently modified overwrite blocks will be written less than two times per transaction. With this optimization a frequently modified overwrite block may avoid being overwritten by a series of atoms; as a result crash recovery must replay more atoms than without the optimization. If an atom has overwrite blocks stolen, the atom must be replayed during crash recovery until every stealing-atom commits. Another way of escaping from the balancing time vs. space efficiency tradeoff is to use a repacker. 80% of files on the disk remain unchanged for long periods of time. It is efficient to pack them perfectly, by using a repacker that runs much less often than every write to disk. This repacker goes through the entire tree ordering, from left to right and then from right to left, alternating each time it runs. When it goes from left to right in the tree ordering, it shoves everything as far to the left as it will go, and when it goes from right to left it shoves everything as far to the right as it will go. (Left means small in key or in block number:-) ). In the absence of FS activity the effect of this over time is to sort by tree order (defragment), and to pack with perfect efficiency. Reiser4.1 will modify the repacker to insert controlled "air holes", as it is well known that insertion efficiency is harmed by overly tight packing. I hypothesize that it is more efficient to periodically run a repacker that systematically repacks using large IOs than to perform lots of 1 block reads of neighboring nodes of the modification points so as to preserve a balancing invariant in the face of poorly localized modifications to the tree. Every file possesses a plugin id, and every directory possesses a plugin id. This plugin id will identify a set of methods. The set of methods will embody all of the different possible interactions with the file or directory that come from sources external to ReiserFS. It is a layer of indirection added between the external interface to ReiserFS, and the rest of ReiserFS. Each method will have a methodid. It will be usual to mix and match methods from other plugins when composing plugins. Reiser4 will implement a plugin for traditional directories. It will implement directory style access to file attributes as part of the plugin for regular files. Later we will describe why this is useful. Other directory plugins we will leave for later versions. There is no deep reason for this deferral. It is simply the randomness of what features attract sponsors and make it into a release specification; there are no sponsors at the moment for additional directory plugins. I have no doubt that they will appear later; new directory plugins will be too much fun to miss out on.:-) Directory is mapping from file name to file itself. This mapping is implemented through Reiser4 internal balanced tree. Unfortunately file names cannot be used as keys until keys of variable length are implemented, or unreasonable limitations on maximal file name length are imposed. To work around this file name is hashed and hash is used as key in a tree. No hash function is perfect and there always be hash collisions, that is, file names having the same value of a hash. Previous versions of reiserfs (3.5 and 3.6) used "generation counter" to overcome this problem: keys for file names having the same hash value were distinguished by having different generation counters. This allowed to amortize hash collisions at the cost of reducing number of bits used for hashing. This "generation counter" technique is actually some ad hoc form of support for non-unique keys. Keeping in mind that some form of this have to be implemented anyway, it seemed justifiable to implement more regular support for non-unique keys in Reiser4. Another reason for using hashes is that some (arguable brain-dead) interfaces require them: telldir(3), and seekdir(3). These functions presume that file system can issue 64 bit "cookies" that can be used to resume a readdir. Cookies are implemented in most filesystems as byte offsets within a directory (which means they cannot shrink directories), and in ReiserFS as hashes of file names plus a generation counter. Curiously enough, Single UNIX specification tags telldir(3), and seekdir(3) as "Extension", because "returning to a given point in a directory is quite difficult to describe formally, in spite of its intuitive appeal, when systems that use B-trees, hashing functions, or other similar mechanisms to order their directories are considered". We order directory entries in ReiserFS by their cookies. This costs us performance compared to ordering lexicographically. (But is immensely faster than the linear searching employed by most other Unix filesystems.) Depending on the hash and its match to the application usage pattern there may be more or less performance lossage. Hash plugins will probably remain until version 5 or so, when directory plugins and ordering function plugins will obsolete them. Directory entries will then be ordered by file names like they should be (and possibly stem compressed as well). Security plugins handle all security checks. They are normally invoked by file and directory plugins. Example of reading a file: The balancing code will be able to balance an item iff it has an item plugin implemented for it. The item plugin will implement each of the methods the balancing code needs (methods such as splitting items, estimating how large the split pieces will be, overwriting, appending to, cutting from, or inserting into the item, etc). In addition to all of the balancing operations, item plugins will also implement intra-item search plugins. V3 of ReiserFS understood the structure of the items it balanced. This made adding new types of items storing such new security attributes as other researchers might develop too expensive in coding time, greatly inhibiting the addition of them to ReiserFS. In writing Reiser4 we hoped that there would be a great proliferation in the types of security attributes in ReiserFS if we made it a matter requiring not a modification of the balancing code by our most experienced programmers, but the writing of an item handler. This is necessary if we are to achieve our goal of making the adding of each new security attribute an order of magnitude or more easier to perform than it is now. When assigning the key to an item, the key assignment plugin is invoked, and it has a key assignment method for each item type. A single key assignment plugin is defined for the whole FS at FS creation time. We know from experience that there is no "correct" key assignment policy; squid has very different needs from average user home directories. Yes, there could be value in varying it more flexibly than just at FS creation time, but we have to draw the line somewhere when deciding what goes into each release.... Every node layout has a search method for that layout, and every item that is searched through has a search method for that item. (When doing searches, we search through a node to find an item, and then search within the item for those items that contain multiple things to find.) If you want to add a new plugin, we think your having to ask the sysadmin to recompile the kernel with your new plugin added to it will be acceptable for version 4.0. We will initially code plugin-id lookup as an in-kernel fixed length array lookup, methodids as function pointers, and make no provision for post-compilation loading of plugins. Performance, and coding cost, motivates this. People often ask, as ReiserFS grows in features, how will we keep the design from being drowned under the weight of the added complexity and from reaching the point where it is difficult to work on the code? The infrastructure to support security attributes implemented as files also enables lots of features not necessarily security related. The plugins we are choosing to implement in v4.0 are all security related because of our funding source, but users will add other sorts of plugins just as they took DARPA's TCP/IP and used it for non-military computers. Only requiring that all features be implemented in the manner that maximizes code reuse will ReiserFS coding complexity down to where we can manage it over the long term. Most plugins will have only a very few of their features unique to them and the rest of the plugin will be reused code. What Namesys sees as its role as a DARPA contractor is not primarily supplying a suite of security plugins, though we are doing that, but creating an architectural (not just the license) enabling of lots of outside vendors to efficiently create lots of innovative security plugins that Namesys would never have imagined if working by itself. By far most casualties in wars have always been to civilians. In future information infrastructure attacks, who will take more damage, civilian or military installations? DARPA is funding us to make all Gnu/Linux computers throughout the world a little bit more resistant to attack. Suppose you have a large file with many components. A general principle of security is that good security requires precision of permissions. When security lacks precision, it increases the burden of being secure; the extent to which users adhere to security requirements in practice is a function of the burden of adhering to it. Many filesystems make it space usage ineffective to store small components as separate files for various reasons. Not being separate files means that they cannot have separate permissions. One of the reasons for using overly aggregated units of security is space efficiency. ReiserFS currently improves this by an order of magnitude over most of the existing alternative art. Space efficiency is the hardest of the reasons to eliminate; its elimination makes it that much more enticing to attempt to eliminate the other reasons. Applications sometimes want to operate on a collection of components as a single aggregated stream. (Note that commonly two different applications want to operate on data with different levels of aggregation; the infrastructure for solving this security issue will also solve that problem as well.) I am going to use the /etc/passwd file as an example, not because I think that other solutions won't solve its problems better, but because the implementation of it as a single flat file in the early Unixes is a wonderful illustrative example of poorly granularized security that the readers may share my personal experiences with. I hope they will be able to imagine that other data files less famous could have similar problems. Have you ever tried to figure out just exactly what part of your continually changing /etc/passwd file changed near the time of a break-in? Have you ever wished that you could have a modification time on each field in it? Have you ever wished the users could change part of it, such as the gecos field, themselves (setuid utilities have been written to allow this, but this is a pedagogical not a practical example), but not have the power to change it for other users? There were good reasons why /etc/passwd was first implemented as a single file with one single permission governing the entire file. If we can eliminate them one by one, the same techniques for making finer grained security effective will be of value to other highly secure data files. Consider the use of emacs on a collection of a thousand small 8-32 byte files like you might have if you deconstructed /etc/passwd into small files with separable acls for every field. It is more convenient in screen real estate, buffer management, and other user interface considerations, to operate on them as an aggregation all placed into a single buffer rather than as a thousand 8-32 byte buffers. Suppose we create a plugin that aggregates all of the files in a directory into a single stream. How does one handle writes to that aggregation that change the length of the components of that aggregation? Richard Stallman pointed out to me that if we separate the aggregated files with delimiters, then emacs need not be changed at all to acquire an effective interface for large numbers of small files accessed via an aggregation plugin. If /new_syntax_access_path/big_directory_of_small_files/.glued is a plugin that aggregates every file in big_directory_of_small_files with a delimiter separating every file within the aggregation, then one can simply type emacs /new_syntax_access_path/big_directory_of_small_files/.glued, and the filesystem has done all the work emacs needs to be effective at this. Not a line of emacs needs to be changed. One needs to be able to choose different delimiting syntax for different aggregation plugins so that one can, for say the passwd file, aggregate subdirectories into lines, and files within those subdirectories into colon separate fields within the line. XML would benefit from yet other delimiter construction rules. (We have been told by Philipp Guehring of LivingXML.NET that ReiserFS is higher performance than any database for storing XML, so this issue is not purely theoretical.) In summary, to be able to achieve precision in security we need to have inheritance with specifiable delimiters and we need whole file inheritance to support ACLs. We provide the infrastructure for your constructing plugins that implement arbitrary processing of writes to inheriting files, but we also supply one generic inheriting file plugin that intentionally uses delimiters very close to the sys_reiser4() syntax. We will document the syntax more fully when that code is working, for now syntax details are in the comments in the file invert.c in the source code. A new system call sys_reiser4() will be implemented to support applications that don't have to be fooled into thinking that they are using POSIX. Through this entry point a richer set of semantics will access the same files that are also accessible using POSIX calls. Reiser4() will not implement more than hierarchical names. A full set theoretic naming system as described on our future vision page will not be implemented before SSN Reiserfs is implemented (Distributed Reiserfs is our distributed filesystem, Semi-Structured Naming Reiserfs is our enhanced semantics, whether we implement Didtrubuted Reiserfs or SSN Reiserfs first depends on which sponsors we find ;-) ). Reiser4() will implement all features necessary to access ACLs as files/directories rather than as something neither file nor directory. These include opening and closing transactions, performing a sequence of I/Os in one system call, and accessing files without use of file descriptors (necessary for efficient small I/O). SSN Reiserfs will use a syntax suitable for evolving into SSN Reiserfs syntax with its set theoretic naming. Security related attributes tend to be small. The traditional filesystem API for reading and writing files has these flaws in the context of accessing security attributes: The usual response to these flaws is that people adding security related and other attributes create a set of methods unique to their attributes, plus non-reusable code to implement those methods in which their particular attributes are accessed and stored not using the methods for files, but using their particular methods for that attribute. Their particular API for that attribute typically does a one-off instantiation of a lightweight single system call write constrained atomic access with no code being reusable by those who want to modify file bodies. It is basic and crucial to system design to decompose desired functionality into reusable, orthogonal separated components. Persons designing security attributes are typically doing it without the filesystem that they want offering them a proper foundation and tool kit. They need more help from us core FS developers. Linus said that we can have a system call to use as our experimental plaything in this. With what I have in mind for the API, one rather flexible system call is all we want for creating atomic lightweight batched constrained accesses to files, with each of those adjectives to accesses being an orthogonal optional feature that may or may not be invoked in a particular instance of the new system call. Looking at the coin from the other side, we want to make it an order of magnitude less work to add features to ReiserFS so that both users and Namesys can add at least an order of magnitude more of them. To verify that it is truly more extensible you have to do some extending, and our DARPA funding motivates us to instantiate most of those extensions as new security features. This system call's syntax enables attributes to be implemented as a particular type of file. It avoids uglifying the semantics with two APIs for two supposedly different kinds of objects that don't truly need different treatment. All of its special features that are useful for accessing particular attributes are all also available for use on files. It has symmetry, and its features have been fully orthogonalized. There is nothing particularly interesting about this system call to a languages specialist (It's ideas were explored decades ago except by filesystem developers.) until SSN Reiserfs, when we will further evolve it into a set theoretic syntax that deconstructs tuple structured names into hierarchy and vicinity set intersection. That is described at You can create a new security attribute by: The reiser4() system call (still being debugged at the time of writing) executes a sequence of commands separated by commas. Assignment, and transaction, are the commands supported in Reiser4(); more commands will appear in SSN Reiserfs. <- and <<- are two of the assignment operators. lhs(assignment target) values: assigns (writes) to the buffer starting at address offset in the process address space, ending at last_byte. (The assignment source may be smaller or larger than the assignment target.) Representation of offset and last_byte is left to the coder to determine. It is an issue that will be of much dispute and little importance. Notice / is used to indicate that the order of the operands matters; see the future vision whitepaper for details of why this is appropriate syntax design. Note the lack of a file descriptor. assigns to the file named filename. writes to the body, starting at offset, ending not past last_byte writes to the body starting at ofset rhs (assignment source) values: reads from the buffer starting at address offset in the process address space, ending at last_byte. Representation of offset, last_byte is left to the coder to determine, as it is an issue that will be of much dispute and little importance. reads the entirety of the file named filename. reads from the body, starting at first_byte, ending not past last_byte reads from the body starting at offset until the end reads from the ownership field of the stat data (stat data is that which is returned by the stat() system call (owner, permissions, etc.) and stored on a per file basis by the FS.) Note that "...." and "process" are style conventions for the name of a hidden subdirectory implementing methods and accessing metadata supported by a plugin. It is possible to rename it, etc. We had a discussion about whether to instead use names that could not clash with any legitimate name likely to be used by users. Vladimir Demidov suggested that cryptic names historically have harmed the acceptance of several languages, and so it was realized that being novice unfriendly in the naming was worse than risking a name collision, especially since it could be cured by using rename on "...." and "process" for the few cases where it is necessary. (Note: this is not yet coded.) Another way security may be insufficiently fine grained is in values: it can be useful to allow persons to change data but only within certain constraints. For this project we will implement plugins; one type of plugin will be write constraints. Write-constraints are invoked upon write to a file; if they return non-error then the write is allowed. We will implement two trivial sample write-constraint plugins. One will be in the form of a kernel function loadable as a kernel module which returns non-error (thus allowing the write) if the file consists of the strings "secret" or "sensitive" but not "top-secret". The other, which does exactly the same, will be in the form of a perl program residing in a file and executed in user-space. Use of kernel functions will have performance advantages, particularly for small functions, but severe disadvantages in power of scripting, flexibility, and ability to be installed by non-secure sources. Both types of plugins will have their place. Note that ACLs will also embody write constraints. We will implement both constraints that are compiled into the kernel, and constraints that are implemented as user space processes. Specifically, we will implement a plugin that executes an arbitrary constraint contained in an arbitary named file as a user space process, passes the proposed new file contents to that process as standard input, and iff the process exits without error allows the write to occur. It can be useful to have read constraints as well as write constraints. (Note: this is not yet coded.) We will implement a plugin that notifies administrators by email when access is made to files, e.g. read access. With each plugin implemented creating additional plugins becomes easier as the available toolkit is enriched. Auditing constitutes a major additional security feature, yet it will be easy to implement once the infrastructure to support it exists. (It would be substantial work to implement it without that infrastructure.) The scope of this project is not the creation of plugins themselves, but the creation of the infrastructure that plugin authors would find useful. We want to enable future contributors to implement more secure systems on the Gnu/Linux platform, not implement them ourselves. By laying a proper foundation and creating a toolkit for them, we hope to reduce the cost of coding new security attributes for those who follow us by an order of magnitude. Employing a proper set of well orthogonalized primitives also changes the addition of these attributes from being a complexity burden upon the architecture into being an empowering extension of the architecture. (This feature is not yet coded.) Inheritance of security attributes is important to providing flexibility in their administration. We have spoken about making security more fine grained, but sometimes it needs to be larger grained. Sometimes a large number of files are logically one unit in regards to their security and it is desirable to have a single point of control over their security. Inheritance of attributes is the mechanism for implementing that. Security administrators should have the power to choose whatever units of security they desire without having to distort them to make them correspond to semantic units. Inheritance of file bodies using aggregation plugins allows the units of security to be smaller than files; inheritance of attributes allows them to be larger than files. Currently, encrypted files suffer severely in their write performance when implemented using schemes that encrypt at every write() rather than at every commit to disk. We encrypt on flush such that a file with an encryption plugin id is encrypted not at the time of write, but at the time of flush to disk. Encryption is implemented as a special form of repacking on flush, and it occurs for any node which has its CONTAINS_ENCRYPTED_DATA state flag set on it. Reiser4 offers a dramatically better infrastructure for creating new filesystem features. Files and directories have all of the features needed to make it not necessary to have file attributes be something different from files. The effectiveness of this new infrastructure is tested using a variety of new security features. Performance is greatly improved by the use of dancing trees, wandering logs, allocate on flush, a repacker, and encryption on commit. It was an important question whether we could increase the level of abstraction in our design without harming performance. Reiser4 gives you BOTH the most cleanly abstracted storage AND the highest performance storage of any filesystem. [Gray93] Jim Gray and Andreas Reuter. "Transaction Processing: Concepts and Techniques". Morgan Kaufmann Publishers, Inc., 1993. Old but good textbook on transactions. Available at [Hitz94] D. Hitz, J. Lau and M. Malcolm. "File system design for an NFS file server appliance". Proceedings of the 1994 USENIX Winter Technical Conference, pp. 235-246, San Francisco, CA, January 1994 Available at [TR3001] D. Hitz. "A Storage Networking Appliance". Tech. Rep TR3001, Network Appliance, Inc., 1995 Available at [TR3002] D. Hitz, J. Lau and M. Malcolm. "File system design for an NFS file server appliance". Tech. Rep. TR3002, Network Appliance, Inc., 1995 Available at [Ousterh89] J. Ousterhout and F. Douglis. "Beating the I/O Bottleneck: A Case for Log-Structured File Systems". ACM Operating System Reviews, Vol. 23, No. 1, pp.11-28, January 1989 Available at [Seltzer95] M. Seltzer, K. Smith, H. Balakrishnan, J. Chang, S. McMains and V. Padmanabhan. "File System Logging versus Clustering: A Performance Comparison". Proceedings of the 1995 USENIX Technical Conference, pp. 249-264, New Orleans, LA, January 1995 Available at [Seltzer95Supp] M. Seltzer. "LFS and FFS Supplementary Information". 1995 [Ousterh93Crit] J. Ousterhout. "A Critique of Seltzer's 1993 USENIX Paper" [Ousterh95Crit] J. Ousterhout. "A Critique of Seltzer's LFS Measurements" [SwD96] A. Sweeny, D. Doucette, W. Hu, C. Anderson, M. Nishimoto and G. Peck. "Scalability in the XFS File System". Proceedings of the 1996 USENIX Technical Conference, pp. 1-14, San Diego, CA, January 1996 Available at [VelskiiLandis] G.M. Adel'son-Vel'skii and E.M. Landis, An algorithm for the organization of information, Soviet Math. Doklady 3, 1259-1262, 1972, This paper on AVL trees can be thought of as the founding paper of the field of storing data in trees. Those not conversant in Russian will want to read the [Lewis and Denenberg] treatment of AVL trees in its place. [Wood] contains a modern treatment of trees. [Apple] Inside Macintosh, Files, by Apple Computer Inc., Addison-Wesley, 1992. Employs balanced trees for filenames, it was an interesting filesystem architecture for its time in a number of ways, now its problems with internal fragmentation have become more severe as disk drives have grown larger. I look forward to the replacement they are working on. [Bach] Maurice J. Bach. "The Design of the Unix Operating System". 1986, Prentice-Hall Software Series, Englewood Cliffs, NJ, superbly written but sadly dated, contains detailed descriptions of the filesystem routines and interfaces in a manner especially useful for those trying to implement a Unix compatible filesystem. See [Vahalia]. [BLOB] R. Haskin, Raymond A. Lorie: On Extending the Functions of a Relational Database System. SIGMOD Conference (body of paper not on web) 1982: 207-212, Reiser4 obsoletes this approach. [Chen] Chen, P.M. Patterson, David A., A New Approach to I/O Performance Evaluation---Self-Scaling I/O Benchmarks, Predicted I/O Performance, 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, also available on Chen's web page. [C-FFS] Ganger, Gregory R., Kaashoek, M. Frans. "Embedded Inodes and Explicit Grouping: Exploiting Disk Bandwidth for Small Files." A very well written paper focused on 1-10k file size issues, they use some similar notions (most especially their concept of grouping compared to my packing localities). Note that they focus on the 1-10k file size range, and not the sub-1k range. The 1-10k range is the weakpoint in ReiserFS V3 performance. The page with link to postscript paper available at [ext2fs] by Remi Card extensive information, source code is available Probably our toughest current competitor, it is showing its age though, and recent enhancements of it (journaling, htrees, etc.) have not been performance effective. It embodies both the strengths and weaknesses of the incrementalist approach to coding, and substantially resembles the older FFS filesystem from BSD. [FFS] M. McKusick, W. Joy, S. Leffler, R. Fabry. "A Fast File System for UNIX". ACM Transactions on Computer Systems, Vol. 2, No. 3, pp. 181-197, August 1984 describes the implementation of a filesystem which employs parent directory location knowledge in determining file layout. It uses large blocks for all but the tail of files to improve I/O performance, and uses small blocks called fragments for the tails so as to reduce the cost due to internal fragmentation. Numerous other improvements are also made to what was once the state-of-the-art. FFS remains the architectural foundation for many current block allocation filesystems, and was later bundled with the standard Unix releases. Note that unrequested serialization and the use of fragments places it at a performance disadvantage to ext2fs, though whether ext2fs is thereby made less reliable is a matter of dispute that I take no position on (Reiser4 is an atomic filesystem, which is a different level of reliability entirely). Available at. [Ganger] Gregory R. Ganger, Yale N. Patt. "Metadata Update Performance in File Systems". ( Abstract only) [Gifford] Describes a filesystem enriched to have more than hierarchical semantics, he shares many goals with this author, forgive me for thinking his work worthwhile. If I had to suggest one improvement in a sentence, I would say his semantic algebra needs closure.(Postscript only). [Hitz, Dave] A rather well designed filesystem optimized for NFS and RAID in combination. Note that RAID increases the merits of write-optimization in block layout algorithms. Available at [Holton and Das] Holton, Mike., Das, Raj. "The XFS space manager and namespace manager use sophisticated B-Tree indexing technology to represent file location information contained inside directory files and to represent the structure of the files themselves (location of information in a file)". Note that it is still a block (extent) allocation based filesystem, no attempt is made to store the actual file contents in the tree. It is targeted at the needs of the other end of the file size usage spectrum from ReiserFS, and is an excellent design for that purpose (though most filesystems including Reiser4 do well at writing large files, and I think it is medium-sized and smaller files where filesystems can substantively differentiate themselves.) SGI has also traditionally been a leader in resisting the use of unrequested serialization of I/O. Unfortunately, the paper is a bit vague on details. Available at [Howard] Howard, J.H., Kazar, M.L., Menees, S.G., Nichols, D.A., Satayanarayanan, N., Sidebotham, R.N., West, M.J. "Scale and Performance in a Distributed File System". ACM Transactions on Computer Systems, 6(1), February 1988 A classic benchmark, it was too CPU bound to effectively stress ext2fs and ReiserFS, and is no longer very effective for modern filesystems. [Knuth] Knuth, D.E., The Art of Computer Programming, Vol. 3 (Sorting and Searching), Addison-Wesley, Reading, MA, 1973, the earliest reference discussing trees storing records of varying length. [LADDIS] Wittle, Mark., and Bruce, Keith. "LADDIS: The Next Generation in NFS File Server Benchmarking", Proceedings of the Summer 1993 USENIX Conference., July 1993, pp. 111-128 [Lewis and Denenberg] Lewis, Harry R., Denenberg, Larry. "Data Structures & Their Algorithms", HarperCollins Publishers, NY, NY, 1991, an algorithms textbook suitable for readers wishing to learn about balanced trees and their AVL predecessors. [McCreight] McCreight, E.M., Pagination of B*-trees with variable length records, Commun. ACM 20 (9), 670-674, 1977, describes algorithms for trees with variable length records. [McVoy and Kleiman] The implementation of write-clustering for Sun's UFS. Available at [OLE] "Inside OLE" by Kraig Brockshmidt, discusses Structured Storage, abstract only. Structured storage is what you get when application developers need features to better manage the storage of objects on disk by the applications they write, and the filesystem group at their company can't be bothered with them. Miserable performance, miserable semantics. Available at. [Ousterhout] J.K. Ousterhout, H. Da Costa, D. Harrison, J.A. Kunze, M.D. Kupfer, and J.G. Thompson. "A Trace-driven Analysis of the UNIX 4.2BSD File System". In Proceedings of the 10th Symposium on Operating Systems Principles, pages 15--24, Orcas Island, WA, December 1985. .... [Peacock] K. Peacock. "The CounterPoint Fast File System". Proceedings of the Usenix Conference Winter 1988 [Pike] Rob Pike and Peter Weinberger, The Hideous Name, USENIX Summer 1985 Conference Proceedings, pp. 563, Portland Oregon, 1985. Short, informal, and drives home why inconsistent naming schemes in an OS are detrimental. Available at. His discussion of naming in plan 9: [Rosenblum and Ousterhout] M. Rosenblum and J. Ousterhout. "The Design and Implementation of a Log-Structured File System". ACM Transactions on Computer Systems, Vol. 10, No. 1, pp. 26-52, February 1992. Available at. This paper was quite influential in a number of ways on many modern filesystems, and the notion of using a cleaner may be applied to a future release of ReiserFS. There is an interesting on-going debate over the relative merits of FFS vs. LFS architectures, and the interested reader may peruse and the arguments by Margo Seltzer it links to. [Snyder] "tmpfs: A Virtual Memory File System" discusses a filesystem built to use swap space and intended for temporary files, due to a complete lack of disk synchronization it offers extremely high performance. [Vahalia] Uresh Vahalia, "Unix Kernal Internals" [Reiser93] Reiser, Hans T., Future Vision Whitepaper, 1984, Revised 1993. Available at.
http://web.archive.org/web/20070306224406/http:/www.namesys.com/v4/v4.html
CC-MAIN-2014-52
en
refinedweb
Pub/Sub in the cloud–A brief comparison between Azure Service Bus and PubNub Publish/Subscribe in the cloud has became relatively important lately as an integration pattern for business to business scenarios between organizations. The major benefit of using a service hosted in the cloud as intermediary is that publishers and subscribers don’t need to be publicly addressable, be in the same network or be able to talk each other directly. The cloud infrastructure allows this intermediary service to scale correctly as the number of publishers or subscribers increase, and also to act as a firewall for brokering the communication (Publishers or subscribers need explicit permissions to connect, send or receive messages from the intermediary service). This pattern can be used in workflow systems to relay events among distributed computer applications, update data in business systems or as a way to move data between data stores. For example, in an order processing application, notifications must be sent whenever a transaction occurs; an order is placed in a system, the order details are forwarded as a message to a payment processor service for approval, and finally, an order confirmation message is sent back to the system where the order was originally created. This infrastructure typically supports the idea of “topics” or named logical channels. Subscribers will receive all the messages published to the topics to which they subscribe, and all subscribers to a topic will receive the same messages. I am going to discuss today two available solutions in the cloud, the “AppFabric Service Bus”, which is part of the Microsoft PaaS cloud strategy known as Azure and also a relatively new implementation “PubNub” hosted in the Amazon EC2 cloud infrastructure. AppFabric Service Bus The AppFabric Service Bus is a service running in Microsoft data centers. This service acts as broker for relaying messages through the cloud to services running on premises behind network obstacles of any kind, such as firewalls or NAT devices. The Service Bus secures all its endpoints by using the claim based security model provided by the Access Control service (Another service available as part of Azure AppFabric). You can find a lot of interesting features as part of the service bus such as federated authentication for listening or sending to the cloud, a naming mechanism for the endpoints in the cloud, a common messaging fabric with a great variety of communication options, or a discoverable service registry that any application trying to integrate with it can use. In the first release, the service bus originally provided a relay service for integrating on-premises applications with services running on the cloud. At that time, the integration with the relay service could be done in two ways, a message buffer in the cloud accessible through a REST API or using the traditional WCF programming model with special channels talking to the relay service on the cloud. By using the WCF programming model, the interaction with the relay binding was almost transparent for applications, as all the communications details were handled at channel level by WCF. This message buffer was a temporary store for the messages, so they disappeared after being consumed or when they expired. The AppFabric team recently announced the availability of a new feature for supporting durable messaging at the service bus level. Durable messaging in this context comes in two flavors, reliable message queuing and durable publish/subscribe messaging. The main difference between them is the number of parties that can consume a message published in the service bus. While a message is consumed by a single party when a queue is used, the publish/subscribe model relies on topics, which allows multiple parties to subscribe to the messages received in an specific topic (Every party receives a copy of the message basically). The pricing model for the service bus is currently based on the number of used connections. Every message sent to the service bus usually involves two connections, one connection for sending or publishing the message and another connection for receiving it (this might change for the model where you have multiple subscribers for a message). This thread in the MSDN forums discusses the model in details, and I have to admit it takes some time to digest. Advantages - The service bus supports a good isolation level based on service namespaces. A service namespace represents a level of isolation for a particular set endpoints, and you can associate multiple service namespaces to an Azure account. For example, you can have two different applications associated to your Azure account and each one them listening on a different service namespace address. - The great number of communication options you can find as part of the service bus. - The REST API, the .NET APIs and the WCF bindings makes the service bus really easy to use from any application. Disadvantages - The pricing model is to complex to understand and it is hard to predict. Microsoft does not currently offers a good monitoring option for determining the number of used connections or predicting costs before receiving the monthly bill. - The number of service namespaces that you can create in an specific Azure account is limited (I believe the number is 50 namespaces, and that number can be increased if you make an explicit request). This is still a big problem if you want to use the service bus to route messages to several machines listening on different namespaces, or support a multitenant schema in which a different namespace is assigned per tenant. - There is not an API for managing the service namespaces, which represents an inconvenient if you want to allocate service namespaces dynamically. PubNub PubNub is a relatively new push service hosted in the cloud. It’s currently hosted in the Amazon EC2 infrastructure, and provide a set of APIs for pushing or receiving messages in almost all the languages and platforms you can imagine. All those APIs are also available as open source in GitHub. While the main purpose of this service is to serve as a mechanism for pushing data to different devices (mobile devices, web browsers, etc) via Http, I can also a find a good use case of this service for pub/sub in the enterprise. PubNub pushes data to the different subscribers using a BOSH comet technology. The idea BOSH comet is to define a transport protocol that emulates the semantics of a long-lived, bidirectional TCP connection between a client and a server by efficiently using multiple synchronous HTTP request/response pairs without requiring the use of frequent polling or chunked responses. Subscribers must issue a API call to begin listening for messages on an specific channel (similar to a topic), automatically keeping the connection open until the application is closed. Every message sent by a client application to an specific channel will be forwarded to all the subscriber listening on that channel. One the main disadvantages probably is the maximum size for the message payload that you can send or receive, which is 1.8 Kb (This limit might be increased, or otherwise, you might need to implement a chunking channel on your end). Advantages - Extremely fast and easy to use. - The pricing model is very easy to understand, you pay for every message that you sent basically. This model scales well for a great number of clients and servers as well as the price you pay for every message is relatively cheap. - They offer an API for managing accounts, which is the mechanism they use for billing. - Client API available in a great number of technologies and languages. Disadvantages - The supported message payload size, which is by default, 1.8 kb. - They don’t have an exclusive isolation level like the service bus does. The only isolation level here is the account.
http://weblogs.asp.net/cibrax/pub-sub-in-the-cloud-a-brief-comparison-between-azure-service-bus-and-pubnub
CC-MAIN-2014-52
en
refinedweb
JNDI name is a user-friendly name for an object. These names are bound to their objects by the naming and directory service that is provided by a J2SE server. Because J2SE components access this service through the JNDI API, an object’s user-friendly name is its JNDI name. For instance, the JNDI name of the Oracle database can be jdbc/Oracle. When it starts up, Sun Java System Web Server reads information from configuration file and automatically adds JNDI database names to the name space.SE container implements the Web application component’s environment, and provides it to the application component instance as a JNDI naming context. The application component’s environment is used as follows: The Web do not have to change the name in the code. This flexibility also makes it easier for you to assemble J2SE applications from preexisting components. The following table lists recommended JNDI lookups and their associated references for the J2SE resources used by Sun Java System Web Server.Table 12–1 JNDI Lookups and Their Associated References naming support in Sun Java System Web Server is based primarily on J2SE 1.3, with a few added enhancements.When an application component creates the initial context by using InitialContext(), Sun Java System Web Server returns an object that serves as a handle to the Web application’s naming environment. This object in turn provides sub-contexts for the java:comp/env namespace. Each Web application gets its own namespace java:comp/env name space is allotted to each Web application and objects bound in one Web application’s namespace do not collide with objects bound in other Web applications. Web Server resource factories are specified within the <resources> </resources> tags in server.xml and have a JNDI name specified using the jndiname attribute (with the exception of jdbconnectionpool which does not have a jndiname). This attribute is used to register the factory in the server-wide namespace. Deployers can map user-specified, application-specific resource reference names (declared within resource-ref or resource-env-ref elements) to these server-wide resource factories using the resource-ref element in sun-web.xml. This enables deployment time decisions to be made with regards to which JDBC resources (and other resource factories) to use for a given application. A custom resource accesses a local JNDI repository and an external resource accesses an external JNDI repository. Both types of resources need user-specified factory class elements, JNDI name attributes, and so on. In this section, we will discuss how to create various J2SE resources, and how to access these resources. Creating Java-based Resources Modifying Java-based Resources
http://docs.oracle.com/cd/E19857-01/820-7651/bhano/index.html
CC-MAIN-2014-52
en
refinedweb
04 September 2013 20:08 [Source: ICIS news] TORONTO (ICIS)--Two planned west-to-east Canadian oil pipeline projects should benefit oil-based petrochemical producers in Quebec's ?xml:namespace> The projects would also improve margins for Quebec's two refineries which would be able to source oil priced on West Texas Intermediate (WTI), rather than Brent, David Podruzny, vice president of business and economics at trade group Chemistry Industry Association of Canada (CIAC), told ICIS. “Anytime you can get greater energy diversity and price competition you are going to see people say that this is a good move,” he said. Energy infrastructure firm Enbridge has proposed to reverse the flow of an existing oil pipeline to go from the Sarnia petrochemicals production hub in southern Ontario eastward to Montreal. Meanwhile, TransCanada plans to build a west-to-east oil pipeline that will partially be based on converting existing natural gas pipeline capacity to oil. Podruzny said that overall, "We have a very competitive gas-based petrochemicals industry in western Canada and in Sarnia which is competing against places like China, Japan and Europe, where producers are oil-based," he added. Podruzny also
http://www.icis.com/Articles/2013/09/04/9703243/canada-west-to-east-oil-pipeline-projects-should-help-montreal-chems.html
CC-MAIN-2014-52
en
refinedweb
By Chris Judson Over the years PeopleSoft has provided many different integration points to access data and execute business functionality. This is all made possible because PeopleSoft 8.x applications and above are built upon Components. Components represent real-world business objects and have keys that enable navigation to a specific instance of a business object. They are structured to encapsulate all the data, business logic, and functionality needed to perform a specific business function (i.e. Add/Update Names, Addresses, etc.). PeopleSoft has been able to leverage the encapsulation provided by the Component concept to provide many different ways to retrieve and update data while ensuring that all the necessary business logic is executed utilizing the data and security constraints defined in PeopleSoft. The Component Interface (CI) is one integration method created by PeopleSoft to allow access from internal and external applications to the underlying Components. An external system can invoke PeopleSoft Components over HTTP(S)/XML or it can invoke the Component Client using Java, COM or C/C++ bindings. The Component Client is a multi-threaded client that interacts with the application server to execute PeopleSoft business logic housed within a Component. PeopleSoft Application Messaging is another method of integration that allows PeopleSoft applications to notify external systems of the invocation of business events executed by Components. These business events are published as XML and delivered to subscribing systems. Application Messaging also supports the ability for a Component to invoke an external synchronous Web Service to obtain data before continuing with the process. There are two primary differences between integrating with Component Interfaces and Application Messaging: 1. The Component Interface executes all of the business logic that already exists in a Component. Application Messaging only executes the logic associated with the message event. If there is no delivered logic in the message event then it must be created. 2. Invoking a Component Interface always requires a valid PeopleSoft username and password with the correct permissions. The remainder of this discussion will focus on the functionality provided by Component Interfaces. The major integration functionality that was introduced in different versions of PeopleTools is shown in Table 1: PeopleTools Functionality. PeopleTools Version Functionality PeopleTools 8.0 Introduction of: Component Interfaces Application Engine Application Messaging PeopleTools 8.40 Introduction of Integration Broker PeopleTools 8.42 Create Web Services from any Component Interface PeopleTools 8.44 Create Web Services from any Application Message PeopleTools 8.46 Integration Broker certified interoperable with Oracle BPEL PM PeopleTools 8.47 PeopleTools certified interoperable with Oracle Fusion Middleware PeopleTools 8.48 PeopleTools optimized for Oracle Fusion Middleware Project Accounting Project ID, Project Name, Project Cost, Project Cost Distribution Lines, Periods, Tasks, Expenditures (Items, Groups, Types etc.) Table 1: PeopleTools Functionality The Component Interfaces are unique in that they provide real-time interfaces as if the external application were entering data through the PeopleSoft user interface. This ensures that whether the data is being entered by a user or through the CI that the exact same business logic, data integrity rules and security are used. In PeopleTools version 8.42 or greater, CIs can automatically generate Web Services (as well as Java, C/C++ and COM objects that have been supported since PeopleTools 8.22). This makes integrating with the CIs quick and easy since Oracle Fusion Middleware can easily interact with standards based Web Services. This architecture is shown in Figure 1: Integration with PeopleTools 8.42 and greater. Starting with PeopleTools version 8.0 CIs can be exposed only through Java, COM or C/C++ bindings. There are two ways to interface to CIs from within Oracle Fusion Middleware. These two options are outlined in Figure 2: Integrating with pre-PeopleTools 8.42. Figure2: Integrating with pre-PeopleTools 8.42 The Oracle Adapter for PeopleSoft can expose any CIs to Oracle Fusion Middleware. The adapter can be deployed within the SOAP Switch which exposes the CI as a Web service or can be integrated into the BPEL Process Manager through JCA. Either way makes it easy for BPEL PM or ESB to access a CI. The initial configuration of the PeopleSoft adapter is not very easy but once configured it makes integrating into a large number of CIs quick and easy. The major drawback that we had using this model was the error messages that were received back from the adapter were not descriptive. The second way to interface with CIs from within BPEL is to create your own Web service that utilizes the Java API that is generated from the Application Designer. This sounds like a lot of work but is actually pretty simple with a little knowledge of Java, the CI and a handy tool like JDeveloper that will create a Web service for you. It takes between 1-3 days to create a Web service wrapper for a CI based on how many levels the CI contains. When creating the custom Web service, special attention needs to be paid to the order in which fields are updated, because they need to be entered in a specific order otherwise the CI will not work correctly. Once the Web services are created they can be easily integrated into any BPEL or ESB process. Creating our own Web services wrappers around the CI allows us to simulate anything a user can do from pushing custom buttons to clicking on custom links. This is a large advantage over using the Oracle Adapter for PeopleSoft which simply provides the ability to get and save data. If you are starting off slow, and only integrating to a handful of interfaces that need tight integration with the CI, the custom Web services option might be the most cost effective, but if you plan on integrating with a lot of CIs (10+) it would probably be more cost effective in the long run to use the Oracle Adapter for PeopleSoft. The PeopleSoft Integration Broker shown in the above pictures is another method for getting and entering data into PeopleSoft, but that is a whole other discussion. Creating a Component Interface from a Component By Sunil Manaktala We have created an On-Boarding process to demonstrate how the functionality introduced in People Tools 8.48 makes it much easier and quicker to adopt SOA architecture, for your Enterprise applications. The process is initiated in PeopleSoft HR by adding an employee in the Workforce Administration module. Next, the BPEL process invokes the Person Data Component Interface and retrieves all the data you have entered for the new employee. This enables the process to use the retrieved data in subsequent steps of the process. Then the custom built Facilities Web service is invoked to generate a badge number for the new hire. Now an email notification is generated to the new hire, containing the work email address, badge number and employee ID. Then, the custom built Badge Component Interface is invoked to update the Badge Component with the newly generated badge number. Finally, the process logs into the PeopleSoft Finance system and creates a Purchase Order with a new laptop for the new employee. The Purchase Order references the Employee ID, for whom the laptop was ordered for. On-Boarding Process Flow: The points illustrated in our BPEL process are: · Initiate the process by simply adding a new employee. The process is launched after the employee data is entered and the page is saved. The user does not need to perform any additional action to initiate the process. · Expose an existing Component Interface (CI_PERSONAL_DATA) as a Web service and invoke it in the BPEL process. · Create a new Component Interface (BADGE_CI) using an existing component (BADGE), expose it as a web service and invoking it in the BPEL process. · Create a custom web service and invoking it in the BPEL process. · Generate an email notification containing the data values entered on the PS HCM employee entry page. (email address, badge number, employee ID) · Access two separate PeopleSoft systems in the same BPEL process, and transfer data from HCM to Financials. We create a Purchase Order in PeopleSoft Financials 9.0 and add the Employee ID to the Purchase Order to ease the reconciliation process. In PeopleSoft a Component Interface (CI) can be created from any Component. Once the CI is created it can be exposed as a Web service. We created a CI for the delivered Component (BADGE) in PeopleSoft HCM 9.0. The steps performed to create the CI are as follows: · In Application Designer, select: File > New · Select ‘Component Interface’: · Select the Component you wish to build from (BADGE), you will receive this message: · Choose ‘Yes’ The Component Interface has been created: · Click ‘Save ‘ and give the Component Interface a name. PeopleTools appends a 'CI__*' to the front of the Component Interface so I recommend appending 'CI' to the end of the component, like so (BADGE_CI). As a rule of thumb, the smaller the component the easier the Component Interface will be to create and work with. Congratulations! You have created a custom Component Interface! The next step is to validate the CI for consistency. This ensures the Meta data is correct and the definition matches what the database expects. This also ensures you will not run into any problems when invoking the CI as a web service. To validate the CI: · Select ‘Tools’ and ‘Validate for Consistency’ Look for results in the bottom window: The next step is to test the CI in Application Designer: · Select ‘Tools’ and ‘Test Component Interface’: The CI tester is launched, enter a key value you know exists and select ‘Get Existing’. BE SURE to check the GET history Items and EDIT history items check box. This will allow you to edit existing data: You are presented with the CI tester data entry screen. Enter valid values for updating your component. Be careful to enter correct values for fields that have prompt values and ensure you enter the correct data type in the respective field. For example if you enter a STRING value in a DATE field your CI will FAIL when you attempt to update the CI. After you enter your values you’re ready to call a method. Highlight the BADGE_CI top level record and right click. · Select ‘Save’ You will receive this message box, if you are presented with a ‘1’ your test was a success and your CI is functioning normally, if you receive a ‘0’ you will need to continue testing. Testing is complete. You have validated the CI and tested data entry. Now you need to do the entire online configuration to enable the serice and export the CI as a WSDL. · Navigate to: PeopleTools > Integration Broker>Web Services > Provide Web Service: Complete the next three steps of the wizard and you will have exported the CI as a Web service. WSDL: Now you open JDeveloper, select a partner link and provide the WSDL URL. You are now ready to work with this new CI in your BPEL process. Enjoy ! By Thor Nicolas Knowledge of PeopleSoft PeopleTools Familiarity with the Java language Familiarity with JDeveloper 11g Exposing PeopleSoft components as Web services has become easier with the release of PeopleTools 8.48; using just a few steps you can expose Web services that can be consumed by 3rd party applications or development tools. Here we will demonstrate how to expose a component as a Web service and invoke it using JDeveloper 11g. To be able to invoke a web service from JDeveloper we’ll need to import the WSDL to generate a proxy class; this proxy class will serve as a wrapper to the Web Service. Once generated, invoking the web service will be as simple as using the proxy class. Although we are using JDeveloper 11g in this exercise, previous versions such as 10.1.3, follow the same steps in generating the required proxy class, however, changes to the sample code will be needed. We’ll use the delivered User Profile component and component interface, which is used in all PeopleSoft applications. We’ll use these objects to generate the required service operations in creating the PeopleSoft Web service. We will then test the Web service by passing a PeopleSoft User ID (“PS”) in our Java application to retrieve the User Profile Description (“PeopleSoft Superuser”) and output it in the screen. 1) PeopleTools 8.48 2) JDeveloper 11g Tech Preview 3 1) Create a component interface 2) Expose the component interface as a service operation 3) Publish the service operation as a web service 4) Consume the web service (via the PS generated WSDL) in JDeveloper 11g 1) Select the PeopleSoft component to expose as a web service, for this exercise we will use the User Profile component. 2) Launch Application Designer and create a component Interface 3) Log back into PeopleSoft online 4) Make sure the Integration Broker Service Configuration is correct 5) Make sure the following Service Operations are active 6) Generate the Service Operations for the target component interface using the CI-Based Services wizard 7) Inspect the generated Service Operations 8) Create the Web Service for the generate Service Operations using the Provide Web Services wizard 9) Copy the WSDL URL for the generated Web Service (You will need this in JDeveloper) 10) Grant security to the Component Interface (select which methods will be available to users trying to access it using permission lists) 11) Launch JDeveloper 11g and create a new project 12) Add a new item: Business Tier > Web Services > Web Services Proxy to launch the Create Web Service Proxy wizard 13) Paste the WSDL URL that was copied earlier in Step #10 14) Name the package that will be generated and click Finish. This will generate the classes required by reading the WSDL (and the schemas referenced in it) 15) After the classes have been generated you will have a proxy client class (<CI_NAME>_PortClient.java) that you could use for testing. 16) Use the included code to test the User Profile component. Copy and Paste the code below the “//Add your own code here” 17) Import the necessary classes by clicking Alt-Enter at each of the missing classes. 18) Build and Run the test code You should have a successful result. import project1.proxy.types.com.oracle.xmlns.enterprise.tools.schemas .m158290.Get__CompIntfc__USER_PROFILEResponseTypeShape; import project1.proxy.types.com.oracle.xmlns.enterprise.tools.schemas .m158290.UserDescriptionTypeShape; import project1.proxy.types.com.oracle.xmlns.enterprise.tools.schemas .m879014.UserIDTypeShape; /* * Demo for JDeveloper 11g Tech Preview 3 to PS Web Service(Tools 8.48) * Thor Nicolas, 02-2008 Oracle Consulting */ // Pass the security token // This is the user trying to invoke the CI Web Service // Access controlled by permission list myPort.setUsername("PS"); myPort.setPassword("PS"); // Call the CI using the proxy classes UserIDTypeShape sUserID = new UserIDTypeShape(); // What USER ID are we querying sUserID.set_value("PS"); Get__CompIntfc__USER_PROFILEResponseTypeShape ciResp; ciResp = myPort.CI_USER_PROFILE_G(sUserID); // Retrieve the return values (e.g. Operator Description) UserDescriptionTypeShape sUserDescr = new UserDescriptionTypeShape(); sUserDescr = ciResp.getUserDescription(); // Display the description System.out.println("Operator Description: " + sUserDescr.get_value()) System.out.println("Finished"); /* End of custom code */ Consuming PeopleSoft WSRP Portlets with Style By Ronaldo Viscuso The ability to consume PeopleSoft pagelets from any Portal can be easily accomplished thanks to the support for the WSRP standard (Web Services for Remote Portlets) introduced in the Pagelet Wizard in PeopleTools 8.46. Version 8.48 extended this functionality to allow any compliant Content Reference to be exposed as a WSRP portlet, thus enabling virtually all of the PeopleSoft user interface to be consumed by other Portals. Even though the process of consuming a PeopleSoft pagelet through WSRP is very straightforward as shown in another tutorial, the resulting page lacks, in terms of visual identity. This happens because PeopleSoft uses its own style sheet classes to define the look and feel of its user interface, and the act of simply including a PeopleSoft pagelet on a Portal page does not take that fact into account. The example below shows an Oracle WebCenter page that includes a PeopleSoft portlet from a Content Reference (Personal information Summary). The page by default uses the Oracle BLAF style sheet, therefore the PeopleSoft-specific style classes are not defined resulting in a somewhat dull and confusing layout. The solution to this is very simple: the PeopleSoft style sheet classes must be implemented in the page and that can be done either by: 1)Inline adding the style sheet classes to the page OR 2)Defining a CSS file and referencing it from the page However, PeopleSoft uses over 300 different style sheet classes and custom defining each one of them would be extremely difficult if not impractical. That is precisely why PeopleTools 8.48 introduced the WSRP Style property, which allows the mapping and on-the-fly substitution of PeopleSoft’s default style sheet classes with a much smaller subset of WSRP style classes. This substitution is done automatically at portlet render time, so that the WSRP portlets provided by PeopleSoft will only contain WSRP styles. The image below shows a style sheet class definition in PeopleSoft Applications Designer. Note the new WSRP Style field: In PeopleTools 8.48, there are 38 WSRP style classes in use: PORTLET-FORM-BUTTON PORTLET-FORM-LABEL PORTLET-FORM-FIELD PORTLET-FORM-FIELD-LABEL PORTLET-FORM-FIELD-LABELDISABLED PORTLET-FORM-INPUT-FIELD PORTLET-MSG-ALERT PORTLET-MSG-SUCCESS PORTLET-MSG-ERROR PORTLET-MSG-STATUS PORTLET-MSG-INFO PORTLET-MENU PORTLET-MENU-CASCADE-ITEM-S PORTLET-MENU-DESCRIPTION PORTLET-MENU-ITEM-HOVER-S PORTLET-MENU-CASCADE-ITEM PORTLET-MENU-ITEM-HOVER PORTLET-MENU-CAPTION PORTLET-MENU-ITEM-SELECTED PORTLET-FONT PORTLET-FONT-DIM PORTLET-ICON-LABEL PORTLET-SECTION-HEADER PORTLET-SECTION-SUBHEADER PORTLET-SECTION-SELECTED PORTLET-SECTION-TEXT PORTLET-SECTION-BODY PORTLET-SECTION-ALTERNATE PORTLET-SECTION-FOOTER PORTLET-DLG-ICON-LABEL PORTLET-TABLE-HEADER PORTLET-TABLE-FOOTER PORTLET-TABLE-SELECTED PORTLET-TABLE-SUBHEADER PORTLET-TABLE-TEXT PORTLET-TABLE-BODY PORTLET-TABLE-ALTERNATE Therefore, getting the pagelet to display properly on WebCenter (or any other Portal tool) is just a matter of creating a CSS file containing definitions for all 38 style sheets classes and referencing it from within the page, like shown below (note that WebCenter pages use ADF Faces tags, therefore the afh namespace): <afh:head <meta http- <link type="text/css" rel="stylesheet" href="peoplesoft-wsrp.css"/> </afh:head> The file peoplesoft-wsrp.css would look something like this: . PORTLET-FORM-BUTTON{font-family:Arial,sans-serif; font-size:9pt;font-weight:normal; font-style:normal;color:rgb(0,0,0); background-color:rgb(252,252,181); cursor:Hand .PORTLET-SECTION-HEADER{border-width:0pt; border-color:rgb(231,0,0); border-style:solid;} .PORTLET-SECTION-SUBHEADER{font-family:Arial,sans-serif; font-size:11pt;font-weight:bold; font-style:normal;color:rgb(51,51,153); margin-top:1em;margin-bottom:0.3em;} .PORTLET-TABLE-HEADER{font-family:Arial,sans-serif; font-size:9pt;font-weight:bold; font-style:normal;color:rgb(255,255,255); background-color:rgb(51,51,153); text-indent:1px; border-width:thin;border-color:rgb(51,51,153); border-style:solid;} And so on, for all 38 style classes. The image below shows the same page as Figure 1, now with the style sheet definitions above. This allows us to get the portlet to have nearly the exact same look and feel as in PeopleSoft: The real power of style sheets, however, is shown when they are used so that the PeopleSoft portlet will match the style used by the page. By making a few changes to the style class definitions in the CSS file, we’re able to achieve a more homogeneous look and feel, as shown below: By Peter Lewandowski With the dynamic workforce today many organizations have programs or reports written by individuals who are no longer employed or engaged by the organization. When a modification needs to be made to one of these legacy reports, research is required to figure out how and where to make this change. Extensible Mark-up Language (XML) attempts to overcome this issue by creating output documents that are human-readable and reasonably clear. The technology provided by PeopleSoft/Oracle provides a solution to these real world issues and at times reduces the number modification needed to produce the desired results. XML was developed to describe data and is a simplified sub-set of the SGML standard and is a W3C-recommended markup language. Its purpose is to provide a mechanism to share data within a heterogeneous information technology world. A Document Type Definition (DTD), XML Schema, RELAX NG is utilized to describe the data. DTD is native to SGML and XML and is the most transparent. XML is human-readable, reasonably clear, and machine-readable. It is both a self-defining and a self-documenting format which describes data structures and field names. It provides for an open solution stored in plain text without license restrictions. The biggest advantage is that it is unencumbered by changes in technology and platforms. This allows developers working with a variety of applications to share XML formats and the tools for parsing those formats. Here is an example of how XML structures its data elements. Note that without knowing what the elements are, you can determine what data elements are in the file. This will allow any developer to access the file and make modification or develop new uses for the file rapidly and accurately. Syntax <name attribute=”value”>content</name> XML Example from PeopleSoft <?xml version="1.0" ?> <query numrows="10" queryname="address" xmlns:xsi="" xsi: <row rownumber="1"> <ADDRESS1>ADDRESS1 sample data</ADDRESS1> <ADDRESS2>ADDRESS2 sample data</ADDRESS2> <CITY>CITY sample data </CITY> <STATE>STATE sample data </STATE> </row> </query> XML Desktop solutions will work on PeopleTools 8.48.07e and Microsoft Word 2000 or newer, Oracle XML Template Builder 5.6 Build 45 and Oracle Publisher Enterprise Release 5.6.2. This white paper assumes the user is familiar with PeopleSoft, Microsoft Word, and Oracle Publisher Enterprise and can navigate the system without additional instructions. A report will be created for use in PeopleSoft taking advantage of functionality found within PeopleSoft application. Navigate to the Reporting Tools in PeopleSoft by drilling through the XML Publisher to the Data Source portal entry. Select data source of PS Query, this query needs to exist before you try to create an XML report for the query. Type a description for the Data Source (Check Printing); make this as descriptive as the field allows since this data element is searchable and the more descriptive the more readily you can identify the report. Next, select the Object Owner ID, this example utilizes PeopleTools. Generate a Sample Data File and download this to the desktop. This will be utilized in the XML Template Builder when creating the template. Generate a Schema File: note that while not used for this example, it is required by the application to function properly. Save the Data Source and open Microsoft Word. Figure 1 If a new document does not default open, a new one will need to be created. Navigate to the Template Builder menu which Oracle has installed with the Template Builder. Select Data, and Load the XML Data. In the document this will create references to the XML data that will be utilized to create a report once executed in PeopleSoft. At this point, a Layout is being created which may be updated independently of the data elements. View the document in HTML or whatever the final output desired. Do this by navigating to Tools Preview and selecting the final format. Once that is complete, add the data elements to the document by Navigating to the Template Builder and selecting Insert Field. Proceed to select the desired data elements to create your final output document. Once again, preview the document utilizing the Preview tool in the Template Builder menu. Save the document as an RTF file. Figure 2 Once the desired reporting format has been achieved the document is ready for upload into PeopleSoft. Navigate to Reporting Tools and drill down into the XML Publisher, Report Definition. Create a New Definition; create a Report Name and Data Source description that will assist in determining which Data Source ID ties back to the Report. The report does not have to be activated at this time; for this exercise it will be active. Place the report in the appropriate Report Category ID. For the exercise, select ALLUSER. The Object Owner ID will once again be PeopleTools and the template type is RTF. The template must be uploaded to the application. To do this select the Template tab and Upload the file making certain that the effective date is today or older or it will not show up in your search to run the report. Make the RTF Template file active as well. Save the Report Definition. Verify that the template uploads correctly by clicking the Preview button. Another browser window will open and the report Layout will display just as it did during the Preview testing in Microsoft Word. Figure 3 To run the Report and see actual output data, navigate to the Query Report Viewer. Select on the output format for the final report and select View Report. If there are Prompts in your query, the View Report will utilize those prompts and then output the results to a new window. The report now displays with the actual data output. The report may be submitted in batch mode. When submitting the report in batch mode, the report will have the same run options as when it was originally created. Figure 4 If PeopleSoft Row-Level security is enabled, the report will utilize the security based on Query Security. Modifications to the query will not impact the report unless a field has been removed or renamed, thus modifying the XML source. A modification to the report format does not impact the query and may even be made by someone that has no development background. A totally different report template utilizing the same query and XML source file can be created with no modification to the query or source file. Industries and governments around the world are adopting this new standard, such as Oracle via the XML Publisher, SAP, SUN, Microsoft, IBM as well as government agencies such as the IRS and the Department of Labor. Other organizations are also helping to provide guidelines for standards within the standard XML such as the HR-XML Consortium. Oracle XML Publisher in conjunction with the PeopleSoft provides a safe secure and flexible methodology for providing reports to the organization. It allows the customer to utilize the desktop tools they are already familiar with and have created to perform the task of form design. This means that a developer can concentrate on only those reports which would truly require a developer resource, and relieve them of the burden of creating report layouts in a development language. By Michael Rulf Oracle offers an excellent Oracle by Example (OBE) tutorial on how to integrate PeopleSoft with other systems using the PeopleSoft Integration Broker and the Enterprise Service Bus within Fusion Middleware. The next step is to apply these skills to your own integration scenario. To do so, you need to identify the integration point that PeopleSoft will invoke for the particular form or business process you are interested in. To aid you in your search for the right integration point within PeopleSoft, Oracle offers a great tool on the PeopleSoft Customer Connection support site called the Interactive Services Repository (ISR) which requires a valid support account. This repository provides detailed information on all of the various integration points in PeopleSoft. For example, say you want to create an employee record in Oracle EBusiness Suite whenever a new person is hired using PeopleSoft HR. After logging into the ISR, I want to do a search by integration point. Because there are thousands of integration points within PeopleSoft, I start off by narrowing my search to integration sets associated with person information. Based on the description field, I am interested in the PERSON_CONTRACT integration set since I want to synchronize personal information between PeopleSoft and EBusiness Suite. Upon returning to the Integration Point search screen and executing my search, a number of potential integration points are listed as available. After reviewing the integration points based on my integration needs and that my integration scenario uses Application Messaging for PeopleSoft HRMS version 8.9, I select “PERSON_BASIC_SYNC.Version_1 (Notification)”. The resulting Integration Point Detail page contains the information I need to implement my own integration scenario using the Oracle by Example (OBE) steps as a guide. In this scenario, I navigate to PeopleTools -> Integration Broker -> Integration Setup -> Service Operations and search for the service “PERSON_BASIC_SYNC”. This service needs to be activated in order for messages to be generated by PeopleSoft whenever a CRUD operation on person information occurs, sent to the ESB, and ultimately consumed by Oracle EBusiness Suite. Once I have completed the rest of the OBE steps for my integration scenario, PeopleSoft will begin generating and sending messages to the ESB. But how do I find out the format of these messages so I can develop any necessary transformations and/or business logic needed to map the PeopleSoft person information to the format required by EBusiness Suite? On that same Integration Points Details screen, the message associated with this integration point is identified as “PERSON_BASIC_FULLSYNC” and a link to the Message Schema is provided. Clicking that link takes you to the XSD schema for the associated message. You can cut the data from this page and save it as an XSD file within JDeveloper for use as a schema definition in your integration project. This negates the need for creating an additional inbound routing and associated WSDL for the “PERSON_BASIC_SYNC” service.
http://www.oracle.com/technetwork/topics/ofm-psft-blog-postings-094500.html
CC-MAIN-2014-52
en
refinedweb
Accounts with zero posts and zero activity during the last months will be deleted periodically to fight SPAM! Morten: The place where this project is maintained. Index: src/sdk/globals.cpp===================================================================--- src/sdk/globals.cpp (revision 9381)+++ src/sdk/globals.cpp (working copy)@@ -1020,8 +1020,8 @@ long flags = lc->GetWindowStyleFlag(); switch (style) {- case sisNoIcons: flags = (flags & ~wxLC_ICON) | wxLC_SMALL_ICON; break;- default: flags = (flags & ~wxLC_SMALL_ICON) | wxLC_ICON; break;+ case sisNoIcons: flags = (flags & ~wxLC_MASK_TYPE) | wxLC_LIST; break;+ default: flags = (flags & ~wxLC_MASK_TYPE) | wxLC_ICON; break; } lc->SetWindowStyleFlag(flags); #endif@@ -1032,7 +1032,7 @@ // this doesn't work under wxGTK... #ifdef __WXMSW__ long flags = lc->GetWindowStyleFlag();- if (flags & wxLC_SMALL_ICON)+ if (flags & wxLC_LIST) return sisNoIcons; #endif return sisLargeIcons;Index: src/src/environmentsettingsdlg.cpp===================================================================--- src/src/environmentsettingsdlg.cpp (revision 9381)+++ src/src/environmentsettingsdlg.cpp (working copy)@@ -107,8 +109,8 @@ wxXmlResource::Get()->LoadObject(this, parent, _T("dlgEnvironmentSettings"),_T("wxScrollingDialog")); int sel = cfg->ReadInt(_T("/environment/settings_size"), 0); wxListbook* lb = XRCCTRL(*this, "nbMain", wxListbook);+ LoadListbookImages(); SetSettingsIconsStyle(lb->GetListView(), (SettingsIconsStyle)sel);- LoadListbookImages(); Connect(XRCID("nbMain"),wxEVT_COMMAND_LISTBOOK_PAGE_CHANGING,wxListbookEventHandler(EnvironmentSettingsDlg::OnPageChanging)); Connect(XRCID("nbMain"),wxEVT_COMMAND_LISTBOOK_PAGE_CHANGED, wxListbookEventHandler(EnvironmentSettingsDlg::OnPageChanged )); I'm currently working on implementing the possibility to show/hide the settings icons with wxGTK also (after stumbling over the same issue), but it's not (yet) working as I want it (more or less minor issues on windows).The patch touches SetSettingsIconStyles (also) and environment-, editor- and compilerdialog. Committed to trunk. During startup program exited with code 1 Argh.... Running codeblocks-wx29 from within codeblocks makes debugging a simple console application with the second C::B impossible.The debugger doesn't stop on breakpoints and prints this annoying message:CodeDuring startup program exited with code 1Anyone experiencing such issues?Running codeblocks-wx28 works as expected... Probably this happens, because gnome-shell is running as single process and the instance started by C::B finishes right after it tells the main process what to do. I think there is an option to disable this behaviour. I have to note that I'm using relatively new version of wxGTK taken from git last couple of days (probably yesterday).
https://forums.codeblocks.org/index.php/topic,18278.30.html?PHPSESSID=8lkgrrhekvvi381sjdi0ae9r32
CC-MAIN-2021-17
en
refinedweb
0.1: – Basic Libary Support – Basic Playlist Support – Integrate the coordinate converter utility 0.2: – Initial gpsbabel support. Upload/Download – Expanded GPX support (geocache namespace support) 0.3: – Actual Syncronization Somewhere in 0.3 – 0.9: – “Sync to iPod” for Paperless caching – Nested Playlists. – Feedback system (e.g. type of unit being used) – Check for updates feature – Google Maps integration – Copy / Paste -> Parse pocket query emails – Getting Started Wizard/Assisant The. Long comments, URLs, and code tend to get flagged for spam moderation. No need to resubmit. ALL comments submitted with fake or throw-away services are deleted, regardless of content. Don't be a dweeb.
https://www.baldengineer.com/roadmap-to-v10.html
CC-MAIN-2021-17
en
refinedweb
Hello, I have just started learning Python. Please can you tell me if there is a way to read in a file one bit or byte at a time (i.e. in binary). Thank you! If you have Python three, then this is easy: with open(filename, "rb") as fh: b = True while b: b = fh.read(1) #...do something with b Thank you, however I'm using Python 2.6.1... is there a different way to do it using that? Maybe try this.. pretty much same principal as 3.0 : fh = open('myfile.txt', 'rb') while True: try: fh.read(1) except: pass Here's how to add items to a list: >>> bits_list = [] >>> bits_list.append('0') >>> bits_list.append('1') >>> bits_list.append('1') >>> bits_list ['0', '1', '1'] >>> Yes well, I forgot that read() does not raise a stop iteration when used in the manner that it is being used here... so the try-except case never gets thrown and we're stuck in an infinite loop. You can do something like, if bit == '': break Yeah, that's what I thought. But this is the code I have - it does not seem to work, just prints out a blank line in the shell then RESTART. And then the whole shell/IDLE seemed to crash when I closed them. The file I'm trying to read one bit at a time from is an encrypted jpeg. This is the code I have: encrypted = [] fh = open('encjpgfile', 'rb') while True: try: bit = fh.read(1) #print bit encrypted.append(bit) except: pass print encrypted sys.exit(0). I suggest this from binascii import hexlify L = ["00", "01", "10", "11"] LL = [u+v for u in L for v in L] s = "0123456789abcdef" D = dict((s[i], LL[i]) for i in range(16)) jpeg = open('encjpgfile', 'rb').read() bits = ''.join(D[x] for x in hexlify(jpeg)) print(bits) Try something along that line: # can be used for Python25/26 def int2bin(num, bits): """ returns the binary of integer num, using bits number of digits, will pad with leading zeroes """ bs = '' for x in range(0, bits): if 2**x == 2**x & num: bs = '1' + bs else: bs = '0' + bs return bs image_file = "red.jpg" # a testfile try: # read all data into a string data = open(image_file, 'rb').read() except IOError: print "Image file %s not found" % image_file raise SystemExit bit_list = [] # create a list padded bits for ch in data: # take the int value of ch and convert to 8 bit strings bits8 = int2bin(ord(ch), 8) bit_list.append(bits8) print(bit_list) """ my result --> ['11111111', '11011000', '11111111', '11100000', '00000000', ... ] """ Sorry about the baddy!. Yes, unfortunately Python's file handling wasn't intended for bit-by-bit reading of a file; the read function takes an optional parameter for the number of bytes to read. Since we used '1', that told Python to read the file byte-by-byte. Each of the characters that you see is a single byte of the image data. Equipped with the knowledge that a byte consists of 8 bits, you could consider it a personal challenge to come up with a way to take each of those bytes and translate it into the corresponding bits. \xc4 means hexadecimal C4, a.k.a. 0xC4 >>> int('c4',16) 196 >>> int('0xc4',16) 196 >>> In fact, I believe that Python 3.0 (and therefore 2.6) has a built-in method bin() that would translate directly to binary. Let's say you have a list of your 8-bit binary elements called new_bytes , if you already have them back into '\x00' form, you could simply write to file like this: fh = open( 'my_new_file.jpg', 'wb' ) for new_byte in new_bytes: fh.write(new_byte) fh.close() However if you still need to convert the binary string into hex then you can do either use something like hex('0b10011010') (NOTE: the notation 0b denotes Binary, just as 0x denotes hex-refer here for more details). We're a friendly, industry-focused community of 1.20 million developers, IT pros, digital marketers, and technology enthusiasts learning and sharing knowledge.
https://www.daniweb.com/programming/software-development/threads/182003/beginner-how-do-i-read-in-one-bit-byte-at-a-time-from-a-file
CC-MAIN-2021-17
en
refinedweb
Front-End Web & Mobile and offer a pluggable model which can be extended to use other providers. The libraries can be used with both new backends created using the Amplify CLI and existing backend resources. Our Amplify UI Components are an open-source UI toolkit that encapsulates cloud-connected workflows inside of cross-framework UI components. In this tutorial, we’ll do the following: - Create a new Vue 3 app as the base for this tutorial. - Set up a base configuration for Amplify with Interactions category. - Create a Chatbot component to add to your Vue 3 application Prerequisites Before you begin this tutorial, please visit the Amplify Documentation website and follow the prerequisites section. Once you complete the prerequisites, you will be ready to walk through the tutorial. When you configure your AWS profile, please make sure your region supports AWS Lex. You can see the supported regions here. Creating a new Vue app First, we’ll create and start a new Vue app with @vue/cli, a CLI tool used to bootstrap a Vue app using current best practices. We’ll then add Amplify and initialize a new project. The following procedure will walk you through this process. To create a new Vue app From your projects directory, run the following commands: npm install -g @vue/cli vue create my-amplify-project ? Please pick a preset: (Use arrow keys) ❯ Default (Vue 3 Preview) ([Vue 3] babel, eslint) <-- Manually select features cd my-amplify-project To run the app: yarn serve Ctrl + C to stop the server. Initialize a new backend Now that we have a running Vue app, it’s time to set up Amplify so that we can create the necessary backend services needed to support the app. From the root of the project, run: amplify init When you initialize Amplify you’ll be prompted for some information about the app: ? Enter a name for the project (myamplifyproject) ? Enter a name for the environment dev ? Choose your default editor: Visual Studio Code ? Choose the type of app that you're building javascript ? What JavaScript framework are you using Vue ? Source Directory Path: src ? Distribution Directory Path: dist ? Build Command: npm run-script build ? Start Command: npm run-script serve ? Do you want to use an AWS profile? Yes ? Please choose the profile you want to use [Your AWS Profile] Install Amplify libraries The first step to using Amplify in the client is to install the necessary dependencies: yarn add aws-amplify @aws-amplify/ui-components Set up frontend Next, we need to configure Amplify on the client so that we can use it to interact with our backend services. Open src/main.js and add the following code below the last import: import Amplify from 'aws-amplify'; import awsconfig from './aws-exports'; import { applyPolyfills, defineCustomElements, } from '@aws-amplify/ui-components/loader'; applyPolyfills().then(() => { defineCustomElements(window); }); Amplify.configure(awsconfig); Now Amplify and UI components have been successfully configured. As you add or remove categories and make updates to your backend configuration using the Amplify CLI, the configuration in aws-exports.js will update automatically. Add Interactions Category The next feature you will be adding to your Vue app is Interactions category, which uses Amazon Lex to host a conversational bot on AWS. To add interactions to your Vue app From your projects directory, run the following command, and answer the prompted questions as indicated. $ amplify add interactions ? Provide a friendly resource name that will be used to label this category in the project: mychatbot ? Would you like to start with a sample chatbot or start from scratch? Start with a sample ? Choose a sample chatbot: BookTrip ? Please indicate if your use of this bot is subject to the Children's Online Privacy Protection Act (COPPA). No Deploy the service by running the amplify push command. amplify push Add a chatbot to your Amplify Vue project Now that we have added Interactions to the application, we will go ahead and add a chatbot component. In your src/App.vue update your code’s template: <template> <amplify-chatbot </template> To see the complete list of available props, please visit the documentation. Run the application with yarn serve. You should see a chatbot rendered: You can interact with bot by sending a text message or clicking to the microphone button to talk. Console warnings: If you see “failed to resolve component” warnings, you can create vue.config.jsfile in the app directory and use this gist to remove the warnings. Listen to chat fulfillment Now, let’s register a callback function that fires after a chat has been fulfilled. Chatbot fires a chatCompleted event whenever chat session has finishes. We can use Vue’s lifecycle hooks to listen to the event. Open src/App.vue and add the following code inside your script tag: <script> const handleChatComplete = (event) => { const { data, err } = event.detail; if (data) alert('success!\n' + JSON.stringify(data)); if (err) alert(err); }; export default { name: 'App', components: {}, mounted() { this.$el.addEventListener('chatCompleted', handleChatComplete); }, beforeUnmount() { this.$el.removeEventListener('chatCompleted', handleChatComplete); }, }; </script> You can now see a pop up whenever you finish, or fail the chat. In practice, you would want to more with data. Here are some ideas: - Add result to a DynamoDB table using Amplify’s APIcategory - Keep analytics of your orders with Amplify’s Analyticscategory - Render an image of the city that the user is traveling to. Customization Amplify provides two main technologies to customize the component to fit your application: CSS variables and slots. Using CSS Variables to Style the Component CSS variables are variables that contain specific css values reused throughout the component. You can assign them in your component stylesheet to style Amplify UI Components. Let’s change the text color background color of the text bubbles for example. Open src/App.vue and add the following css variables: <style> :root { --amplify-primary-color: #fd8abd; } amplify-chatbot { --width: 450px; --height: 600px; --header-color: rgb(40, 40, 40); --bot-background-color: #eaebff; --bot-text-color: rgb(40, 40, 40); --user-background-color: #fd8abd; } </style> Refresh the app and you should see the new colors applied to the component: Feel free to use any color of your choice. To see the complete list of css variables that amplify-chatbot provides, please visit the documentation. Using slots to Insert Custom Content Slots are placeholders inside the component that can be filled with your own markup. For example, chatbot provides a header slot that you can replace. Let’s add a logo and a custom header. We’ll be using an amplify logo for this, but you can use any image and put it to src attribute instead. Update your template in src/App.vue: <template> <amplify-chatbot <!-- eslint-disable-next-line vue/no-deprecated-slot-attribute --> <div slot="header" className="custom-header"> <img src="" height="40" /> Amplify Chatbot </div> </amplify-chatbot> </template> Note that we are adding a eslintdisable line because it will suggest you to use v-slotinstead. That is applicable for Vue component slot specification but not the web component slots that we use in this case. Finally, let’s add some css style to improve our header. Append the following to your style in src/App.vue: .custom-header { padding: 1.25rem 0.375rem 1.25rem 0.375rem; text-align: center; font-size: 1.6rem; } With that, refresh your app and you should see our customized chatbot component! Summary In this blog post, we successfully created a Vue 3 web application with Amplify. We then configured Interactions category and rendered a conversational chatbot, which we customized with CSS Variables and slots.
https://aws.amazon.com/blogs/mobile/amplify-javascript-releases-support-for-vue-3/
CC-MAIN-2021-17
en
refinedweb
Resources - The O’Reilly page (errata etc) - Jesse Liberty’s page for his various books - Buy it from Amazon or Barnes and Noble Disclaimer One reader commented that a previous book review was too full of “this is only my personal opinion” and other such disclaimers. I think it’s still important to declare the situation, but I can see how it can get annoying if done throughout the review. So instead, I’ve lumped everything together here. Please bear these points in mind while reading the whole review: - Obviously this book competes with C# in Depth, although probably not very much. - I was somewhat prejudiced against the book by seeing that the sole 5-star review for it on Amazon was by Jesse Liberty himself. Yes, he wanted to explain why he wrote the book and why he’s proud of it, but giving yourself a review isn’t the right way to go about it. - I’ve seen a previous edition of the book (for C# 2.0) and been unimpressed at the coverage of some of the new features. - I’m a nut for technical accuracy, particularly when it comes to terminology. More on this later, but if you don’t mind reading (and then presumably using) incorrect terminology, you’re likely to have a lot better time with this book than I did. - I suspect I have higher expectations for established, prolific authors such as Jesse Liberty than for newcomers to the world of writing. - I’m really not the target market for this book. Okay, with all that out of the way, let’s get cracking. Contents and target audience According to the preface, Programming C# 3.0 (PC# from now on) is for people learning C# for the first time, or brushing up on it. There’s an expectation that you probably already know another language – it wouldn’t be impossible to learn C# from the book without any prior development experience, but the preface explicitly acknowledges that it would be reasonably tough. That’s a fair comment – probably fair for any book, in fact. I have yet to read anything which made me think it would be a wonderful way to teach someone to program from absolute scratch. Likewise the preface recommends C# 3.0 in a Nutshell for a more detailed look at the language, for more expert readers. Again, that’s reasonable – it’s clearly not aiming to go into the same level of depth as Accelerated C# 2008 or C# in Depth. The book is split into 4 parts: - The C# language: pretty much what you’d expect, except that not all of the language coverage is in this part (most of the new features of C# 3.0 are in the second part) and some non-language coverage is included (regular expressions and collections) – about 270 pages - C# and Data: LINQ, XML (the DOM API and a bit of LINQ to XML), database access (ADO.NET and LINQ to SQL) – about 100 pages - Programming with C#: introductions to ASP.NET, WPF and Windows Forms – about 85 pages - The CLR and the .NET Framework: attributes, reflection, threading, I/O and interop – about 110 pages As you can tell, the bulk of it is in the language part, which is fine by me and reflects the title accurately. I’ll focus on that part of the book in this review, and the first chapter of part 2, which deals with the LINQ parts of C# 3.0. To be honest, I don’t think the rest of the book actually adds much value, simply because they skim over the surface of their topics so lightly. Part 3 would make a reasonable series of videos – and indeed that’s how it’s written, basically in the style of “Open Visual Studio, start a new WinForms project, now drag a control over here” etc. I’ve never been fond of that style for a book, although it works well in screencasts. The non-LINQ database and XML chapters in part 2 seemed relatively pointless too – I got the feeling that they’d been present in older editions and so had just stayed in by default. With the extra space available from cutting these, a much better job could have been done on LINQ to SQL and LINQ to XML. The latter gets particularly short-changed in PC#, with a mere 4 pages devoted to it! (C# in Depth is much less of a “libraries” book but I still found over 6 pages to devote to it. Not a lot, I’ll grant you.) Part 4 has potential, and is more useful than the previous parts – reflection, threading, IO and interop are all important topics (although I’d probably drop interop in favour of internationalization or something similar) – but they’re just not handled terribly well. The threading chapter talks about using lock or Monitor, but never states that lock is just shorthand for try/ finally blocks which use Monitor; no mention is made of the memory model or volatility; aborting threads is demonstrated but not warned about; the examples always lock on this without explaining that it’s generally thought to be a bad idea. The IO chapter uses TextReader (usually via StreamReader) but never mentions the crucial topic of character encodings (it uses Encoding.ASCII but without really explaining it) – and most damning of all, as far as I can tell there’s not a single using statement in the entire chapter. There are calls to Close() at the end of each example, and there’s a very brief mention saying that you should always explicitly close streams – but without saying that you should use a using statement or try/ finally for this purpose. Okay, enough on those non-language topics – let’s look at the bulk of the book, which is about the language. Language coverage PC# starts from scratch, so it’s got the whole language to cover in about 300 pages. It would be unreasonable to expect it to provide as much attention to detail as C# in Depth, which (for the most part) only looks at the new features of C# 2.0 and 3.0. (On the other hand, if the remaining 260 pages had been given to the language as well, a lot more ground could have been covered.) It’s also worth bearing in mind that the book is not aimed at confident/competent C# developers – it’s written for newcomers, and delving into tricky issues like generic variance would be plain mean. However, I’m still not impressed with what’s been left out: - There’s no mention of nullable types as far as I can tell – indeed, the list of operators omit the null-coalescing operator (??). - Generics are really only talked about in the context of collections – despite the fact that to understand any LINQ documentation, you really will need to understand generic delegates. Generic constraints are only likewise only mentioned in the context of collections, and only what I call a “derivation type constraint” (e.g. T : IComparable<T>) (as far as I can tell the spec doesn’t give this a name). There’s no coverage of default(T)– although the “default value of a type” is mentioned elsewhere, with an incorrect explanation. - Collection initializers aren’t explained as far as I can tell, although I seem to recall seeing one in an example. They’re not mentioned in the index. - Iterator blocks (and the yieldcontextual keyword) are likewise absent from the index, although there’s definitely one example of yield returnwhen IEnumerable<T>is covered. The coverage given is minimal, with no mention of the completely different way that this executes compared with normal methods. - Query expression coverage is limited: although from, where, orderby, joinand groupare covered, there’s no mention of let, the difference between joinand join ... into, explicitly typed range variables, or query continuations. The translation process isn’t really explained clearly, and the text pretty much states that it will always use extension methods. - Expression trees aren’t referenced to my knowledge; there’s one piece of text which attempts to mention them but just calls them “expressions” – which are of course entirely different. We’ll come onto terminology in a minute. - Only the simplest (and admittedly most common by a huge margin) form of usingdirectives is shown – no extern aliases, no namespace aliases, not even using Foo = System.Console; - Partial methods aren’t mentioned. - Implicitly typed arrays aren’t covered. - Static classes may be mentioned in passing (not sure) but not really explained. - Object initializers are shown in one form only, ditto anonymous object initializer expressions - Only field-like events are shown. The authors spend several pages on an example of bad code which just has a public delegate variable, and then try to blame delegates for the problem (which is really having a public variable). The solution is (of course) to use an event, but there’s little to no explanation of the nature of events as pairs of methods, a bit like properties but with subscribe/unsubscribe behaviour instead of data fetch/mutate. - Anonymous methods and lambda expressions are covered, but with very little text about the closure aspect of them. This is about it: “[…] and the anonymous method has access to the variables in the scope in which they are defined:” (followed by an example which doesn’t demonstrate the use of such variables at all). I suspect there’s more, but you get the general gist. I’m not saying that all of these should have been covered and in great detail, but really – no mention of nullable types at all? Is it really more appropriate in a supposed language book to spend several pages building an asynchronous file server than to actually list all the operators accurately? Okay, I’m clearly beginning to rant by now. The limited coverage is annoying, but it’s not that bad. Yes, I think the poor/missing coverage of generics and nullable types is a real problem, but it’s not enough to get me really cross. It’s the massive abuse of terminology which winds me up. Accuracy I’ll say this for PC# – if you ignore the terminology abuse, it’s mostly accurate. There are definitely “saying something incorrect” issues (e.g. an implication that ref/out can only be used with value type parameters; the statement that reference types in an array aren’t initialized to their default value (they are – the default value is null); the claim that extension methods can only access public members of target types (they have the same access as normal – so if the extension method is in the same assembly as the target type, for instance, it could access internal members)) but the biggest problem is that of terminology – along with sloppy code, including its formatting. The authors confuse objects, values, variables, expressions, parameters, arguments and all kinds of other things. These have well-defined meanings, and they’re there for a reason. They do have footnotes explaining that they’re deliberately using the wrong terminology – but that doesn’t make it any better. Here are the three footnotes, and my responses to them: The terms argument and parameter are often used interchangably, though some programmers insist on differentiating between the parameter declaration and the arguments passed in when the method is invoked. Just because others abuse terms doesn’t mean it’s right for a book to do so. It’s not that programmers insist on differentiating between the two – the specification does. Now, to lighten things up a bit I’ll acknowledge that this one isn’t always easy to deal with. There are plenty of times where I’ve tried really hard to use the right term and just not ended up with a satisfactory bit of wording. However, at least I’ve tried – and where it’s easy, I’ve done the right thing. I wish the authors had the same attitude. (They do the same with the conditional operator, calling it “the ternary operator”. It’s a ternary operator. Having three operands is part of its nature – it’s not a description of its behaviour. Again, lots of other people get this wrong. Perhaps if all books got it right, more developers would too.) Next up: Throughout this book, I use the term object to refer to reference and value types. There is some debate in the fact that Microsoft has implemented the value types as though they inherited from the root class Object (and thus, you may call all of Object’s methods on any value type, including the built-in types such as int.) To me, this pretty much reads as “I’m being sloppy, but I’ve got half an excuse.” It’s true that the C# specification isn’t clear on this point – although the CLI spec is crystal clear. Personally, it just feels wrong to talk about the value 5 as an object. It’s an object when it’s boxed, of course (and if you call any Object methods on a value type which haven’t been overridden by that type, it gets boxed at that point) but otherwise I really don’t think of it as an object. An instance of the type, yes – but not an object. So yes, I’ll acknowledge that there’s a little wiggle room here – but I believe it’s going to confuse readers more than it helps them. It’s the “confusing readers more than it helps them” part which is important. I’m not above a little bit of shortcutting myself – in C# in Depth, I refer to automatically implemented properties as “automatic properties” (after explicitly saying what I’m doing) and I refer to the versions of C# as 1, 2 and 3 instead of 1.0, 1.2, 2.0 and 3.0. In both these cases, I believe it adds to the readability of the book without giving any room for confusion. That’s very different from what’s going on in PC#, in my view. I’ve saved the most galling example of this for last: As noted earlier, btnUpdateand btnDeleteare actually variables that refer to the unnamed instances on the heap. For simplicity, we’ll refer to these as the names of the objects, keeping in mind that this is just short-hand for “the name of the variables that refer to the unnamed instances on the heap.” This one’s the killer. It sounds relatively innocuous until you see the results. Things like this (from P63): No, that code doesn’t instantiate anything. It declares a variable – and that’s all. The comment isn’t non-sensical – the idea of some code which does instantiate a ListBox object clearly makes sense – but it’s not what’s happening in this code (in C# – it would in C++, which makes it even more confusing). That’s just one example – the same awful sloppiness (which implies something completely incorrect) permeates the whole book. Time and time again we’re told about instances being created when they’re not. From P261: The Clock class must then create an instance of this delegate, which it does on the following line:public SecondChangeHandler SecondChanged; Why do I care about this so much? Because I see the results of it on the newsgroups, constantly. How can I blame developers for failing to communicate properly about the problems they’re having if their source of learning is so sloppy and inaccurate? How can they get an accurate mental model of the language if they’re being told that objects are being instantiated when they’re not? Communication and a clear mental model are very important to me. They’re why I get riled up when people perpetuate myths about where structs “live” or how parameters are passed. PC# had me clenching my fists on a regular basis. These are examples where the authors apparently knew they were abusing the terminology. There are other examples where I believe it’s a genuine mistake – calling anonymous methods “anonymous delegates” or “statements that evaluate to a value are called expressions” (statements are made up of expressions, and expressions don’t have to return a value). I can certainly sympathise with this. Quite where they got the idea that HTML was derived from “Structured Query Markup Language” I don’t know – the word “Query”should have been a red flag – but these things happen. In other places the authors are just being sloppy without either declaring that they’re going to be, or just appearing to make typos. In particular, they’re bad at distinguishing between language, framework and runtime. For instance: - “C# combines the power and complexity of regular expression syntax […]” – no, C# itself neither knows nor cares about regular expressions. They’re in the framework. - (When talking about iterator blocks) “All the bookkeeping for keeping track of which element is next, resetting the iterator, and so forth is provided for you by the Framework.” – No, this time it is the C# compiler which is doing all the work. (It doesn’t support reset though.) - “Strings can also be created using verbatim string literals, which start the at (@) symbol. This tells the String constructor that the string should be used verbatim […]” – No, the String constructor doesn’t know about verbatim string literals. They’re handled by the C# compiler. - “The .NET CLR provides isolated storage to allow the application developer to store data on a per-user basis.” I very much doubt that the CLR code has any idea about this. I expect it to be in the framework libraries. Again, if books don’t get this right, how do we expect developers to distinguish between the three? Admittedly sometimes it can be tricky to decide where responsibility lies – but there are plenty of clearcut cases where PC# is just wrong. I doubt that the authors really don’t know the difference – they just don’t seem to think it’s important to get it right. Code I’m mostly going to point out the shortcomings of the code, but on the plus side I believe almost all of it will basically work. There’s one point at which the authors have both a method and a variable with the same name (which is already in the unconfirmed errata) and a few other niggles, but they’re relatively rare. However: - The code frequently ignores naming conventions. Method and class names sometimes start with lower case, and there’s frequent use of horrible names beginning with “my” or “the”. - The authors often present several pages of code together, and then take them apart section by section. This isn’t the only book to do this by a long chalk, but I wonder – does anyone really benefit from having the whole thing in a big chunk? Isn’t it better to present small, self-contained examples? - As mentioned before, the uses of usingstatements are few and far between. - The whitespace is all over the place. The indentation level changes all the time, and sometimes there are outdents in the middle of blocks. Occasionally newlines have actually been missed out, and in other cases (particularly at the start of class bodies) there are two blank lines for no reason at all. (The latter is very odd in a book, where vertical whitespace is seen as extremely valuable.) Sometimes there’s excessive (to my mind) spacing Just as an example (which is explicitly labelled as non-compiling code, so I’m not faulting it at all for that):using System.Console; class Hello { static void Main() { WriteLine(“Hello World”); } } I promise you that’s exactly how it appears in the book. Now this may have started out as a fault of the type-setter, but the authors should have picked it up before publication, IMO. I could understand there being a few issues like this (proof-reading code really is hard) but not nearly as many as there are. - There are examples of mutable structs (or rather, there’s at least one example), and no warning at all that mutable value types are a really, really bad idea. Again, I don’t want to give the impression I’m an absolute perfectionist when it comes to code in book. For the sake of keeping things simple, sometimes authors don’t seal types where they should, or make them immutable etc. I’m not really looking for production-ready code, and indeed I made this very point in one of the notes for C# in Depth. However, I draw the line at using statements, which are important and easy to get right without distracting the reader. Likewise giving variables good names – counter rather than ctr, and avoiding those the and my prefixes – makes a competent reader more comfortable and can transfer good habits to the novice via osmosis. Writing style and content ordering Time for some good news – when you look beyond the terminology, this is a really easy book to read. I don’t mean that everything in it is simplistic, but the style rarely gets in the way. It’s not dry, and some of the real-world analogies are very good. This may well be Jesse Liberty’s experience as a long-standing author making itself apparent. In common with many O’Reilly books, there are two icons which usually signify something worth paying special attention to: a set of paw prints indicating a hint or tip, and a mantrap indicating a commonly encountered issue to be aware of. Given the rest of the review, I suspect you’d be surprised if I agreed with all of the points made in these extra notes – and indeed there are some issues – but most of them are good. Likewise there are also notes for the sake of existing Java and C++ developers, which make sense and are useful. I don’t agree with some of the choices made in terms of how and when to present some concepts. I found the way of explaining query expressions confusing, as it interleaved “here’s a new part of query expressions” with “here’s a new feature (e.g. anonymous types, extension methods).” It will come as no surprise to anyone who’s read C# in Depth that I prefer the approach of presenting all the building blocks first, and then showing how query expressions use all those features. There’s a note explaining why the authors have done what they’ve done, but I don’t buy it. One important thing with the “building blocks first” approach is to present a preliminary example or two, to give an idea of where we’re headed. I’ve forgotten to do that in the past (in a talk) and regretted it – but I don’t regret the overall way of tackling the topic. On a slightly different note, I would have presented some of the earlier topics in a different order too. For instance, I regard structs and interfaces as more commonly used and fundamental topics than operator overloading. (While C# developers tend not to create their own structs often, they use them all the time. When was the last time you wrote a program without an int in it?) This is a minor nit – and one which readers may remember I also mentioned for Accelerated C# 2008. There’s one final point I’d like to make, but which doesn’t really fit anywhere else – it’s about Jesse Liberty’s dedication. Most people dedicate books to friends, colleages etc. Here’s Jesse’s: This book is dedicated to those who come out, loud, and in your face and in the most inappropriate places. We will look back at this time and shake our heads in wonder. In 49 states, same-sex couples are denied the right to marry, though incarcerated felons are not. In 36 states, you can legally be denied housing just for being q-u-e-e-r. In more than half the states, there is no law protecting LGBT children from harassment in school, and the suicide rate among q-u-e-e-r teens is 400 percent higher than among straight kids. And, we are still kicking gay heroes out of the military despite the fact that the Israelis and our own NSA, CIA, and FBI are all successfully integrated. So yes, this dedication is to those of us who are out, full-time. (I’ve had to spell out q-u-e-e-r as otherwise the blog software replaces it with asterisks. Grr.) I’m straight, but I support Jesse’s sentiment 100%. I can’t remember when I first started taking proper notice of the homophobia in the world, but it was probably at university. This dedication does nothing to help or hinder the reader with C#, but to my mind it still makes it a better book. Conclusion In short, I’m afraid I wouldn’t recommend Programming C# 3.0 to potential readers. There are much better books out there: ones which won’t make it harder for the reader to talk about their code with others, in particular. It’s not all bad by any means, but the mixture of sloppy use of terminology and poor printed code is enough of a problem to make me give a general thumbs down. Next up will be CLR via C#, by Jeffrey Richter. Response from Jesse Liberty As normal, I mailed the author (in this case just Jesse Liberty – I confess I didn’t look for Donald Xie’s email address) and very promptly received a nice response. He asked me to add the following as his reaction: Also as normal, I’ll be emailing Jesse with a list of the errors I found, so hopefully they can be corrected for the next edition. 14 thoughts on “Book Review: Programming C# 3.0 by Jesse Liberty and Donald Xie” Really enjoying these reviews Jon. I must admit though, I’m surprised that you are prepared to review so much material that is all heavily C# related – does it get at all tiresome? It gets a little bit repetitive, but it’s the topic I’m best able to review for accuracy – and of course, the more books I read on a similar topic, the more easily I can compare them. We’ll see how things go though… What motivates you to do the reviews? Most bloggers I know will simply write reviews as an incidental “well I just read this so here are my thoughts”. But instead you seem to be actively seeking out books that you believe you can review effectively. It seems like an odd choice, as I know I would prefer to read books covering subject matter that I am -unfamiliar- with! Is it to help differentiate your book (which is FANTASTIC, btw) from the competition? That would make sense. @Paul: Good question, and one I certainly haven’t answered before. 1) I like to provide information for people, particularly where I feel I’m in some ways “better qualified” to provide that information. As I’ve said before, there are different ways in which a reviewer can be “the right person for the job” – I can’t really approach a C# book from the perspective of “How effectively did the book actually teach me” but I can judge its accuracy pretty well. I hope my perspective as an author is also interesting – I probably think more about things like ordering of contents more than others. 2) I like to know my competition – both to learn from it and to differentiate it. As I sort of expected, I haven’t found another book which has the same aims as C# in Depth yet – i.e. a really tight language focus. Some other books are “language + framework” focus (e.g. Accelerated C# and C# 3.0 in a Nutshell) – which is absolutely fine, but not what I wanted to write :) 3) I want to improve the experience of people learning C#. By reviewing books I can accomplish that in two ways: I can submit detailed errata to the authors, which should make their next reprint/edition more accurate. I can also hopefully guide potential readers to a book which will suit them well, and give them an idea of what to expect in terms of content, accuracy and style. The better-informed the general C# development community is, the more interesting (and articulate) the discussions on newsgroups and forums such as Stack Overflow is likely to be. 4) I just enjoy doing it :) Now that I’m working in London, I have quite a bit of time on trains, buses and tubes every day – reviewing books is a useful way of spending that time. 5) I hope that by providing detailed book reviews, I’ll encourage others to write similarly detailed reviews. I’d really like someone else to give C# in Depth the same kind of treatment I’m giving these books – in particular, I’m sure there are plenty of undiscovered errata, and I’d like to get rid of them :) Hope that gives some idea of my motivation – it’s not a complete answer, but I think it covers most of it. Calling value types “objects” doesn’t really bother me; its really hard to make a definitive case either way. Saying “it’s an object when it’s boxed” is true only if you equate “object” with “heap reference”, which doesn’t necessarily have to be true (although if you have a variable of type object it is always a heap reference, which I guess is why we tend to equate the two). However, I can see where it could further confuse the issue for someone who does not already have a firm grasp of the memory model for value and reference types. The other inaccuracies are inexcusable though. I am in absolute agreement with you: using incorrect terminology because you think that “most real-world programmers” can’t tell the difference is exactly why we have so many “real-world programmers” who can’t tell the difference! Its like teaching ebonics to grade school students who are already having trouble with English; we should be teaching them the right way, not giving up on them and accepting the wrong way. Thanks for taking the time to provide such a thorough answer Jon. I’m especially looking forward to the next review. Hi Jon, Thanks for another detailed and great review. Your review and the followup dialogue in the comments between yourself and Paul Batum raises an interesting (to me) question; It’s quite clear that someone new to C# (or even worse, new to programming) would probably have a hard time starting with “C# in Depth”. After a few years of playing with C# (coming from basic, then assembler, then Pascal, then C and PHP before finally giving in and going OO), I felt that C# in Depth had a lot to offer me; And it probably will the next few times I read it, i.e. it’s one of those books that reflect your own perspective so that the more you bring into it, the more you get out of it. However, I can see that a complete language n00b would die before getting through the first few pages; What would your recommendation be for the book (whether this is a book that already exists, or your opinions on what the book would need to be like) a new C# programmer needs to read before C# in Depth if (s)he is new to a) programming in general b) C# I realize this would not be a review as such, but it would be interesting to see your thoughts about this subject as someone who clearly knows his stuff and has a scary attention to detail.. ;) Thanks! @Rune: I don’t yet have much of a recommendation for a “complete newbie” programmer, although I will let slip that I have been talking to a few people about the possibility of writing such a book. For a programmer who knows Java or C++ but doesn’t know any C#, either “C# 3.0 in a Nutshell” or “Accelerated C# 2008” would be a reasonable starting point. I have “Essential C# 3.0” as well, but I haven’t read it yet. One problem is that I think that C# is too big a language to learn from scratch to 3.0 in one book. I suspect it would probably take a year of reading and practising – and I for one don’t like the idea of taking a whole year to read one book! On the other hand, a lot of books which don’t cover the language thoroughly don’t say what they’re leaving out. I hope that if I were ever to write a “beginners” C# book, I would leave appropriate bits out but indicate what’s left to investigate, preferably with links to MSDN etc. (I thoroughly agree with your suggestion that newbies would get nothing out of C# in Depth, btw.) Jon Hi Jon Here’s another C# book you may care to take a look at and if possible, review. C# for artists. Pulp free press. As a programmer who had some limited previous C++ experience and was always interested in OOP concepts, I can confirm, that a combination of “Accelerated C# 2008” (as a main guide) and “C# 3.0 in a Nutshell” (as a reference) is really good. Both books are also extremely interesting. Jon, you didn’t like “statements that evaluate to a value are called expressions”. How would you define an expression? In functional programming books they tend to imply that an expression is something which can be evaluated and returns a value when evaluated. Functional languages define a special empty value which is of a special type (like “()” value of type unit in F#). @Vladimir: I don’t think I’d like to try to define an expression in just a few words, really, but possibly something along the lines of “anything in your code which can be evaluated”. So in “Console.WriteLine(x + y);” the whole statement (minus the semicolon) is an expression, as is x, as is y, as is x + y. @Vladimir, The point is that expressions are not limited to statements. A statement is a syntactic element that has to be able to stand on its own. A program is a sequence of statements. “x = a + b” is a statement and (in C#) an expression (which evaluates to the final value of “x”), while “a + b” is an expression but not a statement. So to define expressions as “statements that evaluate to a value” is to make the definition too narrow. This goes back to the author’s apparent lack of concern for accuracy when using standard terminology. He is apparently using “statement” to mean “any part of the code”, but that is not what “statement” means. @Kevdez: Thanks will take a look. Can’t say I like the font they’ve used for headings etc, but I won’t let that put me off too much :) I have read previous edition for C# 2.0 and found it inaccurate. I sent my opinion to Jessy. He never responded and did not make errata changes.
https://codeblog.jonskeet.uk/2008/09/27/book-review-programming-c-3-0-by-jesse-liberty-and-donald-xie/?like_comment=10408&_wpnonce=405e7e4a9f
CC-MAIN-2021-17
en
refinedweb
Using the Request Directly¶ Warning The current page still doesn't have a translation for this language. But you can help translating it: Contributing. documented (with OpenAPI, for the automatic API user interface). from fastapi import FastAPI, Request app = FastAPI() @app.get("/items/{item_id}") def read_root(item_id: str, request: Request): client_host = request.client.host return {"client_host": client_host, "item_id": item_id} By declaring a path operation function parameter with the type being the Request FastAPI will know to pass the Request in that parameter. Tip Note that in this case, we are declaring a path parameter beside. Technical Details You could also use from starlette.requests import Request. FastAPI provides it directly just as a convenience for you, the developer. But it comes directly from Starlette.
https://fastapi.tiangolo.com/zh/advanced/using-request-directly/
CC-MAIN-2021-17
en
refinedweb
Number of pairs with a given sum Sign up for FREE 1 month of Kindle and read all our books for free. Get FREE domain for 1st year and build your brand new site Reading time: 25 minutes | Coding time: 5 minutes In this article, we will find out how we can count the number of pairs in an array whose sum is equal to a certain number. Brute force approaches will take O(N^2) time but we can solve this in O(N) time using a Hash Map. We solve this problem using two approaches: - Brute force approach [ O(N^2) time and O(1) space ] - Efficient approach using Hash Map [ O(N) time and O(N) space ] For example: a[] = {1,2,3,4,5,6} sum = 5 so the pairs with sum 5 are: {1,4} {2,3} so the output is equal to 2. Note other pairs like (1,2) (3,4) and others do not sum upto 5 so these pairs are not considered. In fact, there are 15 pairs in total. Now to solve this problem we can take help of an efficient algorithm and use an good container data structure. But first we shall see the naive algorithm, and further solve it in a efficient approach. Brute force In this method we scan each and every element in the array and using the nested loop we find if any other element in the array makes the required sum in the array. Pseudocode: - Find all pairs - for each pair, check if the sum is equal to given number int count_pairs(int list, int sum) { int length = length_of(list); int count = 0; for(int i = 0; i<length; i++) for(int j = i+1; j<length; j++) if(list[i] + list[j] == sum) ++count; return count; } Code implementation: Following is the complete C++ implementation: #include <bits/stdc++.h> using namespace std; int pair_calc(int arr[], int n, int sum) { int count = 0; for (int i=0; i<n; i++) for (int j=i+1; j<n; j++) if (arr[i]+arr[j] == sum) count++; return count; } int main() { int n; int a[100]; cout<<"enter the size of array"<<endl; cin>>n; cout<<"enter the array"<<endl; for(int i=0;i<n;i++) { cin>>a[i]; } int sum; cout<<"enter the sum:"<<endl; cin>>sum; cout << "The number of pairs= " << pair_calc(a, n, sum); return 0; } Output input: enter the size of the array: 5 enter the array: 1 3 2 4 2 enter the sum: 4 The number of pairs=2 Complexity of Brute Force approach Time complexity: O(N^2) Space complexity: O(1) Efficient algorithm O(N) We use an unordered_map to fulfill our task. This algorithm consists of two simple traversals: - The first traversal stores the frequency of each element in the array, in the map. - The second traversal actually searches the pairs that have the required sum.But in any case the pair is counted two times so the counter's value has to be halved. And if in case the pair a[i] and a[i] satisfies the case then we will have to subtract 1 from the frequency counter. Pseudocode: int pairs(int a[], int sum) { int length = length_of(a); hashmap m; for (int i=0; i<length; i++) if a[i] is not in m add a[i] to m with value 1 (a[i], 1) else increment value of a[i] (a[i], value++) int count = 0; for (int i=0; i<length; i++) { if(sum - a[i] is in m) count = count + value of sum-a[i] if (sum-a[i] == a[i]) count--; // to ignore duplicates } return count/2; // as all pair has been counted twice } Code implementation: Following is the complete C++ implementation: #include <bits/stdc++.h> using namespace std; int Pairs_calc(int a[], int n, int sum) { unordered_map<int, int> m; for (int i=0; i<n; i++) m[a[i]]++; int count = 0; for (int i=0; i<n; i++) { count += m[sum-a[i]]; if (sum-a[i] == a[i]) count--; } return count/2; } int main() { int arr[] = {2,4,5,1,0} ; int n = sizeof(arr)/sizeof(arr[0]); int sum = 6; cout << "the number of pairs are = " << Pairs_calc(arr, n, sum); return 0; } Output: the number of pairs are = 2 Explanation: now in the array: 2,4,5,1,0 we wanna find the sum = 6 so the map first stores the value and its frequency of each number, here each element is unique so each has a frequency of 1. so after this the search for the pairs begins, here the target sum is 6 so we actually search 6-a[i] for the pair. now if we get the value of 6-a[i] in any bucket of the map,we increase the counter. But by doing so we actually increase the pair counter double times. So we need to half the value. Complexity: Time complexity: O(N) Space complexity: O(N) Note that the space complexity increases from O(1) in the brute force approach to O(N) in the efficient hashmap approach but the time complexity improves from O(N^2) to O(N) in the efficient hash map approach. The idea is that if we compromise the space complexity, we can actually improve the time complexity. Task How will you modify the above efficient approach to print the pairs? The idea is to simply print the value whenever you are incrementing the count value. To avoid duplicates, one can store the pairs in a set and at the end, print all values in the set. With this, you have the complete knowledge of solving this problem efficiently. Enjoy.
https://iq.opengenus.org/pairs-with-certain-sum/
CC-MAIN-2021-17
en
refinedweb
react-native-responsive-screen react-native-responsive-screen is a small library that provides 2 simple methods so that React Native developers can code their UI elements fully responsive. No media queries needed. It also provides an optional third method for screen orienation detection and automatic rerendering according to new dimensions. Give it a try and make your life simpler! Installation npm install react-native-responsive-screen --save Usage - After the package has installed, when application loads (in real device and/or emulator), it detects the screen's width and height. I.e. for Samsung A5 2017 model it detects width: 360DPand height: 640DP(these are the values without taking into account the device's scale factor). - The package exposes 2 basic methods: widthPercentageToDPand heightPercentageToDP. Their names essentially mean that you can supply a "percentage like" string value to each method and it will return the DP (indipendent pixel) that correspond to the supplied percentage of current screen's width/height respectivelly. I.e. for Samsung A5 2017, if we supply to a CSS box: width: widthPercentageToDP('53%'), the rendered style will be width: 190.8DP. Check example number 1 for how to use them. - Methods widthPercentageToDPand heightPercentageToDPcan be used for any style (CSS) property that accepts DP as value. DP values are the ones of type numberover the props mentioned in RN docs: View style props, Text style props, Image style props, Layout props and Shadow props. Use the exposed methods for all of the type numberproperties used in your app in order to make your app fully responsive for all screen sizes. - You can also provide decimal values to these 2 methods, i.e. font-size: widthPercentageToDP('3.75%'). - The package methods can be used with or without flex depending on what you want to do and how you choose to implement it. - The suggested approach is to start developing from larger screens (i.e. tablets). That way you are less prone to forget adding responsive values for all properties of type number. In any case, when your screen development is done, you should test it over a big range of different screens as shown below in the How do I know it works for all devices ? section. - There are 2 more methods to use if you want to support responsiveness along with orientation change. These are listenOrientationChangeand removeOrientationListener. To see how to use them, check example number 3. - You can use this package along with styled-components. To see how to do that, check example number 2. Examples 1. How to use with StyleSheet.create() and without orientation change support import {widthPercentageToDP as wp, heightPercentageToDP as hp} from 'react-native-responsive-screen'; class Login extends Component { render() { return ( <View style={styles.container}> <View style={styles.textWrapper}> <Text style={styles.myText}>Login</Text> </View> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1 }, textWrapper: { height: hp('70%'), // 70% of height device screen width: wp('80%') // 80% of width device screen }, myText: { fontSize: hp('5%') // End result looks like the provided UI mockup } }); export default Login; You can find a working example of this over the related example repository 2. How to use with StyleSheet.create() and with orientation change support Check the README of the related example repository 3. How to use with styled components Check the README of the related example repository How do I know it works for all devices ? As mentioned in "How to Develop Responsive UIs with React Native" article, this solution is already in production apps and is tested with a set of Android, iOS emulators of different screen specs, in order to verify that we always have the same end result. Example: The 4 blue tiles at the bottom half of the screen should take over 98% of the screen’s width in dp and 10% of the screen’s height in dp always: Smartphones Tablets
https://reactnativeexample.com/make-react-native-views-responsive-for-all-devices-with-the-use-of-2-simple-methods/
CC-MAIN-2021-17
en
refinedweb
webpack is a module bundler that bundles our code. That may not seem like much, but imagine a large, complex application with hundreds of files using many different libraries and dependencies. We need all these files and libraries to work together. That's where webpack comes in. There are several other popular tools for achieving the same goals, including Gulp.js and Grunt.js. These tools are known as task runners. A task runner is exactly what it sounds like: a tool to run tasks such as concatenation and minification. webpack can do the same things, but it does so by loading assets such as plugins. webpack is the most popular solution today and it's used with major frameworks such as React and Angular. If you're interested, you can explore Gulp.js or Grunt.js in your own time - though they aren't as widely used now that module bundlers have taken over. You're not expected to know the fine points of the differences between module bundlers and task runners while you're at Epicodus, but you're encouraged to do some additional reading on your own. Over the next ten lessons, we'll make incremental additions to our webpack configuration. We will also provide a basic explanation of what webpack is doing. These lessons are not designed to be exhaustive and the webpack documentation is excellent. We recommend referring back to the documentation if you have further questions or need clarification about webpack. So how does webpack work and why is it so useful? webpack uses a dependency graph to recursively manage an application's assets. That sounds complicated, but the good news is that webpack will do most of the heavy lifting for us. Let's take a look at an example. Imagine that we're building an application that makes very complex peanut butter and jelly sandwiches. As a result, we have multiple JavaScript files for managing the creation of these sandwiches: peanut-butter.js, jelly.js and bread.js. In addition to these files, which contain our business logic, we also have an entry point for our application called main.js where we include our interface logic. ( main.js is a common naming convention for an entry point file.) Think of an entry point as a door leading into our application. webpack needs this entry point in order to recursively gather all the other files the application needs. A bigger application may have multiple entry points but we'll only be working with one. Here's what the first few lines of main.js might look like: import { PeanutButter } from './peanut-butter.js' import { Jelly } from './jelly.js' import { Bread } from './bread.js' import '../css/styles.css' ... We haven't covered import statements just yet - we'll do so in a few lessons. For now, just be aware that an import statement is exactly what it sounds like: a way to import a piece of code from one file into another. When we tell webpack to load main.js, webpack will recursively load and concatenate all the code from main.js as well as any required code from other files such as peanut-butter.js. If jelly.js imports code from yet another file called blueberry.js, webpack would gather that code, too. This code will all be gathered into a single file with a name like bundle.js - which is exactly what we'll call our bundled code. Remember how we mentioned that our finished project will have a dist directory with a file named bundle.js inside it? webpack will automatically create that file for us! And just like that, our code is bundled into one file. With a task runner, we'd need to write a task concatenating our code to achieve the same thing. webpack will load not just JavaScript files but also other assets such as CSS files and images. In fact, as long as we have the right loaders and plugins (which we'll discuss shortly), we can import many types of assets. That's why we'll be storing all our assets - CSS, HTML, and JS - in our src directory. webpack will gather them all and turn them into a single JavaScript file. In general, we don't really need to worry about how webpack is gathering its resources. This is one of those things where the tool we're using will take care of things for us and we don't need to dig too much deeper. However, it's good to have a general sense of what webpack is actually up to behind the scenes. Ultimately, as long as we correctly set up your webpack configuration file and use import statements, webpack will take care of the rest for you. Lesson 9 of 48 Last updated more than 3 months ago.
https://www.learnhowtoprogram.com/intermediate-javascript/test-driven-development-and-environments-with-javascript/introduction-to-webpack
CC-MAIN-2021-17
en
refinedweb
where is `sage.databases.db.Database-` I am trying to port some older Sage code (I think for Sage 5.4 or so). It uses sage.databases.db.Database but that doesn't seem to exist anymore. Where is it? The file I'm trying to update is euler_database.py. Edit: Can I just use from sage.databases.all import SQLDatabase as Database ?
https://ask.sagemath.org/question/10154/where-is-sagedatabasesdbdatabase-/
CC-MAIN-2021-17
en
refinedweb
Tisserand plots and applications in gravity assisted maneuvers¶ Spacecraft fuel is limited and thus becomes a constrain when developing interplanetary maneuvers. In order to save propellant, mission analysis team usually benefits from so called gravity assisted maneuvers. Although they are usually applied for increasing spacecraft velocity, they might also be used for the opposite objective. Previous kind of maneuvers are not only useful for interplanetary trips but also extremely important when designing so-called “moon tours” in Jupiter and Saturn systems. In order to perform a preliminary gravity assist analysis, it is possible to make use of Tisserand plots. These plots illustrate how to move within different bodies for a variety of \(V_{\infty}\) and \(\alpha\), being this last angle the pump angle. Tisserand plots assume: Perfectly circular and coplanar planet orbits. Although it is possible to include inclination within the analysis, Tisserand would no longer be 2D plots but become surfaces in a three dimensional space. Phasing is not taken into account. That means only orbits are taken into account, not rendezvous between departure and target body. Please, note that poliastro solves ``mean orbital elements`` for Solar System bodies. Although their orbital parameters do not have great variations among time, planet orbits are assumed not to be perfectly circular either coplanar. However, Tisserand figures still are useful for quick-design gravity assisted maneuvers. How to read the graphs¶ As said before, these kind of plots assume perfectly circular and coplanar orbits. Each point in a Tisserand graph is just a fly-by orbit wiht a given \(V_{\infty}\) and pump angle. That particular orbit has some energy associated, which can be computed as \(C_{Tiss}=3 - V_{\infty}^2\). The question then is, where can a spacecraft come and reach that orbit with those particular conditions? Although Tisserand figures come in many different ways, they might be usually represent: Periapsis VS. Apoapsis Orbital period VS. Periapsis Specific Energy VS. Periapsis Let us plot a very simple Tisserand-energy kind plot for the inner planets except Mercury. [1]: import astropy.units as u import matplotlib.pyplot as plt import numpy as np from poliastro.bodies import Venus, Earth, Mars from poliastro.plotting.tisserand import TisserandPlotter, TisserandKind from poliastro.plotting._base import BODY_COLORS Notice that we imported the TisserandKind class, which will help us to indicate the kind of Tisserand plot we want to generate. [2]: # Show all possible Tisserand kinds for kind in TisserandKind: print(f"{kind}", end="\t") TisserandKind.APSIS TisserandKind.ENERGY TisserandKind.PERIOD We will start by defining a TisserandPlotter instance with custom axis for a better look of the final figure. In addition, user can also make use of plot and plot_line method for representing both a collection of lines or just isolated ones. [3]: # Build custom axis fig, ax = plt.subplots(1,1,figsize=(15,7)) ax.set_title("Energy Tisserand for Venus, Earth and Mars") ax.set_xlabel("$R_{p} [AU]$") ax.set_ylabel("Heliocentric Energy [km2 / s2]") ax.set_xscale("log") ax.set_xlim(10**-0.4, 10**0.15) ax.set_ylim(-700, 0) # Generate a Tisserand plotter tp = TisserandPlotter(axes=ax, kind=TisserandKind.ENERGY) # Plot Tisserand lines within 1km/s and 10km/s for planet in [Venus, Earth, Mars]: ax = tp.plot(planet, (1, 14) * u.km / u.s, num_contours=14) # Let us label previous figure tp.ax.text(0.70, -650, "Venus", color=BODY_COLORS["Venus"]) tp.ax.text(0.95, -500, "Earth", color=BODY_COLORS["Earth"]) tp.ax.text(1.35, -350, "Mars", color=BODY_COLORS["Mars"]) # Plot final desired path by making use of `plot_line` method ax = tp.plot_line(Venus, 7 * u.km / u.s, alpha_lim=(47 * np.pi / 180, 78 * np.pi / 180), color="black") ax = tp.plot_line(Mars, 5 * u.km / u.s, alpha_lim=(119 * np.pi / 180, 164 * np.pi / 180), color="black") Previous black lines represents an EVME, which means Earth-Venus-Mars-Earth. Our spacecraft starts at and orbit with \(V_{\infty}=5\)km/s at Earth location. At this point, a trajectory shared with a \(V_{\infty}=7\)km/s for Venus is shared. This new orbit would take us to a new orbit for \(V_{\infty}=5\)km/s around Mars, which is also intercepted at some point by Earth again. More complex tisserand graphs can be developed, for example for the whole Solar System planets. Let us check! [4]: # Let us import the rest of the planets from poliastro.bodies import Mercury, Jupiter, Saturn, Uranus, Neptune SS_BODIES_INNER = [ Mercury, Venus, Earth, Mars, ] SS_BODIES_OUTTER = [ Jupiter, Saturn, Uranus, Neptune, ] We will impose the final figure also to show a dashed red line which represents \(R_{p} = R_{a}\), meaning that orbit is perfectly circular. [5]: # Prellocate Tisserand figure fig, ax = plt.subplots(1,1,figsize=(15,7)) ax.set_title("Apsis Tisserand for Solar System bodies") ax.set_xlabel("$R_{a} [AU]$") ax.set_ylabel("$R_{p} [AU]$") ax.set_xscale("log") ax.set_yscale("log") # Build tisserand tp = TisserandPlotter(axes=ax, kind=TisserandKind.APSIS) # Show perfect circular orbits r = np.linspace(0, 10**2) * u.AU tp.ax.plot(r,r, linestyle="--", color="red") # Generate lines for inner planets for planet in SS_BODIES_INNER: tp.plot(planet, (1, 12) * u.km / u.s, num_contours=12) # Generate lines for outter planets for planet in SS_BODIES_OUTTER: if planet == Jupiter or planet == Saturn: tp.plot(planet, (1, 7) * u.km / u.s, num_contours=7) else: tp.plot(planet, (1, 5) * u.km / u.s, num_contours=10)
https://docs.poliastro.space/en/latest/examples/Tisserand.html
CC-MAIN-2021-17
en
refinedweb
#include <exception> #include <string> #include "parser.h" #include "scanner_flex.h" Internal header file used by the selection tokenizer. Internal data structure for the selection tokenizer state. Internal function to add a token to the pretty-printed selection text. Internal function that processes identifier tokens. Internal function for cases where several tokens need to be returned.
https://manual.gromacs.org/current/doxygen/html-full/scanner__internal_8h.xhtml
CC-MAIN-2021-17
en
refinedweb
In this project I want to combine 2 components, the 1602 LCD Display and the DHT11 temperature and humidity sensor to create a digital thermometer we could actually use in the real world. Before we start, read the DHT11 tutorial where we write a program that reads the data from the sensor: and also read the 1602 LCD tutorial where I explain how to write to the display: Once you do so, all you need to do from the circuits perspective is to add both circuits to the same Arduino based project: Here it is in practice: On the code side, we do a similar thing. We include both the DHT and the LiquidCrystal libraries first, then we initialize the 2 components. We initialize them in setup() and in loop() we check every 2 seconds the data coming from the sensor, and we print it to the LCD display: #include <LiquidCrystal.h> #include <DHT.h> DHT dht(2, DHT11); LiquidCrystal lcd(7, 8, 9, 10, 11, 12); void setup() { dht.begin(); lcd.begin(16, 2); } void loop() { delay(2000); float h = dht.readHumidity(); float t = dht.readTemperature(); if (isnan(h) || isnan(t)) { return; } lcd.setCursor(0, 0); lcd.print((String)"Temp: " + t + "C"); lcd.setCursor(0, 1); lcd.print((String)"Humidity: " + h + "%"); } Here is the project running: More electronics tutorials: - Electronic components: Servo Motors - Arduino project: read a digital input - Arduino project: read analog input - Arduino project: control a servo motor with a potentiometer - Electronic components: the DHT11 temperature and humidity sensor - How to run a Web Server on an Arduino - The Arduino Uno WiFi rev 2 board - Arduino project: the analogWrite() function and PWM - Electronic components: Resistors
https://flaviocopes.com/arduino-project-digital-thermometer/
CC-MAIN-2021-17
en
refinedweb
Build system for C# developers. [EN/DE/RU] Hi, this is an older question and I found no satisfying solution. Maybe there are new options or ideas, so here again: I have build steps that are packaged as Target which is a good thing because they have Requirements, Dependencies and something to execute. Now I have a small library of building blocks (Targets) like Test, Compile, Pack, Publish, Install (you probably can guess what they do and that they somewhat depend on each other). Often I have to add special code to those Targets which means I want to have a new e.g. Pack target where I might add Dependencies, might add Requirements and want to add Code (pre-processing / post-processing). Currently I try to overridethe old Target (inheritance) but this does not work well. What I'm looking for is a clean way to compose targets (add things to targets). Obviously one (hard) part is that I want to stay with the original target name. If this cannot be solved, I could live with workaround of giving new names. But I want to re-use existing target definitions in a way that I can add code that get's execute after and BEFORE the old target code (somewhat aspect orientated programming). Any suggestions? public override Target Publish => _ => _ .DependsOn(base.Publish.Dependencies) // .DependsOn(... other targets MY code depends on...) .Requires(base.Publish.Requirements) // .Requires(... requirements for MY code ...) .Executes(() => { // MY preprocessing // ... base.Publish.Execute(); // MY postprocessing // ... }); public override Target Publish => base.Publish // .AddRequirement(... requirements for MY code ...) // .AddDependency(... other targets MY code depends on ...) .AddExecutionPreProcessing(() => { // MY preprocessing // ... }) .AddExecutionPostProcessing(() => { // MY postprocessing // ... }); Hello Stefan. One thing I did in my Build Scripts is to separate Target Definition and Target Implementation. public Target Publish => _ => _ .Executes(() => PublishAction()); public virtual void PublishAction() { // Implement Publish Target } That way you can override the PublishAction without changing the Target Definition. And you can add additional Targets, that are triggered by existing Targets by using TriggeredBy. helm3has some changes and I now need to add a -n <namespace>param to the HelmGetValuestasks IBuildExtensioninstances to be skipped if no targets were started EmbeddedPackagesDirectoryfor global tools PackPackageToolsTaskto use lower-case package ids ParameterAttribute.ValueProviderto allow members of type IEnumerable<string> Loggerto remove ControlFlowfrom stacktrace build.cmd GitVersion.Toolversion in project templates LatestMyGetVersionAttributeto handle new RSS feed format PublishReadyToRun, PublishSingleFile, PublishTrimmed, PublishProfile, NoLogofor DotNetPublish Verbosityin DotNetPack lcovin CoverletTasks ReSharperTasksto use correct tool path ChangelogTasksto respect additional markdown-linting rules
https://gitter.im/nuke-build/nuke
CC-MAIN-2021-17
en
refinedweb
I'm playing around with asyncio #!/usr/bin/env python3 import asyncio import string async def print_num(): for x in range(0, 10): print('Number: {}'.format(x)) await asyncio.sleep(1) print('print_num is finished!') async def print_alp(): my_list = string.ascii_uppercase for x in my_list: print('Letter: {}'.format(x)) await asyncio.sleep(1) print('print_alp is finished!') async def msg(my_msg): print(my_msg) await asyncio.sleep(1) async def main(): await msg('Hello World!') await print_alp() await msg('Hello Again!') await print_num() if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() Hello print_alp is finished! Hello Again! Number: 0 Number: 1 Number: 2 Number: 3 Number: 4 Number: 5 Number: 6 Number: 7 Number: 8 Number: 9 print_num is finished! You are calling the functions sequentially, so the code also executes sequentially. Remember that await this means "do this and wait for it to return" (but in the meantime, if this chooses to suspend execution, other tasks which have already started elsewhere may run). If you want to run the tasks asynchronously, you need to: async def main(): await msg('Hello World!') task1 = asyncio.ensure_future(print_alp()) task2 = asyncio.ensure_future(print_num()) await asyncio.gather(task1, task2) await msg('Hello Again!') See also the documentation of the asyncio.gather function. Alternatively, you could also use asyncio.wait.
https://codedump.io/share/2XF2YHVoudCC/1/python--asyncio-doesn39t-execute-the-tasks-asynchronously
CC-MAIN-2017-30
en
refinedweb
At 08:12 PM 11/23/2005 +0100, Matthias Urlichs wrote: >Hi, > >Phillip J. Eby: > > I'm thinking that perhaps I should add an option like > > '--single-version-externally-managed' to the install command so that you > > can indicate that you are installing for the sake of an external package > > manager that will manage conflicts and uninstallation needs. This would > > then allow installation using the .egg-info form and no .pth files. > > >You might shorten that option a bit. ;-) I agree that this would be a >good option to have. I try to use very long names for options that can have damaging effects if used indiscriminately. A project that's installed the "old-fashioned way" (which is what this does, apart from adding .egg-info) is hard to uninstall and may overwrite other projects' files. So, it is only safe to use if the files are being managed by some external package manager, and it further only works for a single installed version at a time. So the name is intended to advertise these facts, and to discourage people who are just reading the option list from trying it out to see what it does. :) > > >People will often inspect sys.path to understand where Python > > >is looking for their code. > > > > As I pointed out, eggs give you much better information on this. > >The .egg metadata does. That, as you say, is distinct from the idea of >packaging the .egg as a zip file. Most likely, one that includes .pyc >files which were byte-compiled with different file paths; That causes no >problems whatsoever ... until you get obscure ideas like trying to step >through the code with pdb, or opening it in your editor to insert an >assertion or a printf, trying to figure out why your code breaks. :-/ This is actually what the .egg-info mode was designed for. That is, doing development of the project. A setuptools-based project can run "setup.py develop" to add the project's source directory to sys.path, after generating an .egg-info directory in the project source if necessary. This allows you to do all your development right in your source checkout, and of course all the file paths are just fine, and the egg metadata is available at runtime. You can then deploy the project as an .egg file or directory. (Also, for the .egg directory format, note that easy_install recompiles the .pyc/.pyo files so their paths *do* point to the .egg contents instead of the original build paths. The issues with zipfiles and precompiled .pyc files are orthogonal to anything about setuptools, eggs, etc.; they will bite you in today's Python no matter what's in the zipfile or who precompiled the .pyc files. I do have some ideas for fixing both of these problems in future versions of Python, but they're rather off-topic for all the lists we are currently talking on.) >That's not exactly negotiable. Debian has a packaging format which >resolves generic installation dependencies on its own. Therefore it >cannot depend on Python-specific .egg metadata. Therefore we need a way >to translate .egg metadata to Debian metadata. Yes, that's precisely what I was suggesting would be helpful. As Vincenzo already mentioned, the egg metadata is a good starting point for defining the Debian metadata. I'm obviously not proposing changing Debian's metadata system. Well, maybe it wasn't *obvious* that I wasn't proposing that, but in any case I'm not. :) > > I remain concerned about how such packages will work with namespace > > packages, since namespace packages mean that two different distributions > > may be supplying the same __init__.py files, and some package managers may > > not be able to deal with two system packages (e.g. Debian packages, RPMs, > > etc.) supplying the same file, even if it has identical contents in each > > system package. > > >Debian packaging has a method to explicitly rename a different package's >file if it conflicts with yours ("dpkg-divert"; it does _not_ depend on >which package gets installed first). IMHO that's actually superior >randomly executing only one of these files, since you are aware that >there is a conflict (the second package simply doesn't install if you >don't fix it), and thus can handle it intelligently. The two kinds of possible conflicts are namespace packages, and project-level resources. A namespace package is more like a Java package than a traditional Python package. A Java package can be split across multiple directories or jar files; it doesn't have to be all in one place. Thus you can have lots of jars with org.apache.* classes in them. Python, however, requires packages to have an __init__.py file, and by default the entire package is assumed to be in the directory containing the __init__.py file. However, as of Python 2.3, the 'pkgutil' module was introduced in the Python standard library which allowed you to create a Java-style "namespace package", automatically combining package directories found on different parts of sys.path. So, if in one sys.path directory you had a 'zope.interface' package, and in another you had a 'zope.publisher' package, these would be combined, instead of the first one being treated as if it were all of 'zope.*', and the second being completely ignored. However, *each* of the subpackages needs its own zope/__init__.py file for this to work. So, the issue here is that if you install two projects that contain zope.* packages into the *same* directory (e.g. site-packages), then there will be two different zope/__init__.py files installed at the same location, even though they will have the same content (a short snippet of code to activate the namespace mechanism via the pkgutil module or via setuptools' pkg_resources module). To date, there are only a small number of these namespace packages in existence, but over time they will represent a fairly large number of *projects*. As I go through the breakup of the PEAK meta-project into separate components, I expect to have a dozen or so projects contributing to the peak.* and peak.util.* namespace packages. Ian Bicking's Paste meta-project has a paste.* namespace package spread out in two or three subprojects so far. There has been some off-and-on discussion about whether Zope 3 will move to eggs instead of their own zpkg tool (which has issues on Windows and Mac OS that eggs do not), and in that case they will likely have a couple dozen components in zope.* and zope.app.*. So, for the long-term solution of wrapping Python projects in Debian packages, the namespace issue needs to be addressed, because renaming each project's zope/__init__.py or whatever isn't going to work very well. There has to be one __init__.py file, or else such projects need to be installed in their own .egg directories or zipfiles to avoid collisions. The second collision issue with --single-version-externally-managed is top-level resource collisions. Some existing projects that are not egg-based manipulate their install_data operation in such a way that they create files or directories in site-packages directly, rather than inside their own package data structures. Setuptools neither encourages nor discourages this, because it doesn't cause any problems for any egg layout except the .egg-info one -- and the .egg-info one was originally designed to support development, not deployment. In the development scenario, any such files are isolated to the source tree, and for deployment the .egg file or directory keeps each projects' contents completely isolated. So, what I'm saying is that putting all projects in the same directory (as all "traditional" Python installations do) has some inherent limitations with respect to namespace packages and top-level resources, and these limitations are orthogonal to the question of egg metadata. The .egg formats were created to solve these problems (including clean upgrades, multi-version support, and uninstallation in scenarios where a package manager isn't usable), and so the other features that they enable will be increasingly popular as well. In other words, as people make more use of PyPI (because they now really *can*), more people will put things on PyPI, and the probability of package name conflicts will increase more rapidly. The natural response will be a desire to claim uber-project or organizational names (like paste.*, peak.*, zope.*, etc.) putting individual projects under sub-package names. (For example, someone has already argued that I should move RuleDispatch's 'dispatch' package to 'peak.dispatch' rather than keeping the top-level 'dispatch' name all to myself.) So, I'm just saying that using the --single-version-externally-managed approach requires that a package manager like Debian grow a way to handle these namespace packages safely and sanely. One possibility is to create dummy packages that contain only the __init__.py file for that namespace, and then have the real packages all depend on the dummy package, while omitting the __init__.py. So, perhaps each project containing a peak.util.* subpackage would depend on a 'python2.4-peak.util-namespace' package, which in turn would depend on a 'python2.4-peak-namespace' package. It's rather ugly, to say the least, but it would work as long as upstream developers never put anything in namespace __init__.py files except for the pkg_resources.declare_namespace() call. (By the way, since part of an egg's metadata lists what namespace packages the project contains code or data for, the generation of these dependencies can be automated as part of the egg-to-deb conversion process.) Or, of course, the .egg directory approach can also be used to bypass all collision issues, but this brings sys.path and .pth files back into the discussion. On the other hand, it can possibly be assumed that anything in a namespace package can be used only after a require() (either implicit or explicit), so maybe the .pth can be dropped for projects with namespace packages. These are possibilities worth considering, since they avoid the ugliness of creating dummy packages just to hold namespace __init__.py files.
https://mail.python.org/pipermail/distutils-sig/2005-November/005466.html
CC-MAIN-2017-30
en
refinedweb
My data is a csv that looks like: 1 abc 1 def 2 ghi 3 jkl 3 mno 3 pqr abc; def jkl; mno mno; pqr First, your input csv file is not really a csv. It's more a file that can be parsed using str.split. Well. Now, I'll get the tokens and use itertools.groupby using first column as key to group items with same first column. Once you have that, filter out the lists with one 1 item, and apply a combination on the rest. Write as a proper csv file: import csv, itertools with open("test.csv") as f: with open("output.csv","w",newline="") as f2: # with open("output.csv","wb") as f2: # uncomment for python 2 (comment above!) cw = csv.writer(f2,delimiter=";") for l in itertools.groupby((l.split() for l in f),lambda x : x[0]): grouped = [x[1] for x in l[1]] if len(grouped)>1: for c in itertools.combinations(grouped,2): cw.writerow(c) result (corrected, yours is not correct): abc;def jkl;mno jkl;pqr mno;pqr
https://codedump.io/share/IV26FUMg2ahF/1/transforming-a-csv-to-a-list-of-co-occurrence-pairs-in-python
CC-MAIN-2017-30
en
refinedweb
Am 09.09.2011 20:21, schrieb Jacob Holm: >). Of course. I'm not new to PEP 342 :) But I have to apologize: what I did test was the confusingly similar return yield - principal which isn't allowed (yes, I know that even return (yield -principal) isn't allowed, but that's not for syntactical reasons.) Now I checked properly: In fact, "yield" expressions after assignment operators are special-cased by the grammar, so that they don't need to be parenthesized [1]. In all other places, yield expressions must occur in parentheses. For example: myreturn = principal - yield Georg [1] I guess that's because it was thought to be a common case. I agree that it's not really helping readability.
https://mail.python.org/pipermail/python-ideas/2011-September/011446.html
CC-MAIN-2017-30
en
refinedweb
== Now seeing a line of base64 data after the phrase DomainAdminPass is definitely interesting enough to catch my attention, during the assessment I was able to abuse these configuration files to retrieve the plaintext password and fully compromise the client. A little post assessment research shows that the vulnerability I exploited is known about and already has a Metasploit module. However I didn’t have Metasploit available during this assessment, I was engaged in a desktop breakout test – however I was able to retrieve the encrypted key and write a small python script to decrypt, which I’ll share here: import base64 from Crypto.Cipher import DES DomainAdminPass = "l4LCPmUqYdS2mWkeTmHn6w==" cipher_text = base64.b64decode(DomainAdminPass) desa = DES.new('NumaraTI', DES.MODE_CBC, 'NumaraTI') print desa.decrypt(cipher_text) Provided for you here just in case you stumble across on of these configuration files or this tool in the future. Oh and yeah, the key and IV were both the company/product name – nice move guys. Turns out the software has an unauthenticated file upload to remote code execution vulnerability, hard coded credentials for the database and weak cryptography protecting exposed domain administrator credentials, amongst other issues. An easy full domain compromise for the attackers and no support for the defenders. If you have BMC/Numera Track-It! deployed in your enterprise I highly recommend you check out the CERT page and do a full review of the security of the installation. These issues affect a spread of versions of this software and any exposure of the configuration file could lead to a full domain compromise.
https://www.gracefulsecurity.com/bmcnumara-track-it-decrypt-pass-tool/
CC-MAIN-2017-30
en
refinedweb
.security.deploy;18 19 /**20 * @version $Rev: 476049 $ $Date: 2006-11-16 23:35:17 -0500 (Thu, 16 Nov 2006) $21 */22 public class DefaultDomainPrincipal extends DefaultPrincipal {23 private String domain;24 25 public String getDomain() {26 return domain;27 }28 29 public void setDomain(String domain) {30 this.domain = domain;31 }32 }33 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/geronimo/security/deploy/DefaultDomainPrincipal.java.htm
CC-MAIN-2017-30
en
refinedweb
Fouad Bhai Pics please... Just a week to ten days more, Usman. Then, I promise to FLOOD these pages with hundreds of pictures of the painted Phantom, from every possible angle. Ok Waiting sir paint it sooon ,this 700 post walaa is waiting Ali, you have dibs on a drive whenever you like. Paint...or no paint. Please transfer your vision, talent and technology to Peshawar as well. Never knew I would see airbrushing/paint job of this quality locally. Thanks for the kind words, Saad...but to be honest, the entire air-brushing / flame-painting / custom paint design thing (which you've seen on the Impala) is ludicrously simple. All it really takes is some masking tape, some water-soluble markers, and a bit of imagination. Layering on the paint, and rubbing it smooth...is pretty much identical to how you'd paint any car. So really, brother...there's hardly any talent/vision/technique involved! I think the more pertinent thing is that we (as vehicle owners) are unwilling to take "risks" with regards to our rides. We're wary of stepping beyond the circle of conformity, and experimenting with looks...and our technicians KNOW this, and even reinforce this. When I was flame-painting the Impala, I tried (unsuccessfully) to get a good job done with rattle-can spray paints, and decided that I needed some professional equipment. Well, the painter whose services I hired, immediately came out and said "bao ji, je kharab ho gaya, te mein idda zimmewaar koi naien" (if this thing ends up looking f***ed, I'm not to be held responsible). It was only after I assured him that I would do the painting myself, needed him only for his equipment and knowledge of how to use it, and was solely responsible for any inconsistencies, that he gave his assent to even work on the car! I realise that fixing weird-looking, thoroughly botched-up design attempts is a costly and time-consuming process. But at the same time, nothing quite makes your ride so individualistic and unique as a one-off paint job. When we (as consumers) will make repeated demands upon our craftsmen and artisans for something of this sort, they'll learn all of the tricks of the trade and acquire all of the requisite hardware that are needed to provide us with the necessary solutions. Until then, we're going to have to admire (and envy) the custom paint, and flame designs, and airbrush art in magazines, I'm afraid! hmmmmmm def with paint and all gadjets Sir from where you got this idea of making a monster truck? I hope this is not the answer //youtu.be/FWTV: Top Truck Challenge XIV Part 3 - YouTube Sir from where you got this idea of making a monster truck? I hope its not the answer FWTV: Top Truck Challenge XIV Part 3 - YouTube Last week i again saw the vehicle, got few pics and would like to suggest something. When i completed working on my suzuki LJ50 and started using it, i realized i could have done few more modifications against the fitting of those items which are always with the veh. Like the jack pana, pana lever, tool kit box, etc etc. This can only be done before the paint job. Even one can have a place for the jerrycane or any other recovery equipment like air filling plant/ box. For that some holders/ levers and plates can be welded in the below areas. Done! Wow. Those are some monstrous trucks! But they're also highly-specialised with lots of unique, one-off features that we couldn't hope to replicate in Pakistan. Phantom bechara, sirf guzara hee karta hai, in comparison to those beasts! A very pertinent point, indeed. Thanks Noman. These things have been catered for, and are going to be fabricated and affixed before the eventual paint job, Insha Allah! Happy Independence Day, everyone! And finally... Some rough pictures of the Phantom's new coat of dull/matte black astar... Of course, this has been painted on, AFTER the beast got a coat of rust-repellent primer (red oxide), and after all the oxidation on it had been scraped off. kia comment karain ... bus lock ker dain Matt black... looking monster... MashaAllah No idea what to say (Y) zabardasht !!@!@!@ yeh pics 14th aug ka gift hai ham sab k leyeh ? Looking good but i have some reservations On the eve of Eid-ul-Fitr...my very warmest Eid Greetings to everyone. Please pray that Pakistan and all who dwell within Pakistan remain safe and happy and prosperous. Ameen!
https://www.pakwheels.com/forums/t/the-phantom-of-city-sadar-road/164334?page=59
CC-MAIN-2017-30
en
refinedweb
UPDATE: 2015-02-25 Today I nearly crapped a cow. I was testing out a custom ErrorDocument 401 directive that would redirect back to the sign in page (BTW: that's a bad idea, IE & Firefox sign on windows are modal). I clicked OK with empty username and empty password fields, and I got the dreaded Internal Sever Error HTTP/1.1 500 page. Then because the browser had cached the empty creds, I could not get back on the server. Clearing the cache and browser history had no effect. I actually thought I had broken Apache! Stack Exchange ServerFault to the rescue. The fix is to set AuthLDAPBindAuthoritative off in TL;DRThis is surprisingly easy, although there is some new syntax to learn, and you will need to get some info from your system administrator. Here are some steps for Apache-2.4 from ApacheLounge. - Follow the directions in the Django documentation on Authentication using REMOTE_USER and add RemoteUserMiddlewareand RemoteUserBackendto AUTHENTICATION_BACKENDSto your settings file. This will use the REMOTE_USERenvironment variable set by Apache when it authorizes users and use it for authentication on the Django website. - Get the URL or IP address of your Active Directory server from your system administrator. For LDAP with basic authentication, the port is usually 389, but check to make sure. - Also get the "Distringuished Name" of the "search base" from your system administrator. A "Distringuished Name" is LDAP lingo for a string made up of several components, usually the "Organizational Unit (OU)" and the "Domain Components (DC)", that distinguish entries in the Active Directory. - Finally ask your system administrator to set up a "binding" distinguished name and password to authorize searches of the Active Directory. - Then in httpd.confenable mod_authnz_ldapand mod_ldap. - Also in httpd.confadd a Locationfor the URL endpoint, EG: /for the entire website, to be password protected. - You must set AuthName. This will be displayed to the user when they are prompted to enter their credentials. - Also must also set AuthType, AuthBasicProvider, AuthLDAPUrland Require. Prepend ldap://to your AD server name and append the port, base DN, scope, attribute and search filter. The port is separated by a colon (:), the base DN by a slash (/) and the other parameters by question marks (?) such as: ldap://host:port/basedn?attribute?scope?filter - The "attribute" to search for in Windows Active Directory is "SAM-Account-Name" or sAMAccountName. This is the equivalent of a user name. - The default "scope" is subwhich means it will search the base DN and everything below it in the Active Directory. And the default "filter" is (objectClass=*)which is the equivalent of no filter. - There are several options for limiting users and groups. If you set Requireto valid-userthen any user in the AD who can authenticate will be authorized. - Set AuthLDAPBindDNand AuthLDAPBindPasswordto the binding account's DN and password. - It has been reported that LDAPReferralsshould be set to off or you may get the following error. (70023)This function has not been implemented on this platform: AH01277: LDAP: Unable to add rebind cross reference entry. Out of memory? - Finally, restart your Apache httpd server and test out your site. Note: This will change how Django works; for example, any authorized user not in the Django Users model will have their username automatically added and set to active, but their password and the is_staffattribute will not be set. <Location /> AuthName "Please enter your SSO credentials." AuthBasicProvider ldap AuthType basic AuthLDAPUrl "ldap://my.activedirectory.com:389/OU=Offices,DC=activedirectory,DC=com?sAMAccountName" AuthLDAPBindDN "CN=binding_account,OU=Administrators,DC=activedirectory,DC=com" AuthLDAPBindPassword binding_password AuthLDAPBindAuthoritative off LDAPReferrals off Require valid-user </Location> LoggoutIn addition to adding authenticated users to the Django Users model, the users credentials are stored in the browser. This makes logging out akward since the user will need to close their browser to logout. There are several approaches to get Django to logout a user. - redirect the user to a URL with fake basic authentication prepended to the path. - render a template with status set to 401 which is the code for unauthorized that will clear the credentials in browser cache. from django.shortcuts import render from django.contrib.auth import logout as auth_logout import logging # import the logging library logger = logging.getLogger(__name__) # Get an instance of a logger def logout(request): """ Replaces ``django.contrib.auth.views.logout``. """ logger.debug('user %s logging out', request.user.username) auth_logout(request) return render(request, 'index.html', status=401) Using Telnet to ping AD serverA lot of sites suggest this. First you will need to enable Telnet on your Windows PC. This can be done from Uninstall a program in the Control Panel by selecting Turn Windows features on or off and checking Telnet Client. Then opening a command terminal and typing telnetfollowed by open my.activedirectory.com 389. Surprise! If it works you will only see the output: If it does not work then you will see this additional output:If it does not work then you will see this additional output: Connecting to my.activedirectory.com... Now treat yourself and tryNow treat yourself and try Could not open connection to the host, on port 389: Connect failed open towel.blinkenlights.nl. Use control + ] to kill the connection, then type quitto quit telnet. Testing LDAP using Python - Python-LDAPSo to learn more about LDAP there are a couple of packages that you can use to interrogate and authenticate with and AD server using LDAP. Python-LDAP seems to be common and easy to use. It's based on OpenLDAP Here's a list of common LDAP Queries from Google. >>> import ldap >>> server = ldap.initialize('ldap://my.activedirectory.com:389') >>> server.simple_bind('CN=bind_user,OU=Administrators,DC=activedirectory,DC=com','bind_password') # returns 1 on success 1 >>> user = server.search_s('OU=Users,DC=activedirectory,DC=com',ldap.SCOPE_SUBTREE,'(&(sAMAccountName=my_username)(ObjectClass=user))',('cn','sAMAccountName','mail')) >>> user [('CN=My Name,OU=Super-Users,OU=USA,OU=California,OU=Sites,DC=activedirectory,DC=com', {'cn': ['My Name'], 'sAMAccountName': ['my_username'], 'mail': ['my_username@activedirectory.com']})] >> users = server.search_s('OU=Users,DC=activedirectory,DC=com',ldap.SCOPE_SUBTREE,'(&(memberOf=CN=@my_group,OU=Groups,OU=Users,DC=activedirectory,DC=com)(ObjectClass=user))',('cn','sAMAccountName','mail')) [('CN=My Name,OU=Super-Users,OU=USA,OU=California,OU=Sites,DC=activedirectory,DC=com', {'cn': ['My Name'], 'sAMAccountName': ['my_username'], 'mail': ['my_username@activedirectory.com']}), ('CN=Somebody_Else,OU=Super-Users,OU=USA,OU=California,OU=Sites,DC=activedirectory,DC=com', {'cn': ['Their name'], 'sAMAccountName': ['their_username'], 'mail': ['their_username@activedirectory.com']})] Alternatives - SSPI/NTLMIf users will only use the Django application on a Windows PC which they already have been authorized, EG through windows logon, then using either - Apache-2.4 - Apache-2.2 - Django Extensions and Snippets - SAML and OAuthSure you could do this. You can also use SSL with LDAP or Kerebos with SSPI/NTLM. But, alas, I did not research these options althought I did come across a few references. mod_authnz_sspior mod_authnz_ntlmto acquire those credentials from your Windows session is also an option. There are several Django extensions and snippets that use Python-LDAP and override ModelBackend so that Django handles authorization and authentication instead of Apache. Some Django extensions and snippets also exist to subclass ModelBackends to use PyWin32 to use local credentials from the current windows machine for authorization and authentication from within Django. CSS and JSThe references section loosely based on Javascript TOC robot. It could also use the counters and the ::beforestyle pseudo-element, but since I'm using JavaScript it doesn't make sense. But here's what that looked like anyway. Example Example In case it wasn't clear above the JavaScript below is not what I'm using on this page. It was for a different approach using counters which I scratched, so these examples are very contrived and don't really make sense anymore.
http://poquitopicante.blogspot.com/2015/01/
CC-MAIN-2017-30
en
refinedweb
Hibernate and Multi-Threading. Let's look at a very simple domain: clients with payments. Each client has a set of payments @Entity public class Client { ... @OneToMany(cascade=ALL) private Set payments = new HashSet (); public void addPayment(double amount) { payments.add(new Payment(amount)); } public double getTotal() { int total = 0; for (Payment payment : payments) { total += payment.getAmount(); } return total; } } @Entity public class Payment { ... public Payment(double amount) { this.amount = amount; } ... } Now a very simple test is to save create a client with some payments, then reload it and test the total number of the payments again. public void testSimpleClientWithPayment() throws Exception { Client client = new Client("me"); client.addPayment(12.0); client.addPayment(5.0); assertEquals(17.0, client.getTotal()); Serializable id = dao.save(client); dao.flushAndClear(); client = dao.get(id); assertEquals(17.0, client.getTotal()); } Of course this all works like a charm and there are no problems. The project drifts easy along its ever lasting path and you keep on coding. Introduce threads: Lazy Loading problems At some point one of the wise-ass architects comes up with a scheme where reports of the totals of all clients have to be printed on a regular basis in separate threads. The reports are sent by mail or some other lame excuse for introducing this fancy new threading thing. So to keep him happy you create a nice service that can do that. public class ReportingService { private final class Reporter implements Runnable { private final Client client; private Reporter(Client client) { this.client = client; } public void run() { System.out.println(client + ": " + client.getTotal()); } } public void reportTotals() { for (final Client client : dao.findAllClients()) { executorService.submit(new Reporter(client)); } } ... } Of course you test the service with a number of payments, and everything works out fine... until the feature is brought into production and the logs start filling up with these exceptions: 21:21:11,562 ERROR [LazyInitializationException, LazyInitializationException. ] failed to lazily initialize a collection of role: domain.Client.payments, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: domain.Client.payments,.initialize(AbstractPersistentCollection.java:343) at org.hibernate.collection.AbstractPersistentCollection.read(AbstractPersistentCollection.java:86) at org.hibernate.collection.PersistentSet.iterator(PersistentSet.java:163) at domain.Client.getTotal(Client.java:43) at service.ReportingService$Reporter.run(ReportingService.java:18)) But you tested that, right? What could possibly be wrong? Well, the collection of payments from a few of the clients are not yet initialized (=loaded from the database) when the client is handed to the executor for reporting. This is no problem: Hibernate can lazily load the collection whenever this is needed. For that purpose, it replaces the collection with a special implementation, that keeps a reference to the Session that was used to load the containing entity (the client). When we call payments.iterator(), the collection is loaded from the database through the Session. As long as this Session is open, the code will work. Sessions, however, are going to be closed at some point. Using springs transaction management, for example, they are often associated with a transaction and closed after completion of that transaction. Now if the transaction that loads a client with a lot of payments shoots of all these tasks and then completes before the tasks are executed, the session will close and the tasks are left without the option to lazily load the payments. Hence the lazy loading problems. In a test environment it is often hard to simulate this, because there is less data and your testing infrastructure might keep transactions and sessions open until everything is executed. You will mostly get burned in production. So, the solution is simple, right? Just load the payments before you execute the task: ... for (final Client client : dao.findAll()) { // --> Solution: Hibernate.initialize(client.getPayments()); executorService.submit(new Reporter(client)); } ... That seems simple enough. You can also initialize the set by executing a method, for example .size(), or by fetching it eagerly when loading the client. So, no more stacktraces in the production log, you saved the company! Missing Updates Now the business analysts have come up with a new idea: The printing of the totals is too simple, the clients want a record in the database of the total per client every week and they want to know when the last total was generated for each client. So you extend your ReportingService with a new method and a new task: ... public class ReportGenerator implements Runnable { private final Client client; public ReportGenerator(Client client) { this.client = client; } public void run() { dao.save(new Report(client, client.getTotal())); client.setLastCreated(new Date()); } } ... Now if you execute this task in your test environment, you notice something strange: The reports are added to the database, but the "lastCreated" field is not updated. What can be the problem? Hibernate should perform transactional write-behind with dirty checking: When the transaction is completing, it should check that the client object has changed and persist the changed values. This works normally, why is this different? The dirty checking is only performed in the session in which an object was loaded. The client object was loaded in a different session, on which the transaction has already been completed. When the report is created, this is saved in a separate session. The infrastructure of the dao takes care associating the session with the current thread and of instantiating a new session when necessary. Locking with no mode So how do you solve this? To be able to do updates on an object loaded from a different session, you'll have to reconnect it to a new open session. The .lock(.., LockMode.NONE) operation can be used for that. You have to make sure that the session remains open until the updates are done. To ensure this, we use a new transaction for the new report action: ... public void run() { new TransactionTemplate(transactionManager) .execute(new TransactionCallbackWithoutResult() { @Override public void doInTransactionWithoutResult(TransactionStatus status) { dao.lock(client, NONE); dao.save(new Report(client, client.getTotal())); client.setLastCreated(new Date()); } }); } ... Using this solution in combination with the previous solution, however, will result in new exceptions: Caused by: org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions at org.hibernate.collection.AbstractPersistentCollection.setCurrentSession(AbstractPersistentCollection.java:410) at org.hibernate.event.def.OnLockVisitor.processCollection(OnLockVisitor.java:38)) at org.hibernate.event.def.AbstractReassociateEventListener.reassociate(AbstractReassociateEventListener.java:79) at org.hibernate.event.def.DefaultLockEventListener.onLock(DefaultLockEventListener.java:59) at org.hibernate.impl.SessionImpl.fireLock(SessionImpl.java:584) at org.hibernate.impl.SessionImpl.lock(SessionImpl.java:576) ... This is because we are attempting to connect the collection, that we so cunningly initialized before, to the new session as well. This collection however, will attempt to keep a reference to one open session, but not two. On the bright side, we do not have to initialize the session anymore if we connect the client to a new session: lazy loading will work again. In some situations however, it is very hard to determine which objects are already initialized and which aren't and when to connect which objects to the new session and so on... Using the lock operation is often a tedious job! Just pass the ID... silly! After battling the problem for a while, you give up and settle for a much easier scheme: You store the object in the database and pass the ID-value to the worker thread. That thread can now reload the data from the database and safely work on it. Conclusion Passing Hibernate managed objects (like collections or proxies) to other threads might result in some hard to trace problems, like: - Lazy Initialization problems - Missing Updates - Locking exceptions It is therefore often better not to pass Hibernate managed objects to other threads, but to save the objects and reload them by ID from the database in the other tread. Except for Lazy initialization problems I don't see why hibernate managed objects in particular has multithreading issues (note that this is problem with the way you handle transactions). Even if you wrote pure SQL, you will have these legitimate concurrency issues. As developers we are expected to be aware of such problems. It is hilarious to see 'updates' for reporting, go figure. While these problems are ugly enough to be center of a blog article, they have nothing to do with multithreading. They all can be reproduced within a single thread. (I know because I met them all in person in a swing project which just used the EDT 😉 What makes hibernate especially ugly with respect to multi threading is that the Hibernate managed objects (hibernate implementations of collections and proxies) keep a reference to the session in which they were loaded. They'll try to do any updates through that session. If you pass the object to another thread, but do transaction handling on a per thread basis (which is the defacto standard with Hibernate) you'll run into a lot of problems. You can run into the same kind of problems in a single thread if you don't handle your transactions correctly, but with multi threading they are all the more likely, even in otherwise 'simple' scenarios. Regards. Hi Maarten, nice to see some stuff about concurrency control and databases. Using the id is a very good solution imho because it provides very clear transactional behavior. In some cases you need to make sure that the entity which id is passed, hasn't changed in the mean while. In those cases you could introduce a versionedid (an id in combination with the version). This uniquely identifies an object over time. A small remark about this example client: I would not model the Payments as part of the client (not all relations have to be modelled as a java reference) but use a dao/repository call instead. A client can live without its payments. This simplifies the domain as well imho. Very interesting topic. But what about the "main" thread? Imagine you do some modification to an entity in the worker thread which you would like to see reflected in the main thread (for example a list that threads the modification on its items). How to refresh the entity loaded in the original session? Reload? Refresh? Merge? All these throw different exceptions... Seems that all the time you save with basic CRUD operation is payed back by these kind of tricky situations... Wonderful, I really needed these tips. Thank you. Hello, absolutely greate and helpfull article!!!! I just ran into these problems myself and it was very interesting helpfull to read your article. Thanks a million times, now I know how to solve the problem. thanks jens Hi It is very good .How better way we can use threadpoolexecutor for this hibernate intialize process Finding this post has anreeswd my prayers Hi, I have the similar problem in my project. We have one jms listener to listen one queue. when request comes to that queue, onMessage() will take that request and delegate that request to ExecutorService(java 1.6) which in turn call one controller to execute that task(ExampleCotroller). in side that controller i am loading parent object(which has some child entries). while getting the child entries we are getting lazy initialization exception, saying session closed or no session. your help is appreciated here.... Thanks, Ramki. Just wanted to say thank you very much. We had a similar problem and solved it by using the ID. Dave [...] of all the problems we can meet when trying to use Hibernate in a multithreaded application (1st clue, 2nd clue, 3rd clue, etc.), I was thinking of another solution: implementing the logical part [...] Thanks so much for this article. It helps me to implement a fix of a stressed situation. Yes Hibernate has some internal voice that cannot be understood by human people 🙂 Maarten, you have saved my life. Thank you. Thanks so much for this article ! Very usefull
http://blog.xebia.com/hibernate-and-multi-threading/
CC-MAIN-2017-30
en
refinedweb
Markov chain text generator This task is about coding a Text Generator using Markov Chain algorithm. A Markov chain algorithm basically determines the next most probable suffix word for a given prefix. To do this, a Markov chain program typically breaks an input text (training text) into a series of words, then by sliding along them in some fixed sized window, storing the first N words as a prefix and then the N + 1 word as a member of a set to choose from randomly for the suffix. As an example, take this text with N = 2: now he is gone she said he is gone for good this would build the following table: PREFIX SUFFIX now he is he is gone, gone is gone she, for gone she said she said he said he is gone for good for good (empty) if we get at this point, the program stops generating text To generate the final text choose a random PREFIX, if it has more than one SUFFIX, get one at random, create the new PREFIX and repeat until you have completed the text. Following our simple example, N = 2, 8 words: random prefix: gone she suffix: said new prefix: she + said new suffix: he new prefix: said + he new suffix: is ... and so on gone she said he is gone she said The bigger the training text, the better the results. You can try this text here: alice_oz.txt Create a program that is able to handle keys of any size (I guess keys smaller than 2 words would be pretty random text but...) and create output text also in any length. Probably you want to call your program passing those numbers as parameters. Something like: markov( "text.txt", 3, 300 ) C++[edit] In this implementation there is no repeated suffixes! #include <ctime> #include <iostream> #include <algorithm> #include <fstream> #include <string> #include <vector> #include <map> class markov { public: void create( std::string& file, int keyLen, int words ) { std::ifstream f( file.c_str(), std::ios_base::in ); fileBuffer = std::string( ( std::istreambuf_iterator<char>( f ) ), std::istreambuf_iterator<char>() ); f.close(); if( fileBuffer.length() < 1 ) return; createDictionary( keyLen ); createText( words - keyLen ); } private: void createText( int w ) { std::string key, first, second; size_t next, pos; std::map<std::string, std::vector<std::string> >::iterator it = dictionary.begin(); std::advance( it, rand() % dictionary.size() ); key = ( *it ).first; std::cout << key; while( true ) { std::vector<std::string> d = dictionary[key]; if( d.size() < 1 ) break; second = d[rand() % d.size()]; if( second.length() < 1 ) break; std::cout << " " << second; if( --w < 0 ) break; next = key.find_first_of( 32, 0 ); first = key.substr( next + 1 ); key = first + " " + second; } std::cout << "\n"; } void createDictionary( int kl ) { std::string w1, key; size_t wc = 0, pos, textPos, next; next = fileBuffer.find_first_not_of( 32, 0 ); if( next == -1 ) return; while( wc < kl ) { pos = fileBuffer.find_first_of( ' ', next ); w1 = fileBuffer.substr( next, pos - next ); key += w1 + " "; next = fileBuffer.find_first_not_of( 32, pos + 1 ); if( next == -1 ) return; wc++; } key = key.substr( 0, key.size() - 1 ); while( true ) { next = fileBuffer.find_first_not_of( 32, pos + 1 ); if( next == -1 ) return; pos = fileBuffer.find_first_of( 32, next ); w1 = fileBuffer.substr( next, pos - next ); if( w1.size() < 1 ) break; if( std::find( dictionary[key].begin(), dictionary[key].end(), w1 ) == dictionary[key].end() ) dictionary[key].push_back( w1 ); key = key.substr( key.find_first_of( 32 ) + 1 ) + " " + w1; } } std::string fileBuffer; std::map<std::string, std::vector<std::string> > dictionary; }; int main( int argc, char* argv[] ) { srand( unsigned( time( 0 ) ) ); markov m; m.create( std::string( "alice_oz.txt" ), 3, 200 ); return 0; } - Output: March Hare had just upset the milk-jug into his plate. Alice did not dare to disobey, though she felt sure it would all come wrong, and she went on. Her listeners were perfectly quiet till she got to the part about her repeating 'You are old, Father William,' said the Caterpillar. 'Well, I've tried to say slowly after it: 'I never was so small as this before, never! And I declare it's too bad, that it is!' As she said this she looked down into its face in some alarm. This time there were three gardeners at it, busily painting them red. Alice thought this a very difficult game indeed. The players all played at once without waiting for the end of me. But the tinsmith happened to come along, and he made me a body of tin, fastening my tin arms and J[edit] This seems to be reasonably close to the specification: require'web/gethttp' setstats=:dyad define 'plen slen limit'=: x txt=. gethttp y letters=. (tolower~:toupper)txt NB. apostrophes have letters on both sides apostrophes=. (_1 |.!.0 letters)*(1 |.!.0 letters)*''''=txt parsed=. <;._1 ' ',deb ' ' (I.-.letters+apostrophes)} tolower txt words=: ~.parsed corpus=: words i.parsed prefixes=: ~.plen]\corpus suffixes=: ~.slen]\corpus ngrams=. (plen+slen)]\corpus pairs=. (prefixes i. plen{."1 ngrams),. suffixes i. plen}."1 ngrams stats=: (#/.~pairs) (<"1~.pairs)} (prefixes ,&# suffixes)$0 weights=: +/\"1 stats totals=: (+/"1 stats),0 i.0 0 ) genphrase=:3 :0 pren=. #prefixes sufn=. #suffixes phrase=. (?pren) { prefixes while. limit > #phrase do. p=. prefixes i. (-plen) {. phrase t=. p { totals if. 0=t do. break.end. NB. no valid matching suffix s=. (p { weights) I. ?t phrase=. phrase, s { suffixes end. ;:inv phrase { words ) - Output: 2 1 50 setstats '' genphrase'' got in as alice alice genphrase'' perhaps even alice genphrase'' pretty milkmaid alice And, using 8 word suffixes (but limiting results to a bit over 50 words): - Output: 2 8 50 setstats '' genphrase'' added it alice was beginning to get very tired of this i vote the young lady tells us alice was beginning to get very tired of being such a tiny little thing it did not take her long to find the one paved with yellow bricks within a short time genphrase'' the raft through the water they got along quite well alice was beginning to get very tired of this i vote the young lady tells us alice was beginning to get very tired of being all alone here as she said this last word two or three times over to genphrase'' gown that alice was beginning to get very tired of sitting by her sister on the bank and alice was beginning to get very tired of being such a tiny little thing it did so indeed and much sooner than she had accidentally upset the week before oh i beg (see talk page for discussion of odd line wrapping with some versions of Safari) Lua[edit] Not sure whether this is correct, but I am sure it is quite inefficient. Also not written very nicely. Computes keys of all lengths <= N. During text generation, if a key does not exist in the dictionary, the first (least recent) word is removed, until a key is found (if no key at all is found, the program terminates). local function pick(t) local i = math.ceil(math.random() * #t) return t[i] end local n_prevs = tonumber(arg[1]) or 2 local n_words = tonumber(arg[2]) or 8 local dict, wordset = {}, {} local prevs, pidx = {}, 1 local function add(word) -- add new word to dictionary local prev = '' local i, len = pidx, #prevs for _ = 1, len do i = i - 1 if i == 0 then i = len end if prev ~= '' then prev = ' ' .. prev end prev = prevs[i] .. prev local t = dict[prev] if not t then t = {} dict[prev] = t end t[#t+1] = word end end for line in io.lines() do for word in line:gmatch("%S+") do wordset[word] = true add(word) prevs[pidx] = word pidx = pidx + 1; if pidx > n_prevs then pidx = 1 end end end add('') local wordlist = {} for word in pairs(wordset) do wordlist[#wordlist+1] = word end wordset = nil math.randomseed(os.time()) math.randomseed(os.time() * math.random()) local word = pick(wordlist) local prevs, cnt = '', 0 --[[ print the dictionary for prevs, nexts in pairs(dict) do io.write(prevs, ': ') for _,word in ipairs(nexts) do io.write(word, ' ') end io.write('\n') end ]] for i = 1, n_words do io.write(word, ' ') if cnt < n_prevs then cnt = cnt + 1 else local i = prevs:find(' ') if i then prevs = prevs:sub(i+1) end end if prevs ~= '' then prevs = prevs .. ' ' end prevs = prevs .. word local cprevs = ' ' .. prevs local nxt_words repeat local i = cprevs:find(' ') if not i then break end cprevs = cprevs:sub(i+1) if DBG then io.write('\x1b[2m', cprevs, '\x1b[m ') end nxt_words = dict[cprevs] until nxt_words if not nxt_words then break end word = pick(nxt_words) end io.write('\n') - Output: > ./markov.lua <alice_oz.txt 3 200 hugged the soft, stuffed body of the Scarecrow in her arms instead of kissing his painted face, and found she was crying herself at this sorrowful parting from her loving comrades. Glinda the Good stepped down from her ruby throne to give the prizes?' quite a chorus of voices asked. 'Why, she, of course,' said the Dodo, pointing to Alice with one finger; and the whole party look so grave and anxious.) Alice could think of nothing else to do, and perhaps after all it might tell her something worth hearing. For some minutes it puffed away without speaking, but at last it sat down a good way off, panting, with its tongue hanging out of its mouth again, and said, 'So you think you're changed, do you?' low voice, 'Why the fact is, you see, Miss, we're doing our best, afore she comes, to-' At this moment Five, who had been greatly interested in Perl 6[edit] unit sub MAIN ( :$text=$*IN, :$n=2, :$words=100, ); sub add-to-dict ( $text, :$n=2, ) { my @words = $text.words; my @prefix = @words.rotor: $n => -$n+1; (%).push: @prefix Z=> @words[$n .. *] } my %dict = add-to-dict $text, :$n; my @start-words = %dict.keys.pick.words; my @generated-text = lazy |@start-words, { %dict{ "@_[ *-$n .. * ]" }.pick } ...^ !*.defined; put @generated-text.head: $words; >perl6 markov.p6 <alice_oz.txt --n=3 --words=200 Scarecrow. He can't hurt the straw. Do let me carry that basket for you. I shall not mind it, for I can't get tired. I'll tell you what I think, said the little man. Give me two or three pairs of tiny white kid gloves: she took up the fan and gloves, and, as the Lory positively refused to tell its age, there was no use in saying anything more till the Pigeon had finished. 'As if it wasn't trouble enough hatching the eggs,' said the Pigeon; 'but I must be very careful. When Oz gives me a heart of course I needn't mind so much. They were obliged to camp out that night under a large tree in the wood,' continued the Pigeon, raising its voice to a whisper. He is more powerful than they themselves, they would surely have destroyed me. As it was, I lived in deadly fear of them for many years; so you can see for yourself. Indeed, a jolly little clown came walking toward them, and Dorothy could see that in spite of all her coaxing. Hardly knowing what she did, she picked up a little bit of stick, and held it out to Phix[edit] This was fun! (easy, but fun) integer fn = open("alice_oz.txt","rb") string text = get_text(fn) close(fn) sequence words = split(text) function markov(integer n, m) integer dict = new_dict(), ki sequence key, data, res string suffix for i=1 to length(words)-n do key = words[i..i+n-1] suffix = words[i+n] ki = getd_index(key,dict) if ki=0 then data = {} else data = getd_by_index(ki,dict) end if setd(key,append(data,suffix),dict) end for integer start = rand(length(words)-n) key = words[start..start+n-1] res = key for i=1 to m do ki = getd_index(key,dict) if ki=0 then exit end if data = getd_by_index(ki,dict) suffix = data[rand(length(data))] res = append(res,suffix) key = append(key[2..$],suffix) end for return join(res) end function ?markov(2,100) - Output: from the alice_oz.txt file: "serve me a heart, said the Gryphon. \'Then, you know,\' Alice gently remarked; \'they\'d have been ill.\' \'So they were,\' said the Lion. One would almost suspect you had been running too long. They found the way to send me back to the imprisoned Lion; but every day she came upon a green velvet counterpane. There was a long sleep you\'ve had!\' \'Oh, I\'ve had such a capital one for catching mice-oh, I beg your pardon!\' cried Alice hastily, afraid that she was shrinking rapidly; so she felt lonely among all these strange people. Her tears seemed to Alice a good dinner."
http://rosettacode.org/wiki/Markov_chain_text_generator
CC-MAIN-2017-30
en
refinedweb
Create an SAPUI5 Application for SAP Variant Configuration and Pricing You will learn - How to use SAP API Business Hub’s productivity tools for developers (like sandbox environment and code snippet generator) to easily test cloud services - How to use SAP Cloud Platform’s trial environment and SAP Web IDE to build a small SAPUI5 application - How to orchestrate and use the different APIs of the Variant Configuration and Pricing services Create a free trial account on the SAP Cloud Platform to be able to use the Web IDE. Go to SAP Cloud Platform and click on Start your free trial. Fill the registration form by providing you name, email and a password. Once your account is created, log in and launch SAP Web IDE. Direct link: () In the Web IDE, create a new application from the template by selecting File -> New -> Project from Template. Select the SAPUI5 Application template, then click Next. Provide a descriptive project name and namespace, then click Next. You may rename the initial view, then click Finish. The application is now created. The configuration will be created using the API during the initialization of the form. Add an empty onInit function to the controller, / ProductConfigurationAPITutorial -> webapp -> controller -> Main.controller.js, in which the API will be called. You will call the following cloud service APIs: GET /api/v2/knowledgebases/{kbId}to read static master data for display (descriptions of a characteristic and its values in our example). PATCH /api/v2/configurations/{configurationId}/items/{itemId}/characteristics/{characteristicId}to change a characteristic value. GET /api/v2/configurations/{configurationId}to read and displayed the changed configuration results. POST /api/v1/statelesspricingto get the price based on the chosen characteristic value. In the API Business Hub search for SAP Variant Configuration, find the SAP Variant Configuration and Pricing API Package and select it. Once on the API package page, choose Variant Configuration service. On the API reference, find the Click on the JavaScript tab and then click on the Copy and Close buttons to copy the code to your clipboard. An APIKeyis used as an authentication method. Each time an API is called, the APIKeyneeds to be sent in the http request header. Make sure you are logged in SAP API Business Hub when copying the code to your clipboard so that your APIKeyis automatically added in the generated code. If you need to get your APIKey, you can use the Show API Key button on the same page. Back in Web IDE, add the copied code from the API Business Hub to your onInit function. For the data variable use the example input JSON found for the service on API Hub. var data = JSON.stringify({ "productKey": "CPS_BURGER", "date": "2018-08-09", "context": [{ "name": "VBAP-VRKME", "value": "EA" }] });() ESLINT errors caused by console statement or hard-coded URL can be switched off in project settings or by inserting Java /* eslint-disable */in the first line. Run your application. You should see a blank application with a title. The pre-generated code puts the results of the API in the browser console. To find the result, open your browser’s developer tools and go to the Console tab. Which field in the response body of service endpoint /api/v2/configurations returns the unique identifier of the configuration? This identifier must be provided as input field configurationId to the subsequent calls to the other endpoints, e.g. to change a characteristic value. The result from the API consists of the configuration, characteristics, and characteristic values. Add a ComboBox to display characteristic CPS_OPTION_M value. Open your main view and add a ComboBox. The ComboBox items and selected item will be set from the result of the API call. Ensure that core namespace is declared via xmlns:core="sap.ui.core" Back in the controller file, you will need to define the model used in the view. You need to add the JSONModel library to your controller. In the define at the top of the controller, add the JSONModel library by adding sap/ui/model/json/JSONModel and defining the JSONModel in the controller function. sap.ui.define([ "sap/ui/core/mvc/Controller", "sap/ui/model/json/JSONModel" ], function (Controller, JSONModel) { In the onInit function, you need to save the current version so that you can access the view associated with the controller in the API call response. Create a new variable called self and set it to this. var self = this; Additionally, you need to create a new JSONModel to house the results of the API call. Bind a new empty JSONModel to the view. this.getView().setModel(new JSONModel({})); To actually bind the result to the model, you need to parse the API response in the xhr.addEventListener function. The result from the API comes back as text, so you need to parse it to JSON. var jsonResults = JSON.parse(this.responseText); Then, you can set the relevant properties into the model. The ComboBox needs CPS_OPTION_M possible values and its initial value. var CPS_OPTION_M = jsonResults.rootItem.characteristics.find(function (i) { return i.id === "CPS_OPTION_M"; }); self.getView().getModel().setProperty("/possible_values", CPS_OPTION_M.possibleValues); if (CPS_OPTION_M.values.length > 0) { self.getView().getModel().setProperty("/value", CPS_OPTION_M.values[0].value); } else { self.getView().getModel().setProperty("/value", ""); } The used find() statement is not supported by Internet Explorer 11. Save your changes. If you execute the application, you will see a ComboBox filled with CPS_OPTION_M characteristic possible values, having the selected value be the default value for this characteristic. Next, update the configuration if the user changes the value of the ComboBox. First, you need to declare a new event on the ComboBox control in the view. Back to the main controller, create an empty onChange function. onChange: function (oEvent) { } As in step 3, head over to SAP API Business Hub, locate the PATCH /api/v2/configurations/{configurationId}/items/{itemId}/characteristics/{characteristicId} method and copy the JavaScript code. Add the copied Javascript code from API Business Hub to the newly created onChange function. Change the data variable declaration to assign the value from the value property of the view model, which is bound to the ComboBox value. var data = JSON.stringify({ "values": [{ "value": this.getView().getModel().getProperty("/value"), "selected": true }] }); Since this API PATCH method does not return a response body, in the xhr.addEventListener call of the onChange function, you may change the console log so that the response code is logged instead of the response text. console.log(this.status); To fill out all parameters for this API method, you need to add a few fields in the view model, namely configuration id and item id. Add these new properties on the model in the xhr.addEventListener call of the onInit function so that the model is filled when the configuration is loaded. self.getView().getModel().setProperty("/config_id", jsonResults.id); self.getView().getModel().setProperty("/item_id", jsonResults.rootItem.id); Once they are added in the model, replace hard-coded {configurationID} and {itemID} in the generated url in the onChange function by the values in the model. Likewise, replace hard-coded {characteristidID} by CPS_OPTION_M. xhr.open("PATCH", "" + this.getView().getModel().getProperty("/config_id") + "/items/" + this.getView().getModel().getProperty("/item_id") + "/characteristics/ CPS_OPTION_M" ); Almost done! The variant configuration API uses HTTP header fields etag and If-Match as an optimistic lock. You need to capture the etag header in the model from the HTTP response when loading the configuration and send back that value in the If-Match HTTP header when updating the configuration. In the xhr.addEventListener call of the onInit function, set the etag property of the model with the etag value of the response header. self.getView().getModel().setProperty("/etag", this.getResponseHeader("etag")); In the same way, you need to capture the etag value of the characteristic change response in case the user wants to update the value multiple times. Add the same line in the xhr.addEventListener call of the onChange function. Back in the onChange function, fill the If-Match request header value with the etag value of the model. xhr.setRequestHeader("If-Match", this.getView().getModel().getProperty("/etag")); Do not forget variable selfin this and the coming new functions. Run your application. You should see a ComboBox filled with the possible values of characteristic CPS_OPTION_M, having the selected value be the default value for this characteristic. If you change the value of the ComboBox, the call is made to the API to change the value, and you can see the response code in the JavaScript console, which should be 200. By how much is the eTag value in the service response header increased with each change to the characteristic CPS_OPTION_M? Check the development tools of your browser. Currently, the value keys are displayed in the interface. In a real-world scenario, you might want to provide the value descriptions to the user and add a label to the combo box. This can be achieved by getting the knowledge base details. First, create a new method readKb in the controller that takes a knowledge base id as parameter. When creating a configuration (Step 4), the knowledge base id is returned from the API. To get the KB details, another API needs to be called. Go to the API Business Hub (as in step 3), locate the GET/api/v2/knowledgebases/{kbId} method, copy the JavaScript code then paste it in the readKb function. Modify the URL of the request to include the knowledge base ID function parameter. xhr.open("GET", "" + kbId + "?$select=products,classes,characteristics,characteristicSpecifics,bomItems,description"); Now you need to bind the possible values to the model by parsing the response text to JSON then retrieving the possible_values property of CPS_OPTION_M. Also, add the characteristic name to a new model property /name. var jsonResults = JSON.parse(this.responseText); var CPS_OPTION_M = jsonResults.characteristics.find(function (i) { return i.id === "CPS_OPTION_M"; }); self.getView().getModel().setProperty("/possible_values", CPS_OPTION_M.possibleValues); self.getView().getModel().setProperty("/name", CPS_OPTION_M.name); Remember to remove the assignment of possible_values to the model in the addEventListener function of the onInit function. Now, you need to call the new function readKb when the configuration is created with the knowledge base id, at the addEventListener of the onInit function. In the view, add a SimpleForm which contains the previous ComboBox and a new Label. Set the label text to the new model property /name. Finally, adjust the ComboBox control in the view so that elements id and name of possible_values are used as key and text respectively. <sap.ui.layout.form:SimpleForm xmlns:sap.ui.layout. <sap.ui.layout.form:content> <Label text="{/name}" id="label"/> <ComboBox items="{/possible_values}" selectedKey="{/value}" selectionChange="onChange"> <core:Item </ComboBox> </sap.ui.layout.form:content> </sap.ui.layout.form:SimpleForm> Run your application. The value descriptions are provided by the ComboBox, and the label text comes from the knowledge base. In addition to the configuration, SAP Variant Configuration and Pricing also provides a way to calculate the pricing. In the Burger model, the price is influenced by the menu option. By changing the menu option values, we should see a difference in the calculated price. For the pricing API to correctly reflect the status of the configuration, the variant condition characteristic values needs to be provided. Which means that the configuration needs to be read after updating it in the app. Locate the GET /api/v2/configurations/{configurationId} in the SAP API Business Hub, copy the JavaScript code and paste it in a new method _getConfig. Modify the URL of the request to include the configuration ID from the model. xhr.open("GET", "" + this.getView().getModel().getProperty("/config_id")); In the xhr.addEventListener function, assign the configuration JSON in a new attribute self._config. Call this new method from the xhr.addEventListener function of the onChange method, so that the configuration is read again as soon as it is modified. Next, add new read-only fields that will be used to show the base price and selected options as well as a button to calculate the price. Do not forget to update your i18n file with the new labels! basePrice=Base price selectedOptions=Selected options price=Price getPrice=Get pricing Next, go to the SAP API Business Hub, locate the /api/v1/statelesspricing method in the Pricing service, copy the JavaScript code and paste it in a new method onPrice. There are a lot of parameters in this API that you would fill out in a real-world application. But for this tutorial, hard code all values except for the KOMP-VARCOND attribute of the item 000010. Get the variant condition values from the configuration and assign them to the KOMP-VARCOND attribute in the request data. var varCond = this._config.rootItem.characteristics.find(function (i) { return i.id === "CPS_VARCOND"; }); var varCondValues = []; for (var i = 0; i < varCond.values.length; i++) { varCondValues.push(varCond.values[i].value); } In the xhr.addEventListener function of the onPrice function, get the net value as well as the value of the condition purposes ZSS1 and ZSS2, which in this test model means Base Price and Selected Options, and assign them to the JSON model. var jsonResults = JSON.parse(this.responseText); self.getView().getModel().setProperty("/price", jsonResults.netValue); self.getView().getModel().setProperty("/base_price", jsonResults.conditionsWithPurpose.find(function (i) { return i.purpose === "ZSS1"; }).value); self.getView().getModel().setProperty("/selected_options", jsonResults.conditionsWithPurpose.find(function (i) { return i.purpose === "ZSS2"; }).value); … … Run your application. The selected options and price values will change depending on the menu option you choose. Congratulations! You have successfully completed the tutorial. We hope that you find it useful and it helps you to start discovering our services. In the example above, possible values of the characteristic are read at the end only from the knowledge base. List of possible values can change during runtime, therefore in a real world example possible values from configuration results must be considered. In the example above, the sandbox environment of the API Hub is used with an API key when calling the services. In the productive environment, OAuth authentication with client credentials would be used. In the example above, the configuration service is called without providing session context. For optimal performance and resource consumption, please ensure that the cookie retrieved with each configuration creation (implementation in the onInitevent listener of the function in our example) is used when calling the other configuration service APIs for the same configuration session (functions onChange, readKB, and getConfigin our example). self.getView().getModel().setProperty("/cookie", this.getResponseHeader("set-cookie")); xhr.setRequestHeader("Cookie", this.getView().getModel().getProperty("/cookie")); Please read the development guide () for more information about how to use the services. - Step 1: Create a cloud account - Step 2: Create an SAPUI5 application - Step 3: Get pre-generated code - Step 4: Load configuration in your application - Step 5: Run your application - Step 6: Display characteristic value - Step 7: Change characteristic value - Step 8: Use value description - Step 9: Calculate pricing - Back to Top
https://developers.sap.com/tutorials/productconfiguration-ui5-app-create.html
CC-MAIN-2020-05
en
refinedweb
SBECK has asked for the wisdom of the Perl Monks concerning the following question: In the past year, I was introduced to two new tools I hadn't previously been aware of (Devel::Cover and Travis CI) that I am now using for my modules, and I was just wondering what other tools might be out there that I could benefit from. What I'm looking for are tools that will improve the overall quality of my modules in terms of usability, readability, completeness, or whatever other metric. I looked around the monestary and didn't find such a list... after some feedback, I'd be hapy to add it as a tutorial. Tools that I use now are listed below. I know that many of these are pretty obvious, but perhaps for someone just starting out, they should be included. What am I missing? Update: I'm going to add the suggestions to the list as they come in, so I don't necessarily use all of them... and of course, not every tool will fit everyone's needs and/or wants, but they are a great place to start looking. The tool I wish I had the most, but don't (to my knowledge) would be a place where I could log in to and select the OS, version of perl, and version of any prerequisite modules in order to debug a test from the cpantesters site. If this exists and I don't know about it, please fill me in! Nice list. Here's another one you might add... Perl::Critic I too have frequently desired a place where I can throw a new module at to see test results, instead of uploading a new version to CPAN. I recently started Release::Checklist. It is far from complete. Use README.mdChecklist.md to see the current state. All feedback welcome. Super! I've had ideas along this path but never acted on them. Keep it going! Hello SBECK, Since you are looking to compile a comprehensive list, I think reference should be made to Task::Kensho, in particular Task::Kensho::ModuleDev and Task::Kensho::Testing. Hope that helps, One that I use a lot (and encourage others to use) is Perl::Critic. For those for aren't aware, it is a static source code analyzer. It critiques your code against best practices and recommendations from both the Perl community and Damien Conway's excellent book Perl Best Practices. <rant>A common criticism of Perl::Critic I've heard before is that some people disagree with this or that default policy. So for those folks I recommend Perl::Critic::Lax, which has policies that get Perl::Critic to loosen its tie a bit . There are also 167 modules in the Perl::Critic namespace, many of which are collections of policies and 65 in the Perl::Critic::Policy sub-namespace itself. Chances are that there's a policy in there that might scratch your itch. Failing that they can always RTFM and learn to make their own policies.</rant> I have found static source code analysis to be a great tool when beginning work on a very large codebase. It helps point out things that could very well be long-standing bugs of which the team working on the code may not even be aware. It also helps me zero in on areas of the code that may have only been put through perfunctory testing that may be in need of extra attention. I highly recommend trying it out if you've never used it. If you are using Test::Perl::Critic, please be sure to make the tests only run if some environment variable such as RELEASE_TESTING is set. There are a couple of reasons for this. First, Perl::Critic takes time, and what it tests is not likely to actually change from the time you test your release to the time it gets on a user's system. So there's no good reason to tie up user install time testing what cannot have changed since you built the distribution. Second, it is possible that others have a global Perl::Critic config file set that alter what Perl::Critic looks for. You could discover your tests are suddenly failing on those user's systems, not because the code has changed, but because the test's behavior has changed. Conversely, if you have your own .perlcriticrc, and if it doesn't ship with the distribution, then what you are testing will again be different from what the tests do on a typical user's system. For these reasons it's wise to not cause a test suite failure based on Test::Perl::Critic running on user's systems. The best approach is to only run it when you are preparing a release. Dave This is good advice. It's why I run it on the side outside of the test suite. What I'm looking for are tools that will improve the overall quality of my modules in terms of usability, readability, completeness... Some nodes I've written over the years related to code and module quality: Sorry,.
https://www.perlmonks.org/?node_id=1138589
CC-MAIN-2020-05
en
refinedweb
Customize Presentation through UI Layers UILayers provide an extensible approach to showing different parts of RadRichTextEditor put your logic in is: public override void UpdateUIViewPortOverride(UILayerUpdateContext context) Public Overrides Sub UpdateUIViewPortOverride(ByVal context As UILayerUpdateContext); } } Public Overrides ReadOnly Property Name() As String Get Return Me.customLayerName End Get End Property After having implemented the logic of your custom UI layer, you can plug it in the editor by creating a CustomUILayerBuilder. public class CustomLayersBuilder : UILayersBuilder Public Class CustomLayersBuilder Inherits UILayersBuilder You can assign the new builder to specific instance of RadRichTextEditor like this: this.radRichTextEditor1.RichTextBoxElement.UILayersBuilder = new CustomLayersBuilder(); Me.radRichTextEditor1.RichTextBoxElement CustomDecorationUILayerBase()); } Protected Overrides Sub BuildUILayersOverride(ByVal uiLayerContainer As IUILayerContainer) uiLayerContainer.UILayers.AddAfter(DefaultUILayers.HighlightDecoration, New CustomDecorationUILayerBase()) End Sub
https://docs.telerik.com/devtools/winforms/controls/richtexteditor/how-to/customize-presentation-through-ui-layers
CC-MAIN-2020-05
en
refinedweb
. So, let’s take a look at how Docker works and what that means for container security. To answer the question whether Docker is secure, we’ll first take a look at the key parts of the Docker stack: There are two key parts to Docker: Docker Engine, which is the runtime, and Docker Hub, which is the official registry of Docker containers. It’s equally important to secure both parts of the system. And to do that, it takes an understanding of what they each consist of, which components need to be secured, and how. Let’s start with Docker Engine. Docker Engine Docker Engine hosts and runs containers from the container image file. It also manages networks and storage volumes. There are two key aspects to securing Docker Engine: namespaces and control groups., allowing you to run containers as non-root users. Namespaces are switched off by default in Docker, so need to be activated before you can use them. Support for control groups, or cgroups, in Docker allows you to set limits for CPU, memory, networking, and block IO. By default containers can use an unlimited amount of system resources, so it’s important to set limits. Otherwise the entire system could be affected by a single hungry container. Apart from namespaces and control groups, Docker Engine can be further hardened by the use of additional tools like SELinux and AppArmor. SELinux provides access control for the kernel. It can manage access based on the type of process running in the container, or the level of the process, according to policies you set for the host. Based on this policy, thing should be scanned for vulnerabilities. For users of private repositories, Docker Hub will scan downloaded container images. It scans a few repositories for free, after which you need to pay for scanning as an add-on. Docker Hub isn’t the only registry service for Docker containers. Other popular registries include Quay, AWS ECR, and GitLab Container Registry. These tools also have scanning capabilities of their own. Further, Docker Trusted Registry (DTR) can be installed behind your firewall for a fee. Third-party security tools While the above security features provide basic protection for Docker Engine and Docker Hub, they lack the power and reach of a dedicated container security tool. A tool like Twistlock can completely secure your Docker stack. It goes beyond any one part, and gives you a holistic view of your entire system. Docker is an intricate mesh of various moving and static parts. Clearly, plugging in any one of these security tools does not instantly make the entire stack secure. It will take a combination of these approaches to secure Docker at all levels. So, next time someone asks you if Docker is secure, you should ask them which part of Docker they’re referring to. Then you can explain the various security considerations that affect that layer..
https://www.infoworld.com/article/3201967/how-to-think-about-docker-security.html?cid=ifw_nlt_infoworld_open_source_2017-06-28
CC-MAIN-2020-05
en
refinedweb
Developing Convex Optimization Algorithms in Dask parallel math is fun This work is supported by Continuum Analytics, the XDATA Program, and the Data Driven Discovery Initiative from the Moore Foundation. Summary We build distributed optimization algorithms with Dask. We show both simple examples and also benchmarks from a nascent dask-glm library for generalized linear models. We also talk about the experience of learning Dask to do this kind of work. This blogpost is co-authored by Chris White (Capital One) who knows optimization and Matthew Rocklin (Continuum Analytics) who knows distributed computing. Introduction Many machine learning and statistics models (such as logistic regression) depend on convex optimization algorithms like Newton’s method, stochastic gradient descent, and others. These optimization algorithms are both pragmatic (they’re used in many applications) and mathematically interesting. As a result these algorithms have been the subject of study by researchers and graduate students around the world for years both in academia and in industry. Things got interesting about five or ten years ago when datasets grew beyond the size of working memory and “Big Data” became a buzzword. Parallel and distributed solutions for these algorithms have become the norm, and a researcher’s skillset now has to extend beyond linear algebra and optimization theory to include parallel algorithms and possibly even network programming, especially if you want to explore and create more interesting algorithms. However, relatively few people understand both mathematical optimization theory and the details of distributed systems. Typically algorithmic researchers depend on the APIs of distributed computing libraries like Spark or Flink to implement their algorithms. In this blogpost we explore the extent to which Dask can be helpful in these applications. We approach this from two perspectives: - Algorithmic researcher (Chris): someone who knows optimization and iterative algorithms like Conjugate Gradient, Dual Ascent, or GMRES but isn’t so hot on distributed computing topics like sockets, MPI, load balancing, and so on - Distributed systems developer (Matt): someone who knows how to move bytes around and keep machines busy but doesn’t know the right way to do a line search or handle a poorly conditioned matrix Prototyping Algorithms in Dask Given knowledge of algorithms and of NumPy array computing it is easy to write parallel algorithms with Dask. For a range of complicated algorithmic structures we have two straightforward choices: - Use parallel multi-dimensional arrays to construct algorithms from common operations like matrix multiplication, SVD, and so on. This mirrors mathematical algorithms well but lacks some flexibility. - Create algorithms by hand that track operations on individual chunks of in-memory data and dependencies between them. This is very flexible but requires a bit more care. Coding up either of these options from scratch can be a daunting task, but with Dask it can be as simple as writing NumPy code. Let’s build up an example of fitting a large linear regression model using both built-in array parallelism and fancier, more customized parallelization features that Dask offers. The dask.array module helps us to easily parallelize standard NumPy functionality using the same syntax – we’ll start there. Data Creation Dask has many ways to create dask arrays; to get us started quickly prototyping let’s create some random data in a way that should look familiar to NumPy users. import dask import dask.array as da import numpy as np from dask.distributed import Client client = Client() ## create inputs with a bunch of independent normals beta = np.random.random(100) # random beta coefficients, no intercept X = da.random.normal(0, 1, size=(1000000, 100), chunks=(100000, 100)) y = X.dot(beta) + da.random.normal(0, 1, size=1000000, chunks=(100000,)) ## make sure all chunks are ~equally sized X, y = dask.persist(X, y) client.rebalance([X, y]) Observe that X is a dask array stored in 10 chunks, each of size (100000, 100). Also note that X.dot(beta) runs smoothly for both numpy and dask arrays, so we can write code that basically works in either world. Caveat: If X is a numpy array and beta is a dask array, X.dot(beta) will output an in-memory numpy array. This is usually not desirable as you want to carefully choose when to load something into memory. One fix is to use multipledispatch to handle odd edge cases; for a starting example, check out the dot code here. Dask also has convenient visualization features built in that we will leverage; below we visualize our data in its 10 independent chunks: Array Programming If you can write iterative array-based algorithms in NumPy, then you can write iterative parallel algorithms in Dask As we’ve already seen, Dask inherits much of the NumPy API that we are familiar with, so we can write simple NumPy-style iterative optimization algorithms that will leverage the parallelism dask.array has built-in already. For example, if we want to naively fit a linear regression model on the data above, we are trying to solve the following convex optimization problem: Recall that in non-degenerate situations this problem has a closed-form solution that is given by: We can compute $\beta^*$ using the above formula with Dask: ## naive solution beta_star = da.linalg.solve(X.T.dot(X), X.T.dot(y)) >>> abs(beta_star.compute() - beta).max() 0.0024817567237768179 Sometimes a direct solve is too costly, and we want to solve the above problem using only simple matrix-vector multiplications. To this end, let’s take this one step further and actually implement a gradient descent algorithm which exploits parallel matrix operations. Recall that gradient descent iteratively refines an initial estimate of beta via the update: where $\alpha$ can be chosen based on a number of different “step-size” rules; for the purposes of exposition, we will stick with a constant step-size: ## quick step-size calculation to guarantee convergence _, s, _ = np.linalg.svd(2 * X.T.dot(X)) step_size = 1 / s - 1e-8 ## define some parameters max_steps = 100 tol = 1e-8 beta_hat = np.zeros(100) # initial guess for k in range(max_steps): Xbeta = X.dot(beta_hat) func = ((y - Xbeta)**2).sum() gradient = 2 * X.T.dot(Xbeta - y) ## Update obeta = beta_hat beta_hat = beta_hat - step_size * gradient new_func = ((y - X.dot(beta_hat))**2).sum() beta_hat, func, new_func = dask.compute(beta_hat, func, new_func) # <--- Dask code ## Check for convergence change = np.absolute(beta_hat - obeta).max() if change < tol: break >>> abs(beta_hat - beta).max() 0.0024817567259038942 It’s worth noting that almost all of this code is exactly the same as the equivalent NumPy code. Because Dask.array and NumPy share the same API it’s pretty easy for people who are already comfortable with NumPy to get started with distributed algorithms right away. The only thing we had to change was how we produce our original data ( da.random.normal instead of np.random.normal) and the call to dask.compute at the end of the update state. The dask.compute call tells Dask to go ahead and actually evaluate everything we’ve told it to do so far (Dask is lazy by default). Otherwise, all of the mathematical operations, matrix multiplies, slicing, and so on are exactly the same as with Numpy, except that Dask.array builds up a chunk-wise parallel computation for us and Dask.distributed can execute that computation in parallel. To better appreciate all the scheduling that is happening in one update step of the above algorithm, here is a visualization of the computation necessary to compute beta_hat and the new function value new_func: Each rectangle is an in-memory chunk of our distributed array and every circle is a numpy function call on those in-memory chunks. The Dask scheduler determines where and when to run all of these computations on our cluster of machines (or just on the cores of our laptop). Array Programming + dask.delayed Now that we’ve seen how to use the built-in parallel algorithms offered by Dask.array, let’s go one step further and talk about writing more customized parallel algorithms. Many distributed “consensus” based algorithms in machine learning are based on the idea that each chunk of data can be processed independently in parallel, and send their guess for the optimal parameter value to some master node. The master then computes a consensus estimate for the optimal parameters and reports that back to all of the workers. Each worker then processes their chunk of data given this new information, and the process continues until convergence. From a parallel computing perspective this is a pretty simple map-reduce procedure. Any distributed computing framework should be able to handle this easily. We’ll use this as a very simple example for how to use Dask’s more customizable parallel options. One such algorithm is the Alternating Direction Method of Multipliers, or ADMM for short. For the sake of this post, we will consider the work done by each worker to be a black box. We will also be considering a regularized version of the problem above, namely: At the end of the day, all we will do is: - create NumPy functions which define how each chunk updates its parameter estimates - wrap those functions in dask.delayed - call dask.computeand process the individual estimates, again using NumPy First we need to define some local functions that the chunks will use to update their individual parameter estimates, and import the black box local_update step from dask_glm; also, we will need the so-called shrinkage operator (which is the proximal operator for the $l1$-norm in our problem): from dask_glm.algorithms import local_update) ## set some algorithm parameters max_steps = 10 lamduh = 7.2 rho = 1.0 (n, p) = X.shape nchunks = X.npartitions XD = X.to_delayed().flatten().tolist() # A list of pointers to remote numpy arrays yD = y.to_delayed().flatten().tolist() # ... one for each chunk # the initial consensus estimate z = np.zeros(p) # an array of the individual "dual variables" and parameter estimates, # one for each chunk of data u = np.array([np.zeros(p) for i in range(nchunks)]) betas = np.array([np.zeros(p) for i in range(nchunks)]) for k in range(max_steps): # process each chunk in parallel, using the black-box 'local_update' magic new_betas = [dask.delayed(local_update)(xx, yy, bb, z, uu, rho, f=local_f, fprime=local_grad) for xx, yy, bb, uu in zip(XD, yD, betas, u)] new_betas = np.array(dask.compute(*new_betas)) # everything else is NumPy code occurring at "master" beta_hat = 0.9 * new_betas + 0.1 * z # create consensus estimate zold = z.copy() ztilde = np.mean(beta_hat + np.array(u), axis=0) z = shrinkage(ztilde, lamduh / (rho * nchunks)) # update dual variables u += beta_hat - z >>> # Number of coefficients zeroed out due to L1 regularization >>> print((z == 0).sum()) 12 There is of course a little bit more work occurring in the above algorithm, but it should be clear that the distributed operations are not one of the difficult pieces. Using dask.delayed we were able to express a simple map-reduce algorithm like ADMM with similarly simple Python for loops and delayed function calls. Dask.delayed is keeping track of all of the function calls we wanted to make and what other function calls they depend on. For example all of the local_update calls can happen independent of each other, but the consensus computation blocks on all of them. We hope that both parallel algorithms shown above (gradient descent, ADMM) were straightforward to someone reading with an optimization background. These implementations run well on a laptop, a single multi-core workstation, or a thousand-node cluster if necessary. We’ve been building somewhat more sophisticated implementations of these algorithms (and others) in dask-glm. They are more sophisticated from an optimization perspective (stopping criteria, step size, asynchronicity, and so on) but remain as simple from a distributed computing perspective. Experiment We compare dask-glm implementations against Scikit-learn on a laptop, and then show them running on a cluster. Reproducible notebook is available here We’re building more sophisticated versions of the algorithms above in dask-glm. This project has convex optimization algorithms for gradient descent, proximal gradient descent, Newton’s method, and ADMM. These implementations extend the implementations above by also thinking about stopping criteria, step sizes, and other niceties that we avoided above for simplicity. In this section we show off these algorithms by performing a simple numerical experiment that compares the numerical performance of proximal gradient descent and ADMM alongside Scikit-Learn’s LogisticRegression and SGD implementations on a single machine (a personal laptop) and then follows up by scaling the dask-glm options to a moderate cluster. Disclaimer: These experiments are crude. We’re using artificial data, we’re not tuning parameters or even finding parameters at which these algorithms are producing results of the same accuracy. The goal of this section is just to give a general feeling of how things compare. We create data ## size of problem (no. observations) N = 8e6 chunks = 1e6 seed = 20009 beta = (np.random.random(15) - 0.5) * 3 X = da.random.random((N,len(beta)), chunks=chunks) y = make_y(X, beta=np.array(beta), chunks=chunks) X, y = dask.persist(X, y) client.rebalance([X, y]) And run each of our algorithms as follows: # Dask-GLM Proximal Gradient result = proximal_grad(X, y, lamduh=alpha) # Dask-GLM ADMM X2 = X.rechunk((1e5, None)).persist() # ADMM prefers smaller chunks y2 = y.rechunk(1e5).persist() result = admm(X2, y2, lamduh=alpha) # Scikit-Learn LogisticRegression nX, ny = dask.compute(X, y) # sklearn wants numpy arrays result = LogisticRegression(penalty='l1', C=1).fit(nX, ny).coef_ # Scikit-Learn Stochastic Gradient Descent result = SGDClassifier(loss='log', penalty='l1', l1_ratio=1, n_iter=10, fit_intercept=False).fit(nX, ny).coef_ We then compare with the $L_{\infty}$ norm (largest different value). abs(result - beta).max() Times and $L_\infty$ distance from the true “generative beta” for these parameters are shown in the table below: Again, please don’t take these numbers too seriously: these algorithms all solve regularized problems, so we don’t expect the results to necessarily be close to the underlying generative beta (even asymptotically). The numbers above are meant to demonstrate that they all return results which were roughly the same distance from the beta above. Also, Dask-glm is using a full four-core laptop while SKLearn is restricted to use a single core. In the sections below we include profile plots for proximal gradient and ADMM. These show the operations that each of eight threads was doing over time. You can mouse-over rectangles/tasks and zoom in using the zoom tools in the upper right. You can see the difference in complexity of the algorithms. ADMM is much simpler from Dask’s perspective but also saturates hardware better for this chunksize. Profile Plot for Proximal Gradient Descent Profile Plot for ADMM The general takeaway here is that dask-glm performs comparably to Scikit-Learn on a single machine. If your problem fits in memory on a single machine you should continue to use Scikit-Learn and Statsmodels. The real benefit to the dask-glm algorithms is that they scale and can run efficiently on data that is larger-than-memory by operating from disk on a single computer or on a cluster of computers working together. Cluster Computing As a demonstration, we run a larger version of the data above on a cluster of eight m4.2xlarges on EC2 (8 cores and 30GB of RAM each.) We create a larger dataset with 800,000,000 rows and 15 columns across eight processes. N = 8e8 chunks = 1e7 seed = 20009 beta = (np.random.random(15) - 0.5) * 3 X = da.random.random((N,len(beta)), chunks=chunks) y = make_y(X, beta=np.array(beta), chunks=chunks) X, y = dask.persist(X, y) We then run the same proximal_grad and admm operations from before: # Dask-GLM Proximal Gradient result = proximal_grad(X, y, lamduh=alpha) # Dask-GLM ADMM X2 = X.rechunk((1e6, None)).persist() # ADMM prefers smaller chunks y2 = y.rechunk(1e6).persist() result = admm(X2, y2, lamduh=alpha) Proximal grad completes in around seventeen minutes while ADMM completes in around four minutes. Profiles for the two computations are included below: Profile Plot for Proximal Gradient Descent We include only the first few iterations here. Otherwise this plot is several megabytes. Profile Plot for ADMM These both obtained similar $L_{\infty}$ errors to what we observed before. Although this time we had to be careful about a couple of things: - We explicitly deleted the old data after rechunking (ADMM prefers different chunksizes than proximal_gradient) because our full dataset, 100GB, is close enough to our total distributed RAM (240GB) that it’s a good idea to avoid keeping replias around needlessly. Things would have run fine, but spilling excess data to disk would have negatively affected performance. - We set the OMP_NUM_THREADS=1environment variable to avoid over-subscribing our CPUs. Surprisingly not doing so led both to worse performance and to non-deterministic results. An issue that we’re still tracking down. Analysis The algorithms in Dask-GLM are new and need development, but are in a usable state by people comfortable operating at this technical level. Additionally, we would like to attract other mathematical and algorithmic developers to this work. We’ve found that Dask provides a nice balance between being flexible enough to support interesting algorithms, while being managed enough to be usable by researchers without a strong background in distributed systems. In this section we’re going to discuss the things that we learned from both Chris’ (mathematical algorithms) and Matt’s (distributed systems) perspective and then talk about possible future work. We encourage people to pay attention to future work; we’re open to collaboration and think that this is a good opportunity for new researchers to meaningfully engage. Chris’s perspective - Creating distributed algorithms with Dask was surprisingly easy; there is still a small learning curve around when to call things like persist, compute, rebalance, and so on, but that can’t be avoided. Using Dask for algorithm development has been a great learning environment for understanding the unique challenges associated with distributed algorithms (including communication costs, among others). - Getting the particulars of algorithms correct is non-trivial; there is still work to be done in better understanding the tolerance settings vs. accuracy tradeoffs that are occurring in many of these algorithms, as well as fine-tuning the convergence criteria for increased precision. - On the software development side, reliably testing optimization algorithms is hard. Finding provably correct optimality conditions that should be satisfied which are also numerically stable has been a challenge for me. - Working on algorithms in isolation is not nearly as fun as collaborating on them; please join the conversation and contribute! - Most importantly from my perspective, I’ve found there is a surprisingly large amount of misunderstanding in “the community” surrounding what optimization algorithms do in the world of predictive modeling, what problems they each individually solve, and whether or not they are interchangeable for a given problem. For example, Newton’s method can’t be used to optimize an l1-regularized problem, and the coefficient estimates from an l1-regularized problem are fundamentally (and numerically) different from those of an l2-regularized problem (and from those of an unregularized problem). My own personal goal is that the API for dask-glmexposes these subtle distinctions more transparently and leads to more thoughtful modeling decisions “in the wild”. Matt’s perspective This work triggered a number of concrete changes within the Dask library: - We can convert Dask.dataframes to Dask.arrays. This is particularly important because people want to do pre-processing with dataframes but then switch to efficient multi-dimensional arrays for algorithms. - We had to unify the single-machine scheduler and distributed scheduler APIs a bit, notably adding a persistfunction to the single machine scheduler. This was particularly important because Chris generally prototyped on his laptop but we wanted to write code that was effective on clusters. - Scheduler overhead can be a problem for the iterative dask-array algorithms (gradient descent, proximal gradient descent, BFGS). This is particularly a problem because NumPy is very fast. Often our tasks take only a few milliseconds, which makes Dask’s overhead of 200us per task become very relevant (this is why you see whitespace in the profile plots above). We’ve started resolving this problem in a few ways like more aggressive task fusion and lower overheads generally, but this will be a medium-term challenge. In practice for dask-glm we’ve started handling this just by choosing chunksizes well. I suspect that for the dask-glm in particular we’ll just develop auto-chunksize heuristics that will mostly solve this problem. However we expect this problem to recur in other work with scientists on HPC systems who have similar situations. - A couple of things can be tricky for algorithmic users: - Placing the calls to asynchronously start computation (persist, compute). In practice Chris did a good job here and then I came through and tweaked things afterwards. The web diagnostics ended up being crucial to identify issues. - Avoiding accidentally calling NumPy functions on dask.arrays and vice versa. We’ve improved this on the dask.array side, and they now operate intelligently when given numpy arrays. Changing this on the NumPy side is harder until NumPy protocols change (which is planned). Future work There are a number of things we would like to do, both in terms of measurement and for the dask-glm project itself. We welcome people to voice their opinions (and join development) on the following issues: - Asynchronous Algorithms - User APIs - Extend GLM families - Write more extensive rigorous algorithm testing - for satisfying provable optimality criteria, and for robustness to various input data - Begin work on smart initialization routines What is your perspective here, gentle reader? Both Matt and Chris can use help on this project. We hope that some of the issues above provide seeds for community engagement. We welcome other questions, comments, and contributions either as github issues or comments below. Acknowledgements Thanks also go to Hussain Sultan (Capital One) and Tom Augspurger for collaboration on Dask-GLM and to Will Warner (Continuum) for reviewing and editing this post. blog comments powered by Disqus
http://matthewrocklin.com/blog/work/2017/03/22/dask-glm-1
CC-MAIN-2020-05
en
refinedweb
Download presentation Presentation is loading. Please wait. Published byDwain Merritt Modified over 5 years ago 1 Editing Java programs with the BlueJ IDE 2 Working environments to develop (= write) programs There are 2 ways to develop (write) computer programs: 1.Using an editor (e.g. gedit) and a compiler (such as javac) separately. You have seen this method in the last webnote --- abus/02/BlueJ/java.html abus/02/BlueJ/java.html 2. Using an editor and a compiler in an integrated manner 3 Working environments to develop (= write) programs (cont.) In the second way, you will need to install a special application called an Integrated Development Environment (IDE) 4 Java IDEs There are a number of Integrated Development Environment IDE) available for Java Java IDEs: Eclipse -- Eclipse is highly extensible and customizable, but hard to learn (freely available) NetBeans -- created by Sun MircoSystem (original designer of the Java programming language) (freely available) JBuilder -- top commercial Java IDE; very costly... BlueJ -- easy to learn (freely available) 5 Java IDEs (cont.) In this webnote, you will learn to edit Java programs with BlueJ In the next webnote, you will learn how to: You will learn how to program in the Java programming language later in this course compile the Java program with BlueJ run the (compiled) Java program with BlueJ 6 Java IDEs (cont.) BlueJ is freely available and it can be obtained from this website: 7 Preparation Before you can use BlueJ, you must: Login to a computer in the MathCS lab Open a terminal window Change the current (working) directory to your cs170 directory This directory is used to store CS 170 labs and homework. 8 Information about this BlueJ tutorial The tutorial is described from the perspective of the user cheung (Because it was developed by Professor Cheung) The directory used to hold the project is /home/cheung/cs170 For clarity, I have delete all files and folders from my cs170 directory. 9 Information about this BlueJ tutorial (cont.) We will write a simple Java program and store the program in a project directory called "TestProj". The "TestProj" will be contained inside the /home/cheung/cs170 directory. In other words, the absolute path of the project directory is: /home/cheung/cs170/TestProj 10 Information about this BlueJ tutorial (cont.) Here is the Simple Java program that you will enter into BlueJ: You don't need to understand this program right now; it will be explained later in the course. public class Hello { public static void main(String[] args) { System.out.println("Hello Class"); System.out.println("How is everyone doing so far ?"); } 11 Topics covered in this (short) tutorial Things you need to learn to get started with BlueJ Run the BlueJ application Create a new project in BlueJ Create a new program file Insert text into the file Delete text from the file Goto a certain line in the file Search for a pattern in the file Search and replace for a pattern with another pattern in the file Undo a change Save your work Quit without saving (because you made a mess)... 12 Starting the BlueJ IDE application Enter the following command in a terminal window: This will run BlueJ as a detached process UNIX prompt>> bluej & 13 Starting the BlueJ IDE application (cont.) You will first see an announcement window: 14 Starting the BlueJ IDE application (cont.) When it's ready, you will see the main window: 15 Create a new project BlueJ requires that each project be stored in a different directory When you create a new project, BlueJ will also create a new directory for you. 16 Create a new project (cont.) How to create a new project: Left click on the Project tab Then left click on the New Project tab: 17 Create a new project (cont.) A new window will pop up: 18 Create a new project (cont.) Enter the name of the new project directory (/home/cheung/cs170/TestProj) and click on the Create button: 19 Create a new project (cont.) When BlueJ has successful created an new project, it will show the following window: 20 Create a new program file Suppose we want to create a file that contains the following Java program (given above): public class Hello { public static void main(String[] args) { System.out.println("Hello Class"); System.out.println("How is everyone doing so far ?"); } 21 Create a new program file (cont.) Notice that the program is called class Hello This will be important in the creation procedure. 22 Create a new program file (cont.) How to create a Java program file: Left click on the New Class button: 23 Create a new program file (cont.) A new window will pop up: 24 Create a new program file (cont.) Type in the name of the "class" (which is Hello) and click OK: A new window will pop up: 25 Create a new program file (cont.) Final result: You can see the new file Hello in the TestProj area. 26 Create a new program file (cont.) To see that BlueJ has created a file, we list the content of the TestProj directory from a terminal window: The icon named Hello in BlueJ represents the program file Hello.java inside the TestProj directory. 27 Open a program file for editing If you want to edit a program file, do the following: Right click on the file icon Then left click on the Open Editor button: 28 Open a program file for editing If you want to edit a program file, do the following: A new window will pop up: 29 Open a program file for editing The new window contains the content of the file Hello.java (To verify, use "cat Hello.java" in a terminal window) BlueJ has already inserted a few things in the file Hello.java to help you start writing a Java program 30 Deleting text from a file How to delete text from a file: Highlight the text in BlueJ that you want to delete: 31 Deleting text from a file (cont.) Press the backspace key You can also press the delete key or control-X Result: 32 Inserting text into a file Use the scroll bar on the right to find the location in the file where you want to insert text. Left click at the insert location Then type in the new text. Example: 33 Insert text by copy and paste You can insert text from another window into the document in BlueJ by using the copy and paste facility: 1.Highlight any text in a window (e.g., from a webpage) The highlighted text is automatically copied in UNIX 2.(On a Windows-based PC, you need to type control-C to copy) 3.Now click in the BlueJ window at the position where you want to insert the highlighted text 4.Type control-V (for paste) 34 Replacing some text How to replace text: Delete the text Insert new text 35 Undo a change When you make a edit mistake, you can undo the last change with the undo-command: control-Z 36 Undo a change (cont.) Undo earlier changes: You can undo earlier changes by pressing control-Z multiple time The maximum number of changes can be undo is 25 37 Undo an undo Suppose you have undone a change that was in fact correct You can undo an undo operation using: control-Y (this is called a Redo operation) 38 Goto a certain line in the file A feature that is very useful when you write computer programs is: That is because compilers (an application that translates a program written in a high level language into machine code) always report an error along with the location (as a line number) in the file. Goto a certain line in a file 39 Goto a certain line in the file (cont.) How to go to line number n in a file: 1.Left click on the Tools tab 2.Then left click on the Go to Line tab 40 Goto a certain line in the file (cont.) Example: After this, a window will pop up and you can enter the desired line number 41 Goto a certain line in the file (cont.) Keyboard shortcut: The keyboard shortcut for the Go to Line function is control-L 42 Search for a text pattern Finding the next occurrence of a pattern in a file: 1.Left click on the Find tab The lower portion of the BlueJ window will change to the Find menu Example: 43 Search for a text pattern (cont.) Enter the search text pattern and click Next: The text highlighted in yellow is the next matching pattern All other matching patterned are highlighted in blue 44 Search for a text pattern (cont.) Left click on the Next button to find the subsequent matching pattern Search forward: Left click on the Prev button to search forward 45 Search and Replace Finding the next occurrence of a text pattern in a file and replace it with some other pattern: Left click on the Replace tab The lower portion of the BlueJ window will change to the Replace menu 46 Search and Replace (cont.) Example: 47 Search and Replace (cont.) 2. Enter the replacement pattern in the Replace field: 48 Search and Replace (cont.) 3.Click on the Once button to replace the current occurrence (in yellow): 49 Search and Replace (cont.) You can replace the next occurrence by clicking on Once another time. Click on All to replace every occurrence 50 Search and Replace (cont.) Hint: If you do not want to replace the current occurrence and want to continue the Search and Replace operation, then do the following: 1.Click on the text immediately after the current occurrence 2.Click Next (to find the next occurrence) 3.Continue with replace if desire 51 Search and Replace (cont.) Example: 52 Search and Replace (cont.) Click on the text immediately after the current occurrence 53 Search and Replace (cont.) Click Next Continue with the Replace operation if so desired. 54 Saving the changes Auto saving: You do not need to save your work. When you quit (close) the BlueJ window, it saves your works automatically 55 Saving the changes (cont.) Save your work explicitly: You can choose to save your work explicit by clicking of Class and then Save: 56 Quit without saving your work... You do not have this option in BlueJ 57 Exit BlueJ Before you exit BlueJ, I would recommend that you save all your changes explicitly You have learned saving your work above !!! 58 Exit BlueJ (cont.) Exiting BlueJ: To exit BlueJ, click Project in the BlueJ's main window and select Quit: Similar presentations © 2021 SlidePlayer.com Inc.
http://slideplayer.com/slide/5979798/
CC-MAIN-2021-17
en
refinedweb
error LNK1181 opencv_calib3d300d.lib Hi everybody, I am working on a project and trying to get familiar with OpenCV. I have chosen to try and connect it with MS Visual Studio 2013. Here is the code I am trying to run: "#include <opencv\cv.h> //ignore " "#include <opencv\highgui.h> //ignore " using namespace cv; int main(){ IplImage* img = cvLoadImage("C:\\Users\\--\\Desktop\\ocv.png"); cvNamedWindow("Example1", CV_WINDOW_NORMAL); cvShowImage("Example1", img); cvWaitKey(0); cvReleaseImage(&img); cvDestroyWindow("Example1"); return 0; } This is what I have done to connect to the OpenCV libraries: In Properties Linker>Additioinal Dependencies "opencv_ts300d.lib";"opencv_calib3d300d.lib";"opencv_core300d.lib";"opencv_features2d300d.lib"; "opencv_flann";%(AdditionalDependencies) Linker>Input "C:\opencv\build\x86\vc12\lib";"C:\opencv\build\x86\vc12\bin"%(AdditionalLibraryDirectories) Linker>General "C:\opencv\build\x86\vc12\lib";"C:\opencv\build\x86\vc12\bin"%(AdditionalLibraryDirectories) C/C++>Additional Include Directories "C:\opencv\build\include";"C:\opencv\build\include\opencv";"C:\opencv\build\include\opencv2";%(AdditionalIncludeDirectories) VC++ Directories>Include Directories "C:\opencv\build\include";"C:\opencv\build\include\opencv";"C:\opencv\build\include\opencv2";$(IncludePath) VC++ Directories>Library Directories "C:\opencv\build\x86\vc12\bin";"C:\opencv\build\x86\vc12\lib";$(LibraryPath) VC++ Directories>Source Directories "C:\opencv\build\x86\vc12\bin";"C:\opencv\build\x86\vc12\lib";"C:\opencv\build\x86\vc12\staticlib"$(SourcePath) I get a reoccurring error and my program will not compile. Error 1 error LNK1181: cannot open input file 'opencv_calib3d300d.lib' C:\Users\Will\Documents\Visual Studio 2013\Projects\OPEN_CV_TEST\OPEN_CV_TEST\LINK OPEN_CV_TEST I've tried manipulating everything. I've read all the other similar questions and noticed that other people are having this error. I've also read what Microsoft's website says on the issue. Any help would be appreciated very much. Thank you, deltamaster Shouldn't you have: #include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp> I mean these are the includes I would use doing a C++ project. Also, before someone else points it, you are writing code with the old "C" way of doing it. Now, C++ is privileged and here is a link to a simple tutorial: Thank you for your input. Adding the #includes does not fix the problem. I tried the code in the example-link and that gave me the same error! Thanks Are you sure the library opencv_calib3d300d.lib is actually there in one of the linker include paths? Since it is not complaining about the first library (opencv_ts300d.lib) in the list, I'm just guessing. Besides that, the code your using is very old (and depricated); please stop using it and start using the C++ interface for new projects. If built from source the lib files should be in C:\opencv\install\x86\vc12\lib or wherever install folder would be in your opencv build folder. A simple search for the .lib file in your windows explorer will also lead you there. Thank you ben.seep that fixed my problem! This is what I did in case anybody has this issue: In the staticlibrary folder are all the ...300d files. I made a folder called "replace" in static library. I moved every ...300d file into the folder (just the ones that are added to additional dependencies). Moved the folder to desktop and copied the ...300d files to the library folder. Then it worked. Note: If you don't move the files out of staticlibrary you will get hundreds of errors. An explanation of the reason behind this would be very helpful. @deltamaster what is static library folder? where is it?
https://answers.opencv.org/question/51060/error-lnk1181-opencv_calib3d300dlib/
CC-MAIN-2021-17
en
refinedweb
whitebox WhiteboxTools Frontends WhiteboxTools is an advanced geospatial data analysis platform developed by Prof. John Lindsay at the University of Guelph's Geomorphometry and Hydrogeomatics Research Group. The WhiteboxTools library currently contains 440 tools, which are each grouped based on their main function into one of the following categories: Data Tools, GIS Analysis, Hydrological Analysis, Image Analysis, LiDAR Analysis, Mathematical and Statistical Analysis, Stream Network Analysis, and Terrain Analysis. For a listing of available tools, complete with documentation and usage details, please see the WhiteboxTools User Manual. WhiteboxTools can be accessed either from a command prompt (i.e. terminal) or through one of the following front-ends: Python Package Links - GitHub repo: - PyPI: - conda-forge: - Documentation: - Maintainer: Qiusheng Wu Installation The whitebox Python package can be installed using the following command: pip install whitebox The whitebox Python package is also available on conda-forge, which can be installed using the following command: conda install -c conda-forge whitebox Usage Tool names in the whitebox Python package can be called using the snake_case convention (e.g. lidar_info). See below for an example Python script. import os import pkg_resources import whitebox wbt = whitebox.WhiteboxTools() print(wbt.version()) print(wbt.help()) # identify the sample data directory of the package data_dir = os.path.dirname(pkg_resources.resource_filename("whitebox", 'testdata/')) wbt.set_working_dir(data_dir) wbt.verbose = False wbt.feature_preserving_smoothing("DEM.tif", "smoothed.tif", filter=9) wbt.breach_depressions("smoothed.tif", "breached.tif") wbt.d_inf_flow_accumulation("breached.tif", "flow_accum.tif") WhiteboxTools also provides a Graphical User Interface (GUI) - WhiteboxTools Runner, which can be invoked using the following Python script: import whitebox whitebox.Runner() R Package Links - GitHub repo: - R-Forge: - Documentation: - Maintainer: Qiusheng Wu Installation The whitebox R package is available on R-Forge, which can be installed using the following command: install.packages("whitebox", repos="") You can alternatively install the development version of whitebox from GitHub as follows: if (!require(devtools)) install.packages('devtools') devtools::install_github("giswqs/whiteboxR") RStudio Screenshot Usage Tool names in the whitebox R package can be called using the snake_case (e.g. wbt_lidar_info). See below for an example. library(whitebox) # Set input raster DEM file dem <- system.file("extdata", "DEM.tif", package="whitebox") # Run tools wbt_feature_preserving_smoothing(dem, "./smoothed.tif", filter=9, verbose_mode = TRUE) wbt_breach_depressions("./smoothed.tif", "./breached.tif") wbt_d_inf_flow_accumulation(dem, "./flow_accum.tif") ArcGIS Python Toolbox Links - GitHub repo: - Maintainer: Qiusheng Wu Installation Step 1: Download the toolbox Go to the WhiteboxTools-ArcGIS GitHub repo and click the green button (Clone or download) on the upper-right corner of the page to download the toolbox as a zip file. Depcompress the downloaded zip file. Step 2: Connect to the toolbox Navigate to the Folder Connections node in the catalog window tree. Right-click the node and choose Connect To Folder. Type the path or navigate to the WhiteboxTools-ArcGIS folder and click OK. Browse into the toolbox and start using its tools. Usage Open any tool within the toolbox and start using it. Check out the WhiteboxTools User Mannual for more detailed help documentation of each tool. ArcGIS Pro Screenshot ArcMap Screenshots QGIS Plugin <a class='anchor' id='qgis'></a> Links - Documentation: - GitHub repo: - Maintainer: Alexander Bruy Installation Please follow the installation guide here. Screenshot Command-line Interface Links - GitHub repo: - User Manual: - Maintainer: John Lindsay Installation You can download a copy of the WhiteboxTools executable for your operating system from the Geomorphometry and Hydrogeomatics Research Group website. Once you've downloaded WhiteboxTools and decompressed (unzipped) the folder, you can open a command prompt and start using it. Usage WhiteboxTools is a command-line program and can be run by calling it with appropriate commands and arguments, from a terminal application. The following commands are recognized by the WhiteboxTools library: Generally, the Unix convention is that single-letter arguments (options) use a single hyphen (e.g. -h) while word-arguments (longer, more descriptive argument names) use double hyphen (e.g. --help). The same rule is used for passing arguments to tools as well. Use the --toolhelp argument to print information about a specific tool (e.g. --toolhelp=Clump). Tool names can be specified either using the snake_case or CamelCase convention (e.g. lidar_info or LidarInfo). For examples of how to call functions and run tools from WhiteboxTools, see the whitebox_example.py Python script, which itself uses the whitebox_tools.py script as an interface for interacting with the executable file. In addition to direct command-line and script-based interaction, a very basic user-interface called WB Runner can be used to call the tools within the WhiteboxTools executable file, providing the required tool arguments. Example command prompt: '/Users/johnlindsay/Documents/data/' --run=DevFromMeanElev --input='DEM clipped.dep' --output='DEV raster.dep' -v./whitebox_tools --wd= Notice the quotation marks (single or double) used around directories and filenames, and string tool arguments in general. Use the '-v' flag (run in verbose mode) to force the tool print output to the command prompt. Please note that the whitebox_tools executable file must have permission to be executed; on some systems, this may require setting special permissions. The '>>' is shorthand for the command prompt and is not intended to be typed. Also, the above example uses the forward slash character (/), the directory path separator used on unix based systems. On Windows, users should use the back slash character (\) instead.
https://blog.gishub.org/whitebix?guid=none&deviceId=0bd7bfe9-b028-40fe-a986-c24342292654
CC-MAIN-2021-17
en
refinedweb
Introduction Matplotlib is one of the most widely used data visualization libraries in Python. From simple to complex visualizations, it's the go-to library for most. In this tutorial, we'll take a look at how to plot a bar plot in Matplotlib. Matplotlib Plotting a Bar Plot in Matplotlib is as easy as calling the bar() function on the PyPlot instance, and passing in the categorical and continuous variables that we'd like to visualize. import matplotlib.pyplot as plt x = ['A', 'B', 'C'] y = [1, 5, 3] plt.bar plt.bar(). This results in a clean and simple bar graph: Plot a Horizontal Bar Plot in Matplotlib Oftentimes, we might want to plot a Bar Plot horizontally, instead of vertically. Changing the color of the bars themselves is as easy as setting the color argument with a list of colors. If you have more bars than colors in the list, they'll start being applied from the first color again: import matplotlib.pyplot as plt x = ['A', 'B', 'C'] y = [1, 5, 3] plt.bar(x, y, color=['red', 'blue', 'green']) plt.show() Now, we've got a nicely colored Bar Plot: Of course, you can also use the shorthand versions or even HTML codes: plt.bar(x, y, color=['red', 'blue', 'green']) plt.bar(x, y, color=['r', 'b', 'g']) plt.bar(x, y, color=['#ff0000', '#00ff00', '#0000ff']) plt.show() Or you can even put a single scalar value, to apply it to all bars: plt.bar(x, y, color='green') Bar Plot with Error Bars in Matplotlib When you're plotting mean values of lists, which is a common application for Bar Plots, you'll have some error space. It's very useful to plot error bars to let other observers, and yourself, know how truthful these means are and which deviation is expected. For this, let's make a dataset with some values, calculate their means and standard deviations with Numpy and plot them with error bars: import matplotlib.pyplot as plt import numpy as np x = np.array([4, 5, 6, 3, 6, 5, 7, 3, 4, 5]) y = np.array([3, 4, 1, 3, 2, 3, 3, 1, 2, 3]) z = np.array([6, 9, 8, 7, 9, 8, 9, 6, 8, 7]) x_mean = np.mean(x) y_mean = np.mean(y) z_mean = np.mean(z) x_deviation = np.std(x) y_deviation = np.std(y) z_deviation = np.std(z) bars = [x_mean, y_mean, z_mean] bar_categories = ['X', 'Y', 'Z'] error_bars = [x_deviation, y_deviation, z_deviation] plt.bar(bar_categories, bars, yerr=error_bars) plt.show() Here, we've created three fake datasets with several values each. We'll visualize the mean values of each of these lists. However, since means, as well as averages can give the false sense of accuracy, we'll also calculate the standard deviation of these datasets so that we can add those as error bars. Using Numpy's mean() and std() functions, this is a breeze. Then, we've packed the bar values into a bars list, the bar names for a nice user experience into bar_categories and finally - the standard deviation values into an error_bars list. To visualize this, we call the regular bar() function, passing in the bar_categories (categorical values) and bars (continuous values), alongside the yerr argument. Since we're plotting vertically, we're using the yerr arguement. If we were plotting horizontally, we'd use the xerr argument. Here, we've provided the information about the error bars. This ultimately results in: Plot Stacked Bar Plot in Matplotlib Finally, let's plot a Stacked Bar Plot. Stacked Bar Plots are really useful if you have groups of variables, but instead of plotting them one next to the other, you'd like to plot them one on top of the other. For this, we'll again have groups of data. Then, we'll calculate their standard deviation for error bars. Finally, we'll need an index range to plot these variables on top of each other, while maintaining their relative order. This index will essentially be a range of numbers the length of all the groups we've got. To stack a bar on another one, you use the bottom argument. You specify what's on the bottom of that bar. To plot x beneath y, you'd set x as the bottom of y. For more than one group, you'll want to add the values together before plotting, otherwise, the Bar Plot won't add up. We'll use Numpy's np.add().tolist() to add the elements of two lists and produce a list back: import matplotlib.pyplot as plt import numpy as np # Groups of data, first values are plotted on top of each other # Second values are plotted on top of each other, etc x = [1, 3, 2] y = [2, 3, 3] z = [7, 6, 8] # Standard deviation rates for error bars x_deviation = np.std(x) y_deviation = np.std(y) z_deviation = np.std(z) bars = [x, y, z] ind = np.arange(len(bars)) bar_categories = ['X', 'Y', 'Z']; bar_width = 0.5 bar_padding = np.add(x, y).tolist() plt.bar(ind, x, yerr=x_deviation, width=bar_width) plt.bar(ind, y, yerr=y_deviation, bottom=x, width=bar_width) plt.bar(ind, z, yerr=z_deviation, bottom=bar_padding, width=bar_width) plt.xticks(ind, bar_categories) plt.xlabel("Stacked Bar Plot") plt.show() Running this code results in: Conclusion In this tutorial, we've gone over several ways to plot a bar plot using Matplotlib and Python. We've also covered how to calculate and add error bars, as well as stack bars on top of each other. If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python: Data Visualization in Python >>IMAGE.
https://stackabuse.com/matplotlib-bar-plot-tutorial-and-examples/
CC-MAIN-2021-17
en
refinedweb
How to use Python's enumerate and zip to iterate over two lists and their indices. enumerate- Iterate over indices and items of a list¶ The Python Cookbook (Recipe 4.4) describes how to iterate over items and indices in a list using enumerate. For example: alist = ['a1', 'a2', 'a3'] for i, a in enumerate(alist): print i, a Results: 0 a1 1 a2 2 a3 zip- Iterate over two lists in parallel¶ I previously wrote about using zip to iterate over two lists in parallel. Example: alist = ['a1', 'a2', 'a3'] blist = ['b1', 'b2', 'b3'] for a, b in zip(alist, blist): print a, b Results: a1 b1 a2 b2 a3 b3 enumerate with zip¶ Here is how to iterate over two lists and their indices using enumerate together with zip: alist = ['a1', 'a2', 'a3'] blist = ['b1', 'b2', 'b3'] for i, (a, b) in enumerate(zip(alist, blist)): print i, a, b Results: 0 a1 b1 1 a2 b2 2 a3 b3 Related posts - An example using Python's groupby and defaultdict to do the same task — posted 2014-10-09 - If you're working with last lists and/or memory is a concern, using the itertools module is an even better option. from itertools import izip, count alist = ['a1', 'a2', 'a3'] blist = ['b1', 'b2', 'b3'] for i, a, b in izip(count(), alist, blist): print i, a, b yields the exact same result as above, but is faster and uses less memory. >>> def foo(): ... for i, x, y in izip(count(), a, b): ... pass ... >>> def bar(): ... for i, (x, y) in enumerate(zip(a, b)): ... pass ... >>> delta(foo) 0.0213768482208 >>> delta(bar) 0.180979013443 where a = b = xrange(100000) and delta(f(x)) denotes the runtime in seconds of f(x). Jeremy, Thanks for the tip and the clear example and demonstration of the performance benefit. I had heard of itertools but have not really used it. It was great to talk to you today and I hope I can talk to you again soon. Thanks for the zip example, I grok it now. Jeremy, Thanks for the example, It is very helpful. I have set of n set, each with different number of elements I wanted to find all possible combinations of n elements, one from each set. Consider two sets (e1,e2,e3) (e4,e5) output required is as follows (e1,e4) (e1,e5) (e2,e4) (e2,e5) (e3,e4) (e3,e5) I do not know the number of such sets in advance. Nitin: In order to use zip to iterate over two lists - Do the two lists have to be the same size? What happens if the sizes are unequal? Thanks. Thx man helped me alot nice example btw re:#8, unequal list length: the result is truncated to the shorter list. See below for a discussion of how to use the longest list instead: short answer for py2.6+: use "map(None, alist, blist)" dunno what the equivalent is in py3+ when iterating through unequal length lists using itertools import itertools a1=[1,2,3,6,7,9] c1=['a','a','c','d'] b1=[10,20,30,40,50,60] d1=[11,12,13,14,15,16,17,18] mylist = list(itertools.izip_longest(a1,b1,c1,d1)) a1=[1,2,3,6,7,9] for items in mylist: litems=list(items) if items[0] is not None and items[1] is not None: a_old = items[0] b_old = items[1] if items[0] is None and items[1] is None: litems[0]= a_old litems[1]= b_old a,b,c,d=litems print a,b,c,d is ther any other better way to give previous value if None occurs for any field. disqus:2412310580 Very useful page with clear examples, thanks. disqus:3273150118
https://www.saltycrane.com/blog/2008/04/how-to-use-pythons-enumerate-and-zip-to/
CC-MAIN-2021-17
en
refinedweb
metixgosu edited 2 days ago visual studio·namespace·reference assemblies 3 Replies 2 Votes glenneroo commented Apr 2, '21 in Help Room 1 Reply 0 Votes RGV answered Mar 18, '21 in Help Room 2 Replies -1 Votes dizzymediainc edited Mar 16, '21 import·package·namespace 6 Replies Mr12309BACHMMb edited Mar 15, '21 in Help Room MUG806 commented Feb 15, '21 namespace·compiler error 0 Replies cdietz08 answered Jan 24, '21 namespace dcy460710594 asked Dec 26, '20 in Help Room CmdrZin answered Dec 18, '20 scripting problem·namespace·importing problems·subclass·import assets thornebrandt asked Dec 14, '20 c#·rendering·namespace·pipeline·hdr StringAtlas edited Dec 11, '20 c#·game·save·namespace·cs0103 xxmariofer commented Nov 19, '20 in Help Room peter-vrtualx published Nov 12, '20 in Help Room TheRealRan answered Nov 4, '20 list·classes·namespace·message·methods tadpolejedi commented Oct 21, '20 android·error·editor·namespace·app aabb15768 edited Oct 19, '20 in Help Room wpvo published Oct 2, '20 in Help Room r6mus published Sep 29, '20 text·update·bug-perhaps·namespace 7 Replies 4 Votes Bunny83 answered Sep 28, '20 in Help Room Ezz12 commented Sep 17, '20 assetbundle·namespace·standard-assets rcfearn answered Sep 2, '20 error·namespace·not found 1 Votes Kyron-Ellsworth asked Sep 2, '20 in Help Room Bunny83 commented Aug 28, '20 object·serialization·namespace·saveload·properties unity_neIDb-PA4Kmwdw commented Aug 20, '20 unity 5·variables·namespace·first person controller·modify Mortuus17 asked Aug 14, '20 namespace·compiler error stadlernicolas26 edited Jul 11, '20 in Help Room Giantbean answered Jul 9, '20 vr·namespace Hellium commented Jun 30, '20 error·error message·namespace Shpinxis asked Jun 27, '20 in Help Room HernandoNJ commented Jun 24, '20 error·namespace·contains·definition
https://answers.unity.com/topics/namespace.html
CC-MAIN-2021-17
en
refinedweb
An options page allows a plug-in developer to add controls to the ReSharper Options dialog. This is typically used to let the user specify various plug-in settings. The plug-in writer can add an unlimited amount of option pages to the dialog, and the dialogs can be nested in any of the options groups. Here is a screenshot of a custom options page in action: Let us now discuss the way in which option pages are defined. Making an options page Making an options page is surprisingly easy. You begin by defining a class which will house the options page. This class should be made to implement IOptionsPage, and should be decorated with the OptionsPage attribute. The OptionsPage attribute requires the plug-in author to provide the following parameters: - The page ID. This ID can be specified as a constant field inside the class. The page ID is a string which uniquely identifies this particular options page. - The name of the page. This is the text that will appear in the left-hand tree on the Options page as well as in the title and navigation elements. - The image. This refers to the glyph that appears next to the item in the tree and is specified as a type (e.g., typeof(OptionsPageThemedIcons.SamplePage)). See 4.05 Icons (R7) for more information. In addition, you may specify the following optional parameters: ParentIdlets you define the ID of the section or element which serves as this page's parent. If you want the parent to be one of the known ReSharper items, look inside the JetBrains.UI.Options.OptionsPagesnamespace for the corresponding pages, and then use their Pidas this parameter. For example, for an options page to appear in the Environment section, you specify the ParentIdof EnvironmentPage.Pid. Sequencelets you define the location of the item you are inserting in relation to the other items. Items are placed in order, so the higher this value, the further this page will be in the list of items. Of course, to accurately position the item, you need to know the Sequencevalue of its siblings. Luckily, this information is available in the metadata. Having specified the attributes, your class will appear as follows: Injecting dependencies Options pages are created by the Component Model, which means you can inject dependencies via your constructor parameters. Your constructor should take at least the following two parameters: Lifetime, which controls the lifetime of this page. OptionsSettingsSmartContext, the settings context that you can use to bind UI elements. Both of these values need to be injected because they are required for binding particular settings to UI elements. If you are inheriting from AOptionsPage, you will also need to inject IUIApplication to pass into the base class constructor. In addition to these values, you may inject any other available component into the service. Note that if you implement IOptionsPage on a user control, you should ensure that the generated default constructor is replaced with the constructor you wish the component model to inject dependencies for. Defining the UI You can define the UI for your options page using either Windows Forms or WPF. Whichever option you choose, all you have to do to actually present the UI is to initialize it and assign it to your option page's Control variable. Note: whichever UI framework you choose, your application must reference the WPF assemblies. The compiler will warn you about this if you start using the EitherControl type without adding appropriate references. To create an options page using Windows Forms, simply create a UserControl and assign it to the Control property. Please note that since multiple inheritance is impossible, the only way to keep the options page class and the UserControl class one and the same is as follows: - Inherit from UserControland implement the IOptionsPageinterface. - Decorate the control with the OptionsPageattribute as described above. - Implement the read-only property Control, returning the value of this. To create an options page using WPF, simply define your UI in terms of WPF elements and then assign the Control property accordingly. You can specify any WPF control, e.g., Grid, as the page control. Needless to say, it is entirely possible to use the WindowsFormsHost class to host Windows Forms controls on a WPF options page. The mechanism which binds the controls to the settings works for both WPF and Windows Forms. (Of course, if you implement IOptionsPage manually, you can simply assign properties manually without using bindings at all.) Working with Settings The OptionsSettingsSmartContext class that we inject has several SetBinding() methods that let us tie together settings and controls. These bind methods have two generic arguments - the name of the settings class, and the type of property that is being saved. In the case of WPF, you would specify: - The property that is being assigned on exit. Defined as a lambda expression (e.g., x => x.Name). - The name of the control that the property is being read from. - The dependency property that is being read. For example, here is how one would bind a WPF text box for a username to a corresponding setting: The situation with WinForms is a bit more trickly - there are no dependency properties to be used, so we use the WinFormsProperty helper class. This helper class has a single method, Create(), that creates an object of type IProperty<T> (where T is the property type). To create the property, it requires the following parameters: - The Lifetimeof the calling component. This should be obvious, since the 'proxy property' should only live as long as it is needed. This does, of course, imply that you must inject the Lifetimeinto the constructor. - The class to take data from. In actual fact, though in the case of WinForms you'll probably provide the corresponding control, this doesn't have to be a control per se - it can be practically any object. After all, the WinFormsPropertyclass does not use any WinForms-specific code. - A lambda expression indicating which property of the aforementioned class that is to be used. Thus, the call to bind a WinForms-based password box to a setting becomes as follows:
https://confluence.jetbrains.com/pages/diffpagesbyversion.action?pageId=50503778&selectedPageVersions=5&selectedPageVersions=4
CC-MAIN-2021-17
en
refinedweb
Extra Models¶ Warning The current page still doesn't have a translation for this language. But you can help translating it: Contributing. Continuing with the previous example, it will be common to have more than one related model. This is especially the case for user models, because: - The input model needs to be able to have a password. - The output model should not have a password. - The database model would probably need to have a hashed password. Danger Never store user's plaintext passwords. Always store a "secure hash" that you can then verify. If you don't know, you will learn what a "password hash" is in the security chapters. Multiple models¶ Here's a general idea of how the models could look like with their password fields and the places where they are used: from typing import Optional from fastapi import FastAPI from pydantic import BaseModel, EmailStr app = FastAPI() class UserIn(BaseModel): username: str password: str email: EmailStr full_name: Optional[str] = None class UserOut(BaseModel): username: str email: EmailStr full_name: Optional[str] = None class UserInDB(BaseModel): username: str hashed_password: str email: EmailStr full_name: Optional[str] = None About **user_in.dict()¶ Pydantic's .dict()¶ user_in is a Pydantic model of class UserIn. Pydantic models have a .dict() method that returns a dict with the model's data. So, if we create a Pydantic object user_in like: user_in = UserIn(username="john", password="secret", email="john.doe@example.com") and then we call: user_dict = user_in.dict() we now have a dict with the data in the variable user_dict (it's a dict instead of a Pydantic model object). And if we call: print(user_dict) we would get a Python dict with: { 'username': 'john', 'password': 'secret', 'email': 'john.doe@example.com', 'full_name': None, } Unwrapping a dict¶ If we take a dict like user_dict and pass it to a function (or class) with **user_dict, Python will "unwrap" it. It will pass the keys and values of the user_dict directly as key-value arguments. So, continuing with the user_dict from above, writing: UserInDB(**user_dict) Would result in something equivalent to: UserInDB( username="john", password="secret", email="john.doe@example.com", full_name=None, ) Or more exactly, using user_dict directly, with whatever contents it might have in the future: UserInDB( username = user_dict["username"], password = user_dict["password"], email = user_dict["email"], full_name = user_dict["full_name"], ) A Pydantic model from the contents of another¶ As in the example above we got user_dict from user_in.dict(), this code: user_dict = user_in.dict() UserInDB(**user_dict) would be equivalent to: UserInDB(**user_in.dict()) ...because user_in.dict() is a dict, and then we make Python "unwrap" it by passing it to UserInDB prepended with **. So, we get a Pydantic model from the data in another Pydantic model. Unwrapping a dict and extra keywords¶ And then adding the extra keyword argument hashed_password=hashed_password, like in: UserInDB(**user_in.dict(), hashed_password=hashed_password) ...ends up being like: UserInDB( username = user_dict["username"], password = user_dict["password"], email = user_dict["email"], full_name = user_dict["full_name"], hashed_password = hashed_password, ) Warning The supporting additional functions are just to demo a possible flow of the data, but they of course are not providing any real security. Reduce duplication¶ Reducing code duplication is one of the core ideas in FastAPI. As code duplication increments the chances of bugs, security issues, code desynchronization issues (when you update in one place but not in the others), etc. And these models are all sharing a lot of the data and duplicating attribute names and types. We could do better. We can declare a UserBase model that serves as a base for our other models. And then we can make subclasses of that model that inherit its attributes (type declarations, validation, etc). All the data conversion, validation, documentation, etc. will still work as normally. That way, we can declare just the differences between the models (with plaintext password, with hashed_password and without password): from typing import Optional from fastapi import FastAPI from pydantic import BaseModel, EmailStr app = FastAPI() class UserBase(BaseModel): username: str email: EmailStr full_name: Optional[str] = None class UserIn(UserBase): password: str class UserOut(UserBase): pass class UserInDB(UserBase): hashed_password: str Union or anyOf¶ You can declare a response to be the Union of two types, that means, that the response would be any of the two. It will be defined in OpenAPI with anyOf. To do that, use the standard Python type hint typing.Union: Note When defining a Union, include the most specific type first, followed by the less specific type. In the example below, the more specific PlaneItem comes before CarItem in Union[PlaneItem, CarItem]. from typing import Union from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class BaseItem(BaseModel): description: str type: str class CarItem(BaseItem): type = "car" class PlaneItem(BaseItem): type = "plane" size: int items = { "item1": {"description": "All my friends drive a low rider", "type": "car"}, "item2": { "description": "Music is my aeroplane, it's my aeroplane", "type": "plane", "size": 5, }, } @app.get("/items/{item_id}", response_model=Union[PlaneItem, CarItem]) async def read_item(item_id: str): return items[item_id] List of models¶ The same way, you can declare responses of lists of objects. For that, use the standard Python typing.List: from typing import List from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str items = [ {"name": "Foo", "description": "There comes my hero"}, {"name": "Red", "description": "It's my aeroplane"}, ] @app.get("/items/", response_model=List[Item]) async def read_items(): return items Response with arbitrary dict¶ You can also declare a response using a plain arbitrary dict, declaring just the type of the keys and values, without using a Pydantic model. This is useful if you don't know the valid field/attribute names (that would be needed for a Pydantic model) beforehand. In this case, you can use typing.Dict: from typing import Dict from fastapi import FastAPI app = FastAPI() @app.get("/keyword-weights/", response_model=Dict[str, float]) async def read_keyword_weights(): return {"foo": 2.3, "bar": 3.4} Recap¶ Use multiple Pydantic models and inherit freely for each case. You don't need to have a single data model per entity if that entity must be able to have different "states". As the case with the user "entity" with a state including password_hash and no password.
https://fastapi.tiangolo.com/tr/tutorial/extra-models/
CC-MAIN-2021-17
en
refinedweb
Im having problems with getting 2 of my functions too loop to each other. so i made 2 functions and at the end i want them each to go back to the other one but i keep getting this error: error: use of undeclared identifier 'b' i understand what the problem is but i haven't found a solution to it can somebody please help me figure it out? When C++ builds, you are trying to reference a function that doesn't exist yet. Because C++ is compiled instead of interpreted, it scans through the function before running it. Because of this, you cannot make a recursive function loop in a compiled language. Try doing this in Python. After realizing that my explanation was difficult to understand Ok. Let's try this again. Imagine you were the C++ compiler, you would read the .cpp file line-by-line. Look at this code #include <iostream> // Use iostream void a() { // Starts defining a function std:cout << "test"; // Print "test" b(); // Calls unknown function! AHH! // Raise error and stop If this were Python code, it would be different. def a(): # Starts defining function print("test") # I don't care, not being run b() # I don't care, not being run def b(): # Starts defining function print("test") # IDC, not being run a() # IDC, not being run a() # Starts function, sees that both are defined, and runs CPP and Python handle defining functions diferently. If this were JS, it would create all the variables (functions are variables, technically) first, and then run the code. I hope that explanation was a little more read-able :P but of course, since C++ is compiled, all functions are in fact already declared. you just need to do a forward declaration in order to say "hey, this exists!" int add(int, int); // hey, I exist! void addloop(int x, int y) { add(x,y); } int add(int x, int y) { // this is how I work! int z = x + y; addloop(z,y); } I guess that's a good point, but still they both parse functions differently. @xxpertHacker What you're looking for are forward declarations. Normally, if an undefined/undeclared indentifier is parsed, the C++ parser stops immediately with a fatal error. (identifiers are variable names, parameter names, function names, or type names) Forward declarations declare it early, allowing code to be parsed correctly, and allowing early type checking. Example: The early defined type is known as a "function prototype," whereas the actual function with it's body is known as the "function declaration." Hopefully it helps. @xxpertHacker yes this did help me thank you so much!
https://replit.com/talk/ask/Im-having-problems-with-getting-2-of-my-functions-too-loop-to-each-other/131059
CC-MAIN-2021-17
en
refinedweb
#include <SIM_Constraint.h> The intention of this class is to act as a flexible container for constraint data. The individual components of the container (the relationship, the anchors, etc.) can each be switched at need, without affecting the other components. Definition at line 22 of file SIM_Constraint. Definition at line 49 of file SIM_Constraint.h.
https://www.sidefx.com/docs/hdk/class_s_i_m___constraint.html
CC-MAIN-2021-17
en
refinedweb
TestSpeed.h Example Filedemos/spectrum/3rdparty/fftreal/TestSpeed.h /***************************************************************************** TestSpeedSpeed_HEADER_INCLUDED) #define TestSpeed_HEADER_INCLUDED #if defined (_MSC_VER) #pragma once #pragma warning (4 : 4250) // "Inherits via dominance." #endif /*\\\ INCLUDE FILES \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\*/ template <class FO> class TestSpeed { /*\\\ PUBLIC \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\*/ public: typedef typename FO::DataType DataType;_SPD_TESTS = 10 * 1000 * 1000 }; enum { MAX_NBR_TESTS = 10000 }; /*\\\ FORBIDDEN MEMBER FUNCTIONS \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\*/ private: TestSpeed (); ~TestSpeed (); TestSpeed (const TestSpeed &other); TestSpeed & operator = (const TestSpeed &other); bool operator == (const TestSpeed &other); bool operator != (const TestSpeed &other); }; // class TestSpeed #include "TestSpeed.hpp" #endif // TestSpeed.
https://doc.qt.io/archives/qt-4.8/qt-demos-spectrum-3rdparty-fftreal-testspeed-h.html
CC-MAIN-2021-17
en
refinedweb
Create shareable, secured URLs (deep links) Learn how to create shareable, secured URLs to forms and records. Overview The URL Generator enables developers to create shareable and secured URLs (also known as deep links) to specific forms that are root navigable. An optional data context can be passed to the form to display filtered or specific data when the form is opened. The URL Generator enables scenarios such as embedding links in reports, email, and external applications, enabling users to quickly and easily locate the specified forms or data by simply navigating using the generated link. Purpose - Empower developers to generate URLs that can be used to navigate to root navigable forms in a specified instance. - Empower developers to optionally specify a data context that should be displayed when navigating to the specified form. - Empower users to share, save, and access the generated URLs from any browser with Internet access. - Secure the URLs to prevent unauthorized access to the system, forms, or data. - Secure the URLs to prevent exposure of sensitive data or tampering. Security Site access Access to the domain/client is controlled through the existing login and SSL mechanism. Form access Access to forms is controlled through Menu items, as Menu items are the entry points where security is enforced. If a user navigates using a URL that contains a Menu item that the user does not have access to, then the Menu item security will prevent the form from opening. The user will receive a message indicating that they do not have the necessary permissions to open the form. Note that deep links will only work for Menu items that allow root navigation. Data access Access to data is controlled through the existing form-level queries. When a form is opened with a generated URL, the form will run its existing form-level queries, which restrict the user's access to data. The data context that is specified in the generated URL is consumed after these form-level queries are applied, and results only in further filtering of the data displayed to the user. In short, a generated URL can, at most, open a form and display all of the data that a form would display to the user based on the form-level queries. A generated URL cannot grant a user access to data that is otherwise inaccessible on the form when not using the generated URL. Usage The URL Generator is a .NET library that is accessible from X++, under the following namespace. Microsoft.Dynamics.AX.Framework.Utilities.UrlHelper.UrlGenerator Requirements The URL Generator must be used from code running on the AOS, in an active user session or batch process. This requirement ensures that the URL can be secured through encryption specific to the instance that generates the URL. At a minimum, the following information must be specified and passed to the URL Generator in order to generate a working URL. - Host URL - The URL of the web root for the instance. For example: - AOT name of the Menu Item Display - The menu item display to be used to open the form. - Partition - The partition to use for the request. - Company - The company to use for the request. Example // gets the generator instance var generator = new Microsoft.Dynamics.AX.Framework.Utilities.UrlHelper.UrlGenerator(); var currentHost = new System.Uri(UrlUtility::getUrl()); generator.HostUrl = currentHost.GetLeftPart(System.UriPartial::Authority); generator.Company = curext(); generator.MenuItemName = <menu item name>; generator.Partition = getCurrentPartition(); // repeat this segment for each datasource to filter var requestQueryParameterCollection = generator.RequestQueryParameterCollection; requestQueryParameterCollection.AddRequestQueryParameter( <datasource name>, <field1>, <value1>, <field2>, <value2>, <field3>, <value3>, <field4>, <value4>, <field5>, <value5> ); System.Uri fullURI = generator.GenerateFullUrl(); // to get the encoded URI, use the following code fullURI.AbsoluteUri Note Can you tell us about your documentation language preferences? Take a short survey. The survey will take about seven minutes. No personal data is collected (privacy statement).
https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/user-interface/create-deep-links
CC-MAIN-2021-17
en
refinedweb