text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
#include <stdinc.h> stream stropen(string filename, string mode) void strclose(stream str) void strdelete(stream str, bool scratch) string strname(stream str) bool strseek(stream str) Descriptionstropen() opens a file by filename and associates a stream with it, much like fopen(3) does. It has a few additional features: (1) existing files cannot be opened for writing unless mode=w! (mode=a! is also permitted), (2) names of form - map to stdin/stdout, depending on mode, (3) names of the form -num for some small num set up a stream to read/write file descriptor num. (4) With mode=s a file is opened in write-scratch mode. Whenever strclose is called, the file is also deleted. When the filename does not start with a "/", a unique temporary filename is automatically created. (5) Output file with the name "." (dot) are equivalent to a bit sink (/dev/null). (6) Input files that look like a URL (http://.., ftp://.. etc) are opened with popen(3) and data directly passed back to the client Note that fopen(3) itself officially recognizes the following modes:r, w, a, r+, w+, a+. strclose() closes a stream, which is the recommended practice within NEMO (formally, an exit from the program using exit(3) also closes all open files properly). Each opened stream uses additional space in internal tables that are free'd when strclose is called. strdelete() deletes the file associated with the stream str previously opened with stropen. If scratch is set TRUE it will always delete the file, if set to FALSE the file must have been opened in scratch mode to be deleted. This routine also clears the internal filetable, that was used when stropen was called. strname() returns the name of the file that had been opened by stropen. Note it returns a pointer to an internal static table, and should not be overwritten. See also scopy(3NEMO). strseek returns seekability of a stream. This is primarely useful for filestruct, which might need to know if stream i/o can be optimized with deferred input. CaveatsFiles that are given as URLs can easily cause confusion, because a malformed or mistyped URL can give either no output or whatever the server decides to return on non-existing names. Since this is often a webpage with an error message, perfectly legal output, the client on the NEMO side will get no error message. % tsf ### Fatal error [tsf]: gethdr: bad magic: 20474
~/src/kernel/io stropen.c, filesecret.c
23-jul-90 created Josh 5-oct-90 added strdelete and strname; man page written PJT 1-mar-91 fixed bug in stropen - improved doc PJT 19-may-92 added strseek - fixed verbosity in strdelete PJT 5-nov-93 added special "." filename mode for /dev/null pjt 22-mar-00 scratch files cannot exist, otherwise error pjt 9-dec-05 add simple ability to grab URL-based files PJT
Table of Contents | http://bima.astro.umd.edu/nemo/man_html/strdelete.3.html | crawl-001 | en | refinedweb |
2.3.Mapping JPAQL/HQL queries. Mapping JPAQL/HQL queries
You can map EJBQL/HQL queries using annotations.
@NamedQuery and
@NamedQueries can be defined at the class level or in a JPA XML file. However their definitions are global to the session factory/entity manager factory scope. A named query is defined by its name and the actual query string.
<entity-mappings> <named-query <query>select p from Plane p</query> </named-query> ... </entity-mappings> ... @Entity @NamedQuery(name="night.moreRecentThan", query="select n from Night n where n.date >= :date") public class Night { ... } public class MyDao { doStuff() { Query q = s.getNamedQuery("night.moreRecentThan"); q.setDate( "date", aMonthAgo ); List results = q.list(); ... } ... }
You can also provide some hints to a query through an array of
QueryHint through a
hints attribute.
The availabe Hibernate hints are | http://www.redhat.com/docs/manuals/jboss/jboss-eap-4.3/doc/hibernate/Annotations_Reference_Guide/Mapping_Queries-Mapping_JPAQLHQL_queries.html | crawl-001 | en | refinedweb |
2.4.5. Single Association related annotations.
Foreign key constraints, while generated by Hibernate, have a fairly unreadable name. You can override the constraint name by use
@ForeignKey.
@Entity public class Child { ... @ManyToOne @ForeignKey(name="FK_PARENT") public Parent getParent() { ... } ... } alter table Child add constraint FK_PARENT foreign key (parent_id) references Parent | http://www.redhat.com/docs/manuals/jboss/jboss-eap-4.3/doc/hibernate/Annotations_Reference_Guide/Hibernate_Annotation_Extensions-Single_Association_related_annotations.html | crawl-001 | en | refinedweb |
5.4.1. Built-in bridges
Hibernate Search comes bundled with a set of built-in bridges between a Java property type and its full text representation.
Null elements are not indexed (Lucene does not support null elements and it does not make much sense either)
null elements are not indexed. Lucene does not support null elements and this does not make much sense either.
String are indexed as is
Numbers are converted in their String representation. Note that numbers cannot be compared by Lucene (ie used in ranged queries) out of the box: they have to be padded [1] DateRange Query, you should know that the dates have to be expressed in GMT time.
Usually, storing the date up to the milisecond is not necessary.
@DateBridge defines the appropriate resolution you are willing to store in the index (
). The date pattern will then be truncated accordingly.
@DateBridge(resolution=Resolution.DAY)
@Entity @Indexed public class Meeting { @Field(index=Index.UN_TOKENIZED) @DateBridge(resolution=Resolution.MINUTE) private Date date; ... }
A Date whose resolution is lower than
MILLISECOND cannot be a
@DocumentId | http://www.redhat.com/docs/manuals/jboss/jboss-eap-4.3/doc/hibernate/Annotations_Reference_Guide/PropertyField_Bridge-Built_in_bridges.html | crawl-001 | en | refinedweb |
pxdom is a W3C DOM Level 3 Core/XML/Load/Save implementation with Python and OMG (_get/_set) bindings. All features in the November 2003 Candidate Recommendations are supported, with the following exceptions:
Additionally, Unicode encodings are only supported on Python 1.6 and later, and Unicode character normalisation features are only available on Python 2.3 and later.
Copy pxdom.py into any folder in your Python path, for example /usr/lib/python/site-packages or C:\Python23\Lib\site-packages.
pxdom can also be included and imported as a submodule of another package. This is a good strategy if you wish to distribute a DOM-based application without having to worry about the version of Python or other XML tools installed.
The only dependencies are the standard library
string,
StringIO,
urllib and
urlparse modules.
The pxdom module implements the DOMImplementationSource interface from DOM Level 3 Core. So to parse a document from a file, use eg.:
import pxdom
dom= pxdom.getImplementation('')
parser= dom.createLSParser(dom.MODE_SYNCHRONOUS, None)
doc= parser.parseURI('')
For more on using DOM Level 3 Load to create documents from various sources, see the DOM Level 3 Load/Save.config mapping. By default, according to the DOM 3 spec, all bound entity references will be replaced by the contents of the entity referred to, and all CDATA sections will be replaced with plain text nodes.
If you use the
parse/
parseString functions,
pxdom will set the parameter ‘cdata-sections’ to
True,
allowing CDATA sections to stay in the document. This is to emulate the behaviour of
minidom.
If you prefer to receive entity reference nodes too, set the ‘entities’ parameter to a true value. For example:
parser= dom.createLSParser(dom.MODE_SYNCHRONOUS, None)
parser.config.)
pxdom supports a few features which aren’t available in the DOM standard. Their names are always prefixed with ‘pxdom’. (ie. attribute nodes are not). The document’s
domConfig
is used to set parameters for parse and serialise operations invoked by pxdomContent.
pxdomContent is a replacement for the ElementLS.markupContent property that was in earlier Working Drafts of the DOM 3 LS spec.
pxdom is a non-validating, non-external-entity-including DOM implementation. However, it is possible that future versions may support external entities. If this is implemented, it will be turned on by default in new LSParser objects.
If you wish to be sure external entities will never be used in future versions of
pxdom, set the LSParser.config parameter ‘pxdom-resolve-resources’
to a false value. Alternatively, use the
parse/
parseString
functions, which will never resolve external entities (as minidom does not).
In order to support the feature Text.isElementContentWhitespace, pxdom must know the content model of the particular element that contains the text node. Often this is only defined in the DTD external subset, which pxdom doesn’t read.
Normally pxdom will (as per spec) guess that elements with unknown content models
do not contain ‘element content’ — so Text.isElementContentWhitespace
will always return
False for elements not defined in the internal
subset. However, if the DOMConfiguration parameter ‘pxdom-assume-element-content’
is set to a true value, it will guess that unknown elements do contain element content,
and so whitespace nodes inside them will be ‘element content whitespace’
(aka.config.setParameter('element-content-whitespace', 0)
parser.config.setParameter('pxdom-assume-element-content', 1)
doc= parser.parse('')
In addition to the DocumentType NamedNodeMaps ‘entities’ and ‘notations’, pxdom includes maps for the other two types of declaration that might occur in the DTD internal subset. They can be read to get more information on the content models than the schemaTypeInfo interface makes available.
pxdomElements is a NamedNodeMap of element content declaration nodes (as created by the
<!ELEMENT> declaration). ElementDeclaration
nodes have an integer contentType property with enum keys EMPTY_CONTENT, ANY_CONTENT,
MIXED_CONTENT and ELEMENT_CONTENT. In the case of
mixed and element content, the
elements property gives more information on the
child elements allowed.
pxdomAttlists is a NamedNodeMap of elements’ declared attribute lists (as created by the
<!ATTLIST> declaration). AttributeListDeclarations hold a
NamedNodeMap in their
declarations property of attribute
names toATIONs and NOTATIONs, the typeValues property holds a list of possible string values.
There is also an integer defaultType property with enum keys REQUIRED_VALUE, IMPLIED_VALUE,
DEFAULT_VALUE and FIXED_VALUE. In the case of FIXED and DEFAULT, the
childNodes
property holds any Text and/or EntityReference nodes that make up the default Andrew Clover may not. | http://www.doxdesk.com/file/software/py/v/pxdom-0.9.html | crawl-001 | en | refinedweb |
java.lang.NOSuchMethodError: main Exception in thread “main”
There are lot of errors we have to face while programming. today we are going to know about this error mentioned below and also we will come to know that how to solve this kind of error
Reason behind the java.lang.NOSuchMethodError: main Exception in thread “main”:
Most of the new java programmers are known to this error message . many of them asked me actually what are the reasons behind this error and how can they resolve it.
So before going to resolve it we would suggest you to understand the reason behind java.lang.NoSuchMethodError: main Exception in thread “main” this error. If you understand the reason I hope you could resolve it yourself.
We use the command “java” to run a .java file . When you use the java command the command loads the class you mentioned in your program and then start searching for the main method or in easy way we can say that the compiler start looking for a method that usually looks like this :
public class anything { ... public static void main (String[] args) { //body of main method ...... } }
So the main reason of this error is when you tried to compile your program but the code starts to call a method but the method is not available there or not existed on the class and the method might be static or non-static that does not matter at all.
On the above program the entry method is the main method which must have some minimum requirements . I would like to explain the requirements .
#1. The method must be in a class which is nominated before . (here the class is “anything”)
#2. We must have to make the main method public.
#3. We must have to make the main method static too .
#4. Return value must be a null value that means we have to make the return type void .
#5. We cant use more than one argument in this method . we need exactly one argument and the argument type must be String[] type
if any of the above requirements is not satisfied then we will find this error
java.lang.NoSuchMethodError: main Exception in thread “main”
or we can say this there are two main reason
1. The class which we are trying to run , there is no main method
2. there is incorrect signature of the main method .
lets see one example to understand it in a better way .
public class anything { ... public static void main (String args) { //body of main method ...... } }
now look at the code carefully we did not put [] this . if we compile the code. we will get this error message
java.lang.NoSuchMethodError: main Exception in thread “main” | https://www.codespeedy.com/java-lang-nosuchmethoderror-main-exception-in-thread-main/ | CC-MAIN-2018-51 | en | refinedweb |
Hi, i’m trying to use $http with .map() later … and then then console output .map() is not a function.
Project is a ionic v1.x proyect developed in ionic creator.
I tried to use:
import { Observable } from ‘rxjs/Rx’;
import ‘rxjs/add/operator/map’;
in my services.js but doesn’t work. Does anyone found a solution for this? Thanks
Maybe I have to put something in “Code Settings” -> Angular modules … but I don’t know… Could someone give me a hand? thanks. | https://forum.ionicframework.com/t/map-is-not-a-function/93977 | CC-MAIN-2018-51 | en | refinedweb |
Creating windows with AIR is an extremely simple process. There are two different types of AIR application windows. The NativeWindow is a lightweight window class that falls under the flash class path and can only add children that are under the flash class path. The mx:Window component is a full window component that falls under the mx namespace and therefore can include any component under the mx namespace.
Since NativeWindow falls under the flash.display package, it can be used in any Flex, Flash, or HTML AIR project. NativeWindows have many properties that can alter their functionality and look. The following example will create a basic NativeWindow and build on the same file, creating different versions of the NativeWindow.
Start by creating a new AIR project named Chapter9_NW, which will create a new application file named Chapter9_NW.mxml. This will look like Listing 9-1.
Now add a new script block by adding the code from Listing 9-2. This function will create a default new NativeWindow object by passing a new NativeWindowInitOptions() object into the constructor. Next the title, width, and height are set. A new TextField is created and added to the NativeWindow by calling the stage.addChild() method. The contents of the NativeWindow ...
No credit card required | https://www.oreilly.com/library/view/beginning-adobe-airtm/9780470229040/9780470229040_creating_windows.html | CC-MAIN-2018-51 | en | refinedweb |
Java Garbage Collection
I was trying to find the best information on java garbage collection and why java does not support destructors. Almost read 100 of blogs and articles on java garbage collector and destructors method in java. I have found to the point answer but I was looking for the process and was looking for the exact method by which the JVM performs this task.
so today In this article I am going to share my knowledge to make understand others easily so that they can make sense on java garbage collection. This blog will help you to understand what is going in background when java garbage collection is being processed.
So before going to the process I think you might know some several things I am going to describe below. But if you are well known to these things then you might skip these.
One of the major difference between c++ and java :
Both are high level and object oriented programming language but the major difference is Java does not have destructor element as c++ have. Instead of destructor java uses Garbage collector to clear off unused memory automatically.
We may have a lot of unused objects. But we human are always in search of better memory management technique . To save up our memory we were using free() function in c language and delete() function in c++ .
The main advantage of Java is here, In Java this task is done automatically by the java which is known as Java Garbage Collector. So we can always say that Java provides us a better memory management as we don’t have to make extra effort on creating functions to delete unused or unreferenced objects .
so till now I know you guys are known to the fact what is the purpose of java garbage collector and why java does not have destructor (as java has its own java garbage collection so no need to have destructor as c and c++ have) .
But I am sorry to say that you need to know more in details otherwise you may have wrong conception on this topic. ( like we can use function to free memory in java too like c++ )
so I am going deeper to clear all the questions that might be appeared on your mind .
What is Destructor:
Objects are born with a life cycle. when an object’s life cycle is over , a special method is called in order to clear off the memory and de-allocate the resources. This method is known as destructor. It is also called manual memory management . destructor is used in order to avoid memory leaks.
The destructor is used by the programmer and there are several rules of using a destructor .
Rule 1. The class name and the destructor name must be same.
Rule 2. There must not be arguments.
Rule 3. The destructor must not have any return type.
So it is clear that we have to make extra effort here to clean up the memory .
Concept Of garbage collection/collector :
As you saw on the above few lines we should manually implement destructor and it is done by the programmer But in case of java there is a concept of garbage collection where JVM automatically performs the task of cleaning up the unused memory.
It is a program that actually runs on Java Virtual Machine. This program delete the objects that are not used anymore in the future or delete the objects that are not accessible from the code any more. (these types of objects are also sometime called unreferenced objects ) . This runs automatically on JVM and checks periodically if there is any object which is unreferenced in the memory heap. If any unreferenced object is found in the memory heap that signifies that the object is totally useless and will never be used in future , so the garbage collector gets rid of the object and frees up the memory which was allocated by the unreferenced object.
So there are two main basic principle of garbage collection.
#1. To find unreferenced objects that can not be used in the future anymore
#2. To clean up the memory and reclaim the resources used by the object .
You might now thinking that how java finds the unreferenced objects?
It is very interesting to know that actually java does not find unreferenced objects , java actually finds all the referenced objects in a program and the mark rest of the objects in that program as unreferenced objects. (cool way of finding unreferenced objects in a java program )
Those who are interested in Using Garbage Collection manually can use the below parameters to use different types of Garbage Collection.
1. The serial collector:
-XX:+UseSerialGC
2. Parallel Collector:
-XX:+UseParallelGC
There are more types we will discuss those in advance garbage collector in later.
How an object is become unreferenced ?
#1. Nulling a reference
Student e=new Student(); e=null;
#2. Assigning a reference object to another
Student r1=new Student(); Student r2=new Student(); r1=r2;//now the first object referred by r1 is available for garbage collection
We programmer can use finalize() method . This method is used to clean up memory in java.
This method is used in the below format
protected void finalize(){}
We can create an object using new keyword and we can create an object without the new keyword too. But in JVM garbage collector only collect those objects which are created by using new keyword so In case if you have created an object without using the new keyword than you have to use this finalize() method to use clean up process manually.
Now I think you have enough knowledge on garbage collection in java . So now we can take a look at simple java garbage collection example by coding .
public class MyGarbage{ public void finalize(){ System.out.println("Garbage collected successfully"); } public static void main(String args[]){ MyGarbage s1=new MyGarbage(); MyGarbage s2=new MyGarbage(); s1=null; s2=null; System.gc(); } }
the gc() method is also used to clean up the memory and it is used in system and runtime classes.
Output:
Garbage collected successfully Garbage collected successfully
Reading this blog I hope you are sure that there are lot of advantages of java garbage collector.
But there is an another side of this coin.
like everything this garbage collection too have some disadvantages.
#1. Java Garbage Collector runs on its own thread still it can make effect on performance.
#2. Since Java Garbage Collector has to keep constant tracking on the objects which are not referenced It adds overhead.
#3. Java Garbage Collector needs some amount of resource to identify which memory need to be freed and which not.
#4. Its almost impossible to predict how much time can be taken to collect garbage by the JVM.
Hope you enjoyed learning. | https://www.codespeedy.com/java-garbage-collection/ | CC-MAIN-2018-51 | en | refinedweb |
The basic abstraction for the target C++ ABI. More...
#include "clang/Basic/TargetCXXABI.h"
The basic abstraction for the target C++ ABI.
Definition at line 24 of file TargetCXXABI.h.
The basic C++ ABI kind.
Definition at line 27 of file TargetCXXABI.h.
When is record layout allowed to allocate objects in the tail padding of a base class?
This decision cannot be changed without breaking platform ABI compatibility, and yet it is tied to language guarantees which the committee has so far seen fit to strengthen no less than three separate times:
Definition at line 293 of file TargetCXXABI.h.
A bogus initialization of the platform ABI.
Definition at line 124 of file TargetCXXABI.h.
Definition at line 126 of file TargetCXXABI.h.
Are arguments to a call destroyed left to right in the callee? This is a fundamental language change, since it implies that objects passed by value do not live to the end of the full expression.
Temporaries passed to a function taking a const reference live to the end of the full expression as usual. Both the caller and the callee must have access to the destructor, while only the caller needs the destructor if this is false.
Definition at line 209 of file TargetCXXABI.h.
References isMicrosoft().
Referenced by canEmitDelegateCallArgs().
Are member functions differently aligned?
Many Itanium-style C++ ABIs require member functions to be aligned, so that a pointer to such a function is guaranteed to have a zero in the least significant bit, so that pointers to member functions can use that bit to distinguish between virtual and non-virtual functions. However, some Itanium-style C++ ABIs differentiate between virtual and non-virtual functions via other means, and consequently don't require that member functions be aligned.
Definition at line 181 of file TargetCXXABI.h.
References GenericAArch64, GenericARM, GenericItanium, GenericMIPS, getKind(), iOS, iOS64, Microsoft, WatchOS, and WebAssembly.
Can an out-of-line inline function serve as a key function?
This flag is only useful in ABIs where type data (for example, vtables and type_info objects) are emitted only after processing the definition of a special "key" virtual function. (This is safe because the ODR requires that every virtual function be defined somewhere in a program.) This usually permits such data to be emitted in only a single object file, as opposed to redundantly in every object file that requires it.
One simple and common definition of "key function" is the first virtual function in the class definition which is not defined there. This rule works very well when that function has a non-inline definition in some non-header file. Unfortunately, when that function is defined inline, this rule requires the type data to be emitted weakly, as if there were no key function.
The ARM ABI observes that the ODR provides an additional guarantee: a virtual function is always ODR-used, so if it is defined inline, that definition must appear in every translation unit that defines the class. Therefore, there is no reason to allow such functions to serve as key functions.
Because this changes the rules for emitting type data, it can cause type data to be emitted with both weak and strong linkage, which is not allowed on all platforms. Therefore, exploiting this observation requires an ABI break and cannot be done on a generic Itanium platform.
Definition at line 259 of file TargetCXXABI.h.
References GenericAArch64, GenericARM, GenericItanium, GenericMIPS, getKind(), iOS, iOS64, Microsoft, WatchOS, and WebAssembly.
Referenced by computeKeyFunction().
Definition at line 132 of file TargetCXXABI.h.
Referenced by areMemberFunctionsAligned(), canKeyFunctionBeInline(), createCXXABI(), clang::CodeGen::CreateItaniumCXXABI(), clang::ASTContext::getCommentForDecl(), getTailPaddingUseRules(), isItaniumFamily(), and isMicrosoft().
Definition at line 308 of file TargetCXXABI.h.
References AlwaysUseTailPadding, GenericAArch64, GenericARM, GenericItanium, GenericMIPS, getKind(), iOS, iOS64, Microsoft, UseTailPaddingUnlessPOD03, UseTailPaddingUnlessPOD11, WatchOS, and WebAssembly.
Referenced by mustSkipTailPadding().
Does this ABI have different entrypoints for complete-object and base-subobject constructors?
Definition at line 215 of file TargetCXXABI.h.
References isItaniumFamily().
Referenced by clang::CodeGen::CGCXXABI::EmitCtorCompleteObjectHandler(), clang::CodeGen::CodeGenModule::getMangledName(), and clang::CodeGen::CodeGenTypes::inheritingCtorHasParams().
Does this ABI use key functions? If so, class data such as the vtable is emitted with strong linkage by the TU containing the key function.
Definition at line 227 of file TargetCXXABI.h.
References isItaniumFamily().
Does this ABI allow virtual bases to be primary base classes?
Definition at line 220 of file TargetCXXABI.h.
References isItaniumFamily().
Does this ABI generally fall into the Itanium family of ABIs?
Definition at line 135 of file TargetCXXABI.h.
References GenericAArch64, GenericARM, GenericItanium, GenericMIPS, getKind(), iOS, iOS64, Microsoft, WatchOS, and WebAssembly.
Referenced by hasConstructorVariants(), hasKeyFunctions(), and hasPrimaryVBases().
Is this ABI an MSVC-compatible ABI?
Definition at line 154 of file TargetCXXABI.h.
References GenericAArch64, GenericARM, GenericItanium, GenericMIPS, getKind(), iOS, iOS64, Microsoft, WatchOS, and WebAssembly.
Referenced by areArgsDestroyedLeftToRightInCallee(), BuildAppleKextVirtualCall(), checkForMultipleExportedDefaultConstructors(), clang::CodeGen::CodeGenFunction::EmitAutoVarAlloca(), clang::CodeGen::CodeGenModule::GetAddrOfFunction(), clang::CodeGen::CodeGenModule::getFunctionLinkage(), clang::Sema::InstantiateClassMembers(), clang::Type::isIncompleteType(), isMsLayout(), clang::ASTContext::isMSStaticDataMemberInlineDefinition(), isVarDeclStrongDefinition(), clang::CodeGen::CodeGenVTables::isVTableExternal(), clang::Sema::PerformImplicitConversion(), shouldEmitVTableThunk(), TryReinterpretCast(), and TryStaticMemberPointerUpcast().
Definition at line 128 of file TargetCXXABI.h.
Referenced by clang::targets::AArch64TargetInfo::AArch64TargetInfo(), clang::targets::ARMTargetInfo::ARMTargetInfo(), clang::targets::DarwinAArch64TargetInfo::DarwinAArch64TargetInfo(), clang::targets::DarwinARMTargetInfo::DarwinARMTargetInfo(), clang::targets::ItaniumWindowsARMleTargetInfo::ItaniumWindowsARMleTargetInfo(), clang::targets::MicrosoftARM64TargetInfo::MicrosoftARM64TargetInfo(), clang::targets::MicrosoftARMleTargetInfo::MicrosoftARMleTargetInfo(), clang::targets::MinGWARM64TargetInfo::MinGWARM64TargetInfo(), clang::targets::MinGWARMTargetInfo::MinGWARMTargetInfo(), and clang::TargetInfo::TargetInfo().
Definition at line 339 of file TargetCXXABI.h.
Definition at line 335 of file TargetCXXABI.h. | https://clang.llvm.org/doxygen/classclang_1_1TargetCXXABI.html | CC-MAIN-2018-51 | en | refinedweb |
Implements Graded reverse lexicographic order. More...
#include <drake/common/symbolic_monomial_util.h>
Implements Graded reverse lexicographic order.
We first compare the total degree of the monomial; if there is a tie, then we use the lexicographical order as the tie breaker, but a monomial with higher order in lexicographical order is considered lower order in graded reverse lexicographical order.
Take MonomialBasis({x, y, z}, 2) as an example, with the order x > y > z. To get the graded reverse lexicographical order, we take the following steps:
First find all the monomials using the total degree. The monomials with degree 2 are {x^2, y^2, z^2, xy, xz, yz}. The monomials with degree 1 are {x, y, z}, and the monomials with degree 0 is {1}. To break the tie between monomials with the same total degree, first sort them in the reverse lexicographical order, namely x < y < z in the reverse lexicographical order. The lexicographical order compares two monomial by first comparing the exponent of the largest variable, if there is a tie then go forth to the second largest variable. Thus z^2 > zy >zx > y^2 > yx > x^2. Finally reverse the order as x^2 > xy > y^2 > xz > yz > z^2.
There is an introduction to monomial order in, and an introduction to graded reverse lexicographical order in
Returns true if m1 > m2 under the Graded reverse lexicographic order. | https://drake.mit.edu/doxygen_cxx/structdrake_1_1symbolic_1_1_graded_reverse_lex_order.html | CC-MAIN-2018-51 | en | refinedweb |
In this article, I intend to create a
GraphSheet control in .NET, using GDI+. It's a plain simple control using all available features of the
Graphics object. We create the
GraphSheet control as a 'Windows User Control' project. More on the control and its usage in 'Why? When? How?' below...
The code is self explanatory. In this
GraphSheet control, I create a bitmap of the graph sheet with gridlines. I have also created public methods to make it easier to draw points, and to add a set of points to an array which can then be used to draw a line or a curve. There are also public variables to introduce an offset in the X or Y axes or both axes, in the graph. We also create
Boolean flags to enable or disable gridlines, display scale units as text, etc.
Add the control to any Windows project.. and then call its methods and properties like any other control.
Private Sub Form1_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load gs.Xscale_Max = 10 gs.Yscale_Max = 10 gs.Xscale_units = 1 gs.Yscale_units = 1 gs.showBorder = True gs.displayUnits = True gs.fontSize = 6 'gs.MarginX = 10 'gs.MarginY = 40 gs.initializeGraphSheet() gs.AddPoint(1, 1) gs.AddPoint(2, 5) gs.AddPoint(3, 3) gs.AddPoint(5, 5) gs.AddPoint(7, 5) gs.AddPoint(9, 9) gs.AddPoint(7, 9) gs.AddPoint(4, 6) gs.DrawGraph(GraphSheetControl.GraphSheet.PlotType.Curve, _ Color.Blue, False) 'gs.DrawPoint(2, 5, Color.SaddleBrown) End Sub
.NET has definitely made things simple for producing graphics, especially for VB programmers. We had to really be good at mathematics if we had to produce an optimized yet flexible code like this in Visual basic 6.0. One good example is the
DrawCurve method of the
Graphics object which draws a curve through a set of points passed to it as an array. If it were Visual Basic 6.0, it wouldn't have been this simple.
oG.DrawCurve(New Pen(color.red), PointsArray)
.NET though, hasn't made everything easy for the graphics part of things, for VB programmers at least. Creating a Windows control that would draw itself entirely is not easy, and even if done, not feasible to be used in production.
Paintevent is fired, it would have to draw itself. Implementing this causes flicker, continuously firing the event, and even MSDN advises against writing code in paint events for such controls.
Graphicsobject. It seems like drawing can be stored only as an
Image, and have to be restored from an
Image. I preferred using a simple mechanism of creating an
Imageobject and displaying it in a
PictureBoxin this article's code.
Invalidatemethod to make sure your object updates every time there's a change.
Form, a
Panel, etc. But if you use anything other than the
Imageobjects, you will have to redraw on every paint.
Well, we have Chris Maunder commenting on this article. God, where did I get into...was my first expression on the comment. :)
The
GraphSheet control was to make my code more pleasant on a small project, where I needed to plot graphs based on information, dynamically. The image added to this article has a small clipping of the output (right hand side of the image) of this project.
The control did make the final code well maintainable. I had to just modify code in the control to make offsetting, err handling, etc., possible, and the change took effect on all graphs drawn thereafter.
With the
Drawing and
Graphics namespace related code isolated, the project's code clearly showed a sign of big relief. I had to create as many instances of the
GraphSheet control needed, add points from the data to the control (here, we could also modify the control to allow data binding), and draw the graphs. More importantly, I had created logic in the control to size the graph according to the size of the control, and to allow offsetting if necessary.
To add to all this, since it's a control, I could create an instance dynamically and position it on my project.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/GDI-plus/graphsheetcontrol.aspx | crawl-002 | en | refinedweb |
This is the third and final part in my series of articles about my .NET State Machine Toolkit. In Part I, I introduced the classes that make up the toolkit and demonstrated how to create a simple, flat state machine. In Part II, I discussed some of the advanced features of the toolkit and demonstrated how to create a hierarchical state machine. In this part, we will look at how to use code generation to create state machines.
Note: based on feedback, since originally submitting my article from Ramon Smits, I've vastly improved the toolkit's XML support. It now uses XML serialization directly instead of relying on a
DataSet to read and write state machines as XML data. In addition, the XML schema has been greatly simplified. Many thanks to Ramon for his helpful suggestions and providing code to demonstrate his ideas.
Code generation is accomplished through the
StateMachineBuilder class. This class follows the Builder design pattern. With this design pattern, an object is constructed in steps using a builder object. After all the necessary steps have been taken, the builder is instructed to build the object, usually by calling a
Build method. After which the built object can be retrieved and used. This pattern helps break down the complex construction of an object into discrete steps. It also enables you to use the same construction process repeatedly to get different representations. With the
StateMachineBuilder class, you build a CodeDom
CodeNamespace object. The namespace contains an abstract class representing the state machine. This class serves as a base class for the class that you will write.
Originally, the
StateMachineBuilder class used classes from ADO.NET for representing state machine data. Simple
DataTables represented states, events, guards, etc. More complex
DataTables represented state transitions, substate/superstate relationships, and so on. One-to-many relationships were established between the simple tables and the more complex tables. Basically, it was an in-memory relational database for representing state machines. Pretty nifty, or so I thought...
There were problems with this approach. The main one was that I couldn't enforce all of the rules for declaring a hierarchical state machine through data constraints alone. It was possible to enter illegal combinations of values in the tables. For example, you could declare a state to be a substate of one state and a superstate to that same state. Since a state cannot be a substate of one state and a superstate to the same state, this was nonsense. I was trying to make a relational database do the job of a compiler, and it wasn't working. In addition, the XML generated by the
DataSet was overly verbose. A better approach was needed.
Instead of using a large number of
DataTables to create a relational database, the
StateMachineBuilder class now uses four custom classes for keeping track of states, events, guards, actions, transitions, etc. The classes are:
StateRow
StateRowCollection
TransitionRow
TransitionRowCollection
The
StateRow class represents a single state. The
StateRowCollection class represents a collection of
StateRows. You can think of the
StateRowCollection as a table of states. The
TransitionRow and
TransitionRowCollection classes together represent a state's transitions.
Let's look at the
StateRow class first. It has the following properties:
Name
InitialState
HistoryType
Substates
Transitions
The
Name property is the name of the state. The
InitialState property is the state's initial state. If the state does not have any substates, this property is ignored when the state machine is built. The
HistoryType property is the state's history type, obviously. This property is also ignored if the state does not have any substates. These three properties can be thought of as three columns in the state table.
The
Substates property is interesting. It represents a
StateRowCollection object. So if we can think of a collection of
StateRows as belonging to a table, this property is a kind of table within a table.
StateRows can be added to the
Substates property, and in turn those
StateRows can have
StateRows added to their
Substates property, and so on. This forms a tree like structure in which there is a top level of states, states that do not have a superstate, and branches descending from them representing their substates.
The
Transitions property represents a
TransitionRowCollection object. A state's transitions are added to this property. So each
StateRow contains a table of its transitions.
Each of these classes have XML serialization attributes describing how they should be serialized as XML data. In addition, the
StateRowCollection class and the
TransitionRowCollection class can be data bound to a control such as the
DataGrid. This makes it easy to create a GUI front end for creating state machines with the
StateMachineBuilder class.
The
StateMachineBuilder class has a
States property representing a
StateRowCollection object. It is the root of the state hierarchy. Once all of the states and their transitions have been added to the
StateMachineBuilder, the state machine base class can be built. As stated earlier, the result is a CodeDom
CodeNamespace object.
Let's look at some code that uses the
StateMachineBuilder class to build the traffic light state machine described in Part II and display the results:
using System; using System.Data; using System.IO; using System.CodeDom.Compiler; using Microsoft.CSharp; using System.Xml.Serialization; using Sanford.StateMachineToolkit; namespace StateMachineBuilderDemo { class Class1 { [STAThread] static void Main(string[] args) { try { StateMachineBuilder builder = new StateMachineBuilder(); builder.NamespaceName = "StateMachineDemo"; builder.StateMachineName = "TrafficLightBase"; builder.InitialState = "Off"; builder.States.Add("Disposed"); int index = builder.States.Add("Off"); builder.States[index].Transitions.Add("TurnOn", null, "On"); builder.States[index].Transitions.Add("Dispose", null, "Disposed"); index = builder.States.Add("On", "Red", HistoryType.Shallow); builder.States[index].Transitions.Add("TurnOff", null, "Off"); builder.States[index].Transitions.Add("Dispose", null, "Disposed"); StateRowCollection substates = builder.States[index].Substates; index = substates.Add("Red"); substates[index].Transitions.Add("TimerElapsed", null, "Green"); index = substates.Add("Yellow"); substates[index].Transitions.Add("TimerElapsed", null, "Red"); index = substates.Add("Green"); substates[index].Transitions.Add("TimerElapsed", null, "Yellow"); builder.Build(); StringWriter writer = new StringWriter(); CodeDomProvider provider = new CSharpCodeProvider(); ICodeGenerator generator = provider.CreateGenerator(); CodeGeneratorOptions options = new CodeGeneratorOptions(); options.BracingStyle = "C"; generator.GenerateCodeFromNamespace(builder.Result, writer, options); writer.Close(); Console.Read(); } catch(Exception ex) { Console.WriteLine(ex.Message); Console.Read(); } } } }
Here is the generated code:
namespace StateMachineDemo { public abstract class TrafficLightBase : Sanford.StateMachineToolkit.ActiveStateMachine { private Sanford.StateMachineToolkit.State stateDisposed; private Sanford.StateMachineToolkit.State stateOff; private Sanford.StateMachineToolkit.State stateOn; private Sanford.StateMachineToolkit.State stateRed; private Sanford.StateMachineToolkit.State stateYellow; private Sanford.StateMachineToolkit.State stateGreen; public TrafficLightBase() { this.Initialize(); } private void Initialize() { this.InitializeStates(); this.InitializeGuards(); this.InitializeActions(); this.InitializeTransitions(); this.InitializeRelationships(); this.InitializeHistoryTypes(); this.InitializeInitialStates(); this.Initialize(this.stateOff); } private void InitializeStates() { Sanford.StateMachineToolkit.EntryHandler enDisposed = new Sanford.StateMachineToolkit.EntryHandler(this.EntryDisposed); Sanford.StateMachineToolkit.ExitHandler exDisposed = new Sanford.StateMachineToolkit.ExitHandler(this.ExitDisposed); this.stateDisposed = new Sanford.StateMachineToolkit.State( ((int)(StateID.Disposed)), enDisposed, exDisposed); Sanford.StateMachineToolkit.EntryHandler enOff = new Sanford.StateMachineToolkit.EntryHandler(this.EntryOff); Sanford.StateMachineToolkit.ExitHandler exOff = new Sanford.StateMachineToolkit.ExitHandler(this.ExitOff); this.stateOff = new Sanford.StateMachineToolkit.State( ((int)(StateID.Off)), enOff, exOff); Sanford.StateMachineToolkit.EntryHandler enOn = new Sanford.StateMachineToolkit.EntryHandler(this.EntryOn); Sanford.StateMachineToolkit.ExitHandler exOn = new Sanford.StateMachineToolkit.ExitHandler(this.ExitOn); this.stateOn = new Sanford.StateMachineToolkit.State( ((int)(StateID.On)), enOn, exOn); Sanford.StateMachineToolkit.EntryHandler enRed = new Sanford.StateMachineToolkit.EntryHandler(this.EntryRed); Sanford.StateMachineToolkit.ExitHandler exRed = new Sanford.StateMachineToolkit.ExitHandler(this.ExitRed); this.stateRed = new Sanford.StateMachineToolkit.State( ((int)(StateID.Red)), enRed, exRed); Sanford.StateMachineToolkit.EntryHandler enYellow = new Sanford.StateMachineToolkit.EntryHandler(this.EntryYellow); Sanford.StateMachineToolkit.ExitHandler exYellow = new Sanford.StateMachineToolkit.ExitHandler(this.ExitYellow); this.stateYellow = new Sanford.StateMachineToolkit.State( ((int)(StateID.Yellow)), enYellow, exYellow); Sanford.StateMachineToolkit.EntryHandler enGreen = new Sanford.StateMachineToolkit.EntryHandler(this.EntryGreen); Sanford.StateMachineToolkit.ExitHandler exGreen = new Sanford.StateMachineToolkit.ExitHandler(this.ExitGreen); this.stateGreen = new Sanford.StateMachineToolkit.State( ((int)(StateID.Green)), enGreen, exGreen); } private void InitializeGuards() { } private void InitializeActions() { } private void InitializeTransitions() { Sanford.StateMachineToolkit.Transition trans; trans = new Sanford.StateMachineToolkit.Transition(null, this.stateYellow); this.stateGreen.Transitions.Add(((int)(EventID.TimerElapsed)), trans); trans = new Sanford.StateMachineToolkit.Transition(null, this.stateOn); this.stateOff.Transitions.Add(((int)(EventID.TurnOn)), trans); trans = new Sanford.StateMachineToolkit.Transition(null, this.stateDisposed); this.stateOff.Transitions.Add(((int)(EventID.Dispose)), trans); trans = new Sanford.StateMachineToolkit.Transition(null, this.stateOff); this.stateOn.Transitions.Add(((int)(EventID.TurnOff)), trans); trans = new Sanford.StateMachineToolkit.Transition(null, this.stateDisposed); this.stateOn.Transitions.Add(((int)(EventID.Dispose)), trans); trans = new Sanford.StateMachineToolkit.Transition(null, this.stateGreen); this.stateRed.Transitions.Add(((int)(EventID.TimerElapsed)), trans); trans = new Sanford.StateMachineToolkit.Transition(null, this.stateRed); this.stateYellow.Transitions.Add(((int)(EventID.TimerElapsed)), trans); } private void InitializeRelationships() { this.stateOn.Substates.Add(this.stateGreen); this.stateOn.Substates.Add(this.stateRed); this.stateOn.Substates.Add(this.stateYellow); } private void InitializeHistoryTypes() { this.stateDisposed.HistoryType = Sanford.StateMachineToolkit.HistoryType.None; this.stateGreen.HistoryType = Sanford.StateMachineToolkit.HistoryType.None; this.stateOff.HistoryType = Sanford.StateMachineToolkit.HistoryType.None; this.stateOn.HistoryType = Sanford.StateMachineToolkit.HistoryType.Shallow; this.stateRed.HistoryType = Sanford.StateMachineToolkit.HistoryType.None; this.stateYellow.HistoryType = Sanford.StateMachineToolkit.HistoryType.None; } private void InitializeInitialStates() { this.stateOn.InitialState = this.stateRed; } protected virtual void EntryDisposed() { } protected virtual void EntryOff() { } protected virtual void EntryOn() { } protected virtual void EntryRed() { } protected virtual void EntryYellow() { } protected virtual void EntryGreen() { } protected virtual void ExitDisposed() { } protected virtual void ExitOff() { } protected virtual void ExitOn() { } protected virtual void ExitRed() { } protected virtual void ExitYellow() { } protected virtual void ExitGreen() { } public enum EventID { TurnOn, Dispose, TurnOff, TimerElapsed, } public enum StateID { Disposed, Off, On, Red, Yellow, Green, } } }
Yes, the code is ugly and verbose. This is due in part to the fully qualified names CodeDom is using. However, this is a code you never have to touch or look at. The class generated is the base class from which you derive your own state machine class. The advantage of this approach is that if you need to change the state machine, such as adding an event, you can regenerate the code and your derived class is not touched, only the base class is regenerated. You may need to make some minor tweaks to your derived class depending on what changes you make, but your implementation is not overwritten.
The entry and exit methods are made virtual with do-nothing implementations. This in effect makes them optional. In your derived class, if you need to add behavior for entry and/or exit actions, you can override the methods you need and implement the behavior. The guard and action methods, however, are abstract. You must override these.
Here is the new
TrafficLight class. It is derived from the
TrafficLightBase generated by the
StateMachineBuilder:
using System; using Sanford.Threading; using Sanford.StateMachineToolkit; namespace StateMachineDemo { public class TrafficLight : TrafficLightBase { private DelegateScheduler scheduler = new DelegateScheduler(); public TrafficLight() { } #region Entry/Exit Methods protected override void EntryOn() { scheduler.Start(); } protected override void EntryOff() { scheduler.Stop(); scheduler.Clear(); } protected override void EntryRed() { scheduler.Add(1, 5000, new SendTimerDelegate(SendTimerEvent)); } protected override void EntryYellow() { scheduler.Add(1, 2000, new SendTimerDelegate(SendTimerEvent)); } protected override void EntryGreen() { scheduler.Add(1, 5000, new SendTimerDelegate(SendTimerEvent)); } protected override void EntryDisposed() { scheduler.Dispose(); Dispose(true); } #endregion public override void Dispose() { #region Guard if(IsDisposed) { return; } #endregion Send((int)EventID.Dispose); } private delegate void SendTimerDelegate(); private void SendTimerEvent() { Send((int)EventID.TimerElapsed); } } }
Compare this version with the version in Part II. All of the code for creating and initializing the
State objects as well as their
Transitions is hidden away in the base class.
The
StateMachineBuilder class can be serialized as XML data. This lets you save state machine values and retrieve them later. Before looking at an XML representation of the traffic light state machine, let's look at the XML structure the toolkit uses to represent hierarchical state machines. We will examine each element and their attributes.
The root element is
stateMachine, and it has three attributes:
namespace
name
initialState
The
namespace attribute is the name of the namespace in which the state machine class resides. The
name attribute is the name of the state machine. And the
initialState attribute is the initial state of the state machine. The value of the
initialState attribute must be one of the top level states. A top level state is a state that does not have a superstate; it exists at the top of the state hierarchy. Not surprisingly, states are represented by the
state element. It has three attributes:
name
initialState
historyType
The
name attribute is the name of the state. The
initialState attribute is the initial state of the state; if the state has any substates, the
initialState attribute represents which of its substates is entered after it is entered. And the
historyType attribute represents the state's history type. It can have one of three values,
None,
Shallow, and
Deep. If a state does not have any substates, the
initialState and the
historyType attributes are ignored. Otherwise the
initialState attribute is required. The
historyType attribute is optional, and if it is not present, the state will default to a history type value of
None.
States can be nested inside other states. A nested state is the substate of the state that contains it, and it in turn can have nested states. Thus substate/superstate relationships are represented directly in the XML state machine structure.
State transitions are represented by the
transition element. Transitions are nested inside the states to which they belong. The
transition element has four attributes:
event
guard
action
target
The
event attribute represents the event that triggered the transition. The
guard attribute represents the guard that is evaluated to determine whether or not the transition should actually take place. The
action attribute is the action that should be performed if the transition takes place. And the
target attribute is the state target of the transition. All of the attributes are optional except for the
event attribute. It must be present in all transitions.
To serialize a state machine, you would first build it with the
StateMachineBuilder as we did above with the traffic light state machine. Then serialize the builder with the
XmlSerializer class:
// ... using System.Xml.Serialization; // ... builder.Build(); StringWriter writer = new StringWriter(); XmlSerializer serializer = new XmlSerializer(typeof(StateMachineBuilder)); serializer.Serialize(writer, builder); Console.WriteLine(writer.ToString()); writer.Close(); // ...
Here, we serialized the
StateMachineBuilder to a
StringWriter object so that we can display the resulting XML to the Console. This is the result of serializing the traffic light state machine:
<?xml version="1.0" encoding="utf-16"?> <stateMachine xmlns: <state name="Disposed" historyType="None" /> <state name="Off" historyType="None"> <transition event="TurnOn" target="On" /> <transition event="Dispose" target="Disposed" /> </state> <state name="On" initialState="Red" historyType="Shallow"> <state name="Red" historyType="None"> <transition event="TimerElapsed" target="Green" /> </state> <state name="Yellow" historyType="None"> <transition event="TimerElapsed" target="Red" /> </state> <state name="Green" historyType="None"> <transition event="TimerElapsed" target="Yellow" /> </state> <transition event="TurnOff" target="Off" /> <transition event="Dispose" target="Disposed" /> </state> </stateMachine>
As you can see, the XML schema is straightforward and simple enough so that you can even declare a state machine in XML by hand.
Included with the demo project is a program that provides a nice GUI for using the
StateMachineBuilder class. It's easy to use. Simply enter the values into the
DataGrid, build the state machine, and save the results as either C# or VB code. If you want to save the state machine values for editing later, you can save the data as an XML file. I'll explain how to use the State Machine Maker application below.
The State Machine Maker has three text boxes for setting the state machine's namespace, name, and initial state respectively. The important thing to note here is that you will get an error if you forget to enter an initial state. Every state machine must have an initial state it enters into when it is first run.
In addition, there is a
DataGrid control where you add states and their transitions. The
DataGrid is data bound to the
StateMachineBuilder's
States property so that entries made to the
DataGrid are added to the
StateMachineBuilder automatically. Initially, you will enter the top level states for the state machines; these are states that do not have a superstate.
After entering a top level state, you can add its substates by expanding its row and clicking the Substates link:
There you will be taken to its Substates table:
After adding the substates, you can navigate back to the State table by clicking on the navigation arrow:
A state's transitions are added the same way, only you click on the Transitions link. This takes you to the state's Transition table:
Once all of the states and their transitions have been added, you can build the state machine. An error message will be displayed if the build failed. For example, say that you forgot to enter a state's name:
If the build succeeded, you'll get a message letting you know:
After the build, you can save the results as C# or VB code:
When you save the results as code, it will save the results from the last build, not the last edit. In other words, be sure to remember to build the state machine immediately before saving it as code. You may make a change to the state machine after a build and forget this when saving it to code and wonder why your last edit isn't showing up.
Be sure to read the dependencies section in Part I.
Well, this wraps up the last article in the series. With the second version of the toolkit, I'm now comfortable with it overall. While the engine was something that I worked hard on and was satisfied with, aspects of the code generation process still felt rough around the edges to me. With some help from a fellow CP'ian, that is no longer the case. I now feel that support for code generation is up to the same level of quality as the rest of the toolkit. And I hope you find it useful. Thanks for your time.
StateMachineBuilderclass.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/statemachinetoolkitprtiii.aspx | crawl-002 | en | refinedweb |
This is part of a larger project on speech recognition we developed at ORT Braude college. The aim of the project is to activate programs on your desktop or panel by voice.
We planned to make some common tasks that every user does on his/her computer (opening/ closing programs, editing texts, calculating) possible not only by mouse/ keyboard, but also by voice.
Every speech recognition application consists of:
Needless to say that as the grammar increases, the probability of misinterpretations grows. We tried to keep the grammar as small as possible without loosing information. The grammar format is explained latter.
The easiest way to check if you have these is to enter your control panel-> speech. Here you should see the "Text to Speech" tab AND the "Speech recognition" tab. If you don't see the "Speech Recognition" tab then you should download it from the Microsoft site.
The project's interface is shown bellow (Fig 1).
In order to start talking right away, you should do these two steps...
IMPORTANT: after these changes, you will need to make the program start listening again by clicking the right mouse button and choosing "Start listen." The more you train the engine, the better it will recognize your voice, although you will see an improvement from the first training. After the program is started, it may be in several "states". In every state, it recognizes a list of specific commands. The list of the commands that the program can identify is shown below.
A little explanation of the menu...
To enable/disable the mic (it's switched according to what you choose), after disabling the label's becomes red (accuracy and state) indicating our state.
Although the agent is used only for giving feedback, it could be useful to know if your command is heard or not. This is so even though you can disable it if you want or if you don't have an agent file (can be downloaded from Microsoft, ACS files) or if it is not working and you still want to use the recognition (there is no connection between the agent and the recognition). This also is being taken care of if the program didn't find the agent file or could not be loaded from any other reason.
In the "activate" state you can say the command "favorites programs" and open a form with your favorites programs and running them by saying the program name. This menu will open a form showing your favorites programs so you can add/delete or edit them as you want.
This will allow you changing the agent character (can download them from Microsoft site, ACS files).
Every recognition accuracy is displayed in the "Accuracy" label. You can choose this menu and change the accuracy limit that you want the program to respond to the command that he hears with. You should do this to avoid responding to any voice or sound that he hears. you can raise this more every time that you train your computer and increase the recognition.
If the program is being used by several users, you can choose to give each user a profile and train the computer for each one (to add a user profile enter "control panel -> speech." Here you can only choose existing ones).
This is very important (as I explained before) for the recognition. The first thing to do in every computer (only at the first time) is to activate this menu and setting up your mic or if you changed your mic to a new one.
For a better recognition (notice that the training is for the selected user profile).
The initial state is in the "deactivate" state, which means that the program is in a sleepy state... After the command "activate" you will wake up the program ("activate" state) and start recognizes other commands (Fig 2).
For example, use "start" to activate the start menu. Then you can say "programs" to enter the programs menu. From this point, you can navigate by saying "down"," up", "right"... "OK" according the commands list. You can also say "commands list" from any point to see a form with the list of the commands that you can say.
One of the important states in the program is the "menu" state, meaning that if a program is running (and focused) you can say "menu" to hook all menu items and start using them. For example, if you are running Notepad you could open new file by saying "menu"->"File"->"new file". Every time that you hook menu, you can see how many menus the program hooked so you can start using them as commands. I had a little problem with some menus like "Word" and "Excel" that I couldn't hook, but... I'll check it later.
Another nice state is "Numeric state". For example, say the commands "favorites programs","calculator","enter numeric state", "one","plus","two","equal" and see the result. Alternatively, you can open a site in "Alphabetic state". For example, say the commands "favorites programs","internet explorer","enter alphabetic state", "menu","down","down","O K", "enter alphabetic state","c","o","d","e",...,"dot","c","o","m" and see the result.
One of the main problems with the voice activated systems is what happens if you don't know exactly which commands the computer expects. No problem! If you are unable to proceed just say "commands list " and the program will show you what are the available commands from here. States (commands) available in the program:
The first thing to do is to add reference to the file... C:\Program Files\Common Files\Microsoft Shared\Speech\SAPI.dll so we can use the Speech Library by writing...
using SpeechLib;
When we activate the engine, the initialization step takes place. There are mainly 3 objects involved:
SpSharedRecoContextthat starts the recognition process (must be shared so it can apply to all processes). It implements an
ISpeechRecoContextinterface. After this object is created, we add the events we are interested in (in our case AudioLevel and Recognition)
ISpeechRecoGrammarthe list of static recognizable words is shown in Fig 2 and attached for downloading dynamic grammar that lets adding rules implement
ISpeechGrammarRule;. The rule has two main parts:
Three basic functions that we will need...
initSAPI(): To create grammar interface and activating interested events
SAPIGrammarFromFile(string FileName): To load grammar from file
SAPIGrammarFromArrayList(ArrayList PhraseList): To change grammar programmatically
private void initSAPI() { try { objRecoContext = new SpeechLib.SpSharedRecoContext(); objRecoContext.AudioLevel+= new _ISpeechRecoContextEvents_AudioLevelEventHandler( RecoContext_VUMeter); objRecoContext.Recognition+= new _ISpeechRecoContextEvents_RecognitionEventHandler( RecoContext_Recognition); objRecoContext.EventInterests= SpeechLib.SpeechRecoEvents.SRERecognition | SpeechLib.SpeechRecoEvents.SREAudioLevel; //create grammar interface with ID = 0 grammar=objRecoContext.CreateGrammar(0); } catch(Exception ex) { MessageBox.Show("Exeption \n"+ex.ToString(),"Error - initSAPI"); } }
After initialization, the engine still will not recognize anything until we load a grammar. There are two ways to do that: loading a grammar from file...
private void SAPIGrammarFromFile(string FileName) { try { grammar.CmdLoadFromFile(appPath+FileName, SpeechLib.SpeechLoadOption.SLODynamic); grammar.CmdSetRuleIdState(0,SpeechRuleState.SGDSActive); } catch { MessageBox.Show("Error loading file "+ FileName+"\n","Error - SAPIGrammarFromFile"); } }
Or we can change the grammar programmatically. The function is getting an
ArrayList that every item is a structure:
private struct command { public string ruleName; public string phrase; } private void SAPIGrammarFromArrayList(ArrayList phraseList) { object propertyValue=""; command command1; int i; for (i=0;i< phraseList.Count;i++) { command1=(command)phraseList[i]; //add new rule with ID = i+100 rule=grammar.Rules.Add(command1.ruleName, SpeechRuleAttributes.SRATopLevel, i+100); //add new word to the rule state=rule.InitialState; propertyValue=""; state.AddWordTransition(null,command1.phrase," ", SpeechGrammarWordType.SGLexical, "", 0, ref propertyValue, 1F); //commit rules grammar.Rules.Commit(); //make rule active (needed for each rule) grammar.CmdSetRuleState(command1.ruleName, SpeechRuleState.SGDSActive); } }
All that's left for us is to check the recognized phrase...
public void RecoContext_Recognition(int StreamNumber, object StreamPosition, SpeechRecognitionType RecognitionType, ISpeechRecoResult e) { //get phrase string phrase=e.PhraseInfo.GetText(0,-1,true); . . . }
When a program is activated, by saying "Menu" its menu is hooked and its commands added to the dynamic grammar. We used some unmanaged functions which we imported from user32.dll. The program also hooks the accelerators that are associated with each menu (that have an & sign before them). The command is simulated with function
keybd_event and executed.
private void hookMenu(IntPtr hMnu) { //reset grammar initSAPI(); SAPIGrammarFromFile("XMLDeactivate.xml"); int mnuCnt=GetMenuItemCount(hMnu); if (mnuCnt!=0) { //add menu to grammar int i; command command1; StringBuilder mnuStr=new StringBuilder(50); ArrayList phraseList=new ArrayList(); for (i=0;i < mnuCnt;i++) { //get sting from menu ... to mnuString GetMenuString(hMnu,i,mnuStr,50,-1); //make sure its not a separator if (mnuStr.ToString()!="") { //save in commnd1.ruleName only the underlined letter command1.ruleName=mnuStr.ToString(); command1.ruleName=command1.ruleName[ command1.ruleName.IndexOf('&')+1].ToString(); //save in command1.phrase the word (without &) command1.phrase=mnuStr.ToString(); command1.phrase=command1.phrase.Remove( command1.phrase.IndexOf('&'),1); phraseList.Add(command1); } } //add the phraseList (menu) to grammar SAPIGrammarFromArrayList(phraseList); } }
Sample XML grammar... (for the complete grammar tags see Microsoft documentation)
<!-- 409 = american english --> <GRAMMAR LANGID="409"> <DEFINE> <ID NAME="RID_GoodMorning" VAL="0"></ID> <ID NAME="RID_Activate" VAL="1"></ID> <ID NAME="RID_Numbers" VAL="3"></ID> <ID NAME="RID_Close" VAL="3"></ID> </DEFINE> <RULE NAME="GoodMorning" ID="RID_GoodMorning" TOPLEVEL="ACTIVE"> <P>good morning</P> </RULE> <RULE NAME="Activate" ID="RID_Activate" TOPLEVEL="ACTIVE"> <O>please</O> <P>activate</P> <O>the</O> <O>computer</O> </RULE> <RULE NAME="Numbers" ID="RID_Numbers" TOPLEVEL="ACTIVE"> <L> <P DISP="1">one</P> <P DISP="2">two</P> </L> </RULE> <RULE NAME="Close" ID="RID_Close" TOPLEVEL="ACTIVE"> <P WEIGHT=".05">close</P> </RULE> </GRAMMAR>
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/audio-video/tambiSR.aspx | crawl-002 | en | refinedweb |
A four line algorithm in MC++ for converting a decimal value to three separate pieces: units, numerator, denominator. Their results are suitable to format a string.
The basic formula is to divide the right side of the decimal value by the decimal equivalent of the fractional measure and round to an integer. This becomes the numerator over the desired denominator (of the conversion fraction). Thus, for converting to an eighth, as 1/8 is .125, one divides the
IEEERmainder() of the decimal by .125 to obtain a numerator over 8.
Convert 3.6742 to 16th: .6742/.0625 = 10.7872. 10.7872 rounds to 11 creating the fraction 11/16. The result is three and eleven sixteen (3 11/16).
Convert 3.6742 to 8th: .6742/.125 = 5.3936 = 5/8. The result is three and five eighth. (3 5/8).
I use
Math::IEERemainder(decimal, 1.0) to separate the decimal from the
single value, and
Math::floor(decimal) to separate the units. If the decimal part is greater than .5 then the
Math algorithm returns a negative complement, and must be subtracted from one. Thus the line:
if (Math::Sign(remainder) == -1) remainder = 1 + remainder
.6742 returns -0.32579942, which, when added to 1.0 results in .6742008.
Due to this check of the sign the algorithm only works for positive numbers.
(The decimal and denominator are input)
Single remainder = Math::IEEERemainder(decimal, 1.0); if (Math::Sign(remainder) == -1) remainder = 1 + remainder; Int32 units = Convert::ToInt32(Math::Floor(decimal)); Int32 numerator = Convert::ToInt32(Math::Round(remainder/denominator));
For flexibility, one would prefer to provide an integer specifying the desired conversion rather than hard code, say, .125 as the denominator. Thus input an integer numerator and compute the divisor.
// compute the fractions numerator Single divisor = Convert::ToSingle(1)/Convert::ToSingle(denominator); Int32 numerator = Convert::ToInt32(Math::Round(remainder/divisor));
The algorithm only works for positive decimals, thus one needs to test for flag, correct and restore negativity. Further, problems that need to be considered are the rounding down to zero and rounding up to the next unit, and reduction of the fraction. The following code accounts for these.
The following code was written for a very specific purpose: to convert English inch measurement fractions, specifically, the common fractions 1/8, 1/4 and 1/2. (Although I tested to 1/32.) I was not interested in fractions like 1/5 or 1/7 or 1/324 whatever. The algorithm may be useful to help those, but not the example function. The code is not generalized. But the algorithm is. The code is only provided as a wrapper example.
#pragma warning( disable : 4244 ) // possible loss of data due to conversion // Convert a Single to a string fraction of the // form "integer numerator/denominator" String* Utils::Form1::SingleToStringFraction(Single decimal, Int32 denominator) { // Input must be positive so save and restore the negative if necessary. bool isneg = (Math::Sign(decimal) == -1) ? true : false; if (isneg) decimal *= -1; // obtain the decimal and units parts of the input number Single remainder = Math::IEEERemainder(decimal, 1.0); if (Math::Sign(remainder) == -1) remainder = 1 + remainder; Int32 units = Convert::ToInt32(Math::Floor(decimal)); // compute the fractions numerator Single divisor = Convert::ToSingle(1)/Convert::ToSingle(denominator); Int32 numerator = Convert::ToInt32(Math::Round(remainder/divisor)); String* fraction; // Handle an error or condition or reduce the fraction // and convert to a string for the return. if ((numerator > 0) && (numerator == denominator)) { // than fraction is one full unit units++; fraction = S""; } else if (numerator == 0) { // as numerator is 0, no fraction fraction = S""; } else { // reduce while (numerator%2 == 0) { numerator /= 2; denominator /= 2; } fraction = String::Format(" {0}/{1}", numerator.ToString(), denominator.ToString()); } // restore negativity if (isneg) units *= -1; #ifdef _DEBUG_CUT String* rtnstr; if (isneg) decimal *= -1; rtnstr = String::Format("{0}{1}", units.ToString(), fraction); Diagnostics::Trace::WriteLine(rtnstr, decimal.ToString()); #endif return String::Format("{0}{1}", units.ToString(), fraction); } #pragma warning( default : 4244 )
I have never claimed to know everything. And, my MC++ skills may be lacking. If you know of a better and more efficient algorithm, or can improve on the quality of the above code, please comment. In the program that I am working on, I will be going back and forth from fractions to decimals regularly. Efficiency would be nice.
Oh yes, one could simply convert to a string and use split on S"."; but what fun in that? And using
Math to split the
Single qualifies as an algorithm while splitting a string does not.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/mcpp/DecimalToFraction.aspx | crawl-002 | en | refinedweb |
Vol. 12, Issue 5, 1189-1198, May 2001
Department of Biochemistry, Hebrew
University-Hadassah Medical School, Jerusalem 91120, Israel;
Department of Biological Sciences, University of
Alberta, Edmonton, Alberta, Canada T6G 2E9; and §Institut
für Physiologische Chemie, der Universität München,
80336 München, Germany
Tom40 is the major subunit of the translocase of the outer mitochondrial membrane (the TOM complex). To study the assembly pathway of Tom40, we have followed the integration of the protein into the TOM complex in vitro and in vivo using wild-type and altered versions of the Neurospora crassa Tom40 protein. Upon import into isolated mitochondria, Tom40 precursor proteins lacking the first 20 or the first 40 amino acid residues were assembled as the wild-type protein. In contrast, a Tom40 precursor lacking residues 41 to 60, which contains a highly conserved region of the protein, was arrested at an intermediate stage of assembly. We constructed mutant versions of Tom40 affecting this region and transformed the genes into a sheltered heterokaryon containing a tom40 null nucleus. Homokaryotic strains expressing the mutant Tom40 proteins had growth rate defects and were deficient in their ability to form conidia. Analysis of the TOM complex in these strains by blue native gel electrophoresis revealed alterations in electrophoretic mobility and a tendency to lose Tom40 subunits from the complex. Thus, both in vitro and in vivo studies implicate residues 41 to 60 as containing a sequence required for proper assembly/stability of Tom40 into the TOM complex. Finally, we found that TOM complexes in the mitochondrial outer membrane were capable of exchanging subunits in vitro. A model is proposed for the integration of Tom40 subunits into the TOM complex.
Transport of proteins into and across the two mitochondrial
membranes is achieved through the concerted action of translocation machineries: the TOM complex in the outer membrane and either of the
two TIM complexes in the inner membrane (Glick and Schatz, 1991
; Lill
et al., 1996
; Schatz, 1996
; Neupert, 1997
; Pfanner et
al., 1997
; Koehler et al., 1999
; Bauer et
al., 2000
). Targeting and initial translocation of most
preproteins that are destined to the mitochondrial matrix are dependent
on amino-terminal, cleavable presequences (Haucke and Schatz, 1997
;
Neupert, 1997
). In contrast, proteins of the outer membrane and a
number of proteins of the inner membrane and the intermembrane space
contain noncleavable targeting signals (Shore et al., 1995
;
Stuart and Neupert, 1996
; Neupert, 1997
). Currently, the nature of most
of these latter signals is obscure, though a few such as Tom70 (McBride
et al., 1992
), Tom22 (Rodriguez-Cousino et al.,
1998
), BCS1 (Fölsch et al., 1996
), and cytochrome
c heme lyase (Diekert et al., 1999
) have been
analyzed in detail.
The TOM complex contains import receptors for the initial recognition
of preproteins (Tom20, Tom22, and Tom70) and membrane-embedded components that form the general import pore, which facilitates the
translocation of preproteins across the outer membrane (Tom40, Tom5,
Tom6, and Tom7). Tom40, a protein essential for viability of yeast and
Neurospora crassa cells, was found to be the most abundant
component of the TOM complex (Dekker et al., 1998
;
Künkele et al., 1998
) and the core element of the
preprotein-conducting pore (Hill et al., 1998
; Künkele
et al., 1998
). The protein forms oligomers, with dimers as
the basic structure, and interacts with polypeptide chains in transit
(Vestweber et al., 1989
; Kiebler et al., 1990
;
Rapaport et al., 1997
; Rapaport et al., 1998
).
During preprotein translocation, the Tom40 oligomer undergoes
conformational changes that affect both the structure of the Tom40
dimer and its interaction with other constituents of the TOM complex
(Rapaport et al., 1998
). Tom40 has been predicted to
traverse the outer membrane as a series of 14 antiparallel
-strands
that form a
-barrel (Court et al., 1995
; Mannella
et al., 1996
). In contrast, all other TOM components are
postulated to be anchored to the outer membrane by helical
transmembrane segments. The import signals of these latter components
were suggested to be located in the membrane anchor itself or in the
sequences that flank the anchor (McBride et al., 1992
; Cao
and Douglas, 1995
; Rodriguez-Cousino et al., 1998
).
Tom20 and Tom70 act as receptors in the recognition of Tom40
precursors, whereas the translocation pore of the TOM complex is
utilized for insertion (Keil et al., 1993
; Rapaport and
Neupert, 1999
). Previous studies have suggested the presence of a
targeting signal in a yet undefined internal part of the Tom40
precursor and a signal required for assembly at the N-terminal region
of the protein (Rapaport and Neupert, 1999
). Understanding the
biogenesis of the TOM complex requires more information on the
mechanisms by which Tom40 is inserted into the membrane, how it
achieves its final structure, and how it interacts with the other
components in the assembled complex. In the present study we have
addressed the question of Tom40 assembly and have identified a region
at the N-terminus of the protein that is involved in the process. The
region is highly conserved in Tom40 sequences from various species. We
also give evidence that TOM complexes in the outer membrane can
dynamically exchange subunits, and we discuss possible models for the
insertion of Tom40 subunits into preexisting TOM complexes.
Strains, Media, and Growth
Growth and handling of N. crassa strains was as
described (Davis and De Serres, 1970
). Race tubes were constructed in
sterile 25-ml pipettes or 80-cm glass tubes as described (Davis and De Serres, 1970
; White and Woodward, 1995
). The extent of mycelial elongation was recorded every 24 h.
Construction of N. crassa tom40 Mutant Strains
Mutant alleles of tom40 were created by site directed
mutagenesis of single-stranded DNA derived from a plasmid containing the genomic version of N. crassa tom40 and a bleomycin
resistance gene in a Bluescript plasmid. After confirmation of the
desired mutation, plasmids were transformed (Schweizer et
al., 1981
; Akins and Lambowitz, 1985
) into spheroplasts of a
tom40RIP sheltered heterokaryon (to be
described elsewhere) that was generated by the standard N. crassa genetic procedure (Metzenberg and Grotelueschen, 1992
;
Harkness et al., 1994
) of sheltered RIP (repeat induced point mutation). Rescue of the RIPed nucleus was confirmed by testing
for biochemical requirements (see RESULTS), and integration of the
mutant alleles was confirmed by DNA sequence analysis of PCR products
from genomic DNA of the transformants.
Isolation of the Suppressor Mutant of Yeast tom40ts
The suppressor mutant of tom40ts
was isolated by performing random mutagenesis. Yeast strain KKY-Isp42-6
(Kassenbrock et al., 1993
), a haploid strain containing a
complete deletion of the genomic tom40 coding sequence and a
mutated tom40 gene on the centromere plasmid, pRS314, was
mutagenized (Lawrence, 1991
) with methanesulfonic acid ethylester and
incubated on plates at the nonpermissive temperature (37°C). After
several days, 77 suppressor-containing colonies were tested for
plasmid-linked mutations in the tom40ts
gene. For this purpose, the ability of plasmids isolated from these
colonies to confer the suppressor phenotype was tested. In 66 cases the
suppressor phenotype was found to be plasmid linked. The strongest
suppressor mutant, tom40tsSup, had a back
mutation at position 66 from a proline residue in the
temperature-sensitive strain to the wild-type leucine residue. This
back mutation restored the steady-state level and stability of the
Tom40 protein.
Import of Preproteins into Isolated Mitochondria
For import of Tom40 precursors in vitro, mitochondria from
N. crassa were isolated as described (Mayer et
al., 1993
). Radiolabeled preproteins were synthesized in rabbit
reticulocyte lysate in the presence of
[35S]methionine (ICN Biomedicals, Costa Mesa,
CA) after in vitro transcription by SP6 polymerase from pGEM4
vectors containing the gene of interest. Import reactions were
performed by incubation of radiolabeled preproteins with 30-50 µg
mitochondria in import buffer (0.5% BSA [wt/vol], 250 mM sucrose, 80 mM KCl, 5 mM MgCl2, 2 mM ATP, 10 mM MOPS-KOH, pH
7.2) at the indicated temperature. Proteinase K (PK) or trypsin
treatment of samples was performed by incubation with the protease for
15 min on ice, followed by addition of 1 mM phenylmethylsulfonyl
fluoride (PMSF) for 5 min. Import was analyzed by SDS-PAGE, and the
gels were viewed by autoradiography or quantified using a
phosphorimaging system (BAS 1500; Fuji Medical Systems, Stamford, CT).
Immunodecoration was according to standard procedures and was
visualized by the ECL method (Amersham).
Carbonate extraction was performed to determine if imported precursor proteins were inserted into membranes. We used sucrose flotation gradients to avoid the possibility of having nonintegrated protein aggregates pelleting with membranes. After import reactions, mitochondria were pelleted, resuspended in 100 µl 0.1 M Na2CO3, and incubated for 30 min on ice. Then, a solution of 2.4 M sucrose, 0.1 M Na2CO3 was added to a final concentration of 1.5 M sucrose (final volume, 266 µl). This was overlayed first with 250 µl of buffer containing 1.4 M sucrose, 0.1 M Na2CO3 and then with 200 µl of buffer containing 0.25 M sucrose, 0.1 M Na2CO3. The gradient was centrifuged for 2 h at 337,000 × g in a Beckman SW60 rotor (Fullerton, CA) at 2°C, which causes the membranes to float to the upper layer of the gradient. Gradients were analyzed by removing 250 µl from the top zone, 150 µl from the middle zone, and 200 µl from the bottom zone of the gradient. Proteins in these fractions were precipitated with trichloroacetic acid and analyzed by SDS-PAGE and autoradiography or Western blotting.
Construction of tom40 Mutants for In Vitro Import
pGEM4-Tom40(
2-20) DNA and pGEM4-Tom40(
2-40) DNA were
constructed by PCR amplification of the relevant DNA from pGEM4-Tom40, which contains the N. crassa wild-type tom40
gene. For pGEM4-Tom40(
2-20) and pGEM4-Tom40(
2-40) the upstream
primers 5'-AGAAAAGAATTCACCATGAGCCTTTCCGATGCCTTC-3' and
5'-AGAAAAGAATTCACCATGCCCGGCACGATCGAGACC-3', respectively, were used. In
both cases the downstream primer 5'-CTCTAAGCTTTTAAAAGGGGATGTTGAGG-3' was used. Both PCR products were digested with EcoRI and
HindIII and subcloned into pGEM4. pGEM4-Tom40(
41-60) DNA
was constructed by a method involving the simultaneous ligation of two
inserts. The first insert, representing amino acid residues 1-40, and
the second insert, representing amino acid residues 61-349, were
constructed by PCR amplification of the relevant DNA from pGEM4-Tom40.
The upstream primer for the first insert represents the sequence
containing the NheI from the pGEM4 vector, whereas the
downstream primer 5'-AAAAAATCATATGGTTGGAAAGACCGAACTGTTT-3' contained an
NdeI site. This PCR product was digested with
EcoRI (site derived from the multiple cloning site of pGEM4)
and NdeI. For the second insert, the upstream primer was
5'-AAA GAA TTC CAT ATG TTC TCT GGC CTC CGC GCC GAC-3', whereas the
downstream primer was the same as used for the cloning of
pGEM4-Tom40(
2-20) and pGEM4-Tom40(
2-40). This PCR product was
digested with NdeI and HindIII. The digested products were ligated into pGEM4 that had been digested with
EcoRI and HindIII.
Tom40 variants for in vitro import that contained smaller deletions and amino acid substitutions were generated by site-directed mutagenesis of a Tom40 cDNA cloned in the pGEM-7Zf(+) vector.
Cross Linking and Coimmunoprecipitation
For cross-linking experiments, radiolabeled precursors were incubated with isolated mitochondria under various conditions. After the import reaction mitochondria were isolated and resuspended in import buffer followed by addition of 440 µM of the cross-linking reagent disuccinimidyl glutarate (DSG; Pierce Chemical Co., Rockford, IL) for 40 min at 0°C. Excess cross-linker was quenched by the addition of 80 mM glycine, pH 8.0, and incubation for 15 min at 0°C. Aliquots were removed before and after addition of the cross-linking reagents. For coimmunoprecipitation, samples were dissolved in lysis buffer (0.5% digitonin, 150 mM NaCl, 10 mM Tris-HCl, pH 7.2). After a clarifying spin (15 min at 20,000 × g), the supernatant was incubated with antibodies that were coupled to protein A-Sepharose beads.
Blue Native Gel Electrophoresis (BNGE)
Mitochondria (50-100 µg) were lysed in 50 µl
detergent-containing buffer (1% digitonin, 0.3% dodecylmaltoside, or
1% dodecylmaltoside in 20 mM Tris-HCl, 0.1 mM EDTA, 50 mM NaCl, 10%
glycerol, 1 mM PMSF, pH 7.4). After incubation on ice for 10 min and a
clarifying spin (20 min, 22,000 × g), 5 µl sample
buffer (5% [wt/vol] Coomassie brilliant blue G-250, 100 mM Bis-Tris,
500 mM 6-aminocaproic acid, pH 7.0) were added, and the mixture was
analyzed on a 6 to 13% gradient blue native gel (Schägger
et al., 1994
; Schägger and von Jagow, 1991
).
A Tom40 Precursor Lacking Amino Acid Residues 41 to 60 Does Not Integrate into the TOM Complex
The assembly pathway of Tom40 can be divided into three
stages: binding to the mitochondrial surface, insertion into the
membrane, and assembly into the TOM complex (Rapaport and Neupert,
1999
). Here we have investigated Tom40 with respect to the structural features required for integration into the complex. Precursor proteins
lacking residues 2 to 20, 2 to 40, or 41 to 60 were analyzed for their
ability to be integrated into the TOM complex, because previous studies
had indicated a role for the N-terminal region in the process (Rapaport
and Neupert, 1999
). All three mutant forms were targeted to
mitochondria with efficiencies similar to wild-type (Figure
1, Bound). The ability of the variants to insert correctly into the mitochondrial outer membrane was determined by assessing the acquisition of protection against added trypsin and
the formation of proteinase K cleavage fragments characteristic of the
inserted wild-type protein. Deletion of residues 2 to 20 did not affect
proper insertion, whereas removing residues 2 to 40 had a moderate
effect (Figure 1). The variant lacking residues 41 to 60 was ca. 50%
less efficient in its ability to properly insert than the wild-type
precursor. Treatment of mitochondria with proteinase K after import of
the
41-60 variant resulted in the formation of an additional
cleavage fragment visible just above the usual F-26 fragment that is
observed for wild-type Tom40. The additional fragment was likely a
digestion product from the fraction of the Tom40
41-60 molecules
that did not insert properly.
Insertion of the precursor proteins into the mitochondrial outer
membrane was further assessed by carbonate extraction after in vitro
import (Figure 1C). The extraction products were analyzed on sucrose
gradients, which results in flotation of membranes to the top of the
gradient (see MATERIALS AND METHODS). Thus, integral membrane proteins
(e.g., Tom40) will be found in the upper zone of the gradient, whereas
soluble proteins (e.g., Hsp70) will be found in the bottom zone.
Tom40
2-20 was found in the upper zone of the gradient,
demonstrating its insertion into mitochondrial membranes. About half of
the Tom40
41-60 molecules were integrated into the membrane, whereas
the remaining half fractionated with the soluble proteins. These data
are in agreement with the results of the protease treatment experiments
(Figure 1, A and B), where about half of the protein molecules acquired
the correct conformation. Taken together, these results demonstrate
that the deletions did not impair membrane insertion or cause dramatic
changes in the conformation of the inserted protein.
Integration of the variants into the endogenous TOM complex was studied
by BNGE. The variant lacking the first 20 amino acid residues was found
to integrate into the fully assembled complex with an efficiency
similar to the wild-type form (see Figure 4D; shown as a control),
whereas the variant lacking the first 40 residues assembled at slightly
reduced efficiency (our unpublished results). However, the precursor
lacking residues 41 to 60 accumulated at a stage shown previously to be
an intermediate (Figure 2A, I) on the
assembly pathway (Rapaport and Neupert, 1999
) and was not assembled
into an authentic TOM complex (Figure 2A). In the intermediate, both
the wild-type and mutant forms were only loosely associated with the
TOM complex and they dissociated from the complex upon solubilization
of mitochondria with dodecylmaltoside (Figure 2A). The band between the
monomer and intermediate bands (Figure 2A, lanes 1, 3, and 7) is
unproductively bound material (Rapaport and Neupert, 1999
).
The efficiency of integration into the TOM complex was tested
further by immunoprecipitation with antibodies against either Tom6 or
Tom20. When full-length Tom40 was imported into mitochondria at 4°C,
only low levels of the protein were assembled into the TOM complex.
However, the efficiency was high when import was performed at 25°C
(Figure 2B). In contrast, very low levels of assembly of Tom40
41-60
were detected upon import at either temperature (Figure 2B). These
levels might reflect association with the TOM complex as an insertion
intermediate rather than actual assembly. The levels of integration of
precursors lacking the first 20 or 40 residues were similar to those of
wild-type precursor (our unpublished results). Thus, the
immunoprecipitation experiments confirm the results of the BNGE
analysis and demonstrate the inability of Tom40
41-60 to progress
into fully assembled TOM complexes. Interestingly, significant levels
of the characteristic 26-kDa proteinase K fragment from the wild-type
precursor were observed after incubation at 4°C (Figure 2C). This
implies that the Tom40 precursor readily acquires its native (or near
native) folding even before it is stably integrated into the authentic
TOM complex. In previous experiments we observed that folding did not
occur in experiments performed at 0°C with short incubation times
(Rapaport and Neupert, 1999
).
Mutations in Amino Acid Residues 40 to 50 of Tom40 Result in Growth Defects
To investigate further the role of the N-terminal portion of the
protein in assembly and function of Tom40 in vivo, we examined the
ability of mutant derivatives of tom40 to complement a
nonfunctional RIP allele of the gene (Figure
3). Because Tom40 is an essential protein, we used the procedure of sheltered RIP (Metzenberg and Grotelueschen, 1992
; Harkness et al., 1994
) to create a
strain of N. crassa (to be described elsewhere), in which a
nucleus lacking a functional tom40 gene is maintained in a
heterokaryon with a nucleus containing a wild-type version of the gene
(Figure 3). A tom40 gene encoding a protein lacking amino
acid residues 2 to 60 was not able to restore viability to the nucleus
harboring the RIPed version of tom40, supporting the in
vitro findings (Rapaport and Neupert, 1999
) that the first 60 residues
of the N-terminal domain contain crucial information for Tom40
assembly. To identify regions in the N-terminal part of the protein
that might play a role in the assembly process, we compared Tom40
sequences from various organisms. We observed no conservation of
sequence between organisms prior to amino acid 38 of the N. crassa protein, which is in agreement with the results from in
vitro experiments. However, we found a number of highly
conserved residues in the region of residues 40 to 60 with the greatest
level of similarity within residues 40 to 50 (Figure
4A). Three mutant derivatives of
tom40 affecting the conserved region were created (Figure
4A). One mutant (
NPGT) had a deletion of the four highly conserved
residues NPGT at positions 40 to 43 of the N. crassa
sequence, whereas a second mutant (AAAA) had four alanine residues at
those sites. The third mutant (
40-48) had a deletion of residues 40 to 48, with an additional single amino acid change, R49A. Each of these
mutant alleles was able to rescue the
tom40RIP nucleus and give rise to
homokaryons requiring lysine and leucine (Figure 3).
The
NPGT and AAAA mutants displayed a complex growth phenotype. We
analyzed 18 different lysine- and leucine-requiring transformants of
the AAAA type and 13 of the
NPGT type for growth characteristics on
race tubes. Nine of the AAAA strains and five of the
NPGT strains
displayed a "stop-start" growth phenotype. These strains grew at
near normal rates for a few days and then grew very slowly or not at
all for 1 or 2 days before they resumed their initial growth rate. The
strains exhibiting this behavior are remarkably consistent, because
impaired growth was observed on the same day in up to six different
race tubes of an individual strain. The rest of the strains analyzed
from both groups grew slightly slower than control strains for 12 days
without evidence of stopping. An example of each type of growth is
shown in Figure 4B. Tom40 levels in all strains were similar,
regardless of their growth phenotype (our unpublished results). Still,
it is conceivable that subtle differences in expression, not identified
in the analysis of Western blots, may explain the different growth
characteristics. This could be due to locus-specific effects at
different integration points of the transformed mutant alleles in the
individual strains. Regardless, we have found no differences between
the strains except for this behavior. For further analysis, one stopper
type of the
NPGT strains and one normal growth type of the AAAA
strains were chosen. The
40-48 strain had a slower growth rate that
was easily distinguished from the other mutants (Figure 4B). The
ability of all three mutant strains to climb the walls of flasks and to form conidia in these flasks was significantly reduced (Figure 4C).
Thus, the data from the mutant strains suggest a crucial role for
residues 40-49 in the function of Tom40.
The fact that viable strains were obtained by
transforming the tom40RIP nucleus with the
AAAA,
NPGT, and
40-48 variants implies that these forms are at
least partially capable of assembling into mature, functional TOM
complex. This was confirmed by using BNGE to assess assembly after in
vitro import of the mutant precursors. All three mutants show partial
assembly into the TOM complex, whereas a significant proportion remains
in the high molecular weight intermediate (Figure 4D). Interestingly,
the
40-48 mutant appears to have the least amount of precursor in
the fully assembled form, which correlates well with the more severe
growth phenotype observed for strains bearing this mutation (Figure
4B).
Mutations in Residues 40 to 50 of Tom40 Result in a More Fragile TOM complex
The level of TOM complex components and other mitochondrial
proteins from the
NPGT, AAAA, and
40-48 strains was found to be
similar to those in wild-type controls (Figure
5). The TOM complex in the mutants was
further examined by BNGE and immunoblotting with
antibodies directed against individual TOM complex components. When
mitochondria were dissolved in 1% digitonin and subjected to BNGE and
the blots decorated with antibody to Tom40, all three mutants
were found to contain a TOM complex with slightly increased electrophoretic mobility (Figure 6). When
the experiment was repeated with mitochondria dissolved in 1%
dodecylmaltoside, the TOM complex in the three mutants was found to be
more fragile than the wild-type strain, and at least some fraction of
the Tom40 molecules in the mutants migrated as monomers (Figure 6).
These results demonstrate that residues 40 to 49 play an important role
in the stability of the TOM complex. Blots of blue native gels were
also examined using antibodies directed against two other TOM core
complex components (Ahting et al., 1999
), Tom6 and Tom22. In
both cases, the samples solubilized in digitonin showed the same
electrophoretic mobility alteration seen in the blots examined with
Tom40 antibodies (our unpublished results). For samples dissolved in
1% dodecylmaltoside, the patterns were similar to those seen with
Tom40 antibodies, except that only trace amounts of Tom6 monomers were
released from the mutant complexes.
Reversion of a Yeast Tom40 Temperature-sensitive Mutant Restores an Amino Acid in the Conserved N-terminal Region
A study of temperature-sensitive strains of the yeast
Saccharomyces cerevisiae further supports the notion that
amino acid residues in the 40-to-50 region are important for the
biogenesis of Tom40. Kassenbrock et al. (1993)
isolated
several temperature-sensitive strains carrying mutations in the
tom40 gene. In one of the strains (KKY-Isp42-6), DNA
sequencing identified 10 mutations in the tom40 gene. We
isolated revertants of this temperature-sensitive strain and found that
a single reversion in the temperature-sensitive strain, Pro66, back to
the wild-type Leu66, was sufficient to allow the revertant strain to
grow at the wild-type rate at the restrictive temperature (Figure
7). This result suggests a crucial role
for Leu66 in the function of yeast Tom40. The yeast Tom40 Leu66
corresponds to the Ile residue at position 47 in the N. crassa protein (Figure 4A).
Integration of Tom Components into the TOM Complex
To gain further insight into the structure and assembly of the TOM
complex, we wished to determine if newly incorporated precursors can
integrate into preexisting complexes. In a mutant strain of N. crassa that expresses only a truncated form of Tom40 lacking the
C-terminal 20 amino acid residues, the TOM complex is expected to be
~13 to 17 kDa smaller than the wild-type complex, based on estimates
of six to eight molecules of Tom40 per complex (Ahting et
al., 1999
). We found that this size difference could be detected by BNGE (Figure 8). The difference was
exploited to determine if imported TOM complex subunits can be inserted
into preexisting complexes. When full-length precursors of Tom22 and
Tom40 were imported into mitochondria isolated from the C-terminal
deletion strain, they were rapidly integrated into complexes with the
molecular weight characteristic of this strain (Figure 8). These
observations support previously suggested models in which precursors
are either taken up into a small pool of nearly completed complexes or
integrated directly into existing functional complexes (Rapaport and
Neupert, 1999
).
Exchange of TOM Complex Subunits in Isolated Outer Membrane Vesicles (OMVs)
Because the stoichiometry of subunits in a functional TOM complex
is likely to be constant, direct integration into functional complex
would imply displacement of preexisting subunits. To determine if
exchange of subunits can take place between different complexes in
vitro, we mixed OMVs isolated from a wild-type N. crassa
strain with OMVs from a strain whose only form of Tom22 contained a
hexahistidinyl tag. The two OMV samples were induced to undergo fusion
by three cycles of freeze/thaw, which is known to induce fusion of
lipid vesicles (Hincha et al., 1998
). The samples were then
solubilized with digitonin and incubated with Ni-NTA sepharose beads to
isolate TOM complex containing his-tagged Tom22. Complexes containing his-tagged Tom22 were also found to contain the wild-type form of the
protein, indicating that mixing of the two original complexes had
occurred (Figure 9A, lane 4). Optimal
formation of mixed complexes required fusion of the vesicles, whereas
simple mixing, without freeze/thaw cycles, resulted in low levels of
mixed TOM complex (Figure 9A, lane 3) only slightly above background
(Figure 9A, lane 1).
In a related set of experiments, antibodies against a C-terminal peptide of Tom40 were used as a tool to analyze exchange of Tom40 subunits. OMVs from a wild-type strain and OMVs from the strain harboring Tom40 with the C-terminal deletion were fused, solubilized with digitonin, and subjected to immunoprecipitation with the C-terminal antibodies. Both forms of Tom40 were present in immunoprecipitates when OMVs underwent fusion but not in immunoprecipitates from controls where OMVs were mixed but not subjected to the fusion treatment (Figure 9B, lanes 3 and 4). Thus, both Tom22 and Tom40 subunits can be exchanged between complexes. The data suggest that Tom22 subunits can be exchanged more easily than Tom40 subunits. Tom40 molecules may be more tightly associated in the complex than Tom22 so that exchange of these subunits may be less frequent.
We have examined the mechanisms by which the polytopic mitochondrial outer membrane protein Tom40 is inserted into the mitochondrial outer membrane and assembled into the TOM complex of N. crassa. The highly conserved region containing amino acid residues 41 to 60 of Tom40 was found to be expendable for binding receptors and membrane insertion but crucial for the assembly of the protein into the complex. Deletion of residues 41-60 completely abolished assembly in vitro, whereas smaller mutations resulted in the formation of TOM complexes with altered electrophoretic mobility and stablility in vivo. The changes in mobility likely reflect changes in conformation of the TOM complex because all mutants analyzed displayed similar changes in mobility, whereas the molecular weight of Tom40 in the AAAA mutant is not significantly different from that of the wild-type form. Tom40 monomers were lost from the mutant strains after solubilization in dodecylmaltoside. Because the three mutants exhibited similar behavior with respect to loss of Tom40, this suggests that the NPGT sequence, which is affected in all three strains, may play a role in mediating proper assembly and stability of the TOM complex. The N. crassa strains expressing the Tom40 variants were altered in their ability to form conidia and exhibited growth defects. Furthermore, a suppressing back mutation of a yeast tom40 temperature-sensitive allele occurred within the same amino acid residues, emphasizing the importance of the conserved region in a separate organism. Taken together, these observations show that the sequence containing residues 41-60 plays an important role in the assembly of Tom40. The sequence seems to function as an assembly signal only in its native context, because fusion proteins between this region of Tom40 and a cytosolic protein DHFR (Tom40(41-60)-DHFR) or Tom20 (Tom40(41-60)-Tom20) did not integrate into the Tom core complex (our unpublished results). Thus, these residues are necessary but not sufficient to achieve assembly.
Several observations support the view that the impaired assembly
of Tom40 mutant variants reflects a specific function of the affected
amino acid residues rather than simple misfolding of the mutant forms.
Tom40 in the TOM complex yields characteristic fragments upon treatment
of mitochondria with proteinase K (Künkele et al.,
1998
). The same fragments can be generated after integration of
full-length Tom40 precursor into the membrane of isolated mitochondria at a stage when Tom40 is not yet assembled into the complex. Similarly, the
41-60 variant, which is not assembled into the complex, still gave rise to the characteristic proteolytic cleavage fragments after
integration into the membrane. Thus, Tom40 appears to reach its final,
or near final, conformation rather early in its assembly pathway, and
the actual integration of Tom40 precursor into the core of the TOM
complex may not induce major conformational changes.
Wild-type Tom40 precursors do not persist in a pool of monomers in the outer membrane but are quickly assembled into full-size complexes via a short-lived, high molecular weight intermediate. The relatively small number of radiolabeled molecules of either Tom40 or Tom22 taken up by mitochondria during in vitro import are rapidly integrated into complexes with preexisting subunits. Newly imported subunits could be directly assembled into the preexisting functional complexes by which they are imported. Alternatively, the newly imported subunits could be moved into the lipid phase of the outer membrane and quickly associate with a small pool of partially assembled complexes. In the first model, assuming a fixed stoichiometry of the complex, incorporation of new subunits must be coupled to the release of preexisting subunits. This suggests the existence of a pool of partially assembled TOM complexes made up of the released subunits. Thus, the two models are related in that both require a pool of partially assembled subunits that can interact with other subunits. The inability to detect such a pool implies that such pools would be small and formation of the functional complex rapid. Our observation of subunits being exchanged between existing TOM complexes is consistent with models suggesting that incorporation of new subunits into the complex could result in the displacement of preexisting subunits. Regardless of the mechanism of assembly of new subunits into the TOM complex, a dynamic equilibrium of completely assembled and partially assembled TOM complex may exist in the outer membrane.
We are grateful to Petra Heckmeyer, Thomas Waizenegger, Bonnie Crowther, and Allison Kennedy for excellent technical assistance. This work was supported by a grant from the Medical Research Council of Canada and the Sonderforschungsbereich 184 of the Deutsche Forschungsgemeinschaft. RDT was the recipient of financial support from the Natural Sciences and Engineering Research Council of Canada and the Alberta Heritage Foundation for Medical Research.
¶ Corresponding author. E-mail address: frank.nargang{at}ualberta.ca.
* The first two authors contributed equally to this work.
Current address: Institut für Genetik,
Universität zu Köln, Zülpicher Straße 47, 50674 Köln, Germany.
Abbreviations used: BNGE, blue native gel electrophoresis; OMV, outer membrane vesicles; RIP, repeat induced point mutation; TOM, translocase of the outer mitochondrial membrane.
This article has been cited by other articles: | http://www.molbiolcell.org/cgi/content/full/12/5/1189 | crawl-002 | en | refinedweb |
A common need for Windows forms is to remember the last position, size and state of the form. One way to do this is to call a function when your form loads and closes. I decided to do it the Object Oriented way. If your form inherits this class, it will automatically load and save the the form settings; Left, Top, Height, Width, and State; to a .config file.
using KrugismSamples" to the list of references at the top of the form code.
public class Form1 : System.Windows.Forms.Form
to:
public class Form1 : PersistentForm
That's it! When the form is loaded, the saved values are set to the form. When the form is closed, the settings are saved.
PersistentFormclass in the Add Reference, Projects tab.
Imports KrugismSamples" to the top of the source.
Inherits System.Windows.Forms.Form
to:
Inherits PersistentForm
That's it! When the form is loaded, the saved values are set to the form. When the form is closed, the settings are saved.
This class is very straightforward. It simply inherits the
Windows.Forms.Form class. It then overrides the
OnCreateControl() and
OnClosing() events. By
overriding the base events, no additional code needs to be added to the form.
(The
LoadSettings() and
SaveSettings() code is not
shown here.)
public class PersistentForm : System.Windows.Forms.Form { public PersistentForm() { } protected override void OnCreateControl() { LoadSettings(); // Load the saved settings from the file base.OnCreateControl (); } protected override void OnClosing(System.ComponentModel.CancelEventArgs e) { SaveSettings(); // Save the settings to the file base.OnClosing(e); } }
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/persistentform_class.aspx | crawl-002 | en | refinedweb |
This control, we can eliminate the necessity of controlling logon through IIS, and enable it through our code. This opens up a considerable area of control through code. We can hence have a control on which user login is requested, and on which domain and all that.
We will discuss the creation of the project and the logic I had in mind while developing it. The completed and tested code, that was developed using VS.NET is attached to this article.
We create two web user controls.
WindowsLoginControl
This has the implementation and UI for the login pane. It has two UIs, one for new users, one for already logged in users. A session variable maintains the state of the login to determine which UI to show. Code in this control calls the
logInUser class' shared method to process the login.
ErrorControl
This has implementation and UI for an error reporting pane. When errors occur, other controls on the page update a session variable, which is checked when this control loads. When there's no error, we display an 'Under construction' message (This may be removed in release versions).
NOTE: We could have implemented the logic of this control also into the
WindowsLoginControl, but having this as a separate control allows us to easily move the control on the UI of a target page in VS.NET.
LoginUserclass with a shared method for processing the login,
This has implementation of the login process. A shared method takes username, password and domain as parameters and tries a Windows logon with the data, and we impersonate the user.
This is to cleanup the sessions, and make the
totalActiveUser count on the system more reliable.
Open an ASP.NET project. Select the project in the Solution Explorer and create a new 'web user control' item. Develop the UI for it.. probably two text boxes.. for username and password and a 'Login' button.
We make another UI, which shows a viewpane with the details of user login.
We show the login form when the user hasn't logged into the system, and show a login details view after the user logs in. Users login with their Windows authentication (this means that we should have created users on the server and the domain for this to work).
The
windowsLoginControl calls shared function
LogInThisUser() of the
LogInUser class which logs-in the user and impersonates the logged-on user. The code that does this is as below.
Dim loggedOn As Boolean = LogonUser(username, _ domainname, password, 3, 0, token1) 'impersonate user Dim token2 As IntPtr = New IntPtr(token1) Dim mWIC As WindowsImpersonationContext = _ New WindowsIdentity(token2).Impersonate
For this, we declare the
loginuser class with the proper namespaces, and in a manner to include unmanaged code. We need unmanaged code to be written, because I believe we don't have a managed code implementation of the
LogonUser function of Windows to do the same.
'include permissions namespace for security attributes 'include principal namespace for windowsidentity class 'include interopservices namespace for dllImports. Imports System.Security.Principal Imports System.Security.Permissions Imports System.Runtime.InteropServices <Assembly: SecurityPermissionAttribute (SecurityAction.RequestMinimum, UnmanagedCode:=True)> Public Class LogInUser <DllImport("C:\\WINDOWS\\System32\\advapi32.dll")> _ Private:\\WINDOWS\\System32\\Kernel32.dll")> _ Private Shared Function GetLastError() As Integer End Function
We can also find whether the
logonuser function generated errors, by calling the
GetLastError method.
We use session variables to keep track of the user's login information and last access. We use an application variable to keep track of the total active users in the system.
Below code is a part of this implementation (can be found in windowsLoginControl.ascx.vb)
Session.Add("LoggedON", True) Session.Add("Username", sRetText) Application.Item("TotalActiveUsers") += 1 lblUserName.Text = Session("Username") lblLastSession.Text = Session("LastActive") lblTotalUsers.Text = Application("TotalActiveUsers")
We keep track of the no. of active users by simply incrementing the value every time the login method succeeds, and decrementing the value every time
session_end event occurs.
Better means to do this can also be used. The idea of this article is only to communicate the logic.
Before testing the project, we should check the following.
We keep the
domainname as constant, rather than taking it from the user as an input. Check whether proper domain name is assigned to the constant.
Private Const domainName = "TestDomain"
Check whether location of the DLLs that are being imported are proper.
<DllImport("C:\\WINDOWS\\System32\\advapi32.dll")>
Check whether the logoff page has the correct page name and path to transfer the user, once cleanup is done.
Server.Transfer("webform1.aspx")
It has to be taken care that code implemented doesn't allow for inappropriate usage through various
userLogins.
I preferred to keep the domain name hard-coded into the application through a constant rather than accept it as an user input... so that it's easy to limit or monitor user login sessions.
In case of intranet projects, we can create a separate domain, and user group for the project and use the above logic to allow users to login to the system only on the particular domain. May be you can call this an 'Idea' :o)
To implement the web user controls in a web project, we simply copy the files related to the two controls, the
loginuser class, the logoff user page, to our new web project, and also copy the code from our global.asax.vb to the new project's global.asax.vb.
In VS.NET, these copied files can easily be included in the target project by right clicking and selecting 'Include in Project' in Solution Explorer.
The code that's been worked out in this article will authenticate users on only one page of the web application. Normally, a web application will have content inside the site to be viewed by authenticated users.. in this case, the controls will have to have a mechanism of holding the user's authentication across page requests. This can be done by holding the
windowsIdentity object of the authenticated user in a session variable, and allowing users rights on pages by using
FileIOPermission and other classes in the
System.Security namespace.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/web-security/ASPdotnet_LoginControl.aspx | crawl-002 | en | refinedweb |
Windows Forms databinding has greatly improved in .NET 2.0. Microsoft has enhanced our object-oriented toolbox by allowing Windows Forms controls to databind to our custom objects' properties. This new functionality is centered around the new
BindingSource component. Although this component takes a lot of the leg-work out of databinding custom objects, I recently had an issue with some rather large objects that contained both
Read-Write and
ReadOnly properties. After setting up databinding, I had to go through and manually set the "
Enabled" property of these controls depending on whether each individual property was
ReadOnly or not. I decided to create a custom
BindingSource component that would take care of this manual process in the background.
Another option that I considered exploring was creating a custom attribute to add to the properties that I wanted to have
ReadOnly or
Read-Write capabilities to. The reason I didn't do this was that I wanted to avoid requiring the business object developer having to implement more than what is necessary just to satisfy the UI developer.
We start off by creating a class library that will contain our custom
SmartPropertyBindingSource component and our
ReadOnlyBehavior enumeration. This enumeration will be used to determine the rendering behavior of the databound control. This class library project should reference the
System.Windows.Forms and set it to be a globally imported namespace (you may also just write
Imports System.Windows.Forms at the top of the SmartPropertyBindingSource.vb file).
We will be supporting two options for how the databound control of the
ReadOnly property will be rendered on the form. This is set through the
ReadOnlyControlBehavior property in the
SmartPropertyBindingSource. You can choose to have it disabled or hidden.
Public Enum ReadOnlyBehavior Disable = 0 Hide = 1 End Enum
We will now add a class named
SmartPropertyBindingSource which inherits from
BindingSource.
Public Class SmartPropertyBindingSource Inherits BindingSource End Class
Next, we will create a
Public property in this class to set the control rendering behavior for
ReadOnly properties.
Public Property ReadOnlyControlBehavior() As ReadOnlyBehavior Get Return mReadOnlyBehavior End Get Set(ByVal value As ReadOnlyBehavior) mReadOnlyBehavior = value End Set End Property
We will also go ahead and provide overloads for the three constructors in the base class. Once we have done this, we are ready to write the functionality of the class. By overriding the
OnBindingComplete, we will have access to the current object's property information, thanks to the
System.Reflection namespace. With this information, we will be able to find out if the property in question is
ReadOnly.
Protected Overrides Sub OnBindingComplete(ByVal e As _ System.Windows.Forms.BindingCompleteEventArgs) 'get the current type of the object we are binding Dim t As Type = e.Binding.BindingManagerBase.Current.GetType 'get the property information using the BindingMemberInfo object Dim prop As System.Reflection.PropertyInfo = _ t.GetProperty(e.Binding.BindingMemberInfo.BindingMember) 'see if it ReadOnly If Not prop.CanWrite Then 'decide what to do with the control Select Case Me.ReadOnlyControlBehavior Case ReadOnlyBehavior.Disable e.Binding.Control.Enabled = False Case ReadOnlyBehavior.Hide e.Binding.Control.Visible = False End Select End If 'continue with the method in the base class MyBase.OnBindingComplete(e) End Sub
That's all there is to it. Using reflection, we can investigate whether the property is
ReadOnly and render (or not render) the control appropriately. We also don't have to ask the domain object's developer to do anything special to improve the user experience.
The new .NET 2.0 framework provides a great deal of enhanced functionality from the previous framework allowing for easier integration of custom business objects. We can build upon this functionality by customizing classes to better suit our needs.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/vb/DatabindingReadOnlyProps.aspx | crawl-002 | en | refinedweb |
DataSets in .NET are a powerful tool for manipulating data locally. However, despite their flexibility, they can be time consuming and difficult to use. I am sure I am not alone in wishing many times that I could just "execute some SQL" against a
DataSet and have the result presented in a new
DataSet. As it stands, I have to create
DataViews, clone tables, create calculated columns, loop through rows copying, etc. etc., if I want to take a
DataSet and manipulate its contents.
The attached code library shows a crude method for enabling this functionality. It leverages the fact that a
DataTable can be converted to a .CSV (Comma Separated Value) table with minimum difficulty, and that Microsoft provides an ODBC driver for .CSV files.
By converting all
DataTables in a
DataSet to .CSV files and then executing some SQL against them using the ODBC Microsoft Text Driver, we can generate a new
DataSet.
If you build the solution, a .dll will be created, called DataSetSQLEngine.dll. This exposes a single public class, with a single public shared function,
executeSQLDataset(). The arguments to the function are a source
DataSet, the SQL string to be executed, and, optionally, a path to a folder for storing the temporarily created .CSV files. If this path is not passed, the local temp folder will be used. If using the library from ASP.NET, you will probably need to pass a path that exists beneath your application's root folder, as the ASPNET user may not have access to the system temp folder.
If anything goes wrong, such as an SQL syntax error or IO error, an error will be raised, so you should always wrap these calls in a
Try...Catch block.
Dim newDS As DataSet = _ DataSetSQLEngine.DataSetSQLEngine.executeSQLDataset(ds, _ "Select * From myTable.csv as myTable order by dateStamp")
Note in the example that you must reference your tables as aTable.CSV. By default, the ODBC driver looks for files with a .txt extension. There may be an option you can set in the driver to force it to default to .csv; I haven't looked very hard. If anyone finds it, please let me know.
The Microsoft Text Driver ODBC driver appears to use the Jet Engine on which Access is built, and seems to support the Access SQL implementation. It would also appear that the standard Access functions are available, which is a bonus.
The included project, SQLEngineExample, has a very simple example which will give you a good idea of how the function works.
As mentioned, this implementation is quite crude, and will not provide a usable solution in some circumstances. For example, if performance is critical, then this solution fails miserably, as it requires a lot of disk IO and scanning of tables. Also, I have not tested it with many different types of data. You will need to test it well with whatever data you have, before relying on it.
When using string values in your queries, ensure you use single quotes; double quotes cause an error.
Another limitation is that only
SELECT and
INSERT are supported by the Text ODBC driver, not
UPDATE or
DELETE. Although, why you would use this library for anything other than
SELECT I don't know... Just something to be aware of.
If you take the attached project and add more robust error-checking, or improve the usability in any way, please let me know and I will upload it (and likely use it myself!).
I would like to take this opportunity to request Microsoft that they add this functionality as part of the .NET
DataSet implementation, preferably with a syntax that encompasses T-SQL.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/database/DataSetSQL.aspx | crawl-002 | en | refinedweb |
Imagine the scenario where you have multiple users viewing a page in your web application. You want to have each client page updated automatically when there are changes to the data the page is displaying.
Because of the request/response nature of web applications, we can not simply get our application to communicate directly to clients that are viewing our page and tell them to update their content as with a traditional client/server application. This article explores a way to synchronise your user's views using the XMLHTTP handler.
If we use the
META tag in the header of our ASP.NET page, we could get a page to reload every few seconds, showing updated content since the last reload. So, what is wrong with doing this?
In a situation where we have a 100 users viewing a page with our
META tag set to reload the page every 5 seconds, each page would generate a request for an ASP.NET page, which in turn might be querying a database. That is a lot of page requests, doing a lot of work, which is mainly unnecessary seeing that the data is unlikely to be changing every 5 seconds.
We need a way for a client to ask our application if it is necessary to update its content ... We can do this using the
XMLHTTP object.
There are four main components of the demonstration web application which are used to synchronise the ASP.NET page in our demo web app.
A controller class representing the bowels of the application (our business layer, if you like ...).
Our ASP.NET page displaying data we want synchronized across multiple client sessions. This contains a
GridView control displaying products with functions to add, edit, and remove products.
JavaScript code using the
XMLHTTP object to perform 'behind the scenes' requests to CheckSync.ashx. This is included in the Default.aspx page.
An HTTP handler with a simple job of responding to an
XMLHTTP request, with a flag determining if a page should update its content.
To achieve our page synchronization, we are going to keep two references. One reference is going to be kept with the client (in the session state), and the other is going to be kept with the server (in the application state).
When any action takes place such as adding, editing, or deleting, the server will increment its reference. The client will regularly check its reference against the server reference. If the server reference is higher, the client will synchronize its content and copy the server reference to its own. This process repeats until the client is no longer viewing the data.
I have built a set of classes to accommodate simple synchronization in the demo.
ViewSynchronisation
Responsible for creating a server reference and ensuring that only one copy exists in the application.
ServerSynchronisationReference
Holds a server synchronization reference.
ClientSynchronisationReference
Holds a client synchronization reference.
The process of the client checking its reference against the server is detailed below.
DemoWebAppCoreis instantiated in the
Application_Start()method of Global.asax.
protected void Application_Start(object sender, EventArgs e) { ... Application["Engine"] = new DemoWebAppCore(); ... }
DemoWebAppCore.ProductViewSyncis instantiated.
public class DemoWebAppCore { ... private ViewSynchronisation _productViewSync = new ViewSynchronisation("Products"); public ViewSynchronisation ProductViewSync { get { return _productViewSync; } } ... }
Session["ClientSyncRef"]is instantiated with a
ClientSynchronisationReferenceusing
DemoWebAppCore.ProductViewSync.CreateClientReference()in the
Session_Start()method of Global.asax.
protected void Session_Start(object sender, EventArgs e) { Session["ClientSyncRef"] = Engine.ProductViewSync.CreateClientReference(); }
XMLHTTPrequest for SyncCheck.ashx.
var pollInterval = 5000; var checkStatusUrl = "CheckSync.ashx"; var pollID = window.setInterval(checkStatus, pollInterval); function checkStatus() { // create XMLHTTP object req = createReq(); if(req != null) { req.onreadystatechange = process; req.open("GET", checkStatusUrl, true); req.send(null); } else window.removeInterval(pollID); } function createReq() { // Create XMLHTTP compatible in various browsers try { req = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try { req = new ActiveXObject("Microsoft.XMLHTTP"); } catch(oc) { req = null; } } if (!req && typeof XMLHttpRequest != "undefined") { req = new XMLHttpRequest(); } return req; }
ClientSynchronisationReferencein
Session["ClientSyncRef"]is invalid using
ClientSynchronisationReference.IsInvalid. If it is, CheckSync.ashx returns the character "1" in its response. Otherwise, it will return "0".
public void ProcessRequest(HttpContext context) { // Make sure that the response of this // handler is not cached by the browser context.Response.Cache.SetCacheability(HttpCacheability.NoCache); context.Response.ContentType = "text/plain"; if (ClientSyncRef(context) != null) { if (ClientSyncRef(context).IsInvalid) { // Client ref was invalid so ... // return 1 to CheckSync.js context.Response.Write("1"); return; } } // Client ref was not invalid so return 0 to CheckSync.js context.Response.Write("0"); }
function process() { // simply determine response from CheckSync.ashx // and refresh current window if 1 is returned if (req.readyState == 4 && req.status == 200 && req.responseText == '1') window.location.replace(window.location.href); }
As mentioned previously, whenever data changes, we need to invalidate our client views. To do this, we simply call the
ViewSynchronisation.InvalidateClients() method.
public void AddProduct() { ... ProductViewSync.InvalidateClients(); } public void UpdateProduct(long Id, string Name, string Description, decimal Price) { ... ProductViewSync.InvalidateClients(); } public void RemoveProduct(long Id) { ... ProductViewSync.InvalidateClients(); }
By calling
InvalidateClients(), we are simply incrementing the reference stored in
ServerSynchronisationReference._syncRef.
public void InvalidateClients() { Interlocked.Increment(ref _syncRef); }
So, when a client checks
ClientSynchronisationReference.IsInvalid, a simple comparison is made of its own client reference and the server reference.
public bool IsInvalid { get { long _serverSyncRef = _serverRef.Value; // If server sync ref is greater than the client, // then client is invalid if (_serverSyncRef > _clientSyncRef) { _clientSyncRef = _serverSyncRef; return true; } return false; } }
XMLHTTPand an ASHX handler, we cut out most of the overheads involved with a full postback to an ASPX page. Also, we cut out the annoyance of full page refresh at the browser side.
var pollIntervalline in the CheckSync.js file.
XMLHTTPrequest to pull down modified data and update the page dynamically for a totally transparent update.
Remember also, CheckSync.ashx doesn't only have to return "1" and "0" ...
Server reference = 1 Product 234 added Server reference = 2 Product 634 updated Server reference = 3 Product 231 removed
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/aspnet/xmlhttpsync.aspx | crawl-002 | en | refinedweb |
Sometimes you'd want to limit your application to a single instance. In Win32
we had the
CreateMutex API function using which we could create a named
mutex and if the call failed, we assume that the application is already running.
Well the .NET SDK gives us the
Mutex class for
inter-thread/inter-process synchronization. Anyway in this article we are more interested in using the
Mutex class to limit our apps to a single instance rather than in its use
as an inter-process data synchronization object.
The code below is nothing new. It's simply a .NET version of a universal technique that has been used pretty much successfully over the years. For a thorough understanding of this technique and other techniques and issues involved when making single instance applications, you must read Joseph M Newcomer's article - Avoiding Multiple Instances of an Application
__gc class CSingleInstance { private: //our Mutex member variable Mutex *m_mutex; public: CSingleInstance(String *mutexname) { m_mutex=new Mutex(false,mutexname); } ~CSingleInstance() { //we must release it when the CSingleInstance object is destroyed m_mutex->ReleaseMutex (); } bool IsRunning() { //you can replace 10 with 1 if you want to save 9 ms return !m_mutex->WaitOne(10,true); } };
int __stdcall WinMain() { //create a mutex with a unique name CSingleInstance *si= new CSingleInstance("{94374E65-7166-4fde-ABBD-4E943E70E8E8}"); if(si->IsRunning()) MessageBox::Show("Already running...so exiting!"); else Application::Run(new MainForm()); return 0; }
Remember to put the following line on top of your program.
using namespace System::Threading;
I have used the string
{94374E65-7166-4fde-ABBD-4E943E70E8E8}as my unique mutex name. You can use
a name that you believe will be unique to your application. Using a GUID would
be the smartest option obviously. You can put the
class in a DLL and thus you can use it from all your applications.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/mcpp/mcppsingleinst.aspx | crawl-002 | en | refinedweb |
This.
I wanted to be able to save a strongly-typed collection class to an XML file, but because of the way
XmlSerializer works (briefly described later) and the fact that one of the properties being serialized was a base class, I couldn't do this out of the box because I wanted to store not the base class itself but the derived classes.
I wasn't the first to find this limitation (it has come up a number of times on various forums), and most people seemed to have worked around this by writing custom code to read and write an XML file, but I wanted a simpler solution.
First, I'll show you the three original classes I was working with:-
ViewInfoCollection - a collection of
ViewInfo objects (no surprises there!). It is derived from
ViewInfoCollectionBase (an automatically generated collection) and provides additional methods to be able to save and load itself to/from a file. (I also have a static property to save the
XmlSerializer object so that it need only be created on first use, but now I know more about how
XmlSerializer works, I believe that it is cached internally by .NET anyway and so may be redundant.)
using System; using System.IO; using System.Xml.Serialization; namespace Dashboard { [Serializable] public class ViewInfoCollection: ViewInfoCollectionBase { #region Constructors public ViewInfoCollection() {} public ViewInfoCollection(int capacity): base(capacity) {} public ViewInfoCollection(ViewInfoCollectionBase c): base(c){} public ViewInfoCollection(ViewInfo[] a): base(a) {} #endregion Constructors #region Static private static XmlSerializer Serializer { get { if (serializer == null) { serializer = new XmlSerializer(typeof(ViewInfoCollection)); } return serializer; } } static XmlSerializer serializer; public static ViewInfoCollection FromXmlFile(string filename) { ViewInfoCollection @new = new ViewInfoCollection(); @new.ReadFromXml(filename); return @new; } #endregion Static #region Methods public void WriteToXml(string filename) { using(StreamWriter writer = new StreamWriter(filename)) { Serializer.Serialize(writer, this); } } public void ReadFromXml(string filename) { ReadFromXml(filename, false); } public void ReadFromXml(string filename, bool preserveItems) { if (preserveItems == false) Clear(); using(StreamReader reader = new StreamReader(filename)) { AddRange( (ViewInfoCollection) Serializer.Deserialize(reader)); } } #endregion Methods } }
ViewInfo - this is the class that is contained in the collection. Nothing special here, but watch out for the last property -
Parameters - although it looks innocent enough, it is the cause of all my problems.
using System; using System.Xml.Serialization; namespace Dashboard { [Serializable] public class ViewInfo { public string Name { get { return name; } set { name = value; } } string name; public string Category { get { return category; } set { category = value; } } string category; public string ServiceProvider { get { return serviceProvider; } set { serviceProvider = value; } } string serviceProvider; public bool IsWellKnown { get { return isWellKnown; } set { isWellKnown = value; } } bool isWellKnown; public string FormType { get { return formType; } set { formType = value; } } string formType; public string[] AlternativeFormTypes { get { return alternativeFormTypes; } set { alternativeFormTypes = value; } } string[] alternativeFormTypes; public object UniqueID { get { if (uniqueID == null) return Name; else { return uniqueID; } } set { uniqueID = value; } } object uniqueID; public DashboardParams Parameters { get { return parameters; } set { parameters = value; } } DashboardParams parameters; public override string ToString() { return string.Format("Name={0}, IsWellKnown={1}, " + "UniqueID={2}, FormType={3}, AlternativeFormTypes={4}", Name, IsWellKnown, UniqueID, FormType, AlternativeFormTypes == null ? "(none)" : string.Join("; ", AlternativeFormTypes)); } } }
DashboardParams - this class happens to be abstract although the problems I had would be the same if it wasn't. It is a base class intended to be overridden by any number of classes that hold parameter information. It is marked as
Serializable (as are the other two classes) because it will be passed to other processes on other machines via remoting) but this is not relevant for this article. It is the Type used in the
Parameter property of
ViewInfo and is used to provide a base for concrete classes such as
DataWatcherParams (not listed here), which adds a few more properties and is mentioned later in the article.
using System; using System.Xml.Serialization; namespace Dashboard { [Serializable] public abstract class DashboardParams: IDashboardParams { #region Properties [XmlIgnore] public ClientToken Token { get { return token; } } ClientToken token = ClientToken.Instance; [XmlIgnore] public DashboardMessageHandler MessageHandler { get { return messageHandler; } set { messageHandler = value; } } DashboardMessageHandler messageHandler; #endregion Properties public string DisplayName { get { return displayName; } set { displayName = value; } } string displayName; public virtual bool IsValid { get { return messageHandler != null; } } public virtual string UniqueID { get { return Guid.NewGuid().ToString(); } } } }
I tried saving a test collection which contains two
ViewInfo objects, one has an instance of a
DataWatcherParams as its
Parameters property, and the other has a null
Parameters property.... and got the following exception :-
Unhandled Exception: System.InvalidOperationException: There was an error generating the XML document. ---> System.InvalidOperationException: The type Dashboard.DataWatchServices.DataWatcherParams was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically."
Not knowing much about
XmlSerializer apart from the basics, I Googled. And then I Googled some more. And I came up with the following observations.
.NET has two independent serialization paradigms designed for completely different serialization scenarios:-
BinaryFormatteror
SoapFormatter(or any class implementing
IFormatter) and is intended to put the contents of any serializable class into a stream which can be saved or transported and then used to recreate the original object in its entirety. It uses attributes such as
Serializableto control serializable and, most importantly, serializes the
privatefields within the class.
XmlSerializer, is intended simply to map fields and properties of a class to an XML document and vice versa. It is completely independent of the 'other' serializer and has its own attributes for control. The most important difference is that it only looks at
publicread/write properties and methods. It also has special support for types that implement
ICollectionand serializes the contents of the collection as nested elements.
XmlSerializer works by generating an on-the-fly assembly (with a random name) that knows how to serialize/deserialize the type passed to it in its constructor. During the generation of this assembly, it looks for
public read/write properties within the type, checks that other types involved have a constructor that takes no parameters (otherwise, it wouldn't be able to recreate the object during deserialization!), and builds a list of types that it needs to know about to perform the serialization/deserialization.
And herein lies the problem. If during serialization, it finds a class that is not part of the type list it built whilst generating the serializer, it throws the exception listed above.
In my scenario, the generated serializer knew about the
DashboardParams type of the
Parameters property, and knew that it could serialize/deserialize it, but when it actually came to serialize my test collection, the
DataWatcherParams type was not in the list and so the exception was thrown.
Using the
XmlInclude attribute (as recommended in the exception description) does work (basically, it manually adds a Type to the list discovered during the generation phase), but it means that I need to know at compile-time all of the classes derived from
DashboardParams. Not a viable solution for my scenario.
The next step to investigate was the options available in the constructor. It is possible to specify a list of types there, but although that eliminates the problem of knowing the derived types at compile-time, I would need to maintain a list and get any new class to 'register' with that list. Definitely over the top.
I then looked at the other XML attributes available for controlling XML serialization. A promising option was the Type property on the
XmlElement attribute that allows a derived type to be specified. Exactly what I wanted to do, but again, it relies on knowing the possible types at compile-time or maintaining a list to use at runtime.
Then I discovered
IXmlSerializable!!!
This allows full control of the XML serialization process and allows a type to put any information it likes into the XML document, an example being
DataSet.
Strictly speaking, it is for internal use only in v1.1 but is documented in v2.0. Anyway, if it's good enough for a
DataSet, it's good enough for my
ViewInfo class!
The
IXmlSerializable interface has three methods:
public XmlSchema GetSchema(); public void ReadXml(XmlReader reader); public void WriteXml(XmlWriter writer);
Since we don't need a schema, I guessed (correctly as it turned out) that returning a
null in
GetSchema() would be acceptable. That just left
ReadXml and
WriteXml needing to be written. The methods supply an
XmlReader object and an
XmlWriter object respectively, and all I needed to do was insert/extract the elements for my
ViewInfo class.
Then I realized that there is actually a lot of complicated reflection work going inside the generated serializer. Although I could serialize simple properties such as
string
Name and
bool
IsWellKnown by using their.
ToString() methods and then use
.Parse() to recreate the value,
object
UniqueID would be somewhat more complicated since I would have to interrogate its type using reflection and then add an attribute. Parsing on deserializable would then get very complicated! Another problem was that if I (or another developer) later decided to derive from
ViewInfo, I would also need to re-implement these methods for any new properties.
All I really wanted to do was get control of serializing
Parameters and let normal serialization take care of the rest, but I couldn't do this simply -
IXmlSerializable is an all or nothing solution.
Then I remembered that during my Googling session, I saw a solution to a custom XML Serialization problem that, although not applicable to my problem, gave me another idea. (I can't find the original source now, but thanks to that guy anyway!). His problem was that
Color didn't serialize correctly, and he got around the problem by putting an
XmlIgnore attribute on his
Color property and creating a new property called
XmlColor which contained a new class called
SerializeColor. So
XmlSerializer, instead of serializing
Color, serialized an instance of
SerializeColor which was a class over which he had complete control.
I came up with this:-
using System; using System.Xml; using System.Xml.Schema; using System.Xml.Serialization; namespace Dashboard { public class DashboardParamsSerializer: IXmlSerializable { } }
and I modified my
ViewInfo class as follows:-
[XmlIgnore] public DashboardParams Parameters { get { return parameters; } set { parameters = value; } } DashboardParams parameters; [XmlElement("Parameters")] public DashboardParamsSerializer XmlParameters { get { if (Parameters == null) return null; else { return new DashboardParamsSerializer(Parameters); } } set { parameters = value.Parameters; } }
(The
XmlElement("Parameters") is just sugar so that the correct element name is written into the XML file rather than "XmlParameters").
What is happening here is that the
Parameters property is now ignored by the serializer but
XmlParameters is serialized instead. The serializer comes along, sees that
XmlParameters is a read/write property of type
DashboardParamsSerializer, and asks for its value. The
XmlParameters property getter method creates a temporary
DashboardParamsSerializer object passing the original
Parameters value in the constructor. (Null values are ignored by
XmlSerializer anyway, so there is no need for special handling code).
Because
DashboardParamsSerializer implements
IXmlSerializable, the serializer
calls its
WriteXml method, and this gives us the opportunity to add an attribute into the current element and store the actual Type into it. It then passes this type to a new
XmlSerializer object which can then serialize the object as it would normally straight into the
XmlWriter object - no need for any reflection on my part.
Deserialize works in reverse. The serializer will call
DashboardParamsSerializer.ReadXml with the
XmlWriter located at the correct place in the XML file. The method then reads the attribute it placed there originally and creates a new
XmlSerializer to create the new object.
XmlSerializer then passes the new object to the
XmlParameters property setters. The real object is then stored in the
parameters
private field.
BINGO! It worked a treat!
The only fly in the ointment was that I now had an extra
public property in
ViewInfo that I didn't really want (it had to be
public, otherwise
XmlSerializer would ignore it).
Then I had an Epiphany. Remember that I mentioned that the
XmlElement attribute had a Type property to specify a derived type? Well, I wondered whether it really needed to be a derived object or whether
XmlSerializer was just casting to it.
I added this section to
DashboardParamsSerializer:-
#region Static public static implicit operator DashboardParamsSerializer(DashboardParams p) { return p == null ? null : new DashboardParamsSerializer(p); } public static implicit operator DashboardParams(DashboardParamsSerializer p) { return p == null ? null : p.Parameters; } #endregion Static
changed the attribute on the
Parameters property to:-
[XmlElement(Type=typeof(DashboardParamsSerializer))]
and deleted the
XmlParameters property completely.
If
XmlSerializer was simply casting to the new type rather than explicitly checking that it was a derived type, then the implicit overload methods would silently convert between
DashboardParamsSerializer and
DashboardParams (and any derivation of it). It worked!!
So, I now had a single extra class and a single attribute that would allow me to serialize any class derived from
DashboardParams. This was the solution for my needs but remember that I previously said that the
XmlSerializer has constructor overrides to allow attributes to be specified? Microsoft did this with a view to being able to put XML attributes on classes for which the source code is not available.
So, I tested my solution to its logical conclusion and removed all customization from
ViewInfo - leaving it exactly as it was originally. Instead, I've made the attributes 'virtual' attributes, and told the
XmlSerializer to use them as though they were on the target object.
I changed the
Serializer property in
ViewInfoCollection to do this as follows:-
private static XmlSerializer Serializer { get { if (serializer == null) { XmlAttributeOverrides attributeOverrides = new XmlAttributeOverrides(); XmlAttributes attributes = new XmlAttributes(); XmlElementAttribute attribute = new XmlElementAttribute(typeof(DashboardParamsSerializer)); attributes.XmlElements.Add(attribute); attributeOverrides.Add(typeof(ViewInfo), "Parameters", attributes); serializer = new XmlSerializer(typeof(ViewInfoCollection), attributeOverrides); } return serializer; } } static XmlSerializer serializer;
Again, this worked as expected!
What we are doing here is telling the serializer that if it should come across a property or method called "
Parameters" in a
ViewInfo Type, then pretend it had an
XmlElement attribute on it created with its Type property set to
typeof(DashboardParamsSerializer).
So, now we have a way of being able to XmlSerialize an object that contains a read/write property (or
public field) which holds a derived class, and we only know the base type at compile time. p == null ? null : new DashboardParamsSerializer(p); } public static implicit operator DashboardParams(DashboardParamsSerializer p) { return p == null ? null : p.Parameters; } #endregion Static } }
DashboardParams" with "<newClassName>" and replace "
DashboardParamsSerializer" with "<newClassName>Serializer".
XmlSerializerwill know to use your custom serializer class:-
[XmlElement(Type=typeof(<newClassName>Serializer))]
XmlSerializerif you don't have the source code:-
XmlAttributeOverrides attributeOverrides = new XmlAttributeOverrides(); attributes.XmlElements.Add(new XmlElementAttribute(typeof(<newClassName>Serializer))); attributeOverrides.Add(typeof(<typeWithAPropertyHoldingABaseType>), "<nameOfPropertyHoldingABaseType>", attributes); serializer = new XmlSerializer(typeof(<anyTypeThatIndirectlyReferences TypeWithAPropertyHoldingABaseType>), attributeOverrides);
Another buglet that I spotted during later testing was that the
UniqueID property returns the value of the
Name property if no specific
UniqueID had been set. Standard stuff and not a problem for normal serialization since that stores the
private value, but it is a problem for XML serialization since it will serialize whatever
UniqueID returns and not its underlying value. Luckily,
XmlSerializer follows the convention for 'normal' serialization, and will check any
DefaultValue attribute and call
ShouldSerialize<target> for a final determination on whether to serialize or not. The following line fixes the problem:-
public bool ShouldSerializeUniqueID() { return uniqueID != null; }
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/XML/xmlserializerforunknown.aspx | crawl-002 | en | refinedweb |
Whenever a new technology comes along, I personally find that the best way to get to grips with its functionality is to try and create something you have done in another language. To this end, this article will describe how to create a custom control in WPF which will raise custom events. The custom control will then be placed with a standard XAML window and the custom control's events shall be subscribed to. That's it in a nutshell. But along the way there are several things that I would like to point out to you.
The proposed structure will be as follows:
InitializeComponent()method anyway
WPF applications are quite similar is one sense to ASP.NET applications; there may (or may not) be an XAML file, and also a code behind file where the XAML file contains the windows / control rendering, and the code behind does all the procedural code. This is one development model. But there is another way. Anything that can be done in XAML can also be done entirely in code behind (C#/ VB). To this end the custom control that I've created is entirely code created. As for this example it just seemed to make more sense.
As color pickers seem to be almost universally popular at codeproject, I thought let's do one of them. This is a single control that has been created in a separate Visual Studio 2005 project, and is part of the whole solution. I have done this as it is the most common way that we all use third party controls. We get a DLL and make a reference to it. In fact I have chosen this path, as the XAML directives to reference a control do vary slightly depending on whether it is an internal class, or an external DLL. Most commonly I thought it would be a third party external DLL that was being referenced. If you don't get this, don't worry. There will be more on it later.
So without further ado, let's look at the code:
using System; using System.Windows; using System.Windows.Controls; using System.Windows.Controls.Primitives; using System.Windows.Data; using System.Windows.Input; using System.Windows.Media; using System.Windows.Shapes; namespace ColorPicker { #region ColorPickerControl CLASS /// <summary> /// A simple color picker control, with a custom event that uses /// the standard RoutedEventArgs. /// <br/> /// NOTE: I also tried to create a custom event with custom inherited /// RoutedEventArgs, but this didn't seem to work, /// so this event is commented out. But if anyone knows how to do this /// please let me know, custom, //); // Create individual items foreach (string clr in _sColors) { // Creat"; } #endregion #region Events // Provide CLR accessors for the event public event RoutedEventHandler NewColor { add { AddHandler(NewColorEvent, value); } remove { RemoveHandler(NewColorEvent, value); } } // This method raises the NewColor event private void RaiseNewColorEvent() { RoutedEventArgs newEventArgs = new RoutedEventArgs(NewColorEvent); RaiseEvent(newEventArgs); } // Provide CLR accessors for the event public event NewColorCustomEventHandler NewColorCustom { add { AddHandler(NewColorCustomEvent, value); } remove { RemoveHandler(NewColorCustomEvent, value); } } // This method raises the NewColorCustom event private void RaiseNewColorCustomEvent() { ToolTip t = (ToolTip)(SelectedItem as Rectangle).ToolTip; ColorRoutedEventArgs newEventArgs = new ColorRoutedEventArgs(t.Content.ToString()); newEventArgs.RoutedEvent = ColorPickerControl.NewColorCustomEvent; RaiseEvent(newEventArgs); } //******************************************************************* #endregion #region Overrides /// <summary> /// Overrides the OnSelectionChanged ListBox inherited method, and /// raises the NewColorEvent /// </summary> /// <param name="e">the }
It can be seen that this is all fairly normal C# .NET 3.0 code (that is if you are OK with .NET 3.0 stuff, I am just learning). I want to pay some special attention to the constructor. Let's have a look at that part by part.
// Define a template for the Items,);
The
FrameworkElementFactory class is a way to programmatically create templates, which are subclasses of
FrameworkTemplate such as
ControlTemplate or
DataTemplate. This is equivalent to creating a
<ControlTemplate> tag in XAML markup. So what we are really doing here is saying that the internal inherited
Listbox.ItemsPanel will have a template applied to it that will be a uniform grid layout with 10 columns.
// Create individual items foreach (string clr in _sColors) { // Create; }
This section of the code is responsible for creating the individual
ListItem contents. So what is going on? Well, the items are being created as
Rectangle objects (yep that's right, Rectangles). Then the Rectangles are being filled with a Brush color, and then the Rectangle has a
ToolTip applied.
/";
Finally the
SelectedValuePath is told that the property that should be mapped to the
SelectedValue is "
Fill". So this means that whenever we get the
SelectedValue the object it will be is a
Fill" is a
Brush Type, unless it is cast to another object Type. Isn't that mental. WPF is mind blowing, it really is.
The more eagle eyed amongst you will notice that the code for the control contains 2 events one of which is commented out. More on this later.
Microsoft being Microsoft didn't want us to get too comfortable with things, so they have overhauled everything it would appear. Even something as small as events, is no longer the same as it was in .NET 2.0.
The code snippets below represent the new .NET 3.0 way of creating events.
I have created a custom event called
NewColorEvent so let's have a look at how to define an event:
//A RoutedEvent using standard RoutedEventArgs, event declaration //The actual event routing public static readonly RoutedEvent NewColorEvent = EventManager.RegisterRoutedEvent("NewColor", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(ColorPickerControl));
What else do we need to do, well we need to create the accessors for subscribing / unsubscribing to the event.
// Provide CLR accessors for the event public event RoutedEventHandler NewColor { add { AddHandler(NewColorEvent, value); } remove { RemoveHandler(NewColorEvent, value); } }
And we also need a raise event method such as:
// This method raises the NewColor event private void RaiseNewColorEvent() { RoutedEventArgs newEventArgs = new RoutedEventArgs(NewColorEvent); RaiseEvent(newEventArgs); }
And lastly we need to raise the event somewhere. I have chosen to do this in an override of the inherited
ListBox
OnSelectionChanged method; this is shown below:
//raise the event with standard RoutedEventArgs event args RaiseNewColorEvent();
And that's all there is to creating a custom event in a custom control. We just need to place the control somewhere now and subscribe to this lovely new event.
OK so you think you know how to make a reference to a DLL which contains a custom control. You just create a new tab on the toolbar, and browse to the DLL and any of the contained controls to the toolbar. Right. Well that didn't seem to work. So what do you have to do. Is Add a project reference (right click on references) and browse to the assembly (DLL) with the custom control(s), only one in this articles case.
So that's step 1. Then we actually want to use the custom control within a XAML window. So we have to add an extra directive to the XAML Windows root element. The important part to add is as follows:
xmlns:src="clr-namespace:NAMESPACE_NEEDED"
xmlns:src="clr-namespace:NAMESPACE_NEEDED;assembly=ASSEMBLYNAME_NEEDED"
So for the attached example, where we have an external DLL which has a control we need to use, the root element would be changed to the following:
<Window x:
That's part of the story. We have now successfully referenced the external user control in the XAML, but we still don't have an instance of the control in the markup yet. So how do we do that. Well we do something like this:
<src:ColorPickerControl
OK so now we have done the XAML part, but what about the code behind file. We still need to do that part. So how is that done. Well, luckily that part is easier. Its just a normal
using statement we need:
using ColorPicker;
Now we really do have a fully referenced external DLL, which contains a user control, which we now have an instance of within our XAML window, that is also now known about by the code behind logic because of the two steps just carried out.
But just how does the XAMLs code behind file know about the user control contained within the XAML file. Well the answer to that, is that when Visual Studio compiles a project, it creates a new generated source file, which is placed into the DEBUG\OBJ or RELEASE\OBJ (depends on how you are compiling the project). This file is called the same as the current XAML window file, but the extension will be g.cs for c# or g.vb for VB.
The following screen print shows this for the attached project:
Can you see there is a Window1.g.cs (as I use C#) file there.
So what is that all about. Well let's have a look.
//--------------------------------------------------------------------------- // <auto-generated> // This code was generated by a tool. // Runtime Version:2.0.50727.42 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //--------------------------------------------------------------------------- using ColorPicker; ColorControlApp { /// <summary> /// Window1 /// </summary> public partial class Window1 : System.Windows.Window, System.Windows.Markup.IComponentConnector { internal System.Windows.Controls.StackPanel Stack; internal System.Windows.Controls.Label lblColor; internal ColorPicker.ColorPickerControl lstColorPicker; private bool _contentLoaded; /// <summary> /// InitializeComponent /// </summary> [System.Diagnostics.DebuggerNonUserCodeAttribute()] public void InitializeComponent() { if (_contentLoaded) { return; } _contentLoaded = true; System.Uri resourceLocater = new System.Uri("/ColorControlApp;component/window1.xaml", System.UriKind.Relative); System.Windows.Application.LoadComponent(this, resourceLocater); } .Stack = ((System.Windows.Controls.StackPanel)(target)); return; case 2: this.lblColor = ((System.Windows.Controls.Label)(target)); return; case 3: this.lstColorPicker = ((ColorPicker.ColorPickerControl)(target)); return; } this._contentLoaded = true; } } }
It can be seen that this source file provides the missing parts, most noticeably the
InitializeComponent() method and also notice that there are a few instance fields which represent the components within the XAML file. So this is how both the code behind and XAML files are compiled to form a single assembly with all the required information.
The last thing I should show is the running app. That part is probably not so important, as it is the concepts I was trying to share really.
But for completeness sake, here is a screen shot.
remember the
ColorPicker is simply a specialized
ListBox really. Quite impressive no?
Well although we've only created a simple control and used it within a single XAML page, I hope you can see that there are quite a few core concepts that were covered here.
I would just like to ask, if you liked the article please vote for it, as it lets me know if the article was at the right level or not.
I have quite enjoyed constructing this article. I hope you liked it. I think it will help you a lot when you get time to do some XAML / WPF type apps. I have only just started out with XAML and I truly believe it is set to totally change the sort of applications we are all going to see.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/WPF/WPF_CustomerControl.aspx | crawl-002 | en | refinedweb |
Automated and Dictionary attacks to login is a security threat that every IT is quite aware of. There are many techniques that help address this problem, one of which is the CAPTCHA - an image that contains characters and/or numbers that presumably only humans can read; its value is then entered by the user manually. This helps filter out automated logins. However, this technique can be quite difficult to implement and also costly because you would have to generate image on the fly. Further, some software are designed to figure out the value on the image using technologies similar to OCR scanning. Although CAPTCHA may work most of the time, like I said, it is difficult, expensive, and does not work all the time, plus, requires your user to enter yet another value from an already difficult to read text.
I began thinking about this problem and wanted to come up with a solution that...
Suddenly, it dawned upon me when I started thinking like a hacker that if I wanted to automatically try to login using brute force, I would have to continuously generate different user ID and password combinations until I find the one that will get me through, but what is common in this? The keys! Let me explain... for example, if the login page contains two text boxes, one named "
userid" and the other "
password", all I have to do is submit values to these fields, something like, and keep on changing the values "
John" and "
cool" until I find the right combination and I will get in. The keys that are common in this scenario are "
userid" and "
password". What if these keep changing every time you make a submit attempt? You would never know which key to provide the value to, hence cripple the key-value combination attack altogether!
The basic idea in accomplishing this is to assign a different name to the
userID text box and
password text box every time the page is loaded, either by first loading or a postback is triggered. To make sure that the keys (the names assigned to the
userID textbox and
password textbox) are unpredictable, I elected to use GUID. There are four parts to this technique.
Part 1:
UserIDKey and
PwdKey private properties. (I use ViewState to store the assigned key instead of Session so that if the user spawns another instance of login page, each page would have its own keys.)
private string UserIDKey { get { if(ViewState["UserIDKey"] == null) ViewState["UserIDKey"] = Guid.NewGuid().ToString(); return (string) ViewState["UserIDKey"]; } set { ViewState["UserIDKey"] = value; } } private string PwdKey { get { if(ViewState["PwdKey"] == null) ViewState["PwdKey"] = Guid.NewGuid().ToString(); return (string) ViewState["PwdKey"]; } set { ViewState["PwdKey"] = value; } }
Part 2: Assign new names to the text boxes when the page is first loaded.
private void Page_Load(object sender, System.EventArgs e) { if(!IsPostBack) { MakeFieldNamesSecret(); } } private void MakeFieldNamesSecret() { txtPwd.ID = PwdKey; txtUserID.ID = UserIDKey; }
Part 3: Validation. When the Submit button is clicked, retrieve the values of the two text boxes to validate.
private void btnLogin_Click(object sender, System.EventArgs e) { string userID = Request.Form[UserIDKey]; string pwd = Request.Form[PwdKey]; //You must provide your own validation if(userID == "John" && pwd == "cool") Server.Transfer("PostLoginPage.aspx"); else lblErr.Text = "Invalid UserID or Password"; }
Part 4: Change the names of the text boxes on postback. This is what really prevents the key-value attack!
private void LoginPage_PreRender(object sender, System.EventArgs e) { if(IsPostBack) { UserIDKey = null; PwdKey = null; MakeFieldNamesSecret(); } }
What I found to be very interesting is the magic of thinking outside the box. What most people are doing trying to solve this problem is how to make the input values more difficult to automate, but few, perhaps thought about changing the variable that takes the value. With this very simple technique, I think I have solved a real problem. What do you think?
First revision: January 5, 2005.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/web-security/NoAutoLogin.aspx | crawl-002 | en | refinedweb |
There are tons of articles and sample code on how to do something with ATL. Usually they teach how to only add feature to your component and you have to dig a lot of tutorials in order to build something rich-featured.
In this article, I try to cover how to create COM server, expose it to scripting languages, make it an event source, add a VB-style collection to the object and add an ability to your object to report errors.
I didn't set a goal to cover all questions on COM or attributed ATL in this article, so don't expect to find here explanations of every attribute or COM basics. Refer to MSDN for more detailed information. This article is just a quick walk-through on things that will make your COM object more friendly to other programmers.
We will be learning by example. The example is very simple � Windows� Services Manager. The services manager itself will be a COM object we'll write in C++ using attributed ATL, and there will also be a set of scripts in VBScript that will allow us to control services in batch mode.
When creating a program, we should think about which classes we will have. Here, we'll have the Manager itself, Services collection and Service. We are talking about COM, so they will be our coclasses.
ServicesMgr coclass will provide the user with a set of operations on
Services collection and services identified by name.
Services collection will provide the user with abilities to iterate services using
foreach statement.
Service coclass will represent a single service.
To start with ATL project, run Visual Studio .NET IDE and select File/New/Project... command. Select Visual C++ Projects/ATL/ATL Project and enter a name. In this tutorial, I'll use the name "ServicesManager".
Don't change any options in ATL Project Wizard. Let it be Attributed and Dynamic-link library. Click Finish � we're done!
Now we have a dummy COM object. It can be compiled, but does nothing yet.
Open ServicesManager.cpp file. Note
[module...] lines there. This is an attribute. It defines the library block. This means that we have
DllMain,
DllRegisterServer and
DllUnregisterServer functions without writing any line of code.
Let's add our coclasses to the project.
Right click on ServicesManager project in Solution Explorer and select Add/Add Class. Then select ATL/ATL Simple Object in Add Class - ServicesManager window. Enter ServicesMgr as a name in ATL Simple Object Wizard. Leave all options on the next page as is. Note Dual Interface option is selected. This will help us to provide the functionality of
ServicesMgr both to languages like C++ that use VTBL binding of methods and scripting languages that use
IDispatch interface to communicate with objects.
Click Finish in wizard's window and get all needed code for our
ServicesMgr coclass.
Now, find
IServicesMgr interface declaration in ServicesMgr.h file and add the following attributes to this interface:
oleautomation,
hidden and
nonextensible, so it will look like this:
[ object, uuid("2543548B-EFFB-4CB4-B2ED-9D3931A2527D"), dual, oleautomation, nonextensible, hidden, helpstring("IServicesMgr Interface"), pointer_default(unique) ] __interface IServicesMgr : IDispatch { };
Adding these attributes to the interface will make it compatible with OLE automation, hidden in user-oriented object browsers (just to save user's time) and will disallow the user to populate this interface with properties or methods at run-time.
Make another note: we do all stuff right in our C++ code. We don't bother with IDL and other things.
Repeat the steps above to add coclasses
smServices (we cannot use
Services name, because it is a name of some system namespace) and
smService. Add attribute
noncreatable to both
smServices and
smService coclasses. This will prevent the user from creation of these objects.
Let's add
Start() and
Stop() methods to our
ServicesMgr coclass. Right click on
IServiceMrg node in Class View and select Add/Add Method. Set method name to
Start and add a
BSTR [in] parameter with name
ServiceName. Do the same to add
Stop() method. You should get the following code:
__interface IServicesMgr : IDispatch { [id(1), helpstring("method Start")] HRESULT Start([in] BSTR ServiceName); [id(2), helpstring("method Stop")] HRESULT Stop([in] BSTR ServiceName); };
The wizard will also add proper declarations to the coclass and provide you with default implementation of this method. Edit
helpstring attributes to give more helpful hint for the user.
Note the
id attribute near each method. It sets the dispatch ID of the method. By using this attribute, we don't need to write any dispatching stuff by hand � everything will be done by the compiler.
To simplify testing, "implement" these methods that way:
STDMETHODIMP CServicesMgr::Start(BSTR ServiceName) { Beep(400, 100); return S_OK; } STDMETHODIMP CServicesMgr::Stop(BSTR ServiceName) { Beep(1000, 100); return S_OK; }
Build the project and run the following script to test it:
Set Mgr = WScript.CreateObject("ServicesManager.ServicesMgr") Mgr.Start("SomeSvc") MsgBox "Started!" Mgr.Stop("SomeSvc")
If you did everything right, then you'll hear a beep, then see the message box and then hear a beep again.
Note, that we testing our object with script, so it's exposed to scripting languages. Note also that we did it with minimum effort by using
dual attribute, deriving our interface from
IDispatch and using
id attribute for the methods.
Starting and stopping services using their names is a good deal, but how's the user expected to know these names? We should provide him with the ability to iterate names of services in order to obtain all available names.
According to "Building COM Components That Take Full Advantage of Visual Basic and Scripting" article, we should implement an interface with 2 methods and 1 property �
_NewEnum() method, property
Item and
Count() method. These methods have special dispatch ID codes, so the caller will know what to expect from them. Note an underscored
_NewEnum() method. This means the method won't be visible for the user.
So, our
IsmServices should have these methods and property:
[ object, uuid("5BB63796-959D-412D-B94C-30B3EB8D97F1"), dual, oleautomation, hidden, nonextensible, helpstring("IsmServices Interface"), pointer_default(unique) ] __interface IsmServices : IDispatch { [propget, id(DISPID_VALUE), helpstring("Returns a service referred by name or index")] HRESULT Item([in] VARIANT Index, [out, retval] IsmService** ppVal); [id(1), helpstring("Returns number of services")] HRESULT Count([out,retval] LONG* plCount); [id(DISPID_NEWENUM), helpstring("method _NewEnum")] HRESULT _NewEnum([out,retval] IUnknown** ppUnk); };
Note that property
Item and method
_NewEnum() use special
DISPID identifiers. This is important.
We decided that coclass
smServices will perform services enumeration, but on the other hand, coclass
ServicesMgr that will provide
Services as property, has methods for starting and stopping services. Then it's a good idea to delegate
Start() and
Stop() methods to
smServices. But this will lead us to a bit tricky declaration of
smServices coclass.
Now
smServices implements
IsmServices interface. Remove this declaration and replace it with the following:
class ATL_NO_VTABLE CsmServices : public IDispatchImpl<IsmServices> { BEGIN_COM_MAP(CsmServices) COM_INTERFACE_ENTRY(IDispatch) COM_INTERFACE_ENTRY(IsmServices) END_COM_MAP() ... };
This will provide us with the default implementation of
IDispatch interface and expose both
IDispatch and
IsmServices interfaces to the client.
Now we are able to instantiate
smServices coclass ourselves by using this construction (this will implement
IUnknown interface for
smServices):
CComObject<CsmServices> Services;
Repeat these steps for
smService coclass.
Then make a
typedef and add a declaration to
ServicesMgr coclass:
typedef CComObject<CsmServices> CServices; class ATL_NO_VTABLE CServicesMgr : public IServicesMgr { private: CServices *m_pServices; public: CServicesMgr() { if (SUCCEEDED(CServices::CreateInstance(&m_pServices))) m_pServices->AddRef(); } void FinalRelease() { if (m_pServices) m_pServices->Release(); } ... };
And finally add
Services property to
ServicesMgr coclass:
__interface IServicesMgr : IDispatch { [id(1), helpstring("method Start")] HRESULT Start([in] BSTR ServiceName); [id(2), helpstring("method Stop")] HRESULT Stop([in] BSTR ServiceName); [propget, id(3), helpstring("Collection of available services")] HRESULT Services([out, retval] IsmServices** ppVal); };
Now add
EnumServices() method to our
smServices coclass (not in interface!):
typedef std::vector<_Service> _Services; class ATL_NO_VTABLE CsmServices : public IDispatchImpl<IsmServices> { ... private: _Services m_Services; public: STDMETHOD(EnumServices)(); ... }; STDMETHODIMP CsmServices::EnumServices() { // Populate m_Services here return S_OK; }
And implement
get_Services() method of
ServicesMgr:
STDMETHODIMP CServicesMgr::get_Services(IsmServices** ppVal) { if (m_pServices) { // Make sure we enumerated services HRESULT hr = m_pServices->EnumServices(); if (SUCCEEDED(hr)) return m_pServices->QueryInterface(ppVal); else return hr; } return E_FAIL; }
We populated
CsmServices coclass with methods without touching the interface that will be used by clients for enumeration. Clients won't need to call
EnumServices() directly.
Now add
Start() and
Stop() methods to
smServices coclass using the same way and move their implementation from
ServicesMrg coclass.
In order to support collection iteration behavior (
For Each ... Next), we should implement
_NewEnum() method of
CsmServices. The method should return a new object enumerating the collection. This object should implement
IEnumVARIANT interface.
Let's create
CsmServicesEnum class. This class will copy the list of services from
CsmServices and will give the user an ability to iterate it. List of services should be copied because if the user will run two enumerations simultaneously, we'll need to handle them independently.
Add a new ATL Simple Object to the project. Name it
smServicesEnum. It doesn't need a custom interface, so remove
IsmServicesEnum interface declaration and change the declaration of
CsmServicesEnum class and populate it with
IEnumVARIANT interface methods:
class ATL_NO_VTABLE CsmServicesEnum : public CComObjectRoot , IEnumVARIANT { BEGIN_COM_MAP(CsmServicesEnum) COM_INTERFACE_ENTRY(IEnumVARIANT) END_COM_MAP() ... public: STDMETHOD(Next)(unsigned long celt, VARIANT *rgvar, unsigned long *pceltFetched); STDMETHOD(Skip)(unsigned long celt); STDMETHOD(Reset)(); STDMETHOD(Clone)(IEnumVARIANT **ppenum); };
And don't forget to add
typedef to be able to instantiate the object:
typedef CComObject<CsmServicesEnum> CServicesEnum;
Next() method will fetch
celt elements of the collection,
Skip() will skip a number of items,
Reset() method will reset enumeration state to initial, and
Clone() method should create a copy of the current state of enumeration.
Our enumerator must hold a copy of services and the current state of enumeration:
class ATL_NO_VTABLE CsmServicesEnum : public CComObjectRoot , IEnumVARIANT { ... private: _Services m_Services; int m_Idx; public: CsmServicesEnum() : m_Idx(0) { } void CloneServices(const _Services *pServices) { m_Services.assign(pServices->begin(), pServices->end()); m_Idx = 0; } ... };
Then
_NewEnum() method of
smServices will look like this:
STDMETHODIMP CsmServices::_NewEnum(IUnknown** ppUnk) { CServicesEnum *pEnum; CServicesEnum::CreateInstance(&pEnum); pEnum->AddRef(); pEnum->CloneServices(&m_Services); HRESULT hr = pEnum->QueryInterface(ppUnk); pEnum->Release(); return hr; }
Now we can implement methods of our enumerator.
STDMETHODIMP CsmServicesEnum::Next(unsigned long celt, VARIANT *rgvar, unsigned long *pceltFetched) { if (pceltFetched) *pceltFetched = 0; if (!rgvar) return E_INVALIDARG; for (int i = 0; i < celt; i++) VariantInit(&rgvar[i]); unsigned long fetched = 0; while (m_Idx < m_Services.size() && fetched < celt) { rgvar[fetched].vt = VT_DISPATCH; // Create and initialize service objects CService *pService; CService::CreateInstance(&pService); pService->AddRef(); pService->Init(m_Services[m_Idx]); HRESULT hr = pService->QueryInterface(&rgvar[fetched].pdispVal); pService->Release(); if (FAILED(hr)) break; m_Idx++; fetched++; } if (pceltFetched) *pceltFetched = fetched; return (celt == fetched) ? S_OK : S_FALSE; } STDMETHODIMP CsmServicesEnum::Skip(unsigned long celt) { unsigned long i = 0; while (m_Idx < m_Services.size() && i < celt) { m_Idx++; i++; } return (celt == i) ? S_OK : S_FALSE; } STDMETHODIMP CsmServicesEnum::Reset() { m_Idx = 0; return S_OK; } STDMETHODIMP CsmServicesEnum::Clone(IEnumVARIANT **ppenum) { CServicesEnum *pEnum; CServicesEnum::CreateInstance(&pEnum); pEnum->AddRef(); pEnum->CloneServices(&m_Services); HRESULT hr = pEnum->QueryInterface(ppenum); pEnum->Release(); return hr; }
In order to test our enumerator, implement
Name and
DisplayName properties of
smService coclass.
STDMETHODIMP CsmService::get_Name(BSTR* pVal) { *pVal = m_Service.Name.AllocSysString(); return S_OK; } STDMETHODIMP CsmService::get_DisplayName(BSTR* pVal) { *pVal = m_Service.DisplayName.AllocSysString(); return S_OK; }
Now we can write a simple test script:
Set Mgr = WScript.CreateObject("ServicesManager.ServicesMgr") WScript.Echo Mgr.Services.Count Dim Service For Each Service In Mgr.Services WScript.Echo Service.DisplayName Next
Only one thing left to complete with collections support. This is the
Item property. E_FAIL; } else if (Index.vt & (VT_BYREF | VT_VARIANT)) { // Reference by VARIANT (Dim i; For i = 0 to x Next; in VBScript) LONG i = Index.pvarVal->lVal; if (!GetService(i, &svc)) return E_FAIL; } else { // Reference by integer index LONG i = V_I4(&Index); if (!GetService(i, &svc)) return E_FAIL; } // Create service CService *pService; CService::CreateInstance(&pService); pService->AddRef(); pService->Init(svc); HRESULT hr = pService->QueryInterface(ppVal); pService->Release(); return hr; }
The code above uses overloaded function
GetService(). This function searches for service record using either integer index or service handle. Refer to smServices.cpp for details.
Now we can write the following code to work with our collection:
Set Mgr = WScript.CreateObject("ServicesManager.ServicesMgr") For i = 0 To Mgr.Services.Count - 1 WScript.Echo Mgr.Services.Item(i).Name Next
Congratulations, we added collections support to our COM object. You can use similar technique to add another collection.
What if the user specified invalid service handle or index value? What if there're some problems with service manager on our machine? The right solution is to add to our object an ability to report errors.
To report errors, our objects should implement
ISupportErrorInfo interface and use
SetErrorInfo function to supply information about an error to the caller.
First of all, we'll write an error-reporting function that will handle all deals with
SetErrorInfo function for us and will return a special result code.
template<class ErrorSource> HRESULT ReportError(ErrorSource* pes, ULONG ErrCode, UINT ResourceId = -1) { ICreateErrorInfo *pCrErrInfo; IErrorInfo *pErrInfo; if (SUCCEEDED(CreateErrorInfo(&pCrErrInfo))) { // Set all needed information for Err object in VB or active scripting CString Descr; if (-1 != ResourceId) Descr.LoadString(ResourceId); pCrErrInfo->SetDescription(Descr.AllocSysString()); pCrErrInfo->SetGUID(__uuidof(ErrorSource)); CString Source = typeid(ErrorSource).name(); pCrErrInfo->SetSource(Source.AllocSysString()); if (SUCCEEDED(pCrErrInfo->QueryInterface(IID_IErrorInfo, reinterpret_cast<void**>(&pErrInfo)))) { // Set error information for current thread SetErrorInfo(0, pErrInfo); pErrInfo->Release(); } pCrErrInfo->Release(); } // Report error via result code return MAKE_HRESULT(1, FACILITY_ITF, ErrCode); }
This is a template function. It will use type information to deduct interface GUID of the source and source type name (this could be obtained with
Err.Source). It can also load error description from resources.
In order to implement
ISupportErrorInfo interface, we'll use
support_error_info attribute. Actually this is all we need to do.
[ ... support_error_info("IServicesMgr"), ... ] class ATL_NO_VTABLE CServicesMgr; // ... [ ... support_error_info("IsmService"), ... ] class ATL_NO_VTABLE CsmService; // ... [ ... support_error_info("IsmServices"), ... ] class ATL_NO_VTABLE CsmServices;
Now, let's define error codes and how we'll return them.
For
ServicesMgr, erroneous situation is when
smServices couldn't be instantiated. Add the following to the code:
class ATL_NO_VTABLE CServicesMgr : public IServicesMgr { ... private: enum { errNoServices = 0x100 }; ... }; STDMETHODIMP CServicesMgr::Start(BSTR ServiceName) { if (m_pServices) { CString SvcName(ServiceName); return m_pServices->Start(SvcName); } else return ReportError(this, errNoServices); } STDMETHODIMP CServicesMgr::Stop(BSTR ServiceName) { if (m_pServices) { CString SvcName(ServiceName); return m_pServices->Stop(SvcName); } else return ReportError(this, errNoServices); }
For
smServices, erroneous situation is when services couldn't be enumerated, user specified invalid service handle or index, or service couldn't be stopped or started:
class ATL_NO_VTABLE CsmServices : public IDispatchImpl<IsmServices> { ... private: enum { errCannotEnumServices = 0x200, errCannotStart, errCannotStop, errInvalidIndex, errInvalidHandle, errCannotOpenServiceManager, errCannotEnumerateServices, errOutOfMemory, errCannotOpenService, errCannotQueryStatus, errOperationFailed }; ... };
Then
CsmServices::get_Item() will look like this: ReportError(this, errInvalidHandle); } else if (Index.vt & (VT_BYREF | VT_VARIANT)) { // Reference by VARIANT (Dim i; For i = 0 to x Next; in VBScript) LONG i = Index.pvarVal->lVal; if (!GetService(i, &svc)) return ReportError(this, errInvalidIndex); } else { // Reference by integer index LONG i = V_I4(&Index); if (!GetService(i, &svc)) return ReportError(this, errInvalidIndex); } // Create service CService *pService; CService::CreateInstance(&pService); pService->AddRef(); pService->Init(svc); HRESULT hr = pService->QueryInterface(ppVal); pService->Release(); return hr; }
We can test error reporting with this script:
Set Mgr = WScript.CreateObject("ServicesManager.ServicesMgr") Err.Clear On Error Resume Next WScript.Echo Mgr.Services.Item("qwe").Name ' "qwe" doesn't exist MsgBox Err.Source MsgBox Err.Number MsgBox Err.Description
The last thing we'll add to our services manager is an ability to notify the client with events. We'll add
ServiceOperationProgress() event to notify the client about lengthy starting or stopping of service.
First, we create a brand new event interface:
// Service operation progress codes [ export, helpstring("Operation progress codes") ] enum ServiceProgress { spContinuePending = SERVICE_CONTINUE_PENDING, spPausePending = SERVICE_PAUSE_PENDING, spPaused = SERVICE_PAUSED, spRunning = SERVICE_RUNNING, spStartPending = SERVICE_START_PENDING, spStopPending = SERVICE_STOP_PENDING, spStopped = SERVICE_STOPPED }; // IServicesMgrEvents [ dispinterface, nonextensible, hidden, uuid("A51F19F7-9AF5-4753-9B6F-52FC89D69B18"), helpstring("ServicesMgr events") ] __interface IServicesMgrEvents { [id(1), helpstring("Notifies about lenghtly operation on service")] HRESULT ServiceOperationProgress(ServiceProgress ProgressCode); };
Note that we also added an enumeration that will be visible for users in VB.NET, so they could use special value names instead of numbers.
Now specify
IServicesMgrEvents interface as event interface in
ServicesMrg coclass using
__event __interface keyword.
ServicesMrg coclass also must be marked with
event_source("com") attribute. To fire
ServiceOperationProgress() event, we should use
__raise keyword.
[ ... event_source("com"), ... ] class ATL_NO_VTABLE CServicesMgr : public IServicesMgr { ... __event __interface IServicesMgrEvents; void Fire_ServiceOperationProgress(ServiceProgress Code) { __raise ServiceOperationProgress(Code); } ... };
After doing all this stuff, we can easily notify a client with service status by calling
Fire_ServiceOperationProgress() method.
HRESULT CsmServices::WaitPendingService(SC_HANDLE hService, DWORD dwPendingState, DWORD dwAwaitingState) { // ... while (dwPendingState == ServiceStatus.dwCurrentState) { // ... if (m_pMgr) m_pMgr->Fire_ServiceOperationProgress (static_cast<ServiceProgress>(ServiceStatus.dwCurrentState)); // ... } // ... }
To test events handling, we'll use the following script:
Set Mgr = WScript.CreateObject("ServicesManager.ServicesMgr", "Mgr_") Mgr.Start("Alerter") Sub Mgr_ServiceOperationProgress(ProgressCode) WScript.Echo ProgressCode End Sub
Better run this script using cscript.exe, not wscript.exe, so the output will be done in
stdout.
You can find more about handling events in scripts by reading "Scripting Events" article in MSDN (Andrew Clinick, 2001). This was really an interesting thing for me.
There's also a great article "Building COM Components That Take Full Advantage of Visual Basic and Scripting" (by Ivo Salmre, 1998, MSDN). In this article, you'll find basic information about the features your COM server needs to be seamlessly used in C++, VB and VBScript languages.
If you want to debug similar objects, then just write a script in VBScript, set cscript.exe as debugging command and path to the script as command arguments. Then place breakpoints where needed and run the project. This is the easiest way to debug such COM objects.
Version 1.0 so far.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/atl/BuildingRichComponents.aspx | crawl-002 | en | refinedweb |
The C++ language invites for object oriented programming. The Win32 API is entirely based on the C programming language. Writing software for the Windows platform always requires the use of the Win32 API. Many developers who prefer C++ and object oriented programming would wish to have available appropriate C++ class libraries, to give their software a consistent object oriented look and feel.
The immense popularity of Java, and now .NET, is mostly based on the large number of classes that in fact make up the programming platform. Java and .NET application programmers just simply write their applications utilizing these classes whereas, by contrast, C++ programmers first write an infrastructure and then use it to write the applications. In this article, I will show you how to write a simple C++ class that wraps the Win32 thread related APIs.
The Java and .NET platform already have proposed some very good models and so we might as well make our model look similar. The advantage of it is that anyone familiar with Java or .NET can easily relate to it.
The threading models in Java as well as in .NET require that a thread object accepts a class method as its thread procedure. Here is an illustration:
// define class with a threadable method public class MyObject implements Runnable { // the thread procedure public void run() { // TODO: put the code here } } MyObject obj = new MyObject(); Thread thread = new Thread(obj); tread.start();
// define class with a threadable method public class MyObject { // the thread procedure public void Run() { // TODO: put the code here } } MyObject obj = new MyObject(); Thread thread = new Thread(new ThreadStart(obj.Run)); tread.Start();
The models are remarkably similar. Java requires the threadable object to implement the
Runnable interface, and .NET, in a way, requires the same thing because the
Thread classes on either platform expects a threadable procedure to be of this form:
public void run().
The Java specification is rather simple. Just one simple interface exposing one simple method. The .NET specification is more sophisticated. The 'delegate' concept lends greater flexibility to the writing of multi-threaded programs. Here is an illustration:
// create a threadable object public class MyObject { // first thread procedure public void ThreadProc1() { // TODO: } // second thread procedure public void ThreadProc2() { // TODO: } } MyObject obj = new MyObject(); // create first thread Thread thread1 = new Thread( new ThreadStart(obj.ThreadProc1) ); thread1.Start(); //create second thread Thread thread2 = new Thread( new ThreadStart(obj.ThreadProc2) ); thread2.Start();
The .NET threading model offers more advantages. Any class method that is compatible with the
ThreadStart delegate can be run as a thread procedure. And as the code snippet above illustrates, a single object instance can concurrently be accessed and manipulated by multiple threads. This is a very powerful feature.
We naturally prefer a C++ threading model to be as simple as that of Java and as flexible as that of .NET. Let us focus first on the Java-like simplicity. Here is a proposal:
// define the interface struct IRunnable { virtual void run() = 0; }; // define the thread class class Thread { public: Thread(IRunnable *ptr) { _threadObj = ptr; } void start() { // use the Win32 API here DWORD threadID; ::CreateThread(0, 0, threadProc, _threadObj, 0, &threadID); } protected: // Win32 compatible thread parameter and procedure IRunnable *_threadObj; static unsigned long __stdcall threadProc(void* ptr) { ((IRunnable*)ptr)->run(); return 0; } };
We can now write a multi-threaded program as elegantly as the Java folks can do.
// define class with a threadable method class MyObject : IRunnable { public: // the thread procedure virtual void run() { // TODO: put the code here } } MyObject *obj = new MyObject(); Thread thread = new Thread(obj); tread->start();
It is so simple because we have buried the Win32 API call into a wrapper class. The neat trick here is the static method defined as part of our
Thread class. We have thus emulated the simpler Java
Thread class.
The .NET
Thread and
ThreadStart approach is a little harder to emulate. But we can still realize it in a way by using pointers to class methods. Here is the example:
// define class with a threadable method class MyObject : IRunnable { // pointer to a class method typedef void (MyObject::* PROC)(); PROC fp; // first thread procedure void threadProc1() { //TODO: code for this thread procedure } // second thread procedure void threadProc2() { //TODO: code for this thread procedure } public: MyObject() { fp = threadProc1; } void setThreadProc(int n) { if(n == 1) fp = threadProc1; else if(n == 2) fp = threadProc2; } // the thread procedure virtual void run() { (this->*fp)(); } }; MyObject *obj = new MyObject(); obj->setThreadProc(1); Thread *thread1 = new Thread(obj); thread1->start(); obj->setThreadProc(2); Thread *thread2 = new Thread(obj); thread2->start();
The actual threadable method
run() now uses a pointer to a class method to run the appropriate thread procedure. That pointer must be correctly initialized before a new thread is started.
Wrapping the Win32 APIs into C++ classes is the preferred practice. The Java and .NET platforms provide us with well defined models. And by comparison, these models are so similar that defining C++ classes for a thread class, socket class, stream class, etc. should just be a matter of following the provided documentation.
You may download the
Thread class and try it out. I have designed it to be as simple as possible but you may enhance it by wrapping an additional number of thread related APIs, e.g.
SetThreadPriority, GetThreadPriority etc.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/threads/ThreadClass.aspx | crawl-002 | en | refinedweb |
Figure 1. Image of the
HelloCli sample program.
Welcome to Step 7 of our DCOM tutorial. This is the last step!,.
We're currently on Step 7 of this tutorial (the last!), where we put together a little MFC program, called
HelloCli, to test our server. Let's plunge in:
HelloCliClient to Test the Server
As you can see from the screenshot above, I built a dialog-based application using MFC AppWizard (EXE). I added a status window to report status, so I can see just where errors occurred, and I also successfully handled Connection Points. To add text to the status window, which is just an Edit control with the read-only style set, I added a
CString member variable,
m_strStatus to the dialog class with ClassWizard, and then anytime I needed to add a message line to the edit control, it was made easy with this code:
m_strStatus += "This is a status line for the edit control\r\n"; UpdateData(FALSE);Listing 1. Adding a status line to the edit control.
We add on text to the contents of
m_strStatus with the
+= operator of
CString, and then we call
UpdateData(FALSE) to move the contents of the member variable from the variable to the edit control.
To start my sample project, I brought up the New dialog box, clicked 'MFC AppWizard (EXE)' in the list, and then typed '
HelloCli' for the name of my project. After completing AppWizard, I opened the
STDAFX.H file and added the line shown in bold in Listing 2:
__8495B5E0_67FF_11D4_A358_00104B732442__INCLUDED_)Listing 2. Adding the
#define _WIN32_WINNTline to
STDAFX.Hso that DCOM works.
The next thing to do is to add something to let the client know about the interfaces and everything else that the server supports. There are two ways we can do this:
#importto bring in the type library of the server. This file,
HelloServ.tlb, is produced by
MIDLwhen it compiles the
HelloServ.idlfile.
#include "HelloServ.h"to just include the C++ and C declarations of interfaces. This is nice, but then you have to also define, in your code, all the GUIDs that the server responds to. These are
CLSID_HelloWorld,
IID_IHelloWorld,
DIID_DHelloWorldEvents, and
LIBID_HELLOSERVLib. If you use
#import, this is done for you.
I like the idea of using
#import, because of not only what was explained above, but also because you get to use smart pointers with
#import, too. Be careful, though; we have a custom (that is,
IUnknown-derived) interface that our server uses. We can use
#import just fine in this example since the
IHelloWorld::SayHello() method takes no parameters. If the
IHelloWorld::SayHello() method took parameters, and they weren't of OLE-Automation-compatible types, then we would have to skip using
#import, because it will only recognize those types. However, if you mark your custom interface with the
[oleautomation] attribute and use OLE Automation-compatible types in your methods, this will work.
With custom interfaces, it's generally a better idea to use the second method above. However, like I said earlier, we'll go ahead and use
#import this time because our method doesn't take any parameters. So this means that we need to copy the
HelloServ.tlb file to our
HelloCli project folder from the
HelloServ project folder, and then add an
#import line somewhere. How about in good ol'
STDAFX.H again? We'll also add
#include lines for
atlbase.h and
afxctl.h, since these files give us support for things we'll use later on. Doing all this in
STDAFX.H will help us when we build our program to keep the build time down, too:
< atlbase.h > // Support for CComPtr< > #include < afxctl.h > // MFC support for Connection Points #import "HelloServ.tlb" no_namespace named_guids raw_interfaces_only //{{AFX_INSERT_LOCATION}} // Microsoft Visual C++ will insert additional declarations immediately before // the previous line. #endif // !defined(AFX_STDAFX_H__8495B5E0_67FF_11D4_A358_00104B732442__INCLUDED_)Listing 3. Adding other needed code to
STDAFX.H.
NOTE: This only works if the event source interface is a
dispinterface, like our
DHelloWorldEvents interface.
The next thing to do is to use ClassWizard to give us a class which will implement our connection point for us (!). Yes, we've finally arrived!! To do this, open up ClassWizard, click the Message Maps tab, click Add Class, and then click New, as shown below in Figure 2:
Figure 2. Adding a new class with ClassWizard.
The next thing to do is to specify the new class we want to add to our project. Since this class is (kind of) implementing an interface, that is, the
DHelloWorldEvents dispinterface, we'll call this class the
CHelloWorldEvents class. Next, we specify that we want to derive this class from the MFC
CCmdTarget class, which helps us with all the COM implementation. People have often said that "MFC doesn't really have any COM support besides that needed for OLE and UI stuff. And then, it only does dispinterfaces." None of the preceeding sentence is entirely correct. MFC is great at helping us out with UI stuff, but I have seen example code (that works) where MFC is used to implement any interface you like, even in COM servers with non-dispinterface and non-
IDispatch interfaces! The
CCmdTarget class is the key.
Anyway, enough of my blathering. The last thing to do before we can click OK in the New Class dialog box is to click the Automation option button. This turns on the support in
CCmdTarget that we need to use; don't worry, choosing this won't even add so much as an .ODL file to your project, and you needn't have checked 'Automation' in AppWizard to use this. When everything in the New Class dialog box is as it should be, it should look like Figure 3, below:
Figure 3. Specifying the settings for our new
CHelloWorldEvents class in ClassWizard.
ClassWizard will add the
CHelloWorldEvents class to your project, but it will whine because you didn't specify Automation support in AppWizard. Since you didn't, your project doesn't have a
HelloCli.odl file. Too bad for ClassWizard; it shows you the protest message below, but you can click OK and ignore it:
Figure 4. ClassWizard should just grow up, and quit its whining; but, oh well... Ignore this warning and click OK.
CHelloWorldEvents
ClassWizard, helpful as it is, did make one booboo that we'll want to erase. Open the
HelloWorldEvents.cpp file and remove the line shown below in bold:
BEGIN_MESSAGE_MAP(CHelloWorldEvents, CCmdTarget) //{{AFX_MSG_MAP(CHelloWorldEvents) // NOTE - the ClassWizard will add and remove mapping macros here. //}}AFX_MSG_MAP END_MESSAGE_MAP() BEGIN_DISPATCH_MAP(CHelloWorldEvents, CCmdTarget) //{{AFX_DISPATCH_MAP(CHelloWorldEvents) // NOTE - the ClassWizard will add and remove mapping macros here. //}}AFX_DISPATCH_MAP END_DISPATCH_MAP() // Note: we add support for IID_IHelloWorldEvents to support typesafe binding // from VBA. This IID must match the GUID that is attached to the // dispinterface in the .ODL file. // {B0652FB5-6E0F-11D4-A35B-00104B732442} static const IID IID_IHelloWorldEvents = { 0xb0652fb5, 0x6e0f, 0x11d4, { 0xa3, 0x5b, 0x0, 0x10, 0x4b, 0x73, 0x24, 0x42 } };Listing 4. Delete the lines of code that are shown in bold.
Next, find the code shown in bold in Listing 5, below. We're going to replace it with the DIID (DispInterfaceID) of the
DHelloWorldEvents interface:
BEGIN_INTERFACE_MAP(CHelloWorldEvents, CCmdTarget) INTERFACE_PART(CHelloWorldEvents, IID_IHelloWorldEvents, Dispatch) END_INTERFACE_MAP()Listing 5. The code to look for, shown in bold.
Replace
IID_IHelloWorldEvents with
DIID_DHelloWorldEvents, as shown in Listing 6:
BEGIN_INTERFACE_MAP(CHelloWorldEvents, CCmdTarget) INTERFACE_PART(CHelloWorldEvents, DIID_DHelloWorldEvents, Dispatch) END_INTERFACE_MAP()Listing 6. Putting
DIID_DHelloWorldEventsin place of the Class-Wizard-added
IID_IHelloWorldEventsidentifier.
Now, we're going to use ClassWizard to add the handler function which gets called when the server fires the
OnSayHello event. Bring up ClassWizard, and select the Automation tab. Make sure that
CHelloWorldEvents is selected in the Class Name box, as shown in Figure 5 below:
Figure 5. Selecting the
CHelloWorldEvents class on the Automation tab of ClassWizard.
Click Add Method. The Add Method dialog box appears, as shown in Figure 6 below. Here we're going to specify the "external name" for our event handler method, as well as other information. The "external name" should ALWAYS match the name of the event method that we used when we added it to the server! The Internal Name box should hold the name of the member function that will get called when the event comes in; this can be whatever you please. We're going to use what ClassWizard suggests; a name that matches the
OnSayHello "external name." ALWAYS specify
void for the return type of an event handler, because the server always uses
HRESULT as its return types. For clients, anytime we're handling connection point events, the return type should always be
void. Next, specify a parameter,
LPCTSTR lpszHost as the event handler's single parameter. You notice that
BSTR isn't in the list of event handler types (alright!). This is because you use
LPCTSTR instead; ClassWizard makes sure that MFC will convert between
BSTR and
LPCTSTR for you (!).
Figure 6. Setting up a handler for the
OnSayHello event.
Click OK. ClassWizard adds code to
CHelloWorldEvents to make the magic happen (with
CCmdTarget's help), and then shows a new entry in its External Names listbox to show that the event handler has been added:
Figure 7. ClassWizard showing the addition of our new event handler.
Save your changes! Make sure and click OK in ClassWizard, otherwise it will roll-back all of its changes that it made. If you click Cancel, you'll have to add the event handler again. Just a word of warning. Now it's time to implement our event handler. We'll grab the name of the server computer from
lpszHost, and then we'll show the user a message box saying that the server said Hello. We might also want to add text to the Status window of our dialog saying that the event handler function got called. Here's how I did that:
#include "HelloCliDlg.h" void CHelloWorldEvents::OnSayHello(LPCTSTR lpszHost) { CHelloCliDlg* pDlg = (CHelloCliDlg*)AfxGetMainWnd(); if (pDlg != NULL) { pDlg->m_strStatus += "The OnSayHello() connection point method has been called\r\n"; pDlg->UpdateData(FALSE); } // Show a message box saying 'Hello, world, from host ' + lpszHost: CString strMessage = "Hello, world, from "; strMessage += lpszHost; AfxMessageBox(strMessage, MB_ICONINFORMATION); }Listing 7. Implementing the
CHelloWorldEvents::OnSayHello()event handler function.
I also had to add a
friend statement to the declaration of
CHelloWorldEvents, because
CWnd::UpdateData() is a
protected function:
class CHelloWorldEvents : public CCmdTarget { friend class CHelloCliDlg; ... };
The next thing to do is to add some data members to the
CHelloCliDlg class in order to hold the pointers and objects that we'll be using in working with the server. There are quite a few of them, and you'll have to make sure to add the line
#include "HelloWorldEvents.h"to the top of the
HelloCliDlg.hfile:
// Implementation protected: HICON m_hIcon; DWORD m_dwCookie; // Cookie to keep track of connection point BOOL m_bSinkAdvised; // Were we able to advise the server? IHelloWorldPtr* m_pHelloWorld; // Pointer to the IHelloWorld interface pointer IUnknown* m_pHelloWorldEventsUnk; // Pointer to the IUnknown of the // event "sink" CHelloWorldEvents m_events; // Our event-handler objectListing 8. Data members we need to add to the
CHelloCliDlgclass.
Next, we need to add code to the dialog's constructor:
CHelloCliDlg::CHelloCliDlg(CWnd* pParent /*=NULL*/) : CDialog(CHelloCliDlg::IDD, pParent) { ... m_dwCookie = 0; m_pHelloWorldEventsUnk = m_events.GetIDispatch(FALSE); // So we don't have to call Release() m_bSinkAdvised = FALSE; }Listing 9. Code to add to the
CHelloCliDlg::CHelloCliDlg()constructor function.
To advise the server, I added a
AdviseEventSink(),
protected member function to
CHelloCliDlg using ClassView. This function is implemented the basically the same for anytime you want to advise a server about your MFC connection point:
BOOL CHelloCliDlg::AdviseEventSink() { if (m_bSinkAdvised) return TRUE; IUnknown* pUnk = NULL; CComPtr< IUnknown > spUnk = (*m_pHelloWorld); pUnk = spUnk.p; // Advise the connection point BOOL bResult = AfxConnectionAdvise(pUnk, DIID_DHelloWorldEvents, m_pHelloWorldEventsUnk, TRUE, &m_dwCookie); return bResult; }Listing 10. Implementation of advising the server. This demonstrates how to call
AfxConnectionAdvise(). You must have properly registered the server like we did in Step 6, or else this won't work.
When we're ready to go back to being aloof to the server and its events that it fires, we can call
AfxConnectionUnadvise():
BOOL CHelloCliDlg::UnadviseEventSink() { if (!m_bSinkAdvised) return TRUE; // Get the IHelloWorld IUnknown pointer using a smart pointer. // The smart pointer calls QueryInterface() for us. IUnknown* pUnk = NULL; CComPtr< IUnknown > spUnk = (*m_pHelloWorld); pUnk = spUnk.p; if (spUnk.p) { // Unadvise the connection with the event source return AfxConnectionUnadvise(pUnk, DIID_DHelloWorldEvents, m_pHelloWorldEventsUnk, TRUE, m_dwCookie); } // If we made it here, QueryInterface() didn't work and we can't // unadvise the server return FALSE; }Listing 11. Unadvising the event source and sink with
AfxConnectionUnadvise().
To actually make the method call, you can see how I implemented all of this and where my
AdviseEventSink() and
UnadviseEventSink() play in in the sample program. Remember, though, to add this code to
OnInitDialog():
BOOL CHelloCliDlg::OnInitDialog() { CDialog::OnInitDialog(); ... CoInitialize(NULL); CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_NONE, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL); return TRUE; }Listing 12. Adding intialization code to
OnInitDialog().
The
OnStartServer() function handles a button the user clicks when they want to start the server (how circular). As you can see, I had a server computer on the network named
\\Viz-06 which I connected to with DCOM:
void CHelloCliDlg::OnStartServer() { COSERVERINFO serverInfo; ZeroMemory(&serverInfo, sizeof(COSERVERINFO)); COAUTHINFO athn; ZeroMemory(&athn, sizeof(COAUTHINFO)); // Set up the NULL security information athn.dwAuthnLevel = RPC_C_AUTHN_LEVEL_NONE; athn.dwAuthnSvc = RPC_C_AUTHN_WINNT; athn.dwAuthzSvc = RPC_C_AUTHZ_NONE; athn.dwCapabilities = EOAC_NONE; athn.dwImpersonationLevel = RPC_C_IMP_LEVEL_IMPERSONATE; athn.pAuthIdentityData = NULL; athn.pwszServerPrincName = NULL; serverInfo.pwszName = L"\\\\Viz-06"; serverInfo.pAuthInfo = &athn; serverInfo.dwReserved1 = 0; serverInfo.dwReserved2 = 0; MULTI_QI qi = {&IID_IHelloWorld, NULL, S_OK}; ... try { m_pHelloWorld = new IHelloWorldPtr; } catch(...) { AfxMessageBox(AFX_IDP_FAILED_MEMORY_ALLOC, MB_ICONSTOP); ... return; } HRESULT hResult = CoCreateInstanceEx(CLSID_HelloWorld, NULL, CLSCTX_LOCAL_SERVER | CLSCTX_REMOTE_SERVER, &serverInfo, 1, &qi); if (FAILED(hResult)) { ... return; } m_pHelloWorld->Attach((IHelloWorld*)qi.pItf); // Now we have a live pointer to the IHelloWorld interface // on the remote host ... return; }Listing 13. How to get an interface pointer to the
IHelloWorldinterface on the remote server.
Calling the method is a simple matter of executing this statement:
HRESULT hResult = (*m_pHelloWorld)->SayHello();Listing 14. Calling the
IHelloWorld::SayHello()method.
To release the server when we're done with it, simply
delete the
m_pHelloWorld pointer:
delete m_pHelloWorld; m_pHelloWorld = NULL;Listing 15.
There! Now we have a living, breathing, DCOM client/server software system. It doesn't do much, but it can do a lot... Anyway, I hope this tutorial has been enlightening, and DCOM demystified. I always encourage you to e-mail me just whenever, and ask me questions. No question is a stupid question, and I will be happy to help you. Click Back below if you want to go back to Step 6, or click Questions and Answers to see if someone else asked a question you need answered.
Until next time... it's been fun.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/COM/hellotutorial7.aspx | crawl-002 | en | refinedweb |
The new .NET technologies, Remoting and Web Services has made life much easier than the days of trying to get DCOM to work. Although with anything that has been made easier there are some details that have been made too easy. In the case of Remoting or calling a web service, the Microsoft .NET Framework includes an automatic feature that converts all returned
DataTables with
DateTime values to the caller's time zone. So if you're in Seattle and need to find out a certain
DateTime value in a database table row (let's say
sale_date) on a server that runs in New York City you can make a web service call to find out. What happens is the
sale_date value may have a value of 8/22/2004 9:05 am on the server in New York, but your web service call will result in a value of 8/22/2004 6:05 am. Which is clearly wrong. This article will tell you how to fix this problem.
The problem seems to only occur whenever you send a
DataTable as a return value. This is because .NET Framework will automatically serialize the
DataTable into xml using it's
System.Xml.Serialization.XmlSerializer class. The
XmlSerializer will convert the DateTime values upon deserialization on the client. The idea here is to take control of the xml serialization process and manipulate the xml using regular expressions to give us the correct result.
1. In the web service we first need to convert the
DataTable to an xml string and send back the string. We use the
System.IO.StringWriter class to write out the xml string:
using System.Data; using System.IO; using System.Web.Services; ... namespace NYDataServices { ... // Web service is running in New York City public class MyWebService : System.Web.Services.WebService { ... [WebMethod] public string GetData() { DataTable dataTable = null; // Get data from database as a DataTable ... // Now convert the DataTable to an xml string and return it to client return convertDataTableToString( dataTable ); } private string convertDataTableToString( DataTable dataTable ) { DataSet dataSet = new DataSet(); dataSet.Tables.Add( dataTable ); StringWriter writer = new StringWriter(); dataSet.WriteXml( writer, XmlWriteMode.WriteSchema ); return writer.ToString(); }
2. On the client side we make the call to get the data and receive the data as an xml string.
using System.Data; using System.Text.RegularExpressions; ... namespace SeattleClient { ... // Client program running in Seattle public class MyClient : System.Windows.Form { ... public void GetDataFromServer() { NYDataServices.MyWebService ws = new NYDataServices.MyWebService(); string xmlString = ws.GetData(); DataTable dataTable = convertStringToDataTable( xmlString ); // Do something with dataTable ... }
3. Converting the xml string back to a
DataTable requires the use of regular expressions to search, adjust time values and replace. The
DateTime values take on the form of 2004-08-22T00:00:00.0000000-05:00. The last 5 characters in the string indicate the UTC (Universal Time Coordinate) time. During xml deserialization back into a
DataTable, the
XmlSerializer class reads this value and creates an offset value based on the client's UTC time. It then adds this offset into all
DateTime values upon deserialization. The kicker here is that if the
DateTime value happens to be on DST (Daylight Savings Time) and the client is not on DST it will adjust for this too. We use some of the magic of the
System.Text.RegularExpression namespace such as the
Regex.Replace() function,
Match class and
MatchEvaluator delegate.
private DataTable convertStringToDataTable( string xmlString ) { // Search for datetime values of the format // --> 2004-08-22T00:00:00.0000000-05:00 string rp = @"(?<DATE>\d{4}-\d{2}-\d{2})(?<TIME>T\d{2}:\d{2}:\d{2}."+ "\d{7}-)(?<HOUR>\d{2})(?<LAST>:\d{2})"; // Replace UTC offset value string fixedString = Regex.Replace( xmlString, rp, new MatchEvaluator( getHourOffset ) ); DataSet dataSet = new DataSet(); StringReader stringReader = new StringReader( fixedString ); dataSet.ReadXml( stringReader ); return dataSet.Tables[ 0 ]; } private static string getHourOffset( Match m ) { // Need to also account for Daylights Savings // Time when calculating UTC offset value DateTime dtLocal = DateTime.Parse( m.Result( "${date}" ) ); DateTime dtUTC = dtLocal.ToUniversalTime(); int hourLocalOffset = dtUTC.Hour - dtLocal.Hour; int hourServer = int.Parse( m.Result( "${hour}" ) ); string newHour = ( hourServer + ( hourLocalOffset - hourServer ) ).ToString( "0#" ); string retString = m.Result( "${date}" + "${time}" + newHour + "${last}" ); return retString; }
I know this problem happens when sending back DataTables. I'm not sure if the same applies to custom classes, although I suspect it does.
Here are links that I found very useful --
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/datetimeissuexmlser.aspx | crawl-002 | en | refinedweb |
Can you imagine the programming process without the possibility of debugging program code at run-time? It is obvious that such programming may exist, but programming without debugging possibility is too complicated when we are working on big and complex projects. In addition to standard approaches of debugging program code such as an output window on Visual Studio IDE or the macros of asserts, I propose a not new method for debugging your code: to output your debugging data to an application which is separated from Visual Studio IDE and the project you are currently working on.
Launch the application of trace messages catcher (next: trace catcher) before you start working with this module. Tracing data, sent to the trace catcher application, will be saved if the catcher application was inactive or was terminated during the trace operations. All the data which have been saved during the critical situations, as described above, will be kept and popped-out to trace catcher application when it starts again. There�s a possibility to start the trace catcher application with the creation of the trace module, and terminate it when the trace module is being destructed.
As I mentioned, this trace module allows you to put your trace data to several output windows in trace catcher application (next: trace channels). Trace channel is a simple window which helps to visualize your tracing data by the trace catcher application. In order to add your trace data to a certain trace channel, you must describe it as follows:
_Log.setSectionName( "channel_#1" ); _Log.dump( "%s", "My trace data" );
or
_Log.dumpToSection( "channel_#1", "%s", "My trace data" );
If you send your trace data to the new trace channel which is not created in the trace catcher application, new trace channel will be created automatically.
In addition to sending your trace data to the catcher, there�s a possibility to manipulate the trace catcher application with commands help. Commands are divided into two parts: global commands, and commands which depend to the trace channel.
closeRoot� close the trace catcher application;
onTop.ON� enable always on top state for the catcher application;
onTop.OFF� disable always on top state for the catcher application.
Example:
_Log.sendCmd( "closeRoot" ); _Log.sendCmd( "onTop.ON");
clear- deleting the entry of given trace channel;
close- closing the given trace channel;
save<path to output stream> - saving the entry of the given trace channel to output stream described by you.
Example:
_Log.sendCmd( "Channel_1", "clear" ); _Log.sendCmd( "Channel_2", "save c:\\channel2.log" ); _Log.sendCmd( "close" ); /** close the current output window (section) */
In order to fully use this trace module, you have to do only two steps:
An example:
#include "path_by_you\LogDispathc.dir\LogDispath.h"</pre>
To call all this messages described below, you must use variable names as follows: [_Log]. The trace object is created once during the project life-time (using singleton pattern).
Let's say, calling the dump message will be described like that:
_Log.dump( "System time is %d %d %5.5f ", 15, 10, 08.555121 );
dump� formatted trace data (
sprintfformat) are sent to catcher application. The tracing data will be placed to the section named as the result of calling method "
setSectionName" before that, or if the method "
setSectionName" wasn�t called, tracing data will be placed to the default section named as "
output@default".
dumpToSection� the principle is the same as dump message. The difference is that this message will place your data to the channel by name which you described in this message.
setSectionName� set the working (active) channel name.
getCmdPrefix� sets the prefix of the command.
setCmdPrefix� returns the prefix of the command.
sendCmd� sends your message to the receiver application.
setCloseOnExit� enables/disables the possibility to send the message to the catcher application on exit.
setCloseCMDOnExit� sets the command of the catcher application which will be send when the trace module will be destroyed.
setClassNameOfCatcher� set the class name of the catcher application. That class name will be used in search of the catcher application where the tracing data will be sent.
runCatcher� executing the catcher application from the described path.
This trace module and the strategy we are using on it is very flexible, and is an effective trace tool for debugging big projects. In my opinion, this tool will be a very effective strategy to trace release versions of the projects where all debugging data are removed. Very easy and comfortable to use it :].
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/debug/LogDispatch.aspx | crawl-002 | en | refinedweb |
Recently there was a need to connect to a SSH server from my C# code. I needed to perform a simple task: login to a remote Linux device, execute a command and read the response. I knew there were a number of free Java SSH libraries out there and I hoped to find a free .NET one that will allow me to do just that, but all I could find were commercial components. After experimenting with an open source Java SSH library called JSch I decided to try and port it to C# just for the sake of exercise. The result is the attached sharpSsh library and this article which explains how to use it.
SSH (Secure Shell) is a protocol to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over unsecured channels. The JSch library is a pure Java implementation of the SSH2 protocol suite; It contains many features such as port forwarding, X11 forwarding, secure file transfer and supports numerous cipher and MAC algorithms. JSch is licensed under BSD style license.
My C# version is not a full port of JSch. I ported only the minimal required features in order to complete my simple task. The following list summarizes the supported features of the library:
Please check my homepage for the latest version and feature list of SharpSSH.
Let me begin with a small disclaimer. The code isn't fully tested, and I cannot guarantee any level of performance, security or quality. The purpose of this library and article is to educate myself (and maybe you) about the SSH protocol and the differences between C# and Java.
In order to provide the simplest API for SSH communication, I created two wrapper classes under the
Tamir.SharpSsh namespace that encapsulates JSch's internal structures:
SshStream- A stream based class for reading and writing over the SSH channel.
Scp- A class for handling file transfers over the SSH channel.
The
SshStream class makes reading and writing of data over an SSH channel as easy as any I/O read/write task. Its constructor gets three parameters: The remote hostname or IP address, a username and a password. It connects to the remote server as soon as it is constructed.
//Create a new SSH stream SshStream ssh = new SshStream("remoteHost", "username", "password"); //..The SshStream has successfully established the connection.
Now, we can set some properties:
//Set the end of response matcher character ssh.Prompt = "#"; //Remove terminal emulation characters ssh.RemoveTerminalEmulationCharacters = true;
The
Prompt property is a string that matches the end of a response. Setting this property is useful when using the
ReadResponse() method which keeps reading and buffering data from the SSH channel until the
Prompt string is matched in the response, only then will it return the result string. For example, a Linux shell prompt usually ends with '#' or '$', so after executing a command it will be useful to match these characters to detect the end of the command response (this property actually gets any regular expression pattern and matches it with the response, so it's possible to match more complex patterns such as "\[[^@]*@[^]]*]#\s" which matches the bash shell prompt
[user@host dir]# of a Linux host). The default value of the
Prompt property is
"\n", which simply tells the
ReadResponse() method to return one line of response.
The response string will typically contain escape sequence characters which are terminal emulation signals that instruct the connected SSH client how to display the response. However, if we are only interested in the 'clean' response content we can omit these characters by setting the
RemoveTerminalEmulationCharacters property to
true.
Now, reading and writing to/from the SSH stream will be done as follows:
//Writing to the SSH channel ssh.Write( command ); //Reading from the SSH channel string response = ssh.ReadResponse();
Of course, it's still possible to use the
SshStream's standard
Read/
Write I/O methods available in the
System.IO.Stream API.
Transferring files to and from an SSH server is pretty straightforward with the
Scp class. The following snippet demonstrates how it's done:
//Create a new SCP instance Scp scp = new Scp(); //Copy a file from local machine to remote SSH server scp.To("C:\fileName", "remoteHost", "/pub/fileName", "username", "password"); //Copy a file from remote SSH server to local machine scp.From("remoteHost", "/pub/fileName", "username", "password", "C:\fileName");
The
Scp class also has some events for tracking the progress of file transfer:
Scp.OnConnecting- Triggered on SSH connection initialization.
Scp.OnStart- Triggered on file transfer start.
Scp.OnEnd- Triggered on file transfer end.
Scp.OnProgress- Triggered on file transfer progress update (The
ProgressUpdateIntervalproperty can be set to modify the progress update interval time in milliseconds).
The demo project is a simple console application demonstrating the use of
SshStream and
Scp classes. It asks the user for the hostname, username and password for a remote SSH server and shows examples of a simple SSH session, and file transfers to/from a remote SSH machine.
Here is a screen shot of an SSH connection to a Linux shell:
And here is a file transfer from a Linux machine to my PC using SCP:
In the demo project zip file you will also find an examples directory containing some classes showing the use of the original JSch API. These examples were translated directly from the Java examples posted with the original JSch library and show the use of advanced options such as public key authentication, known hosts files, key generation, SFTP and others.
Scpclass.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/IP/sharpssh.aspx | crawl-002 | en | refinedweb |
Re: Decompiler.NET reverse engineers your CLS compliant code
From: Shawn B. (leabre_at_html.com)
Date: 09/24/04
- ]
Date: Fri, 24 Sep 2004 11:28:43 -0700
> > see it as tying to a specific machine. What happens if you go out of
> > business in 2 years?
>
> That won't happen. We're also only charging $500, not 5 million. There is
as
> much of a risk that you may get hit by a bus tomorrow and won't need the
> software anymore.
How do you know that won't happen? Because you don't want it to? There
have been many many 3rd parties and small software vendors and large ones
that have come and gone. I'm not saying that will happen to you, but I'm
saying the possibility exists. For as long as the use of the software
depends on the existence of the vendor, that software has a very high risk
of becoming useless in the unfortunate case that the vendor dissappears.
Again, I'm not saying it will happen to you, but there aren't many software
vendors that don't eventually go the way of the do-do bird without becoming
the largest entity in the niche you are targeting or being purchased by a
larger company. Who's to say that larger company will continue to support
the product?
If the licensing didn't require such a strict lockdown, it wouldn't be a
problem. But because the ability to use the software depends on a
particular companies existance, it is a very high risk for me to purcahse
*any* product that follows suit (not just yours). Alos, with new laws being
passed every day, you could become outlawed and thus, out of business, or
arrested, or whatever, for providing a tool that can potentially be used
maliciously for whatever reason the media/software industry decides is
harmful to them... they have a powerful lobby, how powerful is yours?
The point is it is a risk to spend any money on software that depends on the
vendors existence to continue usage. A risk that it too great for my pocket
book. Nothing personal.
> > Besides, I didn't say I'd write my own decompiler, I just said if it was
> > that important I could, I'm more than capable, its just not a priority
and
> > since I don't decompile non System.* assemblies, the price is not
> > justifyable.
> >
>
> We spent over two years writing ours. I imaging that two years of your
time
> is worth more to you than the $500 we charge for a license which our
> costomers feel is a tremendous value to them.
I don't doubt you spent 2 years on this. My time is worth more, but then
again, since I only occasionaly review the System.* namespaces, $500 isn't
worth it to me. If I was going to look at some proprietary code other than
System.*, perhaps it would be. But if I was going to do that, I would just
create my own version and learn how to imitate a feature and learn from it,
rather than "cheat" and take the easy way out. Of course, since I'm
dependant on the System.* namespaces, I have no problem examining something
when I'm not sure about the documentation. Would I pay $500 for that? No.
It isn't *that* important. With Reflector, it is a convenience that I
exploit. Nothing more.
There's always Mono, but I'm much less inclined to actually look at GPL code
(I generally avoid it for reasons I won't discuss in this thread). Besides
that, Mono may not be programmed exactly the way that the System.* classes
are.
> > I'm not fine with being actively dependant on a vendor in
> > order to keep using the software despite all of my requirements.
>
> You are not if you don't replace your motherboard or machine itself. Tivo
> doesn't even let you move your lifetime subscriptions to newer hardware
that
> they themselves sell.
I don't use Tivo so I wouldn't know. But we're not talking about hardware
here, we're talking about software.
> > I just happen to dissagree
> > with that kind of licensing. It does nothing to keep prices low
>
> Pirated copied cause vendots to raise their prices for their software
since
> their target market is cannibalized. Locking down licensed copies to
> hardware reduces the amount of software piracy and therefore does keep
> prices lower that without it.Although you may feel that $500 is high, we
> intenitionally priced our product much lower than the cost to develop it
to
> make it accessible to small developers like yourself who might benefit
from
> it. If Reflector also charged $500 and there weren't any free choices
> available with relatively good decompilation capability, you would
probably
> feel differently towards our product and be glad that a product like it
was
> available to you instead of having to invest the two years yourself trying
> to write your own decompiler that works as well.
Name one commercial product that every lowered its price because they got
piracy under control. No, what actually happens is they complain more and
then justify the higher prices because they have to spend more money on R&D
to contantly come up with new anti-piracy measures. Now that they have the
average user inconvenienced and have thwarted "casual" sharing, prices
aren't any lower than they were previously. But the true pirate still has
no problems getting around it.
Again, if Reflector wasn't free, I agree, I wouldn't be using it. I
wouldn't be purchasing any tool to do the job, anyway. I can read IL, I
would be inconvenienced, but I can do it (I program in it sometimes, probly
because I program Win32 in assembly also) but, if I wasn't "restricted" to
my initial machine which changes often and "dependant" on a vendor, I might
consider it.
But since we're going in circles here, there's no more point in elaborating
why I don't purchase your product or any other that causes me to be
dependant on them and screwed, blued, and tattoo'd if they go out of
business. You obviously feel justified and confident in your product and
licensing terms, and I obviously feel like it creates a severe financial
risk for me to use the product and nothing is going to change that. If the
entire software industry follows suit, I'll use less software or will
eventually move to Free software (free as in beer, free as in speech)
because I just happen to refuse dependance on any particular non-Microsoft
software vendor. It has nothing to do with you, it has everything to do
with my freedom and getting value out of my hard-earned money.
Thanks,
Shawn
- ] | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2004-09/4817.html | crawl-002 | en | refinedweb |
David Vriend noted a problem in Example 3-3
of Java I/O,
StreamCopier, as well as several similar examples from that
book. The
copy() method attempts to synchronize on
the input and output streams to "not allow other threads to read from the
input or write to the output while copying is
taking place". Here's the relevant method:
However, this only helps if the other threads using those streams are also kind enough to synchronize them. In the general case, that seems unlikely. The question is this: is there any way to guarantee thread safety in a method like this when:However, this only helps if the other threads using those streams are also kind enough to synchronize them. In the general case, that seems unlikely. The question is this: is there any way to guarantee thread safety in a method like this when:); } } } }
InputStreamand
OutputStreamin this example) so you can't add synchronization directly to them.
Note that although the specific instance of this question deals with streams, the actual question is really more about threading. Since anyone answering this question probably already has a copy of Java I/O, I'll send out a free copy of XML: Extensible Markup Language for the best answer.
There were several thoughful answers to this question, but ultimately the answer is no.
This does seem
to be a design
flaw in Java. There is simply no way to guarantee exclusive access
to an object your own code did not
create. Apparently other languages do exist that do solve this problem.
In Java, however, the only thing you can do is fully document the intended use of the class
and hope programmers read and understand the documentation. What's especially pernicious
is that the behavior in a multithreaded environment of
most of the Sun supplied classes in the
java packages is undocumented.
This has gotten a little better in JDK 1.2, but not enough.
Several people argued that the question was silly; that it was ridiculous to attempt to guarantee behavior in the face of unknown objects produced by unknown programmers. The argument essentially went that classes only exist as part of a system and that you can only guarantee thread safety by considering the entire system. The problem with this attitude is that it runs completely counter to the alleged benefits of data encapsulation and code reuse that object oriented programming is supposed to provide. If you can only write safe code by writing every line of code in the system, then we might as well go back to C and Pascal and forget all the lessons learned in the last twenty years.
More pointedly, any solution that requires complete knowledge of all code in the entire program is extremely difficult to maintain in a multi-programmer environment. It may be possible with small teams. It's completely infeasible for large teams. And it's absolutely impossible for anyone trying to write a class library to be used by many different programmers on many different projects.
All the answers were quite well though out this time around.
You'll find them on the complete question page.
I think the best answer came from Michael Brundage who gets a copy of
XML: Extensible Markup Language as a prize just as soon as
he sends me his snail mail address. For the rest of you, I'll have a new question for
you soon which delves into some undocumented behavior in the
InputStream class.
Subject: ThreadSafeStreams Date: Mon, 7 Jun 1999 14:41:22 -0700 From: Michael Brundage brundage@ipac.caltech.edu
Hi Elliotte,
Saw your Cafe au Lait question regarding thread-safety and streams, and thought I would toss in a few cents:
The problem actually has nothing to do with streams or I/O at all. Rather, it's a fundamental problem with the thread/synchronization model chosen for Java. Monitors (as opposed to, say, concurrent sequential programming) just don't give the developer an opportunity to determine thread safety within the context of a single method. The developer is forced to look at the global picture to determine whether deadlocks or other threading problems might occur.
I once had a really informative conversation with Frode Odegard about this topic. I've attached a post he made to the LA Java Users' Group mailing list (including some useful references) to the end of this message for your reading pleasure.
A classic example is the thread-safe library class
public class Safe { protected int x, y; public synchronized void setXY(int x, int y) { this.x = x; this.y = y; } }
You see something like this in almost every library in existence: The class is meant to be extended, so the members are made protected. They can only be accessed through thread-safe methods, so everything's okay, right?
Wrong -- there is no way to prevent a developer from coming along and writing:
public class Scary extends Safe { public void mySetXY(int x, int y) { this.x = x; this.y = y; } }
Now, calls to
setXY() and
mySetXY() can be interleaved, resulting is
unexpected results. There is no way, as long as there are
protected/public fields or protected/public unsynchronized methods, to
prevent developers from coming along and shooting themselves in the foot
with a subclass that bypasses synchronization.
(Btw, you can simplify this example by using just one member of type double; however, I've heard that there are plans to make assignment to doubles atomic in the future, and in any case no JVM I've seen treats it as nonatomic, .)
One can avoid this particular case by never declaring anything that could result in a thread-safety problem in subclasses or elswhere protected or public. The example above could be repaired using:
public class Safe { private int x,y; public synchronized void setXY(int x, int y) { this.x = x; this.y = y; } }
Now any subclass that wants to change the value of x or y is required to go through the setXY() method on Safe. As long as every field is always private and every public and protected method is synchronized, your object can be threadsafe (which says nothing about liveness issues like deadlock, unfortunately). You could even imagine an automated tool to check your source code for this kind of potential threading problem.
There are some other areas where subclassing is problematic with threading, such as mixing static and instance methods (which synchronize differently), and deadlock is particularly thorny with subclasses. Doug Lea gives several examples in his book, Concurrent Programming in Java.
The example I gave above demonstrates how to settle the subclassing problem when the lock is required on the instance -- just always require subclasses to go through your (synchronized) methods on the superclass to do any modifications to fields in the superclass. However, the example you gave on Cafe au Lait is trickier, because the library class is not one you wrote yourself and two locks need to be acquired instead of one. Even for this trickier problem, the same idea of containment applies: You want to prevent other methods from modifying the objects you're using. The only way to guarantee that other developers don't bypass your synchronization is to prevent all other kinds of unsynchronized access.
Because the standard Java I/O classes are inherently un-thread-safe, this means you cannot allow external code to get ahold of the original stream objects, only thread-safe wrapped versions of the original stream. In particular, any stream class you write with the intention of making it thread-safe cannot have a constructor taking a generic stream (InputStream, OutputStream) since the calling code could then modify the stream it passed in the ctor in an un-thread-safe way. This restriction alone makes it almost impossible to guarantee thread-safety when working with the pre-existing java.io stream classes, since they are designed to be attached to other streams through the ctor.
Although this demonstrates that it is not possible to accept a (potentially unsafe) stream from the user's code, it is still possible to create a safe bridge by putting a stream factory into the library, wrapping the stream in an opaque way for its travels through the user's code, and then extracting the original stream again in the library code. By preventing the developer from ever getting at the underlying (unsafe) instance, you can guarantee that the developer's code never interferes with your library's synchronization.
If your library resides in a package that you control, say my.library, then you could do the following:
package my.library; import java.io.*; public interface SafeStream { } final class MySafeInputStream implements SafeStream { // package-local private InputStream stream; MySafeInputStream(InputStream s) { stream = s; } InputStream getInputStream() { return stream; } } final class MySafeOutputStream implements SafeStream { // package-local private OutputStream stream; MySafeOutputStream(OutputStream s) { stream = s; } OutputStream getOutputStream() { return stream; } } /* this implementation is just a sketch, to give the idea; it won't compile as-is because (among other things) it's missing error-handling. You'd also rewrite parts of it for better efficiency, etc. The main point of this class is that (together with the opaque handlers MySafeInputStream and MySafeOutputStream) it prevents the underlying InputStream or OutputStream instances from "leaking" to external (untrusted) code. */ public final class StreamFactory { public static SafeStream createInputStream(Class type) { return new MySafeInputStream(type.forInstance()); } public static SafeStream createInputStream(Class type, SafeStream inner) { Constructor ctor = type.getConstructor(new Class[] { InputStream.class }); return new MySafeInputStream(ctor.newInstance(new Object[] { inner })); } public static SafeStream createOutputStream(Class type) { return new MySafeOutputStream(type.forInstance()); } public static SafeStream createOutputStream(Class type, SafeStream inner) { Constructor ctor = type.getConstructor(new Class[] { OutputStream.class }); return new MySafeOutputStream(ctor.newInstance(new Object[] { inner })); } public SafeStream createFileInputStream(String file) { return new MyInputStream(new File(file).getInputStream()); } } public final class SafeStreamUtils { public static copy(SafeStream safeIn, SafeStream safeTo) throws IOException { InputStream in = ((MySafeInputStream)safeIn).getInputStream(); OutputStream out = ((MySafeOutputStream)safeIn).getOutputStream(); // now copy(in, to), exactly as before } }
You can always replace the one interface SafeStream with two, SafeInputStream and SafeOutputStream, and personally I would also add methods to these interfaces so that the user can work with them directly (but in a thread-safe way, mind you) instead of going through a global "Utils" object (which is so no-OO).
This factory/bridging mechanism works to solve the "guaranteed threadsafe" problem in general, not just for streams, although it also brings up issues of its own (mainly when trying to get two independent libraries to work well together). It can even be made to work better with custom stream subclasses by modifying the factory to accept arbitrary constructor method descriptors, although that opens you up to the possibility that the developer retains a reference to some underlying object (like another stream) and uses it to bypass your synchronization checks.
To: elharo@metalab.unc.edu Subject: ThreadSafeStreams Mime-Version: 1.0 Date: Wed, 09 Jun 1999 17:03:32 +1000 From: Michael Lawley lawley@dstc.edu.au
Hi Elliotte,
If Thread.suspend() were not deprecated, then one could even imagine a solution that involved attempting to suspend all other threads for the duration (not very pretty, and you'd have to contend with ThreadGroup security issues).
My only other suggestion is a real hack - grab the class files for the underlying implementation, hack them to change the class names, then wrap them with a new implementation using the original class names and arrange for the resulting classes to appear at the beginning of the classpath (or even the bootclasspath).
OTOH I wonder if this is really a problem in practice? Other than stdin, stdout and stderr, how often does one have multiple reader/ writer threads on a stream that one doesn't otherwise have control over either all the methods or the creation of the stream?
Date: Sat, 12 Jun 1999 18:23:08 -0400 From: Irum Parvez irum@sprint.ca Organization: Irum Parvez CA To: elharo@metalab.unc.edu Subject: ThreadSafeStreams
Hi Rusty
One of the difficulties with streams is that you are often using a chain of them. Witch stream (of the chain) do you synchronize on? - the first or the last(one of the middle streams?). Let me define: last-The node stream that the filter streams use.(There is one of these in a chain) first-A filter stream that you are using middle-A filter stream that other filter streams often use(delegate to)
It would be very nice if Sun would add to the Stream interface (API) It would be very nice if java.io.InputStream (& OutputStream) would contain an abstract method called getNode(). The FilterStreams would delegate this to the Node Stream that would implement the method (returning itself - this). This would allow you to just synchronize on the node stream of (a chain)and eliminate the overhead and risk of multiple locks.You would to just synchronize on the object returned by getNode().
What do you think?
P.S. I think your question could be phrased a little better:
//------------------------------------------------------------------------------ 3.Wrapping the unsynchronized classes in a synchronized class is insufficient because the underlying unsynchronized class may still be >exposed to other classes and threads.
>'unsynchronized classes' - Java mutex locks synchronize on objects - not classes.
I prefer to use the term delicate data or delicate object rather than unsynchronized class.
There are few ways to protect delicate data from being corrupted.
A java primitive variable that is shared between threads can be protected by declaring it as volitile(so it not stored in a register).
A delicate object can only be protected (from coruption) by encapsulation and synchronization:
NOTE:Protecting public delicate data is a waist of time.You must both synchronize and encasulate.
Date: Thu, 03 Jun 1999 18:51:36 +0200 From: Michael Peter <Michael.Peter@stud.informatik.uni-erlangen.de> X-Accept-Language: en MIME-Version: 1.0 To: elharo@metalab.unc.edu Subject: ThreadSafeStreams Status:
Hi!
Since you have no control over the classes you wish to synchronize, it is not possible to use them directly. In your example your StreamCopier class tries to use synchronized(in) and synchronized(out) to make the method thread safe. This is of no use, since the classes themselves are not synchronized and synchronization only takes places within synchronized blocks. Even if they were synchronized, your method could possibly create a problem if the classes use another object for synchronization like in the following example:
public class foo { private Object lock = new Object(); private int someVariable; public int someSynchronizedMethod() { synchronized( lock ) { // lock is used instead of this to synchronize return someVariable; } } }
Since you cannot access the lock, you cannot synchronize from the outside.
There is however a way to provide synchronization by using a wrapper. You write that wrapping the class is insufficient, but it is only if you don't create the class in your wrapper, but accept a previously created object. As for StreamCopier the following works:
public class SynchronizedInputStream extends InputStream { private InputStream in; public SynchronizedInputStream() { throw new IllegalArgumentException( "Use createXXXInputStream to create a synchronized input stream" ); } private SynchronizedInputStream( InputStream in ) { this.in = in; } public InputStream createFileInputStream( File f ) { return new SynchronizedInputStream( new FileInputStream( f ) ); } public InputStream createByteArrayInputStream( byte[] buf ) { return new SynchronizedInputStream( new ByteArrayInputStream( buf ) ); } /* ... You have to write a method for every InputStream type you want to use */ /* Wrap all InputStream methods */ public int available() { synchronized( this ) { return in.available(); } } public boolean markSupported() { synchronized( this ) { return in.markSupported(); } } /* Do this for all remaining methods */ }
This method might not be very efficient if multiple InputStreams are chained, but I think there is some room for optimisation, like checking if a class is already a instance of SynchronizedInputStream and then synchronizing only once. This is left as an exercise to the reader. (I always wanted to say this!)
The copy method from StreamCopier would then look like this:
public static void copy(SynchronizedInputStream in, SynchronizedOutput); } } } }
Date: Thu, 3 Jun 1999 16:41:04 -0700 To: elharo@metalab.unc.edu From: Greg Guerin <glguerin@amug.org> Subject: ThreadSafeStreams
Hi,
Good Q o'Week...
I think that parts of the question as stated are misleading (as in "leading to incorrect, unsupported, or erroneous conclusions or constraints":
>However, this only helps if the other threads using those streams >are also kind enough to synchronize them. In the general case, that >seems unlikely.
It better not be -- those other threads don't magically pop themselves into existence executing whatever code they feel like. Those other threads are spawned by *MY* code, and presumably execute the run() method *I* designate for them. Kindness is not the question: "The question is which is to be master -- that's all".
>1.You're trying to write a library routine to be used by many different >programmers in their own programs so you can't count on the rest of the >program outside this utility class being written in a thread safe fashion.
Fine, that's what synchronized methods are for. If you have encapsulate an entire object so only one thread can use it over a longer period of time than a single method-call, that's what wait/notify are for. See more under 3 below.
>2.You have not written the underlying classes that need to be thread safe >(InputStream and OutputStream in this example) so you can't add >synchronization directly to them.
True, but that doesn't mean I can't sub-class them and only give a SynchronizedInputStream and/or a SynchronizedOutputStream to the threads that need to coordinate their I/O. Either I write the threads to do that themselves, or I pass args to them that guarantee the needed coordination. A Thread is a thread of execution, not a monolithic self-contained self-determined omnipotent blob of code with only a few puppet-strings emerging from it. An ActiveX control or a Java Applet may fit that mold, but not a Thread.
>3.Wrapping the unsynchronized classes in a synchronized class is insufficient >because the underlying unsynchronized class may still be exposed to other >classes and threads.
So what if the *CLASS* is exposed? The threads are either:
If I don't have that level of control over my threads, then I have a bigger problem than merely coordinating some I/O streams.
This situation is no different than classes that take InputStream arguments, but happily work with FileInputStream, PipeInputStream, FilterInputStream, StrongCryptoInputStream, or MyBeamedInFromMarsInputStream. Indeed, a sub-class of FilterInputStream and FilterOutputStream with synchronization added to its methods should handily solve the problem, as I understand it.
If you need thread-safe access to an entire object over multiple method-calls, then a centralized arbitrator operating with wait/notify will be needed. You call the arbitrator to get the object, you use it until you don't need it any more, then you return the object to the arbitrator so it can hand it out again. Threads that fail to return an object to the arbitrator are defective -- they are resource-tyrants. If your question is asking "How can an arbitrator wrest contol of a resource back from a resource-tyrant?" the answer is "you can't". If that's a problem, then either don't write resource-tyrant threads, or don't share resources with a resource-tyrannical thread.
I think your StreamCopier.copy() example is kind of an odd case. First, I don't think I'd call it copy() -- seems to me that expand() or concatenate() might make more sense. Second, I probably wouldn't have structured it as a static method, but as an instance-method of a class that either expanded a given InputStream arg to an instance-variable OutputStream (guaranteeing the expansion would occur without intrusion), or a class constructed with several InputStream sources that guaranteed reading would occur in sequence (concatenation). The former makes more sense to me for your "copy" feature. The latter is essentially a java.io.SequenceInputStream. Either way, I think the StreamCopier is just a poorly-designed class for its intended purpose (judging by its copy() method alone, since I haven't read your book).
Since I haven't read your new book yet, I don't know to what use StreamCopier is being put. If it's guaranteeing non-intrusion of other writes, then a sub-classed FilterOutputStream could easily fill the bill. If it's concatenation, then SequenceInputStream might work, though I doubt its Enumeration is guaranteed thread-safe, so you might need a custom-made InputStream sub-class with thread-safety and concatenation.
Finally, here's a common idiom, overextended to make a point, that shares certain characteristics with the problem as you posed it. Consider:
OutputStream out1 = mySocket.getOutputStream(); OutputStream out2 = new BufferedOutputStream( out1 ); OutputStream out3 = new CRLFingFilterOutputStream( out2 ); OutputStream out4 = new BufferedOutputStream( out3 );
Here we have 4 different OutputStreams, and none of them are thread-safe. A single thread that writes to out1, out2, out3, and out4 at different points is going to screw things up badly, due to buffering, filtering, etc. OH NO, HOW DO WE PREVENT THAT?! Simple -- you create the thread's code to only write on out4. It can't mangle what it can't touch.
The same principle applies to the thread-safety issue with StreamCopier. If it's reading and writing to a synchronized stream, then each buffer-full that it processes has guaranteed sequential integrity. If it needs to guarantee sequential integrity of an entire InputStream onto an OutputStream, then it needs to participate in the Designated Sharing Ritual just like every other thread that has access to either stream. If a thread can't play by those rules, then you shouldn't let it play with other threads who are willing to share their toys and play nicely.
Sorry if that went too long.
-- GG
p.s. I don't have either of your Java I/O or XML books, but even if I don't win, you could just leave the choice up to the winner -- if unsure of customer's needs, ask customer. ;-) | http://www.cafeaulait.org/questions/06031999.html | crawl-002 | en | refinedweb |
Package::Alias - alias one namespace into another
use Package::Alias Foo => 'main', P => 'Really::Long::Package::Name', 'A::B' => 'C::D', Alias => 'Existing::Namespace';
This module aliases one package name to another. After running the SYNOPSIS code,
@INC and
@Foo::INC reference the same memory.
$Really::Long::Package::Name::var and $P::var do as well..
Package::Alias won't, by default, alias over a namespace if it's already in use. That's not considered a fatal error - you'll just get a warning and flow will continue. You can change that cowardly behaviour this way:
# Make Bar like Foo, even if Bar is already in use. BEGIN { $Package::Alias::BRAVE = 1 } use Package::Alias Bar => 'Foo';
Joshua Keroes <skunkworks@eli.net> | http://search.cpan.org/~joshua/Package-Alias-0.04/Alias.pm | crawl-002 | en | refinedweb |
Win32::FileOp - 0.14.1
Module for file operations with fancy dialog boxes, for moving files to recycle bin, reading and updating INI files and file operations in general.
Unless mentioned otherwise all functions work under WinXP, Win2k, WinNT, WinME and Win9x. Let me know if not.
Version 0.14.1
GetDesktopHandle
GetWindowHandle
Copy
CopyConfirm
CopyConfirmEach
CopyEx
Move
MoveConfirm
MoveConfirmEach
MoveEx
MoveFile
MoveFileEx
CopyFile
MoveAtReboot
Recycle
RecycleConfirm
RecycleConfirmEach
RecycleEx
DeleteConfirm
DeleteConfirmEach
DeleteEx
DeleteAtReboot
UpdateDir
FillInDir
Compress
Uncompress
Compressed
SetCompression
GetCompression
CompressDir
UncompressDir
GetLargeFileSize
GetDiskFreeSpace
AddToRecentDocs
EmptyRecentDocs
WriteToINI
WriteToWININI
ReadINI
ReadWININI
DeleteFromINI
DeleteFromWININI
OpenDialog
SaveAsDialog
BrowseForFolder
Map
Unmap
Disconnect
Mapped
Subst
Unsubst
Substed
ShellExecute
To get the error message from most of these functions, you should not use $!, but $^E or Win32::FormatMessage(Win32::GetLastError())!
use Win32::FileOp $handle = GetDesktopHandle()
Same as: $handle = $Win32::FileOp::DesktopHandle
Used to get desktop window handle when confirmation is used. The value of the handle can be gotten from $Win32::FileOp::DesktopHandle.
Returns the Desktop Window handle.
use Win32::FileOp $handle = GetWindowHandle()
Same as: $handle = $Win32::FileOp::WindowHandle
Used to get the console window handle when confirmation is used. The value of the handle can be gotten from $Win32::FileOp::WindowHandle.
Returns the Console Window handle.
Copy ($FileName => $FileOrDirectoryName [, ...]) Copy (\@FileNames => $DirectoryName [, ...] ) Copy (\@FileNames => \@FileOrDirectoryNames [, ...])
Copies the specified files. Doesn't show any confirmation nor progress dialogs.
It may show an error message dialog, because I had to omit FOF_NOERRORUI from its call to allow for autocreating directories.
You should end the directory names by backslash so that they are not mistaken for filenames. It is not necessary if the directory already exists or if you use Copy \@filenames => $dirname.
Returns true if successful.
Rem: Together with Delete, Move, DeleteConfirm, CopyConfirm, MoveConfirm, CopyEx, MoveEx, DeleteEx and Recycle based on Win32 API function SHFileOperation().
CopyConfirm ($FileName => $FileOrDirectoryName [, ...]) CopyConfirm (\@FileNames => $DirectoryName [, ...] ) CopyConfirm (\@FileNames => \@FileOrDirectoryNames [, ...])
Copies the specified files. In case of a collision, shows a confirmation dialog. Shows progress dialogs.
Returns true if successful.
The same as CopyConfirm.
CopyEx ($FileName => $FileOrDirectoryName, [...], $options) CopyEx (\@FileNames => $DirectoryName, [...], $options) CopyEx (\@FileNames => \@FileOrDirectoryNames, [...], $options)
Copies the specified files. See below for the available options (
FOF_ constants).
Returns true if successful.
Moves the specified files. Parameters as
Copy
It may show an error message dialog, because I had to omit FOF_NOERRORUI from its call to allow for autocreating directories.
Moves the specified files. Parameters as
CopyConfirm
The same as MoveConfirm
Moves the specified files. Parameters as
CopyEx
MoveAtReboot ($FileName => $DestFileName, [...])
This function moves the file during the next start of the system.
MoveFile ($FileName => $DestFileName [, ...])
Move files. This function uses API function MoveFileEx as well as MoveAtReboot. It may be a little quicker than
Move, but it doesn't understand wildcards and the $DestFileName may not be a directory.
REM: Based on Win32 API function MoveFileEx().
MoveFileEx ($FileName => $DestFileName [, ...], $options)
This is a simple wrapper around the API function MoveFileEx, it calls the function for every pair of files with the $options you specify. See below for the available options (
FOF_... constants).
REM: Based on Win32 API function MoveFileEx().
CopyFile ($FileName => $DestFileName [, $FileName2 => $DestFileName2 [, ...]])
Copy a file somewhere. This function is not able to copy directories!
REM: Based on Win32 API function CopyFile().
Recycle @filenames
Send the files into the recycle bin. You will not get any confirmation dialogs.
Returns true if successful.
RecycleConfirm @filenames
Send the files into the recycle bin. You will get a confirmation dialog if you have "Display delete confirmation dialog" turned on in your recycle bin. You will confirm the deletion of all the files at once.
Returns true if successful. Please remember that this function is successful even if the user chose [No] on the confirmation dialog!
RecycleConfirmEach @filenames
Send the files into the recycle bin. You will get a separate confirmation dialog for each file if you have "Display delete confirmation dialog" turned on in your recycle bin. You will confirm the deletion of all the files at once.
Returns the number of files that were successfully deleted.
RecycleEx @filenames, $options
Send the files into the recycle bin. You may specify the options for deleting, see below. You may get a confirmation dialog if you have "Display delete confirmation dialog" turned on in your recycle bin, if so, you will confirm the deletion of all the files at once.
Returns true if successful. Please remember that this function is successful even if the user chose [No] on the confirmation dialog!
The $options may be constructed from
FOF_... constants.
Delete @filenames
Deletes the files. You will not get any confirmation dialogs.
Returns true if successful.
DeleteConfirm @filenames
Deletes the the files. You will get a confirmation dialog to confirm the deletion of all the files at once.
Returns true if successful. Please remember that this function is successful even if the user selected [No] on the confirmation dialog!
DeleteConfirmEach @filenames
Deletes the files. You will get a separate confirmation dialog for each file.
Returns the number of files that were successfully deleted.
DeleteEx @filenames, $options
Deletes the files. You may specify the options for deleting, see below. You may get a confirmation dialog if you have "Display delete confirmation dialog" turned on in your recycle bin.
Returns true if successful. Please remember that this function is successful even if the user selected [No] on the confirmation dialog!
DeleteAtReboot @files
This function moves the file during the next start of the system.
UpdateDir $SourceDirectory, $DestDirectory [, \&callback]
Copy the newer or updated files from $SourceDir to $DestDir. Processes subdirectories! The &callback function is called for each file to be copied. The parameters it gets are exactly the same as the callback function in File::Find. That is $_, $File::Find::dir and $File::Find::name.
If this function returns a false value, the file is skipped.
Ex. UpdateDir 'c:\dir' => 'e:\dir', sub {print '.'}; UpdateDir 'c:\dir' => 'e:\dir', sub {if (/^s/i) {print '.'}};
FillInDir $SourceDirectory, $DestDirectory [, \&callback]
Copy the files from $SourceDir not present in $DestDir. Processes subdirectories! The &callback works the same as in UpdateDir.
Compress $filename [, ...]
Compresses the file(s) or directories using the transparent WinNT compression (The same as checking the "Compressed" checkbox in Explorer properties fo the file).
It doesn't compress all files and subdirectories in a directory you specify. Use ComressDir for that. Compress($directory) only sets the compression flag for the directory so that the new files are compressed by default.
WinNT only!
REM: Together with other compression related functions based on DeviceIoControl() Win32 API function.
Uncompress $filename [, ...]
Uncompresses the file(s) using the transparent WinNT compression (The same as unchecking the "Compressed" checkbox in Explorer properties fo the file).
WinNT only!
Compressed $filename
Checks the compression status for a file.
SetCompression $filename [, $filename], $value
Sets the compression status for file(s). The $value should be either 1 or 0.
GetCompression $filename
Checks the compression status for a file.
CompressDir $directory, ... [, \&callback]
Recursively descends the directory(ies) specified and compresses all files and directories within. If you specify the \&callback, the specified function gets executed for each of the files and directories. If the callback returns false, no compression is done on the file/directory.
The parameters the callback gets are exactly the same as the callback function in File::Find. That is $_, $File::Find::dir and $File::Find::name.
UncompressDir $directory, ... [, \&callback]
The counterpart of CompressDir.
($lo_word, $hi_word) = GetLargeFileSize( $path ); # or $file_size = GetLargeFileSize( $path );
This gives you the file size for too big files (over 4GB). If called in list context returns the two 32 bit words, in scalar context returns the file size as one number ... if the size is too big to fit in an Integer it'll be returned as a Float. This means that if it's above cca. 10E15 it may get slightly rounded.
$freeSpaceForUser = GetDiskFreeSpace $path; # or ($freeSpaceForUser, $totalSize, $totalFreeSpace) = GetDiskFreeSpace $path;
In scalar context returns the amount of free space available to current user (respecting quotas), in list context returns the free space for current user, the total size of disk and the total amount of free space on the disk.
Works OK with huge disks.
Requires at least Windows 95 OSR2 or WinNT 4.0.
AddToRecentDocs $filename [, ...]
Add a shortcut(s) to the file(s) into the Recent Documents folder. The shortcuts will appear in the Documents submenu of Start Menu.
The paths may be relative.
REM: Based on Win32 API function SHAddToRecentDocs().
EmptyRecentDocs;
Deletes all shortcuts from the Recent Documents folder.
REM: Based on Win32 API function SHAddToRecentDocs(). Strange huh?
WriteToINI $INIfile, $section, $name1 => $value [, $name2 => $value2 [, ...]]
Copies a string into the specified section of the specified initialization file. You may pass several name/value pairs at once.
Returns 1 if successful, undef otherwise. See Win32::GetLastError & Win32::FormatMessage(Win32::GetLastError) if failed for the error code and message.
REM: Based on Win32 API function WritePrivateProfileString().
WriteToWININI $section, $name1 => $value1 [, $name2 => $value2 [, ...]]
Copies a string into the specified section of WIN.INI. You may pass several name/value pairs at once.
Please note that some values or sections of WIN.INI and some other INI files are mapped to registry so they do not show up in the INI file even if they were successfully written!
REM: Based on Win32 API function WriteProfileString().
$value = ReadINI $INIfile, $section, $name [, $defaultvalue]
Reads a value from an INI file. If you do not specify the default and the value is not found you'll get undef.
REM: Based on Win32 API function GetPrivateProfileString().
$value = ReadWININI $section, $name [, $defaultvalue]
Reads a value from WIN.INI file. If you do not specify the default and the value is not found you'll get undef.
Please note that some values or sections of WIN.INI and some other INI files are mapped to registry so even that they do not show up in the INI file this function will find and read them!
REM: Based on Win32 API function GetProfileString().
DeleteFromINI $INIfile, $section, @names_to_delete
Delete a value from an INI file.
REM: Based on Win32 API function WritePrivateProfileString().
DeleteFromWININI $section, @names_to_delete
Delete a value from WIN.INI.
REM: Based on Win32 API function WriteProfileString().
@sections = ReadINISections($inifile); \@sections = ReadINISections($inifile); ReadINISections($inifile,\@sections);
Enumerate the sections in a INI file. If you do not specify the INI file, it enumerates the contents of win.ini.
REM: Based on Win32 API function GetPrivateProfileString().
@sections = ReadINISectionKeys($inifile, $section); \@sections = ReadINISectionKeys($inifile, $section); ReadINISectionKeys($inifile, $section, \@sections);
Enumerate the keys in a section of a INI file. If you do not specify the INI file, it enumerates the contents of win.ini.
REM: Based on Win32 API function GetPrivateProfileString().
$filename = OpenDialog \%parameters [, $defaultfilename] @filenames = OpenDialog \%parameters [, $defaultfilename] $filename = OpenDialog %parameters [, $defaultfilename] @filenames = OpenDialog %parameters [, $defaultfilename]
Creates the standard Open dialog allowing you to select some files.
Returns a list of selected files or undef if the user pressed [Escape]. It also sets two global variables :
$Win32::FileOp::ReadOnly = the user requested a readonly access. $Win32::FileOp::SelectedFilter = the id of filter selected in the dialogbox %parameters title => the title for the dialog, default is 'Open' 'Open file' filters => definition of file filters { 'Filter 1' => '*.txt;*.doc', 'Filter 2' => '*.pl;*.pm'} [ 'Filter 1' => '*.txt;*.doc', 'Filter 2' => '*.pl;*.pm'] [ 'Filter 1' => '*.txt;*.doc', 'Filter 2' => '*.pl;*.pm' , $default] "Filter 1\0*.txt;*.doc\0Filter 2\0*.pl;*.pm" defaultfilter => the number of the default filter counting from 1. Please keep in mind that hashes do not preserve ordering! dir => the initial directory for the dialog, default is the current directory filename => the default filename to be showed in the dialog handle => the handle to the window which will own this dialog Default is the console of the perl script. If you do not want to tie the dialog to any window use handle => 0 options => options for the dialog, see bellow OFN_... constants
There is a little problem with the underlying function. You have to preallocate a buffer for the selected filenames and if the buffer is too smallyou will not get any results. I've consulted this with the guys on Perl-Win32-Users and there is not any nice solution. The default size of buffer is 256B if the options do not include OFN_ALLOWMULTISELECT and 64KB if they do. You may change the later via variable $Win32::FileOp::BufferSize.
NOTE: I have been notified about a strange behaviour under Win98. If you use UNCs you should always use backslashes in the paths. \\server/share doesn't work at all under Win98 and //server/share works only BEFORE calling the Win32::FileOp::OpenDialog(). I have no idea what is the cause of this behaviour.
REM: Based on Win32 API function GetOpenFileName().
Creates the Save As dialog box, parameters are the same as for OpenDialog.
REM: Based on Win32 API function GetSaveFileName().
BrowseForFolder [$title [, $rootFolder [, $options]]]
Creates the standard "Browse For Folder" dialog. The $title specifies the text to be displayed below the title of the dialog. The $rootFolder may be one of the
CSIDL_... constants. For $options you should use the
BIF_... constants. Description of the constants is bellow.
REM: Based on Win32 API function SHBrowseForFolder().
Map $drive => $share; $drive = Map $share; Map $drive => $share, \%options; $drive = Map $share, \%options;
Map a drive letter or LTPx to a network resource. If successfull returns the drive letter/LPTx.
If you do not specify the drive letter, the function uses the last free letter, if you specify undef or empty string as the drive then the share is connected, but not assigned a letter.
Since the function doesn't require the ':' in the drive name you may use the function like this:
Map H => '\\\\server\share'; as well as Map 'H:' => '\\\\server\share'; Options: persistent => 0/1 should the connection be restored on next logon? user => $username username to be used to connect the device passwd => $password password to be used to connect the device overwrite => 0/1 should the drive be remapped if it was already connected? force_overwrite => 0/1 should the drive be forcefully disconnected and remapped if it was already connected? interactive = 0 / 'yes' / $WindowHandle if necessary displays a dialog box asking the user for the username and password. prompt = 0/1 if used with interactive=> the user is ALWAYS asked for the username and password, even if you supplied them in the call. If you did not specify interactive=> then prompt=> is ignored. redirect = 0/1 forces the redirection of a local device when making the connection Example: Map I => '\\\\servername\share', {persistent=>1,overwrite=>1};
Notes: 1) If you use the
interactive option the user may Cancel that dialog. In that case the Map() fails, returns undef and Win32::GetLastError() returns 1223 and $^E is equals to 1223 in numerical context and to "The operation was canceled by the user." in string context.
2) You should only check the Win32::GetLastError() or $^E if the function failed. If you do check it even if it succeeded you may get error 997 "Overlapped I/O operation is in progress.". This means that it worked all right and you should not care about this bug!
REM: Based on Win32 API function WNetAddConnection3().
Connect $share Connect $share, \%options
Connects a share without assigning a drive letter to it.
REM: Based on Win32 API function WNetAddConnection3().
Disconnect $drive_or_share; Disconnect $drive_or_share, \%options;
Breaks an existing network connection. It can also be used to remove remembered network connections that are not currently connected.
$drive_or_share specifies the name of either the redirected local device or the remote network resource to disconnect from. If this parameter specifies a redirected local resource, only the specified redirection is broken; otherwise, all connections to the remote network resource are broken.
Options: persistent = 0/1, if you do not use persistent=>1, the connection will be closed, but the drive letter will still be mapped to the device force = 0/1, disconnect even if there are some open files See also: Unmap
REM: Based on Win32 API function WNetCancelConnection2().
Unmap $drive_or_share; Unmap $drive_or_share, \%options;
The only difference from Disconnect is that persistent=>1 is the default.
REM: Based on Win32 API function WNetCancelConnection2().
%drives = Mapped; $share = Mapped $drive; $drive = Mapped $share; # currently not implemented !!!
This function retrieves the name of the network resource associated with a local device. Or vice versa.
If you do not specify any parameter, you get a hash of drives and shares.
To get the error message from most of these functions, you should not use $!, but Win32::FormatMessage(Win32::GetLastError()) or $^E !
REM: Based on Win32 API function WNetGetConnection().
Subst Z => 'c:\temp'; Subst 'Z:' => '\\\\servername\share\subdir';
This function substitutes a drive letter for a directory, both local and UNC.
Be very carefull with this, cause it'll allow you to change the substitution even for C:. ! Which will most likely be lethal !
Works only on WinNT.
REM: Based on DefineDosDevice()
=item SubstDev SubstDev F => 'Floppy0'; SubstDev G => 'Harddisk0\Partition1';
Allows you to make a substitution to devices. For example if you want to make an alias for A: ...
To get the device mapped to a drive letter use Substed() in list context.
Works only on WinNT.
REM: Based on DefineDosDevice()
Unsubst 'X';
Deletes the substitution for a drive letter. Again, be very carefull with this!
Works only on WinNT.
REM: Based on DefineDosDevice()
%drives = Substed; $substitution = Substed $drive; ($substitution, $device) = Substed $drive;
This function retrieves the name of the resource(s) associated with a drive letter(s).
If used with a parameter :
In scalar context you get the substitution. If the drive is the root of a local device you'll get an empty string, if it's not mapped to anything you'll get undef.
In list context you'll get both the substitution and the device/type of device :
Substed 'A:' => ('','Floppy0') Substed 'B:' => undef Substed 'C:' => ('','Harddisk0\Partition1') Substed 'H:' => ('\\\\servername\homes\username','UNC') # set by subst H: \\servername\homes\username Substed 'S:' => ('\\\\servername\servis','LanmanRedirector') # set by net use S: \\servername\servis Substed 'X:' => () # not mapped to anything
If used without a parameter gives you a hash of drives and their corresponding sunstitutions.
Works only on WinNT.
REM: Based on Win32 API function QueryDosDevice().
ShellExecute $filename; ShellExecute $operation => $filename; ShellExecute $operation => $filename, $params, $dir, $showOptions, $handle; ShellExecute $filename, {params => $params, dir => $dir, show => $showOptions, handle => $handle}; ShellExecute $operation => $filename, {params => $params, dir => $dir, show => $showOptions, handle => $handle};
This function instructs the system to execute whatever application is assigned to the file type as the specified action in the registry.
ShellExecute 'open' => $filename; or ShellExecute $filename;
is equivalent to doubleclicking the file in the Explorer,
ShellExecute 'edit' => $filename;
is equivalent to rightclicking it and selecting the Edit action.
Parameters:
$operation : specifies the action to perform. The set of available operations depends on the file type. Generally, the actions available from an object's shortcut menu are available verbs.
$filename : The file to execute the action for.
$params : If the $filename parameter specifies an executable file, $params is a string that specifies the parameters to be passed to the application. The format of this string is determined by the $operation that is to be invoked. If $filename specifies a document file, $params should be undef.
$dir : the default directory for the invoked program.
$showOptions : one of the SW_... constants that specifies how the application is to be displayed when it is opened.
$handle : The handle of the window that gets any message boxes that may be invoked by this. Be default the handle of the console that this script runs in.
REM: Based on Win32 API function ShellExecute
FOF_SILENT = do not show the progress dialog FOF_RENAMEONCOLLISION = rename the file in case of collision ('file.txt' -> 'Copy of file.txt') FOF_NOCONFIRMATION = do not show the confirmation dialog FOF_ALLOWUNDO = send file(s) to RecycleBin instead of deleting FOF_FILESONLY = skip directories FOF_SIMPLEPROGRESS = do not show the filenames in the process dialog FOF_NOCONFIRMMKDIR = do not confirm creating directories FOF_NOERRORUI = do not report errors FOF_NOCOPYSECURITYATTRIBS = do not copy security attributes
OFN_ALLOWMULTISELECT
Specifies that the File Name list box allows multiple selections. If you also set the OFN_EXPLORER flag, the dialog box uses the Explorer-style user interface; otherwise, it uses the old-style user interface.
OFN_CREATEPROMPT.
OFN_EXPLORER
Since I cannot implement hook procedures through Win32::API (AFAIK), this option in not necessary._LONGNAMES
For old-style dialog boxes, this flag causes the dialog box to use long filenames. If this flag is not specified, or if the OFN_ALLOWMULTISELECT flag is also set, old-style dialog boxes use short filenames (8.3 format) for filenames with spaces. Explorer-style dialog boxes ignore this flag and always display long filenames.
OFN_NOCHANGEDIR
Restores the current directory to its original value if the user changed the directory while searching for files.
OFN_NODEREFERENCELINKS
Directs the dialog box to return the path and filename of the selected shortcut (.LNK) file. If this value is not given, the dialog box returns the path and filename of the file referenced by the shortcut
OFN_NOLONGNAMES
For old-style dialog boxes, this flag causes the dialog box to use short filenames (8.3 format). Explorer-style dialog boxes ignore this flag and always display long filenames. dialog boxes allow invalid characters in the returned filename.. If the check box is checked when the dialog box is closed $Win32::FileOp::ReadOnly is set to true.
OFN_SHAREAWARE
Specifies that if a call to the OpenFile function fails because of a network sharing violation, the error is ignored and the dialog box returns the selected filename.
OFN_SHOWHELP
Causes the dialog box to display the Help button. The hwndOwner member must specify the window to receive the HELPMSGSTRING registered messages that the dialog box sends when the user clicks the Help button.
BIF_DONTGOBELOWDOMAIN
Does not include network folders below the domain level in the tree view control.
BIF_RETURNONLYFSDIRS
Only returns file system directories. If the user selects folders that are not part of the file system, the OK button is grayed.
BIF_RETURNFSANCESTORS
Only returns file system ancestors. If the user selects anything other than a file system ancestor, the OK button is grayed.
This option is strange, cause it seems to allow you to select only computers. I don't know the definition of a filesystem ancestor, but I didn't think it would be a computer. ?-|
BIF_BROWSEFORCOMPUTER
Only returns computers. If the user selects anything other than a computer, the OK button is grayed.
BIF_BROWSEFORPRINTER
Only returns printers. If the user selects anything other than a printer, the OK button is grayed.
BIF_STATUSTEXT
Since it is currently impossible to define callbacks, this options is useless.
This is a list of available options for BrowseForFolder().
CSIDL_BITBUCKET
Recycle bin --- file system directory containing file objects in the user's recycle bin. The location of this directory is not in the registry; it is marked with the hidden and system attributes to prevent the user from moving or deleting it.
CSIDL_CONTROLS
Control Panel --- virtual folder containing icons for the control panel applications.
CSIDL_DESKTOP
Windows desktop --- virtual folder at the root of the name space.
CSIDL_DESKTOPDIRECTORY
File system directory used to physically store file objects on the desktop (not to be confused with the desktop folder itself).
CSIDL_DRIVES
My Computer --- virtual folder containing everything on the local computer: storage devices, printers, and Control Panel. The folder may also contain mapped network drives.
CSIDL_FONTS
Virtual folder containing fonts.
CSIDL_NETHOOD
File system directory containing objects that appear in the network neighborhood.
CSIDL_NETWORK
Network Neighborhood --- virtual folder representing the top level of the network hierarchy.
CSIDL_PERSONAL
File system directory that serves as a common repository for documents.
CSIDL_PRINTERS
Printers folder --- virtual folder containing installed printers.
CSIDL_PROGRAMS
File system directory that contains the user's program groups (which are also file system directories).
CSIDL_RECENT
File system directory that contains the user's most recently used documents.
CSIDL_SENDTO
File system directory that contains Send To menu items.
CSIDL_STARTMENU
File system directory containing Start menu items.
CSIDL_STARTUP
File system directory that corresponds to the user's Startup program group.
CSIDL_TEMPLATES
File system directory that serves as a common repository for document templates.
Not all options make sense in all functions!
SW_HIDE
Hides the window and activates another window.
SW_MAXIMIZE
Maximizes the specified window.
SW_MINIMIZE
Minimizes the specified window and activates the next top-level window in the z-order.
SW_RESTORE
Activates and displays the window. If the window is minimized or maximized, Windows restores it to its original size and position. An application should specify this flag when restoring a minimized window.
SW_SHOW
Activates the window and displays it in its current size and position..
SW_SHOWMAXIMIZED
Activates the window and displays it as a maximized window.
SW_SHOWMINIMIZED
Activates the window and displays it as a minimized window.
SW_SHOWMINNOACTIVE
Displays the window as a minimized window. The active window remains active.
SW_SHOWNA
Displays the window in its current state. The active window remains active.
SW_SHOWNOACTIVATE
Displays a window in its most recent size and position. The active window remains active.
SW_SHOWNORMAL
Activates and displays a window. If the window is minimized or maximized, Windows restores it to its original size and position. An application should specify this flag when displaying the window for the first time.
$Win32::FileOp::ProgressTitle
This variable (if defined) contains the text to be displayed on the progress dialog if using FOF_SIMPLEPROGRESS. This allows you to present the user with your own message about what is happening to his computer.
If the options for the call do not contain FOF_SIMPLEPROGRESS, this variable is ignored.
use Win32::FileOp; CopyConfirm ('c:\temp\kinter.pl' => 'c:\temp\copy\\', ['\temp\kinter1.pl', 'temp\kinter2.pl'] => ['c:\temp\copy1.pl', 'c:\temp\copy2.pl']); $Win32::FileOp::ProgressTitle = "Moving the temporary files ..."; MoveEx 'c:\temp\file.txt' => 'd:\temp\\', ['c:\temp\file1.txt','c:\temp\file2.txt'] => 'd:\temp', FOF_RENAMEONCOLLISION | FOF_SIMPLEPROGRESS; undef $Win32::FileOp::ProgressTitle; Recycle 'c:\temp\kinter.pl';
All the functions keep Win32::API handles between calls. If you want to free the handles you may undefine them, but NEVER EVER set them to anything else than undef !!! Even "$handlename = $handlename;" would destroy the handle without repair! See docs for Data::Lazy.pm for explanation.
List of handles and functions that use them:
$Win32::FileOp::fileop : Copy, CopyEx, CopyConfirm, Move, MoveEx, MoveConfirm Delete, DeleteEx, DeleteConfirm, Recycle, RecycleEx, RecycleConfirm $Win32::FileOp::movefileex : MoveFileEx MoveFile MoveAtReboot $Win32::FileOp::movefileexDel : DeleteAtReboot $Win32::FileOp::copyfile : CopyFile $Win32::FileOp::writeINI : WriteToINI MoveAtReboot DeleteAtReboot $Win32::FileOp::writeWININI : WriteToWININI $Win32::FileOp::deleteINI : DeleteFromINI $Win32::FileOp::deleteWININI : DeleteFromWININI $Win32::FileOp::readINI : ReadINI $Win32::FileOp::readWININI : ReadWININI $Win32::FileOp::GetOpenFileName : OpenDialog $Win32::FileOp::GetSaveFileName : SaveAsDialog $Win32::FileOp::SHAddToRecentDocs : AddToRecentDocs EmptyRecentDocs $Win32::FileOp::DesktopHandle $Win32::FileOp::WindowHandle : OpenDialog SaveDialog $Win32::FileOp::WNetAddConnection3 : Map $Win32::FileOp::WNetGetConnection : Mapped $Win32::FileOp::WNetCancelConnection2 : Unmap Disconnect Map $Win32::FileOp::GetLogicalDrives : FreeDriveLetters Map
By default all functions are exported! If you do not want to polute your namespace too much import only the functions you need. You may import either single functions or whole groups.
The available groups are :
BASIC = Move..., Copy..., Recycle... and Delete... functions plus constants _BASIC = FOF_... constants only HANDLES = DesktopHandle GetDesktopHandle WindowHandle GetWindowHandle INI = WriteToINI WriteToWININI ReadINI ReadWININI ReadINISectionKeys DeleteFromINI DeleteFromWININI DIALOGS = OpenDialog, SaveAsDialog and BrowseForFolder plus OFN_..., BIF_... and CSIDL_... constants _DIALOGS = only OFN_..., BIF_... and CSIDL_... constants RECENT = AddToRecentDocs, EmptyRecentDocs DIRECTORY = UpdateDir, FillInDir COMPRESS => Compress Uncompress Compressed SetCompression GetCompression CompressedSize CompressDir UncompressDir MAP => Map Unmap Disconnect Mapped SUBST => Subst Unsubst Substed SubstDev
Examples:
use Win32::FileOp qw(:BASIC GetDesktopHandle); use Win32::FileOp qw(:_BASIC MoveEx CopyEx); use Win32::FileOp qw(:INI :_DIALOGS SaveAsDialog);
This module contains all methods from Win32::RecycleBin. The only change you have to do is to use this module instead of the old Win32::RecycleBin. Win32:RecycleBin is not supported anymore!
WNetConnectionDialog, WNetDisconnectDialog
Module built by : Jan Krynicky <Jenda@Krynicky.cz> $Bill Luebkert <dbe@wgn.net> Mike Blazer <blazer@peterlink.ru> Aldo Calpini <a.calpini@romagiubileo.it> Michael Yamada <myamada@gj.com> | http://search.cpan.org/~jenda/Win32-FileOp-0.14.1/FileOp.pm | crawl-002 | en | refinedweb |
QWizard - Display a series of questions, get the answers, and act on the answers.
# #izard's doesn't.
A pseudo-code walk-through of the essential results of the magic() routine above is below. In a CGI script, for example, the magic() routine will be called multiple times (once per screen) but the results will be the same in the end -- it's all taken care of magically ;-).
################ ## WARNING: pseudo-code describing a process! Not real code! ################ # Loop through each primary and display the primary's questions. while(primaries to process) { display_primary_questions(); get_user_input(); check_results(); run_primary_post_answers(); } # Displays a "will be doing these things" screen, # and has a commit button. display_commit_screen(); # Loop through each primary and run its actions. # Note: see action documentation about execution order! foreach (primary that was displayed) { results = run_primary_actions(); display(results); } # If magic() is called again, it restarts from # the top primary again.
Options passed to the QWizard new() operator define how the QWizard instance will behave. Options are passed in the following manner:
new QWizard (option => value, ...)
Valid options are:
The document title to be printed in the title bar of the window.
GENERATOR is a reference to a valid QWizard generator. Current generator classes are:
- QWizard::Generator::Best (default: picks the best available) - QWizard::Generator::HTML - QWizard::Generator::Gtk2 - QWizard::Generator::Tk - QWizard::Generator::ReadLine (limited in functionality)
The QWizard::Generator::Best generator is used if no specific generator is specified. The Best generator will create an HTML generator if used in a web context (i.e., a CGI script), or else pick the best of the available other generators (Gtk2, then Tk, then ReadLine).
This example forces a Gtk2 generator to be used:
my $wiz = new QWizard(generator => new QWizard::QWizard::Gtk2(), # ... );
If you want the default generator that QWizard will provide you, but would still like to provide that generator with some arguments you can use this token to pass an array reference of arguments. These arguments will be passed to the new() method of the Generator that is created.
This should be the top location of a web page where the questions will be displayed. This is needed for "go to top" buttons and the like to work. This is not needed if the QWizard-based script is not going to be used in a CGI or other web-based environment (eg, if it's going to be used in mod_perl).
my_primaries will define the list of questions to be given to the user. my_primaries just defines the questions, but does not mean the user will be prompted with each question. The questions in this series that will be displayed for the user to answer is determined by the magic() function's starting arguments, described below. The format of the primaries hash is described in the Primaries Definition section below. The recognized values in the primaries hash is described in the Primaries Options section.
If set, the final confirmation screen will not be displayed, but instead the resulting actions will be automatically run. This can also be achieved inside the wizard tokens primaries by setting a question name to no_confirm with a value of 1 (using a hidden question type.)
This function will be called just before a set of questions is displayed. It can be used to perform such functions as printing preliminary information and initializing data.
This function will be called just after a set of questions is displayed.
A place to add extra widgets to a primary at the very top.
See the bar documentation in the QUESTION DEFINITIONS section below for details on this field.
Adds a left or right side frame to the main screen where the WIDGETS are always shown for all primaries. Basically, these should be "axillary" widgets that augment the widgets is the main set of questions. They can be used for just about anything, but the look and feel will likely be better if they're suplimental.
The WIDGETS are normal question widgets, just as can appear in the questions section of the primaries definition as described below.
In addition, however, there can be subgroupings with a title as well. These are then in a sub-array and are displayed with a title above them. EG:
leftside => [ { type => 'button', # ... normal button widget definition; see below }, [ "Special Grouped-together Buttons", { type => 'button', # ... }, { type => 'button', # ... }, ], ],
The above grouped set of buttons will appear slightly differently and grouped together under the title "Special Grouped-together Buttons".
The widget-test-screen.pl in the examples directory shows examples of using this.
Important note: Not all backends support this yet. HTML and Gtk2 do, though.
The primaries argument of the new() function defines the list of questions that may be posed to a user. Each primary in the hash will contain a list of questions, answers, etc., and are grouped together by a name (the key in the hash). Thus, a typical primary set definition would look something like:
%my_primaries = ( # The name of the primary. 'question_set_1' => # its definition { title => 'My question set', questions => # questions are defined in an array of hashes. [{type => 'checkbox', text => 'Is this fun?', name => is_fun, default => 1, values => [1, 0] }, {type => 'text', text => 'Enter your name:', name => 'their_name'}], post_answers => # post_answers is a list of things to do immediately after # this set of questions has been asked. [ sub { print "my question set answered" } ], actions => # actions is a list of actions run when all is said and done. [ sub { return "msg: %s thinks this %s fun.\n", qwparam('their_name'), (qwparam('is_fun')) ? "is" : "isn't" }], actions_descr => # An array of strings displayed to the user before they agree # to commit to their answers. [ 'I\'m going to process stuff from @their_name@)' ] });
See the QWizard::API module for an alternative, less verbose, form of API for creating primaries which can produce more compact-looking code.
In the documentation to follow, any time the keyword VALUE appears, the following types of "values" can be used in its place:
- "a string" - 10 - \&sub_to_call - sub { return "a calculated string or value" } - [\&sub_to_call, arguments, to, sub, ...]
Subroutines are called and expected to return a single value or an array reference of multiple values.
Much of the time the VALUE keyword appears in array brackets: []. Thus you may often specify multiple values in various ways. E.g., a values clause in a question may be given in this manner:
sub my_examp1 { return 3; } sub my_examp2 { return [$_[0]..$_[1]]; } values => [1, 2, \&my_examp1, [\&my_examp2, 4, 10]],
After everything is evaluated, the end result of this (complex) example will be an array passed of digits from 1 to 10 passed to the values clause.
In any function at any point in time during processing, the qwparam() function can be called to return the results of a particular question as it was answered by the user. I.e., if a question named their_name was answered with "John Doe" at any point in the past series of wizard screens, then qwparam('their_name') would return "John Doe". As most VALUE functions will be designed to process previous user input, understanding this is the key to using the QWizard Perl module. More information and examples follow in the sections below.
These are the tokens that can be specified in a primary:
The title name for the set of questions. This will be displayed at the top of the screen.
Introductory text to be printed above the list of questions for a given primary. This is useful as a starting piece of text to help the user with this particular wizard screen. Display of the introductory text is controlled by the Net-Policy pref_intro user preference. The default is to display introductory text, but this setting can be turned off and on by the user.
This is a list of questions to pose to the user for this screen.
The Question Definitions section describes valid question formatting.
This is a list of actions to run after the questions on the screen have been answered. Although this is a VALUES clause, as described above, these should normally be subroutines and not hard-coded values. The first argument to the VALUE functions will be a reference to the wizard. This is particularly useful to conditionally add future screens/primaries that need to be shown to the user. This can be done by using the following add_todos() function call in the action section:
if (some_condition()) { $_[0]->add_todos('primary1', ...); }
See the QWizard Object Functions section for more information on the add_todos() function, but the above will add the 'primary1' screen to the list of screens to display for the user before the wizard is finished.
A post_answers subroutine MUST return the word "OK" for it to be successful. Returning anything else will print the result, as if it is an error message, to the user.
For HTML output, these will be run just before the next screen is printed after the user has submitted the answers back to the web server. For window-based output (Gtk2, Tk, etc.) the results are similar and these subroutines are evaluated before the next window is drawn.
The action functions will be run after the entire wizard series of questions has been displayed and answered and after the user has hit the "commit" button. It is assumed that the actions of the earlier screens are dependent on the actions of the later screens and so the action functions will be executed in reverse order from the way the screens were displayed. See the add_todos() function description in the QWizard Object Functions section for more information on to change the order of execution away from the default.
The collected values returned from the VALUES evaluation will be displayed to the user. Any message beginning with a 'msg:' prefix will be displayed as a normal output line. Any value not prefixed with 'msg:' will be displayed as an error (typically displayed in bold and red by most generators.)
Just before the actions are run, a change-summary screen is shown to the user. A "commit" button will also be given on this screen. VALUE strings, function results, etc., will be displayed as a list on this commit screen. Strings may have embedded special @TAG@ keywords which will be replaced by the value for the question with a name of TAG. These strings should indicate to the user what the commit button will do for any actions to be run by this set of questions. If any question was defined whose name was no_confirm and whose value was 1, this screen will be skipped and the actions will be run directly.
This hash value adds the specified sub-modules to the list of screens to display after this one. This is equivalent to having a post_answers clause that includes the function:
sub { $_[0]->add_todos('subname1', ...); }
Allows primaries to be optional and only displayed under certain conditions.
If specified, it should be a CODE reference which when executed should return a 1 if the primary is to be displayed or a 0 if not. The primary will be entirely skipped if the CODE reference returns a 0.
A place to add extra widgets to a primary at the very top.
See the bar documentation in the QUESTION DEFINITIONS section below for details on this field.
Adds a left or right side frame to the main screen where the WIDGETS are shown for this primary.
Important note: See the leftside/rightside documentation for QWizard for more details and support important notes there.
This hash value lets a subroutine completely take control of processing beyond this point. The wizard methodology functionally stops here and control for anything in the future is entirely passed to this subroutine. This should be rarely (if ever) used and it is really a way of breaking out of the wizard completely.
Questions are implemented as a collection of hash references. A question generally has the following format:
{ type => QUESTION_TYPE text => QUESTION_TEXT, name => NAME_FOR_ANSWER, default => VALUE, # for menus, checkboxes, multichecks, ... : values => [ VALUE1, VALUE2, ... ], # i.e., [VALUES] # for menus, checkboxes, multichecks, ... : labels => { value1 => label1, value2 => label2 } # i.e., [VALUES] }
Other than this sort of hash reference, the only other type of question allowed in the question array is a single "" empty string. The empty string acts as a vertical spatial separator, indicating that a space should occur between the previous question and the next question.
The fields available to question types are given below. Unless otherwise stated, the fields are available to all question types.
Names the answer to the question. This name can then be used later in other sections (action, post_answers, etc.) to retrieve the value of the answer using the qwparam() function. For example, qwparam('NAME') at any point in future executed code should return the value provided by the user for the question named 'NAME'.
The namespace for these names is shared among all primaries (except 'remapped' primaries, which are described later). A warning will be issued if different questions from two different primaries use the same name. This warning will not be given if the question contains an override flag set to 1.
Text displayed for the user for the given question. The text will generally be on the left of the screen, and the widget the user is supposed to interact with will be to the question text's right. (This is subject to the implementation of the back-end question Generator. The standard QWizard generators use this layout scheme.)
Defines the type of question. TYPE can be one of:
Displays information on the screen without requesting any input. The text of the question is printed on the left followed by the values portion on the right. If the values portion is omitted, the text portion is printed across the entire width of the screen.
Paragraphs are similar to labels but are designed for spaces where text needs to be wrapped and is likely to be quite long.
Text input. Displays an entry box where a standard single line of text can be entered.
Text input, but in a large box allowing for multi-line entries.
Obscured text input. Displays a text entry box, but with the typed text echoed as asterisks. This is suitable for prompting users for entering passwords, as it is not shown on the screen.
A checkbox. The values clause should have only 2 values in it: one for its "on" value, and one for its "off" value (which defaults to 1 and 0, respectively).
If a backend supports key accelerators (GTk2): Checkbox labels can be bound to Alt-key accelerators. See QUESTION KEY-ACCELERATORS below for more information.
Multiple checkboxes, one for each label/value pair. The name question field is a prefix, and all values and/or label keywords will be the second half of the name.
For example, the following clauses:
{ type => 'multi_checkbox', name => 'something', values => ['end1','end2'], ... }
will give parameters of 'somethingend1' and 'somethingend2'.
If a backend supports key accelerators (GTk2): Checkbox labels can be bound to Alt-key accelerators. See QUESTION KEY-ACCELERATORS below for more information.
Radio buttons, only one of which can be selected at a time. If two questions have the same name and are of type 'radio', they will be "linked" together such that clicking on a radio button for one question will affect the other.
If a backend supports key accelerators (GTk2): Radio button labels can be bound to Alt-key accelerators. See QUESTION KEY-ACCELERATORS below for more information.
Pull-down menu, where each label is displayed as a menu item. If just the values clause (see below) is used, the labels on the screen will match the values. If the default clause is set, then that menu entry will be the menu's initial selection. If the labels clause is used, the values shown to the user will be converted to the screen representations that will differ from the qwparam() values available later. This is useful for displaying human representations of programmatic values. E.g.:
{ type => 'menu', name => 'mymenu', labels => [ 1 => 'my label1', 2 => 'my label2'] }
In this example, the user will see a menu containing 2 entries "my label1" and "my label2", but qwparam() will return 1 or 2 for qwparam('mymenu').
Table to display. The values section should return a reference to an array, where each element of the array is a row containing the columns to display for that row. The top-most table must actually be returned in an array itself. (This is due to an oddity of internal QWizard processing). E.g.:
{ type => 'table', text => 'The table:', values => sub { my $table = [['row1:col1', 'row1:col2'], ['row2:col1', 'row2:col2']]; return [$table]; } }
This would be displayed graphically on the screen in this manner:
row1:col1 row1:col2 row2:col1 row2:col2
Additionally, a column value within the table may itself be a sub-table (another double-array reference set) or a hash reference, which will be a sub-widget to display any of the other types listed in this section.
Finally, a headers clause may be added to the question definition which will add column headers to the table. E.g.:
headers => [['col1 header','col2 header']]
A dialog box for a user to upload a file into the application. When a user submits a file the question name can be used later to retrieve a read file handle on the file using the function qw_upload_file('NAME'). qwparam('NAME') will return the name of the file submitted, but because of the variability in how web-browsers submit file names along with the data, this field should generally not be used. Instead, get access to the data through the qw_upload_fh() function instead.
A button that allows the user to download something generated by the application. The data that will be stored in this file should be defined in the 'data' field or the 'datafn' field. The name displayed within the button will be taken from the default question parameter.
The data field is processed early during display of the screen, so generation of large sets of data that won't always be downloaded or will take a lot of memor shouldn't use the data field. The data field is processed like any other value field where raw data or a coderef can be passed that will be called to return the data.
The datafn field should contain a CODE reference that will be called with five arguments:
Example usage:
{ type => 'filedownload', text => 'Download a file:', datafn => sub { my $fh = shift; print $fh "hello world: val=" . qwparam('someparam') . "\n"; } },
Currently only Gtk2 supports this button, but others will in the future.
Image file. The image name is specified by the image hash keyword. Several optional hash keys are recognized to control display of the image. imagealt specifies a string to display if the image file is not found. height specifies the height of the image. width specifies the width of the image. (height and width are currently only implemented for HTML.)
Graph of passed data. This is only available if the GD::Graph module is installed. Data is passed in from the values clause and is expected to be an array of arrays of data, where the first row is the x-axis data, and the rest are y values (one line will be drawn for each y value).
Additionally, the GD::Graph options can be specified with a graph_options tag to the question, allowing creation of such things as axis labels and legends.
Hierarchical tree. Displays a selectable hierarchical tree set from which the user should pick a single item. Two references to subroutines must be passed in via the parent and children question tags. Also, a root tag should specify the starting point.
Widget Options:
The parent function will be called with a wizard reference and a node name. It is expected to return the name of the node's parent.
The function should return undef when no parent exists above the current node.
The children function will be passed a wizard reference and a node name. It is expected to return an array reference to all the children names. It may also return a hash reference for some names instead, which will contain an internal name tag along with a label tag for displaying something to the user which is different than is internally passed around as the resulting selected value.
An example return array structure could look like:
[ 'simple string 1', 'simple string 2', { name => 'myanswer:A', label => 'Answer #A' }, { name => 'myanswer:B', label => 'Answer #B' }, ]
The function should return undef when no children exist below the current node.
The expand_all tag may be passed which will indicate that all initial option trees sholud be expanded up to the number indicated by the expand_all tag.
Button widget. When the button is clicked, the QWizard parameter name (available by calling qwparam('name')) will be assigned the value indicated by the default clause. The parameter value will not be set if the button is not clicked. The button's label text will be set to the value of the values clause.
The button widget will be equivalent to pressing the next button. The next primary will be shown after the user presses the button.
If a backend supports key accelerators (GTk2): Button labels can be bound to Alt-key accelerators. See QUESTION KEY-ACCELERATORS below for more information.
A bar widget is functionally a separator in the middle of the list of questions. It is useful for breaking a set of questions in two as well as providing button-containers or menu containers within widget sets and not having them tied to the normal QWizard left/right feel. One intentional artifact of this is they can be used to provide a visual difference between the flow of the questions. EG, if the QWizard primary showed a screen which had two questions in it, it would look something like the following when displayed by most of the generators that exist today:
+-------------------+-----------------+ | Question 1 | Answer Widget 1 | | Longer Question 2 | Answer Widget 2 | +-------------------+-----------------+
Adding a bar in the middle of these questions, however, would break the forced columns above into separate pieces:
+------------+------------------------+ | Question 1 | Answer Widget 1 | +------------+------------------------+ | BAR | +-------------------+-----------------+ | Longer Question 2 | Answer Widget 2 | +-------------------+-----------------+
Finally, there is an implicit top bar in every primary and the QWizard object as a whole. You can push objects onto this bar by adding objects to the $qwizard->{'topbar'} array or by adding objects to a primary's 'topbar' tag. E.G.
my $qw = new QWizard(primaries => \%primaries, topbar => [ { type => 'menu', name => 'menuname', values => [qw(1 2 3 4)], # ... }]);
The widgets shown in the topbar will be a merge of those from the QWizard object and the primary currently being displayed.
TODO: make it work better with merged primaries
TODO: make a bottom bar containing the next/prev/cancel buttons
This clause is used to set internal parameters (name => value), but these values are not shown to the user.
Note: This is not a secure way to hide information from the user. The data set using hidden are contained, for example, in the HTML text sent to the user.
A dynamic question is one where the values field is evaluated and is expected to return an array of question definitions which are in turn each evaluated as a question. It is useful primarily when doing things like creating a user-defined number of input fields, or interacting with an external data source where the number of questions and their nature is directly related to the external data source.
Raw data. The values portion is displayed straight to the screen. Use of this is strongly discouraged. Obviously, the values portion should be a subroutine that understands how to interact with the generator.
Really, don't use this. It's for emergencies only. It only works with HTML output.
An array of values that may be assigned to question types that need choices (eg: menu, checkbox, multi_checkbox.) It should be a reference to an array containing a list of strings, functions to execute, and possibly sub-arrays containing a function and arguments, as described by the VALUE conventions section above. Any function listed in a values clause should return a list of strings.
The values clause is not needed if the labels clause is present.
Assigns labels to the question's values. Labels are displayed to the user instead of the raw values. This is useful for converting human-readable text strings into real-world values for use in code.
If the values clause is not specified and the labels clause is, the values to display are extracted from this labels clause directly. If a value from the values clause does not have a corresponding label, the raw value is presented and used instead. Generally, only the labels clause should be used with radio buttons, menus, or check boxes; but either or both in combination work.
The labels clause subscribes to all the properties of the VALUES convention previously discussed. Thus, it may be a function, an array of functions, or any other type of data that a VALUE may be. The final results should be an array, especially if the values clause is not present, as the order displayed to the user can be specified. It can also be a hash as well but the displayed order is subject to Perl keys() conventions and thus an array is preferred when no values clause has been defined.
The default value to use for this question. It may be a subroutine reference which will be called to calculate and return the value.
A script to check the answer submitted by the user for legality. The script should return 'OK' to indicate no error found. In the event an error is detected, it should return an error string. The string will be shown to the user as an error message that the user must fix before being allowed to proceed further in the wizard screens. Alternately, the script may return 'REDISPLAY' to indicate no error but that screen should be redisplyed (perhaps with new values set with qwparam() from within the script). In the case of error or 'REDISPLAY', the current primary screen will be repeated until the function returns 'OK'.
The arguments passed to the function are the reference to the wizard, a reference to the question definition (the hash), and a reference to the primary containing the question (also a hash.) The function should use the qwparam() function to obtain the value to check. An array can be passed in which the first argument should be the subroutine reference, and the remaining arguments will be passed back to the subroutine after the already mentioned default arguments.
There are a set of standard functions that can be used for checking values. These are:
Ensures that a value is supplied or else a "This is a required field" error message is returned. The function only checks that the value is non-zero in length.
Ensures that the value is an integer value (required or not, respectively.)
Ensures that the value is a hex string (required or not, respectively.)
Ensures that a value is supplied and is a hex string sufficiently long for length bytes. This means that the hex string must be "length * 2" ASCII characters (two hex characters per byte.)
Ensures that the value specified falls within one of the lowX - highX ranges. The value must be between (low1 and high1) or (low2 and high2).
qw_check_length_ranges is similar to qw_check_int_ranges(), but it checks that the length of the data string specified by the user falls within the given ranges.
Allows questions to be optional and only displayed under certain conditions.
If specified, it should be a CODE reference which when executed should return a 1 if the question is to be displayed or a 0 if not. The question will be entirely skipped if the CODE reference returns a 0.
If specified, these define the help text for a question. helpdescr should be short descriptions printed on screen when the wizard screen is displayed, and helptext should be a full length description of help that will be displayed only when the user clicks on the help button. helpdescr is optional, and a button will be shown linking to helptext regardless.
When this is specified as a question argument, if the user changes the value then it will also be the equivelent of pressing the 'Next' button at the same time. With the HTML generator, this requires javascript and thus you shouldn't absolutely depend on it working.
If the contents of a screen are generated based on data extracted from dynamically changing sources (e.g., a database), then setting this parameter to 1 will make the current question force a refresh if the value changes (ie, when they pull down a menu and change the value or click on a checkbox or ...) and the screen be redrawn (possibly changing its contents).
As an example, Net-Policy uses this functionality to allow users to redisplay generated data tables and changes the column that is used for sorting depending on a menu widget.
The handle_results tag can specify a CODE reference to be run when the questions are answered so each question can perform its own processing. This is sort of like a per-question post_answers hook equivalent.
Some generators (currently only Gtk2 actually) support key accelerators so that you can bind alt-keys to widgets. This is done by including a '_' (underscore) character where appropriate to create the binding. EG:
{ type => 'radio', text => 'select one:', values => ['_Option1','O_ption2', 'Option3'] }
When Gtk2 gets the above construct it will make Alt-o be equivelent to pressing the first option and Alt-p to the second. It will also display the widget with a underline under the character that is bound to the widget. HTML and other non-accelerator supported interfaces will strip out the _ character before displaying the string in a widget.
In addition, unless a no_auto_accelerators = 1> option is passed to the generator creation arguments, widgets will automatically get accelerators assigned to them. In the above case the 't' in Option3 would automatically get assigned the Alt-t accelerator (the 't' is selected because it hasn't been used yet, unlike the o and p characters). You can also prefix something with a ! character to force a single widget to not receive an auto-accelerator (EG: "!Option4" wouldn't get one).
A few QWizard parameters are special and help control how QWizard behaves. Most of these should be set in the primaries question sets using a hidden question type.
If set to 1, the actions phase will not be run.
If set to 1, the screen which prompts the user to decide if they really want to commit their series of answers won't be shown. Instead, QWizard will jump straight to the actions execution (if appropriate.) This can also be given as a parameter to the QWizard new() function to make it always true.
If the contents of a screen are generated based on data extracted from dynamically changing sources (e.g., a database), then setting this parameter to 1 will add a "Refresh" button beside the "Next" button so that the user can request the screen be redrawn (possibly changing its contents).
As an example, Net-Policy uses this functionality to allow users to redisplay generated graphs and maps that will change dynamically as network data are collected.
This token can also be set directly in a primary definition to affect just that primary screen.
The button text to display for the "Next" button. This defaults to "_Next" but can be overridden using this parameter.
The button text to display for the "Commit" button. This defaults to "_Commit" but can be overridden using this parameter. The commit button is shown after the questions have been asked and the actions_descr's are being shown to ask the user if they really want to run the actions.
The button text to display for the "Finish" button. This defaults to "_Finish" but can be overridden using this parameter. The finish button is shown after the actions have been run and the results are being displayed.
The following parameters are used internally by QWizard. They should not be modified.
The following functions are defined in the QWizard class and can be called as needed.
This tells QWizard to start its magic, beginning at the primary named primary_name. Multiple primaries will be displayed one after the other until the list of primaries to display is empty. The actions clauses of all these primaries will not be run, however, until after all the primaries have been processed.
The magic() routine exits only after all the primaries have been run up through their actions, or unless one of the following conditions occurs:
- $qw->{'one_pass'} == 1 - $qw->{'generator'}{'one_pass'} == 1
By default, some of the stateless generators (HTML) will set their one_pass option automatically since it is expected that the client will exit the magic() loop and return later with the next set of data to process. The magic() routine will automatically restart where it left off if the last set of primaries being displayed was never finished. This is common for stateless generators like HTTP and HTML.
Closes the open qwizard window. Useful after your magic() routine has ended and you don't intend to call it again. Calling finished() will remove the QWizard window from visibility.
If available, some generators (Gtk2 can) may be able to display a progress meter. If they are, calling this function inside action clauses, for example, will start the display of this meter with FRACTION complete (0 <= FRACTION <= 1). The TEXT is optional and if left blank will be set to NN% showing the percentage complete.
Adds a primary to the list of screens to display to the user. This function should be called during the post_answers section of a primary. Options that can be passed before the first primary name are:
Adds the primaries in question as early as possible in the todo list (next, unless trumped by future calls.) This is the default.
Adds the primary to the end of the list of primaries to call, such that it is called last, unless another call to add_todos() appends something even later.
All the actions of subsequent primaries that have been added as the result of a current primary's post_answers clauses are called before the actions for the current primary. This means that the actions of any childrens are executed prior to the actions of their parents. This is done by default, as the general usage prediction is that parent primaries are likely to be dependent on the actions of their children in order for their own actions to be successful.
However, this flag indicates that the actions of the childrens' primaries listed in this call are to be called before the current primary's actions.
Merges all the specified primaries listed into a single screen. This has the effect of having multiple primaries displayed in one window.
Important note: you can not use both -remap (see below) and -merge at the same time! This will break the remapping support and you will not get expected results!
If a series of questions must be called repeatedly, you can use this flag to remap the names of the child primary questions to begin with this prefix. The children's clauses (questions, actions, post_answers, etc.) will be called in such a way that they can be oblivious to the fact this is being done behind their backs, allowing qwparam() to work as expected. However, for the current primary (and any parents), the 'NAME' prefix will be added to the front of any question name values that the child results in defining.
This is rather complex and is better illustrated through an example. There is an example that illustrates this in the QWizard Perl module source code examples directory, in the file number_adding.pl. This code repeatedly asks for numbers from the user using the same primary.
Important note: you can not use both -remap and -merge at the same time! This will break the remapping support and you will not get expected results!
Adds a primary definition into the existing primary data set for the QWizard object. One key value pair MUST be a 'name' => 'NAME' pair, where NAME will be the installed primary name for later referral (e.g., in add_todos() calls.) If a name collision takes place (a primary already exists under the given name), the original is kept and the new is not installed.
Merges a new set of primaries into the existing set. If a name collision takes place, the original is kept and the new is not installed.
Returns a primary definition given its NAME.
Retrieves a value specified by NAME that was submitted by a user from a QWizard widget. If a VALUE is specified as a second argument, it replaces the previous value with the new for future calls.
QWizard parameters are accessible until the last screen in which all the actions are run and the results are displayed. Parameters are not retained across primary execution.
The qwparam() function is exported by the QWizard module by default, so the function shouldn't need to be called directly from the QWizard object. Thus, just calling qwparam('NAME') by itself will work.
qwpref() acts almost identically to qwparam(), except that it is expected to be used for "preferences" -- hence the name. The preferences are stored persistently across primary screens, unlike parameters. Preferences are not erased between multiple passes through the QWizard screens. (In the HTML generator, they are implemented using cookies).
TBD: Document the $qw->add_hook and $qw->run_hooks methods that exist.
(basically $qw->add_hook('start_magic', \&coderef) will run coderef at the start of the magic function. Search the QWizard code for run_hooks for a list of hook spots available.
The variable $QWizard::qwdebug controls debugging output from QWizard. If set to 1, it dumps processing information to STDERR. This can be very useful when debugging QWizard scripts as it displays the step-by-step process about how QWizard is processing information.
Additionally, a qwdebug_set_output() function exists which can control the debugging output destination. Its argument should be a reference to a variable where the debugging output will be stored. Thus, debugging information can be stored to a previously opened error log file by doing the following:
our $dvar; $QWizard::qwdebug = 1; $qw->qwdebug_set_output(\$dvar); $qw->magic('stuff'); print LOGFILE $dvar;
There are a few usage examples in the examples directory of the source package. These examples can be run from the command line or installed as a CGI script without modification. They will run as a CGI script if run from a web server, or will launch a Gtk2 or Tk window if run from the command line.
qwparam(), qwpref()
qw_required_field(), qw_integer(), qw_optional_integer(), qw_check_int_ranges(), qw_check_length_ranges(), qw_hex(), qw_optional_hex(), qw_check_hex_and_length()
Wes Hardaker, hardaker@users.sourceforge.net
For extra information, consult the following manual pages:
Manual page for more information about the Various QWizard display generators and the questions and arguments that each one supports. This page is actually generated from what the generator actually advertises that it supports.
If you get tired of typing anonymous hash references, this API set will let you generate some widgets with less typing by using APIs instead.
Example API call:
perl -MQWizard::API -MData::Dumper -e 'print Dumper(qw_checkbox("my ?","it",'A','B', default => 'B'));' $VAR1 = { 'text' => 'it', 'name' => 'my ?', 'default' => 'B', 'values' => [ 'A', 'B' ], 'type' => 'checkbox' };
The entire QWizard system was created to support a multiple-access-point network management system called "Net-Policy" and the SVN repository for Net-Policy actually contains the QWizard development tree. | http://search.cpan.org/~hardaker/QWizard-3.04/QWizard.pm | crawl-002 | en | refinedweb |
Vol. 11, Issue 6, 2085-2102, June 2000
*Department of Biology, Graduate School of Science, Kyushu
University, Fukuoka 812-8581, Japan;
CREST, Japan Science
and Technology Corporation, Tokyo 170-0013, Japan
Rat cDNA encoding a 372-amino-acid peroxin was isolated, primarily by functional complementation screening, using a peroxisome-deficient Chinese hamster ovary cell mutant, ZPG208, of complementation group 17. The deduced primary sequence showed ~25% amino acid identity with the yeast Pex3p, thereby we termed this cDNA rat PEX3 (RnPEX3). Human and Chinese hamster Pex3p showed 96 and 94% identity to rat Pex3p and had 373 amino acids. Pex3p was characterized as an integral membrane protein of peroxisomes, exposing its N- and C-terminal parts to the cytosol. A homozygous, inactivating missense mutation, G to A at position413, in a codon (GGA) for Gly138 and resulting in a codon (GAA) for Glu was the genetic cause of peroxisome deficiency of complementation group 17 ZPG208. The peroxisome-restoring activity apparently required the full length of Pex3p, whereas its N-terminal part from residues 1 to 40 was sufficient to target a fusion protein to peroxisomes. We also demonstrated that Pex3p binds the farnesylated peroxisomal membrane protein Pex19p. Moreover, upon expression of PEX3 in ZPG208, peroxisomal membrane vesicles were assembled before the import of soluble proteins such as PTS2-tagged green fluorescent protein. Thus, Pex3p assembles membrane vesicles before the matrix proteins are translocated.
Membrane biogenesis and its regulation are one of the major foci
in modern molecular cell biology (Schatz and Dobberstein, 1996
). The
peroxisome has been widely used as a model intracellular organelle
suitable for studies using mammals and yeast (Erdmann et
al., 1997
; Fujiki, 1997
; Subramani, 1997
; Waterham and Cregg, 1997
). Some human fatal genetic disorders such as Zellweger syndrome are linked to peroxisomal malfunction and failure of peroxisome biogenesis (Lazarow and Moser, 1995
; Fujiki, 1997
). Genetic
heterogeneity has been found in subjects with these peroxisome
biogenesis disorders (PBDs), comprising 13 different complementation
groups (CGs) (Shimozawa et al., 1992
; Moser et
al., 1995
; Poulos et al., 1995
; Kinoshita et
al., 1998
; Shimozawa et al., 1998a
,b
). Understanding of
peroxisome biogenesis has significantly progressed, mainly based on
findings of topogenic signals and peroxins required for peroxisomal
protein import (Erdmann et al., 1997
; Fujiki, 1997
;
Subramani, 1997
; Waterham and Cregg, 1997
). Generally accepted models
include peroxisomal soluble as well as membrane proteins being encoded
by nuclear genes, translated on free polyribosomes in the cytosol, most
of which, if not all, are posttranslationally translocated to
preexisting peroxisomes (Lazarow and Fujiki, 1985
). Recent evidence
suggests the involvement of endoplasmic reticulum (ER) in peroxisomal
membrane biogenesis in yeast (Titorenko and Rachubinski, 1998
).
We identified 16 CGs in mammals by CG analysis between Chinese hamster
ovary (CHO) cell mutants and fibroblasts from PBD patients (Fujiki,
1997
; Kinoshita et al., 1998
; Ghaedi et al.,
1999a
). Therefore, mammalian peroxisome biogenesis probably requires at least 16 genes or their products. We isolated a novel CG of CHO cell
mutants, ZPG208 and ZPG209 (Ghaedi et al., 1999b
). These mutants are apparently defective in peroxisome membrane assembly, as
are the ZP119 cells (Kinoshita et al., 1998
) and the cells of patients (Shimozawa et al., 1998a
) of CG-G, CG-J, and
CG-D (Honsho et al., 1998
). More recently, we isolated
peroxin cDNA, PEX19, by functional phenotype complementation
assay, using ZP119 (Matsuzono et al., 1999
). Thus,
peroxisome assembly-defective CHO cell mutants are useful to
investigate molecular and cellular mechanisms involved in peroxisome
biogenesis and for elucidation of primary defects of PBD (Fujiki, 1997
;
Okumoto and Fujiki, 1997
; Okumoto et al., 1998b
; Otera
et al., 1998
; Tamura et al., 1998
). Using another
approach, i.e., expressed sequence tagging search on a human DNA
database using yeast peroxin genes, we identified the human orthologue
of Yarrowia lipolytica PEX16 (Honsho et al., 1998
). PEX16 (Honsho et al., 1998
; South and
Gould, 1999
) and PEX19 (Matsuzono et al., 1999
)
are responsible for PBDs of CG-D and CG-J, respectively. While the
present work was in progress, human PEX3 was cloned using a
homology search and yeast PEX3 (Kammerer et al.,
1998
; Soukupova et al., 1999
). It interacts with Pex19p (Soukupova et al., 1999
). A potential region responsible for
targeting of human (Kammerer et al., 1998
; Soukupova
et al., 1999
) and yeast (Baerends et al., 1996
;
Wiemer et al., 1996
) Pex3p has also been reported. However,
a PEX3-defective phenotype has not been described for
mammalian cells. Moreover, contrary to extensive investigation of the
import of soluble proteins, molecular mechanisms involved in assembly
of peroxisomal membrane vesicles are not well understood.
We isolated PEX3 cDNAs from rat, human, and Chinese hamster, primarily by genetic phenotype complementation screening using ZPG208. We also identified an inactivating, missense transition mutation in the PEX3 gene of ZPG208 cells. We found that Pex3p is involved at the initial stage in peroxisome membrane assembly, before the import of matrix protein. Topogenic and functional analyses of Pex3p are also discussed.
Rat Liver cDNA Library and Search for Complementing cDNA
The rat (Rn) liver cDNA library, containing
unidirectionally inserted cDNA under the cytomegalovirus (CMV) promoter
in a ZAP Express predigested vector (Stratagene, La Jolla, CA), has
been described previously (Okumoto and Fujiki, 1997
; Okumoto et
al., 1998b
). We screened the cDNA library by functional
complementation assay, using a CHO cell mutant, ZPG208, as described
(Okumoto et al., 1998b
; Tamura et al., 1998
).
Among the cDNA pools examined, a positive one (C8), containing 6000 clones that restored peroxisomes in ZPG208, was further divided into
subpools and screened. While complementing cDNA cloning was in
progress, using C8 subpools consisting of 300 clones per pool, we
identified a human expressed sequence tag clone, AA305508, that
showed good homology (47% identity) to PEX3 of
Saccharomyces cerevisiae (Hoehfeld et al., 1991
). We then asked whether the cDNA pool C8 might contain a plasmid
corresponding to human PEX3. We used PCR primers, sense (5'-AAGATGCTGAGGTCTGTATG-3'; potential initiation codon,
underlined) and antisense (5'-GGCTCTCGGAATTCAGTTGC-3'), containing
nucleotide residues at positions
3 to 17 and 246-265, respectively,
of the expressed sequence tag clone AA305508. A PCR product of the expected size was obtained. Thereby, full-length RnPEX3 cDNA
cloning was facilitated by colony hybridization of the C8, using as a probe the 268-bp human (Hs) PEX3 cDNA PCR
product. Three clones, pBK-CMV-1-3, were isolated and separately
transfected into ZPG208 cells. Numerous green fluorescent protein
(GFP)-positive punctates, presumably peroxisomes, were observed in
respective transfectants (our unpublished results), thus indicating
that PEX3 is a potential complementing gene. An
EcoRI-XhoI cDNA fragment of each of the three cDNA clones was directly sequenced. The nucleotide sequence of
both strands was determined by the dideoxy-chain termination method,
using various oligo-DNA primers, RnPEX3 Internal F and RnPEX3 Full R1 (Table 1), and
a Dye-terminator DNA sequence kit (Applied Biosystems, Foster City,
CA). Alignment was done using a GENETYX-Mac program (Software
Development, Tokyo, Japan). pBK-CMV-1-3 contained the same open
reading frame (ORF); pBK-CMV-2 was named pBK-CMV-RnPEX3 (see
RESULTS).
Screening of Human and Chinese Hamster cDNA Libraries
Full-length HsPEX3 cDNA was isolated by colony
hybridization on a human liver cDNA library (Tamura et al.,
1998
) in pCMVSPORT (Life Technologies, Rockville, MD) with the 268-bp
HsPEX3 cDNA (see above) as probe. One positive clone was
isolated from a subpool, F7-16, and its nucleotide sequence, named
pCMVSPORT·HsPEX3, had 1428 bp and contained a 1119-bp ORF
encoding a 373-amino-acid polypeptide. Approximately 3.3 × 105 independent colonies of a cDNA library from
wild-type CHO-K1 cells in pSPORT I (Otera et al., 1998
) were
screened using the 32P-labeled RnPEX3
(0.35-kb PCR product and a pair of primers, RnPEX3 F1 and
RnPEX3 Internal R2), by hybridization and washing at 37 and
55°C, respectively. Three positive clones were isolated; a shorter
one was subcloned into pBluescript II SK(
) (Stratagene) at the
SalI-NotI site and sequenced using
oligonucleotide primers ClPEX3 Internal F1 and
ClPEX3 Internal FR1. The
SalI-NotI fragment of Chinese hamster
(Cl) PEX3 was subcloned into the pCMVSPORT I vector.
Transfection of PEX3
The BamHI-XhoI fragment of
RnPEX3 in pBK-CMV vector (Okumoto et al., 1998b
)
was ligated into the BamHI-XhoI sites of an
expression vector, pcDNA3.1/Zeo(+), containing the Zeocin gene
(Invitrogen, Carlsbad, CA). ZPG208 cells were transfected with
pcDNA3.1/Zeo·RnPEX3 by lipofection (Tamura et
al., 1998
; Shimizu et al., 1999
). Stable transformants
were selected in the presence of 250 µg/ml Zeocin (Invitrogen) and
were examined for peroxisomes using import of peroxisome targeting
signal type 2-tagged GFP (PTS2-GFP) (Ghaedi et al., 1999b
).
One of the transformants highly expressing GFP in peroxisomes, cloned
by the limiting dilution method, was termed 208P3. Eleven other groups
of CHO cell mutants as well as fibroblasts derived from
peroxisome-deficient patients were similarly transfected with
pcDNA3.1/Zeo·RnPEX3. Transfection HsPEX3 and
ClPEX3 was likewise done.
Morphological Analysis
PTS2-GFP in TKaG2-derived cells such as ZPG208 that had been
grown on a cover glass was observed, without cell fixation (Ghaedi et al., 1999b
), under a Carl Zeiss (Thornwood, NY) Axioskop
FL microscope using a number 17 filter. Peroxisomes in CHO cells and
human fibroblasts were assessed by indirect immunofluorescence light
microscopy. Antibodies used were rabbit antibodies to rat liver
catalase (Tsukamoto et al., 1990
), human catalase (Shimozawa et al., 1992
), PTS1 peptide (Otera et al., 1998
),
70-kDa peroxisomal integral membrane protein (PMP70) (Tsukamoto
et al., 1990
), and Pex14p (Shimizu et al., 1999
),
as well as goat anti-rat catalase antibody (Okumoto et al.,
1998b
). Anti-Pex3p antibody was raised in rabbits by immunizing with a
synthetic peptide comprising the C terminus, an 18-amino-acid sequence
of human Pex3p (see Figure 2, dashed underline) supplemented with
Gly-Cys at the N terminus that had been linked to keyhole limpet
hemocyanin (Tsukamoto et al., 1991
). Antigen-antibody
complex was detected using Texas Red-labeled sheep anti-rabbit
immunoglobulin G (IgG) antibody (Cappel, Durham, NC) or donkey
anti-goat IgG antibody conjugated to rhodamine (Chemicon,
Pittsburgh, PA).
Mutation Analysis
Total RNA was obtained from ZPG208 and ZPG209 cells, using an RNeasy kit (Qiagen, Hilden, Germany). Reverse transcription (RT)-PCR was performed using 5 µg of total RNA, Superscript reverse transcriptase (Life Technologies), and a pair of ClPEX3-specific PCR primers: ClPEX3 Full F and Full R (Table 1). The RT-PCR product was cloned into the pGEM-T Easy vector (Promega, Madison, WI) and sequenced. ZPG208-derived PEX3 cDNA was inserted into pCMVSPORT and transfected to CHO cells by lipofection.
Expression of Epitope-tagged Pex3p
Tagging of epitopes, flag, and tandem hemagglutinin
(HA-HA, influenza virus hemagglutinin) to the N and C terminus,
respectively, of RnPex3p was done as follows. The full
length of RnPEX3 was amplified using a pair of primers:
RnPEX3 F1 and RnPEX3 R1/NheI. The
PCR product was digested with BamHI and NheI
and then ligated into the BamHI-NheI sites,
upstream of a double-HA tag sequence, of pBluescript II
SK(
)·HsPEX16-HA (Honsho et al., 1998
).
BamHI-ApaI fragment of the pBluescript II
SK(
)·RnPEX3-HA was inserted into the
pUcD2SR
Hyg·flag-RnPEX12 (Okumoto et al.,
1998b
), in place of the RnPEX12 cDNA. All plasmid constructs
were assessed by sequence analysis. Pex3p-HA and flag-Pex3p were
detected using rabbit anti-HA antibody and mouse anti-flag antibody
(M2; Scientific Imaging Systems, New Haven, CT), in cells that had been
fixed with 4% paraformaldehyde and then permeabilized with either 25 µg/ml digitonin or 0.1% Triton X-100 (Motley et al.,
1994
; Okumoto and Fujiki, 1997
; Okumoto et al., 1998b
).
Antigen-antibody complex was detected using fluorescein
isothiocyanate-labeled sheep anti-rabbit IgG antibody (Cappel) or sheep
anti-mouse IgG antibody (Amersham Pharmacia Biotech, Tokyo, Japan) and
Texas Red-labeled goat anti-rabbit IgG antibody (Leinco Technologies,
Ballwin, MO).
Protease Protection Assay
The postnuclear supernatant (PNS) fraction of CHO-K1 cells
transfected with flag-RnPEX3-HA was treated with several
different concentrations of proteinase K, in the presence and absence
of 1% Triton X-100 for 30 min on ice. The reaction was terminated with
1 mM phenylmethylsulfonyl fluoride and then by precipitation using
trichloroacetic acid (Shimizu et al., 1999
). The resulting whole-cell proteins were analyzed by SDS-PAGE. Pex3p, Pex14p, and
acyl-coenzyme A (CoA) oxidase (AOx), a matrix protein, were assessed by
immunoblot with antibodies to flag, HA, Pex14p (Shimizu et al., 1999
), and AOx, respectively.
Subcellular Fractionation
Subcellular fractionation of rat liver and CHO cells was
done as described (Tsukamoto et al., 1991
; Miura et
al., 1992
; Kinoshita et al., 1998
; Shimizu et
al., 1999
). Each fraction was separated by SDS-PAGE and
electrophoretically transferred onto a polyvinylidene difluoride
membrane (Bio-Rad, Hercules, CA). Pex3p and peroxisomal marker
proteins, including 3-ketoacyl-CoA thiolase (Tsukamoto et
al., 1990
) and Pex13p (Toyama et al., 1999
), were
probed with the respective antibodies and then visualized using the ECL
Western blotting detection reagent (Amersham Pharmacia Biotech). For
determination of intraperoxisomal localization, the peroxisomal
fraction was diluted with 20 mM HEPES-KOH, pH 7.6. Membrane and soluble
fractions were separated by centrifugation for 30 min at 100,000 × g. Sodium carbonate treatment (Fujiki et al.,
1982a
) and Triton X-114 extraction (Bodier, 1981
) were done as described.
Construction of Pex3p Fusion with enhanced GFP (EGFP)
cDNAs encoding fusion proteins of the wild-type Pex3p and
various truncated mutants with EGFP were constructed as follows: the
ORF coding for the full-length Pex3p and its truncated mutants comprising amino acid residues 16-372, 1-312, and 1-40 were
generated by PCR strategy, with pPBK-CMV·RnPEX3 as the
template and primer-pairs designed for creating BamHI and
NcoI at 5' and 3' tails, RnPEX3 F1 and
RnPEX3 R1/NcoI, RnPEX3 F1.1 and
RnPEX3 R1/NcoI, RnPEX3 F1 and
RnPEX3 R2/NcoI, and RnPEX3 F1 and
RnPEX3 R40/NcoI, respectively. Each of
BamHI and NcoI fragments of the resulting PCR
products was inserted into the BamHI and NcoI
sites of pEGFP vector upstream of an EGFP gene (Clontech, Palo Alto,
CA). The BamHI-ApaI fragments containing the ORF
encoding Pex3p-EGFP fusion proteins were inserted into the
pUcD2Hyg·flag-RnPEX12 (Okumoto et al., 1998b
)
in place of the RnPEX12 cDNA.
Yeast Two-Hybrid Assay
Maintenance and transformation of yeast cells, using the
Proquest two-hybrid system (Life Technologies), were done according to
the manufacturer's protocol. The ORFs for the full-length rat Pex3p
and its truncated mutants were amplified by PCR with
pBK-CMV·RnPEX3 as the template and primers (see below)
introducing the SalI and NotI sites at 5' and
3' sites, respectively. Primers used were sense primer,
RnPEX3 F1/SalI-hyb, and antisense,
RnPEX3 R1/NotI-hyb, for the full length, amino
acid residues 1-372 (Table 1). PCR for PEX3 variants
encoding amino acid residues 1-312 and 1-40 were done with
RnPEX3 F1/SalI-hyb as a forward primer and
RnPEX3 R2/NotI-hyb and RnPEX3
R40/NotI-hyb as reverse primers, respectively. A
PEX3 mutant for residues 110-372 was amplified with a
forward primer, RnPEX3 F3/SalI-hyb, and a reverse
primer, RnPEX3 R1/NotI-hyb. The ORF for
Pex3p-G138E was amplified using primers RnPEX3
F1/SalI-hyb and RnPEX3 R1/NotI-hyb and
ZPG208-derived PEX3 as the template. The resulting PCR
products were excised with SalI and NotI. The fragments were separately inserted into the
SalI-NotI sites downstream of the GAL4
DNA-binding domain in the pDBLeu plasmid or the GAL4-activating domain
in the pPC86 plasmid. For the constructs encoding fusions with human
Pex19p, PCR amplification was done with primers HsPEX19 F/SalI and HsPEX19 R/NotI using
pUcD2Hyg·HsPEX19 (Matsuzono et al., 1999
) as
the template. The resulting product was inserted into pDBLeu and pPC86
plasmids, as described for PEX3.
Cotransformation of two hybrid vectors into S. cerevisiae
MaV203 (Mat
, leu2-3, 112,
trp1-901, his3
200,
ade2-101, gal4
, gal80
,
SPAL10::URA3,
GAL1::lacZ, HIS3UAS
GAL1::HIS3@LYS2,
can1R, cyh2R)
was done according to instructions of the manufacturer. Individual transformants were screened for their potential to grow on synthetic complete medium lacking tryptophan, leucine, and histidine by expression of their chromosomal HIS3 gene. The transformants
were also assayed for
-galactosidase activity of the lacZ
marker gene.
Coimmunoprecipitation Assay
To verify the findings in vivo, we did a coimmunoprecipitation
assay. Human Pex3p and Pex19p were separately synthesized in a rabbit
reticulocyte cell-free translation system (Miyazawa et al.,
1989
) using in vitro transcripts of HsPEX3 and
HsPEX19 (Matsuzono et al., 1999
), in the presence
and absence of 1.2 mCi/ml [35S]methionine and
[35S]cysteine (Amersham Pharmacia Biotech),
respectively. [35S]Pex3p and Pex19p were
incubated overnight at 4°C and subjected to immunoprecipitation with
anti-Pex19p antibody (Matsuzono et al., 1999
), as described
(Miyazawa et al., 1989
).
Other Methods
In vitro transcription and translation (Miyazawa et
al., 1989
) were done as described. Immunoprecipitation of
[35S]Pex3p and catalase latency assay with
digitonin were done as described (Tsukamoto et al., 1990
).
Protein assay was done using a Bio-Rad protein assay kit. Cell
resistance to the 12-(1'-pyrene)dodecanoic acid/long-wavelength UV
light (P12/UV) and 9-(1'-pyrene)nonanol/UV (P9OH/UV) treatments was
determined under conditions of 2 µM, 1.5 min and 6 µM, 2 min
(Shimozawa et al., 1992
), respectively.
Cloning of a Rat PEX3 cDNA
We used a transient expression assay as a cDNA cloning strategy
(Tsukamoto et al., 1995
; Okumoto and Fujiki, 1997
; Okumoto et al., 1998b
; Tamura et al., 1998
) to search for
a complementing cDNA of a CG17 CHO cell mutant, ZPG208, defective in
peroxisome assembly (Ghaedi et al., 1999b
) (Figure
1a). A rat liver cDNA library divided
into small pools was transfected to ZPG208. Peroxisome-restoring positive cDNA clones were isolated by searching for a punctate fluorescent pattern of PTS2-GFP in cells, presumably restoring peroxisomal import. One combined pool (C8) yielded several
peroxisome-restored cells of ZPG208 in a single dish (Figure 1b). After
a third round of screening, i.e. at the step of 300 clones per pool
screening, final cDNA cloning was done by a colony hybridization
method, using a 268-bp HsPEX3 cDNA (see MATERIALS AND
METHODS) as probe. One positive clone, named pBK-CMV·PEX3,
was isolated, which restored peroxisomal import of PTS2-GFP in ZPG208
(our unpublished data). The cDNA portion of
pBK-CMV·PEX3, sequenced on both strands, indicated that
the cDNA was 1952 bp in length with an ORF encoding a protein consisting of 372 amino acids (Figure 2).
The calculated molecular mass of its deduced amino acid sequence was
42,209 Da. The amino acid sequence showed 25, 32, and 33% identity
with those of Pex3p from S. cerevisiae (Hoehfeld et
al., 1991
), Hansenula polymorpha (Baerends et
al., 1996
), and Pichia pastoris (Wiemer et
al., 1996
), respectively. Thus we termed this cDNA rat
PEX3, RnPEX3. (The GenBank database accession
number for rat PEX3 is AB035306.) RnPEX3
complemented peroxisomal import of PTS2-GFP in ZPG208 (Figure 1c).
Human (Hs) and Chinese hamster (Cl)
PEX3 cDNA were cloned by colony hybridization from human and
Chinese hamster cDNA libraries. Both HsPEX3 and
ClPEX3 encoded 373-amino-acid Pex3p, with 94 and 97%
identity to rat Pex3p at a deduced amino acid sequence level, whereas
rat Pex3p was shorter by one amino acid, at alignment position 222 (Figure 2). (The GenBank database accession numbers for human and
Chinese hamster PEX3 are AB035307 and AB035308, respectively.) Pex3p apparently contained at least two hydrophobic segments, thereby suggesting that Pex3p is a membrane protein (Figure
2, overlines).
PEX3 Restored Peroxisome Biogenesis in ZPG208
Several phenotypic abnormalities attributable to peroxisome
deficiency, such as impaired import of both matrix and membrane proteins, were found in ZPG208 (Ghaedi et al., 1999b
). To
determine whether RnPEX3 could correct these mutant
phenotypes, a stable RnPEX3-transformant of ZPG208, named
208P3, was isolated. PTS1 proteins were noted in numerous vesicular
structures, presumably peroxisomes, when stained with antibodies to
catalase (Figure 1d), PTS1 (Figure 1e), and 3-ketoacyl-CoA thiolase, a
PTS2 protein (our unpublished data). Numerous PMP70-positive particles
were detected in 208P3 cells (Figure 1g), whereas peroxisomal membrane remnants were not discernible in ZPG208 (Ghaedi et al.,
1999b
) (Figure 1f). These results strongly suggested that 208P3 cells had morphologically normal peroxisomes, as seen in the wild-type CHO-K1
cells. When another CHO cell mutant, ZPG209 of the same CG as ZPG208,
was transfected with human PEX3 (HsPEX3),
catalase was likewise localized in peroxisomes (Figure 1h), thus
demonstrating that HsPEX3 is functional in CHO cells.
In peroxisome-deficient cells, peroxisomal proteins mislocalized to the
cytosol are rapidly degraded or are not converted to mature forms,
despite normal synthesis (Tsukamoto et al., 1990
; Shimozawa
et al., 1992
; Okumoto et al., 1997
). In the
digitonin titration assay, nearly 60% of the catalase activity was
latent at the digitonin concentration of 100 µg/ml in the wild-type
cells (Figure 3A). In ZPG208 cells,
nearly full activity of catalase was detected at 100 µg/ml digitonin,
with the same latency profile as lactate dehydrogenase, a cytosolic
enzyme (Okumoto et al., 1998b
; Tamura et al.,
1998
) (our unpublished results); hence catalase was present in the
cytosol. This is consistent with our earlier observations using several
CGs of CHO mutants (Tsukamoto et al., 1990
; Shimozawa
et al., 1992
; Okumoto et al., 1998b
; Tamura
et al., 1998
). In 208P3 cells, catalase showed almost the
same latency as in wild-type CHO-K1 cells, thereby demonstrating
restoration of peroxisome biogenesis. A moderately higher level of
total catalase activity in ZPG208 and 208P3 cells, compared with that
of CHO-K1, apparently reflects cell size (Table
2).
AOx, the first enzyme of the peroxisomal fatty acid
-oxidation
system, is synthesized as a 75-kDa polypeptide (A component) and is
proteolytically converted into 53- and 22-kDa polypeptides (B and C
components, respectively) in peroxisomes (Miyazawa et al.,
1987
; Miyazawa et al., 1989
; Tsukamoto et al.,
1990
). All three polypeptide components were evident in CHO-K1 cells,
exclusively in the organelle fraction, presumably in peroxisomes, as
assessed by immunoblotting (Figure 3B, top panel, lanes
1-3), whereas AOx protein was under the detectable level in ZPG208,
probably because of rapid degradation (Tsukamoto et al.,
1990
; Ghaedi et al., 1999b
) (Figure 3B, top panel, lanes
4-6). The three components of AOx were found in particulate fractions
in 208P3, as in CHO-K1 cells (Figure 3B, top panel, lanes 7-9),
indicative of proper import and proteolytic conversion of AOx.
Peroxisomal 3-ketoacyl-CoA thiolase, the third enzyme of the
peroxisomal
-oxidation system, is synthesized as a larger precursor
with an amino-terminal presequence, which contains PTS2 (Osumi et
al., 1991
; Swinkels et al., 1991
) and is converted to a
mature form in peroxisomes (Hijikata et al., 1987
; Tsukamoto
et al., 1990
; Miura et al., 1994
; Tsukamoto et al., 1994a
). In wild-type CHO-K1 cells, only the matured
thiolase was detected and mostly in the PNS as well as organelle
fractions, presumably in peroxisomes, with a little in the cytosol
(Figure 3B, middle panel, lanes 1-3), thereby reflecting rapid
processing of the precursor form. In ZPG208 cells, only the larger
precursor was found in particulate and soluble fractions (Figure 3B,
middle panel, lanes 4-6), implying the absence of processing activity. The thiolase precursor in the membrane pellet may be due to nonspecific binding to some organelles. Physiological implications remain to be
clarified. 208P3 cells showed only the mature form of thiolase, as in
CHO-K1, demonstrating the complementation of PTS2 protein import and
processing (Figure 3B, middle panel, lanes 7-9).
PMP70 was absent in ZPG208 (Ghaedi et al., 1999b
) (Figure
3B, bottom panel, lanes 4-6), presumably by a rapid degradation as in
the pex19 CHO mutant ZP119 (Kinoshita et al.,
1998
). Another peroxisomal membrane protein such as Pex13p was also
under the detectable level in ZPG208 (see Figure 5A). Appearance of
PMP70 in the particulate fraction of 208P3, at a similar level as in
CHO-K1, also suggested the restored biogenesis of peroxisomal membrane
proteins (Figure 3B, bottom panel, lanes 1-3 and 7-9).
P9OH, incorporated into plasmalogens at an early stage of synthesis,
produces active oxygen by UV irradiation (Morand et al., 1990
). Cell culture in the presence of P9OH, followed by short exposure
to UV, kills wild-type CHO cells but not peroxisome-defective mutants.
Conversely, P12/UV treatment specifically kills peroxisome-defective cells grown in P12-added medium upon UV irradiation, because of lack of
synthesis of plasmalogen, an oxygen radical scavenger (Zoeller et
al., 1988
). 208P3 cells were resistant to P12/UV treatment and
sensitive to P9OH/UV, similar to CHO-K1 cells (Table 2). In contrast,
mutant ZPG208 cells were resistant to P9OH/UV treatment and sensitive
to P12/UV (Ghaedi et al., 1999b
) (Table 2). Taken together,
these results demonstrated that RnPEX3 restored peroxisome biogenesis in ZPG208.
At 3 d after RnPEX3 transfection, CG17 CHO mutants,
ZPG208 and ZPG209, were mostly complemented for peroxisome assembly,
whereas none of the other CGs of peroxisome-deficient CHO cell mutants showed peroxisomes (Table 3).
Furthermore, RnPEX3 was introduced into fibroblasts from
patients with PBD, such as Zellweger syndrome, of CGs, D and G of Gifu
University (Gifu, Japan) (Shimozawa et al., 1992
; Poulos
et al., 1995
) and CG-VI of the Kennedy-Krieger Institute
(Baltimore, MD) (Shimozawa et al., 1992
) that were distinct from CHO mutants. As expected, none of the PBD fibroblasts was morphologically restored for peroxisome assembly (Table 3).
Collectively, these results demonstrate that Pex3p is the peroxisome
biogenesis factor only for CG17.
Dysfunction of Pex3p in CHO Mutants
PEX3 in Mutants.
To investigate dysfunction of Pex3p in ZPG208, we determined the
nucleotide sequence of Pex3p cDNA isolated from ZPG208 by RT-PCR. In
the six independent cDNA clones isolated, all showed a point mutation
at position 413 of a codon for Gly138
(GGA) to a codon for Glu138
(GAA), termed ClPEX3G138E (Figure
4A), strongly suggesting a homozygous
mutation. Therefore, dysfunction of the Pex3p caused by a missense
mutation is most likely to be the primary defect in the mutant ZPG208.
It is noteworthy that the mutation site, G138E, was located in the
interior of the hydrophobic segment (see Figure 2). The same homozygous
missense mutation was found in ZPG209 (our unpublished results).
Complementation of Protein Transport by ClPEX3. When mutant ZPG208 cells were transfected with wild-type ClPEX3 cDNA, PTS2-GFP was found in numerous particles, thereby indicating complementation of peroxisomal protein import (Figure 4B, a), as was the case with RnPEX3 (see Figure 1). To assess the impaired function of Pex3p in ZPG208, ZPG208-derived PEX3 cDNA, ClPEX3G138E, was transfected back to the mutant cells. PTS2-GFP was present in the cytosol, in a diffuse manner, in ClPEX3G138E-transfected ZPG208 (Figure 4B, b), hence demonstrating dysfunction of the mutated form of Pex3p. Moreover, ZPG209, the same CG mutant as ZPG208, showed cytosolic PTS2-GFP in transfectants of ClPEX3G138E (our unpublished results), confirming the impaired function of ClPEX3G138E.Taken together, we conclude that dysfunction of Pex3p caused by a missense mutation is the primary defect in impaired peroxisome biogenesis in CG17 CHO mutants ZPG208 and ZPG209.
Intracellular Localization of Pex3p
The C-terminal peptide of human Pex3p (residues 355-372) was used
to raise rabbit antibody. This antibody reacted only with a single
protein with an apparent molecular mass of ~42 kDa, nearly the same
size as the calculated one, in immunoblot of rat liver homogenates (our unpublished results), indicating that the antibody is
specific. The mobility in SDS-PAGE of Pex3p synthesized in vitro by
coupled transcription-translation of ClPEX3 was
indistinguishable from that of Pex3p detected by immunoblot
of subcellular fractions of CHO-K1, i.e. PNS and organellar fractions
(Figure 5A, upper panel, compare lane 2 and lanes 4 and 5, arrow), thereby indicating that a cloned
ClPEX3 encodes bona fide Pex3p associated with organelles. This result implies the synthesis of Pex3p at its final size, consistent with a general rule for peroxisomal proteins (Lazarow and
Fujiki, 1985
). Human Pex3p synthesized in vitro also showed a similar
mobility in SDS-PAGE (Figure 5A, lane 1), whereas rat Pex3p showed a
slightly higher mobility (lane 3, arrowhead).
Intracellular localization of Pex3p was also investigated by
subcellular fractionation of 208P3 cells stably expressing rat Pex3p.
Pex3p was detected in the PNS fraction and then exclusively recovered
in the organellar fraction, not in the cytosol (Figure 5A, upper panel,
lanes 7-9, arrowhead), as was the case for endogenous Pex3p in CHO-K1
(lanes 4-6). Peroxisomal membrane remnants, called peroxisomal ghosts,
are found in most CHO cell mutants impaired in peroxisome biogenesis
(Shimozawa et al., 1992
) such as a pex2 mutant,
Z65 (Tsukamoto et al., 1991
), as well as in PBD patient fibroblasts (Santos et al., 1988
). Pex3p was detected in Z65
cells and fractionated in the membrane fraction (Figure 5A, lanes
13-15, arrow), whereas it was not discernible in a pex3,
ZPG208 (lanes 10-12), possibly because of a rapid degradation. This
implies that Pex3p is localized in endomembranes in
Pex2 cells, presumably in peroxisomal remnants where PMP70
is targeted (see below). Biogenesis of other peroxisomal membrane
proteins such as Pex13p (Toyama et al., 1999
) and Pex14p was
investigated. Pex14p was detectable in the membrane fraction of ZPG208
(Figure 5A, lower panel, lanes 4-6) but was less in amount compared
with the level in CHO-K1 and 208P3 (lanes 1-3 and 7-9). In contrast,
Pex13p was not detectable in ZPG208 (lanes 4-6). Pex12p, another
peroxin integrated to peroxisomal membranes (Okumoto and Fujiki, 1997
),
was likewise below the detection level (our unpublished results). In
CHO-K1 as well as 208P3 cells, Pex13p and Pex14p were detected in
organelle fractions (lanes 1-3 and 7-9). It may be that Pex14p
locates in peroxisome-related membrane vesicles not morphologically
detectable or in some endomembranes by mistargeting. In contrast,
Pex13p appears to be degraded. The results imply that the stability of
peroxisomal membrane proteins in the pex3 mutant may vary
from one protein to another.
Upon further fractionation of the light mitochondrial fraction from rat
liver by sucrose density gradient centrifugation, Pex3p was detected as
a single band and cosedimented with peroxisomal marker enzymes catalase
and AOx as well as peroxisomal integral membrane proteins Pex14p
(Shimizu et al., 1999
) and PMP70, thereby indicating that
Pex3p is a peroxisomal protein (Figure 5B). This is consistent with
morphological observations (see below). In addition, the distribution
of Pex3p on the gradient was different from that of marker enzymes,
glutamate dehydrogenase for mitochondria, esterase for microsomes, and
N-acetyl-
-glucosaminidase for lysosomes, thus confirming
the peroxisomal location of Pex3p.
The subcellular localization of Pex3p was also determined by
immunofluorescent microscopy with Pex3p tagged at its N terminus with
an epitope flag. In wild-type CHO-K1 cells transfected with flag-HsPEX3, Pex3p was detected in a punctate staining
pattern with use of an anti-flag antibody (Figure
6A, a). The pattern was superimposable on
that obtained using an anti-catalase antibody (Figure 6A, b), thereby
demonstrating that flag-Pex3p was targeted to peroxisomes. Similar
results were obtained when flag-RnPEX3 was expressed in
CHO-K1 cells and stained with an anti-flag antibody (our unpublished
results). The flag-tagged Pex3p fully restored peroxisome assembly in
ZPG208, as efficiently as did Pex3p, and was colocalized with catalase,
indicating that the N-terminal tagging did not interfere with function
of Pex3p (our unpublished results). These results were interpreted to
mean that flag-Pex3p was translocated to peroxisomes. Endogenous Pex3p
was barely detectable using an anti-Pex3p antibody in CHO-K1 (our
unpublished results). PEX2-defective Z65, (Tsukamoto
et al., 1991
) representing a typical pex
phenotype with normal import of membrane proteins (Shimozawa et
al., 1992
), was transfected with flag-RnPEX3-HA.
Flag-Pex3p-HA was colocalized with PMP70 in peroxisomal ghosts (Figure
6A, c and d). Thus, translocation of Pex3p does not appear to be
impaired in these mutant cells, consistent with the notion that import of peroxisomal membrane proteins is normal in mutants with peroxisomal remnants. Collectively, the data demonstrate peroxisomal localization of Pex3p.
Hydropathy analysis of Pex3p suggested that Pex3p apparently
contains at least two hydrophobic segments (see Figure 2, overlines). Pex3p was not extractable with 50 mM HEPES-KOH, pH 7.6, from freshly isolated rat liver peroxisomes (our unpublished results). The integrity
of Pex3p was verified by extraction with 0.1 M sodium carbonate, pH
11.5 (Fujiki et al., 1982a
), and treatment with 1% Triton
X-114 (Bodier, 1981
) (Figure 6B). Pex3p was not extractable with sodium
carbonate, as was the case for peroxisomal integral membrane proteins
PMP22 (Fujiki et al., 1984
) and Pex12p (Okumoto and Fujiki,
1997
; Okumoto et al., 1998b
) (Figure 6B, lane 3), in
contrast to a matrix enzyme, catalase (lane 2),
thereby strongly suggesting that Pex3p is an integral
membrane protein. Upon treatment with Triton X-114, Pex3p as well as
PMP22 and Pex12p were recovered in a detergent phase, and catalase was
recovered in an aqueous phase (lanes 4 and 5), thereby indicating that
Pex3p is an integral membrane protein.
Membrane Topology of Pex3p
Topology of Pex3p in peroxisomal membranes was investigated using
a differential cell permeabilization procedure, in which detection of
Pex3p was done using antibodies to epitope tags. When ZPG208 cells
transfected with flag-RnPEX3-HA encoding both N- and
C-terminally epitope-tagged Pex3p were treated with Triton X-100, which
solubilizes all cellular membranes, both flag-Pex3p-HA and catalase
were detected in particulates, in a superimposable manner, thus
indicating localization of Pex3p in peroxisomes (Figure 7, A and B, a and b) and consistent with
findings described above. Hence, flag and HA tagging does not affect
localization and function of Pex3p. When ZPG208 cells expressing
flag-RnPex3p-HA were permeabilized with 25 µg/ml
digitonin, under conditions in which plasma membranes are selectively
permeabilized and intraperoxisomal proteins are inaccessible to
exogenous antibodies (Okumoto and Fujiki, 1997
; Okumoto et
al., 1998b
). Flag-Pex3p-HA was observed in a punctate staining
pattern with use of an anti-flag antibody, whereas
almost no staining of cells was noted using an anti-catalase antibody (Figure 7A, c and d). Similar punctate staining was discernible using
an anti-HA antibody (Figure 7B, c and d). The data strongly suggest
that both N- and C-terminal parts of Pex3p are exposed to the cytosol.
The same results regarding Pex3p topology were obtained by expression
of flag-Pex3p-HA in CHO-K1 cells (our unpublished results).
Transmembrane topology of Pex3p was also determined by means of
protease treatment. When the PNS fraction of CHO-K1 cells expressing
flag-RnPex3p-HA was treated with proteinase K, Pex3p was
detected in immunoblots using antibodies to flag and HA,
before treatment with proteinase K (Figure 7C, lane 1). With 25 µg/ml the protease Pex3p was hardly detectable using the flag or HA antibody
(lane 2). After digestion of the PNS fraction with 50 and 100 µg/ml
proteinase K, Pex3p was no longer visible (lanes 3 and 4). Similarly,
Pex3p was completely digested when adding 1% Triton X-100 (lane 5).
Pex14p, a peroxin integrated to peroxisomal membranes and exposing its
N- and C-terminal regions to the cytosol (Shimizu et al.,
1999
), was likewise digested by proteinase K, as assessed with antibody
to Pex14p C-terminal peptide (Shimizu et al., 1999
). Under
such conditions, AOx, a matrix enzyme, was fully protected from
digestion (lanes 2-4), whereas pretreatment with Triton X-100
abolished the protease protection (lane 5). Similar results were
obtained using flag-RnPex3p-HA-expressing, peroxisome-restored ZPG208 cells (our unpublished results).
Collectively, we conclude that both N- and C-terminal parts of Pex3p
are exposed to the cytosol.
Kinetics of Peroxisome Biogenesis
We investigated kinetics of peroxisome assembly with respect
to membrane vesicle formation as well as soluble protein import. ZPG208
originally expressing PTS2-GFP was transfected with
flag-RnPEX3-HA and monitored under a fluorescent microscope.
RnPex3p was detectable by immunoblot with
anti-Pex3p antibody at 12 h after the transfection and reached a
steady level at ~24 h after transfection (Figure 8B). RnPex3p was also
morphologically visible by staining with an anti-HA antibody in several
punctate structures in cells at 12 h, whereas PTS2-GFP was
diffused in the cytoplasm and apparently in the nucleus as well (Figure
8A, a with arrow and e), as seen in the untransfected cells (Figure
1a). At 18 h, Pex3p became more clearly visible in punctate
structures in part of the transfected cells, possibly representing
assembled peroxisomal membranes, where PTS2-GFP was not discernible in
a punctate manner (Figure 8A, b with arrow and f). Several
PMP70-positive particles in a cell were discernible at 18 h
(Figure 8A, j, arrowheads). At 24 h, Pex3p-positive vesicles that
had increased in number were colocalized with PTS2-GFP in a
superimposable manner (Figure 8A, c and g). This was interpreted to
mean that part of the assembled peroxisomal membrane vesicles imported
PTS2-GFP. At 24 h, PMP70-positive particles, similar in number to
Pex3p-carrying ones, were visible, thereby demonstrating
reestablishment of membrane assembly (Figure 8A, k). In contrast,
catalase was imported into peroxisomes (Figure 8A, m-p),
superimposable with PTS2-GFP-positive vesicles (our unpublished
results), only at 36 h after RnPEX3 transfection, thereby indicating that catalase is imported at a slower rate, compared
with the case of PTS2. Collectively, peroxisomal membrane vesicles
containing Pex3p as well as PMP70 are likely to form before the import
of matrix proteins. The import kinetics of matrix proteins appears to
be variable.
Functional and Topogenic Regions of Pex3p
To elucidate structural and functional aspects of Pex3p, we
constructed various truncated Pex3p variants by C-terminally fusing to
EGFP and expressed them in ZPG208 and CHO-K1 cells. Full-length Pex3p
was functional as a peroxin in restoring peroxisome biogenesis in
ZPG208 and targeted to peroxisomes (Figure
9, A and B, a and e). ZPG208-derived,
full-length but functionally inactive Pex3p was translocated to
peroxisomes, as assessed by colocalization with PTS1 proteins, when
expressed in CHO-K1 (our unpublished results). However, the endogenous
Pex3p mutant was not found in ZPG208 on immunoblots,
presumably because of a degradation. Pex3p variants truncated at the
N-terminal portion, such as those of residues 1-15 (Figure 9, A and B,
b and f) as well as 1-30, 1-109, and 1-150 (Table
4), were all biologically inactive and
were apparently localized in the cytoplasm. In contrast, Pex3p with residues 1-312, i.e., with deletion of the C-terminal 60 amino acids,
was localized in peroxisomes, as assessed with PTS1, although the
biological activity was eliminated (Figure 9, A and B, c and g).
Another mutant deleted from residue 204 to the C terminus was also
inactive but was still targeted to peroxisomes (Table 4). All of the
truncation mutants used for fusion with EGFP were N-terminally tagged
with flag and likewise expressed in ZPG208. These Pex3p mutants did not
restore the impaired assembly of peroxisomes (our unpublished data),
consistent with results described above. A Pex3p variant with only the
N-terminal sequence 1-40 directed EGFP to peroxisomes when expressed
in CHO-K1 (Figure 9B, d and h). This 40-amino-acid Pex3p protein was
targeted to many vesicular structures in ZPG208, where no
complementation was evident (Figure 9A, d and h). Interestingly, not
only full-length Pex3p-EGFP, similar to flag-Pex3p (see Figure 6A), but
also that of residues 1-40 were targeted to peroxisomal remnants in
pex2 Z65 (our unpublished results). Taken together, it is
apparent that nearly full-length Pex3p is required for biological
activity, whereas peroxisome-targeting information resides in
N-terminal residues 1-40, consistent with the findings in humans
(Kammerer et al., 1998
; Soukupova et al., 1999
)
and yeast (Wiemer et al., 1996
) Pex3p.
Identification of Pex19p as a Binding Partner of Pex3p
To determine whether Pex3p interacts with mammalian
peroxins, we did the yeast two-hybrid assay. Pex19p, a farnesylated
protein required for peroxisome assembly in CG-J (Matsuzono et
al., 1999
), gave positive
-galactosidase activity and yeast
growth in His
/3-aminotriazole
(AT)+ medium (Figure
10A). In contrast, the other peroxins,
including Pex16p (Honsho et al., 1998
), Pex14p (Shimizu
et al., 1999
), Pex13p (Toyama et al., 1999
),
Pex11p
(Abe et al., 1998
), and Pex11p
(Abe and Fujiki,
1998
), as well as the RING peroxins Pex2p (Tsukamoto et al.,
1991
), Pex10p (Okumoto et al., 1998a
), and Pex12p (Okumoto et al., 1998b
) resulted in negative findings (our
unpublished results). To search for a region responsible for the
interaction with Pex19p, a Pex3p variant with amino acid residues
1-312 gave a weak signal (Figure 10A), whereas all of the other
mutants, including those with residues 110-372 (Figure 10A), 151-372
and 1-203 (our unpublished results), and 1-40 (Figure 10A) did not
interact with Pex19p. Interestingly, ZPG208-derived Pex3p-G138E was
positive, both in
-galactosidase activity and yeast growth in
His
/AT+ medium.
Therefore, the interaction apparently requires nearly full-length
Pex3p.
To confirm the findings in the two-hybrid assay, cell-free
synthesized rat 35S-Pex3p was incubated with the
in vitro transcription-translation product of human PEX19.
Immunoprecipitation of Pex19p gave rise to concomitant recovery of
35S-Pex3p, whereas that with the preimmune serum
showed no protein band (Figure 10B, lanes 1-3). It is noteworthy that
cell-free synthesized Pex19p was detected as two bands on
immunoblots: one representing the farnesylated Pex19p
(solid arrowhead) and the other (open arrowhead) for the nonmodified
one (lane 4), as described (Matsuzono et al., 1999
).
Conversely, 35S-Pex19p coimmunoprecipitated
with Pex3p when using anti-Pex3p antibody (our unpublished results).
Collectively, the results demonstrate that Pex3p specifically binds to Pex19p.
The CG17 CHO cell mutants ZPG208 and ZPG209 are defective in
import of both matrix and membrane proteins, similar to the phenotype of a pex19 mutant, ZP119 (Kinoshita et al.,
1998
). Peroxisomal remnants were seen in 10 other CGs of CHO cell
mutants (Zoeller et al., 1989
; Shimozawa et al.,
1992
; Okumoto et al., 1997
; Tateishi et al.,
1997
; Otera et al., 1998
; Ghaedi et al., 1999a
;
Toyama et al., 1999
) and 9 CGs of fibroblasts from PBD
patients (Santos et al., 1992
; Wendland and Subramani, 1993
;
Shimozawa et al., 1998a
), excluding CG-G (Shimozawa et
al., 1998a
), PEX16-deficient CG-D (Honsho et
al., 1998
), and PEX19-defective CG-J (Kinoshita et al., 1998
; Shimozawa et al., 1998a
; Matsuzono
et al., 1999
). In the present work, we isolated a rat Pex3p
cDNA by functional complementation of ZPG208. Expression of the
full-length RnPEX3 fully restored the impaired peroxisome
biogenesis, including membrane vesicle assembly, in ZPG208 and ZPG209.
We delineated the homozygotic mutant PEX3 allele from ZPG208
and ZPG209: a one-base transition, G413 to A in a
codon for Gly138, resulted in
Glu138. Pex3p with G138E was not functionally
active in complementing impaired peroxisome biogenesis in ZPG208.
Accordingly, PEX3 is responsible for the peroxisome
biogenesis of CG17 and is the 12th gene to be identified to date in
mammals (Table 3). None of the fibroblasts from patients with PBD of 13 CGs was complemented, indicating that the PEX3 gene is not
the causal gene of human peroxisome-defective disorders of the CGs so
far classified. ZPG208 and ZPG209 are thus the first pex3
mutants to be identified in mammals. It is noteworthy that yeast Pex3p
expression complemented peroxisome biogenesis in respective
pex3 mutants of S. cerevisiae (Hoehfeld et
al., 1991
), H. polymorpha (Baerends et al.,
1996
), and P. pastoris (Wiemer et al., 1996
),
where pex3 cells of H. polymorpha and P. pastoris were apparently absent from peroxisomal structures
(Baerends et al., 1996
; Wiemer et al., 1996
).
Upon transfection of RnPEX3 into ZPG208 devoid of
peroxisomal remnants, most striking was the formation of
morphologically recognizable peroxisomal membrane vesicles, apparently
preceding the import of matrix proteins such as PTS1 and PTS2 proteins
and catalase. Dysfunction of Pex3p caused impaired membrane assembly, resulting in the mutant phenotype defect of matrix protein import and
used for mutant screening. Pex3p can be classified as a peroxin essential for the assembly of peroxisome membranes. Very recently, Pex16p and Pex19p were also shown to function in assembly of peroxisome vesicles in mammals (Honsho et al., 1998
; Matsuzono et
al., 1999
; South and Gould, 1999
), as was the case for Pex19p in
yeast (Snyder et al., 1999
). Mutation of human Pex16p
(Honsho et al., 1998
; South and Gould, 1999
) and Pex19p
(Matsuzono et al., 1999
) severely affected peroxisome
assembly in CG-D (CG-IX) and CG-J patients with Zellweger syndrome.
Accordingly, Pex16p and Pex19p can also be categorized into this group
of peroxins. We demonstrated in the present study that the membrane
assembly process(es) involving integration of Pex3p is temporally
differentiated from the import of soluble proteins during peroxisome
biogenesis. Moreover, import of PTS2 and catalase at a different rate
in PEX3-transfected ZPG208 implies temporally differential
translocation of matrix proteins into peroxisomal membrane vesicles.
Similar types of protein import, distinct between membrane polypeptides
and soluble proteins, have been observed in pex16 and
pex19 mutant cells, upon expression of complementing cDNAs
PEX16 and PEX19, respectively (Honsho et al., 1998
; Matsuzono et al., 1999
; South and Gould,
1999
). Therefore, it is most likely that Pex3p functions as an
essential factor required for the translocation process of membrane
protein and/or membrane vesicle assembly, possibly in a concerted
manner with other peroxins such as Pex16p and Pex19p. Taken together,
our results provide evidence that peroxisomes may form de novo and do
not have to arise from preexisting, morphologically recognizable peroxisomes. At such an early stage of peroxisome assembly, ER may be
involved, as was suggested for Pex2p and Pex16p, both initially residing in ER, in Y. lipolytica (Titorenko and Rachubinski,
1998
). However, no direct evidence for the involvement of ER in
peroxisome assembly has been noted in mammalian cells. Accordingly,
several issues, including those regarding roles of the peroxins Pex3p, Pex16p, and Pex19p in assembly of membrane vesicles as well as translocons for membrane polypeptides and soluble matrix proteins, remain to be addressed.
We found that Pex3p interacts with Pex19p both in vivo and in vitro.
However, it is unclear whether the interaction is direct or is mediated
by a factor(s), if any, present in the assay used. Such Pex3p-Pex19p
binding was recently found in S. cerevisiae (Goette et
al., 1998
) and P. pastoris (Snyder et al.,
1999
) as well as in human cells (Soukupova et al., 1999
). It
is interesting to note that Pex3p with G138E also interacts with Pex19p
in vivo, in yeast two-hybrid assays. Although Pex3p-G138E was under the detectable level in the mutant ZPG208, this mutant Pex3p appears to be
stable in S. cerevisiae used for the binding assay, as in CHO-K1 cells. We interpreted this observation to mean that the undetectable level of Pex3p, presumably because of a rapid turnover, is
the cause of the phenotype of pex3 ZPG208. Pex3p may
function in peroxisomal membrane assembly at an early stage of
peroxisome assembly by interacting with Pex19p but not with Pex16p.
Pex19p is a farnesylated peroxin, partly if not all residing on the
peroxisome membrane, exposing its N-terminal portion to the cytosol
(Matsuzono et al., 1999
). However, it is not clear whether
Pex3p is a prerequisite peroxin for Pex19p to be localized to and/or
anchored on peroxisomal membranes. Because several truncated Pex3p
mutants, including that consisting of only the N-terminal 40 amino acid
residues, were properly translocated to peroxisomal membranes despite
the lack of binding to Pex19p, as verified in yeast two-hybrid assays, Pex3p does not appear to require Pex19p for targeting. It is possible that interaction of Pex3p and Pex19p leads to assembly of potential membrane vesicles, which then function as protein import-competent machinery, comprising at least Pex14p (Fransen et al., 1998
;
Shimizu et al., 1999
; Will et al., 1999
) and
Pex13p (Liu et al., 1999
; Shimozawa et al., 1999
;
Toyama et al., 1999
) and possibly including the RING family
Pex2p (Tsukamoto et al., 1991
), Pex10p (Okumoto et
al., 1998a
; Warren et al., 1998
), and Pex12p (Chang
et al., 1997
; Okumoto and Fujiki, 1997
; Okumoto et
al., 1998b
). It is also noteworthy that Pex3p may function
upstream of Pex19p in P. pastoris (Snyder et al.,
1999
). All these findings suggest that assembly processes of peroxisome
membrane vesicles are mediated by Pex3p, Pex16p, and Pex19p in mammals.
The assembled membrane vesicles import other membrane components,
including those for a potential matrix protein import machinery, to
form "premature peroxisomes," which are capable of importing matrix
proteins. The matured peroxisomes then divide so that progeny
peroxisomes emerge. Pex11p is more likely involved in proliferation and
division of peroxisomes (Abe and Fujiki, 1998
; Abe et al.,
1998
; Schrader et al., 1998
) (I. Abe and Y. Fujiki,
unpublished observation). Further investigation, using ZPG208 and
ZPG209 together with PEX3, should shed light on molecular
mechanisms involved in peroxisome biogenesis, especially with respect
to membrane vesicle assembly at the initial stage of peroxisome biogenesis.
Pex14p has been characterized as a convergent component of potential
peroxisomal import machinery of soluble proteins such as PTS1 and PTS2
(Albertini et al., 1997
; Fransen et al., 1998
; Shimizu et al., 1999
; Will et al., 1999
).
Peroxisomal membrane ghosts are discernible in CHO pex14
mutants, where the matrix protein import is severely impaired (Shimizu
et al., 1999
). The presence of Pex14p in the pex3
mutant is consistent with that of Pex14p found in a pex16
Zellweger patient fibroblasts (South and Gould, 1999
) (M. Honsho and Y. Fujiki, unpublished observation) and a pex19 mutant
(Matsuzono et al., 1999
) (Y. Matsuzono and Y. Fujiki,
unpublished observation). This may mean that Pex14p in the sedimentable
membrane fraction functions, in addition to a role as a convergent
component of the translocon for soluble proteins, as a factor,
including a scaffold for the interaction and assembly of several
peroxins such as Pex3p, Pex16p, and Pex19p. These are all required for
peroxisome membrane biogenesis. Pex14p may thus regulate an early stage
of peroxisome biogenesis. However, it is also equally possible that
contrary to Pex12p, Pex13p, and PMP70, Pex14p is simply one of the
membrane proteins that is stable in cell mutants lacking
morphologically recognizable peroxisomal membrane vesicles.
Physiological consequences of Pex14p-positive membranes found in
pex3, pex16, and pex19 mutants remain
to be determined.
With respect to the membrane topology of Pex3p, we concluded from
several lines of morphological and biochemical evidence that both N-
and C-terminal parts are oriented to the cytosol. Contrary to this
observation, Soukupova et al. (1999)
suggested that
myc-tagged human Pex3p faces its C-terminal region to the cytoplasm and
the N-terminal part to the matrix side of peroxisomes, when expressed
in cultured human skin fibroblasts, as is the case for S. cerevisiae Pex3p (Hoehfeld et al., 1991
). It is less
likely that Pex3p has a different topology depending on cell types,
even in mammalian cells, although different epitope tags may affect the
Pex3p topology. We showed that topogenic information of Pex3p resides
at the N-terminal region and comprises residues at positions 1-40, in
good agreement with the observation by other investigators on human
Pex3p (Kammerer et al., 1998
; Soukupova et al.,
1999
) and P. pastoris Pex3p (Wiemer et al.,
1996
). Baerends et al. (1996)
noted that the N-terminal
sequence, 1-16, of H. polymorpha Pex3p targeted catalase
lacking functional PTS1 to ER and nuclear membranes, thus inferring
ER-mediated peroxisome assembly. The highly conserved residues at
9-15, LKRHKKK (proposed consensus residues in boldface letters) of human Pex3p are likely to be
responsible for such targeting activity. Pex3p (1-40)-GFP was targeted
not only to peroxisomes in the wild-type CHO-K1 cells but also to
peroxisomal remnants in CHO mutants defective in PTS1 and PTS2 import,
which implies that the N-terminal 40-amino-acid sequence is sufficient
for localization of Pex3p to peroxisomal membranes. A potential
receptor recognizing such membrane topogenic signals may be
investigated if one makes use of Pex3p and its variants together with
the pex3 mutant.
We thank M. Honsho for helpful comments and discussion and other members of our laboratory for stimulating discussion. M. Ohara provided language assistance. This work was supported in part by a CREST grant (to Y.F.) from the Japan Science and Technology Corporation and a grant-in-aid for scientific research (09044094 to Y.F.) from The Ministry of Education, Science, Sports, and Culture of Japan.
Corresponding author. E-mail address:
yfujiscb{at}mbox.nc.kyushu-u.ac.jp.
Abbreviations used: AOx, acyl-coenzyme A oxidase; AT, 3-aminotriazole; CG, complementation group; CHO, Chinese hamster ovary; CoA, coenzyme A; EGFP, enhanced green fluorescent protein; ER, endoplasmic reticulum; GFP, green fluorescent protein; HA, hemagglutinin; IgG, immunoglobulin G; ORF, open reading frame; P9OH, 9-(1'-pyrene) nonanol; P12, 12-(1'-pyrene) dodecanoic acid; PBD, peroxisome biogenesis disorder; PMP70, 70-kDa peroxisomal integral membrane protein; PNS, postnuclear supernatant; PTS, peroxisomal targeting signal; RT, reverse transcription.
This article has been cited by other articles: | http://www.molbiolcell.org/cgi/content/full/11/6/2085 | crawl-002 | en | refinedweb |
Xml Serialization Handler
[editor's editor's note: I made another minor change. local-name() is a function, Jef. A function. That's what I get for not having a seamless publishing mechanism for my code straight from VS.NET.]
[editor's note: I made a minor mod to the code below after posting it, changing the xpath to ignore IncludeTypes if they weren't provided, rather than looking for the child configuration node by index]
Craig Andera blogged at one point about the last configuration section handler he would ever write. I had a similar idea, I googled, found his, liked it, used it. A couple of additional features were desirable to me, and so I modified it as follows:
// no warranties expressed or implied :) using System; using System.Configuration; using System.IO; using System.Text; using System.Xml; using System.Xml.Serialization; using System.Xml.XPath; namespace Your.Namespace.Here { public class XmlSerializerSectionHandler : IConfigurationSectionHandler { public XmlSerializerSectionHandler() {} public object Create( object parent, object configContext, System.Xml.XmlNode section) { string typename = section.Attributes["type"].Value; XmlNodeList list = section.SelectNodes("./IncludedTypes/Type/text()"); XmlNode configNode = section.SelectSingleNode("child::*[not(local-name() = 'IncludeTypes')]"); Type[] types = new Type[list.Count]; for(int i = 0; i < list.Count; i++) { types[i] = Type.GetType(list[i].Value, true); } Type t = Type.GetType ( typename ); XmlSerializer ser = new XmlSerializer ( t, new XmlAttributeOverrides(), types, new XmlRootAttribute(configNode.LocalName), null); return ser.Deserialize(new XmlNodeReader(configNode)); } } }
This version allows me to have a different name for the configuration object than the class name -
via new XmlRootAttribute(configNode.LocalName) - and allows me to provide custom types to support plug-in style functionality in the objects I am configuring. This lets me avoid the monstronsity that is XmlInclude (hate it hate it hate it hate it) while still having extensibility. I will try to provide a more detailed example shortly, but until then, here, at least, is a snippet of an example configuration xml:
<configuration>
<configSections>
<section name="SomeConfiguration" type="Your.Namespace.XmlSerializerSectionHandler, Your.Assembly"/>
</configSections>
<SomeConfiguration type='Your.Namespace.SomeConfiguration, Your.Assembly'>
<IncludedTypes>
<Type>Your.Namespace.ThisMessageSender, Your.Assembly</Type>
<Type>LandSafe.Messaging.ThatMessageReceiver, Your.Assembly</Type>
<Type></Type>
</IncludedTypes>
<NodeConfiguration
xmlns:xsd=''
xmlns:
<Nodes>
<Node Verbose='false'>
<MessageHandler xsi:
<Receiver xsi:
<QueuePath>.\requests</QueuePath>
<MillisecondTimeout>30000</MillisecondTimeout>
</Receiver>
<Sender xsi:
<QueuePath>.\happy</QueuePath>
</Sender>
<ErrorSender xsi:
<QueuePath>.\errror</QueuePath>
</ErrorSender>
<ThreadCount>2</ThreadCount>
</Node>
</Nodes>
</NodeConfiguration>
</SomeConfiguration>
</configuration>
I hope that you can monkey around with it and infer proper use from that. A couple of nits -- Xml Serialization only supports abstract classes, it does not support interfaces. I do not try to take into account scoping classes to particular namespaces, or any such thing, though I am sure it isn't that difficult to get there. This hits the sweet spot for me.
(parenthetically speaking, (note the parentheses) Is it just me, or is DevelopMentor bleeding like a stuck pig? First the Don leaves. Then he steals many of the DM-meisters from the fold to work in the big house. Then I notice people like Aaron Skonnard going to Northface University (at least temporarily?), and Mike Woodring moving his .NET sample page back to his indiependent consulting site. Then I notice Barracuda while googling for a SharePoint 2004 implementation I was working on. Then Pluralsight kicks off, and dm folks move like fleas from a wet dog, Craig A. being the latest I have noticed. And not too long ago, the middleware co gave up on training and traded their courseware, etc to dm in exchange for content hosting. Related? Inquiring minds want to know.)
Posted by: Barny | 2004.07.09 at 12:45 AM
Posted by: haacked | 2004.07.09 at 01:02 AM
Posted by: TorstenR | 2004.07.09 at 02:28 AM
Posted by: Jef Newsom | 2004.07.09 at 07:11 AM
Posted by: Craig | 2004.07.09 at 07:33 AM
Posted by: Ionic | 2004.11.19 at 04:33 PM | http://integralpath.blogs.com/thinkingoutloud/2004/07/xml_serializati.html | crawl-002 | en | refinedweb |
Rosetta::Language - Design document of the Rosetta D language
The native command language of a Rosetta DBMS (database management system) / virtual machine is called Rosetta D; this document, Rosetta::Language ("Language"), is the human readable authoritative design document for that language, and for the Rosetta virtual machine in which it executes. If there's a conflict between any other document and this one, then either the other document is in error, or the developers were negligent in updating it before Language, so you can yell at them.
Rosetta D is intended to qualify as a "D" language as defined by "The Third Manifesto" (TTM), a formal proposal for a solid foundation for data and database management systems, written by Christopher J. Date and Hugh Darwen; see for a publishers link to the book that formally publishes TTM. See for some references to what TTM is, and also copies of some documents I used in writing Rosetta D. The initial main reference I used when creating Rosetta D was the book "Database in Depth" (2005;), written by Date and published by Oreilly.
It should be noted that Rosetta D, being quite new, may omit some features that are mandatory for a "D" language initially, to speed the way to a useable partial solution, but you can be comforted in knowing that they will be added as soon as possible. Also, it contains some features that go beyond the scope of a "D" language, so Rosetta D is technically a "D plus extra"; examples of this are constructs for creating the databases themselves and managing connections to them. However, Rosetta D should never directly contradict The Third Manifesto; for example, its relations never contain duplicates, and it does not allow nulls anywhere, and you can not specify attributes by ordinal position instead of by name. That's not to say you can't emulate all the SQL features over Rosetta D; you can, at least once its complete.
Rosetta D also incorporates design aspects and constructs that are taken from or influenced by Perl 6, pure functional languages like Haskell, Tutorial D, various TTM implementations, and various SQL dialects and implementations (see the Rosetta::SeeAlso file). While most of these languages or projects aren't specifically related to TTM, none of Rosetta's adaptions from these are incompatible with TTM.
Note that the Rosetta documentation will be focusing mainly on how Rosetta itself works, and will not spend much time in providing rationales; you can read TTM itself and various other external documentation for much of that.
Rosetta D is a computationally complete (and industrial strength) high-level programming language with fully integrated database functionality; you can use it to define, query, and update relational databases. It is mainly imperative in style, since at the higher levels, users provide sequential instructions; but in many respects it is also functional or declarative, in that many constructs are pure or deterministic, and the constructs focus on defining what needs to be accomplished rather than how to accomplish that.
This permits a lot of flexability on the part of implementers of the language (usually Rosetta Engine classes) to be adaptive to changing constraints of their environment and deliver efficient solutions. This also makes things a lot easier for users of the language because they can focus on the meaning of their data rather than worrying about implementation details, which relieves burdens on their creativity, and saves them time. In short, this system improves everyone's lives.
The Rosetta DBMS / virtual machine, which by definition is the environment in which Rosetta D executes, conceptually resembles a hardware PC, having a command processor (CPU), standard user input and output channel, persistant read-only memory (ROM), volatile read-write memory (RAM), and read-write persistent disk or network storage.
Within this analogy, the role of the PC's user, that feeds it through standard input and accepts its standard output, is fulfilled by the application that is using the Rosetta DBMS; similarly, the application itself will activate the virtual machine when wanting to use it (done in this distribution by instantiating a new Rosetta::Interface::DBMS object), and deactivate the virtual machine when done (letting that object expire).
When a new virtual machine is activated, the virtual machine has a default state where the CPU is ready to accept user-input commands to process, and there is a built-in (to the ROM) set of system-defined data types and operators which are ready to be used to define or be invoked by said user-input commands; the RAM starts out effectively empty and the persistant disk or network storage is ignored.
Following this activation, the virtual machine is mostly idle except when executing Rosetta D commands that it receives via the standard input (done in this distribution by invoking methods on the DBMS object). The virtual machine effectively handles just one command at a time, and executes each separately and in the order received; any results or side-effects of each command provide a context for the next command.
At some point in time, as the result of appropriate commands, data repositories (either newly created or previously existing) that live in the persistant disk or network storage will be mounted within the virtual machine, at which point subsequent commands can read or update them, then later unmount them when done. Speaking in the terms of a typical database access solution like the Perl DBI, this mounting and unmounting of a repository usually corresponds to connecting to and disconnecting from a database. Speaking in the terms of a typical disk file system, this is mounting or unmounting a logical volume.
Any mounted persistent repository, as well as the temporary repository which is most of the conceptual PC's RAM, is home to all user-defined data variables, data types, operators, constraints, packages, and routines; they collectively are the database that the Rosetta DBMS is managing. Most commands against the DBMS would typically involve reading and updating the data variables, which in typical database terms is performing queries and data manipulation. Much less frequently, you would also see changes to what variables, types, etcetera exist, which in typical terms is data definition. Any updates to a persistent repository will usually last between multiple activations of the virtual machine, while any updates to the temporary repository are lost when the machine deactivates.
All virtual machine commands are subject to a collection of both system-defined and user-defined constraints (also known as business rules), which are always active over the period that they are defined. The constraints restrict what state the database can be in, and any commands which would cause the constraints to be violated will fail; this mechanism is a large part of what makes the Rosetta DBMS a reliable modeller of anything in reality, since it only stores values that are reasonable.
Rosetta D commands are structured as arbitarily complex routines / operators, either named or anonymous, and they can have (named) parameters, can contain single (usually) or multiple Rosetta D statements or value expressions, and can return one or more values.
Rosetta D command routine definitions can either be named and stored in a persistent repository for reuse like a repository's data types or variables, or they can be anonymous and furnished by an application at run-time for temporary use. A command routine can take the form of either a function / read-only operator or a procedure / update operator; the former has a special return value which is the value of the evaluated function invocation within a value expression; the latter has no such special return value, and can not be invoked within a value expression.
An application can only ever directly define and invoke an anonymous command routine, but an anonymous routine can in turn invoke (and define if it is a procedure) named command routines within the DBMS environment.
Speaking in terms of SQL, a Rosetta D statement or value expression corresponds to a SQL statement or value expression, a Rosetta D named command routine corresponds to a SQL named stored procedure or function, a Rosetta D anonymous command procedure corresponds to a SQL anonymous subroutine or series of SQL statements, the parameters of a Rosetta D named routine correspond to the parameters of a SQL named stored procedure or function, and the parameters of a Rosetta D anonymous routine correspond to SQL host parameters or bind variables.
A Rosetta D procedure parameter can be read-only or read-write (which corresponds to SQL's IN or OUT+INOUT parameter types), but a Rosetta D function parameter can only be read-only (a function may not have any side-effects). When invoking a routine, an argument corresponding to a read-only parameter can be an arbitrarily complex value expression (which is passed in by value), but an argument corresponding to a read-write parameter must be a valid target variable (which is passed in by reference), and that target variable may be updated during the procedure invocation. A function always returns its namesake mandatory special return value using the standard "return" keyword, and it may not update any global variables. For a procedure, the only way to pass output directly to its invoker (meaning, without updating global variables) is to assign that output to read-write parameters. Note that "return" can be used in a procedure too for flow control, but it doesn't pass a value as well.
Orthogonal to the procedure/function and named/anonymous classifications of Rosetta D routines is the deterministic/nondeterministic classification. A routine that is deterministic does not directly reference (for reading or updating) any global variables nor invoke any nondeterministic routines; its behaviour is governed soley by its arguments, so given only identical arguments it has identical behaviour; if the routine is a function, this means that the return value is always the same for the same arguments. A routine that is nondeterministic does directly reference (for reading or updating) one or more global variable or does invoke a nondeterministic routine; its behaviour can change, or it return different results, even if given the all of the same arguments. Generally speaking, all routines / operators that are specific to a data type (such as typical comparison and assignment operators) must be deterministic, while routines / operators that are not specific to a data type do not need to be. Most built-in routines are deterministic. Note that a deterministic routine can indeed operate with or on global variables if they are passed to it as arguments.
The Rosetta DBMS is designed to allow user-applications to furnish the definition of an anonymous command routine once and then execute it multiple times (for efficiency and ease of use); speaking in terms of SQL, the Rosetta DBMS supports prepared statements. The arguments for any routine parameters are provided at execution time, and they are used for values that are intended to be different for each execution of the command, as well as to return results that probably differ with each execution; as an exception to the latter, the application does not have to pre-define an anonymous function's special return value, which doesn't correspond to a parameter. Presumably, any values that will be constant through the life of a command routine will be coded as literal values in its definition rather than parameters.
(In this distribution, you furnish an anonymous command routine definition for reuse using a DBMS object's "prepare" or "compile" method; that method returns a new Rosetta::Interface::Command object. You then associate a Rosetta::Interface::Variable/::Value object with each of the routine's parameters using the Routine object's "bind_param" method, and then invoke the Command object's "execute" method. Any Variable/Value objects corresponding to input parameters need to be set by the application prior to "execute", and following the "execute", the application can read the routine's output from the Variable objects associated with output parameters. When the Command is a function, "execute" will generate and return a new Value object with the special return value.)
The Rosetta D language has all the standard imperative language keywords, any of which a Rosetta D routine (both anonymous and named) can contain, including: conditionals ("if"), loops ("for", "while"), procedure invocation ("call"), normal routine exit ("return"), plus exception creation and resolution ("throw", "try", "catch"). For all types of routines, the "throw" keyword takes a value expression whose resolved value is the exception to be thrown, and visually looks like "return" does for functions. Note that a thrown exception which falls out of an anonymous procedure will result in an exception thrown out to the application (in this distribution, it will be as a thrown new Rosetta::Interface::Exception object). For our purposes, transaction control statements ("start", "commit", "rollback") and resource locking statements are also grouped with these standard keywords. Note that value assignment of a value expression's result to a named target is not accomplished with a keyword, but rather with an update procedure that is defined for the value's data type, with the target provided to it as a read-write argument.
Value assignment, which pdates a target variable to a new value provided by a Rosetta D expression, is used frequently in Rosetta D, and is the form of all its major functionality. If the target variable is an anonymous procedure's read-write parameter, the statement corresponds to an unencapsulated SQL "select" query; or, the same task is usually done using "return" in a function. If the target variable is an ordinary variable, and particularly if it is a repository's component data variable, the statement's effect corresponds to SQL "data manipulation" (usually "insert" or "update" or "delete"). If the target variable is a repository's special catalog variable, the statement's effect corresponds to SQL "data definition" (usually "create" or "alter" or "drop"); this is also how all named command routines are defined, by such statements in other usually-anonymous routines. If the target variable is the DBMS' own special catalog of repositories, then the effect is to mount or unmount a repository, which corresponds to SQL client statements like "connect to".
All types of Rosetta D command routines can have assignment statements which target their own lexical variables, but only procedures (that are not invoked by operators / functions) are allowed to target global variables, which are declared in a repository directly, or have read-write parameters. In other words, an function may not have side-effects, though it can read from global variables. Moreover, any procedure that is invoked by an function is subject to the same restriction against targeting globals, since it is effectively part of the function. A few special exceptions may be made to this restriction on functions, but for the most part, the restriction is in place to prevent inconsistencies between reads of the environment/globals from multiple functions that are invoked in the same Rosetta D expression; all reads in the same expression need to see the same state, so the expression's result is the same regardless of any logically-equivalent changes to order of execution of the sub-expressions. Further to this goal, any target variable may not be used more than once in the same Rosetta D statement; target meaning a read-write procedure parameter's argument, or directly referenced global variable.
The Rosetta DBMS / virtual machine itself does not have its own set of named users where one must authenticate to use it. Rather, any concept of such users is associated with individual persistent repositories, such that you may have to authenticate in order to mount them within the virtual machine; moreover, there may be user-specific privileges for that repository that restrict what users can do in regards to its contents.
The Rosetta privilege system is orthogonal to the standard Rosetta constraint system, though both have the same effect of conditionally allowing or barring a command from executing. The constraint system is strictly charged with maintaining the logical integrity of the database, and so only comes into affect when an update of a repository or its contents are attempted; it usually ignores what users were attempting the changes. By contrast, the privilege system is strictly user-centric, and gates a lot of activities which don't involve any updates or threaten integrity.
The privilege system mainly controls, per user, what individual repository contents they are allowed to see / read from, what they are allowed to update, and what routines they are allowed to execute; it also controls other aspects of their possible activity. The concerns here are analagous to privileges on a computer's file system, or a typical SQL database.
This official specification of the Rosetta DBMS includes full ACID compliance as part of the core feature set; moreover, all types of changes within a repository are subject to transactions and can be rolled back, including both data manipulation and schema manipulation; moreover, an interrupted session with a repository must result in an automatic rollback, not an automatic commit.
It is important to point out that any attempt to implement the Rosetta DBMS (a Rosetta Engine) which does not include full ACID compliance, with all aspects described above, is not a true Rosetta DBMS implementation, but rather is at best a partial implementation, and should be treated with suspicion concerning reliability. Of course, such partial implementations will likely be made and used, such as ones implemented over existing database products that are themselves not ACID compliant, but you should see them for what they are and weigh the corruption risks of using them.
Each individual instance of the Rosetta DBMS is a single process virtual machine, and conceptually only one thing is happening in it at a time; each individual Rosetta D statement executes in sequence, following the completion or failure of its predecessor. During the life of a statement's execution, the state of the virtual machine is constant, except for any updates (and side-effects of such) that the statement makes. Breaking this down further, a statement's execution has 2 sequential phases; all reads from the environment are done in the first phase, and all writes to the environment are done in the second phase. Therefore, regardless of the complexity of the statement, and even if it is a multi-update statement, the final values of all the expressions to be assigned are determined prior to any target variables being updated. Moreover, all functions may not have side-effects, so that avoids complicating the issue due to environment updates occuring during their invoker statement's first phase.
In account to situations where external processes are concurrently using the same persistent (and externally visible) repository as a Rosetta DBMS instance, the Rosetta DBMS will maintain a lock on the whole repository (or appropriate subset thereof) during any active read-only and/or for-update transaction, to ensure that the transaction sees a consistent environment during its life. The lock is a shared lock if the transaction only does reading, and it is an exclusive lock if the transaction also does writing. Speaking in terms of SQL, the Rosetta DBMS supports only the serializable transaction isolation level.
Note that there is currently no official support for using Rosetta in a multi-threaded application, where its structures are shared between threads, or where multiple thread-specific structures want to use the same repositories. But such support is expected in the future.
No multi-update statement may target both catalog and non-catalog variables. If you want to perform the equivalent of SQL's "alter" statement on a relation variable that already contains data, you must have separate statements to change the definition of the relation variable and change what data is in it, possibly more than one of each; the combination can still be wrapped in an explicit transaction for atomicity.
Transactions can be nested, by starting a new one before concluding a previous one, and the parent-most transaction has the final say on whether all of its committed children actually have a final committed effect or not. The layering of transactions can involve any combination of explicit and implicit transactions (the combination should behave intuitively).
The lifetimes of all transactions in Rosetta D (except those declared in anonymous routines) are bound to specific lexical scopes, such that they begin when that scope is entered and end when that scope is exited; if the scope is exited normally, its transaction commits; if the scope terminates early due to a thrown exception, its transaction rolls back.
Each Rosetta D named routine as a whole (being a lexical scope), whether built-in and user-defined, is implicitly atomic, so invoking one will either succeed or have no side-effect, and the environment will remain frozen during its execution, save for the routine's own changes. The implicit transaction of a function is always read-only, and the implicit transaction of a procedure is either read-only or for-update depending on what it wants to do. Each try-block is also implicitly atomic, committing if it exits normally or rolling back if it traps an exception.
Every Rosetta D statement (including multi-update statements) is atomic; all parts of that statement and its child expressions will see the same static view of the environment; if the statement is an update, either all parts of that update will succeed and commit, or none of it will (accompanied by a thrown exception) and no changes are left.
Explicit atomic statement blocks can also be declared within a routine.
Rosetta D also supports the common concept of explicit open-ended transaction statements that start or end transactions which are not bound to lexical scopes; however, these statements may only be invoked within anonymous routines, that an application invokes directly, and not in any named routines, nor within atomic statement blocks in anonymous routines.
While scope-bound transactions always occur entirely within one invocation of the DBMS by an application, the open-ended transactions are intended for transactions which last over multiple DBMS invocations of an application.
All currently mounted repositories (persistent and temporary both) are joined at the hip with respect to transactions; a commit or rollback is performed on all of them simultaneously, and a commit either succeeds for all or fails for all (a repository suddenly becoming inaccessable counts as a failure). Note that if a Rosetta DBMS implementation can not guarantee such synchronization between multiple repositories, then it must refuse to mount more than one repository at a time under the same virtual machine (users can still employ multiple virtual machines, that are not synchronized); by doing one of those two actions, a less capable implementation can still be considered reliable and recommendable.
Certain Rosetta D commands can not be executed within the context of a parent transaction; in other words, they can only be executed directly by an anonymous routine, the main examples being those that mount or unmount a persistent repository; this is because such a change in the environment mid-transaction would result in an inconsistent state.
Rosetta D lets you explicitly place locks on resources that you don't want external processes to change out from under you, and these locks do not automatically expire when transactions end; or maybe they do; this feature has to be thought out more.
TODO.
TODO.
Rosetta D is designed for a specific virtual environment that is implemented by a DBMS (database management system). This environment is home to zero or more data repositories, each of which users may create, have a dialog with (over a connection), and delete; the components of the dialog, including queries and updates of the database, are the scope of the "D proper" language, and the other actions framing the dialog are the "D plus extra".
From an application's point of view, a DBMS is a library that provides services for storing data "some where" (which may be in memory, or the file system, or a network service, depending on implementation), like using files but more abstract and flexible; its API provides functions or methods for reading data from and writing data to the store. This API takes richly structured commands which are written in Rosetta D, either AST (abstract syntax tree) form or string form. Considering the distribution that contains the Language document you are reading now, Rosetta is the main API that uses Rosetta D, and Rosetta::Model provides the AST representation of Rosetta D.
A database is a fully self-contained and fully addressable entity. Fully self-contained means that nothing in the database depends on anything that is external to the database (such as in type or constraint definitions), save the DBMS implementing that database. Fully addressable means that the database is what an application opens a "data source" connection to, and its address can include such things as a file name or network server location or abstract DSN, depending on the implementation.
A database is a usually-persistent container for relvars (relation variables), in which all kinds of data are stored, and it provides relational operators for querying, updating, creating, and deleting those relvars. A database also stores user-defined data types and operators for working with them, and relvars can be defined in terms of those user-defined types (as well as built-in types). A database also defines various kinds of logical constraints that must be satisfied at all times, some system defined and some user defined, which complete the picture such that relvars are capable on their own of modelling anything in the real world. A database also defines users that are authorized to access it, mediated by the DBMS.
Rosetta D is a low-sugar language, such that its string form, which will be used for illustrative purposes in this documentation, has a very simple grammar and visually corresponds one-to-one with its abstract syntax tree.
All Rosetta D types and operators are grouped into a hierarchical name space for simplicity and ease of use; the fully-qualified name of any type and operator includes its namespace hierarchy,
with the highest level namespace appearing first (left to right).
You are recommended to use the fully-qualified names at all times (eg:
root.branch.leaf),
although you may also use partially qualified (eg:
branch.leaf) or unqualified versions (eg:
leaf) if they are unambiguous.
For that matter,
all standard relvars and constraints are likewise in that namespace hierarchy.
All Rosetta D entity identifiers,
built-in and user-defined,
including names of types,
operators,
relvars,
constraints,
and users,
are all case-sensitive and may contain characters from the entire Unicode 4+ character repertoire (even whitespace),
in each level of their namespace.
But fully or partially qualified identifiers always use the radix point character (.) to delimit each level.
Each namespace level may be formatted either with or without double-quote (") delimiters,
if said name only contains non-punctuation and non-whitespace characters; if it does contain either of those,
then it must always appear in delimited format.
All built-in entities only use characters that don't require delimiters (the upper and lowercase letters A-Z,
and the underscore,
and sometimes are partially composed of the digits 0-9),
and your code will be simpler if you do likewise.
All built-in type and possrep names conform to best practices for Perl package names (eg:
CharStr),
and all built-in names for operators,
constraints,
relvars,
and users,
conform to best practices for Perl routine and variable names (eg:
the_x),
and certain pre-defined constant value names conform to best practices (eg:
TRUE).
No built-in operators have symbols like "+" or "=" as names,
but rather use letters,
"add" and "eq" in this case.
All Rosetta D expressions are formatted in prefix notation,
where the operator appears before (left to right) all of its arguments,
and the argument list is delimited by parenthesis and delimited by commas (eg:
<op>( <arg>,
<arg> )).
This is like most programming languages but unlike most logical or mathematical expressions,
which use infix notation (eg:
<arg> <op> <arg>).
In addition,
all arguments are named (formatted in name/value pairs),
rather than positional,
so they can be passed in any order (eg:
<op>( <arg-name> => <arg-val> )),
and so the expressions are more self-documenting about what the arguments mean (eg: source vs target).
As an extension to this,
if an operator takes a variable number of arguments that are all being used for the same purpose (eg: a list of numbers to add,
or a list of relations to join),
then those are collected into a single named argument whose value is a parenthesized and comma-delimited but un-ordered list (eg:
<op>( <arg-name> => (<arg-val>,
<arg-val>) )).
The root level of a database's name hierarchy contains these 4 name-spaces, which are searched in top-down order when trying to resolve unqualified entity identifiers:
system
All system-defined entities go here, including built-in data types and operators, and the catalog relvars that allow user introspection of the database using just relational operators (analagous to SQL's "information schema", but the provided meta-data is always fully decomposed), and constraints on the above.
The standard way to create, alter, or drop user-defined entities is to update the catalog relvars concerning them (although some short-hand "create", etc, operators are provided to simplify those tasks). It is like the user-defined entities are views defined in terms of the catalog relvars, and so explicitly changing the former results in implicitly changing the latter.
For uniformity, the system-defined entities are also listed in the catalog relvars (or, for the types, at least their interfaces are), but constraints on the catalog relvars forbid users from updating or removing the built-ins, or adding entities that say they are built-ins.
local
All persistent user-defined entities go here,
including real and virtual relvars,
types,
operators,
and constraints.
This is the "normal" or "home" namespace for users.
All entities here may only be defined in terms of either
system entities or other entities here.
Typically,
the next name space level down under
local would be functionally similar to a list of schemata as larger SQL databases typically provide so that each of a database's users can have a separate place for the types,
relvars,
etc,
that they create.
In fact,
to best be able to represent various existing DBMSs that have anywhere from zero to 2 or 3 such name spaces,
Rosetta D allows you to have an arbitrary number of such intermediate name space levels,
or use none at all.
In fact,
unless you actually need these intermediate levels,
it is highly recommended that you don't use them at all,
to reduce complexity.
But as I mentioned earlier,
unless the database has more than one entity with the same unqualified or semi-qualified name,
you can just use those shorter names everywhere,
which results in the optional hierarchy being abstracted away.
temporary
All user-defined entities go here whose fates are tied to open connections; each connection to a database has its own private instance of this name-space,
and its contents disappear when the connection is closed.
These entities can be all of the same kinds as those that go in
local.
They can be defined in terms of
local entities,
but the reverse isn't true.
Generally,
temporary is the name-space for entities that are specific to the current application,
but that it makes sense for them to exist within the Tutorial D virtual environment for efficiency.
remote
If the current DBMS has support for federating access to external databases,
effectively by "mounting" their contents within the current database as an extension to it,
so users with a connection to the current database can access those other databases through the same connection,
then those contents appear under
remote.
This may or may not count as the current DBMS being a proxy.
In terms of a hypothetical federated DBMS that lets you use a single "connection" to access multiple remote databases at once,
such as for a database cloning utility or a multiplexer,
all of the interesting contents would be
remote,
and the
local name space would be empty.
Typically,
the next name space level down under
remote will contain a single name per distinct mounted external database,
and then below each of those may be that database's
local items,
or alternately and more likely we would see literal
system,
local,
etc folders like our own root.
This feature is more experimental and has yet to be fleshed out.
Types and relvars would then have their unqualified names sitting just below the above name spaces,
per root space; so,
for example,
we would have fully qualified names like
system.CharStr or
local.suppliers; simple.
However,
operators have mandatory "package" name-spaces under which their otherwise unqualified names would go,
and these are usually identical to the data type name that they are primarily associated with.
So,
for example,
we would have fully qualified names like
system.NumInt.add or
system.CharStr.substr or
system.Relation.join.
Note that type selector operators and such would be named in exactly the same way.
Constraints on data types, that are specifically part of the definitions of the data types, have their names package-qualified like operators, while constraints on relvars don't have to be, or aren't.
Rosetta D is a strongly typed language, where every value and variable is of a specific data type, and every operator and expression is defined in terms of specific data types. A variable can only store a value which is of its type, and every operator can only take argument values or expressions that are the same types as its parameters.
Values can only be explicitly converted from one data type to another (such as when comparing two values for equality) using explicitly defined operators for that purpose (this includes selectors, which typically convert from character strings to something else), and value type conversions can not happen implicitly; the sole exception to this is if one of the two involved types is defined as a constraint-restricted sub-type of the other, or if both are similarly restricted from a common third type.
All data types in Rosetta D fit into 3 main categories, which are scalar types, tuple types, and relation types. For our purposes, every data type that is not a tuple type or relation type is a scalar type.
A scalar type is a named set of scalar values; its sub-types mainly include booleans, numerics, character strings, bit strings, temporals, spacials, and any custom / user-defined data types.
A custom data type can be defined as a sub-type of a system-defined or user-defined scalar type that has extra constraints, which are named; for example, to restrict its set of scalar values to a sub-set of its parent type's set of scalar values (eg: restrict from an integer to an integer that has to be in the range 1 to 100).
Alternately, a custom data type can be defined to have one or more named possreps (possible representations), each of being different from the others in appearance but being identical in meaning; every possible value of that type should be renderable in each of its possible representations. For example, we could represent a point in space using either cartesian coordinates or polar coordinates. Each possrep is defined in terms of a list of components, where each component has a name and a type, and that type is some other system-defined or user-defined type. Such a custom data type can also have named constraints as part of its definition (eg: the point may not be more than a certain distance from the origin).
You can not declare named custom tuple types or relation types, as you can with scalar types, but rather all values and variables of these types carry with them a definition provided by a tuple or relation generator operator.
A tuple value or variable consists of an unordered set of zero or more attributes, where each attribute consists of a name (that is distinct within the tuple) and a type (that type is some other system-defined or user-defined type); it also has exactly one value per attribute which is of the same type as the attribute.
A relation value or variable consists of an unordered set of zero or more attributes, where each attribute consists of a name (that is distinct within the relation) and a type (that type is some other system-defined or user-defined type); it also has an unordered set of zero or more tuples whose set of attributes are all identical to those of the relation, and where every tuple value is distinct within the relation.
Generally speaking, any two data types are considered to be mutually exclusive, such that a data value can only be in the domain of one of the types, and not the other. (The exception is if type A is declared to be a restriction of type B, or both are restrictions of type C.)
If you want to compare two values that have different data types, you must explicitly cast one or both values into the same data type. Likewise, if you want to use a value of one type in an operation that requires a value of a different type, such as when assigning to a container, the value must be cast into the needed type. The details of casting depend on the two types involved, but often you must choose from several possible methods of casting; for example, when casting between a numeric and a character string, you must choose what numeric radix to use. However, no casting choice is necessary if the data type of the value in hand is a restriction of the needed data type.
IRL gains rigor from this requirement for strong typing and explicit casting methods because you have to be very explicit as to what behaviour is expected; as a result, there should be no ambiguity in the system and the depot manager should perform exactly as you intended. This reduces troublesome subtle bugs in your programs, making development faster, and making your programs more reliable. Your data integrity is greatly improved, with certain causes of corruption removed, which is an important goal of any data management system, and supports the ideals of the relational data model.
IRL gains simplicity from this same requirement, because your depot-centric routines can neatly avoid the combinatorial complexity that comes from being given a range of data types as values when you conceptually just want one type, and your code doesn't have to deal with all the possible cases. The simpler routines are easier for developers to write, as they don't have to worry about several classes of error detection and handling (due to improper data formats), and the routines would also execute faster since they do less actual work. Any necessary work to move less strict data from the outside to within the depot manager environment is handled by the depot manager itself and/or your external application components (the latter is where any user interaction takes place), so that work is un-necessary to do once the data is inside the depot manager environment.
IRL has 2 main classes of data types, which are opaque data types and transparent data types.
An opaque data type is like a black box whose internal representation is completely unknown to the user (and is determined by the depot manager), though its external interace and behaviour are clearly defined. Or, an opaque data type is like an object in a typical programming language whose attributes are all private. Conceptually speaking, all opaque data values are atomic and no sub-components are externally addressable for reading and changing, although the data type can provide its own specific methods or operators to extract or modify sub-components of an opaque data value. An example is extracting a sub-string of a character string to produce a new character string, or extracting a calendar-specific month-day from a temporal type.
A transparent data type is like a partitioned open box, such that each partition is a visibly distinct container or value that can be directly addressed for reading or writing. Or, an opaque data type is like an object in a typical programming language whose attributes are all public. Conceptually speaking, all transparent data types are named collections of zero or more other data types, as if the transparent data value or container was an extra level of namespace. Accessing these sub-component partitions individually is unambiguous and can be done without an accessor method. An example is a single element in an array, or a single member of a set, or a single field in a tuple, or a single tuple in a relation.
Opaque data types are further classified into unrestricted opaque data types and restricted opaque data types. An unrestricted opaque type has the full natural domain of possible values, and that domain is infinite in size for most of them; eg, the unrestricted numerical type can accomodate any number from negative to positive infinity, though the unrestricted boolean type still only has 2 values, false and true. A restricted opaque type is defined as a sub-type of another opaque type (restricted or not) which excludes part of the parent type's domain; eg a new type of numerical type can be defined that can only represent integers between 1 and 100. A trivial case of a restricted type is one declared to be identical in range to the parent type, such as if it simply served as an alias; that is also how you always declare a boolean type. A restricted type can implicitly be used as input to all operations that its parent type could be, though it can only be used conditionally as output.
Note that, for considerations of practicality, as computers are not infinite, IRL requires you to explicitly declare a container (but not a value) to be of a restricted opaque type, having a defined finite range in its domain, though that domain can still be very large. This allows depot manager implementations to know whether or not they need to do something very inefficient in order to store extremely large possible values (such as implement a numeric using a LOB), or whether a more efficient but more limited solution will work (using an implementation-native numeric type); stating your intentions by defining a finite range helps everything work better.
Transparent data types are further classified into collective and disjunctive transparent data types. A collective transparent data type is what you normally think of with transparent types, and includes arrays, sets, relations, and tuples; each one can contain zero or more distinct sub-values at once. A disjunctive transparent data type is the means that IRL provides to simulate both weak data types and normal-but-nullable data types. It looks like a tuple where only one field is allowed to contain a non-empty value at once, and it has a distinct field for each possible strong data type that the weak type can encompass (one being of the null type when simulating nullability); it actually has one more field than that, always valued, which says which of the other fields contains the important value.
IRL is strongly typed, following the relational model's ideals of stored data integrity, and the actual practice of SQL and many database products, and Rosetta's own ideals of being rigorously defined. However, its native set of data types also includes ones that have the range of typical weak types such as some database products and languages like Perl use.
A data type is a set of representable values. All data types are based on the concept of domains; any variable or literal that is of a particular data type may only hold a value that is part of the domain that defines the data type. IRL has some native data types that it implicitly understands (eg, booleans, integers, rational numbers, character strings, bit strings, arrays, rows, tables), and you can define custom ones too that are based on these (eg, counting numbers, character strings that are limited to 10 characters in length, rows having 3 specific fields).
All Rosetta::Model "domain" Nodes (and schema objects) are user defined, having a name that you pick, regardless of whether the domain corresponds directly to a native data type, or to one you customized; this is so there won't be any name conflicts regardless of any same named data types that a particular database implementation used in conjunction with Rosetta::Model may have.
It is the general case that every data type defines a domain of values that is mutually exclusive from every other data type; 2 artifacts having a common data type (eg, 2 character strings) can always be compared for equality or inequality, and 2 artifacts of different data types (eg, 1 character string and 1 bit string) can not be compared and hence are always considered inequal. Following this, it is mandatory that every native and custom data type define the 'eq' (equal) and 'ne' (not equal) operators for comparing 2 artifacts that are of that same data type. Moreover, it is mandatory that no data type defines for themselves any 'eq' or 'ne' operators for comparing 2 artifacts of different data types.
In order to compare 2 artifacts of different data types for equality or inequality, either one must be cast into the other's data type, or they must both be cast into a common third data type. How exactly this is done depends on the situation at hand.
The simplest casting scenario is when there is a common domain that both artifacts belong to, such as happens when either one artifact's data type is a sub-domain of the other (eg, an integer and a rational number), or the data types of both are sub-domains of a common third data type (eg, even numbers and square whole numbers). Then both artifacts are cast as the common parent type (eg, rationals and integers respectively).
A more difficult but still common casting scenario is when the data types of two artifacts do not have a common actual domain, but yet there is one or more commonly known or explicitly defined way of mapping members of one type's domain to members of the other type's domain. Then both artifacts can be cast according to one of the candidate mappings. A common example of this is numbers and character strings, since numbers are often expressed as characters, such as when they come from user input or will be displayed to the user; sometimes characters are expressed as numbers too, as an encoding. One reason the number/character scenario is said to be more difficult is due to there being multiple ways to express numbers in character strings, such as octal vs decimal vs hexadecimal, so you have to explicitly choose between multiple casting methods or formats for the version you want; in other words, there are multiple members of one domain that map to the same member of another domain, so you have to choose; a cast method can not be picked simply on the data type of the operands.
A different casting scenario occurs when one or both of the data types are composite types, such as 2 tuples that are either of different degrees or that have different attribute names or value types. Dealing with these involves mapping all the attributes of each tuple against the other, with or without casting of the individual attributes, possibly into a third data type having attributes to represent all of those from the two.
Most data types support the extraction of part of an artifact to form a new artifact, which is either of the same data type or a different one. In some cases, even if 2 artifacts can't be compared as wholes, it is possible to compare an extract from one with the other, or extractions from both with each other. Commonly this is done with composite data types like tuples, where some attributes are extracted for comparison, such when joining the tuples, or filtering a tuple from a relation.
Aside from the 'eq' and 'ne' comparison operators, there are no other mandatory operators that must be defined for a given custom data type, though the native ones will include others. However, it is strongly recommended that each data type implement the 'cmp' (comparison) operator so that linearly sorting 2 artifacts of that common data type is a deterministic activity.
IRL requires that all data types are actually self-contained, regardless of their complexity or size. So nothing analagous to a "reference" or "pointer" in the Perl or C or SQL:2003 sense may be stored; the only valid way to say that two artifacts are related is for them to be equal, or have attributes that are equal, or be stored in common or adjoining locations.
IRL natively supports the special NULL data type, whose value domain is by definition mutually exclusive of the domains of all other data types; in practice, a NULL is distinct from all possible values that the other IRL native primitive types can have. But some native complex types and user customized types could be defined where their domains are a super-set of NULL; those latter types are described as "nullable", while types whose domains are not a super-set of NULL are described as "not nullable".
The NULL data type represents situations where a value of an arbitrary data type is desired but none is yet known; it sits in place of the absent value to indicate that fact. NULL artifacts will always explicitly compare as being unequal to each other; since they all represent unknowns, we can not logically say any are equal, so they are all treated as distinct. This data type corresponds to SQL's concept of NULL, and is similar to Perl's concept of "undef". A NULL does not natively cast between any data types.
Rosetta::Model does not allow you to declare "domain" Nodes that are simply of or based on the data type NULL; rather, to use NULL you must declare "domain" Nodes that are either based on a not-nullable data type unioned with the NULL type, or are based on a nullable data type. The "domain" Node type provides a short-hand to indicate the union of its base type with NULL, in the form of the boolean "is_nullable" attribute; if the attribute is undefined, then the nullability status of the base data type is inherited; if it is defined, then it overrides the parent's status.
All not-nullable native data types default to their concept of empty or nothingness, such as zero or the empty string. All nullable native types, and all not-nullable native types that you customize with a true is_nullable, will default to NULL. In either case, you can define an explicit default value for your custom data type, which will override those behaviours; details are given further below.
These are the simplest data types, from which all others are derived:
BOOLEAN
This data type is a single logical truth value, and can only be FALSE or TRUE. Its concept of nothingness is FALSE.
NUMERIC
This data type is a single rational number. Its concept of nothingness is zero. A subtype of NUMERIC must specify the radix-agnostic "num_precision" and "num_scale" attributes, which determine the maximum valid range of the subtype's values, and the subtype's storage representation can often be derived from it too.
The "num_precision" attribute is an integer >= 1; it specifies the maximum number of significant values that the subtype can represent. The "num_scale" attribute is an integer >= 0 and <= "num_precision"; if it is >= 1, the subtype is a fixed radix point rational number, such that 1 / "num_scale" defines the increment size between adjacent possible values; the trivial case of "num_scale" = 1 means the increment size is 1, and the number is an integer; if "num_scale" = 0, the subtype is a floating radix point rational number where "num_precision" represents the product of the maximum number of significant values that the subtype's mantissa and exponent can represent. IRL does not currently specify how much of a floating point number's "num_precision" is for the mantissa and how much for the exponent, but commonly the exponent takes a quarter.
The meanings of "precision" and "scale" are more generic for IRL than they are in the SQL:2003 standard; in SQL, "precision" (P) means the maximum number of significant figures, and the "scale" (S) says how many of those are on the right side of the radix point. Translating from base-R (eg, R being 10 or 2) to the IRL meanings are as follows (assuming negative numbers are allowed and zero is always in the middle of a range). For fixed-point numerics, a (P,S) becomes (2*R^P,R^S), meaning an integer (P,0) becomes (2*R^P,1). For floating-point numerics, a (P) sort-of becomes (2*R^P,0); I say sort-of because SQL:2003 says that the P shows significant figures in just the mantissa, but IRL currently says that the size of the exponent eats away from that, commonly a quarter.
As examples, a base-10 fixed in SQL defined as [p=10,s=0] (an integer in -10^10..10^10-1) becomes [p=20_000_000_000,s=1] in IRL; the base-10 [p=5,s=2] (a fixed in -1_000.00..999.99) becomes [p=200_000,s=100]; the base-2 [p=15,s=0] (a 16-bit int in -32_768..32_767) becomes [p=65_536,s=1]; the base-2 float defined as [p=31] (a 32-bit float in +/-8_388_608*2^+/-128) becomes [p=4_294_967_296,s=0].
A subtype of NUMERIC may specify the "num_min_value" and/or "num_max_value" attributes, which further reduces the subtype's valid range. For example, a minimum of 1 and maximum of 10 specifies that only numbers in the range 1..10 (inclusive) are allowed. Simply setting the minimum to zero and leaving the maximum unset is the recommended way in IRL to specify that you want to allow any non-negative number. Setting the minimum >= 0 also causes the maximum value range allowable by "num_precision" to shift into the positive, rather than it being half there and half in the negative. Eg, an (P,S) of (256,1) becomes 0..255 when the minimum = 0, whereas it would be -128..127 if the min/max are unset.
CHAR_STR
This data type is a string of characters. Its concept of nothingness is the empty string. A subtype of CHAR_STR must specify the "char_max_length" and "char_repertoire" attributes, which determine the maximum valid range of the subtype's values, and the subtype's storage representation can often be derived from it too.
The "char_max_length" attribute is an integer >= 0; it specifies the maximum length of the string in characters (eg, a 100 means a string of 0..100 characters can be stored). The "char_repertoire" enumerated attribute specifies what individual characters there are to choose from (eg, Unicode 4.1, Ascii 7-bit, Ansel; Unicode is the recommended choice).
A subtype of CHAR_STR may specify the "char_min_length" attribute, which means the length of the character string must be at least that long (eg, to say strings of length 6..10 are required, set min to 6 and max to 10).
STR_BIT
This data type is a string of bits. Its concept of nothingness is the empty string. A subtype of BIT_STR must specify the "bit_max_length" attribute, which determines the maximum valid range of the subtype's values, and the subtype's storage representation can often be derived from it too.
The "bit_max_length" attribute is an integer >= 0; it specifies the maximum length of the string in characters (eg, an 8000 means a string of 0..8000 bits can be stored).
A subtype of BIT_STR may specify the "bit_min_length" attribute, which means the length of the bit string must be at least that long (eg, to say strings of length 24..32 are required, set min to 24 and max to 32).
A subtype of any of these native primitive types can define a default value for the subtype, it can define whether the subtype is nullable or not (they are all not-nullable by default), and it can enumerate an explicit list of allowed values (eg, [4, 8, 15, 16, 23, 42], or ['foo', 'bar', 'baz'], or [B'1100', B'1001']), one each in a child Node (these must fall within the specified range/size limits otherwise defined for the subtype).
IRL has native support for a special SCALAR data type, which is akin to SQLite's weakly typed table columns, or to Perl's weakly typed default scalar variables. This data type is a union of the domains of the BOOLEAN, NUMERIC, CHAR_STR, and BIT_STR data types; it is not-nullable by default. Its concept of nothingness is the empty string.
Go to Rosetta for the majority of distribution-internal references, and Rosetta::SeeAlso for the majority of distribution-external references.. | http://search.cpan.org/dist/Rosetta/lib/Rosetta/Language.pod | crawl-002 | en | refinedweb |
Re: Finding the Control Panel Applets With Change Icon Dialogue Box
From: David Candy (david_at_mvps.org)
Date: 06/29/04
- Next message: David Candy: "Re: Windows animation speed"
- Previous message: Dave Higton: "Re: address bar"
- In reply to: Chad Harris: "Re: Finding the Control Panel Applets With Change Icon Dialogue Box"
- Messages sorted by: [ date ] [ thread ]
Date: Tue, 29 Jun 2004 19:17:28 +1000
XP incorporates many of the concepts coming up. XP's search is about finding relevent content. EG It only searches web pages for things you would see viewing the page and not for things that IE uses to render the page (web pages are text files). It's not meant for hackers but for users who are searching for Aunt Mary's photo or a sales manager searching for an old tender document. This is really smart searching. The web sites are by morons who lack the ability to understand how MS improved search. They want it to work in the old stupid searching way. They also don't realise that the old way didm't do what they think it did. It pisses me of too but I realise that the things we do aren't normal things to do with computers. Montoya may compain about his mum's car engine but it is designed to drive to the shop not race at Monte Carlo. His mum would be pissed if she had to replace the engine every week and every time she starts going she does donuts.
Have a look at the icons page here
To change My Computer icon get it clsid and look it up
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CLSID
These are the desktop ones only and override the system defaults if present
All of them are here
HKEY_CLASSES_ROOT\CLSID\
EG My Comp
HKEY_CLASSES_ROOT\CLSID\{20D04FE0-3AEA-1069-A2D8-08002B30309D}\DefaultIcon
NSE all have the string ShellFolder under there key. Search on it. But some get theirs from a desktop.ini. See my 98 web page.
-- ---------------------------------------------------------- 'Not happy John! Defending our democracy', "Chad Harris" <ddram32_nospam@yahoo.com> wrote in message news:eGCk6FbXEHA.2388@TK2MSFTNGP11.phx.gbl... > David-- > > There was no question in 1-3. I appreciate a lot your trying to make this > understandable to me, and will work with the info later on today. You > didn't say how you go after icons within XP if you want to change them; I > was looking for any ways to improve that over the hit and miss method I use > and the 3rd party apps i know about. I don't know if many others share my > frustration that Windows XP's "Search" is markedly erratic, but I suspect > some do since there are a number of web sites devoted to that topic. You > simply cannot rely on it to find all folders even when you have enabled it > maximally. I only hope MSFT's enthusiasm for improving the mediocre search > on its site announced yesterday by Mr. Gates will impact the search in its > next OS reloaded XP or Longhorn: > > > > Thanks, > > Chad Harris > > > > _____________________________________ > "David Candy" <david@mvps.org> wrote in message > news:%23QfjScSXEHA.796@TK2MSFTNGP10.phx.gbl... > Can yopu rephrase 1, 2, and 3. I don't see a question in them. > > 4. Take Control Panel. It is a Namespace Extension (NSE) like My Computer, > the Desktop (not the desktop folders), Fonts, Schedule Tasks, & Network > Neighbourhood. Folder Options (and the desktop IE icon) is a seperate type > of NSE and not called that. It is defined by registry entries. But there > needs to be some file with code in it to give it behaviour. > > Control Panel, and most of the ones I mention, are in Shell32.dll. These are > virtual folders. > > EG. There is no My Computer Folder but their is code in shell32 that looks > for drives and other things and makes it appear as a folder. With Fonts (I > think it also is shell32) the things it shows are also in the folder but > what you see is a program listing fonts not the contents of the folder, even > though they are closely linked. It is a NSE for the special font vieing > commands and so dragging a file in causes it to actually install the font - > the font's don't HAVE to be in the fonts folder to be displayed - it just > has to be installed. With My Docs it is identical to a file view it is a > program showing it (It's a NSE so you can have a custom property *** for > it, else it wouldn't need to be one). > > If you search the registry for ShellFolder you'll find all the ones on your > machine. There is also code in shdocvw that can allow anyone to make their > own simple NSE just by registry entries (called a Shell Instance Object). > It's simple because it has 2 behaviours (and sone sub options) only to > choose from. > > The Start Menu is another which is why dragging files to it cause a shortcut > not a move/copy. > > The shell namespace is different to the file namespace. > > In shell talk it is > Desktop\My Computer\C Drive\boot.ini > > in file talk > C:\boot.ini > > Programs tend to use file paths not shell paths. Explorer ISN'T a file > manager but a namespace browser (as is IE). > > -- > ---------------------------------------------------------- > > "Chad Harris" <ddram32_nospam@yahoo.com> wrote in message > news:%23ZecqJSXEHA.1440@TK2MSFTNGP12.phx.gbl... > > David-- > > > > I need some guidance and thanks--you made it easy to find ______cpl.dlls. > > Here's what I did, and I have a couple questions and would like to know if > > you go after icons that aren't favicons in Windows (and from 3rd party > > downloads and apps) how you do it and how you'd do this. > > > > 1) I have long been collecting icons by adding .ico onto urls, then > > changing > > dragging to the desktop and changing their name and storing them in a > > favicon folder (or dragging them out of TIFs before they disappear on a > > TIF > > clear or a reboot. > > 2) I haven't used the many 3rd party icon extractors with the system 32, > > zip, downloaded programs and many other folders where icon staches (native > > Windows and 3rd party) are availalbe but just hit or miss as with the > > system > > 32 folder. > > 3) With your tip I showed all hidden folders/protected Windows and file > > extensions and was able to get a list of 6 cpl.dlls. If you use XP's > > Search > > on System 32 it turns up 4 interestingly, and if you directly look there > > are > > 6. I don't know why search misses 2 of them, but that's not surprising. > > I > > see it miss folders all the time that I find directly even when you give > > it > > every possible chance to search. > > 4) What do you mean when you say "or else it's a NSE and most are backed > > by > > a dll file. Same story." I don't know what NSE stands for. Can you help > > me > > with this. Is NSE a source of .dlls with icons where I can harvest them? > > > > Thanks a lot for the help. > > > > Chad > > _______________________________________ > > "David Candy" <david@mvps.org> wrote in message > > news:OpLS7QLXEHA.2852@TK2MSFTNGP12.phx.gbl... > > No. cpl are dll files. Browse to the respective cpl and choose the icon in > > that. Else it's a NSE and most are backed by a dll file, Same story. > > > > -- > > ---------------------------------------------------------- > > > > "Chad Harris" <ddram32_nospam@yahoo.com> wrote in message > > news:upC8DLLXEHA.644@tk2msftngp13.phx.gbl... > > > That answers my question then. And going to copl files in system 32 > > > is > > > not going to give you anything to apply as an icon substitute. There is > > > no > > > way quick and direct way then to use those applets for icons. It would > > > have to be done by using an icon app, and imaging them and using the > > > bit > > > maps. > > > > > > Thanks David. > > > > > > Chad > > > > > > "David Candy" <david@mvps.org> wrote in message > > > news:OPK8lCLXEHA.1128@TK2MSFTNGP10.phx.gbl... > > > There is no control panel folder. It is contrructed from namespaces > > > defined > > > in the registry or cpl files in system32 directory. > > > > > > -- > > > ---------------------------------------------------------- > > > > > > "Chad Harris" <ddram32_nospam@yahoo.com> wrote in message > > > news:%23LQ58UKXEHA.1152@TK2MSFTNGP09.phx.gbl... > > > > I wanted to get a scanner icon from the control panel applet, and now > > > > the > > > > issue is for me whether I can find a folder with control panel applets > > > > to > > > > use for icons. I can find a scanner icon one in Scan Sof't's folder > > > > ,and > > > > on in the C:\WINDOWS\system32\stimon.exe folder in the system 32 > > > > folder > > > > to > > > > do this with the Change Icon Dialogue box (reached by right clicking > > > > the > > > > shortcut, but I couldn't find the Control Panel folder with the > > > > dialogue > > > > box. I could probably find an HP Scanner Icon. What I want to do is > > > > reach > > > > Control Panel Applets for Icons if this is possible. > > > > > > > > > > > > The Control Panel folder is listed in My Computer as an "Other System > > > > Folder" when you Use the Explorer view "Show in Groups." > > > > > > > > Does anyone know how to reach the Control Panel's folder from the > > > > Change > > > > Icon Dialogue Box if this can be done (right clicking a desktop > > > > shortcut > > > > and > > > > going through Properties? > > > > > > > > TIA, > > > > > > > > Chad Harris > > > > > > > > > > > > > > > > > > > >
- Next message: David Candy: "Re: Windows animation speed"
- Previous message: Dave Higton: "Re: address bar"
- In reply to: Chad Harris: "Re: Finding the Control Panel Applets With Change Icon Dialogue Box"
- Messages sorted by: [ date ] [ thread ] | http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.customize/2004-06/2345.html | crawl-002 | en | refinedweb |
Re: Visual Basic for Autorun?
From: J French (erewhon_at_nowhere.com)
Date: 06/08/04
- ]
Date: Tue, 8 Jun 2004 08:38:16 +0000 (UTC)
On Tue, 8 Jun 2004 16:11:44 +1000, "Michael Culley"
<mculley@NOSPAMoptushome.com.au> wrote:
>"J French" <erewhon@nowhere.com> wrote in message news:40c542bc.78643818@news.btclick.com...
>> Why more professional ?
>
>Take as an example a grid I wrote for one project.
Been there - very useful
>Time was set aside to write it and it was tested on its own and once it reached a
>certain state it was compiled and used in the main app. It reached a point where it was sealed and delivered. Changes still happen
>ocassionally but it is much more controlled. Now several apps use the same grid and it is impossible for developers to make sly mods
>to suit their application. If changes are required they can't be specific to one app.
Realistically the point is preventing other coders mucking things up
- it is a valid point
- but there is nothing to stop them mucking up the OCX
- assuming they can get into version control
>The alternative would be to copy the code from
>project to project and not really be sure of the changes made to each. Obviously this doesn't apply to all usercontrols and should
>only be used where appropriate, but even when usercontrols are very specific to a project I create a seperate ocx project that will
>only be used by the one exe.
I generally make another (local) copy of a UserControl and work on
that while enhancing a 'library' UC
It only goes into the 'library area' when I am sure of it
>
>Here's a few other reasons:
>If a programmer passes in an invalid parameter to my dll/ocx then I can raise an error and it breaks on their line of code, not
>mine.
True - I tend to prefer a simple MsgBox
- that goes back to writing library code well before Windows
>If their app closes due to an error the control still shuts down correctly. This basically means class_terminate fires so resources
>can be tidied up.
You mean at run time
>Extender properties (top, left etc) are not always available if the control is in the exe.
Circumstances ?
>Version tracking is easier, eg I can tell that my grid was updated on a certain date and that the last 3 versions of my app used the
>one version of the grid.
That is true - a major plus
>IDE runs quicker which is vital for larger projects. The one I'm working on now has 300 forms and is slow enough as it is. I have 2
>projects loaded into the IDE usually, I can't imagine how slow it would be if the code from its 11 projects was all loaded at once.
It would not necessarily be the code from 11 projects
>Friend functions. This on it's own is enough of an arguement :-)
True
>Class scope. You can define a class as private or public not createable. The second means it can only be created from within your
>dll, so you can specify a function has to be called to create an instance of your class. This is a bit like contructors in more of
>an oop language. Marking a class as private makes it possible for me to use it to support a public class/control without it being
>visible to the outside world.
Yes - private Classes - Scope is a PITA in VB
>Usercontrols don't get those diagonal stripes on them when you modify the project.
Those can be unsettling at first - now I just keep punching Ctl F4
>Functions can be marked as hidden.
Not sure about that one - that may be a VB6 enhancement
>> I most certainly know /how/ to work with them, I simply do not like
>> working with them.
>
>OK, that wasn't really a fair statement of mine. On occasions ActiveX dll/ocxs can cause problems but they are usually fairly easy
>to get around. Maybe you didn't persist with them enough to get around the problems.
I did not have the motivation
>> Where I need DLLs for complex things used by many different EXEs then
>> I use Delphi.
>
>Personally I would think this is not a very good solution. I presume you need to use declare statements and have to know the names
>of the functions. With activeX you just add a reference and can see all the functions. Intellisense works and you can raise errors
>in the dll to be caught in the exe. Also, everything is in the one language. Generally I only use a second language if there is a
>really good reason and that's usually C because the main language couldn't do what C can.
Yes, you need the declare statements, often I wrap them in a
mini-Class - you get the Intellisense Ok
True one has to have the Declares, but with AX you need the Type Libs
Personally I like Delphi as a second language, it is a bit like VB on
steroids.
When writing Delphi DLLs I know that I am going into 'Tiger Country'
- and am extra careful, as I am aware that any stupidity on my part
will be hard to trace from the main App
>> But that is not very important nowadays.
>
>I agree, that's why we stopped doing it.
Sensible - a bit like not relying on incremental backups
>> So far I have only heard of one compelling reason for converting
>> UserControls into OCXes
>> - which was to prevent junior programmers tampering with the code
>
>Not just the junior programmers :-) At least with the juniors you can tell them off, the seniors have a tendancy to tell you to piss
>off :-)
How do you stop them just checking out the OCX's source
- years ago in another environment we came up with a solution
- each library module had an 'owner' (author) and a 'co-owner'
- so only two people were permitted to make mods
(everyone could see and play with the code, but if they altered the
library version then they were dead)
>> I am intrigued why you think it 'professional' to turn UserControls
>> into OCXes - personally I consider it slightly dangerous.
>
>I've done it fairly extensively and it works quite well as long as you keep good control over it. I just like they way that the
>control is encapsulated into a neat little project with good control over how developers can use it.
Ok, I can see that in your environment control is extremely important
- so that you are willing to put up with the annoyances
- and have probably put in some layers of automation
Quite a lot of my UserControls are also used by another software
house, and they /are/ OCXed by them - for most of the reasons you give
(their boss - who once worked for me - is quite rightly a control
freak)
There are other aspects though, like you I tend to develop
UserControls as part of the App, they are great for 'encapsulation'
Some of those eventually turn into 'generics' but many don't.
Even with things that may become 'generics' I'm developing them in
conjunction with the App, so having them as part of the Project is
very convenient.
One can quickly add properties and functionality and tracing is much
easier.
Often the App specific UCs are so much part of the App, that to have
them separately compiled would be a PITA
Once a UC has been promoted to the status of 'generic' it gets its own
location and Testbed. I then regard it as a .OBJ file.
Heck - I'm sure MS could have done a better job
- the Registry is such a fragile construct
- ] | http://www.tech-archive.net/Archive/VB/microsoft.public.vb.general.discussion/2004-06/1449.html | crawl-002 | en | refinedweb |
Timothy William Bray is a Canadian software developer and entrepreneur. He co-founded Open Text Corporation and Antarctica Systems. Currently, Tim is the Director of Web Technologies at Sun Microsystems.
Early life
Tim was born on June 21, 1955 in Alberta, Canada. He grew up in Beirut, Lebanon and graduated in 1981 with a Bachelor of Science (double major in Mathematics and Computer Science) from the University of Guelph in Guelph, Ontario. Tim described his switch of focus from Math to Computer Science this way: "In math I’d worked like a dog for my Cs, but in CS I worked much less for As — and learned that you got paid well for doing it."[1]
In June of 2009, he received an honorary Doctor of Science degree from the University of Guelph[2]..
Entrepreneurship
Waterloo Maple
Tim Bray served as the part-time CEO of Waterloo Maple Inc. during 1989-1990. Waterloo Maple is the developer of the popular Maple mathematical software.
Open Text Corporation
Bray left the new OED project in 1989 to co-found Open Text Corporation with two colleagues. Open Text was the commercialization vehicle for the high-performance search engine employed in the new OED project.
Tim recalled that “in 1994 I heard a conference speaker say that search engines would be big on the Internet, and in five seconds all the pieces just fell into place in my head. I realized that we could build such a thing with our technology.”[3] Thus in 1995, Open Text released the Open Text Index, one of the first popular commercial web search engines. Open Text Corporation is now publicly traded on the Nasdaq under the symbol OTEX. From 1991 until 1996, Tim held the position of Senior Vice President - Technology.
Textuality
Tim Bray, along with Lauren Wood, ran Textuality, a successful consulting practice in the field of web and publishing technology. He was contracted by Netscape in 1999 in part to create a new version, with Ramanathan V. Guha, of Meta Content Framework called Resource Description Framework (RDF), that used the XML language.
Antarctica Systems
In 1999 he founded Antarctica Systems, a Vancouver, Canada-based company that specializes in visualization-based business analytics.
Standardization efforts
XML
As an Invited Expert at the World Wide Web Consortium between 1996 and 1999, Bray co-edited the XML and XML namespace specifications. Halfway through the project Bray accepted a consulting engagement with Netscape, provoking vociferous protests from Netscape competitor Microsoft (who had supported the initial moves to bring SGML to the web.) Bray was temporarily asked to resign the editorship. This led to intense dispute in the Working Group, eventually solved by the appointment of Microsoft's Jean Paoli as third co-editor..[4]
W3C TAG
Between 2001 and 2004 he served as a Tim Berners-Lee appointee on the W3C Technical Architecture Group.[5]
Atom
Until October 2007, Tim was co-chairing, with Paul Hoffman, the Atom-focused Atompub Working Group of the Internet Engineering Task Force. Atom is a web syndication format developed to address perceived deficiencies with the RSS 2.0 format.
Software tools
Bray has written many software applications, including Bonnie, a Unix file system benchmarking tool, Lark, the first XML Processor, and APE the Atom Protocol Exerciser.
See also
References
- ^ Joe Cellini. "Biomedical Visualization". Apple Inc... Retrieved on 2008-10-26.
- ^ "Eight to Receive Honorary Degrees". June 1, 2009..
- ^ "Biomedical Visualization". Apple Inc... Retrieved on 2008-10-26.
- ^ Tim Bray. "ongoing · The Real AJAX Upside"... Retrieved on 2008-10-26.
- ^ David Becker. "How does XML measure up?". CNET Networks.. Retrieved on 2008-10-26.
External links
- ongoing - Tim Bray's weblog
- ongoing - Software - Summary Page on Tim Bray's weblog
- Textuality
- Lark - the first XML Processor
- Tim Bray @ FOWA Expo 08 - The Fear Factor
This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer) | http://www.answers.com/tim%20bray | crawl-002 | en | refinedweb |
Re: SBS R2 Premium - Monitoring & Reports Broken.
- From: Phil E. <PhilE@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Wed, 14 Feb 2007 15:24:05 -0800
I should have been a little more clear in my language as far as timelines are
concerned.
It is a fresh install, however the server has been running for about two or
three weeks now.
No reports have showed up in my inbox since inception. It is one of our
boxes, so it sat behind while we took care of client needs first.
So, now I am looking to figure out why it sits there with the default
message that shows up just after M&R is installed.
Thanks for that though...
A provider, PerfProv, has been registered in the WMI namespace,
ROOT\CIMV2\MicrosoftHealthMonitor\PerfMon,.
That is it!
I would like to get the reports going!
Thanks for your help...
- References:
- Re: SBS R2 Premium - Monitoring & Reports Broken.
- From: Les Connor [SBS MVP]
- Prev by Date: Cannot install certificate on Vista
- Next by Date: Re: terminal services
- Previous by thread: Re: SBS R2 Premium - Monitoring & Reports Broken.
- Next by thread: Re: SBS R2 Premium - Monitoring & Reports Broken.
- Index(es): | http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.sbs/2007-02/msg02132.html | crawl-002 | en | refinedweb |
SQL Server and .NET Interview questions free download
- From: "Jobs" <jobatyourdoorstep@xxxxxxxxxxx>
- Date: 27 Oct 2006 23:20:54 ?
Reporting Services
Can you explain how can we make a simple report in reporting services?
How do I specify stored procedures in Reporting Services?
What is the architecture for "Reporting Services "?: how to check whether a number is an integer?
- Next by Date: Re: vb, vb.net, & vba
- Previous by thread: Constants can't be defined in a namespace. WTH?
- Next by thread: CONNECTING PRINTER FROM VB.NET 2005
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2006-10/msg02574.html | crawl-002 | en | refinedweb |
RE: Windows Client using Generated WS Proxy
From: Tomas (Tomas_at_discussions.microsoft.com)
Date: 09/15/04
- ]
Date: Wed, 15 Sep 2004 06:11:05 -0700
Hi Gravy,
I have just completed a contract where I ran into the situation I just
described.
I was using several class that were to be shared across the presentation and
business tiers of my application. I have placed these data class e.g.
AddressData, UserData in an assembly I was planning to share across tiers.
The WSDL tool did indeed create "duplicates" of these classes which, to be
honest, really annoyed me.
The solution I used was to generate one proxy using the tool and then I
simply went through it and modified it. Since I am using VSS, versioning has
not been a real problem. Since the initial design of these data classes I've
added some methods for presentation e.g. a toString() that puts the users
name together in a nice format.
Overall it's been quite successful, but that partly stems from the fact that
we had very good designs on our webservices before we started client side
development. Signatures have changed slightly, but you can modify the custom
proxy without too much difficulty. Major headaches arise when you add new
webmethods, fortunately for me, that happened only one over the life of the
project so it was only a minor task. This would be a big problem if methods
were being added quite frequently.
I hope this has been of some benefit to you. Let me know what you decided to
do.
Tom
"Gravy" wrote:
> case for using them still stand.
>
> I have an application that has a windows client talking to a set of services
> hosted in ASP.NET. Now whilst writing the service layer I created quite a
> few classes that represent the data to method calls, i.e. entity type
> classes. the typical sort of thing is a Customer or an Account.
>
> Now, if I expose a web method that takes a Customer or Account the client
> proxy that is generated automatically creates another definition for
> Customer and Account. If MY entities contain the data plus a little
> validation, i.e. Name cannot be empty or greater than 10 then I would want
> my client to use them as well as the server. But this means I now have a
> conflict. the WS Proxy thinks it knows what a Customer is and the client
> code also thinks it knows what a Customer is.
>
> Does anyone else suffer from these conflicts, or do people just use the
> proxy generated class.
>
> I can think of a couple of solutions to the problem.
>
> 1) Change the reference.cs file to use my namespace for my Customer and
> Account class. Then remove the auto generated versions from this class.
>
> 2) Manually convert from my definition of a Customer to the proxy's
> definition before a call the web service!!
>
> One aspect of this that I'm worried about is versioning!!
>
> Does anyone have any comments on this?
>
> Regards
>
> Graham Allwood
>
>
>
- ] | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet.webservices/2004-09/0217.html | crawl-002 | en | refinedweb |
Plastic editor for Asp.Net Mvc ready to betaasp.net, geek, microsoft, mvc, opensource, plastic, programming, tech May 4th, 2008
Anyone out there with Visual Studio 2008 installed is welcome to try out an open source utility I’ve put together. If you have feedback or bug reports those would be greatly appreciated.
It’s called MvcPlastic.
- Capturing view context and view data as it occurs
- Reviewing rendered screens as you would a slide deck
- In-place editing of the views and content files
- Tweak repeatedly without re-submitting forms or re-executing controller actions
It’s a single dll you can drop into a bin directory of an asp.net mvc (4/16) web app and activate with a line added to the web.config.
If you want to take a look see below for the svn info. You don’t need MVC installed - any bins needed come with the source and you can delete the folder without leaving anything in the registry or gac..
So I did some experimenting and saw how easy it was to capture and recreate a person’s visit like it was a slide-show. I added the ability to edit the view template files to make it easy for a designer or html guru to adjust project source files on an engineer’s machines using only a browser. Sort of like a wiki.
That’s one of the biggest problems with wysiwyg ide editors - there’s no sample data or real “context” for the editor so they work well only on the most trivial pages. By capturing the view data used to render a page, you can edit and review as many times as you want and have real-world results shown to you immediately.
Links and resources
- Main MvcPlastic project page
-
-
-
Or get the source for the project from MvcPlastic-source.zip, or get more current source from source control at:
svn co plastic
I’ve tested the plastic editor with the various alternate view engines in the MvcContrib project: NVelovity, Brail, NHaml. All three of their sample sites are included in the plastic/trunk solution. They’re unaltered except for the addition of MvcPlastic.dll, just so you know it’s compatible with those technologies as well.
And if you’re curious how the guts work to re-view a previous page - there’s an iframe where the source points at an action named regenerate. The id argument to regenerate is a guid that’s was created as a key when the view was captured.
So on the plastic controller is the following action:
public ActionResult Regenerate(Guid id) { Client client = GetClient(); Visit visit = client.Visits.FirstOrDefault(v => v.Id == id); return new RegenerateViewResult { ViewEngine = visit.ViewEngine, ViewContext = visit.ViewContext }; }
The data model in plastic has a client with an array of visits. The GetClient method uses the request address to associate any actions with a particular ‘client’ instance. That way you won’t other users visits mixed with your history, and it also helps just a little if it’s accidentally left enabled on a staging server. Possible security/confidentiality concerns if you could see other users visits.
And the regenerate view result does the following:
public class RegenerateViewResult : ActionResult { public IViewEngine ViewEngine { get; set; } public ViewContext ViewContext { get; set; } public override void ExecuteResult(ControllerContext context) { // use all of the old values except the HttpContext ViewContext newContext = new ViewContext( new ControllerContext( context.HttpContext, ViewContext.RouteData, ViewContext.Controller), ViewContext.ViewName, ViewContext.MasterName, ViewContext.ViewData, ViewContext.TempData); ViewEngine.RenderView(newContext); } }
Not that much to it, is there? The architecture they’re putting together is very clean and elegant. It’s also probably going to change significantly with each new preview so I’m thinking I’ll be rewriting these bits of code every four months or so until they release the first version.
The capturing of the views is also very simple. The original view engine is wrapped with a light-weight interceptor that stores the visit information as the site is used.
class PlasticViewEngine : IViewEngine { readonly PlasticEditor _editor; readonly IViewEngine _original; public PlasticViewEngine(PlasticEditor editor, IViewEngine original) { _editor = editor; _original = original; } public void RenderView(ViewContext viewContext) { RecordVisit(viewContext); _original.RenderView(viewContext); } void RecordVisit(ViewContext viewContext) { Client client = _editor.GetCurrentClient( viewContext.HttpContext.Request); if (client == null || client.Enabled == false) return; client.Visits.Add(new Visit { Id = Guid.NewGuid(), ViewEngine = _original, ViewContext = viewContext }); } }
So there you go! Nothing to it. The tricky bit was getting a many-browser compatible draggable edit form to work.
May 14th, 2008 at 5:49 am
[...] Plastic editor for Asp.Net Mvc ready to beta [...]
May 14th, 2008 at 5:50 am
[...] Plastic editor for Asp.Net Mvc ready to beta [...] | http://whereslou.com/2008/05/04/plastic-editor-for-asp-net-mvc-ready-t-beta | crawl-002 | en | refinedweb |
Today we will take a look at Sparks's module for MLlib or its built-in machine learning library Sparks MLlib Guide . KMeans is a popular clustering method. Clustering methods are used when there is no class to be predicted but instances are divided into groups or clusters. The clusters hopefully will represent some mechanism at play that draws the instance to a particular cluster. The instances assigned to the cluster should have a strong resemblance to each other. A typical use case for KMeans is segmentation of data. For example suppose you are studying heart disease and you have a theory that individuals with heart disease are overweight. You have collected data from individuals with and without heart disease and measurements of their weight like their body mass index, waist-to-hip ratio, skinfold thickness, and actual weight. KMeans is used to cluster the data into groups for further analysis and to test the theory. You can find out more about KMeans on Wikipedia Wikipedia KMeans .
The data that we are going to use in today's example is stock market data with the ConnorsRSI indicator. You can learn more about ConnorsRSI at ConnorsRSI. Below is a sample of the data. ConnorsRSI is a composite indicator made up from RSI_CLOSE_3, PERCENT_RANK_100, and RSI_STREAK_2. We will use these attributes as well as the actual ConnorsRSI (CRSI) and RSI2 to pass into our KMeans algorithm. The calculation of this data is already normalized from 0 to 100. The other columns like ID, LABEL, RTN5, FIVE_DAY_GL, and CLOSE we will use to do further analysis once we cluster the instances. They will not be passed into the KMeans algorithm.
Sample Data (CSV): 1988 instances of SPY SPY
The KMeans algorithm needs to be told how many clusters (K) the instances should be grouped into. For our example let's start with two clusters to see if they have a relationship to the label, "UP" or "DN". The Apache Spark scala documentation has the details on all the methods for KMeans and KMeansModel at KMeansModel
Below is the scala code which you can run in a zeppelin notebook or spark-shell on your HDInsight cluster with Spark. HDInsight
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.sql.functions._
// load file and remove header
val data = sc.textFile("wasb:///data/spykmeans.csv")
val header = data.first
val rows = data.filter(l => l != header)
// define case class
case class CC1(ID: String, LABEL: String, RTN5: Double, FIVE_DAY_GL: Double, CLOSE: Double, RSI2: Double, RSI_CLOSE_3: Double, PERCENT_RANK_100: Double, RSI_STREAK_2: Double, CRSI: Double)
// comma separator split
val allSplit = rows.map(line => line.split(","))
// map parts to case class
val allData = allSplit.map( p => CC1( p(0).toString, p(1).toString, p(2).trim.toDouble, p(3).trim.toDouble, p(4).trim.toDouble, p(5).trim.toDouble, p(6).trim.toDouble, p(7).trim.toDouble, p(8).trim.toDouble, p(9).trim.toDouble))
// convert rdd to dataframe
val allDF = allData.toDF()
// convert back to rdd and cache the data
val rowsRDD = allDF.rdd.map(r => (r.getString(0), r.getString(1), r.getDouble(2), r.getDouble(3), r.getDouble(4), r.getDouble(5), r.getDouble(6), r.getDouble(7), r.getDouble(8), r.getDouble(9) ))
rowsRDD.cache()
// convert data to RDD which will be passed to KMeans and cache the data. We are passing in RSI2, RSI_CLOSE_3, PERCENT_RANK_100, RSI_STREAK_2 and CRSI to KMeans. These are the attributes we want to use to assign the instance to a cluster
val vectors = allDF.rdd.map(r => Vectors.dense( r.getDouble(5), r.getDouble(6), r.getDouble(7), r.getDouble(8), r.getDouble(9) ))
vectors.cache()
//KMeans model with 2 clusters and 20 iterations
val kMeansModel = KMeans.train(vectors, 2, 20)
//Print the center of each cluster
kMeansModel.clusterCenters.foreach(println)
// Get the prediction from the model with the ID so we can link them back to other information
val predictions = rowsRDD.map{r => (r._1, kMeansModel.predict(Vectors.dense(r._6, r._7, r._8, r._9, r._10) ))}
// convert the rdd to a dataframe
val predDF = predictions.toDF("ID", "CLUSTER")
The code imports some methods for Vector, KMeans and SQL that we need. It then loads the .csv file from disk and removes the header that have our column descriptions. We then define a case class, split the columns by comma and map the data into the case class. We then convert the RDD into a dataframe. Next we map the dataframe back to an RDD and cache the data. We then create an RDD for the 5 columns we want to pass to the KMeans algorithm and cache the data. We want the RDD cached because KMeans is a very iterative algorithm. The caching helps speed up performance. We then create the kMeansModel passing in the vector RDD that has our attributes and specifying we want two clusters and 20 iterations. We then print out the centers for all the clusters. Now that the model is created, we get our predictions for the clusters with an ID so that we can uniquely identify each instance with the cluster it was assigned to. We then convert this back to a dataframe to analyze.
Below is a subset of the allDF dataframe with our data.
Below is a subset of our predDF dataframe with the ID and the CLUSTER. We now have a unique identifier and which cluster the KMeans algoritm assigned it to. Also displayed is the mean for each of the attributes passed into the KMeans algorithm for each cluster. Cluster 0 and Cluster 1. You can see that the means are very close in each cluster. For Cluster 0 it is around 27 and for cluster 1 it is around 71.
Because the allDF and predDF dataframes have a common column we can join them and do more analysis.
// join the dataframes on ID (spark 1.4.1)
val t = allDF.join(prdDF, "ID")
Now we have all of our data combined with the CLUSTER that the KMeans algorithm assigned each instance to and we can continue our investigation.
Let's display a subset of each cluster. It looks like cluster 0 is mostly DN labels and has attributes averaging around 27 like the centers of the clusters indicated. Cluster 1 is mostly UP labels and the attributes average is around 71.
// review a subset of each cluster
t.filter("CLUSTER = 0").show()
t.filter("CLUSTER = 1").show()
Let's get descriptive statistics on each of our clusters. This is for all the instances in each cluster and not just a subset. This gives us the count, mean, stddev, min, max for all numeric values in the dataframe. We filter each by CLUSTER.
// get descriptive statistics for each cluster
t.filter("CLUSTER = 0").describe().show()
t.filter("CLUSTER = 1").describe().show()
So what can we infer from the output of our KMeans clusters?
- Cluster 0 has lower ConnorsRSI (CRSI), with a mean of 27. Cluster 1 has higher CRSI, with a mean of 71. Could these be areas to initiate buy and sells signals?
- Cluster 0 has mostly DN labels, and Cluster 1 has mostly UP labels.
- Cluster 0 has a mean of .28 % gain five days later, while cluster 1 has a loss of .03 five days later.
- Cluster 0 has a mean loss of 1.22% five days before and cluster 1 has a gain of 1.15% five days before. Does this suggest markets revert to their mean?
- Both clusters have min\max of 5 day returns between positive 19.40% to a loss of 19.79%.
This is just the tip of the iceberg with further questions, but gives an example of using HDInsight and spark to start your own KMeans analysis. Spark MLlib has many algorithms to explore including SVMs, logistic regression, linear regression, naïve bayes, decision trees, random forests, basic statistics, and more. The implementation of these algorithms in spark MLlib is for distributed clusters so you can do machine learning on big data. Next I think I'll run the analysis on all data for AMEX, NASDAQ and NYSE stock exchanges and see if the pattern holds up!
Bill
Thank u for this post. I was been the Above code in Spark-shell It works but when i run through maven project.
but when i compile the Above code i get the below error.
Thanks in Advance.
error: No TypeTag available for CC1
val allDF = allData.toDF()
This is my code and exactly i get the error at allData.toDF.
case class CC1(Phone: String, Vmail: Double, DayMins: Double, EveMins: Double, NightMins: Double, IntlMins: Double)
val allSplit = rows.map(line => line.split(","))
val allData = allSplit.map( p => CC1( p(0).toString, p(1).toDouble, p(2).toDouble, p(3).toDouble, p(4).toDouble, p(5).toDouble ))
import sqlContext.implicits._
val allDF = allData.toDF() | https://blogs.msdn.microsoft.com/bigdatasupport/2015/09/24/a-kmeans-example-for-spark-mllib-on-hdinsight/ | CC-MAIN-2017-30 | en | refinedweb |
Hledger.Utils.UTF8IOCompat
Description
UTF-8 aware string IO functions that will work across multiple platforms and GHC versions. Includes code from Text.Pandoc.UTF8 ((C) 2010 John MacFarlane).
Example usage:
import Prelude hiding (readFile,writeFile,appendFile,getContents,putStr,putStrLn) import UTF8IOCompat (readFile,writeFile,appendFile,getContents,putStr,putStrLn) import UTF8IOCompat (SystemString,fromSystemString,toSystemString,error',userError')
2013410 update: we now trust that current GHC versions & platforms do the right thing, so this file is a no-op and on its way to being removed. Not carefully tested.
Synopsis
- readFile :: FilePath -> IO String
- writeFile :: FilePath -> String -> IO ()
- appendFile :: FilePath -> String -> IO ()
- getContents :: IO String
- hGetContents :: Handle -> IO String
- putStr :: String -> IO ()
- putStrLn :: String -> IO ()
- hPutStr :: Handle -> String -> IO ()
- hPutStrLn :: Handle -> String -> IO ()
- type SystemString = String
- fromSystemString :: SystemString -> String
- toSystemString :: String -> SystemString
- error' :: String -> a
- userError' :: String -> IOError
Documentation
readFile :: FilePath -> IO String
The
readFile function reads a file and
returns the contents of the file as a string.
The file is read lazily, on demand, as with
getContents.
writeFile :: FilePath -> String -> IO ()]])
getContents :: IO String
The
getContents operation returns all user input as a single string,
which is read lazily as it is needed
(same as
hGetContents
stdin)..
type SystemString = StringSource
A string received from or being passed to the operating system, such as a file path, command-line argument, or environment variable name or value. With GHC versions before 7.2 on some platforms (posix) these are typically encoded. When converting, we assume the encoding is UTF-8 (cf).
fromSystemString :: SystemString -> StringSource
Convert a system string to an ordinary string, decoding from UTF-8 if it appears to be UTF8-encoded and GHC version is less than 7.2.
toSystemString :: String -> SystemStringSource
Convert a unicode string to a system string, encoding with UTF-8 if we are on a posix platform with GHC < 7.2.
userError' :: String -> IOErrorSource
A SystemString-aware version of userError. | https://hackage.haskell.org/package/hledger-lib-0.23.2/docs/Hledger-Utils-UTF8IOCompat.html | CC-MAIN-2017-30 | en | refinedweb |
VISUAL MEDIA ALLIANCE
SUMMER 2017
THE CONTENTS
04 Sights and Sounds of the Conference
08
11
12
Feature: Endangered Species
Expert Column: Strategic Sales
Expert Column: Human Resources
Will production workers become extinct?
Explore Your Sales Commitment Level
Help for the tough issues
03 10 13 14
Member News VMA Events
Showcase Awards and Luncheon 02
Member Survey Report New Members
04 11
o
VMA Events Find-An-Employee
The Art of Paper Xpri Recycled Xpri Recycled Digital Gloss Velvet Text Cover
MEMBER NEWS
ALL THINGS NEW
New Press, New Tech at Moquin
The first U.S. adopter of Heidelberg’s revolutionary Speedmaster XL 105 technology in 2006, Moquin Press has recently installed a Speedmaster XL 106-6+L. The first machine in the U.S. with Push to Stop technology that is fully integrated with Heidelberg’s Prinect workflow, Inpress Control 2 and Inspection Control 2, the XL 106 runs besides an existing XL 105-6 +L with UV. Moquin, a trade and packaging printer located near San Francisco, has never shied away from investing in new technologies in order.
Greg Moquin, President and Owner, together with the XL 106 press operator at Moquin Press.
DOME Makes a Bold Move
Bob Lindgren Retires
PIASC executive leadership transitioned from Bob Lindgren, who has served as its President/CEO since 1982, to Louis J. (Lou) Caron on June 1st. Lou is a CPA, and has served as CFO of both insurance firms and printing companies and therefore comes with top flight business skills. Bob will continue as a member of the PIASC staff, writing and editing Update, consulting with members on management issues, and working with other Associations and industry activities. In the past, he has been a good friend to VMA, a featured speaker at VMA dinner meetings, retreats and conferences, and is known for his financial acumen regarding the printing industry. Beginning with Henry Henneberg in 1947, PIASC has had only three CEOs—first Henry, then Bob and now Lou—a great record of continuity.
DOME, Sacramento, is embarking on a momentous transition that will consolidate their five facilities and merge the now 280,000 total square feet of production into one location. DOME will expand into a newly renovated 320,000 square foot facility in Sacramento’s McClellan Park..”
New Press at Pyramid
Pyramid Printing, San Francisco is running a new HP Indigo 10000, a digital press designed to keep the company competitive in its marketplace. Pyramid is a direct marketing company that focuses on direct response campaigns, digital printing, web to print, print to web, and variable data printing. These services create the core offering that ties directly into an integrated campaign their clients may have with their marketing team or agency. Agencies like the fact that they have the design experience and the tools to create the interactive components the campaigns require.
Eric Zirbel (left), HP, and Kingman Leung, Pyramid Printing & Graphics pose with the company’s new HP Indigo 10000. VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
3
BOARD ROSTER: CHAIRMAN
Ian Flynn, Direct Response Imaging
IMMEDIATE PAST CHAIRMAN
John Crammer, Best Label Company
BOARD MEMBERS:
Gil Caravantes Commerce Printing Services John Crammer Best Label
Sights Sounds of the Conference
By Barbara Silverman
Chris Cullen API Group Ian Flynn Direct Repsonse Imaging Dava Guthmiller Noise 13 Jeff Jarvis Spicers Frank Parks The Parks Group Chris Shadix Dome Print and Marketing Solutions San Francisco Division Cindy Sonnenberg K/P Corporation Stephen Sprinkel Sprinkel Media Network
STAFF ROSTER:
The Sights
PRESIDENT
Dan Nelson
DIRECTOR MEMBER SERVICES Jim Frey
DIRECTOR MEMBER PROGRAMS Laura Vargas
DIRECTOR EDUCATION Barbara Silverman
DESIGN MANAGER Todd Donahue
PROGRAM ADMINISTRATOR Gabrielle Disario
MEMBERSHIP SALES Shannon Wolford
FINANCIAL MANAGER Emily Gotladera
ACCOUNTING SPECIALIST Maria Salita
VICE PRESIDENT INSURANCE SERVICES David Katz
INSURANCE CUSTOMER SERVICE REPS
Renee Prescott, Crystal Carlson, Lena Nelson, Sue Benavente, Jessica Clark, Diedra Lovan, Jimmie Thompson
DIRECTOR SUPPLEMENTAL BENEFITS Greg Golin
DIRECTOR GOVERNMENT AFFAIRS Gerry Bonetto
HUMAN RESOURCE SPECIALIST Cheryl Chong
EDITOR
Noel Jeffrey
ON THE COVER:
VMA’s recent Design Conference is being heralded as the “best ever.” Audiences certainly enjoyed their experiences. Photos in collage by Kimberly Beck Rubio Photography.
4
The Sounds
It just seems to get better each year! The VMA Design Conference was held on June 14, as part of the opening day of AIGA’s SF Design Week. This year the conference was moved to Bespoke, an amazing new hi-tech venue conveniently located in the center of town, in the Westfield San Francisco Centre. The event began as our high energy hostess-with-the-mostest, Lauren Elliot of Wicked Good Print Partners (WGPP) kicked off the sessions, introducing the “Large Man” and creative visionary Aaron Draplin of The Draplin Design Co., who shocked the audience with his unconventional delivery along with creative approaches to earning a living in design. Dava Guthmiller from Noise 13 facilitated the recovery, discussing a sane yet creative approach to achieving meaning in a new brand identity. It was a perfect segue to Brian Dougherty who filled us in with stories of his quests for environmental and social impact design. Who would have thought that packaging light bulbs could be both fun and environmentally sound?
And More
Sounds More Sounds
David Hogue from Google presented some thought provoking ponderings as he asked us to consider what’s next? Where is all this going? And what should we really expect from our connected world in the future? Corey Lewis of Black Flag Creative set his pirate ship afloat as he reviewed his methods of smooth sailing when dealing with design that would span many different channels. Among the many highlights was IDEO’s Neil Stevenson. Stevenson’s mission is to understand creativity and find new ways to enable and encourage creativity in others. He shared some of his own stories, about stories to help us learn to apply storytelling in the service of creativity. The founder of Social Media Trackers, Mark Schwartz opened the eyes of many
of us as he shared real life experiences of how amazing Facebook can be for not only personal (how he met his wife) but business success. And he has the data to prove it. When Neal Haussel followed, he shared what he believes to be the future of packaging, considering the rise of e-commerce. His Unboxing videos were both amusing and convincing.
We had a fascinating discussion about AR and print by Erica Aiken of Rods and Cones and Cindy Walas of Walas Younger, LTD. They proved that amazing possibilities are now within reach with their own magazine “Out of Chaos” where attendees got to experience AR first hand.
Zooka Creative’s Director of Strategy, Santiago Sinisterra provided an overview of what a brand really is and then went on to share a fascinating case study of the rebranding of Union City. He was swamped with questions in the panel discussion that followed. Peleg Top closed the day by enrapturing us all with his own story. We were almost there with him as he shared his history that led to a 2-year sabbatical from our overly connected world and then the wisdom he acquired from it. He focused on how we can get more out of life by having less.
Barbara Stephenson from 300FEETOUT offered us a lighthearted look into the workings of a functioning design studio and how to keep the creativity flowing. It was the perfect segue for Michael Osborne of Michael Osborne Design and one of our regulars, who challenged us to find our creativity and keep it flowing. Perhaps that is easy for Michael but it’s not always that easy for many of us and we certainly appreciated his insights. Photos by Kimberly Beck Rubio Photography
Wrap It Up
Among the bonuses of this conference were the breaks! Along with visits to the wonderful exhibitors, attendees had ample opportunity to mingle and learn from each other.
It was quite a day. Word on the street is that this event was clearly one of great inspiration and education and a perfect kick-off to SF Design Week! See ya next year! Barbara Silverman is the Director of Education at VMA (barbara@vma.bz). VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
5
BEYOND FLEXIBILITY
REDEFINE INNOVATION UNLEASH NEW OPPORTUNITIES WITH THE POWER OF PROVEN OCÉ INKJET TECHNOLOGIES Thanks to the Océ iQuarius™ technologies breakthrough in high-speed sheet-fed inkjet versatility, print providers can now handle new and more diverse applications—making it possible to profitably address new market segments. And with even more qualified inkjet papers, including high-quality gloss and lightweight stocks, the possibilities are endless. Discover how the Océ VarioPrint® i300 inkjet press, powered by Océ iQuarius technologies, is redefining cost, productivity, and throughput equation without compromising quality.
Watch the Océ VarioPrint i300/iQuarius Technologies video: PPS.CSA.CANON.COM/IQUARIUS
877-623-4969 CSA.CANON.COM
Canon is a registered trademark of Canon Inc. in the United States and elsewhere. Océ and Océ VarioPrint are registered trademarks of Océ-Technologies B.V. in the United States and elsewhere. Océ iQuarius is a trademark of Océ-Technologies B.V. in the United States and elsewhere. All other referenced product names and marks are trademarks of their respective owners and are hereby acknowledged. © 2017 Canon Solutions America, Inc. All rights reserved.
And the Conference Exhibitors said…
Jeff Jarvis SPICERS PAPER “I can honestly say this was the best event in years. We had quality traffic at our booth and we were able to get some solid business connections that we will be following up on.”
Travis Gilkey BEST LABEL COMPANY “We met a contact at Foster Farms Creative and feel that meeting alone was worth the cost of the Conference!”
Chris Lambert NEENAH PAPER “This was the best Conference in three years!”
Glenn Hollingsworth APPLETON COATED “I thought it was well worth the price of admission. Thanks again.”
Ray Mireles COMMERCE PRINTING “We made some really great contacts. Setting up a plant tour with a large, potential client right now. And the event location was also great this year.”
Ian Flynn DRI “Bespoke is terrific! We made some great new contacts and reconnected with old clients. It was certainly worthwhile for us.”
Kate Stoness FUNCTIONFOX “We had a steady flow of attendees to our exhibit space throughout the day. Overall we thought it was a great event and we would be happy to be a part of it again next year.” VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
7
D E R E G N A D N E
S E I C E P S Today’s Production Workers are hard to find. Will they be extinct by tomorrow?
BY NOEL JEFFREY
What are some of the skills that modern manufacturers are looking for? • Knowledge of mechanical and electrical engineering processes • Ability to work with computerized systems • Ability to read and write machine programming code • Ability to read manufacturing blueprints • Ability to operate automated manufacturing systems • Understanding of hydraulic, pneumatic and electrical systems 8
VISUAL MEDIA ALLIANCE
T
he Wall Street Journal’s front page fivecolumn headline on June 3-4 read, “Jobless Rate Falls to 16-Year Low.” The tagline that followed said, “Fewer jobs are being created though in a sign firms are struggling with labor shortages.” In short, manufacturing workers, especially people skilled in modern manufacturing techniques are hard to find. They need to be “coddled” as carefully as the Galápagos Penguin or Leatherback Sea Turtle. Brian Regan, co-founder and president of Semper International, agrees that printing is among the manufacturing areas where it’s difficult to find production workers—skilled or not. “With a national unemployment rate of 4.54%,” Regan says, “anyone worth their salt is working.” Semper is a staffing solutions service for the printing and graphic arts industry. Regan, who ran a press himself in the past, is also deeply engaged in the printing community, having served as the past Chairman and active board member of the Printing and Graphics Scholarship Foundation (PGSF).
HOW WE GOT HERE
The industry has a recruitment challenge. Regan traces the beginning of today’s significant problem of production worker shortages back some 17 years. “That’s when it became ‘common knowledge’ that print was dead,” he says. “When that was pushed out to the public parents and students began to see print as ‘old,’ no fun—just reinforcing that message. Schools citing the Bureau of Labor Statistics showing less need began canceling General Degree programs and shop classes.” What is true for print extends to most careers that do not require a college degree. As an example, on May 26, Fox News TV host Tucker Carlson featured a segment with Mike Rowe, the TV host of the special series Dirty Jobs. Rowe is also founder of | CONNECTED | SUMMER 2017
the mikeroweWORKS Foundation,* which awards scholarships to students pursuing a career in the skilled trades. He is closely associated with the Future Farmers of America, Skills USA, and the Boy Scouts of America. Rowe pointed out that a number of years ago parents and counselors determined that the alternative to college preparatory programs in high school were subordinate. Vocational education was a “consolation prize.” “There are 5.6 million open jobs available that do not require a four-year degree,” Rowe says. “But somewhere in the reptilian parts of our brains people still call these substandard.” And, there’s more to printing’s woes. “Then the great recession hit and half of the production workers in the industry were laid off as companies went under or consolidated.” Regan continues. “We lost a whole flock of skilled workers who because they were adept at mechanical tasks made easy transitions to other industries like fracking.” The last straw as it were is that the boomer generation is retiring rapidly. “It’s not likely that they will be coming back,” he says. Jules VanSant also credits the great recession and trade schools “going away” as factors causing today’s difficulties in finding production personnel. “In the next years the shortage will be very critical,” she notes. VanSant is Executive Director of VMA neighbor PPI, The New Visual Communications Industries Association that is also known as the Pacific Printing Industries. This PIA affiliate represents six states: Oregon, Washington, Idaho, Montana, Alaska and Hawaii.
CONFRONTING THE DANGER
Some industry segments are responding. VanSant cites new emphasis on CTE* programs in Oregon and Washington as positive. “The states are starting to see the need to support manufacturing and beginning to
Healthcare Complexity, Simplified! Let VMA simplify your life with: Choice, Proven Cost-lowering Strategies, Exceptional Customer Service, Time-saving Simplicity, Time-saving Technology, Premium Human Resource Support Shannon Wolford 415.710.0568 shannon@vma.bz
David Katz 415.489.7614 david@vma.bz
VMA Insurance Services (800) 659-3363 • insurance.vma.bz VMA Insurance Services has been ensuring business success since 1986.
NEW! Spot and Flood UV Specialty Finishes 1 and 2 Side Laminating
On Line Bindery, Inc.
Family owned and operated since 1990
High speed bindings
Perfect/Saddle Stitch-Loop and Standard Wire-O/Plastic Spiral/Metal Spiral/Comb
Gluing brochures & gluefold envelopes
Remoist/Fugitive/Cold/Hot Melt
15 folders
From ž inch panels to 40" wide All folds, double gates, maps etc.
Other capabilities and services
Also own paper shredding company, discounts to bindery customers. Look us up at shreddefense.com
Rooted in Tradition – Growing Toward the Future
L awson Drayage, Inc. Machinery Moving, Rigging & Heavy Transportation for Any Industry
Join us for Career Day Looking for new employees or planning for the future? You won’t want to miss Career Day on October 20.
Industry trusted PrIntIng & LabeLIng equIPment mover for over 50 years!
Cal Poly Graphic Communication (GrC) graduates hit the ground running! Post on the GrC Job Board Reach soon-to-be graduates and seasoned alumni.
Learn More Visit Call 805.756.2645
> Machinery Moving and Rigging > Facility Relocation > Storage and Warehousing > Pier Pick Up and Delivery > Crating & Special Packaging of Machinery & Equipment
CAREER DAY DATES
Fall October 20, 2017
san francIsco bay area
Winter January 26, 2018
sacramento & san JoaquIn vaLLeys
3402 Enterprise Avenue Hayward, CA 94545 Phone: 510-785-5100 Fax: 510-785-8156
Spring April 19, 2018
9900 Kent Street Elk Grove, CA 95624 Phone: 916-686-2600 Fax: 916-686-2601
Online: | Email: sales@lawsoninc.com
A
VMA Education. magazine The Choice is Yours. ClieNt
PhoNe
authoRized sigNatuRe
date
CheCk eaCh box as a guide: ❑ Name correct? ❑ address correct? ❑
Phone # correct?
❑ ad copy correct? ❑ offer correct, if a
Stay ahead of the game by learning new skills.
• look over your project and check for errors; spelling, address, telephone number’s, copy or content. Process magazine is not responsible for typos or incorrect information.
magazine • signBecause this page and it back to Process magazine. We’re here for you. ofemail you. Visual Media Alliance is a trade • anythe changes from thisand pointbusinesses forward may costof youour in time and materials.to be association helping careers members • Process magazine cannot process your job until receipt of sign-off. successful. We offer over 100 public classes, delivered to you in-person, onlineemail and through customized training solutions. magazine baCk to: process@switchstudio.com
magazine
DESIGN
MARKETING
OFFICE
PROGRAMMING
View Website For Education Listings (800) 659-3363 • education.vma.bz Workshops • Seminars • Webinars • One-on-One • Customized Training Solutions
925-400-4165
AD Form
Putting Your Inspiration On To Media
…. Ensuring that your creative genius doesn’t go unnoticed. Ray Morgan Company has over 60 years experience supporting client’s who take the time and effort to design the perfect document that will carry their message.
Let us help you provide your clients with the best quality imaging on any media substrate with the most competitive support programs in the industry.
any?
❑ ad approved as is
❑ ad approved with corrections indicated ❑ Re-proof after corrections are made
Digital Presses - Flatbed Printers - Copiers – Scanners – Outsourced IT Services
ad aPPRoval:
Sales, Service & Supplies From 22 Locations in California & Nevada Call Us For A Free Imaging Technology Assessment 925-400-4165 * * pressforsuccess@raymorgan.com Sales & Service 22 locations in California & Nevada
eXclUsiVe Pia Print toWel ProGram “think Green, think clean, think Prudential.�
Prudential Overall Supply is a uniform provider with solutions for businesses requiring uniforms and textile rental programs. We offer a variety of types and sizes of towels. All our printers towels are manufactured for maximum absorbency with egyptian cotton, sized to fit the job. Our cleaning systems are designed to clean towels soiled with solvents and inks in order to create a safe, economical and ecologically friendly working environment. Uniform programs include uniform rental, uniform lease and uniform purchase. We also offer businesses facility products such as floor mats, restroom supplies, paper products, linen supplies, reusable towels, mops and microfiber products. to sign up and start saving through our preferred agreement contact ashley carroll, Key account Manager ashleyc@pos-clean.com 800-767-5536 prudentialuniforms.com
Uniform rental - lease - PUrchase Linen SuppLieS - paper productS - FLoor MatS - reuSabLe toweLS - MopS - MicroFiber productS - reStrooM SuppLieS
step up. We are trying to liaison with these programs. Our industry is going to have to Career Technical Education (CTE) is a spend some money. high school curriculum aimed at equipping Custom manufacturing students with the training and job skills to like printing is a great go directly into industry and the workforce place for students to or into post-secondary education. In recent land.” VanSant is a board years, there has been a growing emphasis member of the PGSF within CTE to pair job training with academic and would like to see the content that can improve both college and group expand beyond career-readiness. The movement seeks to giving scholarships. bridge a long-standing divide between a “We have to be curriculum that prepares students for college thoughtful about and one that often has tracked students into recruitment and insert work-only prospects after graduation. It also ourselves into the design seeks to give students the more advanced of these programs,” she knowledge necessary to compete with today’s continues. “We have to try highly skilled workforce. to encourage legitimate paths to the skill sets needed in printing so that eventually we get a better curriculum. We are also encouraging vendors to do some training and sending representatives to job fairs. We have stepped up our game and participate in career days and talk about careers and what they pay. We show them the pathway and try to Heidelberg USA supports industry education make it hip and cool.” and offers an apprentice program In addition, PPI participates in Printworkers.com, a Job Board for Premedia, Digital Print & Traditional Print Professionals. In fact, PIA national and most affiliate associations including VMA have job boards and register job seekers on their websites as part of their services to member companies. Semper’s Regan would like to promote additional efforts through the associations. “We are at the point where the only solution to finding workers is to either train them or steal them,” Regan notes. Training, through scholarships, apprenticeships and internships is obviously the healthier solution. He believes that in addition to the recruitment problem, printing production suffers from a retention problem. From his own experience in placing temporary workers, Regan observes that maybe one out of ten people may show promise in attitude and abilities to consider training. Here’s where the retention problem shows up. When the job is over, it’s over and the workers are unemployed and out of touch with the printing industry. They may have liked printing but they are “interested, not vested.” Other opportunities, like the oil industry beckon. Regan thinks that if associations would become
*CTE Defined
involved with the workers that have potential, perhaps offering training programs or finding full blown apprenticeships, it could ease the problem. He’d like to work with the associations to accomplish this. He notes that employers should consider offering training as well. Boomer retirees who might appreciate part time or occasional work could make excellent trainers. Northern California, with its extraordinarily high salaries for tech workers and its extraordinarily high cost of living and presents an especially difficult challenge with no easy answers from or for anyone. Everyone Connected talked to or researched agreed that printers here will have to pay more overall, particularly as the $15 hourly minimum wage kicks in for truly unskilled workers.
REPOPULATING THE SPECIES
One longstanding program that local printers could support in partnership with local educators is Skills USA * with its regional and national programs and competitions. This year, the National Competition, is being held in Louisville, KY as we go to press. State Competition Gold Medalists from Riverside Community College and Eagle Rock and South Pasadena High Schools are competing. For inspiration, printers can look to Heidelberg,* a company that sets an example for the industry as a whole. While it has had an apprenticeship program based in Germany for years, it has recently added an apprentice position here in the US. Heidelberg USA has also been a steadfast supporter of the national Skills USA program and has hosted events in the past. Even with continued industry consolidation, production workers will be needed in the foreseeable future. “Companies that are not investing in new technologies have no future,” VanSant says, “but those who have found a niche and are continuing to add capabilities are growing and they need people. Print is the disruptive media now. There are studies coming out that show the effectiveness of a print spend incorporated into a digital sales spend. Our employers must step up. That’s my call to action.”
RUNNING R2D2
Mike Rowe also tackles the prediction that robotics and automation will eventually displace all manufacturing workers. To the extent that a task can be automated, that’s certainly true. We can see it now in prepress workflows that take in a file and control it through the bindery. We see it in automated plate changers, cloud connected maintenance and more. However, someone has to program all this. “Learning a skill that’s desirable negates the whole conversation,” he says. “If your skills are in demand, you can work where you want. Skills are inherently mobile.” He cites welding as an example, noting that while a starting salary might be $45,000, welders can make over $100,000 when overtime is factored in. Brian Regan makes the same point for the printing VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
9
industry. He explains that a philosopher makes a high salary but works unlimited hours. An experienced web press operator is paid for overtime and can take home a six figure salary as well. And, overtime is a reality in the graphic communications industry. As PIASC’s Bob Lindgren has written for years, it almost always makes more sense for owners to plan on overtime rather than to try to support overstaffing during slow periods. Finally, Rowe points out that trades typically represent the path to small business ownership. A person may start out on the bottom rung as a plumber but plan to acquire a truck then employees or a partner then another truck etc. A pressman could want to take a plunge with his own machine. Today’s employers may have started that way themselves. Tomorrow’s will as well. When it happens, as Rowe says, “R2D2 take a bow.”
profoundlydisconnected.com
The mikeroweWORKS Foundation started the Profoundly Disconnected® campaign to challenge the absurd belief that a four-year degree is the only path to success. The Skills Gap is here, and if we don’t close it, it’ll swallow us all.
UPCOMING VMA EVENTS
Places to be. Things to do. People to see.
Showcase Awards Reception
August 24 Scott’s Seafood Restaurant, Oakland 5:30 - 9:00 pm Member - $55 , Non-Member - $65
Celebrate the winning entries of the 20th Annual VMA Showcase Awards. Take this opportunity to entertain your clients to cocktails and dinner and celebrate your work! Awards will be presented for Grand Awards and Best of Show for Print and Design. Gold Award winning entries will be on display.
OUTLOOK 17 CONFERENCE
September 10, 2017 8:00 am - 12:00 noon McCormick Place South, Chicago
OUTLOOK, the annual C-level industry trends and technology update conference held prior to the opening of PRINT [and GRAPH EXPO] is always highly popular among graphic communications industry leaders. This year’s 10
VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
PPI Executive Director Jules VanSant calls for printers to step up recruitment efforts.
Programs of Interest press_1/news_overview/press_release_20928.jsp
OUTLOOK 17 conference promises to continue the tradition by offering a solid lineup of topics and speakers set to address game-changing business management strategies, exciting new profit opportunities, breaking economic updates, and more..
PRINT17
September 10 - 14 McCormick Place South, Chicago
The largest gathering of print and graphic communications buyers, decision makers and suppliers in North America will return to McCormick Place in Chicago this September. Whether you’re looking for cutting-edge technologies, want to explore the latest products and services on the market or need the knowledge to overcome your business challenges, you’ll get it at PRINT 17.
VMA Day at the BallPark
Giants vs San Diego Padres September 30 Virgin America Club Level 1:05 pm game time with tailgate starting at 11 am AT&T Park, San Francisco Member - $105 ,Non-Member - $125
Just around the corner from the offices of Visual Media Alliance is one of the best ballparks in the country, with one of the most exciting teams. Visual Media Alliance’s Day At AT&T Park is a great company outing or another opportunity to spend time with your industry friends. The tailgate party is always a great time to fill up and warm up for the San Francisco Giants!
4. You deliver more than you promise, and always promise a lot! There’s the old sales mantra that says “under commit and over-deliver,” but you never want to “over commit and under-deliver.”
STRATEGIC SELLING STORY | LESLIE GROENErenowned motivational speaker. To purchase her book or contact her please go to. Here is the link to her e-newsletter,. groeneconsulting.com/ Newsletter/2014.12/
10 Ways to Explore Your Sales Commitment Level! 1. You don’t think in terms of sales but rather in terms of building a business. Great salespeople. You build your businesses one customer at a time and then always leverage the last customer into more customers. Don’t ever just make a sale and forget about that client. The last sale you make should always open the door to new relationships and clients.
3. You listen more than you speak, getting an understanding of the customer’s needs and then finding a solution. Great salespeople always ask their clients why they want something done. In listening more than talking, you can better accommodate what they are looking for.
5. You invest time in things (people) that positively affect your income and avoid spending time on things (people) that have no return. Great producers know how to spend time on activity that rings the register. Don’t waste your time on activity that can’t tell you anything, or doesn’t produce anything now or in the future.
6. You are always seeking new, better and faster ways to increase your sales efforts. Be really concerned about time. Time really is money! Great sales people consistently work on improving themselves and look for faster ways to close transactions.
7. You’re willing to invest in networking, community and relationships, knowing that the difference between a contact and a contract is the “R” that stands for “Relationship.” Invest in your community. Don’t look at it as an expense since you need to develop these relationships. So, go ahead and join the country club and give money to politicians. In other words, be involved as much as you can.
8. You don’t depend on marketplace economies for the outcome and instead rely on your actions. If you’re great, you’re going to do well in any economy, because you create your own economy. You run your own race and make something happen despite the environment.
9. Surround yourself with overachievers and have little time for those who don’t create opportunities. Sometimes you might be viewed as being uninterested in others, but the truth is that you’re just not interested in low production. You don’t want to waste time with people who can’t get anything done.
10. You’re fanatical about selling. The best salespeople are obsessed with their customers and growing their businesses. VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
11
HUMAN RESOURCES STORY | BY CHERYL CHONG
Tough Ones All – Call for Information and Assistance CHERYL CHONG Cheryl Chong is VMA’s Human Resources Director and your #1 source for assistance responsible for counseling on HR matters like family leave, discrimination, sexual harassment and wage and hour compliance. She has a Bachelors and Masters degree from Chapman University in Orange, CA, along with 20+ years of HR experience in the trenches. Think of her as an extension of your HR department, courtesy of VMA. Please feel free to reach out for answers or introduce your organization by calling 800659-3363 or cheryl@vma.bz.
12
VISUAL MEDIA ALLIANCE
A member firm recently called about a death of an employee. This was their first time experiencing this type of traumatic event so they needed assistance in guiding them during this time. One of the most valuable tools a company can utilize and offer to their employees during such types of events is the use of an Employee Assistance Program (EAP). This is the type of support employees can have in a private setting. An EAP is a voluntary, work-related program that offers free and confidential assessments, referrals and short-term counseling to employees who have work-related or personal issues. Usually, an EAP is paired with the employer health plan. For more information regarding your EAP, contact Sue or e-mail sue@vma.bz. With regards to preparing for the death of an employee, if the employee dies while on the job, CAL OSHA will need to be contacted soonest to report the details. For assistance on how to prepare a final check and to get a death checklist, contact me. The California Labor Code (CLC 2751) provides that employers of persons paid by commission must give each such person a signed contract covering their terms of compensation. This contract can be prospectively changed at any time by written notice from the employer. Not only are such contracts required by law, but they avoid misunderstandings and disputes about the proper payment of commission. A sample contract | CONNECTED | SUMMER 2017
and explanatory material is available at bit.ly/ SalesCompensationAgreements on piasc.org. For help on this, call me. Employers who are concerned with the maintenance of a safe workplace wonder about the efficacy of pre-employment drug testing and random drug tests of employees. The background reality is that recreational drugs are becoming legal in California and a number of other states. More importantly, a significant proportion of younger people use recreational drugs (pot, etc.) and most people indulge in alcohol. Both are intoxicants and can degrade safe, efficient behavior. The challenge is that drug screens will pick up pot use but not see alcohol since it metabolizes quickly. If one is to follow an absolute policy of declining to hire anyone who fails to pass the drug screen or dismiss ones who fail the random test, they are likely to face difficulties flowing from the loss of otherwise useful employees or candidates. Clearly, an employee who appears impaired can be sent home and their condition confirmed with a drug test. If the employer’s policy is focused upon dealing with impairment that prevents safe and efficient workplace performance, it’s on sound ground. Going beyond this may present difficult discrimination issues and adverse actions may be difficult to defend. There is an exception to these concerns if the employee is a motor vehicle operator or in some government contract situations. Call me for help on this. It may seem natural to ask a non-exempt (hourly) employee to help finish a project at their home or answer business calls in the evening. If they’re not paid for doing this, the door has been opened to costly claims for back wages which can balloon into class action suits involving all employees. Expressing frustration and anger over these issues will make the problem worse. It’s also important to remember that wage and hour claims are now usually excluded from the EPL insurance coverage that the firm may have. For assistance with these types of wage and hour issues call me.
MEMBER SURVEY VMA staff appreciates the participants in this spring’s Member Interest Survey and extends a thank-you for the time spent on the survey and the guidance it provides. First and foremost, the answers to why owners and managers enrolled their companies indicates that the Association is emphasizing programs that are most important to membership. Responses also indicate that while no change in overall direction is needed for the group, improvements, especially in more outreach to outlying districts and innovation in program delivery are desirable. Here’s how members answered the significant Why question.
Why did you join? (Check all that apply.) Health / Business Insurance ..........62.1% ......... 59 Industry News...............................58.9% ......... 56 Education & Training .....................43.2% ......... 41 Networking & Events ....................40.0% ......... 38 Buying Discounts ..........................40.0% ......... 38 HR Services ..................................27.4% ......... 26
Yields Actionable Results
Surveys & Studies.........................25.3% ......... 24 Find-an-Employee / Find-a-Job ....22.1% ......... 21 Sales Support ...............................16.8% ......... 16
Members Look For When asked about management education and training programs that would be most useful Sales and Management (Profitability, Forecasts, Trends) and Insurance (Health, Commercial, Workers Comp) were the most in demand with over 50 percent or close to 50 percent of respondents checking those. These dovetail the top reasons for joining. Not surprising when you take the work week into consideration, Tuesday is the most convenient day for webinars and Thursday evening for dinner meetings. Webinars prove to be somewhat “controversial” with several members requesting more, especially companies further from the Bay Area, while several said they are too boring or time consuming. One actionable suggestion might solve part of those objections. “If a webinar can be recorded and viewed at a later date/time on demand that would allow more people the ability
to watch even if they can’t ask questions,” one member wrote. Again, members from outlying areas would like more VMA events. Here are some specific requests. “Have more presence in Reno. We pretty much belong to VMA to support the industry and PIA.” “Holding evets at locations throughout the Bay Area rather than just SF & East Bay.” “More events in central valley.” “More meetings in central CA.”
Constructive Comments VMA staff also appreciates the positive comments regarding the Association’s communication efforts – with 87.2 percent of respondents saying the frequency of communications was just about right and only 9.6 percent saying excessive. The various publications VMA puts out came in for really good marks as did communicating via email, mail and the website. This report remains just a taste. The survey offered lots more and with input like this, VMA will continue to strive to offer ever increasing value to members. For a full report, visit.
VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
13
NEW MEMBERS EDITION ONE Edition One Books
Edition One Books, Berkeley, works with design professionals to manufacture short-run books of unmatched quality and customization. We are focused on building long term relationships with designers and creatives, and strive to offer a more personalized book production service. We offer a unique suite of in-house production capabilities and hands-on customer service. Ben Zoltkin 510-705-1930 ben@editiononebooks.com
GMG Color
GMG Color is a leading developer and global supplier of high-end color management software solutions. Headquartered in Germany, its customers span a wide range of industries and application areas including advertising agencies, prepress houses, offset, flexo, packaging, digital, and large-format printers as well as gravure printers. Eric Dalton 646-583-0463 eric.dalton@gmgcolor.com
VM Access Goes Mobile!
Now Visual Media Access is available on all your mobile devices from smart phones to tablets. Try it out today at vmaccess.org. NOW FEATURING: • Location based search • Single search box • Easy to use category search • 12,000 impression a month • As a member, your company is listed. Make sure your company information is up-to-date!
14
VISUAL MEDIA ALLIANCE
| CONNECTED | SUMMER 2017
Caraustar Recycling
Recycling is our life! Caraustar Recycling Group collects, sorts and processes over 4 million tons of waste paper for re-use each year. The company is headquartered in Georgia and has locations nationwide, including Santa Clara. They handle all grades of recycled paper such as cardboard, mixed papers, office paper, magazines and books as well as commercial single stream. Caraustar Recycling is a full service hauler, including recycling audits and collection. Rodney.Dumlao@Caraustar.com 800-246-5634
Leah Molinari-Jones Design
Providing creative graphic design solutions to serve essential business needs. Extensive experience producing marketing materials for digital and print. Publications, brand identity, presentation media, tradeshows, and communications media. Proven success working individually and collaboratively with teams. Location: Campbell. Leah Molinari-Jones 408-204-6842 leahmjones@comcast.net
V-Innovative
Overstreet Associates We’re an advertising agency in San Francisco’s Bay Area (Hayward) that provides big agency know-how on a smallbusiness scale. We are equipped with cutting edge tools, along with a creative and highlyskilled staff, that provides clients with everything they need to successfully market their companies and reach their target audience. With nearly 30 years of experience, we know what we’re doing and we do it well. Scott Overstreet 510-487-8660
V-Innovative is a Bay Area Packaging design company (Hayward) with production facilities in China and USA. Our scale covers concept development, structural engineering, design, sampling, project management and supply chain distribution. Rick Conant 510-780-0638 x1006 r.conant@v-innovative.com
With 97 brightness on both sides of each sheet or roll, Accent® Opaque makes every print sizzle. To see the value and performance Accent brings to every project, ask your paper rep for an Accent swatch book or request one online at.
Accent Opaque ®
© 2017 International Paper Company. All rights reserved. Accent is a registered trademark of International Paper Company.
665 3rd Street, Suite 500 San Francisco, CA 94107
SHEET FED | HEATSET WEB | OPEN WEB | DIGITAL | WIDE FORMAT | BANNERS | MAILING SERVICES
live green, print green ®
SIT BACK & RELAX
LET US BE YOUR ONE STOP PRINTING & MARKETING CommerceSOLUTIONS
& MA RKE TING SOLUTI ON S
Give us a call and one of our printing consultants will be available to help you. 916.442.8100 •
Published on Jul 1, 2017
A quarterly newsletter that reaches VMA members delivering latest program offering by VMA, industry news, and hot topics. | https://issuu.com/visualmediaalliance/docs/vma_connected_magazine_summer_2017_ | CC-MAIN-2017-30 | en | refinedweb |
from numpy import * from scipy.linalg import toeplitz import pylab def forward(size): """ returns a toeplitz matrix for forward differences """ r = zeros(size) c = zeros(size) r[0] = -1 r[size-1] = 1 c[1] = 1 return toeplitz(r,c) def backward(size): """ returns a toeplitz matrix for backward differences """ r = zeros(size) c = zeros(size) r[0] = 1 r[size-1] = -1 c[1] = -1 return toeplitz(r,c).T def central(size): """ returns a toeplitz matrix for central differences """ r = zeros(size) c = zeros(size) r[1] = .5 r[size-1] = -.5 c[1] = -.5 c[size-1] = .5 return toeplitz(r,c).T # testing the functions printing some 4-by-4 matrices print 'Forward matrix' print forward(4) print 'Backward matrix' print backward(4) print 'Central matrix' print central(4)The result of the test above is as follows:
Forward matrix [[-1. 1. 0. 0.] [ 0. -1. 1. 0.] [ 0. 0. -1. 1.] [ 1. 0. 0. -1.]] Backward matrix [[ 1. 0. 0. -1.] [-1. 1. 0. 0.] [ 0. -1. 1. 0.] [ 0. 0. -1. 1.]] Central matrix [[ 0. 0.5 0. -0.5] [-0.5 0. 0.5 0. ] [ 0. -0.5 0. 0.5] [ 0.5 0. -0.5 0. ]]We can observe that the matrix-vector product between those matrices and the vector of equally spaced values of f(x) implements, respectively, the following equations:
Forward difference,
where h is the step size between the samples. Those equations are called Finite Differences and they give us an approximate derivative of f. So, let's approximate some derivatives!
x = linspace(0,10,15) y = cos(x) # recall, the derivative of cos(x) is sin(x) # we need the step h to compute f'(x) # because the product gives h*f'(x) h = x[1]-x[2] # generating the matrices Tf = forward(15)/h Tb = backward(15)/h Tc = central(15)/h pylab.subplot(211) # approximation and plotting pylab.plot(x,dot(Tf,y),'g',x,dot(Tb,y),'r',x,dot(Tc,y),'m') pylab.plot(x,sin(x),'b--',linewidth=3) pylab.axis([0,10,-1,1]) # the same experiment with more samples (h is smaller) x = linspace(0,10,50) y = cos(x) h = x[1]-x[2] Tf = forward(50)/h Tb = backward(50)/h Tc = central(50)/h pylab.subplot(212) pylab.plot(x,dot(Tf,y),'g',x,dot(Tb,y),'r',x,dot(Tc,y),'m') pylab.plot(x,sin(x),'b--',linewidth=3) pylab.axis([0,10,-1,1]) pylab.legend(['Forward', 'Backward', 'Central', 'True f prime'],loc=4) pylab.show()The resulting plot would appear as follows:
As the theory suggests, the approximation is better when h is smaller and the central differences are more accurate (note that, they have a higher order of accuracy respect to the backward and forward ones). | http://glowingpython.blogspot.it/2012_02_01_archive.html | CC-MAIN-2017-30 | en | refinedweb |
LDLREAD(3X) LDLREAD(3X)
NAME
ldlread, ldlinit, ldlitem - manipulate line number entries of a COFF
file function
SYNOPSIS
#include <<stdio.h>>
#include <<filehdr.h>>
#include <<linenum.h>>
#include <<ldfcn.h>>
int ldlread(ldptr, fcnindx, linenum, linent)
LDFILE *ldptr;
long fcnindx;
unsigned short linenum;
LINENO *linent;
int ldlinit(ldptr, fcnindx)
LDFILE *ldptr;
long fcnindx;
int ldlitem(ldptr, linenum, linent)
LDFILE *ldptr;
unsigned short linenum;
LINENO *linent;
AVAILABILITY
Available only on Sun 386i systems running a SunOS 4.0.x release or
earlier. Not a SunOS 4.1 release feature.
DESCRIPTION
ldlread() searches the line number entries of the COFF file currently
associated with ldptr. ldlread() begins its search with the line num-
ber entry for the beginning of a function and confines its search to
the line numbers associated with a single function. The function is
identified by fcnindx, the index of its entry in the object file symbol
table. ldlread() reads the entry with the smallest line number equal
to or greater than linenum into the memory beginning at linent.
ldlinit() and ldlitem() together perform exactly the same function as
ldlread(). After an initial call to ldlread() or ldlinit(), ldlitem()
may be used to retrieve a series of line number entries associated with
a single function. ldlinit() simply locates the line number entries
for the function identified by fcnindx. ldlitem() finds and reads the
entry with the smallest line number equal to or greater than linenum
into the memory beginning at linent().
ldlread(), ldlinit(), and ldlitem() each return either SUCCESS or FAIL-
URE. ldlread() will fail if there are no line number entries in the
object file, if fcnindx does not index a function entry in the symbol
table, or if it finds no line number equal to or greater than linenum.
ldlinit() will fail if there are no line number entries in the object
file or if fcnindx does not index a function entry in the symbol table.
ldlitem() will fail if it finds no line number equal to or greater than
linenum.
The programs must be loaded with the object file access routine library
libld.a.
SEE ALSO
ldclose(3X), ldfcn(3), ldopen(3X), ldtbindex(3X)
19 February 1988 LDLREAD(3X) | http://modman.unixdev.net/?sektion=3&page=ldlread&manpath=SunOS-4.1.3 | CC-MAIN-2017-30 | en | refinedweb |
import java.util.*;17 18 import org.apache.commons.logging.Log;19 import org.apache.commons.logging.LogFactory;20 21 public class TodayDate {22 private static Log log = LogFactory.getLog(TodayDate.class);23 24 private Date date = null; 25 26 public Date getDate() {27 return (new Date());28 }29 30 }
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/TodayDate.java.htm | CC-MAIN-2017-30 | en | refinedweb |
.component.ComponentContext;20 import javax.jbi.messaging.MessageExchange;21 import javax.jbi.servicedesc.ServiceEndpoint;22 23 /**24 * A simple selection policy where the first endpoint is chosen.25 *26 * @version $Revision: 426415 $27 */28 public class RandomChoicePolicy implements EndpointChooser {29 30 public ServiceEndpoint chooseEndpoint(ServiceEndpoint[] endpoints, ComponentContext context, MessageExchange exchange) {31 int index = (int) (Math.random() * endpoints.length);32 return endpoints[index];33 }34 }35
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/servicemix/jbi/resolver/RandomChoicePolicy.java.htm | CC-MAIN-2017-30 | en | refinedweb |
SwapChainPanel SwapChainPanel SwapChainPanel Class
Definition.
public : class SwapChainPanel : Grid, ISwapChainPanel
public class SwapChainPanel : Grid, ISwapChainPanel
Public Class SwapChainPanel Inherits Grid Implements ISwapChainPanel
<SwapChainPanel .../>
- Inheritance
- SwapChainPanelSwapChainPanelSwapChainPanel
- Attributes
-
Inherited Members
Inherited properties
Inherited events
Inherited methods
Initializes a new instance of the SwapChainPanel class.
public : SwapChainPanel()
public SwapChainPanel()
Public Sub New()
- Attributes
-
Remarks
Important
Initialization through the constructor is not enough to enable the SwapChainPanel element to render the swap chain. You must use a native interface and Microsoft DirectX code. For more info see the "Initializing a SwapChainPanel element" section in the SwapChainPanel class topic.
Properties
Gets the x-axis scale factor of the SwapChainPanel.
public : float CompositionScaleX { get; }
public float CompositionScaleX { get; }
Public ReadOnly Property CompositionScaleX As float
- Value
- float float float
The x-axis scale factor of the SwapChainPanel. A value of 1.0 means no scaling is applied.
- Attributes
-
Remarks
The CompositionScaleXcaleX dependency property.
public : static DependencyProperty CompositionScaleXProperty { get; }
public static DependencyProperty CompositionScaleXProperty { get; }
Public Static ReadOnly Property CompositionScaleXProperty As DependencyProperty
The identifier for the CompositionScaleX dependency property.
- Attributes
-
Gets the y-axis scale factor of the SwapChainPanel.
public : float CompositionScaleY { get; }
public float CompositionScaleY { get; }
Public ReadOnly Property CompositionScaleY As float
- Value
- float float float
The y-axis scale factor of the SwapChainPanel. A value of 1.0 means no scaling is applied.
- Attributes
-
Remarks
The CompositionScaleYcaleY dependency property.
public : static DependencyProperty CompositionScaleYProperty { get; }
public static DependencyProperty CompositionScaleYProperty { get; }
Public Static ReadOnly Property CompositionScaleYProperty As DependencyProperty
The identifier for the CompositionScaleY dependency property.
- Attributes
-
Methods
CreateCoreIndependentInputSource(CoreInputDeviceTypes) CreateCoreIndependentInputSource(CoreInputDeviceTypes) CreateCoreIndependentInputSource(CoreInputDeviceTypes)
Creates a core input object that handles the input types as specified by the deviceTypes parameter. This core input object can process input events on a background thread.
A combined value of the enumeration.
An object that represents the input subsystem for interoperation purposes and can be used for input event connection points.
- Attributes
- Threading and async programming.Source d can return null if deviceTypes was passed as CoreInputDeviceTypes.None (that's not a typical way to call CreateCoreIndependentInputSource though).
- See Also
-
Events
Occurs when the composition scale factor of the SwapChainPanel has changed.
public : event TypedEventHandler CompositionScaleChanged
public event TypedEventHandler CompositionScaleChanged
Public Event CompositionScaleChanged
<SwapChainPanel CompositionScaleChanged="eventhandler"/>
- Attributes
-
Remarks
The supplier of the swap chain content might need to resize their content if a layout pass determines a new size for the panel or containers it's within, or if a RenderTransform is applied on the SwapChainPanel or any of its ancestors. Changes of this nature aren't always originated by app logic that's easy to detect from other events (for example the user might change a device orientation or a view state that causes layout to rerun), so this event provides a notification specifically for the scenario of changing the swap chain content size, which would typically invert the scale factors applied.
Check CompositionScaleX and CompositionScaleY any time you are handling CompositionScaleChanged (CompositionScaleChanged doesn't have event data, but if it fires it means that one or both properties have changed values on this SwapChainPanel ).
This event fires asynchronously versus the originating change. For example, dynamic animations or manipulations might affect the scale factor, and the event is raised when those dynamic changes are completed. | https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Controls.SwapChainPanel | CC-MAIN-2017-30 | en | refinedweb |
Support » Programming Orangutans and the 3pi Robot from the Arduino Environment » 5. Arduino Libraries for the Orangutan and 3pi Robot »
5.a. OrangutanAnalog - Analog Input Library
Overview
This library provides a set of methods that can be used to read analog voltage inputs, as well as functions specifically designed to read the value of the trimmer potentiometer (on the 3pi robot, Orangutan SV-xx8, Orangutan LV-168, and Baby Orangutan B), the battery voltage level in millivolts (3pi robot, SV-xx8), and the value of the temperature sensor in tenths of a degree F or C (on the Orangutan LV-168 only). This library gives you more control than existing Arduino analog input functions.
You do not need to initialize your OrangutanAnalog object before use. All initialization is performed automatically when needed.
All of the methods in this class are static; you should never have more than one instance of an OrangutanAnalog object in your sketch.
OrangutanAnalog Methods
Complete documentation of this library’s methods can be found in Section 2 of the Pololu AVR Library Command Reference.
Usage Examples
This library comes with two example sketches that you can load by going to File > Examples > OrangutanAnalog. The example sketches that come with the OrangutanMotors library also make limited use of this library.
1. OrangutanAnalogExample), and then it proceeds to execute the rest of the code in loop() while the ADC hardware works. Polling of the isConverting() method allows the program to determine when the conversion is complete and to update its notion of the trimpot value accordingly. Feedback is given via the red user LED, whose brightness is made to scale with the trimpot position.
#include <OrangutanLEDs.h> #include <OrangutanAnalog.h> /* * OrangutanAnalogExample for the 3pi, Orangutan SV-xx8, * Orangutan LV-168, or Baby Orangutan B * * This sketch uses the OrangutanAnalog library to read the voltage output * of the trimpot in the background while the rest of the main loop executes. * The LED is flashed so that its brightness appears proportional to the * trimpot position. * * * * */ OrangutanLEDs leds; OrangutanAnalog analog; unsigned int sum; unsigned int avg; unsigned char samples; void setup() // run once, when the sketch starts { analog.setMode(MODE_8_BIT); // 8-bit analog-to-digital conversions sum = 0; samples = 0; avg = 0; analog.startConversion(TRIMPOT); // start initial conversion } void loop() // run over and over again { if (!analog.isConverting()) // if conversion is done... { sum += analog.conversionResult(); // get result analog.startConversion(TRIMPOT); // and start next conversion if (++samples == 20) { avg = sum / 20; // compute 20-sample average of ADC result samples = 0; sum = 0; } } // when avg == 0, the red LED is almost totally off // when avg == 255, the red LED is almost totally on // brightness should scale approximately linearly in between leds.red(LOW); // red LED off delayMicroseconds(256 - avg); leds.red(HIGH); // red LED on delayMicroseconds(avg + 1); }
2. OrangutanAnalogExampleahrenheit. <OrangutanLCD.h> #include <OrangutanAnalog.h> /* * OrangutanAnalogExample2: for the Orangutan LV-168 * * This sketch uses the OrangutanAnalog library). * * You should see the trimpot voltage change as you turn it, and you can * get the temperature reading to slowly increase by holding a finger on the * underside of the Orangutan LV-168's PCB near the center of the board. * Be careful not to zap the board with electrostatic discharge if you * try this! */ OrangutanLCD lcd; OrangutanAnalog analog; void setup() // run once, when the sketch starts { analog.setMode(MODE_10_BIT); // 10-bit analog-to-digital conversions } void loop() // run over and over again { lcd.gotoXY(0,0); // LCD cursor to home position (upper-left) lcd.print(analog.toMillivolts(analog.readTrimpot())); // trimpot output in mV lcd.print(" mV "); // added spaces are to overwrite left over chars lcd.gotoXY(0, 1); // LCD cursor to start of the second line // get temperature in tenths of a degree F unsigned int temp = analog.readTemperatureF(); lcd.print(temp/10); // get the whole number of degrees lcd.print('.'); // print the decimal point lcd.print(temp - (temp/10)*10); // print the tenths digit lcd.print((char)223); // print a degree symbol lcd.print("F "); // added spaces are to overwrite left over chars delay(100); // wait for 100 ms (reduces LCD flicker) } | https://www.pololu.com/docs/0J17/5.a | CC-MAIN-2017-30 | en | refinedweb |
As I said a couple posts ago, there are other topics I’d like to consider during this attempt at the 180Days challenge. One of them is the idea of Test Driven Development, or in other words, ‘test before you publish’. Getting into a good habit of testing everything before it goes out the door is important for a number of reasons, but I won’t get into those here. Instead, I’ll mention what I’m planning to use for testing all of the Django code I’m writing as part of this Udemy class- Selenium with Python. Selenium is a browser test framework, though coupled with Python, it becomes a robust test ‘gate’, or shall I say, an ‘all tests must pass before publishing it’ tool.
Here’s a small sample that tests if you have Selenium working on your system:
from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Chrome() driver.get("") assert "Python" in driver.title elem = driver.find_element_by_name("q") elem.send_keys("pycon") elem.send_keys(Keys.RETURN) assert "No results found." not in driver.page_source driver.close()
“Test, you must…” –Yoda Jenkins, Lead QA Tester | https://jasondotstar.com/Day-6-Toying-With-TDD.html | CC-MAIN-2020-29 | en | refinedweb |
NAME
Perform a basic read of the clock.
SYNOPSIS
#include <zircon/syscalls.h> zx_status_t zx_clock_read(zx_handle_t handle, zx_time_t* now);
RIGHTS
handle must be of type ZX_OBJ_TYPE_CLOCK and have ZX_RIGHT_READ.
DESCRIPTION
Perform a basic read of the clock object and return its current time in the now out parameter.
RETURN VALUE
On success, returns ZX_OK along with the clock's current time in the now output parameter.
ERRORS
- ZX_ERR_BAD_HANDLE : handle is either an invalid handle, or a handle to an object type which is not ZX_OBJ_TYPE_CLOCK.
- ZX_ERR_ACCESS_DENIED : handle lacks the ZX_RIGHT_READ right.
- ZX_ERR_BAD_STATE : The clock object has never been updated. No initial time has been established yet. | https://fuchsia.dev/fuchsia-src/reference/syscalls/clock_read | CC-MAIN-2020-29 | en | refinedweb |
Hi,
I think you could use timer to achieve that. In processFunction you could register a
timer at specific time (event time or processing time) and get callbacked at that point. It
could be registered like
ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
More details on timer could be found in [1] and an example is in [2]. In this example,
a timer is registered in the last line of the processElement method, and the callback is implemented
by override the onTimer method.
[1]
[2]
------------------Original Mail ------------------
Sender:aj <ajainjecrc@gmail.com>
Send Date:Fri May 29 02:07:33 2020
Recipients:Yun Gao <yungao.gy@aliyun.com>
CC:user <user@flink.apache.org>
Subject:Re: Re: Flink Window with multiple trigger condition
Hi,
I have implemented the below solution and its working fine but the biggest problem with this
is if no event coming for the user after 30 min then I am not able to trigger because I am
checking
time diff from upcoming events. So when the next event comes than only it triggers but I want
it to trigger just after 30 mins.
So please help me to improve this and how to solve the above problem.
public class DemandSessionFlatMap extends RichFlatMapFunction<Tuple2<Long, GenericRecord>,
DemandSessionSummaryTuple> {
private static final Logger LOGGER = LoggerFactory.getLogger(DemandSessionFlatMap.class);
private transient ValueState<Tuple3<String, Long, Long>> timeState; // maintain
session_id starttime and endtime
private transient MapState<String, DemandSessionSummaryTuple> sessionSummary; //
map for hex9 and summarytuple
@Override
public void open(Configuration config) {
ValueStateDescriptor<Tuple3<String, Long, Long>> timeDescriptor =
new ValueStateDescriptor<>(
"time_state", // the state name
TypeInformation.of(new TypeHint<Tuple3<String, Long, Long>>()
{
}), // type information
Tuple3.of(null, 0L, 0L)); // default value of the state, if nothing
was set
timeState = getRuntimeContext().getState(timeDescriptor);
MapStateDescriptor<String, DemandSessionSummaryTuple> descriptor =
new MapStateDescriptor<String, DemandSessionSummaryTuple>("demand_session",
TypeInformation.of(new TypeHint<String>() {
}), TypeInformation.of(new TypeHint<DemandSessionSummaryTuple>()
{
}));
sessionSummary = getRuntimeContext().getMapState(descriptor);
}
@Override
public void flatMap(Tuple2<Long, GenericRecord> recordTuple2, Collector<DemandSessionSummaryTuple>
collector) throws Exception {
GenericRecord record = recordTuple2.f1;
String event_name = record.get("event_name").toString();
long event_ts = (Long) record.get("event_ts");
Tuple3<String, Long, Long> currentTimeState = timeState.value();
if (event_name.equals("search_list_keyless") && currentTimeState.f1 == 0)
{
currentTimeState.f1 = event_ts;
String demandSessionId = UUID.randomUUID().toString();
currentTimeState.f0 = demandSessionId;
}
long timeDiff = event_ts - currentTimeState.f1;
if (event_name.equals("keyless_start_trip") || timeDiff >= 1800000) {
Tuple3<String, Long, Long> finalCurrentTimeState = currentTimeState;
sessionSummary.entries().forEach( tuple ->{
String key = tuple.getKey();
DemandSessionSummaryTuple sessionSummaryTuple = tuple.getValue();
try {
sessionSummaryTuple.setEndTime(finalCurrentTimeState.f2);
collector.collect(sessionSummaryTuple);
} catch (Exception e) {
e.printStackTrace();
}
});
timeState.clear();
sessionSummary.clear();
currentTimeState = timeState.value();
}
if (event_name.equals("search_list_keyless") && currentTimeState.f1 == 0)
{
currentTimeState.f1 = event_ts;
String demandSessionId = UUID.randomUUID().toString();
currentTimeState.f0 = demandSessionId;
}
currentTimeState.f2 = event_ts;
if (currentTimeState.f1 > 0) {
String search_hex9 = record.get("search_hex9") != null ? record.get("search_hex9").toString()
: null;
DemandSessionSummaryTuple currentTuple = sessionSummary.get(search_hex9) != null
? sessionSummary.get(search_hex9) : new DemandSessionSummaryTuple();
if (sessionSummary.get(search_hex9) == null) {
currentTuple.setSearchHex9(search_hex9);
currentTuple.setUserId(recordTuple2.f0);
currentTuple.setStartTime(currentTimeState.f1);
currentTuple.setDemandSessionId(currentTimeState.f0);
}
if (event_name.equals("search_list_keyless")) {
currentTuple.setTotalSearch(currentTuple.getTotalSearch() + 1);
SearchSummaryCalculation(record, currentTuple);
}
sessionSummary.put(search_hex9, currentTuple);
}
timeState.update(currentTimeState);
}
On Sun, May 24, 2020 at 10:57 PM Yun Gao <yungao.gy@aliyun.com> wrote:
Hi,
First sorry that I'm not expert on Window and please correct me if I'm wrong, but from
my side, it seems the assigner might also be a problem in addition to the trigger: currently
Flink window assigner should be all based on time (processing time or event time), and it
might be hard to implement an event-driven window assigner that start to assign elements to
a window after received some elements.
What comes to me is that a possible alternative method is to use the low-level KeyedProcessFunction
directly: you may register a timer 30 mins later when received the "search" event and write
the time of search event into the state. Then for the following events, they will be saved
to the state since the flag is set. After received the "start" event or the timer is triggered,
you could load all the events from the states, do the aggregation and cancel the timer if
it is triggered by "start" event. A simpler case is [1] and it does not consider stop the
aggreation when received special event, but it seems that the logic could be added to the
case.
[1]
Best,
Yun
------------------Original Mail ------------------
Sender:aj <ajainjecrc@gmail.com>
Send Date:Sun May 24 01:10:55 2020
Recipients:Tzu-Li (Gordon) Tai <tzulitai@apache.org>
CC:user <user@flink.apache.org>
Subject:Re: Flink Window with multiple trigger condition
I am still not able to get much after reading the stuff. Please help with some basic code
to start to build this window and trigger.
Another option I am thinking is I just use a Richflatmap function and use the keyed state
to build this logic. Is that the correct approach?
On Fri, May 22, 2020 at 4:52 PM aj <ajainjecrc@gmail.com> wrote:
I was also thinking to have a processing time window but that will not work for me. I want
to start the window when the user "search" event arrives. So for each user window will start
from the search event.
The Tumbling window has fixed start end time so that will not be suitable in my case.
On Fri, May 22, 2020 at 10:23 AM Tzu-Li (Gordon) Tai <tzulitai@apache.org> wrote:
Hi,
To achieve what you have in mind, I think what you have to do is to use a
processing time window of 30 mins, and have a custom trigger that matches
the "start" event in the `onElement` method and return
TriggerResult.FIRE_AND_PURGE.
That way, the window fires either when the processing time has passed, or
the start event was recieved.
Cheers,
Gordon
--
Sent from:
--
Thanks & Regards,
Anuj Jain
Mob. : +91- 8588817877
Skype : anuj.jain07
--
Thanks & Regards,
Anuj Jain
Mob. : +91- 8588817877
Skype : anuj.jain07
--
Thanks & Regards,
Anuj Jain
Mob. : +91- 8588817877
Skype : anuj.jain07 | http://mail-archives.apache.org/mod_mbox/flink-user/202005.mbox/%3C6b9c4662-69b2-4d6f-91fe-15e3e2a2072a.yungao.gy@aliyun.com%3E | CC-MAIN-2020-29 | en | refinedweb |
QuickObserver 2.0
A quick way to enable observable behavior on any object.
Why Should I Use This?
If you are looking for a way to decouple the logic of your app from the front end, this is a good way to help. It allows for classes to be lightly coupled and for information to quickly pass in both directions. Either from the View Controller up to the Logic Controller, or for the Logic Controller back down to the View Controller. This also easily allows for multiple related view controllers to use the same logic controller.
Usage
Using the observer is easy, the following is an example observable object.
import QuickObserver class Controller: QuickObservable { var observer = QuickObserver<Actions, Errors>() enum Actions { case action } enum Errors: Error { case error } }
The above class Controller can now be observed, and issue the actions or errors described in the class.
Reporting A Change
Any time you need to alert observing objects that something has changed you can simply call
report(action: Actions) on the
observer like in the following example.
extension Controller { func performAnAction() { // Some Logic observer.report(.action) } }
Once
observer.report(.action) is called it’ll alert every observer that it needs to act on the change.
Adding An Observer
There are two types of observer. A repeat observer that will get updates until the observable object no longer exists, or it no longer exists. The second type is a one-off observer that gets an update and is then removed from future updates. Below are examples of each using the above
Controller class.
Repeat Observer
Below is a view controller that can continue to receive updates from the
Controller object. In the closure passed to the observable object, you see it returns a reference to the passes in observer. In this case that’s the View Controller itself. The
this variable allows you to access the ViewController without having to worry about retaining the reference.
import UIKit class ViewController: UIViewController { var controller = Controller() override func viewDidLoad() { super.viewDidLoad() controller.add(self) { (this, result) in switch result { case .success(let action): this.handle(action) case .failure(let error): this.handle(error) } } } func handle(_ action: Controller.Actions) { switch action { case .action: break // Do Some Work Here } } func handle(_ error: Controller.Errors) { switch error { case .error: break // Handle Error Here } } }
Single Observer
Below is a view controller that can continue to receive a single update from the
Controller object. In this case once the closure is called it is released and never called again.
class ViewController: UIViewController { var controller = Controller() override func viewDidLoad() { super.viewDidLoad() controller.add { [weak self] (result) in switch result { case .success(let action): self?.handle(action) case .failure(let error): self?.handle(error) } } } func handle(_ action: Controller.Actions) { switch action { case .action: break // Do Some Work Here } } func handle(_ error: Controller.Errors) { switch error { case .error: break // Handle Error Here } } }
Installation
Cocoapods
If you already have a podfile, simply add
pod 'QuickObserver', '~> 2.0.0' to it and run pod install.
If you haven’t set up cocoapods in your project and need help, refer to Using Pods. Make sure to add
pod 'QuickObserver', '~> 2.0.0' to your newly created pod file.
Manual
To manually install the files, simply copy everything from the QuickObserver directory into your project.
Latest podspec
{ "name": "QuickObserver", "version": "2.1.1", "summary": "A quick way to enable observable behavior on any object.", "description": "This library enable you to quickly add observers to your project.nWith a little adoption you can make it so any object can report on changes of state, or issue instructions to follower objects. The objects do not hold strong refrences to observing objects, and do not require the use of tokens.", "homepage": "", "documentation_url": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "Timothy Rascher": "[email protected]" }, "platforms": { "ios": "10.0" }, "swift_version": "5.0", "source": { "git": "", "branch": "Cocoapods/2.1.1", "tag": "Cocoapods/2.1.1" }, "source_files": "QuickObserver/**/*.{swift}" }
Wed, 10 Apr 2019 10:13:30 +0000 | https://tryexcept.com/articles/cocoapod/quickobserver | CC-MAIN-2020-29 | en | refinedweb |
Getting Size and Position of an Element in React
Introduction.
Getting the Size and Position
You can use
Element.getClientRects() and
Element.getBoundingClientRect() to get the size and position of an element. In React, you’ll first need to get a reference to that element. Here’s an example of how you might do that.
function RectangleComponent() { return ( <div ref={el => { // el can be null - see if (!el) return; console.log(el.getBoundingClientRect().width); // prints 200px }} style={{ display: "inline-block", width: "200px", height: "100px", background: blue }} /> ); }
This will print the element’s width to the console. This is what we expect because we set the width to 200px in style attribute.
The Problem
This basic approach will fail if the size or position of the element is dynamic, such as in the following scenarios.
- The element contains images and other resources which load asynchronously
- Animations
- Dynamic content
- Window resizing
These are all pretty obvious, right? Here’s a more sneaky scenario.
function ComponentWithTextChild() { return ( <div ref={el => { if (!el) return; console.log(el.getBoundingClientRect().width); setTimeout(() => { // usually prints a value that is larger than the first console.log console.log("later", el.getBoundingClientRect().width); }); setTimeout(() => { // usually prints a value that is larger than the second console.log console.log("way later", el.getBoundingClientRect().width); }, 1000); }} style={{ display: "inline-block" }} > <div>Check it out, here is some text in a child element</div> </div> ); }
This example renders a simple div with a single text node as its only child. It logs out the width of that element immediately, then again in the next cycle of the event loop and a third time one second later. Since we only have static content, you might expect that the width would be the same at all three times, but it is not. When I ran this example on my computer, the first width was
304.21875, the second time it was
353.125 and the third it was
358.078.
Interestingly, this problem does not happen when we perform the same DOM manipulations with vanilla JS.
const div = document.createElement('div') div.style.display = 'inline-block'; const p = document.createElement('p'); p.innerText = 'Hello world this is some text'; div.appendChild(p); document.body.appendChild(div); console.log('width after appending', div.getBoundingClientRect().width); setTimeout(() => console.log('width after a tick', div.getBoundingClientRect().width)); setTimeout(() => console.log('width after a 100ms', div.getBoundingClientRect().width), 100);
If you paste this into a console, you will see that the initial width value is correct. Therefore our problem is specific to React.
Solution #1: Polling
A natural solution to this is to simply poll for size and position changes.
function ComponentThatPollsForWidth() { return ( <div ref={el => { if (!el) return; console.log("initial width", el.getBoundingClientRect().width); let prevValue = JSON.stringify(el.getBoundingClientRect()); const start = Date.now(); const handle = setInterval(() => { let nextValue = JSON.stringify(el.getBoundingClientRect()); if (nextValue === prevValue) { clearInterval(handle); console.log( `width stopped changing in ${Date.now() - start}ms. final width:`, el.getBoundingClientRect().width ); } else { prevValue = nextValue; } }, 100); }} style={{ display: "inline-block" }} > <div>Check it out, here is some text in a child element</div> </div> ); }
Here we can see the values changing over time and about how long it takes to get a final value. In my environment it was somewhere around 150ms on a full page refresh, though I’m rendering it in Storybook which might be adding some overhead.
Pros
- Simple
- Covers all use cases
Cons
- Inefficient - might drain battery on a mobile device
- Updates delayed up to the duration of the polling interval
Solution #2: ResizeObserver
ResizeObserver is a new-ish API that will notify us when the size of element changes.
Pros
- Efficient for browsers that support it
- Automatically get improved performance when other browsers add support
- Nice API
Cons
- Doesn’t provide position updates, only size
- Have to use a polyfill
Resources
Recommendations
- Embrace the fact that size and position will change. Don’t get them once in
componentDidMountand expect them to be accurate.
- Store the element’s size and position on your state. Then check for changes via ResizeObserver or polling depending on your needs.
- If you use polling, remember that updating state causes a render cycle even if your new values are the same as the old values. Therefore, check that the size or position actually changed before updating your state.
A Practical Example With Polling
For my own purposes, I need not only the size, but also the position of the element. Therefore ResizeObserver was ruled out and I had to go with a polling solution. Here is a more pactical example of how you might implement polling.
In this example we’re going to center an element within a container. I’m calling it a tooltip, but it is always visible.
class TooltipContainer extends React.Component { constructor(props) { super(props); const defaultRect = { left: 0, width: 0 }; this.state = { containerRect: defaultRect, tooltipRect: defaultRect }; this.containerRef = React.createRef(); this.tooltipRef = React.createRef(); this.getRectsInterval = undefined; } componentDidMount() { this.getRectsInterval = setInterval(() => { this.setState(state => { const containerRect = this.containerRef.current.getBoundingClientRect(); return JSON.stringify(containerRect) !== JSON.stringify(state.containerRect) ? null : { containerRect }; }); this.setState(state => { const tooltipRect = this.tooltipRef.current.getBoundingClientRect(); return JSON.stringify(tooltipRect) === JSON.stringify(state.tooltipRect) ? null : { tooltipRect }; }); }, 10); } componentWillUnmount() { clearInterval(this.getRectsInterval); } render() { const left = this.state.containerRect.left + this.state.containerRect.width / 2 - this.state.tooltipRect.width / 2 + "px"; return ( <div ref={this.containerRef} style={{ display: "inline-block", position: "relative" }} > <span>Here is some text that will make the parent expand</span> <img src="" /> <div ref={this.tooltipRef} style={{ background: "blue", position: "absolute", top: 0, left }} > Tooltip </div> </div> ); } }
Summary
I wish there was a perfect solution to this problem. I wish ResizeObserver was supported by all browsers and provided position updates. For now, I’m afraid you’re going to have to pick your poison. | https://www.pluralsight.com/tech-blog/getting-size-and-position-of-an-element-in-react/ | CC-MAIN-2020-29 | en | refinedweb |
Enable capturing of events streaming through Azure Event Hubs
Azure Event Hubs Capture enables you to automatically deliver the streaming data in Event Hubs to an Azure Blob storage or Azure Data Lake Storage Gen1 or Gen 2 account of your choice.
You can configure Capture at the event hub creation time using the Azure portal. You can either capture the data to an Azure Blob storage container, or to an Azure Data Lake Storage Gen 1 or Gen 2 account.
For more information, see the Event Hubs Capture overview.
Capture data to Azure Storage
When you create an event hub, you can enable Capture by clicking the On button in the Create Event Hub portal screen. You then specify a Storage Account and container by clicking Azure Storage in the Capture Provider box. Because Event Hubs Capture uses service-to-service authentication with storage, you do not need to specify a storage connection string. The resource picker selects the resource URI for your storage account automatically. If you use Azure Resource Manager, you must supply this URI explicitly as a string.
The default time window is 5 minutes. The minimum value is 1, the maximum 15. The Size window has a range of 10-500 MB.
Note
You can enable or disable emitting empty files when no events occur during the Capture window.
Capture data to Azure Data Lake Storage Gen 2
Follow Create a storage account article to create an Azure Storage account. Set Hierarchical namespace to Enabled on the Advanced tab to make it an Azure Data Lake Storage Gen 2 account.
When creating an event hub, do the following steps:
Select On for Capture.
Select Azure Storage as the capture provider. The Azure Data Lake Store option you see for the Capture provider is for the Gen 1 of Azure Data Lake Storage. To use a Gen 2 of Azure Data Lake Storage, you select Azure Storage.
Select the Select Container button.
Select the Azure Data Lake Storage Gen 2 account from the list.
Select the container (file system in Data Lake Storage Gen 2).
On the Create Event Hub page, select Create.
Note
The container you create in a Azure Data Lake Storage Gen 2 using this user interface (UI) is shown under File systems in Storage Explorer. Similarly, the file system you create in a Data Lake Storage Gen 2 account shows up as a container in this UI.
Capture data to Azure Data Lake Storage Gen 1
To capture data to Azure Data Lake Storage Gen 1, you create a Data Lake Storage Gen 1 account, and an event hub:
Create an Azure Data Lake Storage Gen 1 account and folders
- Create a Data Lake Storage account, following the instructions in Get started with Azure Data Lake Storage Gen 1 using the Azure portal.
- Follow the instructions in the Assign permissions to Event Hubs section to create a folder within the Data Lake Storage Gen 1 account in which you want to capture the data from Event Hubs, and assign permissions to Event Hubs so that it can write data into your Data Lake Storage Gen 1 account.
Create an event hub
The event hub must be in the same Azure subscription as the Azure Data Lake Storage Gen 1 account you created. Create the event hub, clicking the On button under Capture in the Create Event Hub portal page.
In the Create Event Hub portal page, select Azure Data Lake Store from the Capture Provider box.
In Select Store next to the Data Lake Store drop-down list, specify the Data Lake Storage Gen 1 account you created previously, and in the Data Lake Path field, enter the path to the data folder you created.
Add or configure Capture on an existing event hub
You can configure Capture on existing event hubs that are in Event Hubs namespaces. To enable Capture on an existing event hub, or to change your Capture settings, click the namespace to load the overview screen, then click the event hub for which you want to enable or change the Capture setting. Finally, click the Capture option on the left side of the open page and then edit the settings, as shown in the following figures:
Azure Blob Storage
Azure Data Lake Storage Gen 2
Azure Data Lake Storage Gen 1
Next steps
- Learn more about Event Hubs capture by reading the Event Hubs Capture overview.
- You can also configure Event Hubs Capture using Azure Resource Manager templates. For more information, see Enable Capture using an Azure Resource Manager template.
- Learn how to create an Azure Event Grid subscription with an Event Hubs namespace as its source
- Get started with Azure Data Lake Store using the Azure portal | https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-capture-enable-through-portal | CC-MAIN-2020-29 | en | refinedweb |
We often need to use a randomly generated number for certain situations; using Math.random() * n will usually do the trick, but it will only calculate a number from 0 to n. What if we need a number that doesn't give 0 as the minimum value? How can you generate a random number between 100 and 1000? I'll show you how to do it in this Quick Tip.
Final Result
This example demonstrates the function we'll be creating:
Input two numbers separated by a ',' and press the random button.
Step 1: Brief Overview
Using a function created in ActionScript 3, we will calculate a number between two values. These values will be passed as parameters and used with the Math class to generate a random number.
Step 2: Create a New File
Open Flash and create a new Flash File (ActionScript 3.0).
Step 3: Open the Actions Panel
Press Option + F9 or go to Window > Actions to open the Actions Panel.
Step 4: Function Declaration
Declare a Function and name it randomRange; this function will return the random number, so set the return type to Number.
function randomRange():Number {
Step 5: Set Parameters
Two parameters will be used to calculate the number.
- minNum: The minimum value to return
- maxNum: The maximum value to return
function randomRange(minNum:Number, maxNum:Number):Number {
Step 6: Write the Main Function
This is the function with the actual random number generator line. The power of Math is used to generate the number.
function randomRange(minNum:Number, maxNum:Number):Number { return (Math.floor(Math.random() * (maxNum - minNum + 1)) + minNum); }
Step 7: How it Works
We have our random number generator function, but what does this function do?
Take a look at the following image to get a better idea:
In the image's example, if Math.random() was less than 0.5, the result would be 550.
Step 8: Test with a Trace
A simple way to test the function is to use a trace() function. See the code below:
function randomRange(minNum:Number, maxNum:Number):Number { return (Math.floor(Math.random() * (maxNum - minNum + 1)) + minNum); } trace(randomRange(10, 20)); //A number between 10 and 20
Step 9: Example
This is a working example, it uses a button to calculate the number and display it in a TextField.
/* The randomRange function */ function randomRange(minNum:Number, maxNum:Number):Number { return (Math.floor(Math.random() * (maxNum - minNum + 1)) + minNum); } /* The actions that will perform when the button is pressed */ function buttonAction(e:MouseEvent):void { //An array will store the numbers in the textfield var n:Array = inputText.text.split(","); //Calculate the number based on the input, convert the result to a string //and send that string to the textfield generatedNumber.text = String(randomRange(n[0], n[1])); } //Add button's event listener actionButton.addEventListener(MouseEvent.MOUSE_UP, buttonAction);
Input two numbers separated by a ',' and press the random button.
Conclusion
This is a basic example of how you can use this function; experiment and use it in your own projects!
Thanks for reading!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/quick-tip-get-a-random-number-within-a-specified-range-using-as3--active-3142 | CC-MAIN-2020-29 | en | refinedweb |
Let's Refactor My Personal Website and Portfolio using Gatsby, Part Four: CSS in JS Using Emotion
Michael Caveney
Updated on
・4 min read
Part One
Part Two
Part Three
Welcome to the latest entry in my portfolio refactor blog series! Today I'm going to discuss how you can make CSS architecture in your applications easier by using the popular CSS-in-JS library Emotion.
What Is CSS-in-JS?
CSS-in-JS is exactly what it sounds like: using some combination of architectural style or third-party library to place most, if not all of a project's CSS in the JS files. It's somewhat controversial amongst developers, but I have found it to simplify the CSS code I need to write in applications, making maintenance, debugging, and overall drafting of code quicker.
What is Emotion and How Does It Work?
There are several strong choices for CSS-in-JS libraries out there, but I'm going with Emotion on this project because I haven't used it before, and it has been one of the favorites for a long time. It's worth noting at this point that the major CSS-In-JS libraries more or less have feature parity, so use the one that works best for you or your team.
To use Emotion in a Gatsby project, we need to include the package(s) and the corresponding plugin:
npm i --save @emotion/core @emotion/styled gatsby-plugin-emotion
And add that plugin to
gatsby-config.js:
`gatsby-plugin-emotion`
There are a couple of different ways that I could go about styling with Emotion in Gatsby. The first, and my least preferred, is using the
css prop, which is literally sets the style(s) as a prop that gets passed into components, ala the following example:
import React from "react" import styled from "@emotion/styled" import { css } from "@emotion/core" const redBackground = css` background-color: red; ` <h1 css={redBackground}>I am a demo headline</h1>
While this can work for smaller insertions of code, I feel that it adds noise to the JS code and makes it less readable.
Edit: Something I've learned the hard way when using Emotion with Gatsby, or perhaps any instance in which you're trying to use it on non-native HTML elements: styled-components may not work as expected on certain elements (like Gatsby/Reach Router's
<Link />, and the css prop as absolutely necessary in instances like this.
My preferred architectural style for Emotion is using styled components, something cribbed from the previously mentioned styled-components library. This works by creating a new named component as a tagged template literal, and adding the styles inside, like the Headline component in the following example:
import React from "react" import styled from '@emotion/styled'; import Layout from "../components/layout" import Image from "../components/image" import SEO from "../components/seo" const IndexPage = () => ( <Layout> <SEO title="Home" /> <Headline>Hi people</Headline> <p>Welcome to your new Gatsby site.</p> <p>Now go build something great.</p> <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}> <Image /> </div> </Layout> ); export default IndexPage; const Headline = styled.h1` color: white; background-color: purple `;
I like this style because:
It makes is easy to separate the CSS logic from the React logic on the page. You don't have to put the styled components underneath the export statement, but I do so for readability.
You have the opportunity to again, improve code readability, with semantically named components.
Having all component-specific logic, including styling, in one place can really aid with site building and maintenance speed.
Another advantage I almost forgot to mention is this technique disrupts the normal CSS cascade, resulting is less monkey-patching and hacks like
!importantto get desired results, which can reduce a lot of mental overhead.
You don't HAVE to put everything in components: My usual approach, especially for smaller site like this one is going to be, is to maintain a traditional CSS file for any and all global styling for the site, and put everything else in individual components. When I refactored the last iteration of my portfolio from standard CSS to styled-components, this reduced the size of my global CSS file by about 400 lines.
Wrap Up
At the end of the day, you have to decide if a CSS-In-JS library is a CSS tool that works for you, but I hope that a lot of others can get the vastly improved experience that tools like Emotion and styled-components have given me.
I've written very little application-specific code in these first four part of this walkthrough of my new site, but that changes next week as we take a deep dive into working with images in Gatsby, and a landing page starts to emerge!
What's in your podcast rotation right now?
Curious what everyone is listening to for podcasts these days, whether it be te...
| https://dev.to/dylanesque/let-s-refactor-my-personal-website-and-portfolio-using-gatsby-part-four-css-in-js-using-emotion-fc4 | CC-MAIN-2019-47 | en | refinedweb |
DtMmdbSectionGetShortTitle(library cDatlMlm)dbSectionGetShortTitle(library call)
NAME [Toc] [Back]
DtMmdbSectionGetShortTitle - obtains the short title for a section
SYNOPSIS [Toc] [Back]
#include <DtMmdb.h>
const char* DtMmdbSectionGetShortTitle(
DtMmdbInfoRequest* request,
unsigned int* length);
DESCRIPTION [Toc] [Back]
The DtMmdbSectionGetShortTitle function returns the short title for
the specified section. Do not use the free function on the returned
pointer. Table lookup is involved if the section identifier is
specified by the locator_ptr field.
ARGUMENTS [Toc] [Back]
request Specifies the bookcase in the bookcase descriptor field and
either the section's Database Engine identifier (in the
primary_oid field) or the section's logical identifier (in
the locator_ptr field). If both of these fields have a
value, DtMmdbSectionGetShortTitle uses the locator_ptr
value.
length Specifies the variable to receive the length of the returned
short title, if the returned pointer to the title is not
NULL.
RETURN VALUE [Toc] [Back]
If DtMmdbSectionGetShortTitle completes successfully, it returns a
pointer to a NULL-terminated short title character string. If it
fails, it returns a NULL pointer.
EXAMPLE [Toc] [Back]
The following shows how a DtMmdbSectionGetShortTitle call might be
coded.
DtMmdbInfoRequest request;
/* fill the request here */
DtMmdbSectionGetShortTitle(&request);
SEE ALSO [Toc] [Back]
DtMmdbSectionGetLongTitle(3)
- 1 - Formatted: January 24, 2005 | http://nixdoc.net/man-pages/HP-UX/man3/DtMmdbSectionGetShortTitle.3.html | CC-MAIN-2019-47 | en | refinedweb |
I try to do ascould-i-manage-multiple-hbase-cluster-in-the-same, the "HBASE2" solution.
I have changed params.py, params_linux.py. When I install the regienserver, it always run by ''/usr/bin/yum -d 0 -e 0 -y install hbase_lv''.
I tracd the code about this. Found the argvs passed to hbaseregionserver.execute method includes a json file: `/var/lib/ambari-agent/data/command-1404.json`, and it contains a package_list filed.
I want to know where the pacakge_list comes from?
Created 04-11-2017 05:02 AM
Regarding your query: "I want to know where the pacakge_list comes from?"
.
Please check the following line of code:
It's simple Yum python APIs that are used to determine the 'package_list'
import yum yb = yum.YumBase() name_regex = re.escape(name).replace("\\?", ".").replace("\\*", ".*") + '$' regex = re.compile(name_regex) with suppress_stdout(): package_list = yb.rpmdb.simplePkgList()
On your ambari installation you can find it in :
/usr/lib/ambari-server/lib/resource_management/core/providers/package/__init__.py
. | https://community.cloudera.com/t5/Support-Questions/where-is-the-code-about-var-lib-ambari-agent-data-command/m-p/194517 | CC-MAIN-2019-47 | en | refinedweb |
Manage virtual machines
This section provides an overview of how to create Virtual Machines (VMs) using templates. It also explains other preparation methods, including physical to virtual conversion (P2V), cloning templates, and importing previously exported VMs.
What is a virtual machine?What is a virtual machine?
A Virtual Machine (VM) is a software computer that, like a physical computer, runs an operating system and applications. The VM comprises a set of specification and configuration files backed by the physical resources of a host. Every VM has virtual devices that provide the same functions as physical hardware. VMs can give the benefits of being more portable, more manageable, and more secure. In addition, you can tailor the boot behavior of each VM to your specific requirements. For more information, see VM Boot Behavior.
Citrix Hypervisor supports guests with any combination of IPv4 or IPv6 configured addresses.
Types of virtual machines
In Citrix Hypervisor VMs can operate in one of two modes:
Paravirtualized (PV): The virtual machine kernel uses specific code which is aware it is running on a hypervisor for managing devices and memory.
Fully virtualized (HVM): Specific processor features are used to ‘trap’ privileged instructions that the virtual machine carries out. This capability enables you to use an unmodified operating system. For network and storage access, emulated devices are presented to the virtual machine. Alternatively, PV drivers can be used for performance and reliability reasons.
Create VMsCreate VMs
Use VM templates
VMs are prepared from templates. A template is a gold image that contains all the various configuration settings to create an instance of a specific VM. Citrix Hypervisor ships with a base set of templates, which are raw VMs, on which you can install an operating system. Different operating systems require different settings to run at their best. Citrix Hypervisor templates are tuned to maximize operating system performance.
There are two basic methods by which you can create VMs from templates:
Using a complete pre-configured template, for example the Demo Linux Virtual Appliance.
Installing an operating system from a CD, ISO image or network repository onto the appropriate provided template.
Windows VMs describes how to install Windows operating systems onto VMs.
Linux VMs describes how to install Linux operating systems onto VMs.
Note:
Templates created by older versions of Citrix Hypervisor can be used in newer versions of Citrix Hypervisor. However, templates created in newer versions of Citrix Hypervisor are not compatible with older versions of Citrix Hypervisor. If you created a VM template by using Citrix Hypervisor 8.0, to use it with an earlier version, export the VDIs separately and create the VM again.
Other methods of VM creation
In addition to creating VMs from the provided templates, you can use the following methods to create VMs.
Physical-to-virtual conversion
Physical to Virtual Conversion (P2V) is the process that converts an existing Windows operating system on a physical server to a virtualized instance of itself. The conversion includes the file system, configuration, and so on. This virtualized instance is then transferred, instantiated, and started as a VM on the Citrix Hypervisor server.
Clone an existing VM
You can make a copy of an existing VM by cloning from a template. Templates are ordinary VMs which are intended to be used as master copies to create instances of VMs from. A VM can be customized and converted into a template. Ensure that you follow the appropriate preparation procedure for the VM. For more information, see Preparing for Cloning a Windows VM Using Sysprep and Preparing to Clone a Linux VM.
Note:
Templates cannot be used as normal VMs.
Citrix Hypervisor has two mechanisms for cloning VMs:
A full copy
Copy-on-Write
The faster Copy-on-Write mode only writes modified blocks to disk. Copy-on-Write is designed to save disk space and allow fast clones, but slightly slows down normal disk performance. A template can be fast-cloned multiple times without slowdown.
Note:
If you clone a template into a VM and then convert the clone into a template, disk performance can decrease. The amount of decrease has a linear relationship to the number of times this process has happened. In this event, the
vm-copyCLI command can be used to perform a full copy of the disks and restore expected levels of disk performance.
Notes for resource pools
If you create a template from VM virtual disks on a shared SR, the template cloning operation is forwarded to any server in the pool that can access the shared SRs. However, if you create the template from a VM virtual disk that only has a local SR, the template clone operation is only able to run on the server that can access that SR.
Import an exported VM
You can create a VM by importing an existing exported VM. Like cloning, exporting and importing a VM is fast way to create more VMs of a certain configuration. Using this method enables you to increase the speed of your deployment. You might, for example, have a special-purpose server configuration that you use many times. After you set up a VM as required, export it and import it later to create another copy of your specially configured VM. You can also use export and import to move a VM to the Citrix Hypervisor server that is in another resource pool.
For details and procedures on importing and exporting VMs, see Importing and Exporting VMs.
Citrix VM ToolsCitrix VM Tools
Citrix VM Tools provide high performance I/O services without the overhead of traditional device emulation. Citrix VM Tools consists of I/O drivers (also known as Paravirtualized drivers or PV drivers) and the Management Agent. Install Citrix VM Tools on each Windows VM for that VM to have a fully supported configuration, and to be able to use the xe CLI or XenCenter. The version of Citrix VM Tools installed on the VM must be the same as the latest available version installed on the Citrix Hypervisor server. For example, some hotfixes include an updated Citrix VM Tools ISO that updates the version installed on the host.
The I/O drivers contain storage and network drivers, and low-level management interfaces. These drivers replace the emulated devices and provide high-speed transport between Windows and the Citrix Hypervisor product family software. While installing of a Windows operating system, Citrix Hypervisor uses traditional device emulation to present a standard IDE controller and a standard network card to the VM. This emulation allows Windows to install by using built-in drivers, but with reduced performance due to the overhead inherent in emulating the controller drivers.
The Management Agent, also known as the Guest Agent, is responsible for high-level virtual machine management features and provides a full set of functions to XenCenter. These functions include quiesced snapshots.
You must install Citrix VM Tools on each Windows VM for the VM to have a fully supported configuration. The version of Citrix VM Tools installed on the VM must be the same as the version installed on the Citrix Hypervisor server. A VM functions without the Citrix VM Tools, but performance is hampered when the I/O drivers (PV drivers) are not installed. You must install Citrix VM Tools on Windows VMs to be able to perform the following operations:
Cleanly shut down, reboot, or suspend a VM
View VM performance data in XenCenter
Migrate a running VM (using live migration or storage live migration)
Create quiesced snapshots or snapshots with memory (checkpoints), or revert to snapshots
Adjust the number of vCPUs on a running Linux VM (Windows VMs require a reboot for this change to take effect)
Find out the virtualization state of a VM
XenCenter reports the virtualization state of a VM on the VM’s General tab. You can find out whether or not Citrix VM Tools (I/O drivers and the Management Agent) are installed. This tab also displays whether the VM can install and receive updates from Windows Update. The following section lists the messages displayed in XenCenter:
I/O optimized (not optimized): This field displays whether or not the I/O drivers are installed on the VM. Click the Install I/O drivers and Management Agent link to install the I/O drivers from the Citrix VM Tools ISO.
Note:
I/O drivers are automatically installed on a Windows VM that can receive updates from Windows Update. For more information, see Updating Citrix VM Tools.
Management Agent installed (not installed): This field displays whether or not the Management Agent is installed on the VM. Click the Install I/O drivers and Management Agent link to install the Management Agent from the Citrix VM Tools ISO.
Able to (Not able to) receive updates from Windows Update: specifies whether the VM can receive I/O drivers from Windows Update.
Note:
Windows Server Core 2016 does not support using Windows Update to install or update the I/O drivers. Instead use the installer on the Citrix VM Tools ISO.
Install I/O drivers and Management Agent: this message is displayed when the VM does not have the I/O drivers or the Management Agent installed. Click the link to install Citrix VM Tools. For Linux VMs, clicking the status link switches to the VM’s console and loads the Citrix VM Tools ISO. You can then mount the ISO and manually run the installation, as described in Installing Citrix VM Tools.
Supported guests and allocating resourcesSupported guests and allocating resources
For a list of supported guest operating systems, see Supported Guests, Virtual Memory, and Disk Size Limits
This section describes the differences in virtual device support for the members of the Citrix Hypervisor product family.
Citrix Hypervisor product family virtual device support
The current version of the Citrix Hypervisor product family has some general limitations on virtual devices for VMs. Specific guest operating systems may have lower limits for certain features. The individual guest installation section notes the limitations. For detailed information on configuration limits, see Configuration Limits.
Factors such as hardware and environment can affect the limitations. For information about supported hardware, see the Citrix Hypervisor Hardware Compatibility List.
VM block devices
In the para-virtualized (PV) Linux case, block devices are passed through as PV devices. Citrix Hypervisor does not attempt to emulate SCSI or IDE, but instead provides a more suitable interface in the virtual environment. This interface is in the form of
xvd* devices. It is also sometimes possible to get an
sd* device using the same mechanism, where the PV driver inside the VM takes over the SCSI device namespace. This behavior is not desirable so it is best to use
xvd* where possible for PV guests. The
xvd* devices are the default for Debian and RHEL.
For Windows or other fully virtualized guests, Citrix Hypervisor emulates an IDE bus in the form of an
hd* device. When using Windows, installing the Citrix VM Tools installs a special I/O driver that works in a similar way to Linux, except in a fully virtualized environment. | https://docs.citrix.com/en-us/citrix-hypervisor/vms.html | CC-MAIN-2019-47 | en | refinedweb |
import "github.com/Yelp/fullerite/src/fullerite/util"
Package util is catchall for all utilities that might be used throughout the fullerite code.
iptools.go: It includes functionality to determine the ip address of the machine that's running a fullerite instance.
mesos_leader.go: Detects the leader from amongst a set of mesos masters. It also caches this value for a configurable ttl to save time.
doc.go file.go http_alive.go iptools.go marathon_chronos_leader.go nerve_config.go strutil.go uwsgi_stats_parser.go
func CreateMinimalNerveConfig(config map[string]EndPoint) map[string]map[string]map[string]interface{}
CreateMinimalNerveConfig creates a minimal nerve config
ExternalIP Provides the string representation of the IP address of the box.
GetFileSize returns the size in bytes of the specified file
GetWrapper performs a get against a URL and return either the body of the response or an error
IPInHostInterfaces checks if given IP is assigned to a local interface
IsLeader checks if a given host is the marathon leader
ParseUWSGIWorkersStats Counts workers status stats from JSON content and returns metrics
StrSanitize enables handler lever sanitation
StrToFloat converts a string value to float
EndPoint defines a struct for endpoints
HTTPAlive implements a simple way of reusing http connections
func (connection *HTTPAlive) Configure(timeout time.Duration, aliveDuration time.Duration, maxIdleConnections int)
Configure the http connection
func (connection *HTTPAlive) MakeRequest(method string, uri string, body io.Reader, header map[string]string) (*HTTPAliveResponse, error)
MakeRequest make a new http request
HTTPAliveResponse returns a response
NerveService is an exported struct containing services' info
func ParseNerveConfig(raw *[]byte, namespaceIncluded bool) ([]NerveService, error)
ParseNerveConfig is responsible for taking the JSON string coming in into a map of service:port it will also filter based on only services runnign on this host. To deal with multi-tenancy we actually will return port:service
Package util imports 15 packages (graph). Updated 2019-11-12. Refresh now. Tools for package owners.
The go get command cannot install this package because of the following issues: | https://godoc.org/github.com/Yelp/fullerite/src/fullerite/util | CC-MAIN-2019-47 | en | refinedweb |
import "gopkg.in/src-d/go-git.v4/utils/merkletrie/noder"
Package noder provide an interface for defining nodes in a merkletrie, their hashes and their paths (a noders and its ancestors).
The hasher interface is easy to implement naively by elements that already have a hash, like git blobs and trees. More sophisticated implementations can implement the Equal function in exotic ways though: for instance, comparing the modification time of directories in a filesystem.
NoChildren represents the children of a noder without children.
Equal functions take two hashers and return if they are equal.
These functions are expected to be faster than reflect.Equal or reflect.DeepEqual because they can compare just the hash of the objects, instead of their contents, so they are expected to be O(1).
Hasher interface is implemented by types that can tell you their hash.
type Noder interface { Hasher fmt.Stringer // for testing purposes // Name returns the name of an element (relative, not its full // path). Name() string // IsDir returns true if the element is a directory-like node or // false if it is a file-like node. IsDir() bool // Children returns the children of the element. Note that empty // directory-like noders and file-like noders will both return // NoChildren. Children() ([]Noder, error) // NumChildren returns the number of children this element has. // // This method is an optimization: the number of children is easily // calculated as the length of the value returned by the Children // method (above); yet, some implementations will be able to // implement NumChildren in O(1) while Children is usually more // complex. NumChildren() (int, error) }
The Noder interface is implemented by the elements of a Merkle Trie.
There are two types of elements in a Merkle Trie:
- file-like nodes: they cannot have children.
- directory-like nodes: they can have 0 or more children and their hash is calculated by combining their children hashes.
Path values represent a noder and its ancestors. The root goes first and the actual final noder the path is referring to will be the last.
A path implements the Noder interface, redirecting all the interface calls to its final noder.
Paths build from an empty Noder slice are not valid paths and should not be used.
Children returns the children of the final noder in the path.
Compare returns -1, 0 or 1 if the path p is smaller, equal or bigger than other, in "directory order"; for example:
"a" < "b" "a/b/c/d/z" < "b" "a/b/a" > "a/b"
Hash returns the hash of the final noder of the path.
IsDir returns if the final noder of the path is a directory-like noder.
Last returns the final noder in the path.
Name returns the name of the final noder of the path.
NumChildren returns the number of children the final noder of the path has.
String returns the full path of the final noder as a string, using "/" as the separator.
Package noder imports 3 packages (graph) and is imported by 9 packages. Updated 2019-08-04. Refresh now. Tools for package owners. | https://godoc.org/gopkg.in/src-d/go-git.v4/utils/merkletrie/noder | CC-MAIN-2019-47 | en | refinedweb |
In some cases, special functions need to be predefined in a software application to enhance the functionality of various applications. There are many Microsoft Excel add-ins to improve the functionality of MS Excel. Similarly, SAP facilitates some predefined functions by providing Business Add-Ins known as BADIs.
A BADI is an enhancement technique that facilitates a SAP programmer, a user, or a specific industry to add some additional code to the existing program in SAP system. We can use standard or customized logic to improve the SAP system. A BADI must first be defined and then implemented to enhance SAP application. While defining a BADI, an interface is created. BADI is implemented by this interface, which in turn is implemented by one or more adaptor classes.
The BADI technique is different from other enhancement techniques in two ways −
You can also create filter BADIs, which means BADIs are defined on the basis of filtered data that is not possible with enhancement techniques. The concept of BADIs has been redefined in SAP Release 7.0 with the following goals −
Enhancing the standard applications in a SAP system by adding two new elements in the ABAP language, that is ‘GET BADI’ and ‘CALL BADI’.
Offering more flexibility features such as contexts and filters for the enhancement of standard applications in a SAP system.
When a BADI is created, it contains an interface and other additional components, such as function codes for menu enhancements and screen enhancements. A BADI creation allows customers to include their own enhancements in the standard SAP application. The enhancement, interface, and generated classes are located in an appropriate application development namespace.
Hence, a BADI can be considered as an enhancement technique that uses ABAP objects to create ‘predefined points’ in the SAP components. These predefined points are then implemented by individual industry solutions, country variants, partners and customers to suit their specific requirements. SAP actually introduced the BADI enhancement technique with the Release 4.6A, and the technique has been re-implemented again in the Release 7.0. | https://www.tutorialspoint.com/sap_abap/sap_abap_business_add_ins.htm | CC-MAIN-2019-47 | en | refinedweb |
EmberEndpointDescription Struct Reference
Endpoint information (a ZigBee Simple Descriptor).
#include <
stack-info.h>
Endpoint information (a ZigBee Simple Descriptor).
This is a ZigBee Simple Descriptor and contains information about an endpoint. This information is shared with other nodes in the network by the ZDO.
Field Documentation
The endpoint's device ID within the application profile.
The endpoint's device version.
The number of input clusters.
The number of output clusters.
Identifies the endpoint's application profile.
The documentation for this struct was generated from the following file:
stack-info.h | https://docs.silabs.com/zigbee/6.6/em35x/structEmberEndpointDescription | CC-MAIN-2019-47 | en | refinedweb |
/>
Getting back to the basics of Bayes' Theorem using Python.
Thomas Bayes and Bayesianism
Thomas Bayes was a rather obscure 18th Century English clergyman and it is not even certain when and where he was born, but it was around 1701 and possibly in Hertfordshire just north of London. His only mark on history is the eponymous Bayes' Theorem but the name Bayesian is now used in many different areas, sometimes with only tenuous links to the original theorem.
This gives the impression that Bayesianism is a huge and complex field covering not just probability but extending in to philosophy, computer science and beyond. In this article I will get back to the basics of the theorem, firstly by applying it to its "standard" example of medical tests, and then writing a simple demonstration of its use in Python. then P(ill) = 0.01, P(healthy) = 0.99 and 0.01 + 0.99 = 1.
The | symbol used in the formula extends the notation to indicate the probability of a certain outcome given an existing state, and the | can be read as "given". If in the above example we assume a test is available for the disease then Bayes' Theorem allows us to calculate the probability of a person having the disease given a positive test result.
You might assume that the probability of someone having a disease if they test positive is 1, and conversely the probability is 0 if they test negative. Unfortunately no medical test is perfect: some people with the disease will test negative and some people who do not have the disease will test positive. Even with a highly accurate test this can lead to some startlingly inaccurate results, as we will see.
Let's make up a few fictitious numbers for an equally fictitious disease, just for demonstration purposes. We need to know the population and the percentage which has the disease. We also need a couple of numbers to describe the accuracy of the test: what percentage of people with the disease test positive, and what percentage of people who are healthy test negative. These are the sensitivity and specificity.
Now let's assume everyone has been tested and we have the following figures:
The sensitivity and specificity rates of 99% look impressive, but as you can see from the previous table the number of healthy people who wrongly tested positive (shown in bold) is exactly the same as the number of ill people who correctly tested positive (again shown in bold). Therefore if a person tests positive there is only a probability of 0.5 that they are actually ill.
Plugging the Numbers into the Formula
Using the process above we established the probability of a person testing positive actually having the disease. However, it was a messy process which can be simplified by using the formula for Bayes' Theorem.
This is the theorem applied to our sample problem, which as you can see gives us the 0.5 result we are looking for.
/>
The values above the line are straightforward, and come straight from our table of known data. However, the part below the line, P(positive), needs to be calculated from:
P(healthy) * P(positive|healthy) + P(ill) * P(positive|ill)
This gives us the overall probability of testing positive, irrespective of whether the subject is ill or healthy.
Let's Code It
We can stare at a (virtual) blackboard all day but to fully understand what's going on it's a good idea to implement the formula in code. This also gives us the opportunity to change values quickly and easily to see how this affects the outcome.
The code for this project is all in one short file called bayes.py, and you can download it as a zip file or clone/download the Github repository.
Source Code Links
This is the source code in its entirety.
bayes.py
def main(): """ Call 2 functions to calculate conditional probabilities, firstly using basic arithmetic and then using Bayes' formula. """ population = 1000000 # These 3 variables are for the known probabilities. # Change them to see the effect on P(ill|positive) P_ill = 0.01 P_positive_if_ill = 0.99 # sensitivity P_negative_if_healthy = 0.99 # specificity calculate_without_bayes(population, P_ill, P_positive_if_ill, P_negative_if_healthy) print() calculate_with_bayes(P_ill, P_positive_if_ill, P_negative_if_healthy) def calculate_without_bayes(population, P_ill, P_positive_if_ill, P_negative_if_healthy): """ Calculate P(ill | positive) without Bayes' formula. This is more laborious but shows how the result is calculated using basic arithmetic. """ heading = "Calculate P(ill | positive) without Bayes' Theorem" print(heading) print("=" * len(heading) + "\n") percent_ill = P_ill * 100 number_ill = population * P_ill number_healthy = population * (1 - P_ill) ill_positive = number_ill * P_positive_if_ill healthy_positive = number_healthy * (1 - P_negative_if_healthy) P_ill_if_positive = ill_positive / (ill_positive + healthy_positive) print(f"Population: {population}") print(f"Percent ill: {percent_ill}%") print(f"Number ill: {number_ill:>.0f}") print(f"Number healthy: {number_healthy:>.0f}") print(f"P(positive if ill): {P_positive_if_ill}") print(f"P(negative if healthy): {P_negative_if_healthy}") print(f"Ill and test positive: {ill_positive:>.0f}") print(f"Healthy but test positive: {healthy_positive:>.0f}") print(f"P(ill | positive): {P_ill_if_positive:>.2f}") def calculate_with_bayes(P_ill, P_positive_if_ill, P_negative_if_healthy): """ Calculate P(ill | positive) with Bayes' Theorem. """ P_healthy = 1 - P_ill P_positive_if_healthy = 1 - P_negative_if_healthy P_ill_if_positive = (P_positive_if_ill * P_ill) / ((P_healthy * P_positive_if_healthy) + (P_ill * P_positive_if_ill)) heading = "Calculate P(ill | positive) with Bayes' Theorem" print(heading) print("=" * len(heading) + "\n") print(f"P(ill): {P_ill}") print(f"P(healthy): {P_healthy}") print(f"P(positive if ill): {P_positive_if_ill}") print(f"P(positive if healthy): {P_positive_if_healthy:>.2f}\n") print(" P(positive if ill) * P(ill)") print("P(ill | positive) = -------------------------------------------------------------------") print(" P(healthy) * P(positive if healthy) + P(ill) * P(positive if ill)") print("\n") print(f" {P_positive_if_ill} * {P_ill}") print(" = -------------------------------------------------------------------") print(f" {P_healthy} * {P_positive_if_healthy:>.2f} + {P_ill} * {P_positive_if_ill}") print("\n") print(f" = {P_ill_if_positive:>.2f}") main()
The main Function
Here we just create a few variables for the population and probabilities which are then passed to the two functions which calculate the probability of being ill if testing positive.
The calculate_without_bayes Function
In this function we calculate a few interim values from the specified population and probabilities, and them use them for our ultimate goal of finding the probability of being ill if testing positive.
All the values are then printed which gives an intuitive idea of the process, but this is a bit long-winded so in the next function we'll do it the "correct" way using Bayes' formula.
The calculate_with_bayes Function
Firstly we need to calculate a couple more probabilities from those we already know: the probability of being healthy and the probability of testing positive if healthy. After doing this we can go ahead and implement Bayes' Theorem.
The rest of the function is taken up with printing out the results, including the interim calculations.
Let's Run It
Now we can go ahead and run the program with this command:
Run
python3.8 bayes.py
The output is
Program Output
Calculate P(ill | positive) without Bayes' Theorem ================================================== Population: 1000000 Percent ill: 1.0% Number ill: 10000 Number healthy: 990000 P(positive if ill): 0.99 P(negative if healthy): 0.99 Ill and test positive: 9900 Healthy but test positive: 9900 P(ill | positive): 0.50 Calculate P(ill | positive) with Bayes' Theorem =============================================== P(ill): 0.01 P(healthy): 0.99 P(positive if ill): 0.99 P(positive if healthy): 0.01 P(positive if ill) * P(ill) P(ill | positive) = ------------------------------------------------------------------- P(healthy) * P(positive if healthy) + P(ill) * P(positive if ill) 0.99 * 0.01 = ------------------------------------------------------------------- 0.99 * 0.01 + 0.01 * 0.99 = 0.50
You might want to experiment with different sensitivities and specificities. The 99% ones I used are actually very high and many real world medical tests are much less accurate, which as you have probably realised means that the chances of a person having a disease if they test positive can be very low.
So does this mean that mass testing or screening of patients even if they have no symptoms is too inaccurate to be worthwhile? This is really a matter of opinion, but if you hear of or have personal experience of misdiagnoses then please bear in mind Thomas Bayes and his theorem. | https://www.codedrome.com/the-fundamentals-of-bayes-theorem-in-python/ | CC-MAIN-2021-25 | en | refinedweb |
Tweepy is an easy to use library to Python that will make your life easy. In a terminal type.
pip install tweepy
You might be using pip3 and you might need to be admin or install it locally.
Now to the fun stuff.
import tweepy # You need to replace all these with your tokens. consumer_key = "Replace this with your API token here" consumer_secret = "Replace this with your API secret token here" access_token = "Replace this with your Access token" access_token_secret = "Replace this with your Access secret token" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) # Now this is actually doing what we want. api.update_status(status="Hello, World!") | https://www.learnpythonwithrune.org/create-a-twitter-bot-in-python-an-easy-step-through-guide/ | CC-MAIN-2021-25 | en | refinedweb |
method enforces use on the UI thread with UIApplication.EnsureUIThread (), but why? Whats the problem with constructing one in a background thread exactly?
Full definition below:
[Export ("layoutAttributesForCellWithIndexPath:"), CompilerGenerated]
public static UICollectionViewLayoutAttributes CreateForCell (NSIndexPath indexPath)
{
UIApplication.EnsureUIThread ();
if (indexPath == null)
{
throw new ArgumentNullException ("indexPath");
}
return (UICollectionViewLayoutAttributes)Runtime.GetNSObject (Messaging.IntPtr_objc_msgSend_IntPtr (UICollectionViewLayoutAttributes.class_ptr, UICollectionViewLayoutAttributes.selLayoutAttributesForCellWithIndexPath_, indexPath.Handle));
}
WIth a few exceptions UIKit code is not thread-safe.
As such all UIKit.* methods are, by default, calling EnsureUIThread to avoid running into hard to find and near-to-impossible to duplicate bugs caused by accessing UIKit structures from several threads.
If you find some Apple documentation about a specific API being thread-safe let us know and we'll update ou bindings to match this.
More details @ including instruction on how you can, at your own risk, turn the feature completely or partially off. | https://bugzilla.xamarin.com/11/11049/bug.html | CC-MAIN-2021-25 | en | refinedweb |
’ve been working with Task.Factory.FromAsync() methods and have been experiencing severe memory leakage. I’ve used the profiler and it shows that a lot of objects just seem to be hanging around after use:
Heap shot 140 at 98.591 secs: size: 220177584, object count: 2803125, class count: 98, roots: 666
Bytes Count Average Class name
25049168 142325 175 System.Threading.Tasks.Task<System.Int32> (bytes: +398816, count: +2266)
1 root references (1 pinning)
142324 references from: System.Threading.Tasks.Task
142305 references from: System.Threading.Tasks.TaskCompletionSource<System.Int32>
98309 references from: task_test.Task3Test.<Run>c__AnonStorey1
25049024 142324 176 System.Threading.Tasks.Task (bytes: +398816, count: +2266)
142304 references from: System.Threading.Tasks.TaskContinuation
17078880 142324 120 System.Action<System.Threading.Tasks.Task<System.Int32>> (bytes: +271920, count: +2266)
142324 references from: System.Threading.Tasks.TaskActionInvoker.ActionTaskInvoke<System.Int32>
17076600 142305 120 System.Runtime.Remoting.Messaging.MonoMethodMessage (bytes: +271680, count: +2264)
1 root references (1 pinning)
142304 references from: System.MonoAsyncCall
17076584 142305 119 System.AsyncCallback (bytes: +271920, count: +2266)
1 root references (1 pinning)
142304 references from: System.MonoAsyncCall
17076584 142305 119 System.Func<System.Int32> (bytes: +271920, count: +2266)
1 root references (1 pinning)
142305 references from: System.Func<System.IAsyncResult,System.Int32>
142304 references from: System.Runtime.Remoting.Messaging.AsyncResult
1 references from: System.Func<System.AsyncCallback,System.Object,System.IAsyncResult>
17076584 142305 119 System.Func<System.IAsyncResult,System.Int32> (bytes: +271920, count: +2266)
1 root references (1 pinning)
142305 references from: System.Threading.Tasks.TaskFactory.<FromAsyncBeginEnd>c__AnonStorey3A<System.Int32>
17076480 142304 120 System.Runtime.Remoting.Messaging.AsyncResult (bytes: +271800, count: +2265)
98461 references from: System.Object[]
I’m trying to work out what type of things may/may not be occurring that prevent the gc from recognizing the object is no longer in use. FromAsync returns a Task object which is obtained from TaskCompletionSource which has a class variable “source” that holds the value of the Task it in turn gets from the new Task invocation.
Here's the test case. It also includes a case using StartNew() where there is no explosion in memory use. The initial Test3Task below did not use the ContinueWith but to see if it was something we weren't cleaning up we put it in (to no effect). [And no, the listening variable used below is not used - there were plans to make the test more intelligent but a do forever was just as good.]
using System;
using System.Threading;
using System.Threading.Tasks;
namespace task_test
{
class MainClass
{
public static void Main (string[] args)
{
// Test3 - Leaky
var t = new Task3Test();
// Test4 - Doesn't leak
// var t = new Task4Test();
t.Run();
}
}
public class BaseTask
{
public int GetRandomInt(int top)
{
Random random = new Random();
return random.Next(1,top);
}
}
public class FibArgs
{
public byte[] data;
public int n;
}
public class Fib
{
public int Calculate(FibArgs args)
{
int n = args.n;
int a = 0;
int b = 1;
// In N steps compute Fibonacci sequence iteratively.
for (int i = 0; i < n; i++)
{
int temp = a;
a = b;
b = temp + b;
}
Console.WriteLine("ThreadId: {2}, fib({0}) = {1}", n, a, Thread.CurrentThread.GetHashCode());
return a;
}
}
public class Task3Test : BaseTask
{
public void Run()
{
bool listening = true;
long i = 0;
while (listening)
{
i++;
Func<int> fun = () => {
int n = GetRandomInt(100);
Fib f = new Fib();
FibArgs args = new FibArgs();
args.n = n;
return f.Calculate(args);
};
var t = Task<int>.Factory.FromAsync(fun.BeginInvoke, fun.EndInvoke, null);
t.ContinueWith( x => {
if (x.IsCompleted) {
x.Dispose();
x = null;
}
}
);
}
}
}
public class Task4Test : BaseTask
{
public void Run()
{
bool listening = true;
long i = 0;
while (listening)
{
int n = GetRandomInt(100);
Fib f = new Fib();
FibArgs args = new FibArgs();
args.n = n;
Task.Factory.StartNew(() => f.Calculate(args), TaskCreationOptions.LongRunning)
.ContinueWith(x => {
if(x.IsFaulted)
{
Console.WriteLine("OOPS, error!!!");
x.Exception.Handle(_ => true); //just an example, you'll want to handle properly
}
else if(x.IsCompleted)
{
Console.WriteLine("Cleaning up task {0}", x.Id);
x.Dispose();
}
}
);
}
}
}
}
These symptoms only seem to affect x86_64 as when I run on s390x I have no problems with Boehm or sgen. However, if I leave the WriteLine in the Calculate method I do see exponential memory consumption on s390x as well (the heapshot reports are very very different hen running with and without that statement). Removal of that statement from x86_64 has no effect - it grows regardless.
The symptoms that the above test case exhibits are also experienced on an application that only creates a few tasks per second.
Jeremie, could you eyeball this?
Any info you can provide would be useful. I am asking Martin to look at this.
if (i > 1000000)
listening = false;
}
Thread.Sleep (2000);
while(true)
{
GC.Collect (10, GCCollectionMode.Forced);
Thread.Sleep (1000);
}
Thread.Sleep (-1);
Added this at the end of the while block in Test3. As I can see it consumes about 1.3 Gb and after a while releases it going back to 44MB RES. So there is no memory leak, runtime just can't keep up with the speed you are creating new tasks, so they stay scheduled forever.
The weird thing is that a single task consumes about 1MB RAM.
Oh, I've miscalculated. It's not 1MB per task, it's 1KB per task which is acceptable amount.
This is not a GC issue but a TaskScheduler issue.
Tasks can be created at a faster pace than they are completed.
I wrote a small program than can shows us the problem:
Whereas .NET has rarely more than 3000 running tasks, Mono task count diverges.
It is worse when the tasks take more time to complete (eg: doing Console.WriteLine)
This is not a bug. The same behavior can be observed on .net.
The issue is that you're queueing tasks faster than the system can process them. | https://bugzilla.xamarin.com/12/12236/bug.html | CC-MAIN-2021-25 | en | refinedweb |
PivotCustomDrawCellBaseEventArgs Class
Provides data for custom painting events invoked for particular data cells.
Namespace: DevExpress.XtraPivotGrid
Assembly: DevExpress.XtraPivotGrid.v21.1.dll
Declaration
public class PivotCustomDrawCellBaseEventArgs : PivotCellEventArgs
Public Class PivotCustomDrawCellBaseEventArgs Inherits PivotCellEventArgs
Remarks
The PivotCustomDrawCellBaseEventArgs class serves as a base for classes that provide data for custom painting events that fire for particular cells. The PivotCustomDrawCellBaseEventArgs class exposes properties and methods that allow you to identify the cell value, position (column and row), etc.
NOTE
You cannot use the PivotCustomDrawCellBaseEventDrawCellBaseEventArgs.ThreadSafeArgs property to access event data. To learn more, see Asynchronous Mode. | https://docs.devexpress.com/WindowsForms/DevExpress.XtraPivotGrid.PivotCustomDrawCellBaseEventArgs | CC-MAIN-2021-25 | en | refinedweb |
7. June 2011, 16:15 17:15
Empa, Dübendorf, Theodor-Erismann-Auditorium, VE102
Organic light emitting diodes are commercially successful products, and the development of organic semiconductors for electronic, photovoltaic and thermoelectric applications has made significant progress too. At the same time numerous open questions and scientific challenges of fundamental nature capture the researchers' fascination. Some recent examples will be discussed.
The language of the presentation is English.
Free entrance, guests are welcome
Vortragssprache: Englisch
import event in your outlook calendar (follows) | https://events.empa.ch/Veranstaltungsarchiv/2011/event.php?vnr=55f9fee-12a | CC-MAIN-2021-25 | en | refinedweb |
I made a method that would do alpha beta pruning on a tree. However there is a time limit on a single move of 20 seconds. I am to play against a player and his opponent method. I tried going down 6 levels deep but I would lose. I tried going down 7 levels and I was taking too long.
Is there a way to make my code run faster? If possible can you also check my logic for the alpha-beta pruning part and see if I missed anything causing it to run longer that it should. I have included
iterative deepening method that calls alpga-beta pruning method.
I want it to run faster so I can go deeper down the tree so I have an advantage and win the game. The game is won if any of the opponent queen (there are 2 queens for each player) has no place left to move on a block of 7X7
iterative deepening:
def iterative_deepening_alpha_beta(self, game, time_left, depth, alpha=float("-inf"), beta=float("inf"), maximizing_player=True): current_depth = 0 best_move_q_1 = (-1, -1) best_move_q_2 = (-2, -3) best_score = float("-inf") timeout = False while not timeout: try: # self.currentDepth = current_depth m_q_1, m_q_2, score = self.alphabeta( game, time_left, current_depth) best_score = score best_move_q_1 = m_q_1 best_move_q_2 = m_q_2 current_depth += 1 except Timeout: timeout = True self.moveCache = {} self.sorted_cache = {} self.sorted_list_m1 = [] self.sorted_list_m2 = [] # self.currentDepth = 1 print('Iterative deepening ... Timeout approaching', current_depth) break print(best_move_q_1, best_move_q_2, best_score, ' <<<CS>>>') return best_move_q_1, best_move_q_2, best_score
alpha-beta pruning:
def alphabeta(self, game, time_left, depth, alpha=float("-inf"), beta=float("inf"), maximizing_player=True): """Implementation of the alphabeta algorithm Args: game (Board): A board and game state. time_left (function): Used to determine time left before timeout depth: Used to track how deep you are in the search tree alpha (float): Alpha value for pruning beta (float): Beta value for pruning maximizing_player (bool): True if maximizing player is active. Returns: (tuple,tuple, int): best_move_queen1,best_move_queen2, val if depth == 0 or self.is_terminal_state(game): """ if time_left() < 24: raise Timeout moves_1 = game.get_legal_moves_of_queen1() moves_2 = game.get_legal_moves_of_queen2() if depth == 0 or self.is_terminal_state(game): utility_val = CustomEvalFn().score(game, maximizing_player) return None, None, utility_val overall_best_move_q_1 = (-1, -1) overall_best_move_q_2 = (-2, -3) highest_move_diff = float( "-inf") if maximizing_player else float("inf") from_cache = False if depth == 1 and depth in self.moveCache: moves_1 = [] moves_2 = [] if depth in self.sorted_cache: self.moveCache[depth] = self.sorted_cache[depth] else: self.moveCache[depth] = sorted( self.moveCache[depth], key=lambda k: k['score'], reverse=True) self.sorted_cache[depth] = self.moveCache[depth] if len(self.sorted_list_m1) != 0 and len(self.sorted_list_m2) != 0: moves_1 = self.sorted_list_m1 moves_2 = self.sorted_list_m2 from_cache = True else: for m in self.moveCache[depth]: moves_1.append(m["m1"]) moves_2.append(m["m2"]) self.sorted_list_m1 = moves_1 self.sorted_list_m2 = moves_2 if not from_cache: random.shuffle(moves_1) random.shuffle(moves_2) for move_q_1 in moves_1: for move_q_2 in moves_2: if(move_q_1 != move_q_2): possible_game_state = game.forecast_move( move_q_1, move_q_2) m1, m2, score = self.alphabeta( possible_game_state, time_left, depth - 1, alpha, beta, not maximizing_player) if depth == 1: if depth in self.moveCache: self.moveCache[depth].append({ 'score': score, "m1": move_q_1, "m2": move_q_2 }) else: self.moveCache[depth] = [{ 'score': score, "m1": move_q_1, "m2": move_q_2 }] if maximizing_player: if score > highest_move_diff: highest_move_diff = score overall_best_move_q_1 = move_q_1 overall_best_move_q_2 = move_q_2 if highest_move_diff >= beta: return move_q_1, move_q_2, score alpha = max(alpha, score) else: if score < highest_move_diff: highest_move_diff = score overall_best_move_q_1 = move_q_1 overall_best_move_q_2 = move_q_2 if highest_move_diff <= alpha: return move_q_1, move_q_2, score beta = min(beta, score) return overall_best_move_q_1, overall_best_move_q_2, highest_move! | https://extraproxies.com/optimisations-in-alpha-beta-pruning/ | CC-MAIN-2021-25 | en | refinedweb |
I currently am able to include an arbitrary binary blob of data in my code using the PROGMEM keyword, and have code working that can access that data. However, manipulating the data is cumbersome at best. I’m relatively new to C and have little experience with the C preprocessor itself, but I do understand the theory behind what it does. I’m just not sure if it’s possible to accomplish what I’m hoping for.
I’m using HxD to generate/edit raw binary data, and then every time I want to change the data I have to use HxD’s handy “copy as C” function (which forms the data as a long string of “0x” characters, adds the commas, line-breaks it, etc), paste it into the Arduino editor, then remove the variable declaration and closing “};” that HxD includes when you use “copy as C”.
What I’d like to be able to do if possible is simply have the preprocessor include the actual raw binary file directly.
I’m not sure if it’s possible, but ideally I’d like to be able to do something like: static const byte blob PROGMEN =
#include <myBinary.bin>
The key of course is that “myBinary.bin” is actually the raw binary data, nut the string-based representation C requires. So the preprocessor would need to also convert the binary file into the appropriate C++ syntax for a static byte array.
The ideal workflow would be to just edit the .bin file directly with HxD or whatever, then rebuild/reload the sketch and have the new data present. The step of copying, pasting, removing the extra lines, etc. gets annoying after many iterations.
Is this possible? | https://forum.arduino.cc/t/include-an-external-binary-file-as-a-progmem-variable/665246 | CC-MAIN-2021-25 | en | refinedweb |
I try to create app in rails, which would be regular web application and also an api for mobile application written in react native. I take advantage of "devise token auth" gem along wiht "devise".
I added gems to Gemfile and ran bundle. Next I ran
rails g devise_token_auth:install User auth
Rails.application.routes.draw do
devise_for :users
namespace :api do
scope :v1 do
mount_devise_token_auth_for 'User', at: 'auth'
end
end
end
curl -H "Content-Type: application/json" -X POST -d '{"email":"test123@gmail.com","password":"aaaaaaaa"}'
The CSRF token authenticity check is originating from your Rails Application Controller.
# Prevent CSRF attacks by raising an exception. # For APIs, you may want to use :null_session instead. protect_from_forgery with: :exception
There are a variety of ways you can handle the situation, but from the rails guide, they suggest adding a header:
By default, Rails includes jQuery and an unobtrusive scripting adapter for jQuery, which adds a header called X-CSRF-Token on every non-GET Ajax call made by jQuery with the security token. tag printed by <%= csrf_meta_tags %> in your application view. | https://codedump.io/share/M6OXJADqIq6w/1/devise-token-auth-can39t-verify-csrf-token-authenticity | CC-MAIN-2021-25 | en | refinedweb |
Programming Tutorial:Implementing a Signal Processing Filter
This tutorial shows you how to derive a new filter class from GenericFilter, how to check preconditions, initialize your filter, and process data. It will also show you how to visualize the output signal of the filter and present it to the operator user.
Contents
A simple low pass filter
We want to implement a low pass filter with a time constant
(given
in units
of a sample's duration), a sequence
as input and a sequence
as output (where
is a sample index proportional to
time), and obeying
The filter skeleton
The resulting filter class is to be called LPFilter. We create two new files, LPFilter.h, and LPFilter.cpp, and put a minimal filter declaration into LPFilter.h:
#ifndef LP_FILTER_H #define LP_FILTER_H #include "GenericFilter.h" class LPFilter : public GenericFilter { public: LPFilter(); ~LPFilter(); void Preflight( const SignalProperties&, SignalProperties& ) const; void Initialize( const SignalProperties&, const SignalProperties& ); void Process( const GenericSignal&, GenericSignal& ); }; #endif // LP_FILTER_H
Into LPFilter.cpp we put the lines
#include "PCHIncludes.h" // Make the compiler's Pre-Compiled Headers feature happy #pragma hdrstop #include "LPFilter.h" #include "MeasurementUnits.h" #include "BCIError.h" #include <vector> #include <cmath> using namespace std;
The Process function
When implementing a filter, a good strategy is to begin with the Process function, and to consider the remaining class member functions mere helpers, mainly determined by the code of Process. So we convert the filter prescription into the Process code, introducing member variables ad hoc , ignoring possible error conditions, and postponing efficiency considerations:
void LPFilter::Process( const GenericSignal& Input, GenericSignal& Output ) { // This implements the prescription's second line for all channels: for( int channel = 0; channel < Input.Channels(); ++channel ) { for( int sample = 0; sample < Input.Elements(); ++sample ) { mPreviousOutput[ channel ] *= mDecayFactor; mPreviousOutput[ channel ] += Input( channel, sample ) * ( 1.0 - mDecayFactor ); Output( channel, sample ) = mPreviousOutput[ channel ]; } } }
The Initialize member function
As you will notice when comparing Process to the equations above, we introduced member variables representing these sub-expressions:
We introduce these members into the class declaration, adding the following lines after the Process declaration:
private: double mDecayFactor; std::vector<double> mPreviousOutput;
The next step is to initialize these member variables, introducing filter parameters as needed. This is done in the Initialize member function -- we write it down without considering possible error conditions:
void LPFilter::Initialize( const SignalProperties& Input, const SignalProperties& Output ) { // This will initialize all elements with 0, // implementing the first line of the filter prescription: mPreviousOutput.clear(); mPreviousOutput.resize( Input.Channels(), 0 ); double timeConstant = Parameter( "LPTimeConstant" ); mDecayFactor = ::exp( -1.0 / timeConstant ); }
Now this version is quite inconvenient for a user going to configure our filter -- the time constant is given in units of a sample's duration, resulting in a need to re-configure each time the sampling rate is changed. A better idea is to let the user choose whether to give the time constant in seconds or in sample blocks. To achieve this, there is a utility class MeasurementUnits that has a member ReadAsTime(), returning values in units of sample blocks which is the natural time unit in a BCI2000 system. Writing a number followed by an "s" will allow the user to specify a time value in seconds; writing a number without the "s" will be interpreted as sample blocks. Thus, our user friendly version of Initialize reads
void LPFilter::Initialize( const SignalProperties&, const SignalProperties& ) { mPreviousOutput.clear(); mPreviousOutput.resize( Input.Channels(), 0 ); // Get the time constant in units of a sample block's duration: double timeConstant = MeasurementUnits::ReadAsTime( Parameter( "LPTimeConstant" ) ); // Convert it into units of a sample's duration: timeConstant *= Parameter( "SampleBlockSize" ); mDecayFactor = ::exp( -1.0 / timeConstant ); }
The Preflight function
Up to now, we have not considered any error conditions that might occur during execution of our filter code. Scanning through the Process and Initialize code, we identify a number of implicit assumptions:
- The time constant is not zero -- otherwise, a division by zero will occur.
- The time constant is not negative -- otherwise, the output signal is no longer guaranteed to be finite, and a numeric overflow may occur.
- The output signal is assumed to hold at least as much data as the input signal contains.
The first two assumptions may be violated if a user enters an illegal value into the LPTimeConstant parameter; we need to make sure that an error is reported, and no code is executed that depends on these two assumptions. For the last assumption, we request an appropriate output signal from the Preflight function. Thus, the Preflight code reads
void LPFilter::Preflight( const SignalProperties& Input, SignalProperties& Output ) const { double LPTimeConstant = MeasurementUnits::ReadAsTime( Parameter( "LPTimeConstant" ) ); LPTimeConstant *= Parameter( "SampleBlockSize" ); // The PreflightCondition macro will automatically generate an error // message if its argument evaluates to false. // However, we need to make sure that its argument is user-readable // -- this is why we chose a variable name that matches the parameter // name. PreflightCondition( LPTimeConstant > 0 ); // Alternatively, we might write: if( LPTimeConstant <= 0 ) bcierr << "The LPTimeConstant parameter must be greater 0" << endl; // Request output signal properties: Output = Input; }
Constructor and destructor
Because we do not explicitly acquire resources, nor perform asynchronous operations, there is nothing to be done inside the LPFilter destructor . Our constructor will contain initializers for the members we declared, and a BCI2000 parameter definition for LPTimeConstant. Specifying the empty string for both low and high range tells the framework not to perform an automatic range check on that parameter.
LPFilter::LPFilter() : mDecayFactor( 0 ), mPreviousOutput( 0 ) { BEGIN_PARAMETER_DEFINITIONS "Filtering float LPTimeConstant= 16s" " 16s % % // time constant for the low pass filter in blocks or seconds", END_PARAMETER_DEFINITIONS } LPFilter::~LPFilter() { }
Filter instantiation
To have our filter instantiated in a signal processing module, we add a line containing a Filter statement to the module's PipeDefinition.cpp. This statement expects a string parameter which is used to determine the filter's position in the filter chain. If we want to use the filter in the AR Signal Processing module, and place it after the SpatialFilter, we add
#include "LPFilter.h" ... Filter( LPFilter, 2.B1 );
to the file SignalProcessing/AR/PipeDefinition.cpp. Now, if we compile and link the AR Signal Processing module, we get an "unresolved external" linker error that reminds us to add our LPFilter.cpp to that module's project.
Visualizing filter output
Once our filter has been added to the filter chain, the BCI2000 framework will automatically create a parameter VisualizeLPFilter that is accessible under Visualize->Processing Stages in the operator module's configuration dialog. This parameter allows the user to view the LPfilter's output signal in a visualization window. In most cases, this visualization approach is sufficient. For the sake of this tutorial, however, we will disable automatic visualization, and implement our own signal visualization.
To disable automatic visualization, we override the GenericFilter::AllowsVisualization() member function to return false. In addition, to present the LPFilter's output signal in an operator window, we introduce a member of type GenericVisualization into our filter class, adding
#include "GenericVisualization.h" ... class LPFilter : public GenericFilter { public: ... virtual bool AllowsVisualization() const { return false; } private: ... GenericVisualization mSignalVis; }; ...
GenericVisualization's constructor takes a string-valued visualization ID as a parameter; we need to get a unique ID in order to get our data routed to the correct operator window. Given the circumstances, a string consisting of the letters "LPFLT" appears unique enough, so we change the LPFilter constructor to read
LPFilter::LPFilter() : mDecayFactor( 0 ), mPreviousOutput( 0 ), mSignalVis( "LPFLT" ) { BEGIN_PARAMETER_DEFINITIONS "Filtering float LPTimeConstant= 16s" " 16s % % // time constant for the low pass filter in blocks or seconds", "Visualize int VisualizeLowPass= 1" " 1 0 1 // visualize low pass output signal (0=no, 1=yes)", END_PARAMETER_DEFINITIONS }
In Initialize, we add
mSignalVis.Send( CfgID::WindowTitle, "Low Pass" ); mSignalVis.Send( CfgID::GraphType, CfgID::Polyline ); mSignalVis.Send( CfgID::NumSamples, 2 * Parameter( "SamplingRate" ) );
Finally, to update the display in regular intervals, we add the following at the end of Process:
if( Parameter( "VisualizeLowPass" ) == 1 ) mSignalVis.Send( Output );
We might also send data to the already existing task log memo window, adding another member
GenericVisualization mTaskLogVis;
initializing it with
LPFilter::LPFilter() : ... mTaskLogVis( SourceID::TaskLog ) { ... }
and, from inside Process, writing some text to it as in
if( output( 0, 0 ) > 10 ) { mTaskLogVis << "LPFilter: (0,0) entry of output exceeds 10 and is " << output( 0, 0 ) << endl; } | https://www.bci2000.org/wiki/index.php/Programming_Tutorial:Implementing_a_Signal_Processing_Filter | CC-MAIN-2021-25 | en | refinedweb |
Unable to return a tuple when mocking a function
python unittest mock return tuple
python mock variable in function
python mock exception
python mock tuple return values
python mock reset
python mock function return value
pytest-mock
I'm trying to get comfortable with mocking in Python and I'm stumbling while trying to mock the following function.
helpers.py
from path import Path def sanitize_line_ending(filename): """ Converts the line endings of the file to the line endings of the current system. """ input_path = Path(filename) with input_path.in_place() as (reader, writer): for line in reader: writer.write(line)
test_helpers.py
@mock.patch('downloader.helpers.Path') def test_sanitize_line_endings(self, mock_path): mock_path.in_place.return_value = (1,2) helpers.sanitize_line_ending('varun.txt')
However I constantly get the following error:
ValueError: need more than 0 values to unpack
Given that I've set the return value to be a tuple, I don't understand why Python is unable to unpack it.
I then changed my code to have
test_sanitize_line_endings store print return value of
input_path.in_place() and I can see that the return value is a
MagicMock object. Specifically it prints something like
<MagicMock name='Path().in_place()' id='13023525345'>
If I understand things correctly, what I want is to have
mock_path be the MagicMock which has an in_place function that returns a tuple.
What am I doing wrong, and how can I go about correctly replacing the return value of
input_path.in_place() in
sanitize_line_ending.
After much head scratching and attending a meetup I finally came across this blog post that finally solved my issue.
The crux of the issue is that I was not mocking the correct value. Since I want to replace the result of a function call the code I needed to have written was:
@mock.patch('downloader.helpers.Path') def test_sanitize_line_endings(self, mock_path): mock_path.return_value.in_place.return_value = (1,2) helpers.sanitize_line_ending('varun.txt')
This correctly results in the function being able to unpack the tuple, it then immediately fails since like @didi2002 mentioned this isn't a context manager. However I was focussed on getting the unpacking to work, and after I was able to achieve that replaced the tuple with a construct with the appropriate methods.
Having a mock return a tuple (for example mocking out os.walk , Having a mock return a tuple (for example mocking out os.walk()) #86. Closed. dhommel Ran 1 test in 0.000s. FAILED (errors=1) The outer parens are for the function call obviously so let's ignore them. Next is the list, Unable to return a tuple from a constexpr function visual studio 2017 version 15.8 windows 10.0 Jason Turner reported Sep 17, 2018 at 01:24 PM
I struggled with this error
ValueError: need more than 0 values to unpack for several hours. But the problem was not in the way I set the mock up (the correct way was described by @varun-madiath here).
It was in using
@mock.patch() decorator:
@mock.patch('pika.BlockingConnection')
@mock.patch('os.path.isfile')
@mock.patch('subprocess.Popen')
def test_foo(self, **mocked_connection**, mock_isfile, **mock_popen**):
The order of parameters must be reversed! See python docs.
26.4. unittest.mock — mock object library, Mock and MagicMock objects create all attributes and methods as you access or methods on the mock that don't exist on the spec will fail with an AttributeError . If side_effect is an iterable then each call to the mock will return the next value This will be in the form of a tuple: the first member is any ordered arguments Return multiple items from a mocked function with Python's mock. - gist:1174019
To be valid, the return value of input_path.in_place() must be an object that has an __enter__ method that returns a tuple.
This is a (very verbose) example:
def test(): context = MagicMock() context.__enter__.return_value = (1, 2) func = MagicMock() func.in_place.return_value = context path_mock = MagicMock() path_mock.return_value = func with patch("path.Path", path_mock): sanitize_line_ending("test.txt")
When and how to use Python mock, We cannot see print statements of cow and dog anymore. The default return value of calling a Mock function is another Mock instance. I have a function (foo) which calls another function (bar). If invoking bar() raises an HttpError, I want to handle it specially if the status code is 404, otherwise re-raise. I am trying to write some unit tests around this foo function, mocking out the call to bar().
try this for return tuple from mocked function:
ret = (1, 2) type(mock_path).return_value = PropertyMock(return_value = ret)
Mocking in Python ⋆ Mark McDonnell, In this example, the side_effect attribute used the create_response() method to build a dynamic For non-trivial tests, it's important to be sure that the test can actually fail. We used the following: return Mock( status = 201, getheaders The value of response_headers is a sequence of two-tuples that has (key, value) pairs..
Modern Python Cookbook, DELETED def _copy(value): if type(value) in (dict, list, tuple, set): return create magic methods on the # class without stomping on other mocks new = type(cls. fail early? return obj class CallableMixin(Base): def __init__(self, spec=None,’.
unittest.mock, A system Jarvis requested for a specific site "gradeup.co", in return to this request it got the error as "LOOKUP FAILED". S3: Count(*) also counts the tuples The function F will be high only when atleast two of the input variables are set to. Note 2: A Tuple has advantages. It is a reference and can be reused.
GATE 2021: CS & IT Engineering (12 Mock Tests + 20 Subject-wise , If no return value is specified, functional mocks return null. By default, mocks expect one or more calls (i.e., only fail if the function or method is never called). More ways of setting return values. This covers the very basics of setting a return value, but NSubstitute can do much more. Read on for other approachs, including matching specific arguments, ignoring arguments, using functions to calculate return values and returning multiple results.
- type(mock_path).return_value = PropertyMock(return_value = (1, 2)) | https://thetopsites.net/article/53665653.shtml | CC-MAIN-2021-25 | en | refinedweb |
Abstract base class for camera nodes. More...
#include <Inventor/nodes/SoCamera.h>
Abstract base class for camera nodes.
This is the abstract base class for all camera nodes. It defines the common methods and fields that all cameras have. Cameras are used to view a scene. When a camera is encountered during rendering, it sets the projection and viewing matrices and viewport appropriately; it does not draw geometry. Cameras should be placed before any shape nodes or light nodes in a scene graph; otherwise, those shapes or lights cannot be rendered properly. Cameras are affected by the current transformation, so you can position a camera by placing a transformation node before it in the scene graph. The default position and orientation of a camera is at (0,0,1) looking along the negative z-axis.
You can also use a node kit to create a camera; see the reference page for SoCameraKit.
Useful algorithms for manipulating a camera are provided in the SoCameraInteractor class.
Compute the current view vector or up vector.
SoCamera* camera . . . const SbRotation& orientation = camera->orientation.getValue(); SbVec3f upVec; orientation.multVec( SbVec3f(0,1,0), upVec ); SbVec3f vwVec; orientation.multVec( SbVec3f(0,0,-1), vwVec );
Shortcut to get the current view vector or up vector.
Compute the current focal point.
SoOrthographicCamera, SoPerspectiveCamera, SoCameraKit, SoCameraInteractor
Stereo mode.
Viewport mapping.
Allows the camera to render in stereo.
Default value is TRUE.
Reimplemented in SoStereoCamera.
Queries the parallax balance.
Returns the type identifier for this class.
Reimplemented from SoNode.
Reimplemented in SoOrthographicCamera, SoPerspectiveCamera, and SoStereoCamera.
Queries the stereo absolute adjustment state.
Queries the stereo offset.
Queries the stereo mode.
Returns the type identifier for this specific instance.
Reimplemented from SoNode.
Reimplemented in SoOrthographicCamera, SoPerspectiveCamera, and SoStereoCamera.
Returns the viewport region this camera would use to render into the given viewport region, accounting for cropping.
Computes a view volume from the given parameters.
Implemented in SoOrthographicCamera, and SoPerspectiveCamera.
Returns a view volume object, based on the camera's viewing parameters.
This object can be used, for example, to get the view and projection matrices, to project 2D screen coordinates into 3D space and to project 3D coordinates into screen space.
If the useAspectRatio parameter is 0.0 (the default), the camera uses the current value of the aspectRatio field to compute the view volume.
NOTE: In ADJUST_CAMERA mode (the default), the view volume returned when useAspectRatio = 0, is not (in general) the actual view volume used for rendering. Using this view volume to project points will not (in general) produce the correct results.
This is because, in ADJUST_CAMERA mode, Inventor automatically modifies the view volume to match the aspect ratio of the current viewport. This avoids the distortion that would be caused by "stretching" the view volume when it is mapped into the viewport. However the view volume values are not changed, only the values passed to OpenGL. In order to get the modified values (i.e., the actual view volume used for rendering) you must pass the actual viewport aspect ratio to getViewVolume. You can get the current viewport from the renderArea or viewer object that contains the Open Inventor window.
Also note that in ADJUST_CAMERA mode, when the viewport aspect ratio is less than 1, Open Inventor automatically scales the actual rendering view volume by the inverse of the aspect ratio (i.e. 1/aspect). The getViewVolume method does not automatically apply this adjustment. So a correct query of the actual rendering view volume can be done like this:
// Given a viewer object, get the actual rendering view volume float aspect = viewer->getViewportRegion().getViewportAspectRatio(); SoCamera* camera = viewer->getCamera(); SbViewVolume viewVol = camera->getViewVolume( aspect ); if (aspect < 1) viewVol.scale( 1 / aspect );
Implemented in SoOrthographicCamera, and SoPerspectiveCamera.
Returns TRUE if the stereo balance adjustement is defined as a fraction of the camera near distance.
Sets the orientation of the camera so that it points toward the given target point while keeping the "up" direction of the camera parallel to the positive y-axis.
If this is not possible, it uses the positive z-axis as "up."
Scales the height of the camera.
Perspective cameras scale their heightAngle fields, and orthographic cameras scale their height fields.
Implemented in SoOrthographicCamera, and SoPerspectiveCamera.
Sets the stereo balance (the position of the zero parallax plane) and specifies whether the balance value is defined as a fraction of the camera near distance.
Note: Since the projection matrix always depends on the camera's near plane, in some cases it may be necessary to detect changes to the camera near plane and adjust by setting a new stereo balance value. Open Inventor will make these adjustments automatically if the nearFrac parameter is set to TRUE. In this case the stereo balance value is defined as a fraction of the camera near distance.
Default balance is 1.0. The default can be set using the OIV_STEREO_BALANCE environment variable. Default nearFrac is FALSE. The default can be set using the OIV_STEREO_BALANCE_NEAR_FRAC environment variable.
Reimplemented in SoStereoCamera.
Specifies if stereo adjustments are absolute.
FALSE by default.
The default non-absolute mode allows the stereo settings to be valid over a range of different view volume settings. If you chose absolute mode, you are responsible for modifying the stereo settings (if necessary) when the view volume changes.
When absolute mode is TRUE, stereo offset and balance are used as shown in the following pseudo-code for the right eye view:
StereoCameraOffset = getStereoAdjustment(); FrustumAsymmetry = getBalanceAdjustment(); glTranslated (-StereoCameraOffset, 0, 0); glFrustum (FrustumLeft + FrustumAsymmetry, FrustumRight + FrustumAsymmetry, FrustumBottom, FrustumTop, NearClipDistance, FarClipDistance);
The left eye view is symmetric.
When absolute mode is FALSE, stereo offset and balance are used as shown in the following pseudo-code for the right eye view:
Xrange is right minus left (i.e., first two arguments of glFrustum) and multiply that difference by the ratio of the distance to the desired plane of zero parallax to the near clipping plane distance.
StereoCameraOffset = Xrange * 0.035 * getStereoAdjustment(); FrustumAsymmetry = -StereoCameraOffset * getBalanceAdjustment(); ZeroParallaxDistance = (NearClipDistance + FarClipDistance)/0.5; FrustumAsymmetry *= NearClipDistance / ZeroParallaxDistance; glTranslated (-StereoCameraOffset, 0, 0); glFrustum (FrustumLeft + FrustumAsymmetry, FrustumRight + FrustumAsymmetry, FrustumBottom, FrustumTop, NearClipDistance, FarClipDistance);
The left eye view is symmetric.
Not virtual pure for compatiblity reasons.
Reimplemented in SoStereoCamera.
Sets the stereo offset (the distance of each eye from the camera position).
The right eye is moved plus offset and the left eye is moved minus offset. Default is 0.7. The default can be set using OIV_STEREO_OFFSET environment variable.
Reimplemented in SoStereoCamera.
Sets the camera to view the region defined by the given bounding box.
The near and far clipping planes will be positioned the radius of the bounding sphere away from the bounding box's center.
See note about bounding boxes in the sceneRoot version of this method.
Sets the camera to view the scene defined by the given path.
The near and far clipping planes will be positioned slack bounding sphere radii away from the bounding box's center. A value of 1.0 will make the near and far clipping planes the tightest around the bounding sphere.
See note about bounding boxes in the sceneRoot version of this method.
Sets the camera to view the scene rooted by the given node.
The near and far clipping planes will be positioned slack bounding sphere radii away from the bounding box's center. A value of 1.0 will make the near and far clipping planes the tightest around the bounding sphere.
The node.
Warning:
The SoGetBoundingBoxAction will call ref() and unref() on the specified node. If the node's reference count before calling viewAll() is zero (the default), the call to unref() will cause the node to be destroyed.
The ratio of camera viewing width to height.
This value must be greater than 0.0. There are several standard camera aspect ratios defined in SoCamera.h.
The distance from the camera viewpoint to the far clipping plane.
The distance from the viewpoint to the point of focus.
This is typically ignored during rendering, but may be used by some viewers to define a point of interest.
The distance from the camera viewpoint to the near clipping plane.
The orientation of the camera viewpoint, defined as a rotation of the viewing direction from its default (0,0,-1) vector.
The location of the camera viewpoint.
Defines how to map the rendered image into the current viewport, when the aspect ratio of the camera differs from that of the viewport.
Use enum ViewportMapping. Default is ADJUST_CAMERA. | https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_camera.html | CC-MAIN-2021-25 | en | refinedweb |
Qt Positioning C++ Classes
The Positioning module provides positioning information via QML and C++ interfaces. More...
Classes
Detailed Description
To load the Qt Positioning module, add the following statement to your .qml files
import QtPositioning 5.2
For C++ projects include the header appropriate for the current use case, for example applications using routes may use
#include <QGeoCoordinate>
The .pro file should have the positioning keyword added
QT += positioning
See more in the Qt Positioning. | https://doc.qt.io/archives/qt-5.10/qtpositioning-module.html | CC-MAIN-2021-25 | en | refinedweb |
, C# code cannot be called directly from JavaScript however you can call Objective C objects directly from JavaScript. This is made available via the JSExport protocol. This protocol class is exposed to C# thus it should work since the class is available.
The way it should work is that you inherit this protocol class and decorate the method on the class with the Export attribute that you want exposed as an Objective C method. Something like this ...
namespace JavaScriptTest
{
public class JavascriptBridge : JSExport
{
public JavascriptBridge()
{
}
[Export("addContact:")]
public void AddContact(string data)
{
Console.WriteLine("kilroy is here!!!");
}
}
}
Then to establish the JavaScript binding you'd do something like this ...
// get JSContext from UIWebView instance
JSContext context = new JSContext();
// enable error logging
context.ExceptionHandler = new JSContextExceptionHandler((ctx, ex) => {
Console.WriteLine("WEB JS: {0}", ex);
});
// give JS a handle to the JavaScript bridge.
context[new NSString("myApp")] = JSValue.From(new JavascriptBridge(), context);
// JavaScript that calls into the JavaScript bridge method 'addContact'.
string addContactText = "if (myApp.addContact !== undefined) {myApp.addContact('hello world')}";
// Execute the script.
context.EvaluateScript(addContactText);
You can refer to this article from Big Nerd Ranch to see how JSContext is used in Objective C and download the example:
Here's a closely related enhancement (about a different limitation with the current bindings for JavaScriptCore) that contains a little bit of potentially useful additional discussion:.
The original title of this bug was a little confusing when compared to bug 17550, so I reworded it slightly. This bug (bug 23474) is about sending whole C# objects into JavaScript. The Objective-C mechanism for doing that requires defining _new Objective-C protocols_ that extend `JSExport`, and Xamarin does not currently have a way to do that from C#.
Looks like we can't register protocols dynamically for JavaScriptCore:
Fixed.
maccore/master: b0d8355c0931ad3059066cdb0d37305e0431040e
monotouch/master: ff538393745da3924c9346f3e9f769a7bc91efae
The only catch is that the static registrar must be used.
Here is a sample:
@Rolf, I have checked this issue with the help of Bug description and sample code attached in comment 5. I observed that when I run application I am getting "JS exception" as shown in Screencast.
Screencast:
After commenting "context.ExceptionHandler" I am getting following error :
2015-03-02 18:37:50.312 JavaScriptTest[5357:148816] ObjCRuntime.RuntimeException: Detected a protocol (JavaScriptTest.IMyJavaExporter) inheriting from the JSExport protocol while using the dynamic registrar. It is not possible to export protocols to JavaScriptCore dynamically; the static registrar must be used (add '--registrar:static to the additional mtouch arguments in the project's iOS Build options to select the static registrar).
Due to this exception, I am not able to move ahead. Please review the screencast and let us know what additional step we need to follow to verify this issue.
Ide Log:
Environment Info:
=== Xamarin Studio ===
Version 5.8 (build 1041)
Installation UUID: d6f15b80-470e-4e50-9aba-38a45c556680
Runtime:
Mono 3.12.0 ((detached/b8f5055)
GTK+ 2.24.23 (Raleigh theme)
Package version: 312000077
=== Apple Developer Tools ===
Xcode 6.2 (6770)
Build 6C121
=== Xamarin.iOS ===
Version: 8.9.1.397 (Enterprise Edition)
Hash: 016fd4e
Branch: master
Build date: 2015-02-26 19:15:06-0500
=== Xamarin.Android ===
Version: 5.0.99.288 (Enterprise Edition)
Android SDK: /Users/360logica/Desktop/Anddk.13.1.397 (Enterprise Edition)
=== Build Information ===
Release ID: 508001041
Git revision: fb49f5b17bced266244831c07c28f9fc93915cb1
Build date: 2015-02-27 12:58:53-05
Xamarin addins: e88a2212bb7156a42ad6728b82e99846c3eea817
===
@asimk, can you add "--registrar:static" to the additional mtouch arguments in the project's iOS Build options and try again?
I have also tried the sample gist verbatim with the "--registrar:static" option and get the same error:
JS exception: TypeError: undefined is not a function (evaluating 'obj.myFunc ()')
Some debugging I did leaves me to believe the MyJavaExporter instance is seen as an empty object in JS.
// typeof obj;
2015-03-02 15:25:49.763 JSXamarinTest[67469:16727641] object
// typeof obj.myFunc;
2015-03-02 15:25:49.764 JSXamarinTest[67469:16727641] undefined
// Object.keys(obj).length;
2015-03-02 15:25:49.764 JSXamarinTest[67469:16727641] 0
I had some linker errors that went away after changing the iOS build settings to "Link SDK assemblies only", but that did not change the result of evaluating the JS.
@Michael, this fix has not been released yet (it will be included in the upcoming Xamarin.iOS 8.10 release).
*** Bug 32048 has been marked as a duplicate of this bug. *** | https://bugzilla.xamarin.com/23/23474/bug.html | CC-MAIN-2021-25 | en | refinedweb |
How to randomize blocks in OpenSesame (a solution not a question)
Hello everyone,
I've had this problem where I wanted to randomize blocks of trials (i.e., randomize the presentation of the blocks, not the trials within a block). After some researching, I found Sebastian's youtube video on how to counterbalance blocks, and I found that extremely useful. However, if you would like to purely randomize blocks for each participant, I came up with a solution that I think anyone can do fairly simply. I am not a programmer, so if anyone sees something wrong or how to do this more simply, let me know!
1) The first step is to create an inline code at the beginning of the experiment (see the picture in step 2 to figure out where to place it. Mine is called "new_inline_script_3"):
from random import shuffle #create a list of the number of blocks that you need. Always start at 0. Here I have eight blocks. block_list = [0, 1, 2, 3, 4, 5, 6, 7] #randomize the list that you just made shuffle(block_list) #create variables that will be used to help randomize later in your MainSequence. The number of variables depends on the number of blocks you have in your experiment (and should correspond to how many are in the block_list). So if you have only 3 blocks, you would use b0, b1 and b2. b0 = block_list[0] b1 = block_list[1] b2 = block_list[2] b3 = block_list[3] b4 = block_list[4] b5 = block_list[5] b6 = block_list[6] b7 = block_list[7] #Make these variables part of the experiment exp.set('b0',b0) exp.set('b1',b1) exp.set('b2',b2) exp.set('b3',b3) exp.set('b4',b4) exp.set('b5',b5) exp.set('b6',b6) exp.set('b7',b7)
2) Next set up your experiment hierarchically to include a Main Loop and Main Sequence. Call them MainLoop and MainSequence (if you call them something else, you will have to change the 'count_MainSequence' variable in step 4 to whatever you call your "MainSequence" sequence). Add the blocks within the Main Sequence.
3) For the main loop, you need to have as many cycles as you have blocks. v3.1 is a bit different than previous versions in that you can't specify how many cycles you want through a drop down menu. But all you have to do is double click on cells in the first column to specify this.
4) Click on the your Main Sequence and then add the following to the "run if" arguments next to each block:
block1 --> =self.get('b0) == self.get('count_MainSequence')
block2 --> self.get('b1') == self.get('count_MainSequence')"
etc...
Notice that the "b" variables start with 0. So Block 1 should be associated with b0
You can do this by going into the code itself:
set flush_keyboard yes set description "Runs a number of items in sequence" run Block1 "=self.get('b0') == self.get('count_MainSequence')" run Block2 "=self.get('b1') == self.get('count_MainSequence')" run Block3 "=self.get('b2') == self.get('count_MainSequence')" run Block4 "=self.get('b3') == self.get('count_MainSequence')" run Block5 "=self.get('b4') == self.get('count_MainSequence')" run Block6 "=self.get('b5') == self.get('count_MainSequence')" run Block7 "=self.get('b6') == self.get('count_MainSequence')" run Block8 "=self.get('b7') == self.get('count_MainSequence')"
Or you can just click on the "run if" boxes and adding the code.
That's it! Each time you run a participant, they should get a random order of each block. I hope this helps someone, and there is an easier way of doing this, let me know!
I tried this but it keeps showing me there are syntax errors in both the set flush and the set description lines.
Have you encountered this before?
How do you define the blocks as variables? | https://forum.cogsci.nl/discussion/2580/how-to-randomize-blocks-in-opensesame-a-solution-not-a-question | CC-MAIN-2021-25 | en | refinedweb |
Feb 28 2020 06:14 AM
Hello Team,
I have download the "Microsoft-Win32-Content-Prep-Tool-master" from the link and i am trying to use the file: Intunewinapputil.exe in windows 10 1803 32 bit operating system. It gives me an error stating the file is not supported. Please help me in resolving this issue and how to launch the application
Feb 28 2020 07:03 PM
Feb 28 2020 07:03 PM
Mar 01 2020 10:46 PM - edited Mar 01 2020 10:46 PM
Mar 01 2020 10:46 PM - edited Mar 01 2020 10:46 PM
@Moe_Kinani Yes i am able to install it on 64bit and setup is running successfully. I have tested and converted a package to intunewin Format.
I have used 1803 64bit machine for installation
Mar 02 2020 12:50 AM
Mar 02 2020 12:50 AM
Hi @Rahul9588,
the tool is build as x64 tool and will not run on x86. Please use a x64 device to convert the packages.
best,
Oliver
Mar 02 2020 04:59 AM
Mar 02 2020 04:59 AM
@Oliver Kieselbach Thanks for the Quick response. We have a requirement for build Intunewin App for 32-Bit environment, is there way to build tool X86 version.
Mar 02 2020 05:02 AM
Mar 02 2020 05:02 AM
Hi @Rahul9588,
actually Microsoft did not released the source code, just the Tool itself. The intention for this is unclear but as long as we don't have the source code I don't see any chance to re-compile it.
best,
Oliver
Mar 02 2020 05:12 AM - edited Mar 02 2020 05:21 AM
Mar 02 2020 05:12 AM - edited Mar 02 2020 05:21 AM
@Oliver Kieselbach Thanks for the response, but i am working on 32-bit product and I would integrate the Intunewin on my product.
Mar 03 2020 05:11 AM
Mar 03 2020 05:11 AM
@Oliver Kieselbach Is there way to read the output message from tool during conversion process in c#?. Please refer the attached screenshot.
I have tried couple of sample examples but wasn't successful.
Thanks in advance.
Mar 03 2020 12:12 PM
Mar 03 2020 12:12 PM
Hi @Sangamesh_gouri,
this can be accomplished by redirecting StandardOut and StandardError This way you can easily start a process and get all output back to your .NET program.
That's what you are looking for:...
best,
Oliver
Mar 04 2020 04:49 AM
Mar 04 2020 04:49 AM
@Oliver Kieselbach Thanks for the link . i have tried example from msdn link provided by you. But still it is not updating the console output. Please find the attached screen shot for error message below is code i tried along few other samples i analyzed.
namespace ConsoleApplication1
{
using System;
using System.Diagnostics;
using System.Threading.Tasks;
class Program
{
static void Main(string[] args)
{
string source = @"D:\Packages\MSIs\7zip920X64\";
string primary = @"D:\Packages\MSIs\7zip920X64\7z920-x64.msi";
string output = @"D:\Packages\MSIs\7zip920X64\Output";
string arguments = $@"-c ""{source}"" -s ""{primary}"" -o ""{output}"" -q";
string toolPath = "C:\\Current\\Common\\Tools\\IntuneWinConverter\\IntuneWinAppUtil.exe";
using (Process compiler = new Process())
{
compiler.StartInfo.FileName = toolPath;
compiler.StartInfo.Arguments = arguments;
compiler.StartInfo.UseShellExecute = false;
compiler.StartInfo.RedirectStandardOutput = true;
compiler.Start();
Console.WriteLine(compiler.StandardOutput.ReadToEnd());
compiler.WaitForExit();
}
Console.ReadLine();
}
}
} | https://techcommunity.microsoft.com/t5/microsoft-intune/unable-to-open-intune-app-utility/td-p/1200704 | CC-MAIN-2021-25 | en | refinedweb |
- Adding Client-Side Behavior Using the ExtenderControlBase
- Adding Design-Time Support to Your Extender Control
- Adding Animations to Your Extender Control
- Summary Experience
The ImageRotatorDesigner class and ImageList properties that appear in the Properties window while the image control has focus in the designer. This default feature addresses one issue, which is being able to work with the ImageRotator properties in an integrated way, but still does not address the issue of data entry for the properties themselves and how that experience can be enhanced.
Figure 11.4 Extender properties on the image control.
- Add attributes to the property.
- Add editors to assist in assigning values.
- Create a type converter to support serialization.
Add Attributes to the Class
Most users expect when adding multiple entries to a control to be able to add them in the body of the HTML element. This is the experience we have when adding web service references or script references to the ScriptManager
[ParseChildren(true, "ImageList")] ... public class ImageRotatorExtender : ExtenderControlBase { ... }
Listing 11.7. ImageList Assignment in HTML
... <asp:Image <cc2:ImageRotatorExtender <cc2:ImageUrl <cc2:ImageUrl <cc2:ImageUrl </cc2:ImageRotatorExtender> ...
Add Attributes to the Property
To fully implement the ability to add nested image entries to our ImageRot
[ParseChildren(true, "ImageList")] ... public class ImageRotatorExtender : ExtenderControlBase { ... [DesignerSerializationVisibility( DesignerSerializationVisibility.Content)] [PersistenceMode(PersistenceMode.InnerDefaultProperty)] public ImageUrlList ImageList { ... } }.
Figure 11.5 Image URL Collection Editor
Figure 11.6 Image URL Editor
Listing 11.9. ImageUrl Class
[Serializable] public class ImageUrl { [DefaultValue(""),Bindable(true), Editor("System.Web.UI.Design.ImageUrlEditor, System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", typeof(UITypeEditor)), UrlProperty] public string Url { get; set; } }Model ConvertToString on the ImageList, the JSON string representation of the ImageList will be returned.
Listing 11.10. ImageListConverter Type Converter Class
public class ImageListConverter : TypeConverter { public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType) { Collection<ImageUrl> imageList = value as Collection<ImageUrl>; if (imageList != null && destinationType == typeof(string)) { StringBuilder builder = new StringBuilder(); builder.Append("["); bool first = true; foreach (ImageUrl imageUrl in imageList) { if(first) { first = false; } else { builder.Append(","); } builder.Append("\""); builder.Append(imageUrl.Url.Replace("~/", "")); builder.Append("\""); } builder.Append("]"); return builder.ToString(); } return base.ConvertTo(context, culture, value, destinationType); } }
Listing 11.11. ImageUrlList Collection Class
[Serializable] [TypeConverter(typeof(ImageListConverter))] public class ImageUrlList : Collection<ImageUrl> { } | https://www.informit.com/articles/article.aspx?p=1224619&seqNum=2 | CC-MAIN-2021-25 | en | refinedweb |
AWS Elastic Kubernetes Service (EKS).
Overview
Pulumi Crosswalk for AWS simplifies the creation, configuration, and management of EKS clusters, in addition to offering a single programming model and deployment workflow that works for your Kubernetes application configuration, in addition to infrastructure. This support ensures your EKS resources are fully integrated properly with the related AWS services. This includes
- ECR for private container images
- ELB for load balancing
- IAM for security
- VPC for network isolation
- CloudWatch for monitoring
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community, including Pulumi’s support for deploying Helm charts. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds, easing porting from other Kubernetes environments to EKS.
Expressing your infrastructure and Kubernetes configuration in code using Pulumi Crosswalk for AWS ensures your resulting system is ready for production using built-in best practices.
Prerequisites
Before getting started, you will need to install some pre-requisites:
aws-iam-authenticator: Amazon EKS uses IAM to provide secure authentication to your Kubernetes cluster.
These are not required but are recommended if you plan on interacting with your Kubernetes cluster:
kubectl: the standard Kubernetes command line interface.
helm: if you plan on deploying Helm charts to your cluster.
Provisioning a New EKS Cluster
To create a new EKS cluster, allocate an instance of an
eks.Cluster class in your Pulumi program:
import * as eks from "@pulumi/eks"; // Create an EKS cluster with the default configuration. const cluster = new eks.Cluster("my-cluster"); // Export the cluster's kubeconfig. export const kubeconfig = cluster.kubeconfig;
This cluster uses reasonable defaults, like placing the cluster into your default VPC with a CNI interface, using
AWS IAM Authenticator to leverage IAM for secure access to your cluster, and using two
t2.medium nodes.
After running
pulumi up, we will see the resulting cluster’s
kubeconfig file exported for easy access:
$ pulumi up Updating (dev): Type Name Status + pulumi:pulumi:Stack crosswalk-aws-dev created + └─ eks:index:Cluster my-cluster created ... dozens of resources omitted ... Outputs: kubeconfig: { apiVersion : "v1" clusters : [ [0]: { cluster: { certificate-authority-data: "...", server : "" } name : "kubernetes" } ] contexts : [ [0]: { context: { cluster: "kubernetes" user : "aws" } name : "aws" } ] current-context: "aws" kind : "Config" users : [ [0]: { name: "aws" user: { exec: { apiVersion: "client.authentication.k8s.io/v1alpha1" args : [ [0]: "token" [1]: "-i" [2]: "my-cluster-eksCluster-22c2275" ] command : "aws-iam-authenticator" } } } ] } Resources: + 43 created Duration: 11m26s
It is easy to take this file and use it with the
kubectl CLI:
$ pulumi stack output kubeconfig > kubeconfig.yml $ KUBECONFIG=./kubeconfig.yml kubectl get nodes NAME STATUS ROLES AGE VERSION ip-172-31-29-62.us-west-2.compute.internal Ready <none> 1m v1.12.7 ip-172-31-40-32.us-west-2.compute.internal Ready <none> 2m v1.12.7
From here, we have a fully functioning EKS cluster in Amazon, which we can deploy Kubernetes applications to.
Any existing tools will work here, including
kubectl, Helm, and other CI/CD products. Pulumi offers the ability
to define Kubernetes application-level objects and configuration in code too. For instance, we can deploy a canary
to our EKS cluster in the same program if we want to test that it is working as part of
pulumi up:.; // Export the cluster's kubeconfig. export const kubeconfig = cluster.kubeconfig;
If we deploy this on top of our existing EKS cluster, we will see the diff is just the creation of Kubernetes Deployment and Service objects, and the resulting URL for the load balanced service will be printed out. We can see that Pods have been spun up and we can use this URL to check the health of our cluster:
$ pulumi stack output kubeconfig > kubeconfig.yml $ KUBECONFIG=./kubeconfig.yml kubectl get po NAME READY STATUS RESTARTS AGE my-app-de-6gfz4ap5-dc8c6584f-6xmcl 1/1 Running 0 3m my-app-de-6gfz4ap5-dc8c6584f-wzlf9 1/1 Running 0 3m $ curl(pulumi stack output url) <html> <head> <title>Welcome to nginx!</title> </head> <body> <h1>Welcome to nginx!</h1> </body> </html>
For more detail on how to deploy Kubernetes applications using Pulumi, refer to one of these sections:
- Deploying Kubernetes Apps to Your EKS Cluster
- Deploying Existing Kubernetes YAML Config to Your EKS Cluster
- Deploying Existing Helm Charts to Your EKS Cluster
Changing the Default Settings on an EKS Cluster
The above example showed using the default settings for your EKS cluster. It is easy to override them by passing arguments to the constructor. For instance, this example changes the desired capacity, disables the Kubernetes dashboard, and enables certain cluster logging types:;
For a full list of options that you may set on your cluster, see the API documentation. Many common cases are described below.
Configuring Your EKS Cluster’s Networking
By default, your EKS cluster is put into your region’s default VPC. This is a reasonable default, however this is configurable if you want specific network isolation or to place your cluster work nodes on private subnets. This works in conjunction with Pulumi Crosswalk for AWS VPC which makes configuring VPCs easier.
This example creates a new VPC with private subnets only and creates our EKS cluster inside of it:
import * as awsx from "@pulumi/awsx"; import * as eks from "@pulumi/eks"; // Create a VPC for our cluster. const vpc = new awsx.ec2.Vpc("my-vpc"); const allVpcSubnets = vpc.privateSubnetIds.concat(vpc.publicSubnetIds); // Create an EKS cluster inside of the VPC. const cluster2 = new eks.Cluster("my-cluster", { vpcId: vpc.id, subnetIds: allVpcSubnets, nodeAssociatePublicIpAddress: false, }); // Export the cluster's kubeconfig. export const kubeconfig = cluster.kubeconfig;
When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use. These must be in at least two Availability Zones. We recommend a network architecture that uses private subnets for your worker nodes and public subnets for Kubernetes to create Internet-facing load balancers within. When you create your cluster, specify all of the subnets that will host resources for your cluster (including workers and load balancers).
In the above example, we passed both the private and public subnets from our VPC. The EKS package figures out which ones are public and which ones are private – and creates the worker nodes inside only the private subnets if any are specified. EKS will tag the provided subnets so that Kubernetes can discover them. If additional control is needed over how load balancers are allocated to subnets, users can attach additional subnet tags themselves as outlined in Cluster VPC Considerations.
Note that by default the
eks.Clusterwill do the same as what is described here, just inside of the default VPC inside of your account, rather than a custom VPC as shown in this example.
Configuring Your EKS Cluster’s Worker Nodes and Node Groups
Worker machines in Kubernetes are called nodes. Amazon EKS worker nodes run in your AWS account and connect to your cluster’s control plane via the cluster API server endpoint. These are standard Amazon EC2 instances, and you are billed for them based on normal EC2 On-Demand prices. By default, an AMI using Amazon Linux 2 is used as the base image for EKS worker nodes, and includes Docker, kubelet, and the AWS IAM Authenticator.
Nodes exist in groups and you can create multiple groups for workloads that require it. By default, your EKS cluster
is given a default node group, with the instance sizes and counts that you specify (or the defaults of two
t2.medium
instances otherwise). The latest version of Kubernetes available is used by default.
If you would like to disable the creation of a default node group, and instead rely on creating your own, simply pass
skipDefaultNodeGroup
as
true to the
eks.Cluster constructor. Additional node groups may then be created by calling
the
createNodeGroup function on
your EKS cluster, or by creating an
eks.NodeGroup
explicitly. In both cases, you are likely to want to configure IAM roles for your worker nodes explicitly, which can be
supplied to your EKS cluster using the
instanceRole or
instanceRoles properties.
For instance, let’s say we want to have two node groups: one for our fixed, known workloads, and another that is burstable and might use more expensive compute, but which can be scaled down when possible (possibly to zero). We would skip the default node group, and create our own node groups:
import * as aws from "@pulumi/aws"; import * as eks from "@pulumi/eks"; /** * Per NodeGroup IAM: each NodeGroup will bring its own, specific instance role and profile. */ const managedPolicyArns: string[] = [ ", ]; // Creates a role and attaches the EKS worker node IAM managed policies. Used a few times below, // to create multiple roles, so we use a function to avoid repeating ourselves. export function createRole(name: string): aws.iam.Role { const role = new aws.iam.Role(name, { assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({ Service: "ec2.amazonaws.com", }), }); let counter = 0; for (const policy of managedPolicyArns) { // Create RolePolicyAttachment without returning it. const rpa = new aws.iam.RolePolicyAttachment(`${name}-policy-${counter++}`, { policyArn: policy, role: role }, ); } return role; } // Now create the roles and instance profiles for the two worker groups. const role1 = createRole("my-worker-role1"); const role2 = createRole("my-worker-role2"); const instanceProfile1 = new aws.iam.InstanceProfile("my-instance-profile1", {role: role1}); const instanceProfile2 = new aws.iam.InstanceProfile("my-instance-profile2", {role: role2}); // Create an EKS cluster with many IAM roles to register with the cluster auth. const cluster = new eks.Cluster("my-cluster", { skipDefaultNodeGroup: true, instanceRoles: [ role1, role2 ], }); // Now create multiple node groups, each using a different instance profile for each role. // First, create a node group for fixed compute. const fixedNodeGroup = cluster.createNodeGroup("my-cluster-ng1", { instanceType: "t2.medium", desiredCapacity: 2, minSize: 1, maxSize: 3, labels: {"ondemand": "true"}, instanceProfile: instanceProfile1, }); // Now create a preemptible node group, using spot pricing, for our variable, ephemeral workloads. const spotNodeGroup = new eks.NodeGroup("my-cluster-ng2", { cluster: cluster, instanceType: "t2.medium", desiredCapacity: 1, spotPrice: "1", minSize: 1, maxSize: 2, labels: {"preemptible": "true"}, taints: { "special": { value: "true", effect: "NoSchedule", }, }, instanceProfile: instanceProfile2, }, { providers: { kubernetes: cluster.provider}, }); // Export the cluster's kubeconfig. export const kubeconfig = cluster.kubeconfig;
After configuring such a cluster, we would then want to ensure our workload’s pods are scheduled correctly on the right nodes. To do so, you will use a combination of node selectors, taints, and/or tolerances. For more information, see Assigning Pods to Nodes and Taints and Tolerances.
Managing EKS Cluster Authentication with IAM.
The
roleMappings property
for your EKS cluster lets you configure custom IAM roles. For example, you can create different IAM roles for cluster
admins, automation accounts (for CI/CD), and production roles, and supply them to
roleMappings; this has the effect of
placing them in the
aws-auth ConfigMap for your cluster automatically. Pulumi also lets you configure Kubernetes
objects, so that can also then create the RBAC cluster role bindings in your cluster to tie everything together.
For a complete example of this in action, see Simplifying Kubernetes RBAC in Amazon EKS.
Deploying Kubernetes Apps to Your EKS Cluster
Pulumi supports the entire Kubernetes object model in the @pulumi/kubernetes package. For more information on these object types, including Deployments, Services, and Pods, see Understanding Kubernetes Objects.
With Pulumi, you describe your desired Kubernetes configuration, and
pulumi up will diff between the current
state and what is desired, and then drive the API server to bring your desired state into existence.
For example, this program creates a simple load balanced NGINX service, exporting its URL:
import * as k8s from "@pulumi/kubernetes"; // Create an NGINX Deployment and load balanced Service. }] }], } } }, }); const service = new k8s.core.v1.Service(`${appName}-svc`, { metadata: { labels: appLabels }, spec: { type: "LoadBalancer", ports: [{ port: 80, targetPort: "http" }], selector: appLabels, }, }); // Export the URL for the load balanced service. export const url = service.status.loadBalancer.ingress[0].hostname;
Running
pulumi up deploys these Kubernetes objects, providing rich status updates along the way:
Updating (dev): Type Name Status pulumi:pulumi:Stack crosswalk-aws-dev + ├─ kubernetes:core:Service my-app-svc created + └─ kubernetes:apps:Deployment my-app-dep created Outputs: + url : "a2861638e011e98a329401e61c-1335818318.us-west-2.elb.amazonaws.com" Resources: + 2 created Duration: 22s
Deploying to Specific Clusters
By default, Pulumi targets clusters based on your local kubeconfig, just like
kubectl does. So if your
kubectl
client is set up to talk to your EKS cluster, deployments will target it. We saw earlier in
Provisioning a New EKS Cluster, however, that you can deploy into any
Kubernetes cluster created in your Pulumi program. This is because each Kubernetes object specification accepts
an optional “provider” that can programmatically specify a kubeconfig to use.
This is done by instantiating a new
kubernetes.Provider object, and providing one or many of these properties:
cluster: A cluster name to target, if there are many in your kubeconfig to choose from.
context: The name of the kubeconfig context to use, if there are many to choose from.
kubeconfig: A stringified JSON representing a full kubeconfig to use instead of your local machine’s.
For example, to deploy an NGINX Deployment into a cluster whose kubeconfig our program has access to:
import * as k8s from "@pulumi/kubernetes"; // Create a provider using our Kubernetes config: const provider = new k8s.Provider("custom-provider", { kubeconfig: "..." }); // Declare a deployment that targets this provider: }] }], } } }, }, { // Use our custom provider for this object. provider: provider, }, );
To ease doing this against an EKS cluster just created, the cluster object itself offers a
provider property of type
kubernetes.Provider, already pre-configured.
For more information about configuring access to multiple clusters, see Configure Access to Multiple Clusters and the Pulumi Kubernetes Setup documentation.
Deploying Existing Kubernetes YAML Config to Your EKS Cluster
Specifying your Kubernetes object configurations in Pulumi lets you take advantage of programming language features, like variables, loops, conditionals, functions, and classes. It is possible, however, to deploy existing Kubernetes YAML. The two approaches can be mixed, which is useful when converting an existing project.
The
ConfigFile class can be
used to deploy a single YAML file, whereas the
ConfigGroup class can deploy
a collection of files, either from a set of files or in-memory representations.
For example, imagine we have a directory,
yaml/, containing the full YAML for the Kubernetes Guestbook application, perhaps across multiple files. We can deploy
it using Pulumi into our EKS cluster with the following code and by running
pulumi up:
import * as eks from "@pulumi/eks"; import * as k8s from "@pulumi/kubernetes"; // Create an EKS cluster. const cluster = new eks.Cluster("my-cluster"); // Create resources from standard Kubernetes guestbook YAML example. const guestbook = new k8s.yaml.ConfigGroup("guestbook", { files: "yaml/*.yaml" }, { provider: cluster.provider }, ); // Export the (cluster-private) IP address of the Guestbook frontend. export const frontendIp = guestbook.getResource("v1/Service", "frontend", "spec").clusterIP;
The
ConfigFile and
ConfigGroup classes both support a
transformations property which can be used to “monkey patch” Kubernetes configuration on the fly. This can be used to rewrite
configuration to include additional services (like Envoy sidecars), inject tags, and so on.
For example, a transformation like the following can make all services private to a cluster, by
changing
LoadBalancer specs into
ClusterIPs, in addition to placing objects into a desired namespace:
const guestbook = new k8s.yaml.ConfigGroup("guestbook", { files: "yaml/*.yaml", transformations: [ (obj: any) => { // Make every service private to the cluster. if (obj.kind == "Service" && obj.apiVersion == "v1") { if (obj.spec && obj.spec.type && obj.spec.type == "LoadBalancer") { obj.spec.type = "ClusterIP"; } } }, // Put every resource in the created namespace. (obj: any) => { if (obj.metadata !== undefined) { obj.metadata.namespace = namespaceName } else { obj.metadata = {namespace: namespaceName} } } ], }, );
Of course, it is easy to create invalid transformations that break your applications, by changing settings the application or configuration did not expect, so this capability must be used with care.
Deploying Existing Helm Charts to Your EKS Cluster
Pulumi can deploy Helm charts through a variety of means. This includes deploying a chart by name from the default Helm “stable charts” repository, from a custom Helm repository (over the Internet or on-premises), or from a tarball directly.
For these examples to work, you will need to install Helm and, once installed, initialize it with
helm init --client-only.
This program installs the stable Wordpress chart into our EKS cluster:
import * as eks from "@pulumi/eks"; import * as k8s from "@pulumi/kubernetes"; // Create an EKS cluster. const cluster = new eks.Cluster("my-cluster"); // Deploy Wordpress into our cluster. const wordpress = new k8s.helm.v3.Chart("wordpress", { repo: "stable", chart: "wordpress", values: { wordpressBlogName: "My Cool Kubernetes Blog!", }, }, { providers: { "kubernetes": cluster.provider } }); // Export the cluster's kubeconfig. export const kubeconfig = cluster.kubeconfig;
The
values array provides the configurable parameters for the chart. If we leave off the
version, the latest
available chart will be fetched from the repository (including on subsequent updates, which may trigger an upgrade).
The
getResourceProperty function on a chart can be used to get an internal resource provisioned by the chart.
Sometimes this is needed to discover attributes such as a provisioned load balancer’s address. Be careful when
depending on this, however, as it is an implementation detail of the chart and will change as the chart evolves.
Note that Pulumi support for Helm does not use Tiller. There are known problems, particularly around security, with Tiller, and so the Helm project is discouraging its use and deprecating it as part of Helm. As a result of this, certain charts that depend on Tiller being present will not work with Pulumi. This is by design, affects only a small number of charts, and given Helm’s direction, this should be considered a bug in the chart itself.
As mentioned, there are other ways to fetch the chart’s contents. For example, we can use a custom repo:
const chart = new k8s.helm.v3.Chart("empty", { chart: "raw", version: "0.1.0", fetchOpts: { repo: "", }, });
Or, we can use a tarball fetched from a web URL:
const chart = new k8s.helm.v3.Chart("empty1", { chart: "", });
Using an ECR Container Image from an EKS Kubernetes Deployment
Pulumi Crosswalk for AWS ECR enables you to build, publish, and consume private Docker
images easily using Amazon’s Elastic Container Registry (ECR). The
aws.ecr.buildAndPushImage function takes a name and a relative location on disk, and will
- Provision a private ECR registry using that name
- Build the
Dockerfilefound at the relative location supplied
- Push the resulting image to that registry
- Return the repository image information, including an image name your Kubernetes objects can use
This makes it easy to version your container images alongside the Kubernetes specifications that consume them.
For example, let’s say we have an
app/ directory containing a fully Dockerized application (including a
Dockerfile), and would like to deploy that as a Deployment and Service running in our EKS cluster. This program
accomplishes this with a single
pulumi up command:
import * as awsx from "@pulumi/awsx"; import * as eks from "@pulumi/eks"; import * as k8s from "@pulumi/kubernetes"; // Create a new EKS cluster. const cluster = new eks.Cluster("cluster"); // Create a NGINX Deployment and load balanced Service, running our app.: awsx.ecr.buildAndPushImage("my-repo", "./app").image(),;
For more information about ECR, see the Pulumi Crosswalk for AWS ECR documentation.
Additional EKS Resources
For more information about Kubernetes and EKS, see the following: | https://www.pulumi.com/docs/guides/crosswalk/aws/eks/ | CC-MAIN-2021-21 | en | refinedweb |
A replacement for setInterval() and setTimeout() which works in unfocused windows.
A replacement for setInterval() and setTimeout() which works in unfocused windows.
For scripts that rely on WindowTimers like setInterval() or setTimeout() things get confusing when the site which the script is running on loses focus. Chrome, Firefox and maybe others throttle the frequency of firing those timers to a maximum of once per second in such a situation. However this is only true for the main thread and does not affect the behavior of Web Workers. Therefore it is possible to avoid the throttling by using a worker to do the actual scheduling. This is exactly what WorkerTimers do.
WorkerTimers are available as a package on npm. Simply run the following command to install it:
npm install worker-timers
You can then require the workerTimers instance from within your code like this:
import * as workerTimers from 'worker-timers';
The usage is exactly the same (despite of the error handling and the differentiation between intervals and timeouts) as with the corresponding functions on the global scope.
var intervalId = workerTimers.setInterval(() => { // do something many times }, 100);
workerTimers.clearInterval(intervalId);
var timeoutId = workerTimers.setTimeout(() => { // do something once }, 100);
workerTimers.clearTimeout(timeoutId);
The native WindowTimers are very forgiving. Calling
clearInterval()or
clearTimeout()without a value or with an id which doesn't exist will just get ignored. In contrast to that workerTimers will throw an error when doing so.
// This will just return undefined. window.clearTimeout('not-an-timeout-id');
// This will throw an error. workerTimers.clearTimeout('not-an-timeout-id');
Another difference between workerTimers and WindowTimers is that this package maintains two separate lists to store the ids of intervals and timeouts internally. WindowTimers do only have one list which allows intervals to be cancelled by calling
clearTimeout()and the other way round. This is not possible with workerTimers. As mentioned above workerTimers will throw an error when provided with an unknown id.
const periodicWork = () => { };
// This will stop the interval. const windowId = window.setInterval(periodicWork, 100); window.clearTimeout(windowId);
// This will throw an error. const workerId = workerTimers.setInterval(periodicWork, 100); workerTimers.clearTimeout(workerId);
This package is intended to be used in the browser and requires the browser to have support for Web Workers. It does not contain any fallback which would allow it to run in another environment like Node.js which doesn't know about Web Workers. This is to prevent this package from silently failing in an unsupported browser. But it also means that it needs to be replaced when used in a web project which also supports server-side rendering. That should be easy, at least in theory, because each function has the exact same signature as its corresponding builtin function. But the configuration of a real-life project can of course be tricky. For a concrete example, please have a look at the worker-timers-ssr-example provided by @newyork-anthonyng. It shows the usage inside of a server-side rendered React app.
If WorkerTimers are used inside of an Angular App and Zones are used to detect changes, the behavior of WorkerTimers can be confusing. Angular is using a Zone which is patching the native setInterval() and setTimeout() functions to get notified about the execution of their callback functions. But Angular (more specifically zone.js) is not aware of WorkerTimers and doesn't patch them. Therefore Angular needs to be notified manually about state changes that occur inside of a callback function which was scheduled with the help of WorkerTimers. | https://xscode.com/chrisguttandin/worker-timers | CC-MAIN-2021-21 | en | refinedweb |
Important: Please read the Qt Code of Conduct -
How to handle a change of the current widget in QStackedWidget?
I found the following signal:
void QStackedWidget::currentChanged(int index).
But I don’t understand how it works or how I use it. Index holds the index of the new current widget. What it means? What am I supposed to give this signal?
What I want is QStackedWidget, which contains 3 widgets. These 3 widgets are just initialized and added using the function:
// Example: QWidget *widget1 = new QWidget(parent); parent->insertWidget(0, widget1); // It’s the same for every widget.
I don’t want to draw a widget unless it is current. So I need to handle a event changing the current widget to render the widget contents using my function, which will create the layout and elements.
If current widget is changed, I want to destroy the last one and render a new one.
call setCurrentIndex(int index) to display the one you want to. I do not think you want to destroy any. Otherwise, QStackedWidget is not needed.
@mingvv
I have done it before so that the widgets are not created initially until the first time the user visits the page. But as @JoeCFD says, if you then wish to destroy the widget page when the user visits a different page, rather than keeping it around for re-use, then there is not much point using a
QStackedWidgetat all.
@JonB said in How to handle a change of the current widget in QStackedWidget?:
@mingvv
I have done it before so that the widgets are not created initially until the first time the user visits the page.
How you did it?
@JonB said in How to handle a change of the current widget in QStackedWidget?:
@mingvv
But as @JoeCFD says, if you then wish to destroy the widget page when the user visits a different page, rather than keeping it around for re-use, then there is not much point using a
QStackedWidgetat all.
Can you offer another type of widget as a replacement?
I don’t want to create a new window and delete the old one. Actually, I came from WPF, there are pages for it. So I can switch pages in one window. As I understood QStackedWidget is analogous to pages and I decided that I would be comfortable to pass from WPF using it.
In principle, I don’t need to save the information when I go back (1<-2, 2<-3 etc.), but I need to pass it forward (0->1, 1->2, 2->3 etc.) through the pages.
@mingvv said in How to handle a change of the current widget in QStackedWidget?:
How you did it?
Looking at the code I didn't actually use
QStackedWidget::currentChanged(), though I probably could have done. I had 14 pages in the app, and did not want the slowed start-up time to create them all initially. I have a menu of buttons for visiting the pages. These are
QActions, which has
triggerredsignal, which is the equivalent of
QStackedWidget::currentChanged.
The
QStackedWidgetis initially populated with "placeholder" (empty) widgets:
for pageName in self.PageNames: page = JPlaceHolder() page.setObjectName(str(pageName)) self.pageStack.addWidget(page)
Connect the
QActions:
for action in self.actions: action.setCheckable(True) self.addAction(action) action.triggered.connect(self.pageChange)
And in the
pageChangeslot:
def pageChange(self): action = self.sender() # find the index in self.actions of the action index = self.actions.index(action) widget = self.pageStack.widget(index) # if the widget is still a placeholder it has not been created yet if isinstance(widget, JPlaceHolder): name = self.PageNames(index) with ShowWaitCursor(): self.createPage(name)
Finally the
createPage():
def createPage(self, name): if name == self.PageNames.SearchPage: try: return self.searchPage except AttributeError: pass self.searchPage = page = SearchPage(self) elif name == self.PageNames....: ... # Now replace the placeholder widget in the page stack with the actual page created # The value of the self.PageNames enumeration gives the position of the page in the page stack index = int(name.value) widget = self.pageStack.widget(index) if isinstance(widget, JPlaceHolder): # if the widget in the page stack is a "placeholder" # we create the actual page now and replace the placeholder with the page self.pageStack.removeWidget(widget) widget.deleteLater() self.pageStack.insertWidget(index, page) return page
Whether it needs to be as much as this I don't recall. Perhaps you can just fill your
QStackedWidgetwith
nullptrs and create the pages in
QStackedWidget::currentChanged(int index)when called, or maybe I found this didn't work if the page was
nullptr....
I don’t need to save the information when I go back (1<-2, 2<-3 etc.), but I need to pass it forward (0->1, 1->2, 2->3 etc.) through the pages.
Have you looked at
QWizard? That allows forward navigation through pages. You get backward navigation for free too, though if you don't want that you don't have to implement it/can forbid it. It's just a different kind of multi-page from
QStackedWidget. | https://forum.qt.io/topic/124968/how-to-handle-a-change-of-the-current-widget-in-qstackedwidget | CC-MAIN-2021-21 | en | refinedweb |
Where do things go?
Now that we know how to define a class, and how to implement its methods, lets look at where things go.
Note: These things are conventions in real life, but for me (and that means for you in this class) they are unbreakable rules!
Rules:
The class defintion belongs in a header file (.h), that has the exact same name and capitalization of the class. Example: Hello.h
The class implementation belongs in a source file (.cpp) that has the exact same name and capitalization of the class. Example: Hello.cpp
The header file will be guared against multiple inclusion with the #ifndef ... #define .. #endif construct
The source file will always include its own class definition (e.g. #include "Hello.h" )
This way, there are always 2 files for each class (exception: pure virtual classes).
Note: If you use eclipse you can have ecplise create these files for you (say New / Class)
To start the program, we still need a main() function. This function should go into its own file, preferably something like "main.cpp". In good OO programs this function is very short! Example:
#include "SomeClass.h" int main() { SomeClass *myInstance = new SomeClass(); myInstance->start(); delete myInstance; return 0; }
Remember to always include things where there are used!
Because I love graphics, here's another graphic showing the same thing:
But enough theory, here is a complete example:
#ifndef HELLO_H_ #define HELLO_H_ class Hello { private: bool formal; public: void greeting(); void setFormal(bool f); bool getFormal(); }; #endif /*HELLO_H_*/
#include "Hello.h" #include <iostream> using namespace std; void Hello::greeting() { if (formal) cout << "Hello, nice to meet you!" << endl; else cout << "What's up?" << endl; } void Hello::setFormal(bool f) { formal = f; } bool Hello::getFormal() { return formal; }
#include "Hello.h" int main() { Hello *h = new Hello(); h->setFormal(true); h->greeting(); delete h; return 0; } | https://max.berger.name/teaching/s06/script/ch11s02.html | CC-MAIN-2021-21 | en | refinedweb |
read ASCII Gaussian Cube Data files More...
#include <vtkGaussianCubeReader.h>
read ASCII Gaussian Cube Data files
vtkGaussianCubeReader is a source object that reads ASCII files following the description in The FileName must be specified.
Definition at line 36 of file vtkGaussianCubeReader.h.
Definition at line 40 of file vtkGaussianCubeReadertkPolyDataAlgorithm.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkPolyDataAlgorithm.
Reimplemented from vtkPolyDataAlgorithm.
Implements vtkMoleculeReaderBase.
Fill the output port information objects for this algorithm.
This is invoked by the first call to GetOutputPortInformation for each port so subclasses can specify what they can handle.
Reimplemented from vtkPolyDataAlgorithm.
Definition at line 50 of file vtkGaussianCubeReader.h. | https://vtk.org/doc/nightly/html/classvtkGaussianCubeReader.html | CC-MAIN-2021-21 | en | refinedweb |
This morning I came into work and went through my usual 100 or so emails. One of the emails was from MSSQLTips.com, it was on how to monitor SQL Server Database mirroring with email alerts. By Alan Cranfield. While agree with Alan that every DBA should monitor their database mirroring with email alerts I disagreed with his method. He had the DBA create a job that was scheduled to run at some interval throughout the day. His job would query the sys.database_mirroring view. As DBAs we need to know immediately when something fails or changes. 5 minutes could be the difference between a quick fix and restoring a 500 GB db mirror.
So what would be a better way to monitor and alert a DBA when there is a change in the state of Database mirroring? I prefer to use Alerts for events. Event notifications can be created directly in the SQL Server Database Engine or by using the WMI Provider for Server Events. A DBA can specify which db mirroring event they wish to moitor. Here is a table of events to monitor for:
Now that we know the Event and the State here is how to add an Alert to notify you that the state of DB mirroring has changed.
USE [msdb] GO /****** Object: Alert [DBM State Change] Script Date: 10/15/2009 08:03:20 ******/ EXEC msdb.dbo.sp_add_alert @name=N'DBM State Change', @message_id=0, @severity=0, @enabled=1, @delay_between_responses=0, @include_event_description_in=1, @category_name=N'[Uncategorized]', @wmi_namespace=N'\.rootMicrosoftSqlServerServerEventsMSSQLSERVER', @wmi_query=N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE State = 6 ', @job_id=N'00000000-0000-0000-0000-000000000000' GO
This is an alert that I created on the principal server. I also have created a similar alert on the mirror server where I look for state = 5. These two alerts will notify me if the connection between the Principal and Mirror is lost due to network or some other failure.
To receive notification when this event happens it is simple to just create an operator and have the event email the operator if and when the event conditions are met.
What other Mirror Events should every DBA monitor? I find the unsent and unrestored log to be two very import events to receive notifications for. For those events just simply create a new event for the event ID in the table below and set you monitor threshold.
You can also script this by using sp_add_alert as follows:
USE [msdb] GO /****** Object: Alert [DB Mirroring Unsent Log Warning] Script Date: 10/15/2009 08:14:29 ******/ EXEC msdb.dbo.sp_add_alert @name=N'DB Mirroring Unsent Log Warning', @message_id=32042, @severity=0, @enabled=0, @delay_between_responses=0, @include_event_description_in=1, @category_name=N'[Uncategorized]', @job_id=N'00000000-0000-0000-0000-000000000000' GO
Good Luck and Happy monitoring! | https://blogs.lessthandot.com/index.php/datamgmt/dbadmin/how-to-monitor-database-mirroring/ | CC-MAIN-2021-21 | en | refinedweb |
SYSPAGE_ENTRY()
Return an entry from the system page
Synopsis:
#include <sys/syspage.h> #define SYSPAGE_ENTRY( entry )...
Arguments:
- entry
- The entry to get; see below.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:.
- uint64_t cycles_per_sec — the number of CPU clock cycles per second for this system. For more information, see ClockCycles() .
Returns:
A pointer to the structure for the given entry.
Examples:
; }
Classification:
Caveats:
SYSPAGE_ENTRY() is a macro. | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/syspage_entry.html | CC-MAIN-2021-21 | en | refinedweb |
Originally posted on Twitter as a thread:
Huge Update: Video Version Now Available!
👉 YouTube Playlist - Only 13 minutes in total! 🥳
Always wanted to check out Svelte (aka. "the next big thing in web dev") but never got time for it? ⏰
🔥 I've got a 10-minute crash course for you! 👇
(Spoiler alert: Svelte is so intuitive and easy to use that you may feel like you already know it! 🥳)
1 - How Svelte works
- Compiler: Doesn't ship a Svelte "library" to users, but build-time optimized plain JS
- Components: App is made up of composable UI elements
- Reactive: Event/User interaction triggers chain of state changes, auto-updating components throughout the entire app
2 - UI Is a Component Tree
A component defines how your app should interpret some abstract "state" values, so that it can turn them into DOM elements in your browser, and ultimately pixels on your screen.
3 - The Anatomy of a Svelte Component)
4 - The Svelte Template
{ }.
5 - Setting "Props".
6 - Updating Component States
User actions trigger events.
on: lets us listen to events and fire functions to update states. State changes will auto-update the UI.
Data generally flows from a parent to a child, but we can use
bind: to simplify the state-update logic by allowing two-way data flow.
7 -
$: Reactive Statements
"Reactive statements" are those statements marked by
$:.
Svelte analyzes which variables they depend on. When any of those dependencies changes, the corresponding reactive statements will be rerun. Very useful for declaring derived states, or triggering "side effects".
8 - Reactive State "Store"!
9 - Conditional Rendering And Lists.)
10 - Elegant Async/Await
It's super easy to do asynchronous stuff like API requests with Svelte.
We can simply
{#await} a Promise to resolve, displaying a "loading" placeholder before the result is ready.
Note that we await the Promise in the template section, so no
await keyword in
<script>.
BONUS - Animated Transitions
Svelte comes with neat animated transitions built-in. Try giving your components a
transition:fly property! There're also other types like fade, slide, etc. You can also use
in:
out: to separately define intro/outros.
Attached to the transition prop are the params.. The Svelte REPL is a great place to start!
Have fun! 🥳
(Thread crash course format inspired by @chrisachard 😁 Check out his excellent React/Redux, Git crash courses!)
PS
Because Svelte is so expressive, I joked the other day that Svelte devs count "characters of code", instead of "lines of code"...
True that! Here are a few tweet-sized Svelte applets Tomasz Łakomy (@tlakomy) and I (@hexrcs) have been playing with -
Liquid error: internal
OK let me do a tweet-sized @sveltejs demo too. Password strength checker🔑 in 153B
Play with it here svelte.dev/repl/bcda7c79c…
<script>
import z from 'zxcvbn'
let p=''
$: s=z(p).score>3
</script>
<input bind:value={p} />
<p style={s||'color:red'}>{s?'Strong':'Weak'} password</p>19:44 PM - 12 Oct 2019
So, what's the most complex Svelte app that we can fit in a tweet? 😁
Like this post?
I'll be posting more on Twitter: @hexrcs
You can also find me at my homepage: xiaoru.li
Discussion (28)
OMG! This post was amazing! After finishing the official Svelte tutorial, I can tell that you managed to cover many important topics with very few words!
Thanks! We need to start talking more about Svelte and the message behind the framework
Thanks a lot for the kind words! Glad you liked it! 😄
More Svelte users, better ecosystem 😄
Hello! Thanks for the great article!
If I understand correctly, there is a typo here.
I think that "Setting a prop is just like doing regular HTML." should be "Setting a prop is just like doing regular JS".
Hi, it's actually not a typo. What I meant was setting props in regular HTML
<img alt="Nice image" src="nice-img.jpg">(image tag with
altand
srcprop/attribute) is exactly the same in Svelte's template.
I've got it.Thanks!
Very nice visuals! That help a lot to grasp the concept you explain very quickly! Great job!
Thanks! 😁
Could you elaborate on setting props, in example 5? You export dark, so you can set it? But you set color, not dark. It's the one thing I don't understand
Thanks for asking, I see I should have made it more clear there.
So we are dealing with 2 components here, a simpler component
CoolDiv.svelte(whose code is not shown), and the component we are looking at. Let's say it's called
CoolDivWithText.svelte. All the code shown here belongs to
CoolDivWithText.svelte.
The
darkprop is on
CoolDivWithTextwhile
colorprop is on
CoolDiv.
CoolDivWithTextwraps around
CoolDiv; instead of offering arbitrary color options, we limit it to only allow dark theme or bright theme.
(I'm thinking of remaking this example)
Edit: I've updated the image in the article. Here is the original one this comment was talking about.
I'm just totally new to Svelte, but I really like this approach to crash-course.
Glad you liked it! I'm also working on a GraphQL crash course, which I will publish soon.
I updated the 5th pic, by the way. Hopefully, it's less confusing this time.
Sometimes we can become biased and miss the readers' point of view. It's hard to unlearn stuff. 😛 Thanks for pointing it out!
Thanks for creating this post! Reactive Statements remind me of the useEffect hook in React
Yes! But if you are using it to set derived states, you don't have to specify the dependencies manually.
Das ist toll!
Nice read.
It would be great to have you join the Svelte JS developers group on LinkedIn.
linkedin.com/groups/10473500
Thanks, joining the group! (I'm not very active on LinkedIn though 😛)
Thanks, this article is exactly what I was looking for :)
Its a hot topic lately but it doesnt bring anything significant to the table.
Speed, fully reactive state management, much smaller bundle size, write less code to do more? 😉
Yes. Thats what we already have with the popular frameworks.
Aside from that i appreciate you trying it out and letting us know what it can do.
Svelte has very good performance. If somebody wanted to make a hybrid app, this framework is a strong pick, as hybrid apps with popular frameworks often have visible performance issues.
Great work! Best presentation of the "get started with svelte" ive found on the internet. Congrats :)
Awe, thanks so much!
This the tutorials I like. Short, clean and most important, useful!
Great write up. This should be considered the pre-tutorial. A nice 10,000ft view before diving into the nitty gritty.
This was an awesome post!!, wish I learned everything this way. Perfect crash course!
Svelte is very appealing but what about meta data?
Great crash course, very concise, thank you
You're welcome 🙂 | https://practicaldev-herokuapp-com.global.ssl.fastly.net/methodcoder/svelte-crash-course-with-pics-27cc | CC-MAIN-2021-21 | en | refinedweb |
firebase_event_service 0.0.3+3
firebase_event_service: ^0.0.3+3 copied to clipboard
Use this package as a library
Depend on it
Run this command:
With Dart:
$ dart pub add firebase_event_service
With Flutter:
$ flutter pub add firebase_event_service
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: firebase_event_service: ^0.0.3+3
Alternatively, your editor might support
dart pub get or
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:firebase_event_service/firebase_event_service.dart'; | https://pub.dev/packages/firebase_event_service/install | CC-MAIN-2021-21 | en | refinedweb |
This site uses strictly necessary cookies. More Information
Hello, I am still fairly new to unity. I am trying to create a 3rd person FPS game, and I looked at some guides on the internet and on Youtube but couldn't find an answer to my issue.
My issue right now is that I can get the spheres to fire from an empty game object that is attached to the main character, but as it fires the empty game object keeps drifting down further and further (although it says in the inspector that it isn't). The odd part, however, is that they still occasional fire from the empty object while still firing in the constant stream that is drifting downward.
Here is the code I am using for the bullet firing script:
using UnityEngine; using System.Collections;
public class FireSphere : MonoBehaviour {
// Use this for initialization
public GameObject Bullet_Emitter;
public GameObject Bullet;
public float Bullet_Destroy_Time;
void Start () {
}
// Update is called once per frame
void Update () {
if (Input.GetKeyDown("f") || Input.GetMouseButton(0))
{
//The Bullet instiation happens here
GameObject Temporary_Bullet_Handler;
Temporary_Bullet_Handler = Instantiate(Bullet, Bullet_Emitter.transform.position, Bullet_Emitter.transform.rotation) as GameObject;
Rigidbody Temporary_RigidBody;
Temporary_RigidBody = Temporary_Bullet_Handler.GetComponent<Rigidbody>();
Temporary_RigidBody.constraints = RigidbodyConstraints.FreezePositionY;
Temporary_RigidBody.velocity = transform.forward * 50;
/// Temporary_RigidBody.AddForce(transform.forward * 1000);
Destroy(Temporary_Bullet_Handler, Bullet_Destroy_Time);
}
}
}
Here is what it looks like:
And this is what it looks like after about a minute:
Answer by Schek
·
May 16, 2016 at 02:36 PM
No one knows the answer?
Answer by Ninjadk
·
Oct 24, 2016 at 07:05 AM
Did you remove the gravity from the rigidbody.
Sphere with AddForce or Velocity not moving
1
Answer
Why does my character sometimes jump higher than other times?
1
Answer
How to set rigidbody velocity and angularVelocity to Vector3.zero over time?
1
Answer
Having a OnTriggerEnter make another gameobject Add Force in UnityScript.
1
Answer
Unity glitches using Ridigbodies as bullets in multiplayer- am I missing something?
0
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/1186728/bullets-falling-through-the-terrain.html | CC-MAIN-2021-21 | en | refinedweb |
Comment on Tutorial - Getting Started with Android By Emiley J
Comment Added by : ovdetowpofoka
Comment Added at : 2017-09-17 06:44:44
Comment on Tutorial : Getting Started with Android By Emiley J
ovdetowpofoka. Simple Example. Understood. Thanx.
In aksha
View Tutorial By: Amila Nadanda at 2011-07-28 06:21:05
2. The tutorial is very usefull, i am happy now
View Tutorial By: Haneefa at 2013-03-30 06:31:47
3. It is very help full to me and very simple underst
View Tutorial By: Reddymalla Babu Sircilla at 2013-02-22 08:28:22
4. What is the difference between an Interface and an
View Tutorial By: RAJA at 2015-06-26 17:28:31
5. java.lang.IllegalArgumentException means usually t
View Tutorial By: Seink at 2008-11-12 11:03:00
6. dear i have different files of ms access 2010 like
View Tutorial By: neeo at 2012-01-16 07:02:20
7. import java.io.*;
import java.util.Stack;
View Tutorial By: jack&jill at 2011-07-26 06:40:32
8. i want a java code to make a GUI in netbeans and s
View Tutorial By: siddharth at 2012-03-17 09:59:59
9. This is really helpful, i'm planning to also add a
View Tutorial By: JhunZhen at 2011-07-09 21:42:32
10. You should add the mime type as follows:
View Tutorial By: Anonymous at 2009-03-19 08:30:35 | https://www.java-samples.com/showcomment.php?commentid=41963 | CC-MAIN-2021-21 | en | refinedweb |
android
/
kernel
/
mediatek
/
android-6.0.0_r0.6
/
.
/
Documentation
/
DMA-API-HOWTO.txt
blob: 14129f149a75432589f3bc925f7776a59354ef62 [, see
DMA-API.txt. (e.g. pci_dma_*).
First of all, you should make sure
#include <linux/dma-mapping.h>
is in your driver. This file will obtain for you the definition of the
dma_addr_t (which can hold any valid DMA address for the platform)
type which should be used everywhere you hold a DMA (bus) limitations
Does your device have any DMA addressing limitations? For example, is
your device only capable of driving the low order 24-bits of address? bus.
This return address and the DMA bus master, alloc);
The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. (but at that time it may be better to
go for. Not all dma implementations support dma_mapping_error() interface.
However, it is a good practice to call dma_mapping_error() interface, which
will invoke the generic mapping error check interface. Doing so will ensure
that the mapping code will work correctly on all dma implementations without
any dependency on the specifics of the underlying implementation. Using the
returned address without checking for errors could result in failures ranging
from panics to silent data corruption. A couple of examples of incorrect ways
to check for errors that make assumptions about the underlying dma
implementation are as follows and these are applicable to dma_map_page() as
well.
Incorrect example 1:
dma_addr_t dma_handle;
dma_handle = dma_map_single(dev, addr, size, direction);
if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
goto map_error;
}
Incorrect example 2:
dma_addr_t dma_handle;
dma_handle = dma_map_single(dev, addr, size, direction);
if (dma_handle == DMA_ERROR_CODE) {
goto map_error;
}, the buffer needs to be synced
properly in order for the cpu and device to see the most uptodate(dma_handle)) {
/*
*> | https://android.googlesource.com/kernel/mediatek/+/android-6.0.0_r0.6/Documentation/DMA-API-HOWTO.txt | CC-MAIN-2021-21 | en | refinedweb |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
On 16/07/2015 at 11:09, xxxxxxxx wrote:
From time to time I run up against the problem of the need of passing a message in a plugin such as a button press to the rest of my code in GetVirtualObjects. I've missed some basic coding lesson somewhere. Here is (hopefully) the relevant code:
ABITMAPBUTTON_A = 1042,
ABITMAPBUTTON_B = 1043,
ABITMAPBUTTON_C = 1044,
def Message(self, node, type, data) :
if type==c4d.MSG_DESCRIPTION_COMMAND:
if data['id'][0].id==1042:
self.thePedalA = True
self.thePedalB = False
self.thePedalC = False
if data['id'][0].id==1043:
self.thePedalB = True
self.thePedalA = False
self.thePedalC = False
if data['id'][0].id==1044:
self.thePedalC = True
self.thePedalA = False
self.thePedalB = False
return True
def GetVirtualObjects(self, op, hierarchy help) :
if self.thePedalA == True:
thePedal = pedal_A
if self.thePedalB == True:
thePedal = pedal_B
if self.thePedalC == True:
thePedal = pedal_C
This code works but the changes don't update till I make a change to an element in the plugin to update it. Is this the way that I should be passing info between these two functions, by global variables (this causes errors till a button is pressed), or is there another way?
On 17/07/2015 at 02:32, xxxxxxxx wrote:
Hello,
when you press the button Cinema cannot know that you changed internal data and want to recalculate the cache. So you have to inform Cinema that you changed something.
You can do this by sending the corresponding message the the node itself in the code that reacts to the button:
def Message(self, node, type, data) :
if type==c4d.MSG_DESCRIPTION_COMMAND:
if data['id'][0].id == 1010:
print("button pressed")
node.Message(c4d.MSG_CHANGE)
return True
best wishes,
Sebastian
On 17/07/2015 at 08:49, xxxxxxxx wrote:
Thanks for your help Sebastian.
I was afraid of this, just because with so much work figuring out what works in the GetVirtualObjects function, I'm hesitant to toss my code into a shiny new and different wrapper. I guess that the home place that I usually place my main code just needs to shift a bit to accommodate messages from input.
-David
On 17/07/2015 at 10:03, xxxxxxxx wrote:
I don't really understand what you are talking about, what "wrapper" do you mean? I think all you have to do is to add the one line of code that sends the message to your code and it should work.
Best wishes,
Sebastian
On 17/07/2015 at 10:10, xxxxxxxx wrote:
Aha! Sorry for missing your line of code Sebastian. Now it works perfectly! i guess that I got too used to disappointment to see a solution. Also, I suppose that I should avoid words such as "wrapper" that are also coding words in my posts. I meant wrapper in the wrapped gift sense. | https://plugincafe.maxon.net/topic/8933/11863_message-to-getvirtualobjects-communicationsolved | CC-MAIN-2021-21 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.