NAME
libsolv-pool - Libsolv's pool object

PUBLIC ATTRIBUTES
void *appdata
Stringpool ss
Reldep *rels
int nrels
Repo **repos
int nrepos
int urepos
Repo *installed
Solvable *solvables
int nsolvables
int disttype
Id *whatprovidesdata
Map *considered
int debugmask
Datapos pos
Queue pooljobs

CREATION AND DESTRUCTION
Pool *pool_create();
    Create a new instance of a pool.

void pool_free(Pool *pool);
    Free a pool and all of the data it contains, e.g. the solvables, repositories, and strings.

DEBUGGING AND ERROR REPORTING
Constants:
SOLV_FATAL, SOLV_ERROR, SOLV_WARN, SOLV_DEBUG_STATS, SOLV_DEBUG_RULE_CREATION, SOLV_DEBUG_PROPAGATE, SOLV_DEBUG_ANALYZE, SOLV_DEBUG_UNSOLVABLE, SOLV_DEBUG_SOLUTIONS, SOLV_DEBUG_POLICY, SOLV_DEBUG_RESULT, SOLV_DEBUG_JOB, SOLV_DEBUG_SOLVER, SOLV_DEBUG_TRANSACTION, SOLV_DEBUG_TO_STDERR

Functions:
void pool_debug(Pool *pool, int type, const char *format, ...);
    Report a message of the given type. You can filter debug messages by setting a debug mask.

void pool_setdebuglevel(Pool *pool, int level);
    Set a predefined debug mask. A higher level generally means more bits in the mask are set, thus more messages are printed.

void pool_setdebugmask(Pool *pool, int mask);
    Set the debug mask used to filter debug messages.

int pool_error(Pool *pool, int ret, const char *format, ...);
    Set the pool's error string. The ret value is simply used as the return value of the function, so that you can write code like "return pool_error(...);". If the debug mask contains the SOLV_ERROR bit, pool_debug() is also called with the message and type SOLV_ERROR.

extern char *pool_errstr(Pool *pool);
    Return the current error string stored in the pool. As with libc's errno value, the string is only meaningful after a function returned an error.

void pool_setdebugcallback(Pool *pool, void (*debugcallback)(Pool *, void *data, int type, const char *str), void *debugcallbackdata);
    Set a custom debug callback function. Instead of writing to stdout or stderr, the callback function will be called.
POOL CONFIGURATION
Constants:
DISTTYPE_RPM, DISTTYPE_DEB, DISTTYPE_ARCH, DISTTYPE_HAIKU

Functions:
int pool_setdisttype(Pool *pool, int disttype);
    Set the package type of your system. The disttype is used, for example, to define package comparison semantics. Libsolv's default disttype should match the package manager of your system, so you only need to use this function if you want to use the library to solve packaging problems for different systems. The function returns the old disttype on success, and -1 if the new disttype is not supported. Note that any pool_setarch and pool_setarchpolicy calls need to come after the pool_setdisttype call, as they make use of the noarch/any/all architecture id.

int pool_set_flag(Pool *pool, int flag, int value);
    Set a flag to a new value. Returns the old value of the flag.

int pool_get_flag(Pool *pool, int flag);
    Get the value of a pool flag. See the constants section about the meaning of the flags.

void pool_set_rootdir(Pool *pool, const char *rootdir);
    Set a specific root directory. Some library functions support a flag that tells the function to prepend the rootdir to file and directory names.

const char *pool_get_rootdir(Pool *pool);
    Return the current value of the root directory.

char *pool_prepend_rootdir(Pool *pool, const char *dir);
    Prepend the root directory to the dir argument string. The returned string is newly allocated and needs to be freed after use.

char *pool_prepend_rootdir_tmp(Pool *pool, const char *dir);
    Same as pool_prepend_rootdir, but uses the pool's temporary space for allocation.

void pool_set_installed(Pool *pool, Repo *repo);
    Set which repository should be treated as the "installed" repository, i.e. the one that holds information about the installed packages.

void pool_set_languages(Pool *pool, const char **languages, int nlanguages);
    Set the languages of your system. The library provides lookup functions that return localized strings, for example for package descriptions.
    You can set an array of languages to provide a fallback mechanism if one language is not available.

void pool_setarch(Pool *pool, const char *arch);
    Set the architecture of your system. The architecture is used to determine which packages are installable and which cannot be installed. The arch argument is normally the "machine" value of the "uname" system call.

void pool_setarchpolicy(Pool *, const char *);
    Set the architecture policy for your system. This is the general version of pool_setarch (in fact pool_setarch calls pool_setarchpolicy internally). See the section about architecture policies for more information.

void pool_addvendorclass(Pool *pool, const char **vendorclass);
    Add a new vendor equivalence class to the system. A vendor equivalence class defines whether an installed package of one vendor can be replaced by a package coming from a different vendor. The vendorclass argument must be a NULL-terminated array of strings. See the section about vendor policies for more information.

void pool_setvendorclasses(Pool *pool, const char **vendorclasses);
    Set all allowed vendor equivalences. The vendorclasses argument must be a NULL-terminated array consisting of all allowed classes concatenated. Each class itself must be NULL terminated, thus the last class ends with two NULL elements: one to finish the class and one to finish the list of classes.

void pool_set_custom_vendorcheck(Pool *pool, int (*vendorcheck)(Pool *, Solvable *, Solvable *));
    Define a custom vendor check mechanism. You can use this if libsolv's internal vendor equivalence class mechanism does not match your needs.

void pool_setloadcallback(Pool *pool, int (*cb)(Pool *, Repodata *, void *), void *loadcbdata);
    Define a callback function that gets called when repository metadata needs to be loaded on demand. See the section about on-demand loading in the libsolv-repodata manual.
void pool_setnamespacecallback(Pool *pool, Id (*cb)(Pool *, void *, Id, Id), void *nscbdata);
    Define a callback function to implement custom namespace support. See the section about namespace dependencies.

ID POOL MANAGEMENT
Constants:
ID_EMPTY, REL_LT, REL_EQ, REL_GT, REL_AND, REL_OR, REL_WITH, REL_NAMESPACE, REL_ARCH, REL_FILECONFLICT, REL_COND, REL_UNLESS, REL_COMPAT, REL_KIND, REL_MULTIARCH, REL_ELSE, REL_ERROR

Functions:
Id pool_str2id(Pool *pool, const char *str, int create);
    Add a string to the pool of unified strings, returning the Id of the string. If create is zero, new strings will not be added to the pool; instead, Id 0 is returned.

Id pool_strn2id(Pool *pool, const char *str, unsigned int len, int create);
    Same as pool_str2id, but only len characters of the string are used. This can be used to add substrings to the pool.

Id pool_rel2id(Pool *pool, Id name, Id evr, int flags, int create);
    Create a relational dependency from two other dependencies, name and evr, and a flag. See the REL_ constants for the supported flags. As with pool_str2id, create defines whether new dependencies will get added or Id zero will be returned instead.

Id pool_id2langid(Pool *pool, Id id, const char *lang, int create);
    Attach a language suffix to a string Id. This function can be used to create language keyname Ids from keynames; it is functionally equivalent to converting the id argument to a string, appending a ":" character and the lang argument to the string, and then converting the result back into an Id.

const char *pool_id2str(const Pool *pool, Id id);
    Convert an Id back into a string. If the Id is a relational Id, the "name" part will be converted instead.

const char *pool_id2rel(const Pool *pool, Id id);
    Return the relation string of a relational Id. Returns an empty string if the passed Id is not a relation.

const char *pool_id2evr(const Pool *pool, Id id);
    Return the "evr" part of a relational Id as a string. Returns an empty string if the passed Id is not a relation.
const char *pool_dep2str(Pool *pool, Id id);
    Convert an Id back into a string. If the passed Id belongs to a relation, a string representing the relation is returned. Note that in that case the string is allocated on the pool's temporary space.

void pool_freeidhashes(Pool *pool);
    Free the hashes used to unify strings and relations. You can use this function to save memory if you know that you will no longer create new strings and relations.

SOLVABLE FUNCTIONS
Solvable *pool_id2solvable(const Pool *pool, Id p);
    Convert a solvable Id into a pointer to the solvable data. Note that the pointer may become invalid if new solvables are created or old solvables deleted, because the array storing all solvables may get reallocated.

Id pool_solvable2id(const Pool *pool, Solvable *s);
    Convert a pointer to the solvable data into a solvable Id.

const char *pool_solvid2str(Pool *pool, Id p);
    Return a string representing the solvable with the Id p. The string will be some canonical representation of the solvable, usually a combination of the name, the version, and the architecture.

const char *pool_solvable2str(Pool *pool, Solvable *s);
    Same as pool_solvid2str, but a pointer to the solvable is passed instead of the Id.

DEPENDENCY MATCHING
Constants:
EVRCMP_COMPARE, EVRCMP_MATCH_RELEASE, EVRCMP_MATCH, EVRCMP_COMPARE_EVONLY

Functions:
int pool_evrcmp(const Pool *pool, Id evr1id, Id evr2id, int mode);
    Compare two version Ids: return -1 if the first version is less than the second, 0 if they are identical, and 1 if the first version is greater than the second.

int pool_evrcmp_str(const Pool *pool, const char *evr1, const char *evr2, int mode);
    Same as pool_evrcmp(), but uses strings instead of Ids.

int pool_evrmatch(const Pool *pool, Id evrid, const char *epoch, const char *version, const char *release);
    Match a version Id against an epoch, a version, and a release string. Passing NULL means that the part should match everything.
int pool_match_dep(Pool *pool, Id d1, Id d2);
    Returns "1" if the dependency d1 (the provider) is matched by the dependency d2, otherwise "0" is returned. For two dependencies to match, both "name" parts must match and the version ranges described by the "evr" parts must overlap.

int pool_match_nevr(Pool *pool, Solvable *s, Id d);
    Like pool_match_dep, but the provider is the "self-provides" dependency of the solvable s, i.e. the dependency "s->name = s->evr".

WHATPROVIDES INDEX
void pool_createwhatprovides(Pool *pool);
    Create an index that maps dependency Ids to sets of packages that provide the dependency.

void pool_freewhatprovides(Pool *pool);
    Free the whatprovides index to save memory.

Id pool_whatprovides(Pool *pool, Id d);
    Return an offset into the pool's whatprovidesdata array. The solvables with the Ids stored starting at that offset provide the dependency d. The solvable list is zero terminated.

Id *pool_whatprovides_ptr(Pool *pool, Id d);
    Instead of returning the offset, return a pointer to the Ids stored at that offset. Note that this pointer has a very limited lifetime, as any call that adds new values to the whatprovidesdata area may reallocate the array.

Id pool_queuetowhatprovides(Pool *pool, Queue *q);
    Add the contents of the Queue q to the end of the whatprovidesdata array, returning the offset into the array.

void pool_addfileprovides(Pool *pool);
    Scan the dependencies used in the repositories for file names and add the corresponding file provides to the packages, so that dependencies on files can be resolved.

void pool_addfileprovides_queue(Pool *pool, Queue *idq, Queue *idqinst);
    Same as pool_addfileprovides, but the added Ids are returned in two queues: idq for all repositories except the one containing the "installed" packages, and idqinst for the latter. This information can be stored in the meta section of the repositories to speed things up the next time the repository is loaded and addfileprovides is called.

void pool_set_whatprovides(Pool *pool, Id id, Id offset);
    Manually set an entry in the whatprovides index.
    You'll never do this for package dependencies, as those entries are created by calling the pool_createwhatprovides() function, but this function is useful for namespace provides if you do not want to use a namespace callback to lazily set the provides. The offset argument is an offset into the whatprovides array, thus you can use "1" as a false value and "2" as a true value.

void pool_flush_namespaceproviders(Pool *pool, Id ns, Id evr);
    Clear the cache of the providers for namespace dependencies matching namespace ns. If the evr argument is non-zero, only the namespace dependency for exactly that dependency is cleared; otherwise all matching namespace dependencies are cleared. See the section about namespace dependencies for further information.

void pool_add_fileconflicts_deps(Pool *pool, Queue *conflicts);
    Some package managers like rpm report conflicts when a package installation overwrites a file of another installed package with different content. As file content information is not stored in the repository metadata, those conflicts can only be detected after the packages are downloaded. Libsolv provides a function to check for such conflicts, pool_findfileconflicts(). If conflicts are found, they can be added as special REL_FILECONFLICT provides dependencies, so that the solver will know about the conflict when it is re-run.

UTILITY FUNCTIONS
char *pool_alloctmpspace(Pool *pool, int len);
    Allocate space in the pool's temporary space area. This space has a limited lifetime: it will be automatically freed after a fixed number (currently 16) of further pool_alloctmpspace() calls.

void pool_freetmpspace(Pool *pool, const char *space);
    Give space allocated with pool_alloctmpspace back to the system. You do not have to use this function, as the space is automatically reclaimed, but it can be useful to extend the lifetime of other pointers into the pool's temporary space area.
const char *pool_bin2hex(Pool *pool, const unsigned char *buf, int len);
    Convert binary data to hexadecimal, returning a string allocated in the pool's temporary space area.

char *pool_tmpjoin(Pool *pool, const char *str1, const char *str2, const char *str3);
    Join three strings and return the result in the pool's temporary space area. You can use NULL arguments if you want to join fewer strings.

char *pool_tmpappend(Pool *pool, const char *str1, const char *str2, const char *str3);
    Like pool_tmpjoin(), but if the first argument is the last allocated space in the pool's temporary space area, it will be replaced with the result of the join and no new temporary space slot will be used. Thus you can join more than three strings with a combination of one pool_tmpjoin() and multiple pool_tmpappend() calls. Note that the str1 pointer is no longer usable after the call.

DATA LOOKUP
Constants:
SOLVID_POS, SOLVID_META

Functions:
const char *pool_lookup_str(Pool *pool, Id solvid, Id keyname);
    Return the string value stored under the attribute keyname in solvable solvid.

unsigned long long pool_lookup_num(Pool *pool, Id solvid, Id keyname, unsigned long long notfound);
    Return the 64-bit unsigned number stored under the attribute keyname in solvable solvid. If no such number is found, the value of the notfound argument is returned instead.

Id pool_lookup_id(Pool *pool, Id solvid, Id keyname);
    Return the Id stored under the attribute keyname in solvable solvid.

int pool_lookup_idarray(Pool *pool, Id solvid, Id keyname, Queue *q);
    Fill the queue q with the contents of the Id array stored under the attribute keyname in solvable solvid. Returns "1" if an array was found, otherwise the queue will be empty and "0" will be returned.

int pool_lookup_void(Pool *pool, Id solvid, Id keyname);
    Returns "1" if a void value is stored under the attribute keyname in solvable solvid, otherwise "0".
const char *pool_lookup_checksum(Pool *pool, Id solvid, Id keyname, Id *typep);
    Return the checksum stored under the attribute keyname in solvable solvid. The type of the checksum is returned via the typep pointer. If no such checksum is found, NULL will be returned and the type will be set to zero. Note that the result is stored in the pool's temporary space area.

const unsigned char *pool_lookup_bin_checksum(Pool *pool, Id solvid, Id keyname, Id *typep);
    Return the checksum stored under the attribute keyname in solvable solvid as binary data; you can use the returned type to calculate the length of the checksum. No temporary space area is needed.

const char *pool_lookup_deltalocation(Pool *pool, Id solvid, unsigned int *medianrp);
    A utility lookup function that returns the delta location for a delta rpm. As solvables cannot store deltas, you have to use SOLVID_POS as the argument and set the pool's datapos pointer to point to valid delta rpm data.

void pool_search(Pool *pool, Id solvid, Id keyname, const char *match, int flags, int (*callback)(void *cbdata, Solvable *s, Repodata *data, Repokey *key, KeyValue *kv), void *cbdata);
    Perform a search on all data stored in the pool. You can limit the search area by using the solvid and keyname arguments. The values can optionally be matched against the match argument; use NULL if you do not want this matching. See the Dataiterator manpage about the possible match modes and the flags argument. For every (matching) value, the callback function is called with the cbdata callback argument and the data describing the value.

JOB AND SELECTION FUNCTIONS
A job consists of two Ids, how and what. The how part describes the action, the job flags, and the selection method, while the what part is the input for the selection. A selection is a queue consisting of multiple jobs (thus the number of elements in the queue must be a multiple of two).
See the Solver manpage for more information about jobs.

const char *pool_job2str(Pool *pool, Id how, Id what, Id flagmask);
    Convert a job into a string. Useful for debugging purposes. The flagmask can be used to mask the flags of the job: use "0" if you do not want to see any flags, "-1" to see all flags, or a combination of the flags you want to see.

void pool_job2solvables(Pool *pool, Queue *pkgs, Id how, Id what);
    Return a list of solvables that the specified job selects.

int pool_isemptyupdatejob(Pool *pool, Id how, Id what);
    Return "1" if the job is an update job that does not work with any installed package, i.e. the job is basically a no-op. You can use this to turn no-op update jobs into install jobs (as done by package managers like "zypper").

const char *pool_selection2str(Pool *pool, Queue *selection, Id flagmask);
    Convert a selection into a string. Useful for debugging purposes. See the pool_job2str() function for the flagmask argument.

ODDS AND ENDS
void pool_freeallrepos(Pool *pool, int reuseids);
    Free all repos from the pool (including all solvables). If reuseids is true, the Ids of the solvables are free to be reused the next time solvables are created.

void pool_clear_pos(Pool *pool);
    Clear the data position stored in the pool.

ARCHITECTURE POLICIES
An architecture policy defines a list of architectures that can be installed on the system, and also the relationship between them (i.e. the ordering). Architectures can be delimited with three different characters: ':', '>', and '='. An example would be 'x86_64:i686=athlon>i586'. This means that x86_64 packages can only be replaced by other x86_64 packages, i686 packages can be replaced by i686 and i586 packages (but i686 packages will be preferred), and athlon is another name for the i686 architecture. You can turn off the architecture replacement checks with the Solver's SOLVER_FLAG_ALLOW_ARCHCHANGE flag.
VENDOR POLICIES
Different vendors often compile packages with different features, so libsolv only replaces installed packages of one vendor with packages coming from the same vendor. Also, while the version of a package is normally defined by the upstream project, the release part of the version is set by the vendor's package maintainer, so it is not meaningful to do version comparisons for packages coming from different vendors. Vendor in this case means the SOLVABLE_VENDOR string stored in each solvable.

Sometimes a vendor changes names, or multiple vendors form a group that coordinates their package building, so libsolv offers a way to declare a group of vendors compatible. You do that by defining vendor equivalence classes: packages from a vendor of one class may be replaced with packages from all the other vendors in the class. There can be multiple equivalence classes; the set of allowed vendor changes for an installed package is calculated by building the union of all the equivalence classes the vendor of the installed package is part of. You can turn off the vendor replacement checks with the Solver's SOLVER_FLAG_ALLOW_VENDORCHANGE flag.

BOOLEAN DEPENDENCIES
Boolean dependencies allow you to build complex expressions from simple dependencies. Note that, depending on the package manager, only a subset of those may be useful. For example, Debian currently only allows an "OR" expression.

REL_OR, REL_AND, REL_WITH, REL_COND, REL_UNLESS, REL_ELSE

Each sub-dependency of a boolean dependency can in turn be a boolean dependency, so you can chain them to create complex dependencies.

NAMESPACE DEPENDENCIES
Namespace dependencies can be used to implement dependencies on attributes external to libsolv. An example would be a dependency on the language set by the user. These types of dependencies are usually only used for "Conflicts" or "Supplements" dependencies, as the underlying package manager does not know how to deal with them.
If the library needs to evaluate a namespace dependency, it calls the namespace callback function set in the pool. The callback function can return a set of packages that "provide" the dependency. If the dependency is provided by the system, the returned set should consist of just the system solvable (Solvable Id 1). The returned set of packages must be returned as an offset into the whatprovidesdata array. You can use the pool_queuetowhatprovides function to convert a queue into such an offset. To ease programming the callback function, the return values "0" and "1" are not interpreted as offsets: "0" means that no package is in the return set, and "1" means that just the system solvable is in the set. The returned set is cached, so that for each namespace dependency the callback is called just once. If you need to flush the cache (maybe because the user has selected a different language), use the pool_flush_namespaceproviders() function.

AUTHOR
Michael Schroeder <mls@suse.de>
https://man.archlinux.org/man/community/libsolv/libsolv-pool.3.en
I've installed the md5 package (also tried blueimp-md5) with the corresponding typings like this:

    npm install --save md5 @types/md5
    npm install --save blueimp-md5 @types/blueimp-md5

When I try to import it like this:

    import { md5 } from '../../../node_modules/md5'

I get an error:

    Module <path> was resolved to <path>/md5.js, but '--allowJs' is not set.

This makes me think that the installed @types/md5 typings are simply not discovered. In tsconfig.json I have:

    "typeRoots": [ "../node_modules/@types" ]

So I think it should be detecting typings from the node_modules/@types folder automatically, but it apparently does not. Exactly the same thing happens with the blueimp-md5 package. The md5 folder exists in node_modules/@types, so everything seems to be in place, but it still doesn't work. Visual Studio Code, TypeScript 2, Angular 2 project. What am I doing wrong?

Edit: this is the content of the @types/md5/index.d.ts file:

    /// <reference types="node" />
    declare function main(message: string | Buffer): string;
    export = main;

Answer:

You don't need to specify the path inside node_modules; it should be:

    import * as md5 from "md5";

The compiler will look for the actual module in node_modules, and will look for the definition files in node_modules/@types. There's a long doc page about it: Module Resolution.

That error happens because of how the md5 module is exported, as it does this:

    declare function main(message: string | Buffer): string;
    export = main;

This case is covered in the docs: "The export = syntax specifies a single object that is exported from the module. This can be a class, interface, namespace, function, or enum. When importing a module using export =, TypeScript-specific import let = require("module") must be used to import the module."

In your case it should be:

    import md5 = require("md5");

If you're targeting es6 then you need to do:

    const md5 = require("md5");

(or let or var, of course).
http://www.dlxedu.com/askdetail/3/81267002394ebfbe7261179f443f373c.html
Bastian Blank [bastian@waldi.eu.org] wrote:
| On Mon, Dec 01, 2008 at 12:15:06PM -0800, Sukadev Bhattiprolu wrote:
| > Bastian Blank [bastian@waldi.eu.org] wrote:
| > | If I see this correctly this information is already covered in si_code
| > | with SI_USER and SI_TKILL. SI_KERNEL is used for explicit kernel
| > | generated signals.
| >
| > Yes, but si_code from sys_rt_sigqueueinfo() cannot be trusted.
|
| sys_rt_sigqueueinfo disallows setting si_code to any value which
| describes kernel signals from userspace. So using SI_FROMUSER should be
| sufficient.

Hmm, unless I am missing something, sys_rt_sigqueueinfo() does this:

	if (info.si_code >= 0)
		return -EPERM;

This does not prevent user from setting si_code to SI_ASYNCIO, which,
from include/asm-generic/siginfo.h, is:

	#define SI_ASYNCIO	-4	/* sent by AIO completion */

Also,

	#define SI_FROMUSER(siptr)	((siptr)->si_code <= 0)

SI_ASYNCIO qualifies as SI_FROMUSER() even when it originates from
kernel (usb/core/devio.c async_completed()).

| > IOW, we need to find the namespace of the sender only if the sender is
| > a user process. If signal is originating from kernel, safely checking
| > namespace becomes more complex.
|
| Where does this imply checking sender for kernel generated signals?

... so what I meant is that in send_signal(), it will be harder to
determine in the SI_ASYNCIO case whether the signal is from a driver or
from rt_sigqueueinfo(). If we know that it came from rt_sigqueueinfo(),
we can safely check the namespace. If it came from a driver we should
skip the ns check.

| > Yes, current approach is somewhat hacky.
We tried other approaches
before and they were either intrusive or required non-trivial changes
to semantics of signals to global init, or both.

| Message-IDs?

Yes: (Eric Biederman, Dec 2007), (Nesterov, Aug 2007). I had sent out a
summary of the above attempts to the Containers list recently:

| > | > +static inline int siginfo_from_ancestor_ns(struct task_struct *t,
| > | > +			siginfo_t *info)
| > | > +{
| > | > +	if (!is_si_special(info) && (info->si_signo & SIG_FROM_USER)) {
| > | > +		/* if t can't see us we are from parent ns */
| > | What?
| > I assume your question is about the comment :-)
|
| Yes.
|
| > Yes, a process can see all its descendants and processes in descendant
| > namespaces. But it can only see its ancestors upto the most recent
| > CLONE_NEWPID. (kind of like chroot in filesystems). So if receiver
| > can't see sender, sender must be an ancestor.
|
| Please add a complete comment to the function which describes the
| function. And don't use "it" for not defined entities.

Ah, I see the problem now. The 't' refers to the task parameter - how
about changing the comment to:

	/* If receiver can't see us, we are from parent ns */

| Bastian
|
| --
| I have never understood the female capacity to avoid a direct answer to
| any question.
|		-- Spock, "This Side of Paradise", stardate 3417.3
http://lkml.org/lkml/2008/12/2/255
- Spring Security (version 3.2.0.RELEASE).
- Apache HttpClient (version 4.3.2). Apache HttpClient is an optional (but recommended) dependency of Spring Social. If it is present, Spring Social will use it as an HTTP client. If not, Spring Social will use the standard Java SE components.
- Spring Social (version 1.1.0.RELEASE).
- The config module contains the code used to parse XML configuration files using the Spring Social XML namespace. It also adds support for Java Configuration of Spring Social.
- Spring Social Facebook (version 1.1.0.RELEASE) is an extension to Spring Social and it provides Facebook integration.
- Spring Social Twitter (version 1.1.0.RELEASE) is an extension to Spring Social which provides Twitter integration.

The relevant part of the pom.xml file looks as follows:

<!-- Spring Security -->
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <version>3.2.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>3.2.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-taglibs</artifactId>
    <version>3.2.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>3.2.0.RELEASE</version>
</dependency>

<!-- Use Apache HttpClient as HTTP Client -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.3.2</version>
</dependency>

<!-- Spring Social -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-config</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-core</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-security</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-web</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>

<!-- Spring Social Facebook -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-facebook</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>

<!-- Spring Social Twitter -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-twitter</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>

You might also want to read the following documents which give you more information about the dependencies of the frameworks discussed in this blog post (Spring Security and Spring Social):

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .authorizeRequests()
            .antMatchers("...up/**", "/user/register/**").permitAll()
            //The rest of our application is protected.
            .antMatchers("/**").hasRole("USER")
        //Adds the SocialAuthenticationFilter to Spring Security's filter chain.
        .and()
        .apply(new SpringSocialConfigurer());
}

@Override
protected void configure...

- Configure character encoding filter.
- Configure the Spring Security filter chain.
- Configure Sitemesh.

...login functions to our example application. As always, the example application of this blog post is available at Github.

Looking forward to second part of tutorial…

I start writing it tomorrow. I think that I can publish it next week.

great post, helped me very much. I'm waiting for the next. obrigado

I am happy to hear that I could help you out.

Petri, I have made a pause in a Spring article writing, but you inspired me to return to this =)

Hi Alexey, It is good to hear from you! Also, continue writing Spring articles. :)

Great, I'm from Brazil and this post helped to understand spring's configuration. congratulations

Thank you! I appreciate your kind words.

Hi Petri, nice detailed article. I am struggling to get this working; appreciate your help.

1.
I could not get it working, hence I configured the below in XML. Then at deployment it failed with the below exception:

Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'socialAuthenticationFilter' defined in ServletContext resource [/WEB-INF/spring/LBSWeb/security-config.xml]: Unsatisfied dependency expressed through constructor argument with index 2 of type [org.springframework.social.connect.UsersConnectionRepository]: Could not convert constructor argument value of type [com.sun.proxy.$Proxy198] to required type [org.springframework.social.connect.UsersConnectionRepository]: Failed to convert value of type 'com.sun.proxy.$Proxy198 implementing org.springframework.social.connect.ConnectionRepository,java.io.Serializable,org.springframework.aop.scope.ScopedObject,org.springframework.aop.framework.AopInfrastructureBean,org.springframework.aop.SpringProxy,org.springframework.aop.framework.Advised' to required type 'org.springframework.social.connect.UsersConnectionRepository'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [com.sun.proxy.$Proxy198 implementing org.springframework.social.connect.ConnectionRepository,java.io.Serializable,org.springframework.aop.scope.ScopedObject,org.springframework.aop.framework.AopInfrastructureBean,org.springframework.aop.SpringProxy,org.springframework.aop.framework.Advised] to required type [org.springframework.social.connect.UsersConnectionRepository]: no matching editors or conversion strategy found
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:702)

hope you can help me out.

Hi Sam, It seems that Wordpress ate your XML configuration. However, it seems that Spring cannot convert the constructor argument with index 2 to the required type (UsersConnectionRepository). It is kind of hard to figure out what could be wrong without seeing the XML configuration file.
Can you paste it to Pastebin? Also, have you compared the XML configuration of the example application to your application context configuration? The configuration files which are relevant to you are: exampleApplicationContext-social.xml and exampleApplicationContext-security.xml. Thank you for the quick reply. I posted the XML on Pastebin as 'SamJay – spring social xml configuration issue'. I am using spring.social.version 1.1.0.M4. I actually tried to make the configuration work as given by you, but for some reason social:jdbc-connection-repository did not work or was not recognized; it failed with a 'no bean defined' exception for usersConnectionRepository. That's why I switched to XML configuration in the first place. I understand there are 2 places where usersConnectionRepository is being used (socialAuthenticationFilter and socialAuthenticationProvider); I get the exception with socialAuthenticationProvider. Thank you. For some reason I cannot find the XML from Pastebin by using the search term: 'SamJay – spring social xml configuration issue'. Could you provide a direct link to it? By the way, this example assumes that you use Spring Social version 1.1.0.BUILD-SNAPSHOT. The reason for this is that some classes which make the configuration a lot simpler are not available in version 1.1.0.M4.
You should probably update them to the latest versions. Have you tried to update your Spring Social version to 1.1.0.BUILD-SNAPSHOT? If you would do that, you should be able to use my versions of the XML configuration files. This would make your application context configuration files easier to read and less brittle. Thanks, I am not using Java configuration at all, only old fashioned XML, and adding the constructor-arg element also made no difference. I will deploy the app to Tomcat to eliminate the server. The thread below has a similar issue being discussed, but it could not really help. Thanks, will write to you with an update. After I read that discussion, I realized that this is probably an AOP related issue. I noticed a small difference between your XML configuration file and a file mentioned in that thread. Have you tried to declare the JdbcUsersConnectionRepository bean as follows: Thanks, I tried that, but still get the same error. One other thing: I tried to get the Spring Social version 1.1.0.BUILD-SNAPSHOT from , but it also failed downloading the jars.

    <dependencies>
        <dependency>
            <groupId>org.springframework.social</groupId>
            <artifactId>spring-social</artifactId>
            <version>1.1.0.BUILD-SNAPSHOT</version>
        </dependency>
    </dependencies>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>

Hi, You can get the correct dependencies from the Spring snapshot repository by following these steps: Check the POM file of the example application for more details about this. Also, it seems that the required modules are found from the snapshot repository. Perhaps the download failed because of a network issue or something. Hi Petri, deployment failed on Tomcat 7 as well with the same exception. I was expecting that. The problem is related to Spring AOP and not to the used server. I noticed that you answered to this thread. Let's hope that you get an answer soon (I want to hear what the problem was)! Will keep you posted as soon as I get an answer. On a different note, can you please explain the below?
with the ConnectController, called back (redirect) into your app: GET /connect/facebook?code=xxx, which ends up with page not found. How should I capture the callback here and seamlessly integrate with the app? If you want to redirect the user to a specific page after the connection has been created, you should override the connectionStatusRedirect() method of the ConnectController class. Hi, do you know the reason for this error please ('state' parameter doesn't match). Redirecting to Facebook connection status page. I haven't run into this problem, but I found some additional information about it. You might want to check out the Github issue titled: Facebook connection infinite redirect loop workaround. Hi Petri, I have added /web-inf/jsp/js/app.js and web-inf/jsp/js/controller.js and updated layout.jsp with the below includes. <script type="text/javascript" src="”> I am getting the below errors in the javascript console: Refused to execute script from '' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled. login:1 Refused to execute script from '' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled. login:1 Do I need to update the webapp? Any pointers?? sridhar Hi Sai, The problem is that you put your Javascript files in the WEB-INF/jsp/js/ directory, and servlet containers do not serve any content put in the WEB-INF directory. You can solve this by moving your Javascript files to the src/main/webapp directory. If you use the same approach which I used in the example application, you should move the app.js and controller.js files to the src/main/webapp/static/js/app directory and add the following code to the layout.jsp file: I hope that this answered your question. I think there is a dependency missing: spring-social-config You are right! Thanks for pointing this out.
It seems that the spring-social-config module is a transitive dependency, but I agree that it might be better to explicitly specify it (at least in this case). I will update the blog post and the example application. Oops, forgot to say first: awesome article, thanks a lot for sharing :) Thanks! I appreciate your kind words. Hi Petri, I'm a newbie in Spring. How do I add your example source to my project in Eclipse? Thank you Hi Davi, I haven't been using Eclipse for several years, but let's see if I can help you out. Which Eclipse version are you using? Hi Petri, what are you using for this project? I use IntelliJ Idea, but the example application uses Maven. In other words, you can compile / package / run it without using an IDE. All you have to do is to clone it from the Github repository and you are ready to go (if you have installed JDK + Maven). Thank you for your reply. I'm using Eclipse Kepler. It seems that you should be able to do this by navigating to: File -> Import -> Browse to General -> Maven Projects. This wiki page has more information about this (including screenshots). I hope that this solved your problem. Great, thank you. Great tutorial, Petri. Uhmm. It should not be this complicated. I agree. It will be interesting to see if Spring Boot will be integrated with Spring Social. New to this, I am getting the following error when I import the code to Eclipse: Error loading property file '/Users/akumar/Documents/development/tracks/git/spring-social-examples/sign-in/spring-mvc-normal/profiles/dev/socialConfig.properties' (org.apache.maven.plugins:maven-resources-plugin:2.6:testResources:default-testResources:process-test-resources) You need to create the socialConfig.properties file yourself. This file contains the configuration of your Facebook and Twitter applications. See the README of the example application for more details about this. Hello, I'm having huge problems adopting the new Facebook API to our application.
Before I knew I'll have to add it, I've created normal Spring Security based user management. But now I have to add Facebook. With the new Spring Social security module, XML based configuration with UserIdSource etc. has been added. But I've no idea how to use it. Could you be so nice and also create a tutorial for XML based configuration that can be adopted to already existing Spring Security projects :( ? Huge thanks for all the help. Hi, Have you checked out the XML configuration of the example application? It should help you get started if you want to configure your application by using XML. I was planning to describe it in this blog post as well, but I noticed that the blog post would probably be too long. That is why I decided to skip it and provided a link to the XML configuration instead. If you cannot solve the problem by reading the configuration files, let me know and I will write a brief description about the required steps. Hi Petri, I have managed to get the FB login to work. Now I see that data is being populated to the 'userconnection' table through ConnectController, and when I disconnect from the service, the data from the table gets deleted as well (I hope this is the expected behavior). My query is: I have a table called 'user' which maintains the information of users who log in via the application form and authenticates via spring-security. What I want to figure out is, I would like to sync the userconnection data, which maintains FB user data, with the 'user' table, which maintains the application's local user accounts. So in a situation where a client logs in with FB, I should be able to create an account on the site as well and pass that information (ex: a password) via mail, so that the user has the ability to use either the FB or the site account. Can you please help me to understand: am I thinking in the right direction? And what are the steps that I should take to achieve this?
Thanks Sam Hi Sam, If your application uses email as username and you get the email address of the user from Facebook, you can create a row in the user table when a user signs in by using Facebook. The way I see this, you have two options: The first option provides a better user experience. The problem is that you cannot use it if your application has to support social sign in providers which don't return the email address of the user, or if you don't use the email address as the username. The second option is easier to implement, but it can be annoying because often users expect that they don't have to enter password information if they use social sign in. I hope that this answered your question. If you need more advice about this, don't hesitate to ask additional questions. Hi Petri, Thank you very much for the detailed explanation & I'll go through the links you provided and get back to you on the outcome. I do want to support FB, Twitter, Google+, hence I need to check whether email is being returned by those services. But my current implementation does not use email as the username, yet I am able to get the username with the below. Regarding the second point: I am not clear on this. What you mean is: at the end of authentication success, inject a page to capture a password, is it? Another query that I came across is: once the FB authentication is successful, the default behavior is that the flow returns to facebookConnected.jsp. What is the configuration (bean) to allow the flow to continue into the application, since the user is now authenticated? Thanks Saranga Hi Sam, First, I am not sure if you have tried the example application of this blog post, but its registration flow goes like this: What I meant was that you could ask the password information from users who are using social sign in.
If you want to know more about the registration flow of the example application, you should read my blog post titled Adding Social Sign In to a Spring MVC Web Application: Registration and Login. Second, have you integrated Spring Social with Spring Security, or are you using only Spring Social? If you have integrated Spring Social with Spring Security, you can configure the AuthenticationSuccessHandler bean. If you are using Java configuration, you should take a look at this StackOverflow question. On the other hand, if you are using only Spring Social, you could try to set the url by calling the setPostSignInUrl() method of the ProviderSignInController class. I haven't tested this though, so I am not sure if this is the right way to handle this. Hi Petri, I am using Spring Social with Spring Security and I have gone through your example which uses XML configuration. I have posted my 2 XML files here for your reference (social-security xmls). Hope it makes sense. I am going through the links that you have given. I am stuck on what to do when the authentication callback returns to facebookConnected.jsp. I guess I want to capture the callback, give the control to spring-security and let the application workflow proceed. As you can see, I have used the default Spring provided controller; I guess I need to override this and configure a way to let the flow run through the application flow. Thank you very much for your help. Thanks saranga Hi Sam, I just realized something. Do you use the url /auth/[provider] (in other words, for Facebook this url would be /auth/facebook) to start the social sign in flow, or do you use the url /connect/[provider]? If you only want to authenticate the user, you should use the first url (/auth/[provider]) because requests sent to that url are processed by the SocialAuthenticationFilter. I took a very quick look at your configuration files and they seemed to be fine.
I want to make a full example integrating Spring Social and Spring Security using MongoDB. I need some examples, links or tutorials that help me to achieve that. I don't know the needed changes to make in order to use MongoDB instead of MySQL, because the problem that I faced is that the Spring Social project already provides a JDBC based connection repository implementation to persist user connection data into a relational database. I don't know if this is only usable with relational databases :( Hi Moussi, The JdbcUsersConnectionRepository class persists connection information to a relational database. If you want to save this information to MongoDB, you have to implement the UsersConnectionRepository interface. I have never done this, but I found a blog post titled Customize Spring Social Connect Framework For MongoDB (unfortunately this blog post has been removed). It explains how you can persist connection information to MongoDB. You can get the example application of this blog post from Github. I hope this helps you to achieve your goal! I am looking forward to using this solution for testing in my environment. As I have had no contact yet with Sitemesh, here's my question. How would I do something like this: I think this work should be done in ExampleApplicationConfig, but I am stuck with this. Is there some easy solution to add things like ? Hi there, forget about my last post.
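For reference, implementing that interface means providing three methods (the signatures below are from the Spring Social 1.x UsersConnectionRepository interface; the class name and the MongoDB plumbing are purely illustrative, not part of the example application):

```java
import java.util.List;
import java.util.Set;

import org.springframework.social.connect.Connection;
import org.springframework.social.connect.ConnectionRepository;
import org.springframework.social.connect.UsersConnectionRepository;

// Illustrative skeleton: persist connection data to MongoDB instead of JDBC.
public class MongoUsersConnectionRepository implements UsersConnectionRepository {

    @Override
    public List<String> findUserIdsWithConnection(Connection<?> connection) {
        // Query the collection for local user ids whose stored
        // (providerId, providerUserId) pair matches connection.getKey().
        throw new UnsupportedOperationException("TODO: MongoDB query");
    }

    @Override
    public Set<String> findUserIdsConnectedTo(String providerId, Set<String> providerUserIds) {
        // Same lookup, but for a batch of provider user ids.
        throw new UnsupportedOperationException("TODO: MongoDB query");
    }

    @Override
    public ConnectionRepository createConnectionRepository(String userId) {
        // Return a per-user ConnectionRepository backed by the same collection.
        throw new UnsupportedOperationException("TODO: per-user repository");
    }
}
```

This bean would then be declared in place of JdbcUsersConnectionRepository in the social configuration.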
I made a small change in "ExampleApplicationConfig" on setting up Sitemesh:

    FilterRegistration.Dynamic sitemesh =
        servletContext.addFilter("sitemesh", new TagBundlerFilterForSite());
    sitemesh.addMappingForUrlPatterns(dispatcherTypes, true, "*.jsp");

while adding the new class to the same package:

    public class TagBundlerFilterForSite extends ConfigurableSiteMeshFilter {

        public TagBundlerFilterForSite() {
            this.applyCustomConfiguration(new SiteMeshFilterBuilder());
        }

        @Override
        protected void applyCustomConfiguration(SiteMeshFilterBuilder builder) {
            builder.addTagRuleBundle(new DivExtractingTagRuleBundle());
        }
    }

I can now do this: Template: JSP-Page: Some more data. For me this really helps to use my template well; maybe you or others can use that. I know the setup in the constructor is a bad thing, but as I really tried to get this working, I was happy for now. If there is a better solution, let me know! :) It is great to hear that you were able to solve your problem! Also, thanks for coming back and posting your solution to my blog. Now I know where to look if I need similar functionality (I have never needed it yet). Heh, this is really an awesome post. This will help me a lot. Can you please mail me the zip file of the complete code? I tried to copy the code and run it, but it's not working. I have been trying to remove errors for the last 3 days, but am not able to do so. Please help me out. Mail me as soon as possible. You can get the code from Github (you can either clone the repository or download the code as a Zip file). Remember to read the README as well. Hi Petri, I have one doubt. How do I set up the anonymous user for authentication without XML configuration? Hi, If you are talking about the anonymous authentication support of Spring Security, it should be enabled by default. The default role of the anonymous user is ROLE_ANONYMOUS, and you can use this role when you specify the authorization rules of your application.
Unfortunately I don't know how you can customize the anonymous authentication support without using XML configuration. :( Hello! I'm trying to follow this tutorial, but I have a problem downloading the dependencies. Can you help me out? Thanks The following artifacts could not be resolved: org.springframework.social:spring-social-config:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-core:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-security:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-web:jar:1.1.0.BUILD-SNAPSHOT, org.springframework.social:spring-social-facebook:jar:1.1.0.BUILD-SNAPSHOT: Failure to find org.springframework.social:spring-social-config:jar:1.1.0.BUILD-SNAPSHOT in was cached in the local repository, resolution will not be reattempted until the update interval of spring-milestone has elapsed or updates are forced Hi Roman, It seems that I forgot to mention that you have to add the Spring Snapshot repository to the pom.xml file (see the pom.xml file of the example application). Also, you might want to use version 1.1.0.RC1 because it was released this week (I will update the example during the weekend). I hope that this answered your question. If not, let me know! Please let me know if I'm misunderstanding, but it appears that this application permits a user to associate exactly one social-media account, of any type, with their application account, so that a user can't associate both Facebook and Twitter accounts simultaneously. It appears that the SocialUserDetails interface has a massive flaw in that its getUserId() method takes no parameter specifying *which* social service we're looking up the user's identity for. Did I overlook some out-of-band information to the persistence layer about which social network is being talked about (such as an injectable thread-local holder), or is this entire setup limited to a single social-media association per user? You are right.
If a user creates a user account by using Facebook, he cannot sign in by using Twitter (or associate a Twitter account with his local user account). However, it is possible to support multiple social sign in providers as well. I haven't done this myself, but I think that you can do this by following these steps:

- Modify the User entity and add support for multiple social sign in providers (this is required only if you want to store this information).
- Implement the UserIdSource interface and create a userId which contains the information required to identify the local user and the used social sign in provider.
- Update your SocialUserDetailsService implementation and ensure that it can handle the "new" userId (you have to parse the local username from the userId and load the correct user).
- …ProviderSignInUtils class.

If you have any further questions, don't hesitate to ask them. Petri, I was looking around on the web to find out how to change the scope when doing a Facebook login authorization. I was looking to make the change in the configuration rather than having to send a post with a hidden variable on the social login button. I see in SocialContext that we are adding the Facebook connection factory, and it has a method to set the scope. I changed the scope and it does not change the scope on the authorization. Do you know how to change this at the configuration level? There is an "Authorization scope" section that explains how it is done, but not at the configuration level. Have you done this before? I was able to verify that setting the scope by calling the setScope(String scope) method of the OAuth2ConnectionFactory class (FacebookConnectionFactory extends this class) doesn't seem to do anything. Unfortunately I cannot provide an answer right away, but I promise that I will take a closer look at this, because the Javadoc of that method suggests that it should work. I will let you know if I find a solution.
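The userId described in the second step above has to carry both the provider and the local username, and the SocialUserDetailsService then has to parse it back. A minimal sketch of that encoding — the class name and the ':' delimiter are assumptions of this sketch, not something the example application defines:

```java
// Hypothetical helper for a composite userId such as "facebook:john".
// Assumes the local username never contains the ':' delimiter.
class CompositeUserId {

    static String encode(String provider, String username) {
        return provider + ":" + username;
    }

    static String provider(String userId) {
        // Everything before the first ':' is the sign in provider.
        return userId.substring(0, userId.indexOf(':'));
    }

    static String username(String userId) {
        // Everything after the first ':' is the local username.
        return userId.substring(userId.indexOf(':') + 1);
    }
}
```

A SocialUserDetailsService implementation could then call username(userId) to load the local user, no matter which provider was used to sign in.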
I looked around tonight and did not find a way to set the scope, but I did find that OAuth2AuthenticationService.defaultScope is really what is being used when it adds the scope to the URL. If you don't pass a scope as a hidden variable, it will use the defaultScope. Thanks again for always being so helpful. Petri, have you found a workaround for this? I have not. I tried looking through the source code and kept getting a little lost, and did not find a path to set the scope. Hi, Actually I did find something on the web: It seems that if you want to set the default scope, you have to use the FacebookAuthenticationService class. The SecurityEnabledConnectionFactoryConfigurer object given as a method parameter to the addConnectionFactories() method of the SocialConfigurer interface creates the required authentication service objects, BUT it doesn't set the default scope. I assume that if you want to set the default scope, you have to remove the @EnableSocial annotation from the SocialContext class and create all required beans manually. I can take a closer look at this tomorrow. I checked this out and I noticed one problem: I should create a ConnectionRepository bean, but I am unable to do it because the ConnectionRepository class is package-private (and I don't want to move the SocialContext class to the same package). You could of course configure Spring Social by using XML configuration, but if you don't want to do that, you should create a new issue in the Spring Social project. Petri, thanks again. I created a ticket. With your help I hope they have enough information to find the bug. I am a little lost. You are welcome! Let's hope that the Spring Social team can solve this soon. Great post, helped me very much. I am happy to hear that. Hi Petri, I'm trying to implement spring-social-facebook in my application, however I'm stuck in the JdbcUsersConnectionRepository part. I would like to have my own UsersConnectionRepository using Hibernate but without JPA.
Thanks much in advance The JdbcUsersConnectionRepository class uses the JDBC API. In other words, you can use it in an application which uses the "native" Hibernate API. You can of course create your own implementation by implementing the UsersConnectionRepository interface, but I am not sure if it is worth the effort. Did you mean that you want to create a custom UserDetailsService which uses Hibernate instead of Spring Data JPA? Hi Petri, At the end of this tutorial I have an error with the servletContext.addServlet, servletContext.addFilter and servletContext.addListener … I'm working with Eclipse and the message that appears is "The method addListener(ContextLoaderListener) is undefined for the type ServletContext". The solution that Eclipse suggests is "Add Cast to servletContext". What can I do? Thanks much in advance You need to use the Servlet API 3.0. You can get the full list of required dependencies by reading the pom.xml file of the example application. Hi Petri, Thank you very much for the detailed tutorials! I have a question that is similar to the one from Ademir. I would like to integrate Spring Social into my project, however I don't need any of the Spring Social persistence stuff, and it just seems to conflict with my application. All I really need is Spring Social's Facebook methods. Is it possible to simplify the setup in this way? Any help would be greatly appreciated! Hi, Do you want your application to be able to read information from Facebook and write information to Facebook (and that is it)? If that is the case, you should read a tutorial titled Accessing Facebook Data. It should help you to get started. If that guide doesn't solve your problem, let me know! Hello my friend, I have trouble running this sample. In the "" class on line 68 we have .apply(new SpringSocialConfigurer()), but I get "can't resolve method"!
I'm sure that I provided all Maven dependencies correctly. Then I tried to upgrade the "spring.security.version" to 3.2.4.RELEASE, but the problem remains. What's the problem? Thanks. Are you getting a compilation error or a runtime error? Also, if you get a runtime error, it would be useful to see the stacktrace. Hello again, when I package the app by using Maven directly, my first question goes away, because that was just a wrong IDE alert. But now I have another problem: after returning from Facebook auth, the page is redirected to '' and a 404 error is raised. Of course, I can't find any controller matching the /signin url. Why!? What do you think the problem is? Thank you Hi Petri, can you help me with my question? Thanks Hi, Yes, I was wondering if your previous problem was an IDE alert, because I used to see a similar alert in IntelliJ Idea. However, it disappeared after I updated it to a newer version. I assume that your problem occurs when a registered user tries to log in to your application (because of the url). If this isn't the case, let me know. Anyway, you can configure the url to which the user is redirected after a successful login by following the instructions which I gave in this comment. Let me know if this doesn't solve your problem. Here are my changes. On http://localhost:8080/login I click on Sign in With Facebook, then I am redirected to Facebook and have a successful login, but it is still redirected to http://localhost:8080/signin#_=_ Here is the stacktrace:

    0][route: {s}->https://graph.facebook.com:443][total kept alive: 0; route allocated: 1 of 5; total allocated: 1 of 10]
    DEBUG – MainClientExec – Opening connection {s}->https://graph.facebook.com:443
    DEBUG – ttpClientConnectionManager – Connecting to graph.facebook.com/173.252.112.23 allocated: 0 of 10]

Sorry Petri, can I ask what url you expect to be called after returning from Facebook? Which controller must catch the request, and how can I get the auth_token to get all friends of the logged in user?
I have so many questions, but first I need to run the application properly. I googled a lot for other examples, but yours is the best article ever. Thanks again. No problem. I am happy to help, but I am on summer holiday at the moment, so my response time might be a bit longer than in a normal situation. Anyway, if you implement the registration and login functions as described in this blog post, the only urls you should care about are: I have never experienced a situation where the user would have been redirected to the '/signin' url, so I am not sure how you can solve this problem (the log you added to your comment doesn't reveal anything unusual). I think that the easiest way to solve this problem is to compare the configuration of your application with the configuration of my example application. Unfortunately it is impossible to figure out the root cause without seeing the source code of your application. About your second question: I have never used Spring Social Facebook for accessing the user's Facebook data (such as the list of friends), so I don't know how you can do it. I think that your best bet is to read this guide, which explains how you can create a simple web application that reads data from Facebook. Excuse me Petri, I found that my server doesn't have access to graph.facebook.com:443; this caused the problem. After resolving this issue, I now have a null pointer exception at org.springframework.social.security.SocialAuthenticationFilter.doAuthentication(SocialAuthenticationFilter.java:301) Authentication success = getAuthenticationManager().authenticate(token); and getAuthenticationManager() returns a null value! Do you have any suggestions? Thank you for your replies. Have you configured the AuthenticationManager bean? You can do this by overriding the protected void configure(AuthenticationManagerBuilder auth) method of the WebSecurityConfigurerAdapter class in the SecurityContext class (assuming that your configuration is similar to mine).
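For reference, a minimal sketch of that override. It assumes Spring Security 3.2 Java configuration, an existing UserDetailsService bean and a BCrypt password encoder; the class and field names are illustrative, not necessarily identical to the example application:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

@Configuration
@EnableWebSecurity
public class SecurityContext extends WebSecurityConfigurerAdapter {

    @Autowired
    private UserDetailsService userDetailsService;

    // Registers the user details service with the authentication manager.
    // In the thread above, getAuthenticationManager() returned null until
    // authentication was configured here.
    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(userDetailsService)
            .passwordEncoder(passwordEncoder());
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}
```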
Finally, everything worked together successfully. Thanks for all of your advice. Have a good holiday. Thanks! It is good to hear that you were able to solve your problem. Hi, in RepositoryUserDetailService I am getting the error "The method getBuilder() is undefined for the type ExampleUserDetails"… Please help, anyone. Some of the methods of the ExampleUserDetails class were left out from this blog post because I wanted to shorten the code listings. The getBuilder() method was one of them. You can get the source code of the ExampleUserDetails class from Github. Hello Peter.. thank you for this article.. can you create a video tutorial for this article..? Hi, That is a great idea. I will add this to my Trello board, and I will record the video tutorial in the future. I'm using your example and my question is: where is the method public User findByEmail(String email); implemented?? I don't see it (interface UserRepository.class). The findByEmail(String email) method is a custom query method which is found from the UserRepository interface. This has been explained in the section titled 'Implementing the UserDetailsService interface'. You can also get the source code of the UserRepository interface from Github. Yes, but I do not see where the implementation of the method is. I see the calling code, but where is the implementation? I really doubt that the userRepository class implements the interface for the findByEmail method to work. Sorry for my English, thanks!! The example application uses Spring Data JPA, which provides a proxy implementation for each interface which extends the Repository interface. That is why there is no need to implement the UserRepository interface. Hi Petri, I am in the process of implementing Spring Security & Spring Social for the website, and would also like to allow the iOS and Android apps to connect via social sign in as well.
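To illustrate the proxy mechanism described above: the whole "implementation" of the query method is its name, which Spring Data JPA parses at startup. The repository supertype and entity id type below are assumptions of this sketch; check the example application's source for the exact declaration:

```java
import org.springframework.data.jpa.repository.JpaRepository;

// No implementation class exists anywhere: Spring Data JPA generates a proxy
// that derives "SELECT u FROM User u WHERE u.email = ?1" from the method name.
public interface UserRepository extends JpaRepository<User, Long> {

    User findByEmail(String email);
}
```

The proxy is what gets injected wherever a UserRepository is autowired, which is why the calling code works without any hand-written implementation.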
The idea I had in mind is to expose a centralised API on the web end and let it handle the social signup/sign in process, where the mobile end only connects to this API. Can you please help me with modelling such a setup and how I should go about this? Thanks Hi Sam, I have never done this myself (surprising, but it is the truth). However, I found an answer to a StackOverflow question titled 'Integrate app with spring social website' which looks quite promising. The idea is that you have to first implement the social sign in by using the native APIs (Android, iOS) and then provide the required credentials to your backend and log the user in. I am planning to extend my Spring Social tutorial to cover REST APIs in the future. I will probably take a closer look at this when I do that. Thanks Petri. Looking forward to it. Hi Petri, Did you have a chance to add REST API support to this example? Hi Armen, No. I will update my Spring Data JPA and Spring MVC Test tutorials before I update this tutorial. If everything goes as planned, I will do it in November or most likely in December. Actually, I asked this of the creator of Spring Social because one of my readers asked the same question, and he said that at the moment the best way to do this is to do a normal GET request and let the backend handle the authentication dance. Hi Petri, Thank you for your quick response. I tried building with Maven but I am facing the below issue. Can you please help me to look into this issue?
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 4.906 s
    [INFO] Finished at: 2014-07-09T15:47:17+05:30
    [INFO] Final Memory: 13M/154M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.6:resources (default-resources) on project spring-mvc-normal: Error loading property file 'F:\workspace\login\profiles\dev\socialConfig.properties' -> [Help 1]
    [ERROR]

Hi Amit, the Maven build is a bit different than the setup described in this blog post. It expects to find the socialConfig.properties file in the profiles/dev/ directory. This properties file contains the configuration of your Facebook and Twitter applications (app id and app secret). Also, the example application doesn't contain this file. This means that you have to create it yourself. The README of the example application explains how you can set up the example application before you can run it. I hope this answered your question. If not, feel free to ask more questions! I deployed the project but am not able to access the registration page, and the authentication page is not available either. Can you please provide me the complete setup of this project to learn more? Did you follow the steps described in the README of the example application? If you did follow them, could you check the log and paste the relevant part of it here (also, if you see any stack traces, add them here as well)? I followed the steps as you have mentioned and am getting the login page. But when I try to log in, my url is "" and after submitting, the URL redirects to "". So I feel the problem is in redirecting the URL. It's not redirecting properly. Also, after login with Facebook the url redirects to "" and it shows a 404 error. Please let me know if I have to update anything more in the configuration. The reason for this is that the action attribute of the login form ignores the context path.
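For anyone hitting the same build error: profiles/dev/socialConfig.properties is just a plain properties file. The key names below are an assumption based on the example application; check its README and social configuration class for the exact keys. The values are placeholders you must replace with your own application ids and secrets:

```properties
# Placeholder values: replace with the id and secret of your own
# Facebook and Twitter applications (key names may differ; check the README).
facebook.app.id=your-facebook-app-id
facebook.app.secret=your-facebook-app-secret
twitter.consumer.key=your-twitter-consumer-key
twitter.consumer.secret=your-twitter-consumer-secret
```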
You can fix this by replacing the form tag with this one:

I made the fix to the example application. Thank you for reporting this bug. Another reader had the same problem, and the reason for this behavior was that his server didn't have access to the Facebook Graph API.

Thank you Petri. It works…. cheers :)

I tried as mentioned above but it still redirects to the same url. I pasted my console log. Please have a look and help me to fix this issue.

DEBUG – ttpClientConnectionManager – Connection leased: [id: 0] allocated: 0 of 10]: 1]-1: Shutdown connection
DEBUG – MainClientExec – Connection discarded
DEBUG – anagedHttpClientConnection – http-outgoing-1: Close connection
DEBUG – ttpClientConnectionManager – Connection released: [id: 1][route: {s}->https://graph.facebook.com:443][total kept alive: 0; route allocated: 0 of 5; total allocated: 0 of 10]

Your log file looks pretty similar to farziny's log. His problem was that his server could not access the Facebook Graph API. I am not exactly sure what he meant by that, but I assumed that either his FB application was not configured properly or the app secret and app key were not correct. When you created the FB application for your web application, did you enable the Facebook login?

Hi Petri, as you said, it was blocking the Facebook Graph API; you were correct. I fixed the problem and everything is working fine now. You rock, dude… I am also trying to get the user's post feed and friend list, so your help may be required again in the future. Thanks a lot in advance.

Hi Amit, Petri, even I am getting the same error you specified earlier. After the Facebook login, the control is not returning back to the login/sign up page. Do you know how to resolve this / enable the Facebook Graph API on the server?
DEBUG – headers – http-outgoing-1 << Access-Control-Allow-Origin: *
DEBUG – headers – http-outgoing-1 << X-FB-Rev: 1653508
DEBUG – headers – http-outgoing-1 << ETag: "02e90b73697f1bf84bb1c08a06c30817978e2ff1"
DEBUG – headers – http-outgoing-1 << Pragma: no-cache
DEBUG – headers – http-outgoing-1 << Cache-Control: private, no-cache, no-store, must-revalidate
DEBUG – headers – http-outgoing-1 << Facebook-API-Version: v2.0
DEBUG – headers – http-outgoing-1 << Expires: Sat, 01 Jan 2000 00:00:00 GMT
DEBUG – headers – http-outgoing-1 << X-FB-Debug: KntgReJ8rZbpdGdWOho0pLPgYBPEpFQei1a+jQNDJJBs+qoE6Sx9pBiHGMk0MsA5NEv6oa0uEv5ABrrVqMwgJg==
DEBUG – headers – http-outgoing-1 << Date: Sun, 22 Mar 2015 18:07:53 GMT
DEBUG – headers – http-outgoing-1 << Connection: keep-alive
DEBUG – headers – http-outgoing-1 < can be kept alive indefinitely
DEBUG – ttpClientConnectionManager – Connection released: [id: 1][route: {s}-> kept alive: 1; route allocated: 1 of 5; total allocated: 1 of 10]

Hi Petri, I am facing a problem where the Facebook sign-in happens but after that the control is not returning to my application. It is simply waiting for localhost. I am running this code as is, without any modification (except creating a new socialConfig.properties). What basically happens after the Facebook login? Does it check for the user's existence? I am asking this because I haven't configured the database.

Hi, the registration and login process is explained in the second part of my Spring Social tutorial. Does your problem occur when the user tries to create a user account by using social sign in (i.e. he clicks the social sign in link for the first time)?

Hi Petri, nice article!! I have followed exactly the same steps as described here and can go to the Facebook login page from my app's login page (by clicking on the Facebook link). After the FB login, the app lands back on the app's login page with the url appended with '#_=_', e.g. ''.
Actually, I expected the registration page to be displayed instead. SocialUserService::loadUserByUserId() is not getting called, as I put some sysout statements there. Any hints? Best regards, Pradeep

My own mistake: I forgot to add /usr/register with permitAll() in the security context. I can see the register form now.

It is good to hear that you were able to solve your problem!

Well, the user registers now; I mean the registration screen comes up and the user details are saved to my Candidate table, but the UserConnection table has no entries for this new user. So it looks like there is still some configuration error regarding SocialUserService, since its loadUserByUserId() is still not called. My securitycontext.xml is: and social.xml is My databasecontext.xml has the rest: 3 ? request.getRequestURI().split('/')[3] : 'guest'}" /> Any hints for this problem?

Wordpress ate your XML configuration, but I happen to have an idea about what the problem might be. You have to persist the user's connection to the UserConnection table after the registration has been successful (and the user uses social sign in). You can do this by following the instructions given in the second part of this tutorial. However, please note that the static handlePostSignUp method of the ProviderSignInUtils class has been deprecated. You should use the doPostSignUp() method instead. I hope that this solves your problem.

That was a perfect hint!! Actually, I had commented out the call to ProviderSignInUtils.handlePostSignUp(), thinking it was not useful in my case. Thanks a lot :)

You are welcome. It is good to hear that you were able to solve your problem.

This article is not simple. It's difficult to understand, so please make a simpler example.

Unfortunately, I have no idea how I could create a simpler example application, since I think that the example application is already pretty simple. However, if you let me know which parts of this blog post were hard to understand, I can try to make them easier to understand.

Hi Petri, logout is not working for Facebook.
It just redirects to the login page. How can I log out of Facebook? And for Twitter I'm getting this error:

org.springframework.web.client.HttpClientErrorException: 406 Not Acceptable
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91)

Do you mean that the log out function of my example application is not working, or are you trying to follow the instructions given in this blog post and cannot get the log out to work in your application? Also, do you mean that after the user clicks the log out link, the user is still logged in to the application and can access protected resources (such as the front page of my example application)?

Yes, when I click on the logout button it redirects to the login page, but when I try to open facebook.com it directly shows me the home page (it's not asking me for my username and password again). That means logout is not working properly.

As far as I know, Spring Social doesn't support this. However, there are a couple of workarounds which are described in the following forum discussions: I hope that this answered your question. Also, remember that you have to use similar workarounds for other social sign in providers as well.

Hi Petri, I found the solution. I was not setting the callback URL in Twitter; that was the problem. But when I log in with Twitter, and the login is successful, I am redirected to the registration form. Why is that? Do I need to change my callback URL?

Great! It is good to hear that you were able to solve your problem. The user is redirected to the registration page because either the user account or the persisted connection is not found from the database. This can be a bit confusing, so I will explain it here: when a user "returns" to the example application after clicking a social sign in button, Spring Social tries to find the persisted connection from the UserConnection table.
If the connection is not found, the user is forwarded to the registration page because the application assumes that the user doesn't have a user account yet. On the other hand, if the connection is found, Spring Social tries to find the correct user account by using the value of the userId column (the example application uses the email address as the user id). If the user is not found, the user is forwarded to the registration page. I hope that this answered your question. By the way, if you want to get more information about the social sign in flow, you should read the second part of this tutorial.

Thanks Petri. Thank you for your help. The project is now working properly. :)

You are welcome. :)

Hello, I have a problem with Facebook when I turn on HTTPS for /**. When the user is redirected back from the Facebook site after a successful login, and after granting permissions for my app, it goes back to my SocialAuthenticationFilter and the attemptAuthentication method. Everything is ok with http, but with https this method is called one more time, and the user is already authenticated (in attemptAuthService()), so it tries to addConnection, but token.principal is null, so the entire method returns null. In the end, an AuthenticationServiceException("authentication failed") is thrown and the user is redirected to the defaultFailureUrl. I use the XML version of your config. I tried to force http for /auth/** and it WORKS, but I don't think it's safe to transfer tokens on an unsecured channel. I don't know what to do :(

When the user is redirected back to your web application, is the user redirected by using HTTP or HTTPS? The reason why I ask this is that I am using Spring Social in a web application that is served by using HTTPS, and the Facebook login is working correctly. I checked the Facebook application configuration of that web application and I noticed that I had entered the website url by using HTTPS (e.g. instead of).
If I were you, I would check the website url setting found in the Facebook application configuration and ensure that it has the 'https' prefix instead of 'http'. Let me know if this solves your problem.

Aww… I have finally figured it out. The problem was in my social configuration. I had added an authenticationSuccessHandler (SavedRequestAwareAuthenticationSuccessHandler) with useReferer=true to my socialAuthenticationFilter. I had done that because I have a Bootstrap modal dialog with a login form on every page and I wanted to redirect the user to the same page after authentication. I had totally forgotten about that.

It is good to hear that you were able to solve your problem!

Every time an SQL query is executed, it results in a 404 error. For example, I can load the main page without problems. But if I put User test = userRepository.findByEmail("test") somewhere in the code, I get the 404 error again. Also, every time I try to do something like logging in -> 404. In the logs I see the SQL query, which works fine if I copy it into phpMyAdmin. Except for this logging I see nothing. If I place a log statement before and after the query, I only see the first one. I guess the query somehow crashes and produces a 404. I know 404 means not found, but this does not make any sense. What can I do? Is there a way to turn up the logging? I use Glassfish with the source provided from GitHub.

I have never tried to run this with Glassfish, but I can try to figure out what is going on. Which version of Glassfish are you using?

Is it possible to override the /auth/{providerId} path? I need to do this because I have multiple security chains, and in one of them the social login is meant to do something a bit different and also direct you to somewhere a bit different.

Hi Ricardo, there is a way to override the url that is processed by the social authentication filter, but you have to make your own copies of the SpringSocialConfigurer and SocialAuthenticationFilter classes.
I know that this is an ugly solution, but unfortunately it seems that there is no other way to do this (yet?). The source code of the CustomSocialAuthenticationFilter class looks as follows (check the comments for more details): The source code of the CustomSpringSocialConfigurer class looks as follows: After you have created these classes, you need to configure your application to use them (modify either the exampleApplicationContext-security.xml file or the SecurityContext class). I haven't compiled these classes yet, but you probably get the idea anyway. Also, I remember that I removed some irrelevant code to make the code sample more clear. This code must be present in your CustomSocialAuthenticationFilter and CustomSpringSocialConfigurer classes. I hope that this answered your question.

Hi Petri, I am using your demo. I have renamed the table "userconnection" to "mg_userconnection" and changed the table name in the script. But when I am redirected to the register page after social login, it throws an error like this: "bad SQL grammar [select userId from UserConnection where providerId = ? and providerUserId = ?]; nested exception". My question is: how can I rename that table? Thanks.

Hi Naitik, you can configure the table prefix by using the setTablePrefix() method of the JdbcUsersConnectionRepository class. The SocialContext class implements the getUsersConnectionRepository(ConnectionFactoryLocator connectionFactoryLocator) method of the SocialConfigurer interface. You can make the required modifications to this method. After you are done, the getUsersConnectionRepository(ConnectionFactoryLocator connectionFactoryLocator) method looks as follows: Now Spring Social is using the table mg_UserConnection instead of the UserConnection table. At the moment it is not possible to configure the "whole" name of this database table. In other words, if your database uses case sensitive table names, you cannot transform the name of the created database table into lowercase.
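For reference, a sketch of what the getUsersConnectionRepository() method described above might look like with the table prefix configured. This assumes the configuration class has an injected DataSource, as in the example application; the no-op text encryptor is only an illustration and should not be used in production:

```java
import javax.sql.DataSource;

import org.springframework.security.crypto.encrypt.Encryptors;
import org.springframework.social.connect.ConnectionFactoryLocator;
import org.springframework.social.connect.UsersConnectionRepository;
import org.springframework.social.connect.jdbc.JdbcUsersConnectionRepository;

public class SocialContext /* implements SocialConfigurer */ {

    private DataSource dataSource; // injected elsewhere in the configuration class

    public UsersConnectionRepository getUsersConnectionRepository(
            ConnectionFactoryLocator connectionFactoryLocator) {
        // JdbcUsersConnectionRepository persists connections to a relational database.
        JdbcUsersConnectionRepository repository = new JdbcUsersConnectionRepository(
                dataSource,
                connectionFactoryLocator,
                Encryptors.noOpText() // illustrative only; use a real TextEncryptor in production
        );
        // Spring Social now reads and writes the mg_UserConnection table
        // instead of the default UserConnection table.
        repository.setTablePrefix("mg_");
        return repository;
    }
}
```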
Hi Petri, with a little bit of customization I have this scenario: when a new user registers (not social), the phrase "You cannot create an user account because you are already logged in." is displayed. But if I try to go to the home page (index.jsp), I get an internal server error, maybe because the context is wrong. Here is the code of the home button: <a href="">home</a> But if I use a different path (/upload for example), all works fine (I have a controller). Everything works fine if I log out and log in again with the login form… the problem is for new registrations only. Can you help me?

Hi Tiziano, I tested this by following these steps: The result is that I can see the home page in my browser. My home link looks as follows: However, this assumes that the application uses the context path '/'. If you want to use another context path, you should create your link by using the following code: Did I miss something? If so, let me know :)

Thanks Petri. I forgot to tell you that I already use the contextPath tag, sorry… I obtain a generic error (500 error). With the debugger I see that the creation time in the Object principal (cast to UserDetails) is null in the database until I log out and log in again. Is something missing in the registration process?

Did you remember to log the user in after the registration is completed? You should take a look at the controller method that processes the request created when the user submits the registration form. If you are logging the user in after registration, could you add the stack trace found in the log file here? It is kind of hard to say what is going on without seeing any code or a log.

Hi Petri, thanks for posting this tutorial; it is really good. I used your XML configuration with web.xml, but it is unable to locate the beans usersConnectionRepository and connectionFactoryLocator.
Below is the error that I am getting: Cannot resolve reference to bean 'usersConnectionRepository' while setting constructor argument; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'usersConnectionRepository' is defined.

Thank you for pointing this problem out. I just noticed that the XML configuration is still using Spring profiles (I removed these profiles from the Java config). If you want to use the XML configuration, you should modify the <beans profile="application"> element and leave the other elements intact. I will remove the Spring profiles from the XML configuration when I have time to do it. Again, thank you for pointing this out!

Thanks Petri for your great tutorial and quick response. I have one clarification: how is the user/provider bound during registration? I want to just persist the data and allow the user in. What should I do? To be more clear: I don't want the registration form in the case of social sign in.

Hi, the second part of this tutorial explains how you can create a registration form that supports both "normal" registration and registration via social sign in. If you have any other questions about the registration process, leave a comment on that blog post.

Thanks, very useful!

You are welcome! I am happy to hear that this blog post was useful to you.

Hi, I cloned the repo from and executed the command below: mvn clean install It failed with the error below:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.6:resources (default-resources) on project spring-mvc-normal: Error loading property file 'D:\Personal\BACKUP\Non-Cloud\Projects\workspace\petrikainulainen-spring-social-examples-master\sign-in\spring-mvc-normal\profiles\dev\socialConfig.properties' -> [Help 1]
[ERROR]

Can you help? Thanks.

Hi, you need to create a Facebook and a Twitter application before you can run the example application.
Also, you need to create the socialConfig.properties file and configure the used social sign in providers in that file. The README of the example application provides more details about these steps.

Hi Petri, thanks for your prompt response. I finished the first two parts of this tutorial and sincerely thank you for all your efforts in creating a top-quality example app. I am new to Java & Spring development and learned a lot by going through these two articles and the source code. I am able to run the app with "mvn jetty:run -P dev" now after reading the README file, but I am facing a few issues in which I need your help. I plan to create a Tomcat-deployable WAR file, so I ran mvn war:war -P dev and came up with a WAR. Now things don't work as they did earlier:

1. Once I log in using my FB account, the app shows me a welcome page. When I press the logout button on that page, it takes me to instead of the logout success URL:.
2. Also, if I type again after logging out, I am able to see the welcome page. That means the user is still logged in to my app as well as to FB.
3. Both problems 1 & 2 are there when I create a new normal user (without FB etc.).
4. Creating a user account also doesn't work now. It goes to and displays a blank page.

I believe either the WAR is not getting created properly or Tomcat is not getting configured. Can you suggest what the problem could be here? I found this link "" but am not sure whether it is relevant to my problem. Thanks once again for writing such a wonderful tutorial.

I think that these problems are caused by the fact that the application assumes that it is run by using the context path '/'. It seems that you are running it by using the context path '/spring-mvc-normal/'. Am I right? I thought that I had already fixed this, but it seems that I didn't fix it after all. Thank you for pointing this out! I will fix the JSP pages and commit my changes to GitHub.

Update: I fixed the JSP pages. You can get the working example application from GitHub.
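Context-path fixes like the one mentioned above typically amount to building links relative to the context path instead of hard-coding absolute paths. A sketch using the JSTL core taglib (the /user/register path is illustrative, not necessarily the exact path used in the example application):

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<%-- c:url prepends the application's context path to the given value, --%>
<%-- so the link works no matter which context path the war is deployed under. --%>
<a href="<c:url value='/user/register'/>">Create user account</a>
```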
Thanks Petri, you nailed down the problem. After pulling your last commit, I was able to run the app perfectly fine from Tomcat. BTW, I want to know whether there was a mistake in the way I ran the app. Is it not the right way to launch it when one deploys the app in a production environment?

You are welcome. The problem was that I didn't think that it should be possible to run this application by using context paths other than '/'. In other words, it was a bug in the application (my mistake, not yours).

Hi Petri, I am facing one strange problem now. When I try to log in / register for the first time, I don't see the "Create user account" link at the top right corner of the page. Once I log in successfully, I don't see the log out button anymore. I re-used your JSP pages as they are in my test app, but it looks like something is missing. What do I need to effectively use "sec:authorize" in my JSP pages? Thanks for your help.

Hi, the reference manual of Spring Security states that: Do you configure your application by using XML configuration or Java configuration? If you use Java configuration, you should check the source code of the SecurityContext class. If you use XML configuration, you should check the exampleApplicationContext-security.xml file.

Hi Petri, I have compared the XML and JSP files many times, but there is no difference except the package names, which are as per my application. During my debugging, what I realized is that none of the Spring Security tags present in the layout.jsp file are working. I put many debug prints in layout.jsp but nothing came up on the screen. If I do the same in other JSP files, they all show up. Does this ring any bell? Where have I screwed up? Sorry, but since I am on the learning path, I am troubling you too much. Thanks.

Hi Sona, have you declared the Spring Security taglib in every JSP file that uses it? Don't worry about this. I am happy to help.
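For completeness, the Spring Security taglib declaration that the sec: tags require in every JSP that uses them looks like this (the standard declaration; the /logout link inside the example is illustrative):

```jsp
<%@ taglib prefix="sec" uri="http://www.springframework.org/security/tags" %>

<%-- Content inside this tag is rendered only for authenticated users. --%>
<sec:authorize access="isAuthenticated()">
    <a href="/logout">Log out</a>
</sec:authorize>
```

If the declaration is missing, the sec: tags are silently ignored and their content is never rendered, which matches the symptoms described above.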
Hi Petri, finally, months after reporting this issue, I found the root cause, and believe me, it is one of the silliest causes I have ever found for an issue. The problem was the filter ordering in my web.xml file. I wasn't aware that the ordering plays a critical role, and here SiteMesh and Spring Security were not in order. The following thread from SO helped me in identifying this: Thanks for all your support in debugging this :)

Hi Sona, you are welcome (although I think that I was not able to provide you a lot of help on this matter). In any case, it is great to hear that you finally found the reason for your problem and were able to solve it!

Hi Petri, I am very new to Spring Security, and I have implemented Spring Security with OpenID authentication (login through Gmail). Now I am trying the Spring Facebook integration. For this, I have written a custom class which is generic for all, i.e. simple security, OpenID auth, and Spring Facebook, as follows:

public class UserAuthenticationFilter implements UserDetailsService, AuthenticationUserDetailsService, SocialUserDetailsService {

    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException, DataAccessException {
    }

    public UserDetails loadUserDetails(OpenidAuthenticationToken token) throws UsernameNotFoundException, DataAccessException {
    }

    public SocialUserDetails loadUserByUserId(String username) throws UsernameNotFoundException, DataAccessException {
    }
}

In this way, in the above class, I have overridden the required methods, but I am not able to execute the loadUserByUserId method at all. I have added the following code in security.xml: and in the JSP the code is as follows: <a href="">Login with facebook</a> Is it necessary to create the Facebook app? Can you please tell me whether this is the correct way to implement this? Please give me the solution to succeed with the Spring Facebook login and please suggest what implementation is remaining. I am stuck on this. Thanks in advance

Hi, yes.
You cannot implement a Facebook login without it, because you need a valid app id and app secret. If you haven't created a Facebook application for your application, this is the reason why your Facebook login is not working. Unfortunately, WordPress "ate" your XML configuration, so I cannot say if there is something wrong with it. However, you can take a look at the exampleApplicationContext-security.xml file and compare it to your security.xml file. I hope that this answered your question.

Hi Petri, thanks for the reply. Actually, I have the same XML configuration as your exampleApplicationContext-security.xml and exampleApplicationContext-social.xml, and I have already done the data source and transactionManager configuration in applicationContext.xml. But I have some doubts and issues, as follows:

1) I am getting this error in exampleApplicationContext-security.xml: Line 77 in XML document from ServletContext resource [/WEB-INF/application-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 77; columnNumber: 65; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'constructor-arg'
2) We haven't created any object, so why are we using "usersConnectionRepository" as a ref in security.xml?

Please help me to make this integration work. I am not getting anywhere here. Thanks a lot, Petri……

Hi, it seems that the problem is that you haven't configured the usersConnectionRepository bean. This bean is required because Spring Social persists the user's connection to a social service provider. My example application uses the JdbcUsersConnectionRepository class, which persists connection data to a relational database. The section 2.3. Persisting Connections of the Spring Social reference manual provides more information about this.

Hi Petri, can you please tell me how to integrate Facebook login with a localhost application? Please help me. I am using GWT with Spring and Hibernate.
My generated local URL is What is the correct way to implement this? Thanks in advance

Hi, I have never used GWT (Google Web Toolkit?), and unfortunately I have no idea whether it is possible to integrate Spring Social with it. :( If you want to use Spring Social, you should use Spring MVC and follow the instructions given in my Spring Social tutorial.

Excellent post on Spring MVC! How can we make it work for Spring MVC without a view model, but with plain REST services that return JSON? In that case, the user probably uses some JavaScript-based SPA that already authenticates with the social sign in provider outside the MVC application and already has a token.

Thank you for your kind words. I really appreciate them. Unfortunately, I have never added social sign in to a Spring-powered REST API. In other words, I don't know how you can use it with Spring MVC if your application provides only a REST API. I have been planning to write a separate tutorial about this for a long time. I guess now is a good time to actually do it.

Petri, this is one of the finest tutorials available on the internet. I searched a lot for Spring Social sign in with Google+. Your example also implements Facebook and Twitter. Could you please guide me, or leave a comment, on how I can implement Google+ sign in using Spring MVC? Again, many thanks.

Thank you for your kind words. I really appreciate them. Unfortunately, I have never done this myself :( However, I am going to update my Spring Social tutorial when I have some free time (I don't know the exact schedule). I will add the Google sign in to my example application and add a new part to my Spring Social tutorial.

Great tutorial! I was able to integrate Facebook, LinkedIn and Google+ sign in, and was able to post a link on Facebook & LinkedIn, but I am wondering which Google API to use for the same. Any help, please? Also, I noticed that when logging out, data from the UserConnection table is not getting deleted. Is this the right behavior, or did I miss something in the configuration?
I noticed that the record gets deleted when I use POST /connect/facebook with a hidden delete parameter like below: <form action="" method="post"> Spring Social Showcase is connected to your Facebook account. Click the button if you wish to disconnect. Disconnect </form>

Thank you for your kind words. I really appreciate them! There is a community project called Spring Social Google, but I haven't tried it out yet. If you want to give it a shot, you can get started by reading its reference manual. This is normal: the data found in the UserConnection table links the information received from the social sign in provider to the local user account. If you delete this link, the application cannot log you in (unless you create a new user account).

Hi Petri, I have implemented Spring Social. When I click on the "Login with Facebook" button, I am redirected to the Facebook login page successfully, but after logging in to Facebook, I am redirected to the signup url, which means it is not finding the user in the database.

1) I have registered the user in my application and am also able to log in by using a simple username and password, without using the Facebook login.
2) But when I log in through Facebook with the same user, why is it not finding that user, and why am I redirected to the signup url?

First it was giving the error "userconnection table not exist"; now I have created the userconnection table in my database. I have the following XML configuration: ——— xml configuration—— —– End of xml configuration—– In the JSP page, I have used code like: These are all my changes. Apart from this, I haven't implemented anything else. So please tell me what I should do here for the proper working of the application: my Facebook login user should be redirected properly to my application, and the data should be inserted by the ConnectController into the "userconnection" table. I am stuck here; please highlight what I should implement now at this stage.
Many, many thanks in advance…

The reason for this is that the user who tries to log in by using a social sign in provider is not the "same" user as the user who created a user account by using the "normal" registration. Because the user created a user account by using the "normal" registration, he/she cannot sign in by using a social sign in provider and expect the application to identify his/her existing user account. The application cannot do this because the UserConnection table is empty. Also, you cannot just insert a row into this table (unless the user is creating a user account by using social sign in) because you need the information returned by the social sign in provider. If you need to support only Facebook, you can try to fix this by making the following changes to the controller method that renders the sign up view: UserConnection table, and log the user in. Also, you have to configure Spring Social to request the user's email address when the user starts the social sign in flow by using Facebook. This approach has two limitations: I hope that this answered your question.

Thank you so much, Petri. As you suggested, I tried a lot to create the user connection, but it did not work. Is there any url (a signup url) through which the user connection is created automatically? Thanks.

My example application has no such url, but you can persist the user's connection by invoking the doPostSignUp() method of the ProviderSignInUtils class.
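A sketch of how that doPostSignUp() call might look in a sign up controller. In Spring Social 1.1 ProviderSignInUtils is used as an instance created from a ConnectionFactoryLocator and a UsersConnectionRepository; the controller, form, and path names here are assumptions for illustration, not necessarily those of the example application:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.social.connect.ConnectionFactoryLocator;
import org.springframework.social.connect.UsersConnectionRepository;
import org.springframework.social.connect.web.ProviderSignInUtils;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.context.request.WebRequest;

@Controller
public class SignUpController {

    private final ProviderSignInUtils providerSignInUtils;

    @Autowired
    public SignUpController(ConnectionFactoryLocator connectionFactoryLocator,
                            UsersConnectionRepository usersConnectionRepository) {
        this.providerSignInUtils =
                new ProviderSignInUtils(connectionFactoryLocator, usersConnectionRepository);
    }

    @RequestMapping(value = "/user/register", method = RequestMethod.POST)
    public String registerUserAccount(RegistrationForm form, WebRequest request) {
        // ... create and persist the user account and log the user in ...

        // Persist the pending connection to the UserConnection table after a
        // successful registration that was started via social sign in.
        providerSignInUtils.doPostSignUp(form.getEmail(), request);
        return "redirect:/";
    }
}
```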
Hi Petri, I tried to build the example with Maven, but this error appeared during the build: cmd.exe /X /C ""C:\Program Files\Java\jdk1.8.0_05\jre\bin\java" -javaagent:C:\Users\Victor\.m2\repository\org\jacoco\org.jacoco.agent.6.3.201306030806\org.jacoco.agent-0.6.3.201306030806-runtime.jar=destfile=C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\coverage-reports\jacoco-ut.exec -jar C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\surefire\surefirebooter5500528348155860158.jar C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\surefire\surefire5217935832005982224tmp C:\Users\Victor\Documents\spring-social-examples\sign-in\spring-mvc-normal\target\surefire\surefire_07294595199859845725tmp" -> [Help 1] Can you help me? Thanks

I removed a few lines from this comment since they were irrelevant – Petri

Hi, I noticed that you are using Java 8. I assume that the JaCoCo Maven plugin 0.6.3.201306030806 is not compatible with Java 8. If you don't need the code coverage reports, you can simply remove the plugin configuration. If you want to generate the code coverage reports, you should update its version to 0.7.2.201409121644. Let me know if this solved your problem.

Hi Petri, good morning!!! I made a small application by going through your fantastic tutorials and tried hosting it using Apache & Tomcat on a professional hosting server. However, I am not able to access the site without having the app name in the URL. I always have to do:. I followed many SO posts and tutorials, but nowhere could I find someone who has the same environment. So finally I thought of asking you how it should be done. Details of what I have tried are captured here: Please see if you can help. Thanks once again for all your help.

Hi, the easiest way to solve your problem is to change the name of the deployed war file to ROOT.war.
When you deploy a war file called ROOT.war to Tomcat, the context path of that application is ‘/’. This means that you can access your application by using the url. Hi Petri, Thanks for the suggestion. I tried that and it worked like a charm on my localhost, but not for the production app running on my hosting machine. When I tried mydomainname.com, it gave me: Index of / [ICO] Name Last modified Size Description [IMG] favicon.ico 2015-02-26 15:54 822 [TXT] index.html.backup 2014-12-20 10:49 603 I checked both Tomcat server.xml files and didn’t find any difference. Is there something else which I am missing? It’s kind of hard to say what could be wrong, but the first thing that came to my mind was that your Apache might not be configured properly (it seems that it is not “forwarding” requests to Tomcat). Are you using an AJP connector? Hi, My project runs well based on Spring + Spring Security + JSF + WebFlow. Can I integrate Spring Social into my project without a Spring controller? Thanks in advance. Hi, Because I don’t have any experience with JSF, I cannot answer your question. However, it seems that using Spring Social without Spring MVC requires a lot of work. Hi Petri, You have the best example on Spring Social. I am trying to add ‘login with Facebook’ to my website where form based login is already implemented. You have merged both form based and social login into one, which made it confusing. Is there a way you can create an example with ‘login with Facebook’ only, based on XML configuration? Hi Shashi, Actually the login functions are not merged. The form login is processed by Spring Security and the social login is processed by Spring Social. However, if a user tries to sign in by using a social sign in provider and he doesn’t have a user account, he has to register one by using the registration form. This process is explained in the next part of this tutorial.
You can of course skip the registration phase and create a new user account if the user account isn’t found in the database. If you want to do this, you need to modify the SignUpController class. You need to read the user’s information from the request and use this information to create a new user account. The example application has XML configuration files as well. By the way, I will update my Spring Social tutorial later this year. I will address your concerns when I do that. For example, I plan to make this tutorial less confusing :) Thank you for the feedback! Thanks for a wonderful post on this! It helped me understand how Spring Social works quicker than the official Spring Social documentation. :) I wanted to add the Spring Social feature to an AppFuse app (which uses Spring Security) that I had been playing with. Following your 2 posts, I was trying to integrate it step by step and to see the effect of each step. But I ran into a problem right away: WARN [qtp937612-50] PageNotFound.noHandlerFound(1118) | No mapping found for HTTP request with URI [/app/auth/facebook] in DispatcherServlet with name ‘dispatcher’ Can you help explain where the “/auth/facebook” request is mapped? Is this done by simply declaring a connectController bean in your SocialContext class (which I have copied)?

@Bean
public ConnectController connectController(ConnectionFactoryLocator connectionFactoryLocator, ConnectionRepository connectionRepository) {
    return new ConnectController(connectionFactoryLocator, connectionRepository);
}

Thank you for your kind words. I really appreciate them! The SocialAuthenticationFilter class processes requests sent to the url ‘/auth/facebook’. The SpringSocialConfigurer class creates and configures this filter (check out the SecurityContext class for more details). Thanks for this project, it is very good for understanding the social core. I only have one question: is there any possibility to adapt this project to mobile or a REST API?
I have a problem with redirecting in the /signup process for REST (an Angular app and a mobile app). I have never done this myself, but I think that it is doable. I tried to find information from Google, but I couldn’t find anything interesting. My next step was to ask help from my Twitter followers. I hope that someone can shed some light on this problem. Update: It seems that your only option is to do a normal GET request and let the backend handle the authentication dance. And how to send a response to the Android app or REST API after this dance? Hi Wojciech and Petri, did you fix this problem? I have the same one. Many thanks! Hi Mova, Unfortunately your best option is to do a normal GET request and let the backend handle the authentication dance. However, that doesn’t solve your problem because you still need to respond to the client after the dance is complete. I have one idea that might help you to do this: The easiest way to solve this is to create two controller methods which handle a failed sign in attempt and a successful sign in. Implement these controller methods by setting the preferred HTTP response code to the HTTP response. After you have implemented these controller methods, you have to configure Spring Social to redirect the request to the correct url. If you use Java configuration, you can do this by setting the values of the postLoginUrl and postFailureUrl fields of the SpringSocialConfigurer class. If you use XML configuration, you can do this by setting the values of the postLoginUrl and postFailureUrl fields of the SocialAuthenticationFilter class. If you don’t have any idea how to do this, check out the XML configuration file of my example application. It should help you to get started. Did this answer your question? Hi Petri, I tried the first part of this tutorial using XML configuration, however I am getting the following error…
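The Java configuration change described above could look roughly like this. This is a sketch only: the two redirect urls and the class name are hypothetical; postLoginUrl() and postFailureUrl() are the SpringSocialConfigurer settings mentioned in the comment.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.social.security.SpringSocialConfigurer;

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Redirect to controller methods that simply set the wanted HTTP
        // status code, so a REST or mobile client can react to the result.
        SpringSocialConfigurer socialConfigurer = new SpringSocialConfigurer()
                .postLoginUrl("/signin/success")     // hypothetical url
                .postFailureUrl("/signin/failure");  // hypothetical url
        http.apply(socialConfigurer);
    }
}
```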
No qualifying bean of type [com.project.service.UserRepository] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {} Could you please help me out with this? It seems to be an autowiring or dependency injection issue. Also, is there any implementation of UserRepository that we have to write? Thanks, Shashwat Update: I removed the stack trace since the error message gave enough information about the problem – Petri Hi Shashwat, The example application uses Spring Data JPA, and the UserRepository is the repository that is used to manage user information. It seems that you have changed the package hierarchy of the project, because the old package of the UserRepository interface was: net.petrikainulainen.spring.social.signinmvc.user.repository. This is why Spring Data JPA cannot create the UserRepository bean. You can solve this problem by configuring Spring Data JPA to use the correct package. You can do this by changing the value of the jpa:repositories element’s base-package attribute. You can find the persistence layer configuration in the exampleApplicationContext-persistence.xml file. Also, if you are not familiar with Spring Data JPA, you should take a look at my Spring Data JPA tutorial. If you have any further questions, don’t hesitate to ask them. Hey Petri, thanks for the quick reply. So I get the issue: you are using a JPA repository. But the main problem I am facing is that I am using Hibernate in my whole application, and it would be difficult to migrate the whole app to Spring Data JPA just to implement a login functionality. Could you please help me with the changes that I will have to make in case I have to implement it in Hibernate? I am attaching my DAO setup below, which uses a JNDI resource. Any help would be appreciated.
(Hibernate configuration omitted; its XML markup was lost in extraction. It maps the packages com.myproject.pojo.*, uses the org.hibernate.dialect.MySQL5InnoDBDialect dialect, the org.joda.time.contrib.hibernate.PersistentDateTime type, and the JNDI datasource java:comp/env/jdbc/myproject.) Regards, Shashwat Hi, It is actually quite easy to get rid of Spring Data JPA. You need to create a Hibernate repository class that is used to handle user information (or you can use an existing repository) and ensure that the repository implements these methods:

- The User findByEmail(String email) method returns the user whose email address is given as a method parameter. If no user is found, it returns null. This method is invoked when the user logs in to the system or creates a new user account.
- The User save(User saved) method saves the information of the User object given as a method parameter and returns the saved object. This method is invoked when a user creates a new user account.

If you have any other questions, feel free to ask them. Thanks Petri, got it working. Will be moving to part 2 of this tutorial, and will let you know if I get stuck anywhere. Hi Shashwat, You are welcome! It is nice to hear that you were able to solve your problem. Hi Petri, I have integrated Spring Security with Spring Social using your example code; the configuration is the same as you provided in your GitHub code. However, after calling /auth/twitter it goes to the Twitter sign-in page, and after that it authenticates and redirects back to my login page. What can be the reason for this? I am using the latest version of Spring, Spring Security and Spring Social. Hi Viral, I have a couple of ideas: First, check that the callback url of your Twitter application is correct. Second, does this happen when the user clicks the “Sign in With Twitter” link for the first time? If so, one possible reason for this is that Spring Social cannot find the persisted connection from the UserConnection table. When this happens, it will redirect the user to the sign up page (see this discussion).
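A plain Hibernate repository providing the two methods described above could be sketched like this. The class name UserDao and the HQL query string are illustrative and not taken from the example application; only the two method contracts come from the discussion.

```java
import org.hibernate.SessionFactory;

public class UserDao {

    private final SessionFactory sessionFactory;

    public UserDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Returns the user whose email address is given as a method parameter,
    // or null if no user is found.
    public User findByEmail(String email) {
        return (User) sessionFactory.getCurrentSession()
                .createQuery("from User u where u.email = :email")
                .setParameter("email", email)
                .uniqueResult();
    }

    // Saves the information of the given User object and returns it.
    public User save(User saved) {
        sessionFactory.getCurrentSession().saveOrUpdate(saved);
        return saved;
    }
}
```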
Also, if the user account is not found in the database, the user is redirected to the sign up page. Do you use your login page as the sign up page? Hi Petri, Thanks for your help. It worked. I was signing in with Twitter for the first time. Also, I had changed the Spring Social version to the 1.1.2 release. The ProviderSignInUtils class is getting an error in 1.1.2; it is working fine in the 1.1.0 release. It seems that the Spring Social team made some major changes to the ProviderSignInUtils class between Spring Social 1.1.0 and 1.1.2. Anyway, another reader had the same problem, and I posted the solution here. Are you planning to update the tutorial to version 2.0.x of spring.social.facebook? There are big changes there; for example, there is no default constructor for ProviderSignInUtils. I am currently struggling with updating my code to version 2.0.x. I would love to do it right now, but I am afraid that I won’t be able to do it until next year because I need to update two older tutorials before I can move on to this one. I took a quick look at the Javadoc of the ProviderSignInUtils class and noticed that it has a constructor which takes ConnectionFactoryLocator and UsersConnectionRepository objects as constructor arguments. You could simply inject these beans into the component that uses the ProviderSignInUtils class and pass them as constructor arguments when you create a new ProviderSignInUtils object. If you have any other questions, don’t hesitate to ask them! Hi Petri Kainulainen, How can I work this set of examples out with MongoDB? Please help me out. Hi Vinodh, Take a look at this comment. Hi Petri, I am getting an error in pom.xml when I import your project into the Eclipse Mars IDE: Plugin execution not covered by lifecycle configuration: org.jacoco:jacoco-maven-plugin:0.6.3.201306030806:prepare-agent (execution: pre-unit-test, phase: initialise) Regards, Laxman Hi, You can fix this by removing the JaCoCo Maven Plugin from the pom.xml file.
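The change described above could look roughly like this. This is a sketch only: the field and method names are made up; the constructor taking ConnectionFactoryLocator and UsersConnectionRepository is the one mentioned in the comment.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.social.connect.ConnectionFactoryLocator;
import org.springframework.social.connect.UsersConnectionRepository;
import org.springframework.social.connect.web.ProviderSignInUtils;
import org.springframework.web.context.request.WebRequest;

public class SignUpHelper {

    // In newer Spring Social versions ProviderSignInUtils is no longer used
    // through static methods: inject these two beans instead.
    @Autowired
    private ConnectionFactoryLocator connectionFactoryLocator;

    @Autowired
    private UsersConnectionRepository usersConnectionRepository;

    public void persistConnection(String userId, WebRequest request) {
        ProviderSignInUtils signInUtils =
                new ProviderSignInUtils(connectionFactoryLocator, usersConnectionRepository);
        signInUtils.doPostSignUp(userId, request);
    }
}
```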
For those who are also struggling with the OAuth setScope issue mentioned at, please update spring-social-config to a later version (1.1.2.RELEASE) and you should be fine. Hi, Thank you for sharing! This is a quite common use case, and that is why it is great to find out how I can solve this problem. Hi Petri, I have a few questions regarding this tutorial. I am trying to have only ‘login with Facebook’ as the way users can sign in to the application. Questions: 1. When would a user get a session id if he/she tries to log in with Facebook? 2. The Spring Social documentation says to have a schema for the UserConnection table. So when does Spring add a particular entry (user) to this table? Hi Jay, The user gets a session id when he/she opens the first page of your application. Spring Security can create a new session for him/her after login if you have specified it to do so. You need to insert a new row into this table by using the ProviderSignInUtils class. If you want to get more information about this, check out the next part of my Spring Social tutorial. Hi Petri, I have deployed the project to Amazon AWS, but it is not working fine. Sometimes I am not able to get a response from auth/facebook, and I am getting the exception “Page has too many redirections”, but all the functionalities are working fine in my local Tomcat. I need a suggestion. I think that this is an AWS specific problem. Unfortunately I don’t know what the problem is :( Dear friends, I need a sample project that uses the Spring MVC framework + REST and OAuth 2, and uses a MySQL database. If somebody can help me, please email me. Thank you in advance. Hi, I am not sure if you can find one example that fulfills your requirements. However, if you are willing to do part of the work yourself, you can take a look at these tutorials: I hope that this helps. Dear Petri, when I want to run this project with WildFly I get this error. Can you help me?
org.springframework.social.config.annotation.SocialConfiguration.connectionFactoryLocator()] threw exception; nested exception is java.lang.IllegalArgumentException: Circular placeholder reference ‘twitter.consumer.key’ in property definitions Update: I removed the irrelevant part of the stack trace – Petri Hi, How did you create the war file? The reason why I ask this is that Maven should replace the property placeholders found in the properties file with the values found in the profile specific configuration file. However, the error message of the thrown IllegalArgumentException states that this did not happen. Dear Petri, How should I create the war file? I run it in IntelliJ IDEA, make the artifact and configure it, but this error happens when I select either AuthServer or RestServer. ERROR [org.springframework.web.servlet.DispatcherServlet] (MSC service thread 1-3) Context initialization failed: java.lang.IllegalArgumentException at org.springframework.asm.ClassReader.(Unknown Source) [spring-core-3.2.1.RELEASE.jar:3.2.1.RELEASE] at org.springframework.asm.ClassReader.(Unknown Source) [spring-core-3.2.1.RELEASE.jar:3.2.1.RELEASE] at org.springframework.asm.ClassReader.(Unknown Source) [spring-core-3.2.1.RELEASE.jar:3.2.1.RELEASE] Dear Petri, Can you tell me how to run this project in IntelliJ IDEA with the WildFly application server, with screenshots? Thank you in advance. I don’t know how you can run this application with IntelliJ IDEA and WildFly because I have never used WildFly. However, you should be able to run it as long as you have: Also, keep in mind that you have to create the deployed .war file by running the package Maven goal by using the dev Maven profile. You can configure IntelliJ IDEA to invoke a Maven goal before it starts the application server. Hi Petri, Can you tell me how this project works with Twitter and Facebook? I want to make it work with LinkedIn and Google too.
Hi Morteza, The example application should work with Facebook and Twitter as long as you have configured it properly (check my previous comment). These projects provide support for Google and LinkedIn: The example application doesn’t support them at the moment. This means that you have to make some minor configuration changes to the application. These changes are explained in the reference manuals of these projects. Also, you need to add the sign in links to the login page and add the new providers to the SocialMediaService enum. Dear Petri, I get this error when I click the Facebook link (Invalid App ID: foo) and this error when I click the Twitter link: org.springframework.web.client.HttpClientErrorException: 401 Authorization Required org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91) org.springframework.web.client.RestTemplate.handleResponseError(RestTemplate.java:576) org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:532) org.springframework.web.client.RestTemplate.execute(RestTemplate.java:504) org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:449) org.springframework.social.oauth1.OAuth1Template.exchangeForToken(OAuth1Template.java:187) org.springframework.social.oauth1.OAuth1Template.fetchRequestToken(OAuth1Template.java:115) What do I have to do? I configured Twitter as you said. Did you create your own Facebook and Twitter applications? The error messages suggest that the configuration file (socialConfig.properties), which should contain the real API credentials provided by Facebook and Twitter, contains my placeholders (foo and bar). Yes, I created my own Twitter application, and the above error happened. Did you configure the domains that are allowed to access the application? How do I configure the domains?
Hi, Unfortunately I don’t remember the details anymore, but if I remember correctly, you should be able to configure the domains in the settings of your Twitter and Facebook applications. Hi, I am P V Reddy, a senior software engineer. What I have to do in our project is authenticate either an email id or a phone number with an OTP, or three questions and answers, before login into the web application in Spring MVC. I need some references and also some APIs. Hi, I have never used OTP in my Spring applications, but I found a library that adds OTP support to Spring Security. I think that you should take a look at it. Dear Petri, When I use Google+ and configure it with this application, clicking the Google+ button redirects me to a page with this error: 400. That’s an error. Error: invalid_request Missing required parameter: scope Learn more Request Details That’s all we know. Hi, It seems that you need to specify the value of the scope parameter. Take a look at this StackOverflow answer. It explains how you can add the scope parameter to your sign in form. Hi Petri, I am trying to use social login in my current Spring project. I have added the dependencies, but when I build the project, I am not able to see the /connect url mapped logs in the console. Unfortunately it’s impossible to say what is going on without running your code. Do you have a sample project that reproduces this problem? Superb post. Thanks a lot. You are welcome. Hi, I imported the code from GitHub and ran a Maven clean install. But when I run the code using this path url in the browser… I get a 404 error like this …….. What is the reason? HTTP ERROR 404 Problem accessing /spring-mvc-normal/login. Reason: NOT_FOUND The 404 error means that the page is not found. Are you running the web application by using the Jetty Maven plugin, or do you use some other servlet container? Yes, I am running the web application using the Jetty server. Hmm.
I have to admit that I don’t know what is wrong :( By the way, are you using Java 8? If I remember correctly, the example application doesn’t work with Java 8 because it uses a Spring version that doesn’t support Java 8.
https://www.petrikainulainen.net/programming/spring-framework/adding-social-sign-in-to-a-spring-mvc-web-application-configuration/
A reference is like a const pointer to a variable. Assigning a reference is a bit like using a pointer, but with & not *, and you don't need to dereference it. The difference is that you assign an address to a pointer but a variable to a reference variable. The line below suggests that the value of a is copied into aref. But it is not; instead, aref is a reference to the variable a. Once assigned, aref is the same as a. Any changes to aref are changes to a, as example 8_1 shows.

int & aref = a; // ex8_1

#include <iostream>
using namespace std;

int main()
{
    int a = 9;
    int & aref = a;   // aref is now another name for a
    a++;
    cout << "The value of a is " << aref << "\n";   // prints 10
    return 0;
}

Things you should always remember about references:

- A reference must always refer to something. NULLs are not allowed.
- A reference must be initialized when it is created. An unassigned reference cannot exist.
- Once initialized, it cannot be changed to refer to another variable.

What is the point of references? For code like this, not much. But in functions, references allow values to be passed by reference. In C, the big problem was that parameters to functions were copied in. To change a variable defined outside the function required pointers, which made programming more complicated. On the next page : Learn about reference parameters
http://cplus.about.com/od/learning1/ss/references.htm
THREAD SYNCHRONIZATION

Generally, critical sections of the code are marked with the synchronized keyword. Examples of using thread synchronization are in “The Producer/Consumer Model”. Locks are used to synchronize access to a shared resource. A lock can be associated with a shared resource. Threads gain access to a shared resource by first acquiring the lock associated with the object/block of code. At any given time, at most one thread can hold the lock and thereby have access to the shared resource. A lock thus implements mutual exclusion. The object lock mechanism enforces the following rules of synchronization:

- A thread must acquire the object lock associated with a shared resource before it can enter the shared resource. The runtime system ensures that no other thread can enter a shared resource if another thread already holds the object lock associated with the shared resource. If a thread cannot immediately acquire the object lock, it is blocked, that is, it must wait for the lock to become available.

There are two ways through which synchronized can be implemented in Java:

- synchronized methods
- synchronized blocks

Synchronized statements are the same as synchronized methods. A synchronized statement can only be executed after a thread has acquired the lock on the object/class referenced in the synchronized statement.

SYNCHRONIZED METHODS

Synchronized methods are methods that are used to control access to an object. A thread only executes a synchronized method after it has acquired the lock for the method’s object or class. If the lock is already held by another thread, the calling thread waits. A thread relinquishes the lock simply by returning from the synchronized method, allowing the next thread waiting for this lock to proceed. Synchronized methods are useful in situations where methods can manipulate the state of an object in ways that can corrupt the state if executed concurrently. This is called a race condition.
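The mutual exclusion rule above can be sketched with a small counter example. This example is not part of the original tutorial, and the class names are made up; it shows that synchronized updates from two threads never get lost.

```java
// Two threads increment a shared counter 10,000 times each. Because
// increment() is synchronized, each update happens while holding the
// object lock, so the final count is always 20000. Without the
// synchronized keyword, the two read-modify-write sequences could
// interleave and some increments would be lost.
class SafeCounter {
    private int count = 0;

    synchronized void increment() {
        count++;
    }

    synchronized int value() {
        return count;
    }
}

public class MutualExclusionDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.value()); // prints 20000
    }
}
```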
It occurs when two or more threads simultaneously update the same value and, as a consequence, leave the value in an undefined or inconsistent state. While a thread is inside a synchronized method of an object, all other threads that wish to execute this synchronized method or any other synchronized method of the object will have to wait until it gets the lock. This restriction does not apply to the thread that already has the lock and is executing a synchronized method of the object. Such a method can invoke other synchronized methods of the object without being blocked. The non-synchronized methods of the object can of course be called at any time by any thread. Below is an example that shows how synchronized methods and object locks are used to coordinate access to a common object by multiple threads. If the ‘synchronized’ keyword is removed, the messages are displayed in random fashion.

public class SyncMethodsExample extends Thread {
    static String[] msg = { "Beginner", "java", "tutorial,", ".,", "com", "is", "the", "best" };

    public SyncMethodsExample(String id) {
        super(id);
    }

    public static void main(String[] args) {
        SyncMethodsExample thread1 = new SyncMethodsExample("thread1: ");
        SyncMethodsExample thread2 = new SyncMethodsExample("thread2: ");
        thread1.start();
        thread2.start();
    }

    // waits for a random time (up to 3 seconds) to force thread interleaving
    void randomWait() {
        try {
            Thread.sleep((long) (3000 * Math.random()));
        } catch (InterruptedException e) {
            System.out.println("Interrupted!");
        }
    }

    public synchronized void run() {
        SynchronizedOutput.displayList(getName(), msg);
    }
}

class SynchronizedOutput {
    // if the 'synchronized' keyword is removed, the message
    // is displayed in random fashion
    public static synchronized void displayList(String name, String list[]) {
        for (int i = 0; i < list.length; i++) {
            SyncMethodsExample t = (SyncMethodsExample) Thread.currentThread();
            t.randomWait();
            System.out.println(name + list[i]);
        }
    }
}

CLASS LOCKS

Static methods synchronize on the class lock. Acquiring and relinquishing a class lock by a thread, in order to execute a static synchronized method, proceeds analogously to that of an object lock for a synchronized instance method.
A thread acquires the class lock before it can proceed with the execution of any static synchronized method in the class, blocking other threads wishing to execute any such methods in the same class. This, of course, does not apply to static, non-synchronized methods, which can be invoked at any time. Synchronization of static methods in a class is independent from the synchronization of instance methods on objects of the class. A subclass decides whether the new definition of an inherited synchronized method will remain synchronized in the subclass.

SYNCHRONIZED BLOCKS

The synchronized block allows execution of arbitrary code to be synchronized on the lock of an arbitrary object. The general form of the synchronized block is as follows:

synchronized (<object reference expression>) {
    <code block>
}

A compile-time error occurs if the expression produces a value of any primitive type. If execution of the block completes normally, then the lock is released. If execution of the block completes abruptly, then the lock is also released. A thread can hold more than one lock at a time. Synchronized statements can be nested. Synchronized statements with identical expressions can be nested. The expression must evaluate to a non-null reference value; otherwise, a NullPointerException is thrown. The code block is usually related to the object on which the synchronization is being done. This is the case with synchronized methods, where the execution of the method is synchronized on the lock of the current object:

public Object method() {
    synchronized (this) { // Synchronized block on current object
        // method block
    }
}

Once a thread has entered the code block after acquiring the lock on the specified object, no other thread will be able to execute the code block, or any other code requiring the same object lock, until the lock is relinquished. This happens when the execution of the code block completes normally or an uncaught exception is thrown.
Object specification in the synchronized statement is mandatory. A class can choose to synchronize the execution of a part of a method by using the this reference and putting the relevant part of the method in the synchronized block. The braces of the block cannot be left out, even if the code block has just one statement.

class SmartClient {
    BankAccount account;
    // …
    public void updateTransaction() {
        synchronized (account) { // (1) synchronized block
            account.update();    // (2)
        }
    }
}

In the previous example, the code at (2) in the synchronized block at (1) is synchronized on the BankAccount object. If several threads were to concurrently execute the method updateTransaction() on an object of SmartClient, the statement at (2) would be executed by one thread at a time, only after synchronizing on the BankAccount object associated with this particular instance of SmartClient. Inner classes can access data in their enclosing context. An inner object might need to synchronize on its associated outer object in order to ensure integrity of data in the latter. This is illustrated in the following code, where the synchronized block at (5) uses the special form of the this reference to synchronize on the outer object associated with an object of the inner class. This setup ensures that a thread executing the method setPi() in an inner object can only access the private double field myPi at (2) in the synchronized block at (5) by first acquiring the lock on the associated outer object. If another thread has the lock of the associated outer object, the thread in the inner object has to wait for the lock to be relinquished before it can proceed with the execution of the synchronized block at (5).
However, synchronizing on an inner object and on its associated outer object are independent of each other, unless enforced explicitly, as in the following code:

class Outer {                            // (1) Top-level class
    private double myPi;                 // (2)

    protected class Inner {              // (3) Non-static member class
        public void setPi() {            // (4)
            synchronized (Outer.this) {  // (5) Synchronized block on outer object
                myPi = Math.PI;          // (6)
            }
        }
    }
}

The example below shows how synchronized blocks and object locks are used to coordinate access to shared objects by multiple threads.

public class SyncBlockExample extends Thread {
    static String[] msg = { "Beginner", "java", "tutorial,", ".,", "com", "is", "the", "best" };

    public SyncBlockExample(String id) {
        super(id);
    }

    public static void main(String[] args) {
        SyncBlockExample thread1 = new SyncBlockExample("thread1: ");
        SyncBlockExample thread2 = new SyncBlockExample("thread2: ");
        thread1.start();
        thread2.start();
    }

    // waits for a random time (up to 3 seconds) to force thread interleaving
    void randomWait() {
        try {
            Thread.sleep((long) (3000 * Math.random()));
        } catch (InterruptedException e) {
            System.out.println("Interrupted!");
        }
    }

    public void run() {
        synchronized (System.out) {
            for (int i = 0; i < msg.length; i++) {
                randomWait();
                System.out.println(getName() + msg[i]);
            }
        }
    }
}

Synchronized blocks can also be specified on a class lock:

synchronized (<class name>.class) {
    <code block>
}

The block synchronizes on the lock of the object denoted by the reference <class name>.class. A static synchronized method classAction() in class A is equivalent to the following declaration:

static void classAction() {
    synchronized (A.class) { // Synchronized block on class A
        // …
    }
}

In summary, a thread can hold a lock on an object:

- by executing a synchronized instance method of the object
- by executing the body of a synchronized block that synchronizes on the object
- by executing a synchronized static method of a class
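The equivalence between a static synchronized method and a synchronized block on the class literal can be sketched as follows. This example is not from the tutorial; the class name is made up.

```java
// Both methods below acquire the same class lock (ClassLockDemo.class),
// so a thread inside staticAction() blocks a thread trying to enter the
// synchronized block in blockAction(), and vice versa. With two threads
// doing 5000 increments each, the final count is therefore always 10000.
public class ClassLockDemo {
    private static int counter = 0;

    static synchronized void staticAction() { // acquires the ClassLockDemo.class lock
        counter++;
    }

    static void blockAction() {
        synchronized (ClassLockDemo.class) {  // acquires the same class lock
            counter++;
        }
    }

    static int value() {
        synchronized (ClassLockDemo.class) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 5000; i++) staticAction(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 5000; i++) blockAction(); });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(value()); // prints 10000
    }
}
```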
http://www.wideskills.com/java-tutorial/java-threads-tutorial/p/0/1
In the first part of this article, TkCOMApplication, we have seen how to instantiate and use COM objects in Tcl/Tk. In this article, I would like to show you how to use them in Python, which is a powerful object oriented scripting language. This article assumes that the reader already has a good understanding of COM. This article alone also cannot teach Python fully; the reader is expected to get a hold on Python using other available resources. This article is intended only to demonstrate the use of COM objects in Python. Before showing how to instantiate a COM component and use its interfaces in Python, I would like to give a brief introduction to Python itself. Python is a very handy tool whenever you need to put together, in a few minutes, a small script that manipulates some files. Moreover, it is also useful for bigger projects, as you get all the power of data structures, modularization, object orientation, unit testing, profiling, and a huge API. Python has connections to almost everything. You have very advanced string and regular expression handling, you have threading, networking (with many protocols built-in), compression, cryptography, you can build GUIs with Tcl/Tk, and these are just a few of the built-in features. If you look around on the Internet, you will be surprised to see how many applications and libraries have Python bindings: MySQL, ImageMagick, SVN, Qt, libXML, and so on. There are applications that provide a plug-in interface through Python, like Blender and GIMP. Beyond that, you can even write extension modules for Python in C or C++ using the Python/C API, or in the reverse case, you can use the Python interpreter as a module of your native application, that is, you can embed Python into your software. My article Python_Embedded_Dialog shows the use of the Python interpreter for parsing mathematical expressions and making use of the results in GUI programming in C/C++.
However, unlike Tcl/Tk, Python doesn't have GUI support out of the box. It supports GUI programming via extensions written for existing toolkits like Tk (TkInter), Qt (PyQt), GTK+ (PyGtk), Fltk (PyFltk), and wxWidgets (wxPython), to name a few popular toolkits. TkInter is, however, the default implementation packaged with Python. Python is available for most platforms: Linux, Solaris, Windows, Mac, AIX, BeOS, OS/2, DOS, QNX, or PlayStation, for example. To use Python on Windows, download the installer from. The installer will install a Python interpreter shell and a useful editor called the PythonWin editor. We also need to install the comtypes extension, which adds COM support to Python. This is available for the Windows platform only. You can download the corresponding version of comtypes from. As this article cannot explain much of the Python theory or syntax, let's get started with some practice right away. Let us see a typical hello world in Python. Start the Python interpreter and type the following line followed by a carriage return. print "Hello World!" The interpreter will print out the statement: Hello World! Now let us see the power of the Python interpreter as a mathematical calculator. Type the following lines, each followed by a carriage return, and see the results. from math import* x = sin(pi/4) print x x = 10.0/(12.5*1.25)*log(2) print x pow(12, 2) And now let us see a small class. Note that Python is a language which depends heavily on indentation; proper indentation is the only way to delimit blocks of code. Type the following lines, each followed by a carriage return. For clarity, I have shown the '__' symbol as one level of indent. class Point: __x = 100 __y = 200 __def setCoord(self, x, y): ____self.x = x ____self.y = y __def printCoord(self): ____print "X %0.3f Y %0.3f" % (float(self.x), float(self.y)) That's the end of the class definition.
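The snippets above use Python 2 syntax, where print is a statement. As a sketch, the same calculator session in Python 3, where print is a function, looks like this:

```python
# Python 3 version of the calculator session shown above
from math import sin, pi, log

x = sin(pi / 4)
print(x)

x = 10.0 / (12.5 * 1.25) * log(2)
print(x)

print(pow(12, 2))
```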
Finally, one more carriage return to get the prompt back. Now let us instantiate an object of the class and use its methods; type the following lines, each followed by a carriage return, and see the results. pnt = Point() pnt.printCoord() pnt.setCoord(10, 20) pnt.printCoord() All this can be put into a file with the extension .py and executed by double-clicking it in the explorer. For GUI applications, using the extension .pyw will show only the GUI window; the console window will not be shown. I hope you have now got a feel for the Python language. Let us see how to use TkInter to create a GUI similar to the one we saw in the first part of this article, and then instantiate a COM object in the SimpleCOM library and use its methods. In this article, we have a COM library called SimpleCOM (source code and VS2008 project provided in the download) which has an object called GraphicPoint. GraphicPoint implements three interfaces, all derived from IDispatch for scripting support: IPoint (with the methods SetCoord, GetCoord, and Distance, and the properties X, Y, and Z), IGraphicPoint (with the method Draw), and IColor (with the methods SetColor and GetColor, which work with OLE_COLOR values). We will instantiate 2 GraphicPoint objects, get their IPoint interfaces, and set the coordinates of the points using the method SetCoord. The coordinates are obtained from the user input through the GUI that we will develop using Python TkInter. We will also set the colors for both the points by getting the interface IColor. Here, we will see how to convert RGB components, obtained from the color dialog, into OLE_COLOR. Then we will calculate the distance between the 2 points by calling the Distance method from the IPoint interface. We will also simulate the drawing of the points by popping up message boxes which show the coordinates and colors of each point. For this, we call the Draw method of the IGraphicPoint interface. When the points are instantiated, the Python code will also pop up a message box showing the coordinates we have set.
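The actual computation happens inside the COM object, but to see the geometry the Distance call presumably performs, here is a plain-Python sketch with no COM involved (the class and method names here are illustrative, not part of the SimpleCOM library):

```python
import math

class PlainPoint:
    """Plain-Python stand-in for the COM GraphicPoint's IPoint interface."""
    def __init__(self):
        self.x = self.y = self.z = 0.0

    def set_coord(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def distance(self, other):
        # Euclidean distance between two 3D points
        return math.sqrt((self.x - other.x) ** 2 +
                         (self.y - other.y) ** 2 +
                         (self.z - other.z) ** 2)

p1, p2 = PlainPoint(), PlainPoint()
p1.set_coord(0, 0, 0)
p2.set_coord(3, 4, 0)
print(p1.distance(p2))  # 5.0 for a 3-4-5 triangle in the XY plane
```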
For this we will call the X, Y and Z properties of the point object. All this will cover the activities of instantiating a COM object, querying for the respective interfaces, and using the methods and properties. The script begins with the necessary imports for the system-related, TkInter, and COM-related extensions. import sys # for TkInter GUI support from Tkinter import * import tkMessageBox import tkColorChooser # for COM support import comtypes.client as cc import comtypes # Load the type library registered with the Windows registry tlb_id = comtypes.GUID("{FA3BF2A2-7220-47ED-8F07-D154B65AA031}") cc.GetModule((tlb_id, 1, 0)) # Alternately you can use this method also by directly loading the dll file #cc.GetModule("SimpleCOM.dll") # Import the SimpleCOMLib library from comtypes import comtypes.gen.SimpleCOMLib as SimpleCOMLib Now the class definition for the application GUI: # Application class for the GUI class AppDlg: # member variables X1 = 0.0 Y1 = 0.0 Z1 = 0.0 X2 = 0.0 Y2 = 0.0 Z2 = 0.0 distVal = 0.0 # methods # constructor def __init__(self, master): master.title("Test COM InterOp in Python/Tkinter") master.maxsize(400, 210) master.minsize(400, 210) frame1 = Frame(master, padx=5, pady=5) frame1.pack(anchor=N, side="top", fill=X, expand=Y) # Point 1 Data self.labelframe1 = LabelFrame (frame1, padx=5, pady=5, relief="groove", text="Point 1 Data") self.labelframe1.pack(side=LEFT, fill=BOTH, expand=Y) self.frameX1 = Frame(self.labelframe1) self.frameY1 = Frame(self.labelframe1) self.frameZ1 = Frame(self.labelframe1) self.frameX1.pack() self.frameY1.pack() self.frameZ1.pack() self.labelX1 = Label(self.frameX1, text="X") self.labelX1.pack(side=LEFT, padx=2, pady=2) self.entryX1 = Entry(self.frameX1) self.entryX1.insert(0, self.X1) self.entryX1.pack() ... ... <code skipped for brevity> ...
# variable to store colors self.colorTuple1 = ((255, 255, 255), '#ffffff') self.colorTuple2 = ((255, 255, 255), '#ffffff') # Apply button callback def onApply(self): self.X1 = self.entryX1.get() self.Y1 = self.entryY1.get() self.Z1 = self.entryZ1.get() self.X2 = self.entryX2.get() self.Y2 = self.entryY2.get() self.Z2 = self.entryZ2.get() #print self.colorTuple1 #print self.colorTuple2 if self.colorTuple1[0] is None: r = 255 g = 255 b = 255 else: r = self.colorTuple1[0][0] g = self.colorTuple1[0][1] b = self.colorTuple1[0][2] self.color1 = (((0xff & b) << 16) | ((0xff & g) << 8) | (0xff & r)) #print "Color Point1 is %d\n" % self.color1 if self.colorTuple2[0] is None: r = 255 g = 255 b = 255 else: r = self.colorTuple2[0][0] g = self.colorTuple2[0][1] b = self.colorTuple2[0][2] self.color2 = (((0xff & b) << 16) | ((0xff & g) << 8) | (0xff & r)) #print "Color Point2 is %d\n" % self.color2 # Create COM Point1 self.aGrPoint = cc.CreateObject ("SimpleCOM.GraphicPoint", None, None, SimpleCOMLib.IGraphicPoint) self.aPoint = self.aGrPoint.QueryInterface(SimpleCOMLib.IPoint) #help(self.aPoint) self.aPoint.SetCoord(float(self.X1), float(self.Y1), float(self.Z1)) tkMessageBox.showinfo("From Python-Tkinter App", "Point 1 Created At X%0.3f, Y%0.3f, Z%0.3f"\ % (float(self.aPoint.X), float(self.aPoint.Y), float(self.aPoint.Z))) self.aColor = self.aGrPoint.QueryInterface(SimpleCOMLib.IColor) if self.colorTuple1: self.aColor.SetColor(self.color1) self.aGrPoint.Draw() # Create COM Point2 self.aGrPoint2 = cc.CreateObject("SimpleCOM.GraphicPoint", None, None, SimpleCOMLib.IGraphicPoint) self.aPoint2 = self.aGrPoint2.QueryInterface(SimpleCOMLib.IPoint) #help(self.aPoint2) self.aPoint2.SetCoord(float(self.X2), float(self.Y2), float(self.Z2)) tkMessageBox.showinfo("From Python-Tkinter App", "Point 2 Created At X%0.3f, Y%0.3f, Z%0.3f"\ % (float(self.aPoint2.X), float(self.aPoint2.Y), float(self.aPoint2.Z))) self.aColor2 = self.aGrPoint2.QueryInterface(SimpleCOMLib.IColor) if self.colorTuple2: self.aColor2.SetColor(self.color2) self.aGrPoint2.Draw() self.distVal = self.aPoint.Distance(self.aPoint2) self.entryDist.delete(0, END) self.entryDist.insert(0, self.distVal) # Color selection button callbacks def onSelectColor1(self): # tkColorChooser returns a tuple containing the RGB tuple and the html color value self.colorTuple1 = tkColorChooser.askcolor() if self.colorTuple1: self.colorbtn1.configure(bg=self.colorTuple1[1]) def onSelectColor2(self): self.colorTuple2 = tkColorChooser.askcolor() if
self.colorTuple2: self.colorbtn2.configure(bg=self.colorTuple2[1]) # Start the TkInter root window rootwin = Tk() # Instantiate the GUI class object AppDlg(rootwin) # Run the TkInter main event loop rootwin.mainloop() The callbacks for the color selection buttons simply show a color chooser dialog and store the selected color in a member variable, which is accessed by the callback procedure for the Apply button. The color is stored as a tuple of three RGB values. We take each element and, with the following code, convert it into the OLE_COLOR equivalent. # code for converting RGB values into OLE_COLOR self.color1 = (((0xff & b) << 16) | ((0xff & g) << 8) | (0xff & r)) To create an object of the COM class, use the comtypes.client method CreateObject, and to access an interface, call the QueryInterface method on the interface variable, e.g. self.aGrPoint = cc.CreateObject("SimpleCOM.GraphicPoint", None, None, SimpleCOMLib.IGraphicPoint) self.aPoint = self.aGrPoint.QueryInterface(SimpleCOMLib.IPoint) I hope you enjoyed this article. The Python script file for this demo and the COM VS2008 project file are included in the downloadable zip file. Windows 7 users will need to run Visual Studio in Administrator mode to build the project and register the COM DLL. The power of COM and IDispatch is once again proven. The flexibility it provides to access components from RAD tools and scripts is immense. Python has many extensions for developing robust and flexible applications like numerical and scientific applications, games, CAD, and so on.
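The RGB-to-OLE_COLOR expression above can be wrapped in a small helper. OLE_COLOR packs the channels in the low three bytes as 0x00BBGGRR, so red ends up in the least significant byte:

```python
def rgb_to_ole_color(r, g, b):
    """Pack 8-bit R, G, B channels into an OLE_COLOR (0x00BBGGRR) integer."""
    return ((0xff & b) << 16) | ((0xff & g) << 8) | (0xff & r)

print(hex(rgb_to_ole_color(255, 0, 0)))    # pure red  -> 0xff
print(hex(rgb_to_ole_color(0, 0, 255)))    # pure blue -> 0xff0000
```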
http://www.codeproject.com/Articles/73880/Using-COM-Objects-in-Scripting-Languages-Part-2-Py?fid=1568983&df=90&mpp=10&sort=Position&spc=None&tid=4259794
I was going over the stuff I'm going to cover next week in my "From Flash to iPhone" workshop at Flash on Tap, and I started getting into how an iPhone app launches when you tap its icon. It might be a bit heavy to go into detail on it at the workshop, but I thought it was pretty interesting and writing about it might shed some light for developers moving past the newbie phase. Coming from the Flash/Flex world, app start up is pretty simple. You specify a main class. When the SWF is launched, that class is instantiated, calling its constructor. Whatever is in the constructor of that main class is what happens. But in your typical iPhone apps, you have C files, Objective-C classes, xibs, pchs, frameworks, and plists. Which is the chicken, which is the egg, and which comes first? First of all, realize that Objective-C is an extension of C that adds object-oriented capabilities. If you've ever taken a C course, or cracked open a book on C, you're going to see a file like this: #include <stdio.h> int main(int argc, char *argv[]) { printf("Hello World"); return 0; } When you run the program, it looks for a function called "main" and runs it. It passes in the number of command line arguments (int argc) and an array of strings that are the arguments. It executes whatever code is in that function and then exits, returning an int; usually 0 means it exited normally, anything else means some kind of error. Well, it turns out that Objective-C programs launch in exactly the same way. In fact, if you fire up XCode and create a Window Based Application named "Test", you'll see a file under "Other Sources" called "main.m". It contains this: int main(int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int retVal = UIApplicationMain(argc, argv, nil, nil); [pool release]; return retVal; } This is the first bit of code that gets executed when you launch an app. For the most part, you don't want to mess with it, unless you really know what you are doing. So what does it do?
It creates an autorelease memory pool and then calls a function called UIApplicationMain, passing in the command line arguments and two other nil arguments. We'll see what at least one of those is in a bit. UIApplicationMain creates whatever UI you have created and starts the program loop, which means it just loops and sits there waiting for events. Eventually at some point, the user quits the program. This ends the execution of the UIApplicationMain function, returning an int return value. The main function releases the memory pool it created and returns that return value to the system. That's all. So if UIApplicationMain is a standard function in UIKit, how does it know about your classes and interfaces and stuff? Well, one of the first things it does is to read your Info.plist file, which looks something like this, if you view it as xml: <key>CFBundleDisplayName</key> <string>${PRODUCT_NAME}</string> <key>CFBundleExecutable</key> <string>${EXECUTABLE_NAME}</string> <key>CFBundleIconFile</key> <string>icon.png</string> <key>CFBundleIdentifier</key> <string>com.yourcompany.${PRODUCT_NAME:identifier}</string> <key>CFBundleInfoDictionaryVersion</key> <string>6.0</string> <key>CFBundleName</key> <string>${PRODUCT_NAME}</string> <key>CFBundlePackageType</key> <string>APPL</string> <key>CFBundleSignature</key> <string>????</string> <key>CFBundleVersion</key> <string>1.0</string> <key>LSRequiresIPhoneOS</key> <true/> <key>NSMainNibFile</key> <string>MainWindow</string> </dict> </plist> You'll notice the last key/value pair there is NSMainNibFile, which is set to MainWindow. You can also set this value in the Target info window: You see the "Main Nib File:" entry way down at the bottom. Setting the value in this window simply writes it into the plist file. And what do you know, we happen to have a .xib (nib) file called MainWindow.xib in our project! Coincidence? I think not. Let's take a look at that. Double click on it to open it in Interface Builder.
The window itself is uninteresting, but the Document window shows us all the stuff that's defined in here: So main starts UIApplicationMain, which looks at the Info.plist file, sees this nib is the main nib, and loads it in. For most of the items in there that are linked to classes, it will create an instance of that class, deserialize any properties, styles, etc. you might have set on it, and then hook up any outlets or actions you might have set on it. So what do we have here? First we have an entry called "File's Owner". This is linked to the UIApplication class. So an instance of UIApplication is created. Yay! As clarified in the comments, UIApplication is actually created by the call to UIApplicationMain. The entry here merely links to that instance that gets created. It exists here mainly so we can set its delegate, as we will see soon. Then we have "First Responder". This is a bit of a special case, i.e. I don't fully understand it. But my understanding is that this does not get instantiated per se, but is a fill-in for another class that will be responding to events. Next up we have "Test App Delegate", which is linked to the class TestAppDelegate. If you jump back into XCode, you'll see that you do indeed have a class with that name. So this class is instantiated. Finally, we have "Window", which is of type UIWindow. So we get one of those. The result of this is that we now have instances of UIApplication, TestAppDelegate, and UIWindow created. Now, if you click on the Window object and look at the Attributes tab in the Inspector, you'll see there are various properties you can set on it, such as scale mode, alpha, background color, tag, opaque, hidden, etc. The size tab also has dimensional and layout properties that can be assigned. Any properties set here are then set on the newly created objects. Finally, we go to the connections tab. This is where our outlets and actions are defined.
If you look at UIApplication, you'll see it has an outlet called delegate, which is set to Test App Delegate. This is the same as calling setDelegate on the UIApplication instance and passing in the instance of TestAppDelegate. Also note that TestAppDelegate has an outlet called window which is set to the Window instance. We are not going to dive into UIApplication's code, but if you look at TestAppDelegate.h, you'll see it has a @property called window, which is indeed a UIWindow. #import <UIKit/UIKit.h> @interface TestAppDelegate : NSObject <UIApplicationDelegate> { UIWindow *window; } @property (nonatomic, retain) IBOutlet UIWindow *window; @end So now our UIApplication instance has a reference called delegate which points to our instance of TestAppDelegate, and that has a reference called window which points to the UIWindow instance. We are done with the nib. Now, UIApplication starts its event loop. When certain events occur, it passes them over to its assigned delegate to handle. The first, and most important, event it delegates is "applicationDidFinishLaunching". This means that all this plist and nib loading and deserializing and whatever else it needs to do is done. So it calls applicationDidFinishLaunching on its delegate. And, look at that, our TestAppDelegate class just happens to have a method with that very same name. - (void)applicationDidFinishLaunching:(UIApplication *)application { // Override point for customization after application launch [window makeKeyAndVisible]; } All this does by default is take that window that was set and make it the key, visible window. And that's where it ends. If you want to do more, you would add code in that applicationDidFinishLaunching method to create views, view controllers, or whatever else. Now, if you had created a View Based application, a lot of this would be the same, but you'd also have a TestViewController nib. This would be loaded by the MainWindow nib.
This new nib would contain a reference to the TestViewController class, and a UIView linked to the view outlet on that view controller. So an instance of this view controller and a UIView would be created. In applicationDidFinishLaunching, this view would be added to the window: - (void)applicationDidFinishLaunching:(UIApplication *)application { // Override point for customization after app launch [window addSubview:viewController.view]; [window makeKeyAndVisible]; } OK, but say you don't even want to use nibs. Can you just create a UIWindow by yourself and make it key and visible? And can you specify an app delegate without a nib? Yes and yes. Back in our Test application, delete the MainWindow.xib file completely. Also go into Info.plist and remove the main nib file entry. Then open up main.m. Remember when we called UIApplicationMain and we passed in two nil args at the end? Now we see what those are for. They are string representations of two classes. The first one is the principal class name. If you pass in nil, this will use UIApplication as the principal class. Unless you know enough about UIApplication to recreate it or at least subclass it, you probably don't ever want to mess with this. The last argument is the name of the delegate class you want to use. Again, this is a string. So, now that we've gotten rid of our nib, we can tell UIApplication to use our TestAppDelegate directly: #import <UIKit/UIKit.h> int main(int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int retVal = UIApplicationMain(argc, argv, nil, @"TestAppDelegate"); [pool release]; return retVal; } So now we have our UIApplication instance and our TestAppDelegate instance. But we need our window. We can create this in applicationDidFinishLaunching.
It usually looks something like this: - (void)applicationDidFinishLaunching:(UIApplication *)application { window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]]; [window setUserInteractionEnabled:YES]; [window setMultipleTouchEnabled:YES]; [window setBackgroundColor:[UIColor whiteColor]]; [window makeKeyAndVisible]; } If you now wanted to add a view, you could do so with something like this: UIView *view = [[UIView alloc] initWithFrame:window.frame]; [view setBackgroundColor:[UIColor redColor]]; [window addSubview:view]; Well, hopefully that explains a bit about what happens when you fire up your app, and gives you a little more control and a few more options on how to do things. You say, "First we have an entry called 'File's Owner' This is linked to the UIApplication class. So an instance of UIApplication is created." Whenever you see that translucent type of box in a nib, it is referencing what IB calls an "External Object". If you read the description for this, it says, "This object is a placeholder for an object that exists outside of the contents of this document." That means the nib does not create the object. It is created externally, but a placeholder exists in the nib to allow you to create connections to it. So how does UIApplication get created? Take a look at the documentation for UIApplicationMain. It says that "This function is called in the main entry point to create the application object", so it is UIApplicationMain that instantiates UIApplication. Now take a look at the delegate. In the nib, it is not a translucent box, it is a solid box. That means the nib creates the instance of it. Back to the UIApplicationMain documentation: "Specify nil if you load the delegate object from your application's main nib file". Thanks for the clarification. I kind of realized that as I was writing the last part of the entry. Obviously, if you don't load a nib, the UIApplication still gets created, and it occurs in that UIApplicationMain function.
I will correct this. Keith, I am sure haXe could do with your input; they have started testing the feasibility of compiling haXe (C++) for the iPhone. It's early days, so I doubt there is anything you can download yet; Hugh only started looking the other day, but exciting times. Of course I am still hoping that they look at Android soon, but theoretically it will be the same haXe. I hope the tip-off is useful (if you did not catch the news elsewhere). If you want some help on moving from AS3 to haXe, just email me, although I have not yet tried C++ haXe; last time I tried, it was tricky to set up on a Mac as it is still fairly new. Cheers ;j yeah, i did hear mention of that project. i think i'm going to roll with cocos2d for a while. not really looking at helping reinvent a better wheel, more in using something that already works. Supplying the right bolt is maybe a better analogy. The wheel tooling is in essence not a contrived framework but more a language whore, created with a design based on evolved Flash, so it tends to be surprisingly pliable and good at real UI. I suspect it is a new concept in language conception that goes beyond, and will succeed where bloated dot-net cross-language efforts (i.e. IronRuby and IronPython) are destined to fail, but haXe is still very raw and sometimes dirty, so only time will tell. However it has an ability and design that allows haXe platform abstraction to an extent that write-once, distribute-everywhere works. Sorry if I diverge from the iPhone focus, but I believed it was worth contrasting my view with a native iPhone approach. I may well be wrong, but hopefully the evaluation will confirm your commitment and test some of your current assumptions. Thank you. I go crazy if I can't follow program flow from point A to B. And using XCode and IB there's a lot of points missing… Hi: I noticed this article is more than 2 years old, and nevertheless it helped me a lot to understand the launch sequence that happens behind the curtains in a Cocoa app. Thanks a lot!
http://www.bit-101.com/blog/?p=2159
Nested Page Flows - Introduction - Basics - More Nested Page Flow Features - Other Notes Introduction By default, executing an action in a new page flow causes the current page flow to be discarded. This behavior allows you to create separate controllers for different sections of your project, and it minimizes the amount of data kept in the user session at one time. Each page flow manages its own state and logic. "Nested page flows" give you an even greater ability to break up your project into separate, self-contained bits of functionality. At its heart, "nesting" is a way of pushing aside the current page flow temporarily and transferring control to another page flow with the intention of coming back to the original one. So when would you use this? Basics Let's start with a very simple example. You can find the code for this under basicNesting in the NetUI Samples application. Here, we have a "main" page flow which forwards to a nested page flow, which later returns to the main page flow. The flow looks like this: Here is the code for the main page flow: @Jpf.Controller( simpleActions={ @Jpf.SimpleAction(name="begin", path="index.jsp"), @Jpf.SimpleAction(name="goNested", path="../nested/NestedFlow.jpf"), @Jpf.SimpleAction(name="nestedDone", path="success.jsp") } ) public class MainFlow extends PageFlowController { } As you can see, the begin action forwards to index.jsp, which allows you to raise the goNested action. This action enters the nested page flow simply by forwarding to it. Any time you hit the URL for a nested page flow (or any one of its actions or pages), you enter the nested page flow, and the current one is pushed aside. When the nested page flow returns, it causes the nestedDone action to run, and this action simply forwards to success.jsp. So how does the nested page flow return to the current one, and raise the nestedDone action?
Here is the code for the nested flow: @Jpf.Controller( nested=true, simpleActions={ @Jpf.SimpleAction(name="begin", path="index.jsp"), @Jpf.SimpleAction(name="done", returnAction="nestedDone") } ) public class NestedFlow extends PageFlowController { } Note the nested=true, which defines this as a nested page flow. Also note the returnAction attribute on the simple action done. When this action is executed, it returns to the original page flow (MainFlow) and raises its nestedDone action. This is called an exit point of the nested page flow. The nested flow looks like this: More Nested Page Flow Features Often, you want to do more than simply invoke and return from a nested page flow. For instance, you may want to gather data from a nested page flow for use in the current page flow. In the example below (found under nesting in the NetUI Samples project), the user is forwarded to a nested page flow (/nesting/chooseAirport/chooseAirport.jpf), which is a wizard that helps the user find an airport. The nested page flow returns the chosen airport to the original page flow, which continues with its sequence. First, here is a diagram of the main page flow: This page flow demonstrates two new features related to nesting: - The nested page flow returns a form bean (ChooseAirport.Results) when it raises the chooseAirportDone action. - If the nested page flow raises a chooseAirportCancelled action, the page flow will go back to the most recent page shown to the user, whatever that page is. Returning data from a nested page flow Here is a diagram of the nested page flow ChooseAirport.jpf, with the "happy path" to the chooseAirportDone return-action highlighted in red: During the course of this page flow, a member variable called _currentResults, of type ChooseAirport.Results, is kept. As the user moves through the page flow, possibly re-trying and changing the desired result, this member variable is kept up to date.
In the end, it is returned as an "output form bean" along with the chooseAirportDone return-action, using the outputFormBean attribute. This is what it looks like in the annotations: @Jpf.SimpleAction(name="confirmResults", returnAction="chooseAirportDone", outputFormBean="_currentResults") This annotation simply specifies that the value of _currentResults will be sent along with the return-action chooseAirportDone. @Jpf.Action( forwards={ @Jpf.Forward(name="done", returnAction="chooseAirportDone", outputFormBeanType=Results.class) } ) public Forward confirmResults() { Results results = initialize a Results object return new Forward("done", results); } In the original page flow, there is a chooseAirportDone method that accepts this form bean as an argument, like this: @Jpf.Action( ... ) protected Forward chooseAirportDone(ChooseAirport.Results results) { ... } As you can see, the page flow handles this returned form bean just like it would handle a form bean posted from a page. Navigating back to the original page of the main flow In the current example, the main page flow goes back to the most recent page whenever the nested page flow raises the chooseAirportCancelled action. We are referring to this section of the diagram of the main page flow: To navigate to the most recent page, the page flow uses the navigateTo attribute on a @Jpf.SimpleAction (or a @Jpf.Forward ), like this: @Jpf.SimpleAction(name="chooseAirportCancelled", navigateTo=Jpf.NavigateTo.currentPage) Or it could go back to the previous page: @Jpf.SimpleAction(name="chooseAirportCancelled", navigateTo=Jpf.NavigateTo.previousPage) Or it could re-run the most recent action in the current page flow: @Jpf.SimpleAction(name="chooseAirportCancelled", navigateTo=Jpf.NavigateTo.previousAction) Passing Data to a Nested Page Flow Sometimes you will want to pass data into a nested page flow.
While this may be less common than returning data from a nested page flow, it is useful for initializing data in the flow. To do this, you will add a form bean to the begin action of the nested page flow, and in the calling page flow you will pass an instance of this bean on the Forward object that is sent to the nested page flow. Add a form bean to the nested page flow's begin action To add a form bean, simply add a single argument to a begin action method: @Jpf.Action( forwards={ @Jpf.Forward(name="index", path="index.jsp") } ) public Forward begin(InitBean initBean) { ... } The bean type can be any class of your choosing; for instance, you can make it a String, which means that the nested page flow requires a String to be initialized: @Jpf.Action( forwards={ @Jpf.Forward(name="index", path="index.jsp") } ) public Forward begin(String initString) { ... } Pass the form bean from the calling page flow to the nested page flow You can pass the initialization bean to the nested page flow by adding it to the Forward object that is sent to the nested page flow: @Jpf.Action( forwards={ @Jpf.Forward(name="nestedFlow", path="/nested/NestedFlow.jpf", outputFormBeanType=InitBean.class) } ) public Forward goNested() { InitBean initBean = initialize the bean return new Forward("nestedFlow", initBean); } Note that the outputFormBeanType annotation attribute is optional; it mainly helps tools understand the output of the action, and it ensures that an incompatible type will not be passed. Other Notes Here are some other notes about using nested page flows: - Aside from the nested=true on the @Jpf.Controller annotation, and defining exit points through returnAction, nested page flows are built just like other non-nested page flows. - You enter a nested page flow by hitting its URL, by hitting the URL for any of its actions, or by hitting the URL for any of its pages (pages in the same directory path). 
When nesting occurs, the original page flow is pushed onto the "nesting stack", and is popped off the stack when the nested page flow hits an exit point (through a returnAction attribute). - Nested page flows can nest themselves. - While in a nested page flow, you can get a reference to the calling page flow through PageFlowUtils.getNestingPageFlow().
http://beehive.apache.org/docs/1.0.2/netui/nestedPageFlow.html
Silly question I am sure, and I'm almost positive I know the answer to this. If I have a static method whose return type is a non-static class type, and the static method has initialized the object to return (but has not returned it yet), is that thread safe without locking the method? I believe it is, since it is not using a shared object: if the static method is called in succession by many threads, each will return its own correct "version" of the class object, correct? For example (this is really quick so just follow the general idea): public class myClass { int i; string s; public myClass () { } } public class myOtherClass { public myOtherClass() { } public static myClass myMethod(int i, string s) { myClass mc = new myClass(); mc.i = i; mc.s = s; return mc; } } If multiple threads call myMethod() at the same time, is that thread safe? I would say yes. Anyone disagree? Your method example creates a new instance of the myClass class EACH TIME it is called, and stores the values of the parameters in its own instance variables. This is thread-safe. I'm not sure that ALL static methods are thread-safe though. It is possible to access static global variables that are potentially shared by multiple threads from within a static method. I'm sure there are probably other ways in which a static method could be thread-unsafe, but it's late here in Australia and I can't think of another example right now, sorry! Someone please correct me if I'm wrong! That's kinda what I thought. Just wanted to make sure, I was doubting myself :)
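To make the reasoning above concrete, here is a hedged C# sketch (not from the thread; the names follow the example) showing why per-call local instances are safe: each thread constructs and receives its own object, so no state is shared between calls.

```csharp
using System;
using System.Threading;

public class MyClass
{
    public int I;
    public string S;
}

public class MyOtherClass
{
    // Each call creates and returns a NEW MyClass instance.
    // Only locals and parameters are touched, so concurrent calls are safe.
    public static MyClass MyMethod(int i, string s)
    {
        var mc = new MyClass();
        mc.I = i;
        mc.S = s;
        return mc;
    }
}

public static class Demo
{
    public static void Main()
    {
        var threads = new Thread[4];
        for (int t = 0; t < threads.Length; t++)
        {
            int id = t;  // capture a per-iteration copy
            threads[t] = new Thread(() =>
            {
                var mc = MyOtherClass.MyMethod(id, "thread " + id);
                if (mc.I != id) throw new Exception("race detected!");
            });
            threads[t].Start();
        }
        foreach (var th in threads) th.Join();
        Console.WriteLine("Each thread got its own instance.");
    }
}
```

As the answer notes, this guarantee disappears the moment the static method reads or writes a static field shared across threads.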
http://www.daniweb.com/software-development/csharp/threads/440389/static-method-returning-a-non-static-class-and-thread-safety
CC-MAIN-2014-15
refinedweb
284
67.28
here is the problem: Write a program that computes the following sum: sum = 1.0/1 + 1.0/2 + 1.0/3 + 1.0/4 + 1.0/5 + .... + 1.0/N. N is an integer limit that the user enters. Enter N 4 Sum is: 2.08333333333

here is what I have:

import java.util.Scanner;

class nSum
{
    public static void main(String[] args)
    {
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter an integer: ");
        int N = scan.nextInt();
        double sum = 1.0;
        double count = 0.0;
        while (N > 0)
        {
            count = count + 1.0;
            sum = sum / count;
            N = (N - 1);
        }
        System.out.println("The sum is: " + sum);
        System.out.println("");
    }
}

what am I doing wrong here? as far as I can tell the problem is within the while loop but the math looks right. when it's run, it seems to be doing integer math instead of floating point math and I have no idea why. any help would be greatly appreciated! :/
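The issue isn't integer math: the loop divides the running total by the counter (sum = sum / count) instead of adding each reciprocal to it. A short Python sketch of the corrected accumulation, starting the total at 0.0 and adding 1.0/k each pass:

```python
def harmonic(n):
    # Start from 0.0 and ADD each reciprocal; the posted Java code
    # instead divided the running sum by the counter on every pass.
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k
    return total

print(harmonic(4))  # ~2.0833, matching the expected "Sum is: 2.08333333333"
```

The same two-line change in the Java version (initialize sum to 0.0, then sum = sum + 1.0 / count inside the loop) produces the expected output for N = 4.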
http://www.javaprogrammingforums.com/loops-control-statements/35693-very-beginner-questions-loops-2nd-post.html
CC-MAIN-2014-35
refinedweb
168
79.16
ml Word2Vec's findSynonyms methods depart from mllib in that they return distributed results, rather than the results directly:

def findSynonyms(word: String, num: Int): DataFrame = {
  val spark = SparkSession.builder().getOrCreate()
  spark.createDataFrame(wordVectors.findSynonyms(word, num)).toDF("word", "similarity")
}

What was the reason for this decision? I would think that most users would request a reasonably small number of results back, and want to use them directly on the driver, similar to the take method on dataframes. Returning parallelized results creates a costly round trip for the data that doesn't seem necessary. The original PR: Manoj Kumar - do you perhaps recall the reason? Relates to SPARK-19866 (Add local version of Word2Vec findSynonyms for spark.ml: Python API) - Resolved
https://issues.apache.org/jira/browse/SPARK-17629
CC-MAIN-2019-39
refinedweb
123
55.54
PythonScript - how to access mouse coordinates? I’ve found out that PythonScript supports this Scintilla call: editor.positionFromPoint(x, y) Which can be very useful for mouse features. For example I can read or write something without moving the caret. But how do I read the current mouse coordinates inside the editor widget?? I’ve read all the PythonScript reference but can’t figure it out, please help. Anyway, it is definitely possible to do. I’m curious to know what you have in mind to use this functionality for? @Scott-Sumner said: Interesting… Seems that you’ve already done something similar ;) If there were some sources with examples, it would be really great. I’m curious to know what you have in mind to use this functionality for? Sure, something simple that comes to my mind: E.g. I have the caret on the line X, then I hover the mouse above some line Y and run the script which does e.g.: - retrieve the position/line number under the mouse cursor to get Y; - depending on algorithm, make some manipulations, e.g. swap the lines or simply line-select or add lines to a python list without need to move the caret. BTW, afaik, there is no command to programmatically move the caret to where the mouse is? This would be the simplest application indeed. So I think PythonScript should have a helper function, e.g. getmouseXY() for the active editor surface. Otherwise the above mentioned Scintilla command can’t be used directly. - Claudia Frank last edited by Claudia Frank What about using the scintilla *dwell* notification and setting a reasonable *dwelltime*? Cheers Claudia @Mikhail-V @Claudia-Frank The default dwell time is 10000000 milliseconds, too large to be useful, so one assumes it is set with that value to be “out of the way”, i.e., never trigger.
So here’s some sample code to illustrate what Claudia mentioned; running it will show how the callbacks trigger:

editor.setMouseDwellTime(750)  # set a more reasonable value than the default

def callback_sci_DWELLEND(args):
    print "sci_DWELLEND", args
editor.callback(callback_sci_DWELLEND, [SCINTILLANOTIFICATION.DWELLEND])

def callback_sci_DWELLSTART(args):
    print "sci_DWELLSTART", args
editor.callback(callback_sci_DWELLSTART, [SCINTILLANOTIFICATION.DWELLSTART])

And here’s some sample output to the console for various mouse moves and pauses:

sci_DWELLSTART {'y': 74, 'position': 166, 'code': 2016, 'x': 599}
sci_DWELLEND {'y': 74, 'position': 166, 'code': 2017, 'x': 599}
sci_DWELLSTART {'y': 159, 'position': 350, 'code': 2016, 'x': 295}
sci_DWELLEND {'y': 159, 'position': 350, 'code': 2017, 'x': 295}
sci_DWELLSTART {'y': 151, 'position': -1, 'code': 2016, 'x': 303}
sci_DWELLEND {'y': 151, 'position': -1, 'code': 2017, 'x': 303}

Maybe I still don’t understand the use-case…but that’s OK. :-) @Scott-Sumner Thanks. Your example worked. Still I can’t get it to work for a real application. Here is a script that I use for a test:

def p(s):
    console.write(s)

editor.setMouseDwellTime(0)
p("hello")

def callback_sci_DWELLEND(args):
    p("END")
#editor.callback(callback_sci_DWELLEND, [SCINTILLANOTIFICATION.DWELLEND])
editor.callbackSync(callback_sci_DWELLEND, [SCINTILLANOTIFICATION.DWELLEND])

def callback_sci_DWELLSTART(args):
    p("START")
#editor.callback(callback_sci_DWELLSTART, [SCINTILLANOTIFICATION.DWELLSTART])
editor.callbackSync(callback_sci_DWELLSTART, [SCINTILLANOTIFICATION.DWELLSTART])

notepad.clearCallbacks()
editor.setMouseDwellTime(10000000)

So I tried also with callbackSync just hoping it helps, but it does not work either way. Nothing happens - it only prints hello, but none of the callbacks register any event. Could you look into it to make it work? Hmmmm…well with your last line it appears you are setting the dwell time to almost 3 hours??
So maybe you just need to be really patient to see it working…? :-D @Scott-Sumner :) maybe, I just don’t know how to handle the event properly. So for now exactly what I want is: run the script, catch the event exactly ONCE, stop the script. If I don’t close the script then it continues to work forever (only closing NPP can stop it) and I should set DwellTime back to default I suppose? Wouldn’t it otherwise spam the events forever? You could set the dwell time to a big value INSIDE the event handler function body to effectively have it run once. Or clear the callback there… The script itself ends pretty much immediately. However, by “giving” the event functions to PythonScript, they are not always running, but rather are ready to run when the relevant events occur (e.g., dwelling the mouse, or moving it after dwelling). You can remove the events at any time (e.g. in another script, or in one script that both sets/clears the event handlers on every other run…).
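The "catch the event exactly once" behaviour discussed above is editor-agnostic. A minimal pure-Python sketch (no PythonScript APIs; Emitter and all other names are hypothetical stand-ins) shows the pattern of a handler that unregisters itself the first time it fires:

```python
class Emitter:
    """Tiny stand-in for an editor that fires events to registered callbacks."""
    def __init__(self):
        self._callbacks = []

    def register(self, fn):
        self._callbacks.append(fn)

    def unregister(self, fn):
        self._callbacks.remove(fn)

    def fire(self, args):
        # Iterate over a copy so handlers may unregister themselves mid-fire.
        for fn in list(self._callbacks):
            fn(args)

emitter = Emitter()
calls = []

def on_dwell(args):
    calls.append(args)
    emitter.unregister(on_dwell)  # one-shot: remove ourselves on first fire

emitter.register(on_dwell)
emitter.fire({"x": 10, "y": 20})
emitter.fire({"x": 30, "y": 40})  # handler is no longer registered

assert calls == [{"x": 10, "y": 20}]
```

In PythonScript the equivalent move is what the last reply suggests: clear the callback (or push the dwell time back to a huge value) from inside the handler body itself.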
https://community.notepad-plus-plus.org/topic/14932/pythonscript-how-to-access-mouse-coordinates/11
CC-MAIN-2020-10
refinedweb
748
56.25
Python/PyQt/PySide - How to add an argument to derived class's constructor This may be as much a Python question as a PyQt/PySide one. I need some Python expert who knows about sub-classing, and also typing annotations, including overload on functions. I come from a C++ background. I do not understand the syntax/code I need in a class I am deriving from a PyQt class to allow a new parameter to be passed to the constructor. I see that I asked this question a long time ago at but never got an answer. I now want to sub-class from QListWidgetItem. That starts with these constructors in the (C++) docs:

QListWidgetItem(QListWidget *parent = nullptr, int type = Type)
QListWidgetItem(const QString &text, QListWidget *parent = nullptr, int type = Type)
QListWidgetItem(const QIcon &icon, const QString &text, QListWidget *parent = nullptr, int type = Type)
QListWidgetItem(const QListWidgetItem &other)

My sub-class should still support these constructors. In addition to the existing text, I want my sub-class to be able to store a new optional value. At minimum/sufficient I want a new possible constructor like one of the following:

MyListWidgetItem(const QString &text, const QVariant &value, QListWidget *parent = nullptr, int type = Type)
# or
MyListWidgetItem(const QString &text, QVariant value = QVariant(), QListWidget *parent = nullptr, int type = Type)

See that I'm putting the new value parameter to come after text. So for Python I know I start with a typing overload definition (for my editor) like

class MyListWidgetItem(QListWidgetItem):
    @typing.overload
    def __init__(self, text: str, value: typing.Any, parent: QListWidget = None, type: int = Type):
        pass

Then I get to the definition bit. To cater for everything am I supposed to do:

def __init__(self, *__args):
    # Now what??
    super().__init__(__args)

Is that how we do it? Is it then my responsibility to look at __args[1] to see if it's my value argument? And remove it from __args before passing it on to super().__init__(__args)?
Or, am I not supposed to deal with __args, and instead have some definition with all possible parameters explicitly and deal with them like that? Or what? This is pretty fundamental to sub-classing to add parameters where you don't own the code of what you're deriving from. It's easy in C-type languages; I'm finding it really hard to understand what I can/can't/am supposed to do for this. I'd be really grateful for a couple of lines to show me, please...! :) - SGaist Lifetime Qt Champion last edited by Hi, To the best of my knowledge, you should pass on the parameters as they are expected and then continue with your own code. @SGaist Unfortunately I have no idea how to approach that. Hence my question. If I type help(QtWidgets.QListWidgetItem) I get told: Help on class QListWidgetItem in module PyQt5.QtWidgets:) But Python, unlike C++, does not really have any such thing as "multiple overloaded definitions" of a function/method/constructor. You can "annotate" (via typing.overload) your various "overloads" to make it show up like this to the user/editor, which doubtless is what PyQt has done for this. Think of this as the equivalent of the multiple overloads you would export in your C++ .h file. But when it comes to the definition (rather than declaration) of the method, equivalent of what you'd write in your .cpp file, quite unlike C++ there can only be one, single def method(arg1, arg2, ...): ... into which you code your actual implementation. In some shape or form, that must allow for the varying number of parameters and/or types which all your overloads put together allow for. You have to write runtime code in that one definition which recognises how many parameters and of which type they are in order to figure out how to correctly call the base method.
And that is what I don't know how to do, especially when I wish to insert an extra argument to some new overload I am creating, so clearly I must correctly recognise and remove it when my method is called in that case, because I must not pass on that parameter to the existing base class which does not accept such a parameter. Hence, here I would like to add, say, the following one overload:

MyListWidgetItem(text: str, value: QVariant, parent: QListWidget = None, type: int = QListWidgetItem.Type)

to those already defined for QListWidgetItem whilst maintaining all the existing ones unchanged. I don't know what code I am supposed to write in the definition to achieve this correctly. - SGaist Lifetime Qt Champion last edited by Well, AFAIK, these are all init method "overloads" so I would write it as is and in the implementation call: super(MyListWidgetItem, self).__init__("original_arguments_list") and then go on with your code. @SGaist I know you're trying to help! I can't recall whether you know any Python or not. I don't quite know how I'm supposed to do exactly what you say.... Briefly, let's restate the problem. The existing QListWidgetItem class apparently implements/caters for the following:) Note that already not only is the number of arguments variable but also so are the types. For example, looking through all the overloads parameter #1 might be a QListWidget, a str, a QIcon or a QListWidgetItem, or not even supplied, and depending on that influences what the second argument can be, etc. I wish to add an extra one:

MyListWidgetItem(text: str, value: QVariant, parent: QListWidget = None, type: int = QListWidgetItem.Type)

So I need to recognise this new one when it's called; I need to pull out my new value: QVariant to set my variable, and I also need to remove it before calling the base class constructor.
Two questions: - I can only guess: is it my job in order to recognise my case to write like:

if len(arguments) >= 2:
    if isinstance(arguments[0], str) and isinstance(arguments[1], QVariant):
        self.value = arguments[1]
        arguments.removeAt(1)

- Am I supposed to write the single __init__() definition (not overload declarations) for my new sub-class along the lines of:

def __init__(self, *__args):
    ...
    super().__init__(*__args)

or along the lines of:

def __init__(self, arg1: typing.Union[QListWidget, str, icon, QListWidgetItem, None], arg2: typing..., arg3: typing..., arg4):
    ...
    super().__init__(arg1, arg2, arg3, arg4)

Just to close this up. Although I posted this question here, to the PyQt mailing list and on stackoverflow, since I never got an actual answer as to how to write the derived class's constructor to accept the new positional argument while retaining all the existing overloads, in the end I have actually gone for a named argument:

class JListWidgetItem(QtWidgets.QListWidgetItem):
    def __init__(self, *args, value: typing.Any = None, **kwargs):
        super().__init__(*args, **kwargs)
        # optional `value` member, passed in as `value=...`, stored in data(Qt.UserRole)
        # this is a common thing to want, cf. `QComboBox.itemData()`
        if value is not None:
            self.setValue(value)

and the caller goes e.g. JListWidgetItem("text", value=1). Some people have said this is the more "pythonic" way, who knows, but at least it works! Thanks to all.
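The keyword-only-argument trick in that closing answer works for any base class, not just Qt ones, because a keyword-only parameter can never collide with the base class's positional "overloads". A minimal pure-Python sketch (a hypothetical Base stands in for QListWidgetItem so it runs without PyQt):

```python
import typing

class Base:
    """Stand-in for QListWidgetItem: accepts any positional 'overload'."""
    def __init__(self, *args):
        self.args = args

class Derived(Base):
    def __init__(self, *args, value: typing.Any = None, **kwargs):
        # `value` is keyword-only, so Python routes it here and it is
        # never forwarded to the base class's positional parameters.
        super().__init__(*args, **kwargs)
        self.value = value

item = Derived("text", value=1)      # the new overload
assert item.args == ("text",)        # base class saw only its own args
assert item.value == 1

plain = Derived("icon", "text")      # existing overloads still work
assert plain.args == ("icon", "text")
assert plain.value is None
```

This sidesteps the runtime type-sniffing (len/isinstance checks) the first question asks about entirely, which is why it is often called the more pythonic approach.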
https://forum.qt.io/topic/99708/python-pyqt-pyside-how-to-add-an-argument-to-derived-class-s-constructor/6
CC-MAIN-2019-39
refinedweb
1,182
62.58
hmmm... well if i have a string array like string authors[] inside a structure named library and i have a pointer to the library called temp. how do i make the array of authors bigger? it doesnt like me doing this....

struct library
{
    string authors[];
};

library *temp;
int nAuthors = 3;
temp->authors = new string[nAuthors];

> how do i make the array of authors bigger?

If the size is going to be dynamic, you'll have a more pleasant time using vectors than arrays:

#include <string>
#include <vector>

struct library
{
    std::vector<std::string> authors;
};

Dynamically resizing arrays is tedious and error prone. hmmm well i got it working. ive basically got a linked list that contains the book name and the different authors (there can be n authors for each book).

struct node
{
    string bookTitle;
    string *authors;
    node *next;
};

node *start_ptr;

void Library::add(string title, string authors[], int nAuthors)
{
    node *temp = new node;
    temp->bookTitle = title;
    temp->authors = new string[nAuthors];
    for (int i = 0; i < nAuthors; ++i)
    {
        temp->authors[i] = authors[i];
    }
    temp->next = NULL;
    if (start_ptr == NULL)
        start_ptr = temp;
    else
    {
        node *p = start_ptr;
        while (p->next != NULL)
            p = p->next;
        p->next = temp;
    }
}

and this works as far as i can tell. edit: how would i figure out how many authors are stored in any given node?? > we're not allowed to use anything from STL Here's a utility function that will resize an array of strings:

#include <iostream>
#include <string>

namespace Ed
{
    void resize(std::string*& list, int oldSize, int newSize)
    {
        // Don't do anything if the sizes are the same
        if (newSize == oldSize)
            return;
        std::string *result = new std::string[newSize];
        int nCopied = oldSize;
        // Don't copy everything if the array got smaller
        if (newSize < oldSize)
            nCopied = newSize;
        // Preserve existing stored strings
        for (int i = 0; i < nCopied; i++)
            result[i] = list[i];
        delete[] list;
        list = result;
    }
}

> how would i figure out how many authors are stored in any given node??
Store it as a separate member in the node. Or better yet, write an Authors collection class that stored all of the information you need and handles the resizing of the array internally. :)
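For comparison, here is the same copy-and-swap resize logic sketched in Python. Lists make this unnecessary in practice; the point is purely to illustrate the steps the C++ helper above performs (allocate a new buffer, copy the smaller of the two sizes, discard the old buffer):

```python
def resize(items, new_size):
    """Return a new fixed-size buffer, preserving existing entries."""
    if new_size == len(items):
        return items                      # sizes match: nothing to do
    result = [""] * new_size              # new buffer, default-initialised
    n_copied = min(len(items), new_size)  # don't copy past either end
    for i in range(n_copied):
        result[i] = items[i]
    return result                         # old buffer is garbage-collected

authors = ["Kernighan", "Ritchie"]
authors = resize(authors, 4)              # grow: old entries preserved
assert authors == ["Kernighan", "Ritchie", "", ""]
authors = resize(authors, 1)              # shrink: extras dropped
assert authors == ["Kernighan"]
```

The one step with no Python analogue is `delete[] list` / `list = result`: in C++ you must free the old block yourself and rebind the caller's pointer, which is exactly why the helper takes the pointer by reference.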
https://www.daniweb.com/programming/software-development/threads/123202/string
CC-MAIN-2016-50
refinedweb
360
62.51
Lvalue Reference Declarator: & Holds the address of an object but behaves syntactically like an object. You can think of an lvalue reference as another name for an object. An lvalue reference declaration consists of an optional list of specifiers followed by a reference declarator. A reference must be initialized and cannot be changed. Any object whose address can be converted to a given pointer type can also be converted to the similar reference type. For example, any object whose address can be converted to type char * can also be converted to type char &. Do not confuse reference declarations with use of the address-of operator. When the & identifier is preceded by a type, such as int or char, identifier is declared as a reference to the type. When & identifier is not preceded by a type, the usage is that of the address-of operator. The following example demonstrates the reference declarator by declaring a Person object and a reference to that object. Because rFriend is a reference to myFriend, updating either variable changes the same object.

// reference_declarator.cpp
// compile with: /EHsc
// Demonstrates the reference declarator.
#include <iostream>
using namespace std;

struct Person
{
    char* Name;
    short Age;
};

int main()
{
    // Declare a Person object.
    Person myFriend;

    // Declare a reference to the Person object.
    Person& rFriend = myFriend;

    // Set the fields of the Person object.
    // Updating either variable changes the same object.
    myFriend.Name = "Bill";
    rFriend.Age = 40;

    // Print the fields of the Person object to the console.
    cout << rFriend.Name << " is " << myFriend.Age << endl;
}

Bill is 40
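Python has no reference declarator, but plain assignment binds a second name to the same object, which gives a loose analogy to the example above. This is a sketch of the aliasing idea only, not a semantic equivalent: rebinding a Python name does not affect the other name the way assigning through a C++ reference rebinds nothing and always mutates the one object.

```python
class Person:
    def __init__(self):
        self.name = ""
        self.age = 0

my_friend = Person()
r_friend = my_friend          # second name bound to the SAME object

my_friend.name = "Bill"       # mutate through one name...
r_friend.age = 40             # ...or through the other: one object changes

print(r_friend.name, "is", my_friend.age)  # Bill is 40
assert r_friend is my_friend               # both names alias one object
```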
https://msdn.microsoft.com/en-us/library/w7049scy(v=vs.110).aspx
CC-MAIN-2015-35
refinedweb
254
58.89
When we are building an automation framework it is very important to perform end-to-end automation, including starting and stopping the Appium server. Keeping that in mind, in this post I will be sharing the simplest and quickest way to install the Appium server and start & stop it using simple Java code. This is a very simple and easy way to start the Appium server programmatically using Java code. In our previous tutorial we have seen how to start the Appium server using Java code with the AppiumDriverLocalService class. But in this tutorial we will see a very simple and quick way of installing the Appium server and starting it using simple Java code.

Step 1 > Install Node.js from HERE
Step 2 > Open the Node.js Command Prompt as shown below.
Step 3 > Execute command npm install -g appium
Step 4 > Verify Appium is installed successfully by executing command appium -v
Step 5 > Start the Appium server using command appium -a 127.0.0.1 -p 4723
Step 6 > Do CTRL + C to stop the server

Below is the Java code to start and stop the Appium server programmatically. In the code we are executing the command using Java to start and stop the server. Do try this out and post your feedback, suggestions and questions in the comment section below.

import java.io.IOException;

/**
 * Appium Manager - this class contains methods to start and stop the appium server
 */
public class AppiumManager {

    public void startServer() {
        Runtime runtime = Runtime.getRuntime();
        try {
            runtime.exec("cmd.exe /c start cmd.exe /k \"appium -a 127.0.0.1 -p 4725 --session-override -dc \"{\"\"noReset\"\": \"\"false\"\"}\"\"");
            Thread.sleep(10000);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void stopServer() {
        Runtime runtime = Runtime.getRuntime();
        try {
            runtime.exec("taskkill /F /IM node.exe");
            runtime.exec("taskkill /F /IM cmd.exe");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
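The same start/stop idea translates directly to Python's subprocess module, with one advantage over the taskkill approach above: holding on to the Popen handle lets you terminate exactly the process you started rather than every node.exe and cmd.exe on the machine. In this sketch a sleeping Python child stands in for the actual Appium command line (which is shown in the steps above); the class itself is generic:

```python
import subprocess
import sys

class ServerManager:
    """Start and stop a single long-running child process."""
    def __init__(self, cmd):
        self.cmd = cmd
        self.proc = None

    def start(self):
        self.proc = subprocess.Popen(self.cmd)

    def stop(self):
        if self.proc and self.proc.poll() is None:
            self.proc.terminate()        # polite shutdown (SIGTERM on POSIX)
            self.proc.wait(timeout=10)   # reap the child, avoid zombies

# A sleeping child stands in here for `appium -a 127.0.0.1 -p 4723`.
mgr = ServerManager([sys.executable, "-c", "import time; time.sleep(60)"])
mgr.start()
assert mgr.proc.poll() is None      # still running
mgr.stop()
assert mgr.proc.poll() is not None  # stopped
```

A real Appium launcher would also poll the server's port or status endpoint instead of sleeping a fixed 10 seconds, but the lifecycle management is the same.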
http://www.qaautomated.com/2018/06/easiest-way-to-start-appium-server-with.html
CC-MAIN-2020-05
refinedweb
320
51.04
07 June 2012 16:29 [Source: ICIS news] LONDON (ICIS)--The ICIS Petrochemical Index (IPEX) has declined for the second consecutive month, with the June index falling to 335.37, its lowest level in the second quarter of 2012. This represents a 6.8% decrease in the IPEX from its revised* May figure of 359.97, on weaker global chemical prices. The lower June IPEX is primarily the result of a 10.7% drop in the Asian component of the index to the lowest since January. The steepest price drops were in the Asian butadiene (BD) price, by 31.1%, followed by the Asian ethylene price by 13.8%, and the propylene price by 11.7%. The price decreases were due to continued weakness in demand and poor downstream market conditions. The only Asian price increase was in methanol, rising by 3.9%, mainly due to tight supply and strong demand. The European sub-index saw a 2.5% drop, as more than two-thirds of European chemical prices decreased in euro terms and a 2.7% strengthening of the dollar in May compounded the decline. The greatest price declines were mainly in polymer prices, polyethylene (PE) by 4.7% and PP by 4.4%, followed by a 4.0% fall in ethylene, all in dollar terms. *The May IPEX has been revised from 359.91 to 359.97, following settlement of the April US ethylene and styrene and Asian styrene contract prices. This month’s index is also subject to revision once the May US ethylene, styrene and PVC and Asian styrene contract prices settle. The revised historical IPEX data is available from ICIS on request. For a full methodology of the revised IPEX,
http://www.icis.com/Articles/2012/06/07/9567344/june-ipex-falls-as-all-regional-sub-indices-drop.html
CC-MAIN-2014-42
refinedweb
295
77.74
To control the visibility of UI elements for specific users, you can use the visible property. You might hide widgets from a user if the widget isn't relevant or the user shouldn't be able to use it. For example, a page doesn't need a Save button to save changes to a record until the user changes a record. If a user doesn't have permission to access a page, you can hide a link to that page. Key concepts: - The visible property has a boolean value (true = visible, false = hidden). You can bind the value or use a script to determine whether the user meets the requirements for a widget to be visible. - Call a server script function from a client script when the visibility condition depends on another server script or information from a third party service. - When a widget contains other widgets, the setting on the parent widget applies to its children. For example, if you have a form widget that contains a title, input fields, and a Submit button, you can hide individual components or the entire form. Use a script to control widget visibility Write a client script that determines if the user can access the widget and returns a boolean. For example, the Travel Approval template has the client script Utility. It has three functions to test whether the user is a member of a specific role:

/**
 * Determines whether the user has specified role.
 * @param {string} roleName - name of the role to check.
 * @return {boolean} true if user has the role.
 */
function hasRole(roleName) {
  return (app.user.roles.indexOf(roleName) > -1);
}

/**
 * Determines whether the user is admin.
 * @return {boolean} true if user is an admin.
 */
function isAdmin() {
  return hasRole('Admins');
}

/**
 * Determines whether the user is approver.
 * @return {boolean} true if user is an approver.
 */
function isApprover() {
  return hasRole('Approvers');
}

The hasRole(roleName) function gets a list of roles that the current user is a member of and searches for the presence of the specified role.
For example, if a client script calls isAdmin(), the script searches for the Admins role. If the Admins role is present, it has an index value of 0 or more and the return expression evaluates to true.

- Select the widget you want to set visibility for. Remember that any widgets under it in the hierarchy inherit the parent widget's visibility.
- In the Property editor, click Display.
- Click the visible dropdown menu and select binding.
- In the binding dialog, enter an expression that calls the client script. For example, to show the widget if the user is a member of Admins or Approvers, enter the following: isAdmin()||isApprover()

Use a binding to control widget visibility

- Select the widget you want to set visibility for. Remember that any widgets under it in the hierarchy inherit the parent widget's visibility.
- In the Property editor, click Display.
- Click the visible dropdown menu and select binding.
- In the binding dialog, write a binding expression. For example, to give access to members of the Managers role: @user.role.Managers

Use numbers and strings in bindings as boolean values You can bind the visible property to a field with a Number or String value (not a Date) and App Maker can automatically convert the value to a boolean. The conversions for numbers and special cases (undefined, null, and NaN) match JavaScript type conversions. The conversions for strings don't match JavaScript type conversions.
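Outside App Maker, the role check itself is ordinary list membership plus a boolean combination. A small Python sketch (the user's role list is a hypothetical stand-in for app.user.roles) mirrors the JavaScript hasRole above and the isAdmin()||isApprover() binding expression:

```python
def has_role(user_roles, role_name):
    # True when the role appears anywhere in the user's role list,
    # matching `indexOf(roleName) > -1` in the JavaScript version.
    return role_name in user_roles

roles = ["Approvers", "Editors"]          # hypothetical current user's roles

assert has_role(roles, "Approvers")
assert not has_role(roles, "Admins")

# Equivalent of the visibility binding isAdmin()||isApprover():
visible = has_role(roles, "Admins") or has_role(roles, "Approvers")
assert visible
```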
https://developers-dot-devsite-v2-prod.appspot.com/appmaker/ui/element-visibility
CC-MAIN-2020-45
refinedweb
570
55.95
Database Driven Development And Developing The User Interface Using C# Jun 21, 2017. In this article, I am going to explain about Database Driven Development and Developing the User Interface Using C#. Working With Raspberry Pi And 16 X 2 LCD Display May 19, 2017. In this tutorial we will be talking about how to interface the raspberry pi and 16 X 2 LCD using GPIO and Python. Cookie Manager Wrapper In ASP.NET Core May 03, 2017. In this article, you will learn how to work with cookies in an ASP.NET Core style (in the form of an interface) , abstraction layer on top of cookie object and how to secure cookie data. An Introduction To Interface May 02, 2017.
In this Article, I will explain about Interface and why an interface is known as pure polymorphism. Code First Migration - ASP.NET Core MVC 6 With EntityFrameWork Core Feb 21, 2017. In this article, we are going to explain Code First Migration in ASP.NET Core MVC 6 With Entity Framework Core , using Command Line Interface ( CLI ). Creating Sliding Tab Layout Interface Using Xamarin Android Using Visual Studio RC 2017 Feb 06, 2017. This Android app development tutorial enables you to understand the Tab layout. For example, I demonstrate how to slide a Tab from the layout to another page layout. Designing User Interface With Views In Android Applications Jan 30, 2017. In this article, you will learn about basic views with their event handling that can be used to design the UI for Android. Multi Threading With Windows Forms Jan 10, 2017. Some quick code for updating a Windows form application user interface. Embed A Web Server In A Windows Service Dec 06, 2016. Using NancyFX to provide a web-interface to a Windows Service.. Introduction to IEquatable<T> interface in C# Jul 03, 2016. In this article, you will learn about IEquatable<T> interface and Equality in C#. Getting Started With Interfaces In .NET Feb 14, 2016. In this article you will learn how to get started with interfaces in .NET.. Class Vs Abstract Class Vs Interfaces Dec 16, 2015. In this article you will learn about the differences between Class, Abstract Class and Interfaces in C# language. Getting Started With ASP.NET Web API Dec 13, 2015. In this article you will learn how to start ASP.NET Web Application Programming Interface (API). Overview Of Interfaces Dec 13, 2015. In this article you will learn about overview of Interfaces in C# with example. Abstract Class vs Interfaces In Object Oriented EcoSystem Dec 03, 2015. In this blog you will learn about the difference between Abstract class and Interface.. IComparable, IComparer And IEquatable Interfaces In C# Oct 27, 2015. 
In this article you will learn about IComparable, IComparer And IEquatable Interfaces In C#.. Why We Use Interfaces in C# Aug 23, 2015. In this article you will learn about interfaces in C#.. Interfaces in C# Apr 13, 2015. This article explains interface in C#. MBColorPicker Control For Windows Applications Apr 12, 2015. In this article provides a graphical interface to select a color from a set of various colors.. Handle Unmanaged Resources Mar 28, 2015. This article explains how to handle unmanaged resources in a program. .NET 4.5 Read-Only Interfaces Mar 26, 2015. This article introduces the brand new .NET 4.5 read-only interfaces that is similar to core generic interfaces. Hierarchy of core generic interface with read only interface.. C#: Implicit and Explicit Implementation of Interfaces Feb 19, 2015. This article explains the implicit and explicit implementation of interfaces and its purposes. Open Web Interface For .NET (OWIN) Feb 15, 2015. This article explains the Open Web Interface For .NET (OWIN). Difference Between Abstract Classes and Interfaces Jan 28, 2015. In this article we will learn about the differences between abstract classes and interfaces. Explicit Interfaces in C# Jan 08, 2015. In this article we will learn about explicit interface implementation. What Interfaces in C# Are Jan 05, 2015. This article explains what interfaces in C# are.. Extend the C# Types Easily With Extension Methods Dec 12, 2014. This article provides an introduction to extension methods and shows how to extend existing types without having to modify them in any way, ADO.NET Overview Dec 09, 2014. In this article we examine the connected layer and learn about the significant role of data providers that are essentially concrete implementations of several namespaces, interfaces and base classes.. IEnumerable Interface in C# Nov 21, 2014. In this article we will learn about the IEnumerable interface of C#. MSIL Programming: Part 2 Nov 16, 2014. 
The primary goal of this article is to exhibit the mechanism of defining (syntax and semantics) the entire typical Object Oriented Programming “terms” like namespace, interface, fields, class and so on. What is New in WPF 4.5 (InotifyDataErrorInfo): Part 1 Nov 04, 2014. This article begins my WPF 4.5 features article series. Here you will learn about one new interface InotifyDataErrorInfo introduced in WPF 4.5. Interview Questions For 3 Year .NET Professionals Oct 28, 2014. This article describes the experience and provides the questions and answers from the interview and a few other points. MVVM, Simple Way You Can Think Oct 25, 2014. This article explains MVVM in the simplest way you can think of. Data Binding With INotifyPropertyChanged Interface Oct 21, 2014. In this article you will learn a little bit advanced topic like Data Binding. It’s really useful when you’ve massively structured code, and you’ve to handle a lots of data, not like our typical controls. Integration Services in Business Intelligence Development Studio (BIDS) Oct 17, 2014. This article explains the BIDS Interface that you can use to develop packages for data Extraction, Transformation and Loading (ETL) in SSIS. Shed Light on Facade Pattern Just an Interface Oct 10, 2014. This article will shed some light on Façade Patterns that are just an interface. The Generic Scripts Sep 27, 2014. Improved efficiency in combination with Unit Testing and User Interface or User Experience testing by generic script of dynamic values. ASP.Net MVC4, a Walk-Through Sep 26, 2014. This article will provide a quick introduction to ASP.NET MVC 4 as well as an explanation of how ASP.NET MVC 4 fits into ASP.NET.
http://www.c-sharpcorner.com/tags/ServletRequest-Interface
Created 24 February 2010, last updated 25 July 2010

Complex test suites may spawn subprocesses to run tests, either to run them in parallel, or because subprocess behavior is an important part of the system under test. Measuring coverage in those subprocesses is a little tricky. When you spawn a subprocess, you are invoking Python to run your program. Usually, to get coverage measurement, you have to use coverage.py to run your program. Your subprocess won’t be using coverage.py, so we have to convince Python to use coverage even when not explicitly invoked. To do that, set the COVERAGE_PROCESS_START environment variable before starting your processes. As long as the environment variable is visible in your subprocess, it will work. You can configure your Python installation to invoke the process_startup function in two ways: Create or append to sitecustomize.py to add these lines:

    import coverage
    coverage.process_startup()

2010, Ned Batchelder
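The environment-variable plumbing can be sketched in plain Python. This is only a demonstration that a variable set in the parent is visible in a spawned child process, which is the precondition the article describes; it performs no actual coverage measurement, and the rc-file name ".coveragerc" is just an example:

```python
import os
import subprocess
import sys

# Parent process: set COVERAGE_PROCESS_START so any Python child started
# from here can find the coverage configuration file. ".coveragerc" is an
# assumed name -- use whatever configuration file your project has.
env = dict(os.environ)
env["COVERAGE_PROCESS_START"] = ".coveragerc"

# Child process: in a real setup, sitecustomize.py would call
# coverage.process_startup(), which reads this variable. Here the child
# merely proves the variable survived the spawn.
child = "import os; print(os.environ['COVERAGE_PROCESS_START'])"
result = subprocess.run([sys.executable, "-c", child],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())  # -> .coveragerc
```

If the printed value were missing, coverage's process_startup would silently do nothing in the child, which is the most common reason subprocess coverage appears empty.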
http://nedbatchelder.com/code/coverage/subprocess.html
Macroeconomic Impacts

S. 804

Table 9 summarizes projected macroeconomic activity in three cases: the AEO2002 Revised Reference and two of the S. 804 cases, focusing on the impacts in 2010, 2015 and 2020. As can be seen, the macroeconomic impacts are relatively small. There are three major effects that influence the economy at the aggregate level. First, with stricter CAFE standards there is an increase in the average price of light duty trucks. The higher vehicle cost to the consumer has an adverse effect on the family budget. As a consequence, aggregate personal consumption expenditures are lower relative to the reference case. With higher prices, sales of light trucks for investment purposes are also lower and thus the initial impact on real investment is also negative. Second, with greater fuel efficiency and a decline in aggregate expenditures, there is a reduction in energy use in the economy due to a decline in oil demand. This decline in energy use reduces imports of oil, and domestic production also declines slightly. Third, as a result of a decrease in energy demand, energy prices decline relative to the reference case. This relative decline in energy prices sets into motion deflationary forces that stimulate aggregate demand over time, for all goods and services in the economy, including energy. As described earlier, the incremental cost of light duty trucks for the two S. 804 Cases is shown in Figure 29. By 2010, the incremental cost for light trucks is $601 (expressed in 2000 dollars) in the S. 804 Case and $1,013 in the S. 804 Advanced Date Case. The S. 804 Advanced Date Case reduces the cumulative fines imposed on manufacturers for not achieving the standard under S. 804, but the price increase of the now available technology is higher. However, in the Advanced Date Case, the incremental cost levels off beyond 2010 and by 2020 is $1,116. While the S.
804 Case has initially lower incremental costs in 2010, these costs rise relatively more than the Advanced Date Case and by 2020 the incremental cost of a new light truck is $1,294. The effect of this incremental cost of new light trucks is reflected in decreased sales of light duty vehicles, including cars and trucks. The analysis assesses the change in light vehicle sales in the aggregate, but cannot assess shifts between cars and light trucks. Some economists believe that some consumers will purchase large cars rather than light trucks with reduced horsepower and weight. However, for this assessment, the projected changes in vehicle sales due to increased vehicle prices reported in this study are assumed to affect light trucks only. Sales of light duty trucks are lower, relative to the reference, in every year of the forecast. Of the two CAFE cases, the decrement in sales is greater under the S. 804 Advanced Date Case early in the forecast period, given the faster rise in incremental costs in this Case (Figure 30). In 2010 sales decline by 363 thousand vehicles in the S. 804 Advanced Date Case, relative to a reference projection for light duty vehicles of 17.3 million vehicles. By contrast, the S. 804 Case is projected to have a reduction of sales of 247 thousand units. However, by 2015 and 2020, the reduction in vehicle sales is slightly larger in the S. 804 Case. This is because the incremental cost of trucks continues to rise in the S. 804 Case while the S. 804 Advanced Case shows the incremental cost rising more slowly and then leveling off. By 2020, light truck sales are forecast to decline by 453 thousand vehicles in the S. 804 Case as compared to 450 thousand in the S. 804 Advanced Date Case. Over the 2003 to 2020 time period, sales of light duty trucks are 5.2 and 5.4 million units lower under the S. 804 and S. 804 Advanced Date Cases respectively, compared to the reference case. 
From a macroeconomic perspective, declining real consumption and investment expenditures dominate the early part of the forecast period and introduce cyclical behavior in the economy, resulting in small output and employment losses through 2010. In 2010, real GDP is forecast to be 0.14 percent lower in the S. 804 Case relative to the reference, and the S. 804 Advanced Date Case is 0.22 percent lower. Accompanying this, non-agricultural employment declines by 214 thousand and 325 thousand jobs, respectively, under the two Cases. This represents a percentage reduction in employment of between 0.15 percent and 0.22 percent of total non-agricultural employment in the economy. Further into the forecast period, the impacts on the economy are moderated as the incremental cost of light trucks in both cases begins to level off, and with the decline in the world oil price relative to reference levels. Investment, the most volatile component of GDP, initially declines in response to the decline in aggregate demand early in the forecast. With the economy reaching its peak GDP loss between 2010 and 2015, investment activity rebounds strongly in anticipation of increasing aggregate demand. The level of investment activity by 2020 is actually greater than in the reference forecast, making up some of the lost capital stock precipitated by the early loss in aggregate demand. In the long run, the economy is expected to recover and move back toward the reference growth path. By 2020, real GDP is still 0.07 percent below the reference in the S. 804 Case, but the path is beginning to return to the reference. The S. 804 Advanced Date Case is more cyclical, in part because of the initially larger, then subsequently smaller, incremental cost path relative to the S. 804 Case. Also, when the economy is adversely affected earlier, as in the S. 804 Advanced Date Case, there is a strong tendency for the economy to attempt to return to its natural long-run growth path.
This results in a strong rebound, in response to the strong decline early in the forecast period. The net effects on the trade balance are influenced by opposing sets of pressures – those which affect the oil import bill directly and those that influence other traded goods and services. The initial reduction in gasoline demand results in a reduction in imported oil. This lower demand for gasoline leads to a slight decline in the world price of oil, which stimulates the demand for energy and reduces domestic production slightly. On balance, energy demand is expected to be lower and the oil import bill reduced. However, with a reduction in oil imports and aggregate demand, there is pressure on U.S. export commodities due to the resultant foreign exchange rate depreciation, which may offset the reduction in the oil import bill. As a result of these opposing tendencies, it is difficult to predict the direction of the trade balance. The results indicate that the trade balance generally deteriorates. S. 517 Since the S. 517 Case applies to both cars and light trucks, the price of each is expected to increase. Figure 31 shows the incremental cost for cars, light trucks and the average for all light-duty vehicles. Moreover, the profile of the price path is different from the two S. 804 Cases discussed above. The incremental costs in the S. 517 Case commence later, but rise steadily through the forecast. In 2010, the incremental cost for light-duty vehicles in the S. 517 Case is about even with the overall incremental cost of light-duty vehicles in the S. 804 Case, at $368 and $361 respectively, but below the $505 incremental cost in the S. 804 Advanced Date Case. However, by 2015 the average incremental cost of light duty vehicles is higher than both of the S. 804 cases, and this trend continues through 2020. In 2020, the incremental cost of light-duty vehicles is $756 in the S. 517 Case, as compared to $630 for the S. 804 Case and $542 for the S. 
804 with Advanced Date Case. This different cost profile has an impact on the size and duration of the economic impacts associated with the S. 517 Case. Figure 32 shows the effect on light duty vehicle sales for both cars and light trucks. In the aggregate, light duty vehicle sales decline at a slower rate early in the forecast. By 2010, sales are down relative to the forecast by 231 thousand vehicles, about the same as in the S. 804 Case. However, by 2015, with the incremental cost of light duty vehicles above both of the S. 804 cases, new vehicle sales decline by 604 thousand, and by 2020 are also lower than the reference case by 604 thousand vehicles. The impact on the economy is small through 2010 (Table 10). By 2010, real GDP is projected to be 0.14 percent lower than the reference, almost the same impact as under the S. 804 Case. However, with the steady increase in the incremental cost of new light duty vehicles from 2010 through 2020, the economy continues to worsen and by 2015 is 0.30 percent lower than the reference. The economy begins to rebound past 2015, but by 2020 is still 0.15 percent lower. By 2015, the peak loss in non-agricultural employment is 453 thousand jobs, 0.30 percent of the total non-agricultural employment in the economy. By 2020, with the economy beginning to recover, non-agricultural employment is still down by 293 thousand jobs (0.19 percent).

Present Value of Impacts

Table 11 provides the sum of the discounted changes (billions of dollars discounted at 7 percent) in real GDP and personal consumption expenditures over the entire 18-year forecast period for the S. 804 Case, the S. 804 Advanced Date Case, and the S. 517 Case. These can be viewed as summary measures of the net effects on the macroeconomy. To provide perspective about the magnitude of losses, these discounted values are also expressed as percentages of the total discounted sum of values of real GDP and consumption over the same period.
These percentages imply that the losses in real GDP and personal consumption expenditures are small.

Alternative Fuels Provisions

Table 12 summarizes the alternative fuels legislation examined in this report.

S. 1766

The alternative fuel provisions of S. 1766 have two main purposes: increase the use of alternative fuels in Federal fleets and fund a large demonstration program aimed at using alternative fuel, fuel cell, and ultra-low sulfur diesel school buses.

Section 811. Increased use of alternative fuels by Federal fleets

The section amends the Energy Policy and Conservation Act (EPCA) to require that dual-fueled vehicles be operated such that by September 30, 2003, at least 50 percent of total fuel used in such vehicles will be from alternative fuels. The percentage will increase to at least 75 percent of total fuel used in dual-fueled vehicles by September 30, 2005. Under current regulations, dual-fueled or flexible fuel vehicles qualify as alternative fuel vehicles (AFVs) even if they consume only gasoline. This provision would require such vehicles to actually use alternative fuels for 75 percent of their consumption by 2005. This section also amends EPCA to include as a “dedicated vehicle” three-wheeled enclosed electric vehicles with a vehicle identification number. The impact of Section 811 would be similar to the requirements in Executive Order 13149 (April 21, 2000) to use "alternative fuels to meet a majority of the fuel requirements” of AFVs. In effect, Section 811's main provisions would place into law the requirements included in existing Executive Orders. Consequently, little, if any, additional impact on future transportation energy relative to the Reference Case is expected. Estimated alternative fuel consumption by Federal agencies was 5.8 million gallons in 1999, which was 1.7 percent of total U.S. alternative fuel consumption of 339.3 million gallons (Table 13).
At the same time, Federal agencies accounted for about 276 million gallons of gasoline consumption, which amounts to 0.2 percent of total U.S. gasoline consumption. Overall, alternative fuels make up about 0.3 percent of the combined total of alternative fuels plus gasoline. The type of fuel consumed by dual-fueled vehicles in the Federal fleet must be estimated because specific data are not available. Table 14 separates the Federal AFV fleet into two categories, Dedicated and Non-Dedicated. Dedicated AFVs use only alternative fuel; Non-Dedicated AFVs may use an alternative fuel as well as non-alternative fuel. Most of the Non-Dedicated AFVs use compressed natural gas (CNG) or liquefied petroleum gas (LPG) as the alternative fuel. The Flexible Fuel AFVs in Table 11 consist of those Non-Dedicated AFVs that use either E85 or M85. These Flexible Fuel AFVs probably consume very little of the alternative fuel, relying almost entirely on gasoline for fuel. Federal agencies’ inventory of AFVs was about 24 thousand in 1999 (Table 10), with flexible fuel vehicles accounting for almost 40 percent of the alternative fuel vehicles in the Federal fleet. As an upper bound estimate, assume that all flexible fuel vehicles in 1999 consumed only gasoline. If these vehicles consumed the average gallons of gasoline per car, 5.1 million gallons of gasoline would be consumed. If it were required that 75 percent of fuel used in flexible fuel vehicles be alternative fuels, Federal fleet alternative fuel consumption would be increased by 3.8 million gallons, with a corresponding decrease in gasoline consumption. With these assumptions, the flexible fuel requirement would have reduced 1999 Federal fleet petroleum consumption by 1.4 percent. Since the alternative fuel consumed contains 15 percent gasoline, carbon emissions would be reduced by 1.2 percent.

Section 812.
Exception to HOV passenger requirements for alternative fuel vehicles

This provision would allow single-passenger alternative fuel vehicles to use HOV lanes, as some States already do. Presumably, this would increase the incentive to purchase AFVs to some extent. However, allowing single-passenger vehicles in HOV lanes could lead to additional congestion in the HOV lanes, which would lead to increased overall fuel consumption. On balance, the impact on fuel consumption of the HOV exception cannot be quantified but is likely to be minimal.

Section 814. Green school bus pilot program

The proposed legislation would provide grants for the demonstration and commercial application of alternative fuel school buses and ultra-low sulfur diesel school buses to replace buses manufactured before model year 1977 or diesel-powered buses manufactured before 1991. The section further specifies that 20 percent to 25 percent of the funds granted must be for ultra-low sulfur diesel school buses. Authorized funding for this program is shared with the funding for the fuel cell bus program described in Section 815. This means that over the 2003-2006 period, at least $235 million and as much as $260 million is authorized for the green school bus pilot program. It has been estimated that 30 States have no pre-1977 school buses. For most others, the percentage of pre-1977 school buses is 1 to 2 percent. A major exception is California’s school bus fleet, which is estimated to have 9 percent pre-1977 buses (2,180 vehicles). In light of the small number of affected buses, the overall impact of reducing the use of pre-1977 buses would be minimal, although perhaps significant for some State fleets.
While there is some uncertainty about the number of school buses in service, the Transportation Energy Data Book, an authoritative source, reports there were approximately 592 thousand school buses in service in 1999 (Table 15), which consumed 76 trillion Btu (608 million gallons gasoline-equivalent) of transportation fuel. However, since the number of pre-1991 diesel-powered buses in the school bus fleet is not known, no further evaluation of the impact of this provision can be done.

Section 815. Fuel cell bus development and demonstration program

This section establishes a program for cooperative agreements with the private sector to develop fuel cell-powered school buses. The program will also include at least two different local government entities currently using natural gas-powered school buses to demonstrate (along with the fuel cell developers) the use of fuel cell-powered school buses. The funding is not to exceed $25 million over the 2003-2006 period. Because it is difficult to relate levels of funding for research, development, or demonstration programs directly to specific improvements in the characteristics, benefits, and availability of energy technologies, the overall impact of this proposal cannot be assessed. In general, increased research, development, and demonstration would be expected to lead to advances, but it is impossible to determine which programs would or would not be successful or how successful they might be.

Section 816. Appropriations for Sections 814 and 815

As noted above, the fuel cell bus program cannot exceed a total of $25 million, with the remainder going to the green school bus pilot program. The total authorization for the fuel cell bus and green school bus pilot programs for 2003 to 2006 is as follows: $50 million in 2003; $60 million in 2004; $70 million in 2005; $80 million in 2006.

Section 819.
Neighborhood electric vehicles

This provision would amend the Energy Policy Act of 1992 (EPACT) to allow some electric vehicles that are not intended to be used on highways to count as alternative fuel vehicles for Federal fleet purposes. This is consistent with Section 811, which would include enclosed three-wheel vehicles. In the absence of data on such vehicles, no evaluation of likely impacts can be done. Table 16 summarizes the potential energy impacts of the S. 1766 provisions.

H.R. 4

Section 151. High Occupancy Vehicle Exception

This provision would allow single-passenger hybrid or alternative fuel vehicles to use HOV lanes. This provision differs from Section 812 of S. 1766 by including hybrid vehicles. However, allowing single-passenger vehicles in HOV lanes could lead to additional congestion, which would lead to increased overall fuel consumption. On balance, the impact on fuel consumption of the HOV exception cannot be quantified but is likely to be minimal.

Section 205. Hybrid Vehicles and Alternative Vehicles

Currently, Section 301 of EPACT requires AFVs to be 75 percent of new Federal vehicle acquisitions (police, emergency, and military vehicles are excepted from the rule). This provision would amend EPACT to increase the percentage of AFVs required by the following amounts: 5 percent in 2004 and 2005; 10 percent in 2006 and later years. This means that the total percentage of AFVs would increase to 80 percent in 2004-2005 and to 85 percent thereafter. Current regulations do not include hybrid vehicles as AFVs for purposes of EPACT compliance. Section 205 would also amend EPACT to specify that hybrid vehicles would count as AFVs. While the impact of this provision cannot be evaluated quantitatively, it would increase the potential market for hybrid vehicles in the Federal fleet.

Section 206.
Federal Fleet Petroleum-Based Nonalternative Fuels

The purpose of this provision is to reduce Federal fleet purchases of petroleum-based nonalternative fuel vehicles over the model years 2004-2010 such that Federal fleet fuel consumption will be totally reliant on alternative fuels by the end of fiscal year 2009. Estimated alternative fuel consumption by Federal agencies was 5.8 million gallons in 1999, compared with estimated U.S. total alternative fuel consumption of 339 million gallons (Table 9). In the same year, Federal fleets consumed 276 million gallons of petroleum for transportation use. If all Federal fleet petroleum consumption were converted to alternative fuels, then U.S. alternative fuel consumption in 1999 would have been 81 percent higher, amounting to 615 million gallons. This eventuality would have resulted in alternative fuels accounting for 0.5 percent of total U.S. gasoline and alternative fuel transportation fuels. The feasibility of achieving 100 percent alternative fuel use by 2009 is difficult to assess. However, at the end of FY1999, there were 554 thousand gasoline- or diesel-fueled vehicles in the Federal fleet. In the same fiscal year, 58 thousand gasoline- or diesel-fueled vehicles were purchased. If the bulk of the purchases were to replace retired vehicles rather than to expand the fleet, the existing fleet of gasoline or diesel vehicles could be replaced in 10 years.

Sections 2101-2105. Alternative Fuel Vehicle Acceleration Act of 2001

The purpose of these sections is to establish competitive grant pilot programs to provide not more than 15 grants to State and local governments to acquire alternative fuel vehicles, including ultra-low sulfur diesel vehicles. Flexible fuel vehicles that could operate solely on petroleum-based fuels are explicitly excluded. The maximum amount of any grant cannot exceed $20 million. A total of $200 million would be authorized for this program.
State agencies’ fleets are estimated to have consumed 1.9 billion gallons of gasoline in 1999 (Table 9). During the same period, these fleets contained about 78 thousand alternative fuel vehicles (Table 10). However, of that total, only about 14 thousand were non-petroleum AFVs. If the entire $200 million were available to purchase alternative fuel vehicles that cost an average of $15 thousand each, 13,333 alternative fuel vehicles could be added to State and local agencies’ fleets, almost doubling the number of non-petroleum AFVs. If these vehicles average the same gallons per year as the Federal fleet average (557 gallons per year), petroleum consumption would fall 7.4 million gallons, or 0.4 percent of State and local agencies’ 1999 gasoline consumption.

Sections 2131-2133. Secondary Electric Vehicle Battery Use

The proposed legislation would establish a research, development, and demonstration program for the secondary use of batteries where the original use of such batteries was in electric vehicles. The secondary uses specified include utility and commercial power storage, and power quality. Funding to be authorized for the secondary electric vehicle battery program is as follows: $1 million in 2002; $7 million in 2003 and 2004. Because it is difficult to relate levels of funding for research, development, or demonstration programs directly to specific improvements in the characteristics, benefits, and availability of energy technologies, the overall impact of this proposal cannot be assessed.

Sections 2141-2144. Clean Green School Bus Act of 2001

The provisions of this section parallel the provisions in S. 1766, sections 814-816. Any potential impacts would be similar. The following summarizes the concordance between the two bills: The proposed authorization for the Clean Green School Bus Act of 2001 for 2002 to 2006 is as follows: $40 million in 2002; $50 million in 2003; $60 million in 2004; $70 million in 2005; $80 million in 2006.
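Several of the fleet estimates quoted in the alternative-fuels sections above (Section 206 and Sections 2101-2105) are simple arithmetic on figures given in the text, and can be verified directly:

```python
# Back-of-envelope checks of the fleet estimates quoted above. All inputs
# (339.3 and 276 million gallons, $200 million, $15 thousand per vehicle,
# 557 gallons per year, 1.9 billion gallons) come directly from the text.

# Section 206: converting all 276 Mgal of Federal fleet petroleum to
# alternative fuels raises U.S. alternative fuel use from ~339 to 615 Mgal,
# an increase of about 81 percent.
us_alt_fuel = 339.3
federal_petroleum = 276.0
converted_total = us_alt_fuel + federal_petroleum
increase = federal_petroleum / us_alt_fuel
print(round(converted_total), round(increase * 100))  # 615, 81

# Sections 2101-2105: $200 million of grants at $15,000 per vehicle buys
# 13,333 AFVs; at the Federal fleet average of 557 gallons per year they
# displace about 7.4 Mgal, or 0.4 percent of the 1,900 Mgal that State
# agencies' fleets consumed in 1999.
vehicles = 200_000_000 // 15_000
displaced = vehicles * 557 / 1_000_000
share = displaced / 1_900
print(vehicles, round(displaced, 1), round(share * 100, 1))  # 13333, 7.4, 0.4
```

The computed values match the report's rounded figures, which suggests the published numbers are internally consistent.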
Table 17 summarizes the potential energy impacts of the alternative fuels provisions of H.R. 4.

Uncertainties

The fuel economy projections presented in this report reflect a continuation of consumer purchase patterns by vehicle size class and type (car versus light truck). Because it is projected that significant changes will occur in vehicle weight, horsepower, and price to meet the CAFE standards examined in this report, it is likely that these changes will affect consumer purchase patterns. To compensate for lighter vehicles, consumers may decide, for safety reasons, to move to larger size classes. But increased vehicle costs may force consumers into smaller, less expensive vehicles. In addition, significant sales shifts may occur between cars and light trucks. In the S. 517 Case, the projected reduction in car weight may influence more consumers to purchase light trucks. It is also likely that increasing the maximum gross vehicle weight rating of vehicles covered under CAFE to less than 10,000 pounds will serve to push the sales of these types of vehicles to the next largest size class, where they would not be subject to fuel economy regulation. Although many light trucks are now used as passenger vehicles, performance attributes like towing and hauling capability have remained relatively consistent, while vehicle acceleration has increased significantly. Increasing the CAFE standards for light trucks will have significant impacts on both the cost and performance attributes of these vehicles. The availability of advanced technology will be critical to maintaining vehicle performance while also increasing vehicle fuel economy at an acceptable price. Depending upon the availability of technology and its effect on vehicle price and performance, light trucks meeting the new CAFE standards could be viewed as either superior or inferior products.
If manufacturers opt to minimize price impacts and produce light trucks with significantly reduced performance to achieve the new CAFE standards, then consumers may view the product as inferior and opt to purchase a midsize or large car to meet their needs. If advanced technology becomes available and manufacturers produce light trucks that meet the new CAFE standard and maintain performance attributes with slightly higher vehicle costs, then consumers may view this product as superior, resulting in more consumers shifting their next vehicle purchase to light trucks.
http://www.eia.doe.gov/oiaf/servicerpt/cafe/macro.html
Need help on using Azure event hubs in the following scenario. I think consumer groups might be the right option for this scenario, but I was not able to find a concrete example online. Here is a rough description of the problem and the proposed solution using event hubs (I am not sure if this is the optimal solution, so I will appreciate your feedback). I have multiple event sources that generate a lot of event data (telemetry data from sensors) which needs to be saved to our database, and some analysis like running average and min-max should be performed in parallel. The sender can only send data to a single endpoint, but the event hub should make this data available to both data handlers. I am thinking about using two consumer groups: the first one will be a cluster of worker role instances that take care of saving the data to our key-value store, and the second consumer group will be an analysis engine (likely to go with Azure Stream Analytics). Firstly, how do I set up the consumer groups, and is there something that I need to do on the sender/receiver side so that copies of events appear in all consumer groups? I did read many examples online, but they either use client.GetDefaultConsumerGroup(); and/or have all partitions processed by multiple instances of the same worker role. For my scenario, when an event is triggered, it needs to be processed by two different worker roles in parallel (one that saves the data and a second one that does some analysis). Thank you!

TLDR: Looks reasonable, just make two Consumer Groups by using different names with CreateConsumerGroupIfNotExists.

Consumer Groups are primarily a concept, so exactly how they work depends on how your subscribers are implemented. As you know, conceptually they are a group of subscribers working together so that each group receives all the messages and, under ideal (won't happen) circumstances, probably consumes each message once.
This means that each Consumer Group will "have all partitions processed by multiple instances of the same worker role." You want this. This can be implemented in different ways. Microsoft has provided two ways to consume messages from Event Hubs directly, plus the option to use things like Stream Analytics which are probably built on top of the two direct ways. The first way is the Event Hub Receiver; the second, which is higher level, is the Event Processor Host.

I have not used Event Hub Receiver directly, so this particular comment is based on the theory of how these sorts of systems work and speculation from the documentation: while they are created from EventHubConsumerGroups this serves little purpose, as these receivers do not coordinate with one another. If you use these you will need to (and can!) do all the coordination and committing of offsets yourself, which has advantages in some scenarios, such as writing the offset to a transactional DB in the same transaction as computed aggregates. Using these low-level receivers, having different logical consumer groups use the same Azure consumer group probably shouldn't (normative, not practical, advice) be particularly problematic, but you should use different names in case it either does matter or you change to EventProcessorHosts.

Now onto more useful information: EventProcessorHosts are probably built on top of EventHubReceivers. They are a higher level thing and there is support to enable multiple machines to work together as a logical consumer group. Below I've included a lightly edited snippet from my code that makes an EventProcessorHost, with a bunch of comments left in explaining some choices.

    //We need an identifier for the lease. It must be unique across concurrently
    //running instances of the program. There are three main options for this. The
    //first is a static value from a config file. The second is the machine's NETBIOS
    //name, i.e. System.Environment.MachineName. The third is a random value unique
    //per run, which we have chosen here; if our VMs have very weak randomness bad
    //things may happen.
    string hostName = Guid.NewGuid().ToString();

    //It's not clear if we want this here long term, or if we prefer that the
    //Consumer Groups be created out of band. Nor are there necessarily good tools
    //to discover existing consumer groups.
    NamespaceManager namespaceManager =
        NamespaceManager.CreateFromConnectionString(eventHubConnectionString);
    EventHubDescription ehd = namespaceManager.GetEventHub(eventHubPath);
    namespaceManager.CreateConsumerGroupIfNotExists(ehd.Path, consumerGroupName);

    host = new EventProcessorHost(hostName, eventHubPath, consumerGroupName,
        eventHubConnectionString, storageConnectionString, leaseContainerName);

    //Call something like this when you want it to start
    host.RegisterEventProcessorFactoryAsync(factory);

You'll notice that I told Azure to make a new Consumer Group if it doesn't already exist; you'll get a lovely error message if it doesn't. I honestly don't know why that is, because the call doesn't include the Storage connection string, which needs to be the same across instances in order for the EventProcessorHost's coordination (and presumably commits) to work properly.

Here I've provided a picture from Azure Storage Explorer of the leases, and presumably offsets, from a Consumer Group I was experimenting with in November. Note that while I have a testhub and a testhub-testcg container, this is due to manually naming them. If they were in the same container it would be things like "$Default/0" vs "testcg/0". As you can see there is one blob per partition. My assumption is that these blobs are used for two things. The first is the blob leases for distributing partitions amongst instances; the second is storing the committed offsets within each partition.
Rather than the data getting pushed to the Consumer Groups, the consuming instances ask the storage system for data at some offset in one partition. EventProcessorHosts are a nice high-level way of having a logical consumer group where each partition is only read by one consumer at a time, and where the progress the logical consumer group has made in each partition is not forgotten. Remember that throughput per partition is metered, so if you're maxing out ingress you can only have two logical consumers that are all up to speed. As such, you'll want to make sure you have enough partitions, and throughput units, that every logical consumer group can keep reading at the rate it needs.

In conclusion: consumer groups are what you need. The examples you read that use a specific consumer group are good; within each logical consumer group use the same name for the Azure Consumer Group, and have different logical consumer groups use different ones.

I haven't yet used Azure Stream Analytics, but at least during the preview release you are limited to the default consumer group. So don't use the default consumer group for something else, and if you need two separate lots of Azure Stream Analytics you may need to do something nasty. But it's easy to configure!
https://www.edureka.co/community/16388/azure-event-hubs-and-multiple-consumer-groups
- vb2005 - how to overwrite an existing file? - Include File - difference between 'overrides' and 'overloads' in class? - Client Callbacks And Event Handlers - Files re-install on program startup - Select All Button - get ip address of this pc with windows app - Nothing Generates for OleDbCommandBuilder - VBScript -> C# .. construct problem - pass comtrol name to a subroutine - Simple clock on a aspx page? - Force My Program to run in 96dpi? - does vb2005 have builtin UnDo feature for apps? or do I have to wr - Stmp Delivery Agent - rowfilter problem with fieldname containing space - where to get a replacement CD/download for VS 6.0? - Need a very simple WCF example/walkthrough - Why is MDI form minimized by MessageBox - openfiledialog not realeasing resources - Windows service problem on x64 server - File Owner - Crystal reports very weird bug/error - Date vs. DateTime - reasons to hate C# - sending data with serial port problem (16chars=ok, >17chars Not working) - Datagridview Calculation - .Net Application on share with permission List Folder/Read Data not allowed results in .Net Framework Initialization error - Database Records Represantation Problem ? ? ? - Inheritence and overloading operators - Text in .NET - Erro OLEDB - how to create class instance? - Create Class O/R Mapping - To create table in SQL by reading the values from XML file - Find control type in DataGrid - Testing functions from the immediate window on 64 bit Visual studio - VS 2003 Keyboard issue - Problem of ToOADate function - Need to add more than 3 worksheets to excel workbook?? - Deserialize matchData in sql reporting services 2005 - Printing Help - Accessing external file from Windows Service - Web reference timeout - Service Installer - Type Conversion with Date formatted as YYYYMMDD - Pressing the <enter> key & javascript - Assigning Default value to Enum Variable - if listbox.selecteditem(I) I get Invalid cast Exception was unha - Code generation for property 'Controls' failed. 
Error was: 'Objectreference not set to an instance of an object.' - Serial Port stops receiving. - VB 2005 and Exchange Server - System.Net.Mail.MailMessage - Form that is not a top-level form cannot be displayed as a modal dialog box error - Figure out domain of system? - Binary String into Byte - MCSD-An Excellent Resource - Dynamically creating and naming collections - SUM STRING UNTIL - C++ Function Pointer to VB.Net - OLEDBConnection Or SQLConnection - Sending Email in VB 2005 - Leaking window handles - unmanaged code calling managed - Problem with Declare sub xxxDLL - Process.Start and Console Window Issue - Creating a B&W tiff file in vb 2003 - How do you make a dialog forget its fields when reshown? - music file properites? - Change forms language - ComboBox Scope Help Needed - Wild Character Help - Sub Main Not Found - SendMerssage in vb.net - How to change a String into a Byte to show a ASCII Character - Converting a base64 string to a Hex string - check daylight saving on PC - Thread problem using FileSystemWatcher - Naive question: Is VB.NET strongly-typed? - How to insert a node in xml using visual basic express ? - Does it make sense in FormClosed to do Me.Dispose - Problems Manipulating StringBuilder Output - Plugin Software Components - Bring Another Application To Front? - Printing using the print dialog control in vb.net 2005 - Strange datagridview behavior - webbrowser and problem - draw line chart in VB 2005 - Index out of bounds error -- how to trap? - vb.2005 window app run Window 2003? - String formating issue with doubles - specify length of a string variable - Need to be notified if files are changed - How to do this - Immagine run time in crystal report - JPEG et RTF - Datagridview Add row when databound - PK auto increment on sgdb access - Opening Default E-Mail Client - image and notifyicon problem - Problems "publishing" from Visual Basic Express - How can I write a Regex to do this? 
- FolderBrowserDialog doesn't initialize - How do I change the duration of a splash screen? - Multiple pages in print preview dialog but only one page prints - Reading XML String - Check Bitwise flags - How to prevent multiple child forms opening - MASK - ContextSwitchDeadlock was detected - how to get a httpwebrequest's response url - How to print a pdf file to the default printer? - Reset PropertyGrid to Defaults - system call - Show About form on DLL start - SIP development in .NET??? - Mouse behavior of ToolStripMenuItems - Task scheduler - bindinglist(of T) bound to two controls - Compile error help - Passing exceptions from a Dll back to the calling application. - Problem using BackGroundWorker to ping multiple LAN hosts - print a .jpg image from vb .net - Update Access DB With Downloaded Data - Creating a temporary file from a bytearray - Printing - how to access from code-behind a label into CreateUserWizard control? - Using vb.net to dynamically create excel activeX controls - Q: TableAdaptor - Sixe in KB of a dataset or datatable - vb.net search network drive - Need Microsoft.Office.Interop.dll - Checking for A Blank String - Showing the Icon after setup - C# , SQL Server , Vista , Ajax Interview questions links - control name - Help with streamwriter - Single instance form (child) on MDI - to create classes starting from a DBMS - Problem understanding Synclock - Password protect access DB? - How To Change Or Modify Embedded Resource In An Assembly - How to make / plot graph in vb.net? - Trying to "embed" winamp in my app - How to capture VB.NET Event From VB6???? - findign even characters odf a string? - Menuitems do not display - How can I make this into a class?? - System.IO.PathTooLongException - Bind combobox to sql statement result in VB2005 - Image List - Find row using * wildcard or similar (Like) with dataview/dataTabl - How to add programmatically a label into a content page? 
- VB2005 Line drawing graphics slow - Rounding and math.sqrt - how can I tell how many window handles are free - RaiseEvent is frozen on .net 2005 platform - on the fly Identity column in dataTable not populating - any ideas - Question on SQLCommand in a loop - A little rant - VB2005 & Smartphone development - A little off topic but close - ado.net: trying to catch duplicate and null errors on sqlcommand.executenonquery - OpenMode enumeration - Problem Using a Delphi Com DLL with VB.NET 2005 - disable close item of window - Suppressing Dialog from a COM dll - DirectoryWatcher question - How to Debug a DLL ? - Help With Making an Executable project in vb.net - SMTP Email in VB2005 - Help me - defining a flexible PROPERTY - Convert Numeric Format to Time Format - How to upgrade Objptr function from VB6 to VS.net 2003 - where ItemData in ComboBox? - I need some help on Crystal report - SQLBulkCopy and SQL Server 2000? - ssl question - TextBox Watermark NOT ASP.net - DataColumn.Expression with DateTime Datatype - Transfer data in Data Control - How to sort an array of objects - Downloading new application assemblies - Strings ~ Help Please - Beginner GDI+ question - Example of dynamic binding for GridView - Dynamic binding to Gridview - compare one number with other numbers in a set - compare one number with other numbers in a set - how to use a class in an aspx file? - append one line of file - Interface through the parallel port - How to ping another computer in Vb 2005 - Visual Studio Life Expectancy - for each on multiple collections - Microsoft Expression vs Visual Studio? - Regular Expressions to XML - Datagrid datasource binding error - VB.NET Compiler - Scrolling text - Read a file in the web - DLL Question - Uncaught system.invalidoperationexception - show new form - Class/structure/something else in class - array list - how do I pass a parameter or argument from one windows form to another? 
- Print Page and Header - How to get a string from a html source code? - how to use scroll bar in menuitems - How can I match a hyphen in this regular expression? - RaiseEvent - Image Button Problem: "SetPixel is not supported for images with indexed pixel formats" - DataGridView - Does Clone really make a new copy ? - Wartermark - tabcontrol - Queue or the sorts - Should I use threads when an event is filling a buffer? - Transactional Program Question - Arrays - Public Events - vbNullString - Changing drive permissions - index to foxpro files - how to read taskMgr/Proccess to see if an app is running? - Clear Graphics - Detect if autostart - dynamic table - performance profiling help - Best way to Call method on MDI Parent from MDI child (vb.net 2005) - Get treenode value from vb.net 2005 winforms treeview - serializing - On Windows Vista, Process.Start() generates Win32Exception. - detailsview cancel history.back() asp.net - how to insert data from datagrid to datasource using insert sql statement - Updateing event of datagridview - Circular referance and latebinding ... - Files Cut or Copied to the clipboard - Catching a power state change... (specifically entering sleep) - PostgreSQL with .NET - Creating a Label in Code [VB.NET VS 2005] - Asynchronous SQL VS 2003 .NET 1.1 - Asynchronous SQL VS 2003 .NET 1.1 - Using DataGrid to data input - Accessing a Parent Object's Properties - An unhandled exception of type 'System.ExecutionEngineException' occurred in System.Data.dll - Using Excel from Threads - Editing a Crystal Reports Report? - Editing a Crystal Reports Report? - Editing a Crystal Reports Report? - Need help with Math... - ListBox Columns - TIFF/Image clean-up / compression component - BackgroundWorker run multiple new BackGroundWorkers ? - Font - Dispose question. - Getting a list of loaded assemblies - where VbNullString in VB.Net? 
- Javascript function form.submit() not working when hooking into IE6 with mshtml.HTMLDocumentEvents2 - Treeview Tag loosing value - dataset sample - Set Backcolor for forms - User input in textboxes - Namespaces - Access and VB.Net - Is it posible to do this casting in VB? - Services Template - Serial Port / RS232 - NumericUpDown Controls - problem with "Object reference not set to an instance of an object" - Treeview Checked Count Problem - vista - Sending Mail - Strong Name Key - New web service in VB.NET 2003 std - Open MS Office Documents and Check for Errors - how to create this class? - Web Browser Exception - Schedule problem - Binary To ASCII? - Sending data to a web database from Windows forms application - referring to imagelists - Update and delete sql not generated by wizard - Problem with using VB6 control in VB2005 (0/1) - Referencing external forms - "Data Dictionary" for database updates - Parameter Passing from VB .NET to a DLL function (in C++/CLI) - Developing Stand Alone Apps in VB - VB 2005 dll in VB 6.0 - Image Help - Column does not allow nulls when saving - graphics.drawstring - displaying image in report - Application form size - Manifest signing certificate error - Using the Of keyword with the VB.NET Collection class - Cannot find the assembly _______, Version ...., Culture, etc... - API in vb.net - OpenInputDesktop and GetUserObjectInformation - Is there builtin to get C:\Document&Settings\Username\startmenu..P - Question about Monitor Display Settings - Is there a way I can reference the button control that activated the mouseenter event? - which famework will be used - Why is double click event not working? - Read File again - Form Closing VB.NET VS2005 - covnerting C# declaration to VB - graphics question - Split strings whit strings - What is the difference between "Me" and "this" in C#? 
- Need some code help with tool tip display text - mouse hover problem - Error when trying to run an application from some computers - Help needed linking dynamic textboxs to tooltip text - SendMessage and WM_SETTEXT - oracle database connection - verify email availability - My.Settings provider for sql? - Array Question - How can i increase the mousehover event timer? - H can i increase the mousehover timer? - OOP: mutliple references to same Object: how? - Q: Which Access version - Hang in DataGridView - Changing from bound textbox to dropdownlist - create a GUI with drag and drop functionality for the user... - Cut or Copy - Using Combocox Selection to select/update bound items - how can I fix this dbl to str conversion? - Reading Directory and Files - Enter Key behavior - Integration with exchange server - Strange DLLs created into program folder - Cross thread (VB.NET 2005) - ado.net adding a new record without having a dataset - multi-line text box - Sort an array (structure array) - Write a Manifest Resource to File (VB, .net 2.0) - Read new record from CSV textfile and item not found - How to convert a stream to a string? - Reading in a textfile? - launch vb.net app from its shortcut from another app after deployi - How to create Icon for ALL Users - VB.NET 2003 Deployment Project - Reading a network stream from a Telnet site - Calling mousehover event for dynamic button control? - Is vbc useful? - How to create a System Environment Variable and update it in current cmd.exe? - Datagridview -- Won't Clear - Microsoft.ReportingServices.Interfaces.dll 64 or 32bit? - Using the Express Version - Getting the actual file size - Read REG_BINARY Value into its string equalent - 010307 - MEIBAC.DLL - How can I count.... - ReflectionOnlyLoad in Macro - Modify Toolstripbutton in design time - How can I capture input from another program - Counting the lines of text, but upwards... 
- Problem with Visual Studio 2005 Macros - Smooth scrolling a listbox - "You do not have a license to use this ActiveX control" - Is ther a way to get all the country's in regional that MS lists in control panel - How to set value of SecureString? - Dumb Question - Simple password protect help - Microsoft.PointofService help - Where is Sub New - Serialize global variables - Parameters of a printer - If statment and between range in the form text control - VB 2005 w/ 2.0 framework - datetime picker problems - Calling C# DLL from VB.NET - Need Help On Restarting Threads - Creating and writing to a TIF file, how? - resourses release problem - New event for Web-form textbox - Please help - Preserving My.Settings across a rebuild. - Trying to create a range dynamically - problem with changing color when logged - SecurityException was Unhandled - Use vb2005 to detect the pen driver - Use vb2005 to detect the pen driver - Use vb2005 to detect the pen driver - VB.NET app on Vista w/ 1.1 Framework - Closing a window opened in a frame - Q: Parent Child Update Problem - C# , SQL Server , Vista , Ajax Interview questions links - Visual basic express, read and update Xml file - Converting plain-text report files to MS Word .doc files using Word.Application - ScrollWindowEx and WM_VSCROLL for a ListBox control - can one stop/start and drive particular events in the printer spooler? - VB.net courses material - Synclock in IIS Hosted Remoting - Single Threading Function Call? - Monitoring Directory for access...filesystemwatcher? - Exec a stor.proc. that includes a query to a linked Access db - Custom form border - Multiple instances of same object ? - Tab window? - UI design/flow question - Immediate Termination of a BackgroundWorker Thread - So nobody knows how to do this? 
- VB2005 shared mem problem - No Delete or Update Commands - DataGrid Headers and columns Independant alignment - Prepared statement expects parameter which was not supplied - Problems with DataGridViewComboBox - How to use GetObject("winmgmts:\\" & StrComputer & "\root\cimv2") - .NET Runtime Optimatization Service is trying to send a packet.. - vb EXPRESS publish external file - raise mouse event on timer? - how can one build .exe standalone application and what's better/faster? - Who is the code sherriff around here? - Need Expert Help and Advice. Thank You. - Using Word.Application with Option Strick On - Problem with "Publish" - How to add date in access database - How to assign the image field in the picture Box - Click a listview cell - Send Outlook task using ASP.NET - autocomplete textbox like google - How do I read all the files in a folder sorted by Created Date Time (Using System.IO). - get the rowindex in Gridview using Templatefield in ROWCOMMAND event - Formatting ? - display this comma delimited text ?? - for loop does not work as it should - VB syntax highlighting on a Mac - Dispose then set to nothing - 3d in visual basic .net - any simple and fast way to check if file is open (locked) other than T/C/F ? - Accessing an Oracle database with vb.net - NullReferenceException when binding DataGridView to datasource - Help bring window forward - Link in DataGridView - Selecting a File from Various versions - Selecting a File from Various versions - app written in .net 2003 beta version? - function to strip out matching value? vb Noob - displaying delimited text - Unload for my Form1 ? - DirectoryInfo Getfiles - only for one file - VS2005 and .Net Framework 3.0 - Sorting Points - question about Serialization - ServerXMLHTTP40 and charset - Is there en easy way to go from control to control?? - Financial Functions in VB.net - Is there an easy way to move from control to control with Enter button? 
- Restriction for receiving data through UDP socket from a single host - wait or sleep function in vb8? - Handle Keypress on FormLevel - Unlimited Array in Visual Basic 8 - Strange concurrency error - Is there any activex or .net control for embedding the Windows Picture and Fax Viewer into my application? - Regex: How to remove all non-printable characters - including nulls - TSQL select for populate datagridview - Sub in child class - VS 2003, false data concurrency error - VS2003 and supported .NET framework versions - Can I somehow password-protect pre-import CSV files? - forcing a formfeed with PrintDocument - ClickOnce Question - questions about arrays and collections - Returning Function or Sub Name - How to change code during debugging - AddPortEx Trouble - have a form return a value when it closes - Timer event - New To Visual Basic 2005 - Handle the "X" button - read 'long-raw' data type from oracle - Easy Question - Sharing datasources among forms - VS2005 - Determine support for ClearType - Customize Windows.Forms.MdiClient - Convert PDF to Tif File - Form.Invoke not calling delegate for some reason - DLL distribution - VB.NET Desktop Application Built on Windows XP, Excel 2003 - Vb6 AscB to what in vbnet ? - BindingList vs. List - unhandled exception error during release, but not development - Deleting ASPNETDB.MDF gives me Info messages in Project - MySQL date format - simple service to call a web page - How to read and write a structure to a file with Option Strict ON - Shared Constructors and Threads - RegAsm.exe on Vista - Checkbox and StartIndex Error - New features in ADO.NET - Rename application - How to retriving the Image files from ACCESS 2003 - clickable email address - Creating Printer Port in VB - connection close problem - Proper Design - Europe Tests Established Chemicals on Millions of Animals - exception using HTTPWebRequest with SSL - XmlDocument.Load() crashes designer but works anyway? 
- make checkBox ReadOnly wihtout Enabled = False? VB2005 - why is this code executed twice? - Array Help - Stop listbox scrolling - Better way to load user controls into panel in Windows app? - Correct syntax in treeview attributes - which vb newsgroup - which vb newsgroup - Bookmark All in entire solution gives errror - can you verify by running this simple test? - debug output / clearing window contents - Scheduling movies in a database - In PictureBox click event and need to know which button was pressed - Add compilation time and date to project - code for insert values into database using stored procedures - Adding 3rd Party OCX To WinForm - Using stored procedures to insert values into databse - Altering interface - Handling List(Of T) Events in Custom Class - xml documentfragment namespaces - FieldName From in Outlook 2003 IPM.Post - SortedList (Dates) with Duplicate Keys - Split large text file by number of lines? - .NET EVents & Threading - How to reconnect to a db automatically? - SMTP question, read receipt - Why cannot inherit from public class that can be instantiated? - Problems With Multiple Form Tags - what might cause: "Thread was being aborted" exception - writing text to text files in vb. need help - Hiding warnings - geting user group on domain - Timeout while executing stored procedure from VB.net - vb.net creating xml - XML and Direct Pathing to a node - Filling a Dataset - Radio Buttons question - problem in first web form application - Enlarge the screen - exit the form ? - Share resources between projects - Show Total row in a DataGridView - VintaSoftTwain.NET Library v2.1 has been released. - Authentication with WorkGroup - Convert C# to VB.Adding Eventhandler. - How to rename files with support wildcard ? - Literal Control Equivalent Windows Forms - convert Uint32 to System.drawing.color - freeze cursor - Mulitble forms in VB - collection property and refresh control - how to recognise unrecognised objects
https://bytes.com/sitemap/f-332-p-21.html
Created on 2013-12-10 07:29 by akira, last changed 2014-04-28 22:46 by akira. This issue is now closed.

cert_time_to_seconds() uses `time.mktime()` [1] to convert a UTC time tuple to seconds since the epoch. `mktime()` works with local time. It should use the `calendar.timegm()` analog instead. The example from the docs [2] is seven hours off (it shows the UTC offset of the local timezone of the person who created it):

>>> import ssl
>>> ssl.cert_time_to_seconds("May 9 00:00:00 2007 GMT")
1178694000.0

It should be `1178668800`, and `calendar.timegm` returns the correct result:

>>> calendar.timegm(time.strptime("May 9 00:00:00 2007 GMT", "%b %d %H:%M:%S %Y GMT"))
1178668800

[1]: [2]:

Will work on this. Please assign the issue to me.

Instructions before proceeding, by Tim Golden (python mailing list): Having just glanced at that issue, I would point out that there's been a lot of development around the ssl module for the 3.4 release, so you definitely want to confirm the issue against the hg tip to ensure it still applies.

Indeed the example in the docs is wrong, and so is the current behaviour. The example shows "round-tripping" using ssl.cert_time_to_seconds() and then time.ctime(), except that it is bogus, as it takes a GMT time and ctime() returns a local time ("""Convert a time expressed in seconds since the epoch to a string representing local time"""). Still, we should only fix it in 3.4, as code written for prior versions may rely on the current (bogus) behaviour.

gudge, your contribution is welcome! If you need guidance about how to write a patch, you can read the developer's guide: Also you will have to sign a contributor's agreement:

gudge, there is also an issue with the current strptime format [1] (`"%b %d %H:%M:%S %Y GMT"`). It is locale-dependent and may fail if a non-English locale is in effect. I don't know whether I should open a new issue on this or whether you are going to fix it too.
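The timezone half of the bug is easy to reproduce with the standard library alone; this is just the report above replayed as a runnable sketch:

```python
import time
import calendar

cert_time = "May 9 00:00:00 2007 GMT"
tt = time.strptime(cert_time, "%b %d %H:%M:%S %Y GMT")

# calendar.timegm() interprets the struct_time as UTC, matching the "GMT"
# suffix, and gives the same answer regardless of the machine's timezone.
utc_seconds = calendar.timegm(tt)
assert utc_seconds == 1178668800

# time.mktime() interprets the very same struct as *local* time, so its
# result is shifted by the local UTC offset -- the bug in cert_time_to_seconds().
local_seconds = time.mktime(tt)
print(utc_seconds, local_seconds)  # equal only when the local zone is UTC
```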
`cert_time_to_seconds()` is documented [2] to parse the notBefore and notAfter fields from a certificate. As far as I can tell, those fields do not depend on the current locale. Thus the following code should not fail:

>>> import ssl
>>> ssl.cert_time_to_seconds(timestr)
1178661600.0
>>> import locale
>>> locale.setlocale(locale.LC_TIME, 'pl_PL.utf8')
'pl_PL.utf8'
>>> ssl.cert_time_to_seconds(timestr)
Traceback (most recent call last):
...[snip]...
ValueError: time data 'May 9 00:00:00 2007 GMT' does not match format '%b %d %H:%M:%S %Y GMT'

[1]: [2]:

1) Can I get a list of failures — the summary of test results which I can compare on my machine?

2)

>>> import ssl
>>> ssl.cert_time_to_seconds("May 9 00:00:00 2007 GMT")
1178649000.0
>>> import calendar
>>> calendar.timegm(time.strptime("May 9 00:00:00 2007 GMT", "%b %d %H:%M:%S %Y GMT"))
1178668800

I am running a VM on a Windows host machine. In your comment you have specified:

>>> import ssl
>>> ssl.cert_time_to_seconds("May 9 00:00:00 2007 GMT")
1178694000.0

It should be `1178668800`. But I also get the same answer with the Python build from the latest sources, therefore I do not understand you.

3) 3 tests omitted: test___all__ test_site test_urllib2net
348 tests OK.
3 tests failed: test_codecs test_distutils test_ioctl
2 tests altered the execution environment: test___all__ test_site
33 tests skipped: test_bz2 test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dbm_gnu test_dbm_ndbm test_devpoll test_gzip test_idle test_kqueue test_lzma test_msilib test_ossaudiodev test_readline test_smtpnet test_socketserver test_sqlite test_ssl test_startfile test_tcl test_timeout test_tk test_ttk_guionly test_ttk_textonly test_urllibnet test_winreg test_winsound test_xmlrpc_net test_zipfile64 test_zlib

Are these results fine? These results are with no changes. How can I make all the skipped and omitted tests pass? What about the 3 tests which failed — are these known failures?

4) Now say I have to pull again to get the latest code. Does it help to do a make, or will I have to run configure again?

5) I had posted a query on core-mentorship? No answers? Not that I am entitled to any. Thanks

Sorry, I think I did not read msg205774 (the first comment) correctly. It clearly says: "cert_time_to_seconds() uses `time.mktime()` [1] to convert utc time tuple to seconds since epoch. `mktime()` works with local time. It should use `calendar.timegm()` analog instead."

So the function cert_time_to_seconds() has to be fixed? Thanks

> So the function cert_time_to_seconds() has to be fixed?

Yes!

Patch is uploaded. I will also copy-paste it. I have created the patch with git. Let me know if it is okay with you.
If it is unacceptable I will try and create one for Mercurial.

Patch:

diff --combined Doc/library/ssl.rst
index a6ce5d6,30cb732..0000000
--- a/Doc/library/ssl.rst
+++ b/Doc/library/ssl.rst
@@@ -366,7 -366,7 +366,7 @@@ Certificate handlin
      >>> import ssl
      >>> ssl.cert_time_to_seconds("May 9 00:00:00 2007 GMT")
 -    1178694000.0
 +    1178668800
      >>> import time
      >>> time.ctime(ssl.cert_time_to_seconds("May 9 00:00:00 2007 GMT"))
      'Wed May 9 00:00:00 2007'

diff --combined Lib/ssl.py
index f81ef91,052a118..0000000
--- a/Lib/ssl.py
+++ b/Lib/ssl.py
@@@ -852,8 -852,7 +852,8 @@@ def cert_time_to_seconds(cert_time)
      a Python time value in seconds past the epoch."""
      import time
 -    return time.mktime(time.strptime(cert_time, "%b %d %H:%M:%S %Y GMT"))
 +    import calendar
 +    return calendar.timegm(time.strptime(cert_time, "%b %d %H:%M:%S %Y GMT"))

  PEM_HEADER = "-----BEGIN CERTIFICATE-----"
  PEM_FOOTER = "-----END CERTIFICATE-----"

Test results:

358 tests OK.
1 test failed: test_compileall
1 test altered the execution environment: test___all__
28 tests skipped: test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dbm_gnu test_dbm_ndbm test_devpoll test_idle test_kqueue test_lzma test_msilib test_ossaudiodev test_smtpnet test_socketserver test_sqlite test_startfile test_tcl test_timeout test_tk test_ttk_guionly test_ttk_textonly test_urllibnet test_winreg test_winsound test_xmlrpc_net test_zipfile64

Doc changes won't affect the code, so the tests would not fail. How would I check if the doc changes are coming up fine in the final version?

>>> import ssl
>>> ssl.cert_time_to_seconds("May 9 00:00:00 2007 GMT")
1178668800

I do not have a printer currently. I will sign the license agreement in a few days.
Answering your questions:

> I have created the patch with git. Let me know if it is okay with you.

Yes, it's ok. Also, please don't copy/paste it; uploading is enough.

> Doc changes won't affect the code. The tests would not fail. How would I check if the doc changes are coming up fine in the final version?

The devguide has detailed documentation about how to modify and build the documentation :)

As for the tests:
1. for this issue you should probably concentrate on test_ssl: to run it in verbose mode, "./python -m test -v test_ssl" (please read)
2. you will need to add a new test to test_ssl, to check that this bug is indeed fixed

gudge, have you seen (the locale issue)? If you can't fix it, say so; I'll open another issue after this one is fixed.

Akira, I will fix it. I will put the patch in the same bug.

1) I understand I can run a whole test suite as ./python -m test -v test_abc as mentioned in
How do I run a particular test case, like the test I added, test_cert_time_to_seconds?
2) I have added a test case test_cert_time_to_seconds to test_ssl.py.
3) ./python -m test -v test_ssl is all PASS.
4) I will start my work on.
5) The patch is attached.

Can you please provide some hints on how to handle the value of format_regex?

1) Without locale set:

re.compile('(?P<b>jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\\s+(?P<d>3[0-1]|[1-2]\\d|0[1-9]|[1-9]| [1-9])\\s+(?P<H>2[0-3]|[0-1]\\d|\\d):(?P<M>[0-5]\\d|\\d):(?P<S>6[0-1]|[0-5]\\d|\\d)\\s+(?P<Y>\\d\\d\\d\\d)', re.IGNORECASE)

2) With locale set:

re.compile('(?P<b>sty|lut|mar|kwi|maj|cze|lip|sie|wrz|pa\\ź|lis|gru)\\s+(?P<d>3[0-1]|[1-2]\\d|0[1-9]|[1-9]| [1-9])\\s+(?P<H>2[0-3]|[0-1]\\d|\\d):(?P<M>[0-5]\\d|\\d):(?P<S>6[0-1]|[0-5]\\d|\\d)\\s+(?P<Y>\\d\\d\\d\\d)', re.IGNORECASE)

The values of the months are different. Thanks

The point of the locale issue is that the "notBefore" and "notAfter" strings do not change if your locale changes. You don't need a new regex for each locale.
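One way to sidestep the locale-dependent %b directive is to map the English month abbreviation by hand and feed strptime() only locale-neutral directives. The helper below is a sketch of that idea, not necessarily the patch that was eventually committed:

```python
import time
import calendar

# English month abbreviations as they appear in notBefore/notAfter fields.
_MONTHS = ("Jan", "Feb", "Mar", "Apr", "May", "Jun",
           "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")

def cert_time_to_seconds(cert_time):
    """Parse e.g. 'May  9 00:00:00 2007 GMT' independently of the locale."""
    time_format = " %d %H:%M:%S %Y GMT"   # these directives are locale-neutral
    try:
        month = _MONTHS.index(cert_time[:3].title()) + 1
    except ValueError:
        raise ValueError("time data %r does not match format %r"
                         % (cert_time, "%b" + time_format))
    tt = time.strptime(cert_time[3:], time_format)
    # Rebuild the tuple with the hand-parsed month and convert as UTC.
    return calendar.timegm((tt[0], month) + tt[2:6])

assert cert_time_to_seconds("May 9 00:00:00 2007 GMT") == 1178668800
```

Because only the month name is locale-sensitive in this format, handling it manually leaves nothing for setlocale() to break.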
I've attached ssl_cert_time_seconds.py file that contains example cert_time_to_seconds(cert_time) implementation that fixes both the timezone and the locale issues. Akira, do you want to write a proper patch with tests? If you are interested, you can take a look at You'll also have to sign a contributor's agreement at Antoine, I've signed the agreement. I've added ssl_cert_time_toseconds.patch with code, tests, and documention updates. Akira, thanks. I have posted a review; if you haven't received the e-mail notification, you can still access it at Antoine, I haven't received the e-mail notification. I've replied to the comments on Rietveld. Here's the updated patch with the corresponding changes. Here's a new patch with a simplified ssl.cert_time_to_seconds() implementation that brings strptime() back. The behaviour is changed: - accept both %e and %d strftime formats for days as strptime-based implementation did before - return an integer instead of a float (input date has not fractions of a second) I've added more tests. Please, review. Replace IndexError with ValueError in the patch because tuple.index raises ValueError. I've updated the patch: - fixed the code example in the documentation to use int instead of float result - removed assertion on the int returned type (float won't lose precision for the practical dates but guaranteeing an integer would be nice) - reworded the scary comment - removed tests that test the tests Ready for review. Thanks for the updated patch, Akira! I'm gonna take a look right now. New changeset 7191c37238d5 by Antoine Pitrou in branch 'default': Issue #19940: ssl.cert_time_to_seconds() now interprets the given time string in the UTC timezone (as specified in RFC 5280), not the local timezone. I've committed the patch. Thank you very much for contributing! Antoine, thank you for reviewing. I appreciate the patience.
https://bugs.python.org/issue19940
How To Build A jQuery-free "Companion Nav"

One dilemma I constantly run into is whether to use jQuery on a project that I have already set up without it. I think we have all been in a similar place - get a project set up from scratch firmly saying "no jQuery this time" and it goes fine, right up to the moment when you need to build out that slider or sticky nav. After much deliberation, you inevitably cave, pull in jQuery, and use a handy plugin you have used in the past to get the job done.

I believe jQuery is a great tool, and I'm often relieved when I get on a project that makes use of it. However, as many people have pointed out in the last couple of years, you probably don't need jQuery on your project. It's bigger, slower, and less flexible than standard JavaScript, especially when used in today's client-side applications.

On a recent project, the designer wanted the sidebar navigation to follow the user as she or he scrolled within a certain section. The rest of the project was light interaction-wise, so I could not rationalize using jQuery just for this small feature. Instead of caving and pulling in jQuery, I decided to do this with plain JavaScript. Here's the final product:

The following steps explain how you can build it yourself, but if you want to play with a demo while you follow along, get it from this repo.

Step 1: Adding the Markup and Styles

To start with, we need some basic markup to base our scripts (and styles) off of:

// index.html
<section class="all-items" id="followContainer">
  <h2>Follow Nav Section</h2>
  <div class="container">
    <nav>
      <ul id="followNav">
        <li><a class="nav-link" data-name="item-[i]">Item [i] Group</a></li>
      </ul>
    </nav>
    <div class="item-groups">
      <section class="item-group">
        <h3 tabindex="0" id="item-[i]">Item [i] Group</h3>
        <img />
        <p></p>
      </section>
    </div>
  </div>
</section>

This establishes a <nav> with a list of links inside ul#followNav, and a group of correlating <section>s inside div.item-groups.
A few other things to note:

- The number of <li>s within the ul#followNav must match the number of <section>s within the div.item-groups.
- The [i] placeholders, as you can guess, need to have a 1:1 relationship, meaning if you add an <li>, be sure to give it a unique [i] value and have a corresponding section.item-group below with a matching [i] value. The best way to keep this straight is to start at 1 and go up by 1 for each new item, like item-1, item-2, item-3 and so on.
- The tabindex="0" on the <h3> allows users who are not navigating your site with a mouse to tab through the sections. It's always a good idea to keep accessibility in mind when building your projects.

Since this looks pretty ugly by default, let's add some basic styles.

// global.css
.container { display: flex; flex-direction: row; }

nav { float: left; margin-right: 20%; width: 10%; }
nav ul { transition: transform 0.3s ease-out; }
nav a { color: #1496bb; display: block; font-size: 18px; padding: 10px; text-decoration: none; }

.item-groups { float: left; margin-bottom: 40px; width: 60%; }
.item-groups img { display: block; margin: 0 auto; width: 500px; }

h1 { font-size: 32px; margin: 50px auto; }
h2 { font-size: 28px; margin-bottom: 18px; }
h3 { border-bottom: 1px solid #ccd3d6; padding: 18px 0 5px; margin-bottom: 10px; text-align: center; }

You will notice I use flexbox in this example. If you aren't using flexbox yet and you can, I highly recommend it. There are many great resources for getting started, including this stellar CSS-Tricks guide.

Step 2: Building the Scripts

Now that we have a basic structure and basic styles, let's get into the scripts. You can add all this inline to your HTML doc, but I'd recommend going with best practices: save the following as app.js, place it in the typical project structure of /assets/javascripts/app.js, and be sure to call it in before your <body> tag closes.
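Nothing enforces the 1:1 relationship described above automatically, so it can be worth sanity-checking it before wiring up the scripts. The helper below is hypothetical (it is not part of the article's repo); it compares the data-name values on the nav links against the section ids:

```javascript
// Check that every nav link's data-name has a matching section id (1:1).
function navMatchesSections(linkNames, sectionIds) {
  if (linkNames.length !== sectionIds.length) return false;
  return linkNames.every(function (name) {
    return sectionIds.indexOf(name) !== -1;
  });
}

// In the browser you would collect the two arrays from the DOM with
// querySelectorAll; literal arrays are used here for illustration:
console.log(navMatchesSections(['item-1', 'item-2'], ['item-1', 'item-2'])); // true
console.log(navMatchesSections(['item-1', 'item-3'], ['item-1', 'item-2'])); // false
```

A check like this catches the most common copy/paste mistake (adding an `<li>` without its matching section) before it turns into a broken smooth-scroll link.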
The first thing we will need to do is set up a namespace, because it's always a good idea to do that, even if you aren't pulling in third party scripts.

// app.js
// setting up namespace for project
var vigetHowTo = vigetHowTo || {};

Now that we have a namespace, let's define a custom function and a few variables to get going:

// app.js
...
/* A basic module for sticking nav to window when its
   top edge is in line with window's top edge */
vigetHowTo.followNavAdjust = function() {
  // target to get 'stuck' plus its parent container
  var followNav = document.getElementById('followNav');
  // parent container
  var followContainer = document.getElementById('followContainer');

  // gathering a few heights to set up a scroll range
  var followNavHeight = followNav.offsetHeight;
  var followNavOffset = followContainer.lastElementChild.offsetTop;
  var followContainerOffset = followContainer.offsetTop;
  var followContainerHeight = followContainer.offsetHeight;

  // followNavHeight * 2 adds extra space to account for the height in calculations
  var scrollMaxRange = (followContainerHeight + followContainerOffset) - (followNavHeight * 2);

  // scroll checking
  var scrollPosition = window.scrollY;

Now that the browser has all the heights collected, setting up a couple of conditionals is all we need to get it going:

// app.js
...
  // if the scroll position goes less than the range of the section, reset it
  if (scrollPosition < followNavOffset) {
    followNav.style.transform = 'translateY(0px)';
  // if the scroll position is beyond the range of the section, set it to bottom
  } else if (scrollPosition > scrollMaxRange) {
    followNav.style.transform = 'translateY(' + (followContainerHeight - (followNavHeight * 2)) + 'px)';
  // otherwise, it is in the range and needs to follow the scroll position
  } else {
    followNav.style.transform = 'translateY(' + (scrollPosition - followNavOffset) + 'px)';
  }
}

And finally, let's add an init function to fire this, along with all the other scripts we add, when the page loads.
Add that to the end of your app.js file like so:

// app.js
...
vigetHowTo.init = function() {
  vigetHowTo.followNavAdjust();
};

// scripts to fire on page load
vigetHowTo.init();

Great! That wasn't too bad, and we now have a well put-together module to plop in, and it will work.

Step 3: Adding Polish

After all this, we have one problem: we set the scrollPosition variable to update constantly. That gets pretty heavy on the browser, and if you have other stuff going on in your project, this will get pretty laggy. Fortunately there is a really cool method in underscore.js called debounce that allows a defined wait period before calling a function.

Ah, but what was that about me complaining about bringing in unnecessary libraries? You are right. I should not bring in all of Underscore just for this. Fortunately, Underscore is super easy to parse out and use in parts. In fact, David Walsh wrote his own version of it that we can glean from.

Make another file in the same directory as app.js, and call it debounce.js. In it, add David Walsh's debounce function:

// debounce.js
/* Returns a function, that, as long as it continues to be invoked,
   will not be triggered. The function will be called after it stops
   being called for N milliseconds. If immediate is passed, trigger
   the function on the leading edge, instead of the trailing.
   Taken from */
var debounce = function(func, wait, immediate) {
  var timeout;
  return function() {
    var context = this, args = arguments;
    var later = function() {
      timeout = null;
      if (!immediate) func.apply(context, args);
    };
    var callNow = immediate && !timeout;
    clearTimeout(timeout);
    timeout = setTimeout(later, wait);
    if (callNow) func.apply(context, args);
  }
}

And let's add it into our HTML file, just above where we call app.js. Once that is done, take the vigetHowTo.followNavAdjust call out of the init function, so we can call it in an event listener like so:

// app.js
...
vigetHowTo.init = function() {
  // ...
};

// scripts to fire on page load
vigetHowTo.init();

// scripts to fire on page scroll
window.addEventListener('scroll', debounce(vigetHowTo.followNavAdjust, 200));

That's great! Now we have a nav that follows users, and if you want to increase or decrease the speed at which it follows, you can change the 200 in the event listener.

Now, totally optional, but if you want to add a little more slickness to this, I recommend adding a quick animation function so that when a user clicks on a link in the follow nav, the page animates a "scroll to" rather than a quick, abrupt jump. Fortunately, like most things we build, we can leverage small snippets other people built. I found this helpful smooth scrolling function. Let's add it into another file named smooth-scroll-to.js and place it in the same directory as app.js and debounce.js:

// smooth-scroll-to.js
/* Smoothly scroll element to the given target (element.scrollTop)
   for the given duration.
   Returns a promise that's fulfilled when done, or rejected if
   interrupted.
   Taken from */
var smoothScrollTo = function(element, target, duration) {
  target = Math.round(target);
  duration = Math.round(duration);
  if (duration < 0) {
    return Promise.reject("bad duration");
  }
  if (duration === 0) {
    element.scrollTop = target;
    return Promise.resolve();
  }

  var start_time = Date.now();
  var end_time = start_time + duration;

  var start_top = element.scrollTop;
  var distance = target - start_top;

  // based on
  var smooth_step = function(start, end, point) {
    if (point <= start) { return 0; }
    if (point >= end) { return 1; }
    var x = (point - start) / (end - start); // interpolation
    return x * x * (3 - 2 * x);
  }

  return new Promise(function(resolve, reject) {
    // This is to keep track of where the element's scrollTop is
    // supposed to be, based on what we're doing
    var previous_top = element.scrollTop;

    // This is like a think function from a game loop
    var scroll_frame = function() {
      if (element.scrollTop != previous_top) {
        reject("interrupted");
        return;
      }

      // set the scrollTop for this frame
      var now = Date.now();
      var point = smooth_step(start_time, end_time, now);
      var frameTop = Math.round(start_top + (distance * point));
      element.scrollTop = frameTop;

      // check if we're done!
      if (now >= end_time) {
        resolve();
        return;
      }

      // If we were supposed to scroll but didn't, then we
      // probably hit the limit, so consider it done; not
      // interrupted.
      if (element.scrollTop === previous_top && element.scrollTop !== frameTop) {
        resolve();
        return;
      }
      previous_top = element.scrollTop;

      // schedule next frame for execution
      setTimeout(scroll_frame, 0);
    }

    // bootstrap the animation process
    setTimeout(scroll_frame, 0);
  });
}

Then we need to call it into our main HTML file before we call in the app.js file. Once everything is in place, we need to leverage the power of smoothScrollTo in a custom function in app.js that hooks it into the DOM elements in our project like so:

// app.js
...
/* Handles smooth scrolling animations in nav */
vigetHowTo.smoothNavScroll = function() {
  var navLinks = document.querySelectorAll('.nav-link');

  var animateScroll = function(e) {
    e.preventDefault();
    var slug = this.getAttribute('data-name');
    var scrollTarget = document.getElementById(slug).offsetTop;
    // using the function defined in smooth-scroll-to.js
    smoothScrollTo(document.documentElement, scrollTarget, 200);
    smoothScrollTo(document.body, scrollTarget, 200);
  }

  // loop through the navLinks array and add an event listener to each
  for (var i = 0; i < navLinks.length; i++) {
    navLinks[i].addEventListener('click', animateScroll, false);
  }
}

Lastly, add this function to our empty init function and you are good to go:

// app.js
...
vigetHowTo.init = function() {
  vigetHowTo.smoothNavScroll();
};

// scripts to fire on page load
vigetHowTo.init();

All finished!
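The easing at the heart of smoothScrollTo is the classic smoothstep curve. Pulled out as a standalone function, its behavior at the endpoints and midpoint is easy to see:

```javascript
// The same smooth_step interpolation used inside smoothScrollTo,
// extracted as a standalone function for illustration.
function smoothStep(start, end, point) {
  if (point <= start) { return 0; }
  if (point >= end) { return 1; }
  var x = (point - start) / (end - start);
  // 3x^2 - 2x^3: eases in and out, with zero slope at both ends
  return x * x * (3 - 2 * x);
}

console.log(smoothStep(0, 100, 0));   // 0   (animation start)
console.log(smoothStep(0, 100, 50));  // 0.5 (midpoint)
console.log(smoothStep(0, 100, 100)); // 1   (animation end)
```

Because the slope is zero at both ends, the scroll starts and stops gently instead of snapping, which is what makes the click-to-scroll feel polished.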
Using some custom functions and vanilla JavaScript, we have done something jQuery can do easily, but we have cut down on load time and project bloat, and have made it easier to incorporate other front end frameworks down the road. If you have any questions about the JavaScript methods used, or anything else that caught your eye, be sure to comment below. Again, if you want to get a version of this project for yourself to play around with, check out the repo I made. Also, be sure to check out an example of this very follow nav in the wild at code.viget.com, and see all the other cool open source projects Viget has made!

One final note: in the examples above, I did everything by hand, but I normally use a task runner like Gulp to make my workflow more productive. If you have never used a task runner, or you have and you are looking for an upgrade, I highly recommend Dan Tello's Gulp Starter. Not only does it use Gulp for your Sass, but it also leverages Babel and Webpack so that you can be using JavaScript's next-gen version, ECMAScript 6, today.
https://www.viget.com/articles/how-to-build-a-jquery-free-companion-nav/
Recently I had a need for a marquee type control for a WinForm application I was writing. I wanted a label-like control where the text would scroll across the control. It didn't have to have much user interaction, just look pretty. I figured surely there was a control like this out there, but all I could find were User Controls that contained an inner label control that had its position updated every few ticks. But I didn't really like that solution; I wanted a clean, professional looking (and constructed) GDI+ drawn control. So after an hour of looking I decided to just write my own, which is the topic of this article. (*Note: the images above are saved as a gif file, which really messes up how smooth this control really looks :-) )

First, let's go over the requirements of the control. In essence, I wanted to create a label control that scrolls the text across the width of the control. I want the text to be able to scroll from left to right (once the text scrolls off the control on the right side, it starts again from the left), from right to left, or to bounce back and forth between the two sides. I also want the text to be able to be vertically aligned to the top, middle or bottom of the control. The user of the control should be able to programmatically control the speed of the text scrolling as well. Since I'm inheriting from the Control class (more about this in a bit), I won't get any built-in border UI functionality, so I'll need a way to turn a border on and off, as well as set the color of the border. I also like the look of some of the custom brushes that .NET allows you to create, so I want the user of the control to be able to easily assign a custom brush to the control's background and foreground. The last requirement, and most important, is I want the scrolling text to act like a hyperlink. When the user moves the mouse over the moving text, the cursor should change to a hand, and if the user clicks on the text, then an event needs to be fired.
I also want to add an option to make the text stop scrolling when the user mouses over the text, and start scrolling once the user moves the mouse off the text.

The first step is to decide which class you need to derive (inherit) your new control from. You have 4 basic choices for this. First, you could use the UserControl class. The UserControl derives from the ScrollableControl class, which in turn derives from the Control class. If you create a UserControl, Visual Studio will give you a designer so that you can drag and drop other controls onto it. So if you need to make something that is a combination of several controls, this is the way to go. Next, you could inherit from an existing control, such as the Label control in this case, and override some of its methods to change its behavior. Finally, you could inherit from either the Control class or the ScrollableControl class, neither of which gives you a designer to drag and drop other controls onto, so you'll have to write GDI+ code to handle the entire look and feel of the control.

The UserControl doesn't give you much more than the ScrollableControl class in the way of properties, methods and events. It gives you an OnLoad, MouseDown, and WndProc event that you don't get with the ScrollableControl or the Control class, but that's about it. The major difference is that UserControl gives you a design-time designer to drag and drop other controls into it. So if you don't need to use any existing controls to create your new one, there isn't much reason to use the UserControl class. Since I want to handle all painting myself, I created a new class called ScrollingText and derived it from the base Control class.

The neat thing about writing custom controls is that you don't have to run the project to actually see the fruits of your labor. Whenever I decide to create a new control, my solution always has two projects in it. The first project is a ClassLibrary project that contains my control.
The second one is a WinForms project that I use to test my custom control. Once you have both of these projects created, open the designer for Form1 of your WinForms project. On the Toolbox window, click the "My User Controls" tab to open it, then right click on it and select "Add/Remove Items". When the "Customize Toolbox" dialog comes up, click the Browse button and navigate to the dll assembly that contains your custom control. Click "OK" and your new control will show up on the "My User Controls" toolbox tab (as shown below).

Once you have your control in the toolbox, you can click/drag it onto your test form. You'll probably see nothing at this point because you haven't done any custom painting in your control yet (I'll cover this next). In your control's constructor you should assign values to two properties by default, Control.Name and Control.Size. I set the Name property to "ScrollingText", which is the name the control shows when you add it to the toolbox. The other property, Control.Size, should also be set just so you see something when you drop the control on the test form (otherwise your control's default size will be 0,0).
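Put together, the starting point described above might look like the following sketch. The exact default size and the InitializeComponent call are assumptions for illustration, not taken from the article's source:

```csharp
using System.Drawing;
using System.Windows.Forms;

// Minimal sketch of the control described above: derive from the base
// Control class and set Name and Size defaults in the constructor.
public class ScrollingText : Control
{
    public ScrollingText()
    {
        // the name the control shows when added to the toolbox
        this.Name = "ScrollingText";

        // a non-zero default size so the control is visible
        // when dropped onto the test form (assumed value)
        this.Size = new Size(200, 24);
    }
}
```

With just this skeleton compiled into the class library, the control already appears in the toolbox and can be dropped onto the test form, even though it paints nothing yet.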
public ScrollingText() { // Setup default properties for ScrollingText control InitializeComponent(); //setup the timer object timer = new Timer(); timer.Interval = 25; //default timer interval timer.Enabled = true; timer.Tick += new EventHandler(Tick); } Next we need to create a Tick method that the timer delegate will call. This method is pretty simple. All it does is call the Control.Invalidate() method. The Invalidate() method sends a WM_PAINT message to your control�s message queue, telling it to paint the entire control�s rectangle. There is a problem with this though. Tons of messages are being put in the control�s message queue all the time and the control processes them in order that they are received, so sometimes your control wont paint as fast as you want. To handle this, you can also call the Control.Update() method. When you call the Update() method just after Invalidate() it forces the OS to make the WM_PAINT message to bypass the message the message queue and go directly to the control�s window procedure to be processed. //Controls the animation of the text. private void Tick(object sender, EventArgs e) { //repaint the control this.Invalidate(); this.Update(); } When the control�s window procedure gets a WM_PAINT message, it calls the OnPaint method, which we need to override in order to handle our own custom GDI+ painting. I don�t like to put very much code in the OnPaint override, just to keep it simple, so I create a private method called DrawScrollingText() and pass in the Graphics object. Since DrawScrollingText() handles all the painting for the control, its not necessary to call the base control�s OnPaint() method (shown below, but commented out). It wont hurt the control to call the base method, but its just extra processing that isn�t needed. 
But if your custom control only handles part of the painting, such as when you derive from UserControl, then you�ll need to call the base control�s OnPaint() method, or else only what you paint will show up. One other thing to remember about calling the base.OnPaint() method, is that if it doesn�t get called, the user of the control cant register a method with the control�s Paint event. Since I don�t want the user to do this, I ignore this method. //Paint the ScrollingTextCtrl. protected override void OnPaint(PaintEventArgs pe) { //Paint the text to its new position DrawScrollingText(pe.Graphics); //pass on the graphics obj to the base Control class //base.OnPaint(pe); } There is another way to intercept all WM_PAINT messages, and that is to use the Control�s Paint event to call a method you register with the controls PaintEventHandler delegate. This would work, but overriding the OnPaint method is much faster than having a delegate�s Invoke() method calling for every WM_PAINT message. One note, you might be tempted to use either the System.Threading.Timer class or System.Timers.Timer class to notify the control to paint itself, but don�t. These timer classes are specifically designed for use in a multithreaded environment, and both use threads from the ThreadPool to run their respective delegates. You could use them to update the control, but you�d have to use the Control.Invoke() method to make the call thread safe, and its more of a pain in the butt than its worth (and not especially fast). Especially since the System.Windows.Forms.Timer class is specifically designed to update the UI of a Form. I don�t have the time or know-how to write a complete introduction to GDI+, especially since it is a huge topic. But I will go over some of the basics and specifically what is needed for this control. 
The best resources on GDI+ that I�ve found are �WinForms Programming in C#� by Chris Sells and �Graphics Programming with GDI+� by Mahesh Chand, both published by Addison Wesley. The main thing to remember is that almost any object you create and use in GDI+ has a Dispose() method and needs to be cleaned up, otherwise you risk creating a fun memory leak to track down. This is especially important when creating controls, because the code you write in the overridden OnPaint() method will actually get executed whenever the control has been placed on a form and the form�s designer is showing. This is a great development feature, since all you just compile your code and you get instant visual feedback. You don�t even have to run the exe, just look at the form�s designer and the OnPaint() code gets executed. The only bad thing is that you cant debug and step through the code when the form designer is showing, so if you have a nasty bug in the OnPaint() method you�ll have to actually run the exe to debug it. This feature of control design caused me great trouble a few months back. I was creating a custom control, which did all its own painting, and I found that after working in Visual Studio for 5 minutes, my OS would just slow down and eventually freeze up. I�d have to reboot to get everything working fine again. What had happened is I was creating a GDI+ Brush object each time the OnPaint() method was called, but I never called Dispose() on it. This caused a memory leak in my code, which materialized even when I was in Visual Studio�s design mode. I hade to use notepad in order to figure out where I was missing a Dispose() and correct it. The basic flow of the DrawScrollingText() method is to calculate the new position of the text, which gets updated each time the method is called (I don�t show the code for this in the article because its pretty basic stuff and you can look at it in the provided source). Then I paint the background of the control. 
If the user has set a custom brush via the BackgroundBrush property, I use that to paint the background control. The Graphics class has a FillRectangle() method that I use to do this. Just set the x and y position of the upper left point of the control, and the control�s width and height. If the user didn�t set their own brush object, I call the Graphics.Clear() method, passing in the control�s BackColor. Next I draw the control�s border, if needed. To do this I use the Graphics.DrawRectangle() method. This method is very similar to the FillRectangle, except it takes a Pen object instead of a Brush object. Since my BorderColor property only takes a Color struct, we have to create a new Pen object out of the Color struct. I really like C#�s �using()� functionality, which automatically wraps the code in the using block in a try block, and puts the variable the using keyword is executed on in a finally block, calling the variable�s Dispose Method. For example, take the following code. using (Pen borderPen = new Pen(borderColor)) { canvas.DrawRectangle(borderPen, 0, 0, this.ClientSize.Width-1, this.ClientSize.Height-1); } // Once this code gets compiled into MSIL, it // gets translated to the following Pen borderPen = new Pen(borderColor); try { canvas.DrawRectangle(borderPen, 0, 0, this.ClientSize.Width-1, this.ClientSize.Height-1); } finally { borderPen.Dispose(); } This makes your code read easier and most importantly ensures that no matter what happens in your code, even if an exception is thrown, the object�s Dispose method will get executed. This is essential when working with GDI+ objects, since probably 93.7% of them implement IDisposable. Next I to use GDI+ to draw the text in it�s new, updated position. To do this, use the Graphics.DrawString() method. This method takes the string that you want drawn to the control, the font, a brush that determines what color the text will be, and the x and y position of the upper right hand corner of the text. 
Then code for drawing all this is shown below. //Draw the scrolling text on the control public void DrawScrollingText(Graphics canvas) { //Calculate x and y position of text each tick . . . /); } One final thing that should be added to the control at this point is an overridden Dispose() method. Since the control stores several objects at the class level that need to be disposed, the control�s Dispose() method is the place to handle this, as shown below. protected override void Dispose( bool disposing ) { if( disposing ) { //Make sure our brushes are cleaned up if (foregroundBrush != null) foregroundBrush.Dispose(); //Make sure our brushes are cleaned up if (backgroundBrush != null) backgroundBrush.Dispose(); //Make sure our timer is cleaned up if (timer != null) timer.Dispose(); } base.Dispose( disposing ); } At this point, we have a working control that scrolls the text left to right or right to left. Yea, we�re done! Well, not quite. If you compiled the control and ran it, you�d see the control flickering just a bit. And if you applied a custom brush, such as a gradient brush, to the control�s background you�d see some major flashing going on as the control gets drawn to the screen. The reason for this is that each time GDI+ is used to paint something via the Graphics object, the control gets updated on the screen. In the DrawScrollingText() method, the Graphics object is used 3 times, once for the background, once for the border, and once for the text. This happens every time the OnPaint event gets called. These multiple updates to the control�s UI are what cause the flicker as the text scrolls across the control. The tried and true method to fix this flickering problem is something called Double Buffering, which is a common practice used in C++ whenever handling the WM_PAINT message. What double buffering means is the code creates a bitmap in memory the same size as the window it�s going to update. 
The code applies all the GDI+ updates to the bitmap, then copies the bitmap directly to the control. This way, only one update is made to the control per WM_PAINT message, instead of several. Below is a modified version of the DrawScrollingText() method which employs double buffering. The first thing I do is create a new bitmap in memory that has the same dimensions as the control. Next, I create a Graphics object to update the temporary bitmap with the Graphics.FromImage() method. This is the graphics object that I will use to do the individual GDI+ updates. Then, at the very bottom of the method, I apply the newly updated bitmap to the control�s Graphics object, which will update the UI all in one shot. This way I cut the number of updates to the screen from three per WM_PAINT message down to one. //Draw the scrolling text on the control public void DrawScrollingText(Graphics graphics) { //Calculate x and y position of text each tick . . . using (Bitmap scrollingTextBmp = new Bitmap(this.ClientSize.Width, this.ClientSize.Height)) { using (Graphics canvas = Graphics.FromImage(scrollingTextBmp)) { /); //Double Buffering: draw the bitmap in memory onto the control! graphics.DrawImage(scrollingTextBmp, 0, 0); } } } The double buffering technique shown above helps the flickering a great deal, but doesn�t handle it entirely. There is still a faint flicker every now and then, especially when using a GradientBrush applied to the control�s background. Fortunately the good people at Microsoft saw fit to build double buffering into the System.Windows.Forms namespace. Instead of manually creating a bitmap in memory, drawing on the bitmap, and then copying the bitmap to the control, all you have to do is set 3 easy, little properties. Not only that, but Microsoft does double buffering much smoother than the manual way. The code below shows the three lines, which should be added to the control�s constructor. 
//This turns on internal double buffering of all custom GDI+ drawing
this.SetStyle(ControlStyles.DoubleBuffer, true);
this.SetStyle(ControlStyles.AllPaintingInWmPaint, true);
this.SetStyle(ControlStyles.UserPaint, true);

The first line turns on the internally handled double buffering (the default is off). This tells the CLR to apply all graphic changes to an internal buffer, and then output the result to the screen. When you set this style, you also need to set the ControlStyles.AllPaintingInWmPaint and ControlStyles.UserPaint styles to true as well. AllPaintingInWmPaint tells the CLR to ignore the WM_ERASEBKGND message, which sets the back color of the window, and to handle all painting in the WM_PAINT message. UserPaint tells the CLR that the control overrides the OnPaint method and will handle all its own painting. Once the built-in double buffering is in place, the text control's animation is completely smooth, with absolutely no flickering, even when a gradient brush has been applied. Sweet! The next requirement I want to cover is making the scrolling text act like a hyperlink label. When the user moves the mouse over the moving text, the default cursor should change to the hand cursor. When the cursor then moves off the text, it changes back to the default cursor. Also, when the user clicks on the text, a TextClicked event should fire, which the user can register their own code to. To do this I created the method EnableTextLink, shown below. Passed into the method is a rectangle structure that represents the size and position of the text being drawn to the control. Next, I have to get the cursor position. Because the Cursor.Position property returns the position relative to the desktop as a whole, I need a way to get the position relative to the upper left corner of the control. The Control.PointToClient() method does just that. It returns a point structure representing the cursor position within the client control.
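That screen-to-client conversion can be sketched in isolation (a hedged fragment; it assumes it runs inside a Control-derived class):

```csharp
//Cursor.Position is in screen coordinates (relative to the desktop)
Point screenPt = Cursor.Position;

//PointToClient() translates the point so that (0, 0) is the control's
//upper left corner; note it is a method, not a property
Point clientPt = this.PointToClient(screenPt);
```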
Next I just do a simple boundary check to see if the cursor point falls within the rectangle. When I first wrote this control, I did this check manually, but while playing around I found that the Rectangle structure actually has a Contains() method that does the check for you! Anyway, if the rectangle does contain the point, I want to set the Control.Cursor property to the hand cursor. If the rectangle does not contain the point, then I set Control.Cursor back to its default value. The other thing I handle in this method is stopping the text scrolling when the mouse moves over the text. Remember the StopScrollOnMouseOver public property I mentioned at the beginning of the article? If the user sets this property to true, then this method will set a private scrollOn field to false when the mouse is moved over the text. This field is used to determine whether the text's x and y position should be updated each time OnPaint is called. So when scrollOn is set to false, the x and y position of the text is not updated, which stops the scroll.

private void EnableTextLink(RectangleF textRect)
{
    Point curPt = this.PointToClient(Cursor.Position);

    //Check to see if the user's cursor is over the moving text
    //if (curPt.X > textRect.Left && curPt.X < textRect.Right
    //    && curPt.Y > textRect.Top && curPt.Y < textRect.Bottom)
    if (textRect.Contains(curPt))
    {
        //Stop the text if the user's mouse is over the text
        if (stopScrollOnMouseOver)
            scrollOn = false;

        //Set the cursor to the hand cursor
        this.Cursor = Cursors.Hand;
    }
    else
    {
        //Make sure the text is scrolling if
        //user's mouse is not over the text
        scrollOn = true;

        //Set the cursor back to its default value
        this.Cursor = Cursors.Default;
    }
}

Now that we have that part handled, we need to write the functionality that triggers an event when the user clicks on the text. First, I create a ScrollingText_Click() method to handle all click events for the control.
Then, in the constructor of the control, I register this new method with the Control.Click event. Once this is in place I need to create a delegate and event to handle our custom TextClicked event. When the user clicks on the control, the code first checks to see if the cursor is the hand cursor. If it is, the code knows that the cursor is still over the text, and calls the OnTextClicked event handler, which in turn invokes the delegate. The user can register their own method to this event, and it will get called any time the text is clicked. This is shown below.

private void ScrollingText_Click(object sender, System.EventArgs e)
{
    //Trigger the text clicked event if the user clicks while the mouse
    //is over the text. This allows the text to act like a hyperlink
    if (this.Cursor == Cursors.Hand)
        OnTextClicked(this, new EventArgs());
}

public delegate void TextClickEventHandler(object sender, EventArgs args);

public event TextClickEventHandler TextClicked;

private void OnTextClicked(object sender, EventArgs args)
{
    //Call the delegate
    if (TextClicked != null)
        TextClicked(sender, args);
}

At this point we have a working scrolling text control that doesn't flash and is pretty fast and efficient. But it could be a bit more efficient. Remember the paint method? Every time it's called, we paint the entire control. But is this really necessary? Nope. The only part of the control that really needs to be repainted is where the text is moving to and where the text is moving from. The rest of the control doesn't change. Repainting the whole control every time just wastes computing cycles that could be used for something else. The way to tell a control what part of itself to paint is by passing a Region object into the Control.Invalidate() method. The Region class's constructor can take a rectangle object that represents the exact x and y position of the upper left corner, as well as the width and height of the area that needs to be painted.
The OS ignores every other part of the control. That's the nice thing about invalidating just a region of the control: you don't have to write code in your OnPaint() method to paint only the invalid region. The OS takes care of that for you. For example, let's say you call Graphics.FillRectangle() in your OnPaint() method with a gradient brush. This is a fairly heavy paint command. If Control.Invalidate() is called with a region that only covers the left third of the control, FillRectangle() will only update the left third of the control on the screen, even though FillRectangle would otherwise update the entire control. The OS takes care of these details for you. So, let's apply this to the scrolling control. The control's timer object controls how often Control.Invalidate() is called. In the timer delegate's Tick() method we need to create a Region object and pass it into the Invalidate() method. The first thing to figure out is what area we need to repaint. If the text is moving left to right, we need enough area in the rectangle to cover where the text was (so the leftmost pixels can be reset to the background) and where the text is going to be (so the rightmost pixels of the text can be drawn). We'll also need logic to handle the text scrolling in the other direction. Since I've already created a rectangle structure for use in the mouse-over logic, we can reuse this same rectangle and just modify its width to suit our needs. One thing I've found is that if you are very exact with your rectangle size, and the text scrolls entirely off the edge of the control, then a WM_PAINT message is not sent to the control (the OnPaint() method never gets fired). This is because the OS sees that the region to paint is totally off the control, so there is no reason to send the WM_PAINT message. The problem with this is that the OnPaint() method is where the new text position is calculated.
If the text scrolls off the control, and OnPaint() is never called, the text will never get repositioned to the other side of the control. This took a bit of time to figure out, but to fix it I added a 10 pixel buffer zone to the left and the right of the text's rectangle, to make sure that part of the invalidated region is still on the control and the WM_PAINT message gets sent. The nice thing about the Rectangle structure is that it comes with an Inflate() method that takes the width and height you want to increase the rectangle by. This is shown below in the Tick() method.

//Controls the animation of the text.
private void Tick(object sender, EventArgs e)
{
    //update rectangle to include where to paint for new position
    //lastKnownRect.X -= 10;      //Don't need to use this
    //lastKnownRect.Width += 20;

    lastKnownRect.Inflate(10, 0); //Use the Inflate() method

    //create region based on updated rectangle
    Region updateRegion = new Region(lastKnownRect);

    //repaint the control only where needed
    Invalidate(updateRegion);
    Update();
}

Now that this is in place, we've optimized the drawing of the control pretty well. Only the region of the control that needs to be repainted will actually get updated on the screen. The only thing left to do is a few housecleaning tasks that will give your control a more professional design-time look and feel. These aren't absolutely necessary for the control to work properly, but they make it easier for other developers to use your control. The first one is purely aesthetic. All Microsoft WinForm controls have an icon associated with them that shows up in the toolbox. But custom controls all get the same stock icon unless you specifically add your own. The way to give your control its own icon is by adding a 16x16 icon or bitmap to your control solution. Do this by right clicking on the project, then selecting Add | Add Existing Item..., then navigating to the icon you want and clicking OK.
Once the icon is part of your project, you'll want to change its "Build Action" property to "Embedded Resource" (in the property window). Once this is done, the last step is to add a class level attribute to your control, called ToolboxBitmapAttribute. This attribute's constructor takes two parameters: the type of the control to associate the icon with, and the icon name. This is shown below.

[ToolboxBitmapAttribute(typeof(ScrollingTextControl.ScrollingText), "ScrollingText.bmp")]
public class ScrollingText : System.Windows.Forms.Control
{
}

Once this is done, you won't immediately see your icon show up in the toolbox, because Visual Studio only looks for the icon when you add the control to the toolbox. So you'll have to right click on your control in the toolbox and select Delete to remove it. Next, recompile your control and go through the steps to add your control back to the toolbox. You should see the default custom control icon replaced with your new icon, like the picture below. If you don't, there are a few things to check. Make sure your icon isn't a 32x32 icon. Also make sure the icon name in the attribute is correct, and that the icon's Build Action property is properly set. Also, I've found that if you add and remove a custom control from the toolbox over and over, Visual Studio starts to complain, so try to get it right the first time. The second thing to add to your new control is a default event. A default event is the event that is auto-created by Visual Studio when you double click on the control while it's on a form. For example, if you double click a button control, the button's Click event is automatically created for you in the code file. To set the default event for your control, if you want one, you add another class level attribute called DefaultEventAttribute. This class's only constructor takes a string with the name of the event.
When the control gets double clicked in the designer, Visual Studio uses reflection to find the event that matches the name you set in the attribute, then it generates the appropriate code for that event handler. The new class attribute list is shown below.

[
ToolboxBitmapAttribute(typeof(ScrollingTextControl.ScrollingText), "ScrollingText.bmp"),
DefaultEvent("TextClicked")
]
public class ScrollingText : System.Windows.Forms.Control
{
    . . .
}

The last thing I want to go over is the list of attributes that you can apply to your control's public properties. The first one that I use is the BrowsableAttribute. If it is set to false, then the property will not show up in the property browser for the control. By default, any public property will show up in the property browser. But some properties, like my Brush property shown below, can't be set without code, so there is no reason to make this one visible. The second attribute is CategoryAttribute. If you like to sort the properties in the property browser, you can use this attribute to group similar properties. I like to use it to group all my custom public properties. The third attribute is DescriptionAttribute. The text you set in this attribute shows up at the bottom of the property browser when you click on the property. This is very helpful for the users of your control. Examples of these attributes are shown below.

[
Browsable(true),
CategoryAttribute("Scrolling Text"),
Description("Determines if the text will stop scrolling" +
            " if the user's mouse moves over the text")
]
public bool StopScrollOnMouseOver
{
    set{stopScrollOnMouseOver = value;}
    get{return stopScrollOnMouseOver;}
}

[Browsable(false)]
public Brush ForegroundBrush
{
    set{foregroundBrush = value;}
    get{return foregroundBrush;}
}

There are a few other attributes that you can apply to a custom control, as well as a whole slew of things you can do to customize how your control is used at design time, but that is a topic for a whole other article.
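To close out this first control, here is a hedged sketch of what consuming it from a client form might look like (the instance name scrollingText1, the handler name, and the nesting of the delegate inside the control class are assumptions made for illustration):

```csharp
//Hypothetical client-side code: scrollingText1 is a ScrollingText
//instance dropped onto a form. Registration would go in the form's
//constructor, after InitializeComponent():
scrollingText1.TextClicked +=
    new ScrollingText.TextClickEventHandler(ScrollingText1_TextClicked);

//...and the handler runs whenever the moving text is clicked:
private void ScrollingText1_TextClicked(object sender, EventArgs args)
{
    MessageBox.Show("The scrolling text was clicked!");
}
```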
http://www.codeproject.com/KB/miscctrl/ScrollingTextControlArtic.aspx
This article shows how to embed an XNA-based game into a WinForms control with ease. Also, it explains how to integrate an XNA GS project into VS2008 (this IDE is not currently supported by XNA GS), and in turn, be able to use WPF with your XNA-based creation. Any game developer knows that having a level-editing tool to construct the game's world is nowadays "a must". Depending on the project, it helps you build everything up faster and easier than if you had to design the level in Notepad, by using something like, say:

00001110 00010000 00200100 02000000 0*000000***000 0
...........................
00030020000900010

What is more, most of us dream of making a level editor that fits our project's needs. Let us face it: one thing is using a third-party tool, but creating our own custom-made level editor is a whole different story. Especially if you plan to use XNA GS to create that tool. When the XNA Framework was first released, I said, "This is what I was waiting for!". When the second version of XNA GS was released some days ago, I added: "This is getting better and better!". As a C# (advanced) programmer, I love using XNA GS for the development of my games, prototypes, and proofs of concept. It is fun and, most of the time, simple. Unfortunately, XNA GS seems not to be that straightforward if you want to create your own level editor, given that at first sight there is no easy way to embed an XNA Game project into a WinForms control. This issue was planned to be solved for the release of v2, but due to time constraints, it was postponed by the XNA team for a later update. Please, guys, understand all the effort the XNA team put in to release XNA GS 2 (with all the new functionality) before Christmas. Come on! In fact, they deserve kudos and a break! OK, but where does this situation leave us? Simple: we have to find a way to deal with it on our own.
If you do a Google search, or read this thread on the creators' forums, you will find different ways to reach the same goal, most of which imply re-implementing the graphics device, hiding the Game.Run() functionality, and so on. What if you want to take advantage of that functionality and still embed your XNA-based project into a WinForms control? What if you do not want to re-implement anything, because you are lazy like me or just do not feel like it? To make things worse, what if you want to use VS2008 to handle your XNA-based project? You know that both XNA GS versions, either v1 (the refresh) or the just-released v2, do not yet support the 2008 editions of this great IDE. Read this thread for more information. To sum up: is there a simple way of embedding an XNA-based project into a WinForms control while at the same time using VS2008? Well, let us find out. This article is for Windows only (either XP or Vista). Why? Because, in order to compile your games for the Xbox 360, you will only need an edition of VS2005 (as said in the previous section, VS2008 is not yet supported). Plus, I do not believe you can use WinForms controls on the 360. Otherwise, there would be no point in writing this article at all ;) Also, if you are thinking of creating a level-editing tool, take into account that it is not the purpose of this article to explain how to create a level editor; therefore, and in particular, it will not show you how to dynamically load custom content at runtime, say, by using MSBuild. There are plenty of articles and a project that will teach you how to achieve that. In fact, the main objective of this article is to demonstrate how to use VS2008 to create and manage XNA-based projects. In order to compile the projects, you will need to install the prerequisites. In the following sections, I will try to keep everything "plain and simple" to assure an easy reading and understanding of the concepts and the example code.
Attached to this article, you will find two zip files containing the source code of each section. You are free to use and modify both, following "The Code Project Open License" (CPOL). To keep the file size small, all the XNA-based project does is show the current date and time on the screen by using SpriteFonts. I assume you have the required knowledge of the XNA framework, so I am sure that after reading this article, you will extend the examples and templates as desired to meet your needs and dreams. By the way, this is the first article I write for "The Code Project", so I appreciate your comments and suggestions ... just be nice, though ;) Although this is not the main purpose of the article, I have found what I deem an elegant and simple way of embedding an XNA-based game into a WinForms control. Please do not misunderstand me. I still believe that when you need to go beyond what XNA GS offers right now in this regard, handling and controlling (a) how the graphics device should be created and (b) how the main loop should behave is the right way to go. No discussion about that. However, and as I said a couple of sections above, if you are lazy like me, you could be interested in the alternative I will soon present. Thus, if you are, then read on; if not, just skip this section. First things first: add a Form control to the game project (the IDE will automatically set a reference to System.Windows.Forms), and then add a Panel to that Form. Assuming the Panel keeps its default name of panel1, the code of your partial class should be something similar to what is shown next:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;

namespace WindowsGame1
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        public IntPtr PanelHandle
        {
            get
            {
                return this.panel1.IsHandleCreated ?
this.panel1.Handle : IntPtr.Zero;
            }
        }
    }
}

Now, let us go to the main game file, by default named "Game1.cs", and modify the code as required. Here, we will do a few things: hide the game's default window and show our Form control instead; exit the game when the Form (and with it, the Panel) is destroyed; and show what we draw on the Panel's canvas. In order to do everything but the third task, we need to modify the Initialize method a little bit:

...
using SysWinForms = System.Windows.Forms; // to avoid conflicts with namespaces
...

namespace WindowsGame1
{
    /// <summary>
    /// This is the main type for your game
    /// </summary>
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        ...
        Form1 myForm;

        protected override void Initialize()
        {
            ...
            this.myForm = new Form1();
            myForm.HandleDestroyed += new EventHandler(myForm_HandleDestroyed);
            myForm.Show();
        }
        ...
    }
}

Also, we need to implement how we will handle the two above-mentioned events:

namespace WindowsGame1
{
    /// <summary>
    /// This is the main type for your game
    /// </summary>
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        ...
        void myForm_HandleDestroyed(object sender, EventArgs e)
        {
            this.Exit();
        }

        void gameWindowForm_Shown(object sender, EventArgs e)
        {
            ((SysWinForms.Form)sender).Hide();
        }
        ...
    }
}

It is time to do the third task: to show all the things we draw in our Panel control. In order to do that, we just need to add one simple line at the end of the Draw method:

...
{
    /// <summary>
    /// This is the main type for your game
    /// </summary>
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        ...
        /// <summary>
        /// This is called when the game should draw itself.
        /// </summary>
        /// <param name="gameTime">Provides a snapshot of timing values.</param>
        protected override void Draw(GameTime gameTime)
        {
            ...
            base.Draw(gameTime);

            // This one will do the trick we are looking for!
            this.GraphicsDevice.Present(this.myForm.PanelHandle);
        }
    }
}

At this point, nothing else is needed. You should be able to execute your project and see that this implementation simply works just fine.
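The article elides parts of the modified Initialize() method, so here is a plausible complete version based on the handlers shown above. The Shown hookup via Control.FromHandle() is my assumption about how the game's own window gets hidden; only the Form1 wiring is taken directly from the article:

```csharp
Form1 myForm;

protected override void Initialize()
{
    base.Initialize();

    // Hide the game's own window as soon as it first appears
    // (assumption: resolve the game window's Form from its handle)
    SysWinForms.Form gameWindowForm =
        (SysWinForms.Form)SysWinForms.Control.FromHandle(this.Window.Handle);
    gameWindowForm.Shown += new EventHandler(gameWindowForm_Shown);

    // Show our WinForms UI instead, and exit when it is destroyed
    this.myForm = new Form1();
    myForm.HandleDestroyed += new EventHandler(myForm_HandleDestroyed);
    myForm.Show();
}
```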
By the way, now you know (in case you did not discover it earlier) why we needed to get the Panel's handle in the first place. In the attached zip file, you will find the complete source code with a couple of additions: we draw the current date and time on screen by using a simple SpriteFont, and we also add a PropertyGrid control (as the picture at the top of this article shows) just for fun. One final comment about this implementation: you could experience some overhead because, even though you are hiding the "main" game window, you may still be drawing to it. Honestly, I did not test whether that is the case because, as I am using the technique for creating my own level editor, I simply do not care that much about performance issues right now. Plus, as you can notice, I am lazy ;) One of the "myths" behind the XNA framework is that you cannot create a project in VS2008. As will be shown in this section, that assumption is wrong. Or at least, not completely true. Despite the fact that VS2008 is not yet officially supported, you can find workarounds to create and manage your XNA projects in that IDE (edit, compile, run, and debug), since the XNA framework assemblies can be manually set as references, as we usually do with external components, third-party components, and all of the .NET Framework assemblies required to compile our code. What you will lose by taking this path are mainly two crucial things: first, you will not be able to deploy your games to the Xbox 360, and second, using the content pipeline is out of the question. The first restriction is not important to us, since we are targeting our code for the Windows platform only. But what about the second restriction? Since this is a temporary workaround, here is where things get a bit more complicated in practice, even though the solution is simple in concept. Unless we use MSBuild to compile the assets, we will need both VS2005 and VS2008.
What we will do is separate the content from the rest of the code, and manage the former in VS2005 and the latter in VS2008. With this trick, you will be able to compile the content when needed with VS2005, and then allow your VS2008 game project to use the files generated by the content pipeline in the form of binary output. Take due note that there is no reason to manually copy the output from folder to folder in order to compile and execute your "whole" project; just create both related projects (that is, the VS2005 and VS2008 ones) and let them share the same Debug and Release folders. To change the output path, open your project's Properties, go to the "Build" tab, and make the changes in the proper field. Do not forget to target your builds for the "x86" platform in VS2008, or you will get an error. Let us summarize the steps we have just followed: create the game project in VS2008, manually add references to the Microsoft.Xna.Framework assembly and the Microsoft.Xna.Framework.Game assembly, keep the content in a companion VS2005 project, and let both projects share the same output folders. You will find a complete sample with source code in the attached zip file. An interesting thing about creating your game projects in VS2008 is that you can use all the goodies provided by the .NET Framework 3.5; that is, anonymous types, lambdas, LINQ to SQL, LINQ to Objects, and so on. But I will let you play around with these as homework. In the next couple of sections, things turn out to be more interesting. Believe me! The previous section was all about integrating XNA into a VS2008 WinForms control. You may probably know this already: the Windows Presentation Foundation allows us to construct UI elements for our applications by using a new declarative XML-based language: XAML (which stands for "eXtensible Application Markup Language"). Now, is it possible to embed an XNA game into a Windows Presentation Foundation control? What is more, can we use XAML? Simple answer for both questions: Yes!
Let us start by setting up a new solution, and then, as before, manually add references to the Microsoft.Xna.Framework assembly and the Microsoft.Xna.Framework.Game assembly. Now, here is where differences appear in comparison with the example of the previous section. First of all, in order to embed a WinForms control into a WPF window, you must use WindowsFormsHost, which will serve as a host for the former. Open "Window1" in Design view, and edit the XAML code as follows (the Window attributes and namespace declarations are elided here):

<Window x:Class="WindowsGame1_WPF.Window1" . . . >
    <Grid Name="myGrid">
        <Button Height="23" Margin="104,0,99,11" Name="button1"
                VerticalAlignment="Bottom" Click="button1_Click">Press Me!</Button>
        <WindowsFormsHost Margin="20,20,20,45" Name="windowsFormsHost1">
            <wfc:Panel x:Name="myXnaControl" />
        </WindowsFormsHost>
    </Grid>
</Window>

As you can see in the code above, we are directly creating the WinForms Panel within the XAML code, and we are naming it "myXnaControl". It is now time to modify the code behind this window so that it looks quite like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Shapes;

namespace WindowsGame1_WPF
{
    /// <summary>
    /// Interaction logic for Window1.xaml
    /// </summary>
    public partial class Window1 : Window
    {
        Game1 game;

        public Window1()
        {
            InitializeComponent();
            this.game = new Game1(this.myXnaControl.Handle);
            this.Closing += new System.ComponentModel.CancelEventHandler(Window1_Closing);
        }

        void Window1_Closing(object sender, System.ComponentModel.CancelEventArgs e)
        {
            if(this.game != null)
            {
                this.game.Exit();
            }
        }

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            this.Background = Brushes.Black;
            this.button1.IsEnabled = false;
            this.game.Run();
        }
    }
}

What is new here? First, when the window is closed, we exit the game (if you remember, in the previous examples that was handled from within the game class itself).
Second, when we construct the game, we pass the Panel's handle as a parameter. Third, the game will not run until we press a button (you can modify this behavior so that the game runs whenever you want).

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Net;
using Microsoft.Xna.Framework.Storage;
using SysWinForms = System.Windows.Forms; // to avoid conflicts with namespaces

namespace WindowsGame1_WPF
{
    /// <summary>
    /// This is the main type for your game
    /// </summary>
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        SpriteFont Font1;
        Vector2 FontPos;
        IntPtr myXnaControlHandle;

        public Game1(IntPtr myXnaControlHandle)
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            this.myXnaControlHandle = myXnaControlHandle;
        }

        . . .

        void gameWindowForm_Shown(object sender, EventArgs e)
        {
            ((SysWinForms.Form)sender).Hide();
        }

        /// <summary>
        /// LoadContent will be called once per game and is the place to load
        /// all of your content.
        /// </summary>
        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);
            Font1 = Content.Load<SpriteFont>("MyFont");

            // TODO: Load your game content here
            FontPos = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2,
                                  graphics.GraphicsDevice.Viewport.Height / 2);
        }

        . . .

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();

            // Draw the current date and time
            string output = DateTime.Now.ToString();

            // Find the center of the string
            Vector2 FontOrigin = Font1.MeasureString(output) / 2;

            // Draw the string
            spriteBatch.DrawString(Font1, output, FontPos, Color.LightGreen,
                0, FontOrigin, 1.0f, SpriteEffects.None, 0.5f);

            spriteBatch.End();

            base.Draw(gameTime);

            // This one will do the trick we are looking for!
            this.GraphicsDevice.Present(this.myXnaControlHandle);
        }
    }
}

The main changes in the above code are: we do not subscribe to the HandleDestroyed event anymore (the WPF window exits the game when it closes), and the Panel's handle is passed to the game's constructor as an IntPtr. Guess the reason for what is stated in that last point? To avoid "interop" issues. Let me remind you that in all the previous examples, we used to get the handle from the respective Panel's property when rendering. We could have implemented all those examples the same way, by passing the handle to the game's constructor, but I wanted to show the difference. Anything left to do? Just compile both projects to see something like the picture below. And the following must happen when you press the button located at the bottom of the window. Piece of cake, right? Now you can take full advantage of this new technology and enjoy its benefits. This is something I will investigate in the future. The reason: maybe there is a way to embed and run an XNA-based application in a webpage by using the Silverlight technology. It would be really nice to see our games loaded and played within a browser. However, thinking out loud about it, I believe that some restrictions shall apply. Perhaps my second article should focus on this investigation once Silverlight 2.0 is finally released; as far as I know, that version will allow us to consume WinForms controls.
In the meantime, there is another workaround in this regard, which I present hereunder for learning purposes only. A strong word of warning before you read on: you should never give an assembly (and/or a website) full-trust privileges, because you can open your system to security risks; your machine could get remotely owned by third parties. If you do, because you think you know what you are doing or for any other reason, you are granting that trust at your own risk. When I was trying to figure out a way to run an XNA-based game within a browser, I remembered that, using the <Object> HTML tag and the proper settings, we can embed a .NET assembly into a webpage. Like, say:

<Object id="myControl" name="myControl"
        classid="myWinControl.dll#myNameSpace.myWinControl"
        width="400" height="300" VIEWASTEXT></Object>

Although the above-mentioned method does work, some conditions must be met to avoid problems and disappointment. This whole "mess" is not what we want, is it? So, I looked into a second approach which, in turn, drove me to this great article: "Hosting a .NET ActiveX Control in Visual FoxPro". Making the assembly COM-visible opens the door to great possibilities in this field, since we are exposing our .NET assemblies as ActiveX controls. Plus, it works "with less trouble" than the previous alternative. On the other hand, and as said before, it could also open the door to security risks, so be careful when you mess around with the security policies of your machine or any client's machine. I repeat, this example is for learning purposes only. OK, let us see some code, shall we? By the way, please bear in mind that this is a "proof-of-concept", with the sole purpose of showing that the idea can be achieved in practice. Therefore, the following implementation is simple in design, and thus it lacks certain desirable features and controls.
First, create a WinForms project with Visual Studio (or VC Express edition), name it "XnaGame", and then add the Game1 file we used in the previous examples. Also, create a UserControl and name it "XnaPanel". We will not need the Form control created by default, so just delete it. Open the properties of the project and register the assembly for COM interop; also, make the assembly COM-visible by modifying the AssemblyInfo.cs file (under the Properties folder): ... // Setting ComVisible to false makes the types in this assembly not visible // to COM components. If you need to access a type in this assembly from // COM, set the ComVisible attribute to true on that type. [assembly: ComVisible(true)] ... Having done so, the code of our XnaPanel class should be the following: using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Data; using System.Text; using System.Windows.Forms; using System.Threading; using System.Runtime.InteropServices; using System.IO; namespace XnaGame { /// <summary> /// This is the control that will host our XNA-based game. /// </summary> [Guid("2CD2873E-2A50-4ac9-98CA-B13ACFCC6DFA")] [ProgId("XnaGame.XnaPanel")] [ComVisible(true)] public partial class XnaPanel : UserControl { static string localPath; /// <summary> /// Main constructor of the control. /// </summary> public XnaPanel() { InitializeComponent(); this.HandleCreated += new EventHandler(XnaPanel_HandleCreated); } /// <summary> /// Executes when the control's handle is created.
/// </summary> /// <param name="sender">The source of the event.</param> /// <param name="e">An System.EventArgs object that contains event data.</param> void XnaPanel_HandleCreated(object sender, EventArgs e) { // Set the path to the local Content folder localPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData); localPath += @"\Temp\XnaGame\Content"; // Check whether the folder exists, locally while(!Directory.Exists(localPath)) { // If not, just wait ... // (you can create a timeout control, instead, to avoid // running this check forever!) } // Run the game in a new thread. Thread gameThread = new Thread(RunGame); gameThread.Start(); } /// <summary> /// Creates and executes the game. /// </summary> void RunGame() { new Game1(this.Handle, localPath).Run(); } } } What we are doing here is nothing different from what we had seen before, with the exception of the XnaPanel_HandleCreated method, which waits for the local Content folder and runs the game in a new thread, and the COM-related attributes (Guid, ProgId, and ComVisible). As you can see from the code above, we will wait until the Content folder is created locally. Why? Well, how can we access the Content folder on the server? So far, and please correct me if I am wrong, we cannot. Given that the COM object is executed on the client side and that the Content Pipeline does not currently allow us to set a folder on the Internet as our game's Content folder, we must create the Content folder and copy all the assets locally. Who creates it? And when? We shall see in a couple of paragraphs below. In order to give the final touches to this project, let us change the constructor of our game so that it takes into account the path to the local Content folder and notifies the content pipeline properly: ... /// <summary> /// Main constructor of the game.
/// </summary> /// <param name="myPanelHandle">The handle of the XnaPanel control.</param> /// <param name="localPath">The path to the local Content folder.</param> public Game1(IntPtr myPanelHandle, string localPath) { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = localPath; this.myPanelHandle = myPanelHandle; } ... We are now ready to build the project. When doing so for the first time, VS will register the assembly for COM interop by including the following entries in the Windows Registry: [HKEY_CLASSES_ROOT\CLSID\{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}] @="XnaGame.XnaPanel" [HKEY_CLASSES_ROOT\CLSID\{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}\Implemented Categories] [HKEY_CLASSES_ROOT\CLSID\{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}\Implemented Categories\ {62C8FE65-4EBB-45e7-B440-6E39B2CDBF29}] [HKEY_CLASSES_ROOT\CLSID\{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}\ProgId] @="XnaGame.XnaPanel" [HKEY_CLASSES_ROOT\XnaGame.XnaPanel] @="XnaGame.XnaPanel" [HKEY_CLASSES_ROOT\XnaGame.XnaPanel\CLSID] @="{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}] @="XnaGame.XnaPanel" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\ {2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}\Implemented Categories] [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\ {2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}\Implemented Categories\ {62C8FE65-4EBB-45e7-B440-6E39B2CDBF29}] [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}\ProgId] @="XnaGame.XnaPanel" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\XnaGame.XnaPanel] @="XnaGame.XnaPanel" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\XnaGame.XnaPanel\CLSID] @="{2CD2873E-2A50-4AC9-98CA-B13ACFCC6DFA}" <ThePathToTheFolderWhereYourAssemblyIsLocated> is not the actual output, just a dummy placeholder for example purposes. Instead, if you check your Registry, that fake path should be substituted with the real path to the folder where the DLL is located on your machine (or remotely, in case you uploaded it to your website on the Internet).
Also, this is "automagically" done by VS, but when you deploy your project to the Internet, you must provide a way to register the COM object, as explained in the article "Hosting a .NET ActiveX Control in Visual FoxPro" (please refer to the sample implementation of the RegisterClass/UnregisterClass static methods in that article). Second project: create a new ASP.NET web project (with VS or VWD Express), name it "XnaOnWebSite", and add the Content folder with the font file to that project (as we had seen in previous sections). First, we modify the inline code as follows: <%@ Page ... %> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Xna-Based Game On A WebPage</title> </head> <body bgcolor="Black"> <form id="form1" runat="server" style="margin-top: 50px;"> <div style="text-align: center;"> <object id="myXnaGameControl" name="myXnaGameControl" classid="clsid:2CD2873E-2A50-4ac9-98CA-B13ACFCC6DFA" width="640" height="480" VIEWASTEXT> </object> </div> </form> </body> </html> Please notice that instead of using classid="XnaGame.dll#XnaGame.XnaPanel", we are using the class GUID we provided to the COM-visible class. Then, we modify the code-behind as follows: using System; ... using System.IO; namespace XnaOnWebSite { /// <summary> /// This class is responsible for copying all the content files /// from the server to the local Content folder. /// </summary> public partial class _Default : System.Web.UI.Page { /// <summary> /// Executes when the page loads.
/// </summary> /// <param name="sender">The source of the event.</param> /// <param name="e">An System.EventArgs /// object that contains event data.</param> protected void Page_Load(object sender, EventArgs e) { // Get the physical path of the Content folder on the server string serverPath = Server.MapPath(@"\Content"); // Get the properties of the Content folder DirectoryInfo serverFolder = new DirectoryInfo(serverPath); // Check whether the folder exists on the server if (!serverFolder.Exists) { throw new DirectoryNotFoundException("The Content" + " folder is not present on the server."); } // Set the path to the local Content folder string localPath = Environment.GetFolderPath( Environment.SpecialFolder.LocalApplicationData); localPath += @"\Temp\XnaGame\Content"; // Check whether the folder exists locally if (Directory.Exists(localPath)) { DirectoryInfo localFolder = new DirectoryInfo(localPath); localFolder.Delete(true); } // Create the folder locally Directory.CreateDirectory(localPath); // Get the properties of the unique content file FileInfo serverFile = new FileInfo(serverPath + @"\myFont.xnb"); // Check whether the file exists on the server if (!serverFile.Exists) { throw new FileNotFoundException("The file 'myFont.xnb' is" + " not present in the server's Content folder."); } // Copy the file to the local Content folder serverFile.CopyTo(localPath + @"\myFont.xnb", true); } } } I guess the code above is self-explanatory, but basically, what we are doing is copying the contents of the server's Content folder to a local Content folder. For the example, there is no need to use recursive operations since we know there is only one file to copy. OK. When you build and execute the project, your browser will open the new "local" website (which you must have added to the trusted-zone list).
You will notice that the assembly is still cached on your local machine (somewhere in your AppData folder; look for a folder named "dl2" or "dl3") and that the DLL is accompanied by an ini file, by default named "_AssemblyInfo_.ini", which includes content similar to this: X n a G a m e , 1 . 0 . 0 . 0 , , 9 9 c 2 0 c 9 4 c a 2 9 9 5 7 b <ThePathToTheFolderWhereYourAssemblyIsLocated> / X n a G a m e . d l l One interesting note: you can modify the Registry entries created by VS when you first build the COM-visible assembly; so, let us say that you are hosting your XnaGame.dll file in a folder on the URI "http://yoursite.com/MyBins"; you can modify the Registry, changing the "old" path to the new one, and the next time you check for that file, you will get the following: X n a G a m e , 1 . 0 . 0 . 0 , , 9 9 c 2 0 c 9 4 c a 2 9 9 5 7 b h t t p : / / y o u r s i t e . c o m / M y B i n s / X n a G a m e . d l l Meaning? Every time you open the website, the file will be downloaded and cached from that remote location. At last! The nice part: if everything went OK, you should see something like the screenshot below. If something went wrong, do not forget the setup tasks that must be completed in order to run the example code. Depending on your browser's configuration, the COM object could get blocked (because the browser cannot verify the provider, since we have not attached any certificate to our assembly), but if you did the above-mentioned tasks, you will do just fine. As said before, this example is just a simple base. A starting point. An interesting idea based on this would be creating an "XNA Web Player", where the player assembly remains separated from the game itself.
A way to implement this is by creating an installer that saves the COM-visible assembly in the client's "Program Files" directory; this assembly (a) includes, say, a sort of plug-in system, and (b) when executed by the browser, it: In other words, something similar to the usual "web players", or to those offered by game-related frameworks where you install the player together with the game-engine DLLs. Guess what? Our basic DLLs are already present on the clients' machines, since they installed the XNA GS 2 redistributable assemblies in the first place. The only ones to handle would be the third-party DLLs your game consumes. Then, it would be just a matter of inserting the following code somewhere in the webpage: <Object id="myControl" name="myControl" classid="clsid:2CD2873E-2A50-4ac9-98CA-B13ACFCC6DFA" width="400" height="300" VIEWASTEXT> <param name="Game" value="/myServerFolder/myGame.dll"> <param name="Content" value="/myServerFolder/Content/"> ... </Object> I cannot imagine how handy something like this would be for showing off our demos, prototypes, and so on. And I guess you share this feeling ... As demonstrated in this article, with "some little" effort, we can use XNA GS 2.0 and VS2008 side by side without problems to create and manage our Windows projects. The workaround presented is simple, and it allows us to take advantage of all the new features included in .NET Framework 3.5. It is also demonstrated how to embed an XNA-based game into a webpage by using the <Object> tag and COM interop with .NET assemblies. The example provided in this regard is just a "kick off" for future investigation. I hope you have found this article useful, and that you enjoyed reading it as much as I enjoyed writing it. As we wait for an official solution from the XNA team, here is a list of interesting ideas for you to play with: This is the second version of the article. Changes since the first version:
http://www.codeproject.com/KB/game/XNA_And_Beyond.aspx
crawl-002
refinedweb
4,980
56.15
#include <Server.hpp> A cloud server. The Server is an abstract class with which you can build a real-time cloud server that follows the WebSocket protocol. Extend this Server and its related classes and override the abstract methods described below. After the overriding, open the cloud server using the open() method. Templates - Cloud Service Definition at line 51 of file Server.hpp. Default Constructor. Definition at line 73 of file Server.hpp. References createUser(), and ~Server(). Default Destructor. Reimplemented from samchon::protocol::Server. Factory method creating a User object. Referenced by addClient(), and Server(). Test whether a User with the given accountID exists. Definition at line 101 of file Server.hpp. References samchon::HashMap< Key, T, Hash, Pred, Alloc >::has(). Get a User object by its accountID. Definition at line 112 of file Server.hpp. References samchon::HashMap< Key, T, Hash, Pred, Alloc >::get(). Sends an Invoke message to all remote clients through the associated User and Client objects. Sending the Invoke message to all remote clients is accomplished by passing it through User.sendData(), which in turn passes through Client.sendData(). Definition at line 142 of file Server.hpp. References replyData(), samchon::templates::service::User::sendData(), and samchon::library::UniqueReadLock::unlock(). Handle a replied Invoke message. Server.replyData() is an abstract method for handling Invoke messages that should be handled at the Server level. Override this replyData() method and define what to do with the Invoke message at the Server level. Referenced by sendData(). Add a newly connected remote client. When a remote client connects to this cloud server, the Server queries the session id (WebClientDriver.getSessionID) of the remote client. If the session id is a new one, it creates a new User object.
Next, it creates a Client object representing the newly connected remote client and inserts the Client object into the matching User object (a new or an existing one) according to the session id (WebClientDriver.getSessionID). Finally, a Service object can be created referencing the path. These objects can be created by this method. Definition at line 190 of file Server.hpp. References createUser(), and samchon::protocol::WebClientDriver::getSessionID().
http://samchon.github.io/framework/api/cpp/d7/dcf/classsamchon_1_1templates_1_1service_1_1Server.html
the relative-import mechanism is broken... at least on python2.6 but i'd guess on later versions as well. consider this package layout: /tmp/foo/ /tmp/foo/__init__.py /tmp/foo/bar.py where bar.py is: # note this is a relative import and should fail! from .os import walk print walk # and this should also fail from . import os print os running it yields a bug: $ PYTHONPATH="/tmp" python Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import foo.bar <function walk at 0xb7d2aa04> # <<<< ?!?! Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/tmp/foo/bar.py", line 4, in <module> from . import os ImportError: cannot import name os "from . import os" fails as expected, but "from .os import walk" works -- although it should obviously fail too. -tomer
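For reference, the reporter's layout can be reproduced programmatically. The sketch below (the temporary directory and module names are just for illustration) builds the same foo package and attempts the import; on Python 3, where the implicit relative-import fallback was removed, `from .os import walk` fails with ImportError as the reporter expected:

```python
import os
import sys
import tempfile

# Recreate the layout from the report: foo/__init__.py and foo/bar.py,
# where bar.py performs the relative import "from .os import walk".
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "foo")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write("from .os import walk\n")

sys.path.insert(0, tmp)
try:
    import foo.bar
    result = "imported"        # the buggy Python 2.6 behaviour described above
except ImportError:            # ModuleNotFoundError subclasses ImportError
    result = "ImportError"     # Python 3 behaviour: the relative import fails
print(result)                  # On Python 3: ImportError
```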
https://bugs.python.org/msg99176
Ian Bicking wrote/napsal: :) My /etc/init.d/webkit script works with AutoReload automatically. I remember, that I tailored it (as the default didn't work for me), but I don't think I've modified it for AutoReload. My version is available at copied from living production server (Debian stable). Věroš Kaplan -- Věroš Kaplan <veros @ tac . cz> system disaster Tacoma Computers, Staňkova 18a, Brno, CZ -- "Perl actually stands for Pathologically Eclectic Rubbish Lister, but don't tell anyone I said that." -- Perl manual page sandra ruiz wrote: > I'm not sure what the error is from. Is the batch file corrupted or something? There also seems to be an unpickleable object in your session, but that doesn't seem to be the source of the problem. Ian ) > >Anyway, use the script in Webware/WebKit/AppServer to start the server, and >it will restart properly. > thanks. _________________________________________________________________ Charla con tus amigos en línea mediante MSN Messenger: Hi, I'm trying to implement a per-session data store for WebKit and this is what I've done so far (in the attachment). It is used like this: store.register('dc', newDebuggingContext) store.register('localizer', newLocalizer) class SitePage: ... def awake(self, t): mvc.View.awake(self, t) SID = self.session().identifier() try: self.localizer = store.get(SID, 'localizer') except cms.config.Error, e: self.localizer = NoOpLocalizerStub() self.handleError("Can't create localizer", e) self.debugContext = store.get(SID, 'dc') The questions: 1. Should I guard the class with threading locks? 2. How can I (if I can) get notified when the session is abandoned to be able to free resources from store? 3. Would it be feasible to manage the db conn. the same way?
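Regarding the questions above, a minimal sketch of such a per-session store (the names SessionStore and expire are hypothetical, not part of the Webware API) might guard its registry with a lock (question 1) and expose an explicit cleanup hook that could be called when the session is abandoned (question 2):

```python
import threading

class SessionStore:
    """Per-session data store: a factory is registered per key, and
    instances are created lazily per session id, guarded by a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._factories = {}
        self._data = {}      # {session_id: {key: instance}}

    def register(self, key, factory):
        with self._lock:
            self._factories[key] = factory

    def get(self, session_id, key):
        with self._lock:
            bucket = self._data.setdefault(session_id, {})
            if key not in bucket:
                bucket[key] = self._factories[key]()
            return bucket[key]

    def expire(self, session_id):
        # Hypothetical cleanup hook: call it when the session is
        # abandoned so per-session resources can be freed.
        with self._lock:
            self._data.pop(session_id, None)

store = SessionStore()
store.register('dc', lambda: {'name': 'DebuggingContext'})
a = store.get('sid-1', 'dc')
b = store.get('sid-1', 'dc')
print(a is b)  # True: the same instance is reused within one session
```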
https://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200402&viewday=3
Table Of Contents Widgets¶ Introduction to Widget¶ A Widget is the base building block of GUI interfaces in Kivy. It provides a Canvas that can be used to draw on screen. It receives events and reacts to them. For an in-depth explanation about the Widget class, look at the module documentation. Manipulating the Widget tree¶ Widgets in Kivy are organized in trees. Your application has a root widget, which usually has children that can have children of their own. Children of a widget are represented as the children attribute, a Kivy ListProperty. The widget tree can be manipulated with the following methods: add_widget(): add a widget as a child remove_widget(): remove a widget from the children list clear_widgets(): remove all children from a widget For example, if you want to add a button inside a BoxLayout, you can do: layout = BoxLayout(padding=10) button = Button(text='My first button') layout.add_widget(button) The button is added to layout: the button’s parent property will be set to layout; the layout will have the button added to its children list. To remove the button from the layout: layout.remove_widget(button) With removal, the button’s parent property will be set to None, and the layout will have the button removed from its children list. If you want to clear all the children inside a widget, use the clear_widgets() method: layout.clear_widgets() Warning Never manipulate the children list yourself, unless you really know what you are doing. The widget tree is associated with a graphics tree. For example, if you add a widget into the children list without adding its canvas to the graphics tree, the widget will be a child, yes, but nothing will be drawn on the screen. Moreover, you might have issues with further calls of add_widget, remove_widget and clear_widgets. Traversing the Tree¶ The Widget class instance’s children list property contains all the children. You can easily traverse the tree by doing: root = BoxLayout() # ... add widgets to root ...
for child in root.children: print(child) However, this must be used carefully. If you intend to modify the children list with one of the methods shown in the previous section, you must use a copy of the list, like this: for child in root.children[:]: # manipulate the tree. For example here, remove all widgets that have a # width < 100 if child.width < 100: root.remove_widget(child) Widgets don’t influence the size/pos of their children by default. The pos attribute is the absolute position in screen co-ordinates (unless you use a RelativeLayout; more on that later) and size is an absolute size. Widgets Z Index¶ The order of widget drawing is based on the widget’s position in the widget tree. The add_widget method takes an index parameter which can be used to specify its position in the widget tree: root.add_widget(widget, index) The lower indexed widgets will be drawn above those with a higher index. Keep in mind that the default for index is 0, so widgets added later are drawn on top of the others unless specified otherwise. Organize with Layouts¶ A layout is a special kind of widget that controls the size and position of its children. There are different kinds of layouts, allowing for different automatic organization of their children. Layouts use size_hint and pos_hint properties to determine the size and pos of their children. BoxLayout: Arranges widgets in an adjacent manner (either vertically or horizontally) to fill all the space. The size_hint property of children can be used to change the proportions allowed to each child, or to set a fixed size for some of them. GridLayout: Arranges widgets in a grid. You must specify at least one dimension of the grid so kivy can compute the size of the elements and how to arrange them. StackLayout: Arranges widgets adjacent to one another, but with a set size in one of the dimensions, without trying to make them fit within the entire space. This is useful to display children of the same predefined size.
AnchorLayout: A simple layout only caring about children positions. It allows putting the children at a position relative to a border of the layout. size_hint is not honored. FloatLayout: Allows placing children with arbitrary locations and sizes, either absolute or relative to the layout size. The default size_hint (1, 1) will make every child the same size as the whole layout, so you probably want to change this value if you have more than one child. You can set size_hint to (None, None) to use absolute size with size. This widget also honors pos_hint, which is a dict setting position relative to the layout position. RelativeLayout: Behaves just like FloatLayout, except children positions are relative to the layout position, not the screen. Examine the documentation of the individual layouts for a more in-depth understanding. size_hint is a ReferenceListProperty of size_hint_x and size_hint_y. It accepts values from 0 to 1 or None and defaults to (1, 1). This signifies that if the widget is in a layout, the layout will allocate it as much place as possible in both directions (relative to the layout’s size). Setting size_hint to (0.5, 0.8), for example, will make the widget 50% the width and 80% the height of the available size for the Widget inside a layout. Consider the following example: BoxLayout: Button: text: 'Button 1' # default size_hint is 1, 1, we don't need to specify it explicitly # however it's provided here to make things clear size_hint: 1, 1 Now load kivy catalog by typing the following, but replacing $KIVYDIR with the directory of your installation (discoverable via os.path.dirname(kivy.__file__)): cd $KIVYDIR/examples/demo/kivycatalog python main.py A new window will appear. Click in the area below the ‘Welcome’ Spinner on the left and replace the text there with your kv code from above. As you can see from the image above, the Button takes up 100% of the layout size.
Changing the size_hint_x/ size_hint_y to .5 will make the Widget take 50% of the layout width/ height. You can see here that, although we specify size_hint_x and size_hint_y both to be .5, only size_hint_y seems to be honored. That is because boxlayout controls size_hint_y when orientation is 'vertical' and size_hint_x when orientation is 'horizontal'. The controlled dimension’s size is calculated depending upon the total number of children in the boxlayout. In this example, one child has size_hint_y controlled (.5/.5 = 1). Thus, the widget takes 100% of the parent layout’s height. Let’s add another Button to the layout and see what happens. boxlayout by its very nature divides the available space between its children equally. In our example, the proportion is 50-50, because we have two children. Let’s use size_hint on one of the children and see the results. If a child specifies size_hint, this specifies how much space the Widget will take out of the size given to it by the boxlayout. In our example, the first Button specifies .5 for size_hint_x. The space for the widget is calculated like so: first child's size_hint / (first child's size_hint + second child's size_hint + ... + nth child's size_hint) .5/(.5+1) = .333... The rest of the BoxLayout’s width is divided among the rest of the children. In our example, this means the second Button takes up 66.66% of the layout width. Experiment with size_hint to get comfortable with it. If you want to control the absolute size of a Widget, you can set size_hint_x/ size_hint_y or both to None so that the widget’s width and/or height attributes will be honored. pos_hint is a dict, which defaults to empty. As for size_hint, layouts honor pos_hint differently, but generally you can add values to any of the pos attributes ( x, y, right, top, center_x, center_y) to have the Widget positioned relative to its parent.
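The size_hint allocation rule described above is easy to check with plain Python (boxlayout_shares is just an illustrative helper, not part of Kivy):

```python
def boxlayout_shares(size_hints):
    """Fraction of a BoxLayout's controlled dimension each child gets,
    following the rule above: child hint / sum of all children's hints."""
    total = sum(size_hints)
    return [h / total for h in size_hints]

# Two children, the first with size_hint_x = .5, the second with 1 (default)
shares = boxlayout_shares([0.5, 1.0])
print([round(s, 3) for s in shares])  # [0.333, 0.667]
```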
Let’s experiment with the following code in kivycatalog to understand pos_hint visually: FloatLayout: Button: text: "We Will" pos: 100, 100 size_hint: .2, .4 Button: text: "Wee Wiill" pos: 200, 200 size_hint: .4, .2 Button: text: "ROCK YOU!!" pos_hint: {'x': .3, 'y': .6} size_hint: .5, .2 This gives us: As with size_hint, you should experiment with pos_hint to understand the effect it has on the widget positions. Adding a Background to a Layout¶ One of the frequently asked questions about layouts is: "How to add a background image/color/video/... to a Layout" Layouts by their nature have no visual representation: they have no canvas instructions by default. However, you can add canvas instructions to a layout instance easily, as with adding a colored background: In Python: from kivy.graphics import Color, Rectangle with layout_instance.canvas.before: Color(0, 1, 0, 1) # green; colors range from 0-1 instead of 0-255 self.rect = Rectangle(size=layout_instance.size, pos=layout_instance.pos) Unfortunately, this will only draw a rectangle at the layout’s initial position and size. To make sure the rect is drawn inside the layout, when the layout size/pos changes, we need to listen to any changes and update the rectangle's size and pos. We can do that as follows: with layout_instance.canvas.before: Color(0, 1, 0, 1) # green; colors range from 0-1 instead of 0-255 self.rect = Rectangle(size=layout_instance.size, pos=layout_instance.pos) def update_rect(instance, value): instance.rect.pos = instance.pos instance.rect.size = instance.size # listen to size and position changes layout_instance.bind(pos=update_rect, size=update_rect) In kv: FloatLayout: canvas.before: Color: rgba: 0, 1, 0, 1 Rectangle: # self here refers to the widget i.e FloatLayout pos: self.pos size: self.size The kv declaration sets an implicit binding: the last two kv lines ensure that the pos and size values of the rectangle will update when the pos of the floatlayout changes.
Now we put the snippets above into the shell of a Kivy App. Pure Python way: from kivy.app import App from kivy.graphics import Color, Rectangle from kivy.uix.floatlayout import FloatLayout from kivy.uix.button import Button class RootWidget(FloatLayout): def __init__(self, **kwargs): # make sure we aren't overriding any important functionality super(RootWidget, self).__init__(**kwargs) # let's add a Widget to this layout self.add_widget( Button( text="Hello World", size_hint=(.5, .5), pos_hint={'center_x': .5, 'center_y': .5})) class MainApp(App): def build(self): self.root = root = RootWidget() root.bind(size=self._update_rect, pos=self._update_rect) with root.canvas.before: Color(0, 1, 0, 1) # green; colors range from 0-1 not 0-255 self.rect = Rectangle(size=root.size, pos=root.pos) return root def _update_rect(self, instance, value): self.rect.pos = instance.pos self.rect.size = instance.size if __name__ == '__main__': MainApp().run() Using the kv Language: from kivy.app import App from kivy.lang import Builder root = Builder.load_string(''' FloatLayout: canvas.before: Color: rgba: 0, 1, 0, 1 Rectangle: # self here refers to the widget i.e FloatLayout pos: self.pos size: self.size Button: text: 'Hello World!!' size_hint: .5, .5 pos_hint: {'center_x':.5, 'center_y': .5} ''') class MainApp(App): def build(self): return root if __name__ == '__main__': MainApp().run() Both of the Apps should look something like this: Add a color to the background of a custom layout’s rule/class¶ The way we add a background to a layout instance can quickly become cumbersome if we need to use multiple layouts. To help with this, you can subclass the Layout and create your own layout that adds a background.
Using Python: from kivy.app import App from kivy.graphics import Color, Rectangle from kivy.uix.boxlayout import BoxLayout from kivy.uix.floatlayout import FloatLayout from kivy.uix.image import AsyncImage class RootWidget(BoxLayout): pass class CustomLayout(FloatLayout): def __init__(self, **kwargs): # make sure we aren't overriding any important functionality super(CustomLayout, self).__init__(**kwargs) with self.canvas.before: Color(0, 1, 0, 1) # green; colors range from 0-1 instead of 0-255 self.rect = Rectangle(size=self.size, pos=self.pos) self.bind(size=self._update_rect, pos=self._update_rect) def _update_rect(self, instance, value): self.rect.pos = instance.pos self.rect.size = instance.size class MainApp(App): def build(self): root = RootWidget() c = CustomLayout() root.add_widget(c) c.add_widget( AsyncImage( source="", size_hint= (1, .5), pos_hint={'center_x':.5, 'center_y':.5})) root.add_widget(AsyncImage(source='')) c = CustomLayout() c.add_widget( AsyncImage( source="", size_hint= (1, .5), pos_hint={'center_x':.5, 'center_y':.5})) root.add_widget(c) return root if __name__ == '__main__': MainApp().run() Using the kv Language: from kivy.app import App from kivy.uix.floatlayout import FloatLayout from kivy.uix.boxlayout import BoxLayout from kivy.lang import Builder Builder.load_string(''' <CustomLayout> canvas.before: Color: rgba: 0, 1, 0, 1 Rectangle: pos: self.pos size: self.size <RootWidget> CustomLayout: AsyncImage: source: '' size_hint: 1, .5 pos_hint: {'center_x':.5, 'center_y': .5} AsyncImage: source: '' CustomLayout: AsyncImage: source: '' size_hint: 1, .5 pos_hint: {'center_x':.5, 'center_y': .5} ''') class RootWidget(BoxLayout): pass class CustomLayout(FloatLayout): pass class MainApp(App): def build(self): return RootWidget() if __name__ == '__main__': MainApp().run() Both of the Apps should look something like this: Defining the background in the custom layout class assures that it will be used in every instance of CustomLayout.
Now, to add an image or color to the background of a built-in Kivy layout, globally, we need to override the kv rule for the layout in question. Consider GridLayout: <GridLayout> canvas.before: Color: rgba: 0, 1, 0, 1 BorderImage: source: '../examples/widgets/sequenced_images/data/images/button_white.png' pos: self.pos size: self.size Then, when we put this snippet into a Kivy app: from kivy.app import App from kivy.uix.floatlayout import FloatLayout from kivy.lang import Builder Builder.load_string(''' <GridLayout> canvas.before: BorderImage: # BorderImage behaves like the CSS BorderImage border: 10, 10, 10, 10 source: '../examples/widgets/sequenced_images/data/images/button_white.png' pos: self.pos size: self.size <RootWidget> GridLayout: ... ''') class RootWidget(FloatLayout): pass class MainApp(App): def build(self): return RootWidget() if __name__ == '__main__': MainApp().run() The result should look something like this: As we are overriding the rule of the class GridLayout, any use of this class in our app will display that image. How about an Animated background?
You can set the drawing instructions like Rectangle/BorderImage/Ellipse/… to use a particular texture: Rectangle: texture: reference to a texture We use this to display an animated background: from kivy.app import App from kivy.uix.floatlayout import FloatLayout from kivy.uix.gridlayout import GridLayout from kivy.uix.image import Image from kivy.properties import ObjectProperty from kivy.lang import Builder Builder.load_string(''' <CustomLayout> canvas.before: BorderImage: # BorderImage behaves like the CSS BorderImage border: 10, 10, 10, 10 texture: self.background_image.texture pos: self.pos size: self.size <RootWidget> CustomLayout: ... ''') class CustomLayout(GridLayout): background_image = ObjectProperty( Image( source='../examples/widgets/sequenced_images/data/images/button_white_animated.zip', anim_delay=.1)) class RootWidget(FloatLayout): pass class MainApp(App): def build(self): return RootWidget() if __name__ == '__main__': MainApp().run() To try to understand what is happening here, start from line 13: texture: self.background_image.texture This specifies that the texture property of BorderImage will be updated whenever the texture property of background_image updates. We define the background_image property at line 40: background_image = ObjectProperty(... This sets up background_image as an ObjectProperty in which we add an Image widget. An image widget has a texture property; where you see self.background_image.texture, this sets a reference, texture, to this property. The Image widget supports animation: the texture of the image is updated whenever the animation changes, and the texture of the BorderImage instruction is updated in the process. You can also just blit custom data to the texture. For details, look at the documentation of Texture. Nesting Layouts¶ Yes! It is quite fun to see how extensible the process can be. Size and position metrics¶ Kivy’s default unit for length is the pixel; all sizes and positions are expressed in it by default.
You can express them in other units, which is useful for achieving better consistency across devices (they get converted to a size in pixels automatically). Available units are pt, mm, cm, inch, dp and sp. You can learn about their usage in the metrics documentation. You can also experiment with the screen usage to simulate various device screens for your application.

Screen Separation with Screen Manager¶

If your application is composed of various screens, you likely want an easy way to navigate from one Screen to another. Fortunately, there is the ScreenManager class, which allows you to define screens separately and to set the TransitionBase from one to another.
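The conversion step that Kivy applies to dp and sp values can be sketched in plain Python. This is an illustrative approximation of the idea, not Kivy's actual kivy.metrics implementation, and the density and fontscale values below are assumptions rather than values read from a real device:

```python
# Rough sketch of density-independent unit conversion, similar in spirit to
# what kivy.metrics does. The density and fontscale defaults are illustrative
# assumptions, not values read from a real device.

def dp_to_px(value, density=2.0):
    """Convert density-independent pixels (dp) to raw pixels."""
    return value * density

def sp_to_px(value, density=2.0, fontscale=1.0):
    """Scale-independent pixels (sp) also honor the user's font-size setting."""
    return value * density * fontscale

if __name__ == "__main__":
    # A 48dp widget is 96 physical pixels on an assumed 2x-density screen.
    print(dp_to_px(48))                   # 96.0
    print(sp_to_px(16, fontscale=1.25))   # 40.0
```

Because sizes expressed this way are resolved against the device's density at runtime, the same kv file renders at a comparable physical size on low- and high-DPI screens.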
https://kivy.org/doc/master/guide/widgets.html
Building Ajax applications has proven to be a consistent method for providing engaging applications. However, the explosion in popularity of Adobe Flex cannot be ignored. As we are continually pushed to create the best user experience, we're often faced with the difficult task of integrating Flash-based assets embedded in our Ajax applications. This article discusses the integration of Flash content with existing Ajax content using the FABridge, a code library developed by Adobe to handle this very task. To be an Ajax developer today is pretty special. We're always at the front lines, ready to greet users and offer the best first impression to the applications we build for them. As Web standards advance and more vendors decide to implement them, our jobs have become easier, allowing us to focus on the user experience. The further advancements in JavaScript frameworks such as Ext JS, jQuery, and Prototype have also allowed us to spend less time worrying about whether our code will work across the platforms we're asked to support, leaving more time to innovate. Although there are certainly more tools, techniques, and resources available to us today, there is also a shift in development methodology that serves as a push toward the rich world of Flash development. For many shops, the development workflow would involve the user interface (UI) group to produce designs that support a server-side-generated application. With just the JavaScript frameworks we have now, we're pushed in the direction of application development for the client side. However, the emergence of the Flex platform — a free, open source framework for producing Flash applications — brings us further into the application development arena. This type of innovation is good for us on the client side, but we must ensure that we handle the process of integrating it with current architectures in a thoughtful and careful manner. 
Before I introduce code samples showing how to work with Ajax and Flex assets, you need to understand the tools and skills required:

- I produced the Ajax samples in this article using the Ext JS JavaScript library, so you need to download the .zip file that contains the library and supporting documentation.
- Next, grab a copy of the free Adobe Flex 3 SDK and Adobe Flash Player 9 with debugging capability, if you don't already have it.
- Although not required to follow along in this article, you may also want to check out at least a trial version of Adobe Flex Builder 3, an Eclipse-based IDE that enables rapid Flex application development in addition to superior debugging and profiling capabilities (see Resources).
- Finally, a working knowledge of PHP is helpful.

If you were looking forward to replacing all your Ajax content with Flex assets, your task would be much simpler. However, this is an unlikely and often unreasonable approach, because there are many reasons to preserve traditional Ajax functionality. Fortunately, there's no reason you can't keep the best of both environments to produce a rich, cohesive application.

There are quite a few simplistic methods for passing data to ActionScript code from the Flash container (HTML/JavaScript code), including the use of query strings and <param> tags. However, these methods are limited to passing data into the container. A more powerful technique is to use the ExternalInterface class, an application program interface (API) used to broker communication between the ActionScript and JavaScript languages. The use of ExternalInterface is best demonstrated by the example in Listing 1:

Listing 1. ExternalInterface example

Listing 1 demonstrates a stripped-down example of how to use the ExternalInterface class to register an ActionScript function so that JavaScript code can call it. You do this by first defining an ActionScript function, then using the addCallback() method to expose the function to JavaScript for execution.
On the HTML side, simply obtain a handle to the Flash container and call the function, which was named using the first parameter to the addCallback() method. Although this demonstration concentrated on exposing functions to the JavaScript code, you can just as easily go the other way by using the call() method of the ExternalInterface class.

The ExternalInterface class can be quite powerful, but there are significant drawbacks to implementing it. To use ExternalInterface, you must be able to write code for both the ActionScript and JavaScript environments. This not only requires added skill but double the effort. In this situation, maintaining code as well as two very robust skill sets can become a challenge.

To address the limitations of development against the Flash external API, Adobe has released the FABridge. The FABridge, which ships with the Flex SDK, is a small library used to expose Flash content to scripting in the browser, and it works in most major browser platforms. With the FABridge, the plumbing code that was required to directly implement the Flash external API is virtually eliminated. Further, the skills required to implement the bridge aren't as robust: as a JavaScript developer, you simply need to understand what's available to you in the way of ActionScript properties and methods. Let's get started with a few examples that demonstrate the capabilities of the FABridge.

Before you get started using the FABridge, here are the materials and development environment you'll be working with. After downloading the latest Flex SDK, configure the directory structure shown in Listing 2:

Listing 2. Directory structure for the FABridge tutorial

The directory structure is straightforward: you just have an index page and the FABridge scripts hooked into their own directory named bridge. The location of the FABridge library files depends on your environment.
Because I'm using Flex Builder 3 Professional on Mac OS X, my library files reside in install_root/sdks/frameworks/3.0.0/javascript/fabridge/. Now that you have the appropriate architecture in place, you can begin creating the skeletons on both the HTML/JavaScript and ActionScript sides. Use the code from Listing 3 to develop the HTML/JavaScript skeleton:

Listing 3. HTML/JavaScript skeleton

As you can see, you simply hook the FABridge JavaScript library into your code, and all the functionality of the bridge is immediately available. Next, use the code from Listing 4 to implement the bridge on the ActionScript side:

Listing 4. Application skeleton

This code might be a bit more unfamiliar to you. The UI is kept clean and simple by defining a single text input control with the ID txt_test and a default value of "FABridge rocks!" The bridge namespace is defined, and all classes in the bridge directory are imported. Finally, the Flex application is given a name for the bridge to use to access it: flex.

To compile this Flex code into a working SWF document, use the mxmlc utility from the Flex 3 SDK. The most basic compile command is shown in Listing 5:

Listing 5. Compiling MXML

The command in Listing 5 compiles the source file and outputs an SWF file with the same file name as the MXML, in the same directory. Assuming a successful compilation, you can now hook the resulting SWF into your HTML file, as shown in Listing 6:

Listing 6. Linking the resulting SWF file

Note: The code in Listing 6 is deliberately light to keep the focus on demonstrating the FABridge. Unless you're targeting a specific environment (Listing 6 is targeting Mozilla), you'll want to add more intelligence in the way of object tags and other load scripts.

Assuming that all went well, your application should now look similar to Figure 1:

Figure 1.
The sample application

Now that you have successfully compiled and linked the Flex application into the HTML container, invoke your first FABridge functions to obtain a reference to the Flex application. Use the code in Listing 7 to fill in the empty <script> tag in your HTML skeleton file:

Listing 7. Obtaining a reference to the Flex application

The code in Listing 7 starts by defining a global JavaScript variable that will hold a reference to the Flex application when the FABridge obtains it. A callback function is defined that sets the global variable and is invoked through the addInitializationCallback() FABridge method. Using this code is simply a matter of matching the name of the bridge that you configured in the Flex application. From here, you're able to access all sorts of ActionScript functionality from the JavaScript code.

Working with ActionScript objects

Now that you've obtained a global reference to the Flex application, you can access ActionScript objects through the consistent interface that the FABridge provides. In the ActionScript world, you would typically access objects through dot notation: object.id. Rather than expose ActionScript objects in dot notation, however, the FABridge makes these objects available through function calls. It is a little different at first, but all you need to know is the template to follow: an object traditionally identified in ActionScript as object.id would now be accessed as object.getId(). This is best demonstrated through example: type the code from Listing 8 into your HTML skeleton to try it out:

Listing 8. Getting ActionScript objects by ID

The variable txt is an object that represents the text input control with the ID txt_test from the Flex application. You can see the template you would need to follow for gaining access to other ActionScript objects by ID.
The declaration begins with the global reference to the Flex application, then a method call that always begins with the string get followed by the ID of the target object. Notice that the name of the ID must begin with a capital letter in this declaration.

Getting and setting the properties of ActionScript objects is similar to the process just used. Continuing with our example of manipulating the text input control, use the code from Listing 9 to get and set the text property:

Listing 9. Get and set ActionScript properties

The code in Listing 9 first alerts the original value of the text input control from the Flex application. By following the template described earlier, you can see that the text property is obtained through a function call, with the get string prepended and the property name camel cased. The set() method uses the same process but accepts a parameter used to configure the new value of the object. After the code in Listing 9 executes, you should see a screen similar to Figure 2:

Figure 2. Setting ActionScript object properties

Now, let's move on to the easiest manipulation of all: calling ActionScript object methods. This process requires no special considerations on your part. ActionScript object methods are used in JavaScript code just as they would be used in ActionScript code. The code in Listing 10 demonstrates the invocation of a method on your text input control:

Listing 10. Invoking ActionScript methods

The code in Listing 10 sets the text input control in the Flex application to be invisible. The object can still be referenced and manipulated; it's just not physically visible. Between the ActionScript and JavaScript worlds, there is no change in the way the methods are invoked.

One of the more powerful features of the FABridge is the ability to pass functions between JavaScript and ActionScript code.
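The getter/setter naming template described above is mechanical enough to express as a tiny helper. The sketch below, in Python purely to illustrate the naming rule, is a hypothetical helper and not part of the FABridge library:

```python
# Hypothetical helper illustrating the FABridge naming convention described
# above: an ActionScript member `object.id` is exposed to JavaScript as
# `object.getId()` / `object.setId(value)` -- that is, "get"/"set" plus the
# member name with its first letter capitalized.

def bridge_accessors(member_name):
    """Derive the bridged getter/setter names for an ActionScript member."""
    capitalized = member_name[0].upper() + member_name[1:]
    return "get" + capitalized, "set" + capitalized

if __name__ == "__main__":
    print(bridge_accessors("text"))      # ('getText', 'setText')
    print(bridge_accessors("txt_test"))  # ('getTxt_test', 'setTxt_test')
```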
Check out the code in Listing 11, which dynamically copies the value of the text input in Flex to a <div> on the HTML/JavaScript side:

Listing 11. Passing functions

The code in Listing 11 is a JavaScript callback function that's fired each time the text input control value from the Flex application changes. When the value changes, it is copied to a <div> tag with the ID copy. This type of functionality can be very powerful, especially when attempting any sort of integration work between Ajax and Flex content. With both environments relying heavily on events, it's key to be able to have them work together.

The last feature this article explores is exception handling. By default, when you use try...catch blocks throughout your JavaScript code, you'll be able to at least access an error code that you can then look up in the online reference for ActionScript errors. This methodology certainly works, but during development you want access to as much information up front as possible. While using the FABridge, you can get this information simply by installing Flash Player 9 with debugging. With this feature installed, you have access to line numbers, file names, error types, and stack traces. Use the code in Listing 12 to see an example:

Listing 12. Exception handling

An error is thrown from the code in Listing 12 because the method throwsAnError() does not exist. The code from the catch block outputs an alert that looks similar to Figure 3:

Figure 3. Exception data

As you can see, this data is far more useful than a single error code and less work to troubleshoot. When you're working with complex integration issues between differing technologies such as JavaScript and ActionScript, you'll appreciate this extra help.

So far, this article has taken a tutorial-type approach to showing the capabilities of the FABridge. Now it's time to use a real-world scenario to demonstrate its usefulness.
As indicated earlier, you want to integrate and use the best of both the Ajax and Flex worlds. One of the components that really shines on the Flex side is its charting capability. Although it's not a free library, it is worth the added cost if you're looking to do some intense client-side application programming.

The example you'll work with here is a combination of a PHP service that serves dummy data containing the number of messages received in certain categories for specific users. The data from the PHP service is loaded into a grid control using the Ext JS JavaScript framework, then the same data is pushed over the FABridge as a data provider to a pie chart in Flex. Start by taking a look at the PHP service in Listing 13:

Listing 13. The PHP service

Note: The data in this service is hard coded and only meant to demonstrate the concept.

Next, take a look at Listing 14, which is the MXML to generate the pie chart:

Listing 14. Flex pie chart

This code was taken from the Adobe documentation, then modified to fit the scenario and configured for use with the FABridge.

The last piece of this puzzle is the JavaScript code in Listing 15:

Listing 15. Produce the grid, populate the chart

The first thing to notice about this code is the links to the Ext JS resources needed to make it work. After hooking in the default styles and debug scripts for Ext, an onReady block is configured. This block is executed only after the full Document Object Model (DOM) is ready. You should be familiar with the code used to populate the global flexApp variable with a reference to the Flex application. One addition to the callback is the execution of the initUI function. This function is used to create an Ext data store using the PHP service and to populate an Ext grid control using the resulting data in the store.
When the Ext data store is loaded, a data structure is created and pushed over the FABridge so that the data binds as a data provider to the pie chart. The final product is shown in Figure 4:

Figure 4. The final product

As you scan the data in the grid, it should match up with what's represented in the pie chart. This really is a powerful concept, and you can see the possibilities it has to offer. Although this was a single real-world example of how you might want to implement the FABridge, there are several other popular ways to use this library. Syncing security information for remote service authentication and techniques to consistently brand and personalize applications are just a couple of examples of how best to use the bridge.

Adobe Flex is an incredible technology that is just starting to reveal its true potential. However, no single product will solve all the wants and needs of developers and users, so it's important that we keep our minds open and explore the possibilities of integration using the FABridge.

Learn

- Adobe Flex Resources: See a collection of documentation for Adobe Flex.
- Ext JS Resources: Find documentation and tutorials for the Ext JS JavaScript framework.
- Mastering Ajax: Read the developerWorks series for a comprehensive overview of Ajax.
- Technology bookstore: Browse for books on these and other technical topics.
- IBM technical events and webcasts: Stay current with developerWorks' technical events and webcasts.
- developerWorks Open source zone: Visit the developerWorks Open source zone for extensive how-to information, tools, and project updates to help you develop with open source technologies and use them with IBM's products.

Get products and technologies

- Adobe Flex: Visit the Flex product page.
- Ext JS: Download the Ext JS JavaScript framework.

Discuss

- Flex discussion forum: Join the Flex discussion.
- Ext JS discussion forum: Join the Ext discussion.
- developerWorks blogs: Participate in developerWorks blogs and get involved in the developerWorks community.
http://www.ibm.com/developerworks/web/library/wa-aj-flex/index.html
This port allows SDL applications to run on Microsoft's platforms that require use of "Windows Runtime", a.k.a. "WinRT", APIs. Microsoft may, in some cases, refer to them as "Windows Store" apps or, for Windows 10, "UWP" apps. Some of the operating systems that include WinRT are:

Here is a rough list of what works, and what doesn't.

What works:

- __WINRT__ will be set to 1 (by SDL) when compiling for WinRT.

What partially works:

- apps must include a source file from SDL\src\main\winrt\ directly in order for their C-style main() functions to be called.

What doesn't work:

SDL 2.0.4 fixes two bugs found in the WinRT version of SDL_GetPrefPath(). The fixes may affect older, SDL 2.0.3-based apps' save data. Please note that these changes only apply to SDL-based WinRT apps, and not to apps for any other platform.

1. SDL_GetPrefPath() would return an invalid path, one in which the path's directory had not been created. Attempts to create files there (via fopen(), for example) would fail, unless that directory was explicitly created beforehand.
2. SDL_GetPrefPath(), for non-WinPhone-based apps, would return a path inside a WinRT 'Roaming' folder, the contents of which get automatically synchronized across multiple devices. This process can occur while an application runs, and can cause existing save data to be overwritten at unexpected times, with data from other devices. (Windows Phone apps written with SDL 2.0.3 did not utilize a Roaming folder, due to API restrictions in Windows Phone 8.0.)

SDL_GetPrefPath(), starting with SDL 2.0.4, addresses these by:

- making sure that SDL_GetPrefPath() returns a directory in which data can be written immediately, without first needing to create directories.
- basing SDL_GetPrefPath() off of a different, non-Roaming folder, the contents of which do not automatically get synchronized across devices (and which require less work to use safely, in terms of data integrity).
Apps that wish to get their Roaming folder's path can do so either by using SDL_WinRTGetFSPathUTF8(), SDL_WinRTGetFSPathUNICODE() (which returns a UCS-2/wide-char string), or directly through the WinRT class, Windows.Storage.ApplicationData.

The steps for setting up a project for an SDL/WinRT app look like the following, at a high level:

1. Create a new project using one of Visual C++'s templates for a plain, non-XAML, "Direct3D App" (XAML support for SDL/WinRT is not yet ready for use). If you don't see one of these templates in Visual C++'s 'New Project' dialog, try using the textbox titled 'Search Installed Templates' to look for one.
2. In the new project, delete any file that has one of the following extensions:

When you are done, you should be left with a few files, each of which will be a necessary part of your app's project. These files will consist of:

SDL/WinRT can be built in multiple variations, spanning three different CPU architectures (x86, x64, and ARM) and two different configurations (Debug and Release). WinRT and Visual C++ do not currently provide a means of combining multiple variations of one library into a single file. Furthermore, they do not provide an easy means of copying pre-built .dll files into your app's final output (via post-build steps, for example). They do, however, provide a system whereby an app can reference the MSVC projects of libraries such that, when the app is built:

To set this up for SDL/WinRT, you'll need to run through the following steps:

- VisualC-WinRT/UWP_VS2015/ - for Windows 10 / UWP apps
- VisualC-WinRT/WinPhone81_VS2013/ - for Windows Phone 8.1 apps
- VisualC-WinRT/WinRT80_VS2012/ - for Windows 8.0 apps
- VisualC-WinRT/WinRT81_VS2013/ - for Windows 8.1 apps

Your project is now linked to SDL's project, insofar as when the app is built, SDL will be built as well, with its build output included with your app. Some build settings need to be changed in your app's project.
This guide will outline the following. To change these settings:

A few files should be included directly in your app's MSVC project, specifically:

- SDL_winrt_main_NonXAML.cpp
- SDL2-WinRTResources.rc
- SDL2-WinRTResource_BlankCursor.cur

To include these files, right-click on SDL_winrt_main_NonXAML.cpp (as listed in your project), then click on "Properties...".

NOTE: C++/CX compilation is currently required in at least one file of your app's project. This is to make sure that Visual C++'s linker builds a 'Windows Metadata' file (.winmd) for your app. Not doing so can lead to build errors.

At this point, you can add in SDL-specific source code. Be sure to include a C-style main function (ie: int main(int argc, char *argv[])). From there you should be able to create a single SDL_Window (WinRT apps can only have one window, at present), as well as an SDL_Renderer. Direct3D will be used to draw content. Events are received via SDL's usual event functions (SDL_PollEvent, etc.). If you have a set of existing source files and assets, you can start adding them to the project now. If not, or if you would like to make sure that you're set up correctly, some short and simple sample code is provided below.

If you are creating a new app (rather than porting an existing SDL-based app), or if you would just like a simple app to test SDL/WinRT with before trying to get existing code working, some working SDL/WinRT code is provided below. To set this up:

1. Right-click on your app's project, select Add, then New Item. An "Add New Item" dialog will show up.
2. From the left-hand list, choose "Visual C++".
3. From the middle/main list, choose "C++ File (.cpp)".
4. Near the bottom of the dialog, next to "Name:", type in a name for your source file, such as "main.cpp".
5. Click on the Add button. This will close the dialog, add the new file to your project, and open the file in Visual C++'s text editor.
6. Copy and paste the following code into the new file, then save it.
    #include <SDL.h>

    int main(int argc, char **argv)
    {
        SDL_DisplayMode mode;
        SDL_Window * window = NULL;
        SDL_Renderer * renderer = NULL;
        SDL_Event evt;

        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            return 1;
        }

        if (SDL_GetCurrentDisplayMode(0, &mode) != 0) {
            return 1;
        }

        if (SDL_CreateWindowAndRenderer(mode.w, mode.h, SDL_WINDOW_FULLSCREEN, &window, &renderer) != 0) {
            return 1;
        }

        while (1) {
            while (SDL_PollEvent(&evt)) {
            }
            SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
            SDL_RenderClear(renderer);
            SDL_RenderPresent(renderer);
        }
    }

If you have existing code and assets that you'd like to add, you should be able to add them now. The process for adding a set of files is as such.

Do note that WinRT only supports a subset of the APIs that are available to Win32-based apps. Many portions of the Win32 API and the C runtime are not available.

- A list of unsupported C APIs can be found at
- General information on using the C runtime in WinRT can be found at
- A list of supported Win32 APIs for WinRT apps can be found at

To note, the list of supported Win32 APIs for Windows Phone 8.0 is different. That list can be found at

Your app project should now be set up, and you should be ready to build your app. To run it on the local machine, open the Debug menu and choose "Start Debugging". This will build your app, then run it full-screen. To switch out of your app, press the Windows key. Alternatively, you can choose to run your app in a window. To do this, before building and running your app, find the drop-down menu in Visual C++'s toolbar that says "Local Machine". Expand this by clicking on the arrow on the right side of the list, then click on Simulator. Once you do that, any time you build and run the app, the app will launch in a window rather than full-screen.

These instructions do not include Windows Phone, despite Windows Phone typically running on ARM processors.
They are specifically for devices that use the "Windows RT" operating system, which was a modified version of Windows 8.x that ran primarily on ARM-based tablet computers.

To build and run the app on ARM-based, "Windows RT" devices, you'll need to:

Microsoft's Remote Debugger can be found at . Please note that separate versions of this debugger exist for different versions of Visual C++, one each for MSVC 2015, 2013, and 2012.

To set up Visual C++ to launch your app on an ARM device:

Try adding the following to your linker flags. In MSVC, this can be done by right-clicking on the app project, navigating to Configuration Properties -> Linker -> Command Line, then adding them to the Additional Options section.

For Release builds / MSVC-Configurations, add:

    /nodefaultlib:vccorlib /nodefaultlib:msvcrt vccorlib.lib msvcrt.lib

For Debug builds / MSVC-Configurations, add:

    /nodefaultlib:vccorlibd /nodefaultlib:msvcrtd vccorlibd.lib msvcrtd.lib
https://fuchsia.googlesource.com/third_party/sdl/+/76a44039ef778f30f41c303157f275a1009d973e/docs/README-winrt.md
The program I'm supposed to write needs to divide 200 hrs of work between four workers, based on the following conditions:

- if worker one works 2 days, then he must work for only 2 hrs each day, total hrs = 4
- if worker two works 4 days, then he must work for only 4 hrs each day, total hrs = 16
- if worker three works 6 days, then he must work for only 6 hrs each day, total hrs = 36
- if worker four works 12 days, then he must work for only 12 hrs each day, total hrs = 144

total hours = 144 + 36 + 16 + 4 = 200 hours

Therefore, if we divide 200 hours of work over four workers, each worker's number of days worked squared equals the total hours they should work.

My question is, what would be the best way to write this program: using a 'for' loop or a do...while loop? I've done it both ways, both not working. Can someone please take a look at it? Thanks!

Code:
    #include <iostream>
    #include <cmath>
    using namespace std;

    int main()
    {
        int a = 0, b = 0, c = 0, d = 0;
        //int counter;
        //counter = 1;
        //a==200-b-c-d && b==200-a-c-d && c==200-a-b-d && d==200-a-b-c
        do
        {
            50 = (a + b + c + d);
            //a = 200 - b - c - d;
            //b = 200 - a - c - d;
            //c = 200 - a - b - d;
            //d = 200 - a - b - c;
            if (sqrt(a) == static_cast<int>(sqrt(a)))
                cout << a << " ";
            if (sqrt(b) == static_cast<int>(sqrt(b)))
                cout << b << " ";
            if (sqrt(c) == static_cast<int>(sqrt(c)))
                cout << c << " ";
            if (sqrt(d) == static_cast<int>(sqrt(d)))
                cout << d << " ";
            //counter++;
            cout << endl;
        } while (200 / 4 == a + b + c + d);

        return 0;
    }

The comments in this program are what I was originally using, but I made them comments to test whether the program would work without them.

And using a 'for' loop:

Code:
    #include <iostream>
    #include <cmath>
    using namespace std;

    int main()
    {
        int Employe = 0;
        int total_hour = 0;
        for (int i = 1; i <= 4; i++)
        {
            Employe = i;
            if (Employe == 1)
            {
                total_hour += ((i + i) * (i + i));
                //cout << total_hour;
            }
            else if (Employe == 2)
            {
                total_hour += ((i + i) * (i + i));
            }
            else if (Employe == 3)
            {
                total_hour += ((i + i) * (i + i));
            }
            else if (Employe == 4)
            {
                total_hour += ((i * 3) * (i * 3));
            }
        }
        cout << total_hour << " " << endl;
        return 0;
    }
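For comparison, here is a brute-force sketch of the search the program is attempting, written in Python for brevity; the same quadruple-loop idea translates directly to nested for loops in C++. It looks for four distinct day counts whose squares sum to 200, since each worker's total hours equal days * hours_per_day = days squared:

```python
from itertools import combinations

def schedules(total_hours=200, workers=4):
    """Find all sets of distinct day counts whose squares sum to total_hours."""
    # A single worker's days**2 cannot exceed the total, so cap the search.
    max_days = int(total_hours ** 0.5)
    return [combo
            for combo in combinations(range(1, max_days + 1), workers)
            if sum(days * days for days in combo) == total_hours]

if __name__ == "__main__":
    print(schedules())  # [(2, 4, 6, 12)]
```

Under these constraints (four distinct positive day counts), (2, 4, 6, 12) is the only solution, which matches the worked numbers in the question.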
https://cboard.cprogramming.com/cplusplus-programming/124816-write-program-determines-all-possible-ways-divide-integer.html
Focus Management Overview

Focus management improves the accuracy of responses from Alexa, specifically when a user makes an ambiguous request. For example:

- The product is playing music
- User: "Alexa, pause."
- User: "Alexa, what's the weather in Seattle?"
- User: "Alexa, resume."
- The product resumes playback of music

Focus is managed in the cloud. A client simply informs Alexa which interface has focus of an audio or visual channel and, when applicable, reports the idle time for each. This state information is sent in the context container under the AudioActivityTracker and VisualActivityTracker namespaces.

- Why Do I Need This?
- Use Cases
- Channels
- Report ActivityState
- Idle Time
- Sample Context
- Helpful Links

Why Do I Need This?

With products that support multi-modal experiences, such as local and offline playback or screen-based interactions unrelated to Alexa, the cloud may not be able to accurately determine what's happening on a product at a given time. By reporting audio and visual activity to Alexa, the product becomes the source of truth for all ongoing activities. This allows Alexa to determine what's in focus and accurately respond to each user request.

Use Cases

These use cases highlight the benefits of focus management.

Audio: Music Playback + Sounding Timer

This example illustrates how Alexa uses activity state to determine what content a user is attempting to stop:

- Music is playing
- A timer/alarm goes off
- User: "Alexa, stop."
- In the context of the Recognize event, the product informs Alexa that the timer/alarm has focus of the audio channel.
- The timer/alarm is stopped and music resumes at the previously set volume

Bluetooth

This example illustrates how Alexa uses activity state to determine what directive is sent to stop music:

- A user connects their phone to a paired Alexa Built-in product using Bluetooth: "Alexa, connect my phone".
- Music playback is initiated from the phone and output from the Alexa Built-in product.
- The user says, "Alexa, stop." The product receives a Bluetooth.Stop directive. This command is communicated to the phone via Bluetooth.
- The user says, "Alexa, play artist on Amazon Music." This results in an AudioPlayer.Play directive being sent to the Alexa Built-in product. This is because the content originates from an Alexa music provider rather than the paired device.
- The user says, "Alexa, stop."
- In the context of the Recognize event, the product informs Alexa that the AudioPlayer interface has focus of the audio channel. The Alexa Built-in product receives an AudioPlayer.Stop directive.
- Music is stopped

Without focus management, the Alexa Built-in product may have received a Bluetooth.Stop directive.

Visual: Display Cards

The key takeaway is that visual focus in the cloud expires after 8 seconds have elapsed. Therefore, if a user makes a request after 8 seconds have elapsed, Alexa may be unaware of the client's visual activity state. Here's what can occur without focus management:

- "Alexa, show me movie times for movie title."
- The user waits 25 seconds, then says: "Alexa, next page."
- Alexa responds that she doesn't know how to respond. This is because visual focus in the cloud expires after 8 seconds have elapsed, and Alexa is unaware that the display card still has visual focus on your product.

With focus management enabled, a client will report the activity state for each audio and/or visual channel that their product supports as part of context. Since context is required in Recognize events, it is present in all speech requests. Therefore, when the user says "Alexa, next page" in Step 3, Alexa is aware that the TemplateRuntime interface has focus of the visual channel and will send the correct directive.

Channels

Audio and visual data handled by your AVS client are organized into channels. These channels are: Dialog, Alerts, Content, and Visual. Channels govern how your client should prioritize audio input and output. For example, when a user speaks to your product, the Dialog channel becomes active and remains active until a SpeechFinished event is sent to Alexa.
Similarly, when a timer goes off, the Alerts channel becomes active and remains active until the timer is cancelled. This table provides an interface-to-channel mapping:

It is possible for multiple channels to be active at once. For instance, if a user is listening to music and asks Alexa a question, the Content and Dialog channels are concurrently active as long as the user or Alexa is speaking. The visual channel is only active when the client is actively displaying Alexa-provided content to the user.

Report ActivityState

Both the AudioActivityTracker and VisualActivityTracker namespaces have an ActivityState that needs to be reported as part of Context. VisualActivityTracker is only applicable for devices with screens.

- AudioActivityTracker - Specifies which interface is active for each audio channel and the time elapsed since an activity occurred for each channel.
- VisualActivityTracker - Indicates that visual metadata from the TemplateRuntime interface is currently being displayed to the user.

Idle Time

For each channel in AudioActivityTracker, idleTimeInMilliseconds is required. If a channel is active at the time that context is reported, idleTimeInMilliseconds must be empty or set to 0. VisualActivityTracker does not track idle time. The TemplateRuntime interface must be reported as in focus while the product is displaying visual metadata from Alexa, for example, a display card.

Sample Context

A sample context message reports the ActivityState for each supported channel under the AudioActivityTracker and VisualActivityTracker namespaces.
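To make the Idle Time rules above concrete, here is a rough sketch of assembling such a context entry. The field layout is an assumption inferred from the terms this page uses (namespace, ActivityState, idleTimeInMilliseconds) rather than a copy of the official schema, so treat it as illustrative:

```python
def audio_activity_state(channels):
    """Build an AudioActivityTracker context entry.

    channels: dict mapping channel name -> (interface, idle_ms).
    Per the Idle Time rule, an active channel must report 0.
    """
    payload = {}
    for channel, (interface, idle_ms) in channels.items():
        payload[channel] = {
            "interface": interface,
            "idleTimeInMilliseconds": idle_ms,
        }
    return {
        "header": {"namespace": "AudioActivityTracker", "name": "ActivityState"},
        "payload": payload,
    }

# The timer is sounding (active, so idle time 0); music went idle 30 s ago.
state = audio_activity_state({
    "alert": ("Alerts", 0),
    "content": ("AudioPlayer", 30000),
})
print(state["payload"]["alert"]["idleTimeInMilliseconds"])  # 0
```

With focus management, this entry rides along in every Recognize event, so Alexa knows the timer currently owns the audio channel.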
https://developer.amazon.com/es/docs/alexa-voice-service/focus-management.html
Sinatra Project

Step 1) gem install corneal

We are going to use the corneal gem because it sets up the file structure of the project for us. Not only that; it comes with nearly all the gems you need in your gemfile. Next we'll be making a couple of small updates and additions to the gemfile:

- gem 'shotgun' — to allow us to test in real time
- gem 'bcrypt' — for securing & encrypting our passwords
- gem 'rack-flash3' — for displaying our errors. Go ahead and also add require 'rack-flash' to the environment.rb file.
- Update Active Record to version 5.2.3: gem 'activerecord', '5.2.3', :require => 'active_record'

Updating Active Record is just a personal preference. You can use the default Active Record that comes with the corneal gem, but when we get to migrations you need to remember to leave off the version number.

Step 2) Create the controllers

Don't forget to make sure that the other controllers inherit from the main controller, which in our case is called application_controller.rb. In the main controller we are going to store all our helpers. While you can make a separate helper file, for the sake of personal preference we are going to store our helpers in application_controller.rb. It'll look a little like this; you can add as many methods to it as you want:

helpers do
  def logged_in?
    !!session[:User_id]
  end
end

- application_controller.rb (main) — holds the helpers.
- employees_controller.rb — inherits from application_controller.rb; handles the employee signup.
- login_controller.rb — inherits from application_controller.rb; handles the routes related to logging in users.
- ticket_controller.rb — inherits from application_controller.rb; handles all the routes related to CRUD.

Step 3) In config.ru make sure you are using and running the controllers properly. It's best convention to "use" all the controllers except for the main controller (which we will "run").
use LoginController
use EmployeesController
use TicketController
run ApplicationController

And while we are here, let's go ahead and add our middleware, because we are trying to implement CRUD functionality. Let's add use Rack::MethodOverride so that we can access the "Patch" (aka "Update") route and the "Delete" route:

# allows us to use PATCH and DELETE routes
use Rack::MethodOverride

Step 4) Create the Models

Make sure all models inherit from ActiveRecord::Base.

employee.rb
- has_many :tickets (which is provided by Active Record)
- has_secure_password (which is provided by bcrypt)

class Employee < ActiveRecord::Base
  has_many :tickets
  has_secure_password
end

ticket.rb
- belongs_to :employee (which is provided by Active Record)

class Ticket < ActiveRecord::Base
  belongs_to :employee
end

Step 5) Now that we have our models set up, let's go ahead and create our tables and relationships.

- Let's create a migrate folder inside our db folder. Sinatra won't work properly if this is not structured properly: db/migrate
- Your migrations are going to inherit from ActiveRecord::Migration[5.2]. Since we updated Active Record, we have to specify the version.
- We are going to use the "change" method because it handles both the "up" and "down" method functionalities.
- We need to use the Active Record "create_table" command in both files to create the tables. NOTICE how the tickets table has a foreign key. That is because a Ticket "belongs_to" an Employee.

01_create_employees_table.rb

class CreateEmployeesTable < ActiveRecord::Migration[5.2]
  def change
    create_table :employees do |t|
      t.string :name
      t.string :username
      t.string :password_digest
    end
  end
end

02_create_tickets_table.rb

class CreateTicketsTable < ActiveRecord::Migration[5.2]
  def change
    create_table :tickets do |t|
      t.string :title
      t.string :details
      t.integer :employee_id
      t.timestamps
    end
  end
end

Step 6) Let's run "rake db:migrate" to actually build our database.
Step 7) Let's go into the Rakefile and add the console task to help us test our relationships in the next step:

task :console do
  Pry.start
end

Step 8) Let's run "rake console", and in the terminal let's create Employee objects and Ticket objects and make sure they are connected to each other by creating them with the respective foreign keys. [IMPORTANT: it's because of the foreign keys that we are able to have the "Tickets belong to Employees" relationship.]

Relational overview:
- employee.tickets -> the list of that employee's Tickets
- ticket.employee -> the Employee the ticket belongs to

Step 9) Create the Views

Note: ERB is similar to HTML. A good way to think of ERB is that it's HTML that you can write Ruby code in, so it allows you to use logic.
- <% %> allows you to write Ruby code.
- <%= %> allows you to display the output of your code.

DON'T forget to set your variables as instance variables in your route, or else you won't be able to call them in the ERB files. There are endless ways you can have your views look, so it's up to you. For the sake of brevity, we're going to focus on the back-end logic.

- sessions
  - login.erb
- tickets
  - all_tickets.erb
  - create_ticket.erb
  - read_ticket.erb
  - update_ticket.erb
- users
- layout.erb — handles the layout for all the pages

Step 10) Now it's time to create all our routes in our controllers. For the sake of keeping this short, I'm going to link the GitHub repo for those who want to follow along. I'm just going to write about the most mission-critical routes and their functionality.

The login route deals with the user's ability to log in. We do this by enabling sessions in the main controller, and once the session is enabled we are able to persist and verify the session across routes. Users can only use CRUD functionality if they are logged in; otherwise they are redirected.

Now for the CRUD functionality:

get "/tickets" do # Read - index action
  if logged_in?
    @tickets = current_user.tickets
    erb :'tickets/all_tickets'
  else
    redirect '/login'
  end
end

This route allows us to see all the tickets (it's the main page).

get "/tickets/new" do # Create - new action
  if logged_in?
    error_getter_ticket
    erb :'tickets/create_ticket'
  else
    redirect '/login'
  end
end

post "/tickets" do # Create - create action
  if logged_in?
    ticket = current_user.tickets.build(params)
    if ticket.save
      redirect "/tickets"
    else
      error_setter_ticket(ticket)
      redirect "/tickets/new"
    end
  else
    redirect '/login'
  end
end

This route allows us to create tickets.

get "/tickets/:id" do # Read - show action
  if logged_in?
    @ticket = current_user.tickets.find_by(id: params[:id])
    if @ticket
      erb :'tickets/read_ticket'
    else
      redirect '/tickets'
    end
  else
    redirect '/login'
  end
end

This is another read route. It allows us to read more information about a particular ticket.

get "/tickets/:id/edit" do # Update - edit action
  if logged_in?
    error_getter_ticket
    @ticket = current_user.tickets.find_by(id: params[:id])
    if @ticket
      erb :'tickets/update_ticket'
    else
      redirect '/tickets'
    end
  else
    redirect '/login'
  end
end

patch "/tickets/:id" do # Update - update action: process the update and redirect
  if logged_in?
    ticket = current_user.tickets.find_by(id: params[:id])
    if ticket
      if ticket.update(title: params[:title], details: params[:details])
        redirect "/tickets"
      else
        error_setter_ticket(ticket)
        redirect "/tickets/#{params[:id]}/edit"
      end
    else
      redirect '/tickets'
    end
  else
    redirect '/login'
  end
end

This route handles the update functionality.

delete "/tickets/:id" do # Delete - delete action: delete and redirect
  if logged_in?
    ticket = current_user.tickets.find_by(id: params[:id])
    if ticket
      ticket.destroy # destroy (rather than Ticket.delete) also runs Active Record callbacks
      redirect "/tickets"
    else
      redirect "/tickets"
    end
  else
    redirect '/login'
  end
end

This route handles the delete functionality. The only reason we are able to use these last routes is that we have the middleware.
Step 11) Let's add validations to make sure that the data being persisted is the data that we actually want.

class Employee < ActiveRecord::Base
  has_many :tickets
  has_secure_password

  # name, username and password must be present
  validates :name, :username, :password, presence: true
  # username must be unique
  validates :username, uniqueness: true
end

class Ticket < ActiveRecord::Base
  belongs_to :employee

  # ticket title validators
  validates :title, :details, presence: true
  validates :title, length: { in: 2..100 }
  # ticket detail validators
  validates :details, length: { in: 6..1000 }
end

Step 12) Handling errors

Now that we have our app functional, we have to figure out how we are going to display the error messages, to let the user know that they aren't putting in proper data. Let's add "use Rack::Flash" to the controllers we want to display our errors in:

class EmployeesController < ApplicationController
  use Rack::Flash
  # ... some routes and route logic
end

"flash" functions similarly to sessions: you can get data to persist across routes through the "flash" hash. When you try to edit or create data in your application and the data doesn't pass the validations we set above, Active Record will return an error along with specific information about it. You can set the error message equal to a flash key of your choice and read it in another route. And once the error has been saved to an instance variable, you can actually display it to the user.

Bonus Step 13) If you would like to make the application visually appealing, use Bootstrap. It comes with a lot of cool tools and documentation on how to use them.

Bonus Step 14) If you want to go above and beyond, don't forget to create the style sheets. Let's go ahead and abstract out our CSS for the sake of keeping our code DRY. Put your "main.css" file into the stylesheets folder:

- stylesheets
  - main.css
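The screenshot showing the error setter/getter helpers did not survive in this copy of the post, so here is a minimal standalone sketch of the idea. The helper names and the :ticket_errors key are assumptions for illustration, and a plain hash stands in for Rack::Flash so the sketch runs outside Sinatra:

```ruby
# A plain hash simulates the flash store so this runs outside Sinatra.
flash = {}

# After a failed save, stash the validation messages under an assumed key.
def error_setter_ticket(flash, error_messages)
  flash[:ticket_errors] = error_messages
end

# In the route that re-renders the form, read (and clear) the messages.
def error_getter_ticket(flash)
  flash.delete(:ticket_errors) || []
end

error_setter_ticket(flash, ["Title can't be blank", "Details is too short"])
errors = error_getter_ticket(flash)
puts errors.length  # 2
puts flash.empty?   # true — flash messages only persist for one read
```

In the real app, error_setter_ticket would pull the messages from ticket.errors.full_messages after a failed save, and the view would render whatever error_getter_ticket returns.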
https://abdulkhan-81904.medium.com/sinatra-project-57e87b259dac?source=user_profile---------5----------------------------
Hi Johannes,

>> so using "/" within the name parameter for request_firmware() is
>> actually forbidden. I know that some driver authors think it is a good
>> idea, but it is not.
>
> Can you explain why it is allowed now? And maybe why the API was
> designed in a way that easily allows it?

in the early days we had something like three drivers using request_firmware() and it was understood between the authors what the filename was meant for. And to be quite honest, it was an oversight on our side to not explicitly fail when the filename contains a "/". So it happened that driver authors exploited the fact that they can group firmware files under a subdirectory from within the kernel. Nobody made the effort and proposed changes to udev.

Personally I think it is fine to have _ALL_ firmware files in one directory and not namespace them at all, but it seems that this is important for some driver authors.

>> I explained this a couple of times. The request_firmware() is an
>> abstract mechanism that can request a firmware file. The location of
>> the firmware file is up to the userspace. The kernel requests a
>> particular file and that is it. All namespacing has to be done by the
>> firmware helper script (nowadays udev). That the current
>> implementation of the firmware helper maps the filename 1:1 to a file
>> under /lib/firmware/ just works, but doesn't have to work all the
>> time. It is not the agreed contract between kernel and userspace.
>
> I don't buy this argument.
> I could agree if you said that the "agreed
> contract" between the kernel and userspace is for the kernel to request
> a firmware file /keyed by an arbitrary, null-terminated string/.
>
> The fact that it is usually stored on a filesystem where / means a
> directory (and thus grouping) can be seen as a nice convenience of the
> filesystem storage, but if firmware was stored elsewhere then you could
> degrade to the simple key-based lookup that happens to allow "/" as a
> character in the keys.

The kernel should not in any case have knowledge about directories or subdirectories where the firmware files are stored. That is fully irrelevant for the kernel.

Especially with the case of built-in firmwares now, it becomes more important to do it right. The one reason why we have to hand over the struct device to request_firmware() is that we can give the helper script full access to the device and driver information of the caller. Hence adding for example b43/ as a prefix simply duplicates everything, since the struct device has a link to the driver that is requesting a firmware file.

> b43 comes with 22 firmware files for a single driver, and groups them
> using "b43/<name>". What you're proposing will make firmware fail
> *again* for all users, and we got a *LOT* of flak from all kinds of
> stakeholders (not just the users) when firmware upgrades were required,
> doing it again for such a petty reason is ridiculous.

That is not what I am proposing. What I am proposing is that we do this the right way, meaning that we fix udev to do the namespacing. I am working on a way to have this change in a backward-compatible way.

Regards

Marcel
http://lkml.org/lkml/2008/5/25/82
EclipseLink Caching Ability

Data caching is essential when building an enterprise application. It becomes the most important aspect of an application when the app requires lots of database access. Caching speeds up database access and increases the performance of an application.

JPA 2 supports two levels of cache: the JPA Level 1 Cache (L1) and the JPA Level 2 Cache (L2). The L1 cache is the persistence context, and the L2 cache spans different persistence contexts. Find more info on the JPA 2 cache in the article JPA 2.0 Cache Vs. Hibernate Cache: Differences in Approach.

Every persistence provider has to implement the JPA 2.0 specification to provide the facility to cache row objects. In addition to implementing JPA 2, EclipseLink is capable of caching entities, which yields much better performance for the application. This section introduces the basics of the EclipseLink caching capability (covering every aspect of EclipseLink caching is beyond the scope of this article). EclipseLink maintains two caches:

- Session cache
- Unit of work cache

The Session cache and the Unit of work cache work together to optimize the application's database access. Instances that are stored and retrieved from the database are managed and maintained by the Session cache. The Session cache stores instances that can be retrieved for future reference beyond the scope of the transaction. The first object accessed from the database is eligible to be added to the Session scope. The Unit of work cache stores instances within a transaction. When the transaction completes, the state of the instance is synchronized with the database; that is when EclipseLink updates the Session cache for that object.

Objects are uniquely identified in the database by their primary key values. Within the cache, the primary key value of the persistent entity is the object identity that EclipseLink uses to uniquely identify instances. The objects in the cache are stored in the Identity Map.
Caching can be enabled in EclipseLink using either annotations or XML files. The following code snippet shows the usage of the @Cache annotation on the Student entity class.

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheCoordinationType;
import org.eclipse.persistence.annotations.CacheType;
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table
@Cache(type = CacheType.SOFT, shared = true,
       coordinationType = CacheCoordinationType.INVALIDATE_CHANGED_OBJECTS)
public class Student {
    // Some Code
}

Here is how the attributes in the code function:

- The type attribute specifies the strategy to be used while caching the object.
  - CacheType.SOFT: instructs the garbage collector (GC) to collect the object only when the application decides that memory might be low.
  - CacheType.WEAK: the marked object will be removed as soon as the GC is initiated.
  - CacheType.FULL: gives the full caching facility, as objects are never flushed until they are deleted from memory.
  - CacheType.SOFT_WEAK: similar to the WEAK identity map, but it uses a most-frequently-used sub-cache.
  - CacheType.HARD_WEAK: similar to SOFT_WEAK, but it uses hard references in the sub-cache.
- The shared attribute is set to true, which informs EclipseLink to store the object in the L2 cache shared across persistence contexts.
- The coordinationType attribute helps EclipseLink decide what needs to be done when the state of an instance is modified. The values that this attribute can take are:
  - INVALIDATE_CHANGED_OBJECTS: invalidates the object in other referencing nodes; the changes made to an object are not reflected in its references.
  - SEND_OBJECT_CHANGES: when the state is changed, it is reflected in the cache.
  - SEND_NEW_OBJECTS_WITH_CHANGES: similar to SEND_OBJECT_CHANGES, but this also applies to objects that are newly created in the transaction.

Caching in JPQL

JPA provides a standard and powerful querying mechanism for querying the database. JPQL is the standard query language used to define queries in JPA. Queries that search for instances in the shared cache are called in-memory queries. A query generally searches the database, except when it searches for a single instance; in that case, the instance is checked in the cache first and then in the database. We can specify whether a query needs to be fired against the database, against the in-memory cache, or against both. The eclipselink.cache-usage hint is used to specify the interaction with the EclipseLink cache. This is a shared cache that is shared across persistence contexts.

The following code snippet applies a query hint using @QueryHint. It sets the cache usage to CheckCacheByPrimaryKey, which specifies that the cache is checked first if the query contains the primary key.

@NamedQuery(name = "Employee.findAll",
            query = "SELECT e FROM Employee e",
            hints = {@QueryHint(name = "eclipselink.cache-usage",
                                value = "CheckCacheByPrimaryKey")})

Conclusion

In this article, we introduced the open source ORM solution, EclipseLink. We also provided the steps required to create a JPA 2 application with EclipseLink as the provider in both Java SE and Java EE environments. Towards the end of the article, we explored the EclipseLink cache, which differentiates it from other vendors implementing the JPA 2 specification.

Acknowledgements

We would like to sincerely thank Mr. Subrahmanya (SV, VP, ECOM Research Group, E&R) for his constant encouragement and Ms. Sangeetha S for providing ideas, guidance and valuable comments, and for kindly reviewing this article.
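Besides the @Cache annotation shown above, the L2 shared cache can also be configured declaratively. The fragment below is a sketch based on the standard JPA 2.0 shared-cache-mode element of persistence.xml (the persistence-unit name is made up for illustration and is not from the article):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="SamplePU">
    <!-- ENABLE_SELECTIVE: only entities explicitly marked as cacheable
         (e.g. with @Cacheable(true) or EclipseLink's @Cache) go to L2 -->
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
  </persistence-unit>
</persistence>
```

Other standard values are ALL, NONE, DISABLE_SELECTIVE and UNSPECIFIED; this setting plays the same role for the whole persistence unit that the shared attribute of @Cache plays per entity.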
https://www.developer.com/java/ent/eclipselink-caching-ability.html
I would like to do the following: pass a string as a function argument and have the function return a string back to main. The brief idea is as below:

String str1 = "Hello ";
String received = function(str1);
printf("%s", received);

String function(String str1) {
    String str2 = "there";
    String str3;
    str3 = strcat(str3, str1);
    str3 = strcat(str3, str2); // str3 = "Hello there"
    return str3;
}

Strings, or character arrays, are basically a pointer to a memory location, as you might already know. So returning a string from a function is basically returning the pointer to the beginning of the character array, which is stored in the string name itself. But beware: you should never return the memory address of a local function variable. Accessing that kind of memory might lead to undefined behaviour.

#include <stdio.h>
#include <string.h>

#define SIZE 100

char *function(char aStr[])
{
    char aLocalStr[SIZE] = "there! ";
    strcat(aStr, aLocalStr);
    return aStr; /* aStr belongs to the caller, so returning it is safe */
}

int main()
{
    char aStr[SIZE] = "Hello ";
    char *pReceived;

    pReceived = function(aStr);
    printf("%s\n", pReceived);
    return 0;
}

I hope this helps.
https://codedump.io/share/32wCjwbmUaLG/1/string-as-function-argument-and-returning-string-from-function-in-c
CodePlex — Project Hosting for Open Source Software

I am having a problem with the model data in my custom module. I am developing a module which will retrieve data from an external system and allow users to update the data in an Orchard form. Retrieving and showing the data is not a problem, but when I try to change some values on the form and update it, the model does not reflect any data and is null in the HTTP POST. I tried to test by creating a very simple form as below and the result is the same. Is there anything wrong with my code, or did I miss something?

I have created a content part and content part record as follows:

[OrchardFeature("CRMProfile")]
public class CRMProfileRecord : ContentPartRecord
{
    public virtual string UserName { get; set; }
    public virtual string FullName { get; set; }
    public virtual Guid MemberId { get; set; }
}

[OrchardFeature("CRMProfile")]
public class CRMProfilePart : ContentPart<CRMProfileRecord>
{
    public string UserName
    {
        get { return Record.UserName; }
        set { Record.UserName = value; }
    }

    public string FullName
    {
        get { return Record.FullName; }
        set { Record.FullName = value; }
    }

    public Guid MemberId
    {
        get { return Record.MemberId; }
        set { Record.MemberId = value; }
    }
}

This is my migration file:

public class Migrations : DataMigrationImpl
{
    public int Create()
    {
        // Creating table Ibiz_Crm_LoyaltyPortalModule_CRMProfileRecord
        SchemaBuilder.CreateTable("CRMProfileRecord", table => table
            .ContentPartRecord()
            .Column("UserName", DbType.String)
            .Column("FullName", DbType.String)
            .Column("MemberId", DbType.Guid)
        );
        return 1;
    }
}

This is the controller code:

This is my test view.cshtml. Here I tried using both a normal input control and also the Html.TextBox helper.
@model Ibiz.Crm.LoyaltyPortalModule.Models.CRMProfileRecord
@{
    ViewBag.Title = "Test";
}

<h2>Test</h2>

@using (Html.BeginFormAntiForgeryPost(Url.Action("Test", "Admin", new { Area = "Ibiz.Crm.LoyaltyPortalModule" }), FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <label for="test" style="font-weight: bold">@T(Model.FullName)</label>
    @*Html.TextBox("Model.FullName", Model.FullName)*@
    <input type="text" id="Model.FullName" value=@Model.FullName />
    <button class="primaryAction" type="submit">@T("Save")</button>
}

and the page is always showing "fail".

Regards,

Why do you need multipart?

I am just trying to follow the sample in

I need to store the user's GUID key in the database to retrieve the data from the external system, and I thought a content part was the way to store the data in the Orchard database. And in my actual module, I am pulling some other data and showing it on the form using another class in the view model. The problem is that whenever I click on the save button, my modified data is not reflected in the POST function. Even if I remove the multipart, it still doesn't work.

You mean that data is not related to a content item? If this is not a content item, don't use a part; use a plain record and do your data access through IRepository<YourRecordType>.

I tried that way before I created the part. I can access and display the data on the form, but when the form is submitted through POST, as I said before, the model comes back with no data and it always shows "fail", as in my example.

It's probably that the model binder can't map the values because of the way you named your form fields.

Thanks a lot. Now it is working. The problem was the way the controls were named.
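The resolution of the thread can be illustrated with a small markup fragment (illustrative only, not code from the thread): the MVC model binder matches posted values against property names using each input's name attribute, so an input that only sets id="Model.FullName" never reaches the POST action at all.

```html
<!-- Not bound: there is no name attribute, and an id is ignored by the binder -->
<input type="text" id="Model.FullName" value="@Model.FullName" />

<!-- Bound: the name matches the CRMProfileRecord.FullName property -->
<input type="text" name="FullName" value="@Model.FullName" />
```

The commented-out Html.TextBox("Model.FullName", ...) call in the view has the same problem in a different form: it emits name="Model.FullName", which doesn't match a FullName property on the bound model.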
http://orchard.codeplex.com/discussions/335192
Hi all, I've been noticing my LCD doing some very strange (and harder to track down) things on occasion.

#include <LCD4Bit.h>

LCD4Bit lcd = LCD4Bit(2);
int serbyte = 0;

void setup() {
  lcd.init();
  Serial.begin(115200);
}

void loop() {
  serbyte = Serial.read();
  if (serbyte != -1 && serbyte != 94 && serbyte != 37) {
    lcd.print(serbyte);
    Serial.print(serbyte); //echo back to the comp so i can find the stupid bug
    Serial.print("."); //make it easier to read
    delay(20);
  }
  if (serbyte == 94) { //lets make ^ the clear char, easy to type, not used often in mp3 titles
    lcd.init(); //we need to re-init after a suspend, and it doesn't seem to have any performance hit, so lets use init instead of clear
    delay(100);
  }
  if (serbyte == 37) { //and lets make the % the newline char, again easy to type, and not used often in mp3 titles
    lcd.cursorTo(2, 0);
    delay(100);
  }
}

It usually works as intended: stuff gets piped over serial, it takes it and prints it to the LCD, and back to serial. Very rarely it prints the wrong thing. Here's an example: if I echo "^Avenged Sevenfold%City of Evil" to the serial port, here's what I get back (the ASCII is my doing, it's what should be printed):

A v e n g e d (wrong though, prints as Ave* Sevenfold) 65.118.101.235.149.145.129.
S e v e n f o l d 83.101.118.101.110.102.111.108.100.
C i t y 67.105.116.121.
sp O f 32.111.102.
sp E v i l 32.69.118.105.108.

Does this make sense to anyone? Almost as important, has anyone else seen a problem like this? I haven't tested enough to be sure, but after only a few test runs, I haven't been able to make it happen with a slower baud rate (using 57600 instead of 115200). Could that have something to do with it?
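The poster's baud-rate hunch is plausible. Assuming a common 16 MHz AVR-based Arduino (an assumption, since the thread doesn't name the board), the UART derives its bit clock from an integer divisor, so some rates land further from their target than others. A quick check with the standard UBRR formula (U2X off) shows 115200 baud is off by about -3.5%, near the edge of what asynchronous serial tolerates, while 57600 is off by only about +2.1%:

```python
def uart_baud_error(f_cpu, baud):
    """Percent error of the closest achievable AVR UART rate (U2X off)."""
    ubrr = round(f_cpu / (16 * baud)) - 1       # integer divisor register
    actual = f_cpu / (16 * (ubrr + 1))          # rate actually generated
    return (actual - baud) / baud * 100

for baud in (57600, 115200):
    print(baud, round(uart_baud_error(16_000_000, baud), 2))
# 57600 2.12
# 115200 -3.55
```

That would fit the symptom: occasional corrupted bytes at 115200 that disappear at 57600.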
https://forum.arduino.cc/t/strange-irritating-lcd-issue/9544
package helloworld;

public class simple {
    public int iTst;
    public float fcalc;
}

public class easy extends simple {
    public int iMy;
    public float fEsy;
}

When I try to compile this code I get a message indicating that class simple should be in a file (test.simple). I find this confusing since 1) the class is declared locally 2) the class is named simple (not test.simple). Is it possible to compile and run this code with class simple being a part of this file, or do I have to put it in another file?

This post has been edited by pbl: 30 March 2010 - 09:50 PM
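For context (an editorial sketch, not a reply from the thread): Java allows at most one public top-level class per .java file, and that class's name must match the file name. Dropping public from the second class, making it package-private, lets both live in one file. Here the file is assumed to be named Simple.java, with the classes renamed to follow Java's capitalization convention:

```java
// File: Simple.java — only the class matching the file name may be public.
public class Simple {
    public int iTst;
    public float fcalc;

    public static void main(String[] args) {
        Easy e = new Easy();
        e.iTst = 1;              // field inherited from Simple
        System.out.println(e.iTst);
    }
}

// Package-private: fine to keep in the same file.
class Easy extends Simple {
    public int iMy;
    public float fEsy;
}
```

The other option is exactly what the compiler message hints at: move the second public class into its own file named after it.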
http://www.dreamincode.net/forums/topic/165165-multiple-public-classes-in-the-same-java-file/
One of the important concepts of object-oriented programming in C++ is that the private and protected data of a class normally cannot be accessed from outside the class; in special cases, however, C++ lets you grant that access to non-member functions. This is done using a friend function or/and a friend class.

friend Function in C++

If a function is defined as a friend function, then the private and protected data of a class can be accessed using that function. The compiler knows a given function is a friend function by the use of the keyword friend. For accessing the data, the declaration of a friend function should be made inside the body of the class (it can be anywhere inside the class, either in the private or the public section), starting with the keyword friend.

Declaration of friend function in C++

class class_name
{
    ... .. ...
    friend return_type function_name(argument/s);
    ... .. ...
}

Now, you can define the friend function as a normal function to access the data of the class. No friend keyword is used in the definition.

class className
{
    ... .. ...
    friend return_type functionName(argument/s);
    ... .. ...
}

return_type functionName(argument/s)
{
    ... .. ...
    // Private and protected data of className can be accessed from
    // this function because it is a friend function of className.
    ... .. ...
}

Example 1: Working of friend Function

/* C++ program to demonstrate the working of friend function.*/
#include <iostream>
using namespace std;

class Distance
{
    private:
        int meter;
    public:
        Distance(): meter(0) { }
        //friend function
        friend int addFive(Distance);
};

// friend function definition
int addFive(Distance d)
{
    //accessing private data from non-member function
    d.meter += 5;
    return d.meter;
}

int main()
{
    Distance D;
    cout << "Distance: " << addFive(D);
    return 0;
}

Output

Distance: 5

Here, the friend function addFive() is declared inside the Distance class, so the private data meter can be accessed from this function. Though this example gives you an idea about the concept of a friend function, it doesn't show any meaningful use. A more meaningful use would be when you need to operate on objects of two different classes. That's when the friend function can be very helpful.
You can definitely operate on two objects of different classes without using the friend function, but the program will be long, complex and hard to understand.

Example 2: Addition of members of two different classes using friend Function

#include <iostream>
using namespace std;

// forward declaration
class B;

class A
{
    private:
        int numA;
    public:
        A(): numA(12) { }
        // friend function declaration
        friend int add(A, B);
};

class B
{
    private:
        int numB;
    public:
        B(): numB(1) { }
        // friend function declaration
        friend int add(A, B);
};

// Function add() is the friend function of classes A and B
// that accesses the member variables numA and numB
int add(A objectA, B objectB)
{
    return (objectA.numA + objectB.numB);
}

int main()
{
    A objectA;
    B objectB;
    cout << "Sum: " << add(objectA, objectB);
    return 0;
}

Output

Sum: 13

In this program, classes A and B have declared add() as a friend function. Thus, this function can access the private data of both classes. Here, the add() function adds the private data numA and numB of the two objects objectA and objectB, and returns the result to the main function.

To make this program work properly, a forward declaration of class B should be made, as shown in the above example. This is because class B is referenced within class A using the code: friend int add(A, B);.

friend Class in C++ Programming

Similarly to a friend function, a class can also be made a friend of another class using the keyword friend. For example:

... .. ...
class B;

class A
{
    // class B is a friend class of class A
    friend class B;
    ... .. ...
}

class B
{
    ... .. ...
}

When a class is made a friend class, all the member functions of that class become friend functions. In this program, all member functions of class B will be friend functions of class A. Thus, any member function of class B can access the private and protected data of class A. But member functions of class A cannot access the data of class B. Remember, the friend relation in C++ is only granted, not taken.
https://cdn.programiz.com/cpp-programming/friend-function-class
Lesson 3 - Form handling in ASP.NET Core MVC

In the previous lesson, First web application in ASP.NET Core MVC, we tried the MVC architecture in practice and learned how to pass data from the model to the view. We said that we use a special collection (mostly ViewBag) to do so. But there's also a second way, and that's to connect the model directly to the view. This technique is called model binding. It is very useful when working with forms, and we're going to try it in today's ASP.NET Core tutorial. We'll program a simple calculator.

Let's create a new ASP.NET Core Web Application named MVCCalculator. Even though we could start with an empty template, we'll now choose the MVC template. This way, the folders for the MVC components will be generated for us, together with the routes and configurations that we set up manually last time. A sample project including several sliders and even the famous EU cookie message will also be generated. You can try to run it. We didn't use this template last time in order to better understand how MVC works, so it wouldn't distract us unnecessarily. We won't need this sample project, and therefore we'll remove the contents of the Models/, Controllers/, and Views/ folders in the Solution Explorer, but keep the _ViewImports.cshtml file, otherwise tag helpers (see below) won't work properly. If we had started with an empty project like last time, we'd have to add this file manually.

Don't use only Calculator as the project name, as it'd conflict with the name of our class. Let's show how our finished calculator will look:

Model

Let's start with the model again, which will be the Calculator class. Add it to the Models/ folder. We'll add several public properties to the model: two input numbers, the selected operation, and the result. The last property will be a list of the SelectListItem type that will hold the possible operations for the view. It'll be rendered as the <select> HTML element later.
We'll fill the list right in the constructor. Don't forget to add using Microsoft.AspNetCore.Mvc.Rendering.

public class Calculator
{
    public int Number1 { get; set; }
    public int Number2 { get; set; }
    public double Result { get; set; }
    public string Operation { get; set; }
    public List<SelectListItem> PossibleOperations { get; set; }

    public Calculator()
    {
        PossibleOperations = new List<SelectListItem>();
        PossibleOperations.Add(new SelectListItem { Text = "Add", Value = "+", Selected = true });
        PossibleOperations.Add(new SelectListItem { Text = "Subtract", Value = "-" });
        PossibleOperations.Add(new SelectListItem { Text = "Multiply", Value = "*" });
        PossibleOperations.Add(new SelectListItem { Text = "Divide", Value = "/" });
    }
}

The Text property of the SelectListItem class is the label of the option the user can see. The Value is the value that is sent to the server (it shouldn't contain any non-alphanumeric characters except for dashes or underscores). We can also set the Selected property to indicate whether the item should be selected when the page is loaded.

The only thing left is a method with some logic that, according to the selected Operation and the values in Number1 and Number2, calculates the Result:

public void Calculate()
{
    switch (Operation)
    {
        case "+":
            Result = Number1 + Number2;
            break;
        case "-":
            Result = Number1 - Number2;
            break;
        case "*":
            Result = Number1 * Number2;
            break;
        case "/":
            // Cast one operand so that e.g. 5 / 2 gives 2.5 instead of
            // the truncated result of integer division.
            Result = (double)Number1 / Number2;
            break;
    }
}

The result is stored in the Result property after calling the method. We could also return the result, as we did in the previous project with the random numbers, but for our further intentions with binding, this will be more useful. We have the model ready; let's add the controller.

Controller

We'll have only one controller in our application. You surely remember that the controller wires up the model (logic) and the view (HTML template). We'll add a new Empty Controller and name it HomeController.
So it'll be called when we open the homepage of our application. Let's move to its code and edit the Index() method as follows:

public IActionResult Index()
{
    Calculator calculator = new Calculator();
    return View(calculator);
}

When we open the page, the Index() method is called; we already know that. We create a new model instance, which is still the same thing we did in the previous lesson. However, this time we pass the model to the view as a parameter. Don't forget to add using MVCCalculator.Models again.

View

We'll generate the view for the Index() action. We'll do this as always by right-clicking anywhere in the method and choosing Add View. As the Template, choose Create, and set the Model class to Calculator. The template allows us to pre-generate some code right into the view; this technique is called scaffolding. The Create template generates a view with a form for the properties of the selected model, wired to this model to create a new model instance. Now when we run the app, it looks like this:

We can see that Visual Studio generated a total of 4 inputs: for the two numbers, the result, and the operation. However, we'd like to specify the operation using the <select> element and print the result into an HTML <p> paragraph instead of a form field.
For this reason, let's move to Index.cshtml and change it to the following form:

@model MVCCalculator.Models.Calculator
@{
    ViewData["Title"] = "Calculator";
}
<head>
    <title>@ViewData["Title"]</title>
</head>
<body>
    <h2>Index</h2>
    <h4>Calculator</h4>
    <hr />
    <div class="row">
        <div class="col-md-4">
            <form asp-action="Index">
                <div asp-validation-summary="ModelOnly" class="text-danger"></div>
                <div class="form-group">
                    <label asp-for="Number1"></label>
                    <br />
                    <input asp-for="Number1" class="form-control" />
                    <span asp-validation-for="Number1" class="text-danger"></span>
                </div>
                <div class="form-group">
                    <label asp-for="Number2"></label>
                    <br />
                    <input asp-for="Number2" class="form-control" />
                    <span asp-validation-for="Number2" class="text-danger"></span>
                </div>
                <div class="form-group">
                    <label asp-for="Operation"></label>
                    <br />
                    @Html.DropDownListFor(model => model.Operation, new SelectList(Model.PossibleOperations, "Value", "Text"))
                    <span asp-validation-for="Operation" class="text-danger"></span>
                </div>
                <div class="form-group">
                    <input type="submit" value="Calculate" class="btn btn-default" />
                </div>
                <p style="font-size: 2em;">@Model.Result</p>
            </form>
        </div>
    </div>
    @section Scripts {
        @{await Html.RenderPartialAsync("_ValidationScriptsPartial");}
    }
</body>

We've made only minimal changes compared to the original template. At the very beginning of the template, we set the type of the model to which the view is bound. Next, we set the page title and the subtitle. Note that since we don't insert the template into a layout, we added the <head> and <body> elements to it. Next, there's the form generated by Visual Studio, which we've only edited. We add individual editing fields for the model properties the following way:

<div class="form-group">
    <label asp-for="Number1"></label>
    <br />
    <input asp-for="Number1" class="form-control" />
    <span asp-validation-for="Number1" class="text-danger"></span>
</div>

The asp-for attributes are called tag helpers, by which ASP.NET Core can generate an appropriate control element for our property. E.g., a DatePicker is rendered for dates, and so on. The asp-validation-for attributes insert a space for the error message in case the user fills the field incorrectly. This is again detected from the property data type, and everything is completely automated.
A minor disadvantage is that we pass properties to the helpers as strings, as you've certainly noticed. Fortunately, Visual Studio is still able to verify such code. You can see that we combine tag helpers with the older approach of inserting controls using IHtmlHelper (@Html). Not all controls are currently supported by tag helpers, so sometimes we can't avoid this solution. However, we prefer to wire form elements to the model properties using tag helpers and asp-for rather than at-signs. We want the HTML template to look as much like HTML code as possible.

To make the tag helpers work in your project, you need to have a file called _ViewImports.cshtml in it with the following contents. If you have followed the tutorial, the file is already included in the project. If you accidentally deleted this file or started with an empty project, you can create it now:

@using MVCCalculator
@using MVCCalculator.Models
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

At the bottom of the page, we print the Result property of the model into an HTML <p> paragraph, so we can display it to the user. Our form now looks like this:

Once the form is submitted, nothing happens yet. We'll continue next time. In the next lesson, Data processing and validations in ASP.NET Core MVC, we'll finish the app. If you've made a mistake somewhere, you can also download the complete project code in the next lesson.
The node-fetch module is vitally useful in your transaction scripts for creating HTTP requests. Transaction test scripts can use node-fetch to make requests against one or more HTTP API endpoints, and to chain data from one call to the next. node-fetch is maintained and distributed by npm. As such, you should always look there for authoritative information on usage and implementation. This page covers the use of node-fetch in ThousandEyes transaction scripts.

To use the node-fetch module, make sure to first import it within your transaction script:

import fetch from 'node-fetch';

You can then use fetch() to make arbitrary HTTP requests in transaction scripts. One example is using fetch() to pull a webpage that requires Basic authentication (username and password are both "admin").

If outbound requests fail, check /var/log/te-sandboxd/te-sandboxd.log for the following warning:

WARNING: IPv4 forwarding is disabled. Networking to destinations outside of the agent will not work.

This warning is relevant when using modules such as node-fetch, net, and tls. If you are not using any of these, this warning may still appear, but it has no known impact. To enable IPv4 forwarding, edit /etc/sysctl.conf as follows:

sudo nano /etc/sysctl.conf

Add the line:

net.ipv4.ip_forward=1

Then apply the change:

sudo sysctl -p

fetch() should work properly now.

Note that fetch returns a promise, so use the await keyword before the fetch call, so that the promise can reach the fulfilled state:

let resp = await fetch('');

At the top of the script, the fetch function from the node-fetch module is imported. Then, when fetch is called, it is preceded by await so that the promise can reach the fulfilled state.

Requests can also be chained: for example, a first fetch request fetches a list of users, and a second request fetches the activity log.

fetch() can also be combined with an HTTP proxy agent. This approach supports:
- fetch() calls that auto-detect and apply transaction test settings
- a fetch() command targeting a host for which a certificate was manually installed

For a custom certificate chain, copy the X509 of the certificate into your transaction script body.
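As a sketch of the Basic-authentication example mentioned above (the exact ThousandEyes snippet was lost in extraction), the Authorization header can be built by hand. The credentials match the "admin"/"admin" example; the target URL in the commented usage is a placeholder, not a real endpoint:

```javascript
// Build an HTTP Basic authentication header for use with fetch().
const username = 'admin';
const password = 'admin';

// Basic auth is "Basic " + base64("user:pass")
const authHeader = 'Basic ' + Buffer.from(`${username}:${password}`).toString('base64');

console.log(authHeader); // → Basic YWRtaW46YWRtaW4=

// Usage with fetch (assumes node-fetch has been imported as `fetch`;
// the URL is a placeholder):
// let resp = await fetch('https://example.com/protected', {
//   headers: { 'Authorization': authHeader }
// });
```

The same header works with any fetch-compatible client, since Basic authentication is defined by the HTTP spec rather than by node-fetch itself.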
ncl_cpsprs man page

CPSPS1 — Interpolates from an array of data on a "sparse" rectangular grid which is regularly spaced in X and Y to an array of data on a "dense" rectangular grid and initializes contouring from the array on the dense grid. (By a "sparse" grid is meant one whose dimensions are smaller than one would like, so that contour lines constructed directly on it are composed of long straight segments.) CPSPS1 may be viewed as a data smoothing routine. CPSPRS is an alternate name for the routine CPSPS1.

Synopsis

CALL CPSPS1 (ZSPS, KSPS, MSPS, NSPS, RWRK, LRWK, IWRK, LIWK, ZDAT, LZDT)

C-Binding Synopsis

#include <ncarg/ncargC.h>

void c_cpsps1 (float *zsps, int ksps, int msps, int nsps, float *rwrk, int lrwk, int *iwrk, int liwk, float *zdat, int lzdt)

Description

- ZSPS (REAL array, dimensioned KSPS x n, where "n" is greater than or equal to NSPS, input) is the "sparse" array of data, from which the "dense" array is to be generated.
- KSPS (INTEGER, input) is the first dimension of the array ZSPS.
- MSPS (INTEGER, input) is the first dimension of the "sparse" array of data in ZSPS. MSPS must be less than or equal to KSPS.
- NSPS (INTEGER, input) is the second dimension of the "sparse" array of data in ZSPS. NSPS must be less than or equal to the declared second dimension of the array ZSPS.
- RWRK (REAL array, dimensioned LRWK, input/output) is the real workspace array to be used by Conpack.
- LRWK (INTEGER, input) is the length of RWRK.
- IWRK (INTEGER array, dimensioned LIWK, input/output) is the integer workspace array to be used by Conpack.
- LIWK (INTEGER, input) is the length of IWRK.
- ZDAT (REAL array, dimensioned LZDT, output) is the array in which the interpolated "dense" array of data is to be returned. The dimensions of the interpolated array may be supplied by the user or determined by Conpack, depending on the value of the parameter 'ZDS'. Note that, if the size of the dense array is not a product of the size of the sparse array and some perfect square, the aspect ratio of the dense grid may be slightly different from that of the sparse grid.
- LZDT (INTEGER, input) is the length of ZDAT.
C-Binding Description The C-binding argument descriptions are the same as the FORTRAN argument descriptions with the following exceptions: - zsps(l,ksps) Dimensioned l by ksps, where l ≥ nsps. - ksps The second dimension of the array zsps. - msps The second dimension of the sparse array of data in zsps. msps ≤ ksps. - nsps The first dimension of the sparse array of data in zsps. nsps ≤ l, where l is the declared first dimension of the array zsps. Usage CPSPS1 performs the same functions as CPRECT, but, in addition, it interpolates from a sparse array of data to a dense array of data. CPSPS1 does this by using the routines BSURF1 and BSURF2, from the package Fitpack, by Alan K. Cline, to fit bicubic splines under tension to the sparse array of data and to compute the dense grid of data that is returned to you. The tension on the spline surfaces is specified by the parameter 'T3D'. By default, CPSPS1 selects the dimensions of the dense array of data; if desired, you can specify these dimensions by setting the parameter 'ZDS' non-zero and the parameters 'ZD1', 'ZDM', and 'ZDN' to the desired values. In either case, once 'ZD1', 'ZDM', and 'ZDN' are set, they should not be reset by you until the contour plot is complete and a different contour plot is to be drawn. Because the routines BSURF1 and BSURF2 do not have a built-in special value feature, if the special value parameter 'SPV' is set non-zero and the sparse array contains occurrences of that value, special action must be taken. The indices of the special values in the sparse array are saved in a part of the integer workspace array; the special values are then replaced by values interpolated from adjacent grid points and the resulting array is used to obtain the dense array; then, the special values in the sparse array are restored and the corresponding elements of the dense array are also given the special value. 
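The idea behind CPSPS1 — densifying a sparse regular grid by surface interpolation before contouring — can be sketched in a few lines. The following is an illustrative bilinear version in Python (not the spline-under-tension fit that BSURF1/BSURF2 actually perform, and all names here are mine), intended only to show the sparse-to-dense mapping:

```python
import numpy as np

def densify(zsps, mdense, ndense):
    """Interpolate a sparse 2D grid onto a denser regular grid.

    Illustrative bilinear analogue of CPSPS1; the real routine fits
    bicubic splines under tension using Fitpack's BSURF1/BSURF2.
    """
    msps, nsps = zsps.shape
    # Fractional positions of the dense nodes in sparse-grid index space
    x = np.linspace(0, msps - 1, mdense)
    y = np.linspace(0, nsps - 1, ndense)
    i0 = np.clip(x.astype(int), 0, msps - 2)
    j0 = np.clip(y.astype(int), 0, nsps - 2)
    fx = (x - i0)[:, None]
    fy = (y - j0)[None, :]
    # Four surrounding sparse-grid corners for every dense node
    z00 = zsps[np.ix_(i0, j0)]
    z10 = zsps[np.ix_(i0 + 1, j0)]
    z01 = zsps[np.ix_(i0, j0 + 1)]
    z11 = zsps[np.ix_(i0 + 1, j0 + 1)]
    return ((1 - fx) * (1 - fy) * z00 + fx * (1 - fy) * z10
            + (1 - fx) * fy * z01 + fx * fy * z11)

sparse = np.array([[0.0, 1.0], [2.0, 3.0]])
dense = densify(sparse, 3, 3)
print(dense[1, 1])  # centre of the cell: mean of the four corners → 1.5
```

A spline under tension would additionally control the smoothness of the surface between nodes (the 'T3D' parameter in CPSPS1), which bilinear interpolation cannot do.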
Access

To use CPSPS1 or c_cpsps1, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.

Messages

See the conpack man page for a description of all Conpack error messages and/or informational messages.

See Also

conpack, cpsps2, ncarg_cbind

Hardcopy: NCAR Graphics Contouring and Mapping Tutorial

University Corporation for Atmospheric Research

The use of this Software is governed by a License Agreement.
One of the features of Parcels is that it can directly and natively work with Field data discretised on C-grids. These C-grids are very popular in OGCMs, so velocity fields outputted by OGCMs are often provided on such grids, unless they have first been re-interpolated onto an A-grid. More information about C-grid interpolation can be found in Delandmeter et al., 2019. An example of such a discretisation is the NEMO model, which is one of the models supported in Parcels. A tutorial teaching how to interpolate 2D data on a NEMO grid is available within Parcels. Here, we focus on 3D fields. Basically, it is a straightforward extension of the 2D example, but it is very easy to make a mistake in the setup of the vertical discretisation that would affect the interpolation scheme.

How to know if your data is discretised on a C-grid? The best way is to read the documentation that comes with the data. Alternatively, an easy check is to assess the coordinates of the U, V and W fields: for an A-grid, U, V and W are distributed on the same nodes, such that the coordinates are the same. For a C-grid, there is a shift of half a cell between the different variables.

What about grid indexing? Since the C-grid variables are not located on the same nodes, there is not one obvious way to define the indexing, i.e. where u[k,j,i] is located compared to v[k,j,i] and w[k,j,i]. In Parcels, we use the same notation as in NEMO: see horizontal indexing and vertical indexing. It is important that you check whether your data follows the same notation. Otherwise, you should re-index your data properly (this can be done within Parcels; there is no need to regenerate new netcdf files).

What about the accuracy? By default in Parcels, particle coordinates (i.e. longitude, latitude and depth) are stored using single-precision np.float32 numbers. The advantage of this is that it saves some memory resources for the computation.
In some applications, especially where particles travel very close to the coast, the single-precision accuracy can lead to uncontrolled particle beaching due to numerical rounding errors. In such cases, you may want to double the coordinate precision to np.float64. This can be done by adding the parameter lonlatdepth_dtype=np.float64 to the ParticleSet constructor. Note that for C-grid fieldsets such as in NEMO, the coordinate precision is set to np.float64 by default.

How to construct the dimensions dictionary?

In the following, we will show how to create the dimensions dictionary for 3D NEMO simulations. What you require is a 'mesh_mask' file, which in our case is called coordinates.nc but in some other versions of NEMO has a different name. In any case, it will have to contain the variables glamf and gphif, which are the longitude and latitude of the mesh nodes, respectively, as well as the depth variable depthw. Note that depthw is not part of the mesh_mask file, but is in the same file as the w data (wfiles[0]). For the C-grid interpolation in Parcels to work properly, it is important that U, V and W are on the same grid.
The code below is an example of how to create a 3D simulation with particles starting in the mouth of the river Rhine at 1 m depth, and advecting them through the North Sea using the AdvectionRK4_3D kernel:

from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4_3D
from glob import glob
import numpy as np
from datetime import timedelta as delta
from os import path

data_path = 'NemoNorthSeaORCA025-N006_data/'
ufiles = sorted(glob(data_path+'ORCA*U.nc'))
vfiles = sorted(glob(data_path+'ORCA*V.nc'))
wfiles = sorted(glob(data_path+'ORCA*W.nc'))
mesh_mask = data_path + 'coordinates.nc'

filenames = {'U': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': ufiles},
             'V': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': vfiles},
             'W': {'lon': mesh_mask, 'lat': mesh_mask, 'depth': wfiles[0], 'data': wfiles}}
variables = {'U': 'uo', 'V': 'vo', 'W': 'wo'}
dimensions = {'U': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'},
              'V': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'},
              'W': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw', 'time': 'time_counter'}}
fieldset = FieldSet.from_nemo(filenames, variables, dimensions)

pset = ParticleSet.from_line(fieldset=fieldset, pclass=JITParticle, size=10,
                             start=(1.9, 52.5), finish=(3.4, 51.6), depth=1)

kernels = pset.Kernel(AdvectionRK4_3D)
pset.execute(kernels, runtime=delta(days=4), dt=delta(hours=6))

WARNING: File NemoNorthSeaORCA025-N006_data/coordinates.nc could not be decoded properly by xarray (version 0.11.0). It will be opened with no decoding. Filling values might be wrongly parsed.
WARNING: Casting lon data to np.float32
WARNING: Casting lat data to np.float32
INFO: Compiled JITParticleAdvectionRK4_3D ==> /var/folders/h0/01fvrmn11qb62yjw7v1kn62r0000gq/T/parcels-503/c7806282ccd5229d9f341baefbdfbb23.so

%matplotlib inline
depth_level = 8
print("Level[%d] depth is: [%g %g]" % (depth_level, fieldset.W.grid.depth[depth_level], fieldset.W.grid.depth[depth_level+1]))
pset.show(field=fieldset.W, domain={'N': 60, 'S': 49, 'E': 15, 'W': 0}, depth_level=depth_level)

Level[8] depth is: [10.7679 12.846]
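The A-grid vs. C-grid check described earlier — comparing where the U and V coordinates are located — can be sketched with toy data. The coordinate values below are made up for illustration, not taken from real model output:

```python
import numpy as np

# Toy 1D longitude coordinates for the U and V variables.
# On an A-grid both variables sit on the same nodes; on a C-grid
# they are staggered by half a cell.
lon_u_agrid = np.array([0.0, 1.0, 2.0, 3.0])
lon_v_agrid = np.array([0.0, 1.0, 2.0, 3.0])

lon_u_cgrid = np.array([0.5, 1.5, 2.5, 3.5])   # shifted by half a cell
lon_v_cgrid = np.array([0.0, 1.0, 2.0, 3.0])

def looks_staggered(lon_a, lon_b):
    """True if the two coordinate sets are offset (C-grid-like)."""
    return not np.allclose(lon_a, lon_b)

print(looks_staggered(lon_u_agrid, lon_v_agrid))  # False: same nodes (A-grid)
print(looks_staggered(lon_u_cgrid, lon_v_cgrid))  # True: half-cell shift (C-grid)
```

With real NEMO output, the same comparison would be applied to the 2D glamf/gphif arrays of each variable's file rather than these toy vectors.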
This article is a primer to the file sync provider under the Microsoft Sync Framework.

Introduction

Currently I am working on a project that requires some files/folders to be synchronized at regular intervals across networks ranging in size from an inexpensive LAN to an expensive WAN. The problem was to find a technology that would be effective and at the same time gel well with the .NET environment, as the rest of the application(s) were to be written in C#. In this process I came across the Microsoft Sync Framework, which is still in the Community Preview phase, but I thought it would be a good idea to share my research with the community.

This article does not focus on writing a sync provider; it just focuses on creating a small application to demonstrate the sync provider for file systems. Probably sometime soon I will write another article explaining how to utilize additional sync providers.

The File Sync Provider

As mentioned above, the File Sync provider is for synchronizing files and directories on the FAT or NTFS file systems. The way File Sync works is by comparing the metadata for the replica and detecting changes in files since the last time the replica was synchronized. The metadata is persisted in a file named "filesync.metadata" in both the source and destination replicas. If you are wondering what a replica is: it is one of the locations between which the synchronization happens. For example, if your application uses File Sync to synchronize data between the folders C:\Sync\Folder1 and C:\Sync\Folder2, then these two folders can be termed replicas.

Now I will write a simple application to demonstrate the File Sync provider. There are numerous ways of implementing this technology; the following program is just to give you a brief understanding of it. Let's say I want to synchronize the contents of the folder C:\Sync\Destination with the contents of the folder C:\Sync\Source.
private void btnSynchronize_Click(object sender, EventArgs e)
{
    // Generate a unique Id for the source and store it in a file for
    // future reference.
    SyncId sourceId = GetSyncID(@"C:\Sync\Source\File.ID");

    // Generate a unique Id for the destination and store it in a file
    // for future reference.
    SyncId destId = GetSyncID(@"C:\Sync\Destination\File.ID");

    // Create a FileSyncProvider object by passing the SyncId and the
    // source location.
    FileSyncProvider sourceReplica = new FileSyncProvider(sourceId, @"C:\Sync\Source\");

    // Do the same for the destination location.
    FileSyncProvider destReplica = new FileSyncProvider(destId, @"C:\Sync\Destination\");

    // Initialize the agent which actually performs the synchronization.
    SyncAgent agent = new SyncAgent();

    // Assign the source replica as the Local provider and the
    // destination replica as the Remote provider so that the agent
    // knows which is the source and which is the destination.
    agent.LocalProvider = sourceReplica;
    agent.RemoteProvider = destReplica;

    // Set the direction of synchronization from source to destination,
    // as this is a one-way synchronization. You may use
    // SyncDirection.Download if you want the local replica to be
    // treated as the destination and the remote replica as the source;
    // use SyncDirection.DownloadAndUpload or
    // SyncDirection.UploadAndDownload for two-way synchronization.
    agent.Direction = SyncDirection.Upload;

    // Make a call to the Synchronize method to start the
    // synchronization process.
    agent.Synchronize();
}

// This is a private function I have created to generate the SyncId if
// it is the first time a sync is happening on the source and
// destination replicas, or else retrieve the SyncId stored in a
// pre-defined file (File.ID) on the source and destination replicas.
// I have used a GUID for generating the SyncId, whereas you may also
// use a unique string, byte array etc. The objective here is to have
// a unique SyncId.
private static SyncId GetSyncID(string syncFilePath)
{
    Guid guid;
    SyncId replicaID = null;
    if (!File.Exists(syncFilePath))
    {
        // The ID file doesn't exist. Create the file and store the
        // GUID which is used to instantiate the instance of the SyncId.
        guid = Guid.NewGuid();
        replicaID = new SyncId(guid);
        FileStream fs = File.Open(syncFilePath, FileMode.Create);
        StreamWriter sw = new StreamWriter(fs);
        sw.WriteLine(guid.ToString());
        sw.Close();
        fs.Close();
    }
    else
    {
        FileStream fs = File.Open(syncFilePath, FileMode.Open);
        StreamReader sr = new StreamReader(fs);
        string guidString = sr.ReadLine();
        guid = new Guid(guidString);
        replicaID = new SyncId(guid);
        sr.Close();
        fs.Close();
    }
    return (replicaID);
}

To exclude files from synchronization, you have to create an object of the FileSyncScopeFilter class, add the filenames to be excluded from the synchronization, and then pass the FileSyncScopeFilter object to the FileSyncProvider constructor while creating the object for the source replica. For example, if I were to exclude the file named "File.ID" from the synchronization in the above source code, I would have to do the following.

Add the following code just before creating the object named sourceReplica:

FileSyncScopeFilter scopeFilter = new FileSyncScopeFilter();
scopeFilter.FileNameExcludes.Add("File.ID");

And modify the code for creating the sourceReplica object by passing the scopeFilter object to the constructor as follows:

FileSyncProvider sourceReplica = new FileSyncProvider(sourceId, @"C:\Sync\Source\", scopeFilter, FileSyncOptions.None);

That's all you do to exclude a file from synchronization. You may be wondering what FileSyncOptions.None does in the code above. FileSyncOptions is an enumeration that you use to specify additional information on how the synchronization handles deletes, overwrites, etc. For example, the synchronization process may lead to deletion of file(s) from the destination replica if they no longer exist in the source replica.
In such cases you may want to send the deleted file to the Recycle Bin for future reference. The FileSyncProvider does not send the deleted file to the Recycle Bin by default, but you can set the FileSyncOptions.RecycleDeletes option while creating the destReplica object to send all deleted files to the Recycle Bin as follows:

FileSyncProvider destReplica = new FileSyncProvider(destId, @"C:\Sync\Destination\", scopeFilter, FileSyncOptions.RecycleDeletes);

Other FileSyncOptions values that you may use are:

FileSyncOptions.RecycleOverwrites
FileSyncOptions.CompareFileStreams
FileSyncOptions.ExplicitDetectChanges
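The change-detection idea behind the File Sync provider — compare stored metadata against the current state of a folder and copy only what changed — can be illustrated outside of .NET as well. The following Python sketch is entirely my own (names, logic, and the timestamp-based comparison are not part of the Sync Framework, which tracks changes in its filesync.metadata file instead):

```python
import os
import shutil
import tempfile

def one_way_sync(source, destination):
    """Naive one-way sync: copy files that are new or newer in source.

    Illustrative only; the real File Sync provider compares persisted
    metadata rather than raw modification times.
    """
    copied = []
    os.makedirs(destination, exist_ok=True)
    for name in os.listdir(source):
        src = os.path.join(source, name)
        dst = os.path.join(destination, name)
        if os.path.isfile(src):
            # Copy when the destination is missing or out of date
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)  # copy2 preserves the mtime
                copied.append(name)
    return copied

# Demo with temporary folders standing in for C:\Sync\Source and C:\Sync\Destination
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, 'a.txt'), 'w') as f:
    f.write('hello')
print(one_way_sync(src_dir, dst_dir))  # → ['a.txt'] on the first run
print(one_way_sync(src_dir, dst_dir))  # → [] -- nothing changed since
```

The second call copies nothing because copy2 preserved the modification time, which is the same "detect changes, then act" pattern the SyncAgent applies between its two providers.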
coreolyn has asked for the wisdom of the Perl Monks concerning the following question:

This sub is at the core of controlling / monitoring all external functionality my Perl scripts will be forced to utilize. It is imperative that I trap and log STDERR, which is why I went with Open3. However, neither of the Camels reads clearly to me on bidirectional communication, and just looking at this code I get the feeling I'm setting myself up for headaches long term. It's intended to be handed off to others instead of having to support it myself. In particular, I'm concerned with my assumptions about gathering $stdout and $stderr, and I definitely have my ears open to all input.

# execute() runs a program on any platform and traps
# the STDOUT and STDERR. A calling program must handle
# the return value to determine if it worked or not.
# If there is a $@ (trapped error) there are BIG problems.
# There is a danger in overrunning the Perl buffers
# (if they exceed 255 chars) with either the commandline writer
# or the output buffer reader. This function gets around this by
# (when necessary) creating a batch file in a temporary directory
# and then executing that file.
sub execute {
    my $execute = $_[0];
    my $logfile = $_[1];
    my $writer = IO::Handle->new();
    my $reader = IO::Handle->new();
    my $debug = "false";
    #$debug = "true";
    if ( $debug =~ /true/i && $logfile ) {
        print "::Utility::execute::\$execute = $execute\n";
    }
    elsif ( $debug =~ /true/i ) {
        print "::Utility::execute::\$logfile = $logfile\n";
    }
    local (*FILE);
    if ( (length $execute) > 254 ) {
        my $workdir = OSworkdir();
        OSmkdir($workdir);
        my $filename;
        if ( $^O =~ /win/i ) {
            $filename = "$workdir" . "/execute.bat";
            $filename = OSpath($filename);
            open ( FILE, ">$filename" )
                or OSlogger( $logfile, "::Utility::execute Can't open $filename", 1)
                and die "aw hell";
        }
        elsif ( $^O =~ /solaris/i ) {
            $filename = OSpath("$workdir/execute.sh");
            open ( FILE, 774, ">$filename" )
                or OSlogger( $logfile, "::Utility::execute Can't open $filename", 1)
                and die "aw hell";
        }
        print FILE $execute;
        close (FILE);
        OSlogger( $logfile, "Created executable $filename", 1);
        $execute = $filename;
    }
    if ( $logfile ) {
        OSlogger( $logfile, "Executing $execute", 1);
    }

    # Here's the actual execution.
    my $pid;
    eval {
        $pid = open3( $writer, $reader, $reader, $execute );
    };
    die "::Utility::execute died a horrible death!\n" if $@;
    close($writer);

    # Let's get STDOUT and STDERR (note the first read always(?) seems to get nothing)
    my $stdout = <$reader>;
    $stdout = <$reader>;
    my $return_val = $stdout;
    my @stderr;
    while (<$reader>) {
        push (@stderr, $_ );
    }
    if ( $logfile ) {
        OSlogger( $logfile, "\$run RETURNED: $stdout\n", 1);
    }
    if ( @stderr ) {
        OSlogger( $logfile, "Error on $execute\n\n @stderr\n", 1);
        $return_val = @stderr;
    }
    # Return the output
    return $return_val;
}

Edit Masem 2002-02-19 - Added READMORE tag after 1st para

this code is running on a production system that synchronizes code between PVCS repositories. it's been running for over a year, and there's only ever been one problem: sporadic failures with IO::Pipe. see my question here: intermittent problem with IPC::Open3.
finding no other resolution, i used a short sleep command. perhaps you'll get some mileage from this. ~Particle

Thanks particle. Great node. It looks like you were going over the same nodes I was. Finishing that node really helps a lot. I think I can make a few changes and have it much more legible about what the hell is going on. I'm a bit worried about the resource error (maxed filehandles?). I guess I should set a trap for that one right off the bat. How long did you sleep before the retry?

the production server is a 180MHz pentiumpro w/256MB RAM, running NT 4 SP6, but it has ultra-fast-wide scsi, and a fat network pipe. i did not see this problem on my (then) desktop, a dual pIII 450. your mileage may vary. good luck!

You are likely to need the select command or its OO wrapper IO::Select to do this properly. (I think you will find IO::Select easier to figure out.)

I should mention that CPAN mods are not a viable option around here. (That doesn't reflect my policy, just my employer's.) coreolyn

In the latter case, at least, you could install CPAN modules in a lib directory distributed with your application. FindBin and use lib are your friends after that (see FindBin for examples). Matt

Arg.. I had thought I had overcome the problem of executing large command lines by writing the command line into a script that is executed. However, even though the created script runs fine directly from the command line, I have found that if I run it through the attached sub and the line to be executed exceeds 357 characters, I 'hang' (MSWin32 - 2000). Additionally, I have attempted to implement IO::Handle and IO::Select, but apparently not in an effective way (if it is indeed some type of buffering problem). I lost all day yesterday to this code and hope someone can pinpoint my ignorance, or am I just exceeding the limitations of open3? Here's the relevant code.
(note: I pushed all the logging to a separate module)

# RETURNS: %OSexec = ( stdout => $stdout, stderr => $stderr, pid => $pid )
sub OSexecute {
    my $execute = $_[0];

    my $stdin = IO::Select->new();
    # not sure what difference <&STDIN and \*STDIN have
    $stdin->add("\*STDIN");
    my $din = IO::Handle->new();
    $stdin->add($din);

    my $stdout = IO::Select->new();
    $stdout->add(\*STDOUT);
    my $dout = IO::Handle->new();
    $stdout->add($dout);

    my $stderr = IO::Select->new();
    $stderr->add(\*STDERR);
    my $derr = IO::Handle->new();
    $stderr->add($derr);

    my ( $pid, $val, $out, $err );
    my ( @stdin, @stdout, @stderr );
    my $debug = "false";
    #$debug = "true";
    if ( $debug =~ /true/i ) {
        print "OSify::Execute::OSexecute::\$execute = $execute\n";
    }

    # Here is the actual execution
    eval {
        print "Executing $execute\n";
        $pid = open3( $din, $dout, $derr, $execute );
        print "Waiting for pidout\n";
        # waitpid waits for the process to exit
        # $val could be used as a means to determine status
        # while waiting if that functionality becomes needed.
        $val = waitpid(-1,0); # waits for process to complete

        # Process the results
        # Standard Out
        my $line;
        my @stdout = <$dout>;
        foreach $line (@stdout) {
            chomp($line);
            $out = $out . $line;
        }
        if ( ! $out ) { $out = 1; }

        # Standard Error
        @stderr = <$derr>;
        foreach $line ( @stderr ) {
            chomp($line);
            $err = $err . $line;
        }
        if ( ! $err ) { $err = 1; }
    };
    $@ && die "OSify::OSexecute died upon execution of\n$execute\nWith $@";

    # Question remains what activity qualifies as draining the buffer?
    $din->flush();
    $din->close;
    $dout->flush();
    $dout->close;
    $derr->flush();
    $derr->close;

    my %OSexec = ( stdout => $out,
                   stderr => $err,
                   pid    => $pid,
                 );
    print "Execute Finished with @{[%OSexec]}\n";
    return %OSexec;
}

Here's a very typical script it will execute. The antcall.bat doesn't even have to exist to duplicate the problem.
E:\\cccharv\\JDK\\Release\\scripts\\antcall.bat -logfile E:\\cccharv\\testApp\\testVer\\Unit_Test\\antscripts\\build\\logs\\20020219-082532_Ant_build_testApp.txt -buildfile E:\\cccharv\\testApp\\testVer\\Unit_Test\\antscripts\\buildTESTAPP.xml compileTESTAPPClean buildTESTAPPJar buildTESTAPPWar buildClientControllerEJB buildTESTAPPSessionEJB buildTESTAPPEar

Getting rid of the last three characters eliminates the hang.

Well!!!! Welp.. it turns out that if I pass the command+args as an array instead of a string to open3, the problem disappears. I don't even have to create a script to run large command lines. The DOS problem was eliminated by slurping up the args via set VAR=%*. So to put an end to this, here's the code as I am going to run with it.

# RETURNS: %OSexec = ( stdout => $stdout, stderr => $stderr, pid => $pid )
sub OSexecute {
    my @execute = @_;

    my $stdin = IO::Select->new();
    # not sure what difference <&STDIN and \*STDIN have
    $stdin->add("\*STDIN");
    my $din = IO::Handle->new();
    $din->autoflush(1);
    $stdin->add($din);

    my $stdout = IO::Select->new();
    $stdout->add(\*STDOUT);
    my $dout = IO::Handle->new();
    $dout->autoflush(1);
    $stdout->add($dout);

    my $stderr = IO::Select->new();
    $stderr->add(\*STDERR);
    my $derr = IO::Handle->new();
    $derr->autoflush(1);
    $stderr->add($derr);

    my ( $pid, $val );
    my $debug = "false";
    #$debug = "true";
    if ( $debug =~ /true/i ) {
        print "OSify::Execute::OSexecute::\$execute = @execute\n";
    }

    # Here is the actual execution
    eval {
        $pid = open3( $din, $dout, $derr, @execute );
        # waitpid waits for the process to exit
        # $val could be used as a means to determine status
        # while waiting if that functionality becomes needed.
        $val = waitpid(-1,0); # waits for process to complete
    };
    $@ && die "OSify::OSexecute died upon execution of\n@execute\nWith $@";

    # Gather the results
    my $line;

    # Standard Out
    my @stdout = <$dout>;
    my $out;
    foreach $line (@stdout) {
        $line = OSify::Utility::trimSpaces($line);
        $out = $out . $line;
    }
    if ( ! $out ) { $out = 1; }

    # Standard Error
    my @stderr = <$derr>;
    my $err;
    foreach $line ( @stderr ) {
        $line = OSify::Utility::trimSpaces($line);
        $err = $err . $line;
    }
    if ( ! $err ) { $err = 1; }

    # Flush and close the filehandles
    $din->flush();
    $din->close;
    $dout->flush();
    $dout->close;
    $derr->flush();
    $derr->close;

    my %OSexec = ( stdout => $out,
                   stderr => $err,
                   pid    => $pid,
                 );
    return %OSexec;
}

This code can be trimmed down a lot. There are many small problems.

Cleaned Up Code

# RETURNS: ( $stdout, $stderr )  Both are refs to an array of lines.
sub OSexecute {
    my @execute = @_;
    local $, = ' ';    # for print "...@execute..."
    my $debug = 0;
    #$debug = 1;
    print("OSexecute(@execute)\n") if ($debug);

    # Here is the actual execution.
    my $pid = eval { open3($din, $dout, $derr, @execute) };
    die "OSexecute(@execute): $@" if ($@);

    # Wait for process to complete.
    waitpid($pid, 0);

    # Gather the results
    my @stdout = <$dout>;
    my @stderr = <$derr>;

    # We should check the return code of the child.
    # Gotta trap SIGPIPE for that.

    return ( \@stdout, \@stderr );
}

There's a big problem

{
    my $file_name;
    foreach $file_name ('c:\\tinyfile.txt', 'c:\\biggfile.txt') {
        my ($stdout, $stderr);
        print("$file_name\n",
              ("=" x length($file_name))."\n",
              "\n");
        ($stdout, $stderr) = OSexecute('cmd.exe', '/c', 'type', "\"$file_name\"");
        print("stdout\n",
              "------\n",
              @$stdout);
        print("Nothing was sent to STDOUT.\n") unless (@$stdout);
        print("\n");
        print("stderr\n",
              "------\n",
              @$stderr);
        print("Nothing was sent to STDERR.\n") unless (@$stderr);
        print("\n", "\n");
    }
}

outputs

c:\tinyfile.txt
===============

stdout
------
foo
bar
bla

stderr
------
Nothing was sent to STDERR.

c:\biggfile.txt
===============
*** HANGS ***

The problem is that file handles (including $dout and $derr) have a buffer that's limited in size. biggfile.txt is less than 2KB, which is really quite small, so this needs to be fixed.
Fix

# RETURNS: ( $stdout, $stderr ) Both are refs to an array of lines.
sub OSexecute {
    my @execute = @_;
    my @stdout;
    my @stderr;
    local $, = ' ';    # for print "...@execute..."

    my $debug = 0;
    #$debug = 1;
    print("OSexecute(@execute)\n") if ($debug);

    # Here is the actual execution.
    my $pid = eval { open3($din, $dout, $derr, @execute) };
    die "OSexecute(@execute): $@" if ($@);

    my $select = IO::Select->new();
    $select->add($dout);
    $select->add($derr);

    my @ready;
    my $fh;

    # Gather the results
    while (@ready = $select->can_read()) {
        foreach $fh (@ready) {
            push(@stdout, <$fh>) if ($fh == $dout);
            push(@stderr, <$fh>) if ($fh == $derr);
        }
    }

    # Wait for process to complete and reap it.
    waitpid($pid, 0);

    # We should check the return code of the child.
    # Gotta trap SIGPIPE for that.
    return ( \@stdout, \@stderr );
}

Fixed? Oops! select() (and IO::Select) doesn't work on Windows. That sucks. It means we're gonna have to use threads!!! I have no experience with threads, so maybe another day.
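The pipe-buffer deadlock this thread keeps running into is language-agnostic: any parent that waits for the child before draining stdout and stderr can block once a pipe fills. As a cross-language illustration (not part of the original Perl code; the sample child command is invented for demonstration), here is a minimal Python sketch that sidesteps it by letting subprocess.communicate() drain both pipes before reaping the child:

```python
import subprocess
import sys

def os_execute(*argv):
    """Run a command, draining stdout and stderr concurrently.

    communicate() reads both pipes before waiting for the child
    (using threads on Windows, select/poll elsewhere), so a child
    that writes more than one pipe buffer's worth of output cannot
    deadlock the parent.
    """
    proc = subprocess.Popen(
        argv,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    out, err = proc.communicate()  # drain pipes, then reap the child
    return out.splitlines(), err.splitlines(), proc.returncode

if __name__ == "__main__":
    # A child that prints far more than one pipe buffer (64 KB+)
    # without hanging the parent.
    child = "import sys; sys.stdout.write('x' * 200000); sys.stderr.write('oops')"
    out, err, rc = os_execute(sys.executable, "-c", child)
    print(rc, len(out[0]), err[0])
```

Unlike the IO::Select attempt above, this also works on Windows, because communicate() falls back to reader threads there instead of select().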
https://www.perlmonks.org/?node_id=145915
Hi, Just wanted to know how to add local image to the page, I have tried "html.Img(src='')", but it didn't work. I have my image under same folder as my main.py. Thanks,

See the solution in for now

Did something change that causes that example to break? I just want to show a local image. But I couldn't get that to work with html.Img(src='image.png'), as pointed out by other users. So I found this post, and tried it out, dropdown menu and all. Unfortunately, the example code does not load the test image: Are there any gotchas that would make this not work? One difference is that in your example you are listening on localhost, but I am running the server on host=0, so that I can work on this example on a vm. I am curious about the line

@app.server.route('{}<image_path>.png'.format(static_image_route))

What is the purpose of <image_path>.png?

Nothing has changed that would break this. Could you try opening up your dev tools and seeing if there are any errors? You can also inspect the network tab to look at the request and print() some stuff in the serve_image function. <image_path> means that whatever is placed in that location of the string will get passed into the function itself. For example, if /static/my-image.png is passed in, then the function will get the name 'my-image' in the serve_image function. Another solution would be to base64 encode the image and set it as a string in the html.Img component directly (instead of serving the image).
Here's a quick example:

import dash
import dash_html_components as html
import base64

app = dash.Dash()

image_filename = 'my-image.png' # replace with your own image
encoded_image = base64.b64encode(open(image_filename, 'rb').read())

app.layout = html.Div([
    html.Img(src='data:image/png;base64,{}'.format(encoded_image))
])

if __name__ == '__main__':
    app.run_server(debug=True)

Thanks Chris I like the option of doing things with HTML. But I still want to learn to do it the Plotly/Dash way. In the folder where I am running the script, I have a folder 'img/' where '1.png' and '2.png' live. When I select an image from the drop-downs, the console logs a 404 error: GET 404 (NOT FOUND). Using the serve_image callback I get:

image_path: 1
image_name: 1.png
image_directory + image_name: img/1.png

Problem is that I don't know where the underlying Flask server is looking to find the img/ folder. In your example, it found the Desktop folder. In my case, according to the 404 message, it is looking at It is not literally looking at /static/, is it? (I don't understand URL routing completely, yet) My assumption is that the "root" directory is actually the same directory where the python script lives. Yet Dash/Flask cannot find the images.

I just ran the example code on my desktop, using the same folder structure as you, Chris. Works! Which made me look at image_directory that you define at the top. So on my VM I have to do

image_directory = os.getcwd() + '/img/'

and that worked!! I did not realize that was an absolute path. My bad.

Yeah, that's it exactly! Glad you figured it out

FWIW, I have successfully been using base64 encode without problems. It doesn't feel very "plotly-ish" but it gets the job done – and most importantly, it works not just on my local machine but when I deploy my app to AWS Elastic Beanstalk (the image is uploaded to an S3 bucket and called from there). However, I'm going to attempt the approach suggested here and see if I can reproduce it successfully.
Static images (i.e., the company logo) are a part of every app I build! The correct approach with the current version of dash is to use the assets system. You can put your image files in the assets folder, and use app.get_asset_url('my-image.png') to get the url to the image.

import dash
import dash_html_components as html

app = dash.Dash(__name__)

app.layout = html.Div(html.Img(src=app.get_asset_url('my-image.png')))

but when I deploy my app to AWS Elastic Beanstalk

Just make sure to also include the assets folder in the package for AWS Elastic Beanstalk.

I used the same code, but I can't see my image. Is this method suitable for svg images? Spoiler alert: it does not work in my case

One more follow-up question: if I were using src = ' it works great. But src = './color_gradient.png' does not work. Could anyone help? Thanks!

You probably need to set your ospath to your current working directory first. make sure the picture is in the right directory! in my case, I had to make a directory called "assets" and place the figures there.

yeah, definitely this. you have to create a folder called assets and place the image there.

This is an easy work-around:

import dash
import dash_html_components as html
import base64

app = dash.Dash()

image_filename = 'file/path' # replace with your own image
encoded_image = base64.b64encode(open(image_filename, 'rb').read())

app.layout = html.Div([
    html.Img(src='data:image/png;base64,{}'.format(encoded_image.decode()))
])

if __name__ == '__main__':
    app.run_server(debug=True)

@zPlotlyUser Have you tried the height property under the html.Img component? I believe it resizes the image to the specified height without perverting the aspect ratio.

app.layout = html.Div([
    html.Img(src='data:image/png;base64,{}'.format(encoded_image.decode()), height=300)
])

A post was split to a new topic: Including html plotly graphs in dash app
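Since several replies in the thread lean on the base64 workaround, the encoding step can be wrapped in one small self-contained helper. This is my own sketch, not from the thread; the function name and the MIME-type guess are assumptions:

```python
import base64
import mimetypes

def file_to_data_uri(path):
    """Read a local file and return it as a data URI string.

    The MIME type is guessed from the file extension, falling back
    to PNG since that is what the thread's examples use.
    """
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "image/png"
    with open(path, "rb") as fh:
        # .decode() matters: b64encode returns bytes, and Python 3
        # would otherwise render them as "b'...'" inside the src string.
        encoded = base64.b64encode(fh.read()).decode("ascii")
    return "data:{};base64,{}".format(mime, encoded)
```

With this in place the layout line becomes html.Img(src=file_to_data_uri('my-image.png')), and because the MIME type is guessed per file, the same helper should also cover the SVG case asked about above.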
https://community.plotly.com/t/adding-local-image/4896
This section documents all changes and bug fixes that have been applied in MySQL Cluster Manager 1.4.0 since the release of MySQL Cluster Manager version 1.3.6.

Packaging: MySQL Cluster Manager is now built and shipped with GLib-2.44.0, OpenSSL 1.0.1p, and the MySQL 5.6 client library. (Bug #22202878)

Agent: When using the import cluster command, if a mysqld node was started on the command line with options outside of a special, pre-defined set, the import failed with the complaint that those options were unsupported. Now, the import continues, as long as those options and their values are also included in the node's configuration created by MySQL Cluster Manager for import. (Bug #21943518)

Agent: A warning is now logged (if log-level=warning) when a failed process is not restarted because the parameter StopOnError is set to true. (Bug #21575241)

Agent: Two new options have been introduced for the upgrade cluster command: --retry and --nodeid. They, together with the --force option, allow a retry after an initial attempt to upgrade a cluster has failed. See the description for upgrade cluster for details. (Bug #20469067, Bug #16932006, Bug #21200698)

Client: The get command now returns attributes in the same order as the MySQL Cluster ndb_mgmd command does when the --print-full-config option is used, with the non-data nodes being listed first and other nodes listed in increasing order of their node IDs. (Bug #22202973)

Client: A new autotune command has been introduced, which tunes a number of parameters of the cluster to optimize its performance. (Bug #22202855)

Client: The show settings command has a new --hostinfo option, with which the command prints out information on the host the mcm client is connected to. (Bug #21923561)

Client: You can now use the wildcard * (asterisk character) to match attribute names in a get command. See The get Command for examples. (Bug #18069656)

Agent: On Windows platform, after a cluster import, the subsequent cluster restart timed out if a non-default value of the option --pid-file had been imported for a mysqld node. (Bug #21943518)

References: This issue is a regression of: Bug #21111944.

Agent: When a data node could not be restarted after a set command because some attributes were set wrongly, another set command could not be used to correct the attributes, because the set command required the data node to be running. With this fix, the second set command can now be executed even when the data node is not running, as long as the --force option is used. The failed node is then restarted, followed by a rolling restart of the cluster. (Bug #21943518)

Agent: restore cluster timed out when the number of tables in the cluster was huge (>1000). It was because a timeout extension was blocked. This fix unblocks the extension. (Bug #21393857)

Agent: At the initial startup of a large cluster (with memory size on the order of 10GB), the process might time out while waiting for a data node to start. This fix makes the transaction timeout longer for data node initiation. (Bug #21355383)

Agent: Under some conditions, a show status command might report negative node group ID values for processes after an add process command was completed. That was because the agent reported the node group IDs before their proper values had arrived, after the creation of new node groups. This fix makes the agent wait for the correct node group IDs before reporting them. (Bug #21346804)

Agent: After the successful execution of an add process and a subsequent start process --added command, a third command that was issued very shortly afterward might fail. This was due to the way the updates for the processes' statuses were handled after the new nodes were added, which has now been corrected. (Bug #21138604)

References: See also: Bug #21346804.

Agent: Setting a value for a "key-only" option for a MySQL node (that is, an option that does not take a value—for example, skip_show_database) with the set command and restarting the cluster afterward caused mcmd to attempt a cluster upgrade and back up the cluster. (Bug #21098403)

Agent: The create site command sometimes failed with the error message "Lost connection to MySQL server during query." It was due to an error in the code that handled the socket, which has now been fixed. (Bug #21027818)

Agent: Parameters listed under the [mysqld default] or [tcp default] section of the config.ini file were not imported as configuration parameters for unmanaged API nodes. (Bug #20889471)

Client: Output of the get command used with the --include-defaults (-d) option did not include matching TCP attributes that had default values. (Bug #21895322)
https://dev.mysql.com/doc/relnotes/mysql-cluster-manager/1.4/en/news-1-4-0.html
If you have ever used a canary build of the Ember.js framework, you are likely familiar with feature flags. Used to bundle functionality and make it available in an application, it also allows for its use to be turned on or off via an entry in the application's configuration file. While used by the Ember.js community to allow for an easy way to test new, and sometimes experimental, features in upcoming releases of Ember.js, there are times when such capabilities can be useful in your own applications.

if (<namespace>.FEATURES.isEnabled('<feature-name>')) {…}

In this example, <namespace> refers to either the name of your application from the package.json file or to the value set in the namespace property in the configuration. <feature-name> refers to the name of the flag used to enable and disable the code features. Now that you have used this configuration to identify which features should be affected by feature flags, you need to configure your application to use them by adding the following to your application's Brocfile.js:

Set each specified flag's value to true if you desire to have it enabled in a production build.

#####Strip Debug Statements:

For questions, comments, or for more information, visit github.com.

-Jeremy

We would love to hear it! Open an issue
https://softlayer.github.io/blog/jbrown/ember-cli-defeatureify-addon-feature-flag-support-and-stripping-debug-statements/
form_driver.3x man page

form_driver, form_driver_w — command-processing loop of the form system

Synopsis

#include <form.h>

int form_driver(FORM *form, int c);
int form_driver_w(FORM *form, int c, wchar_t wch);

Description

form_driver

Once a form has been posted (displayed), you should funnel input events to it through form_driver. This routine has three major input cases:

- The input is a form navigation request. Navigation request codes are constants defined in <form.h>, which are distinct from the key- and character codes returned by wgetch(3X).
- The input is a printable character. Printable characters (which must be positive, less than 256) are checked according to the program's locale settings.
- The input is the KEY_MOUSE special key associated with a mouse event.

form_driver_w

Form-driver requests

The form driver requests are as follows: If the second argument is a printable character, the driver places it in the current position in the current field. If it is one of the forms requests listed above, that request is executed.

Field validation:

- a call to set_current_field attempts to move to a different field.
- a call to set_current_page attempts to move to a different page of the form.
- a request attempts to move to a different field.
- a request attempts to move to a different page of the form.

In each case, the move fails if the field is invalid. If the modified field is valid, the form driver copies the modified data from the window associated with the field to the field buffer.

Mouse handling:

- the form cursor is positioned to that field.

Return Value

- E_REQUEST_DENIED The form driver could not process the request.
- E_SYSTEM_ERROR System error occurred (see errno).
- E_UNKNOWN_COMMAND The form driver code saw an unknown request code.

See Also

curses(3X), form(3X), form_field_buffer(3X), form_field_validation(3X), form_fieldtype(3X), form_variables(3X), getch.
https://www.mankier.com/3/form_driver.3x
I promised on Twitter to write a blog post explaining why "kata" was the wrong word for the "coding kata" problems presented at CodeMash this past week in Ohio. First and foremost, I absolutely loved the idea of these coding problems. The problems were very similar to those found in computer science classes (for example, find all the prime numbers between 1 and 100), but the goal was to explore new languages or coding techniques (like TDD). For me, getting to pair program with a coworker using TDD/xUnit to solve a few coding problems was definitely a highlight of the conference. However, as a martial artist, to me "kata" is the wrong terminology to use in this context. The correct terminology is either "coding kihons" or probably more accurately "coding kumite".

What is Kata

There are 3 parts to the study of karate: kihon, kata, and kumite. You cannot study karate without all 3 components. In Shotokan karate, the style I practice, there are 26 katas. The movements for each kata never change. In other words, there is only one way to do the kata, meaning that your stances, kicks, and punches must be exact, and the timing must be correct and sincere, as if you were attacking an invisible opponent. In a karate competition, those who compete in kata are measured based upon who can perform the kata closest to perfection. Over the course of one's study of karate, you perform the kata over, and over, and over, and over, just like the 10,000 hours theory in Outliers. (Personally, I am not comfortable doing a kata until I've done it at least 100 times.) Not only does the body eventually optimize physically, but something mentally happens. You go into an "auto-pilot" mode. For example, have you ever driven to your house one day, but don't consciously remember the specifics of the drive, because you've done it so many times before? This is what a karateka is trying to achieve in kata (and in kumite, and in all walks of life).
The term for entering this "auto-pilot" mode is called mushin, but I digress… The basic idea of kata is you're trying to perfect a given series of moves via repetition. There is no deviation. Or from a Zen perspective, you're trying to reach that state of mushin where you are in total focus and concentration, where the mind and body have become one (which is also illustrated in "kime" where you unlock your ki / chi in a split second, but I digress yet again). My first karate Sensei told me that in kata you imagine that you are fighting the dark side of yourself, all the things you dislike about your character. You visualize these negative aspects and you fight them. Thus, the more you do kata, the more your character improves. The point I'm trying to make is that there's a much larger aspect to kata than going through the movements.

What a "Coding Kata" would really look like

Below are a couple of examples of what I think a coding kata could look like:

Kata #1: The Implementation of Hello World in C#

public class Hello1
{
    public static void Main()
    {
        System.Console.WriteLine("Hello World!");
    }
}

Kata #2: The Implementation of Bubble Sort in C# (via C# online)

private int[] a = new int[100];
private int x;

public void SortArray()
{
    int i;
    int j;
    int temp;

    for( i = (x - 1); i >= 0; i-- )
    {
        for( j = 1; j <= i; j++ )
        {
            if( a[j-1] > a[j] )
            {
                temp = a[j-1];
                a[j-1] = a[j];
                a[j] = temp;
            }
        }
    }
}

And you would practice these katas as many times as possible, until you can code it wearing a blindfold or hold a conversation while coding this method. In my opinion, coding katas are really just sample code or an algorithm for doing something. Just like a real kata, you know exactly what it is you are supposed to do. You're just learning to repeat it over and over again, so it becomes second nature. But, I'm not sure whether repeating these lines of code over and over again would make you a better coder.
It would definitely help initially, but I'm not sure the benefits after that point. Maybe a true "coding kata" is mastered much faster than an actual karate kata.

Why Coding Kumite is a better term

Kihon is learning the specific techniques, like punches, kicks, stances, etc. In kihon, you practice these techniques in isolation, and you repeat each individually over and over and over again. To me, coding kihon would be the equivalent of learning the syntax of a language, learning lambda expressions, or learning generics. Kihon is not about solving a problem, but rather learning what tools you have available to solve a problem. Only after one learns kihon can a karate student learn kata and kumite. Looking at these coding problems, you could make the argument that your opponent is the problem to solve. And you're using all your kihon practices to solve the problem, just like you would do in actual sparring (or in kumite).

Conclusion

Having said all of this, my "Coding Kumite" analogy still falls short. I think only in debugging, where you are trying to find and fix bugs, is actual "coding kumite". But, writing code to solve a problem still feels much closer to kumite to me than kata or kihon. For a different perspective, you can check out Steve Andrew's blog post called Shotokan Development. He watched my Nidan (2nd degree) black belt exam back in November, and wrote a blog post from the perspective of a software engineer on how to apply Shotokan teaching methods to software engineering. Lastly, I've never experienced mushin in coding like I have in karate. Maybe someone out there has and can respond with a counterpoint to this. I'm really curious what others think, and I definitely would love to discuss these concepts further. I really think we could put together a teaching framework based on karate concepts, if anyone is interested in helping me out.
Maybe the next open spaces unconference I can propose a topic on karate terms in coding, but that’s only if Doctor Who is no longer making me need a support group. =D
http://blogs.msdn.com/b/saraford/archive/2010/01/17/coding-is-not-kata.aspx
There's a Gender Extension for PHP

<?php
namespace Gender;

$gender = new Gender;

$name = "Milene";
$country = Gender::FRANCE;

$data = $gender->country($country);
$result = $gender->get($name, $country);

switch($result) {
    case Gender::IS_FEMALE:
        printf("The name %s is female in %s\n", $name, $data['country']);
        break;
    case Gender::IS_MOSTLY_FEMALE:
        printf("The name %s is mostly female in %s\n", $name, $data['country']);
        break;
    case Gender::IS_MALE:
        printf("The name %s is male in %s\n", $name, $data['country']);
        break;
    case Gender::IS_MOSTLY_MALE:
        printf("The name %s is mostly male in %s\n", $name, $data['country']);
        break;
    case Gender::IS_UNISEX_NAME:
        printf("The name %s is unisex in %s\n", $name, $data['country']);
        break;
    case Gender::IS_A_COUPLE:
        printf("The name %s is both male and female in %s\n", $name, $data['country']);
        break;
    case Gender::NAME_NOT_FOUND:
        printf("The name %s was not found for %s\n", $name, $data['country']);
        break;
    case Gender::ERROR_IN_NAME:
        echo "There is an error in the given name!\n";
        break;
    default:
        echo "An error occurred!\n";
        break;
}

While we have this code here, let's take a look at it. Some really confusing constant names in there – how does a name contain an error? What's the difference between unisex and couple names? Digging deeper, we see some more curious constants. For example, the class has short names of countries as constants (e.g. BRITAIN) which reference an array containing both an international code for the country (UK) and the full country name (GREAT BRITAIN).

$gender = new Gender\Gender;
var_dump($gender->country(Gender\Gender::BRITAIN));

array(2) {
  'country_short' => string(2) "UK"
  'country' => string(13) "Great Britain"
}

Only, UK isn't the international code one would expect here – it's GB. Why they chose this route rather than rely on an existing package of geonames or even just an accurate list of constants is anyone's guess.

Once in use, the class uses the get method to return the gender of a name, provided we've given it the name and the country (optional – searches across all countries if omitted).
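Behaviourally, everything get() does here is a dictionary lookup keyed on name and country. As a rough cross-language sketch of that idea (Python rather than PHP, with a three-entry table invented purely for illustration; the real extension ships a large binary dictionary), it fits in a few lines:

```python
# Minimal sketch of a get()-style lookup as a plain userland class.
# The gender labels and the three sample entries are invented.
IS_FEMALE, IS_MALE, NAME_NOT_FOUND = "F", "M", "?"

class GenderLookup:
    NAMES = {
        ("milena", "hr"): IS_FEMALE,
        ("mario", "hr"): IS_MALE,
        ("bob", "us"): IS_MALE,
    }

    def get(self, name, country=None):
        key = name.strip().lower()
        if country is not None:
            return self.NAMES.get((key, country), NAME_NOT_FOUND)
        # No country given: search across all countries,
        # mirroring the extension's documented behaviour.
        for (n, _), gender in self.NAMES.items():
            if n == key:
                return gender
        return NAME_NOT_FOUND
```

A table-driven class like this is trivially extensible and installable without root, which matters for the argument about packaging that comes up later in the article.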
But the country has to be the constant of the class (so you need to know it by heart or use their values when adding it to the UI because it won't match any standard country code list) and it also returns an integer – another constant defined in the class, like so:

const integer IS_FEMALE = 70 ;
const integer IS_MOSTLY_FEMALE = 102 ;
const integer IS_MALE = 77 ;
const integer IS_MOSTLY_MALE = 109 ;
const integer IS_UNISEX_NAME = 63 ;
const integer IS_A_COUPLE = 67 ;
const integer NAME_NOT_FOUND = 32 ;
const integer ERROR_IN_NAME = 69 ;

There's just no rhyme or reason to any of these values.

Another method, isNick, checks if a name is a nickname or alias for another name. This makes sense in cases like Bob vs Robert or Dick vs Richard, but can it really scale past these predictable English values? The method is doubly confusing because it says it returns an array in the signature, whereas the description says it's a boolean.

Finally, the similarNames method will return an array of names similar to the one provided, given the name and a country (if country is omitted, then it compares names across all countries). Does this include aliases? What's the basis for similarity? Are Mario and Maria similar despite being opposite genders? Or is Mario just similar to Marek? Is Mario similar to Marek at all? There's no information. I just had to find out for myself, so I installed it and tested the thing.

Installation

I tested this on an isolated environment via Homestead Improved with PECL pre-installed.

sudo pecl install gender
echo "extension=gender.so" | sudo tee /etc/php/7.1/mods-available/gender.ini
sudo phpenmod gender
pear run-scripts pecl/gender

The last command will ask where to put a dictionary. I assume this is there for the purposes of extending it. I selected ., as in "current folder".

Let's try it out by making a simple index.php file with the example content from above and testing that first. Sure enough, it works. Okay, let's change the country to $country = Gender::CROATIA;. Okay, sure, it's not a common name, and not in that format, but it's most similar to Milena, which is a female name in Croatia. Let's see what's similar to Milena via similar.php:

<?php
namespace Gender;

$gender = new Gender;

$similar = $gender->similarNames("Milena", Gender::CROATIA);
var_dump($similar);

Not what I expected. Let's see the original, Milene. So Milena is listed as a name similar to Milene, but Milene isn't similar to Milena? Additionally, there seem to be some encoding issues on two of them? And the Croatian alphabet doesn't even have the letter "y", we definitely have neither of those similar names, regardless of what's hiding under the question mark.

Okay, let's try something else. Let's see if Bob is an alias of Robert in alias.php:

<?php
namespace Gender;

$gender = new Gender;

var_dump($gender->isNick('Bob', 'Robert', Gender::USA));

Indeed, that does seem to be true. Low hanging fruit, though. Let's see a local one.

var_dump($gender->isNick('Tea', 'Dorotea', Gender::CROATIA));

Oh come on. What about the Mario / Maria / Marek issue from the beginning? Let's see similarities for them in order. Not good. A couple more tries. To make testing easier, let's change the $name and $country lines in index.php to:

$name = $argv[1];
$country = constant(Gender::class.'::'.strtoupper($argv[2]));

Now we can test from the CLI without editing the file. Final few tries. I have a female friend from Tunisia called Manel. I would assume her name would go for male in most of the world because it ends with a consonant. Let's test hers and some other names. No Tunisia? Maybe it isn't documented in the manual, let's output all the defined constants and check.

// constants.php
<?php
$oClass = new ReflectionClass(Gender\Gender::class);
var_dump($oClass->getConstants());

No, looks like those docs are spot on. At this point, I stop my playing around with this tool.

The whole situation is made even more interesting by the fact that this is a simple class, and definitely doesn't need to be an extension. No one will call this often enough to care about the performance boost of an extension vs. a package, and a package can be installed by non-sudo users, and people can contribute to it more easily. How this extension, which is both inaccurate and incomplete, and could be a simple class, ended up in the PHP manual is unclear, but it goes to show that there's a lot of cleaning up to be done yet in the PHP core (I include the manual as the "core") before we get PHP's reputation up. In the 9 years (nine!) since development on this port started, not even all countries have been added to the internal list and yet someone decided this extension should be in the manual.

Do you have more information about this extension? Do you see a point to it? Which other oddball extensions or built-in features did you find in the manual or in PHP in general?
https://www.sitepoint.com/theres-a-gender-extension-for-php/
Simonpj/Talk:OutsideIn From HaskellWiki Latest revision as of 14:45, 11 April 2011

Modular type inference with local assumptions: OutsideIn(X)

This epic 73-page paper (JFP style) brings together our work on type inference for type functions, GADTs, and the like, in a single uniform framework. Version 3 (camera ready copy for JFP) is very substantially revised compared to the May 2010 version.

- Modular type inference with local assumptions: OutsideIn(X): Version 3 PDF
- Related papers (constraints)
- Related papers (GADTs)
- Related papers (type families)

Abstract. Advanced type system features, such as GADTs, type classes, and type families have proven to be invaluable language extensions for ensuring data invariants and program correctness among others. Unfortunately, they pose a tough problem for type inference, because they introduce. Please help us!

Heisenbug 17:42, 8 February 2011 (UTC)

Note from Gabor Greif

Just found this paper and had a quick look. You should probably compare your algorithm with Algorithm-P described in Chuan-kai Lin's PhD thesis where he claims to have developed a practical algorithm for GADT type inference. The work presumably obsoletes both references mentioned here to Lin's and Sheard's papers.

Version 1 comments, now dealt with; thank you!

Simon Meier 15 May 2010:
- Typo on Page 7: "having type Bool" should probably be "having type Int"
- Typo on Page 18: "constraint siplifier"
- Typo on Page 28: "topl-level"
- Typo on Page 31: "But hang on! NoGen means ... in the beginning of Section 4.2 <text-missing-here>"
- Typo on Page 53: Formula below "Simplification rules" is missing a space.
- Typo on Page 59: "makes sens"

Batterseapower 14:24, 14 May 2010 (UTC)

You say that local definitions such as "combine" and "swap" could harmlessly be defined at top level. However, there is a cost to doing so in that it pollutes the top-level namespace.
Adding a facility for making definitions that are at the top level for the purposes of type checking but not for name resolution would fix this, at the cost of some ugliness.

Rgreayer 16:04, 14 May 2010 (UTC)
- Revisiting Typo on Page 7: "This means that any expression of type F [Bool] can be considered an expression of type F Bool". But would that not mean the axiom is: F [a] ~ F a, rather than: F [a] ~ a?
- Typo, Page 8 (middle): "An similar issue..."

Saizan 14:23, 16 May 2010 (UTC)
- Figure 2: VarCon uses \nu in the conclusion and x in the premises, it seems they should both be \nu considering the textual description below, but then the definition of "Type environments" in Figure 1 also needs to have \nu rather than x.
- typos in Section 9.1:
- "f = case (Ex 3) of Ex -> False" should be "f = case (Ex 3) of Ex _ -> False".
- "f :: Bool -> Bool" should be "f :: Bool".
- Have you considered adding support for partial type signatures?

niall 15:07, 17 May 2010 (UTC)
- Footnote on page 65 refers to an incorrect link (should perhaps be ) although the journal paper does not seem linked from there. The Appendix with the algorithm is perhaps meant to refer to the appendix of "Simple unification-based type inference for GADTs"?

Byorgey 09:10, 21 June 2010 (UTC)
- p.8: should be a colon after "Here is an example"
- p.10: end of section 2.2: "definition of g" should be "definition of test"
- p.15: Indentation of "The judgement Q ; Gamma |- prog..." is too small
- p.16: Grammar of Definition 3.1 needs fixing, for example, "An axiom scheme Q is consistent iff whenever ... *we have* (or, *it is also the case that*, etc.)
- p.17: add comma after however in "For now, however algorithmic constraints..."
- p.19: in Fig 7, should the Empty rule have a premise ftv(Q,Gamma) = empty?
- p.21: I find it very confusing to use the notation t1 <= t2 for "t1 is more general than t2"; we are using a "less than" symbol for a concept involving the word "more".
- p.57, Fig 22 (and Fig 23): given constraints F Int ~ a and a ~ Bool, should this be rewritten to F Int ~ Bool and a ~ Bool? Or does this not matter? There don't seem to be rules to accomplish this.
- p.21: I find it very confusing to use the notation t1 <= t2 for "t1 is more general than t2"; we are using a "less than" symbol for a concept involving the word "more".
- p.57, Fig 22 (and Fig 23): given constraints F Int ~ a and a ~ Bool, should this be rewritten to F Int ~ Bool and a ~ Bool? Or does this not matter? There don't seem to be rules to accomplish this.

Longlivedeath 03:11, 15 July 2010 (UTC)
- In the References section: replace "Fun with type fnnctions" with "Fun with type functions"

Jobo 12:07, 20 July 2010 (UTC)

I just took a quick look at the references section:
- There are some references to a Mr Jones, Simon Peyton.
- References in the paper read "etal." or "et al"; they should be "et al." (two words, ending in a dot).
- Some references have the form "Sulzmann et al. (Sulzmann et al. 2006)"; consider using \citet.
https://wiki.haskell.org/index.php?title=Simonpj/Talk:OutsideIn&diff=39430&oldid=34727
This is just a short shout-out to the PHP folks: please do not cast variables during comparison using either ! or empty(). Let me explain with an example. When I look at a piece of code like this (an example controller written Symfony-style):

declare(strict_types=1);

namespace App\Controller;

use External\Service\Api;
use Symfony\Component\HttpFoundation\Response;

final class OrderController
{
    /**
     * @var Api
     */
    private $api;

    public function __construct(Api $api)
    {
        $this->api = $api;
    }

    public function __invoke(): Response
    {
        $orders = $this->api->get('/orders');

        if (!$orders) {
            return new Response('No orders');
        }

        $this->processOrders($orders);

        return new Response('Processed');
    }

    /**
     * Oh no, there is no type hint for $orders.
     */
    private function processOrders($orders): void
    {
        // Both arrays and strings can be iterated with a for loop.
        // This approach is far from ideal, but not impossible to encounter.
        // count() will throw an exception if the value is not countable,
        // but only since version 7.2. Before that, a string will also work.
        for ($i = 0; $i < count($orders); $i++) {
            $this->api->post('/complete-order', ['id' => $orders[$i]]);
        }
    }
}

then if I do not have previous experience with the code, I need to do additional checks to see what is actually going on here. Is $orders an array? Is it a JSON string? Something else? I would have to check the API class to see if it has a return type hint, and if it does not, I would have to check processOrders to see how it treats the $orders variable.

Let us say it usually returns an array of integers. But what if it returned something different, like a 0, due to some freak error? In that case, it would be cast to false and the controller would not even notice the difference. But if, for example, you got a valid JSON string, it would evaluate to true, and processOrders would receive a string to iterate over, trying to use individual letters as order ids.
It is a made-up (and a bit silly) example, I agree, but completely in the realm of possibility, especially if it is part of a legacy application. Now, we can rewrite it like so:

declare(strict_types=1);

namespace App\Controller;

use Exception;
use External\Service\Api;
use Symfony\Component\HttpFoundation\Response;

final class OrderController
{
    /**
     * @var Api
     */
    private $api;

    public function __construct(Api $api)
    {
        $this->api = $api;
    }

    public function __invoke(): Response
    {
        $orders = $this->api->get('/orders');

        if (false === is_array($orders)) {
            throw new Exception(
                sprintf(
                    'Expected an array, got "%s".',
                    true === is_object($orders) ? get_class($orders) : gettype($orders)
                )
            );
        }

        if (0 === count($orders)) {
            return new Response('No orders');
        }

        $this->processOrders($orders);

        return new Response('Processed');
    }

    private function processOrders(array $orders): void
    {
        // By using array_walk(), we get an additional type check for the
        // individual array items.
        array_walk($orders, function (int $id): void {
            $this->api->post('/complete-order', ['id' => $id]);
        });
    }
}

... and you will still get an exception in case of an incorrect value being fetched from the API. So why bother? Two reasons:

- You will not attempt to execute any code with data that is not at least of the correct type.
- Both you and anyone else looking at the code will have no additional mental overhead from figuring out what is being used and whether there is any potential for error in the code. If you are not certain of the validity of the data being used, you may feel inclined to investigate and/or refactor a potential hazard.

Let me know your thoughts about this. Cheers

Discussion (1)

More explicit checks make the code more readable and remove ambiguity early, rather than just passing the confusion down the stack 👍 Still, understanding how $api->get() behaves is our first priority. If it's unreliable, we may have to adapt it to something more reliable.
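The pitfall described above is not unique to PHP. As a rough analogy, here is a minimal Python sketch (my illustration, not the post's code; note that Python's truthiness rules differ slightly from PHP's — for instance, the string '0' is truthy in Python but falsy in PHP) showing why an explicit type check can tell "no data" apart from "wrong data" where a bare truthiness test cannot:

```python
def process_orders_loose(orders):
    """Truthiness check: 0, '' and [] are all treated as 'no orders'."""
    if not orders:
        return "No orders"
    return f"Processed {len(orders)} orders"


def process_orders_strict(orders):
    """Explicit type check: anything that is not a list is an error."""
    if not isinstance(orders, list):
        raise TypeError(f"Expected a list, got {type(orders).__name__}")
    if len(orders) == 0:
        return "No orders"
    return f"Processed {len(orders)} orders"


# A freak API error returning 0 is silently swallowed by the loose check...
print(process_orders_loose(0))        # No orders -- the bug goes unnoticed
# ...but surfaces immediately with the strict one.
try:
    process_orders_strict(0)
except TypeError as exc:
    print(exc)                        # Expected a list, got int

print(process_orders_strict([1, 2]))  # Processed 2 orders
```

The strict variant fails loudly at the boundary where the bad value enters, which is exactly the behavior the rewritten controller above aims for.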
https://practicaldev-herokuapp-com.global.ssl.fastly.net/szymach/php-please-do-not-use-and-empty-if-you-can-help-it-5alh
Post your Comment

getElementById not working
I have to get value from a hidden input... guess why? I always thought getElementById was a better way of doing it compared to the other method.

JavaScript getElementById method
... html page by using the method getElementById(). For accessing any element... Output: Input your text here to show the use of getElementById() method.

JavaScript Checkbox getElementById
... on clicking the checkbox. This is done by using the JavaScript method getElementById... In this section, we are going to use the method getElementById.

JavaScript getElementById div
In JavaScript we can get access to any node by using the method document.getElementById(). This method is very important for JavaScript and is the entry...

JavaScript getElementById select
... here to use getElementById with select by using a very simple example... defined two JavaScript functions selectOption() and selectTypeOption() which...

JavaScript getElementById Style
In this part of the JavaScript examples, we have created a simple example which shows the use of the style attribute over any element by using the method...

JavaScript getElementById Iframe
We can also use the document.getElementById() method with the IFrame... to the text fields by using the method document.getElementById(). Here...

JavaScript getElementById innerHTML
In JavaScript method... with the getElementById() method. To show the use of both we have created a simple HTML...

java script getElementById()
What is the use of the getElementById() method in JSP? Can you please explain it with the help of an example?

JavaScript add method
The add() method is used to add an option to a dropdown list...
Syntax: Object_of_select.add(option, before);

JavaScript deleteCaption method
As in the previous section of JavaScript method... object from the method getElementById(). deleteCaptionExample.html

JavaScript getAdjacentText method
The JavaScript getAdjacentText method returns... by using the method getElementById("element id"), and it returns the text...

JavaScript deleteTFoot method
... then we can delete this table footer by calling the JavaScript method... the JavaScript method "deleteTableFooter()". It gets the element...

JavaScript createTHead method
In JavaScript we can create and add a table header... the method getElementById('table'), and with this object we can create a table...

java script
What is the use of the getElementById() method in JSP? Please explain it fully with an example.

method inside the method??
Can't we declare a method inside a method in Java? For example:
public class One {
    public static void main(String[] args) {
        One obj = new One();
        One.add();
        private static void add...

JavaScript createTFoot method
... the table object by its id using the method getElementById(), and with this object... the JavaScript in a similar manner as we have added and deleted the caption...
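Most of the snippets above revolve around the same pattern. A minimal, self-contained HTML page (my own illustration, not from any of the listed tutorials) that reads a hidden input with document.getElementById() and writes the result into a div via innerHTML might look like this:

```html
<!DOCTYPE html>
<html>
<head><title>getElementById demo</title></head>
<body>
  <!-- A hidden input whose value we want to read -->
  <input type="hidden" id="secret" value="42">
  <div id="output"></div>

  <script>
    // getElementById returns the element whose id attribute matches,
    // or null if no such element exists -- check before using it.
    var hidden = document.getElementById("secret");
    var output = document.getElementById("output");
    if (hidden !== null && output !== null) {
      output.innerHTML = "Hidden value: " + hidden.value;
    }
  </script>
</body>
</html>
```

Note that the script is placed after the elements it looks up, so both lookups succeed. Calling getElementById before the element has been parsed returns null, which is a common cause of the "getElementById not working" complaint in the first thread above.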
http://www.roseindia.net/discussion/23665-JavaScript-getElementById-method.html
Subject: Re: [boost] [Boost-users] [afio] Formal review of Boost.AFIO From: Niall Douglas (s_sourceforge_at_[hidden]) Date: 2015-08-29 11:20:55 On 29 Aug 2015 at 7:57, Robert Ramey wrote: > >> There have been several suggestions (implicit and explicit) to move this > >> type into the boost::afio namespace, but I haven' seen a response from > >> you. Have I just missed it? > > > > It's more that the suggestion is irrelevant with respect to the > > library. > > It is irrelevant to the operation of the library. But it's certainly no > irrelevant with respect to the acceptance of the library. This is > exactly the point I was making. For sure. But please be aware that the use of Monad is more presentational than anything. I foolishly made Monad look like not an internal library in the tutorial, and I have been paying for it since. Niall -- ned Productions Limited Consulting Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2015/08/225089.php
Mac-Forums Mac-Forums Forums Stop Lion from Remembering Everything Best Youtube downloader? Streaming to TV Toshiba external hard drive not registering Default app for jpg trash multple downloads migration from pc Active programs in mountain lion Connecting MacBook Pro to Asus monitor Launchpad Charger Stuck- iPhone 5 PowerMac G5 Mac book Can I transfer Notes form iMac to new MacBook Pro Desktop Display scanning old photos A question about RAM setting up my Mac Mini Please Help! Icons are messed up Am I a geek? Imac log in problems ! icloud help needed edit photo in external application using iPhoto Running Apple Hardware Test on my mac mini Apple Mail Apple Hardware Test transferring files from PC to AppleMac not able to read iPhoto from external drive macbook air good for guild wars 2? macbook air - mail problem apps for my ibook g4 Time Capsule problem powerpc g5 tower safari not loading Annoying popup any time I want to open a file downloaded from the internet Access Mac from iPhone Imovie video quality help iMac Specs Advice What is a switcher hangout? a really simple shortcut must exist iphoto 11 update Location problem - justifies to old home ICal Help New Mac User Mouse Movement Too Slow Would like new battery for my new to me Macbook 5,1 SSD advice IPhoto slideshow Viewing slideshow iphoto Verizon Hot spot Water Damaged Macbook pro Launchpad kpt 3 iPad video transfer to MacBook Picture folder duplicate is in trash Mac files on 2008 Server (1) Font size (2) Finder & Windows explorer transferring photos from iphone to macbook pro Unwanted info Desktop PC to Macbook Pro - 2 weeks on iMovie Help! Mac Alternatives to often used PC Programs rebooting old iMac with os 9.2 Forgot my password on mail account Uflysoft Wrong apple ID when doing app update iMovie help where is "discoverable" set for my trackpad? upgrading my 2008 10.5.8 IMac paperport Mac Pro 2012 Processors Documents on Mac and PC Best Application for Photo Browsing? How much is my Mac worth? 
Please help... Firmware update question Entourage Attachments outlook emails disappeared How do I lock files (or how do I take ownership to allow locking)? Thank you for all advice received and to be received! Updating My OS Without Paying For Disks How to get a Sandisk USB drive running on MAC? External daisychain drive problem! updating iphoto Office iMac ejecting USB mass storage devices and DVD's Is there a spreadsheet that will................. iMessage Macs - do you shut them down at night or just close the lid Hughes net and Time Capsule Photo Printing to delete music from iphone Best Burning Software for Mac mac book pro installer problmes upgrading from 10.5.8 to the latest OS How to NOT have Address Book and Mail When I Startup How to unistall mac os x Need help creating smart mailboxes Export MAC contacts in CSV file format?? Need help for my Mac book!!!!!!!! I would like to create a new account The volume is the wrong format for a backup. Can I upgrade this? iPhoto crash Are you an iMac user? Mail problems since I upgraded to Snow Leopard imovie trouble downloading photos Thunderbolt Display - sound out Changing Finder's View Options with Only the Keboard Selecting the File at the Bottom in Finder with Just the Keyboard late 2008 macbook Photo libraries ??? Bibliography Software: Endnote vs. Bookends How do I burn from iMovie to disc on a mac OS X MacBook Air freezes in sleep mode Logitech dark star mouse not working on my MacBook Pro macbookpro download to ipad player HD Im new here. Greetings from a Newbie Leopard or Snow Leopard? could not connect my external hard drive Won't boot up from SSD drive !! surveillance How to retrieve files from time capsule photoshop CS6 mail 6.1 Is it true that Apple no longer supports Snow Leopard with updates ? 
Video capture Export emails in Outlook to local mac mailbos Firefox 15.01 not safe by Norton Burning a DVD video External mouse Milky Spot on Macbook Pro screen Problem with Canon MP970 left click problems value of 2011 MacBook Pro Mac's iMovie vs Windows Roxio Audio problem Gesture and other questions (new Mac user) lost 259 gb after partition PC programmer needs your help! Onyx /processor My macbook needs more Oompf! How do I write a letter or a note on a macbook pro? wireless HPwireless printer & i MAC Problem with using VGA monitor with Mac Pro The Cloud ITunes problems are making me nuts! new mac user Leopard on PowerMac G5 user names OSx10.6 Where will I find which OS I have on my computer? HELP !!!! macbook fan/issues Redirect when using Chrome transferring phone video logging in Accessing PDF Files with Lion how to tap in the password with voiceover on iMovie help Does this mean I have been hacked/ Time Machine Backups mac - pc sharing Newbie trying to restore apllications Anti-virus that can be used for PC and Mac Return key on macbook pro doesnt work anymore Using two iPods with one Mac New iMac user calendar question Questions on installing more memory Airport MacBook Pro overheating Hot corners CD tray Where is the cheapest mac? iTunes radio error message Sending from a different address in Outlook 2011 for Mac please help asap How do I remove image from my email Best YouTube Downloader for Mac how do I move iPhoto library folder to other hard drive? firefox is crashed Start up Disc full..... G4 wireless connectivity PPS reader for Mac No internet on ethernet port CRM to replace act or how to make act work on MAC Best email account to use samsung tablet and mac compatibility? First time mac buyer How will plans work for iphone 5? slow TimeMachine & Mountain Lion Will Quick Time Pro solve problems watching video on external hard drive enable power nap or disable sleep when lid closed? 
17 years on Windows, finally on Mac (my story) shortcut key to replace drag and drop to dock plug ins Outlook 2011 email not working Yet, another Switcher Trash epson printer TXIIO with Mountain Lion flashdrive to ipad External HD problems Images rotated from camera or IPad to a Mac Bluetooth Disappearing Task Bar macbook pro Extenal hard drive iphone purchase? Macbook Air iTunes Options for Custom keyboard shortcut for screenshot in Preview Embedding Vimeo videos in Safari? compatability panasonic hcv700 and imovie 11 Mac Mail Should I create a Guest Account? blocked plug in Mac is amazing! Mail right preview pane error start a non profit corporation OS X 10.4.11 to snow leopard can I find the previous version of this document TouchPad Left and Right Clicks Hello!! just brought a macbook pro. import digital video from Panasonic pv-gs500 with 10.7.4 installing downloaded program thru terminal 1st Gen Time Capsule convert png to pdf Mountain Lion Notifications center not working blank check printing software increase memory power mac wireless usb issues panic error Apple Mail iMac won't start Other Safari constantly zoomed Removing previous system folder cd-dvd door Installing Printer Making my mac my own HO 3050A Deskjet J611g sending batch Photos Using skype... cover for imac apple script / automater Don't want to loose iPhone data when plugin to new pc problem when trying to view certain links on my IMac Canon hv40 and MacBook pro how to manage itunes My Switcher story thus far power problem battery life help :( Newbie question regarding purchasing a mac Suspect email iMovie 08 movie clip editing Update Address Book from 5.0.3 increasing memory event library question Can't add selected song to iPhoto slideshow Safari issue Avatar picture what is 'nesting pages' in iWeb? Keyboard MBP (2010) display not working What does the upper-right-most key on MBP keyboards do? iTunes app on iMac data transfer Can't find my files, please help. 
My macbook is acting wierd no power Newbie question about the Mac and suggestions imac takes 3-4 mins to start 32 or 64 Bit Suggested Computer Specs Keyboard mapping downloads Font Book: what could happen if I use fonts with "serious error"? Can I delete my picture library from the hard drive once it is backed up ? Apple mail question imac airport Help ichat Please help, lost files!! Question about dock Prepaid mastercard problems Messages and group chat Capturing a still from video in iPhoto Will I have access to all later versions of a Mac App Store app? adding a html meta tag in iweb Probem with Mail.................... Switching hard drives between two macbook pros change background doesnt open Downloading Videos to Hard drive imac dual g5 cooling fan probs? My MAC AIR Keeps trying to log out i have a 500 gb hard drive partitioned for files and time machine but when i connect No Optical Drive on the new iMac Desktop Proxy Newer Mac Book Pro issues slow browsers on imac and macbook Goofy behavior with Spaces music transfer Should i upgrade my macbook or get a new one? file stuck on my desktop printing in color snow leopard OSX 10.6.8 Folder Name Change Mac pro Deleting unwanted templates Right shift key minimizes screen Highlight all instances of searched for keyword in one fell swoop? Mac Pro 3.1, connecting a display Will my scheduled daily wakeup work if I keep the macbook's lid closed? touchpad Checking banking information over W-FI? dock changing? Excel VBA - code works with Microsoft but not Apple - Help Required Re-install DVD Player for OS X 10.8.5...accidentally deleted What parts do I need to make my mac book pro super fast? Panic Report Looks Like Sanskrit to me Cant Open my MacBook Pro Safari Browser Icloud password reset Need advice please? Upgrading my Mac Book Pro iBook OS Question urgent help for proving i am a student :( Mac Mail 50K emails: Too Many? Volume on MacBook Please help me with some decisions about recipes storage etc. 
Best way to network MacBook with PC/Printers/External Hard Drives Installing Windows 7 on my 15in. Macbook Pro problem Removing Adobe projector not connect with my new OSX 10.8.1 Time capsule Logic Board Black macbook New user How to set up College Email Account on the iPad General starter questions Excel Formula Problem creating PDFs from Word doucments Imovie password when I open my Mac Air Candybar - questions :) 2005 mac mini wireless adapter Software inquiry Moving iTunes from PC to Mac MD214LL/A Good Deal??? Inability to partition an external hard disk keychain access Icon/Print Size on Apple TV downloads Audible files tops playing Looking to buy a laptop - lots of questions Macbook slow to sleep Interactive excel spreadsheet using iweb Anyone having issues with Photobucket and maybe Adobe Flash? software conflict Office for Mac connecting to dlink print server New to Mac Computers Server connection issue Copy photos from one iPhoto event to another Does everyone use Safari Customising my Mac Wifi super slow, MAS nearly unresponsive, help! CHanging wireless keypads new laptop and want to ONLY backup itunes?? Expanding iwork thumbnails How to revert "Erase Free Space" Organizing Iphoto Converting VHS-C tapes to digital Iphoto Linux Server or External Hard Drive? Managing mail on various devices Problem with intel Mac Mini Macbook won't show text! Airport extreme network drops when usb hard drive is attached Problem Using nfl.com in Safari Can't log onto my macbook Newbie help with day 6 game finder software Sleep import image into textedit pakg in snow leopard Is it possible to share apps? WiFi Connecting white screen Mac & Windows Network rosetta OSX Update and Restart Problems - HELP! Delete account Batch files Sound and Brightness Keys are not working Wow, Mail email list color is neon green! Need fixed! 
Battery life Volume control for different apps Notification Center in ML folder question Best OS for 17" macbook pro Water on macbook pro MBP - update software how to open 2nd cd Mac pro Word for Mac, system upgrade ??? Microsoft for Mac blocked plug ins Microsoft office for mac facebook account on iPhoto Portable Apps DELETE not working as it used to. Two Firefox Icons in Launchpad Converting PageMaker Desktop shortcuts burning HD video Macbook Pro charger only charges when Laptop is shut down. New Mac Owner - advice requested! Using a photo host hdmi to connect to TV Apple mail hacked? macmail html signature 1 malware = how bad? Transfer Thunderbird PC to MAC Just got my first mac. Disable touchpad? iCloud, dropbox, etc new macbook pro itunes/printer/everything issues Upgrading to Mountain Lion transferring my set up from Lion My iPad email sniperspy for mac What killed an entire Apple LAN? Can't play videos in iphoto 08 and no thumbnails blocked plug ins Can't install OS X from usb thumb drive Corrupt OS a effect? imac line-in feature on macbook pro a1278 can't decide which Mac is for me Back to Mac? which multi -usb ports avi. 'document' to avi. 'container'... Not Able to Send or Receive Email I-Pod Touch Mac primer? routers Upgrade to M/Lion and MS word Numeric Keypad macmail Migration question Burning music for older cd players undeleateable excel file on desktop songs in trash Optimal HD availability Magic Mouse Tricks? Extracting files from MacBook pro thru iMac delete plaxo need help with my macbook pro and an external hard drive. ways to increase ram on imac how to play dvd through tv Nervous Switcher Hate imac Streaming Netflix from iMac to TV Spell check Graphics Key stroke repeats? MLion Software Update? MS Office for Mac new to forums looking for music production help Time Machine Set Up Programs scattered all over into pieces... Outgoing email prob "Home" and "End" keys on the standard keyboard How do you search for a word or phrase in a pdf file? 
VMWare Fusion 2.0.7 iphoto needed for ibook G4 Should I return my MBP? How to run The Master Genealogist on Mac OS X? PhotoStream External hard drive problems Blocked Plug-in iPhoto can find camera MacBook Storage running out - Finding files to purge Sound and brightness keys not working MacBook Pro with water damage SVGA or XGA??? keeping an alias as a new Apple ID VHS to DVD conversion Retina display or without Retinal display PDF Files help with macbook air fcp and avchd files Unauthorized Activity Mac version of the Windows Disk Cleanup reading portable harddrives 'between' mac and pc? screensaver photo shuffle history duplicating a CD Choices for configuring my macbook pro HDMI not connecting Another deleted files not recovering space thread Formatting Replies in Entourage
http://www.mac-forums.com/mac-forums-sitemap.php?forumid=8&page=7
Comment on Tutorial - Struts 1 vs Struts 2 By jcarreira, adrian deccico
Comment Added by: Laxmipriya
Comment Added at: 2011-05-02 02:36:36
Comment on Tutorial: Struts 1 vs Struts 2 By jcarreira, adrian deccico

yes, this article is good & very helpful.

i want to know about basic requirement for jdbc,
View Tutorial By: Amit sharma at 2009-03-19 04:10:24

3. Great stuff!!! The clickLink functi
View Tutorial By: Harro at 2008-10-13 07:25:26

4. Good Example
View Tutorial By: Naresh at 2015-02-03 07:41:40

5. I managed to run the code but I'm getting a *** ti
View Tutorial By: Bull at 2012-01-11 12:26:57

6. When i run this code, it does not return any ports
View Tutorial By: amit at 2009-06-10 02:11:13

7. Hi! I would like to send a string from a j2
View Tutorial By: beetlebrain at 2010-03-26 06:02:00

8. simple way friends: public class palindrome
View Tutorial By: Manoj at 2013-01-07 02:18:00

9. Thank you....
View Tutorial By: Prakash at 2013-02-13 08:35:12

10. Nice one really helpful thanks
View Tutorial By: Shashikant at 2012-01-06 12:00:45
https://java-samples.com/showcomment.php?commentid=36081
Asked by:

EF one to one relationship

Question - I have two models, Product and ProductDetail; a product may or may not have a ProductDetail. How can I create the entities in the same view? When editing or creating a product, I want it to be done in the same view. Here are the models:

// ProductDetail model
public class ProductDetail
{
    [Key]
    [ForeignKey("Product")]
    public int ProductDetailID { get; set; }

    public string Brand { get; set; }

    [Display(Name = "Operating System")]
    public string OperatingSystem { get; set; }

    [Display(Name = "Mother Board")]
    public string MotherBoard { get; set; }

    public string Processor { get; set; }
    public string Memory { get; set; }
    public string Storage { get; set; }
    public string Graphics { get; set; }
    public string Connectivity { get; set; }

    [Display(Name = "External Ports")]
    public string ExternalPorts { get; set; }

    public virtual Product Product { get; set; }
}

// Product model
public class Product
{
    public int ID { get; set; }
    public int? CategoryID { get; set; }

    [Display(Name = "Product Name")]
    [Required(ErrorMessage = "The product name cannot be blank")]
    public string Name { get; set; }

    [Required(ErrorMessage = "The product description cannot be blank")]
    [StringLength(200, MinimumLength = 10, ErrorMessage = "Please enter a product description between 10 and 200 characters in length")]
    [DataType(DataType.MultilineText)]
    public string Description { get; set; }

    [Display(Name = "Price ")]
    [Required(ErrorMessage = "The price cannot be blank")]
    [DataType(DataType.Currency)]
    [DisplayFormat(DataFormatString = "{0:c}")]
    [RegularExpression("[0-9]+(\\.[0-9][0-9]?)?", ErrorMessage = "The price must be a number up to two decimal places")]
    public decimal Price { get; set; }

    [Display(Name = "Last Modified")]
    [DataType(DataType.Date)]
    [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)]
    /* [HiddenInput(DisplayValue=false)] */
    public DateTime ModifiedDate { get; set; }

    public virtual Category Category { get; set; }
    public virtual ICollection<ProductImage> ProductImages { get; set; }
}

Sunday, March 25, 2018 5:45 PM

All replies

Your question is really not about EF. Your question is about controlling the viewmodels, views and partial views. You need to figure that out first. You should post to the MVC forum in the ASP.NET forums for help. I'll give you this to think about.
Sunday, March 25, 2018 6:22 PM

Hello niftymic,

Based on your description, I am afraid that the issue is outside the scope of the C# forum, which mainly discusses the C# programming language, IDE, libraries, samples and tools. I suggest that you post your issue directly on the asp.net forums.
March 26, 2018 5:35 AM

Thanks, I think that's a good suggestion. I'm new to web development with ASP.NET, and most of the time I get confused between C#, ASP.NET, Entity Framework and all that stuff. Hope to find my way out one day. Thanks again and sorry for any problem created.
niftymic
Friday, March 30, 2018 9:48 PM

Thanks for your advice and help. I will try them out.
niftymic
Friday, March 30, 2018 9:50 PM
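Since none of the replies address the mapping itself, here is a hedged sketch of how EF6's fluent API can declare the optional one-to-one for the models in the question. It assumes you add a `public virtual ProductDetail ProductDetail { get; set; }` navigation property to Product (the posted model lacks one), and the context/class names are illustrative, not from the thread:

```csharp
public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<ProductDetail> ProductDetails { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Shared primary key: ProductDetail.ProductDetailID is both the PK
        // and the FK to Product, as the [Key]/[ForeignKey] attributes in the
        // question already declare. A Product may have zero or one
        // ProductDetail; every ProductDetail requires a Product.
        modelBuilder.Entity<Product>()
            .HasOptional(p => p.ProductDetail)
            .WithRequired(d => d.Product);

        base.OnModelCreating(modelBuilder);
    }
}
```

For the single-view requirement, the usual approach is a view model holding both a Product and a nullable ProductDetail, posted to one action that saves both in a single SaveChanges() call, which is the MVC-side question the first reply points to.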
https://social.microsoft.com/Forums/en-US/9dd415e3-564b-43cb-bdbf-0bd9423c98ea/ef-one-to-one-relationship?forum=Offtopic
function

export external function truncate
   value file       file-ref
   at value integer absolute optional

Argument definitions

You can use vfs.truncate to truncate a file. The following program will truncate the specified file after the 5th byte:

import "omvfs.xmd" prefixed by vfs.

process
   local vfs.file too-long
   set too-long to vfs.open "c:\temp\test.txt" for vfs.read-write-mode
   vfs.truncate too-long at 5

This function will always leave the file with a number of bytes equal to the specified value. To empty a file, specify a value of 0 or vfs.start-offset. If the value specified is greater than the current size of the file, the file will be padded with null bytes up to the specified length. If you do not specify the absolute position at which to truncate the file, it will be truncated at the current position. You can determine the current cursor position with vfs.cursor-position.

The following exceptions may occur:
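The truncate-or-pad semantics described above mirror POSIX ftruncate behavior, so they can be demonstrated with any language that exposes it. A quick Python illustration (an analogy, not OmniMark):

```python
import os
import tempfile

# Create a scratch file containing ten bytes.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"0123456789")

# Truncating at 5 leaves exactly the first five bytes.
os.truncate(path, 5)
with open(path, "rb") as f:
    print(f.read())  # b'01234'

# Truncating to a size LARGER than the file pads it with null bytes,
# just as vfs.truncate pads when the value exceeds the file size.
os.truncate(path, 8)
with open(path, "rb") as f:
    print(f.read())  # b'01234\x00\x00\x00'

# Truncating to 0 empties the file, like vfs.truncate at 0.
os.truncate(path, 0)
print(os.path.getsize(path))  # 0

os.remove(path)
```

The one behavior Python does not share is OmniMark's "truncate at the current cursor position" default when no absolute offset is given; os.truncate always takes an explicit length.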
http://developers.omnimark.com/docs-extract/html/function/1560.htm
Questions about Azure? Contact our sales team.
United States: 1-800-867-1389

Enterprise Mobile Apps with Azure App Service
Azure Mobile Apps Updates for August 2015
Azure App Service: enterprise mobile data sync

May 19, 2016 - Notification Hubs recently enabled namespace-level tiers so that customers can allocate resources tailored to each namespace's expected traffic and usage patterns.

May 12, 2016 - With the release of Azure PowerShell 1.4.0, a number of new cmdlets were added to manage Tenant GIT configuration, Properties, and Loggers in API Management.

May 12, 2016 - We're transitioning from Azure Mobile Services to Azure App Service. While no action is required on your part, please note that the management experience will change. You can choose to migrate before September 1, 2016 or wait until Azure migrates you.
https://azure.microsoft.com/sv-se/documentation/services/app-service/mobile/
RDF XML

Now let's step out of theoretical la-la land and back into the here and now. You have resources and you want to use RDF to classify them. The good news is that you don't have to be an academic with a doctorate in formal logic theory to put this stuff to use. We've already seen that RDF specifies that the relationships between nodes of information can be represented using URI references. What does this actually look like in XML?

<rdf:RDF xmlns:
  <rdf:Description rdf:about="">
    <dc:title>What's the Deal with RDF?</dc:title>
    <dc:creator>Daniel Appelquist</dc:creator>
  </rdf:Description>
</rdf:RDF>

The above bit of RDF takes the previous graph one step further, by describing the article you're reading in RDF's XML syntax, using a specified RDF entity set called the Dublin Core. RDF uses XML Namespaces syntax to tell us that the title and creator elements are from an XML namespace that "lives" at the end of the URI. The attribute rdf:about="" (with a blank value for the URI) in the description tag indicates that the RDF description refers to the enclosing resource, in this case the article you're reading.
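To make the namespace mechanics concrete, here is a small Python sketch (my illustration, not from the article) that parses a completed version of the snippet above with the standard library. The excerpt truncates the xmlns declarations, so the two namespace URIs below are filled in by assumption with the standard RDF and Dublin Core namespaces:

```python
import xml.etree.ElementTree as ET

# A completed version of the article's snippet; the two xmlns URIs are
# assumed, since the excerpt truncates them.
doc = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="">
    <dc:title>What's the Deal with RDF?</dc:title>
    <dc:creator>Daniel Appelquist</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

root = ET.fromstring(doc)

# ElementTree expands namespace prefixes into the URIs themselves,
# which is exactly the point the article is making: the prefix is
# shorthand for the namespace URI.
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
DC = "{http://purl.org/dc/elements/1.1/}"

desc = root.find(RDF + "Description")
# rdf:about="" means the description applies to the enclosing resource.
print(repr(desc.get(RDF + "about")))   # ''
print(desc.find(DC + "title").text)    # What's the Deal with RDF?
print(desc.find(DC + "creator").text)  # Daniel Appelquist
```

Note how the dc:title and dc:creator elements surface under the Dublin Core URI, not the "dc" prefix; any conforming parser would resolve a different prefix bound to the same URI identically.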
http://www.informit.com/articles/article.aspx?p=32050&seqNum=4
In part 1, we gave a general overview of Decibel. In this part, we cover everyone's favorite section: the definitions! Well, at least we hope that the definitions will be informative. We also describe some benefits for developers and benefits for users. Read on for the details.

Decibel is a service concerned with real time communications; therefore, everything that connects one user with another user and makes it possible to get replies instantaneously is in the scope of Decibel. Decibel is based on the Telepathy D-Bus APIs and uses the Tapioca implementation of these APIs.

Definitions

A few Decibel-related definitions:

Real Time Communication (RTC)
Real time communication refers to all computer-supported interactive means of communication. This includes text chats (AIM, MSN, IRC, Jabber, etc.), telephony (VoIP or CTI), video conferencing, and more. Other means of communication, such as email and newsgroups, are not instantaneous, and as such are beyond the scope of Decibel.

Computer Telephone Integration (CTI)
Computer Telephone Integration deals with a phone connecting to and being controlled by a computer. For example, the computer could be used to dial a phone number on the phone using the computer's address book. Also, the computer could display the contact information of an incoming call (by looking up the incoming phone number in the computer's contact data).

Telepathy (the project)
Telepathy is a project hosted at freedesktop.org. Its focus is to create a set of APIs that talk to real time communication services. These APIs are based on D-Bus and are pretty low level.

D-Bus
D-Bus is also a freedesktop.org project, heavily influenced by KDE's DCOP, and used as a simple means of communication between applications.
The primary purposes of D-Bus are communication between desktop applications (for better perceived integration), and communication between desktop applications and the operating system (including running system daemons and processes).

Voice over Internet Protocol (VoIP)
VoIP, or Voice over Internet Protocol, is the delivery of voice conversation over an IP-based network. This can be over a local network, using VoIP technologies for intra-office communications, or over the internet for inter-personal communications. A VoIP service may make use of an analog POTS (Plain Old Telephone Service) line for access to the traditional telephone system, or simply a connection between two VoIP applications.

Tapioca (the project)
Tapioca is a project that is working towards implementing the Telepathy specification. Those working on the Tapioca project provide language bindings that are not available from the Telepathy developers. They also attempt to smooth over the 'rough edges' of Telepathy somewhat.

Houston
Houston is a part of Decibel. It is a policy daemon that tracks the user's online status for all communication channels they use, persistently stores settings, reacts to connections initiated from external sources, and more.

Developer Benefits

One potential benefit for developers is reported by Tobias: "Application developers will find with Decibel a centralized place to store real time communication settings like account data and online states, a means to establish outgoing connections using these settings and to react to incoming connection attempts. This makes it possible to do things like 'go offline with all my accounts' or 'notify me on all incoming text chats so that I can log them'". Tobias continues, "Decibel will make it easy for a developer to do things like 'start a text chat with the person with these contact details.'
Currently, an application developer will need to find and access the user's account data (which can be scattered over several applications), find a protocol the user and the requested contact have accounts for, bring that account online (using one of several libraries) and then initiate a chat session. Decibel tries to hide all these details from an application developer if he does not want to care".

Developers with experience in real time communications and those interested in working on Decibel itself are the most likely to be interested in developing for Decibel, although any developer would likely receive some benefit. Keep in mind, though, that Decibel will not automatically make someone a good programmer. It will just enable good programmers to be more efficient. Having said that, it will enable your application to integrate better with other applications, thus increasing the desirability of the application.

Interested developers can help in several ways. The build system used for Decibel has some problems that need to be resolved. Also, the APIs need to be tested. This includes things such as connecting the Houston daemon to Akonadi and creating a plugin mechanism for Houston so that it can become desktop-neutral. Other issues to be worked on include writing protocol implementations following the Telepathy specification and coming up with graphical interfaces for the demonstrations of Decibel's capabilities. The Decibel website could also use an overhaul.

To find out more about the project, developers can visit the project website and Tobias' blog. Chatters can visit the IRC channel #decibel at irc.freenode.net. Please also visit NLnet, the organization sponsoring the development of Decibel.

User Benefits

Since Decibel is a service rather than an application, users are not likely to see direct benefits from Decibel. Rather, the benefits they see will be indirect ones.
Also, keep in mind that while these benefits are possible, it is still up to each application to decide what features will or will not be used.

There are two main factors to keep in mind in dealing with the benefits of Decibel. First, since Decibel deals with Real Time Communications, the benefits would be realized in this arena. Second, since there is currently no comparable system with which to compare Decibel, all examples of benefits will be what Decibel 'could do' as opposed to what Decibel 'does do'. However, these two factors do not mean that the benefits users see must be small. On the contrary, the integration Decibel provides makes it possible for users to see some exciting benefits in at least two major areas.

First, applications normally associated with real time communications can add more features. For instance, an email program could use Decibel to update the online status of contacts in its address book and mail views. Second, applications not normally associated with real time communications could use Decibel to implement communication features. An office suite could use Decibel to embed chat or even video conferencing with the author of a document or support channels.

Since Decibel will make it easy to set up communication channels between users, it might even jump-start the development of collaboration features. For example, a graphics program could use Decibel to set up a communication channel to another instance of itself running for another user somewhere on the internet. This channel could then be used for collaborative editing of a graphics document.

Decibel just reached the version 0.2.0 milestone, which is a mostly feature-complete proof-of-concept implementation of the framework. Upcoming versions of Decibel will focus on integration into the KDE environment as well as improving the existing functionality and demo applications. Decibel will need some more releases before it can be used widely. Obviously, much work remains to be done.
However, we hope you have a better understanding of the future possibilities with Decibel.

Pillars

What a great stream of articles. Lots of questions are answered. Thanks for your work.

You're welcome. My pleasure! Stay tuned for more. :)

He's right, this series is brilliant. And tell me, what's the next pillar going to be? Solid? Phonon?

Oxygen is our next target. We are just now starting to work on it, though.

This series is brilliant! High quality content on the dot, you can usually find it on the digg front page, seems to be a very good promotion. -stephan

I absolutely agree. The dot has gained so much quality with the KDE reports, I can't believe how much (quantity) and good (quality) content it has now. You really help the community building. Yours, Kay

Seconded! Help spread the word about these wonderful articles:...

Help making blogs a better place and kill these Digg-posts from every blog out there. They are f'ing *annoying* and nothing more than useless spam for Digg. You're even worse than Jehovah's Witnesses.

100% agreed

Digg has millions of visitors each day; if we can get KDE on its front page, that's really great exposure for KDE. People that don't even know about KDE might learn about it. What's wrong with that? I don't get it.

The odds of someone who frequents a tech site such as Digg not knowing about KDE are small, in my opinion. Also, have you not seen the huge backlash against Ubuntu on that site due to the fact that there are so many articles about it? There is such a thing as over-exposure, you know :)

That's definitely true, but this article is definitely worthy of being read. Perhaps the best thing to do is let the Digg readers, rather than the dot.kde readers, decide what's best for Digg.

Don't confuse a "campaign to simulate popularity of KDE" with "helping to promote KDE". And readers of digg, could you please not post links to digg here, if the sole purpose is to do the above? The readers of digg likely can make their own decisions.
And honestly, do you think that if you don't know KDE, but read this article, you are going to learn anything?? That article is for KDE3 users. Yours, Kay

wasn't houston renamed recently? It would be good to mention something like this in a "definitions list".

Yes, I actually removed most references to Houston, just calling the thing the Decibel daemon in most places. It still uses "Houston" in its name on the D-Bus, so I did not ask Nathan to change that entry. Developers will stumble over the name at some point ;-) Best Regards, Tobias

so the d-bus name differs from the name used everywhere else? would it make sense to harmonize the two so it's a bit more obvious and follows the 'principle of least surprise'? this would probably need to be done before lots of applications start using it (and therefore relying on 'houston'), of course....

You are right, it does make sense to get rid of the name completely, and I am contemplating renaming the service from de.basyskom.Decibel.Houston to de.basyskom.Decibel. I would like to wait a bit longer before I go through with this change (a release or two), just to make sure nothing else will pop up that I need to stick into de.basyskom.Decibel. The Decibel client library contains a couple of string constants with all the important service names and object paths, so renaming the daemon is not a big deal (at least while binary compatibility is not an issue). It might be nice to put the Decibel service into org.kde at that point. I used de.basyskom mostly because I did not want to pollute the org.kde namespace without getting permission first.

Renaming de.basyskom.Decibel.Houston to org.kde.decibel makes a lot of sense. The process of getting permission is a simple email to the kde-core-devel mailing list. I am sure that there will be no objections. Olaf

"An office suite could use Decibel to embed chat or even video conferencing with the author of a document or support channels." How exactly will the office suite know who to call?
extra metadata? and will there be a KOffice window with an ICQ chat in it, or will KOffice just call Kopete?

It should support Decibel, and just tell it to open a chat window. It can be embedded, yes...

The idea is to ask the Decibel daemon for a channel given an ID of a contact stored e.g. in Akonadi. Decibel will then either return that channel to the calling application to do with it as it pleases (useful for those developers who want to write their own chat software) or it can invoke a channel handler for it. The channel handler is a Decibel-aware application (e.g. based on Kopete) configured by the user for a combination of protocol, type of data transmission and contact. So an office suite could e.g. look up the author of a document in its metadata, check for the name found there in Akonadi and then ask Decibel to connect to it, using whatever the user had configured as a GUI to handle the actual communication. One D-Bus call is all it takes :-)

well that answers everything. Thank you.

I love the idea that two people can open a document and work on it at the same time. I can't see myself using it but it feels powerful and fun to have it around. Like dd.

I've used this functionality before with Google Docs and it is in fact very handy. I want it in KDevelop!

Well, the usefulness depends entirely on your personal circumstances ;) As for KDevelop, seems like there's no reason why not.

Tobias just announced ( ) that in March, there will be a Decibel Hackathon. Some Kopete and Akonadi people already confirmed that they'll be there. So we can expect much more progress on the integration part with other KDE4 technologies. :-)

In case anyone was wondering why a new telephony system would be designed and then called "plain old telephony system": it is because that term is an American tongue-in-cheek variation of the proper meaning "Post Office Telephone System".
I believe the system was first designed and implemented by the British post office and later adopted around the world. Just a small correction, but for some reason I felt compelled to tell.

openSUSE packages to test Decibel are at (install source). No KDE is needed, since it builds cleanly on Qt 4. You can have a look at the test programs, or get into the APIs and header files and maybe write a UI. Tip: ... is a really good read.

openSUSE is great! They even offer KDE4 packages: Latest build is 14 Feb, that's just 3 days ago!!

You can watch an aKademy 2006 video about Decibel here (...). The PDF of the slides of the presentation is here (). I forgot to put the links in the main article.

Great project. I'm impressed with KDE. I'm starting a project to finance some GPL projects like KDE. If everything goes well I will post news here; I need the support of a telecom enterprise, and this week I will have more information. Thanks
https://dot.kde.org/comment/53781
Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> writes:

> Eric W. Biederman [ebiederm@xmission.com] wrote:
> | > Could you clarify ? How is the call to alloc_pidmap() from clone3() different
> | > from the call from clone() itself ?
> |
> | I think it is totally inappropriate to assign pids in a pid namespace
> | where there are user space processes already running.
>
> Honestly, I don't understand why it is inappropriate or how this differs
> from normal clone() - which also assigns pids in own and ancestor pid
> namespaces.

The fact that we can specify which pids we want. I won't claim it is as
exploitable as NULL pointer dereferences have been, but it has that kind
of feel to it.

> | > | How we handle a clone extension depends critically on if we want to
> | > | create a processes for restart in user space or kernel space.
> | > |
> | > | Could some one give me or point me at a strong case for creating the
> | > | processes for restart in user space?
> | >
> | > There has been a lot of discussion on this with reference to the
> | > Checkpoint/Restart patchset. See
> | > for instance.
> |
> | Just read it. Thank you.
>
> Sorry. I should have mentioned the reason here. (Like you mention below),
> flexibility is the main reason.
>
> | Now I am certain clone_with_pids() is not useful functionality to be
> | exporting to userspace.
> |
> | The only real argument in favor of doing this in user space is greater
> | flexibility. I can see checkpointing/restoring a single thread process
> | without a pid namespace. Anything more and you are just asking for
> | trouble.
> |
> | A design that weakens security. Increases maintenance costs. All for
> | an unreliable result seems like a bad one to me.
> |
> | > | The pid assignment code is currently ugly. I asked that we just pass
> | > | in the min max pids that already exist into the core pid
> | > | assignment function and a constrained min/max that only admits a
> | > | single pid when we are allocating a struct pid for restart. That was
> | > | not done and now we have a weird abortion with unnecessary special cases.
> | >
> | > I did post a version of the patch attempting to implement that. As
> | > pointed out in:
> | >
> | > we would need more checks in alloc_pidmap() to cover cases like min or max
> | > being invalid or min being greater than max or max being greater than pid_max
> | > etc. Those checks also made the code ugly (imo).
> |
> | If you need more checks you are doing it wrong. The code already has min
> | and max values, and even a start value. I was just strongly suggesting
> | we generalize where we get the values from, and then we have no special
> | cases.
>
> Well, if alloc_pidmap(pid_ns, min, max) does not have to check the
> parameters passed in (ie assumes that callers pass it in correctly)
> it might be simple. But when the user specifies the pid, then
>
>     min == max == user's target pid
>
> so we will need to check the values either here or in callers.

Agreed. When you are talking about the target pid, that code path
needs the extra check.

> Yes the code already has values and a start value. But these are
> controlled by alloc_pidmap() and not passed in from the user space.

I was only thinking passed in from someplace else in kernel/pid.c.

> alloc_pidmap() needs to assign the next available pid or a specific
> target pid. Generalizing it to alloc a pid in a range seemed to be a
> bit of an overkill for currently known usages.

alloc_pidmap in assigning the next available pid is allocating a pid
in a range.

> I will post a version of the patch outside this patchset with min
> and max parameters and we can see if it can be optimized/beautified.

Thanks,
Eric
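To illustrate the generalization under discussion (alloc_pidmap() taking an explicit min/max range, with a restored target pid being the degenerate case min == max), here is a toy Python sketch. It is illustrative only: the real kernel code uses per-namespace bitmaps and wraparound search, none of which is modeled here.

```python
class ToyPidMap:
    """Toy model of a pid map; the kernel uses a bitmap, not a set."""

    def __init__(self, pid_max=32768):
        self.pid_max = pid_max
        self.used = set()

    def alloc(self, min_pid, max_pid):
        # The range must be validated somewhere, either here or in the
        # callers; where that check lives is exactly the point of
        # contention in the thread above.
        if not (0 < min_pid <= max_pid < self.pid_max):
            raise ValueError("invalid pid range")
        for pid in range(min_pid, max_pid + 1):
            if pid not in self.used:
                self.used.add(pid)
                return pid
        return -1  # no free pid in the requested range

pidmap = ToyPidMap()
# Ordinary allocation is just "any pid in [min, pid_max)":
print(pidmap.alloc(300, pidmap.pid_max - 1))  # -> 300
# A checkpoint/restart caller wanting exactly pid 300 passes min == max,
# and fails if it is already taken:
print(pidmap.alloc(300, 300))                 # -> -1
```

The point of the single-function shape is that the "next available pid" case and the "exactly this pid" case differ only in the range passed in, not in the allocation logic.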
http://lkml.org/lkml/2009/10/20/215
Hi guys! I hope you are all doing fine. I have started working on a small compilation of projects which you can do under 24 hours. I will be publishing these projects in the form of tutorials on my blog. This is the first project and I hope you will enjoy doing it. Keep these two things in mind:

- You don't need an actual Alexa to follow this tutorial. Amazon provides an online simulator which you can use
- I will be omitting the instructions on how to get a virtual environment up and running, just so that I can keep this tutorial concise and to the point. You should use a virtualenv and execute all of these pip instructions in a virtualenv.

I did this project at BrickHack 4. This was the very first time I was using Alexa. I had never been in its proximity before. I wanted to develop a skill which was not available online and which contained a certain level of twists so that it wasn't boring. In the end I settled on an idea to control my system using Alexa. The main idea is that you can tell Alexa to carry out different commands on your system remotely. This will work even if your system/computer is in your office and you and your Alexa are at home. So without any further ado let's get started.

1. Getting Started

If this is your very first time with Alexa, I would suggest that you follow this really helpful tutorial by Amazon and get your first skill up and running. We will be editing this very same skill to do our bidding. At the time of this writing, the tutorial linked above allowed you to tell Alexa your favourite color; Alexa will remember it, and then you can ask Alexa about your favourite color and Alexa will respond with the saved color.

We need a way to interface with Alexa from our system, so that we can remotely control the system. For that we will use ngrok. Ngrok allows you to expose your local server to the internet. Go to this link and you can read the official install instructions. Come back after you are sure that ngrok is working.
Now we also need to open URLs from AWS-Lambda. We could use urllib for that, but I prefer to use requests, so I will show you how you can use requests on AWS-Lambda.

- Create a local folder for your AWS-Lambda code
- Save the code from lambda_function.py to a lambda_function.py file in the local folder
- Install the requests library in that folder using pip: pip install requests -t .
- Create a zip of the directory and upload that zip to AWS-Lambda

After the above steps your local directory should look something like this:

$ ls
certifi                      chardet-3.0.4.dist-info  lambda_function.py         urllib3
certifi-2018.1.18.dist-info  idna                     requests                   urllib3-1.22.dist-info
chardet                      idna-2.6.dist-info       requests-2.18.4.dist-info

You can find the upload zip button on AWS-Lambda. Now you can use requests on AWS-Lambda.

Now, just to test whether we did everything correctly, edit the code in lambda_function.py and replace the get_welcome_response function with this:

def get_welcome_response():
    """ If we wanted to initialize the session to have some attributes
        we could add those here
    """
    session_attributes = {}
    card_title = "Welcome"
    html = requests.get('')
    speech_output = "Welcome to the Alexa Skills Kit sample. " \
                    "Your AWS-lambda's IP address is " + html.text

We also need to import requests in our lambda_function.py file. To do that, add the following line at the very top of the file:

import requests

Now zip up the folder again and upload it on AWS-Lambda, or edit the code for this file directly online. Now try asking Alexa to run your skill, and Alexa should greet you with AWS-Lambda's public IP address. Now let's plan out the next steps and then we can decide how we want to achieve them.
Here is what I have in mind:

- We will have a server running on our system
- We will ask Alexa to send a certain command from a list of pre-determined commands to our system
- The request will go to AWS-Lambda
- AWS-Lambda will open a specific URL corresponding to a certain command
- Our local server will execute the command based on the URL which AWS-Lambda accessed

So naturally the next step is to get a server up and running. I will be using Python/Flask for this purpose.

2. Creating a boilerplate Flask project

The Flask website provides us with some very basic code which we can use as our starting point:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

Save the above code in an app.py file. Run the following command in the terminal:

$ FLASK_APP=app.py flask run

This tells the flask command-line program where to find our Flask code which it needs to serve. If everything is working fine, you should see the following output:

* Running on http://127.0.0.1:5000/

If things don't work the first time, try searching around on Google and you should be able to find a solution. If nothing works then write a comment and I will try to help you as much as I can.

3. Creating Custom URL endpoints

Firstly, let's make our Flask app accessible over the internet, and after that is done we will create custom URL endpoints. In order to do that we will need to run our Flask app in one terminal tab/instance and ngrok in the other. I am assuming that your Flask app is currently running in one terminal. Open another terminal and type the following:

./ngrok http 5000

Make sure that you run the above command in the folder where you placed the ngrok binary.
If everything is working perfectly you should see the following output:

ngrok by @inconshreveable                              (Ctrl+C to quit)

Session Status    online
Account           Muhammad Yasoob Ullah Khalid (Plan: Free)
Version           2.2.8
Region            United States (us)
Web Interface
Forwarding        -> localhost:5000
Forwarding        -> localhost:5000
Connections       ttl   opn   rt1    rt5    p50    p90
                  0     0     0.00   0.00   0.00   0.00

This means that every request to the public ngrok URL will be routed to your system, and the locally running app.py file will cater to all of the requests. You can test this by opening the forwarding URL in a browser session; you should be greeted with this:

Hello World!

This confirms that everything so far is going according to plan. Now we'll move on and create a custom URL endpoint. Open up your app.py file in your favourite text editor and add in the following piece of code:

@app.route('/command', methods=['GET'])
def handle_command():
    command = request.args.get('command', '')
    return command

Here we are using a different module (request) from the flask package as well, so we need to add an import at the very top: modify from flask import Flask to from flask import Flask, request. Now restart app.py which was running in the terminal.

The above piece of code simply takes the query parameters in the URL and echoes them back to the caller. For instance, if you access the endpoint with ?command=This is amazing, you will get This is amazing as the response. Let's test whether everything is working fine by modifying our AWS-Lambda code and making use of this endpoint.
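The endpoint's logic boils down to reading the command query parameter from the URL. If you want to see that parsing step in isolation, without running Flask or ngrok at all, the standard library can mimic what request.args.get does (the localhost URL below is just a placeholder):

```python
from urllib.parse import urlsplit, parse_qs

def extract_command(url):
    # Mirrors request.args.get('command', ''): first value of the
    # "command" query parameter, or an empty string if it is absent.
    query = parse_qs(urlsplit(url).query)
    return query.get('command', [''])[0]

print(extract_command('http://localhost:5000/command?command=shutdown'))  # -> shutdown
print(repr(extract_command('http://localhost:5000/command')))             # -> ''
```

Flask does this extraction for you behind the scenes; the sketch is only to make the mechanics visible.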
Now save this code to AWS-lambda and ask Alexa to use the Color Picker skill (the skill which we have been working with since the beginning). Alexa should respond with something along the lines of: Welcome to the Alexa skills kit sample. Your ngrok instance is working. Please tell me your favourite color by saying, "My favourite color is red". If you get this response its time to move on and make some changes to the Alexa skill from the Amazon developer dashboard. Open the dashboard and navigate to the skill which you created while following the Amazon 5 min skill development tutorial. Now we will change the name of the skill, its invocation name and the interaction model. - Change the Skill name and invocation name to anything which you find satisfying: - Change the intent schema on the next page to this: { "intents": [ { "slots": [ { "name": "Command", "type": "LIST_OF_COMMANDS" } ], "intent": "MyCommandIsIntent" }, { "intent": "WhatsMyCommandIntent" }, { "intent": "AMAZON.HelpIntent" } ] } - Change the Custom Slot Types to this: Type: LIST_OF_COMMANDS Values: shutdown sleep restart using - Replace the sample utterances by these: MyCommandIsIntent send the {Command} command MyCommandIsIntent send {Command} command Now edit your lambda_function.py as well and replace every instance of Color with Command. The file should look something like this after making the required changes: Most of the changes are self-evident. Please go through the code. The main addition/change which I made are the following lines in the set_command_in_session function: html = requests.get(''+favorite_command) speech_output = "I sent the " + \ html.text() + \ " command to your system." \ "Let me know if you want me to send another command." What this does is that after recognizing that the user has asked it to send a command to the system, it accesses the specific custom endpoint which we created. 
The whole command flow will work something like this:

User: Alexa, open up System Manager
Alexa: Welcome to the Alexa Skills Kit sample. Your ngrok instance is working. Please tell me what command I should send to your system
User: Send the shutdown command
Alexa: I sent the shutdown command to your system. Let me know if you want me to send another command.
User: Thank you
Alexa: Your last command was shutdown. Goodbye

Boom! Your Alexa side of the program is complete. Now we just need to edit the custom URL endpoint to actually carry out these commands. I will not be doing that. Instead, I will be adding voice output for these commands so that we know the commands are working. Edit the app.py file to reflect the following changes:

from flask import Flask, request
import vlc

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

@app.route('/command', methods=['GET'])
def handle_command():
    command = request.args.get('command', '')
    p = vlc.MediaPlayer(command + ".mp3")
    p.play()
    return command

Now before you are able to run this code you need to do two things. The first one is to install the libvlc Python bindings. These are required to play .mp3 files in Python. There are a couple of other ways as well, but I found this to be the easiest. You can install these bindings by running the following pip command:

pip install python-vlc

The other thing you need to do is to create an mp3 file for every different command which you want to give through Alexa. These are two of the files which I made:

Now place these mp3 files in the same directory as app.py. Restart app.py, upload all of your AWS-Lambda-specific code online, and try out your brand new custom Alexa skill!

Issues which you might face: the ngrok public URL changes whenever you restart ngrok, so make sure that the URL in your lambda_function.py file is up to date.

Further Steps

Congrats on successfully completing the Alexa custom skill development project!
You now know the basics of how you can create custom Alexa skills and how you can make a localhost server available on the internet. Try to mix and match your ideas and create some entirely different skills! I know about someone who made story-reading skills for Alexa. You could ask Alexa to read you a specific kind of story and Alexa would do that for you. Someone else made a skill where Alexa would ask you about your mood and then, based on your mood, it would curate a custom Spotify playlist for you.

Let me share some more instructions about how you would go about doing the latter project. You can extract the Intent from the voice input. This is similar to how we extracted the Command from the input in this tutorial. Then you can send that Intent to IBM Watson for sentiment analysis. Watson will tell you the mood of the user. Then you can use that mood and the Spotify API to create a playlist based on that specific mood. Lastly, you can play the custom-generated playlist using the Spotify API.

One thought on "Controlling Your System Using Alexa (Tutorial)"

Thanks Bro for your support I will always be thankful to you. Please keep me posted
https://pythontips.com/2018/02/15/controlling-your-system-using-alexa-tutorial/
The Azure Python SDK now supports Azure Managed Disks! Azure Managed Disks and 1000 VMs in a Scale Set are now generally available. Azure Managed Disks provide simplified disk management, enhanced scalability, and better security. They take away the notion of a storage account for disks, enabling developers to scale without worrying about the limitations associated with storage accounts. This post provides a quick introduction and reference to consuming key service features from Python.

From a developer perspective, the Managed Disks experience in the Azure CLI is idiomatic to the CLI experience in other cross-platform tools. You can use the Azure Python SDK and the azure-mgmt-compute package 0.33.0 to administer Managed Disks. You can create a compute client using this tutorial. The complete API documentation is available on ReadTheDocs.

Standalone Managed Disks

Prior to Managed Disks, developers needed to maintain images for their VMs in multiple storage accounts to avoid the risk of running out of disk space. It is easy to see how this can complicate the architecture, and the dev-ops, for a service that requires a large number of VMs quickly and has to be available across multiple regions. With Managed Disks, you do not need to worry about replicating images into new storage accounts. You can have a single image per region, and the service will make sure they are available for up to 10,000 VMs under a single subscription. You can create new disks from various starting points with a few lines of Python code.
Here are a few specific examples:

- Create an empty Managed Disk
- Create a Managed Disk from Blob Storage
- Create a Managed Disk from our own Image

Here's a quick preview for creating an empty Managed Disk in Python with a few lines of code:

from azure.mgmt.compute.models import DiskCreateOption

async_creation = compute_client.disks.create_or_update(
    'my_resource_group',
    'my_disk_name',
    {
        'location': 'westus',
        'disk_size_gb': 20,
        'creation_data': {
            'create_option': DiskCreateOption.empty
        }
    }
)
disk_resource = async_creation.result()

Virtual Machine with Managed Disks

Now that you know the basics of creating managed disks, how do you configure your service to create VMs from images stored on a Managed Disk? The service affords flexibility to create VMs from various types of Managed Disks. You can create a VM with an implicit Managed Disk for a specific disk image. Creation is simplified with implicit creation of managed disks, without specifying all the disk details, and you do not have to worry about creating and managing storage accounts. A Managed Disk is also implicitly created when provisioning a VM from an OS image on the Azure Marketplace. Here's an example for an Ubuntu VM. Notice how the storage account parameter is optional in the VM definition:

storage_profile = azure.mgmt.compute.models.StorageProfile(
    image_reference=azure.mgmt.compute.models.ImageReference(
        publisher='Canonical',
        offer='UbuntuServer',
        sku='16.04.0-LTS',
        version='latest'
    )
)

You can easily attach a previously provisioned Managed Disk as shown here. See a complete example on how to create a VM in Python (including network), and check the full VM tutorial in Python.

Virtual Machine Scale Sets with Managed Disks

For very large scale services, Azure recommends using Virtual Machine Scale Sets (VMSS). VMSS allows developers to create a pool of VMs with identical configuration.
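As a quick sketch of the attach flow mentioned earlier (adding a previously provisioned Managed Disk to an existing VM): you would fetch the VM, append a data-disk entry referencing the existing disk to vm.storage_profile.data_disks, and send the VM back through compute_client.virtual_machines.create_or_update(). The helper below only builds that entry as a plain dict; the field names follow the request shape of that era's azure-mgmt-compute package, and the disk name, resource ID, and LUN are hypothetical placeholders, not values from the article:

```python
def managed_data_disk(disk_name, disk_id, lun):
    # 'Attach' tells the service to reuse the existing managed disk as-is,
    # rather than creating an empty disk or copying one from an image.
    return {
        'lun': lun,                          # slot the disk appears at in the VM
        'name': disk_name,
        'create_option': 'Attach',
        'managed_disk': {'id': disk_id},     # full ARM resource ID of the disk
    }

# Hypothetical usage:
#   vm.storage_profile.data_disks.append(managed_data_disk(...))
#   compute_client.virtual_machines.create_or_update(group, vm.name, vm)
entry = managed_data_disk(
    'my_disk_name',
    '/subscriptions/<sub-id>/resourceGroups/my_resource_group/'
    'providers/Microsoft.Compute/disks/my_disk_name',
    lun=12,
)
print(entry['create_option'])  # -> Attach
```

Note that no storage account appears anywhere in the entry: the managed disk is referenced purely by its resource ID.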
The service allows "true autoscale" – developers do not need to pre-provision VMs. Prior to Managed Disks, developers needed to consider the design carefully to ensure efficient disk IO, ideally using a single storage account for up to 20 VMs. This limitation required developers to create and manage additional storage accounts to support a larger scale. With Managed Disks, you don't have to manage any storage account at all, which also simplifies programming. If you are used to the VMSS Python SDK, your storage_profile can now be exactly the same as the one used in VM creation. The official guide to transitioning from user-managed storage to Managed Disks is available in this article. Quick samples are also available for preview.

Get productive with the Azure CLI

If the CLI is your management tool of choice, there are several handy commands available for various scenarios. For example, here's how you can create a standalone Managed Disk from the Azure CLI with a single command:

az disk create -n myDisk -g myResourceGroup --size-gb 20

Check out Aaron Roney's blog post to learn more CLI commands for programming Managed Disks.

Other operations

There are numerous other quick management operations you might need to get started with Managed Disks. See sample code for the following operations:
- Resizing a Managed Disk from the Azure CLI
- Updating the Storage Account type of a Managed Disk
- Creating an image from Blob Storage
- Creating a snapshot of a Managed Disk that is currently attached to a Virtual Machine

In summary

Managed Disks can tighten your workflow, simplify your service architecture, and offer you greater peace of mind in running a highly scalable Python cloud service.
It also offers better reliability for Availability Sets by ensuring that the disks of VMs in an Availability Set are sufficiently isolated from each other to avoid single points of failure, and offers better security via granular role-based access to resources. You can use the Azure CLI to create and manage your Managed Disks. Hopefully this blog post serves as a quick reference as you try Managed Disks on your own. For more information about the service, head over to the Azure documentation. For feedback on the Python SDK, please send an email to azurepysdk@microsoft.com.
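As a recap of the attach step mentioned above: the compute client of that era accepted plain dictionaries in place of model classes, so the data-disk parameters can be built and inspected as ordinary Python before any SDK call is made. The field names below follow the Managed Disks examples in this post, but the helper itself is illustrative and not part of the original article:

```python
# Build the parameters for attaching an existing Managed Disk as a data
# disk. The dict mirrors the shape used with compute_client in this post;
# the helper name make_data_disk is invented for illustration.

def make_data_disk(disk_id, lun, disk_size_gb=20):
    """Return a data-disk entry referencing an existing Managed Disk."""
    return {
        'lun': lun,                       # slot number on the VM
        'create_option': 'Attach',        # attach, rather than create empty
        'disk_size_gb': disk_size_gb,
        'managed_disk': {'id': disk_id},  # ARM resource id of the disk
    }

entry = make_data_disk('/subscriptions/.../disks/my_disk_name', lun=12)
print(entry['create_option'])  # Attach
```

A VM's storage_profile would then carry this entry in its data_disks list before calling virtual_machines.create_or_update.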
https://azure.microsoft.com/nl-nl/blog/scale-your-python-service-with-managed-disks/
multiple buttons to control a single application is, perhaps, a bit of overkill, as is calling separate procedures for each action. A third problem is that apachectl prints a message to standard output to indicate how the command has been acted upon. The application could be improved by including a text widget to display the output of apachectl. In the following script, we will redesign the application to use a radiobutton chooser and a single button by modifying the screen procedure, and build a text widget in a new frame. We also remove the start, stop, and restart procedures and create two new procedures. The first, init, will handle the conditionals created by the radiobutton selection; the second, put_text, will launch Apache and print the apachectl output to a text widget.

First, let's have a look at the screen procedure. The radiobutton command works just like HTML radiobuttons. The -variable parameter accepts the name of the variable as an argument. The -value parameter accepts the variable's value as an argument. The button .top.submit uses the -command parameter to call the init procedure defined later in the script. These buttons are then packed into the top frame, and a second frame called bottom is created. The bottom frame is composed of a text widget and a scrollbar. Text widgets are created with the text command, which takes a variety of options. In this case, we have used the -relief option, which specifies the 3D effect for the field (other values for -relief include raised, flat, ridge, solid, and groove); the -bd option, which specifies the borderwidth; and the -yscrollcommand option, which specifies the name of a scrollbar that will be engaged by the text field. Our scrollbar widget takes one option, -command, which specifies how to behave when text scrolls beyond the screen of the text widget that it is interacting with.
The init procedure loads the mode variable into its local namespace using the global command and uses a switch statement to set the value of the global variable action. In this example, the switch command tests whether "$mode" matches the first word on each line in the list, and performs the action specified by the second word of each line. The default value is specified at the bottom of the list and defines the action performed if no match is found. Switch accepts four options: -exact, which requires an exact match; -glob, which uses glob-style pattern matching; -regexp, which uses regular-expression-style matching; and --, which indicates the end of options and is typically used if the pattern being matched has a "-" as a prefix. Note: we could have used an if-elseif-else conditional chain rather than the switch statement.

The final thing that the init procedure does is call the put_text procedure. The put_text procedure reads in the value of action that was set in the init procedure, executes apachectl with the appropriate argument as specified by action, and prints Apache's output to the .bottom.main text widget. The put_text procedure introduces three new commands. First, it sets the value of a variable, f, to the return value of the open command. Open can be used to open a file, pipe stream, or serial port, and returns an identifier which can be used for reading, writing, or closing the stream. Since the first character following the open is a pipe "|", "$apachectl $action" is treated as a command and is executed as though exec had been given. The r specifies that the stream is read-only; other access parameters are also available. The second new command is while. While is a typical while loop, which executes a body of arguments so long as the specified condition is met. In this case, while will read a line of input and save it to the variable x until there is nothing left to read.
The insert command inserts each line of input at the zeroth character of line 1 (1.0) of the .bottom.main text widget.

5. Conclusions

6. Further Reading
http://www.linux.com/learn/docs/ldp/Scripting-GUI-TclTk
This is the eleventh lesson in a series introducing 10-year-olds to programming through Minecraft. Learn more here. The mod is a suggestion from my class :)

Goal

Fill the crafting table with ...
- dirt and get 64 diamonds
- sand and get 64 emeralds
- clay (not blocks) and get 64 obsidian

Relevant Classes

The class corresponding to items is called ItemStack and it's located in the net.minecraft.item package. We also need to use the GameRegistry class to add our new recipe to the game. Specifically, there is a static method called addShapelessRecipe:

public static void addShapelessRecipe(ItemStack output, Object... params)

What does the ... mean? It signifies that we can pass in a list of objects (items) that the user needs to put on the crafting table in order to receive the output. Because we're filling the entire table, the recipe is called shapeless (it doesn't matter which item goes in which square).

How do we do it?

We need to add the new recipe to the load method.

ItemStack diamonds = new ItemStack(Item.diamond, 64);
ItemStack dirt = new ItemStack(Block.dirt);
GameRegistry.addShapelessRecipe(
    diamonds,
    dirt, dirt, dirt,
    dirt, dirt, dirt,
    dirt, dirt, dirt);

We'll also need to import net.minecraft.block.Block, net.minecraft.item.Item, and net.minecraft.item.ItemStack. Eclipse makes this easy for us: if you hover over a word with a red squiggly line, you will get a popup menu with "quick fixes". Most often, the required import will be the top one and you can select it. Now if we run the game (green play button or Ctrl+F11) and play for a little bit, we should get this:

The other two recipes are left as an exercise to the reader :)

Extra Credit

In case you want to test out your recipes without having to actually collect all the required items, you can give your player inventory items for free :)
- Add a new class (e.g.
ConnectionHandler) that implements IConnectionHandler. In the override for playerLoggedIn, add the following code:

EntityPlayerMP mp = (EntityPlayerMP) player;
mp.inventory.addItemStackToInventory(
    new ItemStack(...));

Finally, add the following line to your mod's load event:

NetworkRegistry.instance().registerConnectionHandler(
    new ConnectionHandler());
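The `Object...` varargs syntax used by addShapelessRecipe is plain Java and can be tried outside of Minecraft. Here is a minimal, Minecraft-free sketch; the class and method names are made up for illustration and are not part of the Forge API:

```java
// Standalone demo of Java varargs (the `Object...` parameter syntax).
public class VarargsDemo {
    // Like addShapelessRecipe, this accepts any number of trailing arguments.
    public static int countIngredients(String output, Object... params) {
        // Inside the method, params behaves like an ordinary Object[] array.
        return params.length;
    }

    public static void main(String[] args) {
        // Nine dirt items, just as in the 64-diamonds recipe above.
        String dirt = "dirt";
        int n = countIngredients("diamonds",
                dirt, dirt, dirt,
                dirt, dirt, dirt,
                dirt, dirt, dirt);
        System.out.println(n); // prints 9
    }
}
```

The caller can pass zero or more trailing arguments; the compiler packs them into an array for you.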
http://www.jedidja.ca/mod-something-for-nothing/
C++ includes an extensive standard library that provides IO, input and output (and many other facilities). A stream is a sequence of characters read from or written to an IO device, for example, input from the keyboard or output to the console. The term stream is intended to suggest that the characters are generated, or consumed, sequentially over time. Using the IO library, we can prompt the user to give us two numbers and then print their sum.

#include <iostream>
using namespace std;

int main()
{
    int v1;
    int v2;
    cout << "Enter two numbers: ";
    cin >> v1 >> v2;
    cout << "The sum of " << v1 << " and " << v2
         << " is " << v1 + v2;
    cout << endl;
    return 0;
}

You can use CTRL-F5 to run the above program.
https://codecrawl.com/2014/12/27/cplusplus-iostream/
Create a virtual environment using python3 with the command below.

virtualenv -p /usr/bin/python3 crypto

Now activate the virtual environment.

source crypto/bin/activate

Install the latest Django version and other required libraries. For now only the requests package is required. We will add other packages later if required.

pip install django requests

This will install Django 2.0 and the requests package along with some other packages. You can verify the same by running pip freeze.

(crypto) rana@Brahma: crypto$ pip freeze
certifi==2017.11.5
chardet==3.0.4
Django==2.0
idna==2.6
pytz==2017.3
requests==2.18.4
urllib3==1.22

Now create the Django project.

django-admin startproject crypto

Go to the crypto project directory and list the files. Since we are working on Django 2.0, we need to take care of a few things, which we will highlight as we progress. Now create a new app, 'bitcoin'.

python manage.py startapp bitcoin

Add this app to the list of installed apps in the settings.py file. Create a urls.py file in the new app bitcoin.

from django.urls import path
from . import views

app_name = 'bitcoin'

urlpatterns = [
    path('', views.index, name="index"),
]

Django 2.0 note: adding app_name in urls.py is required now; otherwise you will get an error.

Now define the index view in views.py.

from django.shortcuts import render

def index(request):
    data = {}
    return render(request, "bitcoin/index.html", data)

December 28, 2017 - 09:00:43
Django version 2.0, using settings 'crypto.settings'
Starting development server at
Quit the server with CONTROL-C.

Go to localhost:8000 and you can see the text which you put in the index.html file. In views.py, add a new function to get bitcoin data. This function will call the API URL and get the currency data. The data returned is a JSON string; convert it to a JSON object.
# return the data received from the API as a json object
def get_crypto_data():
    api_url = ""
    try:
        data = requests.get(api_url).json()
    except Exception as e:
        print(e)
        data = dict()
    return data

Make a call to get_crypto_data in the index function and return the rendered response. Complete views.py file:

from django.shortcuts import render
import requests

def index(request):
    data = {}
    data["crypto_data"] = get_crypto_data()
    return render(request, "bitcoin/index.html", data)

# return the data received from the API as a json object
def get_crypto_data():
    api_url = ""
    try:
        data = requests.get(api_url).json()
    except Exception as e:
        print(e)
        data = dict()
    return data

Now create the template file bitcoin/index.html.

<html>
<head>
<title>Latest Bitcoin Price - ThePythonDjango.Com</title>
</head>
<body style="margin:20px;">
<div class="alert alert-dark" role="alert">
  <span style="font-size:30px;">Bitcoin Latest Price</span>
  <span style="font-size:15px;">by <a href="" target="_blank">ThePythonDjango.Com</a></span>
</div>
<div class="list-group">
  <div class="list-group-item list-group-item-primary">
    <div class="row">
      <div class="col-md-3"><label>Name</label></div>
      <div class="col-md-3"><label>USD Price</label></div>
      <div class="col-md-3"><label>BTC Price</label></div>
      <div class="col-md-3"><label>Change in Last Hour</label></div>
    </div>
  </div>
  {% for coin in crypto_data %}
  <div class="list-group-item list-group-item-{% if coin.percent_change_1h.0 == '-'%}danger{% else %}success{% endif %}">
    <div class="row">
      <div class="col-md-3">{{coin.name}}</div>
      <div class="col-md-3">{{coin.price_usd}}</div>
      <div class="col-md-3">{{coin.price_btc}}</div>
      <div class="col-md-3">{{coin.percent_change_1h}}</div>
    </div>
  </div>
  {% endfor %}
</div>
</body>
</html>
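The row coloring in the template hinges on percent_change_1h being a signed string in the API response: it is "danger" (red) when the string starts with '-', otherwise "success" (green). The same classification can be done in plain Python, which is easy to unit-test without hitting the network. The helper and the sample records below are illustrative, not part of the original post:

```python
# Classify each coin's last-hour trend from API-style records.
# The field names mirror those used in the template (name, price_usd,
# percent_change_1h); the sample data is made up for illustration.

def trend(coin):
    """Return 'danger' for a negative 1-hour change, else 'success'."""
    return "danger" if coin["percent_change_1h"].startswith("-") else "success"

sample_data = [
    {"name": "Bitcoin", "price_usd": "14500.0", "percent_change_1h": "-0.42"},
    {"name": "Ethereum", "price_usd": "750.0", "percent_change_1h": "1.05"},
]

for coin in sample_data:
    print(coin["name"], trend(coin))
# Bitcoin danger
# Ethereum success
```

Doing the check in Python rather than in the template keeps the template simpler if you later need more nuanced rules.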
https://pythoncircle.com/post/394/get-latest-bitcoin-and-other-crypto-currencies-rates-using-python-django/
Spec URL: SRPM URL: Description: Woodstox is a high-performance validating namespace-aware StAX-compliant (JSR-173) Open Source XML-processor written in Java. XML processor means that it handles both input (== parsing) and output (== writing, serialization), as well as supporting tasks such as validation.

I'll review this one as well

Package Review
==============
Key:
- = N/A
x = Check
! = Problem
? = Not evaluated

=== REQUIRED ITEMS ===
[x] Rpmlint output:
woodstox-core-asl.src: W: spelling-error %description -l en_US namespace -> name space, name-space, names pace
woodstox-core-asl.noarch: E: explicit-lib-dependency msv-xsdlib
woodstox-core-asl.noarch: W: spelling-error %description -l en_US namespace -> name space, name-space, names pace
3 packages and 0 specfiles checked; 1 errors, 2 warnings.
[!] License: according to release-notes/FAAQ it's either LGPL or ASL 2.0. But they did this in a weird way. Instead of simply saying "we are dual-licensing this", they say there are two different versions where the only difference is the license. Blocking FE-LEGAL, because I am not sure if we can put "LGPLv2 or ASL 2.0" here or we have to pick one of them.
[x] If (and only if) the source package includes the text of the license(s) in its own file, then that file, containing the text of the license(s) for the package, is included in %doc. But in case of dual-licensing, don't forget to include LGPL later.
[x] All independent sub-packages have a license of their own.
[x] Spec file is legible and written in American English.
[x] Sources used to build the package match the upstream source, as provided in the spec URL. MD5SUM this package: 5ceabf6c0f6daa7742cad71ae0a7db78
[x] You are using ant instead of maven, but this is more of a suggestion and personally I prefer this build.
[x] Avoid having BuildRequires on exact NVR unless necessary.
[x] Package has BuildArch: noarch (if possible).
[x] Latest version is packaged.
[x] Reviewer should test that the package builds in mock.
Tested on: fedora-rawhide-x86_64

=== Issues ===
1. Licensing. See FAAQ in resources subdir of tarball, point 3.1. Note that I believe "LGPLv2 or ASL 2.0" was the intention of upstream, supported by but just to be sure...

=== Final Notes ===
1. Package contains a src/maven directory with a pom file that can be made into a usable pom with a simple sed. No need to have Source1.

And one more thing: it would be nice to file a bug against bea-stax to include a pom and depmap so you don't have to use a custom one everywhere.

Unfortunately the intention of upstream is to have two separate JAR packages with separate licensing ... Let me check with fedora-legal whether it's possible to create just one package supplying both poms and artifacts.

This is really dumb, however, here's what you should do: have this package generate two subpackages, woodstox-core-asl and woodstox-core-lgpl. Tag each one with the appropriate license and include the appropriately named jar file.

Lifting FE-Legal. After talking this through with jcapik, a forced subpackage arrangement, while still acceptable, is simply unnecessary since maven can resolve the naming issue. Just build one jar, add the maven magic to provide mappings to the license names, and tag the package as:
License: ASL 2.0 or LGPLv2+

Altered ...

Spec URL: SRPM URL:

Looks OK now to me as well. APPROVED

New Package SCM Request
=======================
Package Name: woodstox-core
Short Description: High-performance XML processor
Owners: jcapik
Branches: f15 f16
InitialCC: java-sig

Git done (by process-git-requests).

I just received a message from Tatu Saloranta (woodstox developer) where he states it's perfectly ok to create just one dual-licensed JAR file. If he could, he would do it once again the same way as we did. Thank you guys, I'm gonna build it.

Successfully built - closing.
https://bugzilla.redhat.com/show_bug.cgi?id=738034
Benko just posted a Visual Studio Tips & Tricks listing. It's missing one of my favorites, which a coworker on the PowerShell team, Ibrahim Abdul Rahim, showed me: In a couple of circumstances, you'll see a red squiggle (like a syntax error) at the end of a line. This squiggle occurs when you're renaming something, or when you're referencing a class that exists in a namespace that hasn't been imported with a using directive. If you press ALT + SHIFT + F10, you'll get a handy menu that allows you to rename all of the references or add a "using" for that namespace. Since this is the first option, and both are really handy, you can fly in Visual Studio with the keyboard combo ALT + SHIFT + F10 + ENTER (to select the first item and add the using, or rename the other mentions of a property).

Hope this helps,
James Brundage [MSFT]
http://blogs.msdn.com/b/mediaandmicrocode/archive/2008/12/17/microcode-cool-posts-visual-studio-tips-tricks.aspx
A feature I lifted from ksh93; I thought I would share it with LtU: active variables in Common Lisp — variables with callbacks on reading and writing. Possibly a useful feature that I rarely see in modern programming languages. github/cl-active-variables

Properties in C# are referenced using the syntax of class or instance variables, but are implemented by a get and an optional set method which can take arbitrary action.

Why make activation optional? In all examples that come to my mind, the callbacks are used to enforce invariants on the value being stored and accessed; making it easy to bypass those callbacks, especially when the short default, the path of least effort, is to bypass, kind of defeats the purpose.

What's the difference between this and replacing a variable with get/set accessor functions (available in most modern OO languages) and adding a callback to these functions?

The original poster is referring to variables, not slots (members). CLOS indeed gives the equivalent of accessors; it is even called an accessor. It is also allowed for a slot to have more than one accessor, reader and/or writer.

This sounds reminiscent of the procedural representation of environments used in reflective Lisps.

Sounds like variable traces in Tcl. Used extensively in Tk.

Dart, like C#, Ruby, et al., has getters and setters. It also has top-level members: stuff defined outside of any class. You can combine these and define getters and setters at the top level.
For instance, here's a getter:

get d6() {
  return (Math.random() * 6.0).floor();
}

main() {
  print('You rolled ' + d6);
}

and here's a setter:

set thunk(value) {
  print('You set ' + value);
}

main() {
  thunk = 3;
}

It ends up being handy for "singleton-like" variables that look just like top-level objects but can do some logic when evaluated, like lazy initialization or caching.

OO languages do it for object slots but not references. I found this mixes particularly well with FRP ("cells"). The read aspect is the linguistic basis of MzTake, a scriptable debugger :)

In Bling, you can create a dependency property and then "Bind" to it. So something like:

var dp = IntBl.New();
dp.Bind = Time.Sin;

or even assign its value discretely:

dp.Value = 42;

I've found C#'s accessor syntax to be pretty powerful, although sometimes I really want to overload assignment (=), or perhaps C# could have some overloadable assigners like := or ::=.

Access-oriented programming was mentioned a while back, which describes systematic use of active variables. Here is a paper describing access-oriented programming and the language Loops.

I think such systems won't be easy to reason about, at least not with an imperative paradigm. Too much depends on order for activations, which isn't explicit in the program. Activations aren't idempotent, and even `get` accessors might be effectful, so it becomes more difficult to reason about the program. Stability and reentrant concurrency can also become problems.

A powerful technique is to treat all variables as lenses, a construct for bi-directional programming (cf. the Boomerang project) via mutable views. There are languages that ease support for this, i.e. treating a public attribute X as a pair of methods (GetX, SetX) which can be overloaded.
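For comparison (this example is mine, not from the thread), the get/set-callback behavior discussed here can be approximated in Python with a property, though, as noted above, only for object slots rather than plain variables:

```python
# A minimal "active variable" sketch using Python properties: callbacks
# fire on every read and write of the slot. Python only offers this for
# object attributes, not for bare local/global variables.

class Active:
    def __init__(self, value, on_read=None, on_write=None):
        self._value = value
        self._on_read = on_read or (lambda v: None)
        self._on_write = on_write or (lambda old, new: new)

    @property
    def value(self):
        self._on_read(self._value)      # read callback observes the value
        return self._value

    @value.setter
    def value(self, new):
        # the write callback may enforce an invariant by transforming the value
        self._value = self._on_write(self._value, new)

reads = []
clamped = Active(0,
                 on_read=reads.append,
                 on_write=lambda old, new: max(0, new))  # invariant: never negative

clamped.value = -5
print(clamped.value)  # invariant enforced: prints 0
print(reads)          # read callback saw the value: [0]
```

Because the callbacks are baked into the property, they cannot be bypassed through normal attribute access, which addresses the "optional activation" objection raised earlier in the thread.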
http://lambda-the-ultimate.org/node/4445
=head1 DESCRIPTION

The functions in this section can serve as terms in an expression. They fall into two major categories: list operators and named unary operators. These differ in their precedence relationship with a following comma. (See the precedence table in L<perlop>.) List operators take more than one argument, while unary operators can never take more than one argument. Thus, a comma terminates the argument of a unary operator, but merely separates the arguments of a list operator. A unary operator generally provides scalar context to its argument, while a list operator may provide either scalar or list contexts for its arguments. If it does both, scalar arguments come first and list arguments follow, and there can only ever be one such list argument. For instance, L<C<splice>|/splice ARRAY,OFFSET,LENGTH,LIST> has three scalar arguments followed by a list, whereas L<C<gethostbyname>|/gethostbyname NAME> has four scalar arguments.

If any list operator or unary operator is followed by a left parenthesis as the next token, it I<looks> like a function, therefore it I<is> a function, and precedence doesn't matter. If you run Perl with the L<C<use warnings>|warnings> pragma, it can warn you about this. The same applies to functions that take no arguments, such as L<C<time>|/time> and L<C<endpwent>|/endpwent>. For example, C<time+86_400> always means C<time() + 86_400>.
X<context>

A named array in scalar context is quite different from what would at first glance appear to be a list in scalar context. You can't get a list like C<(1,2,3)> into being in scalar context, because the compiler knows the context at compile time.

In general, functions in Perl that serve as wrappers for system calls ("syscalls") of the same name (like L<chown(2)>, L<fork(2)>, L<closedir(2)>, etc.) return true when they succeed and L<C<undef>|/undef EXPR> otherwise, as is usually mentioned in the descriptions below. This is different from the C interfaces, which return C<-1> on failure. Exceptions to this rule include L<C<wait>|/wait>, L<C<waitpid>|/waitpid PID,FLAGS>, and L<C<syscall>|/syscall NUMBER, LIST>. System calls also set the special L<C<$!>|perlvar/$!> variable on failure.

Extension modules can hook into the Perl parser to define new kinds of keyword-headed expression; see L<perlapi/PL_keyword_plugin> for the mechanism. If you are using such a module, see the module's documentation for details of the syntax that it defines.

=head2 Perl Functions by Category
X<function>

Here are Perl's functions (including things that look like functions, like some keywords and named operators) arranged by category. Some functions appear in more than one place. Any warnings, including those produced by keywords, are described in L<perldiag> and L<warnings>.
=over 4

=item Functions for SCALARs or strings
X<scalar> X<string> X<character>

=for Pod::Functions =String

L<C<chomp>|/chomp VARIABLE>, L<C<chop>|/chop VARIABLE>, L<C<chr>|/chr NUMBER>, L<C<crypt>|/crypt PLAINTEXT,SALT>, L<C<fc>|/fc EXPR>, L<C<hex>|/hex EXPR>, L<C<index>|/index STR,SUBSTR,POSITION>, L<C<lc>|/lc EXPR>, L<C<lcfirst>|/lcfirst EXPR>, L<C<length>|/length EXPR>, L<C<oct>|/oct EXPR>, L<C<ord>|/ord EXPR>, L<C<pack>|/pack TEMPLATE,LIST>, L<C<qE<sol>E<sol>>|/qE<sol>STRINGE<sol>>, L<C<qqE<sol>E<sol>>|/qqE<sol>STRINGE<sol>>, L<C<reverse>|/reverse LIST>, L<C<rindex>|/rindex STR,SUBSTR,POSITION>, L<C<sprintf>|/sprintf FORMAT, LIST>, L<C<substr>|/substr EXPR,OFFSET,LENGTH,REPLACEMENT>, L<C<trE<sol>E<sol>E<sol>>|/trE<sol>E<sol>E<sol>>, L<C<uc>|/uc EXPR>, L<C<ucfirst>|/ucfirst EXPR>, L<C<yE<sol>E<sol>E<sol>>|/yE<sol>E<sol>E<sol>>

=item Regular expressions and pattern matching
X<regular expression> X<regex> X<regexp>

=for Pod::Functions =Regexp

L<C<mE<sol>E<sol>>|/mE<sol>E<sol>>, L<C<pos>|/pos SCALAR>, L<C<qrE<sol>E<sol>>|/qrE<sol>STRINGE<sol>>, L<C<quotemeta>|/quotemeta EXPR>, L<C<sE<sol>E<sol>E<sol>>|/sE<sol>E<sol>E<sol>>, L<C<split>|/split E<sol>PATTERNE<sol>,EXPR,LIMIT>, L<C<study>|/study SCALAR>

=item Numeric functions
X<numeric> X<number> X<trigonometric> X<trigonometry>

=for Pod::Functions =Math

L<C<abs>|/abs VALUE>, L<C<atan2>|/atan2 Y,X>, L<C<cos>|/cos EXPR>, L<C<exp>|/exp EXPR>, L<C<hex>|/hex EXPR>, L<C<int>|/int EXPR>, L<C<log>|/log EXPR>, L<C<oct>|/oct EXPR>, L<C<rand>|/rand EXPR>, L<C<sin>|/sin EXPR>, L<C<sqrt>|/sqrt EXPR>, L<C<srand>|/srand EXPR>

=item Functions for real @ARRAYs
X<array>

=for Pod::Functions =ARRAY

L<C<each>|/each HASH>, L<C<keys>|/keys HASH>, L<C<pop>|/pop ARRAY>, L<C<push>|/push ARRAY,LIST>, L<C<shift>|/shift ARRAY>, L<C<splice>|/splice ARRAY,OFFSET,LENGTH,LIST>, L<C<unshift>|/unshift ARRAY,LIST>, L<C<values>|/values HASH>

=item Functions for list data
X<list>

=for Pod::Functions =LIST

L<C<grep>|/grep BLOCK LIST>, L<C<join>|/join EXPR,LIST>, L<C<map>|/map BLOCK LIST>, L<C<qwE<sol>E<sol>>|/qwE<sol>STRINGE<sol>>, L<C<reverse>|/reverse LIST>, L<C<sort>|/sort SUBNAME LIST>, L<C<unpack>|/unpack TEMPLATE,EXPR>

=item Functions for real %HASHes
X<hash>

=for Pod::Functions =HASH

L<C<delete>|/delete EXPR>, L<C<each>|/each HASH>, L<C<exists>|/exists EXPR>, L<C<keys>|/keys HASH>, L<C<values>|/values HASH>

=item Input and output functions
X<I/O> X<input> X<output> X<dbm>

=for Pod::Functions =I/O

L<C<binmode>|/binmode FILEHANDLE, LAYER>, L<C<close>|/close FILEHANDLE>, L<C<closedir>|/closedir DIRHANDLE>, L<C<dbmclose>|/dbmclose HASH>, L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK>, L<C<die>|/die LIST>, L<C<eof>|/eof FILEHANDLE>, L<C<fileno>|/fileno FILEHANDLE>, L<C<flock>|/flock FILEHANDLE,OPERATION>, L<C<format>|/format>, L<C<getc>|/getc FILEHANDLE>, L<C<print>|/print FILEHANDLE LIST>, L<C<printf>|/printf FILEHANDLE FORMAT, LIST>, L<C<read>|/read FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<readdir>|/readdir DIRHANDLE>, L<C<readline>|/readline EXPR>, L<C<rewinddir>|/rewinddir DIRHANDLE>, L<C<say>|/say FILEHANDLE LIST>, L<C<seek>|/seek FILEHANDLE,POSITION,WHENCE>, L<C<seekdir>|/seekdir DIRHANDLE,POS>, L<C<select>|/select RBITS,WBITS,EBITS,TIMEOUT>, L<C<syscall>|/syscall NUMBER, LIST>, L<C<sysread>|/sysread FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<sysseek>|/sysseek FILEHANDLE,POSITION,WHENCE>, L<C<syswrite>|/syswrite FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<tell>|/tell FILEHANDLE>, L<C<telldir>|/telldir DIRHANDLE>, L<C<truncate>|/truncate FILEHANDLE,LENGTH>, L<C<warn>|/warn LIST>, L<C<write>|/write FILEHANDLE>

=item Functions for fixed-length data or records

=for Pod::Functions =Binary

L<C<pack>|/pack TEMPLATE,LIST>, L<C<read>|/read FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<syscall>|/syscall NUMBER, LIST>, L<C<sysread>|/sysread FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<sysseek>|/sysseek FILEHANDLE,POSITION,WHENCE>, L<C<syswrite>|/syswrite FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<unpack>|/unpack TEMPLATE,EXPR>, L<C<vec>|/vec EXPR,OFFSET,BITS>

=item Functions for filehandles, files, or directories
X<file> X<filehandle> X<directory> X<pipe> X<link> X<symlink>

=for Pod::Functions =File

L<C<-I<X>>|/-X FILEHANDLE>, L<C<chdir>|/chdir EXPR>, L<C<chmod>|/chmod LIST>, L<C<chown>|/chown LIST>, L<C<chroot>|/chroot FILENAME>, L<C<fcntl>|/fcntl FILEHANDLE,FUNCTION,SCALAR>, L<C<glob>|/glob EXPR>, L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR>, L<C<link>|/link OLDFILE,NEWFILE>, L<C<lstat>|/lstat FILEHANDLE>, L<C<mkdir>|/mkdir FILENAME,MODE>, L<C<open>|/open FILEHANDLE,MODE,EXPR>, L<C<opendir>|/opendir DIRHANDLE,EXPR>, L<C<readlink>|/readlink EXPR>, L<C<rename>|/rename OLDNAME,NEWNAME>, L<C<rmdir>|/rmdir FILENAME>, L<C<select>|/select FILEHANDLE>, L<C<stat>|/stat FILEHANDLE>, L<C<symlink>|/symlink OLDFILE,NEWFILE>, L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE>, L<C<umask>|/umask EXPR>, L<C<unlink>|/unlink LIST>, L<C<utime>|/utime LIST>

=item Keywords related to the control flow of your Perl program
X<control flow>

=for Pod::Functions =Flow

L<C<break>|/break>, L<C<caller>|/caller EXPR>, L<C<continue>|/continue BLOCK>, L<C<die>|/die LIST>, L<C<do>|/do BLOCK>, L<C<dump>|/dump LABEL>, L<C<eval>|/eval EXPR>, L<C<evalbytes>|/evalbytes EXPR>, L<C<exit>|/exit EXPR>, L<C<__FILE__>|/__FILE__>, L<C<goto>|/goto LABEL>, L<C<last>|/last LABEL>, L<C<__LINE__>|/__LINE__>, L<C<next>|/next LABEL>, L<C<__PACKAGE__>|/__PACKAGE__>, L<C<redo>|/redo LABEL>, L<C<return>|/return EXPR>, L<C<sub>|/sub NAME BLOCK>, L<C<__SUB__>|/__SUB__>, L<C<wantarray>|/wantarray>

L<C<break>|/break> is available only if you enable the experimental L<C<"switch"> feature|feature/The 'switch' feature> or use the C<CORE::> prefix. The L<C<"switch"> feature|feature/The 'switch' feature> also enables the C<default>, C<given> and C<when> statements, which are documented in L<perlsyn/"Switch Statements">. The L<C<"switch"> feature|feature/The 'switch' feature> is enabled automatically with a C<use v5.10> (or higher) declaration in the current scope. In Perl v5.14 and earlier, L<C<continue>|/continue BLOCK> required the L<C<"switch"> feature|feature/The 'switch' feature>, like the other keywords.

L<C<evalbytes>|/evalbytes EXPR> is only available with the L<C<"evalbytes"> feature|feature/The 'unicode_eval' and 'evalbytes' features> (see L<feature>) or if prefixed with C<CORE::>. L<C<__SUB__>|/__SUB__> is only available with the L<C<"current_sub"> feature|feature/The 'current_sub' feature> or if prefixed with C<CORE::>. Both the L<C<"evalbytes">|feature/The 'unicode_eval' and 'evalbytes' features> and L<C<"current_sub">|feature/The 'current_sub' feature> features are enabled automatically with a C<use v5.16> (or higher) declaration in the current scope.

=item Keywords related to scoping

=for Pod::Functions =Namespace

L<C<caller>|/caller EXPR>, L<C<import>|/import LIST>, L<C<local>|/local EXPR>, L<C<my>|/my VARLIST>, L<C<our>|/our VARLIST>, L<C<package>|/package NAMESPACE>, L<C<state>|/state VARLIST>, L<C<use>|/use Module VERSION LIST>

=item Miscellaneous functions

=for Pod::Functions =Misc

L<C<defined>|/defined EXPR>, L<C<formline>|/formline PICTURE,LIST>, L<C<lock>|/lock THING>, L<C<prototype>|/prototype FUNCTION>, L<C<reset>|/reset EXPR>, L<C<scalar>|/scalar EXPR>, L<C<undef>|/undef EXPR>

=item Functions for processes and process groups
X<process> X<pid> X<process id>

=for Pod::Functions =Process

L<C<alarm>|/alarm SECONDS>, L<C<exec>|/exec LIST>, L<C<fork>|/fork>, L<C<getpgrp>|/getpgrp PID>, L<C<getppid>|/getppid>, L<C<getpriority>|/getpriority WHICH,WHO>, L<C<kill>|/kill SIGNAL, LIST>, L<C<pipe>|/pipe READHANDLE,WRITEHANDLE>, L<C<qxE<sol>E<sol>>|/qxE<sol>STRINGE<sol>>, L<C<readpipe>|/readpipe EXPR>, L<C<setpgrp>|/setpgrp PID,PGRP>, L<C<setpriority>|/setpriority WHICH,WHO,PRIORITY>, L<C<sleep>|/sleep EXPR>, L<C<system>|/system LIST>, L<C<times>|/times>, L<C<wait>|/wait>, L<C<waitpid>|/waitpid PID,FLAGS>

=item Keywords related to Perl modules
X<module>

=for Pod::Functions =Modules

L<C<do>|/do EXPR>, L<C<import>|/import LIST>, L<C<no>|/no MODULE VERSION LIST>, L<C<package>|/package NAMESPACE>, L<C<require>|/require VERSION>, L<C<use>|/use Module VERSION LIST>

=item Keywords related to classes and object-orientation
X<object> X<class> X<package>

=for Pod::Functions =Objects

L<C<bless>|/bless REF,CLASSNAME>, L<C<dbmclose>|/dbmclose HASH>, L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK>, L<C<package>|/package NAMESPACE>, L<C<ref>|/ref EXPR>, L<C<tie>|/tie VARIABLE,CLASSNAME,LIST>, L<C<tied>|/tied VARIABLE>, L<C<untie>|/untie VARIABLE>, L<C<use>|/use Module VERSION LIST>

=item Low-level socket functions
X<socket> X<sock>

=for Pod::Functions =Socket

L<C<accept>|/accept NEWSOCKET,GENERICSOCKET>, L<C<bind>|/bind SOCKET,NAME>, L<C<connect>|/connect SOCKET,NAME>, L<C<getpeername>|/getpeername SOCKET>, L<C<getsockname>|/getsockname SOCKET>, L<C<getsockopt>|/getsockopt SOCKET,LEVEL,OPTNAME>, L<C<listen>|/listen SOCKET,QUEUESIZE>, L<C<recv>|/recv SOCKET,SCALAR,LENGTH,FLAGS>, L<C<send>|/send SOCKET,MSG,FLAGS,TO>, L<C<setsockopt>|/setsockopt SOCKET,LEVEL,OPTNAME,OPTVAL>, L<C<shutdown>|/shutdown SOCKET,HOW>, L<C<socket>|/socket SOCKET,DOMAIN,TYPE,PROTOCOL>, L<C<socketpair>|/socketpair SOCKET1,SOCKET2,DOMAIN,TYPE,PROTOCOL>

=item System V interprocess communication functions
X<IPC> X<System V> X<semaphore> X<shared memory> X<memory> X<message>

=for Pod::Functions =SysV

L<C<msgctl>|/msgctl ID,CMD,ARG>, L<C<msgget>|/msgget KEY,FLAGS>, L<C<msgrcv>|/msgrcv ID,VAR,SIZE,TYPE,FLAGS>, L<C<msgsnd>|/msgsnd ID,MSG,FLAGS>, L<C<semctl>|/semctl ID,SEMNUM,CMD,ARG>, L<C<semget>|/semget KEY,NSEMS,FLAGS>, L<C<semop>|/semop KEY,OPSTRING>, L<C<shmctl>|/shmctl ID,CMD,ARG>, L<C<shmget>|/shmget KEY,SIZE,FLAGS>, L<C<shmread>|/shmread ID,VAR,POS,SIZE>, L<C<shmwrite>|/shmwrite ID,STRING,POS,SIZE>

=item Fetching user and group info
X<user> X<group> X<password> X<uid> X<gid> X<passwd> X</etc/passwd>

=for Pod::Functions =User

L<C<endgrent>|/endgrent>, L<C<endhostent>|/endhostent>, L<C<endnetent>|/endnetent>,
L<C<endpwent>|/endpwent>, L<C<getgrent>|/getgrent>, L<C<getgrgid>|/getgrgid GID>, L<C<getgrnam>|/getgrnam NAME>, L<C<getlogin>|/getlogin>, L<C<getpwent>|/getpwent>, L<C<getpwnam>|/getpwnam NAME>, L<C<getpwuid>|/getpwuid UID>, L<C<setgrent>|/setgrent>, L<C<setpwent>|/setpwent>

=item Fetching network info
X<network> X<protocol> X<host> X<hostname> X<IP> X<address> X<service>

=for Pod::Functions =Network

L<C<endprotoent>|/endprotoent>, L<C<endservent>|/endservent>, L<C<gethostbyaddr>|/gethostbyaddr ADDR,ADDRTYPE>, L<C<gethostbyname>|/gethostbyname NAME>, L<C<gethostent>|/gethostent>, L<C<getnetbyaddr>|/getnetbyaddr ADDR,ADDRTYPE>, L<C<getnetbyname>|/getnetbyname NAME>, L<C<getnetent>|/getnetent>, L<C<getprotobyname>|/getprotobyname NAME>, L<C<getprotobynumber>|/getprotobynumber NUMBER>, L<C<getprotoent>|/getprotoent>, L<C<getservbyname>|/getservbyname NAME,PROTO>, L<C<getservbyport>|/getservbyport PORT,PROTO>, L<C<getservent>|/getservent>, L<C<sethostent>|/sethostent STAYOPEN>, L<C<setnetent>|/setnetent STAYOPEN>, L<C<setprotoent>|/setprotoent STAYOPEN>, L<C<setservent>|/setservent STAYOPEN>

=item Time-related functions
X<time> X<date>

=for Pod::Functions =Time

L<C<gmtime>|/gmtime EXPR>, L<C<localtime>|/localtime EXPR>, L<C<time>|/time>, L<C<times>|/times>

=item Non-function keywords

=for Pod::Functions =!Non-functions

C<and>, C<AUTOLOAD>, C<BEGIN>, C<CHECK>, C<cmp>, C<CORE>, C<__DATA__>, C<default>, C<DESTROY>, C<else>, C<elseif>, C<elsif>, C<END>, C<__END__>, C<eq>, C<for>, C<foreach>, C<ge>, C<given>, C<gt>, C<if>, C<INIT>, C<le>, C<lt>, C<ne>, C<not>, C<or>, C<UNITCHECK>, C<unless>, C<until>, C<when>, C<while>, C<x>, C<xor>

=back

=head2 Portability
X<portability> X<Unix> X<portable>

Perl was born in Unix and can therefore access all common Unix system calls.  In non-Unix environments, the functionality of some Unix system calls may not be available or details of the available functionality may differ slightly.  The Perl functions affected by this are:

L<C<-I<X>>|/-X FILEHANDLE>, L<C<binmode>|/binmode FILEHANDLE, LAYER>, L<C<chmod>|/chmod LIST>, L<C<chown>|/chown LIST>, L<C<chroot>|/chroot FILENAME>, L<C<crypt>|/crypt PLAINTEXT,SALT>, L<C<dbmclose>|/dbmclose HASH>, L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK>, L<C<dump>|/dump LABEL>, L<C<endgrent>|/endgrent>, L<C<endhostent>|/endhostent>,
L<C<endnetent>|/endnetent>, L<C<endprotoent>|/endprotoent>, L<C<endpwent>|/endpwent>, L<C<endservent>|/endservent>, L<C<exec>|/exec LIST>, L<C<fcntl>|/fcntl FILEHANDLE,FUNCTION,SCALAR>, L<C<flock>|/flock FILEHANDLE,OPERATION>, L<C<fork>|/fork>, L<C<getgrent>|/getgrent>, L<C<getgrgid>|/getgrgid GID>, L<C<gethostbyname>|/gethostbyname NAME>, L<C<gethostent>|/gethostent>, L<C<getlogin>|/getlogin>, L<C<getnetbyaddr>|/getnetbyaddr ADDR,ADDRTYPE>, L<C<getnetbyname>|/getnetbyname NAME>, L<C<getnetent>|/getnetent>, L<C<getppid>|/getppid>, L<C<getpgrp>|/getpgrp PID>, L<C<getpriority>|/getpriority WHICH,WHO>, L<C<getprotobynumber>|/getprotobynumber NUMBER>, L<C<getprotoent>|/getprotoent>, L<C<getpwent>|/getpwent>, L<C<getpwnam>|/getpwnam NAME>, L<C<getpwuid>|/getpwuid UID>, L<C<getservbyport>|/getservbyport PORT,PROTO>, L<C<getservent>|/getservent>, L<C<getsockopt>|/getsockopt SOCKET,LEVEL,OPTNAME>, L<C<glob>|/glob EXPR>, L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR>, L<C<kill>|/kill SIGNAL, LIST>, L<C<link>|/link OLDFILE,NEWFILE>, L<C<lstat>|/lstat FILEHANDLE>, L<C<msgctl>|/msgctl ID,CMD,ARG>, L<C<msgget>|/msgget KEY,FLAGS>, L<C<msgrcv>|/msgrcv ID,VAR,SIZE,TYPE,FLAGS>, L<C<msgsnd>|/msgsnd ID,MSG,FLAGS>, L<C<open>|/open FILEHANDLE,MODE,EXPR>, L<C<pipe>|/pipe READHANDLE,WRITEHANDLE>, L<C<readlink>|/readlink EXPR>, L<C<rename>|/rename OLDNAME,NEWNAME>, L<C<select>|/select RBITS,WBITS,EBITS,TIMEOUT>, L<C<semctl>|/semctl ID,SEMNUM,CMD,ARG>, L<C<semget>|/semget KEY,NSEMS,FLAGS>, L<C<semop>|/semop KEY,OPSTRING>, L<C<setgrent>|/setgrent>, L<C<sethostent>|/sethostent STAYOPEN>, L<C<setnetent>|/setnetent STAYOPEN>, L<C<setpgrp>|/setpgrp PID,PGRP>, L<C<setpriority>|/setpriority WHICH,WHO,PRIORITY>, L<C<setprotoent>|/setprotoent STAYOPEN>, L<C<setpwent>|/setpwent>, L<C<setservent>|/setservent STAYOPEN>, L<C<setsockopt>|/setsockopt SOCKET,LEVEL,OPTNAME,OPTVAL>, L<C<shmctl>|/shmctl ID,CMD,ARG>, L<C<shmget>|/shmget KEY,SIZE,FLAGS>, L<C<shmread>|/shmread ID,VAR,POS,SIZE>, 
L<C<shmwrite>|/shmwrite ID,STRING,POS,SIZE>, L<C<socket>|/socket SOCKET,DOMAIN,TYPE,PROTOCOL>, L<C<socketpair>|/socketpair SOCKET1,SOCKET2,DOMAIN,TYPE,PROTOCOL>, L<C<stat>|/stat FILEHANDLE>, L<C<symlink>|/symlink OLDFILE,NEWFILE>, L<C<syscall>|/syscall NUMBER, LIST>, L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE>, L<C<system>|/system LIST>, L<C<times>|/times>, L<C<truncate>|/truncate FILEHANDLE,LENGTH>, L<C<umask>|/umask EXPR>, L<C<unlink>|/unlink LIST>, L<C<utime>|/utime LIST>, L<C<wait>|/wait>, L<C<waitpid>|/waitpid PID,FLAGS>

For more information about the portability of these functions, see L<perlport> and other available platform-specific documentation.

=head2 Alphabetical Listing of Perl Functions

=over

=item -X FILEHANDLE

=item -X EXPR

=item -X DIRHANDLE

=item -X

=for Pod::Functions a file test (-r, -x, etc)

A file test, where X is one of the letters listed below.  This unary operator takes one argument, either a filename, a filehandle, or a dirhandle, and tests the associated file to see if something is true about it.  If the argument is omitted, tests L<C<$_>|perlvar/$_>, except for C<-t>, which tests STDIN.  Unless otherwise documented, it returns C<1> for true and C<''> for false.  If the file doesn't exist or can't be examined, it returns L<C<undef>|/undef EXPR> and sets L<C<$!>|perlvar/$!> (errno).  With the exception of the C<-l> test they all follow symbolic links because they use C<stat()> and not C<lstat()> (so dangling symlinks can't be examined and will therefore report failure).

    -r  File is readable by effective uid/gid.
    -w  File is writable by effective uid/gid.
    -x  File is executable by effective uid/gid.
    -o  File is owned by effective uid.

    -R  File is readable by real uid/gid.
    -W  File is writable by real uid/gid.
    -X  File is executable by real uid/gid.
    -O  File is owned by real uid.

    -e  File exists.
    -z  File has zero size (is empty).
    -s  File has nonzero size (returns size in bytes).

    -f  File is a plain file.
    -d  File is a directory.
    -l  File is a symbolic link.
    -p  File is a named pipe (FIFO), or Filehandle is a pipe.
    -S  File is a socket.
    -b  File is a block special file.
    -c  File is a character special file.
    -t  Filehandle is opened to a tty.

    -u  File has setuid bit set.
    -g  File has setgid bit set.
    -k  File has sticky bit set.

    -T  File is an ASCII or UTF-8 text file (heuristic guess).
    -B  File is a "binary" file (opposite of -T).

    -M  Script start time minus file modification time, in days.
    -A  Same for access time.
    -C  Same for inode change time (Unix, may differ for other
        platforms)

Note that C<-s/a/b/> does not do a negated substitution: only single letters following a minus are interpreted as file tests.

The interpretation of the file permission operators C<-r>, C<-R>, C<-w>, C<-W>, C<-x>, and C<-X> is by default based solely on the mode of the file and the uids and gids of the user.  When run by the superuser, the C<-r>, C<-R>, C<-w>, and C<-W> tests always return 1, and C<-x> and C<-X> return 1 if any execute bit is set in the mode.  Scripts run by the superuser may thus need to do a L<C<stat>|/stat FILEHANDLE> to determine the actual mode of the file, or temporarily set their effective uid to something else.
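As a brief illustrative sketch (not part of the original text; the path is hypothetical), a few of these tests in use:

    my $file = "/etc/hosts";              # hypothetical example path
    print "exists\n"       if -e $file;
    print "readable\n"     if -r $file;
    print "plain file\n"   if -f $file;
    my $size = -s $file;                  # size in bytes (false if empty)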
If you are using ACLs, there is a pragma called L<C<filetest>|filetest> that may produce more accurate results than the bare L<C<stat>|/stat FILEHANDLE> mode bits.  When under C<use filetest 'access'>, the above-mentioned filetests test whether the permission can(not) be granted using the L<access(2)> family of system calls.  Also note that under this pragma the C<-x> and C<-X> tests may return true even if no execute permission bits are set, due to the underlying system calls' definitions.  Read the documentation for the L<C<filetest>|filetest> pragma for more information.

The C<-T> and C<-B> tests work as follows.  The first block or so of the file is examined to see if it is valid UTF-8 that includes non-ASCII characters.  If so, it's a C<-T> file.  Otherwise, that same portion of the file is examined for odd characters such as strange control codes or characters with the high bit set.  If more than a third of the characters are strange, it's a C<-B> file; otherwise it's a C<-T> file.  Also, any file containing a zero byte in the examined portion is considered a binary file.  (If executed within the scope of a L<S<use locale>|perllocale> which includes C<LC_CTYPE>, odd characters are anything that isn't a printable nor space in the current locale.)  If C<-T> or C<-B> is used on a filehandle, the current IO buffer is examined rather than the first block.  Both C<-T> and C<-B> return true on an empty file, or on a file at EOF when testing a filehandle.  Because you have to read a file to do the C<-T> test, on most occasions you want to use a C<-f> against the file first, as in C<next unless -f $file && -T $file>.

If any of the file tests (or either the L<C<stat>|/stat FILEHANDLE> or L<C<lstat>|/lstat FILEHANDLE> operator) is given the special filehandle consisting of a solitary underline, then the stat structure of the previous file test (or L<C<stat>|/stat FILEHANDLE> operator) is used, saving a system call.  (This doesn't work with C<-t>, and you need to remember that L<C<lstat>|/lstat FILEHANDLE> and C<-l> leave values in the stat structure for the symbolic link, not the real file.)  (Also, if the stat buffer was filled by an L<C<lstat>|/lstat FILEHANDLE> call, C<-T> and C<-B> will reset it with the results of C<stat _>.)  Example:

    print "Can do.\n" if -r $a || -w _ || -x _;

As of Perl 5.10.0, as a form of purely syntactic sugar, you can stack file test operators, so that C<-f -w -x $file> is equivalent to C<-x $file && -w _ && -f _>.
(This is only fancy syntax: if you use the return value of C<-f $file> as an argument to another filetest operator, no special magic will happen.)

Portability issues: L<perlport/-X>.

To avoid confusing would-be users of your code with mysterious syntax errors, put something like this at the top of your script:

    use 5.010;  # so filetest ops can stack

=item abs VALUE
X<abs> X<absolute>

=item abs

=for Pod::Functions absolute value function

Returns the absolute value of its argument.  If VALUE is omitted, uses L<C<$_>|perlvar/$_>.

=item accept NEWSOCKET,GENERICSOCKET
X<accept>

=for Pod::Functions accept an incoming socket connect

Accepts an incoming socket connect, just as L<accept(2)> does.  Returns the packed address if it succeeded, false otherwise.  See the example in L<perlipc/"Sockets: Client/Server Communication">.

=item alarm SECONDS
X<alarm> X<SIGALRM> X<timer>

=item alarm

=for Pod::Functions schedule a SIGALRM

Arranges to have a SIGALRM delivered to this process after the specified number of wallclock seconds has elapsed.  If SECONDS is not specified, the value stored in L<C<$_>|perlvar/$_> is used.  Only one timer may be counting at once; each call disables the previous timer, and an argument of C<0> may be supplied to cancel the previous timer without starting a new one.  The returned value is the amount of time remaining on the previous timer.

For delays of finer granularity than one second, the L<Time::HiRes> module (from CPAN, and starting from Perl 5.8 part of the standard distribution) provides L<C<ualarm>|Time::HiRes/ualarm ( $useconds [, $interval_useconds ] )>.

It is usually a mistake to intermix L<C<alarm>|/alarm SECONDS> and L<C<sleep>|/sleep EXPR> calls, because L<C<sleep>|/sleep EXPR> may be internally implemented on your system with L<C<alarm>|/alarm SECONDS>.

If you want to use L<C<alarm>|/alarm SECONDS> to time out a system call you need to use an L<C<eval>|/eval EXPR>/L<C<die>|/die LIST> pair.  You can't rely on the alarm causing the system call to fail with L<C<$!>|perlvar/$!> set to C<EINTR> because Perl sets up signal handlers to restart system calls on some systems.
Using L<C<eval>|/eval EXPR>/L<C<die>|/die LIST> always works, modulo the caveats given in L<perlipc/"Signals">.

    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };  # NB: \n required
        alarm $timeout;
        my $nread = sysread $socket, $buffer, $size;
        alarm 0;
    };
    if ($@) {
        die unless $@ eq "alarm\n";  # propagate unexpected errors
        # timed out
    }
    else {
        # didn't
    }

For more information see L<perlipc>.

Portability issues: L<perlport/alarm>.

=item atan2 Y,X
X<atan2> X<arctangent> X<tan> X<tangent>

=for Pod::Functions arctangent of Y/X in the range -PI to PI

Returns the arctangent of Y/X in the range -PI to PI.

For the tangent operation, you may use the L<C<Math::Trig::tan>|Math::Trig/B<tan>> function, or use the familiar relation:

    sub tan { sin($_[0]) / cos($_[0]) }

The return value for C<atan2(0,0)> is implementation-defined; consult your L<atan2(3)> manpage for more information.

Portability issues: L<perlport/atan2>.

=item bind SOCKET,NAME
X<bind>

=for Pod::Functions binds an address to a socket

Binds a network address to a socket, just as L<bind(2)> does.  Returns true if it succeeded, false otherwise.  NAME should be a packed address of the appropriate type for the socket.  See the examples in L<perlipc/"Sockets: Client/Server Communication">.

=item binmode FILEHANDLE, LAYER
X<binmode> X<binary> X<text> X<DOS> X<Windows>

=item binmode FILEHANDLE

=for Pod::Functions prepare binary files for I/O

Arranges for FILEHANDLE to be read or written in "binary" or "text" mode on systems where the run-time libraries distinguish between binary and text files.  If FILEHANDLE is an expression, the value is taken as the name of the filehandle.  Returns true on success, otherwise it returns L<C<undef>|/undef EXPR> and sets L<C<$!>|perlvar/$!> (errno).

If LAYER is omitted or specified as C<:raw> the filehandle is made suitable for passing binary data.  This includes turning off possible CRLF translation and marking it as bytes (as opposed to Unicode characters).  Note that, despite what may be implied in I<"Programming Perl"> (the Camel, 3rd edition) or elsewhere, C<:raw> is I<not> simply the inverse of C<:crlf>.  Other layers that would affect the binary nature of the stream are I<also> disabled.
See L<PerlIO>, and the discussion about the PERLIO environment variable in L<perlrun|perlrun/PERLIO>.

The C<:bytes>, C<:crlf>, C<:utf8>, and any other directives of the form C<:...>, are called I/O I<layers>.  The L<open> pragma can be used to establish default I/O layers.

I<The LAYER parameter of the L<C<binmode>|/binmode FILEHANDLE, LAYER> function was formerly described as a "discipline", but the consensus naming of this functionality has moved from "discipline" to "layer"; all documentation of this version of Perl therefore refers to "layers".>

To mark FILEHANDLE as UTF-8, use C<:utf8> or C<:encoding(UTF-8)>.  C<:utf8> just marks the data as UTF-8 without further checking, while C<:encoding(UTF-8)> checks the data for actually being valid UTF-8.  More details can be found in L<PerlIO::encoding>.

In general, L<C<binmode>|/binmode FILEHANDLE, LAYER> should be called after L<C<open>|/open FILEHANDLE,MODE,EXPR> but before any I/O is done on the filehandle.  Calling L<C<binmode>|/binmode FILEHANDLE, LAYER> normally flushes any pending buffered output data (and perhaps pending input data) on the handle.  An exception to this is the C<:encoding> layer that changes the default character encoding of the handle.  The C<:encoding> layer sometimes needs to be called in mid-stream, and it doesn't flush the stream.  C<:encoding> also implicitly pushes on top of itself the C<:utf8> layer because internally Perl operates on UTF8-encoded Unicode characters.

The operating system, device drivers, C libraries, and Perl run-time system all conspire to let the programmer treat a single character (C<\n>) as the line terminator, irrespective of external representation.  On many operating systems, the native text file representation matches the internal representation, but on some platforms the external representation of C<\n> is made up of more than one character.  In systems like OS/2, DOS, and the various flavors of MS-Windows, your program sees a C<\n> as a simple C<\cJ>, but what's stored in text files are the two characters C<\cM\cJ>.  That means that if you don't use L<C<binmode>|/binmode FILEHANDLE, LAYER> on these systems, C<\cM\cJ> sequences on disk will be converted to C<\n> on input, and any C<\n> in your program will be converted back to C<\cM\cJ> on output.  This is what you want for text files, but it can be disastrous for binary files.
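To make the discussion above concrete, here is a minimal sketch (the filenames are hypothetical) of putting a handle into binary mode and of requesting an explicit encoding layer instead:

    open(my $bin, '<', 'image.dat') or die "Can't open: $!";  # hypothetical file
    binmode($bin) or die "binmode failed: $!";  # raw bytes, no CRLF translation

    open(my $txt, '<', 'notes.txt') or die "Can't open: $!";  # hypothetical file
    binmode($txt, ':encoding(UTF-8)');          # decode and validate UTF-8 on input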
Another consequence of using L<C<binmode>|/binmode FILEHANDLE, LAYER> (on some systems) is that special end-of-file markers will be seen as part of the data stream.  For systems from the Microsoft family this means that, if your binary data contain C<\cZ>, the I/O subsystem will regard it as the end of the file, unless you use L<C<binmode>|/binmode FILEHANDLE, LAYER>.

L<C<binmode>|/binmode FILEHANDLE, LAYER> is important not only for L<C<readline>|/readline EXPR> and L<C<print>|/print FILEHANDLE LIST> operations, but also when using L<C<read>|/read FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<seek>|/seek FILEHANDLE,POSITION,WHENCE>, L<C<sysread>|/sysread FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<syswrite>|/syswrite FILEHANDLE,SCALAR,LENGTH,OFFSET> and L<C<tell>|/tell FILEHANDLE> (see L<perlport> for more details).  See the L<C<$E<sol>>|perlvar/$E<sol>> and L<C<$\>|perlvar/$\> variables in L<perlvar> for how to manually set your input and output line-termination sequences.

Portability issues: L<perlport/binmode>.

=item bless REF,CLASSNAME
X<bless>

=item bless REF

=for Pod::Functions create an object

This function tells the thingy referenced by REF that it is now an object in the CLASSNAME package.  If CLASSNAME is an empty string, it is interpreted as referring to the C<main> package.  If CLASSNAME is omitted, the current package is used.  Because a L<C<bless>|/bless REF,CLASSNAME> is often the last thing in a constructor, it returns the reference for convenience.  Always use the two-argument version if a derived class might inherit the method doing the blessing.  See L<perlobj> for more about the blessing (and blessings) of objects.

It is advised to avoid the class name C<0>, because much code erroneously uses the result of L<C<ref>|/ref EXPR> as a truth value.

See L<perlmod/"Perl Modules">.

=item break

=for Pod::Functions +switch break out of a C<given> block

Break out of a C<given> block.

L<C<break>|/break> is available only if the L<C<"switch"> feature|feature/The 'switch' feature> is enabled or if it is prefixed with C<CORE::>.
The L<C<"switch"> feature|feature/The 'switch' feature> is enabled automatically with a C<use v5.10> (or higher) declaration in the current scope.

=item caller EXPR
X<caller> X<call stack> X<stack> X<stack trace>

=item caller

=for Pod::Functions get context of the current subroutine call

Returns the context of the current pure perl subroutine call.  In scalar context, returns the caller's package name if there I<is> a caller (that is, if we're in a subroutine or L<C<eval>|/eval EXPR> or L<C<require>|/require VERSION>) and the undefined value otherwise.  caller never returns XS subs and they are skipped.  The next pure perl sub will appear instead of the XS sub in caller's return values.  In list context, caller returns

    #  0         1          2
    my ($package, $filename, $line) = caller;

Like L<C<__FILE__>|/__FILE__> and L<C<__LINE__>|/__LINE__>, the filename and line number returned here may be altered by the mechanism described at L<perlsyn/"Plain Old Comments (Not!)">.

With EXPR, it returns some extra information that the debugger uses to print a stack trace.  The value of EXPR indicates how many call frames to go back before the current one.  Here, $subroutine is the function that the caller called (rather than the function containing the caller).  Note that $subroutine may be C<(eval)> if the frame is not a subroutine call, but an L<C<eval>|/eval EXPR>.  In such a case additional elements $evaltext and C<$is_require> are set: C<$is_require> is true if the frame is created by a L<C<require>|/require VERSION> or L<C<use>|/use Module VERSION LIST> statement, $evaltext contains the text of the C<eval EXPR> statement.  In particular, for an C<eval BLOCK> statement, $subroutine is C<(eval)>, but $evaltext is undefined.  (Note also that each L<C<use>|/use Module VERSION LIST> statement creates a L<C<require>|/require VERSION> frame inside an C<eval EXPR> frame.)  $subroutine may also be C<(unknown)> if this particular subroutine happens to have been deleted from the symbol table.  C<$hasargs> is true if a new instance of L<C<@_>|perlvar/@_> was set up for the frame.  C<$hints> and C<$bitmask> contain pragmatic hints that the caller was compiled with.  C<$hints> corresponds to L<C<$^H>|perlvar/$^H>, and C<$bitmask> corresponds to L<C<${^WARNING_BITS}>|perlvar/${^WARNING_BITS}>.
The C<$hints> and C<$bitmask> values are subject to change between versions of Perl, and are not meant for external use. C<$hinthash> is a reference to a hash containing the value of L<C<%^H>|perlvar/%^H> when the caller was compiled, or L<C<undef>|/undef EXPR> if L<C<%^H>|perlvar/%^H> was empty. Do not modify the values of this hash, as they are the actual values stored in the optree. Furthermore, when called from within the DB package in list context, and with an argument, caller returns more detailed information: it sets the list variable C<@DB::args> to be the arguments with which the subroutine was invoked. Be aware that the optimizer might have optimized call frames away before L<C<caller>|/caller EXPR> had a chance to get the information. That means that C<caller(N)> might not return information about the call frame you expect it to, for C<< N > 1 >>. In particular, C<@DB::args> might have information from the previous time L<C<caller>|/caller EXPR> was called. Be aware that setting C<@DB::args> is I<best effort>, intended for debugging or generating backtraces, and should not be relied upon. In particular, as L<C<@_>|perlvar/@_> contains aliases to the caller's arguments, Perl does not take a copy of L<C<@_>|perlvar/@_>, so C<@DB::args> will contain modifications the subroutine makes to L<C<@_>|perlvar/@_> or its contents, not the original values at call time. C<@DB::args>, like L<C<@_>|perlvar/@_>, does not hold explicit references to its elements, so under certain cases its elements may have become freed and reallocated for other variables or temporary values. Finally, a side effect of the current implementation is that the effects of C<shift @_> can I<normally> be undone (but not C<pop @_> or other splicing, I<and> not if a reference to L<C<@_>|perlvar/@_> has been taken, I<and> subject to the caveat about reallocated elements), so C<@DB::args> is actually a hybrid of the current state and initial state of L<C<@_>|perlvar/@_>. Buyer beware. 
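The list forms described above can be combined into a small debugging helper; this is only an illustrative sketch, not part of the original text:

    sub report_caller {
        # frame 1 is the call to whatever function invoked report_caller()
        my ($package, $filename, $line, $subroutine) = caller(1);
        warn "called from $subroutine at $filename line $line\n";
    }

    sub do_work {
        report_caller();
    }
    do_work();   # warns something like "called from main::do_work at ..."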
=item chdir EXPR
X<chdir> X<cd> X<directory, change>

=item chdir FILEHANDLE

=item chdir DIRHANDLE

=item chdir

=for Pod::Functions change your current working directory

Changes the working directory to EXPR, if possible.  If EXPR is omitted, changes to the directory specified by C<$ENV{HOME}>, if set; if not, changes to the directory specified by C<$ENV{LOGDIR}>.  (Under VMS, the variable C<$ENV{'SYS$LOGIN'}> is also checked, and used if it is set.)  If neither is set, L<C<chdir>|/chdir EXPR> does nothing and fails.  It returns true on success, false otherwise.  See the example under L<C<die>|/die LIST>.

On systems that support L<fchdir(2)>, you may pass a filehandle or directory handle as the argument.  On systems that don't support L<fchdir(2)>, passing handles raises an exception.

=item chmod LIST
X<chmod> X<permission> X<mode>

=for Pod::Functions changes the permissions on a list of files

Changes the permissions of a list of files.  The first element of the list must be the numeric mode, which should probably be an octal number, and which definitely should I<not> be a string of octal digits: C<0644> is okay, but C<"0644"> is not.  Returns the number of files successfully changed.  See also L<C<oct>|/oct EXPR> if all you have is a string.

On systems that support L<fchmod(2)>, you may pass filehandles among the files.  On systems that don't support L<fchmod(2)>, passing filehandles raises an exception.  You can also import the symbolic C<S_I*> constants from the L<C<Fcntl>|Fcntl> module.

Portability issues: L<perlport/chmod>.

=item chomp VARIABLE
X<chomp> X<INPUT_RECORD_SEPARATOR> X<$/> X<newline> X<eol>

=item chomp( LIST )

=item chomp

=for Pod::Functions remove a trailing record separator from a string

This safer version of L<C<chop>|/chop VARIABLE> removes any trailing string that corresponds to the current value of L<C<$E<sol>>|perlvar/$E<sol>> (also known as C<$INPUT_RECORD_SEPARATOR> in the L<C<English>|English> module).  It returns the total number of characters removed from all its arguments.
It's often used to remove the newline from the end of an input record when you're worried that the final record may be missing its newline.  When in paragraph mode (C<$/ = ''>), it removes all trailing newlines from the string.  When in slurp mode (C<$/ = undef>) or fixed-length record mode (L<C<$E<sol>>|perlvar/$E<sol>> is a reference to an integer or the like; see L<perlvar>), L<C<chomp>|/chomp VARIABLE> won't remove anything.  If VARIABLE is omitted, it chomps L<C<$_>|perlvar/$_>.  Example:

    while (<>) {
        chomp;  # avoid \n on last field
        my @array = split(/:/);
        # ...
    }

If VARIABLE is a hash, it chomps the hash's values, but not its keys, resetting the L<C<each>|/each HASH> iterator in the process.

Note that parentheses are necessary when you're chomping anything that is not a simple variable.  This is because C<chomp $cwd = `pwd`;> is interpreted as C<(chomp $cwd) = `pwd`;>, rather than as C<chomp( $cwd = `pwd` )> which you might expect.  Similarly, C<chomp $a, $b> is interpreted as C<chomp($a), $b> rather than as C<chomp($a, $b)>.

=item chop VARIABLE
X<chop>

=item chop( LIST )

=item chop

=for Pod::Functions remove the last character from a string

Chops off the last character of a string and returns the character chopped.  It is much more efficient than C<s/.$//s> because it neither scans nor copies the string.  If VARIABLE is omitted, chops L<C<$_>|perlvar/$_>.  If VARIABLE is a hash, it chops the hash's values, but not its keys, resetting the L<C<each>|/each HASH> iterator in the process.

You can actually chop anything that's an lvalue, including an assignment.

If you chop a list, each element is chopped.  Only the value of the last L<C<chop>|/chop VARIABLE> is returned.

Note that L<C<chop>|/chop VARIABLE> returns the last character.  To return all but the last character, use C<substr($string, 0, -1)>.

See also L<C<chomp>|/chomp VARIABLE>.

=item chown LIST
X<chown> X<owner> X<user> X<group>

=for Pod::Functions change the ownership on a list of files

Changes the owner (and group) of a list of files.  The first two elements of the list must be the I<numeric> uid and gid, in that order.
A value of -1 in either position is interpreted by most systems to leave that value unchanged.  Returns the number of files successfully changed.

    my $cnt = chown $uid, $gid, 'foo', 'bar';
    chown $uid, $gid, @filenames;

On systems that support L<fchown(2)>, you may pass filehandles among the files.  On systems that don't support L<fchown(2)>, passing filehandles raises an exception.

Portability issues: L<perlport/chown>.

=item chr NUMBER
X<chr> X<character> X<ASCII> X<Unicode>

=item chr

=for Pod::Functions get character this number represents

Returns the character represented by that NUMBER in the character set.  For example, C<chr(65)> is C<"A"> in either ASCII or Unicode, and chr(0x263a) is a Unicode smiley face.

Negative values give the Unicode replacement character (chr(0xfffd)), except under the L<bytes> pragma, where the low eight bits of the value (truncated to an integer) are used.

If NUMBER is omitted, uses L<C<$_>|perlvar/$_>.

For the reverse, use L<C<ord>|/ord EXPR>.

Note that characters from 128 to 255 (inclusive) are by default internally not encoded as UTF-8 for backward compatibility reasons.

See L<perlunicode> for more about Unicode.

=item chroot FILENAME
X<chroot> X<root>

=item chroot

=for Pod::Functions make directory new root for path lookups

This function works like the system call by the same name: it makes the named directory the new root directory for all further pathnames that begin with a C</> by your process and all its children.  (It doesn't change your current working directory, which is unaffected.)  For security reasons, this call is restricted to the superuser.  If FILENAME is omitted, does a L<C<chroot>|/chroot FILENAME> to L<C<$_>|perlvar/$_>.

B<NOTE:>  It is mandatory for security to C<chdir("/")> (L<C<chdir>|/chdir EXPR> to the root directory) immediately after a L<C<chroot>|/chroot FILENAME>, otherwise the current working directory may be outside of the new root.

Portability issues: L<perlport/chroot>.
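Putting the B<NOTE> above into practice, a minimal sketch (the jail directory is hypothetical, and this must run as the superuser):

    chroot('/var/jail') or die "chroot failed: $!";  # hypothetical directory
    chdir('/')          or die "chdir failed: $!";   # mandatory, see the note above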
=item close FILEHANDLE
X<close>

=item close

=for Pod::Functions close file (or pipe or socket) handle

Closes the file or pipe associated with the filehandle, flushes the IO buffers, and closes the system file descriptor.  Returns true if those operations succeed and if no error was reported by any PerlIO layer.  Closes the currently selected filehandle if the argument is omitted.

You don't have to close FILEHANDLE if you are immediately going to do another L<C<open>|/open FILEHANDLE,MODE,EXPR> on it, because L<C<open>|/open FILEHANDLE,MODE,EXPR> closes it for you.  (See L<C<open>|/open FILEHANDLE,MODE,EXPR>.)  However, an explicit L<C<close>|/close FILEHANDLE> on an input file resets the line counter (L<C<$.>|perlvar/$.>), while the implicit close done by L<C<open>|/open FILEHANDLE,MODE,EXPR> does not.

If the filehandle came from a piped open, L<C<close>|/close FILEHANDLE> returns false if one of the other syscalls involved fails or if its program exits with non-zero status.  If the only problem was that the program exited non-zero, L<C<$!>|perlvar/$!> will be set to C<0>.  Closing a pipe also waits for the process executing on the pipe to exit--in case you wish to look at the output of the pipe afterwards--and implicitly puts the exit status value of that command into L<C<$?>|perlvar/$?> and L<C<${^CHILD_ERROR_NATIVE}>|perlvar/${^CHILD_ERROR_NATIVE}>.

If there are multiple threads running, L<C<close>|/close FILEHANDLE> on a filehandle from a piped open returns true without waiting for the child process to terminate, if the filehandle is still open in another thread.

=item closedir DIRHANDLE
X<closedir>

=for Pod::Functions close directory handle

Closes a directory opened by L<C<opendir>|/opendir DIRHANDLE,EXPR> and returns the success of that system call.

=item connect SOCKET,NAME
X<connect>

=for Pod::Functions connect to a remote socket

Attempts to connect to a remote socket, just like L<connect(2)>.  Returns true if it succeeded, false otherwise.  NAME should be a packed address of the appropriate type for the socket.  See the examples in L<perlipc/"Sockets: Client/Server Communication">.

=item continue BLOCK
X<continue>

=item continue

=for Pod::Functions optional trailing block in a while or foreach

When followed by a BLOCK, L<C<continue>|/continue BLOCK> is actually a flow control statement rather than a function.
If there is a L<C<continue>|/continue BLOCK> BLOCK attached to a BLOCK (typically in a C<while> or C<foreach>), it is always executed just before the conditional is about to be evaluated again, just like the third part of a C<for> loop in C.  Thus it can be used to increment a loop variable, even when the loop has been continued via the L<C<next>|/next LABEL> statement (which is similar to the C L<C<continue>|/continue BLOCK> statement).

L<C<last>|/last LABEL>, L<C<next>|/next LABEL>, or L<C<redo>|/redo LABEL> may appear within a L<C<continue>|/continue BLOCK> block; L<C<last>|/last LABEL> and L<C<redo>|/redo LABEL> behave as if they had been executed within the main block.  So will L<C<next>|/next LABEL>, but since it will execute a L<C<continue>|/continue BLOCK> block, it may be more entertaining.

    while (EXPR) {
        ### redo always comes here
        do_something;
    } continue {
        ### next always comes here
        do_something_else;
        # then back to the top to re-check EXPR
    }
    ### last always comes here

Omitting the L<C<continue>|/continue BLOCK> section is equivalent to using an empty one, logically enough, so L<C<next>|/next LABEL> goes directly back to check the condition at the top of the loop.

When there is no BLOCK, L<C<continue>|/continue BLOCK> is a function that falls through the current C<when> or C<default> block instead of iterating a dynamically enclosing C<foreach> or exiting a lexically enclosing C<given>.  In Perl 5.14 and earlier, this form of L<C<continue>|/continue BLOCK> was only available when the L<C<"switch"> feature|feature/The 'switch' feature> was enabled.  See L<feature> and L<perlsyn/"Switch Statements"> for more information.

=item cos EXPR
X<cos> X<cosine> X<acos> X<arccosine>

=item cos

=for Pod::Functions cosine function

Returns the cosine of EXPR (expressed in radians).  If EXPR is omitted, takes the cosine of L<C<$_>|perlvar/$_>.
For the inverse cosine operation, you may use the L<C<Math::Trig::acos>|Math::Trig> function, or use this relation:

    sub acos { atan2( sqrt(1 - $_[0] * $_[0]), $_[0] ) }

=item crypt PLAINTEXT,SALT
X<crypt> X<digest> X<hash> X<salt> X<plaintext> X<password> X<decrypt> X<cryptography> X<passwd> X<encrypt>

=for Pod::Functions one-way passwd-style encryption

Creates a digest string exactly like the L<crypt(3)> function in the C library (assuming that you actually have a version there that has not been extirpated as a potential munition).

L<C<crypt>|/crypt PLAINTEXT,SALT> is a one-way hash function.  The PLAINTEXT and SALT are turned into a short string, called a digest, which is returned.  To check a password, the entered password is L<C<crypt>|/crypt PLAINTEXT,SALT>'d with the same salt as the stored digest.  If the two digests match, the password is correct.

When verifying an existing digest string you should use the digest as the salt (like C<crypt($plain, $digest) eq $digest>).  The SALT used to create the digest is visible as part of the digest.  This ensures L<C<crypt>|/crypt PLAINTEXT,SALT> will hash the new string with the same salt as the digest.  This allows your code to work with the standard L<C<crypt>|/crypt PLAINTEXT,SALT> and with more exotic implementations.  In other words, assume nothing about the returned string itself nor about how many bytes of SALT may matter.

Traditionally the result is a string of 13 bytes: two first bytes of the salt, followed by 11 bytes from the set C<[./0-9A-Za-z]>, and only the first eight bytes of PLAINTEXT mattered, but alternative hashing schemes and implementations on non-Unix platforms may produce different strings.

When choosing a new salt create a random two character string whose characters come from the set C<[./0-9A-Za-z]> (like C<join '', ('.', '/', 0..9, 'A'..'Z', 'a'..'z')[rand 64, rand 64]>).  This set of characters is just a recommendation; the characters allowed in the salt depend solely on your system's crypt library, and Perl can't restrict what salts L<C<crypt>|/crypt PLAINTEXT,SALT> accepts.

The L<C<crypt>|/crypt PLAINTEXT,SALT> function is unsuitable for hashing large quantities of data, not least of all because you can't get the information back.  Look at the L<Digest> module for more robust algorithms.
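As an illustrative sketch of the verification idiom described above (the source of the stored digest is hypothetical):

    my $stored = get_digest_from_database();   # hypothetical helper
    print "Enter password: ";
    chomp(my $typed = <STDIN>);
    if (crypt($typed, $stored) eq $stored) {   # stored digest doubles as the salt
        print "Password verified\n";
    }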
If using L<C<crypt>|/crypt PLAINTEXT,SALT> on a Unicode string (which I<potentially> has characters with codepoints above 255), Perl tries to make sense of the situation by trying to downgrade (a copy of) the string back to an eight-bit byte string before calling L<C<crypt>|/crypt PLAINTEXT,SALT> (on that copy). If that works, good. If not, L<C<crypt>|/crypt PLAINTEXT,SALT> dies with L<C<Wide character in crypt>|perldiag/Wide character in %s>. Portability issues: L<perlport/crypt>. =item dbmclose HASH X<dbmclose> =for Pod::Functions breaks binding on a tied dbm file [This function has been largely superseded by the L<C<untie>|/untie VARIABLE> function.] Breaks the binding between a DBM file and a hash. Portability issues: L<perlport/dbmclose>. =item dbmopen HASH,DBNAME,MASK X<dbmopen> X<dbm> X<ndbm> X<sdbm> X<gdbm> =for Pod::Functions create binding on a tied dbm file [This function has been largely superseded by the L<C<tie>|/tie VARIABLE,CLASSNAME,LIST> function.] This binds a L<dbm(3)>, L<ndbm(3)>, L<sdbm(3)>, L<gdbm(3)>, or Berkeley DB file to a hash. HASH is the name of the hash. (Unlike normal L<C<open>|/open FILEHANDLE,MODE,EXPR>, the first argument is I<not> a filehandle, even though it looks like one). DBNAME is the name of the database (without the F<.dir> or F<.pag> extension if any). If the database does not exist, it is created with protection specified by MASK (as modified by the L<C<umask>|/umask EXPR>). To prevent creation of the database if it doesn't exist, you may specify a MODE of 0, and the function will return a false value if it can't find an existing database. If your system supports only the older DBM functions, you may make only one L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK> call in your program. In older versions of Perl, if your system had neither DBM nor ndbm, calling L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK> produced a fatal error; it now falls back to L<sdbm(3)>. 
If you don't have write access to the DBM file, you can only read hash variables, not set them. If you want to test whether you can write, either use file tests or try setting a dummy hash entry inside an L<C<eval>|/eval EXPR> to trap the error. Note that functions such as L<C<keys>|/keys HASH> and L<C<values>|/values HASH> may return huge lists when used on large DBM files. You may prefer to use the L<C<each>|/each HASH> function to iterate over large DBM files. Example: # print out history file offsets dbmopen(%HIST,'/usr/lib/news/history',0666); while (($key,$val) = each %HIST) { print $key, ' = ', unpack('L',$val), "\n"; } dbmclose(%HIST); See also L<AnyDBM_File> for a more general description of the pros and cons of the various dbm approaches, as well as L<DB_File> for a particularly rich implementation. You can control which DBM library you use by loading that library before you call L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK>: use DB_File; dbmopen(%NS_Hist, "$ENV{HOME}/.netscape/history.db") or die "Can't open netscape history file: $!"; Portability issues: L<perlport/dbmopen>. =item defined EXPR X<defined> X<undef> X<undefined> =item defined =for Pod::Functions test whether a value, variable, or function is defined Returns a Boolean value telling whether EXPR has a value other than the undefined value L<C<undef>|/undef EXPR>. If EXPR is not present, L<C<$_>|perlvar/$_> is checked. Many operations return L<C<undef>|/undef EXPR> to indicate failure, end of file, system error, uninitialized variable, and other exceptional conditions. This function allows you to distinguish L<C<undef>|/undef EXPR> from other values. (A simple Boolean test will not distinguish among L<C<undef>|/undef EXPR>, zero, the empty string, and C<"0">, which are all equally false.) 
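A few illustrative tests (with hypothetical variables), contrasting
definedness with truth:

    my %switch = (D => 0);
    print "defined\n" if defined $switch{D};  # prints: 0 is defined
    print "true\n"    if $switch{D};          # no: 0 is false

    my @ary;
    print "empty\n" unless defined pop @ary;  # pop on empty gives undef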
Note that since L<C<undef>|/undef EXPR> is a valid scalar, its presence
doesn't I<necessarily> indicate an exceptional condition:
L<C<pop>|/pop ARRAY> returns L<C<undef>|/undef EXPR> when its argument
is an empty array, I<or> when the element to return happens to be
L<C<undef>|/undef EXPR>.

You may also use C<defined(&func)> to check whether subroutine C<func>
has ever been defined.  The return value is unaffected by any forward
declarations of C<func>.  A subroutine that is not defined may still be
callable: its package may have an C<AUTOLOAD> method that makes it
spring into existence the first time that it is called; see L<perlsub>.

Use of L<C<defined>|/defined EXPR> on aggregates (hashes and arrays) is
no longer supported.  It used to report whether memory for that
aggregate had ever been allocated.  You should instead use a simple
test for size:

    if (@an_array) { print "has array elements\n" }
    if (%a_hash)   { print "has hash members\n"   }

When used on a hash element, it tells you whether the value is defined,
not whether the key exists in the hash.  Use
L<C<exists>|/exists EXPR> for the latter purpose.

Note: Many folks tend to overuse L<C<defined>|/defined EXPR> and are
then surprised to discover that the number C<0> and C<""> (the
zero-length string) are, in fact, defined values.  For example, if you
say

    "ab" =~ /a(.*)b/;

The pattern match succeeds and C<$1> is defined, although it matched
"nothing".  It didn't really fail to match anything; rather, it matched
something that happened to be zero characters long.  So you should use
L<C<defined>|/defined EXPR> only when questioning the integrity of what
you're trying to do.  At other times, a simple comparison to C<0> or
C<""> is what you want.

See also L<C<undef>|/undef EXPR>, L<C<exists>|/exists EXPR>,
L<C<ref>|/ref EXPR>.

=item delete EXPR
X<delete>

=for Pod::Functions deletes a value from a hash

Given an expression that specifies an element or slice of a hash,
L<C<delete>|/delete EXPR> deletes the specified elements from that hash
so that L<C<exists>|/exists EXPR> on that element no longer returns
true.  Setting a hash element to the undefined value does not remove its
key, but deleting it does; see L<C<exists>|/exists EXPR>.
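The difference between setting an element to C<undef> and deleting it
can be seen with L<C<exists>|/exists EXPR> (a small sketch):

    my %h = (a => 1, b => 2);
    $h{a} = undef;    # key "a" remains, value is undefined
    delete $h{b};     # key "b" is gone entirely
    print exists $h{a} ? "a exists\n" : "a gone\n";  # a exists
    print exists $h{b} ? "b exists\n" : "b gone\n";  # b gone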
In list context, usually returns the value or values deleted, or the last such element in scalar context. The return list's length corresponds to that of the argument list: deleting non-existent elements returns the undefined value in their corresponding positions. When a L<keyE<sol>value hash slice|perldata/KeyE<sol>Value Hash Slices> is passed to C<delete>, the return value is a list of key/value pairs (two elements for each item deleted from the hash). L<C<delete>|/delete EXPR> may also be used on arrays and array slices, but its behavior is less straightforward. Although L<C<exists>|/exists EXPR> will return false for deleted entries, deleting array elements never changes indices of existing values; use L<C<shift>|/shift ARRAY> or L<C<splice>|/splice ARRAY,OFFSET,LENGTH,LIST> for that. However, if any deleted elements fall at the end of an array, the array's size shrinks to the position of the highest element that still tests true for L<C<exists>|/exists EXPR>, or to 0 if none do. In other words, an array won't have trailing nonexistent elements after a delete. B<WARNING:> Calling L<C<delete>|/delete EXPR> on array values is strongly discouraged. The notion of deleting or checking the existence of Perl array elements is not conceptually coherent, and can lead to surprising behavior. Deleting from L<C<%ENV>|perlvar/%ENV> modifies the environment. Deleting from a hash tied to a DBM file deletes the entry from the DBM file. Deleting from a L<C<tied>|/tied VARIABLE> hash or array may not necessarily return anything; it depends on the implementation of the L<C<tied>|/tied VARIABLE> package's DELETE method, which may do whatever it pleases. The C<delete local EXPR> construct localizes the deletion to the current block at run time. Until the block exits, elements locally deleted temporarily no longer exist. 
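For example, C<delete local> (available since Perl 5.12) can hide a hash
entry of a package variable for the duration of a block (a sketch):

    our %opts = (verbose => 1, debug => 1);
    {
        delete local $opts{debug};    # temporarily not present
        print exists $opts{debug} ? "yes\n" : "no\n";   # no
    }
    print exists $opts{debug} ? "yes\n" : "no\n";       # yes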
See L<perlsub/"Localized deletion of elements of composite types">.

=item die LIST
X<die> X<throw> X<exception> X<raise> X<$@> X<abort>

=for Pod::Functions raise an exception or bail out

L<C<die>|/die LIST> raises an exception.  Inside an
L<C<eval>|/eval EXPR> the exception is stuffed into
L<C<$@>|perlvar/$@> and the L<C<eval>|/eval EXPR> is terminated with the
undefined value.  If the exception is outside of all enclosing
L<C<eval>|/eval EXPR>s, then the uncaught exception is printed to
C<STDERR> and perl exits with an exit code indicating failure.  If you
need to exit the process with a specific exit code, see
L<C<exit>|/exit EXPR>.

Equivalent examples:

    die "Can't cd to spool: $!\n" unless chdir '/usr/spool/news';
    chdir '/usr/spool/news' or die "Can't cd to spool: $!\n"

If the last element of LIST does not end in a newline, the current
script line number and input line number (if any) are appended to the
message.  Note that the "input line number" (also known as "chunk") is
subject to whatever notion of "line" happens to be currently in effect,
and is also available as the special variable L<C<$.>|perlvar/$.>.  See
L<perlvar/"$/"> and L<perlvar/"$.">.

Hint: sometimes appending C<", stopped"> to your message will cause it
to make better sense when the string C<"at foo line 123"> is appended.

If LIST was empty or made an empty string, and L<C<$@>|perlvar/$@>
already contains an exception value (typically from a previous
L<C<eval>|/eval EXPR>), then that value is reused after appending
C<"\t...propagated">.  This is useful for propagating exceptions:

    eval { ... };
    die unless $@ =~ /Expected exception/;

If LIST was empty or made an empty string, and L<C<$@>|perlvar/$@>
contains an object reference that has a C<PROPAGATE> method, that method
will be called with additional file and line number parameters.  The
return value replaces the value in L<C<$@>|perlvar/$@>; i.e., as if
C<< $@ = eval { $@->PROPAGATE(__FILE__, __LINE__) }; >> were called.

If LIST was empty or made an empty string, and L<C<$@>|perlvar/$@> is
also empty, then the string C<"Died"> is used.
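The trailing-newline rule can be seen by trapping both forms with
L<C<eval>|/eval EXPR> (a sketch; the exact file and line in the second
message depend on where the code appears):

    eval { die "oops\n" };   # trailing newline: message used verbatim
    print $@;                # prints "oops"

    eval { die "oops" };     # no newline: " at FILE line LINE." appended
    print $@;                # e.g. "oops at foo.pl line 4."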
You can also call L<C<die>|/die LIST> with a reference argument, and if
this is trapped within an L<C<eval>|/eval EXPR>, L<C<$@>|perlvar/$@>
contains that reference.  This permits more elaborate exception handling
using objects that maintain arbitrary state about the exception.  Such a
scheme is sometimes preferable to matching particular string values of
L<C<$@>|perlvar/$@> with regular expressions.

Because Perl stringifies uncaught exception messages before display,
you'll probably want to overload stringification operations on
exception objects.  See L<overload> for details about that.

Because L<C<$@>|perlvar/$@> is a global variable, be careful that
analyzing an exception caught by C<eval> doesn't replace the reference
in the global variable; it's easiest to make a local copy of the
reference before any manipulations.

If an uncaught exception results in interpreter exit, the exit code is
determined from the values of L<C<$!>|perlvar/$!> and
L<C<$?>|perlvar/$?> with this pseudocode:

    exit $! if $!;              # errno
    exit $? >> 8 if $? >> 8;    # child exit status
    exit 255;                   # last resort

As with L<C<exit>|/exit EXPR>, L<C<$?>|perlvar/$?> is set prior to
unwinding the call stack; any C<DESTROY> or C<END> handlers can then
alter this value, and thus Perl's exit code.

The intent is to squeeze as much possible information about the likely
cause into the limited space of the system exit code.  However, as
L<C<$!>|perlvar/$!> is the value of C's C<errno>, which can be set by
any system call, this means that the value of the exit code used by
L<C<die>|/die LIST> can be non-predictable, so should not be relied
upon, other than to be non-zero.

You can arrange for a callback to be run just before the
L<C<die>|/die LIST> does its deed, by setting the
L<C<$SIG{__DIE__}>|perlvar/%SIG> hook.  The associated handler is called
with the exception as an argument, and can change the exception, if it
sees fit, by calling L<C<die>|/die LIST> again.  See L<perlvar/%SIG> for
details on setting L<C<%SIG>|perlvar/%SIG> entries, and
L<C<eval>|/eval EXPR> for some examples.
Although this feature was meant to be run only right before your program
was to exit, this is not currently so: the
L<C<$SIG{__DIE__}>|perlvar/%SIG> hook is currently called even inside
L<C<eval>|/eval EXPR>ed blocks/strings!  If one wants the hook to do
nothing in such situations, put

    die @_ if $^S;

as the first line of the handler (see L<perlvar/$^S>).  Because this
promotes strange action at a distance, this counterintuitive behavior
may be fixed in a future release.

See also L<C<exit>|/exit EXPR>, L<C<warn>|/warn LIST>, and the L<Carp>
module.

=item do BLOCK
X<do> X<block>

=for Pod::Functions turn a BLOCK into a TERM

Not really a function.  Returns the value of the last command in the
sequence of commands indicated by BLOCK.  When modified by the C<while>
or C<until> loop modifier, executes the BLOCK once before testing the
loop condition.  (On other statements the loop modifiers test the
conditional first.)

C<do BLOCK> does I<not> count as a loop, so the loop control statements
L<C<next>|/next LABEL>, L<C<last>|/last LABEL>, or
L<C<redo>|/redo LABEL> cannot be used to leave or restart the block.
See L<perlsyn> for alternative strategies.

=item do EXPR
X<do>

Uses the value of EXPR as a filename and executes the contents of the
file as a Perl script.  C<do './stat.pl'> is largely like

    eval `cat stat.pl`;

except that it's more concise, runs no external processes, and keeps
track of the current filename for error messages.  It also differs in
that code evaluated with C<do FILE> cannot see lexicals in the enclosing
scope; C<eval STRING> does.  It's the same, however, in that it does
reparse the file every time you call it, so you probably don't want to
do this inside a loop.

Using C<do> with a relative path (except for F<./> and F<../>), like

    do 'foo/stat.pl';

will search the L<C<@INC>|perlvar/@INC> directories, and update
L<C<%INC>|perlvar/%INC> if the file is found.  See L<perlvar/@INC> and
L<perlvar/%INC> for these variables.  In particular, note that whilst
historically L<C<@INC>|perlvar/@INC> contained '.'
(the current directory) making these two cases equivalent, that is no
longer necessarily the case, as '.' is not included in C<@INC> by
default in perl versions 5.26.0 onwards.  Instead, perl will now warn:

    do "stat.pl" failed, '.' is no longer in @INC;
    did you mean do "./stat.pl"?

If L<C<do>|/do EXPR> can read the file but cannot compile it, it
returns L<C<undef>|/undef EXPR> and sets an error message in
L<C<$@>|perlvar/$@>.  If L<C<do>|/do EXPR> cannot read the file, it
returns undef and sets L<C<$!>|perlvar/$!> to the error.  Always check
L<C<$@>|perlvar/$@> first, as compilation could fail in a way that also
sets L<C<$!>|perlvar/$!>.  If the file is successfully compiled,
L<C<do>|/do EXPR> returns the value of the last expression evaluated.

Inclusion of library modules is better done with the
L<C<use>|/use Module VERSION LIST> and L<C<require>|/require VERSION>
operators, which also do automatic error checking and raise an exception
if there's a problem.

You might like to use L<C<do>|/do EXPR> to read in a program
configuration file.  Manual error checking can be done this way:

    # read in config files: system first, then user
    for $file ("/share/prog/defaults.rc", "$ENV{HOME}/.someprogrc") {
        unless ($return = do $file) {
            warn "couldn't parse $file: $@" if $@;
            warn "couldn't do $file: $!"    unless defined $return;
            warn "couldn't run $file"       unless $return;
        }
    }

=item dump LABEL
X<dump> X<core> X<undump>

=item dump EXPR

=item dump

=for Pod::Functions create an immediate core dump

This function causes an immediate core dump.  See also the B<-u>
command-line switch in L<perlrun|perlrun/-u>, which does the same thing.
Primarily this is so that you can use the B<undump> program (not
supplied) to turn your core dump into an executable binary after having
initialized all your variables at the beginning of the program.  When
the new binary is executed it will begin by executing a C<goto LABEL>
(with all the restrictions that L<C<goto>|/goto LABEL> suffers).  Think
of it as a goto with an intervening core dump and reincarnation.  If
C<LABEL> is omitted, restarts the program from the top.  The
C<dump EXPR> form, available starting in Perl 5.18.0, allows a name to
be computed at run time, being otherwise identical to C<dump LABEL>.
B<WARNING>: Any files opened at the time of the dump will I<not> be open
any more when the program is reincarnated, with possible resulting
confusion by Perl.

This function is now largely obsolete, mostly because it's very hard to
convert a core file into an executable.  As of Perl 5.30, it must be
invoked as C<CORE::dump()>.

Unlike most named operators, this has the same precedence as assignment.
It is also exempt from the looks-like-a-function rule, so
C<dump ("foo")."bar"> will cause "bar" to be part of the argument to
L<C<dump>|/dump LABEL>.

Portability issues: L<perlport/dump>.

=item each HASH
X<each> X<hash, iterator>

=item each ARRAY
X<array, iterator>

=for Pod::Functions retrieve the next key/value pair from a hash

When called on a hash in list context, returns a 2-element list
consisting of the key and value for the next element of the hash, so
that you can iterate over it.  When called on an array, returns the
index and value of the next array element.

After L<C<each>|/each HASH> has returned all entries from the hash or
array, the next call to L<C<each>|/each HASH> returns the empty list in
list context and L<C<undef>|/undef EXPR> in scalar context; the next
call following I<that> one restarts iteration.  Each hash or array has
its own internal iterator, accessed by L<C<each>|/each HASH>,
L<C<keys>|/keys HASH>, and L<C<values>|/values HASH>.  The iterator is
implicitly reset when L<C<each>|/each HASH> has reached the end as just
described; it can be explicitly reset by calling
L<C<keys>|/keys HASH> or L<C<values>|/values HASH> on the hash or array.
If you add or delete a hash's elements while iterating over it, the
effect on the iterator is unspecified; for example, entries may be
skipped or duplicated, so don't do that.  Exception: It is always safe
to delete the item most recently returned by L<C<each>|/each HASH>, so
the following code works properly:

    while (my ($key, $value) = each %hash) {
        print $key, "\n";
        delete $hash{$key};   # This is safe
    }

Tied hashes may have a different ordering behaviour to perl's hash
implementation.

The iterator used by C<each> is attached to the hash or array, and is
shared between all iteration operations applied to the same hash or
array.  Thus all uses of C<each> on a single hash or array advance the
same iterator location.  All uses of C<each> are also subject to having
the iterator reset by any use of C<keys> or C<values> on the same hash
or array, or by the hash (but not array) being referenced in list
context.
This makes C<each>-C<while> loops quite fragile: it is easy to arrive at
such a loop with the iterator already part way through the object, or to
accidentally clobber the iterator state during execution of the loop
body.  To avoid these problems, use a C<foreach> loop rather than
C<while>-C<each>.

This prints out your environment like the L<printenv(1)> program, but in
a different order:

    while (my ($key,$value) = each %ENV) {
        print "$key=$value\n";
    }

Starting with Perl 5.14, an experimental feature allowed
L<C<each>|/each HASH> to take a scalar expression.  This experiment has
been deemed unsuccessful, and was removed as of Perl 5.24.

As of Perl 5.18 you can use a bare L<C<each>|/each HASH> in a C<while>
loop, which will set L<C<$_>|perlvar/$_> on every iteration.  If either
an C<each> expression or an explicit assignment of an C<each> expression
to a scalar is used as a C<while>/C<for> condition, then the condition
actually tests for definedness of the expression's value, not for its
regular truth value.  To signal that your code relies on this behavior,
put this at the top of your file:

    use 5.018; # so each assigns to $_ in a lone while test

See also L<C<keys>|/keys HASH>, L<C<values>|/values HASH>, and
L<C<sort>|/sort SUBNAME LIST>.

=item eof FILEHANDLE
X<eof> X<end of file> X<end-of-file>

=item eof ()

=item eof

=for Pod::Functions test a filehandle for its end

Returns 1 if the next read on FILEHANDLE will return end of file I<or>
if FILEHANDLE is not open.  FILEHANDLE may be an expression whose value
gives the real filehandle.  (Note that this function actually reads a
character and then C<ungetc>s it, so isn't useful in an interactive
context.)  Do not read from a terminal file (or call
C<eof(FILEHANDLE)> on it) after end-of-file is reached.  File types such
as terminals may lose the end-of-file condition if you do.

An L<C<eof>|/eof FILEHANDLE> without an argument uses the last file
read.  Using L<C<eof()>|/eof FILEHANDLE> with empty parentheses is
different.  It refers to the pseudo file formed from the files listed on
the command line and accessed via the C<< <> >> operator.  Since
C<< <> >> isn't explicitly opened, as a normal filehandle is, an
L<C<eof()>|/eof FILEHANDLE> before C<< <> >> has been used will cause
L<C<@ARGV>|perlvar/@ARGV> to be examined to determine if input is
available.
Similarly, an L<C<eof()>|/eof FILEHANDLE> after C<< <> >> has returned
end-of-file will assume you are processing another
L<C<@ARGV>|perlvar/@ARGV> list, and if you haven't set
L<C<@ARGV>|perlvar/@ARGV>, will read input from C<STDIN>; see
L<perlop/"I/O Operators">.

In a C<< while (<>) >> loop, L<C<eof>|/eof FILEHANDLE> or
C<eof(ARGV)> can be used to detect the end of each file, whereas
L<C<eof()>|/eof FILEHANDLE> will only detect the end of the last file.

Practical hint: you almost never need to use
L<C<eof>|/eof FILEHANDLE> in Perl, because the input operators typically
return L<C<undef>|/undef EXPR> when they run out of data or encounter an
error.

=item eval EXPR
X<eval> X<try> X<catch> X<evaluate> X<parse> X<execute>
X<error, handling> X<exception, handling>

=item eval BLOCK

=item eval

=for Pod::Functions catch exceptions or compile and run code

C<eval> in all its forms is used to execute a little Perl program,
trapping any errors encountered so they don't crash the calling program.

Plain C<eval> with no argument is just C<eval EXPR>, where the
expression is understood to be contained in L<C<$_>|perlvar/$_>.  Thus
there are only two real C<eval> forms; the one with an EXPR is often
called "string eval".  In a string eval, the value of the expression is
first parsed, and if there were no errors, executed as a block within
the lexical context of the current Perl program.  Note that the value is
parsed every time the C<eval> executes.

The other form is called "block eval".  It is less general than string
eval, but the code within the BLOCK is parsed only once (at the same
time the code surrounding the C<eval> itself was being parsed) and
executed within the context of the current Perl program.

In both forms, the value returned is the value of the last expression
evaluated inside the mini-program; a return statement may also be used,
just as with subroutines.  The expression providing the return value is
evaluated in void, scalar, or list context, depending on the context of
the C<eval> itself.  See L<C<wantarray>|/wantarray> for more on how the
evaluation context can be determined.

If there is a syntax error or runtime error, or a
L<C<die>|/die LIST> statement is executed, C<eval> returns
L<C<undef>|/undef EXPR> in scalar context, or an empty list in list
context, and L<C<$@>|perlvar/$@> is set to the error message.  (Prior to
5.16, a bug caused L<C<undef>|/undef EXPR> to be returned in list
context for syntax errors, but not for runtime errors.)  If there was no
error, L<C<$@>|perlvar/$@> is set to the empty string.  A control flow
operator like L<C<last>|/last LABEL> or L<C<goto>|/goto LABEL> can
bypass the setting of L<C<$@>|perlvar/$@>.
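The two forms and their error trapping can be sketched side by side (a
minimal example; the error text may vary between perl versions):

    my $divisor = 0;

    # Block eval: compiled with the surrounding code.
    my $answer = eval { 10 / $divisor };
    warn "caught: $@" if $@;   # e.g. Illegal division by zero

    # String eval: parsed and compiled anew at run time.
    my $result = eval '10 / $divisor';
    warn "caught: $@" if $@;

In both cases the fatal division error is trapped, the C<eval> returns
C<undef>, and execution continues after the C<eval>.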
Beware that using C<eval> neither silences Perl from printing warnings
to STDERR, nor does it stuff the text of warning messages into
L<C<$@>|perlvar/$@>.  To do either of those, you have to use the
L<C<$SIG{__WARN__}>|perlvar/%SIG> facility, or turn off warnings inside
the BLOCK or EXPR using S<C<no warnings 'all'>>.  See
L<C<warn>|/warn LIST>, L<perlvar>, and L<warnings>.

Note that, because C<eval> traps otherwise-fatal errors, it is useful
for determining whether a particular feature (such as
L<C<socket>|/socket SOCKET,DOMAIN,TYPE,PROTOCOL> or
L<C<symlink>|/symlink OLDFILE,NEWFILE>) is implemented.  It is also
Perl's exception-trapping mechanism, where the L<C<die>|/die LIST>
operator is used to raise exceptions.

Before Perl 5.14, the assignment to L<C<$@>|perlvar/$@> occurred before
restoration of localized variables, which means that for your code to
run on older versions, a temporary is required if you want to mask some,
but not all, errors.

There are some different considerations for each form:

=over 4

=item String eval

Since the return value of EXPR is executed as a block within the lexical
context of the current Perl program, any outer lexical variables are
visible to it, and any package variable settings or subroutine and
format definitions remain afterwards.

=over 4

=item Under the L<C<"unicode_eval"> feature|feature/The 'unicode_eval' and 'evalbytes' features>

If this feature is enabled (which is the default under a C<use 5.16> or
higher declaration), EXPR is considered to be in the same encoding as
the surrounding program.  Thus if S<C<use utf8>> is in effect, the
string is treated as being UTF-8 encoded; otherwise it is considered a
sequence of independent bytes, interpreted as if the
L<C<"unicode_strings"> feature|feature/The 'unicode_strings' feature> is
in effect.

In a plain C<eval> without an EXPR argument, being in S<C<use utf8>> or
not is irrelevant; the UTF-8ness of C<$_> itself determines the
behavior.

Any S<C<use utf8>> or S<C<no utf8>> declarations within the string have
no effect, and source filters are forbidden.  (C<unicode_strings>,
however, can appear within the string.)  See also the
L<C<evalbytes>|/evalbytes EXPR> operator, which works properly with
source filters.

Variables defined outside the C<eval> and used inside it retain their
original UTF-8ness.
Everything inside the string follows the normal rules for a Perl program
with the given state of S<C<use utf8>>.

=item Outside the C<"unicode_eval"> feature

In this case, the behavior is problematic and is not so easily
described.  Here are two bugs that cannot easily be fixed without
breaking existing programs:

=over 4

=item *

It can lose track of whether something should be encoded as UTF-8 or
not.

=item *

Source filters activated within C<eval> leak out into whichever file
scope is currently being compiled.  To give an example with the CPAN
module L<Semi::Semicolons>:

    BEGIN { eval "use Semi::Semicolons; # not filtered" }
    # filtered here!

L<C<evalbytes>|/evalbytes EXPR> fixes that to work the way one would
expect:

    use feature "evalbytes";
    BEGIN { evalbytes "use Semi::Semicolons; # filtered" }
    # not filtered

=back

=back

Problems can arise if the string expands a scalar containing a floating
point number.  That scalar can expand to letters, such as C<"NaN"> or
C<"Infinity">; or, within the scope of a S<C<use locale>>, the decimal
point character may be something other than a dot (such as a comma), so
the string may not parse as you expect.

You should be especially careful to remember what's being looked at
when:

    eval $x;        # CASE 1
    eval "$x";      # CASE 2

    eval '$x';      # CASE 3
    eval { $x };    # CASE 4

    eval "\$$x++";  # CASE 5
    $$x++;          # CASE 6

Cases 1 and 2 above behave identically: they run the code contained in
the variable $x.  Cases 3 and 4 likewise behave in the same way: they
run the code C<'$x'>, which does nothing but return the value of $x.
(Case 4 is preferred for purely visual reasons, but it also has the
advantage of compiling at compile-time instead of at run-time.)  Case 5
is a place where normally you I<would> like to use double quotes, except
that in this particular situation, you can just use symbolic references
instead, as in case 6.

An C<eval ''> executed within a subroutine defined in the C<DB> package
doesn't see the usual surrounding lexical scope, but rather the scope of
the first non-DB piece of code that called it; you don't normally need
to worry about this unless you are writing a Perl debugger.

=item Block eval

If the code to be executed doesn't vary, you may use the eval-BLOCK form
to trap run-time errors without incurring the penalty of recompiling
each time.  The error, if any, is still returned in
L<C<$@>|perlvar/$@>.  If you want to trap errors when loading an XS
module, some problems with the binary interface (such as Perl version
skew) may be fatal even with C<eval> unless
C<$ENV{PERL_DL_NONLAZY}> is set.  See
L<perlrun|perlrun/PERL_DL_NONLAZY>.

Using the C<eval {}> form as an exception trap in libraries does have
some issues.  Due to the current arguably broken state of
C<__DIE__> hooks, you may wish not to trigger any C<__DIE__> hooks that
user code may have installed.
You can use the C<local $SIG{__DIE__}> construct for this purpose, as
this example shows:

    # a private exception trap for divide-by-zero
    eval { local $SIG{'__DIE__'}; $answer = $a / $b; };
    warn $@ if $@;

This is especially significant, given that C<__DIE__> hooks can call
L<C<die>|/die LIST>.

C<eval BLOCK> does I<not> count as a loop, so the loop control
statements L<C<next>|/next LABEL>, L<C<last>|/last LABEL>, or
L<C<redo>|/redo LABEL> cannot be used to leave or restart the block.

The final semicolon, if any, may be omitted from within the BLOCK.

=back

=item evalbytes EXPR
X<evalbytes>

=item evalbytes

=for Pod::Functions +evalbytes similar to string eval, but intend to parse a bytestream

This function is similar to a L<string eval|/eval EXPR>, except it
always parses its argument (or L<C<$_>|perlvar/$_> if EXPR is omitted)
as a string of independent bytes.

If called when S<C<use utf8>> is in effect, the string will be assumed
to be encoded in UTF-8, and C<evalbytes> will make a temporary copy to
work from, downgraded to non-UTF-8.  If this is not possible (because
one or more characters in it require UTF-8), the C<evalbytes> will fail
with the error stored in C<$@>.

Bytes that correspond to ASCII-range code points will have their normal
meanings for operators in the string.  The treatment of the other bytes
depends on if the
L<C<"unicode_strings"> feature|feature/The 'unicode_strings' feature> is
in effect.

Of course, variables that are UTF-8 and are referred to in the string
retain that.

This function is only available when the
L<C<"evalbytes"> feature|feature/The 'unicode_eval' and 'evalbytes' features>
is enabled.  This is enabled automatically with a C<use v5.16> (or
higher) declaration in the current scope.

=item exec LIST
X<exec> X<execute>

=item exec PROGRAM LIST

=for Pod::Functions abandon this program to run another

The L<C<exec>|/exec LIST> function executes a system command I<and never
returns>; use L<C<system>|/system LIST> instead of
L<C<exec>|/exec LIST> if you want it to return.
It fails and returns false only if the command does not exist I<and> it
is executed directly instead of via your system's command shell (see
below).

Since it's a common mistake to use L<C<exec>|/exec LIST> instead of
L<C<system>|/system LIST>, Perl warns you if L<C<exec>|/exec LIST> is
called in void context and if there is a following statement that isn't
L<C<die>|/die LIST>, L<C<warn>|/warn LIST>, or
L<C<exit>|/exit EXPR> (if L<warnings> are enabled--but you always do
that, right?).  If you I<really> want to follow an
L<C<exec>|/exec LIST> with some other statement, you can use one of
these styles to avoid the warning:

    exec ('foo')   or print STDERR "couldn't exec foo: $!";
    { exec ('foo') }; print STDERR "couldn't exec foo: $!";

If there is more than one argument in LIST, this calls L<execvp(3)> with
the arguments in LIST.  If there is only one element in LIST, the
argument is checked for shell metacharacters, and if there are any, the
entire argument is passed to the system's command shell for parsing
(this is C</bin/sh -c> on Unix platforms, but varies on other
platforms).  If there are no shell metacharacters in the argument, it is
split into words and passed directly to C<execvp>, which is more
efficient.  When the arguments get executed via the system shell,
results are subject to its quirks and capabilities.  See
L<perlop/"`STRING`"> for details.

Using an indirect object with L<C<exec>|/exec LIST> or
L<C<system>|/system LIST> is also more secure.  This usage (which also
works fine with L<C<system>|/system LIST>) forces interpretation of the
arguments as a multivalued list, even if the list had just one argument.
That way you're safe from the shell expanding wildcards or splitting up
words with whitespace in them.

    my @args = ( "echo surprise" );

    exec @args;               # subject to shell escapes
                              # if @args == 1
    exec { $args[0] } @args;  # safe even with one-arg list

The first version, the one without the indirect object, ran the
I<echo> program, passing it C<"surprise"> an argument.  The second
version didn't; it tried to run a program named I<"echo surprise">,
didn't find it, and set L<C<$?>|perlvar/$?> to a non-zero value
indicating failure.

On Windows, only the C<exec PROGRAM LIST> indirect object syntax will
reliably avoid using the shell; C<exec LIST>, even with more than one
element, will fall back to the shell if the first spawn fails.

Perl attempts to flush all files opened for output before the exec, but
this may not be supported on some platforms (see L<perlport>).  To be
safe, you may need to set L<C<$E<verbar>>|perlvar/$E<verbar>>
(C<$AUTOFLUSH> in L<English>) or call the C<autoflush> method of
L<C<IO::Handle>|IO::Handle> on any open handles to avoid lost output.

Note that L<C<exec>|/exec LIST> will not call your C<END> blocks, nor
will it invoke C<DESTROY> methods on your objects.

Portability issues: L<perlport/exec>.

=item exists EXPR
X<exists> X<autovivification>

=for Pod::Functions test whether a hash key is present

Given an expression that specifies an element of a hash, returns true if
the specified element in the hash has ever been initialized, even if the
corresponding value is undefined.  L<C<exists>|/exists EXPR> may also be
called on array elements, but its behavior is much less obvious and is
strongly tied to the use of L<C<delete>|/delete EXPR> on arrays.
B<WARNING:> Calling L<C<exists>|/exists EXPR> on array values is
strongly discouraged.  The notion of deleting or checking the existence
of Perl array elements is not conceptually coherent, and can lead to
surprising behavior.

Given an expression that specifies the name of a subroutine, returns
true if the specified subroutine has ever been declared, even if it is
undefined.  A subroutine that does not exist may still be callable: its
package may have an C<AUTOLOAD> method that makes it spring into
existence the first time that it is called; see L<perlsub>.

Although the most deeply nested array or hash element will not spring
into existence just because its existence was tested, any intervening
ones will.  Thus C<< $ref->{"A"} >> and C<< $ref->{"A"}->{"B"} >> will
spring into existence due to an existence test for a deeper element such
as C<< $ref->{"A"}{"B"}{"C"} >>.

Use of a subroutine call, rather than a subroutine name, as an argument
to L<C<exists>|/exists EXPR> is an error.

    exists &sub;    # OK
    exists &sub();  # Error

=item exit EXPR
X<exit> X<terminate> X<abort>

=item exit

=for Pod::Functions terminate this program

Evaluates EXPR and exits immediately with that value.  Example:

    my $ans = <STDIN>;
    exit 0 if $ans =~ /^[Xx]/;

See also L<C<die>|/die LIST>.  If EXPR is omitted, exits with C<0>
status.  The only universally recognized values for EXPR are C<0> for
success and C<1> for error; other values are subject to interpretation
depending on the environment in which the Perl program is running.  For
example, exiting 69 (EX_UNAVAILABLE) from a I<sendmail> incoming-mail
filter will cause the mailer to return the item undelivered, but that's
not true everywhere.

Don't use L<C<exit>|/exit EXPR> to abort a subroutine if there's any
chance that someone might want to trap whatever error happened.  Use
L<C<die>|/die LIST> instead, which can be trapped by an
L<C<eval>|/eval EXPR>.

The L<C<exit>|/exit EXPR> function does not always exit immediately.  It
calls any defined C<END> routines first, but these C<END> routines may
not themselves abort the exit.  Likewise any object destructors that
need to be called are called before the real exit.  C<END> routines and
destructors can change the exit status by modifying
L<C<$?>|perlvar/$?>.  If this is a problem, you can call
L<C<POSIX::_exit($status)>|POSIX/C<_exit>> to avoid C<END> and
destructor processing.  See L<perlmod> for details.

Portability issues: L<perlport/exit>.

=item exp EXPR
X<exp> X<exponential> X<antilog> X<antilogarithm> X<e>

=item exp

=for Pod::Functions raise I<e> to a power

Returns I<e> (the natural logarithm base) to the power of EXPR.  If EXPR
is omitted, gives C<exp($_)>.
=item fc EXPR
X<fc> X<foldcase> X<casefold> X<fold-case> X<case-fold>

=item fc

=for Pod::Functions +fc return casefolded version of a string

Returns the casefolded version of EXPR.  This is the internal function
implementing the C<\F> escape in double-quoted strings.  See
L<Unicode::UCD/B<casefold()>> for access to Unicode's casefolding
definitions.  If EXPR is omitted, uses L<C<$_>|perlvar/$_>.

This function behaves the same way under various pragmas, such as within
L<S<C<"use feature 'unicode_strings">>|feature/The 'unicode_strings' feature>,
as L<C<lc>|/lc EXPR> does, with the single exception of
L<C<fc>|/fc EXPR> of I<LATIN CAPITAL LETTER SHARP S> (U+1E9E) within the
scope of L<S<C<use locale>>|locale>.  The foldcase of this character
would normally be C<"ss">, but as explained in the
L<C<lc>|/lc EXPR> section, case changes that cross the 255/256 boundary
are problematic under locales, and are hence prohibited.  Therefore,
this function under locale returns instead the string
C<"\x{17F}\x{17F}">, which is the I<LATIN SMALL LETTER LONG S>.  Since
that character itself folds to C<"s">, the result is still comparable
with strings containing the expected foldcase.  If different casefolding
behavior is needed, the CPAN module
L<C<Unicode::Casing>|Unicode::Casing> may be used to provide an
implementation.

=item fcntl FILEHANDLE,FUNCTION,SCALAR
X<fcntl>

=for Pod::Functions file control system call

Implements the L<fcntl(2)> function.  You'll probably have to say

    use Fcntl;

first to get the correct constant definitions.  Argument processing and
value returned work just like
L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR> below.  For example:

    use Fcntl;
    my $flags = fcntl($filehandle, F_GETFL, 0)
        or die "Can't fcntl F_GETFL: $!";

You don't have to check for L<C<defined>|/defined EXPR> on the return
from L<C<fcntl>|/fcntl FILEHANDLE,FUNCTION,SCALAR>.  Like
L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR>, it maps a C<0> return
from the system call into C<"0 but true"> in Perl.  This string is true
in boolean context and C<0> in numeric context.  It is also exempt from
the normal L<C<Argument "..."
isn't numeric>|perldiag/Argument "%s" isn't numeric%s>
L<warnings> on improper numeric conversions.

Note that L<C<fcntl>|/fcntl FILEHANDLE,FUNCTION,SCALAR> raises an
exception if used on a machine that doesn't implement L<fcntl(2)>.  See
the L<Fcntl> module or your L<fcntl(2)> manpage to learn what functions
are available on your system.

Here's an example of setting a filehandle named C<$REMOTE> to be
non-blocking at the system level.  You'll have to negotiate
L<C<$E<verbar>>|perlvar/$E<verbar>> on your own, though.

    use Fcntl qw(F_GETFL F_SETFL O_NONBLOCK);

    my $flags = fcntl($REMOTE, F_GETFL, 0)
        or die "Can't get flags for the socket: $!\n";

    $flags = fcntl($REMOTE, F_SETFL, $flags | O_NONBLOCK)
        or die "Can't set flags for the socket: $!\n";

Portability issues: L<perlport/fcntl>.

=item __FILE__
X<__FILE__>

=for Pod::Functions the name of the current source file

A special token that returns the name of the file in which it occurs.
It can be altered by the mechanism described at
L<perlsyn/"Plain Old Comments (Not!)">.

=item fileno FILEHANDLE
X<fileno>

=item fileno DIRHANDLE

=for Pod::Functions return file descriptor from filehandle

Returns the file descriptor for a filehandle or directory handle, or
undefined if the filehandle is not open.  If there is no real file
descriptor at the OS level, as can happen with filehandles connected to
memory objects via L<C<open>|/open FILEHANDLE,MODE,EXPR> with a
reference for the third argument, -1 is returned.

This is mainly useful for constructing bitmaps for
L<C<select>|/select RBITS,WBITS,EBITS,TIMEOUT> and low-level POSIX
tty-handling operations.  If FILEHANDLE is an expression, the value is
taken as an indirect filehandle, generally its name.

The behavior of L<C<fileno>|/fileno FILEHANDLE> on a directory handle
depends on the operating system.  On a system with L<dirfd(3)> or
similar, L<C<fileno>|/fileno FILEHANDLE> on a directory handle returns
the underlying file descriptor associated with the handle; on systems
with no such support, it returns the undefined value, and sets
L<C<$!>|perlvar/$!> (errno).

=item flock FILEHANDLE,OPERATION
X<flock> X<lock> X<locking>

=for Pod::Functions lock an entire file with an advisory lock

Calls L<flock(2)>, or an emulation of it, on FILEHANDLE.  Returns true
for success, false on failure.  Produces a fatal error if used on a
machine that doesn't implement L<flock(2)>, L<fcntl(2)> locking, or
L<lockf(3)>.
L<C<flock>|/flock FILEHANDLE,OPERATION> is Perl's portable file-locking
interface, although it locks entire files only, not records.

Two potentially non-obvious but traditional
L<C<flock>|/flock FILEHANDLE,OPERATION> semantics are that it waits
indefinitely until the lock is granted, and that its locks are B<merely
advisory>. Such discretionary locks are more flexible, but offer fewer
guarantees. This means that programs that do not also use
L<C<flock>|/flock FILEHANDLE,OPERATION> may modify files locked with
L<C<flock>|/flock FILEHANDLE,OPERATION>. See L<perlport/flock>, your port's
specific documentation, and your system-specific local manpages for details.

OPERATION is one of LOCK_SH, LOCK_EX, or LOCK_UN, possibly combined with
LOCK_NB. You can use the symbolic names if you import them from the L<Fcntl>
module, either individually, or as a group using the C<:flock> tag. LOCK_SH
requests a shared lock, LOCK_EX requests an exclusive lock, and LOCK_UN
releases a previously requested lock. If LOCK_NB is bitwise-or'ed with
LOCK_SH or LOCK_EX, then L<C<flock>|/flock FILEHANDLE,OPERATION> returns
immediately rather than blocking waiting for the lock; check the return
status to see if you got it.

To avoid the possibility of miscoordination, Perl now flushes FILEHANDLE
before locking or unlocking it.

Note that the emulation built with L<lockf(3)> doesn't provide shared locks,
and it requires that FILEHANDLE be open with write intent. These are the
semantics that L<lockf(3)> implements. Most if not all systems implement
L<lockf(3)> in terms of L<fcntl(2)> locking, though, so the differing
semantics shouldn't bite too many people.

Note that the L<fcntl(2)> emulation of L<flock(3)> requires that FILEHANDLE
be open with read intent to use LOCK_SH and requires that it be open with
write intent to use LOCK_EX.

Note also that some versions of L<C<flock>|/flock FILEHANDLE,OPERATION>
cannot lock things over the network; you would need to use the more
system-specific L<C<fcntl>|/fcntl FILEHANDLE,FUNCTION,SCALAR> for that.
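The non-blocking behavior described above can be sketched as follows; the
lock-file path is illustrative only:

    use Fcntl qw(:flock);

    # Open (or create) a scratch lock file; the path is just for illustration.
    open(my $fh, ">>", "/tmp/flock-demo.lock") or die "Can't open: $!";

    # LOCK_NB makes flock return immediately instead of waiting.
    if (flock($fh, LOCK_EX | LOCK_NB)) {
        print "got the exclusive lock\n";
        flock($fh, LOCK_UN);   # release the advisory lock
    }
    else {
        print "lock is held elsewhere: $!\n";
    }

Because the lock is advisory, a process that never calls
L<C<flock>|/flock FILEHANDLE,OPERATION> can still write to the file.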
If you like you can force Perl to ignore your system's L<flock(2)> function,
and so provide its own L<fcntl(2)>-based emulation, by passing the switch
C<-Ud_flock> to the F<Configure> program when you configure and build a new
Perl.

Here's a mailbox appender for BSD systems:

    use Fcntl qw(:flock SEEK_END);

    sub lock {
        my ($fh) = @_;
        flock($fh, LOCK_EX) or die "Cannot lock mailbox - $!\n";
        # and, in case we're running on a very old UNIX
        # variant without the modern O_APPEND semantics...
        seek($fh, 0, SEEK_END) or die "Cannot seek - $!\n";
    }

    sub unlock {
        my ($fh) = @_;
        flock($fh, LOCK_UN) or die "Cannot unlock mailbox - $!\n";
    }

    open(my $mbox, ">>", "/usr/spool/mail/$ENV{'USER'}")
        or die "Can't open mailbox: $!";

    lock($mbox);
    print $mbox $msg, "\n\n";
    unlock($mbox);

On systems that support a real L<flock(2)>, locks are inherited across
L<C<fork>|/fork> calls, whereas those that must resort to the more capricious
L<fcntl(2)> function lose their locks, making it seriously harder to write
servers.

See also L<DB_File> for other L<C<flock>|/flock FILEHANDLE,OPERATION>
examples.

Portability issues: L<perlport/flock>.

=item fork
X<fork> X<child> X<parent>

=for Pod::Functions create a new process just like this one

Does a L<fork(2)> system call to create a new process running the same
program at the same point. It returns the child pid to the parent process,
C<0> to the child process, or L<C<undef>|/undef EXPR> if the fork is
unsuccessful. File descriptors (and sometimes locks on those descriptors) are
shared, while everything else is copied.

Perl attempts to flush all files opened for output before forking the child
process, but this may not be supported on some platforms (see L<perlport>).
To be safe, you may need to set L<C<$E<verbar>>|perlvar/$E<verbar>> or call
the C<autoflush> method of L<C<IO::Handle>|IO::Handle> on any open handles to
avoid duplicate output.

If you L<C<fork>|/fork> without ever waiting on your children, you will
accumulate zombies. On some systems, you can avoid this by setting
L<C<$SIG{CHLD}>|perlvar/%SIG> to C<"IGNORE">. See also L<perlipc> for more
examples of forking and reaping moribund children. Note that if your forked
child inherits system file descriptors like STDIN and STDOUT that are
actually connected to a pipe or socket, the remote end won't think you're
done even if you exit; you should reopen those to F</dev/null> if it's any
issue.

On some platforms such as Windows, where the L<fork(2)> system call is not
available, Perl can be built to emulate L<C<fork>|/fork> in the Perl
interpreter. The emulation is designed, at the level of the Perl program, to
be as compatible as possible with the "Unix" L<fork(2)>. However it has
limitations that have to be considered in code intended to be portable. See
L<perlfork> for more details.

Portability issues: L<perlport/fork>.

=item format
X<format>

=for Pod::Functions declare a picture format with use by the write() function

Declare a picture format for use by the L<C<write>|/write FILEHANDLE>
function. For example:

    format Something =
        Test: @<<<<<<<< @||||| @>>>>>
              $str,     $%,    '$' . int($num)
    .

=item formline PICTURE,LIST
X<formline>

=for Pod::Functions internal function used for formats
Note that a format typically does one L<C<formline>|/formline PICTURE,LIST>
per line of form, but the L<C<formline>|/formline PICTURE,LIST> function
itself doesn't care how many newlines are embedded in the PICTURE. This means
that the C<~> and C<~~> tokens treat the entire PICTURE as a single line. You
may therefore need to use multiple formlines to implement a single record
format, just like the L<C<format>|/format> compiler.

Be careful if you put double quotes around the picture, because an C<@>
character may be taken to mean the beginning of an array name.
L<C<formline>|/formline PICTURE,LIST> always returns true. See L<perlform>
for other examples.

If you are trying to use this instead of L<C<write>|/write FILEHANDLE> to
capture the output, you may find it easier to open a filehandle to a scalar
(C<< open my $fh, ">", \$output >>) and write to that instead.

=item getc FILEHANDLE
X<getc> X<getchar> X<character> X<file, read>

=item getc

=for Pod::Functions get the next character from the filehandle

Returns the next character from the input file attached to FILEHANDLE, or the
undefined value at end of file or if there was an error (in the latter case
L<C<$!>|perlvar/$!> is set). Determination of whether C<$BSD_STYLE> should be
set in the classic single-character-input examples is left as an exercise to
the reader. The L<C<POSIX::getattr>|POSIX/C<getattr>> function can do this
more portably on systems purporting POSIX compliance. See also the
L<C<Term::ReadKey>|Term::ReadKey> module on CPAN.

=item getlogin
X<getlogin> X<login>

=for Pod::Functions return who logged in at this tty

This implements the C library function of the same name, which on most
systems returns the current login from F</etc/utmp>, if any. If it returns
the empty string, use L<C<getpwuid>|/getpwuid UID>.

    my $login = getlogin || getpwuid($<) || "Kilroy";

Do not consider L<C<getlogin>|/getlogin> for authentication: it is not as
secure as L<C<getpwuid>|/getpwuid UID>.

Portability issues: L<perlport/getlogin>.
=item getpeername SOCKET
X<getpeername> X<peer>

=for Pod::Functions find the other end of a socket connection

Returns the packed sockaddr address of the other end of the SOCKET
connection.

    use Socket;
    my $hersockaddr    = getpeername($sock);
    my ($port, $iaddr) = sockaddr_in($hersockaddr);
    my $herhostname    = gethostbyaddr($iaddr, AF_INET);
    my $herstraddr     = inet_ntoa($iaddr);

=item getpgrp PID
X<getpgrp> X<group>

=for Pod::Functions get process group

Returns the current process group for the specified PID. Use a PID of C<0> to
get the current process group for the current process. Will raise an
exception if used on a machine that doesn't implement L<getpgrp(2)>. If PID
is omitted, returns the process group of the current process. Note that the
POSIX version of L<C<getpgrp>|/getpgrp PID> does not accept a PID argument,
so only C<PID==0> is truly portable.

Portability issues: L<perlport/getpgrp>.

=item getppid
X<getppid> X<parent> X<pid>

=for Pod::Functions get parent process ID

Returns the process id of the parent process; see L<$$|perlvar/$$> for
related details.

Portability issues: L<perlport/getppid>.

=item getpriority WHICH,WHO
X<getpriority> X<priority> X<nice>

=for Pod::Functions get current nice value

Returns the current priority for a process, a process group, or a user. (See
L<getpriority(2)>.) Will raise a fatal exception if used on a machine that
doesn't implement L<getpriority(2)>.

C<WHICH> can be any of C<PRIO_PROCESS>, C<PRIO_PGRP> or C<PRIO_USER> imported
from L<POSIX/RESOURCE CONSTANTS>.

Portability issues: L<perlport/getpriority>.
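A minimal sketch tying the last two entries together; C<0> means "the current
process" for both calls:

    use POSIX qw(PRIO_PROCESS);

    # Query this process's nice value and process group.
    my $prio = getpriority(PRIO_PROCESS, 0);
    my $pgrp = getpgrp(0);

    printf "nice value: %d, process group: %d\n", $prio, $pgrp;

On systems without L<getpriority(2)> or L<getpgrp(2)> these calls raise a
fatal exception, as noted above.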
=item getpwnam NAME X<getpwnam> X<getgrnam> X<gethostbyname> X<getnetbyname> X<getprotobyname> X<getpwuid> X<getgrgid> X<getservbyname> X<gethostbyaddr> X<getnetbyaddr> X<getprotobynumber> X<getservbyport> X<getpwent> X<getgrent> X<gethostent> X<getnetent> X<getprotoent> X<getservent> X<setpwent> X<setgrent> X<sethostent> X<setnetent> X<setprotoent> X<setservent> X<endpwent> X<endgrent> X<endhostent> X<endnetent> X<endprotoent> X<endservent> =for Pod::Functions get passwd record given user login name =item getgrnam NAME =for Pod::Functions get group record given group name =item gethostbyname NAME =for Pod::Functions get host record given name =item getnetbyname NAME =for Pod::Functions get networks record given name =item getprotobyname NAME =for Pod::Functions get protocol record given name =item getpwuid UID =for Pod::Functions get passwd record given user ID =item getgrgid GID =for Pod::Functions get group record given group user ID =item getservbyname NAME,PROTO =for Pod::Functions get services record given its name =item gethostbyaddr ADDR,ADDRTYPE =for Pod::Functions get host record given its address =item getnetbyaddr ADDR,ADDRTYPE =for Pod::Functions get network record given its address =item getprotobynumber NUMBER =for Pod::Functions get protocol record numeric protocol =item getservbyport PORT,PROTO =for Pod::Functions get services record given numeric port =item getpwent =for Pod::Functions get next passwd record =item getgrent =for Pod::Functions get next group record =item gethostent =for Pod::Functions get next hosts record =item getnetent =for Pod::Functions get next networks record =item getprotoent =for Pod::Functions get next protocols record =item getservent =for Pod::Functions get next services record =item setpwent =for Pod::Functions prepare passwd file for use =item setgrent =for Pod::Functions prepare group file for use =item sethostent STAYOPEN =for Pod::Functions prepare hosts file for use =item setnetent STAYOPEN =for Pod::Functions 
prepare networks file for use

=item setprotoent STAYOPEN

=for Pod::Functions prepare protocols file for use

=item setservent STAYOPEN

=for Pod::Functions prepare services file for use

=item endpwent

=for Pod::Functions be done using passwd file

=item endgrent

=for Pod::Functions be done using group file

=item endhostent

=for Pod::Functions be done using hosts file

=item endnetent

=for Pod::Functions be done using networks file

=item endprotoent

=for Pod::Functions be done using protocols file

=item endservent

=for Pod::Functions be done using services file

These routines are the same as their counterparts in the system C library. In
list context, the return values from the various get routines are as follows:

    # 0        1          2           3         4
    my ( $name,   $passwd,   $gid,       $members  ) = getgr*
    my ( $name,   $aliases,  $addrtype,  $net      ) = getnet*
    my ( $name,   $aliases,  $port,      $proto    ) = getserv*
    my ( $name,   $aliases,  $proto                ) = getproto*
    my ( $name,   $aliases,  $addrtype,  $length,  @addrs ) = gethost*
    my ( $name,   $passwd,   $uid,       $gid,     $quota,
         $comment,  $gcos,     $dir,       $shell,   $expire ) = getpw*

Within the C<getpw*()> entries, the C<$quota>, C<$comment>, and C<$expire>
fields are unsupported on many systems; consult L<getpwnam(3)> and your
system's F<pwd.h> file. You can also find out from within Perl what your
$quota and $comment fields mean and whether you have the $expire field by
using the L<C<Config>|Config> module and the values C<d_pwquota>, C<d_pwage>,
C<d_pwchange>, C<d_pwcomment>, and C<d_pwexpire>.

Shadow password files are supported only if your vendor has implemented them
in the intuitive fashion that calling the regular C library routines gets the
shadow versions if you're running under privilege or if there exist the
L<shadow(3)> functions as found in System V (this includes Solaris and
Linux). Those systems that implement a proprietary shadow password facility
are unlikely to be supported.

The $members value returned by I<getgr*()> is a space-separated list of the
login names of the members of the group.

For the I<gethost*()> functions, if the C<h_errno> variable is supported in
C, it will be returned to you via L<C<$?>|perlvar/$?> if the function call
fails. Be sure that L<C<gethostbyname>|/gethostbyname NAME> is called in
SCALAR context and that its return value is checked for definedness.
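For example, the C<getpw*> list layout above can be used like this (only the
first few fields are portable everywhere):

    # Look up the current real user by uid; list context yields the
    # full passwd entry, scalar context just the login name.
    my ($name, $passwd, $uid, $gid) = getpwuid($<);
    my $login = getpwuid($<);          # scalar context

    print "$login has uid $uid and gid $gid\n";

This assumes a system with a working passwd database; on platforms without
one, the C<getpw*> functions are unimplemented (see L<perlport/getpwnam>).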
The L<C<getprotobynumber>|/getprotobynumber NUMBER> function, even though it
only takes one argument, has the precedence of a list operator, so beware.

If you get tired of remembering which element of the return list contains
which return value, by-name interfaces are provided in standard modules:
L<C<File::stat>|File::stat>, L<C<Net::hostent>|Net::hostent>,
L<C<Net::netent>|Net::netent>, L<C<Net::protoent>|Net::protoent>,
L<C<Net::servent>|Net::servent>, L<C<Time::gmtime>|Time::gmtime>,
L<C<Time::localtime>|Time::localtime>, L<C<User::grent>|User::grent>, and
L<C<User::pwent>|User::pwent>. These override the normal built-ins, supplying
versions that return objects with the appropriate names for each field. For
example:

    use File::stat;
    use User::pwent;
    my $is_his = (stat($filename)->uid == pwent($<)->uid);

Even though it looks as though they're the same method calls (uid), they
aren't, because a C<File::stat> object is different from a C<User::pwent>
object.

Many of these functions are not safe in a multi-threaded environment where
more than one thread can be using them. In particular, functions like
C<getpwent()> iterate per-process and not per-thread, so if two threads are
simultaneously iterating, neither will get all the records.

Some systems have thread-safe versions of some of the functions, such as
C<getpwnam_r()> instead of C<getpwnam()>. On such systems, Perl automatically
and invisibly substitutes the thread-safe version, without notice. This means
that code that safely runs on some systems can fail on others that lack the
thread-safe versions.

Portability issues: L<perlport/getpwnam> to L<perlport/endservent>.

=item getsockname SOCKET
X<getsockname>

=for Pod::Functions retrieve the sockaddr for a given socket

Returns the packed sockaddr address of this end of the SOCKET connection, in
case you don't know the address because you have several different IPs that
the connection might have come in on.

    use Socket;
    my $mysockaddr = getsockname($sock);
    my ($port, $myaddr) = sockaddr_in($mysockaddr);
    printf "Connect to %s [%s]\n",
        scalar gethostbyaddr($myaddr, AF_INET),
        inet_ntoa($myaddr);

=item getsockopt SOCKET,LEVEL,OPTNAME
X<getsockopt>

=for Pod::Functions get socket options on a given socket

Queries the option named OPTNAME associated with SOCKET at a given LEVEL.
Options may exist at multiple protocol levels depending on the socket type,
but at least the uppermost socket level SOL_SOCKET (defined in the
L<C<Socket>|Socket> module) will exist. To query options at another level the
protocol number of the appropriate protocol controlling the option should be
supplied. For example, to indicate that an option is to be interpreted by the
TCP protocol, LEVEL should be set to the protocol number of TCP, which you
can get using L<C<getprotobyname>|/getprotobyname NAME>.
The function returns a packed string representing the requested socket
option, or L<C<undef>|/undef EXPR> on error, with the reason for the error
placed in L<C<$!>|perlvar/$!>. Just what is in the packed string depends on
LEVEL and OPTNAME; consult L<getsockopt(2)> for details. A common case is
that the option is an integer, in which case the result is a packed integer,
which you can decode using L<C<unpack>|/unpack TEMPLATE,EXPR> with the C<i>
(or C<I>) format.

Portability issues: L<perlport/getsockopt>.

=item glob EXPR
X<glob> X<wildcard> X<filename, expansion> X<expand>

=item glob

=for Pod::Functions expand filenames using wildcards

In list context, returns a (possibly empty) list of filename expansions on
the value of EXPR such as the standard Unix shell F</bin/csh> would do. In
scalar context, glob iterates through such filename expansions, returning
undef when the list is exhausted. This is the internal function implementing
the C<< <*.c> >> operator, but you can use it directly. If EXPR is omitted,
L<C<$_>|perlvar/$_> is used. The C<< <*.c> >> operator is discussed in more
detail in L<perlop/"I/O Operators">.

Note that L<C<glob>|/glob EXPR> splits its arguments on whitespace and treats
each segment as a separate pattern. As such, C<glob("*.c *.h")> matches all
files with a F<.c> or F<.h> extension. The expression C<glob(".* *")> matches
all files in the current working directory. If you want to glob filenames
that might contain whitespace, you'll have to use extra quotes around the
spacey filename to protect it. For example, to glob filenames that have an
C<e> followed by a space followed by an C<f>, use one of:

    my @spacies = <"*e f*">;
    my @spacies = glob '"*e f*"';
    my @spacies = glob q("*e f*");

If non-empty braces are the only wildcard characters used in the
L<C<glob>|/glob EXPR>, no filenames are matched, but potentially many strings
are returned. For example, this produces nine strings, one for each pairing
of fruits and colors:

    my @many = glob "{apple,tomato,cherry}={green,yellow,red}";

This operator is implemented using the standard C<File::Glob> extension.
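The whitespace-splitting and brace-expansion behaviors just described can be
demonstrated in a scratch directory (paths are illustrative):

    use File::Temp qw(tempdir);

    my $dir = tempdir(CLEANUP => 1);
    for my $f (qw(a.c b.c c.h)) {
        open my $fh, ">", "$dir/$f" or die "create $f: $!";
        close $fh;
    }

    my @c_files = glob "$dir/*.c";           # matches a.c and b.c
    my @sources = glob "$dir/*.c $dir/*.h";  # two patterns, three matches
    my @strings = glob "{x,y}.{1,2}";        # pure brace expansion: four strings

The last call returns four strings whether or not such files exist, since
braces are its only wildcard characters.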
See L<File::Glob> for details, including L<C<bsd_glob>|File::Glob/C<bsd_glob>>, which does not treat whitespace as a pattern separator. If a C<glob> expression is used as the condition of a C<while> or C<for> loop, then it will be implicitly assigned to C<$_>. If either a C<glob> expression or an explicit assignment of a C<glob> expression to a scalar is used as a C<while>/C<for> condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value. Portability issues: L<perlport/glob>. =item gmtime EXPR X<gmtime> X<UTC> X<Greenwich> =item gmtime =for Pod::Functions convert UNIX time into record or string using Greenwich time Works just like L<C<localtime>|/localtime EXPR> but the returned values are localized for the standard Greenwich time zone. Note: When called in list context, $isdst, the last value returned by gmtime, is always C<0>. There is no Daylight Saving Time in GMT. Portability issues: L<perlport/gmtime>. =item goto LABEL X<goto> X<jump> X<jmp> =item goto EXPR =item goto &NAME =for Pod::Functions create spaghetti code The C<goto LABEL> form finds the statement labeled with LABEL and resumes execution there. It can't be used to get out of a block or subroutine given to L<C<sort>|/sort SUBNAME LIST>. It can be used to go almost anywhere else within the dynamic scope, including out of subroutines, but it's usually better to use some other construct such as L<C<last>|/last LABEL> or L<C<die>|/die LIST>. The author of Perl has never felt the need to use this form of L<C<goto>|/goto LABEL> (in Perl, that is; C is another matter). (The difference is that C does not offer named loops combined with loop control. Perl does, and this replaces most structured uses of L<C<goto>|/goto LABEL> in other languages.) The C<goto EXPR> form expects to evaluate C<EXPR> to a code reference or a label name. If it evaluates to a code reference, it will be handled like C<goto &NAME>, below. 
This is especially useful for implementing tail recursion via C<goto __SUB__>. If the expression evaluates to a label name, its scope will be resolved dynamically. This allows for computed L<C<goto>|/goto LABEL>s per FORTRAN, but isn't necessarily recommended if you're optimizing for maintainability: goto ("FOO", "BAR", "GLARCH")[$i]; As shown in this example, C<goto EXPR> is exempt from the "looks like a function" rule. A pair of parentheses following it does not (necessarily) delimit its argument. C<goto("NE")."XT"> is equivalent to C<goto NEXT>. Also, unlike most named operators, this has the same precedence as assignment. Use of C<goto LABEL> or C<goto EXPR> to jump into a construct is deprecated and will issue a warning. Even then, it may not be used to go into any construct that requires initialization, such as a subroutine, a C<foreach> loop, or a C<given> block. In general, it may not be used to jump into the parameter of a binary or list operator, but it may be used to jump into the I<first> parameter of a binary operator. (The C<=> assignment operator's "first" operand is its right-hand operand.) It also can't be used to go into a construct that is optimized away. The C<goto &NAME> form is quite different from the other forms of L<C<goto>|/goto LABEL>. In fact, it isn't a goto in the normal sense at all, and doesn't have the stigma associated with other gotos. Instead, it exits the current subroutine (losing any changes set by L<C<local>|/local EXPR>) and immediately calls in its place the named subroutine using the current value of L<C<@_>|perlvar/@_>. This is used by C<AUTOLOAD> subroutines that wish to load another subroutine and then pretend that the other subroutine had been called in the first place (except that any modifications to L<C<@_>|perlvar/@_> in the current subroutine are propagated to the other subroutine.) After the L<C<goto>|/goto LABEL>, not even L<C<caller>|/caller EXPR> will be able to tell that this routine was called first. 
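A minimal sketch of C<goto &NAME> (the subroutine names are invented for
illustration): the dispatcher's frame is replaced, and the current C<@_> is
handed to the substituted subroutine:

    sub real_work {
        return "args: @_";
    }

    sub dispatcher {
        # Anything pushed onto @_ here is seen by real_work.
        push @_, "extra";
        goto &real_work;      # replaces this call frame entirely
    }

    print dispatcher("x"), "\n";   # prints "args: x extra"

From the caller's point of view, C<real_work> appears to have been called
directly, which is exactly the property C<AUTOLOAD> subroutines rely on.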
NAME needn't be the name of a subroutine; it can be a scalar variable
containing a code reference or a block that evaluates to a code reference.

=item grep BLOCK LIST
X<grep>

=item grep EXPR,LIST

=for Pod::Functions locate elements in a list test true against a given criterion

This is similar in spirit to, but not the same as, L<grep(1)> and its
relatives. In particular, it is not limited to using regular expressions.

Evaluates the BLOCK or EXPR for each element of LIST (locally setting
L<C<$_>|perlvar/$_> to each element) and returns the list value consisting of
those elements for which the expression evaluated to true. In scalar context,
returns the number of times the expression was true.

Note that L<C<$_>|perlvar/$_> is an alias to the list value, so modifying an
element of a list returned by grep (for example, in a C<foreach>,
L<C<map>|/map BLOCK LIST> or another L<C<grep>|/grep BLOCK LIST>) actually
modifies the element in the original list. This is usually something to be
avoided when writing clear code.

See also L<C<map>|/map BLOCK LIST> for a list composed of the results of the
BLOCK or EXPR.

=item hex EXPR
X<hex> X<hexadecimal>

=item hex

=for Pod::Functions convert a hexadecimal string to a number

Interprets EXPR as a hex string and returns the corresponding numeric value.
If EXPR is omitted, uses L<C<$_>|perlvar/$_>.

    print hex '0xAf'; # prints '175'
    print hex 'aF';   # same

A hex string consists of hex digits and an optional C<0x> or C<x> prefix.
Each hex digit may be preceded by a single underscore, which will be ignored.
Any other character triggers a warning and causes the rest of the string to
be ignored (even leading whitespace, unlike L<C<oct>|/oct EXPR>). Only
integers can be represented, and integer overflow triggers a warning. Valid
input matches:

    $valid_input =~ /\A(?:0?[xX])?(?:_?[0-9a-fA-F])*\z/

To convert strings that might start with any of C<0>, C<0x>, or C<0b>, see
L<C<oct>|/oct EXPR>. To present something as hex, look into
L<C<printf>|/printf FILEHANDLE FORMAT, LIST>,
L<C<sprintf>|/sprintf FORMAT, LIST>, and
L<C<unpack>|/unpack TEMPLATE,EXPR>.

=item import LIST
X<import>

=for Pod::Functions patch a module's namespace into your own

There is no builtin L<C<import>|/import LIST> function.
It is just an ordinary method (subroutine) defined (or inherited) by modules
that wish to export names to another module. The
L<C<use>|/use Module VERSION LIST> function calls the
L<C<import>|/import LIST> method for the package used. See also
L<C<use>|/use Module VERSION LIST>, L<perlmod>, and L<Exporter>.

=item index STR,SUBSTR,POSITION
X<index> X<indexOf> X<InStr>

=item index STR,SUBSTR

=for Pod::Functions find a substring within a string

Searches for one string within another, without the wildcard-like behavior of
a full regular-expression pattern match. Returns the position of the first
occurrence of SUBSTR in STR at or after POSITION; if POSITION is omitted,
starts searching from the beginning of the string. POSITION and the return
value are based at zero. If the substring is not found,
L<C<index>|/index STR,SUBSTR,POSITION> returns -1.

=item int EXPR
X<int> X<integer> X<truncate> X<trunc> X<floor>

=item int

=for Pod::Functions get the integer portion of a number

Returns the integer portion of EXPR. If EXPR is omitted, uses
L<C<$_>|perlvar/$_>. You should not use this function for rounding: one
because it truncates towards C<0>, and two because machine representations of
floating-point numbers can sometimes produce counterintuitive results. For
example, C<int(-6.725/0.025)> produces -268 rather than the correct -269;
that's because it's really more like -268.99999999999994315658 instead.
Usually, the L<C<sprintf>|/sprintf FORMAT, LIST>,
L<C<printf>|/printf FILEHANDLE FORMAT, LIST>, or the
L<C<POSIX::floor>|POSIX/C<floor>> and L<C<POSIX::ceil>|POSIX/C<ceil>>
functions will serve you better than will L<C<int>|/int EXPR>.

=item ioctl FILEHANDLE,FUNCTION,SCALAR
X<ioctl>

=for Pod::Functions system-dependent device control system call

Implements the L<ioctl(2)> function. You'll probably first have to say

    require "sys/ioctl.ph";  # probably in
                             # $Config{archlib}/sys/ioctl.ph

to get the correct function definitions. If F<sys/ioctl.ph> doesn't exist or
doesn't have the correct definitions you'll have to roll your own, based on
your C header files such as F<< <sys/ioctl.h> >>. (There is a Perl script
called F<h2ph> that comes with the Perl kit that may help you in this, but
it's nontrivial.) SCALAR will be read and/or written depending on the
FUNCTION; a C pointer to the string value of SCALAR will be passed as the
third argument of the actual L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR>
call. (If SCALAR has no string value but does have a numeric value, that
value will be passed rather than a pointer to the string value.
To guarantee this to be true, add a C<0> to the scalar before using it.) The
L<C<pack>|/pack TEMPLATE,LIST> and L<C<unpack>|/unpack TEMPLATE,EXPR>
functions may be needed to manipulate the values of structures used by
L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR>.

The return value of L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR> (and
L<C<fcntl>|/fcntl FILEHANDLE,FUNCTION,SCALAR>) is as follows:

    if OS returns:      then Perl returns:
        -1                undefined value
         0              string "0 but true"
    anything else           that number

Thus Perl returns true on success and false on failure, yet you can still
easily determine the actual value returned by the operating system. The
string C<"0 but true"> is exempt from
L<C<Argument "..." isn't numeric>|perldiag/Argument "%s" isn't numeric%s>
L<warnings> on improper numeric conversions.

Portability issues: L<perlport/ioctl>.

=item join EXPR,LIST
X<join>

=for Pod::Functions join a list into a string using a separator

Joins the separate strings of LIST into a single string with fields separated
by the value of EXPR, and returns that new string. Example:

    my $rec = join(':', $login,$passwd,$uid,$gid,$gcos,$home,$shell);

Beware that unlike L<C<split>|/split E<sol>PATTERNE<sol>,EXPR,LIMIT>,
L<C<join>|/join EXPR,LIST> doesn't take a pattern as its first argument.
Compare L<C<split>|/split E<sol>PATTERNE<sol>,EXPR,LIMIT>.

=item keys HASH
X<keys> X<key>

=item keys ARRAY

=for Pod::Functions retrieve list of indices from a hash

Called in list context, returns a list consisting of all the keys of the
named hash, or (in Perl 5.12 or later) the indices of an array. In scalar
context, returns the number of keys or indices. As a side effect, calling
L<C<keys>|/keys HASH> resets the internal iterator of the HASH or ARRAY (see
L<C<each>|/each HASH>) before yielding the keys. In particular, calling
L<C<keys>|/keys HASH> in void context resets the iterator with no other
overhead.

The keys of a hash are returned in an apparently random order but, so long as
the hash is unmodified, in the same order as the corresponding
L<C<values>|/values HASH>. To sort a hash by value, you'll need to use a
L<C<sort>|/sort SUBNAME LIST> function. Here's a descending numeric sort of a
hash by its values:

    foreach my $key (sort { $hash{$b} <=> $hash{$a} } keys %hash) {
        printf "%4d %s\n", $hash{$key}, $key;
    }

Used as an lvalue, L<C<keys>|/keys HASH> allows you to increase the number of
hash buckets allocated for the given hash. This can gain you a measure of
efficiency if you know the hash is going to get big. If you say

    keys %hash = 200;

then C<%hash> will have at least 200 buckets allocated for it--256 of them,
in fact, since it rounds up to the next power of two. These buckets will be
retained even if you do C<%hash = ()>; use C<undef %hash> if you want to free
the storage while C<%hash> is still in scope.
You can't shrink the number of buckets allocated for the hash using
L<C<keys>|/keys HASH> in this way (but you needn't worry about doing this by
accident, as trying has no effect). C<keys @array> in an lvalue context is a
syntax error.

Starting with Perl 5.14, an experimental feature allowed
L<C<keys>|/keys HASH> to take a scalar expression. This experiment has been
deemed unsuccessful, and was removed as of Perl 5.24.

See also L<C<each>|/each HASH>, L<C<values>|/values HASH>, and
L<C<sort>|/sort SUBNAME LIST>.

=item kill SIGNAL, LIST

=item kill SIGNAL
X<kill> X<signal>

=for Pod::Functions send a signal to a process or process group

Sends a signal to a list of processes. Returns the number of arguments that
were successfully used to signal (which is not necessarily the same as the
number of processes actually killed, e.g. where a process group is killed).

    my $cnt = kill 'HUP', $pid1, $pid2;
    kill 9, @goners;

SIGNAL may be either a signal name (a string) or a signal number. A signal
name may start with a C<SIG> prefix, thus C<FOO> and C<SIGFOO> refer to the
same signal. The string form of SIGNAL is recommended for portability because
the same signal may have different numbers in different operating systems.

A list of signal names supported by the current platform can be found in
C<$Config{sig_name}>, which is provided by the L<C<Config>|Config> module.
See L<Config> for more details.

A negative signal name is the same as a negative signal number, killing
process groups instead of processes. For example, C<kill '-KILL', $pgrp> and
C<kill -9, $pgrp> will send C<SIGKILL> to the entire process group specified.
That means you usually want to use positive not negative signals.

If SIGNAL is either the number 0 or the string C<ZERO> (or C<SIGZERO>), no
signal is sent to the process, but L<C<kill>|/kill SIGNAL, LIST> checks
whether it's I<possible> to send a signal to it (that means, to be brief,
that the process is owned by the same user, or we are the super-user). This
is useful to check that a child process is still alive (even if only as a
zombie) and hasn't changed its UID. See L<perlport> for notes on the
portability of this construct.

The behavior of kill when a I<PROCESS> number is zero or negative depends on
the operating system. For example, on POSIX-conforming systems, zero will
signal the current process group, -1 will signal all processes, and any other
negative PROCESS number will act as a negative signal number and kill the
entire process group specified. See L<perlipc/"Signals"> for more details.

On some platforms such as Windows where the L<fork(2)> system call is not
available, Perl can be built to emulate L<C<fork>|/fork> at the interpreter
level. This emulation has limitations related to kill that have to be
considered, for code running on Windows and in code intended to be portable.
See L<perlfork> for more details.

If there is no I<LIST> of processes, no signal is sent, and the return value
is 0. This form is sometimes used, however, because it causes tainting checks
to be run.
But see L<perlsec/Laundering and Detecting Tainted Data>.

Portability issues: L<perlport/kill>.

=item last LABEL
X<last> X<break>

=item last EXPR

=item last

=for Pod::Functions exit a block prematurely

The L<C<last>|/last LABEL> command is like the C<break> statement in C (as
used in loops); it immediately exits the loop in question. If the LABEL is
omitted, the command refers to the innermost enclosing loop. The C<last EXPR>
form, available starting in Perl 5.18.0, allows a label name to be computed
at run time, and is otherwise identical to C<last LABEL>. The
L<C<continue>|/continue BLOCK> block, if any, is not executed:

    LINE: while (<STDIN>) {
        last LINE if /^$/; # exit when done with header
        #...
    }

Note that a block by itself is semantically identical to a loop that executes
once; thus L<C<last>|/last LABEL> can be used to effect an early exit out of
such a block. Unlike most named operators, this is also exempt from the
looks-like-a-function rule, so C<last ("foo")."bar"> will cause C<"bar"> to
be part of the argument to L<C<last>|/last LABEL>.

=item lc EXPR
X<lc> X<lowercase>

=item lc

=for Pod::Functions return lower-case version of a string

Returns a lowercased version of EXPR. This is the internal function
implementing the C<\L> escape in double-quoted strings. If EXPR is omitted,
uses L<C<$_>|perlvar/$_>.

What gets returned depends on several factors:

=over

=item If C<use bytes> is in effect:

The results follow ASCII rules. Only the characters C<A-Z> change, to C<a-z>
respectively.

=item Otherwise, if C<use locale> for C<LC_CTYPE> is in effect:

Respects current C<LC_CTYPE> locale for code points < 256; and uses Unicode
rules for the remaining code points (this last can only happen if the UTF8
flag is also set). See L<perllocale>. Under a non-UTF-8 locale, case changes
that would cross the 255/256 boundary are not well-defined and can raise a
L<C<Can't do %s("%s") on non-UTF-8 locale>|perldiag/Can't do %s("%s") on non-UTF-8 locale; resolved to "%s".>
warning.

=item Otherwise, if EXPR has the UTF8 flag set:

Unicode rules are used for the case change.

=item Otherwise, if C<use feature 'unicode_strings'> or C<use locale
':not_characters'> is in effect:

Unicode rules are used for the case change.

=item Otherwise:

ASCII rules are used for the case change.
The lowercase of any character outside the ASCII range is the character itself. =back =item lcfirst EXPR X<lcfirst> X<lowercase> =item lcfirst =for Pod::Functions return a string with just the next letter in lower case Returns the value of EXPR with the first character lowercased. This is the internal function implementing the C<\l> escape in double-quoted strings. If EXPR is omitted, uses L<C<$_>|perlvar/$_>. This function behaves the same way under various pragmas, such as in a locale, as L<C<lc>|/lc EXPR> does. =item length EXPR X<length> X<size> =item length =for Pod::Functions return the number of characters in a string Returns the length in I<characters> of the value of EXPR. If EXPR is omitted, returns the length of L<C<$_>|perlvar/$_>. If EXPR is undefined, returns L<C<undef>|/undef EXPR>. This function cannot be used on an entire array or hash to find out how many elements these have. For that, use C<scalar @array> and C<scalar keys %hash>, respectively. Like all Perl character operations, L<C<length>|/length EXPR> normally deals in logical characters, not physical bytes. For how many bytes a string encoded as UTF-8 would take up, use C<length(Encode::encode('UTF-8', EXPR))> (you'll have to C<use Encode> first). See L<Encode> and L<perlunicode>. =item __LINE__ X<__LINE__> =for Pod::Functions the current source line number A special token that compiles to the current line number. It can be altered by the mechanism described at L<perlsyn/"Plain Old Comments (Not!)">. =item link OLDFILE,NEWFILE X<link> =for Pod::Functions create a hard link in the filesystem Creates a new filename linked to the old filename. Returns true for success, false otherwise. Portability issues: L<perlport/link>. =item listen SOCKET,QUEUESIZE X<listen> =for Pod::Functions register your socket as a server Does the same thing that the L<listen(2)> system call does. Returns true if it succeeded, false otherwise. See the example in L<perlipc/"Sockets: Client/Server Communication">. 
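Returning to C<length> above, a short sketch of characters versus UTF-8
bytes:

    use Encode qw(encode);

    my $str = "caf\x{E9}";   # "café"; the last character is U+00E9

    # length() counts logical characters...
    print length($str), "\n";                      # 4

    # ...while the UTF-8 encoding of U+00E9 occupies two bytes:
    print length(encode('UTF-8', $str)), "\n";     # 5

This is the C<length(Encode::encode('UTF-8', EXPR))> idiom mentioned in that
entry.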
=item local EXPR
X<local>

=for Pod::Functions create a temporary value for a global variable (dynamic scoping)

You really probably want to be using L<C<my>|/my VARLIST> instead, because
L<C<local>|/local EXPR> isn't what most people think of as "local". See
L<perlsub/"Private Variables via my()"> for details.

A local modifies the listed variables to be local to the enclosing block,
file, or eval. If more than one value is listed, the list must be placed in
parentheses. See L<perlsub/"Temporary Values via local()"> for details,
including issues with tied arrays and hashes.

The C<delete local EXPR> construct can also be used to localize the deletion
of array/hash elements to the current block. See L<perlsub/"Localized
deletion of elements of composite types">.

=item localtime EXPR
X<localtime> X<ctime>

=item localtime

=for Pod::Functions convert UNIX time into record or string using local time

Converts a time as returned by the L<C<time>|/time> function to a 9-element
list with the time analyzed for the local time zone. Typically used as
follows:

    #     0    1    2     3     4    5     6     7     8
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) =
                                                localtime(time);

All list elements are numeric. C<$sec>, C<$min>, and C<$hour> are the
seconds, minutes, and hours of the specified time. C<$mday> is the day of the
month and C<$mon> the month in the range C<0..11>, with 0 indicating January
and 11 indicating December. C<$year> contains the number of years since 1900.
To get a 4-digit year write:

    $year += 1900;

To get the last two digits of the year (e.g., "01" in 2001) do:

    $year = sprintf("%02d", $year % 100);

C<$wday> is the day of the week, with 0 indicating Sunday and 3 indicating
Wednesday. C<$yday> is the day of the year, in the range C<0..364> (or
C<0..365> in leap years.) C<$isdst> is true if the specified time occurs
during Daylight Saving Time, false otherwise.

If EXPR is omitted, L<C<localtime>|/localtime EXPR> uses the current time (as
returned by L<C<time>|/time>).

In scalar context, L<C<localtime>|/localtime EXPR> returns the L<ctime(3)>
value:

    my $now_string = localtime;  # e.g., "Thu Oct 13 04:54:34 1994"

The format of this scalar value is B<not> locale-dependent but built into
Perl. For GMT instead of local time use the L<C<gmtime>|/gmtime EXPR>
builtin.
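Putting the offsets above together, a common idiom for an ISO-style date
stamp:

    my ($sec,$min,$hour,$mday,$mon,$year) = localtime(time);

    # $year counts from 1900 and $mon from 0, so adjust both:
    my $stamp = sprintf "%04d-%02d-%02d %02d:%02d:%02d",
        $year + 1900, $mon + 1, $mday, $hour, $min, $sec;

    print "$stamp\n";

For anything beyond quick stamps, the modules mentioned below (such as
L<Time::Local> and C<POSIX::strftime>) are usually a better choice.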
See also the L<C<Time::Local>|Time::Local> module (for converting
seconds, minutes, hours, and such back to the integer value returned by
L<C<time>|/time>), and the L<POSIX> module's
L<C<strftime>|POSIX/C<strftime>> and L<C<mktime>|POSIX/C<mktime>>
functions.

To get somewhat similar but locale-dependent date strings, set up your
locale environment variables appropriately (please see L<perllocale>)
and try for example:

    use POSIX qw(strftime);
    my $now_string = strftime "%a %b %e %H:%M:%S %Y", localtime;

Note that C<%a> and C<%b>, the short forms of the day of the week and
the month of the year, may not necessarily be three characters wide.

The L<Time::gmtime> and L<Time::localtime> modules provide a
convenient, by-name access mechanism to the L<C<gmtime>|/gmtime EXPR>
and L<C<localtime>|/localtime EXPR> functions, respectively. For a
comprehensive date and time representation look at the L<DateTime>
module on CPAN.

Portability issues: L<perlport/localtime>.

=item lock THING
X<lock>

=for Pod::Functions +5.005 get a thread lock on a variable, subroutine, or method

This function places an advisory lock on a shared variable or
referenced object contained in I<THING> until the lock goes out of
scope. The value returned is the scalar itself, if the argument is a
scalar, or a reference, if the argument is a hash, array or subroutine.

L<C<lock>|/lock THING> is a "weak keyword"; this means that if you've
defined a function by this name (before any calls to it), that function
will be called instead. If you are not under C<use threads::shared>
this does nothing. See L<threads::shared>.

=item log EXPR
X<log> X<logarithm> X<e> X<ln> X<base>

=item log

=for Pod::Functions retrieve the natural logarithm for a number

Returns the natural logarithm (base I<e>) of EXPR. If EXPR is omitted,
returns the log of L<C<$_>|perlvar/$_>. To get the log of another base,
use basic algebra: The base-N log of a number is equal to the natural
log of that number divided by the natural log of N. For example:

    sub log10 {
        my $n = shift;
        return log($n)/log(10);
    }

See also L<C<exp>|/exp EXPR> for the inverse operation.
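The C<log10> example above generalizes to any base; the following
sketch (the name C<log_base> is arbitrary) computes a logarithm in a
caller-chosen base:

    sub log_base {
        my ($base, $n) = @_;
        return log($n) / log($base);   # base-N log via natural logs
    }

    print log_base(2, 8), "\n";   # approximately 3, since 2**3 == 8

As with all floating-point arithmetic, the result may be off by a tiny
rounding error even when the mathematical answer is exact.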
=item lstat FILEHANDLE
X<lstat>

=item lstat EXPR

=item lstat DIRHANDLE

=item lstat

=for Pod::Functions stat a symbolic link

Does the same thing as the L<C<stat>|/stat FILEHANDLE> function
(including setting the special C<_> filehandle) but stats a symbolic
link instead of the file the symbolic link points to. If symbolic links
are unimplemented on your system, a normal L<C<stat>|/stat FILEHANDLE>
is done. For much more detailed information, please see the
documentation for L<C<stat>|/stat FILEHANDLE>.

If EXPR is omitted, stats L<C<$_>|perlvar/$_>.

Portability issues: L<perlport/lstat>.

=item m//

=for Pod::Functions match a string with a regular expression pattern

The match operator. See L<perlop/"Regexp Quote-Like Operators">.

=item map BLOCK LIST
X<map>

=item map EXPR,LIST

=for Pod::Functions apply a change to a list to get back a new list with the changes

Evaluates the BLOCK or EXPR for each element of LIST (locally setting
L<C<$_>|perlvar/$_> to each element) and composes a list of the results
of each such evaluation. Each element of LIST may produce zero, one, or
more elements in the generated list. See L<perldata> for more details.

    my %hash = map { get_a_key_for($_) => $_ } @array;

is just a funny way to write

    my %hash;
    foreach (@array) {
        $hash{get_a_key_for($_)} = $_;
    }

Note that L<C<$_>|perlvar/$_> is an alias to the list value, so it can
be used to modify the elements of the LIST. While this is useful and
supported, it can cause bizarre results if the elements of LIST are not
variables. Using a regular C<foreach> loop for this purpose would be
clearer in most cases. See also L<C<grep>|/grep BLOCK LIST> for a list
composed of those items of the original list for which the BLOCK or
EXPR evaluates to true.

C<{> starts both hash references and blocks, so C<map { ...> could be
either the start of map BLOCK LIST or map EXPR, LIST. Because Perl
doesn't look ahead for the closing C<}> it has to take a guess at which
it's dealing with based on what it finds just after the C<{>.
Usually it gets it right, but if it doesn't it won't realize something
is wrong until it gets to the C<}> and encounters the missing (or
unexpected) comma. The syntax error will be reported close to the C<}>,
but you'll need to change something near the C<{> such as using a unary
C<+> to give Perl some help:

    my @hashes = map +{ lc($_) => 1 }, @array  # EXPR, so needs
                                               # comma at end

to get a list of anonymous hashes each with only one entry apiece.

=item mkdir FILENAME,MODE
X<mkdir> X<md> X<directory, create>

=item mkdir FILENAME

=item mkdir

=for Pod::Functions create a directory

Creates the directory specified by FILENAME, with permissions specified
by MODE (as modified by L<C<umask>|/umask EXPR>). If it succeeds it
returns true; otherwise it returns false and sets
L<C<$!>|perlvar/$!> (errno). MODE defaults to 0777 if omitted, and
FILENAME defaults to L<C<$_>|perlvar/$_> if omitted.

In general, it is better to create directories with a permissive MODE
and let the user modify that with their L<C<umask>|/umask EXPR> than it
is to supply a restrictive MODE and give the user no way to be more
permissive. The exceptions to this rule are when the file or directory
should be kept private (mail files, for instance). The documentation
for L<C<umask>|/umask EXPR> discusses the choice of MODE in more
detail. To recursively create a directory structure, look at the
L<C<make_path>|File::Path/make_path( $dir1, $dir2, .... )> function of
the L<File::Path> module.

=item msgctl ID,CMD,ARG
X<msgctl>

=for Pod::Functions SysV IPC message control operations

Calls the System V IPC function L<msgctl(2)>. You'll probably have to
say

    use IPC::SysV;

first to get the correct constant definitions. If CMD is C<IPC_STAT>,
then ARG must be a variable that will hold the returned C<msqid_ds>
structure. Returns like
L<C<ioctl>|/ioctl FILEHANDLE,FUNCTION,SCALAR>: the undefined value for
error, C<"0 but true"> for zero, or the actual return value otherwise.
See also L<perlipc/"SysV IPC"> and the documentation for
L<C<IPC::SysV>|IPC::SysV> and L<C<IPC::Semaphore>|IPC::Semaphore>.
Portability issues: L<perlport/msgctl>.

=item msgget KEY,FLAGS
X<msgget>

=for Pod::Functions get SysV IPC message queue

Calls the System V IPC function L<msgget(2)>. Returns the message queue
id, or L<C<undef>|/undef EXPR> on error. See also L<perlipc/"SysV IPC">
and the documentation for L<C<IPC::SysV>|IPC::SysV> and
L<C<IPC::Msg>|IPC::Msg>.

Portability issues: L<perlport/msgget>.

=item msgrcv ID,VAR,SIZE,TYPE,FLAGS
X<msgrcv>

=for Pod::Functions receive a SysV IPC message from a message queue

Calls the System V IPC function msgrcv to receive a message from
message queue ID into variable VAR with a maximum message size of SIZE.
Note that when a message is received, the message type as a native long
integer will be the first thing in VAR, followed by the actual message;
this packing may be opened with C<unpack("l! a*")>. Taints the
variable. Returns true if successful, false on error. See also
L<perlipc/"SysV IPC"> and the documentation for
L<C<IPC::SysV>|IPC::SysV> and L<C<IPC::Msg>|IPC::Msg>.

Portability issues: L<perlport/msgrcv>.

=item msgsnd ID,MSG,FLAGS
X<msgsnd>

=for Pod::Functions send a SysV IPC message to a message queue

Calls the System V IPC function msgsnd to send the message MSG to the
message queue ID. MSG must begin with the native long integer message
type, followed by the message itself; this kind of packing can be
achieved with C<pack("l! a*", $type, $message)>. Returns true if
successful, false on error. See also L<perlipc/"SysV IPC"> and the
documentation for L<C<IPC::SysV>|IPC::SysV> and L<C<IPC::Msg>|IPC::Msg>.

Portability issues: L<perlport/msgsnd>.

=item my VARLIST
X<my>

=item my TYPE VARLIST

=item my VARLIST : ATTRS

=item my TYPE VARLIST : ATTRS

=for Pod::Functions declare and assign a local variable (lexical scoping)

A L<C<my>|/my VARLIST> declares the listed variables to be local
(lexically) to the enclosing block, file, or L<C<eval>|/eval EXPR>. If
more than one variable is listed, the list must be placed in
parentheses.

The exact semantics and interface of TYPE and ATTRS are still evolving.
TYPE may be a bareword, a constant declared with
L<C<use constant>|constant>, or L<C<__PACKAGE__>|/__PACKAGE__>. Note
that with a parenthesised list, C<undef> can be used as a dummy
placeholder, for example to skip assignment of initial values:

    my ( undef, $min, $hour ) = localtime;

=item next LABEL
X<next> X<continue>

=item next EXPR

=item next

=for Pod::Functions iterate a block prematurely

The L<C<next>|/next LABEL> command is like the C<continue> statement in
C; it starts the next iteration of the loop:

    LINE: while (<STDIN>) {
        next LINE if /^#/;  # discard comments
        #...
    }

Note that if there were a L<C<continue>|/continue BLOCK> block on the
above, it would get executed even on discarded lines. If LABEL is
omitted, the command refers to the innermost enclosing loop. The
C<next EXPR> form, available as of Perl 5.18.0, allows a label name to
be computed at run time, being otherwise identical to C<next LABEL>.

Note that a block by itself is semantically identical to a loop that
executes once. Thus L<C<next>|/next LABEL> will exit such a block
early. L<C<next>|/next LABEL> is also exempt from the
looks-like-a-function rule, so C<next ("foo")."bar"> will cause "bar"
to be part of the argument to L<C<next>|/next LABEL>.

=item no MODULE VERSION LIST
X<no declarations> X<unimporting>

=item no MODULE VERSION

=item no MODULE LIST

=item no MODULE

=item no VERSION

=for Pod::Functions unimport some module symbols or semantics at compile time

See the L<C<use>|/use Module VERSION LIST> function, of which
L<C<no>|/no MODULE VERSION LIST> is the opposite.

=item oct EXPR
X<oct> X<octal> X<hex> X<hexadecimal> X<binary> X<bin>

=item oct

=for Pod::Functions convert a string to an octal number

Interprets EXPR as an octal string and returns the corresponding value.
(If EXPR happens to start off with C<0x>, interprets it as a hex
string. If EXPR starts off with C<0b>, it is interpreted as a binary
string. Leading whitespace is ignored in all three cases.) The
following will handle decimal, binary, octal, and hex in standard Perl
notation:

    $val = oct($val) if $val =~ /^0/;

If EXPR is omitted, uses L<C<$_>|perlvar/$_>. To go the other way
(produce a number in octal), use L<C<sprintf>|/sprintf FORMAT, LIST> or
L<C<printf>|/printf FILEHANDLE FORMAT, LIST>:

    my $dec_perms    = (stat("filename"))[2] & 07777;
    my $oct_perm_str = sprintf "%o", $dec_perms;

The L<C<oct>|/oct EXPR> function is commonly used when a string such as
C<644> needs to be converted into a file mode, for example. Although
Perl automatically converts strings into numbers as needed, this
automatic conversion assumes base 10.
Leading white space is ignored without warning, as too are any trailing
non-digits, such as a decimal point (L<C<oct>|/oct EXPR> only handles
non-negative integers, not negative integers or floating point).

=item open FILEHANDLE,MODE,EXPR
X<open> X<pipe> X<file, open> X<fopen>

=item open FILEHANDLE,MODE,EXPR,LIST

=item open FILEHANDLE,MODE,REFERENCE

=item open FILEHANDLE,EXPR

=item open FILEHANDLE

=for Pod::Functions open a file, pipe, or descriptor

Associates an internal FILEHANDLE with the external file specified by
EXPR. A thorough reference to C<open> follows. For a gentler
introduction to the basics of C<open>, see also the L<perlopentut>
manual page.

=over

=item Working with files

Most often, C<open> gets invoked with three arguments: the required
FILEHANDLE (usually an empty scalar variable), followed by MODE
(usually a literal describing the I/O mode the filehandle will use),
and then the filename that the new filehandle will refer to.

=over

=item Simple examples

For simple examples of reading from and writing to files, see
L<perlintro/Files and I/O>.

=item About filehandles

The first argument to C<open>, labeled FILEHANDLE in this reference, is
usually a scalar variable. (Exceptions exist, described in "Other
considerations", below.) If the call to C<open> succeeds, then the
expression provided as FILEHANDLE will get assigned an open
I<filehandle>. That filehandle provides an internal reference to the
specified external file, conveniently stored in a Perl variable, and
ready for I/O operations such as reading and writing.

=item About modes

When calling C<open> with three or more arguments, the second argument
-- labeled MODE here -- defines the I<open mode>. MODE is usually a
literal string comprising special characters that define the intended
I/O role of the filehandle being created: whether it's read-only, or
read-and-write, and so on.

If MODE is C<< < >>, the file is opened for input (read-only). If MODE
is C<< > >>, the file is opened for output, with existing files first
being truncated ("clobbered") and nonexisting files newly created.
If MODE is C<<< >> >>>, the file is opened for appending, again being
created if necessary.

You can put a C<+> in front of the C<< > >> or C<< < >> to indicate
that you want both read and write access to the file; thus C<< +< >> is
almost always preferred for read/write updates--the C<< +> >> mode
would clobber the file first. You can't usually use either read-write
mode for updating textfiles, since they have variable-length records.
See the B<-i> switch in L<perlrun|perlrun/-i[extension]> for a better
approach. The file is created with permissions of C<0666> modified by
the process's L<C<umask>|/umask EXPR> value.

These various prefixes correspond to the L<fopen(3)> modes of C<r>,
C<r+>, C<w>, C<w+>, C<a>, and C<a+>.

=item Checking the return value

Open returns nonzero on success, the undefined value otherwise. If the
C<open> involved a pipe, the return value happens to be the pid of the
subprocess.

When opening a file, it's seldom a good idea to continue if the request
failed, so C<open> is frequently used with L<C<die>|/die LIST>. Even if
you want your code to do something other than C<die> on a failed open,
you should still always check the return value from opening a file.

=back

=item Specifying I/O layers in MODE

You can use the three-argument form of open to specify I/O layers
(sometimes referred to as "disciplines") to apply to the new
filehandle. These affect how the input and output are processed (see
L<open> and L<PerlIO> for more details). For example:

    open(my $fh, "<:encoding(UTF-8)", $filename)
        || die "Can't open UTF-8 encoded $filename: $!";

This opens the UTF8-encoded file containing Unicode characters; see
L<perluniintro>. Note that if layers are specified in the
three-argument form, then default layers stored in
L<C<${^OPEN}>|perlvar/${^OPEN}> (usually set by the L<open> pragma or
the switch C<-CioD>) are ignored. Those layers will also be ignored if
you specify a colon with no name following it.
In that case the default layer for the operating system (:raw on Unix,
:crlf on Windows) is used.

=item Using C<undef> for temporary files

As a special case the three-argument form with a read/write mode and
the third argument being L<C<undef>|/undef EXPR>:

    open(my $tmp, "+>", undef) or die ...

opens a filehandle to a newly created empty anonymous temporary file.
(This happens under any mode, which makes C<< +> >> the only useful and
sensible mode to use.) You will need to
L<C<seek>|/seek FILEHANDLE,POSITION,WHENCE> to do the reading.

=item Opening a filehandle into an in-memory scalar

You can open filehandles directly to Perl scalars instead of a file or
other resource external to the program. To do so, provide a reference
to that scalar as the third argument to C<open>, like so:

    open(my $memory, ">", \$var)
        or die "Can't open memory file: $!";
    print $memory "foo!\n";    # output will appear in $var

To (re)open C<STDOUT> or C<STDERR> as an in-memory file, close it
first. Opening a filehandle into an in-memory scalar I<can> fail for a
variety of reasons. As with any other C<open>, check the return value
for success.

I<Technical note>: This feature works only when Perl is built with
PerlIO -- the default, except with older (pre-5.16) Perl installations
that were configured to not include it (e.g. via
C<Configure -Uuseperlio>). You can see whether your Perl was built with
PerlIO by running C<perl -V:useperlio>. If it says C<'define'>, you
have PerlIO; otherwise you don't. See L<perliol> for detailed info on
PerlIO.

=item Opening a filehandle into a command

If MODE is C<|->, then the filename is interpreted as a command to
which output is to be piped, and if MODE is C<-|>, the filename is
interpreted as a command that pipes output to us. In the two-argument
(and one-argument) form, one should replace dash (C<->) with the
command. See L<perlipc/"Using open() for IPC"> for more examples of
this.
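As a sketch of the C<-|> read-from-a-command case described above,
using the three-argument list form (the command shown is only an
example, and the list form requires a platform with a real C<fork>, or
Windows with Perl 5.22 or later):

    # Read the output of an external command line by line.
    open(my $out, "-|", "ls", "-l")
        or die "Can't run ls: $!";
    while (my $line = <$out>) {
        print "got: $line";
    }
    close($out) or warn "ls exited with status $?";

Passing the command and its arguments as separate list elements avoids
any shell interpretation of the arguments.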
(You are not allowed to L<C<open>|/open FILEHANDLE,MODE,EXPR> to a
command that pipes both in I<and> out, but see L<IPC::Open2>,
L<IPC::Open3>, and
L<perlipc/"Bidirectional Communication with Another Process"> for
alternatives.)

In the form of pipe opens taking three or more arguments, if LIST is
specified (extra arguments after the command name) then LIST becomes
arguments to the command invoked if the platform supports it. The
meaning of L<C<open>|/open FILEHANDLE,MODE,EXPR> with more than three
arguments for non-pipe modes is not yet defined, but experimental
"layers" may give extra LIST arguments meaning.

If you open a pipe on the command C<-> (that is, specify either C<|->
or C<-|> with the one- or two-argument forms of
L<C<open>|/open FILEHANDLE,MODE,EXPR>), an implicit L<C<fork>|/fork> is
done, so L<C<open>|/open FILEHANDLE,MODE,EXPR> returns twice: in the
parent process it returns the pid of the child process, and in the
child process it returns (a defined) C<0>. Use C<defined($pid)> to
determine whether the open was successful. (If your platform has a real
L<C<fork>|/fork>, such as Linux and macOS, you can use the list form;
it also works on Windows with Perl 5.22 or later.) See
L<perlipc/"Safe Pipe Opens"> for more examples of this.

=item Duping filehandles

You may also, in the Bourne shell tradition, specify an EXPR beginning
with C<< >& >>, in which case the rest of the string is interpreted as
the name of a filehandle (or file descriptor, if numeric) to be duped
(as in L<dup(2)>) and opened. You may use C<&> after C<< > >>,
C<<< >> >>>, C<< < >>, C<< +> >>, C<<< +>> >>>, and C<< +< >>.

If you specify C<< '<&=X' >>, where C<X> is a file descriptor number or
a filehandle, then Perl will do an equivalent of C's L<fdopen(3)> of
that file descriptor (and not call L<dup(2)>); this is more
parsimonious of file descriptors, and also matters when something
depends on the file descriptor itself, such as locking with
L<C<flock>|/flock FILEHANDLE,OPERATION>. If you do just
C<< open(my $A, ">>&", $B) >>, the filehandle C<$A> will not have the
same file descriptor as C<$B>, and therefore C<flock($A)> will not
C<flock($B)> nor vice versa.
But with C<< open(my $A, ">>&=", $B) >>, the filehandles will share the
same underlying system file descriptor.

Note that under Perls older than 5.8.0, Perl uses the standard C
library's L<fdopen(3)> to implement the C<=> functionality. On many
Unix systems, L<fdopen(3)> fails when file descriptors exceed a certain
value, typically 255. For Perls 5.8.0 and later, PerlIO is (most often)
the default.

=item Legacy usage

This section describes ways to call C<open> outside of best practices;
you may encounter these uses in older code. Perl does not consider
their use deprecated, exactly, but neither is it recommended in new
code, for the sake of clarity and readability.

=over

=item Specifying mode and filename as a single argument

In the one- and two-argument forms of the call, the mode and filename
should be concatenated (in that order), preferably separated by white
space. You can--but shouldn't--omit the mode in these forms when that
mode is C<< < >>. It is safe to use the two-argument form of
L<C<open>|/open FILEHANDLE,MODE,EXPR> if the filename argument is a
known literal.

    open(my $dbase, "+<dbase.mine")          # open for update
        or die "Can't open 'dbase.mine' for update: $!";

In the two-argument (and one-argument) form, opening C<< <- >> or C<->
opens STDIN and opening C<< >- >> opens STDOUT.

New code should favor the three-argument form of C<open> over this
older form. Declaring the mode and the filename as two distinct
arguments avoids any confusion between the two.

=item Calling C<open> with one argument via global variables

As a shortcut, a one-argument call takes the filename from the global
scalar variable of the same name as the filehandle:

    $ARTICLE = 100;
    open(ARTICLE) or die "Can't find article $ARTICLE: $!\n";

Here C<$ARTICLE> must be a global (package) scalar variable - not one
declared with L<C<my>|/my VARLIST> or L<C<state>|/state VARLIST>.
=item Assigning a filehandle to a bareword

An older style is to use a bareword as the filehandle, as

    open(FH, "<", "input.txt")
        or die "Can't open < input.txt: $!";

Then you can use C<FH> as the filehandle, in C<< close FH >> and
C<< <FH> >> and so on. Note that it's a global variable, so this form
is not recommended when dealing with filehandles other than Perl's
built-in ones (e.g. STDOUT and STDIN).

=back

=item Other considerations

=over

=item Automatic filehandle closure

The filehandle will be closed when its reference count reaches zero. If
it is a lexically scoped variable declared with L<C<my>|/my VARLIST>,
that usually means the end of the enclosing scope. However, this
automatic close does not check for errors, so it is better to
explicitly close filehandles, especially those used for writing:

    close($handle)
        || warn "close failed: $!";

=item Automatic pipe flushing

Perl will attempt to flush all files opened for output before any
operation that may do a fork, but this may not be supported on some
platforms (see L<perlport>).

On systems that support a close-on-exec flag on files, the flag will be
set for the newly opened file descriptor as determined by the value of
L<C<$^F>|perlvar/$^F>. See L<perlvar/$^F>.

Closing any piped filehandle causes the parent process to wait for the
child to finish, then returns the status value in
L<C<$?>|perlvar/$?> and
L<C<${^CHILD_ERROR_NATIVE}>|perlvar/${^CHILD_ERROR_NATIVE}>.

=item Direct versus by-reference assignment of filehandles

If FILEHANDLE -- the first argument in a call to C<open> -- is an
undefined scalar variable (or array or hash element), a new filehandle
is autovivified, meaning that the variable is assigned a reference to a
newly allocated anonymous filehandle. Otherwise, if FILEHANDLE is an
expression, its value is the real filehandle wanted. (This is
considered a symbolic reference, so C<use strict "refs"> should I<not>
be in effect.)

=item Whitespace and special characters in the filename argument

The filename passed to the one- and two-argument forms of
L<C<open>|/open FILEHANDLE,MODE,EXPR> will have leading and trailing
whitespace deleted and normal redirection characters honored. This
property, known as "magic open", can often be used to good effect.
A user could specify a filename of F<"rsh cat file |">, for instance.
One should conscientiously choose between the I<magic> and
I<three-argument> form of L<C<open>|/open FILEHANDLE,MODE,EXPR>:

    open(my $in, $ARGV[0]) || die "Can't open $ARGV[0]: $!";

will allow the user to specify an argument of the form
C<"rsh cat file |">, but will not work on a filename that happens to
have a trailing space, while

    open(my $in, "<", $ARGV[0])
        || die "Can't open $ARGV[0]: $!";

will have exactly the opposite restrictions. (However, some shells
support the syntax C<< perl your_program.pl <( rsh cat file ) >>, which
produces a filename that can be opened normally.)

=item Invoking C-style C<open>

If you want a "real" C L<open(2)>, then you should use the
L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE> function, which
involves no such magic (but uses different filemodes than Perl
L<C<open>|/open FILEHANDLE,MODE,EXPR>, which corresponds to the C
L<fopen(3)> function). See
L<C<seek>|/seek FILEHANDLE,POSITION,WHENCE> for some details about
mixing reading and writing.

=item Portability issues

See L<perlport/open>.

=back

=back

=item opendir DIRHANDLE,EXPR
X<opendir>

=for Pod::Functions open a directory

Opens a directory named EXPR for processing by
L<C<readdir>|/readdir DIRHANDLE>, L<C<telldir>|/telldir DIRHANDLE>,
L<C<seekdir>|/seekdir DIRHANDLE,POS>,
L<C<rewinddir>|/rewinddir DIRHANDLE>, and
L<C<closedir>|/closedir DIRHANDLE>. Returns true if successful. See the
example at L<C<readdir>|/readdir DIRHANDLE>.

=item ord EXPR
X<ord> X<encoding>

=item ord

=for Pod::Functions find a character's numeric representation

Returns the numeric value of the first character of EXPR. If EXPR is an
empty string, returns 0. If EXPR is omitted, uses
L<C<$_>|perlvar/$_>. (Note I<character>, not byte.) For the reverse,
see L<C<chr>|/chr NUMBER>. See L<perlunicode> for more about Unicode.

=item our VARLIST
X<our> X<global>

=item our TYPE VARLIST

=item our VARLIST : ATTRS

=item our TYPE VARLIST : ATTRS

=for Pod::Functions +5.6.0 declare and assign a package variable (lexical scoping)

L<C<our>|/our VARLIST> makes a lexical alias to a package (i.e.
global) variable of the same name in the current package for use within
the current lexical scope.

L<C<our>|/our VARLIST> has the same scoping rules as
L<C<my>|/my VARLIST> or L<C<state>|/state VARLIST>, meaning that it is
only valid within a lexical scope. Unlike L<C<my>|/my VARLIST> and
L<C<state>|/state VARLIST>, which both declare new (lexical) variables,
L<C<our>|/our VARLIST> only creates an alias to an existing variable: a
package variable of the same name.

This means that when C<use strict 'vars'> is in effect,
L<C<our>|/our VARLIST> lets you use a package variable without
qualifying it with the package name, but only within the lexical scope
of the L<C<our>|/our VARLIST> declaration.

Multiple L<C<our>|/our VARLIST> declarations with the same name in the
same lexical scope are allowed if they are in different packages. If
they happen to be in the same package, Perl will emit warnings if you
have asked for them, just like multiple L<C<my>|/my VARLIST>
declarations. Unlike a second L<C<my>|/my VARLIST> declaration, which
will bind the name to a fresh variable, a second
L<C<our>|/our VARLIST> declaration in the same package and lexical
scope is merely redundant.

An L<C<our>|/our VARLIST> declaration may also have a list of
attributes associated with it. The exact semantics and interface of
TYPE and ATTRS are still evolving. Note that with a parenthesised list,
C<undef> can be used as a dummy placeholder:

    our ( undef, $min, $hour ) = localtime;

L<C<our>|/our VARLIST> differs from L<C<use vars>|vars>, which allows
use of an unqualified name I<only> within the affected package, but
across scopes.

=item pack TEMPLATE,LIST
X<pack>

=for Pod::Functions convert a list into a binary representation

Takes a LIST of values and converts it into a string using the rules
given by the TEMPLATE; the resulting string is the concatenation of the
converted values. The C<< > >> and C<< < >> modifiers can also be used
on C<()> groups to force a particular byte-order on all components in
that group, including all its subgroups.

The following rules apply:

=over

=item *

Each letter may optionally be followed by a number indicating the
repeat count. A numeric repeat count may optionally be enclosed in
brackets, as in C<pack("C[80]", @arr)>.
The repeat count gobbles that many values from the LIST when used with
all format types other than C<a>, C<A>, C<Z>, C<b>, C<B>, C<h>, C<H>,
C<@>, C<.>, C<x>, C<X>, and C<P>, where it means something else,
described below.

Supplying a C<*> for the repeat count instead of a number means to use
however many items are left, except for:

=over

=item *

C<@>, C<x>, and C<X>, where it is equivalent to C<0>.

=item *

C<.>, where it means relative to the start of the string.

=item *

C<u>, where it is equivalent to 1 (or 45, which here is equivalent).

=back

One can replace a numeric repeat count with a template letter enclosed
in brackets to use the packed byte length of the bracketed template for
the repeat count. For example, the template C<x[L]> skips as many bytes
as in a packed long, and the template C<"$t X[$t] $t"> unpacks twice
whatever $t (when variable-expanded) unpacks. If the template in
brackets contains alignment commands (such as C<x![d]>), its packed
length is calculated as if the start of the template had the maximal
possible alignment.

When used with C<Z>, a C<*> as the repeat count is guaranteed to add a
trailing null byte, so the resulting string is always one byte longer
than the byte length of the item itself.

When used with C<@>, the repeat count represents an offset from the
start of the innermost C<()> group.

When used with C<.>, the repeat count determines the starting position
to calculate the value offset as follows:

=over

=item *

If the repeat count is C<0>, it's relative to the current position.

=item *

If the repeat count is C<*>, the offset is relative to the start of the
packed string.

=item *

And if it's an integer I<n>, the offset is relative to the start of the
I<n>th innermost C<( )> group, or to the start of the string if I<n> is
bigger than the group level.

=back

The repeat count for C<u> is interpreted as the maximal number of bytes
to encode per line of output, with 0, 1 and 2 replaced by 45. The
repeat count should not be more than 65.
=item *

The C<a>, C<A>, and C<Z> types gobble just one value, but pack it as a
string of length count, padding with nulls or spaces as needed. When
unpacking, C<A> strips trailing whitespace and nulls, C<Z> strips
everything after the first null, and C<a> returns data with no
stripping at all.

If the value to pack is too long, the result is truncated. If it's too
long and an explicit count is provided, C<Z> packs only C<$count-1>
bytes, followed by a null byte. Thus C<Z> always packs a trailing null,
except when the count is 0.

=item *

Likewise, the C<b> and C<B> formats pack a string that's that many bits
long. Each such format generates 1 bit of the result. These are
typically followed by a repeat count like C<B8> or C<B64>.

Each result bit is based on the least-significant bit of the
corresponding input character, i.e., on C<ord($char)%2>. In particular,
characters C<"0"> and C<"1"> generate bits 0 and 1, as do characters
C<"\000"> and C<"\001">.

Starting from the beginning of the input string, each 8-tuple of
characters is converted to 1 character of output. With format C<b>, the
first character of the 8-tuple determines the least-significant bit of
a character; with format C<B>, it determines the most-significant bit
of a character.

A C<*> for the repeat count uses all characters of the input field. On
unpacking, bits are converted to a string of C<0>s and C<1>s.

=item *

The C<h> and C<H> formats pack a string that many nybbles (4-bit
groups, representable as hexadecimal digits, C<"0".."9">
C<"a".."f">) long.

For each such format, L<C<pack>|/pack TEMPLATE,LIST> generates 4 bits
of result. With non-alphabetical characters, the result is based on the
4 least-significant bits of the input character, i.e., on
C<ord($char)%16>. In particular, characters C<"0"> and C<"1"> generate
nybbles 0 and 1, as do bytes C<"\000"> and C<"\001">. For characters
C<"a".."f"> and C<"A".."F">, the result is compatible with the usual
hexadecimal digits, so that C<"a"> and C<"A"> both generate the nybble
C<0xA==10>.
Use only these specific hex characters with this format.

Starting from the beginning of the template to
L<C<pack>|/pack TEMPLATE,LIST>, each pair of characters is converted to
1 character of output. With format C<h>, the first character of the
pair determines the least-significant nybble of the output character;
with format C<H>, it determines the most-significant nybble.

A C<*> for the repeat count uses all characters of the input field. For
L<C<unpack>|/unpack TEMPLATE,EXPR>, nybbles are converted to a string
of hexadecimal digits.

=item *

The C<p> format packs a pointer to a null-terminated string. You are
responsible for ensuring that the string is not a temporary value, as
that could potentially get deallocated before you got around to using
the packed result. The C<P> format packs a pointer to a structure of
the size indicated by the length. A null pointer is created if the
corresponding value for C<p> or C<P> is L<C<undef>|/undef EXPR>;
similarly with L<C<unpack>|/unpack TEMPLATE,EXPR>, where a null pointer
unpacks into L<C<undef>|/undef EXPR>.

=item *

The C</> template character allows packing and unpacking of a sequence
of items where the packed structure contains a packed item count
followed by the packed items themselves. For
L<C<pack>|/pack TEMPLATE,LIST>, you write
I<length-item>C</>I<sequence-item>, and the I<length-item> describes
how the length value is packed. Formats likely to be of most use are
integer-packing ones like C<n> for Java strings, C<w> for ASN.1 or
SNMP, and C<N> for Sun XDR.

For L<C<pack>|/pack TEMPLATE,LIST>, I<sequence-item> may have a repeat
count, in which case the minimum of that and the number of available
items is used as the argument for I<length-item>. If it has no repeat
count or uses a '*', the number of available items is used.

For L<C<unpack>|/unpack TEMPLATE,EXPR>, an internal stack of integer
arguments unpacked so far is used. You write C</>I<sequence-item> and
the repeat count is obtained by popping off the last element from the
stack. The I<sequence-item> must not have a repeat count.
If I<sequence-item> refers to a string type (C<"A">, C<"a">, or
C<"Z">), the I<length-item> is the string length, not the number of
strings. The I<length-item> is not returned explicitly from
L<C<unpack>|/unpack TEMPLATE,EXPR>.

Supplying a count to the I<length-item> format letter is only useful
with C<A>, C<a>, or C<Z>. Packing with a I<length-item> of C<a> or C<Z>
may introduce C<"\000"> characters, which Perl does not regard as legal
in numeric strings.

=item *

The integer types C<s>, C<S>, C<l>, and C<L> may be followed by a C<!>
modifier to specify native shorts or longs. As shown in the example
above, a bare C<l> means exactly 32 bits, although the native C<long>
as seen by the local C compiler may be larger. This is mainly an issue
on 64-bit platforms. You can see whether using C<!> makes any
difference this way:

    printf "format s is %d, s! is %d\n",
        length pack("s"), length pack("s!");

    printf "format l is %d, l! is %d\n",
        length pack("l"), length pack("l!");

C<i!> and C<I!> are also allowed, but only for completeness' sake: they
are identical to C<i> and C<I>.

The actual sizes (in bytes) of native shorts, ints, longs, and long
longs on the platform where Perl was built are available via the
L<C<Config>|Config> module:

    use Config;
    print $Config{shortsize},    "\n";
    print $Config{intsize},      "\n";
    print $Config{longsize},     "\n";
    print $Config{longlongsize}, "\n";

C<$Config{longlongsize}> is undefined on systems without long long
support.

=item *

The integer formats C<s>, C<S>, C<i>, C<I>, C<l>, C<L>, C<j>, and C<J>
are inherently non-portable between processors and operating systems
because they obey native byteorder and endianness. The names
I<big-endian> and I<little-endian> are comic references to the
egg-eating habits of the little-endian Lilliputians and the big-endian
Blefuscudians from the classic Jonathan Swift satire,
I<Gulliver's Travels>.

The byteorder on the platform where Perl was built is available via
L<Config>:

    use Config;
    print "$Config{byteorder}\n";

or from the command line:

    $ perl -V:byteorder

Byteorders C<"1234"> and C<"12345678"> are little-endian; C<"4321"> and
C<"87654321"> are big-endian. Systems with multiarchitecture binaries
will have C<"ffff">, signifying that static information doesn't work,
and one must use runtime probing.
For portably packed integers, either use the formats C<n>, C<N>, C<v>, and C<V> or else use the C<< > >> and C<< < >> modifiers described immediately below. See also L<perlport>.

=item *

Also floating-point numbers have endianness. Usually (but not always) this agrees with the integer endianness. Even though most platforms these days use the IEEE 754 binary format, there are differences, especially if long doubles are involved. You can check the C<Config> variables C<doublekind> and C<longdblkind> (also C<doublesize>, C<longdblsize>): the "kind" values are enums, unlike C<byteorder>. Portability-wise the best option is probably to keep to the IEEE 754 64-bit doubles, and of agreed-upon endianness. Another possibility is the C<"%a"> format of L<C<printf>|/printf FILEHANDLE FORMAT, LIST>.

=item *

Starting with Perl 5.10.0, integer and floating-point formats, along with the C<p> and C<P> formats and C<()> groups, may all be followed by the C<< > >> or C<< < >> endianness modifiers to respectively enforce big- or little-endian byte-order. These modifiers are especially useful given how C<n>, C<N>, C<v>, and C<V> don't cover signed integers, 64-bit integers, or floating-point values. Here are some concerns to keep in mind when using an endianness modifier:

=over

=item *

Exchanging signed integers between different platforms works only when all platforms store them in the same format. Most platforms store signed integers in two's-complement notation, so usually this is not an issue.

=item *

The C<< > >> or C<< < >> modifiers can only be used on floating-point formats on big- or little-endian machines. Otherwise, attempting to use them raises an exception.

=item *

C<< > >> or C<< < >> on floating-point values can be useful, but also dangerous if you don't know exactly what you're doing. It is not a general way to portably store floating-point values.
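A minimal sketch of the endianness modifiers described above:

```perl
# ">" forces big-endian, "<" forces little-endian, on any platform.
my $big    = pack("l>", 1);   # "\x00\x00\x00\x01"
my $little = pack("l<", 1);   # "\x01\x00\x00\x00"

# "L>" is byte-for-byte equivalent to the classic "N" format.
print pack("L>", 42) eq pack("N", 42) ? "same\n" : "different\n";   # same
```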
=item *

When using C<< > >> or C<< < >> on a C<()> group, this affects all types inside the group that accept byte-order modifiers, including all subgroups. It is silently ignored for all other types. You are not allowed to override the byte-order within a group that already has a byte-order modifier suffix.

=back

=item *

Real numbers (floats and doubles) are in native machine format only. Due to the multiplicity of floating-point formats and the lack of a standard network representation for them, no facility for interchange has been made; packed floating-point data written on one machine may not be readable on another. See also L<perlport>. If you know I<exactly> what you're doing, you can use the C<< > >> or C<< < >> modifiers to force big- or little-endian byte-order on floating-point values. Because Perl uses doubles (or long doubles, if configured) internally for all numeric calculation, converting from double into float and thence to double again loses precision, so C<unpack("f", pack("f", $foo))> will not in general equal C<$foo>.

=item *

Pack and unpack can operate in two modes: character mode (C<C0> mode) where the packed string is processed per character, and UTF-8 byte mode (C<U0> mode) where the packed string is processed in its UTF-8-encoded Unicode form on a byte-by-byte basis. Character mode is the default unless the format string starts with C<U>. You can always switch mode mid-format with an explicit C<C0> or C<U0> in the format. This mode remains in effect until the next mode change, or until the end of the C<()> group it (directly) applies to. Using C<C0> to get Unicode characters while using C<U0> to get I<non>-Unicode bytes is not necessarily obvious, and you should not try to use L<C<pack>|/pack TEMPLATE,LIST>/L<C<unpack>|/unpack TEMPLATE,EXPR> as a substitute for the L<Encode> module.

=item *

You must yourself do any alignment or padding by inserting, for example, enough C<"x">es while packing. There is no way for L<C<pack>|/pack TEMPLATE,LIST> and L<C<unpack>|/unpack TEMPLATE,EXPR> to know where characters are going to or coming from, so they handle their output and input as flat sequences of characters.

=item *

A C<()> group is a sub-TEMPLATE enclosed in parentheses. A group may take a repeat count either as postfix, or for L<C<unpack>|/unpack TEMPLATE,EXPR>, also via the C</> template character.
Within each repetition of a group, positioning with C<@> starts over at 0. Therefore, the result of

    pack("@1A((@2A)@3A)", qw[X Y Z])

is the string C<"\0X\0\0YZ">.

=item *

C<x> and C<X> accept the C<!> modifier to act as alignment commands: they jump forward or back to the closest position aligned at a multiple of C<count> characters. For example, to L<C<pack>|/pack TEMPLATE,LIST> or L<C<unpack>|/unpack TEMPLATE,EXPR> a C structure like

    struct {
        char   c;      /* one signed, 8-bit character */
        double d;
        char   cc[2];
    }

one may need to use the template C<c x![d] d c[2]>. This assumes that doubles must be aligned to the size of double. For alignment commands, a C<count> of 0 is equivalent to a C<count> of 1; both are no-ops.

=item *

C<n>, C<N>, C<v> and C<V> accept the C<!> modifier to represent signed 16-/32-bit integers in big-/little-endian order. This is portable only when all platforms sharing packed data use the same binary representation for signed integers; for example, when all platforms use two's-complement representation.

=item *

Comments can be embedded in a TEMPLATE using C<#> through the end of line. White space can separate pack codes from each other, but modifiers and repeat counts must follow immediately. Breaking complex templates into individual line-by-line components, suitably annotated, can do as much to improve legibility and maintainability of pack/unpack formats as C</x> can for complicated pattern matches.

=item *

If TEMPLATE requires more arguments than L<C<pack>|/pack TEMPLATE,LIST> is given, L<C<pack>|/pack TEMPLATE,LIST> assumes additional C<""> arguments. If TEMPLATE requires fewer arguments than given, extra arguments are ignored.

=item *

Attempting to pack the special floating point values C<Inf> and C<NaN> (infinity, also in negative, and not-a-number) into packed integer values (like C<"L">) is a fatal error. The reason for this is that there simply isn't any sensible mapping for these special values into integers.

=back

The same template may generally also be used in L<C<unpack>|/unpack TEMPLATE,EXPR>.
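The struct example above can be sketched as a round trip; this is a hedged variant that uses C<a2> for the two-character field instead of C<c[2]> so the result comes back as a string (C<x![d]> supplies whatever padding the native double alignment requires):

```perl
# Round-trip a struct-like record: signed char, aligned double, 2 bytes.
my $template = "c x![d] d a2";
my $packed   = pack($template, 65, 3.14, "hi");

my ($c, $d, $cc) = unpack($template, $packed);
print "$c $d $cc\n";   # 65 3.14 hi
```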
=item package NAMESPACE

=item package NAMESPACE VERSION X<package> X<module> X<namespace> X<version>

=item package NAMESPACE BLOCK

=item package NAMESPACE VERSION BLOCK X<package> X<module> X<namespace> X<version>

=for Pod::Functions declare a separate global namespace

Declares the BLOCK or the rest of the compilation unit as being in the given namespace. The scope of the package declaration is either the supplied code BLOCK or, in the absence of a BLOCK, from the declaration itself through the end of the current scope (the enclosing block, file, or L<C<eval>|/eval EXPR>). That is, the forms without a BLOCK are operative through the end of the current scope, just like the L<C<my>|/my VARLIST>, L<C<state>|/state VARLIST>, and L<C<our>|/our VARLIST> operators. All unqualified dynamic identifiers in this scope will be in the given namespace, except where overridden by another L<C<package>|/package NAMESPACE> declaration or when they're one of the special identifiers that qualify into C<main::>, like C<STDOUT>, C<ARGV>, C<ENV>, and the punctuation variables. A package statement affects dynamic variables only, including those you've used L<C<local>|/local EXPR> on, but I<not> lexically-scoped variables, which are created with L<C<my>|/my VARLIST>, L<C<state>|/state VARLIST>, or L<C<our>|/our VARLIST>. Typically it would be the first declaration in a file included by L<C<require>|/require VERSION> or L<C<use>|/use Module VERSION LIST>. You can refer to variables and filehandles in other packages by prefixing the identifier with the package name and a double colon, as in C<$SomePack::var> or C<ThatPack::INPUT_HANDLE>. If the package name is omitted, the C<main> package is assumed. That is, C<$::sail> is equivalent to C<$main::sail> (as well as to C<$main'sail>, still seen in ancient code, mostly from Perl 4). If VERSION is provided, L<C<package>|/package NAMESPACE> sets the C<$VERSION> variable in the given namespace to a L<version> object with the VERSION provided. VERSION must be a "strict" style version number as defined by the L<version> module: a positive decimal number (integer or decimal-fraction) without exponentiation or else a dotted-decimal v-string with a leading 'v' character and at least three components. You should set C<$VERSION> only once per package. See L<perlmod/"Packages"> for more information about packages, modules, and classes.
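The NAMESPACE VERSION BLOCK form can be sketched like this (the package name C<Foo::Counter> is purely illustrative):

```perl
# VERSION sets $Foo::Counter::VERSION; the BLOCK limits the scope.
package Foo::Counter 1.23 {
    our $count = 0;
    sub bump { return ++$count }
}

# Back in the surrounding package here.
print $Foo::Counter::VERSION, "\n";   # 1.23
print Foo::Counter::bump(), "\n";     # 1
```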
See L<perlsub> for other scoping issues. =item __PACKAGE__ X<__PACKAGE__> =for Pod::Functions +5.004 the current package A special token that returns the name of the package in which it occurs. =item pipe READHANDLE,WRITEHANDLE X<pipe> =for Pod::Functions open a pair of connected filehandles Opens a pair of connected pipes like the corresponding system call. Note that if you set up a loop of piped processes, deadlock can occur unless you are very careful. In addition, note that Perl's pipes use IO buffering, so you may need to set L<C<$E<verbar>>|perlvar/$E<verbar>> to flush your WRITEHANDLE after each command, depending on the application. Returns true on success. See L<IPC::Open2>, L<IPC::Open3>, and L<perlipc/"Bidirectional Communication with Another Process"> for examples of such things. On systems that support a close-on-exec flag on files, that flag is set on all newly opened file descriptors whose L<C<fileno>|/fileno FILEHANDLE>s are I<higher> than the current value of L<C<$^F>|perlvar/$^F> (by default 2 for C<STDERR>). See L<perlvar/$^F>. =item pop ARRAY X<pop> X<stack> =item pop =for Pod::Functions remove the last element from an array and return it Pops and returns the last value of the array, shortening the array by one element. Returns the undefined value if the array is empty, although this may also happen at other times. If ARRAY is omitted, pops the L<C<@ARGV>|perlvar/@ARGV> array in the main program, but the L<C<@_>|perlvar/@_> array in subroutines, just like L<C<shift>|/shift ARRAY>. Starting with Perl 5.14, an experimental feature allowed L<C<pop>|/pop ARRAY> to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24. =item pos SCALAR X<pos> X<match, position> =item pos =for Pod::Functions find or set the offset for the last/next m//g search Returns the offset of where the last C<m//g> search left off for the variable in question (L<C<$_>|perlvar/$_> is used when the variable is not specified). 
This offset is in characters unless the (no-longer-recommended) L<C<use bytes>|bytes> pragma is in effect, in which case the offset is in bytes. Note that 0 is a valid match offset. L<C<undef>|/undef EXPR> indicates that the search position is reset (usually due to match failure, but can also be because no match has yet been run on the scalar). L<C<pos>|/pos SCALAR> directly accesses the location used by the regexp engine to store the offset, so assigning to L<C<pos>|/pos SCALAR> will change that offset, and so will also influence the C<\G> zero-width assertion in regular expressions. Both of these effects take place for the next match, so you can't affect the position with L<C<pos>|/pos SCALAR> during the current match, such as in C<(?{pos() = 5})> or C<s//pos() = 5/e>. Setting L<C<pos>|/pos SCALAR> also resets the I<matched with zero-length> flag, described under L<perlre/"Repeated Patterns Matching a Zero-length Substring">. Because a failed C<m//gc> match doesn't reset the offset, the return from L<C<pos>|/pos SCALAR> won't change either in this case. See L<perlre> and L<perlop>.

=item print FILEHANDLE LIST X<print>

=item print FILEHANDLE

=item print LIST

=item print

=for Pod::Functions output a list to a filehandle

Prints a string or a list of strings. Returns true if successful. FILEHANDLE may be a scalar variable containing the name of or a reference to the filehandle, thus introducing one level of indirection. (NOTE: If FILEHANDLE is a variable and the next token is a term, it may be misinterpreted as an operator unless you interpose a C<+> or put parentheses around the arguments.) If FILEHANDLE is omitted, prints to the last selected (see L<C<select>|/select FILEHANDLE>) output handle. If LIST is omitted, prints L<C<$_>|perlvar/$_> to the currently selected output handle. To use FILEHANDLE alone to print the content of L<C<$_>|perlvar/$_> to it, you must use a bareword filehandle like C<FH>, not an indirect one like C<$fh>. To set the default output handle to something other than STDOUT, use the select operation. The current value of L<C<$,>|perlvar/$,> (if any) is printed between each LIST item. The current value of L<C<$\>|perlvar/$\> (if any) is printed after the entire LIST has been printed.
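The effect of C<$,> and C<$\> can be sketched as follows (printing to an in-memory filehandle so the result is easy to inspect):

```perl
# $, goes between list items; $\ goes after the whole list.
open my $fh, ">", \my $out or die "open: $!";
{
    local $, = "-";
    local $\ = "!\n";
    print $fh "a", "b", "c";
}
print $out;    # a-b-c!
```

Using C<local> keeps the punctuation variables from leaking into the rest of the program.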
Because print takes a LIST, anything in the LIST is evaluated in list context, including any subroutines whose return lists you pass to L<C<print>|/print FILEHANDLE LIST>. Be careful not to follow the print keyword with a left parenthesis unless you want the corresponding right parenthesis to terminate the arguments to the print; put parentheses around all arguments (or interpose a C<+>, but that doesn't look as good). Printing to a closed pipe or socket will generate a SIGPIPE signal; see L<perlipc> for more on signal handling.

=item printf FILEHANDLE FORMAT, LIST X<printf>

=item printf FILEHANDLE

=item printf FORMAT, LIST

=item printf

=for Pod::Functions output a formatted list to a filehandle

Equivalent to C<print FILEHANDLE sprintf(FORMAT, LIST)>, except that L<C<$\>|perlvar/$\> (the output record separator) is not appended. The FORMAT and the LIST are actually parsed as a single list. The first argument of the list will be interpreted as the L<C<printf>|/printf FILEHANDLE FORMAT, LIST> format. This means that C<printf(@_)> will use C<$_[0]> as the format. See L<sprintf|/sprintf FORMAT, LIST> for an explanation of the format argument. If C<use locale> (including C<use locale ':not_characters'>) is in effect and L<C<POSIX::setlocale>|POSIX/C<setlocale>> has been called, the character used for the decimal separator in formatted floating-point numbers is affected by the C<LC_NUMERIC> locale setting. See L<perllocale> and L<POSIX>. For historical reasons, if you omit the list, L<C<$_>|perlvar/$_> is used as the format; to use FILEHANDLE without a list, you must use a bareword filehandle like C<FH>, not an indirect one like C<$fh>. However, this will rarely do what you want; if L<C<$_>|perlvar/$_> contains formatting codes, they will be replaced with the empty string and a warning will be emitted if L<warnings> are enabled. Just use L<C<print>|/print FILEHANDLE LIST> if you want to print the contents of L<C<$_>|perlvar/$_>. Don't fall into the trap of using a L<C<printf>|/printf FILEHANDLE FORMAT, LIST> when a simple L<C<print>|/print FILEHANDLE LIST> would do.
The L<C<print>|/print FILEHANDLE LIST> is more efficient and less error prone.

=item prototype FUNCTION X<prototype>

=item prototype

=for Pod::Functions +5.002 get the prototype (if any) of a subroutine

Returns the prototype of a function as a string (or L<C<undef>|/undef EXPR> if the function has no prototype). FUNCTION is a reference to, or the name of, the function whose prototype you want to retrieve. If FUNCTION is omitted, L<C<$_>|perlvar/$_> is used. If FUNCTION is a string starting with C<CORE::>, the rest is taken as a name for a Perl builtin. If the builtin's arguments cannot be adequately expressed by a prototype (such as L<C<system>|/system LIST>), L<C<prototype>|/prototype FUNCTION> returns L<C<undef>|/undef EXPR>, because the builtin does not really behave like a Perl function. Otherwise, the string describing the equivalent prototype is returned.

=item push ARRAY,LIST X<push> X<stack>

=for Pod::Functions append one or more elements to an array

Treats ARRAY as a stack by appending the values of LIST to the end of ARRAY. The length of ARRAY increases by the length of LIST. Returns the number of elements in the array following the completed L<C<push>|/push ARRAY,LIST>. Starting with Perl 5.14, an experimental feature allowed L<C<push>|/push ARRAY,LIST> to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.

=item q/STRING/

=for Pod::Functions singly quote a string

=item qq/STRING/

=for Pod::Functions doubly quote a string

=item qw/STRING/

=for Pod::Functions quote a list of words

=item qx/STRING/

=for Pod::Functions backquote quote a string

Generalized quotes. See L<perlop/"Quote-Like Operators">.

=item qr/STRING/

=for Pod::Functions +5.005 compile pattern

Regexp-like quote. See L<perlop/"Regexp Quote-Like Operators">.

=item quotemeta EXPR X<quotemeta> X<metacharacter>

=item quotemeta

=for Pod::Functions quote regular expression magic characters

Returns the value of EXPR with all the ASCII non-"word" characters backslashed. (That is, all ASCII characters not matching C</[A-Za-z_0-9]/> will be preceded by a backslash in the returned string, regardless of any locale settings.)
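The backslashing described above can be sketched as:

```perl
# Every ASCII non-word character gets a backslash.
my $pat = quotemeta("1.5*x");   # "1\.5\*x"
print "$pat\n";

# The dot and star are now literal, so this matches only "1.5*x".
print "match\n" if "1.5*x" =~ /^$pat$/;
```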
This is the internal function implementing the C<\Q> escape in double-quoted strings. (See below for the behavior on non-ASCII code points.) If EXPR is omitted, uses L<C<$_>|perlvar/$_>. quotemeta (and C<\Q> ... C<\E>) are useful when interpolating strings into regular expressions, because by default an interpolated variable will be considered a mini-regular expression. For example:

    my $sentence = 'The quick brown fox jumped over the lazy dog';
    my $substring = 'quick.*?fox';
    $sentence =~ s{$substring}{big bad wolf};

will cause C<$sentence> to become C<'The big bad wolf jumped over ...'>, because the metacharacters in C<$substring> are interpreted; quoting with L<C<quotemeta>|/quotemeta EXPR> or C<\Q> prevents this.

In Perl v5.14, all non-ASCII characters were quoted in non-UTF-8-encoded strings, but not quoted in UTF-8 strings. Starting in Perl v5.16, consistent quoting rules are applied regardless of the string's representation. The default, also selectable with L<C<use feature 'unicode_strings'>|feature/The 'unicode_strings' feature>, is to quote all characters in the upper Latin1 range. This provides complete backwards compatibility for old programs which do not use Unicode. (Note that C<unicode_strings> is automatically enabled within the scope of a S<C<use v5.12>> or greater.)

Within the scope of L<C<use locale>|locale>, all non-ASCII Latin1 code points are quoted whether the string is encoded as UTF-8 or not. As mentioned above, locale does not affect the quoting of ASCII-range characters. This protects against those locales where characters such as C<"|"> are considered to be word characters. Otherwise, Perl quotes non-ASCII characters using an adaptation from Unicode (see L<https://www.unicode.org/reports/tr31/>). Perl promises that if it ever adds characters to the set of quoted ASCII metacharacters (currently C<\ E<verbar> ( ) [ { ^ $ * + ? .>), it will only use ones that have the Pattern_Syntax property. Perl also promises that if it ever adds characters that are considered to be white space in regular expressions (currently mostly affected by C</x>), they will all have the Pattern_White_Space property.

=item rand EXPR X<rand> X<random>

=item rand

=for Pod::Functions retrieve the next pseudorandom number

Returns a random fractional number greater than or equal to C<0> and less than the value of EXPR. (EXPR should be positive.) If EXPR is omitted, the value C<1> is used. Currently EXPR with the value C<0> is also special-cased as C<1> (this was undocumented before Perl 5.8.0 and is subject to change in future versions of Perl). Automatically calls L<C<srand>|/srand EXPR> unless L<C<srand>|/srand EXPR> has already been called. See also L<C<srand>|/srand EXPR>. Apply L<C<int>|/int EXPR> to the value returned by L<C<rand>|/rand EXPR> if you want random integers instead of random fractional numbers.
For example, int(rand(10)) returns a random integer between C<0> and C<9>, inclusive. (Note: If your rand function consistently returns numbers that are too large or too small, then your version of Perl was probably compiled with the wrong number of RANDBITS.)

=item read FILEHANDLE,SCALAR,LENGTH,OFFSET X<read> X<file, read>

=item read FILEHANDLE,SCALAR,LENGTH

=for Pod::Functions fixed-length buffered input from a filehandle

Attempts to read LENGTH I<characters> of data into variable SCALAR from the specified FILEHANDLE. Returns the number of characters actually read, C<0> at end of file, or undef if there was an error (in the latter case L<C<$!>|perlvar/$!> is also set). The call is implemented in terms of either Perl's or your system's native L<fread(3)> library function, via the L<PerlIO> layers applied to the handle. To get a true L<read(2)> system call, see L<sysread|/sysread FILEHANDLE,SCALAR,LENGTH,OFFSET>. Note the I<characters>: depending on the status of the filehandle, either (8-bit) bytes or characters are read. By default, all filehandles operate on bytes, but for example if the filehandle has been opened with the C<:utf8> I/O layer (see L<C<open>|/open FILEHANDLE,MODE,EXPR>, and the L<open> pragma), the I/O will operate on UTF8-encoded Unicode characters, not bytes. Similarly for the C<:encoding> layer: in that case pretty much any characters can be read.

=item readdir DIRHANDLE X<readdir>

=for Pod::Functions get a directory from a directory handle

Returns the next directory entry for a directory opened by L<C<opendir>|/opendir DIRHANDLE,EXPR>. If used in list context, returns all the rest of the entries in the directory. If there are no more entries, returns the undefined value in scalar context and the empty list in list context. If you're planning to filetest the return values out of a L<C<readdir>|/readdir DIRHANDLE>, you'd better prepend the directory in question. Otherwise, because we didn't L<C<chdir>|/chdir EXPR> there, it would have been testing the wrong file.
    opendir(my $dh, $some_dir) || die "Can't opendir $some_dir: $!";
    my @dots = grep { /^\./ && -f "$some_dir/$_" } readdir($dh);
    closedir $dh;

As of Perl 5.12 you can use a bare L<C<readdir>|/readdir DIRHANDLE> in a C<while> loop, which will set L<C<$_>|perlvar/$_> on every iteration. If either a C<readdir> expression or an explicit assignment of a C<readdir> expression to a scalar is used as a C<while>/C<for> condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value. This will work I<only> on Perls of a recent vintage:

    use 5.012; # so readdir assigns to $_ in a lone while test

=item readline EXPR

=item readline X<readline> X<gets> X<fgets>

=for Pod::Functions fetch a record from a file

Reads from the filehandle whose typeglob is contained in EXPR (or from C<*ARGV> if EXPR is not provided). In scalar context, each call reads and returns the next line until end-of-file is reached, whereupon the subsequent call returns L<C<undef>|/undef EXPR>. In list context, reads until end-of-file is reached and returns a list of lines. Note that the notion of "line" used here is whatever you may have defined with L<C<$E<sol>>|perlvar/$E<sol>> (or C<$INPUT_RECORD_SEPARATOR> in L<English>). See L<perlvar/"$/">. When L<C<$E<sol>>|perlvar/$E<sol>> is set to L<C<undef>|/undef EXPR>, when L<C<readline>|/readline EXPR> is in scalar context (i.e., file slurp mode), and when an empty file is read, it returns C<''> the first time, followed by L<C<undef>|/undef EXPR> subsequently. This is the internal function implementing the C<< <EXPR> >> operator, but you can use it directly. The C<< <EXPR> >> operator is discussed in more detail in L<perlop/"I/O Operators">.

    my $line = <STDIN>;
    my $line = readline(*STDIN);    # same thing

If L<C<readline>|/readline EXPR> encounters an operating system error, L<C<$!>|perlvar/$!> will be set with the corresponding error message.
It can be helpful to check L<C<$!>|perlvar/$!> when you are reading from filehandles you don't trust, such as a tty or a socket. The following example uses the operator form of L<C<readline>|/readline EXPR> and dies if the result is not defined.

    while ( ! eof($fh) ) {
        defined( $_ = readline $fh ) or die "readline failed: $!";
        ...
    }

Note that you can't handle L<C<readline>|/readline EXPR> errors that way with the C<ARGV> filehandle. In that case, you have to open each element of L<C<@ARGV>|perlvar/@ARGV> yourself since L<C<eof>|/eof FILEHANDLE> handles C<ARGV> differently.

    foreach my $arg (@ARGV) {
        open(my $fh, $arg) or warn "Can't open $arg: $!";
        while ( ! eof($fh) ) {
            defined( $_ = readline $fh )
                or die "readline failed for $arg: $!";
            ...
        }
    }

Like the C<< <EXPR> >> operator, if a C<readline> expression is used as the condition of a C<while> or C<for> loop, then it will be implicitly assigned to C<$_>. If either a C<readline> expression or an explicit assignment of a C<readline> expression to a scalar is used as a C<while>/C<for> condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value.

=item readlink EXPR X<readlink>

=item readlink

=for Pod::Functions determine where a symbolic link is pointing

Returns the value of a symbolic link, if symbolic links are implemented. If not, raises an exception. If there is a system error, returns the undefined value and sets L<C<$!>|perlvar/$!> (errno). If EXPR is omitted, uses L<C<$_>|perlvar/$_>. Portability issues: L<perlport/readlink>.

=item readpipe EXPR

=item readpipe X<readpipe>

=for Pod::Functions execute a system command and collect standard output

EXPR is executed as a system command. The collected standard output of the command is returned. In scalar context, it comes back as a single (potentially multi-line) string.
In list context, returns a list of lines (however you've defined lines with L<C<$E<sol>>|perlvar/$E<sol>> (or C<$INPUT_RECORD_SEPARATOR> in L<English>)). This is the internal function implementing the C<qx/EXPR/> operator, but you can use it directly. The C<qx/EXPR/> operator is discussed in more detail in L<perlop/"C<qx/I<STRING>/>">. If EXPR is omitted, uses L<C<$_>|perlvar/$_>.

=item recv SOCKET,SCALAR,LENGTH,FLAGS X<recv>

=for Pod::Functions receive a message over a Socket

Receives a message on a socket. Attempts to receive LENGTH characters of data into variable SCALAR from the specified SOCKET filehandle. SCALAR will be grown or shrunk to the length actually read. Takes the same flags as the system call of the same name. Returns the address of the sender if SOCKET's protocol supports this; returns an empty string otherwise. If there's an error, returns the undefined value. This call is actually implemented in terms of the L<recvfrom(2)> system call. See L<perlipc/"UDP: Message Passing"> for examples. Note that if the socket has been marked as C<:utf8>, C<recv> will throw an exception. The C<:encoding(...)> layer implicitly introduces the C<:utf8> layer. See L<C<binmode>|/binmode FILEHANDLE, LAYER>.

=item redo LABEL X<redo>

=item redo EXPR

=item redo

=for Pod::Functions start this loop iteration over again

The L<C<redo>|/redo LABEL> command restarts the loop block without evaluating the conditional again. The L<C<continue>|/continue BLOCK> block, if any, is not executed. If the LABEL is omitted, the command refers to the innermost enclosing loop. The C<redo EXPR> form, available starting in Perl 5.18.0, allows a label name to be computed at run time, and is otherwise identical to C<redo LABEL>. Note that a block by itself is semantically identical to a loop that executes once; thus L<C<redo>|/redo LABEL> inside such a block will effectively turn it into a looping construct. L<C<redo>|/redo LABEL> is exempt from the looks-like-a-function rule, so C<redo ("foo")."bar"> will cause "bar" to be part of the argument to L<C<redo>|/redo LABEL>.

=item ref EXPR X<ref> X<reference>

=item ref

=for Pod::Functions find out the type of thing being referenced

Examines the value of EXPR, expecting it to be a reference, and returns a string giving information about the reference and the type of referent. If EXPR is not specified, L<C<$_>|perlvar/$_> will be used. If the operand is not a reference, then the empty string will be returned. An empty string will only be returned in this situation.
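Testing referenceness by comparing against the empty string can be sketched as:

```perl
# ref() returns "" for non-references and a type name otherwise.
my @things = (\my $scalar, [1, 2], {}, sub {}, "plain");
for my $t (@things) {
    if (ref($t) ne "") { print ref($t), "\n" }   # SCALAR ARRAY HASH CODE
    else               { print "not a reference\n" }
}
```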
C<ref> is often useful to just test whether a value is a reference, which can be done by comparing the result to the empty string. It is a common mistake to use the result of C<ref> directly as a truth value: this goes wrong because C<0> (which is false) can be returned for a reference. If the operand is a reference to a blessed object, then the name of the class into which the referent is blessed will be returned. C<ref> doesn't care what the physical type of the referent is; blessing takes precedence over such concerns. Beware that exact comparison of C<ref> results against a class name doesn't perform a class membership test: a class's members also include objects blessed into subclasses, for which C<ref> will return the name of the subclass. Also, class names can clash with built-in type names such as C<ARRAY>. The ambiguity between built-in type names and class names significantly limits the utility of C<ref>. For unambiguous information, use L<C<Scalar::Util::blessed()>|Scalar::Util/blessed> for information about blessing, and L<C<Scalar::Util::reftype()>|Scalar::Util/reftype> for information about physical types. Use L<the C<isa> method|UNIVERSAL/C<< $obj->isa( TYPE ) >>> for class membership tests, though one must be sure of blessedness before attempting a method call. See also L<perlref> and L<perlobj>.

=item rename OLDNAME,NEWNAME X<rename> X<move> X<mv> X<ren>

=for Pod::Functions change a filename

Changes the name of a file; an existing file NEWNAME will be clobbered. Returns true for success, false otherwise. Behavior of this function varies wildly depending on your system implementation. For example, it will usually not work across file system boundaries, even though the system I<mv> command sometimes compensates for this. Other restrictions include whether it works on directories, open files, or pre-existing files. Check L<perlport> and either the L<rename(2)> manpage or equivalent system documentation for details. For a platform independent L<C<move>|File::Copy/move> function look at the L<File::Copy> module. Portability issues: L<perlport/rename>.
=item require VERSION X<require>

=item require EXPR

=item require

=for Pod::Functions load in external functions from a library at runtime

Demands a version of Perl specified by VERSION, or demands some semantics specified by EXPR or by L<C<$_>|perlvar/$_> if EXPR is not supplied. VERSION may be either a literal such as v5.24.1, which will be compared to L<C<$^V>|perlvar/$^V> (or C<$PERL_VERSION> in L<English>), or a numeric argument of the form 5.024001, which will be compared to L<C<$]>|perlvar/$]>. An exception is raised if VERSION is greater than the version of the current Perl interpreter. Compare with L<C<use>|/use Module VERSION LIST>, which can do a similar check at compile time. The numeric form is an older syntax, still seen in older code.

    require v5.24.1;    # run time version check
    require 5.24.1;     # ditto
    require 5.024_001;  # ditto; older syntax compatible with perl 5.6

Otherwise, L<C<require>|/require VERSION> demands that a library file be included if it hasn't already been included. The file is included via the do-FILE mechanism, which is essentially just a variety of L<C<eval>|/eval EXPR>. The file must return true as the last statement to indicate successful execution of any initialization code, so it's customary to end such a file with C<1;> unless you're sure it'll return true otherwise. But it's better just to put the C<1;>, in case you add more statements.

If EXPR is a bareword, L<C<require>|/require VERSION> assumes a F<.pm> extension and replaces C<::> with C</> in the filename, to make it easy to load standard modules. Thus C<require Foo::Bar> will look for the F<Foo/Bar.pm> file in the directories specified in the L<C<@INC>|perlvar/@INC> array, and it will autovivify the C<Foo::Bar::> stash at compile time. But if you try this:

    my $class = 'Foo::Bar';
    require $class;     # $class is not a bareword

the F<.pm> translation does not happen: require will look for a file literally named F<Foo::Bar> in L<C<@INC>|perlvar/@INC> and fail. In this case you can say C<eval "require $class">.

=item sort SUBNAME LIST X<sort>

=item sort BLOCK LIST

=item sort LIST

=for Pod::Functions sort a list of values

If C<use locale> (but not S<C<use locale ':not_characters'>>) is in effect, C<sort LIST> sorts LIST according to the current collation locale. See L<perllocale>. L<C<sort>|/sort SUBNAME LIST> returns aliases into the original list, much as a for loop's index variable aliases the list elements. That is, modifying an element of a list returned by L<C<sort>|/sort SUBNAME LIST> (for example, in a C<foreach>, L<C<map>|/map BLOCK LIST> or L<C<grep>|/grep BLOCK LIST>) actually modifies the element in the original list. This is usually something to be avoided when writing clear code.
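The copy-versus-alias distinction above can be sketched like this (a minimal numeric comparator; the aliasing behavior is exactly what the paragraph warns about):

```perl
my @old = (10, 2, 33);

# Assigning the result copies it; @old is untouched.
my @new = sort { $a <=> $b } @old;    # (2, 10, 33)
$_ += 100 for @new;
print "@old\n";                       # 10 2 33

# Iterating the sort return directly aliases the originals.
$_ += 100 for sort { $a <=> $b } @old;
print "@old\n";                       # 110 102 133
```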
Historically Perl has varied in whether sorting is stable by default. If stability matters, it can be controlled explicitly by using the L<sort> pragma.

Examples:

    # use sort 'stable';
    my @new = sort { substr($a, 3, 5) cmp substr($b, 3, 5) } @old;

Warning: syntactical care is required when sorting the list returned from a function. If you want to sort the list returned by the function call C<find_records(@key)>, you can use:

    my @contact = sort { $a cmp $b } find_records @key;
    my @contact = sort +find_records(@key);
    my @contact = sort &find_records(@key);
    my @contact = sort(find_records(@key));

If instead you want to sort the array C<@key> with the comparison routine C<find_records()> then you can use:

    my @contact = sort { find_records() } @key;
    my @contact = sort find_records(@key);
    my @contact = sort(find_records @key);
    my @contact = sort(find_records (@key));

C<$a> and C<$b> are set as package globals in the package the sort() is called from. That means C<$main::a> and C<$main::b> (or C<$::a> and C<$::b>) in the C<main> package, C<$FooPack::a> and C<$FooPack::b> in the C<FooPack> package, etc. If the sort block is in scope of a C<my> or C<state> declaration of C<$a> and/or C<$b>, you I<must> spell out the full name of the variables in the sort block:

    package main;
    my $a = "C";  # DANGER: shadows the package global $a

    print sort { $a <=> $b } 3, 1, 2;        # WRONG: uses the lexical $a
    print sort { $::a <=> $::b } 3, 1, 2;    # OK: uses the package globals

=item split /PATTERN/,EXPR,LIMIT X<split>

=item split /PATTERN/,EXPR

=item split /PATTERN/

=item split

=for Pod::Functions split up a string using a regexp delimiter

As a special case for L<C<split>|/split E<sol>PATTERNE<sol>,EXPR,LIMIT>, the empty-pattern match-operator syntax (C<//>) specifically matches the empty string, which is contrary to its usual interpretation as the last successful match. If PATTERN is C</^/>, then it is treated as if it used the L<multiline modifier|perlreref/OPERATORS> (C</^/m>), since it isn't much use otherwise. C<E<sol>m> and any of the other pattern modifiers valid for C<qr> (summarized in L<perlop/qrE<sol>STRINGE<sol>msixpodualn>) may be specified explicitly. As another special case, L<C<split>|/split E<sol>PATTERNE<sol>,EXPR,LIMIT> emulates the default behavior of the command line tool B<awk> when the PATTERN is either omitted or a string composed of a single space character (such as S<C<' '>> or S<C<"\x20">>, but not e.g.
S<C</ />>). In this case, any leading whitespace in EXPR is removed before splitting occurs, and the PATTERN is instead treated as if it were C</\s+/>; in particular, this means that I<any> contiguous whitespace (not just a single space character) is used as a separator. However, this special treatment can be avoided by specifying the pattern S<C</ />> instead of the string S<C<" ">>, thereby allowing only a single space character to be a separator. In earlier Perls this special case was restricted to the use of a plain S<C<" ">> as the pattern argument to split; in Perl 5.18.0 and later this special case is triggered by any expression which evaluates to the simple string S<C<" ">>. As of Perl 5.28, this special-cased whitespace splitting works as expected in the scope of L<< S<C<"use feature 'unicode_strings">>|feature/The 'unicode_strings' feature >>. In previous versions, and outside the scope of that feature, it exhibits L<perlunicode/The "Unicode Bug">: characters that are whitespace according to Unicode rules but not according to ASCII rules can be treated as part of fields rather than as field separators, depending on the string's internal encoding. If omitted, PATTERN defaults to a single space, S<C<" ">>, triggering the previously described I<awk> emulation. If LIMIT is specified and positive, it represents the maximum number of fields into which the EXPR may be split; in other words, LIMIT is one greater than the maximum number of times EXPR may be split. Thus, the LIMIT value C<1> means that EXPR may be split a maximum of zero times, producing a maximum of one field (namely, the entire value of EXPR). 
For instance: print join(':', split(//, 'abc', 1)), "\n"; produces the output C<abc>, and this: print join(':', split(//, 'abc', 2)), "\n"; produces the output C<a:bc>, and each of these: print join(':', split(//, 'abc', 3)), "\n"; print join(':', split(//, 'abc', 4)), "\n"; produces the output C<a:b:c>, but the following (a negative LIMIT is treated as arbitrarily large, so trailing empty fields are preserved): print join(':', split(/,/, 'a,b,c,,,', -1)), "\n"; produces the output C<a:b:c:::>. However, a zero-width match at the beginning of EXPR never produces an empty field, so that: print join(':', split(//, ' abc')); produces the output S<C< :a:b:c>> (rather than S<C<: :a:b:c>>). If the PATTERN contains L<capturing groups|perlretut/Grouping things and hierarchical matching>, then for each separator, an additional field is produced for each substring captured by a group (in the order in which the groups are specified, as per L<backreferences|perlretut/Backreferences>); if any group does not match, then it captures the L<C<undef>|/undef EXPR> value instead of a substring. Also, note that any such additional field is produced whenever there is a separator (that is, whenever a split occurs), and such an additional field does B<not> count towards the LIMIT. =item sprintf FORMAT, LIST X<sprintf> =for Pod::Functions formatted print into a string Returns a string formatted by the usual L<C<printf>|/printf FILEHANDLE FORMAT, LIST> conventions of the C library function L<sprintf(3)>. See below for more details and see L<sprintf(3)> or L<printf(3)> on your system for an explanation of the general principles. Perl does its own L<C<sprintf>|/sprintf FORMAT, LIST> formatting: it emulates the C function L<sprintf(3)>, but doesn't use it except for floating-point numbers, and even then only standard modifiers are allowed. Non-standard extensions in your local L<sprintf(3)> are therefore unavailable from Perl.
Unlike L<C<printf>|/printf FILEHANDLE FORMAT, LIST>, L<C<sprintf>|/sprintf FORMAT, LIST> does not do what you probably mean when you pass it an array as your first argument: the array is given scalar context, so Perl will use its element count, rather than its first element, as the format. Note that the number of exponent digits produced by C<%e>, C<%E>, C<%g> and C<%G> is system-dependent, and similarly for C<%a> and C<%A>: the exponent or the hexadecimal digits may float: especially the "long doubles" Perl configuration option may cause surprises. Between the C<%> and the format letter, you may specify several additional attributes controlling the interpretation of the format. In order, these are: =over 4 =item format parameter index An explicit format parameter index, such as C<2$>. By default sprintf will format the next unused argument in the list, but this allows you to take the arguments out of order: printf '%2$d %1$d', 12, 34; # prints "34 12" printf '%3$d %d %1$d', 1, 2, 3; # prints "3 1 1" =item flags One or more of: a space (prefix non-negative number with a space), C<+> (prefix non-negative number with a plus sign), C<-> (left-justify within the field), C<0> (use zeros, not spaces, to right-justify), and C<#> (ensure a leading C<0> for any octal, prefix non-zero hexadecimal with C<0x> or C<0X>, prefix non-zero binary with C<0b> or C<0B>). =item vector flag This flag tells Perl to interpret the supplied string as a vector of integers, one for each character in the string. Perl applies the format to each integer in turn, then joins the resulting strings with a separator (a dot C<.> by default). This can be useful for displaying ordinal values of characters in arbitrary strings: printf "%vd", "AB\x{100}"; # prints "65.66.256" printf "version is v%vd\n", $^V; # Perl's version Put an asterisk C<*> before the C<v> to override the string used to separate the numbers, or specify the separator's argument number explicitly with something like C<*2$v>; for example: printf '%*4$vX %*4$vX %*4$vX', # 3 IPv6 addresses @addr[1..3], ":"; =item (minimum) width Arguments are usually formatted to be only as wide as required to display the given value. You can override the width by putting a number here, or get the width from the next argument (with C<*>) or from a specified argument (e.g., with C<*2$>): printf "<%s>", "a"; # prints "<a>" printf "<%6s>", "a"; # prints "< a>" printf "<%*s>", 6, "a"; # prints "< a>" printf '<%*2$s>', "a", 6; # prints "< a>" printf "<%2s>", "long"; # prints "<long>" (does not truncate) If a field width obtained through C<*> is negative, it has the same effect as the C<-> flag: left-justification.
=item precision, or maximum width X<precision> You can specify a precision (for numeric conversions) or a maximum width (for string conversions) by specifying a C<.> followed by a number. For floating-point formats except C<g> and C<G>, this specifies how many places right of the decimal point to show (the default being 6); for C<g> and C<G>, it specifies the maximum number of significant digits to show; for string conversions, it specifies the maximum width. You can also get the precision from the next argument (with C<.*>), or from a specified argument (e.g., with C<.*2$>): printf '<%.6x>', 1; # prints "<000001>" printf '<%.*x>', 6, 1; # prints "<000001>" printf '<%.*2$x>', 1, 6; # prints "<000001>" printf '<%6.*2$x>', 1, 4; # prints "< 0001>" If a precision obtained through C<*> is negative, it counts as having no precision at all. =item size For numeric conversions, you can specify the size to interpret the number as using C<l>, C<h>, C<V>, C<q>, C<L>, or C<ll>. For integer conversions (C<d u o x X b i D U O>), numbers are usually assumed to be whatever the default integer size is on your platform, but you can override this to use one of the standard C types instead: C<h> ("short"), C<l> ("long"), C<q>, C<L>, or C<ll> ("long long"), C<j> ("intmax_t", which prior to Perl 5.30 required a C99 compiler), and C<z> (types "size_t" or "ssize_t") on Perl 5.14 or later. As of 5.14, none of these raises an exception if they are not supported on your platform. However, if warnings are enabled, a warning of the L<C<printf>|warnings> warning class is given on an unsupported conversion flag. You can find out whether your Perl supports quads via L<Config>: use Config; if ($Config{use64bitint} eq "define" || $Config{longsize} >= 8) { print "Nice quads!\n"; } For floating-point conversions (C<e f g E F G>), numbers are usually assumed to be the default floating-point size on your platform (double or long double), but you can force "long double" with C<q>, C<L>, or C<ll> if your platform supports them. You can find out whether your Perl supports long doubles via L<Config>: use Config; print "long doubles\n" if $Config{d_longdbl} eq "define"; You can find out whether Perl considers "long double" to be the default floating-point size to use on your platform via L<Config> (check C<$Config{uselongdouble}>). The size specifier C<V> has no effect for Perl code, but is supported for compatibility with XS code. It means "use the standard size for a Perl integer or floating-point number", which is the default. =item order of arguments Normally, L<C<sprintf>|/sprintf FORMAT, LIST> takes the next unused argument as the value to format for each format specification.
If the format specification uses C<*> to require additional arguments, these are consumed from the argument list in the order they appear in the format specification I<before> the value to format. Where an argument is specified by an explicit index, this does not affect the normal order for the arguments, even when the explicitly specified index would have been the next argument. So: printf "<%*.*s>", $a, $b, $c; uses C<$a> for the width, C<$b> for the precision, and C<$c> as the value to format; while: printf '<%*1$.*s>', $a, $b; would use C<$a> for the width and precision, and C<$b> as the value to format. Here are some more examples; be aware that when using an explicit index, the C<$> may need escaping: printf "%2\$d %d\n", 12, 34; # will print "34 12\n" =back If L<C<use locale>|locale> (including C<use locale ':not_characters'>) is in effect and L<C<POSIX::setlocale>|POSIX/C<setlocale>> has been called, the character used for the decimal separator in formatted floating-point numbers is affected by the C<LC_NUMERIC> locale. See L<perllocale> and L<POSIX>. =item sqrt EXPR X<sqrt> X<root> X<square root> =item sqrt =for Pod::Functions square root function Return the positive square root of EXPR. If EXPR is omitted, uses L<C<$_>|perlvar/$_>. Works only for non-negative operands unless you've loaded the L<C<Math::Complex>|Math::Complex> module. use Math::Complex; print sqrt(-4); # prints 2i =item srand EXPR X<srand> X<seed> X<randseed> =item srand =for Pod::Functions seed the random number generator Sets and returns the random number seed for the L<C<rand>|/rand EXPR> operator. The point of the function is to "seed" the L<C<rand>|/rand EXPR> function so that L<C<rand>|/rand EXPR> can produce a different sequence each time you run your program. When called with a parameter, L<C<srand>|/srand EXPR> uses that for the seed; otherwise it (semi-)randomly chooses a seed. In either case, starting with Perl 5.14, it returns the seed.
To signal that your code will work I<only> on Perls of a recent vintage: use 5.014; # so srand returns the seed If L<C<srand>|/srand EXPR> is not called explicitly, it is called implicitly without a parameter at the first use of the L<C<rand>|/rand EXPR> operator. However, there are a few situations where programs are likely to want to call L<C<srand>|/srand EXPR>. One is for generating predictable results, generally for testing or debugging. There, you use C<srand($seed)>, with the same C<$seed> each time. Another case is that you may want to call L<C<srand>|/srand EXPR> after a L<C<fork>|/fork> to avoid child processes sharing the same seed value as the parent (and consequently each other). Do B<not> call C<srand()> (i.e., without an argument) more than once per process. The internal state of the random number generator should contain more entropy than can be provided by any seed, so calling L<C<srand>|/srand EXPR> again actually I<loses> randomness. Most implementations of L<C<srand>|/srand EXPR> take an integer and will silently truncate decimal numbers. This means C<srand(42)> will usually produce the same results as C<srand(42.1)>. To be safe, always pass L<C<srand>|/srand EXPR> an integer. =item stat FILEHANDLE X<stat> X<file, status> X<ctime> =item stat EXPR =item stat DIRHANDLE =item stat =for Pod::Functions get a file's status information Returns a 13-element list giving the status info for a file, either the file opened via FILEHANDLE or DIRHANDLE, or named by EXPR. If EXPR is omitted, it stats L<C<$_>|perlvar/$_> (not C<_>!). Returns the empty list if L<C<stat>|/stat FILEHANDLE> fails. Typically used as follows: my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size, $atime,$mtime,$ctime,$blksize,$blocks) = stat($filename); Not all fields are supported on all filesystem types; see L<perlport/"Files and Filesystems"> for details. If L<C<stat>|/stat FILEHANDLE> is passed the special filehandle consisting of an underline, no stat is done, but the current contents of the stat structure from the last L<C<stat>|/stat FILEHANDLE>, L<C<lstat>|/lstat FILEHANDLE>, or filetest are returned.
Example: if (-x $file && (($d) = stat(_)) && $d < 0) { print "$file is executable NFS file\n"; } (This works on machines only for which the device number is negative under NFS.) On some platforms inode numbers are of a type larger than Perl can handle as an integer, in which case they are returned as decimal strings to preserve the entire value; to compare inode numbers for equality, use C<eq> rather than C<==>. C<eq> will work fine on inode numbers that are represented numerically, as well as those represented as strings. Because the mode contains both the file type and its permissions, you should mask off the file type portion and (s)printf using a C<"%o"> if you want to see the real permissions. my $mode = (stat($filename))[2]; printf "Permissions are %04o\n", $mode & 07777; In scalar context, L<C<stat>|/stat FILEHANDLE> returns a boolean value indicating success or failure, and, if successful, sets the information associated with the special filehandle C<_>. You can import symbolic mode constants (C<S_IF*>) and functions (C<S_IS*>) from the L<Fcntl> module; some of these tests can also be written using the C<-u> and C<-d> operators. See your native L<chmod(2)> and L<stat(2)> documentation for more details about the C<S_*> constants. To get status info for a symbolic link instead of the target file behind the link, use the L<C<lstat>|/lstat FILEHANDLE> function. Portability issues: L<perlport/stat>. =item state VARLIST X<state> =item state TYPE VARLIST =item state VARLIST : ATTRS =item state TYPE VARLIST : ATTRS =for Pod::Functions +state declare and assign a persistent lexical variable L<C<state>|/state VARLIST> declares a lexically scoped variable, just like L<C<my>|/my VARLIST>. However, those variables will never be reinitialized, contrary to lexical variables that are reinitialized each time their enclosing block is entered. See L<perlsub/"Persistent Private Variables"> for details. If more than one variable is listed, the list must be placed in parentheses. With a parenthesised list, L<C<undef>|/undef EXPR> can be used as a dummy placeholder. However, since initialization of state variables in such lists is currently not possible this would serve no purpose.
=item study SCALAR X<study> =item study =for Pod::Functions no-op, formerly optimized input data for repeated searches At this time, C<study> does nothing. This may change in the future. Prior to Perl version 5.16, it would create an inverted index of all characters that occurred in the given SCALAR (or L<C<$_>|perlvar/$_> if unspecified). When matching a pattern, the rarest character from the pattern would be looked up in this index. Rarity was based on some static frequency tables constructed from some C programs and English text. =item sub NAME BLOCK X<sub> =item sub NAME (PROTO) BLOCK =item sub NAME : ATTRS BLOCK =item sub NAME (PROTO) : ATTRS BLOCK =for Pod::Functions declare a subroutine, possibly anonymously This is subroutine definition, not a real function I<per se>. Without a BLOCK it's just a forward declaration. Without a NAME, it's an anonymous function declaration, so does return a value: the CODE ref of the closure just created. See L<perlsub> and L<perlref> for details about subroutines and references; see L<attributes> and L<Attribute::Handlers> for more information about attributes. =item __SUB__ X<__SUB__> =for Pod::Functions +current_sub the current subroutine, or C<undef> if not in a subroutine A special token that returns a reference to the current subroutine, or L<C<undef>|/undef EXPR> outside of a subroutine. The behaviour of L<C<__SUB__>|/__SUB__> within a regex code block (such as C</(?{...})/>) is subject to change. This token is only available under C<use v5.16> or the L<C<"current_sub"> feature|feature/The 'current_sub' feature>. See L<feature>. =item substr EXPR,OFFSET,LENGTH,REPLACEMENT X<substr> X<substring> X<mid> X<left> X<right> =item substr EXPR,OFFSET,LENGTH =item substr EXPR,OFFSET =for Pod::Functions get or alter a portion of a string Extracts a substring out of EXPR and returns it. First character is at offset zero. If OFFSET is negative, starts that far back from the end of the string.
If LENGTH is omitted, returns everything through the end of the string. If LENGTH is negative, leaves that many characters off the end of the string. You can use the L<C<substr>|/substr EXPR,OFFSET,LENGTH,REPLACEMENT> function as an lvalue, in which case EXPR must itself be an lvalue; to keep the string the same length, you may need to pad or chop your value using L<C<sprintf>|/sprintf FORMAT, LIST>. If OFFSET and LENGTH specify a substring that is partly outside the string, only the part within the string is returned. If the substring is beyond either end of the string, L<C<substr>|/substr EXPR,OFFSET,LENGTH,REPLACEMENT> returns the undefined value and produces a warning. An alternative to using L<C<substr>|/substr EXPR,OFFSET,LENGTH,REPLACEMENT> as an lvalue is to specify the replacement string as the 4th argument. This allows you to replace parts of the EXPR and return what was there before in one operation, just as you can with L<C<splice>|/splice ARRAY,OFFSET,LENGTH,LIST>. A position of zero is returned by L<C<sysseek>|/sysseek FILEHANDLE,POSITION,WHENCE> as the string C<"0 but true">; thus L<C<sysseek>|/sysseek FILEHANDLE,POSITION,WHENCE> returns true on success and false on failure, yet you can still easily determine the new position. =item system LIST X<system> X<shell> =item system PROGRAM LIST =for Pod::Functions run a separate program Does exactly the same thing as L<C<exec>|/exec LIST>, except that a fork is done first and the parent process waits for the child process to exit. On Windows, only the C<system PROGRAM LIST> syntax will reliably avoid using the shell; C<system LIST>, even with more than one element, will fall back to the shell if the first spawn fails. The return value is the exit status of the program as returned by the L<C<wait>|/wait> call. To get the actual exit value, shift right by eight (see below). See also L<C<exec>|/exec LIST>. This is I<not> what you want to use to capture the output from a command; for that you should use merely backticks or L<C<qxE<sol>E<sol>>|/qxE<sol>STRINGE<sol>>, as described in L<perlop/"`STRING`">. Return value of -1 indicates a failure to start the program or an error of the L<wait(2)> system call (inspect L<C<$!>|perlvar/$!> for the reason). If you'd like to make L<C<system>|/system LIST> (and many other bits of Perl) die on error, have a look at the L<autodie> pragma.
Like L<C<exec>|/exec LIST>, L<C<system>|/system LIST> allows you to lie to a program about its name if you use the C<system PROGRAM LIST> syntax. Again, see L<C<exec>|/exec LIST>. Since C<SIGINT> and C<SIGQUIT> are ignored during the execution of L<C<system>|/system LIST>, if you expect your program to terminate on receipt of these signals you will need to arrange to do so yourself based on the return value. my @args = ("command", "arg1", "arg2"); system(@args) == 0 or die "system @args failed: $?"; If you'd like to manually inspect L<C<system>|/system LIST>'s failure, you can check all possible failure modes by inspecting L<C<$?>|perlvar/$?> like this: if ($? == -1) { print "failed to execute: $!\n"; } elsif ($? & 127) { printf "child died with signal %d, %s coredump\n", ($? & 127), ($? & 128) ? 'with' : 'without'; } else { printf "child exited with value %d\n", $? >> 8; } Alternatively, you may inspect the value of L<C<${^CHILD_ERROR_NATIVE}>|perlvar/${^CHILD_ERROR_NATIVE}> with the L<C<W*()>|POSIX/C<WIFEXITED>> calls from the L<POSIX> module. When L<C<system>|/system LIST>'s arguments are executed indirectly by the shell, results and return codes are subject to its quirks. See L<perlop/"`STRING`"> and L<C<exec>|/exec LIST> for details. Since L<C<system>|/system LIST> does a L<C<fork>|/fork> and L<C<wait>|/wait> it may affect a C<SIGCHLD> handler. See L<perlipc> for details. Portability issues: L<perlport/system>. =item syswrite FILEHANDLE,SCALAR,LENGTH,OFFSET X<syswrite> =item syswrite FILEHANDLE,SCALAR,LENGTH =item syswrite FILEHANDLE,SCALAR =for Pod::Functions fixed-length unbuffered output to a filehandle Attempts to write LENGTH bytes of data from variable SCALAR to the specified FILEHANDLE, using L<write(2)>. If LENGTH is not specified, writes whole SCALAR. 
It bypasses any L<PerlIO> layers including buffered IO (but is affected by the presence of the C<:utf8> layer as described later), so mixing this with reads (other than C<sysread>), L<C<print>|/print FILEHANDLE LIST>, L<C<write>|/write FILEHANDLE>, L<C<seek>|/seek FILEHANDLE,POSITION,WHENCE>, L<C<tell>|/tell FILEHANDLE>, or L<C<eof>|/eof FILEHANDLE> may cause confusion because the C<:perlio> and C<:crlf> layers usually buffer data. Returns the number of bytes actually written, or L<C<undef>|/undef EXPR> if there was an error (in this case the errno variable L<C<$!>|perlvar/$!> is also set). B<WARNING>: If the filehandle is marked C<:utf8>, C<syswrite> will raise an exception. The C<:encoding(...)> layer implicitly introduces the C<:utf8> layer. Alternately, if the handle is not marked with an encoding but you attempt to write characters with code points over 255, raises an exception. See L<C<binmode>|/binmode FILEHANDLE, LAYER>, L<C<open>|/open FILEHANDLE,MODE,EXPR>, and the L<open> pragma. =item tell FILEHANDLE X<tell> =item tell =for Pod::Functions get current seekpointer on a filehandle Returns the current position I<in bytes> for FILEHANDLE, or -1 on error. FILEHANDLE may be an expression whose value gives the name of the actual filehandle. If FILEHANDLE is omitted, assumes the file last read. The return value of L<C<tell>|/tell FILEHANDLE> for the standard streams like the STDIN depends on the operating system: it may return -1 or something else. L<C<tell>|/tell FILEHANDLE> on pipes, fifos, and sockets usually returns -1. There is no C<systell> function. Use L<C<sysseek($fh, 0, 1)>|/sysseek FILEHANDLE,POSITION,WHENCE> for that. Do not use L<C<tell>|/tell FILEHANDLE> (or other buffered I/O operations) on a filehandle that has been manipulated by L<C<sysread>|/sysread FILEHANDLE,SCALAR,LENGTH,OFFSET>, L<C<syswrite>|/syswrite FILEHANDLE,SCALAR,LENGTH,OFFSET>, or L<C<sysseek>|/sysseek FILEHANDLE,POSITION,WHENCE>.
Those functions ignore the buffering, while L<C<tell>|/tell FILEHANDLE> does not. =item telldir DIRHANDLE X<telldir> =for Pod::Functions get current seekpointer on a directory handle Returns the current position of the L<C<readdir>|/readdir DIRHANDLE> routines on DIRHANDLE. Value may be given to L<C<seekdir>|/seekdir DIRHANDLE,POS> to access a particular location in a directory. L<C<telldir>|/telldir DIRHANDLE> has the same caveats about possible directory compaction as the corresponding system library routine. =item tie VARIABLE,CLASSNAME,LIST X<tie> =for Pod::Functions +5.002 bind a variable to an object class This function binds a variable to a package class that will provide the implementation for the variable. Any additional arguments are passed to the appropriate constructor method of the class (meaning C<TIESCALAR>, C<TIEHANDLE>, C<TIEARRAY>, or C<TIEHASH>). Typically these are arguments such as might be passed to the L<dbm_open(3)> function of C. The object returned by the constructor is also returned by the L<C<tie>|/tie VARIABLE,CLASSNAME,LIST> function, which would be useful if you want to access other methods in CLASSNAME. Note that functions such as L<C<keys>|/keys HASH> and L<C<values>|/values HASH> may return huge lists when used on large objects, like DBM files. You may prefer to use the L<C<each>|/each HASH> function to iterate over such. For the methods a tying class must provide, see L<perltie>, L<Tie::Hash>, L<Tie::Array>, L<Tie::Scalar>, and L<Tie::Handle>. Unlike L<C<dbmopen>|/dbmopen HASH,DBNAME,MASK>, the L<C<tie>|/tie VARIABLE,CLASSNAME,LIST> function will not L<C<use>|/use Module VERSION LIST> or L<C<require>|/require VERSION> a module for you; you need to do that explicitly yourself. See L<DB_File> or the L<Config> module for interesting L<C<tie>|/tie VARIABLE,CLASSNAME,LIST> implementations. For further details see L<perltie>, L<C<tied>|/tied VARIABLE>. =item tied VARIABLE X<tied> =for Pod::Functions get a reference to the object underlying a tied variable Returns a reference to the object underlying VARIABLE (the same value that was originally returned by the L<C<tie>|/tie VARIABLE,CLASSNAME,LIST> call that bound the variable to a package.) Returns the undefined value if VARIABLE isn't tied to a package.
=item time X<time> X<epoch> =for Pod::Functions return number of seconds since 1970 Returns the number of non-leap seconds since whatever time the system considers to be the epoch, suitable for feeding to L<C<gmtime>|/gmtime EXPR> and L<C<localtime>|/localtime EXPR>. For measuring time in better granularity than one second, use the L<Time::HiRes> module from Perl 5.8 onwards (or from CPAN before then), or, if you have L<gettimeofday(2)>, you may be able to use the L<C<syscall>|/syscall NUMBER, LIST> interface of Perl. See L<perlfaq8> for details. For date and time processing look at the many related modules on CPAN. For a comprehensive date and time representation look at the L<DateTime> module. =item times X<times> =for Pod::Functions return elapsed time for self and child processes Returns a four-element list giving the user and system times in seconds for this process and any exited children of this process. my ($user,$system,$cuser,$csystem) = times; In scalar context, L<C<times>|/times> returns C<$user>. Children's times are only included for terminated children. Portability issues: L<perlport/times>. =item tr/// =for Pod::Functions transliterate a string The transliteration operator. Same as L<C<yE<sol>E<sol>E<sol>>|/yE<sol>E<sol>E<sol>>. See L<perlop/"Quote-Like Operators">. =item truncate FILEHANDLE,LENGTH X<truncate> =item truncate EXPR,LENGTH =for Pod::Functions shorten a file Truncates the file opened on FILEHANDLE, or named by EXPR, to the specified length. Raises an exception if truncate isn't implemented on your system. Returns true if successful, L<C<undef>|/undef EXPR> on error. The behavior is undefined if LENGTH is greater than the length of the file. The position in the file of FILEHANDLE is left unchanged. You may want to call L<seek|/"seek FILEHANDLE,POSITION,WHENCE"> before writing to the file. Portability issues: L<perlport/truncate>. =item uc EXPR X<uc> X<uppercase> X<toupper> =item uc =for Pod::Functions return upper-case version of a string Returns an uppercased version of EXPR.
This is the internal function implementing the C<\U> escape in double-quoted strings. It does not attempt to do titlecase mapping on initial letters. See L<C<ucfirst>|/ucfirst EXPR> for that. If EXPR is omitted, uses L<C<$_>|perlvar/$_>. This function behaves the same way under various pragmas, such as in a locale, as L<C<lc>|/lc EXPR> does. =item ucfirst EXPR X<ucfirst> X<uppercase> =item ucfirst =for Pod::Functions return a string with just the next letter in upper case Returns the value of EXPR with the first character in uppercase (titlecase in Unicode). This is the internal function implementing the C<\u> escape in double-quoted strings. If EXPR is omitted, uses L<C<$_>|perlvar/$_>. This function behaves the same way under various pragmas, such as in a locale, as L<C<lc>|/lc EXPR> does. =item umask EXPR X<umask> =item umask =for Pod::Functions set file creation mode mask Sets the umask for the process to EXPR and returns the previous value. If EXPR is omitted, merely returns the current umask. The Unix permission C<rwxr-x---> is represented as three sets of three bits, or three octal digits: C<0750> (the leading 0 indicates octal and isn't one of the digits). The L<C<umask>|/umask EXPR> value is such a number representing disabled permissions bits. The permission (or "mode") values you pass L<C<mkdir>|/mkdir FILENAME,MODE> or L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE> are modified by your umask, so even if you tell L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE> to create a file with permissions C<0777>, if your umask is C<0022>, then the file will actually be created with permissions C<0755>. If your L<C<umask>|/umask EXPR> were C<0027> (group can't write; others can't read, write, or execute), then passing L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE> C<0666> would create a file with mode C<0640> (because C<0666 &~ 027> is C<0640>). 
Here's some advice: supply a creation mode of C<0666> for regular files (in L<C<sysopen>|/sysopen FILEHANDLE,FILENAME,MODE>) and one of C<0777> for directories (in L<C<mkdir>|/mkdir FILENAME,MODE>) and executable files. This gives users the freedom of choice: if they want protected files, they might choose process umasks of C<022>, C<027>, or even the particularly antisocial mask of C<077>. Programs should rarely if ever make policy decisions better left to the user. The exception to this is when writing files that should be kept private: mail files, web browser cookies, F<.rhosts> files, and so on. If L<umask(2)> is not implemented on your system and you are trying to restrict access for I<yourself> (i.e., C<< (EXPR & 0700) > 0 >>), raises an exception. If L<umask(2)> is not implemented and you are not trying to restrict access for yourself, returns L<C<undef>|/undef EXPR>. Remember that a umask is a number, usually given in octal; it is I<not> a string of octal digits. See also L<C<oct>|/oct EXPR>, if all you have is a string. Portability issues: L<perlport/umask>. =item undef EXPR X<undef> X<undefine> =item undef =for Pod::Functions remove a variable or function definition Undefines the value of EXPR, which must be an lvalue. Use only on a scalar value, an array (using C<@>), a hash (using C<%>), a subroutine (using C<&>), or a typeglob (using C<*>). Saying C<undef $hash{$key}> will probably not do what you expect on most predefined variables or DBM list values, so don't do that; see L<C<delete>|/delete EXPR>. =item unlink LIST X<unlink> X<delete> X<remove> X<rm> X<del> =item unlink =for Pod::Functions remove one link to a file Deletes a list of files. On success, it returns the number of files it successfully deleted. On failure, it returns false and sets L<C<$!>|perlvar/$!> (errno): my $unlinked = unlink 'a', 'b', 'c'; unlink @goners; unlink glob "*.bak"; On error, L<C<unlink>|/unlink LIST> will not tell you which files it could not remove.
If you want to know which files you could not remove, try them one at a time: foreach my $file ( @goners ) { unlink $file or warn "Could not unlink $file: $!"; } Note: L<C<unlink>|/unlink LIST> will not attempt to delete directories unless you are superuser and the B<-U> flag is supplied to Perl. Even if these conditions are met, be warned that unlinking a directory can inflict damage on your filesystem. Finally, using L<C<unlink>|/unlink LIST> on directories is not supported on many operating systems. Use L<C<rmdir>|/rmdir FILENAME> instead. If LIST is omitted, L<C<unlink>|/unlink LIST> uses L<C<$_>|perlvar/$_>. =item unpack TEMPLATE,EXPR X<unpack> =item unpack TEMPLATE =for Pod::Functions convert binary structure into normal perl variables L<C<unpack>|/unpack TEMPLATE,EXPR> does the reverse of L<C<pack>|/pack TEMPLATE,LIST>: it takes a string and expands it out into a list of values. (In scalar context, it returns merely the first value produced.) If EXPR is omitted, unpacks the L<C<$_>|perlvar/$_> string. See L<perlpacktut> for an introduction to this function. The string is broken into chunks described by the TEMPLATE. Each chunk is converted separately to a value. Typically, either the string is a result of L<C<pack>|/pack TEMPLATE,LIST>, or the characters of the string represent a C structure of some kind. The TEMPLATE has the same format as in the L<C<pack>|/pack TEMPLATE,LIST> function. Here's a subroutine that does substring: sub substr { my ($what, $where, $howmuch) = @_; unpack("x$where a$howmuch", $what); } and then there's sub ordinal { unpack("W",$_[0]); } # same as ord() In addition to fields allowed in L<C<pack>|/pack TEMPLATE,LIST>, you may prefix a field with C<%NUMBER> to indicate that you want a NUMBER-bit checksum of the items instead of the items themselves. The C<p> and C<P> formats should be used with care: since Perl has no way of checking whether the value passed to L<C<unpack>|/unpack TEMPLATE,EXPR> corresponds to a valid memory location, passing a pointer value that's not known to be valid is likely to have disastrous consequences. If there are more pack codes than the remainder of the input string allows, the result is not well defined: L<C<unpack>|/unpack TEMPLATE,EXPR> may produce empty strings or zeros, or it may raise an exception.
If the input string is longer than one described by the TEMPLATE, the remainder of that input string is ignored. See L<C<pack>|/pack TEMPLATE,LIST> for more examples and notes. =item unshift ARRAY,LIST X<unshift> =for Pod::Functions prepend more elements to the beginning of a list Does the opposite of a L<C<shift>|/shift ARRAY>. Or the opposite of a L<C<push>|/push ARRAY,LIST>, depending on how you look at it. Prepends LIST to the front of the array and returns the new number of elements in the array. Note the LIST is prepended whole, not one element at a time, so the prepended elements stay in the same order. Use L<C<reverse>|/reverse LIST> to do the reverse. Starting with Perl 5.14, an experimental feature allowed L<C<unshift>|/unshift ARRAY,LIST> to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24. =item untie VARIABLE X<untie> =for Pod::Functions break a tie binding to a variable Breaks the binding between a variable and a package. (See L<tie|/tie VARIABLE,CLASSNAME,LIST>.) Has no effect if the variable is not tied. =item use Module VERSION LIST X<use> X<module> X<import> =item use Module VERSION =item use Module LIST =item use Module =item use VERSION =for Pod::Functions load in a module at compile time and import its namespace Imports some semantics into the current package from the named module, generally by aliasing certain subroutine or variable names into your package. It is exactly equivalent to BEGIN { require Module; Module->import( LIST ); } except that Module I<must> be a bareword. The importation can be made conditional by using the L<if> module. In the C<use VERSION> form, VERSION may be either a v-string such as v5.24.1, which will be compared to L<C<$^V>|perlvar/$^V> (aka $PERL_VERSION), or a numeric argument of the form 5.024001, which will be compared to L<C<$]>|perlvar/$]>. An exception is raised if VERSION is greater than the version of the current Perl interpreter; Perl will not attempt to parse the rest of the file. Compare with L<C<require>|/require VERSION>, which can do a similar check at run time.
Symmetrically, C<no VERSION> allows you to specify that you want a version of Perl older than the specified one. use v5.24.1; # compile time version check use 5.24.1; # ditto use 5.024_001; # ditto; older syntax compatible with perl 5.6 This is often useful if you need to check the current Perl version before L<C<use>|/use Module VERSION LIST>ing library modules that won't work with older versions of Perl. (We try not to do this more than we have to.) C<use VERSION> also lexically enables all features available in the requested version as defined by the L<feature> pragma, disabling any features not in the requested version's feature bundle. See L<feature>. Similarly, if the specified Perl version is greater than or equal to 5.12.0, strictures are enabled lexically as with L<C<use strict>|strict>. Any explicit use of C<use strict> or C<no strict> overrides C<use VERSION>, even if it comes before it. Later use of C<use VERSION> will override all behavior of a previous C<use VERSION>, possibly removing the C<strict> and C<feature> added by C<use VERSION>. C<use VERSION> does not load the F<feature.pm> or F<strict.pm> files. The C<BEGIN> forces the L<C<require>|/require VERSION> and L<C<import>|/import LIST> to happen at compile time. The L<C<require>|/require VERSION> makes sure the module is loaded into memory if it hasn't been yet. The L<C<import>|/import LIST> is not a builtin; it's just an ordinary static method call into the C<Module> package to tell the module to import the list of features back into the current package. The module can implement its L<C<import>|/import LIST> method any way it likes, though most modules just choose to derive their L<C<import>|/import LIST> method via inheritance from the C<Exporter> class that is defined in the L<C<Exporter>|Exporter> module. See L<Exporter>. If no L<C<import>|/import LIST> method can be found, then the call is skipped, even if there is an AUTOLOAD method.
If you do not want to call the package's L<C<import>|/import LIST> method (for instance, to stop your namespace from being altered), explicitly supply the empty list:

    use Module ();

That is exactly equivalent to

    BEGIN { require Module }

If the VERSION argument is present between Module and LIST, then the L<C<use>|/use Module VERSION LIST> will call the C<VERSION> method in class Module with the given version as an argument:

    use Module 12.34;

is equivalent to:

    BEGIN { require Module; Module->VERSION(12.34) }

The L<default C<VERSION> method|UNIVERSAL/C<VERSION ( [ REQUIRE ] )>>, inherited from the L<C<UNIVERSAL>|UNIVERSAL> class, croaks if the given version is larger than the value of the variable C<$Module::VERSION>.

The VERSION argument cannot be an arbitrary expression. It only counts as a VERSION argument if it is a version number literal, starting with either a digit or C<v> followed by a digit. Anything that doesn't look like a version literal will be parsed as the start of the LIST. Nevertheless, many attempts to use an arbitrary expression as a VERSION argument will appear to work, because L<Exporter>'s C<import> method handles numeric arguments specially, performing version checks rather than treating them as things to export.

Again, there is a distinction between omitting LIST (L<C<import>|/import LIST> called with no arguments) and an explicit empty LIST C<()> (L<C<import>|/import LIST> not called).

Some of these pseudo-modules import semantics into the current block scope (like L<C<strict>|strict> or L<C<integer>|integer>), unlike ordinary modules, which import symbols into the current package (which are effective through the end of the file).

Because L<C<use>|/use Module VERSION LIST> takes effect at compile time, it doesn't respect the ordinary flow control of the code being compiled. In particular, putting a L<C<use>|/use Module VERSION LIST> inside the false branch of a conditional doesn't prevent it from being processed.
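The croak-on-too-old behavior of the default C<VERSION> method can be modeled in a few lines. This is a Python sketch of the check's logic for illustration only, not Perl's actual implementation (the exception class and function name are ours):

```python
class ModuleVersionError(Exception):
    """Raised when a caller requires a newer version than is installed."""

def check_version(module_version, required):
    # Mimic the default VERSION method: croak when the requested
    # version exceeds what the module declares.
    if required > module_version:
        raise ModuleVersionError(
            f"version {required} required, this is only version {module_version}"
        )
    return module_version

check_version(12.34, 12.0)  # succeeds, returns 12.34
try:
    check_version(12.34, 99.0)
except ModuleVersionError as e:
    print("croaked:", e)
```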
If a module or pragma only needs to be loaded conditionally, this can be done using the L<if> pragma:

    use if $] < 5.008, "utf8";
    use if WANT_WARNINGS, warnings => qw(all);

There's a corresponding L<C<no>|/no MODULE VERSION LIST> declaration that unimports meanings imported by L<C<use>|/use Module VERSION LIST>, i.e., it calls C<< Module->unimport(LIST) >> instead of L<C<import>|/import LIST>. It behaves just as L<C<import>|/import LIST> does with VERSION, an omitted or empty LIST, or no unimport method being found.

    no integer;
    no strict 'refs';
    no warnings;

Care should be taken when using the C<no VERSION> form of L<C<no>|/no MODULE VERSION LIST>. It is I<only> meant to be used to assert that the running Perl is of an earlier version than its argument and I<not> to undo the feature-enabling side effects of C<use VERSION>.

See L<perlmodlib> for a list of standard modules and pragmas. See L<perlrun|perlrun/-m[-]module> for the C<-M> and C<-m> command-line options to Perl that give L<C<use>|/use Module VERSION LIST> functionality from the command-line.

=item utime LIST
X<utime>

=for Pod::Functions set a file's last access and modify times

Changes the access and modification times on each file of a list of files. The first two elements of the list must be the NUMERIC access and modification times, in that order. Returns the number of files successfully changed. For example, this code has the same effect as the Unix L<touch(1)> command when the files I<already exist> and belong to the user running the program:

    #!/usr/bin/perl
    my $atime = my $mtime = time;
    utime $atime, $mtime, @ARGV;

Since Perl 5.8.0, if the first two elements of the list are L<C<undef>|/undef EXPR>, the C<utime(2)> syscall from your C library is called with a null second argument; on most systems this sets the file's access and modification times to the current time. The L<touch(1)> command will in fact normally use this form instead of the one shown in the first example.

Passing only one of the first two elements as L<C<undef>|/undef EXPR> is equivalent to passing a 0 and will not have the effect described when both are L<C<undef>|/undef EXPR>. This also triggers an uninitialized warning.

On systems that support L<futimes(2)>, you may pass filehandles among the files. On systems that don't support L<futimes(2)>, passing filehandles raises an exception.
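The two C<utime> calling conventions above have direct analogues in other languages; as a rough cross-language illustration (Python's C<os.utime>, not Perl), passing an explicit C<(atime, mtime)> pair mirrors the numeric-times form, while passing C<None> mirrors the C<undef, undef> form:

```python
import os
import tempfile

# Create a scratch file to operate on.
fd, path = tempfile.mkstemp()
os.close(fd)

# Analogue of `utime $atime, $mtime, $file`: set both times explicitly.
os.utime(path, (1_000_000_000, 1_000_000_000))
st = os.stat(path)
assert int(st.st_atime) == 1_000_000_000
assert int(st.st_mtime) == 1_000_000_000

# Analogue of `utime undef, undef, $file`: reset both times to "now".
os.utime(path)
assert os.stat(path).st_mtime > 1_000_000_000

os.remove(path)
```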
Filehandles must be passed as globs or glob references to be recognized; barewords are considered filenames.

Portability issues: L<perlport/utime>.

=item values HASH
X<values>

=item values ARRAY

=for Pod::Functions return a list of the values in a hash

In list context, returns a list consisting of all the values of the named hash. In Perl 5.12 or later only, can also return a list of the values of an array. In scalar context, returns the number of values. As a side effect, calling L<C<values>|/values HASH> resets the HASH or ARRAY's internal iterator (see L<C<each>|/each HASH>) before yielding the values. In particular, calling L<C<values>|/values HASH> in void context resets the iterator with no other overhead.

Apart from resetting the iterator, C<values @array> in list context is the same as plain C<@array>. (We recommend that you use void context C<keys @array> for this, but reasoned that taking C<values @array> out would require more documentation than leaving it in.)

See also L<C<keys>|/keys HASH>, L<C<each>|/each HASH>, and L<C<sort>|/sort SUBNAME LIST>.

=item vec EXPR,OFFSET,BITS
X<vec> X<bit> X<bit vector>

=for Pod::Functions test or set particular bits in a string

Treats the string in EXPR as a bit vector made up of elements of width BITS, and returns the value of the element specified by OFFSET as an unsigned integer. BITS therefore specifies the number of bits reserved for each element in the bit vector; it must be a power of two from 1 to 32 (or 64, if your platform supports that).

If BITS is 8, "elements" coincide with bytes of the input string.

If BITS is 16 or more, bytes of the input string are grouped into chunks of size BITS/8, and each group is converted to a number as with L<C<pack>|/pack TEMPLATE,LIST>/L<C<unpack>|/unpack TEMPLATE,EXPR> with big-endian formats C<n>/C<N> (and analogously for BITS==64). See L<C<pack>|/pack TEMPLATE,LIST> for details.

If bits is 4 or less, the string is broken into bytes, then the bits of each byte are broken into 8/BITS groups. Bits of a byte are numbered in a little-endian-ish way, as in C<0x01>, C<0x02>, C<0x04>, C<0x08>, C<0x10>, C<0x20>, C<0x40>, C<0x80>. For example, breaking the single input byte C<chr(0x36)> into two groups gives a list C<(0x6, 0x3)>; breaking it into 4 groups gives C<(0x2, 0x1, 0x3, 0x0)>.

If the string is internally encoded in UTF-8 (see L<perlunicode>), L<C<vec>|/vec EXPR,OFFSET,BITS> tries to convert it to use a one-byte-per-character internal representation. However, if the string contains characters with values of 256 or higher, a fatal error will occur.
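The little-endian-ish grouping for BITS of 4 or less can be verified mechanically. Here is a cross-check of the C<chr(0x36)> example above, sketched in Python for illustration (the helper name is ours):

```python
def bit_groups(byte, bits):
    """Split one byte into 8 // bits groups of `bits` bits each,
    least-significant group first (vec's numbering for BITS <= 4)."""
    mask = (1 << bits) - 1
    return tuple((byte >> (i * bits)) & mask for i in range(8 // bits))

# chr(0x36) broken into two 4-bit groups, then four 2-bit groups:
print(bit_groups(0x36, 4))  # (6, 3)       i.e. (0x6, 0x3)
print(bit_groups(0x36, 2))  # (2, 1, 3, 0) i.e. (0x2, 0x1, 0x3, 0x0)
```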
Strings created with L<C<vec>|/vec EXPR,OFFSET,BITS> can also be manipulated with the logical operators C<|>, C<&>, C<^>, and C<~>. These operators will assume a bit vector operation is desired when both operands are strings. See L<perlop/"Bitwise String Operators">.

The following code will build up an ASCII string saying C<'PerlPerlPerl'>.

=item wait
X<wait>

=for Pod::Functions wait for any child process to die

Behaves like L<wait(2)> on your system: it waits for a child process to terminate and returns the pid of the deceased process, or C<-1> if there are no child processes. The status is returned in L<C<$?>|perlvar/$?> and L<C<${^CHILD_ERROR_NATIVE}>|perlvar/${^CHILD_ERROR_NATIVE}>. Note that a return value of C<-1> could mean that child processes are being automatically reaped, as described in L<perlipc>.

If you use L<C<wait>|/wait> in your handler for L<C<$SIG{CHLD}>|perlvar/%SIG>, it may accidentally wait for the child created by L<C<qx>|/qxE<sol>STRINGE<sol>> or L<C<system>|/system LIST>. See L<perlipc> for details.

Portability issues: L<perlport/wait>.

=item waitpid PID,FLAGS
X<waitpid>

=for Pod::Functions wait for a particular child process to die

Waits for a particular child process to terminate and returns the pid of the deceased process, or C<-1> if there is no such child process. A non-blocking wait (with L<WNOHANG|POSIX/C<WNOHANG>> in FLAGS) can return 0 if there are child processes matching PID but none have terminated yet. The status is returned in L<C<$?>|perlvar/$?> and L<C<${^CHILD_ERROR_NATIVE}>|perlvar/${^CHILD_ERROR_NATIVE}>.

A PID of C<0> indicates to wait for any child process whose process group ID is equal to that of the current process. A PID of less than C<-1> indicates to wait for any child process whose process group ID is equal to -PID. A PID of C<-1> indicates to wait for any child process. FLAGS values such as C<WNOHANG> are available from the POSIX module (see L<POSIX/WAIT>).

Non-blocking wait is available on machines supporting either the L<waitpid(2)> or L<wait4(2)> syscalls. However, waiting for a particular pid with FLAGS of C<0> is implemented everywhere.
(Perl emulates the system call by remembering the status values of processes that have exited but have not been harvested by the Perl script yet.)

Note that on some systems, a return value of C<-1> could mean that child processes are being automatically reaped. See L<perlipc> for details, and for other examples.

Portability issues: L<perlport/waitpid>.

=item wantarray
X<wantarray> X<context>

=for Pod::Functions get void vs scalar vs list context of current subroutine call

Returns true if the context of the currently executing subroutine or L<C<eval>|/eval EXPR> is looking for a list value. Returns false if the context is looking for a scalar. Returns the undefined value if the context is looking for no value (void context).

    return unless defined wantarray; # don't bother doing more
    my @a = complex_calculation();
    return wantarray ? @a : "@a";

L<C<wantarray>|/wantarray>'s result is unspecified in the top level of a file, in a C<BEGIN>, C<UNITCHECK>, C<CHECK>, C<INIT> or C<END> block, or in a C<DESTROY> method.

This function should have been named wantlist() instead.

=item warn LIST
X<warn> X<warning> X<STDERR>

=for Pod::Functions print debugging info

Emits a warning, usually by printing it to C<STDERR>. C<warn> interprets its operand LIST in the same way as C<die>, but is slightly different in what it defaults to when LIST is empty or makes an empty string. If it is empty and L<C<$@>|perlvar/$@> already contains an exception value then that value is used after appending C<"\t...caught">. If it is empty and C<$@> is also empty then the string C<"Warning: Something's wrong"> is used.

No message is printed if there is a L<C<$SIG{__WARN__}>|perlvar/%SIG> handler installed. It is the handler's responsibility to deal with the message as it sees fit (like, for instance, converting it into a L<C<die>|/die LIST>). Most handlers must therefore arrange to actually display the warnings that they are not prepared to deal with, by calling L<C<warn>|/warn LIST> again in the handler. Note that this is quite safe and will not produce an endless loop, since C<__WARN__> hooks are not called from inside one.

You will find this behavior is slightly different from that of L<C<$SIG{__DIE__}>|perlvar/%SIG> handlers (which don't suppress the error text, but can instead call L<C<die>|/die LIST> again to change it).

Using a C<__WARN__> handler provides a powerful way to silence all warnings (even the so-called mandatory ones). See L<perlvar> for details on setting L<C<%SIG>|perlvar/%SIG> entries and for more examples.
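The re-emit-what-you-don't-handle pattern is not specific to Perl. As a rough analogy (not a C<__WARN__> hook), Python's C<warnings.showwarning> extension point has the same obligation: a replacement hook must forward the warnings it does not want to swallow.

```python
import warnings

seen = []
default_show = warnings.showwarning  # keep the original display routine

def hook(message, category, filename, lineno, file=None, line=None):
    # Swallow the warnings we recognize; re-emit everything else so
    # it is still displayed -- the same duty a __WARN__ handler has.
    if "ignorable" in str(message):
        seen.append("suppressed")
    else:
        default_show(message, category, filename, lineno, file, line)

warnings.showwarning = hook
warnings.warn("ignorable noise")        # silenced by the hook
warnings.warn("something important")    # still printed to stderr
assert seen == ["suppressed"]
```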
See the L<Carp> module for other kinds of warnings using its C<carp> and C<cluck> functions.

=item write FILEHANDLE
X<write>

=item write EXPR

=item write

=for Pod::Functions print a picture record

Writes a formatted record (possibly multi-line) to the specified FILEHANDLE, using the format associated with that file. By default the format for a file is the one having the same name as the filehandle, but the format for the current output channel (see the L<C<select>|/select FILEHANDLE> function) may be set explicitly by assigning the name of the format to the L<C<$~>|perlvar/$~> variable.

Top of form processing is handled automatically: if there is insufficient room on the current page for the formatted record, the page is advanced by writing a form feed, and a special top-of-page format is used to format the new page header before the record is written. By default, the top-of-page format is the name of the filehandle with C<_TOP> appended, or C<top> in the current package if the former does not exist. This would be a problem with autovivified filehandles, but it may be dynamically set to the format of your choice by assigning the name to the L<C<$^>|perlvar/$^> variable while that filehandle is selected. The number of lines remaining on the current page is in variable L<C<$->|perlvar/$->, which can be set to C<0> to force a new page.

If FILEHANDLE is unspecified, output goes to the current default output channel, which starts out as STDOUT but may be changed by the L<C<select>|/select FILEHANDLE> operator. If the FILEHANDLE is an EXPR, then the expression is evaluated and the resulting string is used to look up the name of the FILEHANDLE at run time. For more on formats, see L<perlform>.

Note that write is I<not> the opposite of L<C<read>|/read FILEHANDLE,SCALAR,LENGTH,OFFSET>. Unfortunately.

=item y///

=for Pod::Functions transliterate a string

The transliteration operator. Same as L<C<trE<sol>E<sol>E<sol>>|/trE<sol>E<sol>E<sol>>. See L<perlop/"Quote-Like Operators">.

=back

=head2 Non-function Keywords by Cross-reference

=head3 perldata

=over

=item __DATA__

=item __END__

These keywords are documented in L<perldata/"Special Literals">.
=back

=head3 perlmod

=over

=item BEGIN

=item CHECK

=item END

=item INIT

=item UNITCHECK

These compile phase keywords are documented in L<perlmod/"BEGIN, UNITCHECK, CHECK, INIT and END">.

=back

=head3 perlobj

=over

=item DESTROY

This method keyword is documented in L<perlobj/"Destructors">.

=back

=head3 perlop

=over

=item and

=item cmp

=item eq

=item ge

=item gt

=item le

=item lt

=item ne

=item not

=item or

=item x

=item xor

These operators are documented in L<perlop>.

=back

=head3 perlsub

=over

=item AUTOLOAD

This keyword is documented in L<perlsub/"Autoloading">.

=back

=head3 perlsyn

=over

=item else

=item elsif

=item for

=item foreach

=item if

=item unless

=item until

=item while

These flow-control keywords are documented in L<perlsyn/"Compound Statements">.

=item elseif

The "else if" keyword is spelled C<elsif> in Perl. There's no C<elif> or C<else if> either. It does parse C<elseif>, but only to warn you about not using it. See the documentation for flow-control keywords in L<perlsyn/"Compound Statements">.

=back

=over

=item default

=item given

=item when

These flow-control keywords related to the experimental switch feature are documented in L<perlsyn/"Switch Statements">.

=back

=cut
Details

- Type: New Feature
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.1-beta-2
- Fix Version/s: 1.6-rc-1, 1.5.8, 1.7-beta-1
- Component/s: groovy-jdk
- Labels: None
- Environment: all

Description

Deleting a directory including all subdirectories is a very tedious task. A method in File like "deleteRecursively()" would really help. I am currently using already quite nice code like this:

    def dataDir = new File( ... path ...)
    def dirs = []
    dataDir.eachFileRecurse {
        if (!it.isDirectory())
            log.info("Deleting ${it.name} : ${it.delete()}")
        else
            dirs << it
    }
    dirs.reverse().each {
        log.info("Deleting directory: ${it.name} : ${it.delete()} ")
    }

But why could it not be

    new File ( ... dir ...).deleteDir()

or similar?

Activity

deleteOnExit() deletes at shutdown of the JVM. This is often too late. I would prefer an additional method like Sven proposed.

I attached a patch with a new method deleteDir() for File and a test class for this new method. Here is the JavaDoc for the method:

Deletes a directory with all contained files and subdirectories. The method returns

- true, when deletion was successful
- true, when it is called for a non existing directory
- false, when it is called for a file which isn't a directory
- false, when directory couldn't be deleted

Thanks for the patch! How about changing this to File.delete(boolean recurse = false)?

wouldn't this do:
From the IBM WebSphere Developer Technical Journal. An Enterprise Service Bus (ESB) supports interactions across a number of transport and message protocols. In that respect, IBM® WebSphere® ESB is no different. In previous articles in this series, we have described and shown examples for the exchange of messages across WebSphere MQ, JMS, and SOAP over HTTP. Here, we will take the next step and show how WebSphere ESB supports a key principle of the Enterprise Service Bus pattern, namely that of "virtualized" services. Offering a virtual service means hiding the actual location, protocol, and even the exact interface of a service provider from the service requester. With examples, this article illustrates how you can offer a service to a requester in a different protocol than the service provider expects. In fact, we will offer the same services over two protocols at the same time, thus exposing it to a variety of consumers. What you will see is that this does not actually require a lot of extra work when using WebSphere ESB because of its underlying Service Component Architecture (SCA). This article will follow the same outline as earlier articles, first by starting out with the business scenario, then describing the architecture of the solution, and finally explaining how to make it all work (including testing it) in WebSphere ESB. We will reuse two of the earlier scenarios of our fictional Posts-R-Us company. In the first scenario, we had described how each time a package is received, a message is sent to a backend application so that the status of the order is updated accordingly. In Part 2, we showed how the message is sent to the ESB through a JMS queue, and then forwarded (again using JMS) to the backend application, which receives the message through a message-driven bean. We then enhanced that setup in Part 4, adding a new outbound WebSphere MQ channel. 
Now, finally, we will enhance it yet again by adding client access via a Web service using SOAP/HTTP, as shown in Figure 1. Figure 1. Adding a new channel to send "package received" events (Scenario 1) Through this enhancement, the event indicating that a package was received by a customer can be sent from two different types of client: one using an asynchronous protocol, the other a synchronous protocol. The backend application is not affected by this at all, since the ESB provides the virtual service interface to the clients. The second scenario, which was covered in Part 3, provided a service with which customers and employees could track the status of an individual package. The service was implemented as a regular Web service over SOAP/HTTP. The requester in our example also used SOAP/HTTP as the protocol (in fact, you utilized the Web Services Explorer tool in IBM WebSphere Integration Developer to run the scenario). Here, you will add access to this service through a pair of WebSphere MQ queues, which will offer the service for use from applications that can easily communicate with WebSphere MQ, but don't have any support for Web services. Figure 2. Adding a new channel to receive package status (Scenario 2) Again, the existing service is not affected by this additional consumer; the details of the new protocol are completely handled by the ESB. Scenario 1: Add a SOAP/HTTP consumer to a JMS service Begin by importing the PackageReceivedPart4.ear file into WebSphere Integration Developer. This EAR file (and other files you will need) are contained within the part5downloads.zip file, available in the download section. The EAR file can also be found in the materials you downloaded in Part 4; you won't be changing the application at all. Remember that this is an application with a message-driven bean that receives messages over a JMS queue and prints their content to the screen. 
After creating the example in Part 2, you will have added another import with MQ bindings to it in Part 4. Don't add the resulting project to the runtime environment yet. Import the project interchange file, which includes the mediation module you will use. It is called PackageReceivedModuleWithMQ.zip, and again, you can retrieve it from the download section of this article or find it with Part 4. Open the Business Integration perspective and load the module into the Assembly Editor, which should look like Figure 3. Figure 3. The unchanged mediation module assembly To make the mediation accessible to a Web services client via SOAP over HTTP, you simply add another export and give it the appropriate bindings. That's the beauty of the SCA assembly model: you don't have to change the actual mediation flow component at all; you can "wire" an additional export to it. In the Assembly Editor, drag an Export from the palette and drop it on the canvas, rename it to SOAPClientExport, and connect it to the mediation flow component. This will also add the appropriate interface to the export. Right-click on the export and select Generate Binding... => Web Service Binding, as shown in Figure 4. Figure 4. Generate Web Service Binding In the next dialog, select soap/http as the transport. Save your changes. Done! Before you can deploy and test the updated module, you need to add an activation spec to the server, since that is what the receiving message-driven bean uses to connect to the proper JMS queue. All of this is described in detail in Part 2 (and you may still have it configured if you have been following this series), but here's a brief summary: - Start the WebSphere ESB test server. - Once it is started, run the admin console for the server by right-clicking on the server and selecting Run administrative console. - In the console, navigate to Resources => JMS Providers => Default messaging. - Select JMS activation specification. 
- Create a new activation specification with these values:
  - Name: PackageReceivedActivationSpec
  - JNDI name: eis/PackageReceivedActivationSpec
  - Destination JNDI name: PackageReceivedModule/MDBImport_SEND_D
  - Bus name: SCA.APPLICATION.esbCell.bus
- Save your changes and restart the server.

To run the JMS test client, you would have to create a few more artifacts; we won't do that here, because we are focusing on the Web services export channel. For detailed information on how to deploy and run the JMS test client, see Part 2; none of that has changed at all in our updated scenario. Also, as mentioned earlier, the module we are using includes a WebSphere MQ import. This will cause the received message to be forwarded to an MQ queue called PackageReceivedQueue. We created the import, as well as the associated queue, in Part 4.

Once your server is started again, you can add the PackageReceivedModuleApp and PackageReceivedPart4 projects to the server. Make sure you select the module first, since its deployment will generate the queues and other definitions that are used by the message-driven bean.

You can test the new Web service export simply by right-clicking on the SOAPClientExport_PackageReceivedIFHttp_Service.wsdl in the J2EE perspective and selecting Web Services => Test with Web Services Explorer. Once you run the test, it should look like Figure 5.

Figure 5. Testing scenario 1

Scenario 2: Adding a WebSphere MQ consumer to a SOAP/HTTP Web service

Let's switch to our second scenario, which covers a request-response type of service. The service provider offers this service via SOAP over HTTP, and the mediation module (which you built in Part 3) also exposes it over this protocol. You will now add access over WebSphere MQ to this service, essentially opening it up to an asynchronous protocol, even though the service operation itself is implemented synchronously. Figure 2 showed what the updated architecture of the solution looks like.
To build this solution: Start by importing the backend service provider application, which you can find in the downloaded PackageStatusNewEAR.ear file. There is nothing exciting about this application, it simply prints out messages received and returns a default response message. You will deploy this application later on the test server.

Import the mediation module that is associated with the service, which you can find in the PackageStatusModulePart5.zip project interchange file. (As before, you can alternatively find the already completed solution in the downloaded PackageStatusModulePart5Completed.zip file.) Once you have imported the PackageStatusModule module, open it in the Assembly Editor in the Business Integration perspective. The assembly should look like Figure 6.

Figure 6. The PackageStatusModule assembly

Before you can add the additional export for WebSphere MQ access, take a look at an interesting detail in the service interface for the module or, more specifically, in the schema it contains. Open the PackageTrackingService interface with the WSDL editor (instead of the default Interface Editor) by right-clicking on the file and selecting the Open With => WSDL Editor menu option (Figure 7).

Figure 7. Opening the service interface with the WSDL Editor

Double-click on the namespace in the Types section (this assumes you selected the Graph tab on the editor window). The elements and types that are defined for this interface will display (Figure 8).

Figure 8. Global elements in the service interface

There are two global elements in there that would normally not be required in a plain Web service scenario, namely PackageIdentifier and PackageStatus. Looking at the source, these two elements are defined as follows:

For your WebSphere MQ bindings, you will want to use those two global elements (which are named after the types they "wrap"), rather than the other two global elements, which are used in a SOAP/HTTP scenario. You are now ready to add the new export.
Open the module assembly in the Assembly Editor and add a new export to the canvas. Rename it to PackageStatusExportMQ, and connect it to the mediation module, which will also add the appropriate interface to the export. Right-click on the new export and select Generate Binding... => Message Binding... => MQ Binding.

Figure 9. Adding MQ binding to the export

In the resulting dialog, you need to specify the WebSphere MQ parameters that you want to use for the exchange. The values here will depend somewhat on your local MQ installation, but they should basically be like this (fields not listed here can retain their default values):

- Request Queue Manager: [this is your default queue manager]
- Receive Destination Queue: PackageStatusRequestQueue
- Send Destination Queue: PackageStatusResponseQueue
- Host Name: localhost
- Request Serialization Type: Serialized as XML
- Response Serialization Type: Serialized as XML

Figure 10. Defining MQ binding properties

All of these values can also be changed later in the Properties view. Click OK and save your changes. You won't be making any changes to the mediation flow component implementation in this example, but feel free to open the component in the Mediation Flow Editor to see what the mediation actually does. Notice that it logs both request and response messages. Moreover, the response flow contains a custom mediation that prints out the response message. The code within this custom mediation looks like this:

This is very generic code that you can reuse for other types of mediations, too, given that you change the namespace and root message values, respectively.

Before you can deploy this updated module, you need to create the queues that we specified in Figure 9. Open the WebSphere MQ Explorer, and create two new local queues under the queue manager that you already defined. Name them PackageStatusRequestQueue and PackageStatusResponseQueue.
There is nothing special required for these queues, so you can use all default values for them. Switch back to WebSphere Integration Developer, but leave the MQ Explorer window open because you will need it later for testing. You can now deploy your module and service provider to the test server. Start the server, then once started, add the PackageStatusServiceNewEAR and PackageStatusModuleApp projects. (These instructions assume that the test server you use is running on port 9081, which is the default for the WebSphere ESB test server. If you are using a different port, you must update the port definitions for both the module and the actual service in the PackageStatusModule project, as well as in the properties for the PackagesStatusServiceImport.) You are now ready to send a test MQ message to the mediation, see how it gets logged and forwarded to the service provider over SOAP/HTTP, and see how the response is routed back to the response MQ queue. Use MQ Explorer to send the test message. Right-click on PackageStatusRequestQueue and select the Put Test Message... option. Figure 11. Putting a test messge to the MQ request queue The message you want to send to the mediation looks like this: Notice how the root element of this message is equal to the global element definition we looked at earlier in the service interface file. Figure 12. The test message If the test runs successfully, output will appear in the server console, and the response message will appear in the PackageStatusResponseQueue. You can view it there again using the MQ Explorer tool. Figure 13. Browsing the response message Figure 14. The response message Figure 14 shows what the response message entry looks like. The expected response message is this: Again, notice how the root element of the response message is the global element that is named after the contained complex type. 
To compare this scenario and message output to the Web services export, you can simply use the Web Services Explorer tool in WebSphere Integration Developer to send a test SOAP message to the mediation. Even though the message formats for request and response are different when they appear at the respective export, they will be identical as soon as they hit the mediation flow component. Additional exports with different bindings can be added in the same way.

In this article, we showed how enabling additional protocols for use by service providers and consumers is really very simple using the import/export wiring approach in WebSphere ESB and its underlying Service Component Architecture. You can expose an existing service provider that was developed for use with JMS to consumers that only support SOAP over HTTP. Similarly, you can add support for WebSphere MQ-based consumers to SOAP/HTTP-based service providers. Best of all, none of these changes require any update of the service logic or the mediation flow logic! The fact that all messages are converted to and from the Service Message Object format in WebSphere ESB enables manipulating and processing those messages without regard to the protocol that is used to receive or send them.

As in our previous series about Enterprise Service Bus implementations using WebSphere Application Server, this concludes our description of basic ESB capabilities in WebSphere ESB. There are many more details and more interesting scenarios to cover, but we will leave that to future articles. In our next, and final, installment, we will provide you with a summary of what we have covered, and offer an outlook into what you can expect in the future, in terms of both articles and how the product will evolve further.

- Part 1: An introduction to using WebSphere ESB, or WebSphere ESB vs.
SIBus
- Part 2: A JMS messaging example
- Part 3: Adding Web services and promoting properties
- Part 4: Connecting WebSphere ESB and WebSphere MQ

Rachel Reinitz is an IBM Distinguished Engineer with IBM Software Services for WebSphere.
Hackaday forum member [machinelou] says he's been fascinated with remote controlled hamster balls for quite some time. Inspired by a ball bot he saw on a BBC show, he finally picked up a 12″ plastic ball and got to work. He used a small drill to provide the power required to roll the ball, and an Arduino is used as the brains of the device. This is his first major project outside of simple I/O and servo control, so he's taking things slowly. While all this is a bit new to him, he already has things up and running to a degree as you can see in the video below. In its current state, the ball is programmed to roll forward and backwards for a few seconds before going back to sleep. His future plans include adding a servo-controlled weight to allow him to steer the ball as well as using a pair of Zigbee modules in order to control the ball remotely. It's a neat little project, and definitely one that would be a fan favorite among kids. Stick around to see a quick video of his bot's progress thus far.

6 thoughts on "Ball bot constructed from power tools and pet toys"

import SOUL

That is all…

What's the theoretical maximum slope a ball bot can climb? Can it roll itself out of a sand trap?

Very cool project ;) I have a certain affinity for this one because I did the exact same thing 15 years ago using the same ball, except mine was blue. I started right away with the pendulum steering design and it was an instant A++ for my college senior project. The motor controller was made from scratch, except I didn't have enough time to build in a custom controller solution. I just used a standard R/C radio system which worked well, and did some interface circuitry to massage the PWM from the receiver into an acceptable signal to drive the motor controller. It was a completely original concept to use the pendulum to steer the ball, but as it later turned out the technology was already patented back in like 1985, so I didn't do anything with it.
This same technology has been used by the military and NASA since then. Looks nice … but seriously, 29 sec of video for 5 sec of actual movement? I think the maximum slope it could sustain a climb up would be the greatest angle where some point on the surface of the sphere (where you have conveniently placed your pendulum weight) moves downward as the ball rolls. Once every point on the sphere is being lifted by the turning motion, there is no possibility of it being able to climb. Anyway, in the real world you probably wouldn’t get very close to that theoretical limit. You could overcome that in the short term by rolling the ball against an internal ballast that is being used as a pivot. Sorry, I only read the article after I posted that – I see this particular ball bot only uses the second method, so the answer is that the slope you can climb up is determined by how powerful the motor is and how much grip the ball has.
http://hackaday.com/2011/04/11/ball-bot-constructed-from-power-tools-and-pet-toys/
Introduction

The Firebase SDKs have long had a problem with large bundle sizes. The Firebase V9 modular SDK has been pre-released to address this issue. The V9 modular SDK is still in beta at the time of this writing; it is also being used on this blog. In this article, I would like to compare how much the V9 modular SDK can reduce the bundle size.

Changes in the V9 Modular SDK

The V9 modular SDK is a release branch whose main purpose is to reduce the bundle size. The biggest difference from the V8 SDK is that the code base has been changed from class-based to function-based. An app built with the V9 modular SDK can apparently be up to 80% smaller than a comparable app built using the V8 SDK [1].

The V8 SDK is notable for its method-chain style of execution on classes. Classes can't benefit from the tree-shaking of the bundler, so all methods are bundled, even unused ones. This causes an explosion in bundle size from, for example, just loading data from Cloud Firestore. In the V9 modular SDK, on the other hand, what were originally class methods are now functions. This allows unused functions to be tree-shaken by the bundler. The trade-off is that it is not compatible with the V8 SDK, so you will need to refactor to switch to the V9 modular SDK. The compat library is available to help you make the transition in stages, but for projects that are just starting out, the V9 modular SDK may be the better choice.

Bundle size comparison

The bundle size of the V9 modular SDK should increase as more functions are used, and it will vary depending on which functions you import. Therefore, in this article, we will compare bundle sizes by showing the upper and lower limits of the bundle size. In the V9 modular SDK, an initialization step is always required when working with each Firebase resource. The lower limit of the bundle size is defined as the state where only this initialization is performed.
The upper limit of the bundle size is when all the resources are bundled without tree-shaking. In normal use, the bundle size will fall between the upper and lower limits. The V8 SDK, by contrast, does not support tree-shaking, so its bundle size is constant.

Verification Environment

Compare bundle sizes in a project that uses vite as a bundler.

yarn create @vitejs/app <project-name> --template preact-ts
cd <project-name>

V9 of the firebase module is still in beta, so it needs the beta tag to be installed.

// V8
yarn add firebase
// V9
yarn add firebase@beta

To remove comments and licenses in vite, change vite.config.ts as follows:

import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    terserOptions: {
      format: {
        comments: false
      }
    }
  }
})

If you want to deploy it, you will need to output the license information to another file.

Firebase App

In any Firebase project, an initialization step is required. First, let's look at the bundle size associated with the initialization of Firebase App.

Full bundle size of Firebase App

Let's look at the size of a full bundle of Firebase App.

import firebase from 'firebase/app'
firebase.initializeApp({ /* config */ })

Since the V9 modular SDK does not have a default export, you can disable tree-shaking by doing the above. You can see that the two versions differ considerably in size. The important thing to note here is that the V9 modular SDK has the potential to reduce the bundle size through tree-shaking, while the bundle size of the V8 SDK remains constant. Let's take a look at the bundle size for named imports.

Bundle size for initializeApp

The initializeApp function must be executed prior to the initialization of all Firebase resources.

import { initializeApp } from 'firebase/app'
initializeApp(firebaseOptions)

When only the initializeApp function is bundled, the size is 15.99 kb. In other words, these are the upper and lower limits for using the firebase/app module.
The bundle size was further reduced by tree-shaking. The bundle size was 21.99 kb when using the V8 SDK, so we can see that the bundle size is smaller with the V9 modular SDK. The V9 modular SDK reduces the overall size of the module, and the bundle size can be reduced further by tree-shaking. In fact, the official documentation lists the following two areas for bundle size improvement:

- Cloud Firestore
- Authentication

Since we have seen the size reduction for Firebase App initialization, let's take a closer look at these two.

Cloud Firestore

In the V9 modular SDK, Cloud Firestore has a new submodule called lite. If you don't use real-time streaming, you can switch to it to reduce the weight further. The V8 SDK also has a memory submodule. Normally, data is kept in IndexedDB, but switching to this will keep the data in memory. Since there is then no IndexedDB-related code, the bundle size is smaller than a full-featured build. This can be used if you do not need to persist data between sessions. First, let's look at the size of a bundle with all Cloud Firestore modules.

Cloud Firestore full bundle size

First, let's look at the size when everything is bundled without any tree-shaking. There are four patterns for Cloud Firestore modules, including the use of submodules.

import 'firebase/firestore'

This is the upper limit of the bundle size in each case. In the V8 SDK, all methods are available at the initialization stage, so the bundle size will not increase or decrease any further. In comparison, we can see that there is a big difference in the full bundle. In particular, using the lite submodule of the V9 modular SDK seems to reduce the bundle size significantly. Next, let's see what happens when we turn on tree-shaking.

Bundle size for initializeFirestore

In the V9 modular SDK, the state where only initializeFirestore has been called is considered the lower limit of the bundle size.
import { initializeFirestore } from 'firebase/firestore'
const firestore = initializeFirestore(app, {})

From here, the bundle size will increase as you import and use more functions from the module. For reference, the following code, which reads a document by its ID with the lite submodule, results in a bundle size of 38.21 kb.

import { doc, getDoc, initializeFirestore } from 'firebase/firestore/lite'
const firestore = initializeFirestore(app, {})
const document = doc(firestore, 'posts', 'id')
getDoc(document)

Although the bundle size added by a single function is larger than I imagined, you can still see that the lite submodule is very small. If you don't need a real-time listener in your project, we recommend that you switch to it proactively.

Authentication

Similarly, let's take a look at Authentication.

Full bundle size for Authentication

import 'firebase/auth'

The full bundle reduces the size by about 40%.

Bundle size for initializeAuth

import { initializeAuth } from 'firebase/auth'
initializeAuth(app)

It is now very small. If you import the signInAnonymously function here, for example, and enable sign-in as an anonymous user, the total bundle size becomes 41.07 kb [2]. Compared to the V8 SDK, the bundle size is indeed reduced by about 80%. Since the reduction in bundle size benefits everyone, we recommend switching to the V9 modular SDK. Please see Upgrade from version 8 to the modular Web SDK for a step-by-step guide.

- Introducing the new Firebase JS SDK
- Increased by about 1 kb with
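The class-based versus function-based difference described above can be sketched in plain JavaScript. Note this is not real Firebase code; the names (FirestoreLike, getDocument, setDocument) are invented purely to illustrate why a bundler can drop unused function exports but cannot drop unused class methods:

```javascript
// Hypothetical class-based API: importing the class keeps ALL of its
// methods, because a bundler cannot know which ones callers will invoke.
class FirestoreLike {
  getDocument(id) { return { id }; }
  setDocument(id, data) { return { id, ...data }; }
  onSnapshot(cb) { /* heavy streaming machinery would live here */ }
}

// Hypothetical function-based API: each capability is a separate export,
// so importing only getDocument lets the bundler drop the others.
function getDocument(db, id) { return { id }; }
function setDocument(db, id, data) { return { id, ...data }; }

// Both styles behave identically at runtime:
const viaClass = new FirestoreLike().getDocument("post-1");
const viaFunction = getDocument({}, "post-1");
console.log(viaClass.id === viaFunction.id); // true
```

A bundler doing static analysis sees `import { getDocument }` and can prove setDocument is dead code; with the class, every method stays reachable through the instance, so nothing can be shaken out.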
https://miyauchi.dev/posts/firebase-bundle-size/
What's the problem? (Score:3, Insightful)

What's the problem? It's random enough for a browser selection screen. This isn't an application where a statistically random shuffle is required.

Re:Good enough (Score:5, Insightful)

Given that each person will only lose one cent per lifetime, I propose to move $0.01 from each bank account in the world to my own account.

…standard wooden fence, and then the local anti-government militia guy laughing at your ignorance, because everyone who knows anything about fences knows you choose the solution that's 12 feet high with a curved top to prevent climbing and a sunken base of 3 feet to prevent dog-tunneling.

Re:damned faintly praising? (Score:5, Insightful)

No, the point was that no one browser got unfairly pushed to the top all the time. This algorithm does push a certain browser higher more often than not, and hence is not fit for its job.

Re:He's just bitching

…science, academia and the body of professional programmers. So can the "the devil is in the details" crap; you don't know what you're talking about. Building a complex software package that takes into account every possible detail in both process and implementation is impossible in any environment currently available for consumer software and general computing hardware. Just when you think you've got everything covered, a vendor builds a buggy component, security specialists discover a flaw in the way you learned to write your software, nature builds a better idiot, or a piece of a radioactive isotope in a memory module emits a beta particle, just to ruin your day.

Do not fix! (Score:1, Insightful)

Looking at the outcome, IE comes off the worst with the current algorithm; please keep it that way. Thanks from all the web developers.

Exactly. And the Apple people here managing to interpret this as a plot against Safari are just amazing. MS would represent IE the worst, and Chrome and Firefox the best, just to get Safari. Yeah, right. Talk about delusions of grandeur.

Re:He's just bitching

…some of Microsoft's programmers are of such low quality. What is odd is that their legal department can't make their technical managers understand "do this right or we lose the right to do business on the second most profitable continent."

Re:damned faintly praising?

…the ballot screen. But at least on Slashdot we can expect some higher standards than a return (0.5 - Math.random()); comparison function.

Re:He's just bitching

Doubtful. The mere fact that the order places certain items in certain positions a disproportionate amount of the time would raise considerable doubt that Microsoft acted in good faith. This would be sufficient reason to introduce user test data which would demonstrate that the last position is not the least desirable.

Re:He's just bitching (Score:2, Insightful)

Exactly. From these results, we can assume one of two things: 1) incompetence, or 2) malice. There may be an off chance of both incompetence and malice given Microsoft's history, but consider that this action was performed solely to meet legal requirements set forth by the EU to inhibit Microsoft's monopolistic behaviors. Regardless of which it was, the end result will (likely) be one of two things: the EU will say "not good enough" and another year-plus trial will go on before any actual change gets made, or the EU will let it slide and Microsoft will reap the benefit of whatever they intended with this algorithm.

To my eyes, it looks like Microsoft is giving preference in first place proportionally to each browser's current name recognition, with the exception of their own browser. I don't know if this is intentional. However, also consider how dialog boxes typically work, and how people have been conditioned (on Windows and pretty much everywhere else) to immediately look to the left-hand side for their "get past this irritating prompt" button. It's a technique used to install all sorts of insidious malware, so evidently it's a technique that works. By having IE hold the position closest to that visual-cue area, they are giving it preference. Also consider the impact that having the IE logo branding (or any logo, for that matter) on your desktop for a decade will have. I would not be surprised to see an article on statistics resulting from this browser selector showing up in a couple of months, showing the profound popularity of IE. I'd wager at least 50%.

In conclusion, we should not care whether the distribution is "random" but whether it is uniform (i.e. all possible permutations of 5 browsers appear with equal frequencies).

Re:You can't artificially put down competition (Score:2, Insightful)

Obviously I didn't explain what was going on very well; I (stupidly) assumed people would read the actual article and the data there. But hey, this is Slashdot, so I guess I'd better fill you in. It's not two slots. It's one slot (look at the results). The "bottom two" comes from the fact that in each browser test, a SINGLE SLOT was used either 40% or 50% of the time (depending on the browser). The exact NUMBER of the slot depended on which browser was being used. Thus we are talking about 40-50% vs 20% (which would be random). Furthermore, the 50% is still out of variance by a decently large factor, even if we were talking about two whole slots instead of one. So, "wow" yourself. Look at the data next time before you leap to conclusions, or state that an utterly broken algorithm is "working perfectly".

Re:Milliseconds (Score:3, Insightful)

This is obviously not a random distribution curve. I believe you meant to say uniform rather than random.
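The comparator quoted above, return (0.5 - Math.random());, is worth demonstrating. A sort comparator must be consistent; a random one is not, so the result depends on the engine's sorting algorithm and tends to be biased, which is exactly the effect the thread is arguing about. A Fisher-Yates shuffle is the standard fix. This sketch (the browser names are just labels) counts how often each item lands in the first slot over many trials:

```javascript
// Biased shuffle: the random comparator violates sort's contract,
// so the outcome depends on the engine's sort algorithm.
function biasedShuffle(arr) {
  return [...arr].sort(() => 0.5 - Math.random());
}

// Unbiased Fisher-Yates shuffle: every permutation is equally likely.
function fisherYates(arr) {
  const a = [...arr];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Count how often each item lands in slot 0 over many trials.
function firstSlotCounts(shuffle, items, trials) {
  const counts = Object.fromEntries(items.map((x) => [x, 0]));
  for (let t = 0; t < trials; t++) counts[shuffle(items)[0]]++;
  return counts;
}

const browsers = ["IE", "Firefox", "Chrome", "Safari", "Opera"];
console.log("biased:      ", firstSlotCounts(biasedShuffle, browsers, 10000));
console.log("Fisher-Yates:", firstSlotCounts(fisherYates, browsers, 10000));
```

Run this and the Fisher-Yates counts cluster around 2000 each (uniform over 5 items), while the biased version's distribution depends on the JavaScript engine's sort and is generally skewed.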
http://developers.slashdot.org/story/10/02/28/1837223/Schooling-Microsoft-On-Random-Browser-Selection/insightful-comments
Let’s see the structure declaration below:

struct A {
    int a;
    float b;
};

struct B {
    int c;
    float d;
    struct A e;
};

So, what have you noticed here? The member ‘e’ of struct B is itself a structure of type struct A. During compilation, the line

struct A e;

doesn’t produce any error. Member ‘e’ of structure B is itself a structure, of type struct A, which is already declared before structure type B. Therefore, the compiler has no problem allocating space for member ‘e’ of structure B. So, we can say that a structure whose member is itself a structure is called a nested structure. Let’s learn to access a nested structure, for example:

#include <stdio.h>

/* structure A declared */
typedef struct A {
    int a;
    float b;
} New_a;

/* structure B declared */
typedef struct B {
    int c;
    float d;
    struct A e;    /* member 'e' is itself a structure */
} New_b;

int main(void)
{
    /* Let's declare variables of New_a and New_b */
    New_a bread;
    New_b butter;    /* 'butter' is a nested structure */

    /* Let's access bread using the dot operator */
    bread.a = 10;    /* assigned member a value 10 */
    bread.b = 25.50;

    /* Let's access butter using the dot operator */
    butter.c = 10;
    butter.d = 50.00;

    /* Let's access member 'e', which is a nested structure */
    butter.e.a = 20;
    butter.e.b = 20.00;

    /* Display values of members of 'butter.e' structure */
    printf("butter.e.a is %4d\n", butter.e.a);
    printf("butter.e.b is %.2f\n", butter.e.b);

    return 0;
}

Output of the above program follows:

butter.e.a is   20
butter.e.b is 20.00

Notice that ‘e’ is a member of structure butter and at the same time it’s a structure of type struct A. Therefore, we used the dot operator to access ‘e’ in butter, as

butter.e

but ‘butter.e’ is a structure of type struct A. struct A has two members, an integer and a float. To access these two members of butter.e we again used the dot operator, as

butter.e.a = 20;

and assigned an integer value 20 to member ‘a’ of butter.e.
Similarly, we accessed member ‘b’ of butter.e

butter.e.b = 20.00;

and assigned it a float value. This isn’t the only way you can access nested structures. What if we don’t know the structure by name and have a pointer to the structure instead? For example:

New_b *p2b = &butter;

Then how will you access members of the ‘butter’ structure? We can use the pointer with the indirection operator as well as the arrow operator to access them. Let’s understand the difference between using a pointer with indirection and the arrow operator before we try them out to access members of butter. Notice that the expression

p2b;

is a pointer to the ‘butter’ New_b structure. Applying indirection to it, i.e.

*p2b;

results in the value p2b is pointing to. As an R-value, this is the value of the entire structure ‘butter’, which can be thought of as equivalent to the structure name ‘butter’. As an L-value, it refers to the entire location occupied by ‘butter’, wherein ‘butter’ members can receive new values. Let’s use it as an R-value to access the members of ‘butter’:

(*p2b).member_of_butter;

Notice the parentheses used around ‘*p2b’. Since the dot ‘.’ operator has higher precedence than the ‘*’ operator, the parentheses cause ‘*p2b’ to be evaluated first; then the ‘.’ operator selects a member of ‘butter’. For example:

(*p2b).c;    /* integer member of 'butter' */

Now we try using the arrow ‘->’ operator to access ‘butter’ and its members. The arrow operator has indirection built into it. Therefore, the expression

p2b->c;

means indirection performed on ‘p2b’, then member ‘c’ of ‘butter’ selected. We prefer the arrow operator because of its convenience.
Let’s access members of ‘butter’ using the pointer-to-butter rather than the name ‘butter’:

printf("p2b->c is an integer %d\n", p2b->c);
printf("p2b->d is a float %f\n", p2b->d);

/* member 'e' is a nested structure; let's access this too */
printf("p2b->e.a is an integer %d\n", p2b->e.a);
printf("p2b->e.b is a float %f\n", p2b->e.b);

Let’s understand the expression

p2b->e.a;

p2b->e selects member ‘e’ of ‘butter’, but ‘e’ is a structure of type struct A. Therefore, the expression p2b->e represents a structure of type struct A. Structure A has two members, an integer and a float. To access their values, we used the dot operator; for example,

p2b->e.a;

selects the integer member of the ‘p2b->e’ structure. Likewise, we accessed the float member:

p2b->e.b;

Note that the ‘.’ and ‘->’ operators share the same (postfix) precedence level, and their associativity is from left to right. Hence, in the expression p2b->e.a; ‘p2b->e’ is evaluated first and then the dot operator is applied.

Sanfoundry Global Education & Learning Series – 1000 C Tutorials.
http://www.sanfoundry.com/c-tutorials-nested-structure-access/
How to use Gatsby's Head API with MDX

- Gatsby
- React
- JavaScript
- MDX

Hi there! I'm excited, are you? Of course you are! In Gatsby 4.19.0 the team shipped the Gatsby Head API 🎉 But what does this mean for you, and why am I excited?

React Helmet

Historically, the way to add indexable metadata to your Gatsby site's <head> element was to use a combination of react-helmet and gatsby-plugin-react-helmet, but rather worryingly, react-helmet hasn't really been updated since 2020. 😬

What does it mean for you?

Using an open-source library that's not well maintained can lead to headaches, as I'm sure you're well aware.

Why am I excited?

The Gatsby engineering team recognizes this and has now moved all of that lovely Helmet functionality into the core framework! Superb!

Migration Options

To use the Head API today, upgrade to at least 4.19.0, and I'll now talk you through the steps required to migrate from react-helmet to the Head API. There are two slightly different ways you might wish to approach this, depending on whether you're using unique pages or a template/layout file (MDX blog posts with frontmatter, for example). I've prepared an example repo and two PRs which you can use for reference.

Example Repo (Using React Helmet)

PRs (Using The Head API)
- ⚙️ feat/use-head-api-fs-routes
- ⚙️ feat/use-head-api-gatsby-node

Getting Started

I've tried to consider the most common scenario based on the approaches I see many folks use. Your use case may well be different.

Remove React Helmet

npm uninstall react-helmet gatsby-plugin-react-helmet

// gatsby-config.js
module.exports = {
  ...
  plugins: [
-   'gatsby-plugin-react-helmet',
    ...
  ]
};

Page

Generally I see folks using an <Seo /> component somewhere in a page or page template file. In the example repo please have a look at src/pages/index.js#L9; here's a similar-looking code snippet.
Seo

// src/pages/index.js
import React, { Fragment } from 'react';
import Seo from '../components/seo';

const Page = () => {
  return (
    <Fragment>
      <Seo title="Gatsby Head API MDX" />
      <main>...</main>
    </Fragment>
  );
};

export default Page;

... and here's what the same page looks like using the Head API.

export const Head

// src/pages/index.js
import React, { Fragment } from 'react';
import Seo from '../components/seo';

const Page = () => {
  return (
    <Fragment>
-     <Seo title="Gatsby Head API MDX" />
      <main>...</main>
    </Fragment>
  );
};

export default Page;

+ export const Head = () => {
+   return <Seo title="Gatsby Head API MDX" />;
+ };

Seo component

Now you can remove any reference to <Helmet /> from the <Seo /> component.

// src/components/seo.js
import React from 'react';
- import { Helmet } from 'react-helmet';

const Seo = ({ title }) => {
  return (
-   <Helmet>
      <title>{title}</title>
-   </Helmet>
  );
};

export default Seo;

... and that's it!

Frontmatter as title

The above example shows a simple method for "hard-coding" a title and passing it on to the <Seo /> component via the title prop. In page templates you'll likely need to use the title as defined in the frontmatter. Take a look at the src from the example repo: src/pages/posts/{mdx.frontmatter__title}.js#L43

Head props

Before you get going, you might like to inspect the props passed to the Head API. They should be the same as what's passed to the page, e.g.

export const Head = (props) => {
  console.log(JSON.stringify(props, null, 2));
  return null;
};

In my example repo this results in something similar to the below.

{
  "location": {
    "pathname": "/posts/this-is-post-one"
  },
  "params": {},
  "data": {
    "mdx": {
      "frontmatter": {
        "title": "This is post one"
      },
      "body": "..."
    }
  },
  "pageContext": {
    "id": "6aa907b2-4040-5e38-b6f0-4f1762068476"
  }
}

The bit I'm most interested in is data.mdx.frontmatter.title, as this is what I'll need to pass on to the <Seo /> component to display in the HTML <title />.
export const Head = ({
  data: {
    mdx: {
      frontmatter: { title }
    }
  }
}) => {
  return <Seo title={title} />;
};

Now when I visit each of the post pages in the browser, I see the page title change in my browser tab, and when I inspect the DOM I see the following. Notice the data attribute on the title: if it says data-gatsby-head you're all set!

<head>
  <title data-gatsby-head="true">This is post one</title>
</head>

... and that's it, for real this time!

How am I doing?

Hey! Lemme know if you found this helpful by leaving a reaction.
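As a quick sanity check of that destructuring, you can run it against a plain object shaped like the props printed earlier; no Gatsby needed, since it's just nested destructuring:

```javascript
// Plain-object stand-in for the props Gatsby passes to Head.
const props = {
  location: { pathname: "/posts/this-is-post-one" },
  params: {},
  data: { mdx: { frontmatter: { title: "This is post one" }, body: "..." } },
  pageContext: { id: "6aa907b2-4040-5e38-b6f0-4f1762068476" },
};

// Same nested destructuring pattern as the Head export above.
const getTitle = ({ data: { mdx: { frontmatter: { title } } } }) => title;

console.log(getTitle(props)); // "This is post one"
```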
https://paulie.dev/posts/2022/07/how-to-use-gatsbys-head-api-with-mdx/
What is tmux and why would you want it for frontend development?

I think it’s time more frontend and full-stack developers started using tmux, which stands for “terminal multiplexer”. But before we get into whatever “multiplexer” means, let’s consider Vim. Vim is the default text editor when writing a git commit message. A wonky terminal program that somehow makes writing 50 characters more challenging than typing them outright. You end up alt-tabbing to search how to quit this thing, because your emergency key combo Ctrl-C to close out programs… does nothing! Vim is a tough, unforgiving piece of software from a bygone era, like a treacherous ring full of magical power. Truthfully, if you don’t know and use Vim already, I don’t intend to convince you to pick up tmux. It’s got decent mouse support, but working with modal, keyboard-driven apps like vim and tmux is not easy. This article is simply an introduction to a powerful terminal tool that you might consider adding to your kit.

What is tmux?

There’s a range of vims a developer could run — some use it on remote computers like it’s the 1980s, some use it strictly within their Terminal app, or there are suckers like me, who use a graphical interface with operating-system-specific gimmicks. I use vim like the serious devs but copy-paste just like you, in an app called MacVim. There is no “graphical” tmux. There is no mactmux. It is a program built for those serious devs who work on actual shells on a variety of computers. It’s like a window manager that works inside any terminal. The devops folks working on many machines, backend developers optimizing some engine… for these people tmux, or its enigmatic predecessor screen, comprise a powerful school of magic. A “terminal multiplexer” like tmux or screen lets you:

- Leave terminal sessions and come back to them without interrupting the running process.
On a remote computer, you can run some long installation process in tmux, go offline, and come back later to see the results, e.g. the live terminal output. The computer, of course, needs to stay turned on.
- Manage a whole swarm of screens, split displays, and dashboards at once, organized in three layers as “sessions”, “windows”, and “panes”. You can maneuver many terminal screens with home-row-friendly shortcuts and manipulate your workspace with scripts and hotkeys.

This one-two punch is incredible for ops folks on the go, the “shit-is-on-fire” crew, and backend developers, but I think very few frontend (or general full-stack) developers use tmux day to day, and they should.

Why should frontend folks care about tmux?

Consider the steadily rising number of build tools, software services, and storage servers that need to run all at once to even resemble a large web property on your own computer. Your database, your Redis cache, your Ruby or Python application server, your static (or splash-site-specific) code, some Node service or two building assets. Maybe a separate server for granting users sessions and authorizing requests. It’s painstakingly difficult to organize 7 servers’ worth of logs, plus a couple more terminal windows for git or general-purpose navigation. tmux is useful even if you don’t write your code in vim inside the terminal — as your dependencies swell, or if you need to bounce between the contexts of work locations, home projects, and Mr. Robot terminal acting (if you work for that show, I’d do it for cheap). Jumping between contexts and saving your work is exactly what tmux is designed for. Done with work for the day? Switching to your `fsociety` IRC session? Dip out of the work context with `:d`, or detach, and then `tmux attach -t fsociety` will open a whole different bunch of windows and panes.

Modes

Like vim, tmux has modes, but the default mode feels the same as your normal Terminal.
You type “git status”, that appears as letters, you hit enter, and git status runs. A “prefix” key combination kicks you out of that and into a command mode (the prefix is commonly Ctrl-a or Ctrl-f; the default is Ctrl-b). Now 1–5 might move you to different windows, and a whirlwind of shortcuts is available to you. If you start your command with :, you get a command line (like the infamous :wq<Enter> in vim). To see all the shortcuts currently set in your tmux, just hit your prefix and then :keys<Enter>. From my limited experience, copy mode is both the most useful and the only mode you might get stuck in. Oh yeah, I guess there’s also a clock mode, which switches a pane from a terminal to a nice little clock in the corner of your screen.

Startup Script

Restarted your computer for an update? You can make a bash script to open a bunch of screens, migrate your db, and organize the logs of a dozen services at once while you grab coffee. Let’s see how this works… You start by making a regular bash script that says tmux new-session -d and then tmux split-window -v -p 34; this makes a new session with one window and some split panes. You can then script sending keystrokes to your tmux session: tmux send-keys ‘redis-server’ ‘C-m’. This example starts the local Redis server and hits enter. You can script all those pane arrangements, servers, and so on into a few dozen calls to tmux. At the end of the whole script you can either list sessions or attach right into your just-right workspace.

Just like vim, tmux can be intimidating and is much too deep to master in one sitting. You can get the baseline skill in a day and speed up your workflow in a week of cautious use. Copying and pasting in tmux feels like opening and closing a spaceship airlock, but otherwise it’s a powerhouse for managing all the scripts and servers a regular web developer might need to run. tmux is a unique user experience all on its own, but it is heavily influenced by vim & friends.
If you’re comfortable with the one editor to rule them all, you’ll have a fairly easy time adjusting to the terminal multiplexer… to rule all the terminal multiplexers. If you’re experiencing serious amounts of clutter in your Terminal setup, I encourage you to give tmux a go. Thanks for reading!

🔌 To get random ramblings about 🎛 synthesizers, 🇺🇸🇺🇦 politics, and 🎨 CSS, follow me on twitter: @tholex

Installing tmux

tmux is available using brew on OSX and apt-get on Ubuntu. Much like vim, tmux has a wild range of setups and config options. You can check out my tmux.conf to get a sense of the available options. I copy-pasted someone else’s and at most changed just a couple of keybindings (mainly setting the prefix to ctrl-f; pasting is done via [ and ]). There’s also a necessary shim for copying and pasting on OSX, available via brew as reattach-to-user-namespace.
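To make the startup-script idea from earlier concrete, here is a minimal sketch. The session name and the per-pane commands ("dev", redis-server, npm run start) are placeholders for whatever your project runs, and the guard on the first line simply bails out politely on machines without tmux:

```shell
#!/usr/bin/env sh
# Minimal tmux startup script (sketch). Adjust session and command names.
command -v tmux >/dev/null 2>&1 || { echo "tmux not installed"; exit 0; }

SESSION="dev"

tmux new-session -d -s "$SESSION"           # new detached session, one window
tmux split-window -v -p 34 -t "$SESSION"    # split it: bottom pane gets 34%
tmux send-keys -t "$SESSION:0.0" 'redis-server' C-m    # type + enter, top pane
tmux send-keys -t "$SESSION:0.1" 'npm run start' C-m   # and the bottom pane

tmux list-sessions
# tmux attach -t "$SESSION"   # uncomment to jump straight into the workspace
```

Run it once after boot; tmux attach -t dev later drops you back into the same panes, logs and all.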
https://medium.com/@tholex/what-is-tmux-and-why-would-you-want-it-for-frontend-development-e43e8f370ef2
Greetings!

How would I be able to get the return value of a Lua function called in my application? My C code is this:

lua_pushnumber(8);
lua_callfunction(lua_getglobal("luafunc"));
printf("Result In C: %d \n", lua_getresult(1));

and my function is:

function luafunc(value)
  write("In Lua Function \n");
  write("Value Is: ");
  write(value);
  write("\n");
  return (value * value);
end

When I call lua_getresult(1), I get the same result no matter what I feed the function. I changed the rest of the Lua script just in case it was affecting my function, but I get the same result nonetheless. I'm always getting 3 when I call lua_getresult(1). I should be getting 64. Is this the right way of doing it?

And do I use Lua's format() if I want to format a string like C's printf()? For example, if I wanted to write something similar to printf("%d", value); in Lua, should it be write(format("%d", value));? I've tried it, but it doesn't work. I may be missing something.

with milk and cookies,
Shawn
http://lua-users.org/lists/lua-l/2000-02/msg00019.html
prompter_bp 0.0.1

This is a test from Stephen Grider's Udemy class.

example/main.dart

import 'package:prompter_bp/prompter_b_b_bp/prompter_bp.dart';

We analyzed this package on Jan 17, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:

- Dart: 2.7.0
- pana: 0.13.2

Health issues and suggestions

Document public APIs. (-1 points)

11 out of 11 API elements have no dartdoc comment. (line 13, col 3: The method askMultiple should have a return type but doesn't.)

Format lib/prompter_bp.dart. Run dartfmt to format lib/prompter_b. (-9.04 points)

The package was last published 56 weeks ago.
https://pub.dev/packages/prompter_bp
Note This topic discusses named methods. For information about anonymous functions, see Anonymous Functions. Method Signatures Methods are declared in a class or struct by specifying the access level such as public or private, optional modifiers such as abstract or sealed, the return value, the name of the method, and any method parameters. These parts together are the signature of the method. Note The return type of a method is not part of the signature of the method for the purposes of method overloading. However, it is part of the signature of the method when determining the compatibility between a delegate and the method that it points to. Method parameters are enclosed in parentheses and are separated by commas. Empty parentheses indicate that the method requires no parameters. Method Access Calling a method on an object is like accessing a field: after the object name, add a period, the name of the method, and parentheses; arguments are listed within the parentheses and separated by commas. Passing by Reference vs. Passing by Value By default, when an instance of a value type is passed to a method, a copy of the instance is passed instead of the instance itself. For a list of built-in value types, see Value Types Table. When an instance of a reference type is passed to a method, a reference to the object is passed. Consider the following reference type: public class SampleRefType { public int value; } Now, if you pass an object that is based on this type to a method, a reference to the object is passed. The following example passes an object of type SampleRefType to method ModifyObject. public static void TestRefType() { SampleRefType rt = new SampleRefType(); rt.value = 44; ModifyObject(rt); Console.WriteLine(rt.value); } static void ModifyObject(SampleRefType obj) { obj.value = 33; } For more information, see Value Types and Reference Types. Return Values Methods can return a value to the caller. A value can be returned by value or, starting with C# 7.0, by reference. Values are returned to the caller by reference if the ref keyword is used in the method signature and it follows each return keyword. For example, the following method signature and return statement indicate that the method returns a variable named estDistance by reference to the caller. public ref double GetEstimatedDistance() { return ref estDistance; } The value returned by a method can be assigned to a variable and reused, or passed directly to another method call: result = obj.AddTwoNumbers(1, 2); result = obj.SquareANumber(result); // The result is 9. 
Console.WriteLine(result); result = obj.SquareANumber(obj.AddTwoNumbers(1, 2)); // The result is 9. Console.WriteLine(result); Using a local variable, in this case, result, to store a value is optional. It may help the readability of the code, or it may be necessary if you need to store the original value of the argument for the entire scope of the method. To use a value returned by reference from a method, you must declare a ref local variable if you intend to modify its value. For example, if the Planet.GetEstimatedDistance method returns a Double value by reference, you can define it as a ref local variable with code like the following: ref double distance = ref planet.GetEstimatedDistance(); Returning a multi-dimensional array from a method, M, that modifies the array's contents is not necessary if the calling function passed the array into M. You may return the resulting array from M for good style or functional flow of values, but it is not necessary because C# passes all reference types by value, and the value of an array reference is the pointer to the array. In the method M, any changes to the array's contents are observable by any code that has a reference to the array, as shown in the following example. static void Main(string[] args) { int[,] matrix = new int[2, 2]; FillMatrix(matrix); // matrix is now full of -1 } public static void FillMatrix(int[,] matrix) { for (int i = 0; i < matrix.GetLength(0); i++) { for (int j = 0; j < matrix.GetLength(1); j++) { matrix[i, j] = -1; } } } For more information, see return. Async Methods By using the async feature, you can invoke asynchronous methods without using explicit callbacks or manually splitting your code across multiple methods or lambda expressions. Note An async method returns to the caller when either it encounters the first awaited object that’s not yet complete or it gets to the end of the async method, whichever occurs first. For more information, see Asynchronous Programming with async and await, Control Flow in Async Programs, and Async Return Types. 
Expression Body Definitions It is common to have method definitions that simply return immediately with the result of an expression, or that have a single statement as the body of the method. There is a syntax shortcut for defining such methods using =>: public Point Move(int dx, int dy) => new Point(x + dx, y + dy); public void Print() => Console.WriteLine(First + " " + Last); // Works with operators, properties, and indexers too. public static Complex operator +(Complex a, Complex b) => a.Add(b); public string Name => First + " " + Last; public Customer this[long id] => store.LookupCustomer(id); If the method returns void or is an async method, then the body of the method must be a statement expression (same as with lambdas). Properties and indexers defined this way must be read only, and you don't use the get accessor keyword. Iterators An iterator performs a custom iteration over a collection, using the yield return statement to return each element one at a time. Language Specification For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage.
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/methods
Social Authentication (or Social Login) is a method of simplifying end-user logins by utilizing existing login information from prominent social networking services like Facebook, Twitter, Google, and LinkedIn, which is the subject of this article. Many websites let users log in through social login platforms for a better authentication/registration experience instead of building their own systems. Django Allauth is a tool for quickly setting up an authentication/registration system that works with various frameworks and authentication providers. Many big websites now allow visitors to log in using their Facebook, Google, or LinkedIn accounts. I’ll confess that most users do so because it is simple, especially now that everyone has those accounts, and they are much more trusted. Django authentication with LinkedIn This article will show you how to incorporate the Django Allauth library into your Django project and use OAuth 2.0 to offer user authentication via LinkedIn. What is OAuth 2.0, and how does it work? OAuth 2.0 is an authorization framework that allows apps to access a user’s account to authenticate or register with popular social networking sites. The end-user has control over which information the program has access to. It focuses on streamlining the development process while also offering particular authorization routines for web and desktop applications, mobile phones, and Internet of Things (IoT) devices. How does OAuth2 work? OAuth2 was initially created as a protocol to facilitate web authentication. A protocol of this kind expects some HTML rendering and the ability to handle browser redirection, which makes it different from the usual network authentication protocol. In that regard, a JSON-based application programming interface is at a disadvantage when dealing with a web authentication protocol, so a workaround is needed to get it working correctly. 
However, in this article, we concentrate on a typical server-side web application. OAuth 2 Server Flow The initial step happens entirely outside of the primary application: the project owner registers with every OAuth2 provider they need logins from. For instance, the owner registers a LinkedIn application so that LinkedIn can provide logins on the application's behalf. During the registration process, the owner provides the OAuth2 provider with a callback URI, a designated endpoint in the owner's application where requests from the provider will be received. The provider, in turn, issues both a client key and a client secret; these credentials identify the owner's application, and the provider requires them to verify login attempts. The flow begins when the application renders a page with a designated button for logging in through a specific social network, for instance a button such as "Login with LinkedIn". Such buttons are essentially links pointing to the provider's authorization URL. That URL carries a response type, a client ID, and the redirect URI, but no secret key. It also requests permission, via scopes, to access the user's email address and profile from the OAuth2 provider's server. These scopes define the permissions the user is asked to grant and what the eventual access token will be authorized to do. After clicking the link, the user's browser is redirected to a page (usually dynamic) that the respective OAuth2 provider controls. The OAuth2 provider, at this point, has the sole responsibility of checking that the client key and the callback URI match what was registered. The flow then diverges based on the session state of the given user. 
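The login link described above can be sketched in a few lines of Python. The endpoint, client ID, and callback values here are placeholders assumed for illustration (the article's own sample URL did not survive extraction), not credentials from the article:

```python
from urllib.parse import urlencode

# Hypothetical values -- substitute your app's real registration details.
AUTHORIZE_ENDPOINT = "https://www.linkedin.com/oauth/v2/authorization"
CLIENT_ID = "YOUR_CLIENT_KEY"
CALLBACK_URI = "https://example.com/accounts/linkedin_oauth2/login/callback/"

def build_login_url():
    """Build the link behind the 'Login with LinkedIn' button.

    Note: no client secret appears here. Only the public client ID,
    the redirect URI, the response type, and the requested scopes
    travel through the user's browser.
    """
    params = {
        "response_type": "code",  # ask the provider for an authorization code
        "client_id": CLIENT_ID,
        "redirect_uri": CALLBACK_URI,
        "scope": "r_basicprofile r_emailaddress",
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

print(build_login_url())
```

In the Django setup that follows, django-allauth generates this link for you; the sketch only makes the moving parts visible.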
For example, if the user is not already logged in to the given service, they will be prompted to log in. On the other hand, if the user is already logged in, they will be asked to grant the permissions that allow your program to log them in. Once the user has given the necessary permissions, the OAuth2 server forwards them back to the callback URI registered earlier, this time with a one-time authorization code included in the query parameters of the request. The authorization code expires quickly since it is meant for one-time use only. Thus, upon receipt of the authorization code, a new request is issued from your server to the OAuth2 provider. This request contains both the client secret and the authorization code. Here is an example of the request body:
POST
grant_type=authorization_code&
code=AUTH_CODE&
redirect_uri=CALLBACK_URI&
client_id=CLIENT_KEY&
client_secret=CLIENT_SECRET
The POST request shown above is authenticated using the authorization code provided. In some flows, this exchange instead happens through the user's system, and in that case the process is also riskier. The authorization code's limitations, rapid expiry and single use, reduce the risk of sending authentication credentials through an untrusted system. This call from the owner's server to the respective OAuth2 provider's server is one of the key constituents of the server-side OAuth2 login process. Controlling the call gives assurances that it is TLS-secured and resistant to wiretapping attacks. Further, we can only ascertain the user's explicit consent through the provision of the authorization code. At a further level, the client secret is provided to ensure that the whole request did not result from a computer virus or a spyware program that intercepted the authorization code on the user's system. 
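The server-to-server exchange just described can be sketched as follows. This is a minimal standard-library illustration; the token endpoint shown is an assumption on my part, and the request is only built, never sent:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint -- in practice it comes from the provider's docs.
TOKEN_ENDPOINT = "https://www.linkedin.com/oauth/v2/accessToken"

def build_token_request(auth_code, client_id, client_secret, callback_uri):
    """Build the POST that trades a one-time authorization code for an
    access token. This request runs server-to-server, and it is the only
    place the client secret appears."""
    body = urlencode({
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": callback_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("ascii")
    return Request(
        TOKEN_ENDPOINT,
        data=body,  # attaching a body makes urllib issue a POST
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_token_request("AUTH_CODE", "CLIENT_KEY", "CLIENT_SECRET",
                          "https://example.com/callback/")
print(req.get_method())
```

Again, django-allauth performs this exchange for you behind the scenes once the provider is configured.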
When everything checks out, the server returns an access token, which can then be used to make calls to the OAuth2 provider while logged in as the user. The owner's server then redirects the user's browser to the designated landing page for successfully logged-in users, and the visitor can do whatever brought them to the site. Often, the access token is stored in the user's server-side session cache, allowing the server to keep making calls to the registered OAuth2 provider when the need arises. For example, Google issues a refresh token that extends your access token's duration, while Facebook provides an endpoint for exchanging your short-lived access token for a longer-lived one. Setting up this flow for a REST API is a complicated process. Even though you can have the backend provide a callback URL while the frontend serves the login page, problems will mar the process: you may receive an access token and want to send the visitor to the landing page, but there is no definite RESTful way to achieve this. What we will cover: - Setup a Django project - Authentication with LinkedIn - Create a LinkedIn App - Update Django Settings.py - Common Errors Setting up a Django project
Step 1: mkdir django-linkedin-auth && cd django-linkedin-auth
Step 2: virtualenv linkedin_env
Step 3: source linkedin_env/bin/activate
Step 4: pip install Django==3.2.6
Step 5: django-admin startproject LinkedInLogin_app
Step 6: python manage.py migrate
Step 7: python manage.py runserver
Authentication Keys for LinkedIn We need certain app-specific credentials to distinguish our app's login from other social logins on the web, so that LinkedIn can recognize the social login we deployed. To use social service providers like LinkedIn for authentication, you'll need a Client ID and a Secret Key, which you can obtain by creating a LinkedIn app. 
First, go to the LinkedIn developer portal and create an app, as shown below. If you already have a LinkedIn page, you can add it, or you can start a new page by clicking on the + before "Create a new LinkedIn Page". Here, we will start a new LinkedIn page by picking the small company page option. After you've created a page, you may add it; after that, you'll need a logo, and your app will be complete. In the Auth tab, note your Key and Secret Key, then in the Products tab, click Sign in with LinkedIn. Add a Redirect URL in the Auth tab as follows. Update Django Settings.py First, we will add the social provider to the installed apps.
INSTALLED_APPS = [
….
# social account providers
'allauth.socialaccount.providers.linkedin_oauth2',
]
In addition, we will add LinkedIn as a social service provider in settings.py as shown below.
# Linkedin Authentication Setting
SOCIALACCOUNT_PROVIDERS = {
'linkedin': {
'SCOPE': [
'r_basicprofile',
'r_emailaddress'
],
'PROFILE_FIELDS': [
'id',
'first-name',
'last-name',
'email-address',
'picture-url',
'public-profile-url',
]
}
}
SCOPE refers to the rights you must request; r_basicprofile is necessary to read the basic profile, and r_emailaddress is required to read the user's email address. PROFILE_FIELDS lists the information returned after a successful login. Now we need to go into the Django admin and add our Key and Secret Key. Creating superuser credentials For our Django project, we'll start by creating a superuser. To create a superuser for our LinkedInLogin_app application, we will run the following commands and pass in the required details, including the username, email address, a password, and a confirmation of the same.
python manage.py migrate --run-syncdb
python manage.py createsuperuser
Django Allauth With Django Allauth, you can create user signup, login, and logout, as well as other features like email change, lost password, and so on, in just 10 minutes. 
Installing Django Allauth
pip install django-allauth
We will then modify INSTALLED_APPS in settings.py by adding the allauth apps. If we do not do this, Django Allauth will not work properly with our LinkedInLogin_app. So, we will add the following under LinkedInLogin_app. The new changes should appear as follows.
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# custom apps
'LinkedInLogin_app',
# allauth
'django.contrib.sites',
'allauth',
'allauth.account',
'allauth.socialaccount',
# linkedin
'allauth.socialaccount.providers.linkedin_oauth2',
]
SITE_ID = 1
# allauth login settings
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
AUTHENTICATION_BACKENDS = (
"allauth.account.auth_backends.AuthenticationBackend",
)
That will include Django Allauth. In addition, we need to apply the migrations associated with Django Allauth by running the following command.
python manage.py migrate
Ensure not to forget the above step, because Django Allauth needs the new tables. Update Urls We will also add the allauth paths in LinkedInLogin_app/urls.py as follows.
# LinkedInLogin_app/urls.py
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('accounts/', include('allauth.urls')),
]
Subsequently, we will create and apply migrations by running these commands.
python manage.py makemigrations
python manage.py migrate
Finally, we will start the server with the following command.
python manage.py runserver
Then, we will access the accounts URL, which shows all the features that are now available to you. Also note the allauth signup, login, and logout URLs, which are vital in your operations. 
admin dashboard So, we will need to access the admin panel via the admin link and log in with the same credentials you used to create a superuser. As a result, you should have a page similar to this after you log in successfully. add a new site We will start by adding a site to our Django application under the SITES category. Select Add next to Sites. Set the Domain name to 127.0.0.1 and any display name (LinkedIn) for this domain, because we are operating it on our local system. Also, look for the site's ID in the URL of this page; in our case it was 2, therefore we set SITE_ID = 2 in the settings.py file. LinkedIn must now be added to the admin's Social Applications. Fill in the details as given below, and select 127.0.0.1 from the list of sites. First, click on "Add Social Application" and fill in the empty fields under the social applications. - Select LinkedIn as the Provider - Add a customized name - Add the Client ID and Client secret you noted in the previous section - Then, add 127.0.0.1 as one of the selected sites After filling in all the sections, it should appear as in the diagram below. Now, ensure that everything is OK by running makemigrations and migrate as follows, since that's all we need.
python manage.py makemigrations
python manage.py migrate
Then, we will run the server using python manage.py runserver and visit the login URL. Typical Errors - Unauthorized scope r_liteprofile - No Redirect URL / Mismatch of Redirect URL If you receive a No Redirect URL or Mismatch of Redirect URL error, click on LinkedIn in the signup process and copy the URL. Then use an online decoder to decode and check your redirect URL to see what redirect URL LinkedIn is using. 
If you’re getting an Unauthorized scope r_liteprofile problem, you likely neglected to pick Sign in with LinkedIn on the Products page of your App dashboard. Next, we will update the templates.
<!-- templates/base.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<title>Django LinkedIn Login</title>
</head>
<body>
{% block content %}
{% endblock content %}
</body>
</html>
<!-- templates/home.html -->
{% extends 'base.html' %}
{% load socialaccount %}
{% block content %}
<div class="container" style="text-align: center; padding-top: 10%;">
<h1>Django LinkedIn Authentication</h1>
{% if not user.is_authenticated %}
<!-- LinkedIn button code starts here -->
<a href="{% provider_login_url 'linkedin_oauth2' %}" class="btn btn-secondary">
<i class="fa fa-linkedin fa-fw"></i>
<span>Login with LinkedIn</span>
</a>
<!-- LinkedIn button code ends here -->
{% endif %}
</div>
{% endblock content %}
# LinkedInLogin_app/views.py
from django.views.generic import TemplateView
class Home(TemplateView):
    template_name = "home.html"
We will then create a new URL so that the final look of the urls.py is as shown below. Now, we are ready to run and log in via LinkedIn. Login with LinkedIn We first need to access the homepage for our Django authentication application. If everything is OK, you should be able to see a page similar to this. After clicking on the button “Login with LinkedIn”, you will be redirected to LinkedIn’s Login page with a clear indication that you need to sign in to LinkedIn to proceed to LinkedInLogin_app, our Django application. When successfully signed in, you should see a page similar to this. 
Complete Source Code for reference
# LinkedInLogin_app/settings.py
"""
Django settings for the LinkedInLogin_app project.
"""
SECRET_KEY = '-%@m@&@70-9=4(^yl4zckk=l_a)a7hr1c^0*!x-6tddjs1w#4'
INSTALLED_APPS = [
'LinkedInLogin_app',
# allauth
'django.contrib.sites',
'allauth',
'allauth.account',
'allauth.socialaccount',
# linkedin
'allauth.socialaccount.providers.linkedin_oauth2',
]
SITE_ID = 2
# allauth login settings
LOGIN_URL = '/accounts/login'
LOGIN_REDIRECT_URL = '/home/'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
ACCOUNT_EMAIL_VERIFICATION = "none"
ACCOUNT_LOGOUT_ON_GET = True
AUTHENTICATION_BACKENDS = (
"allauth.account.auth_backends.AuthenticationBackend",
)
# LinkedInLogin_app/views.py
from django.views.generic import TemplateView
class Home(TemplateView):
    template_name = "home.html"
<!-- # LinkedInLogin_app/home.html -->
<!-- LinkedIn button code starts here -->
<a href="{% provider_login_url 'linkedin_oauth2' %}" class="btn btn-secondary">
<i class="fa fa-linkedin fa-fw"></i>
<span>Login with LinkedIn</span>
</a>
<!-- LinkedIn button code ends here -->
{% endif %}
</div>
{% endblock content %}
<!-- # LinkedInLogin_app/base.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link href=" rel="stylesheet" />
<link rel="stylesheet" href=" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Django LinkedIn Authentication</title>
</head>
<body>
{% block content %}
{% endblock content %}
</body>
</html>
Conclusion Most social account authentication providers function the same way: you'll need to build an app or project on their developer page to obtain a client key and client secret key, then update settings.py and add them to the admin's social applications. Django Allauth is extremely easy to incorporate into your Django project. It includes user registration, login, logout, email update, forgotten password, and many other features already developed and ready to use. Django Allauth also validates data. 
The most popular and trusted social accounts are Facebook, GitHub, LinkedIn, and Google, but if you wish to add more, click here for a list of all supported social accounts.
https://www.codeunderscored.com/django-authentication-with-linkedin/
################################################## Revision history for Log::Log4perl ################################################## 1.09 (2007/02/07) * (ms) Added $^S check to FAQ, as suggested by J. David Blackstone. * (ms) Applied Robert Jacobson's patch for the "DDD" formatter in L4p::DateFormats, which now formats the day-of-year values numerically and precedes them with zeroes if necessary. * (ms) Added %M{x} PatternLayout notation as requested by Ankur Gupta. * (ms) Another Win32 test suite fix, no longer deleting an open file but moving it aside (rt.cpan:23520). 1.08 2006/11/18 * (ms) Applied test suite patch by Lars Thegler for ancient perl 5.005_03. * (ms) Applied patch by Jeremy Bopp to fix test suite running under Cygwin. * (ms) Fixed documentation bug in L4p:Appender::File, s/recreate_signal/recreate_check_signal. Thanks to Todd Chapman and Robert Jacobson for reporting this. * (ms) Fixed init(), which now deletes any config file watchers left over from previous init_and_watch() calls. Reported by Andreas Koenig who saw sporadic errors in the test suite, thanks! 1.07 2006/10/11 * (ms) Removed checks for unlink() in t/017Watch.t since they failed on win32. * (ms) Fixed doc bug in Appender::File reported by Robert Jacobson. * (ms) Added FAQ on why to use Log4perl and not another logging system on CPAN. * (ms) Fixed %M, %L, etc. level in logcarp/cluck/croak/confess (thanks to Ateeq Altaf) * (ms) Autocorrecting rootlogger/rootLogger typo * (ms) Better warning on missing loggers in config sanity check 1.06 2006/07/18 * (ms) Applied patch by Robert Jacobson to fix day-of-year in DateFormat, which was off by one. * (ms) Added FAQ on syslog * (ms) umask values for the file appender are now also accepted in octal form (0xxx). * (ms) The file appender now accepts owner/group settings of newly created log files. 
* (ms) Fixed appender cleanup, a bug caused composite appenders to be cleaned up during global destruction, which caused an ugly segfault with the Synchronized appender on FreeBSD. 1.05 2006/06/10 * (ms) Added recreate signal handler to L4p::Appender::File for newsyslog support. Two new FAQ entries on dealing with newsyslog and log files being removed by external apps. * (ms) L4p::Config::Watch no longer sets the global $SIGNAL_CAUGHT by default but uses an instance variable instead to prevent clobbering L4p's config and watch mechanism. * (ms) die() on undefined configuration (rt 18103 by justice8@wanadoo.fr) * (ms) Hugh Esco submitted a FAQ on where to put logfiles * (ms) Applied patch provided by Chia-liang Kao to suppress an error message and skip tests in the suite when DBI is missing. 1.04 2006/02/26 * (ms) Duplicate log4perl directives, which previously just overwrote existing ones, are no longer permitted and cause the config parser to throw an error. * (ms) If a conversion pattern was specified twice in a config file, the output was "ARRAY(0x804da00)" (bug reported by Bill Mason). Now, gobbling up property configurator values into an array is limited to appender properties and excludes the conversion pattern. * (ms) Multiple calls to import (usually happens if 'use L4p' gets called twice within the same namespace) caused nasty warnings, bug reported by Greg Olszewski. Fixed by ignoring subsequent calls from the same package to import(). * (ms) Changed rendering of logdie/warn/cluck/croak/... messages to fix a bug reported by Martin J. Evans. * (ms) Added a L4p::Appender::String appender to handle the rendering internally. * (ms) Documentation patch by Matisse Enzer on increased/ decreased log levels. * (ms) Fixed stack trace level of logcarp() * (ms) Carl Franks reported that the test suite failed on WinXP SP2 because of a hardcoded /tmp - fixed by File::Spec->tempdir(). * (ms) Added reconnect_attempts and reconnect_sleep parameters to DBI appender. 
* (ms) Bugfix for rt.cpan.org #17886 (tmp files in test suite) 1.03 (2006/01/30) * (ms) Some perl-5.6.1 installations have a buggy Carp.pm. Skipping 4 test cases for these. Reported by Andy Ford and Matisse Enzer. * (ms) The DBI appender now reconnects on stale DB connections. * (ms) Fixed Win32 test bug as reported by barbie. Instead of deleting a file still in use by an appender (which Windows doesn't like), the file now gets truncated. 1.02 (2005/12/10) * (ms) Adapted t/006Config-Java.t to cope with Win32 path separators * (ms) Corrected typo in Chainsaw FAQ, reported by Bernd Dirksen. * (ms) Brian Edwards noticed that (Screen, File) were missing a base class declaration, causing $logger->add_appender() to fail. Fixed with test case. * (ms) Log::Log4perl::Appender::File now handles the case where the logfile suddenly disappears. * (ms) Fixed section indentation in main man page * (ms) Converted Ceki's last name to UTF-8 (a historic step!) 1.01 (09/29/2005) * (ms) Added 'utf8' and 'binmode' flags to Log::Log4perl::Appender::File per suggestion by Jonathan Warden. * (ms) Made test cases 003Layout.t and 033UsrCspec.t resilient against broken ActiveState 5.8.4 and 5.8.7. * (ms) Skipped failing test cases for 5.005, looks like the caller() level in carp() is wrong, but not worth fixing. * (ms) Fixed the bug with the caller level of the first log message sent after init_and_watch() detected a change. Added test case to 027Watch2.t. * (ms) Added FAQ on UTF-8. * (ms) Applied patch by David Britton, improving performance during the init() call. * (ms) Fixed bug to prevent it from modifying $_. Thanks to Steffen Winkler. 1.00 (08/13/2005) * (ms) Added tag qw(:no_extra_logdie_message) to suppress duplicate die() messages in scripts using simple configurations and LOGDIE(). Added logexit() as an alternative way. * (ms) Fixed bug with logcarp/croak/cluck, which were using the wrong Carp level. 
* (kg) Fixing bug in Appender::Limit regarding $_ scope * (ms) corrected typo in Synchronized.pm found by Rob Redmon. * (ms) Fixed bug with Appender::File reported by Michael Smith. Checking now if print() succeeds, catching errors with full disks and ulimit'ed environments. * (ms) Added LOGCARP(), LOGCLUCK(), LOGCONFESS(), LOGCROAK() macros in :easy mode (suggested by Jud Dagnall). * (ms) $INITIALIZED now gets reset during logger cleanup. 0.52 (05/08/2005) * (ms) Jonathan Manning <jmanning@alisa-jon.net> provided a patch for DateFormat.pm to fix 3-letter month abbreviations and a shortcut to simulate Apache's log format. * (kg) Ola Finsbraaten provided a patch to provide a better error message when a logger is defined twice in a config. 0.51 (01/08/2005) * (ms) Jon Bjornstad noticed that the file appender wasn't including $! in the die() exception thrown if open_file() fails. Added it. * (ms) Added umask option to file appender * (ms) Fix to L4p::Util::module::available() for Win32 compliance by Roger Yager <roger.yager@eyestreet.com> * (ms) Added check to L4p::Util::module_available() returning true if the pm file is available in %INC, indicating that it has already been loaded. This fixes a problem when running L4p in a PAR binary. * (ms) Added remove_appender() and eradicate_appender() method to Logger.pm, test cases and documentation on the main Log4perl page. * (ms) Added a generic buffered composite appender, L4p::Appender::Buffer, buffering messages until a trigger condition is met. 0.50 (12/08/2004) * (ms) Added ':resurrect' source filter, which uncomments all lines starting with "###l4p". Can be used for hidden L4p statements, which are then activated by calling 'use Log::Log4perl qw(:resurrect)'. * (ms) Fixed Win32 test suite bug: File::Spec->catfile() returns '/' as a path separator on both Unix and Win32, while Log4perl's layouts (derived from caller() info) use '\' on Win32 and '/' on Unix. Changed tests to only verify file name, not path. 
* (ms) Added 'appender_by_name()' to retrieve an appender defined in the configuration file by name later. * (ms) Added FAQ on "stubbing out" L4p macros in environments that don't have L4p installed. * (ms) Added convenience function appender_thresholds_adjust() to adjust thresholds of chosen (or all) appenders * (ms) Got rid of Test::Simple dependency * (ms) Moved autoflush setting in L4p::Appender::File from log() to file_open(), running only once, not with every message. * (ms) Applied doc fixes suggested by Jon Bjornstad. * (ms) Added ScreenANSIColor appender to colorize messages based on their priority. See Log::Log4perl::Appender::ScreenANSIColor. 0.49 (11/07/2004) * (ms) init_and_watch() no longer die()s on reloading syntactically wrong configuration files but issues a warning and then reloads the last working config. * (ms) init() now also accepts an open file handle (passed in as a glob) to a configuration file or a ref to an IO::File object. * (ms) Jos I. Boumans <kane@xs4all.net> and Chris Winters <chris@cwinters.com> reported an error thrown by L4p in their app SPOPS: During global construction. Looks like the Logger object's internal hash is cleared and then the is_<level> method gets called, resulting in a runtime exception. Added proposed remedy checking if the called method is defined by ref. * (ms) Added check to init_and_watch if obtaining the mod timestamp failed. 0.48 (08/20/2004) * (ms) fixed bug reported by Chip Salzenberg <chip@pobox.com>: logdie() and logwarn() are now compliant with the warn() and die() standard which suppresses the "at file line x" message if the message ends with a "\n". * (ms) New interface for custom config parsers. Log::Log4perl::Config::BaseConfigurator now provides a base class for new config parsers. Init can now be called like Log::Log4perl->init($parser) with a parser object, which is derived from Log::Log4perl::Config::BaseConfigurator and provides a parse() method (no arguments). 
The file (or whatever) to be parsed can be set by calling $parser->text(\@lines) or $parser->file($name) before calling L4p->init($parser). The Property, DOM and LDAP configurators have been adapted, check their implementation for details. * (ms) Added integrity check for Log4perl configurations: Log4perl now issues a warning if a configuration doesn't define any appenders. Should anyone not like this, it can be turned off by setting $L4p::Config::CONFIG_INTEGRITY_CHECK = 0 before calling init(). * (ms) Fixed bug reported by Johannes Kilian <jok@vitronic.com> with __DIE__ handler and "PatternLayout" shortcut. Replaced 'eval { require ... }' by L4p::Util::module_available in L4p::Config.pm. * (ms) Did away with $IS_LOADED internal variable. * (ms) Fixed bug with L4p::INITIALIZED vs. L4P::Logger::INITIALIZED, added t/020Easy2.t. * (ms) Added adm/cvskwexp script to check if we're running into CVS trouble because of <dollar>Log keyword expansion. 0.47 (07/11/2004) * (ms) Added suggestion by Hutton Davidson <Davidson.Hutton@ftid.com> to make the socket appender more forgiving. New option "silent_recovery" will silently ignore errors and recover if possible on initially dead socket connections. * (ms) Fixed bug with initialized() -- checking once caused subsequent calls to return true. * (ms) run t/045Composite.t only if Storable is installed -- earlier perl versions (like 5.6.1) don't have it by default. * (ms) fixed test case in t/020Easy.t for buggy perl 5.6.1 * (ms) added Log::Log4perl::infiltrate_lwp() to make LWP::UserAgent play in the L4p framework upon request. * (ms) perl 5.00503 mysteriously core dumps in t/017Watch.t, seems like this was introduced in 0.46. Disabled these tests for now if we're on 5.00503 to avoid installation hiccups. Longer term, need to investigate. 0.46 (06/13/2004) * (ms) removed superfluous eval() in Log4perl.pm, reported anonymously on the CPAN bugtracker.
* (ms) Added a cleanup() function to Logger.pm which is used by an END {} block in Logger.pm to tear down all Loggers/Appenders before global destruction kicks in. In addition, Kevin found that the eval "" is the cause of an Appender memleak. Moved the variable assignment out of the eval to plug the leak. Added $Log::Log4perl::CHATTY_DESTROY_METHODS, which shows what L4p objects are destroyed and when. * (ms) Kevin's idea is in now, on localizing $? in the L4p global END {} block. It prevents logdie() et al. from exiting with unwanted exit codes when global cleanup / global destruction modifies $?, as seen by Tim with the Email appender. * (ms) Dave Viner <dviner@yahoo-inc.com> added isLevelEnabled() methods as aliases to is_level(). 0.45 (05/23/2004) * (ms) fix for t/045Composite.t on perl 5.6.1 by Jeff Macdonald <jeff.macdonald@e-dialog.com> (specify number of test cases, getting rid of no_plan). * (ms) Dennis Gregorovic <dgregor@redhat.com> provided a patch to protect applications that are tinkering with $/. It is set to "\n" now locally when L4p is reading the conf file. Added a test case to t/004Config.t. * (ms) Fixed a documentation error with initialized(), pointed out by Victor Felix <vfelix@tigr.org>. 0.44 (04/25/2004) * (ms) added filename() method to L4P::Appender::File as suggested by Lee Carmichael <lecar_red@yahoo.com> * (ms) added RRDs appender Log::Log4perl::Appender::RRDs and test cases * (ms) fixed Log::Log4perl::Appender to check if an appender package has already been loaded and skip 'require' in this case. Packages injected via Class::Prototyped caused an error with this. * (ms) Extended the FAQ's "How can I write my own appender?" on how to dynamically create new appenders via Class::Prototyped. 0.43 (03/22/2004) * (ms) Applied patch by Markus Peter <warp@spin.de> for 'pipe' mode in Log::Log4perl::Appender::File * (ms) Added composite appender Log::Log4perl::Appender::Limit to limit message delivery to adjustable time windows.
* (ms) Fixed last 033UsrCspec.t test case to run on Win32 as well (path fixed). * (ms) Lars Thegler <lars@thegler.dk> provided a patch to keep compatibility with 5.005_03. * (ms) Added a patch to avoid warnings on undefined MDC values referenced via %X in PatternLayout. Now, the string "[undef]" is used. Bug was reported by Ritu Kohli <Ritu.Kohli@ubs.com> 0.42 (02/14/2004) * (kg) added filters to XML DOMConfig and DTD * (ms) Fixed caller level to cspecs by adding one * (ms) Added init_once() and documentation * (ms) Worked around the perl bug that triggers __DIE__ handlers even if die() occurs within an eval(). So if you did BEGIN { $SIG{__DIE__} = sub { print "ouch!"; die }; } use Log::Log4perl; and Time::HiRes wasn't available, the eval { require Time::HiRes } in PatternLayout.pm triggered the __DIE__ handler. Now there's a function module_available() in L4p::Util to check if a module is installed. * (ms) Fixed %M cspec in PatternLayout in case a logging method is called within one (or more) eval {} block(s). caller(n+m) will be called repeatedly if necessary to get the next real subroutine. Anonymous subroutines will still be called __ANON__, but this can be overridden by defining local *__ANON__ = "subroutine_name"; in them explicitly (thanks, Perlmonks :). 0.41 (12/12/2003) * (ms) Applied documentation update for Synchronized appender, suggested by David Viner <dviner@yahoo-inc.com> * (ms) Added option to Log::Log4perl::Layout::PatternLayout to enable people to provide their own timer functions.
* (ms) Added file_open(), file_close() and file_switch() to l4p::Appender::File 0.39 (10/23/2003) * (kg) fixed bug in interaction between Logger::Level and Level::is_valid so that now you can do $logger->level('INFO') instead of just $INFO. * (ms) Added logic for 'composite appenders'. Appenders can now be configured to relay messages to other appenders. Added Log::Log4perl::Appender::Synchronized, an appender guaranteeing atomic logging of messages via semaphores. * (ms) Added basic substitution to PropertyConfigurator. Now you can define variables (like in "name=value") and subsequent patterns of "${name}" will be replaced by "value" in the configuration file. * (kg) Followed Mike's lead and added variable substitution to the DOMConfigurator. * (ms) Added Log::Log4perl::Appender::Socket as a simple Socket appender featuring connection recovery. 0.38 (09/29/2003) * (kg) fixed bug where custom_levels beneath DEBUG didn't work * (ms) fixed 5.00305 incompatibility reported by Brett Rann <brettrann@mail.com> (constants with leading _). * (ms) Log::Log4perl->easy_init() now calls ->reset() first to make sure it's not duplicating the existing logging environment. Thanks to William McKee <william@knowmad.com> for bringing this up. * (ms) fixed bug with error_die() - printed the wrong function/line/file. Reported by Brett Rann <brettrann@mail.com>. * (ms) added %T to PatternLayout as a stack trace, as suggested by Brett Rann <brettrann@mail.com>. 0.37 (09/14/2003) * (kg) adjusting tests for XML::Parser 2.32 having broken XML::DOM 1.42 and lower * (ms) Added signal handling to init_and_watch * (ms) renamed l4p-internal DEBUG constant to avoid confusion with DEBUG() and $DEBUG as suggested by Jim Cromie <jcromie@divsol.com>. * (ms) Applied patch by Mac Yang <mac@proofpoint.com> for Log::Log4perl::DateFormat to calculate the timezone for the 'Z' conversion specifier.
0.36 (07/22/2003) * (ms) Matthew Keene <mkeene@netspace.net.au> suggested to have an accessor for all appenders currently defined -- added appenders() method * (ms) Test case 041SafeEval.t didn't share $0 explicitly and created some warnings, fixed that with (jf)'s help. * (ms) Added performance improvements suggested by Kyle R. Burton <mortis@voicenet.com>. is_debug/is_info/etc. are now precompiled, similar to the debug/info/etc. methods. * (ms) Added a fix to have is_debug()/is_info()/etc. pay attention to on-the-fly config file changes via init_and_watch(). * (ms) Fixed bug that reloaded the config under init_and_watch() every time the check period expired, regardless of whether the config file itself had changed. Added test case. 0.35 06/21/2003 * (kg) got rid of warnings during make test in 014ConfErrs.t * added user-defined hooks to JavaMap * Jim Cromie <jcromie@divsol.com> provided a patch to get rid of deprecated our-if syntax in Level.pm * (ms) removed test case for RollingFileAppender because of recent instability. Added dependency for Log::Dispatch::RollingFile 1.10 in Log/Log4perl/JavaMap/RollingFileAppender.pm. 0.34 06/08/2003 * (ms) James FitzGibbon <james.fitzgibbon@target.com> noticed a major bug in Log::Log4perl::Appender::File and provided a patch. Problem was that 0.33 was reusing the same file handle for every opened file, causing all messages to end up in the same file. 0.33 05/30/2003 * (kg) CPAN rt#2636, coordinating XML::DOM version required across modules and unit tests * (ms) Removed Log::Dispatch dependency, added standard Log::Log4perl::Appender appenders File and Screen. Log::Dispatch is still supported for backwards compatibility and special purpose appenders implemented within this hierarchy. 0.32 05/17/2003 * (ms) Added fix to Makefile.PL to compensate for MakeMaker bug in perl < 5.8.0, causing man pages below Log::Log4perl::Config not to be installed. Thanks to Mathieu Arnold <mat@mat.cc> for bringing this up.
* (ms) 0.31 had a Win32 test suite glitch, replaced getpwuid() (not implemented) by stat() for Safe test. 0.31 05/08/2003 * (kg) fixed bug Appender::DBI where it was consuming the message array before other appenders could get to it * (ms) changed config_and_watch to ignore clock differences between system time and file system time (helpful with skewed NFS systems). Added Log::Log4perl::Config::Watch. * James FitzGibbon <james.fitzgibbon@target.com>: Added support for optionally restricting eval'd code to Safe compartments. * (ms) allow/deny code in configuration files should now be controlled via the accessor Log::Log4perl::Config->allow_code(0/1). $Log::Log4perl::ALLOW_CODE_IN_CONFIG_FILE is still supported for backwards compatibility. 0.30 03/14/2003 * (ms) Added Log4perl custom filter logic and standard filter set * (kg) Added url support to init(), finally documenting it * (kg) Finished implementation of DOMConfigurator allowing xml configs. * (ms) Corrected DateFormat inconsistencies as reported by Roger Perttu <roger.perttu@easit.se> 0.29 01/30/2003 * (kg) Removing debugging from 0.28, big woops * (kg) Fixing 036JSyslog.t, Syslog implementations are too often broken to base any results on. * (kg) Fixing XML-DOM tests, Data::Dumper doesn't return data exactly the same way. 0.28 (01/28/2003) * (ms) '#' in the conf file are now interpreted as comment starters only if they're at the start of a line with optional whitespace. The previous setting (comments starting anywhere) had problems with code containing '#''s, like in layout.cref = sub { $#_ = 1 } * (ms) warp_message accepts code refs or function names * (kg) Split config bits into PropertyConfigurator and implemented DOMConfigurator for XML configs. 
* (kg) Adding appender.warp_message parameter as a help to DBI appender * (kg) Added NoopLayout to help DBI appender * (ms) Added message output filters: log({filter => \&filter, value => $value}) * (kg) t/024WarnDieCarp was assuming / as directory separator, failed on Win32 * (kg) implemented JavaMaps for NTEventLogAppender, SyslogAppender * (kg) found and addressed circular ref problem in Logger->reset * (kg) moved TestBuffer under Appender/ directory along with DBI * (kg) fixed docs, Pattern layout, %f not supported, s/b %F * (kg) added Log::Log4perl::Appender::DBI to implement JDBCAppender * (ms) Every value in the config file can now be a perl function, dynamically replaced by its return value at configuration parse time * (ms) NDC now prints entire stack, not just top element (as mandated by Log4j) * (ms) Allow trailing spaces after a line-breaking '\' in the config file to be fault-tolerant on cut-and-pasted code 0.27 12/06/2002 * (ms) Updated FAQ with "Recipes of the Week" * (ms) Added Log::Log4perl::NDC (Nested Diagnostic Contexts) and Log::Log4perl::MDC (Mapped Diagnostic Contexts) * (ms) LOGDIE and LOGWARN added to stealth loggers * (ms) Logging methods ($lo->debug(), $lo->info() ...) now return a value, indicating the number of appenders that the message was propagated to. If the message was suppressed due to level constraints, undef is returned. Updated manpage (new section "return values"). * (ms) Fixed bug reported by Francisco Olarte Sanz. 
<folarte@peoplecall.com>: ISO date format and documentation mixed up MM with mm in the simple date format * (kg) User-defined conversion specifiers for PatternLayout in configuration file and as C API * (kg) implementing map to log4j.RollingFileAppender * (kg) trying out oneMessagePerAppender parameter * (kg) changed unit tests to use File::Spec 0.26 11/11/2002 * (kg) enabled %l (was missing from PatternLayout::define) * (kg) got rid of "Use of uninitialized value in join or string" message from $logger->debug(@array) when some of @array are undef * (ms) Stealth loggers and documentation * (kg) Better error message for case reported by Hai Wu * (ms) Added Log/Log4perl/FAQ.pm, which the homepage links to * (ms) Took dependency on Test::More and Test::Simple out of the PPD file because of a problem with Activestate 5.6.1 reported by James Hahn <jrh3@att.com> * (ms) Added Log::Dispatch equivalent levels to the Log4perl loggers, which are now passed on to the Log::Dispatch appenders according to the priority of the message instead of the default "DEBUG" setting * (ms) Added %P process ID to PatternLayout as suggested by Paul Harrington <Paul-Harrington@deshaw.com>.
Also added %H as hostname * (kg) Added %min.max formatter to PatternLayout * (ms) Updated docs for Log::Log4perl::DateFormat 0.25 10/06/2002 * (ms) backwards-compatibility with perl 5.00503 * (ms) added system-wide threshold, fixed java-app thresholds * (kg) Nested configuration structures for appenders like L::D::Jabber * (ms) ::Log4perl::Appender::threshold() accepts strings or integer levels (as submitted by Aaron Straup Cope <asc@vineyard.net>) * (ms) Fixed logdie/logwarn caller(x) offset bug reported by Brian Duffy <Brian.Duffy@DFA.STATE.NY.US> * (ms) dies now on PatternLayout without ConversionPattern (helps detecting typos in conf files) 0.24 09/26/2002 * (kg) Fix for init_and_watch and test cases * (ms) Added documentation for Log::Log4perl::Config * (ms) Added log4perl.additivity.loggerName conf file syntax * (ms) Assume Log::Log4perl::Layout prefix of 'relative' layout class names in conf file (say 'SimpleLayout' instead of 'Log::Log4perl::Layout::SimpleLayout'). * (ms) accidentally appending a ';' at the end of an appender class in a conf file now spits out a reasonable error message * (ms) added a by_name() method to TestBuffer to retrieve an instance of the TestBuffer population by name instead of relying on the order of creation via POPULATION[x] (for testing only). * (kg) Win32 compatibility fixes 0.23 09/14/2002 * Both Log4perl/log4perl are now accepted in conf file * Added documentation to Log::Log4perl::Appender * Made Time::HiRes optional. If it's missing, PatternLayout will just use full seconds as %r. * SimpleDateFormat "%d{HH:SS}", including predefined formats (DATE etc.)
* Added another cut-and-paste example to the docs (EXAMPLE) * Added new logdie/logwarn/error_warn/error_die/logcarp/logcluck/logcroak/logconfess functions written by Erik Selberg <erik@selberg.com> * Added PatternLayout documentation * Changed suppression of duplicate newline in log message algorithm * Custom levels and inc_level/dec_level/more_logging/less_logging added by Erik Selberg <erik@selberg.com> * Append to logfile by default if Log::Dispatch::File is used (previously clobbered by default) * Kevin's init_and_watch fix 0.22 8/17/2002 * Threshold settings of appenders: $appender->threshold($ERROR); log4j.appender.A.Threshold = ERROR * Chris R. Donnelly <cdonnelly@digitalmotorworks.com> submitted two patches: - extended init() to take obj references (added, also added a test case and documentation) - fixed %F and %L if Log4perl is used by a wrapper class (accepted, but changed variable name to Log::Log4perl::caller_depth as a tribute to Log::Dispatch::Config; added test case 022Wrap and documentation) 0.21 8/08/2002 * Synopsis shows code samples in Log4perl.pm/README * Slight Log4j incompatibility but useful: %F{n} lets you limit the number of path entries of the source file that are logged * Erik W. Selberg (erik@selberg.com) suggested having PatternLayout.pm suppress another \n if the message already contains a \n and the format requires a %n. Done. * Erik W. Selberg (erik@selberg.com) suggested loggers should take any number of messages and concatenate them. Done. * Fixed double-init problem and added a test case. Now the entire configuration is cleared before the second init(). However, this surfaced a problem with init_and_watch: If a program obtains references to one or more loggers, rewriting the configuration file during program execution and re-initing makes these references point to loggers which hold obsolete configurations. Fixed that by code in debug(), info(), etc.
which *replaces* (shudder) the logger reference the program hands in to them with a new one of the same category. This happens every time if 'init_and_watch' has been enabled. However, this introduces a small runtime penalty. This is different from the original log4j, which does some half-assed re-initialization, because Java isn't expressive enough to allow for it. Making this thread-safe might be tough, though. * Added DEBUG statements to Logger.pm and Config.pm to trace execution (debugging won't work because of "eval"s). Both files define a constant named DEBUG towards the top of the file, which will have perl optimize away the debug statements in case it's set to 0. * A warning is issued now (once) if init() hasn't been called or no appenders have been defined. * Added ':levels' target to Log::Log4perl to import $DEBUG, $ERROR, etc. levels (just like 'use Log::Log4perl::Level' works). * Added ':easy' target to allow for simple setup * Code references can be passed in as log messages to avoid parameter passing penalty 0.20 7/23/2002 * Strip trailing spaces in config file * Accept line continuations in properties file * Refactored Logger.pm for speed, defined the logging behavior when the logger is created, not when a message is logged * Fixing test suites so that SimpleFormat newline is accounted for * Fixed a bug with root inheritance where the category name wasn't coming through * added init_and_watch 0.19 07/16/2002 * Added Log::Log4perl::Appender::TestBuffer back in the distribution, otherwise regression test suite would fail. 0.18 07/16/2002 * Failed attempt to fix the Log::Dispatch::Buffer problem. 
0.17 07/11/2002 * Updated documentation according to Dave Rolsky's suggestions * Lots of other documentation fixes * Fixed bug in renderer, %M was displayed as the logger function bumped up the level by 1 * Fixed %% bug 0.16 07/10/2002 * Updated documentation for CPAN release * Applied Kevin's patch to limit it to one Log::Dispatcher 0.15 07/10/2002 * There were name conflicts in Log::Dispatch, because we used *one* Log::Dispatch object for *all* loggers in the Log::Log4perl universe (it still worked because we were using log_to() for Log::Dispatch to send messages to specific appenders only). Now every logger has its own Log::Dispatch object. Logger.pm doesn't call Kevin's anti-dupe logic anymore -- is this ok? Maybe there are some leftovers which need to be cleaned up. * Kevin fixed t/014ConfErrs.t after last night's Appender.pm change 0.14 07/09/2002 * (!) Added new class Log::Log4perl::Appender as a wrapper around Log::Dispatch::*. Layouts are no longer attached to the loggers, but to the appenders instead. $app->layout($layout) sets the layout. $logger->add_appender($app) is the new syntax to add an appender to a logger. The $logger->layout method is gone for that reason. * Added documentation on categories * Added documentation on Log::Log4perl::Appender, Log::Log4perl::Layout::SimpleLayout, Log::Log4perl::Layout::PatternLayout.
0.13 07/09/2002 * in the config files, 'debug' is not a level, 'DEBUG' is * expanded the layouts so that we can add subclasses, added SimpleLayout, note that api usage changes -$logger->layout('buf',"The message is here: %m"); +$logger->layout(new Log::Log4perl::Layout::PatternLayout('buf',"The message is here: %m")); * did benchmarks, see doc/benchmark*, t/013Bench.t * further tweaked errors for bad configuration, added a test for those 0.12 07/08/2002 * Log::Log4perl::Logger->get_logger now accessible via Log::Log4perl->get_logger() * Log::Log4perl::Config->init now accessible via Log::Log4perl->init() * Adapted test cases to new shortcuts * Constrained some files to 80 chars width * Added test case t/009Deuce.t for two appenders in one category via the config file * Changed default layout in case there's none defined (SimpleLayout) * Implemented dictatory date format for %d: yyyy/MM/dd hh:mm:ss 0.11 07/07/2002 * added documentation to Log/Log4perl.pm * added is_debug/is_error/is_info etc. functions to Logger.pm, test cases to t/002Logger.t 0.10 07/05/2002 * %p should return level name of the calling function, so $logger->warn('bad thing!!') should print 'WARN - bad thing' even if the category is set to debug, so took level_str out of Logger.pm (kg) 0.09 07/03/2002 * %p should return level name, not number, adding level_str to Logger.pm (kg) * Level.pm - discriminating: priorities are 1-4, levels are 'info','debug',etc (kg) 0.08 07/03/2002 * Non-root loggers are working now off the config file 0.07 07/02/2002 * Updated documentation * removed "diagnostics" 0.06 07/01/2002 * Bug discovered by Kevin Goess <cpan@goess.org>, revealed in 004-Config.t: Wrong layout used if Appender is inherited. Fixed. * Changed Log::Log4perl::Appender::TestBuffer to keep track of the object population -- so we can easily reference them in the Log::Log4perl test cases. Got rid of get_buffer(). * Added a reset() method to Log::Log4perl and Log::Log4perl::Logger for easier testing.
It resets all persistent loggers to the initial state. * Added documentation 0.05 06/30/2002 * Fixed bug with mapped priorities between java/Log::Dispatch * Java/Perl integration with conf file 0.04 06/30/2002 * Layout tests * %r to layout * Added lib4j configuration file stuff and tests 0.03 06/30/2002 * Layout * Curly braces in Layout first ops 0.02 06/30/2002 * Created Logger and test cases 0.01 06/22/2002 * Where it all began TODO (not assigned to anybody yet): ################################################## * objects passed via the config hash are stringified by Config.pm (requires a significant change on how to init via a hash ref, something like a HashConfigurator class) * BasicConfigurator() vs. :easy, PropertyConfigurator() * get_logger() thread safety (two try to create it at the same time) * Thread safety with appenders, e.g. two threads calling the File::Dispatch appender's log method * Thread safety with re-reading the conf file (watch) * Object rendering * log4j.logger.blah = INHERITED, app * variable subst: a=b log4j.blah = ${a} * log4j.renderer.blah = blah * permission problems, init() creates the files, maybe read later by different uid, no way to set umask? * Custom filters TODO Kevin: ################################################## * use squirrel? * document oneMessagePerAppender as a bona-fide feature * appender-by-name stuff? * implement? #. " TODO Mike: ################################################## * index.html on sourceforge should be part of CVS * Release script should maintain old CPAN message on index.html * Turning on DEBUG in Logger.pm results in broken test cases and warnings * Layout.pm: '%t' * README tests (Pod::Tests or something) * Just had a wild idea: Could we possibly utilize the compiler frontend to eliminate log statements that are not going to be triggered? This would be a HUGE performance increase!
* Write a bunch of useful appenders for Log::Dispatch like RollingLogFile ##################################################
https://metacpan.org/changes/release/MSCHILLI/Log-Log4perl-1.09
Radio Buttons in Angular Template Driven Forms
In this article, I am going to discuss Radio Buttons in Angular Template Driven Forms in detail. Please read our previous article, where we discussed Angular Template Driven Forms, as this article is a continuation of it. At the end of this article, you will understand what radio buttons are and when and how to use them in Angular Template Driven Forms.
What is a Radio Button?
A radio button is an HTML element that allows the user to select a single option from a predefined list of options. For example, you can create radio buttons for gender (Male and Female), and the user can select either the Male option or the Female option, but not both.
Example to understand Radio Buttons in Angular Template Driven Forms:
Let us understand how to create and use radio buttons in Angular Template Driven Forms. We are going to work with the same example that we started in our previous article. Now, we want to include the "Gender" radio buttons in the student registration form as shown in the below image. When we select the student's "Gender" using the radio buttons and click the "Submit" button, we want the selected gender value to be logged to the console.
How to create radio buttons in Angular Template Driven Forms?
Please have a look at the below code, which creates the gender radio buttons with Male and Female options.
Code Explanation
In the above code, the name attribute of the radio input element is used to group the radio buttons as one unit, which makes the selection mutually exclusive. The most important point to keep in mind is that both radio buttons must have the same value for the "name" attribute. Otherwise, the radio button selection won't be mutually exclusive.
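The code itself does not survive in this copy of the article. Based on the surrounding description, a sketch of the gender radio button markup (the wrapper element, CSS classes, and label text are assumptions) could look like this:

```html
<div class="form-group">
  <label>Gender:</label>
  <!-- Both inputs share name="gender", so the selection is mutually exclusive -->
  <label><input type="radio" name="gender" value="male" ngModel> Male</label>
  <label><input type="radio" name="gender" value="female" ngModel> Female</label>
</div>
```

The ngModel directive registers each input with the parent NgForm so the selected value shows up in the submitted form model.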
Again, notice that we have set the value attribute of each radio button to male and female respectively; this is the value that is going to be posted to the server when the form is submitted. The Submit button sits in the panel footer of the form template:

<div class="panel-footer">
    <button class="btn btn-primary" type="submit">Submit</button>
</div>

Modifying the app.component.ts file:
We want to log the posted form values to the console. So, modify the app.component.ts file as shown below.

import { Component } from '@angular/core';
import { NgForm } from '@angular/forms';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  RegisterStudent(studentForm: NgForm): void {
    // debugger;
    // var firstName = studentForm.controls.firstName.value;
    // var lastName = studentForm.controls.lastName.value;
    // var email = studentForm.controls.email.value;
    // var gender = studentForm.controls.gender.value;
    console.log(studentForm.value);
  }
}

With the above changes in place, browse the application, open the browser developer tools by pressing the F12 key, and switch to the Console tab. Then fill in the form and click the Submit button, and you should see the posted form values in the Console tab as shown in the below image.

How to select a radio button checked by default in Angular?
When working with real-time applications, we sometimes need one radio button to be checked by default when the form initially loads. Normally, we can do this by adding the checked attribute to the radio button. If you include the checked attribute on one of the radio buttons, you may expect that radio button to be checked by default. But in our example, it will not be. Let's include the "checked" attribute on the "Male" radio button, so modify the gender HTML code as shown below.
<input type="radio" name="gender" value="male" checked ngModel>

With the above changes, browse the application and you will see that the Male radio button is not checked. However, if you remove the "ngModel" directive from the radio button as shown below, then you will see that the Male radio button is checked when the form loads.

<input type="radio" name="gender" value="male" checked>

In Angular Template Driven Forms, we generally use the "ngModel" directive for two-way data binding. So, when we put the ngModel directive back into the control, the "checked" attribute no longer works as expected.

How to make it work?
To make it work, include a "gender" property in the component class and initialize it to the value of the radio button that you want checked by default. In our case, we want the "Male" radio button to be checked by default, so we add a "gender" property initialized to the value 'male' in the component class as shown below.

import { Component } from '@angular/core';
import { NgForm } from '@angular/forms';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  gender = 'male';

  RegisterStudent(studentForm: NgForm): void {
    console.log(studentForm.value);
  }
}

Modifying the app.component.html file:
Now modify the app.component.html file as shown below, where we bind ngModel to the gender property of the component class. At this point, when you browse the application, you will see that the "Male" radio button is checked by default. Now, if you remove the "checked" attribute from the "Male" radio button, it is still checked by default when the form loads. This is possible because of Angular's two-way data binding. In our example, we do not want any radio button to be checked by default, so we remove the "checked" attribute, the "gender" property from the component class, and the ngModel binding to it.
How to disable a radio button in Angular Template Driven Forms?
To disable a radio button in an Angular Template Driven Form, we need to use the disabled attribute on that radio button. For example, if you want the "Male" radio button disabled when the form initially loads, modify it as shown below.

<input type="radio" name="gender" value="male" ngModel disabled>

Note: The most important point to remember is that, by default, disabled form controls are not included in the Angular auto-generated form model. In our example, the "Male" radio button is disabled, so the gender property will not be included in the Angular-generated form model. At this point, even if you select the Female radio button and submit the form, you will still not see the gender property, as shown in the below image. In our example, we do not want any radio button to be disabled, so please remove the disabled attribute from the radio button.

In the next article, I am going to discuss checkboxes in Angular Template Driven Forms with examples. Here, in this article, I tried to explain radio buttons in Angular Template Driven Forms. I hope this article helps you with your needs. I would like to have your feedback. Please post your feedback, questions, or comments about this article.
https://dotnettutorials.net/lesson/radio-buttons-in-angular-template-driven-forms/
Lesson 1: Defining a Report Dataset for a Reporting Services Web Service

Use the following steps to learn how to specify a data source connection and return XML data from a Web service. In this lesson, you will create a dataset by calling the Report Server Web service ListChildren method, which returns a list of all items from the root folder in the Report Server database. You define the parameters required by the ListChildren method and set default values to iterate through the hierarchy starting with the root folder. Item properties defined by the Web service appear as fields for the dataset in the Datasets window. Finally, you drag the dataset fields to the report layout to design your report. When you preview the report, you see items and item properties from your report server database, such as reports, folders, and data sources.

1. Open a browser window and type the Report Server Web service URL to get the namespace information. Later, you will specify the namespace in the query.
2. Start Report Designer and create a new report. If you do not know how to create a report, see Tutorial: Creating a Basic Report.
3. In Data view, select New Dataset.
4. Type a name for the dataset (for example, XMLDataSet).
5. In the Dataset dialog box, in Data source, select New Data Source. The Data Source dialog box appears.
6. Type a name for the data source (for example, XMLDataSource).
7. In Type, select XML.
8. In Connection string, type the URL to the Report Server Web service. The dialog box should look similar to the following illustration:
9. On the Credentials tab, select Use Windows Authentication (Integrated Security).
10. Click OK to save your changes and close the Data Source dialog box.
11. In the Dataset dialog box, type the query, using the namespace version information that you verified in step 1. The dialog box should look similar to the following illustration:
12. On the Parameters tab of the Dataset dialog box, type two parameters.
These are the parameters on the ListChildren method that specify where to start in the Report Server folder hierarchy and whether to include all nested folders: Item and Recursive.

- Set Item to /. Remove the "=" that Report Designer adds. The / symbol specifies the root node of the report server folder namespace.
- Set Recursive to 1. Remove the "=" that Report Designer adds.

The dialog box should look similar to the following illustration.

- Click OK. The dataset is added to the Datasets window.
- Click Run (!) to view the result set. If the report server database contains reports and other items, you should see a row of data for each item.
- Click the Refresh Fields button on the toolbar. This saves the report definition and updates the view of fields in the Report Datasets window to show all the fields you can use. The dialog box should look similar to the following illustration.

You have successfully defined the metadata for a report dataset for Report Server database items using the Report Server Web service. When you process the report, the data represented by the dataset metadata will be retrieved from the Report Server database. Next, you can create a report dataset from a Web service that returns an XML System.Data.DataSet object. See Lesson 2: Defining a Report Dataset for an ADO.NET DataSet from a Web Service.

Tasks
- How to: Create or Edit a Report-Specific Data Source (Report Designer)
- How to: Create a Dataset (Report Designer)
- How to: Add, Edit, or Delete a Field in the Datasets Window (Report Designer)

Concepts
- Defining Report Datasets for XML Data
- Connecting to a Data Source
- Defining Report Datasets
- Working with Fields in a Report Dataset (Reporting Services)

Other Resources
- How Do I Find Tutorials (Reporting Services)
- Report Datasets Dialog Box (Report Designer)
https://msdn.microsoft.com/en-us/library/ms345338.aspx
Analyzing crash dumps can be complicated. Although Visual Studio supports viewing managed crash dumps, you often have to resort to more specialized tools like the SOS debugging extensions or WinDbg. In today’s post, Lee Culver, software developer on the .NET Runtime team, will introduce you to a new managed library that allows you to automate inspection tasks and access even more debugging information. –Immo

Today we are excited to announce the beta release of the Microsoft.Diagnostics.Runtime component (called ClrMD for short) through the NuGet Package Manager. ClrMD is a set of advanced APIs for programmatically inspecting a crash dump of a .NET program, much in the same way as the SOS Debugging Extensions (SOS). It allows you to write automated crash analysis for your applications and automate many common debugger tasks.

We understand that this API won’t be for everyone — hopefully debugging .NET crash dumps is a rare thing for you. However, our .NET Runtime team has had so much success automating complex diagnostics tasks with this API that we wanted to release it publicly.

Getting Started

Let’s dive right into an example of what can be done with ClrMD. The API was designed to be as discoverable as possible, so IntelliSense will be your primary guide. As an initial example, we will show you how to collect a set of heap statistics (objects, sizes, and counts) similar to what SOS reports when you run the command !dumpheap –stat.

The “root” object of ClrMD to start with is the DataTarget class. A DataTarget represents either a crash dump or a live .NET process.
In this example, we will attach to a live process that has the name “HelloWorld.exe”, with a timeout of 5 seconds to attempt to attach:

    int pid = Process.GetProcessesByName("HelloWorld")[0].Id;
    using (DataTarget dataTarget = DataTarget.AttachToProcess(pid, 5000))
    {
        string dacLocation = dataTarget.ClrVersions[0].TryGetDacLocation();
        ClrRuntime runtime = dataTarget.CreateRuntime(dacLocation);
        // ...
    }

You may wonder what the TryGetDacLocation method does. The CLR is a managed runtime, which means that it provides additional abstractions, such as garbage collection and JIT compilation, over what the operating system provides. The bookkeeping for those abstractions is done via internal data structures that live within the process. Those data structures are specific to the CPU architecture and the CLR version.

In order to decouple debuggers from the internal data structures, the CLR provides a data access component (DAC), implemented in mscordacwks.dll. The DAC has a standardized interface and is used by the debugger to obtain information about the state of those abstractions, for example, the managed heap. It is essential to use the DAC that matches the CLR version and the architecture of the process or crash dump you want to inspect. For a given CLR version, the TryGetDacLocation method tries to find a matching DAC on the same machine. If you need to inspect a process for which you do not have a matching CLR installed, you have another option: you can copy the DAC from a machine that has that version of the CLR installed. In that case, you provide the path to the alternate mscordacwks.dll to the CreateRuntime method manually. You can read more about the DAC on MSDN.

Note that the DAC is a native DLL and must be loaded into the program that uses ClrMD. If the dump or the live process is 32-bit, you must use the 32-bit version of the DAC, which, in turn, means that your inspection program needs to be 32-bit as well. The same is true for 64-bit processes.
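The bitness rule above can be checked up front in the inspector itself. The following standalone snippet is a generic sketch (it is not part of the ClrMD API) showing one way to report your own process's bitness before attempting an attach:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit process.
        // An inspector built as AnyCPU can run as either, so it is worth
        // checking at startup, before attaching to a target whose bitness you know.
        bool is64 = Environment.Is64BitProcess;
        Console.WriteLine("Inspector is {0}-bit (pointer size {1})",
                          is64 ? 64 : 32, IntPtr.Size);
    }
}
```

If the reported bitness does not match the dump or target process, rebuild the inspector for the matching platform rather than attempting the attach.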
Make sure that your program’s platform matches what you are debugging.

Analyzing the Heap

Once you have attached to the process, you can use the runtime object to inspect the contents of the GC heap:

    ClrHeap heap = runtime.GetHeap();
    foreach (ulong obj in heap.EnumerateObjects())
    {
        ClrType type = heap.GetObjectType(obj);
        ulong size = type.GetSize(obj);
        Console.WriteLine("{0,12:X} {1,8:n0} {2}", obj, size, type.Name);
    }

This produces one line of output per object on the heap: the object's address, its size, and its type name. However, the original goal was to output a set of heap statistics. Using the data above, you can use a LINQ query to group the heap by type and sort by total object size:

    var stats = from o in heap.EnumerateObjects()
                let t = heap.GetObjectType(o)
                group o by t into g
                let size = g.Sum(o => (uint)g.Key.GetSize(o))
                orderby size
                select new { Name = g.Key.Name, Size = size, Count = g.Count() };

    foreach (var item in stats)
        Console.WriteLine("{0,12:n0} {1,12:n0} {2}", item.Size, item.Count, item.Name);

This will output data like the following — a collection of statistics about what objects are taking up the most space on the GC heap for your process:

             564           11 System.Int32[]
             616            2 System.Globalization.CultureData
             680           18 System.String[]
             728           26 System.RuntimeType
             790            7 System.Char[]
           5,788          165 System.String
          17,252            6 System.Object[]

ClrMD Features and Functionality

Of course, there’s a lot more to this API than simply printing out heap statistics. You can also walk every managed thread in a process or crash dump and print out a managed callstack.
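The group-by/sum/order-by shape of that query can be exercised without a crash dump at all. Here is a self-contained sketch over made-up (type name, size) pairs — the names and sizes are hypothetical stand-ins for what heap.EnumerateObjects() and GetObjectType() would yield:

```csharp
using System;
using System.Linq;

class HeapStatsDemo
{
    static void Main()
    {
        // Hypothetical (type name, size) pairs standing in for heap objects;
        // in real ClrMD code these come from the heap enumeration shown above.
        var objects = new[]
        {
            (Name: "System.String",   Size: 40),
            (Name: "System.String",   Size: 64),
            (Name: "System.Object[]", Size: 120),
        };

        // Same shape as the ClrMD query: group by type, sum sizes, order by total.
        var stats = from o in objects
                    group o by o.Name into g
                    let size = g.Sum(o => o.Size)
                    orderby size
                    select new { Name = g.Key, Size = size, Count = g.Count() };

        foreach (var item in stats)
            Console.WriteLine("{0,12:n0} {1,12:n0} {2}", item.Size, item.Count, item.Name);
    }
}
```

With this toy input, the System.String group aggregates to a total of 104 bytes across 2 instances and sorts ahead of the single 120-byte System.Object[].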
For example, this code prints the managed stack trace for each thread, similar to what the SOS !clrstack command would report (and similar to the output in the Visual Studio stack trace window):

    foreach (ClrThread thread in runtime.Threads)
    {
        Console.WriteLine("ThreadID: {0:X}", thread.OSThreadId);
        Console.WriteLine("Callstack:");

        foreach (ClrStackFrame frame in thread.StackTrace)
            Console.WriteLine("{0,12:X} {1,12:X} {2}", frame.InstructionPointer, frame.StackPointer, frame.DisplayString);

        Console.WriteLine();
    }

This produces output similar to the following:

    ThreadID: 2D90
    Callstack:
               0       90F168 HelperMethodFrame
        660E3365       90F1DC System.Threading.Thread.Sleep(Int32)
          C70089       90F1E0 HelloWorld.Program.Main(System.String[])
               0       90F36C GCFrame

Each ClrThread object also contains a CurrentException property, which may be null, but if not, contains the last thrown exception on this thread. This exception object contains the full stack trace, message, and type of the exception thrown.

ClrMD also provides the following features:

- Gets general information about the GC heap:
  - Whether the GC is workstation or server
  - The number of logical GC heaps in the process
  - Data about the bounds of GC segments
- Walks the CLR’s handle table (similar to !gchandles in SOS).
- Walks the application domains in the process and identifies which modules are loaded into them.
- Enumerates threads, callstacks of those threads, the last thrown exception on threads, etc.
- Enumerates the object roots of the process (as the GC sees them for our mark-and-sweep algorithm).
- Walks the fields of objects.
- Gets data about the various heaps that the .NET runtime uses to see where memory is going in the process (see ClrRuntime.EnumerateMemoryRegions in the ClrMD package).

All of this functionality can generally be found on the ClrRuntime or the ClrHeap objects, as seen above. IntelliSense can help you explore the various properties and functions when you install the ClrMD package.
In addition, you can also use the attached sample code. Please use the comments under this post to let us know if you have any feedback!

Join the conversation

To answer my own comment – it seems you can self-debug. Seems to work – would be nice to know if it is supported. BTW, there is an error in the sample – I assume it should be:

    // If we don't have the dac installed, we will use the long-name dac in the same folder.
    if (string.IsNullOrEmpty(dacLocation))   // ***** without '!' ? ******
        dacLocation = version.DacInfo.FileName;

@Hrvoje, yep that's an error, sorry about that, it should not have the '!'. 🙁 The self-debug case is not a supported scenario because there's not a sensible way to make it work. For example, if you attempt to inspect your own heap with it, the ClrMD API itself will be allocating objects, which will trigger GCs, which in turn will cause your heap walk to fail when a GC rearranges the heap as you were walking it. This should always be used to inspect another process (or crash dump).

How can I call this to dump all objects under a given class or namespace from code? I want to dump all objects under a given dialogue window when that window is supposedly closed and deallocated. This would greatly help in finding objects that have not been garbage collected. I also want to do a memory snapshot of allocated objects by full type name and object id and then at a later time compare that to the current memory snapshot. I'd want only the objects in the second snapshot that do not exist in the first one to be printed. This helps for code that should clean up all of its resources when it exits. I've used this in C++ in the past to put in automatic debug-only checks for memory leaks (e.g., snapshot, call method A, snapshot, compare snapshots, if snapshots differ, break in debug mode).

@Tom ClrType instances have a .Name which contains the namespace as well as the typename.
You can use this to separate out the heap by namespace (though I suppose it would be better to provide a Namespace property instead of making you parse out the name… that's not currently in the API).

As to your second question about doing heap diffs, the main obstacle is that the GC relocates objects: an object can still be alive between two snapshots, but the object got moved… so you don't know the instance is the same. To solve this, we use a heuristic which basically does a diff of the type statistics (100 Foo objects in snapshot 1, 110 Foo objects in snapshot 2, 10 Foo objects difference). In fact, PerfView's memory diagnostics already does this today: …/details.aspx (Memory diagnostic in PerfView is actually built on top of ClrMD.)

Is there any limitation on the kind of process we can attach to? E.g. not running as admin and more, because I have tried to attach to one of my own processes and got the exception: Could not attach to pid 514, HRESULT: 0xd00000bb

Hi, I write a dll in an ATL project with a function which has an input and output parameter of BYTE* type:

    STDMETHODIMP CMSDllServer::sum22(BYTE* aa, SHORT len)
    {
        return S_OK;
    }

Using this function from a Windows C# application there is no problem and the array values are returned correctly // output 1,2,3,4,5 — but the same function in a webservice returns only the first index of the array, which is a very big problem: byte[] Packet = new byte[5]; // output 1,0,0,0,0 — help me please, thanx.

Seems my original comment never made it through. Our use case seems to be a bit simpler than what this dll was intended for. We produce mission-critical software (high availability and fault tolerance required, low installation count) which sometimes presents a challenge to monitor and diagnose.
I view ClrMD as a possibility to implement a miniature adtools-like component that would always be present with the deployment of our software to cover these use cases:

- Monitor the target process and take a "memory object count" if memory usage > X.
- Monitor the target process (GUI application) and take a "call stack dump" if the UI thread is blocked for more than 250ms (there was a MS Visual Studio extension that did the same thing, but produced minidumps – there was a nice video about it on Channel 9, but I can't seem to find it. Basically, Microsoft was using it to detect usability problems in production).

However, I would also like to see the following:

- Integrated ability to take usable minidumps and full dumps of the attached process, something like clrdump (google it, first result – seems the comments delete any of my posts that contain a link without warning…).
- Ability to take a snapshot of native call stacks – 95% of our threads are .NET, but there are a couple of native threads we integrate with through C++/CLI and we would really like to see both native and managed callstacks. There should be an easy way to convert the call stack to a readable format with the matching symbols (symbol server support).

I know this may not be your primary use case, but it would complete our needed feature set.

Hi, Nice! 🙂 It's really nice to see this kind of thing released publicly since all analysis needs (automated or not) are not covered by SOS or even by other extensions like Steve Johnson's SOSEX. Gaël

@Hrvoje You can do this using the IDebug* interfaces provided by DbgEng (full details too long for a comment here, but you can search for IDebugControl::GetStackTrace).
ClrMD provides a (mostly) complete wrapper around those interfaces:

    using Microsoft.Diagnostics.Runtime;
    …
    IDebugControl control = (IDebugControl)dataTarget.DebuggerInterface;
    control.GetStackTrace(…);

    IDebugClient client = (IDebugClient)dataTarget.DebuggerInterface;
    client.WriteDumpFile(…);

You can use the IDebug* interfaces to fully control your process (go, step, etc), but again… that's more detail than I can put in this comment. The API is still in beta too. =)

@li-raz I should have pointed out in the post: attaching to a process requires debugger privileges (in this case, that almost certainly means running the program as administrator). You do not need admin privileges to load a crash dump.

Is the ClrMD .NET library on a path to be fully supported and part of .NET 4.x or later? We can use beta version code in our development environment but not in our production environment, given the production environment has many different long-running server processes. Here is the quote from the blog post:

@Tom: The current version of ClrMD is a pre-release, which means its license doesn’t allow usage in production environments. Once we turn ClrMD into a stable release it will allow production use.

Is there any plan to provide one for the .NET 3.5 framework (CLR 2)?

Thanks for this Immo. I've actually spent the past few months writing exactly the same library using the CorDebug, MetaData and Symbol Server APIs. I've actually got it all up and running, although I've targeted crash dumps (full and partial) as opposed to a live process. Do you have any thoughts on ClrMD vs CorDebug? Obviously CorDebug is geared towards debugging and happens to support crash dumps as an extra bonus, while ClrMD is focussed on analysis and not debugging. It's great that ClrMD takes away all of the grunt labour I've had to do in order to get CorDebug up and running for crash dumps, like implementing ICorDebugDataTarget (great fun for partial dumps!)
and parsing MetaData binary signatures, which is a truly painful experience. It's awesome to have an officially supported way of doing this now, but any thoughts on whether CorDebug will continue to support crash dumps? And any future plans for ClrMD — is this just the beginning of ClrMD? Really excited by this so would love to hear anything you have to say 🙂

ps – is the team hiring? I've got experience in the CorDebug, MetaData and Symbol Server APIs 😀

@kevinlo2: It's on our list but we don't have a timeline yet.

This is great! I can see this going pretty big in just overall debugging. I still seem to be getting the "unable to attach to pid" error, though. Of course, I've only tried on the calc, notepad, and iisexpress processes. Keep up the awesome work!

Finally got past an issue attaching to a running process! I'm not seeing any stack traces in the managed threads though.

This is awesome stuff – I can't remember the last time I installed a framework and had so many "are you serious???" moments. Very cool – keep up the good work!

This is awesome! Looking forward to further samples. I was looking to automate IIS app pool process memory dump / analysis and make it a self-service tool on our shared web farm. This sounds very promising for that.

Very cool stuff – looking forward to reading more about this. I'd be interested to see a way of grabbing more information about the objects or even the whole objects themselves.

I'm seeing odd behavior when trying to retrieve native call stacks. When I try to use IDebugControl::GetStackTrace, it appears to not return all the frames in the stack.
For example, if I retrieve the stack for thread zero in a sample dump file via IDebugControl::GetStackTrace in ClrMD, I get the following frames:

    ############ Frames for thread 0 [B128] ############
    [0]: 7C82845C ntdll!KiFastSystemCallRet
    [1]: 77E61C8D kernel32!WaitForSingleObject
    [2]: 5A364662 w3dt!IPM_MESSAGE_PIPE::operator=
    [3]: 0100187C w3wp
    [4]: 01001A27 w3wp
    [5]: 77E6F23B kernel32!ProcessIdToSessionId

If I look at the stack in Visual Studio or WinDbg, or if I retrieve it using IDebugControl::GetStackTrace in a WinDbg extension, I get the following:

    ############ Frames for thread 0 [B128] ############
    [0]: 7C82845C ntdll!KiFastSystemCallRet
    [1]: 7C827B79 ntdll!ZwWaitForSingleObject <<<< Skipped by ClrMD
    [2]: 77E61D1E kernel32!WaitForSingleObjectEx <<<< Skipped by ClrMD
    [3]: 77E61C8D kernel32!WaitForSingleObject
    [4]: 5A364662 w3dt!WP_CONTEXT::RunMainThreadLoop
    [5]: 5A366E3F w3dt!UlAtqStartListen <<<< Skipped by ClrMD
    [6]: 5A3AF42D w3core!W3_SERVER::StartListen <<<< Skipped by ClrMD
    [7]: 5A3BC335 w3core!UlW3Start <<<< Skipped by ClrMD
    [8]: 0100187C w3wp!wmain
    [9]: 01001A27 w3wp!wmainCRTStartup
    [10]: 77E6F23B kernel32!BaseProcessStart

Note that all the frames listed by ClrMD exist in the true call stack, but it has skipped the indicated frames in between. Have you seen this behavior before? The code I'm using looks like this:

    DataTarget dataTarget = DataTarget.LoadCrashDump(@"c:\scratch\mydump.dmp");

    // Not actually using the ClrRuntime in this snippet, but my actual code is doing this,
    // so I'm including it in case it matters.
    ClrInfo clrInfo = dataTarget.ClrVersions[0];
    string dacLocation = clrInfo.TryGetDacLocation();
    ClrRuntime clrRuntime = dataTarget.CreateRuntime(dacLocation);

    // Retrieve the required debugging interfaces
    IDebugControl4 control = (IDebugControl4) dataTarget;
    IDebugSystemObjects3 sysObjs = (IDebugSystemObjects3) dataTarget;

    sysObjs.SetCurrentThreadId(0);

    DEBUG_STACK_FRAME[] frames = new DEBUG_STACK_FRAME[100];
    uint frameCount = 0;
    control.GetStackTrace(0, 0, 0, frames, 100, out frameCount);
    // Note: after the call, frameCount is set to 6, instead of 11, like it should be

As you may have noticed from my output above, I'm also having some issues with symbol resolution via IDebugSymbols::GetNameByOffset, where occasionally I'm getting incorrect or incomplete names; but I'm hoping that this is just something I have wrong in the code that sets up the symbol path.

Slight typo in the above code sample. These two lines:

    IDebugControl4 control = (IDebugControl4) dataTarget;
    IDebugSystemObjects3 sysObjs = (IDebugSystemObjects3) dataTarget;

should have read:

    IDebugControl4 control = (IDebugControl4) dataTarget.DebuggerInterface;
    IDebugSystemObjects3 sysObjs = (IDebugSystemObjects3) dataTarget.DebuggerInterface;

Great… Love this. One question. I have a simple program that allocates a List<> and adds instances of a class called PayLoad (see code below). After the first time through the while loop, one PayLoad instance is allocated and added to the List<PayLoad>. When I dump all the heap-allocated objects from my test program's namespace I see:

    Name                    Total Size   Total Number
    TestProgram.PayLoad[]   96           2
    TestProgram.PayLoad     24           1

Does anybody have any idea where the TestProgram.PayLoad[] instances come from? I'm trying to measure memory usage of my namespace objects and this seems to skew it a bit. Thanks.
    class Program
    {
        static void Main(string[] args)
        {
            List<PayLoad> payloadList = new List<PayLoad>();
            while (true)
            {
                Console.WriteLine("Adding new payload");
                payloadList.Add(new PayLoad());
                Console.ReadLine();
            }
        }
    }

    class PayLoad
    {
        public int a;
        public int b;
    }

I tried to run the sample to analyze a .dmp file which was taken from a program running on the same machine as the sample, but I keep getting the following exception when trying to create the runtime object:

    Message: Failure loading DAC: CreateDacInstance failed 0x80131c30
    at Microsoft.Diagnostics.Runtime.Desktop.DacLibrary.Init(String dll)
    at Microsoft.Diagnostics.Runtime.Desktop.DacLibrary..ctor(DbgEngTarget dataTarget, String dll)
    at Microsoft.Diagnostics.Runtime.DbgEngTarget.CreateRuntime(String dacFilename)
    at DumpFetch.App..ctor()
    at DumpFetch.App()

Any ideas?

I'm analyzing an 11 GB dmp file and the top size type is "Free". What does "Free" mean?

Free is a pseudo-object that represents free space (a hole) in the GC heap. These exist when the GC happens but decides not to compact. Having large amounts of free space is not necessarily bad (especially if it is a few big chunks), since these do get reused. When these free areas get too numerous/small, the GC will compact. Large objects (> 85K) are treated differently and placed in their own region of memory, and currently are never compacted (however, in v4.5.1 we have added the ability to do this explicitly).

Is there any way to find out using ClrMD whether some object is reachable from roots or not?

Yes. In fact, this is how PerfView uses ClrMD, but you have to calculate the object graph manually to find that information. There's not a simple function call to do this. The functions which you use to do most of the work are:

- ClrHeap.GetObjectType – To get the type of the object in the root.
- ClrType.EnumerateRefsOfObject – To enumerate what objects the current object references.
With these functions you build the full heap graph… and any object not in the graph is considered dead. Any object you do reach is considered live. (There are false positives and negatives from this approach, but they are rare. We unfortunately aren't 100% accurate in the root reporting for all versions of the runtime.)

Thank you, Lee Culver! It helped me find the cause of a memory leak.

@Alexey: Can you provide me your code for calculating the heap graph? (Mail: toni.wenzel@googlemail.com) I'm currently investigating a memory leak in our own application. It would be interesting how you managed this. THX!

What is ClrRoot.Address used for? Does it point to the same thing as ClrRoot.Object? How can I get the following information: the object which is pinned by the root (I guess ClrRoot.Object)? I would like to know which object prevents which object from being collected (GC relocated). What is the ClrType.GetFieldForOffset() "inner" parameter used for?

Great work! But when I tried it out on a production dump I don’t get the same answer from the sample code as I got from WinDbg for the command "!dumpheap –stat". Example for strings – the sample code returns:

    16 318 082 199 815 System.String

But in WinDbg I get:

    21004872 191564 System.String

I miss 3 MB of string objects?! And when I try to search for “ClaimsPrincipal” objects it’s possible to locate 46 of them with WinDbg but none with ClrMD? Is it something I have missed?

Wow, I wish I knew about this weeks ago. This is a fantastic little library and it's making my deep investigations into many millions of objects much more bearable. Thanks kindly 🙂

So, I might be doing something wrong, but I'm having a hard time working with array fields while trying to browse an object. I'm currently using ClrInstanceField.GetFieldValue(parentObjectAddress), which I was hoping would give me the address of the array, since that is what it does for objects. Instead it seems to be returning something else?
It also seems like it thinks the array in every generic List<T> is an Object[], but this would imply that generic collections don't prevent boxing, which I know to be false. I'm also curious that when I use GetFieldValue on an Object type, the address it gives back seems to work fine with field.Type, but heap.GetObjectType for the same address returns null or sometimes strange values. I only stumbled this way when trying to account for polymorphism while browsing referenced objects deeper than my starting point, since I figured ClrInstanceField.Type would reflect the general type definition, not necessarily the actual type stored in a particular instance (e.g. field definition type: IEnumerable, instance reference: ArrayList).

Maybe you could provide some more sample code now that this has been in the wild for a while? Without documentation it has been hard to infer how one might dig deep into an object graph, especially regarding fields that aren't primitive values (structs/arrays/objects/etc.). There are very few deep resources online; though the ScriptCs module and a few other blogs have been helpful, I am encountering plenty of things that require a lot of trial and error, which is costing me more time than I was hoping this tool would save me. I still think the knowledge will benefit me in the long run, but a followup would be nice. Maybe some of those internal automated diagnostics might be safe to share with the public?

On a positive note, I've had great success combining some work I did automating against DbgEng and SOS with this library, and they appear to be complementing each other well (since I already have some SOS parsing implemented).

I love this tool, but would also like to use an app written with it against some dumps containing unmanaged code from old legacy apps to automate large numbers of repetitive actions. I'm thinking the tool can do it because DebugDiag v2 uses ClrMD and it can open unmanaged dumps.
But I can't figure out how to load the required unmanaged-compatible clr10sos from ClrMD-based code. The code seems to require the FindDAC step and, of course, there are no CLR versions in the dump at all. How can I get ClrMD to use the Clr10Sos and let me use the ReadMemory, thread-related, and other convenient debugging commands? Thanks! -Bob

I realize now that I didn't put my name with my question, but I've further detailed the question above on StackOverflow. Sadly, I don't think there are many people using this extensively yet, so I'm concerned by the fact that the question is already well below the average number of viewers for a new question. I'm posting the link here both for experts that might see this as well as others who might have the same question: stackoverflow.com/…/how-to-properly-work-with-non-primitive-clrinstancefield-values-using-clrmd

Hi, we need to parse a dictionary of type <string, List<objects>> using ClrMD. Dictionaries are stored as System.Collections.Hashtable+bucket[] in memory, as per our understanding. We have tried to parse a dictionary of type <string, List<objects>> by enumerating the internal Hashtable+bucket[] objects, but we weren't successful. We are able to parse the dictionary values (List<objects>) as individual arrays, but we aren't able to correlate which keys those individual arrays belong to. To do this, we need to parse the dictionary of type <string, List<objects>>. Can you please provide us pointers/direction on how to parse a dictionary using ClrMD? Sample code would be helpful.

This is amazing! This is going to help me automate the debugging of W3WP in certain situations. I can't explain how thrilled I am; previously I would have had to create a memory dump using procdump, then pull up WinDbg and start issuing commands to gather the desired info, all of this manually and error prone! Now I can do it automatically from my app, with a LIVE PROCESS!!
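An earlier comment asks where the extra TestProgram.PayLoad[] instances come from when the program only ever creates one List<PayLoad>. One likely explanation (standard List<T> behavior, not something confirmed in the thread) is the list's private backing array: List<T> stores its elements in a T[] that starts small and is reallocated at double the capacity when it fills up, and the old array can still be sitting on the heap until a GC collects it. A self-contained sketch:

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        // List<T> stores its elements in a private T[] backing array. When the
        // array fills up, List<T> allocates a new array of double the capacity
        // and copies the elements over; the old array becomes garbage but stays
        // on the heap until the GC reclaims it.
        var list = new List<object>();
        int lastCapacity = -1;
        for (int i = 0; i < 9; i++)
        {
            list.Add(new object());
            if (list.Capacity != lastCapacity)
            {
                Console.WriteLine("Count={0} Capacity={1}", list.Count, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }
        // Typical growth is 4, 8, 16, ... so several array instances can be
        // visible in a heap snapshot even for a single list.
    }
}
```

The exact byte counts reported (96 bytes for 2 arrays in the comment above) depend on pointer size and per-array header overhead, so they will differ between 32-bit and 64-bit processes.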
We do a lot of dump analysis where I work and we have started to use your library a lot; it's awesome. I recently made an extension library for ClrMD which allows you to use ClrMD with LINQPad; being able to interactively navigate a dump with ClrMD and LINQPad is great for finding unknown issues. When we spot a particular issue pattern, we take the code we wrote in LINQPad and put it in an analysis rule in DebugDiag. The project is on GitHub if you want to take a look: github.com/…/ClrMD.Extensions

It would be great to hear your thoughts about it. Does it fit with your vision of where ClrMD is evolving? My main concern is about my 'ClrObject' class which takes care of most of the LINQPad integration; I saw that you created one in the DebugDiag API (which is also available in the ClrMD samples). Do you plan to include the ClrObject class from DebugDiag directly in ClrMD? Do you plan to change ClrMD in a way that would prevent me from creating my own ClrObject class? Thanks for your time, Jeff Cyr

There is a memory leak when calling the dataTarget.CreateRuntime method. Where can I report this?

Hi, I have posted a question about finding the root of an object using ClrMD at stackoverflow.com/…/trying-to-find-object-roots-using-clr-md – can you please provide any suggestions about it?

Hi, maybe someone would be interested: I wrote an application that exposes the ClrMD library via a GUI: github.com/…/LiveProcessInspector

Posted a question at stackoverflow.com/…/finding-a-types-instance-data-in-net-heap. Can you help?

How do I start a process using the debugger API? I have been able to use ClrMD to attach to live processes that are already running and monitor them for exceptions and other debug events. Now I want to be able to launch an app under the debugger API so I can capture debug events that occur during the startup sequence.
ClrMD does not expose this functionality, so in a C++/CLI dll I wrote code that uses the unmanaged API to launch the process and then uses DataTarget::CreateFromDebuggerInterface() to be able to use the ClrMD functionality (error handling omitted):

    PDEBUG_CLIENT debugClient = nullptr;
    hr = DebugCreate( __uuidof( ::IDebugClient ), (void**)&debugClient );

    System::Object^ obj = Marshal::GetObjectForIUnknown( ( System::IntPtr )debugClient );
    Interop::IDebugClient^ pdc = (Interop::IDebugClient^)obj;

    // this value seems to work, don't know why; other values failed
    Interop::DEBUG_CREATE_PROCESS createFlags = (Interop::DEBUG_CREATE_PROCESS)1;
    Interop::DEBUG_ATTACH attachFlags = Interop::DEBUG_ATTACH::INVASIVE_RESUME_PROCESS;
    hr = pdc->CreateProcessAndAttach( 0, exePath, createFlags, 0, attachFlags );

    Interop::IDebugControl^ idc = ( Interop::IDebugControl^ )pdc;
    hr = idc->WaitForEvent( Interop::DEBUG_WAIT::DEFAULT, 1000 );

After calling WaitForEvent() the debugger should be attached and the process should be running. If I call idc->WaitForEvent( Interop::DEBUG_WAIT::DEFAULT, 1000 ); in a loop, it will display the UI and run normally. However, when I try to connect the session to ClrMD I get errors.
DataTarget^ target = DataTarget::CreateFromDebuggerInterface( pdc );

This always throws a Microsoft::Diagnostics::Runtime::ClrDiagnosticsException "Failed to get proessor type, HRESULT: 8000ffff"

at Microsoft.Diagnostics.Runtime.DbgEngDataReader.GetArchitecture() in c:workprojectsProjectsProcessMonitorsamplesdotnetsamplesMicrosoft.Diagnostics.RuntimeCLRMDClrMemDiagdbgengdatatarget.cs:line 164
at Microsoft.Diagnostics.Runtime.DataTargetImpl..ctor(IDataReader dataReader, IDebugClient client) in c:workprojectsProjectsProcessMonitorsamplesdotnetsamplesMicrosoft.Diagnostics.RuntimeCLRMDClrMemDiagdatatargetimpl.cs:line 30
at Microsoft.Diagnostics.Runtime.DataTarget.CreateFromDebuggerInterface(IDebugClient client) in c:workprojectsProjectsProcessMonitorsamplesdotnetsamplesMicrosoft.Diagnostics.RuntimeCLRMDClrMemDiagpublic.cs:line 2797

Inside the exception object it reports that _HResult = 0x81250002. I tried calling DataTarget::CreateFromDebuggerInterface() both before and after the target is connected, and before and after WaitForEvent() is called – all fail the same way. Any help getting this to work is appreciated. Thanks.

Exception thrown: 'Microsoft.Diagnostics.Runtime.ClrDiagnosticsException' in Microsoft.Diagnostics.Runtime.dll
Additional information: This runtime is not initialized and contains no data.
Any ideas?
https://blogs.msdn.microsoft.com/dotnet/2013/05/01/net-crash-dump-and-live-process-inspection/
CLion 2021.1 EAP: Clazy Analyzer, Better Makefile Projects Support, Sharing CMake Settings in VCS

The CLion 2021.1 EAP program is underway and we've already introduced Global Data Flow Analysis, Google Sanitizers, Valgrind Memcheck, Code Coverage in remote mode, CMake 3.19, and more. Today we're releasing the CLion 2021.1 EAP2 build, with even more new features for you to preview. Build 211.5787.12 is available from our website, via the Toolbox App, or as a snap package (if you are using Ubuntu). Note that if you are on macOS, there is a separate build for Apple Silicon (M1 chip).

DOWNLOAD CLION 2021.1 EAP

Main highlights:

- Qt projects:
  - Clazy: CLion now has a Qt-oriented static code analyzer.
  - QtCreator keymap.
- Makefile projects:
  - The Makefile plugin is now bundled.
  - Compilers from the toolchain are used during Makefile project resolution.
- CMake projects:
  - You can now share CMake options in VCS.
  - A CMake Profile Wizard has been added.
- Go to declaration performance improvements.
- A new inspection: catching unmatched header guards.

Qt projects

In CLion 2020.3 we added more sophisticated support for Qt projects that included code completion for signals and slots, Qt-style auto-import, and some Qt project and Qt UI class templates. We're not stopping there! In this EAP we've integrated Clazy, a Qt-oriented static code analyzer, into CLion. We did it in the same way we implemented the Clang-Tidy integration, meaning checks appear in the editor similarly to how they are displayed in CLion's own static analyzer.

Clazy is integrated into CLion's own language engine based on Clangd, and the version currently used in CLion is 1.8. In Settings/Preferences | Editor | Inspections | C/C++ | General | Clazy settings you can configure CLion's severity level and the level of Clazy checks.

If you're coming from QtCreator, you'll be happy to know that CLion now bundles the QtCreator keymap.
You can switch to it in Settings or via a Quick Switch Scheme action.

Makefile projects

The Makefile Language plugin (previously 3rd-party) is now maintained by the CLion team and is bundled into CLion and GoLand in 2021.1 EAP. You no longer need to install the plugin manually, so you now get make syntax highlighting, quick documentation, Find Usages for targets, and some navigation and code completion actions for Makefiles right out of the box!

When loading a Makefile project, CLion now not only uses the make executable from the Makefile toolchain, but also takes compilers from it (if configured explicitly in the corresponding toolchain). This renders our Makefile project support more consistent and accurate.

CMake Profiles

CMake Profiles, which are used when building projects via CMake, can be configured in Settings/Preferences | Build, Execution, Deployment | CMake in CLion. These settings are now stored in cmake.xml in the .idea directory and can be shared in the VCS along with the project. Simply tick the Share option in the settings. This new ability makes it much easier for the team to share a single CMake setup between all members! Users can have both shared and local profiles (local profiles always go first in the list of profiles in settings) and switch between them in the editor.

Known limitations:

- CMake Profiles with the same name are not allowed. If a shared and a local profile both have the same name, the local profile takes precedence and the shared one won't appear in the settings.
- Only CMake Profile settings can be shared. The "Reload CMake project on editing CMakeLists.txt" setting is common for all profiles and is stored in workspace.xml.

CLion now comes with a CMake Profile Wizard which helps users configure toolchains and CMake Profiles for the first time.
Depending on whether the project has previously been opened, the process may include the following steps:

- The Toolchains step is shown only if CLion is being launched for the first time and settings were not imported.
- The CMake Profiles step is shown if the project is opened for the first time (i.e. if the .idea directory is missing), or if workspace.xml or cmake.xml is missing in .idea, or if all the profiles are disabled.

It's worth mentioning that closing the Toolchain/CMake wizard automatically saves all the introduced changes.

Improved navigation performance for the Eigen library

The most widely used navigation action, Go to Declaration, now works faster in the Eigen library (CPP-15082).

Catching the unmatched header guard

We've added a new static code analysis check to catch situations when the comment at the #endif preprocessor directive doesn't match the macro name from the #ifndef.

The full release notes are available here.

DOWNLOAD CLION 2021.1 EAP

Your CLion team
JetBrains
The Drive to Develop
https://blog.jetbrains.com/clion/2021/02/clion-2021-1-eap-clazy-makefile-cmake-settings/
In the XHTML sample we discussed in tutorial 5, data types were not that important—all the document content was of the string data type. Often, however, you will have documents that contain several different data types, and you will want to be able to validate these data types. Unfortunately, DTDs are not designed for validating data types or checking ranges of values. DTDs also do not understand namespaces. To solve these problems, schemas were invented. Unlike DTDs, which have their own peculiar syntax, XML schemas are written in XML. In addition to providing the information that DTDs offer, schemas allow you to specify data types, use namespaces, and define ranges of values for attributes and elements. In this tutorial, you'll learn about XML schemas and how to use them in your XML documents. We'll look at the XML schema data types and their categories and then explore how to create simple and complex data types. Finally, we'll examine namespaces used in XML schemas.
https://www.brainbell.com/tutors/XML/XML_Book_B/XML_Schemas.htm
I am asking this question mostly for confirmation, because I am not an expert in data structures, but I think the structure that suits my needs is a hashmap. Here is my problem (which I guess is typical?): I would use something like:

#include <boost/unordered_map.hpp>

class Data
{
    boost::unordered_map<std::pair<int,int>,double> map;
public:
    void update(int i, int j, double v)
    {
        map[std::pair<int,int>(i,j)] += v;
    }
    void output(); // Prints data somewhere.
};

That will get you going (you may need to declare a suitable hash function). You might be able to speed things up by making the key type a 64-bit integer, and using ((int64_t)i << 32) | j to make the index. If nearly all the updates go to a small fraction of the pairs, you could have two maps (small and large), and directly update the small map. Every time the size of small passes a threshold, you could update large and clear small. You would need to do some careful testing to see if this helps or not. The only reason I think it might help is by improving cache locality. Even if you end up using a different data structure, you can keep this class interface, and the rest of the code will be undisturbed. In particular, dropping sparsehash into the same structure will be very easy.
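For illustration, the same accumulate-into-a-map idea, including the packed 64-bit key trick from the answer, sketched in Python (the question itself is about C++, and the class and method names here are my own):

```python
from collections import defaultdict

class SparseAccumulator:
    """Accumulate (i, j) -> value updates, mirroring the unordered_map idea above."""

    def __init__(self):
        self._map = defaultdict(float)

    def _key(self, i, j):
        # Pack the two indices into one 64-bit key, as suggested in the answer;
        # mask j to 32 bits so a negative j cannot bleed into i's bits.
        return (i << 32) | (j & 0xFFFFFFFF)

    def update(self, i, j, v):
        self._map[self._key(i, j)] += v

    def get(self, i, j):
        # .get() does not insert a default entry, so reads stay cheap.
        return self._map.get(self._key(i, j), 0.0)

acc = SparseAccumulator()
acc.update(1, 2, 0.5)
acc.update(1, 2, 1.0)   # repeated updates to the same pair accumulate
```

The masking detail is the kind of thing the "careful testing" in the answer would catch: without it, a negative j sign-extends and collides with other keys.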
https://codedump.io/share/fwsffWhmRoX4/1/data-structure-for-sparse-insertion
There are two applications that have one library each. The libraries have XSDs that are used by the corresponding application to parse incoming IDoc stream data. However, the IDocs come from different SAP systems, and each SAP system has a different structure for the same Matmas IDoc. After EG restart, whichever application gets the IDoc message first works fine, and the second application fails with the error below:

BIPmsgs;Exception Code - 3450;Exception Type - RecoverableException;Exception Description - DESPI Exception during IDOC Parsing;Inserts - getAccessor called for non existent accessor: Cursor = SapMatmas05Yematmas05ExtnY1maram1000, accessor = VendName

Any suggestions what could cause this?

Answer by Adrienne_Lee (3179) | Jan 13, 2016 at 02:25 PM

In the trace of the failed message processing there are references to two schemas in the same namespace:

Schema 1: SapMatmas05Yematmas05ExtnY1maram1Mdg000.xsd targetNamespace=""
Schema 2: SapMatmas05Yematmas05ExtnY1maram1Mdg000.xsd targetNamespace=""

Both schemas have identical names in the same namespace, but different contents. This contradicts XML standards, and it is not certain which schema will be used by an adapter. The schemas are not visible across applications. However, the SAP adapter distinguishes them according to their name and namespace. When the name and namespace are identical, the adapter does not know which to choose. It will only work in a separate EG, which has another JVM.
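The failure mode can be illustrated with a toy registry keyed by (name, namespace); this is only a sketch of the idea, not the actual adapter implementation:

```python
# Toy schema registry keyed by (schema name, target namespace).
registry = {}

def register(name, namespace, content):
    # A second schema with the same (name, namespace) silently replaces
    # the first: only one definition can survive under that key.
    registry[(name, namespace)] = content

register('SapMatmas05.xsd', '', '<structure from SAP system A>')
register('SapMatmas05.xsd', '', '<structure from SAP system B>')

# Only one entry remains, so one of the two applications now parses
# its IDocs against the wrong structure.
```

This is why only the application that gets its message in first (and thus has its schema definition honored) works, and why running the applications in separate EGs, each with its own JVM, avoids the clash.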
https://developer.ibm.com/answers/questions/247992/why-are-xml-schemas-xsd-getting-cross-referenced-b.html?smartspace=hybrid-cloud-core
Happy new year from Palo Alto! Please, I have been working for an inordinately long time trying to create some code for something that should be pretty basic. I have written some nice, simple PHP code that generates a random 11-digit alphanumeric code, called $elevendigits, like this one: FFM00976YH5. What I now need to do is write some code that generates a check digit according to the Modulus 10 (sometimes called Luhn) standard. Let's call it $checkdigit. It will be the twelfth digit. I would have thought this an off-the-shelf function but can't seem to get anything to work. It would need to:

Please, can anyone help me with the code necessary to accomplish this? I would have thought it easy, but am failing. Thank you sincerely!

It is a pretty simple method to write in PHP, as long as you understand the principle of how the Luhn algorithm works, which it seems you do. The difference with your project is that you also need to consider letters, and convert them as you hit any. I am also not sure why you don't reuse the method you created to make these numbers for the validation as well. You would just need to change it from setting the last digit to validating it instead. Give it another try and I'm sure you're able to figure it out; remember, modulus is your friend. If you run into problems, post your code here and we can take a look and help you out.

Found that via a Google search; you could perhaps use that as a "blueprint", with each "step" being a separate little section of code, and wrap it up in a function.

Thank you, The Red Devil and Space Phoenix, for your encouragement. Here's the code that I have so far:

<?
//generate a random string of 11 alphanumeric characters without vowels
function random_string() {
    $character_set_array = array();
    $character_set_array[] = array('count' => 11, 'characters' => 'BCDFGHJKLMNPQRSTVWXYZ0123456789');
    $temp_array = array();
    foreach ($character_set_array as $character_set) {
        for ($i = 0; $i < $character_set['count']; $i++) {
            $temp_array[] = $character_set['characters'][rand(0, strlen($character_set['characters']) - 1)];
        }
    }
    shuffle($temp_array);
    return implode('', $temp_array);
}

$randomdigits = random_string();

//change all letters to numbers based on ordinal position and starting with B as eleven
$randomdigits = str_replace('B','11', $randomdigits);
$randomdigits = str_replace('C','12', $randomdigits);
$randomdigits = str_replace('D','13', $randomdigits);
$randomdigits = str_replace('F','15', $randomdigits);
$randomdigits = str_replace('G','16', $randomdigits);
$randomdigits = str_replace('H','17', $randomdigits);
$randomdigits = str_replace('J','19', $randomdigits);
$randomdigits = str_replace('K','20', $randomdigits);
$randomdigits = str_replace('L','21', $randomdigits);
$randomdigits = str_replace('M','22', $randomdigits);
$randomdigits = str_replace('N','23', $randomdigits);
$randomdigits = str_replace('P','25', $randomdigits);
$randomdigits = str_replace('Q','26', $randomdigits);
$randomdigits = str_replace('R','27', $randomdigits);
$randomdigits = str_replace('S','28', $randomdigits);
$randomdigits = str_replace('T','29', $randomdigits);
$randomdigits = str_replace('V','31', $randomdigits);
$randomdigits = str_replace('W','32', $randomdigits);
$randomdigits = str_replace('X','33', $randomdigits);
$randomdigits = str_replace('Y','34', $randomdigits);
$randomdigits = str_replace('Z','35', $randomdigits);
?>

So now, we wind up with $randomdigits as only numerals, which is great. But alas, that's all I've been able to accomplish so far.
I still need to multiply every other digit of $randomdigits by 2, starting with the right-most digit; convert any two-digit products from the preceding step into 2 one-digit numbers; add all the digits together; and finally subtract the sum from the next highest number ending in zero to yield the check digit. I am diligent, but stuck. Any help would be greatly appreciated! Thank you very much.

Hmm, ok. Yea, I can see why you might be a little stuck. You are looking at this from perhaps the wrong perspective, and perhaps a little too procedurally. If you have the time, I would recommend you read up on OOP; it would greatly improve your development speed. I have written a function for you that should work both for generating the validation number and for validating a complete number. To use the different functions, you would change to the correct return method. One will give you the correct validation number that is missing; the other will tell you if the number is valid (bool). When fetching the correct validation number it is important that you append an "x" (or any other number or character) to the end of the string. I.e. in your first post you mentioned "FFM00976YH5"; when passing this number along you would pass it as "FFM00976YH5x", and then just replace the x afterwards with the correct validation number.
function luhnAlgorithm($checkNumber) {
    $characters = array_flip(range('B', 'Z'));
    $length = strlen($checkNumber) - 1;
    $total_sum = 0;
    $cur_num = 0;
    for ($num = ($length - 1); $num >= 0; --$num) {
        if (!ctype_digit((string) $checkNumber[$num])) {
            $sum = $characters[$checkNumber[$num]] + 11;
        } else {
            $sum = $checkNumber[$num];
        }
        if ($cur_num++ % 2 == 0) {
            $sum *= 2;
        }
        if ($sum > 9) {
            $sum = substr($sum, 0, 1) + substr($sum, 1, 1);
        }
        $total_sum += $sum;
    }
    return (10 - ($total_sum % 10)); //Use this to return the missing validation number; note it has to be passed with x at the end
    //return (($total_sum + $checkNumber[$length]) % 10 == 0); //Use this to validate the 12 digit number
}

EDIT: Forgot to mention that I have not tested the code, but it should work. In the event you have any problems, please let us know what is happening and we can help sort it out.

So, procedurally (trying to wrap my head around the algorithm in 30 seconds), for a given string $string = "FFM00976YH5";

1: str_replace letters with numbers. This gives you a pure-digit string of variable length.
$string = str_replace(range('B','Z'), range('11','35'), $string)
2: Iterate to every other digit, starting from the right and walking back to the start.
for ($x = count($string)-1; $x >= 0; $x -= 2)
3: Double the value and sum (via type juggling).
$sum += array_sum(str_split(($string[$x] * 2)));
4: Apply the math X*9 mod 10;
$cd = ($sum * 9) % 10;
$cd is now my check digit. Did I miss anything?

Point one would be wrong if you are looking for a true implementation of the algorithm, since you would treat B as 11 and not as 1 and 1 later. Point two is also not correct, as you just iterate over every second number, while you also need to add every odd number to the sum as well (without doubling it). Point three is correct; the only thing is that you need the addition for the odd numbers, which should not be doubled. Point four, I am not sure why you are multiplying by 9 here?
On a side note, if the validation is only for internal use, any way it's implemented will work. The problem would be if you want it to work with other already established systems.

Or did I misinterpret your desire to convert the numbers (whole numbers instead of individual digits)? That would change the code to something along the lines of... For a given string:

1: Iterate to every other character, starting from the right and walking back to the start.
for ($x = count($string)-1; $x >= 0; $x -= 2) {
2: str_replace letters with numbers.
$char = str_replace(range('B','Z'), range('11','35'), $string[$x]);
$sum += array_sum(str_split(($char * 2)));
}
(Steps 2 and 3 could be combined, but for clarity I left them separate.)

Altered in my second post (made while you were posting!).

"Point two is also not correct as you just iterate over every second number, while you also need to add every odd number to the sum as well (without doubling)." Ah... yes, that will necessitate a little extra code... below.

According to Wikipedia, there are two methods of obtaining the value: subtracting the modulo value from 10, or multiplying by 9 and taking the modulo of the result. Either way it's 2 mathematical operations. Entirely agree.

Code modified:

$string = "FFM00976YH5";
$string = strrev($string); //Turn the string around so that $x has meaning.
for ($x = 0; $x < strlen($string); $x++) {
    $char = str_replace(range('B','Z'), range('11','35'), $string[$x]);
    $sum += array_sum(str_split(($char * pow(2, (($x+1) % 2)))));
}
$cd = ($sum * 9) % 10;
I'm happy to report that I have a working cluster of code, thanks to the generous contributions of StarLion and The Red Devil. Part of the trouble I was having came from not recognizing that my host was running only PHP 4, which caused the str_split call to return undefined. But with encouragement from StarLion and The Red Devil, I plugged through and created a workaround using preg_split. Everything now works fine. I am extremely grateful to StarLion and The Red Devil, without whom I would not have figured this out. Thank you!

Ask your host if they have the option of running PHP 5, as PHP 4 is no longer supported. If they don't, give some thought to moving to a host that does.

Support for PHP 4 has been discontinued since 2007-12-31. Please consider upgrading to PHP 5. from: The current stable release listed is 5.4.10
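For a language-neutral reference, the arithmetic the thread converges on (double every other digit from the right, split two-digit products into their digits, sum, then take the distance to the next multiple of ten) can be sketched in Python; this is my illustration, not code from the thread, and it assumes letters have already been substituted with digits as in the OP's code:

```python
def luhn_check_digit(digits: str) -> int:
    """Compute the Mod-10 (Luhn) check digit for a string of decimal digits.

    Doubles every other digit starting from the rightmost payload digit,
    reduces two-digit products to a single digit (e.g. 14 -> 1 + 4 = 5,
    which equals 14 - 9), sums everything, and returns the amount needed
    to reach the next multiple of ten.
    """
    total = 0
    for pos, ch in enumerate(reversed(digits)):
        d = int(ch)
        if pos % 2 == 0:      # rightmost payload digit and every second one leftwards
            d *= 2
            if d > 9:         # split two-digit product: 16 -> 1 + 6 == 16 - 9
                d -= 9
        total += d
    return (10 - total % 10) % 10

check = luhn_check_digit("7992739871")  # -> 3, the textbook Luhn example
```

The final `% 10` handles the case where the sum is already a multiple of ten, so the check digit is 0 rather than 10.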
https://www.sitepoint.com/community/t/generate-modulus-10-luhn-check-digit-in-php-help/25244
Introduction to Clinical Natural Language Processing Modeling

NLP/Text Analytics | Healthcare | Natural Language Processing | West 2019

Posted by Andrew Long, February 25, 2019

Andrew is a speaker for ODSC West 2019! Be sure to check out his talk, "Healthcare NLP with a doctor's bag of notes," this November in San Francisco!

Doctors have always written clinical notes about their patients — originally, the notes were on paper and were locked away in a cabinet. Fortunately for data scientists, doctors now enter their notes in an electronic medical record. These notes represent a vast wealth of knowledge and insight that can be utilized for predictive models using Natural Language Processing (NLP) to improve patient care and hospital workflow. As an example, I will show you how to predict hospital readmission with discharge summaries. This article is intended for people interested in healthcare data science. After completing this tutorial, you will learn:

- How to prepare data for a machine learning project
- How to preprocess the unstructured notes using a bag-of-words approach
- How to build a simple predictive model
- How to assess the quality of your model
- How to decide the next step for improving the model

I recently read the great paper "Scalable and accurate deep learning for electronic health records" by Rajkomar et al. (paper at). The authors built many state-of-the-art deep learning models with hospital data to predict in-hospital mortality (AUC = 0.93–0.94), 30-day unplanned readmission (AUC = 0.75–0.76), prolonged length of stay (AUC = 0.85–0.86) and discharge diagnoses (AUC = 0.90). AUC is a data science performance metric (more about this below) where closer to 1 is better. It is clear that predicting readmission is the hardest task, since it has a lower AUC. I was curious how good a model we can get if we use the discharge free-text summaries with a simple predictive model.
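As an aside, AUC can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one (with ties counting half); a small pure-Python illustration of this reading, not taken from the paper or this post:

```python
def auc(pos_scores, neg_scores):
    """Probability that a random positive outranks a random negative (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

perfect = auc([0.9, 0.8], [0.1, 0.2])  # -> 1.0: every positive outranks every negative
```

A model that scores everyone identically lands at 0.5, which is why an AUC of 0.75 for readmission is meaningfully above chance but well short of the 0.93+ seen for mortality.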
If you would like to follow along with the Python code in a Jupyter Notebook, feel free to download the code from my github.

Model Definition

This blog post will outline how to build a classification model to predict which patients are at risk for 30-day unplanned readmission utilizing free-text hospital discharge summaries.

Data set

We will utilize the MIMIC-III (Medical Information Mart for Intensive Care III) database. This amazing free hospital database contains de-identified data from over 50,000 patients who were admitted to Beth Israel Deaconess Medical Center in Boston, Massachusetts from 2001 to 2012. In order to get access to the data for this project, you will need to request access at this link ().

In this project, we will make use of the following MIMIC III tables:

- ADMISSIONS — a table containing admission and discharge dates (has a unique identifier HADM_ID for each admission)
- NOTEEVENTS — contains all notes for each hospitalization (links with HADM_ID)

To maintain anonymity, all dates have been shifted far into the future for each patient, but the time between two consecutive events for a patient is maintained in the database. This is important as it maintains the time between two hospitalizations for a specific patient. Since this is a restricted dataset, I am not able to publicly share raw patient data. As a result, I will only show you artificial single-patient data or aggregated descriptions.

Step 1: Prepare data for a machine learning project

We will follow the steps below to prepare the data from the ADMISSIONS and NOTEEVENTS MIMIC tables for our machine learning project. First, we load the admissions table using pandas dataframes:

# set up notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# read the admissions table
df_adm = pd.read_csv('ADMISSIONS.csv')

The main columns of interest in this table are SUBJECT_ID, HADM_ID, ADMITTIME, DISCHTIME, DEATHTIME and ADMISSION_TYPE (all used below). The next step is to convert the dates from their string format into a datetime.
We use the errors = 'coerce' flag to allow for missing dates.

# convert to dates
df_adm.ADMITTIME = pd.to_datetime(df_adm.ADMITTIME, format = '%Y-%m-%d %H:%M:%S', errors = 'coerce')
df_adm.DISCHTIME = pd.to_datetime(df_adm.DISCHTIME, format = '%Y-%m-%d %H:%M:%S', errors = 'coerce')
df_adm.DEATHTIME = pd.to_datetime(df_adm.DEATHTIME, format = '%Y-%m-%d %H:%M:%S', errors = 'coerce')

The next step is to get the next unplanned admission date if it exists. This will follow a few steps, and I'll show you what happens for an artificial patient. First we will sort the dataframe by the admission date:

# sort by subject_ID and admission date
df_adm = df_adm.sort_values(['SUBJECT_ID','ADMITTIME'])
df_adm = df_adm.reset_index(drop = True)

The dataframe could look like this now for a single patient. We can use the groupby shift operator to get the next admission (if it exists) for each SUBJECT_ID:

# add the next admission date and type for each subject using groupby
# you have to use groupby otherwise the dates will be from different subjects
df_adm['NEXT_ADMITTIME'] = df_adm.groupby('SUBJECT_ID').ADMITTIME.shift(-1)
# get the next admission type
df_adm['NEXT_ADMISSION_TYPE'] = df_adm.groupby('SUBJECT_ID').ADMISSION_TYPE.shift(-1)

Note that the last admission doesn't have a next admission. But we want to predict UNPLANNED re-admissions, so we should filter out the ELECTIVE next admissions.
# get rows where next admission is elective and replace with NaT or NaN
rows = df_adm.NEXT_ADMISSION_TYPE == 'ELECTIVE'
df_adm.loc[rows,'NEXT_ADMITTIME'] = pd.NaT
df_adm.loc[rows,'NEXT_ADMISSION_TYPE'] = np.NaN

And then backfill the values that we removed:

# sort by subject_ID and admission date
# it is safer to sort right before the fill in case something changed the order above
df_adm = df_adm.sort_values(['SUBJECT_ID','ADMITTIME'])
# back fill (this will take a little while)
df_adm[['NEXT_ADMITTIME','NEXT_ADMISSION_TYPE']] = df_adm.groupby(['SUBJECT_ID'])[['NEXT_ADMITTIME','NEXT_ADMISSION_TYPE']].fillna(method = 'bfill')

We can then calculate the days until the next admission:

df_adm['DAYS_NEXT_ADMIT'] = (df_adm.NEXT_ADMITTIME - df_adm.DISCHTIME).dt.total_seconds()/(24*60*60)

In our dataset with 58976 hospitalizations, there are 11399 re-admissions. For those with a re-admission, we can plot the histogram of days between admissions. Now we are ready to work with NOTEEVENTS.csv:

df_notes = pd.read_csv("NOTEEVENTS.csv")

The main columns of interest are:

- SUBJECT_ID
- HADM_ID
- CATEGORY: includes 'Discharge summary', 'Echo', 'ECG', 'Nursing', 'Physician ', 'Rehab Services', 'Case Management ', 'Respiratory ', 'Nutrition', 'General', 'Social Work', 'Pharmacy', 'Consult', 'Radiology', 'Nursing/other'
- TEXT: our clinical notes column

Since I can't show individual notes, I will just describe them here. The dataset has 2,083,180 rows, indicating that there are multiple notes per hospitalization. In the notes, the dates and PHI (name, doctor, location) have been converted for confidentiality. There are also special characters such as \n (new line), numbers and punctuation. Since there are multiple notes per hospitalization, we need to make a choice on what notes to use. For simplicity, let's use the discharge summary, but we could use all the notes by concatenating them if we wanted.
# filter to discharge summary
df_notes_dis_sum = df_notes.loc[df_notes.CATEGORY == 'Discharge summary']

Since the next step is to merge the notes onto the admissions table, we might assume that there is one discharge summary per admission, but we should probably check this. We can check this with an assert statement, which ends up failing. At this point, you might want to investigate why there are multiple summaries, but for simplicity let's just use the last one:

df_notes_dis_sum_last = (df_notes_dis_sum.groupby(['SUBJECT_ID','HADM_ID']).nth(-1)).reset_index()
assert df_notes_dis_sum_last.duplicated(['HADM_ID']).sum() == 0, 'Multiple discharge summaries per admission'

Now we are ready to merge the admissions and notes tables. I use a left merge to account for when notes are missing. There are a lot of cases where you get multiple rows after a merge (although we dealt with it above), so I like to add assert statements after a merge:

df_adm_notes = pd.merge(df_adm[['SUBJECT_ID','HADM_ID','ADMITTIME','DISCHTIME','DAYS_NEXT_ADMIT','NEXT_ADMITTIME','ADMISSION_TYPE','DEATHTIME']],
                        df_notes_dis_sum_last[['SUBJECT_ID','HADM_ID','TEXT']],
                        on = ['SUBJECT_ID','HADM_ID'],
                        how = 'left')
assert len(df_adm) == len(df_adm_notes), 'Number of rows increased'

10.6% of the admissions are missing a discharge summary (df_adm_notes.TEXT.isnull().sum() / len(df_adm_notes)), so I investigated a bit further with

df_adm_notes.groupby('ADMISSION_TYPE').apply(lambda g: g.TEXT.isnull().sum())/df_adm_notes.groupby('ADMISSION_TYPE').size()

and discovered that 53% of the NEWBORN admissions were missing discharge summaries vs ~4% for the others. At this point I decided to remove the NEWBORN admissions. Most likely, these missing NEWBORN admissions have their discharge summaries stored outside of the MIMIC dataset. For this problem, we are going to classify if a patient will be admitted in the next 30 days.
Therefore, we need to create a variable with the output label (1 = readmitted, 0 = not readmitted):

df_adm_notes_clean['OUTPUT_LABEL'] = (df_adm_notes_clean.DAYS_NEXT_ADMIT < 30).astype('int')

A quick count of positives and negatives results in 3004 positive samples and 48109 negative samples. This indicates that we have an imbalanced dataset, which is a common occurrence in healthcare data science.

The last step to prepare our data is to split the data into training, validation and test sets. For reproducible results, I have made the random_state always 42.

# shuffle the samples
df_adm_notes_clean = df_adm_notes_clean.sample(n = len(df_adm_notes_clean), random_state = 42)
df_adm_notes_clean = df_adm_notes_clean.reset_index(drop = True)
# save 30% of the data as validation and test data
df_valid_test = df_adm_notes_clean.sample(frac = 0.30, random_state = 42)
df_test = df_valid_test.sample(frac = 0.5, random_state = 42)
df_valid = df_valid_test.drop(df_test.index)
# use the rest of the data as training data
df_train_all = df_adm_notes_clean.drop(df_valid_test.index)

Since the prevalence is so low, we want to prevent the model from always predicting negative (not re-admitted). To do this, we have a few options to balance the training data:

- sub-sampling the negatives
- over-sampling the positives
- creating synthetic data (e.g. SMOTE)

Since I didn't make any restrictions on the size of RAM for your computer, we will sub-sample the negatives, but I encourage you to try out the other techniques if your computer or server can handle it, to see if you can get an improvement. (Post as a comment below if you try this out!)

Step 2: Preprocess the unstructured notes using a bag-of-words approach

Now that we have created data sets that have a label and the notes, we need to preprocess our text data to convert it to something useful (i.e. numbers) for the machine learning model. We are going to use the Bag-of-Words (BOW) approach.
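The sub-sampling of negatives chosen at the end of Step 1 can be sketched in pure Python; this is an illustration of the idea rather than the author's pandas code, and the helper name is my own:

```python
import random

def subsample_negatives(labels, seed=42):
    """Return row indices of a balanced training set: all positives
    plus an equal-sized random sample of negatives (labels are 1/0)."""
    pos_idx = [i for i, y in enumerate(labels) if y == 1]
    neg_idx = [i for i, y in enumerate(labels) if y == 0]
    rng = random.Random(seed)                 # fixed seed for reproducibility
    sampled_neg = rng.sample(neg_idx, len(pos_idx))
    balanced = pos_idx + sampled_neg
    rng.shuffle(balanced)                     # avoid all positives first
    return balanced
```

With pandas, the same effect would come from sampling len(positives) rows of the negative subset of df_train_all and concatenating; only the training set is balanced, while validation and test keep the true prevalence so performance estimates stay honest.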
BOW basically breaks up the note into the individual words and counts how many times each word occurs. Your numerical data then becomes counts for some set of words, as shown below. BOW is the simplest way to do NLP classification. In most blog posts I have read, fancier techniques have a hard time beating BOW for NLP classification tasks.

In this process, there are a few choices that need to be made:

- how to preprocess the words
- how to count the words
- which words to use

There is no optimal choice for all NLP projects, so I recommend trying out a few options when building your own models. You can do the preprocessing in two ways:

- modify the original dataframe TEXT column
- preprocess as part of your pipeline so you don't edit the original data

I will show you how to do both of these, but I prefer the second one since it took a lot of work to get to this point. Let's define a function that will modify the original dataframe by filling missing notes with a space and removing newlines and carriage returns:

def preprocess_text(df):
    # This function preprocesses the text by filling not a number and replacing new lines ('\n') and carriage returns ('\r')
    df.TEXT = df.TEXT.fillna(' ')
    df.TEXT = df.TEXT.str.replace('\n',' ')
    df.TEXT = df.TEXT.str.replace('\r',' ')
    return df

# preprocess the text to deal with known issues
df_train = preprocess_text(df_train)
df_valid = preprocess_text(df_valid)
df_test = preprocess_text(df_test)

The other option is to preprocess as part of the pipeline. This process consists of using a tokenizer and a vectorizer. The tokenizer breaks a single note into a list of words and a vectorizer takes a list of words and counts the words. We will use word_tokenize from the nltk package for our default tokenizer, which basically breaks the note based on spaces and some punctuation. An example is shown below:

import nltk
from nltk import word_tokenize
word_tokenize('This should be tokenized. 02/02/2018 sentence has stars**')

With output:

['This', 'should', 'be', 'tokenized', '.', '02/02/2018', 'sentence', 'has', 'stars**']

The default shows that some punctuation is separated and that numbers stay in the sentence. We will write our own tokenizer function to:

- replace punctuation with spaces
- replace numbers with spaces
- lowercase all words

import string

def tokenizer_better(text):
    # tokenize the text by replacing punctuation and numbers with spaces and lowercasing all words
    punc_list = string.punctuation + '0123456789'
    t = str.maketrans(dict.fromkeys(punc_list, " "))
    text = text.lower().translate(t)
    tokens = word_tokenize(text)
    return tokens

With this tokenizer we get from our original sentence:

['this', 'should', 'be', 'tokenized', 'sentence', 'has', 'stars']

Additional things you could do would be to lemmatize or stem the words, but that is more advanced so I'll skip it. Now that we have a way to convert free text into tokens, we need a way to count the tokens for each discharge summary. We will use the built-in CountVectorizer from the scikit-learn package. This vectorizer simply counts how many times each word occurs in the note. There is also a TfidfVectorizer, which takes into account how often words are used across all notes, but for this project let's use the simpler one (I got similar results with the second one too). As an example, let's say we have 3 notes:

sample_text = ['Data science is about the data',
               'The science is amazing',
               'Predictive modeling is part of data science']

Essentially, you fit the CountVectorizer to learn the words in your data and then transform your data to create counts for each word.
```python
from sklearn.feature_extraction.text import CountVectorizer

vect = CountVectorizer(tokenizer = tokenizer_better)
vect.fit(sample_text)

# matrix is stored as a sparse matrix (since you have a lot of zeros)
X = vect.transform(sample_text)
```

The matrix X will be a sparse matrix, but if you convert it to an array (X.toarray()), you will see this:

array([[1, 0, 2, 1, 0, 0, 0, 0, 1, 1],
       [0, 1, 0, 1, 0, 0, 0, 0, 1, 1],
       [0, 0, 1, 1, 1, 1, 1, 1, 1, 0]], dtype=int64)

There are 3 rows (since we have 3 notes) with counts of each word. You can see the column names with vect.get_feature_names():

['about', 'amazing', 'data', 'is', 'modeling', 'of', 'part', 'predictive', 'science', 'the']

We can now fit our CountVectorizer on the clinical notes. It is important to use only the training data, because you don't want to include any new words that show up only in the validation and test sets. There is a hyperparameter called max_features which you can set to constrain how many words are included in the vectorizer; it keeps the top N most frequently used words. In step 5, we will adjust this to see its effect.

```python
# fit our vectorizer. This will take a while depending on your computer.
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer(max_features = 3000, tokenizer = tokenizer_better)

# this could take a while
vect.fit(df_train.TEXT.values)
```

If we look at the most frequently used words, we will see that many of them might not add any value to our model. These words are called stop words, and we can remove them easily (if we want) with the CountVectorizer. There are lists of common stop words for different NLP corpora, but here we will just make up our own based on the most frequent words (shown in the original post as an image), stored in a list called my_stop_words. Feel free to add your own stop words if you want.
```python
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer(max_features = 3000,
                       tokenizer = tokenizer_better,
                       stop_words = my_stop_words)

# this could take a while
vect.fit(df_train.TEXT.values)
```

Now we can transform our notes into numerical matrices. At this point, I will only use the training and validation data, so I'm not tempted to see how the model works on the test data yet.

```python
X_train_tf = vect.transform(df_train.TEXT.values)
X_valid_tf = vect.transform(df_valid.TEXT.values)
```

We also need our output labels as separate variables:

```python
y_train = df_train.OUTPUT_LABEL
y_valid = df_valid.OUTPUT_LABEL
```

As seen by the location of the scroll bar… as always, it takes 80% of the time to get the data ready for the predictive model.

Step 3: Build a simple predictive model

We can now build a simple predictive model that takes our bag-of-words inputs and predicts whether a patient will be readmitted within 30 days (YES = 1, NO = 0). Here we will use a logistic regression model. Logistic regression is a good baseline model for NLP tasks since it works well with sparse matrices and is interpretable. We have a few additional choices (called hyperparameters), including C, a coefficient on the regularization strength, and penalty, which tells how to measure the regularization. Regularization is essentially a technique to minimize overfitting.

```python
# logistic regression
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C = 0.0001, penalty = 'l2', random_state = 42)
clf.fit(X_train_tf, y_train)
```

We can calculate the probability of readmission for each sample with the fitted model:

```python
model = clf
y_train_preds = model.predict_proba(X_train_tf)[:,1]
y_valid_preds = model.predict_proba(X_valid_tf)[:,1]
```

Step 4: Assess the quality of your model

At this point, we need to measure how well our model performed. There are a few different data science performance metrics. I wrote another blog post explaining these in detail if you are interested.
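The metric computations themselves are not shown in the post; for completeness, here is a minimal sketch of computing threshold-based metrics and the AUC with scikit-learn, run on toy labels rather than the real predictions:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_curve, auc

def report(y_true, y_prob, thresh=0.5):
    # hard 0/1 predictions at the chosen probability threshold
    y_pred = (np.asarray(y_prob) >= thresh).astype(int)
    print('precision:', precision_score(y_true, y_pred))
    print('recall:   ', recall_score(y_true, y_pred))
    # each threshold gives one (FPR, TPR) point; plotting TPR against
    # FPR traces the ROC curve, and the area under it is the AUC
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    print('AUC:      ', auc(fpr, tpr))
    return auc(fpr, tpr)

# toy example: two negatives, two positives
auc_val = report([0, 0, 1, 1], [0.2, 0.6, 0.4, 0.8])
```

In the real pipeline you would call `report(y_valid, y_valid_preds)` to get the validation numbers discussed below.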
Since this post is quite long now, I will just show results and figures; you can see the GitHub account for the code used to produce them.

For a threshold of 0.5 for predicting positive, we get the following performance. With the current selection of hyperparameters, we do have some overfitting. One thing to point out is that the major difference between the precision in the two sets of data is due to the fact that we balanced the training set, whereas the validation set keeps the original distribution. Currently, if we make a list of patients predicted to be readmitted, we catch twice as many of them as if we randomly selected patients (precision vs. prevalence).

Another performance metric not shown above is the AUC, or area under the ROC curve. The ROC curve for our current model is shown below. Essentially, the ROC curve allows you to see the trade-off between the true positive rate and the false positive rate as you vary the threshold for what you define as predicted positive vs. predicted negative.

Step 5: Next steps for improving the model

At this point, you might be tempted to calculate the performance on your test set and see how you did. But wait! We made many choices (a few below) which we could change to see if there is an improvement:

- should we spend time getting more data?
- how to tokenize — should we use stemming?
- how to vectorize — should we change the number of words?
- how to regularize the logistic regression — should we change C or penalty?
- which model to use?

When I am trying to improve my models, I read a lot of other blog posts and articles to see how people tackled similar issues. When you do this, you start to see interesting ways to visualize data, and I highly recommend holding on to these techniques for your own projects.

For NLP projects that use BOW + logistic regression, we can plot the most important words to see if we can gain any insight. For this step, I borrowed code from a nice NLP article by Insight Data Science.
When you look at the most important words, two things jump out immediately:

- Oops! I forgot to exclude the patients who died, since 'expired' showed up in the negative list. For now, I will ignore this and fix it below.
- There are also some other stop words we should probably remove ('should', 'if', 'it', 'been', 'who', 'during', 'x').

When we want to improve the model, we want to do it in a data-driven manner. You can spend a lot of time on 'hunches' that don't end up panning out. To avoid this, it is recommended to pick a single performance metric to make your decisions with. For this project, I am going to pick AUC.

For the first question above, we can plot something called a learning curve to understand the effect of adding more data. Andrew Ng has a set of great Coursera classes discussing high-bias vs. high-variance models. We can see that we have some overfitting, but adding more data is likely not going to drastically change the AUC on the validation set. This is good to know, because it means we shouldn't spend months getting more data.

Some simple things we can do are to look at the effect of some of our hyperparameters (max_features and C). We could run a grid search, but since we only have 2 parameters here, we can look at them separately and see the effect. We can see that increasing C and max_features causes the model to overfit pretty quickly. I selected C = 0.0001 and max_features = 3000, where the validation set performance started to plateau. At this point, you could try a few other models or hyperparameter choices, but let's move on.

Step 6: Finalize your model and test it

We will now fit our final model with the selected hyperparameters. We will also exclude the patients who died and re-balance the data.
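The single-parameter sweep described above can be sketched as a simple loop: fit a model per value of C and record train and validation AUC to see where overfitting starts. The data below is a synthetic stand-in for the real bag-of-words matrices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# toy data standing in for X_train_tf / X_valid_tf: the label depends
# mostly on feature 0, plus some noise
rng = np.random.RandomState(42)
X_train = rng.rand(200, 20)
y_train = (X_train[:, 0] + 0.3 * rng.rand(200) > 0.6).astype(int)
X_valid = rng.rand(100, 20)
y_valid = (X_valid[:, 0] + 0.3 * rng.rand(100) > 0.6).astype(int)

# sweep C and compare train vs. validation AUC; a growing gap between
# the two is the signature of overfitting
for C in [1e-4, 1e-2, 1, 100]:
    clf = LogisticRegression(C=C, penalty='l2', random_state=42)
    clf.fit(X_train, y_train)
    auc_tr = roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1])
    auc_va = roc_auc_score(y_valid, clf.predict_proba(X_valid)[:, 1])
    print('C=%g  train AUC=%.3f  valid AUC=%.3f' % (C, auc_tr, auc_va))
```

The same loop works for max_features by refitting the CountVectorizer per value, which is slower since the vectorizer has to re-scan the notes each time.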
```python
rows_not_death = df_adm_notes_clean.DEATHTIME.isnull()
df_adm_notes_not_death = df_adm_notes_clean.loc[rows_not_death].copy()
df_adm_notes_not_death = df_adm_notes_not_death.sample(n = len(df_adm_notes_not_death), random_state = 42)
df_adm_notes_not_death = df_adm_notes_not_death.reset_index(drop = True)

# save 30% of the data as validation and test data
df_valid_test = df_adm_notes_not_death.sample(frac = 0.30, random_state = 42)
df_test = df_valid_test.sample(frac = 0.5, random_state = 42)
df_valid = df_valid_test.drop(df_test.index)

# use the rest of the data as training data
df_train_all = df_adm_notes_not_death.drop(df_valid_test.index)
assert len(df_adm_notes_not_death) == (len(df_test) + len(df_valid) + len(df_train_all)), 'math didnt work'

# re-balance the training data by sub-sampling the negatives, as in step 1
rows_pos = df_train_all.OUTPUT_LABEL == 1
df_train = pd.concat([df_train_all.loc[rows_pos],
                      df_train_all.loc[~rows_pos].sample(n = rows_pos.sum(), random_state = 42)],
                     axis = 0)
df_train = df_train.sample(n = len(df_train), random_state = 42).reset_index(drop = True)

# preprocess the text to deal with known issues
df_train = preprocess_text(df_train)
df_valid = preprocess_text(df_valid)
df_test = preprocess_text(df_test)

# extend the stop word list with the extra words identified above
my_new_stop_words = my_stop_words + ['should', 'if', 'it', 'been', 'who', 'during', 'x']

from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer(lowercase = True, max_features = 3000,
                       tokenizer = tokenizer_better,
                       stop_words = my_new_stop_words)

# fit the vectorizer
vect.fit(df_train.TEXT.values)

X_train_tf = vect.transform(df_train.TEXT.values)
X_valid_tf = vect.transform(df_valid.TEXT.values)
X_test_tf = vect.transform(df_test.TEXT.values)

y_train = df_train.OUTPUT_LABEL
y_valid = df_valid.OUTPUT_LABEL
y_test = df_test.OUTPUT_LABEL

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C = 0.0001, penalty = 'l2', random_state = 42)
clf.fit(X_train_tf, y_train)

model = clf
y_train_preds = model.predict_proba(X_train_tf)[:,1]
y_valid_preds = model.predict_proba(X_valid_tf)[:,1]
y_test_preds = model.predict_proba(X_test_tf)[:,1]
```

This produces the following results and ROC curve.

Conclusion

Congratulations!
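The headline numbers in the conclusion come from figures, but computing the AUC yourself is a one-liner with scikit-learn. A sketch using toy arrays in place of the real `y_test` and `y_test_preds`:

```python
from sklearn.metrics import roc_auc_score

# toy stand-ins for the real test labels and predicted probabilities
y_test = [0, 1, 0, 1, 0, 1]
y_test_preds = [0.2, 0.6, 0.3, 0.8, 0.65, 0.7]

# AUC is the probability that a random positive is scored
# above a random negative
auc_test = roc_auc_score(y_test, y_test_preds)
print('test AUC: %.3f' % auc_test)
```

Running the same call on the train and validation predictions lets you compare all three splits and confirm the model generalizes.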
You built a simple NLP model (AUC = 0.70) to predict readmission from hospital discharge summaries that is only slightly worse than the state-of-the-art deep learning method that uses all hospital data (AUC = 0.75). If you have any feedback, feel free to leave it below.

References

Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. NPJ Digital Medicine (2018). DOI: 10.1038/s41746-018-0029-1.

Want to learn more about using data science and NLP in the healthcare field? Attend ODSC East 2020 this April 13-17 to learn more in person!