Linux Tools Project/TMF/CTF guide This article is not finished. Please do not modify it until this label is removed by its original author. Thank you. This article is a guide about using the CTF component of Linux Tools (org.eclipse.linuxtools.ctf.core). It targets both users (of the existing code) and developers (intending to extend the existing code). You might want to jump directly to examples if you prefer learning this way. Section CTF general information briefly explains CTF while section org.eclipse.linuxtools.ctf.core is a reference for the Linux Tools CTF component classes and interfaces. Contents - 1 CTF general information - 2 org.eclipse.linuxtools.ctf.core - 2.1 Architecture outline - 2.2 Data types - 2.3 Trace parameters - 2.4 Utilities - 2.5 Metadata - 2.6 Reading - 2.7 Writing - 2.8 Trace input and output - 2.9 Translation - 3 Examples CTF general information This section discusses the CTF format. What is CTF? CTF (Common Trace Format) is a generic binary trace format defined and standardized by EfficiOS. Although EfficiOS is the company maintaining LTTng (which uses CTF as its sole output format), CTF was designed as a general purpose format to accommodate basically any tracer (be it software/hardware, embedded/server, etc.). CTF was designed to be very efficient to produce, albeit rather difficult to decode, mostly due to the metadata parsing stage and dynamic scoping support. We distinguish two flavours of CTF in the next sections: binary CTF refers to the official binary representation of a CTF trace and JSON CTF is a plain text equivalent of the same data. Binary CTF anatomy This article does not cover the full specification of binary CTF; the official specification already does this. Basically, the purpose of a trace is to record events. A binary CTF trace, like any other trace, is thus a collection of event data. Here is a binary CTF trace: Stream files A trace is divided into streams, which may span multiple stream files.
A trace also includes a metadata file which is covered later. There can be any number of streams, as long as they have different IDs. However, in most cases (at least at the time of writing this article), there is only one stream, which is divided into one file per CPU. Since different CPUs can generate different events at the same time, LTTng (the only tracer known to produce a CTF output) splits its only stream into multiple files. Please note: a single file cannot contain multiple streams. In the image above, we see 3 stream files: 2 for stream with ID 0 and a single one for stream with ID 1. A stream "contains" packets. This relation can be seen the other way around: packets contain a stream ID. A stream file contains nothing other than packets (no bytes before, between or after packets). Packets A packet is the main container of events. Event data cannot reside outside packets. Sometimes a packet may contain only one event, but it's still inside a packet. Every packet starts with a small packet header which contains stuff like its stream ID (which should always be the same for all packets within the same file) and often a magic number. Immediately following is an optional packet context. This one usually contains even more stuff, like the packet size and content size in bits, the time interval covered by its events and so on. Then: events, one after the other. How do we know when we reach the end of the packet? We simply track the current offset into the packet until it equals the content size defined in its context. Events An event isn't just a bunch of payload bits. We have to know what type of event it is, and sometimes other things. Here's the structure of an event: The event header contains the time stamp of the event and its ID. Knowing its ID, we know the payload structure. Both contexts are optional. The per-stream context exists in all events of a stream if enabled.
The per-event context exists in all events with a given ID (within a certain stream) if enabled. Thus, the per-stream context is enabled per stream and the per-event context is enabled per (stream, event type) pair. Please note: there is no stream ID written anywhere in an event. This means that an event "outside" its packet is lost forever since we cannot know anything about it. This is not the case for a packet: since it has a stream ID field in its header (and the packet header structure is common for all packets of all streams of a trace), a packet is independent and could be cut and pasted elsewhere without losing its identity. Metadata file The metadata file (must be named exactly metadata if stored on a filesystem according to the official CTF specification) describes all the trace structure using TSDL (Trace Stream Description Language). This means the CTF format is self-describing. When "packetized" (like in the CTF structure image above), a metadata packet contains a fixed metadata packet header (defined in the official specification) and no context. The metadata packet does not contain events: all its payload is a single text string. By concatenating all the packet payloads, we get the final metadata text. In its simpler form, the metadata file can be a plain text file containing only the metadata text. This file is still named metadata. It is valid and recognized by CTF readers. The way to differentiate the packetized from the plain text versions is that the former starts with a magic number which has "non text bytes" (control characters). In fact, it is the magic number field of the first packet's header. All the metadata packets have this required magic number. CTF types The CTF types are data types that may be specified in the metadata and written as binary data into various places of stream files. In fact, anything written in the stream files is described in the metadata and thus is a CTF type.
Valid types are the following: - simple types - integer number (any length) - floating point number (any lengths for mantissa and exponent parts) - strings (many character sets available) - enumeration (mapping of string labels to ranges of integer numbers) - compound types - structure (collection of key/value entries, where the key is always a string) - array (length fixed in the metadata) - sequence (dynamic length using a linked integer) - variant (placeholder for some other possible types according to the dynamic value of a linked enumeration) JSON CTF As a means to keep test traces in a portable and versionable format, a specific schema of JSON was developed in summer 2012. Its purpose is to be able to do the following round trip, with binary CTF traces A and B being binary identical (except for padding bits and the metadata file). About JSON JSON is a lightweight text format used to define complex objects, with only a few data types, common to most languages: objects (aka maps, hashes, dictionaries, property lists, key/value pairs) with ordered keys, arrays (aka lists), Unicode strings, integer and floating point numbers (no limitation on precision), booleans and null. Here's a short example of a JSON object showing all the language features: { "firstName": "John", "lastName": "Smith", "age": 25, "male": true, "carType": null, "kids": [ "Moby Dick", "Mireille Tremblay", "John Smith II" ], "infos": { "address": { "streetAddress": "21 2nd Street", "city": "New York", "state": "NY", "postalCode": "10021" }, "phoneNumber": [ { "type": "home", "number": "212 555-1234" }, { "type": "fax", "number": "646 555-4567" } ] }, "balance": 3482.15, "lovesXML": false } A JSON object always starts with {. The root object is unnamed. All keys are strings (must be quoted). Numbers may be negative. You may basically have any structure: array of arrays of objects containing objects and arrays.
Even big JSON files are easy to read, but a tree view can always be used for even more clarity. Why not use XML, then? From the official JSON website: - Simplicity: JSON is way simpler than XML and is easier to read for humans, too. Also: JSON has no "attributes" belonging to nodes. - Less redundancy: why would you bother closing a tag already opened with the same name in XML when it can be written only once in JSON? Compare both: <cart user="john"> <item>asparagus</item> <item>pork</item> <item>bananas</item> </cart> { "user": "john", "items": [ "asparagus", "pork", "bananas" ] } Schema The "dictionary" approach of JSON objects makes it very convenient to store CTF structures since they are exactly that: ordered key/value pairs where the key is always a string. Arrays and sequences can be represented by JSON arrays (the dynamic length of CTF sequences is not needed in JSON since the closing ] indicates the end of the array, whereas CTF arrays/sequences must know the number of elements before starting to read). CTF integers and enumerations are represented as JSON integer numbers; CTF floating point numbers as a JSON object containing two integer numbers for mantissa/exponent parts (to keep precision that would be lost by using JSON floating point numbers). To fully understand the developed JSON schema, we use the same subsection names as in Binary CTF anatomy. Stream files All streams of a JSON CTF trace fit into the same file, which by convention has a .json extension. So: a binary CTF trace is a directory whereas a JSON CTF trace is a single file. There is no such thing as a "stream file" in JSON CTF. The JSON file looks like this: { "metadata": "we will see this later", "packets": [ "next subsection" ] } The packets node is an array of packet nodes which are ordered by first time stamp (this is found in their context). Binary CTF stream files can still be rebuilt from a JSON CTF trace since a packet header contains its stream ID.
It's just a matter of reading the packet objects in order, converting them to binary CTF, and appending the data to the appropriate files according to the stream ID and the CPU ID. Packets Packet nodes (which are elements of the aforementioned packets JSON array) look like this: { "header": { }, "context": { }, "events": [ "event nodes here" ] } Of course, the header and context fields contain the appropriate structures. The events node is an array of event nodes. Here is a real world example: { "header": { "magic": 3254525889, "uuid": [30, 20, 218, 86, 124, 245, 157, 64, 183, 255, 186, 197, 61, 123, 11, 37], "stream_id": 0 }, "context": { "timestamp_begin": 1735904034660715, "timestamp_end": 1735915544006801, "events_discarded": 0, "content_size": 2096936, "packet_size": 2097152, "cpu_id": 0 }, "events": [ ] } In this last example, event nodes are omitted to save space. The key names of nodes header and context are the exact same ones that are declared in the metadata text. The context node is optional and may be absent if there's no packet context. Events Event nodes (which are elements of the aforementioned events JSON array) have a structure that's easy to guess: header, optional per-stream context, optional per-event context and payload. Nodes are named like this: { "header": { }, "streamContext": { }, "eventContext": { }, "payload": { } } Here is a real world example: { "header": { "id": 65535, "v": { "id": 34, "timestamp": 1735914866016283 } }, "payload": { "_comm": "lttng-consumerd", "_tid": 31536, "_delay": 0 } } No context is used in this particular example. Again, the key names of nodes header, streamContext, eventContext and payload are the exact same ones that are declared in the metadata text. Metadata file Back to the JSON CTF root node. It contains two keys: metadata and packets. We already covered packets in section Packets. The metadata node is a single JSON string.
It's either external:somefile.tsdl, in which case file somefile.tsdl must exist in the same directory and contain the whole metadata plain text, or the whole metadata text in a single string. The latter means all new lines and tabs must be escaped with \n and \t, for example. Since the metadata text of a given trace may be huge (often several hundred kilobytes), it might be a good idea to make it external for human readability. However, if portability is the primary concern, having a single JSON text file is still possible using this technique. There is never going to be a collision between the string external: and the beginning of a real TSDL metadata text since external is not a TSDL keyword. org.eclipse.linuxtools.ctf.core This section describes the Java software architecture of org.eclipse.linuxtools.ctf.core. This package and all its subpackages contain code to read/write both binary CTF and JSON CTF formats, translate from one to another and support other input/output languages. Please note up-to-date JavaDoc is also available for all classes, interfaces and enumerations here. Architecture outline Java classes are packaged in a way that makes it easier to discern the different, relatively independent parts. The following general class diagram shows the categories and their content: Strictly speaking, the "TSDL parsing" subcategory of "Metadata" is not part of org.eclipse.linuxtools.ctf.core: it is located in org.eclipse.linuxtools.ctf.parser. However, the two classes TSDLLexer and TSDLParser are automatically generated by ANTLr and are not part of this article's content. The following sections explain each category seen in the above diagram. The order in which they come is important. Security Since performance is the primary concern in this CTF decoding context, copying is avoided as much as possible and security wasn't taken into account during the implementation phase.
This means developers must carefully follow the guidelines of this article in order to correctly operate the whole mechanism without breaking anything. For example, one is able to clear the payload structure of an Event object obtained while reading a trace, but this will cause exceptions when reading the next event of the same type since those objects are reused and only valid between seeking calls. This is all explained in the following sections, but the rule of thumb is: do not assume what you get is a copy. Deep copy constructors exist and work anyway for all data types: all simple/compound definitions, events and packet information. It is the user's responsibility to create deep copies when needed; avoiding copies by default keeps performance maximal and memory usage minimal. Data types Data types refer to the actual useful data contained in traces. It is what the user gets when reading a trace and what must be provided when writing a trace. CTF types Specific types are omitted from the general class diagram since there are too many. The CTF type classes are analogous to the already described CTF types. All types have a declaration class and a definition class. The declaration contains declarative information; information that doesn't change from one type value to the other. A definition contains the actual type value, whatever it is. For example, an integer declaration contains its length in bits (found in the metadata text), while an integer definition contains its integer value. This way, multiple definitions may have a link to the same declaration reference. All definitions have a link to a declaration reference. The primary purpose of a declaration is to create an empty definition. An integer declaration creates an empty integer definition, a structure declaration creates an empty structure definition, and so on. Once created, the definitions are ready to read, except for a few dynamic scoping details that are covered later in this article.
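The declaration/definition split can be sketched as follows. All class names here (IDecl, IntDecl, IntDef) are simplified stand-ins for illustration, not the real org.eclipse.linuxtools.ctf.core classes:

```java
// Simplified sketch of the declaration/definition pattern described above.
// These are illustrative stand-ins, NOT the real library classes.
interface IDecl {
    Def createDefinition(); // a declaration's primary purpose
}

abstract class Def {
    // each definition keeps a reference to its (shared) declaration
}

// Declarative information: things that do not change between values,
// like an integer's length in bits (read from the metadata text).
class IntDecl implements IDecl {
    final int lengthInBits;
    IntDecl(int lengthInBits) { this.lengthInBits = lengthInBits; }
    public Def createDefinition() { return new IntDef(this); }
}

// The actual value: one definition per decoded integer, all pointing
// to the same declaration instance.
class IntDef extends Def {
    final IntDecl decl;
    long value; // set by the reader while the trace is decoded
    IntDef(IntDecl decl) { this.decl = decl; }
}

public class DeclDefSketch {
    public static void main(String[] args) {
        IntDecl decl = new IntDecl(32);               // one declaration...
        IntDef a = (IntDef) decl.createDefinition();
        IntDef b = (IntDef) decl.createDefinition();  // ...many definitions
        a.value = 7;
        b.value = 42;
        assert a.decl == b.decl; // definitions share the declaration reference
    }
}
```

The point of the pattern: declarative data (lengths, alignments) lives once, while values live per decoded occurrence.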
This creational pattern is found in IDeclaration, which all type declaration classes implement. Specific declarations contain specific declarative information. The types are: - simple types - compound types If you only intend to read traces or translate them, you will never have to create definitions yourself: the reader you will use will take care of that. Creating definitions from scratch (bypassing the declarations) is a little painful and only useful when creating a synthetic trace from scratch. However, the preferred approach for this is to generate JSON CTF text using Java or any interpreter like Python and then translating it to binary CTF using the facilities of org.eclipse.linuxtools.ctf.core. All definitions inherit from the abstract class Definition. This forces them to implement, amongst other methods, read(String, IReader) and write(String, IWriter). This is the entry point of genericity regarding input/output trace formats. Simple types will only call the appropriate IReader or IWriter method, passing a reference to this. However, compound types may iterate over their items and recursively call their read/write methods. Definitions must also implement copyOf() which returns a Definition. This method creates a deep copy of this and is used by copy constructors of each definition to achieve proper deep copy without casting. Scope nodes Sequences are arrays of a specific CTF type with a dynamic length. The length is the value of "some integer somewhere". Basically, this integer value can be anywhere in the data already read, and its exact location is described in the metadata text. Without knowing the length prior to reading the sequence, we cannot know when to stop reading items since there's no "end of sequence" mark in binary CTF. In JSON CTF, it's another story: all arrays end with character ], so the dynamic length value is not really needed in this case.
Here are a few places where the length could be: - any already read integer of the current event payload - any integer in the header, per-stream context or per-event context of the current event - any integer in the header or the context of the current packet Variants also need a dynamic value: they need the dynamic current label of a specific enumeration, which may be located in any of the aforementioned places. Using this label, the right CTF type can be selected amongst possible ones (since variants are just placeholders). Knowing this, we see that sequences and variants need to get the current value of some definition before starting a read operation. This definition must be linked, and this binding is usually done at the time of opening a trace using scope nodes. Scope nodes form the basis of a scope tree. A scope tree is a simple tree in which some nodes are linked to definitions. The definitions also have a reference to their scope node. Using this, a definition may ask its scope node to resolve a path to another scope node. Scope nodes know their parents and children, so traversing the tree is possible in both directions. The purpose of the scope tree is to link sequences and variants with the correct reference, rather than getting the needed value by traversing the tree at each read. Once sequences and variants have a reference, they only need to get the current value prior to reading. Thus, tree traversal is only done once at trace opening time. Example When opening the trace and building the events, the scope tree will usually also be built. For a specific event, the scope tree might look like this: Blue nodes are the ones with a linked definition and grey ones have none (they only exist for organization and location). The scope tree is built as the definitions are created. Paths are relative when they contain no dot and absolute when they have at least one. An absolute path starts with one of the second level labels: trace, stream, event or env.
So, from the point of view of node b in the above diagram: - relative path a is equivalent to absolute path event.fields.complex.a - relative path count is equivalent to absolute path event.fields.count - the current stream ID path is trace.packet.header.stream_id (cannot be reached with a relative path) - the path of the current starting time stamp of the current packet is stream.packet.context.timestamp_begin (cannot be reached with a relative path) From the point of view of node v, relative path id is equivalent to absolute path stream.event.header.id. Packet information A packet information is a header and an optional context. It does not contain an array of events; events are read one at a time. Class PacketInfo makes those two structures available thanks to methods getHeader() and getContext(). Since the packet context is optional, getContext() returns null if there is no context. It is also possible to get the CPU ID, the first and last time stamps and the stream ID from the packet. Those values are read from the packet context (if it exists) when calling updateCachedInfos(). If you are using any provided reader, you do not need to care about this last method; when you ask for a packet information, the cached information will always be synchronized with the inner context structure. Caching this information avoids some tree traversal each time one of these values is desired (which may happen a lot in typical use cases). If you are developing a new reader, you need to manually call updateCachedInfos() every time the packet information context structure is updated. Make sure not to modify the header and context structures when getting references to them since no copy is created and the structures are reused afterwards when using the framework facilities. Instead, if you really need to modify or keep the original information for a long time, make a packet information deep copy using the copy constructor.
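The reuse rule can be demonstrated with a toy model. MockPacketInfo below is a hypothetical stand-in for the real PacketInfo (which provides an analogous deep copy constructor); the reader-side reuse is simulated with a single static instance:

```java
// Toy illustration of the "structures are reused" rule described above.
// MockPacketInfo is a stand-in, NOT the real PacketInfo class.
class MockPacketInfo {
    long timestampBegin;

    MockPacketInfo() {}

    // Deep copy constructor, analogous to the one the library provides.
    MockPacketInfo(MockPacketInfo other) {
        this.timestampBegin = other.timestampBegin;
    }
}

public class ReuseSketch {
    // A reader typically keeps a single instance and overwrites it for
    // every packet, instead of allocating a fresh object each time.
    private static final MockPacketInfo current = new MockPacketInfo();

    static MockPacketInfo getCurrentPacketInfo(long ts) {
        current.timestampBegin = ts; // same object, new content
        return current;
    }

    public static void main(String[] args) {
        MockPacketInfo first = getCurrentPacketInfo(100);
        MockPacketInfo keep = new MockPacketInfo(first); // deep copy: safe to keep

        MockPacketInfo second = getCurrentPacketInfo(200);
        assert first == second;             // same reused instance!
        assert first.timestampBegin == 200; // the old reference was clobbered
        assert keep.timestampBegin == 100;  // only the deep copy survived intact
    }
}
```

This is why holding on to a returned reference across seeking calls is unsafe, while holding on to a deep copy is fine.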
Event An event is analogous to a CTF event: header, optional per-stream and per-event contexts, payload. It also contains: - its name (from the metadata text) - the ID of the stream it belongs to - its own ID within that stream - its computed real time stamp (in cycles) Class Event makes all this available. Since both contexts are optional, getContext() and getStreamContext() return null if there is no context. Make sure not to modify the inner structures when getting references to them since no copy is created and the structures are reused afterwards when using the framework facilities. Instead, if you really need to modify or keep the original information for a long time, make an event deep copy using the copy constructor. Trace parameters The trace parameters represent the universal token for data exchange between lots of classes, especially inputs, outputs, readers and writers. Those parameters are the following and are common to all CTF traces: - metadata plain text - version major number - version minor number - trace UUID - trace default byte order - map of names (string) to clock descriptors (ClockDescriptor) - one environment descriptor (which is a map of name (string) to string or integer values; EnvironmentDescriptor) - map of stream IDs to stream descriptors (StreamDescriptor) Because all of them share the same trace parameters, the communication between the different parts of the library is reduced and thus classes are more decoupled. In normal use, the input is responsible for filling the trace parameters with what is found in the metadata text. Once a trace is opened, you can get the trace parameters using TraceInput's getTraceParameters(). Utilities As is the case with many projects, the CTF component of Linux Tools has its own set of utilities. Class Utils includes several static methods to factor out some code. Other utilities have their own classes.
General utilities Class Utils has the following static methods used here and there: - unsigned comparison of two Java long integers - byte array to UUID object - UUID object to byte array - creation of an unsigned IntegerDefinition with specific size, alignment, byte order and initial value - creation of several specific IntegerDefinition objects (e.g. 32-bit (8-bit aligned), 8-bit (8-bit aligned), etc.) - array of IntegerDefinition (bytes) from a UTF-8 string Trace parameters MTA visitor Class TraceParametersMTAVisitor implements IMetadataTreeAnalyzerVisitor and binds the metadata tree analyzer (covered later) to the trace parameters. Time stamp accumulator Computing the time stamp of a given event uses a special algorithm developed for LTTng. For the moment, this is the de facto method of computing it since LTTng is the only real tracer natively producing CTF traces. The steps are not detailed here because class TimeStampAccumulator implements them anyway. An instance of TimeStampAccumulator accepts an absolute time stamp. It also accepts a relative time stamp, after which the absolute time stamp is computed from the previous and new data. It can then return the current time stamp in cycles, seconds or Java Date. The time stamp accumulator is used by the readers when setting an Event's real time stamp. It's also used by the translator when translating a range. Metadata As a reminder, the metadata text is written in TSDL and fully describes the structures of a CTF trace. A specific text exists for each individual trace. Since TSDL is a formal domain-specific language, it is parsable. In this project, ANTLr is used to generate a lexer and a parser in Java from descriptions of tokens and the grammar of TSDL. The work done by those generated TSDL lexer and parser (TSDLLexer and TSDLParser in package org.eclipse.linuxtools.ctf.parser) is called the parsing stage and produces a metadata abstract syntax tree (AST).
A metadata AST is a tree containing all the tokens found in the metadata text. The tree structure makes it (relatively) easy to traverse and to analyze the semantics of the metadata (the syntax is okay, but does it make any sense?). AST analysis The MetadataTreeAnalyzer class (MTA) implements the metadata analysis stage. The metadata AST is rigorously analyzed and higher-level data is extracted from it. This higher-level data is passed to a visitor. In fact, the visitor is not a real visitor (design-patternly speaking), but we couldn't find a better name. IMetadataTreeAnalyzerVisitor declares what needs to be in any MTA visitor. MetadataTreeAnalyzer only has one public method, and that is enough: analyze(). When building an instance of it, you pass to the constructor the only two things this analyzer needs: the metadata AST (an ANTLr class called CommonTree) and the visitor. Then, when calling analyze(), the whole metadata AST will be traversed and, as useful data is found, the appropriate visitor methods will be called with it. Here are a few methods of IMetadataTreeAnalyzerVisitor to give you an idea: public void addClock(String name, UUID uuid, String description, Long freq, Long offset); public void addEnvProperty(String key, String value); public void addEnvProperty(String key, Long value); public void setTrace(Integer major, Integer minor, UUID uuid, ByteOrder bo, StructDeclaration packetHeaderDecl); public void addStream(Integer id, StructDeclaration eventHeaderDecl, StructDeclaration eventContextDecl, StructDeclaration packetContextDecl); public void addEvent(Integer id, Integer streamID, String name, StructDeclaration context, StructDeclaration fields, Integer logLevel); This interface also features lots of info*() methods which are called with the current analyzed node for progress information. No useful data is to be read within these methods.
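To make the calling convention concrete, here is a toy analyzer/visitor pair. MiniVisitor and MiniAnalyzer are simplified stand-ins; the real MetadataTreeAnalyzer traverses an ANTLr CommonTree, not a flat list:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-ins for IMetadataTreeAnalyzerVisitor / MetadataTreeAnalyzer:
// the analyzer owns the traversal, the visitor only receives the results.
interface MiniVisitor {
    void addEnvProperty(String key, String value);
}

class MiniAnalyzer {
    private final List<String[]> fakeAst; // stands in for the ANTLr CommonTree
    private final MiniVisitor visitor;

    MiniAnalyzer(List<String[]> fakeAst, MiniVisitor visitor) {
        this.fakeAst = fakeAst;
        this.visitor = visitor;
    }

    // The single public entry point, like MetadataTreeAnalyzer.analyze():
    // traverse the tree and push extracted data to the visitor.
    void analyze() {
        for (String[] node : fakeAst) {
            visitor.addEnvProperty(node[0], node[1]);
        }
    }
}

public class MtaSketch {
    public static void main(String[] args) {
        List<String[]> ast = new ArrayList<>();
        ast.add(new String[] {"hostname", "myhost"});

        List<String> seen = new ArrayList<>();
        new MiniAnalyzer(ast, (k, v) -> seen.add(k + "=" + v)).analyze();
        assert seen.equals(List.of("hostname=myhost"));
    }
}
```

The inversion is the important part: user code never walks the AST itself; it only implements the callbacks.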
Utility class TraceParametersMTAVisitor implements IMetadataTreeAnalyzerVisitor and acts as a bridge between the metadata tree analyzer and trace parameters. As it is called by the MTA, it fills a TraceParameters object with the information. The MTA also checks the semantics as it traverses and analyzes the metadata AST. For example, it will throw an exception with the line number if it finds two trace blocks, two structure fields with the same key name, an attribute that's unknown for a certain block type, etc. However, it won't throw exceptions in the following situations: - multiple clocks with the same name - multiple environment variables with the same name - multiple streams with the same ID - multiple events with the same ID This must be checked or ignored by the visitor. Strings Anywhere in the library where a string token that is part of TSDL is needed, taking it from abstract class MetadataStrings is preferred. For field names that are common to all metadata texts, MetadataFieldNames can be used (e.g. timestamp_begin for packet contexts). Reading Readers are those objects parsing some CTF subformat and filling Event and PacketInfo structures. Readers are not, however, used directly by a user wanting to read a trace. They could be, but the outer procedures (mainly during the opening stage) needed prior to using any reader reside in trace inputs. Trace inputs are the interfaces (software interfaces, not Java interfaces) between a user and a reader. They are discussed in section Trace input and output. During the design phase of this generic CTF Java library, two main use cases were identified regarding reading: streaming and random access. The following two sections explain the difference between the two and in what situations a user needs one or the other. Both types of readers have their own interface which inherits from IReader.
The shared methods are worth mentioning here: public void openTrace() throws ReaderException; public void closeTrace() throws ReaderException; public String getMetadataText() throws ReaderException; public void openStreams(TraceParameters params) throws ReaderException; public void closeStreams() throws ReaderException; public void openStruct(StructDefinition def, String name) throws ReaderException; public void closeStruct(StructDefinition def, String name) throws ReaderException; public void openVariant(VariantDefinition def, String name) throws ReaderException; public void closeVariant(VariantDefinition def, String name) throws ReaderException; public void openArray(ArrayDefinition def, String name) throws ReaderException; public void closeArray(ArrayDefinition def, String name) throws ReaderException; public void openSequence(SequenceDefinition def, String name) throws ReaderException; public void closeSequence(SequenceDefinition def, String name) throws ReaderException; public void readInteger(IntegerDefinition def, String name) throws ReaderException; public void readFloat(FloatDefinition def, String name) throws ReaderException; public void readEnum(EnumDefinition def, String name) throws ReaderException; public void readString(StringDefinition def, String name) throws ReaderException; What you see first is that all readers, be they streamed or random access, must be able to open the trace and close it (i.e. getting and releasing resources). They must also know how to get the complete metadata text. For binary CTF, this means concatenating all text packets of the metadata file as seen previously. For JSON CTF, it simply means returning the whole content of the external metadata file pointed to by the metadata node, or the whole node text if the metadata text is embedded. Once the reader owner (usually a trace input) has the metadata text, it can parse it and use a metadata tree analyzer to fill trace parameters.
This is why those trace parameters only come to the reader afterwards, when opening all its streams. The information about streams to open is located in the map of StreamDescriptor inside TraceParameters. Now all readers must be able to read all CTF types. As you may notice in the previous declarations, compound types have open* and close* methods while simple types only have read* methods. This is because the real data is never a compound type. A structure may have 3 embedded arrays of structures containing integers, but all this organization is to be able to find said integers, because they have the values we need. A compound CTF type has the following reading algorithm: - call the appropriate reader's open* method - for each contained definition (extends Definition), in order - call its read(String, IReader) - call the appropriate reader's close* method A simple CTF type has the following reading algorithm: - call the appropriate reader's read* method Here is a sequence diagram showing a structure of integers being read: The initial call to StructDefinition.read(String, IReader) is from another method in the reader. This might be, for example, from a method reading the context of a packet. In this case, the reader has a reference to the packet context's StructDefinition and will call: packetContextDefinition.read(null, this); When this call returns, the whole sequence shown in the above diagram will have been executed, with underlying method calls to the same reader. The String parameters everywhere are for exchanging the field names. Only StructDefinitions should have their read(String, IReader) method called directly by a reader. The reading methods of other types will be called because they are, ultimately, children of a structure. This is why the initial direct call passes null as the field name: a packet context or an event header, for example, do not have any name. In other words, they are not part of a parent structure.
They could always be called context and header, but this is useless because those names are absolutely known and part of the TSDL semantics anyway.

Now, on with the above sequence diagram. Suppose we are a reader and need to read a structure definition myStructure containing the following ordered integers: a, b and c.

Step 1 is the following method call:

myStructure.read(null, this);

Step 2: the structure definition calls our openStruct(StructDefinition, String) method. Received parameters are the reference to this structure (equivalent to myStructure here) and null. Here, we do whatever we need to open the structure. For example:

- binary CTF: align our current bit buffer to the structure alignment value
- JSON CTF: the JSON object node corresponding to this structure is pushed as the current JSON context node

Step 3: the integer definition corresponding to name a has its read(String, IReader) called (the first parameter is a, the second is the same reference to the reader that we are).

Step 4: this integer definition calls our readInteger(IntegerDefinition, String), passing this and a. Now we have to read an integer value and set the definition's current value to it. Examples:

- binary CTF: use the current bit buffer to read the integer value
- JSON CTF: search for the a node (within the current context node) and read its value (convert the text number to an integer value)

Steps 5 and 6 are just call returns. Steps 3 to 6 are repeated for definitions b and c.

Step 7: the structure definition calls our closeStruct(StructDefinition, String) method. Received parameters are the reference to this structure (equivalent to myStructure here) and null. Here, we do whatever we need to close the structure. For example:

- binary CTF: do absolutely nothing
- JSON CTF: pop the current context node

Step 8: back to where we were at step 1.
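The JSON CTF side of the steps above amounts to maintaining a stack of context nodes. Here is a hypothetical Java sketch of that idea, with a plain Map standing in for a Jackson tree node (the real reader's node type and method names differ; this only illustrates the push/lookup/pop behavior):

```java
import java.util.*;

public class JsonContextExample {
    // A JSON object node is modeled as a plain Map here; the real JSON CTF
    // reader would use a Jackson tree node instead. Illustrative stand-in only.
    private final Deque<Map<String, Object>> contextStack = new ArrayDeque<>();

    void openStruct(Map<String, Object> structNode) { contextStack.push(structNode); }
    void closeStruct()                              { contextStack.pop(); }

    long readInteger(String name) {
        // Search the named node within the current context node and
        // convert its text number to an integer value.
        Object v = contextStack.peek().get(name);
        return Long.parseLong(v.toString());
    }

    static long sumAbc() {
        Map<String, Object> packetContext = new HashMap<>();
        packetContext.put("a", "1");
        packetContext.put("b", "2");
        packetContext.put("c", "3");
        JsonContextExample reader = new JsonContextExample();
        reader.openStruct(packetContext);     // step 2: push the context node
        long sum = reader.readInteger("a")    // steps 3-6, repeated for a, b, c
                 + reader.readInteger("b")
                 + reader.readInteger("c");
        reader.closeStruct();                 // step 7: pop the context node
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumAbc()); // 6
    }
}
```

Because lookup is by key within the current context node, the member order inside a JSON structure does not matter, which is exactly what makes hand-editing possible.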
All this mechanism might seem bloated, but a choice had to be made: do we browse the items of compound types in the readers or in the compound type definitions? If we decide to browse them in the readers, none of the types need read/write methods; however, the browsing logic needs to be repeated in every reader. The chosen technique seems like the most generic one. Of course this last example is very simple; the whole approach is much more appreciated when we have to deal with arrays of structures containing 2-3 levels of other compound types. Implementing a new reader becomes easier because you don't have to think about recursion: it is managed by the architecture.

Streamed reading

Streamed reading means reading the resources forward and never having to go back, whatever those resources are. For binary CTF, they are stream files. For JSON CTF, the single JSON file. We could also imagine a network reader which receives CTF packets and, once the application is done with their content, dumps them. In this case, no backward seeking is needed. Interface IStreamedReader only adds 4 methods to IReader:

public PacketInfo getCurrentPacketInfo() throws ReaderException;
public void nextPacket() throws ReaderException, NoMorePacketsException;

public Event getCurrentPacketEvent() throws ReaderException;
public void nextPacketEvent() throws ReaderException, NoMoreEventsException;

In a streamed reader, you don't read all events of a trace: you read all packets of a trace and, for each packet, all contained events. This means events do not come in order of time stamp, but packets do. Method getCurrentPacketInfo() must return null if there are no more packets. If the initial call to getCurrentPacketInfo() returns null, the trace is empty (does not have a single packet). If getCurrentPacketInfo() returns a PacketInfo instance, this packet is the current one (hence "current packet info").
The values in this object are good until the next call to nextPacket(), which seeks to the next packet. Do not call nextPacket() if getCurrentPacketInfo() returns null; you will get a NoMorePacketsException. If, as a reader user, you want to keep a PacketInfo object for a long time (being able to call nextPacket() again without modifying it), you need to get a deep copy of it using its copy constructor:

PacketInfo myCopy = new PacketInfo(referenceToThePacketInfoComingFromTheReader);

Method getCurrentPacketEvent() returns the current event of the current packet. The current packet is the one represented by the PacketInfo returned by getCurrentPacketInfo(). This means that as soon as a reader opens its streams, getCurrentPacketEvent() should return the first event of the first packet. If, at any moment, getCurrentPacketEvent() returns null, it means the current packet has no more events. If this happens right after a call to nextPacket(), it means the current packet has no events at all. But it could also mean that there is no current packet at all (end of trace reached). Use getCurrentPacketInfo() first to be sure.

This way of reading packets and events at two separate levels enables easy translation between formats. Since a packet is the biggest unit of data in CTF, there must exist a way to read one exactly as it is in the trace. This reader behavior is not suitable for all situations. If you want the reader to read events in order of time stamp and do not care about packets, use a random access reader.

The JSON CTF streamed reader, JSONCTFStreamedReader, uses Jackson, a high-performance JSON processor, to parse the JSON CTF file. Jackson has the advantage of providing two combinable ways of reading JSON: streaming (getting one token at a time) and a tree model. Since JSON CTF files can rapidly grow big, the streaming method is used as much as possible. Only small nodes (e.g. event nodes) are converted to a tree model to make traversing easier.
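The two-level reading pattern described above can be sketched as follows. This is a self-contained Java mock: FakeStreamedReader and its data are simplified stand-ins for the library's IStreamedReader, PacketInfo and Event, kept only to show the outer packet loop and the inner per-packet event loop.

```java
import java.util.*;

public class StreamedLoopExample {
    // Stand-in for the library's PacketInfo: a packet holding its events.
    static class PacketInfo {
        final List<String> events;
        PacketInfo(List<String> e) { events = e; }
    }

    // A fake streamed reader over two in-memory packets.
    static class FakeStreamedReader {
        private final List<PacketInfo> packets = Arrays.asList(
                new PacketInfo(Arrays.asList("ev0", "ev1")),
                new PacketInfo(Arrays.asList("ev2")));
        private int packetIndex = 0, eventIndex = 0;

        PacketInfo getCurrentPacketInfo() {
            // null means "no more packets", as specified above.
            return packetIndex < packets.size() ? packets.get(packetIndex) : null;
        }
        void nextPacket() { packetIndex++; eventIndex = 0; }
        String getCurrentPacketEvent() {
            PacketInfo p = getCurrentPacketInfo();
            return (p != null && eventIndex < p.events.size())
                    ? p.events.get(eventIndex) : null;
        }
        void nextPacketEvent() { eventIndex++; }
    }

    static List<String> readAll() {
        FakeStreamedReader reader = new FakeStreamedReader();
        List<String> seen = new ArrayList<>();
        // Outer loop: all packets of the trace, in order.
        while (reader.getCurrentPacketInfo() != null) {
            // Inner loop: all events of the current packet.
            while (reader.getCurrentPacketEvent() != null) {
                seen.add(reader.getCurrentPacketEvent());
                reader.nextPacketEvent();
            }
            reader.nextPacket();
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(readAll()); // [ev0, ev1, ev2]
    }
}
```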
Using a tree model, members of CTF structures do not have to be in any particular order, as long as they keep the same names (keys). This is because, contrary to streamed parsing, random access to any node is possible using a tree model. This feature makes human editing of JSON CTF files easier.

Random access reading

Random access readers are used whenever a user wants to read events in order of time stamp (as they were recorded) or seek to a specific time location in the trace. Reading events in order of time stamp means the current event may be in packet A while the next one is in packet B. This is because tracers may record with multiple CPUs at the same time and produce packets with interlaced events. IRandomAccessReader adds the following methods to IReader:

public Event getCurrentEvent() throws ReaderException;
public void advance() throws ReaderException;
public void seek(long ts) throws ReaderException;

Whereas you must have two loops (read all packets and, for each packet, read all events) with a streamed reader, you only need one with a random access reader:

while (myRandomAccessReader.getCurrentEvent() != null) {
    Event ev = myRandomAccessReader.getCurrentEvent();

    // Do whatever you want with the data of ev

    myRandomAccessReader.advance();
}

The seek(long) method seeks to a specific time stamp. Here, just like in all places of this library where a long time stamp is required, the unit is CTF clock cycles. The event selection mechanism when seeking to a time stamp is shown by the following diagram:

On the above diagram, the large rectangle represents the whole trace and small blocks are events. Arrows are actions of seeking. An event selected by a seeking operation has the same color.
The rule is:

- seek before the first event time stamp: the current event is the first event of the trace
- seek at any time stamp between the first and last events of the trace (inclusive): the current event is the next existing one with a time stamp greater than or equal to the query
- seek after the last event of the trace: the current event is null (end of trace reached)

You'll notice you cannot get any information about the current packet with this interface. In fact, packets are useless when your only use case is to get events in recorded order and seek to specific time locations. Packets are used when translating from one format to another, to get the same exact content structure. For analysis and monitoring purposes, a random access reader is much more useful. Since random access reading is the main use case of TMF regarding the CTF component, a binary CTF random access reader is implemented as BinaryCTFRandomAccessReader. Because many algorithms are shared between BinaryCTFRandomAccessReader and BinaryCTFStreamedReader, they both extend BinaryCTFReader, an abstract class.

There is no JSON CTF random access reader. The only purposes of JSON CTF are universal data exchange for small traces, easy human editing and the possibility to track the content changes of sample traces in revision control systems. The workflow is to translate a small range of a native binary CTF trace to JSON CTF, modify it to exercise some parts of TMF, store it and convert it back to binary CTF when the time comes to test the framework. The only JSON CTF reader needed by all those operations is a streamed one.
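The three seek rules above can be captured in a few lines. This hypothetical Java sketch (not the library's implementation) selects, in a sorted array of distinct event time stamps, the current event index for a given seek query, or -1 when the end of the trace is reached:

```java
import java.util.Arrays;

public class SeekExample {
    // Returns the index of the first event whose time stamp is >= ts,
    // or -1 when ts is past the last event (end of trace reached).
    // Assumes distinct, sorted time stamps.
    static int seek(long[] timestamps, long ts) {
        int i = Arrays.binarySearch(timestamps, ts);
        if (i < 0)
            i = -i - 1;                     // insertion point: first element > ts
        return i < timestamps.length ? i : -1;
    }

    public static void main(String[] args) {
        long[] eventTs = {10, 20, 30};          // time stamps in CTF clock cycles
        System.out.println(seek(eventTs, 5));   // before first event -> 0
        System.out.println(seek(eventTs, 20));  // exact match -> 1
        System.out.println(seek(eventTs, 21));  // between events -> 2
        System.out.println(seek(eventTs, 31));  // after last event -> -1
    }
}
```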
If you're going to write something from scratch, there's no such thing as "random access writing", and we don't have appending/prepending use cases here. Writers only need to implement IWriter:

public interface IWriter {
    public void openTrace(TraceParameters params) throws WriterException;
    public void closeTrace() throws WriterException;

    public void openStream(int id) throws WriterException;
    public void closeStream(int id) throws WriterException;

    public void openPacket(PacketInfo packet) throws WriterException;
    public void closePacket(PacketInfo packet) throws WriterException;

    public void writeEvent(Event ev) throws WriterException;

    // Methods for CTF types not shown
}

The openTrace() method will usually do a few validations, like checking if the output directory/file exists and so on. In its simplest form, openTrace() and closeTrace() do absolutely nothing. The writer owner also passes valid trace parameters to openTrace() so that the writer may use them as soon as possible. Since trace parameters contain the whole metadata text, the metadata may already be output at this stage. After the trace is opened come calls to openStream(int) for each stream found in the metadata. The writer allocates all needed resources at this stage. The actual writing of data comes in the following order:

- open a packet (openPacket(PacketInfo)): the packet header and context (if it exists) will probably be written here (the stream ID can be retrieved from the packet info if needed for output purposes)
- write events (writeEvent(Event)): this is called multiple times between packet opening and closing; all calls to writeEvent(Event) mean the events belong to the last opened packet
- close the packet (closePacket(PacketInfo)): not needed by all formats

Just like class Event has no reading method, it doesn't have any writing method: what to write of an event is chosen by the writer.
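The call order above can be shown with a toy writer in the spirit of CyclesOnlyWriter. All types here (PacketInfo, Event, CountingWriter) are simplified stand-ins defined inline, not the library's classes; a real IWriter would also implement the CTF type methods omitted from the interface listing.

```java
import java.util.*;

public class CountingWriterExample {
    // Simplified stand-ins for the library's types.
    static class PacketInfo { final int streamId; PacketInfo(int id) { streamId = id; } }
    static class Event { final long cycles; Event(long c) { cycles = c; } }

    // A toy writer that just counts what it receives.
    static class CountingWriter {
        int packets = 0, events = 0;
        void openTrace()               { /* validations would go here */ }
        void openStream(int id)        { /* allocate per-stream resources */ }
        void openPacket(PacketInfo p)  { packets++; }
        void writeEvent(Event ev)      { events++; }
        void closePacket(PacketInfo p) { /* not needed by all formats */ }
        void closeStream(int id)       { }
        void closeTrace()              { }
    }

    static int[] writeSampleTrace() {
        CountingWriter w = new CountingWriter();
        w.openTrace();
        w.openStream(0);
        PacketInfo p = new PacketInfo(0);
        w.openPacket(p);                 // 1. open a packet
        w.writeEvent(new Event(100));    // 2. write its events
        w.writeEvent(new Event(200));
        w.closePacket(p);                // 3. close the packet
        w.closeStream(0);
        w.closeTrace();
        return new int[] { w.packets, w.events };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(writeSampleTrace())); // [1, 2]
    }
}
```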
If a complete representation of a CTF event is wanted, the header, optional contexts and payload must be written. But a writer could also only dump the headers, in which case it will ignore the other structures. Writing CTF types is fully analogous to reading them (see Reading): the writer calls the write(String, IWriter) of a structure to be written, passing null as the name and this as the writer.

Provided writers

A few writers are already implemented and shipped with the library. The following list describes them.

- BinaryCTFWriter is able to write a valid binary CTF trace, that is a directory containing a packetized metadata file and some stream files.
- JSONCTFWriter is a JSON CTF writer. It produces native JSON and doesn't have any external dependency like the JSON CTF reader.
- HTMLWriter was written as a proof of concept that coding a writer is easy, but could still be useful. It outputs one HTML page per packet, each one containing divisions for all events with all their structures.
- CyclesOnlyWriter is a very simple writer that only textually outputs the cycle numbers of all received events. This was created mainly to test TimeStampAccumulator but is still shipped for debug and learning purposes.

Trace input and output

Since a user may first close a trace, then open it, then close a packet and read an integer using a reader, some sort of state machine is needed to know what is allowed and what is not. As we don't want to repeat this in every reader, users about to read/write a trace must use a trace input/output. Those classes, TraceInput and TraceOutput, are the owners of readers and writers. They comprise a rudimentary state machine that checks if a trace is opened before getting an event, opened before getting closed, etc. When creating one of those, you always register an existing reader or writer.
Input

Here is the ultimate trace input flowchart:

Its job is to hide from the user the repeating process of opening a trace, getting its metadata text, parsing it, analyzing its metadata AST, filling trace parameters and opening streams, a process shared by all readers. TraceInput is in fact an abstract class. Since we have two reader interfaces, we need two concrete trace inputs: StreamedTraceInput uses an IStreamedReader reader while RandomAccessTraceInput uses an IRandomAccessReader one. Afterwards, using one of those trace inputs is very easy. Methods shared by both versions are pretty straightforward:

public void open() throws WrongStateException, TraceInputException;
public void close() throws WrongStateException, TraceInputException;

public int getNbStreams() throws WrongStateException;
public ArrayList<Integer> getStreamsIDs() throws WrongStateException;
public TraceParameters getTraceParameters() throws WrongStateException;

A reference to a streamed/random access reader is given at construction time to a streamed/random access trace input. Before doing anything useful, you must open a trace. This will issue a background call to the registered reader's openTrace() method. Then, it is possible to get the number of streams, get all the stream IDs or get the correct trace parameters. As usual, do not modify the trace parameters reference returned by getTraceParameters(): no copy is performed and this shared object is needed by lots of underlying blocks.

Streamed

The streamed trace input adds the following public methods to TraceInput:

public PacketInfo getCurrentPacketInfo() throws WrongStateException, TraceInputException;
public void nextPacket() throws WrongStateException, TraceInputException;

public Event getCurrentPacketEvent() throws WrongStateException, TraceInputException;
public void nextPacketEvent() throws WrongStateException, TraceInputException;

Those are analogous to the streamed reader ones, except you cannot call them if the trace is not opened first.
No copy of events and packet infos will be performed by the trace input, so refer to section Reading to understand how long the reference data remains valid.

Random access

The random access trace input adds the following public methods to TraceInput:

public Event getCurrentEvent() throws WrongStateException, TraceInputException;
public void advance() throws WrongStateException, TraceInputException;
public void seek(long ts) throws WrongStateException, TraceInputException;

Again, see Random access reading to understand these.

Output

It turns out that TraceOutput is also an abstract class. Extending classes are StreamedTraceOutput and BufferedTraceOutput. Both can use any implementation of IWriter. The only difference is that BufferedTraceOutput is more intelligent. By copying all received events into a temporary buffer, it is able to modify the packet context prior to dumping everything to the owned writer. This is used to shrink a packet size when the initial size is too large or, conversely, to expand it if too many events were written to the current packet. This makes it possible not to care about the packet and content sizes when events are removed from or added to a packet. This feature is useful when manually modifying a JSON CTF packet (adding/removing events) and going back to binary CTF: if you use a buffered trace output, you don't have to also manually edit the packet sizes in the JSON CTF trace. The streamed version immediately calls the underlying writer, making it possible to introduce serious errors.
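The buffering idea can be sketched as follows. This is not the actual BufferedTraceOutput implementation: the types, field names and the bit-counting logic are assumptions made for illustration. The point is only that by holding a packet's events in memory, the packet context can be fixed (content size recomputed) before anything reaches the writer.

```java
import java.util.*;

public class BufferedOutputExample {
    // Stand-ins for the library's types; sizes are in bits, as in CTF.
    static class Event { final int sizeBits; Event(int s) { sizeBits = s; } }
    static class PacketInfo { long contentSizeBits; }

    // Buffers the events of the current packet so the packet context can be
    // corrected before the packet is flushed to the underlying writer.
    static class BufferedOutput {
        private final List<Event> buffer = new ArrayList<>();
        private PacketInfo current;

        void openPacket(PacketInfo p) { current = p; buffer.clear(); }
        void writeEvent(Event ev)     { buffer.add(ev); }

        // On close, recompute the content size from what was actually
        // buffered; the packet and its events would then go to the writer.
        long closePacket() {
            long bits = 0;
            for (Event ev : buffer)
                bits += ev.sizeBits;
            current.contentSizeBits = bits;
            return bits;
        }
    }

    static long demo() {
        BufferedOutput out = new BufferedOutput();
        PacketInfo p = new PacketInfo();
        p.contentSizeBits = 9999;        // stale size, e.g. after manual JSON edits
        out.openPacket(p);
        out.writeEvent(new Event(64));
        out.writeEvent(new Event(128));
        return out.closePacket();        // recomputed from the buffer: 192
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 192
    }
}
```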
From a user's point of view, you only need to create the input/output and call translate() of Translator. To follow the progress, you may implement ITranslatorObserver and set it as an observer of the translator. The translator observer interface looks like this:

public interface ITranslatorObserver {
    public void notifyStart();
    public void notifyNewPacket(PacketInfo packetInfo);
    public void notifyNewEvent(Event ev);
    public void notifyStop();
}

A basic translator observer (which outputs to standard output) is shipped as a utility: StdOutTranslatorObserver. If you call translate() without any parameter, the whole input trace will be translated. Should you need to translate only a range, you may call translate(long, long). This version will translate all the events between two time stamps and discard any event or packet outside that range. Packet sizes and event time stamps are automatically fixed when translating a range.

Examples

For some people, including the original author of this article, there's nothing better than a few examples to learn a new framework/library/architecture. This section lists a few simple examples (Java snippets) that might be just what you need to get started with Linux Tools' CTF component.

Reading a binary CTF trace (order of events)

This example reads a complete binary CTF trace in order of events (time stamps). For each one, its content is output using the data types' default toString().
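A custom observer is easy to write. The sketch below reproduces the ITranslatorObserver shape (with empty stand-in PacketInfo and Event classes so it compiles on its own) and counts what the translator reports; in real use, a Translator would drive the notify* calls during translate().

```java
public class ProgressObserverExample {
    // Stand-ins for the library's PacketInfo and Event types.
    static class PacketInfo { }
    static class Event { }

    // Same shape as ITranslatorObserver, reproduced here for self-containment.
    interface ITranslatorObserver {
        void notifyStart();
        void notifyNewPacket(PacketInfo packetInfo);
        void notifyNewEvent(Event ev);
        void notifyStop();
    }

    // Counts packets and events as the translator reports them.
    static class ProgressObserver implements ITranslatorObserver {
        int packets = 0, events = 0;
        boolean done = false;
        public void notifyStart()                 { }
        public void notifyNewPacket(PacketInfo p) { packets++; }
        public void notifyNewEvent(Event ev)      { events++; }
        public void notifyStop()                  { done = true; }
    }

    static String demo() {
        ProgressObserver obs = new ProgressObserver();
        // A real Translator would make these calls; we simulate one packet
        // containing two events.
        obs.notifyStart();
        obs.notifyNewPacket(new PacketInfo());
        obs.notifyNewEvent(new Event());
        obs.notifyNewEvent(new Event());
        obs.notifyStop();
        return obs.packets + " packet(s), " + obs.events + " event(s), done=" + obs.done;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1 packet(s), 2 event(s), done=true
    }
}
```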
package examples;

import org.eclipse.linuxtools.ctf.core.trace.data.Event;
import org.eclipse.linuxtools.ctf.core.trace.input.RandomAccessTraceInput;
import org.eclipse.linuxtools.ctf.core.trace.input.ex.TraceInputException;
import org.eclipse.linuxtools.ctf.core.trace.reader.BinaryCTFRandomAccessReader;

public class Example {
    public static void main(String[] args) throws Exception {
        // Create the random access reader
        BinaryCTFRandomAccessReader reader =
            new BinaryCTFRandomAccessReader("/path/to/trace/directory");

        // Use this reader with a random access trace input
        RandomAccessTraceInput input = new RandomAccessTraceInput(reader);

        try {
            // Open the input
            input.open();

            // Main loop
            while (input.hasMoreEvents()) {
                // Get current event
                Event e = input.getCurrentEvent();

                // Print it (default debug format; similar to JSON)
                System.out.println(e);

                // Seek to the next event
                input.seekNextEvent();
            }

            // Close the input
            input.close();
        } catch (TraceInputException e) {
            // Something bad happened...
            e.printStackTrace();
        }
    }
}
23 August 2012 04:00 [Source: ICIS news]

SINGAPORE (ICIS)--Japan's Mitsui Chemicals plans to restart its 617,000 tonne/year naphtha cracker in Chiba by 28 or 29 August, following a power supply cut on 21 August, company sources said on Thursday.

"The power supply has been restored since yesterday. The cracker may be restarted by 28 or 29 August," one source said.

The cracker and other derivative plants, including polyethylene (PE), polypropylene (PP) and elastomer plants, were shut at around 01:00-02:00 hours.

The downstream units are likely to be restarted on 25-27 August, the source said. Prior to the outage, the Mitsui Chemicals' other naphtha
#if defined(LIBC_SCCS) && !defined(lint)
static char sccsid[] = "@(#)getprotoent.c	8.1 (Berkeley) 6/4/93";
#endif /* LIBC_SCCS and not lint */

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define	MAXALIASES	35

static FILE *protof = NULL;
static struct protoent proto;
static char *proto_aliases[MAXALIASES];
int _proto_stayopen;

void
setprotoent(f)
	int f;
{
	if (protof == NULL)
		protof = fopen(_PATH_PROTOCOLS, "r");
	else
		rewind(protof);
	_proto_stayopen |= f;
}

void
endprotoent()
{
	if (protof) {
		fclose(protof);
		protof = NULL;
	}
	_proto_stayopen = 0;
}

struct protoent *
getprotoent()
{
	char *p;
	static char *line = NULL;
	register char *cp, **q;

	if (protof == NULL && (protof = fopen(_PATH_PROTOCOLS, "r")) == NULL)
		return (NULL);
	if (line == NULL) {
		line = malloc(BUFSIZ + 1);
		if (line == NULL)
			return (NULL);
	}
again:
	if ((p = fgets(line, BUFSIZ, protof)) == NULL)
		return (NULL);
	if (*p == '#')
		goto again;
	cp = strpbrk(p, "#\n");
	if (cp == NULL)
		goto again;
	*cp = '\0';
	proto.p_name = p;
	cp = strpbrk(p, " \t");
	if (cp == NULL)
		goto again;
	*cp++ = '\0';
	while (*cp == ' ' || *cp == '\t')
		cp++;
	p = strpbrk(cp, " \t");
	if (p != NULL)
		*p++ = '\0';
	proto.p_proto = atoi(cp);
	q = proto.p_aliases = proto_aliases;
	if (p != NULL) {
		cp = p;
		while (cp && *cp) {
			if (*cp == ' ' || *cp == '\t') {
				cp++;
				continue;
			}
			if (q < &proto_aliases[MAXALIASES - 1])
				*q++ = cp;
			cp = strpbrk(cp, " \t");
			if (cp != NULL)
				*cp++ = '\0';
		}
	}
	*q = NULL;
	return (&proto);
}
Is it possible that GetHashCode() produces different results for one and the same object, particularly a string (e.g. "Hello World".GetHashCode()), on different machines?

Short answer: Yes.

But short answers are no fun, are they?

When you are implementing GetHashCode() you have to make the following guarantee: When GetHashCode() is called on another object that should be considered equal to this one, in this App Domain, the same value will be returned.

That's it. There are some things you really need to try to do (spread the bits around with non-equal objects as much as possible, but don't take so long about it that it outweighs all the benefits of hashing in the first place) and your code will suck if you don't do so, but it won't actually break. It will break if you don't go that far, because then e.g.:

dict[myObj] = 3;
int x = dict[myObj]; // KeyNotFoundException

Okay. If I'm implementing GetHashCode(), why might I go further than that, and why might I not?

First, why might I not? Maybe it's a slightly different version of the assembly and I improved (or at least attempted to) in between builds. Maybe one is 32-bit and one is 64-bit and I was going nuts for efficiency and chose a different algorithm for each to make use of the different word sizes (this is not unheard of, especially when hashing objects like collections or strings). Maybe some element I'm deciding to consider in deciding on what constitutes "equal" objects is itself varying from system to system in this sort of way. Maybe I actually deliberately introduce a different seed with different builds to catch any case where a colleague is mistakenly depending upon my hash code! (I've heard MS do this with their implementation of string.GetHashCode(), but can't remember whether I heard that from a credible or credulous source).

Mainly though, it'll be one of the first two reasons.

Now, why might I give such a guarantee? Most likely if I do, it'll be by chance.
If an element can be compared for equality on the basis of a single integer id alone, then that's what I'm going to use as my hash code. Anything else will be more work for a less good hash. I'm not likely to change this, so I might. The other reason why I might is that I want that guarantee myself. There's nothing to say I can't provide it, just that I don't have to.

Okay, let's get to something practical. There are cases where you may want a machine-independent guarantee. There are cases where you may want the opposite, which I'll come to in a bit.

First, check your logic. Can you handle collisions? Good, then we'll begin. If it's your own class, then implement so as to provide such a guarantee, document it, and you're done. If it's not your class, then implement IEqualityComparer<T> in such a way as to provide it. For example:

public class ConsistentGuaranteedComparer : IEqualityComparer<string>
{
    public bool Equals(string x, string y)
    {
        return x == y;
    }

    public int GetHashCode(string obj)
    {
        if (obj == null)
            return 0;
        int hash = obj.Length;
        for (int i = 0; i != obj.Length; ++i)
            hash = (hash << 5) - hash + obj[i];
        return hash;
    }
}

Then use this instead of the built-in hash code.

There's an interesting case where we may want the opposite. If I can control the set of strings you are hashing, then I can pick a bunch of strings with the same hash code. Your hash-based collection's performance will hit the worst case and be pretty atrocious. Chances are I can keep doing this faster than you can deal with it, so it becomes a denial-of-service attack. There are not many cases where this happens, but an important one is if you're handling XML documents I send and you can't just rule out some elements (a lot of formats allow for freedom of elements within them). Then the NameTable inside your parser will be hurt.
In this case we create a new hash mechanism each time:

public class RandomComparer : IEqualityComparer<string>
{
    private int hashSeed = Environment.TickCount;

    public bool Equals(string x, string y)
    {
        return x == y;
    }

    public int GetHashCode(string obj)
    {
        if (obj == null)
            return 0;
        int hash = hashSeed + obj.Length;
        for (int i = 0; i != obj.Length; ++i)
            hash = (hash << 5) - hash + obj[i];
        // Final mixing; the unsigned right shifts are written out via uint
        // casts because C# (before 11.0) has no >>> operator.
        hash += (hash << 15) ^ unchecked((int)0xffffcd7d);
        hash ^= (int)((uint)hash >> 10);
        hash += (hash << 3);
        hash ^= (int)((uint)hash >> 6);
        hash += (hash << 2) + (hash << 14);
        return hash ^ (int)((uint)hash >> 16);
    }
}

This will be consistent within a given use, but not consistent from use to use, so an attacker can't construct input to force it to be DoSsed.

Incidentally, NameTable doesn't use an IEqualityComparer<T> because it wants to deal with char arrays with indices and lengths without constructing a string unless necessary, but it does do something similar.

Incidentally, in Java the hash code for string is specified and won't change, but this may not be the case for other classes.

Edit: Having done some research into the overall quality of the approach taken in ConsistentGuaranteedComparer above, I'm no longer happy with having such algorithms in my answers; while it serves to describe the concept, it doesn't have as good a distribution as one might like.
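As an aside on that Java remark: the polynomial step used in the comparers above, hash = (hash << 5) - hash + c, is the same recurrence as Java's specified String.hashCode() (since (h << 5) - h == 31 * h), apart from the length-based seed. A quick Java check of that equivalence:

```java
public class StringHashExample {
    // Java's documented algorithm: h = 31 * h + s.charAt(i), starting at 0.
    static int manualHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++)
            h = (h << 5) - h + s.charAt(i);   // identical to 31 * h + c
        return h;
    }

    public static void main(String[] args) {
        System.out.println(manualHash("abc"));             // 96354
        System.out.println("abc".hashCode());              // 96354
        System.out.println(manualHash("Hello World") == "Hello World".hashCode()); // true
    }
}
```

Because Java pins this algorithm down in its specification, a Java string's hash code is the same on every machine, which is exactly the guarantee .NET declines to make.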
Of course, if one has already implemented such a thing, then one can't change it without breaking the guarantee. Otherwise, I'd now recommend using this library of mine, written after said research, as follows:

public class ConsistentGuaranteedComparer : IEqualityComparer<string>
{
    public bool Equals(string x, string y)
    {
        return x == y;
    }

    public int GetHashCode(string obj)
    {
        return obj.SpookyHash32();
    }
}

That for RandomComparer above isn't as bad, but can also be improved:

public class RandomComparer : IEqualityComparer<string>
{
    private int hashSeed = Environment.TickCount;

    public bool Equals(string x, string y)
    {
        return x == y;
    }

    public int GetHashCode(string obj)
    {
        return obj.SpookyHash32(hashSeed);
    }
}

Or for even harder predictability:

public class RandomComparer : IEqualityComparer<string>
{
    private long seed0 = Environment.TickCount;
    private long seed1 = DateTime.Now.Ticks;

    public bool Equals(string x, string y)
    {
        return x == y;
    }

    public int GetHashCode(string obj)
    {
        return obj.SpookyHash128(seed0, seed1).GetHashCode();
    }
}
#include <Parallel.h>

void setup() {
  Parallel.begin(PARALLEL_BUS_WIDTH_8, PARALLEL_CS_0, 17, false, true);
  Parallel.setCycleTiming(4, 4);
  Parallel.setPulseTiming(3, 3, 4, 3);
  Parallel.setAddressSetupTiming(0, 0, 0, 0);
  Serial.begin(115200);
}

uint8_t inb[8192], outb[8192];

void loop() {
  int r = micros();
  memcpy(inb, outb, 8192);
  int s = micros();
  for (int a = 0; a < 8192; a++) { inb[a] = random(256); outb[a] = random(256); }
  uint8_t *m = (uint8_t *)Parallel.getAddress();
  int t = micros();
  memcpy(m, inb, 8192);
  int u = micros();
  memcpy(m + 32768, m, 8192);
  int v = micros();
  memcpy(outb, m + 32768, 8192);
  int w = micros();
  for (int a = 0; a < 8192; a++)
    if (inb[a] != outb[a]) { Serial.println("Error "); break; }
  Serial.println("memcpy speed in MiB/s");
  Serial.print("int to int ");
  Serial.println(7812.5 / (s - r)); // 7812.5 == 8192 * 1000000 / 1048576
  Serial.print("int to ext ");
  Serial.println(7812.5 / (u - t));
  Serial.print("ext to ext ");
  Serial.println(7812.5 / (v - u));
  Serial.print("ext to int ");
  Serial.println(7812.5 / (w - v));
  Serial.println();
  delay(1000);
}

memcpy speed in MiB/s
int to int 76.59
int to ext 19.88
ext to ext 8.63
ext to int 13.40

Hello guys. Long time reader, first time poster. I'm designing my own board for another purpose using the same ATSAM3X chip, and after reading this thread I am still a little confused as to how to set up external RAM as a continuation of the internal RAM - presumably this is possible. What I mean by this, assuming that all the control, data and address lines are connected correctly to an external chip, is: how does the MCU know to point the stack to this external memory once all of the internal 48 KB of internal RAM is used? Is it automatic after setting some SMC control bit in some register? If this is not possible, then it requires some memory management in code, which is quite annoying. Any help would be appreciated.

Hi, I am using this parallel library to drive an LCD display.
I am using one address line to toggle the RS line of the display. Unfortunately, I do not get the first five address lines to work. The first address line that works as expected is line A5. A0-A4 stay constantly high. So my display works fine if I use line A5 (and 5 times faster than using ports), thanks for the work! But I would like to use A0 (pin 9) instead of A5. Any idea how I can get A0 working? Any trick how to enable A0/C.21/PWM9?
In server-side rendering (SSR), the client and server don't always communicate perfectly. Recently, we faced a bit of a challenge configuring our app to display adaptive React components smoothly. Ideally, all React components would be completely responsive, adapting themselves to every screen. But sometimes, we want to completely change the makeup of a component for a different device. For example, imagine that you create an always-visible search bar that fills the top of the page on desktop screens. For mobile, however, you want the search bar to appear in a full-screen dropdown only after a button has been clicked. This was the dilemma I faced, and I realized that replacing components called for a more comprehensive approach than just resizing them.

What Didn't Work

CSS media queries can be a great tool for controlling the visibility of components and thereby making them adaptive. However, my project is using a relatively complex UI toolkit that nests a lot of elements. Because of these complex sub-trees, we often can't insert IDs and class names into the elements that we want to make adaptive. CSS selectors become quite tricky because we're left selecting nested components by the names the toolkit gives them. Should the toolkit ever get updated (resulting in code changes), the CSS selectors could become obsolete.

Another approach to creating adaptive layouts is to make a custom component (I first created one with React Hooks) that can define screen breakpoints for other components to reference. But with SSR, the server is not always aware of the window size. The React Hooks rely on a window resize event being triggered, but this client-side event doesn't always make it to the server. In my experience, this caused odd rendering behavior where desktop components would get rendered initially, until the server was able to receive the window information from the client.

What Worked–fresnel

That's when I found @artsy/fresnel.
Artsy is a really cool place to discover and browse art. What I didn’t know is that the team at Artsy has created several awesome open-source projects, one of which is fresnel. This tool allows you to define a set of screen breakpoints that render components accordingly. It wraps these components with generated CSS and controls their visibility. The main thing to keep in mind about this tool is that the server renders the HTML defined at every breakpoint. Although this might seem counterintuitive, it’s necessary for smooth adaptive implementation with SSR. When the client receives the HTML from the server to start rendering, it includes all of the components that it might need to render and checks its CSS to determine which ones to use. This means it doesn’t have to wait for communication with the server to adjust to the screen size–the problem I was having with a custom component. Certainly, rendering all of the components can have an effect on performance. However, in our case, this was a minor concern as we only needed to rely on this solution for a couple of our components. Let’s check out how we used it. In our media file, I defined our specific screen breakpoints:

import { createMedia } from "@artsy/fresnel";

export const AppMedia = createMedia({
  breakpoints: {
    sm: 0,
    md: 540,
    lg: 780,
  },
});

export const { MediaContextProvider, Media } = AppMedia;
export const mediaStyle = AppMedia.createMediaStyle();

Then, I wrapped the components I wanted to make adaptive like this:

export const SomeComponent = (props) => {
  return (
    <MediaContextProvider>
      <Media at="sm">
        <MobileContent {...props} />
      </Media>
      <Media at="md">
        <TabletContent {...props} />
      </Media>
      <Media greaterThanOrEqual="lg">
        <DesktopContent {...props} />
      </Media>
    </MediaContextProvider>
  );
};

MobileContent, TabletContent, and DesktopContent are all their own sub-trees. Now I can import this component into any page file, and it will display correctly for each screen size. And that’s it! 
A perfectly functioning adaptive component with SSR.
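For the curious, the mediaStyle produced by createMediaStyle() is ordinary media-query CSS that hides every breakpoint's subtree except the one whose query matches, and it is meant to be injected into the server-rendered page's head. A rough, hypothetical sketch of that idea (the class names and exact queries here are illustrative, not fresnel's literal output):

```css
/* Illustrative only: hide each breakpoint's wrapper unless its query matches. */
@media not all and (min-width: 0px) and (max-width: 539px) {
  .at-sm { display: none !important; }
}
@media not all and (min-width: 540px) and (max-width: 779px) {
  .at-md { display: none !important; }
}
@media not all and (min-width: 780px) {
  .gte-lg { display: none !important; }
}
```

Because this CSS ships with the initial HTML, the browser picks the right subtree before any JavaScript runs, which is exactly why no round-trip to the server is needed.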
https://spin.atomicobject.com/2019/10/03/adaptive-responsive-react-comp-server-side-rendering/
CC-MAIN-2019-51
refinedweb
640
54.12
Posts: 213 Registered: 02-10 Posts: 11451 Registered: 07-02 Posts: 248 Registered: 05-11 Posts: 142 Registered: 05-11 Posts: 8692 Registered: 09-02 Average said: There seems to be a similar bug that rears its head in XP as well. It manifests itself as just running the music at full volume regardless of where the slider is, though the SFX work fine. I presume it's the same bug. A pain in the neck! Ralphis said: I would guess that this could be caused by a midi device that does not respond to being turned down by the windows midi controls. You could check your midi options (in windows) and make sure it's set to Windows synth natt said: Maybe he doesn't want to use the Windows GS Wavetable synth though. Ralphis said: I understand that, but unfortunately it is an issue with a lot of wavetable synths that came out in the early 00s. Maybe there is a solution to this within the code, which you would surely know more about. I was only offering a solution to Average that I came across a few years ago so that he could play these engines NOW without having the music blasting. Posts: 228 Registered: 08-10 Average said: For the record I much prefer the Creative soundfont, but if it's a choice between an acceptable SF with no volume control and a pretty rubbish SF with volume control, I'm afraid I'm going to suffer the rubbish one - at least until there's some kind of fix or workaround... I'll finish this post later - my girlfriend is standing tapping her foot waiting on me to go out with her!!! *Slap....* :P Posts: 2672 Registered: 08-00 __________________ Liquid Doom - The Alternative Posts: 4615 Registered: 08-00 Quasar said: The OP's problem isn't a bug in SDL_mixer; it's a bug in Windows Vista and 7's system sound mixer, which ties the volume of the Microsoft GS Synth to the application's digital audio output gain. The midiOutVolume API provided by the mmsystem library (aka MCI) is the only function called by SDL_mixer to set volume. 
It's MS's wavesynth code, which they re-implemented in the most lazy way possible for Vista, that is entirely at fault here. natt said: Perhaps the original fault lies with Microsoft; but Windows Vista has been out for 5 years now, and if you can't fix an OS bug directly, you create a workaround where possible. Pointing fingers doesn't help. Quasar said: The only workaround is literally to add your own software synth. code: mus_extend_volume 0 Posts: 1814 Registered: 02-03 natt said: ...I've been informed that the workaround I coded into prboom-plus works. Give it a try! Any recent build should do; use the portmidi music player option and this config file option: Mancubus II said: That's fine, but how does that help sdl_mixer usage? That is what Quasar is referring to. If your application only has sdl_mixer as an option, you're kinda screwed unless you add other midi options like prboom+ did. Posts: 486 Registered: 04-09 Posts: 447 Registered: 08-09 wesleyjohnson said: DoomLegacy has bug reports on unspecified Windows machines that sound similar. I will blame Vista and Win7, as it looks like I cannot do much about it by changing SDL calls. Posts: 7130 Registered: 01-03 Mancubus II said: If your application only has sdl_mixer as an option, you're kinda screwed unless you add other midi options like prboom+ did. Posts: 829 Registered: 08-05 GhostlyDeath said: Y'know, if lowering the Windows volume is so damn buggy, why not just lower the volume of the notes that are played? ReMooD (when it had music, that is) did this. code: void native_midi_setvolume(int volume) { } Posts: 2587 Registered: 01-04 natt said: SDL_Mixer midi playback on Linux (OSS) doesn't have a functioning master volume control either. code:
/* native_midi:  Hardware Midi support for the SDL_mixer library */
#include "SDL_config.h"
/* everything below is currently one very big bad hack ;) Proff */
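The per-note workaround GhostlyDeath describes — attenuating each note instead of relying on the broken system-wide MIDI volume — boils down to scaling note-on velocities. A minimal sketch of that idea (the helper and its volume range are hypothetical; only the 0–127 velocity range is standard MIDI):

```python
def scale_velocity(velocity, volume, max_volume=128):
    """Scale a MIDI note-on velocity (0-127) by a master music volume.

    Hypothetical helper: applies the master volume per note, so a broken
    system-wide MIDI volume control can be bypassed entirely.
    """
    scaled = velocity * volume // max_volume
    # Clamp to the valid MIDI velocity range.
    return max(0, min(127, scaled))

# Full volume leaves velocities untouched; half volume roughly halves them.
print(scale_velocity(127, 128))  # -> 127
print(scale_velocity(127, 64))   # -> 63
```

A player would apply this to every note-on event before sending it to the synth, which is why the approach works even when midiOutSetVolume is ignored.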
http://www.doomworld.com/vb/source-ports/56515-chocolate-doom-prboom-linked-sound-problem/
crawl-003
refinedweb
688
67.28
commit-queue keeps crashing Exception while checking queue and bots: unknown encoding: idna. Sleeping until 2009-10-27 23:07:56 (5 mins).

'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "/Users/eseidel/Projects/WebKit/WebKitTools/Scripts/bugzilla-tool", line 33, in <module>
    import os
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/os.py", line 49, in <module>
    import posixpath as path
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/posixpath.py", line 14, in <module>
    import stat
ImportError: No module named stat

I think that's happening on re-exec. I think it may only happen when I have the patch from bug 30098 applied locally. Either way, it's disturbing. It's happened 3 times at seemingly random times. The commit-queue used to be rock-solid! This appears to be a python bug. I've made a reduced test case. Created attachment 42122 [details] Script demonstrating python 2.5.1 (or mac os x?) bug On Linux (the "import site" bits are when I'm repeatedly hitting ctl-C). Python 2.5.2

% ./t
...............................................................................'import site' failed; use -v for traceback
.........'import site' failed; use -v for traceback
............'import site' failed; use -v for traceback
...........'import site' failed; use -v for traceback
....................'import site' failed; use -v for traceback
..........

Bah. I just don't know how exec works. I didn't realize it carried file handles through. Created attachment 42123 [details] Patch v1 Comment on attachment 42123 [details] Patch v1 File handle leaks = bad. Comment on attachment 42123 [details] Patch v1 Going to make a better patch. 
It would appear that even with my fix, Python itself is still leaking 3 file descriptors on every exec():

Python 57958 eseidel 58r REG 14,2 144580 466329 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/Resources/HIToolbox.rsrc
Python 57958 eseidel 59r REG 14,2 490410 9627854 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/Resources/English.lproj/Localized.rsrc
Python 57958 eseidel 60r REG 14,2 86770 481232 /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-mac/errors.rsrc.df.rsrc

I expect this may be a mac-only problem. I wrote a function:

def check_for_file_leak(message):
    log("before %s" % message)
    lsof_output = SCM.run_command(['lsof', '-p', str(os.getpid())])
    if lsof_output.count('HIToolbox'):
        log(message)
        error(lsof_output)
    log("after %s" % message)

and sprinkled it throughout the code. It seems these 3 leaking files are opened by mechanize during: self.browser = Browser() in Bugzilla.__init__() I've not tried to trace through the mechanize code to see where the leaks come from. Why don't we fork and exec and then throw away the old version of ourselves? If that doesn't work, can we use a trampoline executable? Tracking down every leak seems fragile. Here is a stand-alone version of the same function:

def check_for_file_leak(message):
    import os
    import sys
    import subprocess
    print >> sys.stderr, "before %s" % message
    process = subprocess.Popen(['lsof', '-p', str(os.getpid())],
                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    lsof_output = process.communicate()[0].rstrip()
    if lsof_output.count('HIToolbox'):
        print >> sys.stderr, message
        print >> sys.stderr, lsof_output
        exit(1)
    print >> sys.stderr, "after %s" % message

We could definitely use a trampoline executable. That may be the simplest solution. bugzilla-tool commit-queue is already a trampoline executable of sorts. 
It's just no longer a shell script; instead it's implemented in bugzilla-tool itself. I think I might just get rid of the "don't need to restart bugzilla-tool to notice committers.py changes" feature for now. Rolled out the change and re-opened bug 30084. *** This bug has been marked as a duplicate of bug 30084 ***
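For context, the standard way to keep a descriptor from surviving an exec() is the FD_CLOEXEC flag. A minimal sketch of that technique (this is not the actual WebKit patch, which simply closed the leaked handles):

```python
import fcntl
import os

def set_cloexec(fd):
    """Mark fd close-on-exec so it is automatically closed across
    os.exec*() calls instead of leaking into the re-exec'd process."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

# Demonstration: open a descriptor, mark it, and verify the flag stuck.
fd = os.open("/dev/null", os.O_RDONLY)
set_cloexec(fd)
print(bool(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC))  # -> True
os.close(fd)
```

Descriptors opened by third-party code (like the mechanize resource files above) are harder to reach, which is why a trampoline executable was attractive: a fresh process starts with a clean descriptor table regardless of what the old one leaked.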
https://bugs.webkit.org/show_bug.cgi?id=30869
CC-MAIN-2020-50
refinedweb
631
53.68
Hi Ingo,

As you probably know, we've been chasing a variety of performance issues on our SLE11 kernel, and one of the suspects has been CFS for quite a while. The benchmarks that pointed to CFS include AIM7, dbench, and a few others, but the picture has been a bit hazy as to what is really the problem here. Now IBM recently told us they had played around with some scheduler tunables and found that by turning off NEW_FAIR_SLEEPERS, they could make the regression on a compute benchmark go away completely. We're currently working on rerunning other benchmarks with NEW_FAIR_SLEEPERS turned off to see whether it has an impact on these as well. Of course, the first question we asked ourselves was, how can NEW_FAIR_SLEEPERS affect a benchmark that rarely sleeps, or not at all? The answer was, it's not affecting the benchmark processes, but some noise going on in the background. When I was first able to reproduce this on my workstation, it was knotify4 running in the background - using hardly any CPU, but getting woken up ~1000 times a second. Don't ask me what it's doing :-) So I sat down and reproduced this; the most recent iteration of the test program is courtesy of Andreas Gruenbacher (see below). This program spawns a number of processes that just spin in a loop. It also spawns a single process that wakes up 1000 times a second. 
Every second, it computes the average time slice per process (utime / number of involuntary context switches), and prints out the overall average time slice and average utime. While running this program, you can conveniently enable or disable fair sleepers. When I do this on my test machine (no desktop in the background this time :-) I see this:

./slice 16
 avg slice: 1.12 utime: 216263.187500
 avg slice: 0.25 utime: 125507.687500
 avg slice: 0.31 utime: 125257.937500
 avg slice: 0.31 utime: 125507.812500
 avg slice: 0.12 utime: 124507.875000
 avg slice: 0.38 utime: 124757.687500
 avg slice: 0.31 utime: 125508.000000
 avg slice: 0.44 utime: 125757.750000
 avg slice: 2.00 utime: 128258.000000
------ here I turned off new_fair_sleepers ----
 avg slice: 10.25 utime: 137008.500000
 avg slice: 9.31 utime: 139008.875000
 avg slice: 10.50 utime: 141508.687500
 avg slice: 9.44 utime: 139258.750000
 avg slice: 10.31 utime: 140008.687500
 avg slice: 9.19 utime: 139008.625000
 avg slice: 10.00 utime: 137258.625000
 avg slice: 10.06 utime: 135258.562500
 avg slice: 9.62 utime: 138758.562500

As you can see, the average time slice is *extremely* low with new fair sleepers enabled. Turning it off, we get ~10ms time slices, and a performance that is roughly 10% higher. 
It looks like this kind of "silly time slice syndrome" is what is really eating performance here. After staring at place_entity for a while, and by watching the process' vruntime for a while, I think what's happening is this. With fair sleepers turned off, a process that just got woken up will get the vruntime of the process that's leftmost in the rbtree, and will thus be placed to the right of the current task. However, with fair_sleepers enabled, a newly woken up process will retain its old vruntime as long as it's less than sched_latency in the past, and thus it will be placed to the very left in the rbtree. Since a task that is mostly sleeping will never accrue vruntime at the same rate a cpu-bound task does, it will always preempt any running task almost immediately after it's scheduled. Does this make sense? Any insight you can offer here is greatly appreciated!

Thanks,
Olaf

-- Neo didn't bring down the Matrix. SOA did. --soafacts.com

/*
 * from agruen - 2009 05 28
 *
 * Test time slices given to each process
 */
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/stat.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#undef WITH_PER_PROCESS_SLICES

struct msg {
	long mtype;
	long nivcsw;
	long utime;
};

int msqid;

void report_to_parent(int dummy)
{
	static long old_nivcsw, old_utime;
	long utime;
	struct rusage usage;
	struct msg msg;

	getrusage(RUSAGE_SELF, &usage);
	utime = usage.ru_utime.tv_sec * 1000000 + usage.ru_utime.tv_usec;
	msg.mtype = 1;
	msg.nivcsw = usage.ru_nivcsw - old_nivcsw;
	msg.utime = utime - old_utime;
	msgsnd(msqid, &msg, sizeof(msg) - sizeof(long), 0);
	old_nivcsw = usage.ru_nivcsw;
	old_utime = utime;
}

void worker(void)
{
	struct sigaction sa;

	sa.sa_handler = report_to_parent;
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = 0;
	sigaction(SIGALRM, &sa, NULL);
	while (1)
		/* do nothing */ ;
}

void sleeper(void)
{
	while (1) {
		usleep(1000);
	}
}

int main(int argc, char *argv[])
{
	int n, nproc;
	pid_t *pid;

	if (argc != 2) {
		fprintf(stderr, "Usage: %s <number-of-processes>\n", argv[0]);
		return 1;
	}
	msqid = msgget(IPC_PRIVATE, S_IRUSR | S_IWUSR);
	nproc = atoi(argv[1]);
	pid = malloc(nproc * sizeof(pid_t));
	for (n = 0; n < nproc; n++) {
		pid[n] = fork();
		if (pid[n] == 0)
			worker();
	}
	/* Fork sleeper(s) */
	for (n = 0; n < (nproc + 7)/8; n++)
		if (fork() == 0)
			sleeper();
	for (;;) {
		/* double, so sub-millisecond slices are not truncated away */
		double avg_slice = 0, avg_utime = 0;

		sleep(1);
		for (n = 0; n < nproc; n++)
			kill(pid[n], SIGALRM);
		for (n = 0; n < nproc; n++) {
			struct msg msg;
			double slice;

			msgrcv(msqid, &msg, sizeof(msg) - sizeof(long), 0, 0);
			slice = 0.001 * msg.utime / (msg.nivcsw ? msg.nivcsw : 1);
#ifdef WITH_PER_PROCESS_SLICES
			printf("%6.1f ", slice);
#endif
			avg_slice += slice;
			avg_utime += msg.utime;
		}
		printf(" avg slice: %5.2f utime: %f",
		       avg_slice / nproc, avg_utime / nproc);
		printf("\n");
	}
	return 0;
}
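The metric the slice program prints reduces to one line: CPU time consumed divided by the number of involuntary context switches. A small sketch of the same computation (units as in the C code above: utime in microseconds, result in milliseconds):

```python
def avg_timeslice_ms(utime_us, nivcsw):
    """Average time slice in ms: CPU time consumed divided by the number
    of involuntary context switches, as in the test program above."""
    return 0.001 * utime_us / (nivcsw or 1)

# ~125 ms of CPU time spread over 500 preemptions -> 0.25 ms slices,
# the pathological case seen with new fair sleepers enabled.
print(avg_timeslice_ms(125000, 500))  # -> 0.25
```

The more often the spinners are preempted by the frequently-waking sleeper, the larger nivcsw grows for the same utime, and the smaller the effective slice becomes.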
http://lkml.org/lkml/2009/5/28/219
CC-MAIN-2016-30
refinedweb
903
69.48
I want to compute the sum of about 10 records in my SQL Server database table, then display that amount in my ASP label. The field is BalanceDue. So, I need the total balance due for all ten records. Any ideas? I'm using Visual Basic.net. I thought I did something like the following in order to find the labels in a user object, but when I revisited it, it did not work. Any ideas?

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
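For anyone landing here with the same question, the usual answer is a SQL aggregate read back with ExecuteScalar. A hedged sketch — the table name (Invoices) is an assumption; only the BalanceDue column comes from the question:

```sql
-- "Invoices" is a hypothetical table name; BalanceDue is the column from the question.
SELECT SUM(BalanceDue) AS TotalBalanceDue
FROM Invoices;
```

In VB.NET this would typically be executed with SqlCommand.ExecuteScalar(), whose single return value can then be assigned to the label's Text property (the label name here is illustrative), e.g. lblTotal.Text = cmd.ExecuteScalar().ToString().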
http://www.dotnetspark.com/links/12656-how-to-display-images-with-label-object.aspx
CC-MAIN-2017-30
refinedweb
112
66.44
Eclipse Community Forums - RDF feed "User task has no task name" <![CDATA[Hello, I am a BPMN modeler newbie and am using it solely for editing models. In this context I have been fighting to get rid of the warning "User task has no task name" that is displayed on any 'User Task' activity shape. What exactly is expected here? I have a Name specified in the Description attributes (Properties View) and this name is displayed correctly on the diagram. The warning remains though. I am using BPMN2 Editor 0.2.5.201304262142 in an Eclipse 4.2.2 based product on Mac OS X Lion 10.7.5. Many thanks in advance for your help! Best, Elisabeth]]> Elisabeth Dévière 2013-05-03T20:20:17-00:00 Re: "User task has no task name" <![CDATA[Hi Elisabeth, The "Task Name" in this case is the name of the task that the user is to accomplish (for example "Approve Loan Application"), not necessarily the name of the User Task figure on the canvas (although they should probably be the same, for consistency). The Task Name is set in the "User Task" tab of the Property Sheet. This "Task Name" attribute is actually a model extension required by jBPM, and is unrelated to the BPMN2 "Name" attribute. Regards, Bob]]> Robert Brodt 2013-05-06T13:08:17-00:00 Re: "User task has no task name" <![CDATA[Hi Bob, Many thanks for your detailed explanation. Weirdly enough, I do not find any "Task Name" attribute in the User Task tab of the Properties View. I am using BPMN2 Editor version 0.2.5.201305042252 and see the following: - Attributes - Resource Role List - Rendering List The Attributes section has an - 'Implementation' text field - 'Is For Compensation' checkbox Always willing to provide more information. Thanks again, Best, Elisabeth. ]]> Elisabeth Dévière 2013-05-07T19:59:09-00:00 Re: "User task has no task name" <![CDATA[Hello, I am a brand-new comer on this site; I just registered a few minutes ago. I am an engineer and need to experiment with different BPMN solutions. 
This Eclipse plugin seems to be very interesting, as it appears to implement most (if not all) of the BPMN2 specification. I just encountered the same problem as Elisabeth. The sample project imported in Eclipse is OK; I can see all attributes in the imported tasks. But in any new task, as Elisabeth wrote, only one attribute named Implementation is present. A yellow sign reports the error "User Task has no task name". I believe a parameter is not set or a plugin is not installed, or it may be a specific French problem! I snooped around but did not find anything relevant. Thanks for any help. Regards Pascal Garcia]]> Pascal Garcia 2013-05-12T17:07:02-00:00 Re: "User task has no task name" <![CDATA[Sorry, I was wrong. In the sample, any new task now has the same parameters. I am quite sure that was not the case, but I left Eclipse and answered yes to a question to apply "I do not know exactly what", some kind of template. But the new BPMN still does not work. And the processes are different: the process tab shows id, name and version in one case; name and process type for the new process. I edited the 2 files; there is a difference in the definition of the namespace. Everything that relates to Drools is missing in the process that is not working: xmlns:di, xmlns:g, xmlns, xsi:schemaLocation. Here is the 2nd line of the working file. Sorry, I cannot post it; I get the message "You can only use links to eclipse.org sites while you have fewer than 5 messages." Elisabeth, if you replace your second line by the second line in the example. ]]> Pascal Garcia 2013-05-12T18:03:22-00:00 Re: "User task has no task name" <![CDATA[Wow... I have read these posts several times now, and have tried very hard to figure out what was being said, but I still feel like the blind man and the elephant... Maybe a screen capture along with the generated XML would help explain what's going on? Please email me at bbrodt@redhat.com with a more detailed description. 
Thanks, Bob]]> Robert Brodt 2013-05-13T03:23:47-00:00 Re: "User task has no task name" <![CDATA[Ok, I will try to be more precise. The BPMN process is stored in an XML file. Here is a capture of the not-working window; the attributes of the user task are not correct. Add https before the links I provide; I am not allowed to use links! //dl.dropboxusercontent.com/u/12108744/BPMNEDITOR/Capture1.JPG Here is the not-working file: //dl.dropboxusercontent.com/u/12108744/BPMNEDITOR/process_2.bpmn Here is the working file: //dl.dropboxusercontent.com/u/12108744/BPMNEDITOR/process_2.bpmn The difference is in the namespace defined at line 2. I believe the difference comes from the configuration of the Eclipse BPMN plugin: //dl.dropboxusercontent.com/u/12108744/BPMNEDITOR/Capture2.JPG Depending on this configuration, I believe a different template is used when creating a new process. The template for None seems to be incorrect; the namespace is incomplete. On my installation I am not able to create a process unless the target is None. Probably an error in my system. Edit these 2 files with the XML editor in Eclipse. You will notice some lines missing: xmlns etc. I believe that the BPMN2 Editor uses the definition of the XML to display the various fields, so the editor does not add fields that are not supported by the target engine. I believe that the None template should include a full definition. Is that clearer? Regards. ]]> Pascal Garcia 2013-05-13T20:46:23-00:00 Re: "User task has no task name" <![CDATA[I don't understand what you mean by "not working". The URL for both of these process files is identical, so I have nothing to compare. And, the file you said is "not working" can be opened with the BPMN2 Modeler without any problems, and I can change attributes and save/reopen the file. Regarding the "Target Runtime" setting in the User Preferences, if you select "None" the editor will generate "generic" BPMN 2.0 XML with appropriate default values. 
If you select "JBoss jBPM5 Business Process Engine" then the editor will add the targetNamespace="" to the <bpmn2:definitions> element and will also set the default namespace to the same URI (xmlns="" - this is required to work around an element referencing issue in jBPM.) So... I'm not sure what you are asking... sorry ]]> Robert Brodt 2013-05-13T23:51:22-00:00 Re: "User task has no task name" <![CDATA[Sorry for the mistake; here is the one working: //dl.dropboxusercontent.com/u/12108744/BPMNEDITOR/Evaluation.bpmn Open the user task and you will see that you have or do not have the Task Name attribute defined, depending on the process. Regards. Pascal]]> Pascal Garcia 2013-05-14T04:17:29-00:00 Re: "User task has no task name" <![CDATA[Task Name is one of the data inputs for the User Task. This is required by jBPM. The other process file you say is "not working" is not targeted for jBPM; in fact it is not targeted for any particular process execution engine, but it is still valid BPMN 2.0. What does "not working" mean outside the context of an execution engine? I don't understand why you think that is an error. What, in your opinion, should it look like? ]]> Robert Brodt 2013-05-14T11:48:24-00:00 Re: "User task has no task name" <![CDATA[Hello, The problem is that when you save, a validation is done. Then a small sign appears in the upper-left corner of User Tasks. When moving the cursor over this sign, there is a message saying that the "User task has no Task Name", and you have no way to set it: no properties are available for this. Regards.]]> Pascal Garcia 2013-05-14T20:14:57-00:00 Re: "User task has no task name" <![CDATA[Ok, I think we should simply forget all that I said. Here is something I did not notice at first. At some point a properly edited process suddenly gives these errors. In fact I think there are conflicts between the different packages installed: jBPM and BPMN2 Editor at least. Both rewrite the file when saving and modifying. 
I uninstalled jBPM5 and some of the problems just went away. Regards]]> Pascal Garcia 2013-05-14T22:23:06-00:00
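For readers hitting the same warning: under the jBPM target runtime, the "Task Name" Bob describes is carried as a data input named TaskName on the user task, not as the BPMN2 name attribute. A rough, hand-written sketch of the shape involved (the ids and the example value are illustrative, not the editor's exact output):

```xml
<bpmn2:userTask id="UserTask_1" name="Approve Loan Application">
  <bpmn2:ioSpecification>
    <bpmn2:dataInput id="UserTask_1_TaskNameInput" name="TaskName"/>
    <bpmn2:inputSet>
      <bpmn2:dataInputRefs>UserTask_1_TaskNameInput</bpmn2:dataInputRefs>
    </bpmn2:inputSet>
    <bpmn2:outputSet/>
  </bpmn2:ioSpecification>
  <bpmn2:dataInputAssociation>
    <bpmn2:targetRef>UserTask_1_TaskNameInput</bpmn2:targetRef>
    <bpmn2:assignment>
      <bpmn2:from xsi:type="bpmn2:tFormalExpression">Approve Loan Application</bpmn2:from>
      <bpmn2:to xsi:type="bpmn2:tFormalExpression">UserTask_1_TaskNameInput</bpmn2:to>
    </bpmn2:assignment>
  </bpmn2:dataInputAssociation>
</bpmn2:userTask>
```

Setting the Task Name in the "User Task" property tab produces an input of this kind; a plain BPMN2 userTask without it is what triggers the jBPM validation warning discussed above.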
http://www.eclipse.org/forums/feed.php?mode=m&th=486060&basic=1
CC-MAIN-2017-34
refinedweb
1,429
75.1
Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning
user13550719 Mar 8, 2014 12:23 PM Dear members, I need your help please. I have a game I am hosting at I have signed the code with M/s Comodo. I have written the manifest file (and changed it so many times) but I still get the Oracle security warning: "This application will run with unrestricted access which may put your computer and personal information at risk. Run this application if you trust the location and publisher above" I must admit I am defeated / do not understand what I am doing. Please, I need your help on how to write the manifest code, how to correctly put it in the jar and how to reference it from the HTML code. The game can only be played online from day.com. I need the system clock of the client, and also I have used getResources() to read images in the jar file. On the site I have a (Play) button. When a call to the play button is made, the index page connects to the file play.html which is located in the jars folder. The play.html file calls the HiredForOneDay.jar file which is also located in the jars folder. Files like launch.jnlp, launch.html are all in the jars folder. 
My game uses CardLayout (CardLayoutClass). In the Applet init(), the call cardLayoutClass.showCongratulationsPanel() shows the Congratulations class, then setJMenuBar(helpTopicSelector.getBar()); HelpTopicSelector is also another class. Below is the code:

[code]
package hiredforoneday;

import java.awt.BorderLayout; // missing in the original listing; needed for BorderLayout.CENTER

/**
 * @(#)HiredForADayApplet.java
 *
 * @author Ruth Bugembe
 * @author John Bannick
 * @version 23 Dec 2012
 */
@SuppressWarnings("serial")
public class HiredForADayApplet extends javax.swing.JApplet {

    static CardLayoutClass cardLayoutClass;
    HelpTopicSelector helpTopicSelector;

    @Override
    @SuppressWarnings("static-access")
    public void init() {
        cardLayoutClass = new CardLayoutClass();
        helpTopicSelector = new HelpTopicSelector(this);
        add(cardLayoutClass.getMainPanel(), BorderLayout.CENTER);
        cardLayoutClass.showCongratulationsPanel();
        setJMenuBar(helpTopicSelector.getBar());
    }
}
[/code]

I have not sent in manifest code because I have made so many versions and now I am confused. Thank you again for your time. Ruth 1. Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning jashburn Mar 8, 2014 5:37 PM (in response to user13550719) I don't think it's possible to prevent the warning message from popping up at least once. There might be a "Do not show this again" option on the warning dialog box that users can check to prevent it from popping up again. Code-wise the only thing I can think of is to remove reliance on the client-side system clock so that the Permissions attribute in the manifest file can be set to "sandbox" rather than "all-permissions". Not sure if reading images from the same signed jar file still qualifies it as running under "sandbox" - try and see. It's worth noting that other Java RIA publishers such as Skillsoft Support Knowledge Base also face the same issue, and they simply document it as a relatively benign warning message. 2. 
Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning user13550719 Mar 11, 2014 8:59 AM (in response to jashburn) Thank you very much for that answer. I have spent weeks on the problem. I am not an expert in Java but I have an application I am hosting at I need to time the players when answering questions, and in my animations I also need the time. Is there a way around this problem so that I do not use the client system clock? I will be very grateful if you can give me some sample code because the security alert scares my players. I also do not know how to write absolute paths for my graphics. My graphics are in a folder "images" in the src folder. Thank you in advance. Ruth 3. Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning jwenting Mar 11, 2014 10:27 AM (in response to jashburn) wouldn't be much of a security system if the programmer could just tell the applet to turn off security and it would do so without so much as alerting the end user 4. Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning gimbal2 Mar 11, 2014 2:31 PM (in response to jwenting) jwenting wrote: wouldn't be much of a security system if the programmer could just tell the applet to turn off security and it would do so without so much as alerting the end user I would hate that only a little more than web applications being able to print stuff without showing the print dialog. 5. Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning jashburn Mar 11, 2014 5:06 PM (in response to user13550719) 1 person found this helpful @jwenting, it is not really about turning off security per se. It's more about looking for a way so that the Java plugin doesn't have a reason to display the warning pop-up because the applet has been made secure. 
Oracle introduced a number of new security-related warning pop-ups since Java 7u21 such that users who hadn't seen them before are alarmed to see them now. You can see some of the pop ups at . A number of them are for good measure, e.g., when the jar file is not signed or signed but not using a trusted CA certificate, or when the jar's manifest file is missing some attributes that help prevent security issues such as applet repurposing. Pop ups for these are completely warranted, and developers should take steps to rectify them. In fact applets with these issues may not even run starting from Java 7u51 as this update release enforces a number of security measures, and blocks applets from running if these measures (trusted CA signing and some manifest attributes) are not in place. In this particular case the jar file is signed using a trusted CA certificate, and it seems that the mandatory manifest attributes have also been put in. Therefore the question is whether there is anything else that needs to be done to satisfy the Java plugin of the applet's security, or whether it is by design that the Java plugin will display the warning message at least once no matter what. If I'm not mistaken, the message here displays the Java logo that signifies a lower security risk (see ) but still it goes back to what I wrote about users being alarmed when there weren't such messages before. One of my previous links suggests that the warning message is unavoidable. Here's another one that suggests similarly: (scroll down to the last question on the page.) @Ruth, I've noticed in your manifest file you have: Application-Library-Allowable-Codebase: * Codebase: * Referring to , you might want to try replacing * with the web site's domain. Also, you have: Caller-Allowable-Codebase: * localhost 127.0.0.1 m/jars/HiredForOneDay.jar Not sure if the value being on separate lines matters, but I recall that manifest attribute value checking is quite strict, so you might want to put them into a single line. 
Also try removing the *. Other than that I don't see anything wrong with it. If the warning message is still displayed after changing the above, I think we can really conclude that it is unavoidable for all-permissions applets. For your animation, you might want to try using one of the Timer classes, or a simple sleep() in the animation loop. To time users' answers, perhaps you can implement a timer that sends out a tick, say, every 50 milliseconds, and count the number of ticks between the question and answer. Finally, the usual way to load images packaged in the jar file is to use Image image = ImageIO.read(getClass().getResource("/absolute/package/filename")); but I'm not sure if this will work with sandbox Permissions. An alternative would be to externalise the images into the same web server that serves out the applet, and load them from the applet using URLConnection. (This is fine if there aren't many images to load as having many round-trips back to the server can cause performance issues.) Hth! 6. Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning jwenting Mar 12, 2014 8:36 AM (in response to jashburn) "the Java plugin doesn't have a reason to display the warning pop-up because the applet has been made secure" and you want me as an end user to just assume that every applet where the programmer asserts that it is secure can be trusted and therefore no security is needed (because that's what it does, turns off sandbox security if you agree with it). So yes, it turns off security, and you want the applet programmer to be able to tell the JVM that security should be turned off. Which of course means that there might as well be no security at all, as every malware author would of course instantly do just that. 7. 
Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning
gimbal2 Mar 12, 2014 9:34 AM (in response to jwenting)

Hey, if the Jedi can do it with the wave of a hand, why not Java developers? *waves hand* You will instantly trust my software not to email your address book to iamnotahacker@h0tmail.com.

8. Re: Please need help with my Comodo signed applet manifest to get rid of the Oracle security warning
user13550719 Mar 20, 2014 12:41 PM (in response to jashburn)

Thanks everyone for your help. I am ashamed to admit I am defeated. I have spent weeks on this problem and it is not going away. I have used all the advice I got; I have even put the code in doPrivileged(), and nothing works. Does anyone know where I can post the code so someone can do it for me, at a low cost? Below is the security error I get. I have a valid certificate from Comodo and I am signing the jar in NetBeans. Thanks again.

[code]
java.lang.SecurityException: attempted to open sandboxed jar as a Trusted-Library)
at com.sun.deploy.security.CPCallbackHandler$ParentElement.checkResource(Unknown Source)
at com.sun.deploy.security.DeployURLClassPath$JarLoader.checkResource(Unknown Source)
[/code]
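One way to rule out accidental formatting problems in the manifest (such as the multi-line value question raised earlier in the thread) is to generate and re-parse the attributes programmatically: java.util.jar.Manifest applies the official 72-byte line-wrapping and continuation rules itself. The sketch below is illustrative only; the class name is made up, and the attribute value is the one from the thread:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public class ManifestCheck {

    // Writes the given attribute into a manifest and parses it back.
    // If the value survives the round trip intact, the manifest syntax
    // (including any line wrapping) is valid.
    public static String roundTrip(String name, String value) {
        try {
            Manifest mf = new Manifest();
            Attributes main = mf.getMainAttributes();
            // Manifest-Version is mandatory for write() to emit main attributes.
            main.put(Attributes.Name.MANIFEST_VERSION, "1.0");
            main.putValue(name, value);

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            mf.write(out);

            Manifest parsed = new Manifest(new ByteArrayInputStream(out.toByteArray()));
            return parsed.getMainAttributes().getValue(name);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("Caller-Allowable-Codebase",
                                     "* localhost 127.0.0.1"));
    }
}
```

This does not replicate the Java plugin's own validation, but it does catch hand-editing mistakes before signing.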
https://community.oracle.com/message/12331255?tstart=0
CC-MAIN-2017-22
refinedweb
1,749
57
05 July 2012 08:36 [Source: ICIS news]

SINGAPORE (ICIS)--Norwegian shipping firm Stolt-Nielsen posted on Thursday a 14% year-on-year increase in its net profit to $37m (€30m) in the second quarter of 2012, augmented by an exceptional gain in its tankers operations, the company said. Revenue in the second quarter that ended 31 May 2012 rose by 2% year on year to $538.8m, the company said in a statement.

“The improvement was attributable primarily to better operating results at Stolt Tankers, driven by improved COA freight rates and higher utilisation in terms of tonnes carried per day,” company CEO Niels G. Stolt-Nielsen said in the statement.

Stolt Tankers swung to an operating profit of $29m in the second quarter of 2012, 84% or $24.5m of which was a net gain on insurance proceeds related to the loss of the vessel MT Stolt Valor. In the same period last year, Stolt Tankers had an operating loss of $3.5
http://www.icis.com/Articles/2012/07/05/9575417/norways-stolt-nielsen-q2-profit-rises-14-revenue-up-2.html
Nerdkill is a simple 2D stress-relieving game that involves targeting little nerds that scamper across the screen. I first wrote this game for the BeOS several years ago, and last year I rewrote it in C# for .Net and Managed DirectX, mostly as a simple exercise to learn the Managed DirectX APIs. This article describes the effort of porting a simple 2D game with sound to the Pocket PC, possibly as a 100% Managed application running under the .Net Compact Framework, and explores the limitations imposed by the device and framework capabilities. The resulting fully functional game and full source code are provided under a GPL license. This game was tested on the Pocket PC simulator as well as on a Cassiopeia E-125 (MIPS 140 MHz).

At the heart of Nerdkill C# (the full desktop version, available at) is a reusable game framework that defines a simple architecture to host a game application. The architecture separates the game engine from the platform display, input, sound and resource management by defining the following components:

public interface Engine.IEngineProcess;
public interface Engine.IEngine2D;
public interface Engine.IEngineSound;
public interface Engine.IEngineInput;
public interface Engine.IEngineResources;

All the other interfaces (besides IEngineProcess) form auxiliary modules which render services for the engine. They abstract the engine from the actual implementation of the resource handling. In the case of the Managed DirectX version of Nerdkill C#, IEngine2D is implemented as a windowed DirectX surface embedded in a Windows.Form, IEngineInput is implemented using mouse and keyboard events generated by the Windows.Form, IEngineSound is implemented using DirectSound, and IEngineResources is implemented by accessing the data files embedded in the assembly. The architecture used by Nerdkill is pretty flexible.
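The separation of concerns just described can be sketched roughly as follows. This is a hypothetical Java transliteration, purely for illustration (the real framework is C# and its interface names differ): the game logic only ever sees the abstract service interfaces, so a desktop, Pocket PC, or test implementation can be swapped in without touching the game code.

```java
// Illustrative names only -- not the actual Nerdkill API.
interface Engine2D    { void drawSprite(int id, int x, int y); }
interface EngineSound { void play(String sample); }

// The game engine itself: updated once per frame, talking only to abstractions.
interface EngineProcess {
    void update(Engine2D gfx, EngineSound snd);
}

public class EngineDemo {
    // A trivial "platform": counts draw calls instead of rendering anything.
    static class CountingGfx implements Engine2D {
        int draws;
        public void drawSprite(int id, int x, int y) { draws++; }
    }

    static class SilentSound implements EngineSound {
        public void play(String sample) { /* no-op test double */ }
    }

    // Runs the game loop for a fixed number of frames and reports
    // how many sprites the game asked the platform to draw.
    public static int run(EngineProcess game, int frames) {
        CountingGfx gfx = new CountingGfx();
        SilentSound snd = new SilentSound();
        for (int i = 0; i < frames; i++) {
            game.update(gfx, snd);
        }
        return gfx.draws;
    }

    public static void main(String[] args) {
        // A "game" that draws one sprite per frame.
        int draws = run((gfx, snd) -> gfx.drawSprite(0, 10, 20), 5);
        System.out.println(draws);
    }
}
```

The payoff of this layout is exactly what the port exploits: only the platform-facing modules have to be reimplemented for a new target.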
To port the game to the Pocket PC, it was merely necessary to reimplement the IEngine2D and IEngineSound interfaces. The goal was initially to see if the combination of the Pocket PC and the .Net Compact Framework was good enough for the two tasks at hand. Ideally the code should be 100% Managed .Net. In reality, it is all managed, yet some parts, like the sound, left me no choice but to access the WinCE APIs through P/Invoke.

Part of the porting effort, which I did not exactly expect at first, was adapting the game to the limited resources of the Pocket PC. Obviously every piece of bitmap artwork had to be shrunk down to fit on the 240x320 screen, and the sounds were downsampled to 11 kHz/8-bit/mono WAV files to reduce the size of the resources. The resulting assembly is 520 kB, whereas the original desktop assembly is 2.5 MB. Luckily most of the gameplay is exactly the same. However, there are a couple of differences. For example, in the desktop game, scrolling happens automatically when the mouse approaches the border of the screen. Since there is no MouseOver event with a stylus, I simply use the navigation pad of the Pocket PC instead. A menu that allows the user to quickly pause the game or disable the sound has been added, and, most importantly, it was necessary to automatically pause the game when the application loses focus; otherwise the game would continue to update while in the background, rendering the device extremely slow.

Since the initial goal was a 100% .Net Managed approach to understand the limitations of this framework, the options for rendering 2D bitmaps were rather limited. The requirements for 2D rendering are simple. I did not want to use a proprietary library such as GAPI, despite the obvious gain in performance it would give. I was initially tempted to reuse the classic Win32 approach with BitBlt and co. The bitmap manipulation capabilities of .Net are pretty limited, especially for the Compact Framework.
Nevertheless it contains everything needed for the purpose here. Bitmaps can be loaded from resources. The Compact Framework does not allow direct access to the bits of the bitmaps except through GetPixel and SetPixel, which are clearly too slow to be acceptable. On the other hand, a Bitmap object can be used to create a Graphics context for GDI+, allowing C# code to simply draw into the offscreen bitmap.

The article Flicker Free Drawing In C# explains the usual DllImport trick to access the GDI+ functions which are not available directly in .Net: CreateCompatibleDC, CreateCompatibleBitmap, BitBlt, etc. But more importantly, it shows that most of what is needed is present in the .Net Compact Framework. The trick is that a bitmap loaded from resources will not be compatible (and thus very slow to draw), but it can be made compatible by creating a new empty bitmap (automagically made compatible), getting a Graphics DC from it (thus a compatible DC), and then using DrawImage to draw the independent bitmap into the compatible one.

Remember that it is mandatory to release any Bitmap and Graphics DC objects by calling their Dispose method. Failure to do so will cripple the application's available resources. It may not be apparent on a desktop version of Windows, but it will be obvious on a Pocket PC when the limited resources get exhausted.

This code uses the .Net Compact Framework to load a bitmap from the assembly resources and make it compatible:

private Bitmap loadCompatibleBitmap(string filename)
{
    System.Type st = this.GetType();
    Assembly assembly = st.Assembly;
    Stream stream = assembly.GetManifestResourceStream(st.Namespace + "." + filename);

    // Get the independent bitmap from resources
    Bitmap bitmap = new Bitmap(stream);
    stream.Close();

    // Extract the transparency color from the upper left corner
    // (a sensible common hack)
    // Note: for the full .Net Framework, specify the current
    // PixelFormat in the Bitmap constructor too.
    Color bg_col = bitmap.GetPixel(0, 0);

    Bitmap compatible = new Bitmap(bitmap.Size.Width, bitmap.Size.Height);
    Graphics g = Graphics.FromImage(compatible);

    // Make sure the offscreen bitmap gets erased with the default
    // background color. Here Black is chosen as transparency color.
    g.Clear(Color.Black);

    // Set the color key to what the image had...
    ImageAttributes ia = new ImageAttributes();
    ia.SetColorKey(bg_col, bg_col);
    g.DrawImage(bitmap,
                new Rectangle(0, 0, compatible.Size.Width, compatible.Size.Height),
                0, 0, bitmap.Size.Width, bitmap.Size.Height,
                GraphicsUnit.Pixel, ia);

    g.Dispose();
    bitmap.Dispose();
    return compatible;
}

To deal with transparent images, create an ImageAttributes instance, set the transparency color using ImageAttributes.SetColorKey, and use the Graphics.DrawImage overload that accepts an ImageAttributes. In the sample code above, the transparency color of the compatible bitmap is set to black. The actual color will generally depend on your artwork.

The 2D rendering implementation for Nerdkill Pocket uses the same principle everywhere, drawing into a compatible offscreen bitmap which is then blitted to the screen in OnPaint. Note that drawing the compatible bitmap manually in OnPaint is actually pretty fast. A sample code that does just that achieved up to 200 fps on the simulator and up to 100 fps on my test machine, a Cassiopeia E-125. The complete source of the 2D rendering part is available in the source archive. It is implemented in the RGdiGraphics.cs file.

I could not find a 100% .Net Managed way to play sounds for the game.
Instead I found two solutions, both using P/Invoke to access WinCE APIs: PlaySound and WaveOut.

The PlaySound function is fairly straightforward to use, but it can only play one sound at a time. It can play asynchronously. By default it will stop the currently playing sound before starting the next one. There's a flag to avoid that, yet the result is that the new sound will simply not play; it doesn't mix. A sound can loop too, and will stop when the next sound is requested. In the context of this game, a better sound API is required: several sounds should be able to play simultaneously, and some sounds need to loop automatically. Clearly, reimplementing my own sound mixer using WaveOut was necessary.

WaveOut has a simple but efficient workflow. Buffers first need to be constructed and prepared. They are then filled with data and output using waveOutWrite. Once a buffer has been used, it is returned to the application, which can then fill it again and output it.

The implementation is composed of the following classes:

public class RSoundPlayer: Engine.IEngineSound;
public class RISoundReader: IDisposable;
public class RWavStreamReader: RISoundReader;
public class RWaveOut;
public class RMemAlloc;

RWaveOut maps the various WaveOut methods and structures using DllImport. RMemAlloc does the same for LocalAlloc and LocalFree, which are used to allocate the WaveOut buffers.

The sound mixer does not access any sound resource directly. It uses the RISoundReader interface, which knows how to read a new buffer of data. The actual reader is implemented by RWavStreamReader and is constructed from a Stream extracted directly from the assembly resources. Since memory is at a premium on the Pocket PC, it is neither necessary nor useful to read the full assembly resource stream into a memory buffer. The data can be accessed directly from the resource stream.
The sound data is expected to be formatted as WAV files: mono, 8-bit, 11.025 kHz. The stream reader validates the WAV file header to ensure these properties.

Using the WaveOut API is pretty simple: memory buffers are created and "prepared" using waveOutPrepareHeader; they are then filled with data and set to play using waveOutWrite. When the WaveOut interface is done with each buffer, it sends a message to an HWND, and the window callback recycles the buffer. The .Net Compact Framework does not allow access to the underlying implementation of a Windows.Form; that is, its HWND cannot be retrieved and its WndProc callback cannot be used. To circumvent this limitation, WinCE's .Net-specific class Microsoft.WindowsCE.Forms.MessageWindow is used:

private class RWaveOutMsgWindow: MessageWindow
{
    public delegate void Callback(IntPtr waveHdrPtr);

    public void SetBufferDoneCallback(Callback cb)
    {
        mBufferDoneCallback = cb;
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == RWaveOut.MM_WOM_DONE && mBufferDoneCallback != null)
            mBufferDoneCallback(m.LParam);
        base.WndProc(ref m);
    }

    private Callback mBufferDoneCallback = null;
}

public RSoundPlayer()
{
    ...
    mWceMessageWindow = new RWaveOutMsgWindow();
    mWceMessageWindow.SetBufferDoneCallback(
        new RWaveOutMsgWindow.Callback(this.waveOutBufferDone));
    ...
}

The sound mixer maintains a list of sounds that are currently being played. For each of these sounds, a structure holds the current read position, the total size, a stop flag and a repeat flag. The sound mixer also maintains a queue of available WaveOut buffers.
The actual mixer code runs in a thread with the following workflow:

public RSoundPlayer()
{
    // create non-signaled (blocking) event
    mMixerEvent = new AutoResetEvent(false);
    // start the mixer thread
    mMixerThread = new Thread(new ThreadStart(this.mixerLoop));
    mMixerThread.Start();
}

private void addSound(RISoundReader reader)
{
    lock(mPlayingSounds.SyncRoot)
    {
        RSoundData sd = new RSoundData(reader);
        mPlayingSounds.Add(sd);
    }
    // signal the mixer another buffer can be processed
    mMixerEvent.Set();
}

private void mixerLoop()
{
    // wait on the event signal...
    while (mMixerEvent != null && mMixerEvent.WaitOne())
        ...
}

Mixing the actual data in the buffers is done using unsafe C# code (the unsafe keyword allows C# code to manipulate pointers and use pointer arithmetic). Here is a simplified version of the mixBuffer method from RSoundPlayer.cs (the full version is available in the source archive):

int max_data = 0;
unsafe
{
    int *header = (int *) waveHdr.ToPointer();
    // get the address of the buffer and its size
    uint *dest32 = (uint *)(header[0]);
    int size = mWaveBufferSize / 4;
    // initialize the buffer with 0x80 bytes, *not* zeroes!
    // wave data is from -128 to +127 with a middle point offset at +128.
    while(size-- > 0)
        *(dest32++) = 0x80808080;
}

// accumulate all currently playing sounds in the wave out buffer
lock(mPlayingSounds.SyncRoot)
{
    for(int i = mPlayingSounds.Count-1; i >= 0; i--)
    {
        RSoundData sd = mPlayingSounds[i] as RSoundData;
        int read_size = sd.mSize - sd.mOffset;
        if (read_size > kReadBufferSize)
            read_size = kReadBufferSize;

        // Read a block from the sound stream
        sd.mReader.Read(mMixerBuffer, sd.mOffset, read_size);
        sd.mOffset += read_size;

        unsafe
        {
            unchecked
            {
                fixed(byte *source = mMixerBuffer)
                {
                    int *header = (int *) waveHdr.ToPointer();
                    byte *dest = (byte *) (header[0]);
                    int rsn = read_size * mSampleSize;
                    if (rsn > max_data)
                        max_data = rsn;

                    byte *src = (byte *)source;
                    for (int c = read_size; c > 0; c--)
                    {
                        // Each byte is 0..255 but it really represents
                        // a sample which is -128..+127.
                        // So the real operation here is:
                        //   dest = 128 + (dest-128) + (src-128);
                        // which is:
                        //   dest += src-128;
                        // then 0..255 clipping must be done.
                        int a = (int)(*dest) + (int)(*(src)++) - 0x80;
                        if (a < 0) a = 0;
                        else if (a > 0xFF) a = 0xFF;
                        byte b = (byte)a;
                        for(int j = mSampleSize; j > 0; j--)
                            *(dest++) = b;
                    }
                } // fixed
            } // unchecked
        } // unsafe

        // end reached?
        if (sd.mOffset >= sd.mSize)
        {
            if (sd.mRepeat)
                sd.mOffset = 0;
            else
                // remove buffer from list
                mPlayingSounds.RemoveAt(i);
        }
    } // for mPlayingSounds
} // lock mPlayingSounds

// Update the size really used in the buffer
unsafe
{
    int *header = (int *) waveHdr.ToPointer();
    // reset some members of the WaveHdr:
    // set the dwBytesRecorded field to the number of actual bytes
    header[2] = max_data;
    // set the dwFlags field... only clear WHDR_DONE here
    header[4] = header[4] & (~RWaveOut.WHDR_DONE);
}

The mixer only targets 8-bit output, yet it can output to 11.025, 22.050 or 44 kHz streams, mono or stereo. This is done by expanding each byte of input into as many bytes as necessary to fill one output sample (no audio filtering is performed).
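The accumulate-and-clip step at the heart of the mixer is plain arithmetic on unsigned 8-bit samples centered at 0x80. Transliterated into Java purely for illustration (the class and method names here are invented; the original uses unsafe C# pointers), the inner loop looks like this:

```java
public class MixDemo {

    // Mixes one 8-bit unsigned source buffer into a destination buffer,
    // following the arithmetic described above: dest += src - 128,
    // clipped to the 0..255 range. Silence is 0x80, not 0x00.
    public static byte[] mix(byte[] dest, byte[] src) {
        for (int i = 0; i < src.length; i++) {
            // & 0xFF converts Java's signed bytes to the 0..255 sample range.
            int a = (dest[i] & 0xFF) + (src[i] & 0xFF) - 0x80;
            if (a < 0) a = 0;
            else if (a > 0xFF) a = 0xFF;
            dest[i] = (byte) a;
        }
        return dest;
    }

    public static void main(String[] args) {
        // Silence mixed with silence stays silence (0x80);
        // two full-scale samples clip at 0xFF; two zero samples clip at 0x00.
        byte[] dest = { (byte) 0x80, (byte) 0xFF, (byte) 0x00 };
        byte[] src  = { (byte) 0x80, (byte) 0xFF, (byte) 0x00 };
        mix(dest, src);
        for (byte b : dest) {
            System.out.printf("%02X ", b & 0xFF);
        }
        System.out.println();
    }
}
```

The clipping branches are exactly why the C# version wraps the loop in unchecked: the intermediate sum can legitimately leave the byte range before being clamped.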
This way the mixer code can stay very simple yet be reasonably adaptive.

In the code, note the usage of the C#-specific keyword fixed. This allows the code to retrieve a pointer to a Managed buffer. When doing this, .Net "pins down" the pointed-to object so that the garbage collector will not move it around. Operations on the pointer can then be done as in C/C++ using the familiar "*(dest++)" syntax. An important note is that since the mixer uses unsafe code, the assembly must be compiled as such. Under Visual Studio, this is done by setting the project's Configuration Property > Build > Allow Unsafe Code Blocks to True. This also means that special rights are necessary to run such an assembly. By default an application running from a desktop or a palm device has such rights. It wouldn't be the case if it were run as a Smart Client or in a non-trusted sandbox.

In the two previous blocks of code and throughout the RSoundPlayer.cs source, you can also notice the C# keyword lock. This primitive locks a Managed object for exclusive access (mutex). It is used here to synchronize access to the mPlayingSounds ArrayList, which holds the current list of sounds being played. The mixer thread reads from this list whilst the application thread appends new sound requests to it.

The .Net Compact Framework is a suitable candidate for games on the Pocket PC platform. The main advantage is of course the CLR: compile once, run everywhere. Once the .Net Compact Runtime has been installed on the device, the assembly can be deployed no matter what the underlying architecture is. Drawbacks exist, though. Blitting full-screen graphics into a Windows.Form and performing offscreen rendering using solely the Managed interface to GDI+ is not exactly fast; it is barely usable. The game presented here does not try to optimize the rendering to its extreme, which I believe would come at the expense of a simple and somewhat generic framework.
As it is, the current engine is capped at 10 fps max. It can achieve this frame rate on the simulator, but I never saw it go past 4 or 5 fps on a Cassiopeia E-125. By disabling part of the core rendering loop, it is easy to notice that the more small sprites need to be blitted into the offscreen bitmap, the slower the game gets. These graphic updates could probably be made a bit faster by analyzing the needs of the current game and reducing screen updates to the minimum.

The other obvious limitation of the .Net Compact Framework is the complete absence of sound support. This is not surprising, since .Net's focus is obviously on building applications relying on Windows.Forms. Nevertheless, it is a bit disappointing that not even WinCE's PlaySound made it into a Managed class available by default. Since PlaySound alone would not address the needs of a game anyway, this is not a major issue here.

Deployment is another issue. Visual Studio 7.1 has the ability to generate CAB files. It generates five of them, for different architectures. This seems at first surprising, since one of the main points of using .Net is to have a single platform-agnostic executable. The reality is that the CLR needs the .Net Compact Framework to be installed, and the generated CAB files contain a little native DLL that checks whether this is the case. So in effect, unless one assumes the framework is installed, it is necessary to have one CAB per targeted architecture. Fortunately, it seems reasonable to limit ourselves to ARM (for all recent Pocket PCs), MIPS (for older Pocket PCs) and maybe x86 (for the simulator).

The underlying framework presented here can be reused for other applications. It is made available freely under a GPL license. I tried as much as possible to dissociate the game logic from the framework, in the hope that it would be easy to reuse. As noted before, the frames-per-second speed of the current game is rather low.
This is due to the code being written to be as generic and as modular as possible for the purpose of this article. In real life, you may want to introduce a phase of optimization: analyze speed bottlenecks and rewrite part of the core/game loop, or maybe use some unsafe code to perform internal bitblits, for example. This is left as an exercise for the reader :-)

Finally, I'd like to thank Mathias for allowing me to use his Cassiopeia E-125 for testing. Since it is powered "only" by a 140 MHz MIPS processor, it is by far not the fastest Pocket PC currently available, which makes it great for seeing performance issues first hand. I haven't had a chance to run the game on a recent ARM-powered Pocket PC. This page is also mirrored on the Nerdkill home page.
https://www.codeproject.com/articles/7265/nerdkill-game-for-pocketpc
lp:charms/trusty/hdp-storm Created by Charles Butler on 2014-09-20 and last modified on 2015-02-24 - Get this branch: - bzr branch lp:charms/trusty/hdp-storm Members of Big Data Charmers can upload to this branch. Branch information - Owner: - Big Data Charmers - Status: - Mature Recent revisions - 18. By Cory Johns on 2015-02-24 Replaced jps call (which was broken upstream) with pgrep implementation - 17. By Charles Butler on 2014-12-09 Move from personal namespace to store resources, and current service - 16. By Charles Butler on 2014-12-09 [r=lazypower] amir sanjar 2014-11-12 enable storm for HP Cloud - 15. By Charles Butler on 2014-11-05 [r=lazypower] Antonio Rosales 2014-11-04 Update icon to be more readable at smaller sizes. - 14. By Charles Butler on 2014-09-20 [lazypower] Removed additional charm helpers sync, and propigated charm-helpers.yaml with the proper modules - 13. By amir sanjar on 2014-09-19 changes to enable auto-test - 12. By amir sanjar on 2014-09-19 merged chuck's changes - 11. By amir sanjar on 2014-09-18 amulet test to wait for nimbus server to load - 10. By amir sanjar on 2014-09-18 final bundletester update - 9. By amir sanjar on 2014-09-17 Review update Branch metadata - Branch format: - Branch format 7 - Repository format: - Bazaar repository format 2a (needs bzr 1.16 or later)
https://code.launchpad.net/~bigdata-charmers/charms/trusty/hdp-storm/trunk
in that file, then it is based on the locale sent by the browser. For example, in the United States, by default, the start day of the week is Sunday, and 2 p.m. is shown as 2:00 PM. In France, the default start day is Monday, and 2 p.m. is shown as 14:00. The time zone for the calendar is also based on the locale.

If your application uses the Fusion technology stack, you can create ADF Business Components over your data source that represents the activities, and the model will be created for you. You can then declaratively create the calendar, and it will automatically be bound to that model. For more information, see the "Using the ADF Faces Calendar Component" section of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

If your application does not use the Fusion technology stack, then you create your own implementation of the CalendarModel class and the associated CalendarActivity and CalendarProvider classes. These are abstract classes with abstract methods; you must provide the functionality behind the methods, suitable for your implementation of the calendar. For more information, see Section 15.2, "Creating the Calendar."
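The locale-dependent defaults described above (Sunday as the start day in the United States, Monday in France) match the standard Java locale data, which you can inspect directly. A small sketch, with an invented class name, for checking what a given locale implies:

```java
import java.util.Calendar;
import java.util.Locale;

public class LocaleDefaults {

    // Returns the locale's first day of the week as a java.util.Calendar
    // constant (Calendar.SUNDAY == 1, Calendar.MONDAY == 2, ...).
    public static int firstDay(Locale locale) {
        return Calendar.getInstance(locale).getFirstDayOfWeek();
    }

    public static void main(String[] args) {
        // Sunday for the US locale, Monday for the French locale.
        System.out.println(firstDay(Locale.US));
        System.out.println(firstDay(Locale.FRANCE));
    }
}
```

This is the same locale information a browser typically sends, which is why the calendar's defaults differ between a US and a French user without any configuration.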
For this persistence to take place, your application must use the Fusion technology stack. For more information, see the "Allowing User Customizations at Runtime" chapter of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

The calendar component displays activities based on the activities and the provider returned by the CalendarModel class. By default, the calendar component is read-only. That is, it can display only those activities that are returned. You can add functionality within supported facets of the calendar so that users can edit, create, and delete activities. When certain events are invoked, popup components placed in these corresponding facets are opened, which can allow the user to act on activities or the calendar. For example, when a user clicks on an activity in the calendar, the CalendarActivityEvent is invoked and the popup component in the ActivityDetail facet is opened. You might use a dialog component that contains a form where users can view and edit the activity, as shown in Figure 15-3. For more information about implementing additional functionality using events, facets, and popup components, see Section 15.4, "Adding Functionality Using Popup Components."

The calendar component supports the ADF Faces drag and drop architectural feature. Users can drag activities to different areas of the calendar, executing either a copy or a move operation, and can also drag handles on the activity to change the duration of the activity. For more information about adding drag and drop functionality, see-1.

You can customize how the activities are displayed by changing the color ramp. Each activity is associated with a provider, that is, an owner. If you implement your calendar so that it can display activities from more than one provider, you can also style those activities so that each provider's activity shows in a different color, as shown in the figure in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
Before you implement your logic, it helps to have an understanding of the CalendarModel and CalendarActivity classes, as described in the following section.

The calendar component must be bound to an implementation of the CalendarModel class. The CalendarModel class contains the data for the calendar. This class is responsible for returning a collection of calendar activities, given the following set of parameters:

Provider ID: The owner of the activities. For example, you may implement the CalendarModel class such that the calendar can return just the activities associated with the owner currently in session, or it can also return other owners' activities.

Time range: The expanse of time for which all activities that begin within that time should be returned. A date range for a calendar is inclusive for the start time and exclusive for the end time (also known as half-open), meaning that it will return all activities that intersect that range, including those that start before the start time but end after the start time (and before the end time).

A calendar activity represents an object on the calendar, and usually spans a certain period of time. The CalendarActivity class is an abstract class whose methods you can implement to return information about the specific activities. Activities can be recurring, have associated reminders, and be of a specific time type (for example, hour or minute). Activities can also have start and end dates, a location, a title, and a tag.

The CalendarProvider class represents the owner of an activity. A provider can be either enabled or disabled for a calendar.

Create your own implementations of the CalendarModel and CalendarActivity classes and implement the abstract methods to provide the logic.

The calendar component can be stretched by any parent component that can stretch its children.
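The half-open ("inclusive start, exclusive end") time-range contract described above is easy to get wrong when implementing a CalendarModel. The sketch below is not the actual ADF class; it is a plain-Java stand-in (all names invented) that shows the intersection test the contract implies:

```java
import java.util.ArrayList;
import java.util.List;

public class RangeDemo {

    // A stand-in for CalendarActivity: just a start and end time in
    // epoch milliseconds. The real ADF class carries much more.
    static class Activity {
        final long start, end;
        Activity(long start, long end) { this.start = start; this.end = end; }
    }

    // Half-open range query: returns every activity that intersects
    // [rangeStart, rangeEnd) -- including activities that begin before
    // rangeStart but end after it, as the model contract requires.
    public static List<Activity> inRange(List<Activity> all,
                                         long rangeStart, long rangeEnd) {
        List<Activity> hits = new ArrayList<>();
        for (Activity a : all) {
            if (a.start < rangeEnd && a.end > rangeStart) {
                hits.add(a);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<Activity> all = new ArrayList<>();
        all.add(new Activity(5, 10));
        // Intersects [0, 6) and [9, 12), but not [10, 20): an activity
        // ending exactly at 10 does not overlap a range starting at 10.
        System.out.println(inRange(all, 0, 6).size());
        System.out.println(inRange(all, 9, 12).size());
        System.out.println(inRange(all, 10, 20).size());
    }
}
```

The strict inequalities are the whole point: using <= on either side would double-count activities at the range boundaries when the calendar pages from one week or month to the next.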
If the calendar is a child component to a component that cannot be stretched, it will use a default width and height, which cannot be stretched by the user at runtime. However, you can override the default width and height using inline style attributes. For more information about the default height and width, see.

StartDayOfWeek: Enter the day of the week that should be shown as the starting day, at the very left in the monthly or weekly view. When not set, the default is based on the user's locale. Valid values are: sun, mon, tue, wed, thu, fri, sat.

StartHour: Enter a number that represents the hour (in 24-hour format, with 0 being midnight) that should be displayed at the top of the day and week view. While the calendar (when in day or week view) starts the day at 12:01 a.m., the calendar will automatically scroll to the startHour value so that it is displayed at the top of the view. The user can always scroll above that time to view activities that start before the startHour value.

ListType: Select how you want the list view to display activities. Valid values are:
day: Shows activities only for the active day.
dayCount: Shows a number of days including the active day and after, based on the value of the listCount attribute.
month: Shows all the activities for the month to which the active day belongs.
week: Shows all the activities for the week to which the active day belongs.

ListCount: Enter the number of days' activities to display (used only when the listType attribute is set to dayCount). Figure 15-5 shows a calendar in list view with the listType set to dayCount and the listCount value set to 7.

Expand the Calendar Data section of the Property Inspector, and set the following:

ActiveDay: Set the day used to determine the date range that is displayed in the calendar. By default, the active day is today's date for the user. Do not change this if you want today's date to be the default active day when the calendar is first opened.
Note that when the user selects another day, this becomes the value for the activeDay attribute. For example, when the user first accesses the calendar, the current date is the active day. Note also that if you enter month and do not also enter all, then you must also enter day.

If you want the user to be able to drag a handle on an existing activity to expand or collapse the time period of the activity, then implement a handler for CalendarActivityDurationChangeListener. This handler should include functionality that changes the end time of the activity. If you want the user to be able to move the activity (and, therefore, change the start time as well as the end time), then implement drag and drop functionality. For more information, see Section 15.6, "Styling the Calendar."

The calendar has two events that are used in conjunction with facets to provide a way to easily implement additional functionality needed in a calendar, such as editing or adding activities. These two events are CalendarActivityEvent (invoked when an action occurs on an activity) and CalendarEvent (invoked when an action occurs on the calendar itself). For more information about using these events to provide additional functionality, see.

When a user acts upon an activity, a CalendarActivityEvent is fired. This event causes the popup component contained in a facet to be displayed, based on the user's action. For example, if the user right-clicks an activity, the CalendarActivityEvent causes the popup component in the activityContextMenu facet to be displayed. The event is also delivered to the server, where a configured listener can act upon the event. You create the popup components for the facets (or, if you do not want to use a popup component, implement the server-side listener). It is in these popup components and facets that you can implement functionality that will allow users to create, delete, and edit activities, as well as to configure their instances of the calendar.
Table 15-1 shows the different user actions that invoke events, the event that is invoked, and the associated facet that will display its contents when the event is invoked. The table also shows the component you must use within the popup component. You create the popup and the associated component within the facet, along with any functionality implemented in the handler for the associated listener. If you do not insert a popup component into any of the facets in the table, then the associated event will be delivered to the server, where you can act on it accordingly by implementing handlers for the events. To add functionality, create the popups and associated components in the associated facets.

To add functionality using popup components:

In the Structure window, expand the af:calendar component node so that the calendar facets are displayed, as shown in Figure 15-6. Based on Table 15-1, insert the appropriate popup component into the facet. For more information about creating popup components, see Chapter 13, "Using Popup Dialogs, Menus, and Windows."

Example 15-1 shows the JSF code for a dialog popup component used in the activityDelete facet.

Example 15-1 JSF Code for an Activity Delete Dialog

<f:facet
  <af:popup
    <!-- don't render if the activity is null -->
    <af:dialog
      <af:outputText
      <af:spacer
      <af:outputText
      <af:panelFormLayout>
        <af:inputText
        <af:inputDate
          <af:convertDateTime
        </af:inputDate>
        <af:inputDate
          <af:convertDateTime
        </af:inputDate>
        <af:inputText
      </af:panelFormLayout>
    </af:dialog>
  </af:popup>
</f:facet>

Ensure that each facet has a unique name for the page.

Tip: To ensure that there will be no conflicts with future releases of ADF Faces, start all your facet names with customToolbar.
For example, the section of the toolbar that contains the alignment buttons shown in Figure 15-9 is in the customToolbarAlign facet.

The following keyword values are valid:

all: Displays all the toolbar buttons and text in the default toolbar.
dates: Displays only the previous, next, and today buttons.
range: Displays only the string showing the current date range.
views: Displays only the buttons that allow the user to change the view.

Note: If you use the all keyword, then the dates, range, and views keywords are ignored.

For example, if you created two facets named customToolbar1 and customToolbar2, and you wanted the complete default toolbar to appear in between your custom toolbars, the value of the toolboxLayout attribute would be the following list: customToolbar1 all newline customToolbar2.

If instead you did not want to use all of the default toolbar, but only the views and dates sections, and you wanted those to each appear on a new line, the list would be: customToolbar1 customToolbar2 newline views newline dates.

stretch: Adds a spacer component that stretches to fill up all available space so that the next named facet (or next keyword from the default toolbar) is displayed right-aligned in the toolbar.

<af:calendar
  <f:facet
    <af:toolbar>
      <af:commandToolbarButton
      <af:commandToolbarButton
      <af:commandToolbarButton
    </af:toolbar>
  </f:facet>
  . . .
</af:calendar>

Like other ADF Faces components, the calendar component can be styled as described in Chapter 20, "Customizing the Appearance Using Styles and Skins." However, along with standard styling procedures, the calendar component has specific attributes that make styling instances of a calendar easier. These attributes are:

activityStyles: Allows you to individually style each activity instance. For example, you may want to show activities belonging to different providers in different colors.

dateCustomizer: Allows you to display strings other than the calendar date for the day in the month view.
For example, you may want to display countdown- or countup-type numbers, as shown in Figure 15-10. This attribute also allows you to add strings to the blank portion of the header for a day.

The activityStyles attribute uses InstanceStyles objects to style specific instances of an activity. The InstanceStyles class is a way to provide per-instance inline styles based on skinning keys. The most common usage of the activityStyles attribute is to display activities belonging to a specific provider using a specific color. For example, the calendar shown in the figure displays each provider's activities using a different color ramp. Activities whose time span is within one day are displayed in medium blue text. Activities that span across multiple days are shown in a medium blue box with white text. Darker blue is the background for the start time, while lighter blue is the background for the title. These three different blues are all part of the Blue color ramp.

The CalendarActivityRamp class is a subclass of InstanceStyles, and can take a representative color (for example, the blue chosen for T.F.'s activities) and return the correct color ramp to be used to display each activity in the calendar.

The activityStyles attribute must be bound to a map object. The map key is the set returned from the getTags method on an activity. The map value is an InstanceStyles object, most likely an instance of CalendarActivityRamp. This InstanceStyles object takes in skinning keys and, for each activity, returns the styles. During calendar rendering, for each activity the renderer calls the CalendarActivity.getTags method to get a string set. The string set is then passed to the map bound to the activityStyles attribute, and an InstanceStyles object is returned (which may be a CalendarActivityRamp). Using the example:

If the string set {"Me"} is passed in, the red CalendarActivityRamp is returned.
If the string set {"LE"} is passed in, the orange CalendarActivityRamp is returned.
If the string set {"TF"} is passed in, the blue CalendarActivityRamp is returned.

If you want to display something other than the date number string in the day header of the monthly view, you can bind the dateCustomizer attribute to an implementation of a DateCustomizer class that determines what should be displayed for the date.

public class MyDateCustomizer extends DateCustomizer
{
    public String format(Date date, String key, Locale locale, TimeZone tz)
    {
        if ("af|calendar::month-grid-cell-header-misc".equals(key))
        {
            // return appropriate string
        }
        else if ("af|calendar::month-grid-cell-header-day-link".equals(key))
        {
            // return appropriate string
        }
        return null;
    }
}
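The tag-set lookup described above can be modeled in a few lines. Note this is only an illustrative sketch of the mechanism (the names RAMPS and ramp_for, and the string values, are made up for illustration and are not part of the ADF Faces API):

```python
# Toy model of the activityStyles lookup: the key is the (frozen) set of tags
# returned by CalendarActivity.getTags; the value stands in for an
# InstanceStyles object such as a CalendarActivityRamp.
RAMPS = {
    frozenset({"Me"}): "red ramp",
    frozenset({"LE"}): "orange ramp",
    frozenset({"TF"}): "blue ramp",
}

def ramp_for(tags):
    # The renderer passes each activity's tag set to the map bound to
    # activityStyles and uses whatever styles object comes back.
    return RAMPS.get(frozenset(tags), "default styles")
```

An unrecognized tag set simply falls back to the default styles, mirroring what happens when the bound map has no entry for an activity.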
elm-html-shorthand is a modest shorthand supplementing Html with two suffix notations:

A shorthand / elision form where arguments are supplied directly...

```elm
div_ [ text "contents" ]        -- Most elements simply take a list of children, eliding any attributes
Html.div [] [ text "contents" ] -- normalizes to this

h1_ "heading"                   -- Some elements take string arguments instead of nodes
Html.h1 [] [ text "heading" ]   -- normalizes to this
```

An idiomatic form where...

```elm
img'                       -- Takes a common sense list of arguments:
  { class = ""
  , src = ""               -- * probably all images should have a src attribute
  , alt = "The Elm logo"   -- * probably all images should have an alt attribute
  , width = 50             -- * width and height helps the browser to predict the
  , height = 50            --   dimensions of unloaded media so that popping does not occur
  }

Html.img                   -- * normalizes to this
  [ Html.class ""
  , Html.src ""
  , Html.alt "The Elm logo"
  , Html.width 50
  , Html.height 50
  ]

inputInt' ""               -- Some elements are a bit special:
  { name = "my count"
  , placeholder = Nothing
  , value = count
  , min = Just 0
  , max = Nothing
  , update = fieldUpdateContinuous -- * e.g. let's update this field continuously
      { onInput val = Channel.send action (UpdateCount val) }
  }

Html.input                 -- * normalizes to something rather more elaborate
  [ Html.type' "number"
  , Html.id "my-count"
  , Html.name "my-count"
  , Html.valueAsInt count
  , Html.min "0"
  , Html.required True
  , Html.on "input" {- ... magic ... -}
  , Html.on "blur" {- ... magic ... -}
  , Html.on "keydown" {- ... magic ... -}
  -- , Html.on "keypress" {- ... magic ... -} (TODO: input masking)
  ]
```

Shorthand does not attempt to create a template for every conceivable use case. In fact, we encourage you to look for patterns in your own html and create your own widgets! The intention here is to make the common case easy. Another project we're working on, elm-bootstrap-html, aims to eventually provide more sophisticated templates on top of Bootstrap.
Please note that this API is highly experimental and very likely to change! Use this package at your own risk.

Shorthand can help you deal with namespace pollution. Since the suffixed names used in Html.Shorthand are unlikely to clash with names in your application logic, it may make sense to import Html.Shorthand unqualified, while using a qualified import for Html.

```elm
import Html                                      -- you can use your own short u, i, b, p variable names!
import Html.Events as Html
import Html.Attributes as Html
import Html.Shorthand (..)                       -- bringing u',i',b',p',em' etc...
import Html (blockquote)                         -- if you really want something unqualified, just import it individually...
import Html (Html, text, toElement, fromElement) -- perhaps in future Html.Shorthand will re-export these automatically
```

Notice that the definition of h2_ doesn't allow for an id string to be supplied to it.

```elm
h2_ : TextString -> Html
```

How then do I target my headings in URLs, you ask? Well, this is a job better suited to section' and article'! Notice that these do take ids in both forms.

```elm
section_ : IdString -> List Html -> Html
section' : { class : ClassString, id : IdString } -> List Html -> Html
```

This encourages you to use <section> and <article> in order to add url-targetable ids to your page.

```elm
section_ "ch-5" [ h2_ "Chapter 5" ]
```

This adherence to the HTML 5 spec is a theme we've tried to encourage in both the design of and the documentation accompanying this API.

It is actually very difficult to use html form inputs correctly, especially in a reactive setting with numeric and/or formatted input types.
```elm
inputMaybeFloat'
  { class = ""
  , name = "val"
  , placeholder = Just "Enter value"
  , value = Just 1
  , min = Just (-10)
  , max = Just 40
  , update = fieldUpdateFallbackFocusLost
      { -- Send an error message if the field doesn't parse correctly
        onFallback _ = Channel.send action <| TemperatureError "Expected degrees celsius"
        -- Update the temperature if it parsed correctly
      , onInput val = Channel.send action <| SetTemperature val
      }
  }
```

This does more work under the hood, though more can still be done to make form elements play well with reactivity...

The approach of lightly constraining the Html API to reinforce pleasant patterns seems like an interesting idea... Who wants to dig through gobs of Html with a linter when you can just get it right from the get go? One might argue that Html.Shorthand doesn't take this nearly far enough though.

Restricted embedding

It seems clear that one should be able to restrict the hierarchy of elements by type. Perhaps this complaint will disintegrate as work proceeds on Graphics.Element in elm-core, subsuming the need for this package... or perhaps some other brave soul will take the time to tackle it*.

*Aside: One naive approach would be to try and recreate HTML's structure using Elm's tagged unions. However, the same tag cannot be reused in two different tagged union types.

Catalog of templates

Another option is simply to create a catalog of templates which encode a particular piece of semantic layout instead of enforcing things at a type level. This is something we're currently investigating in the form of elm-bootstrap-html, so give me a shout if you want to help out. It is possible that some of the advice linked in the documentation could be incorporated into the templates themselves.

Uniqueness of ids

Another nice property would be enforced uniqueness of id attributes. We don't do this :)

No broken links

Sorry, this package doesn't use any sort of restricted URL scheme to prevent broken links.

Namespace pollution + mini version!
It would be nice to explore -mini versions of this library, as well as of elm-html, that exclude elements like <b>, <i> and <u> that are rarely used correctly anyway. This would further assist the battle against namespace pollution as well.

Better reactive behaviour

It is extremely difficult to work around buggy and inconsistent browser behaviour when it comes to text input fields. The wrappers provided here do a pretty good job of working around the issues, but more can still be done to improve the user experience. In particular, it would be great to have masking similar to, say, this jquery inputmask plugin in order to prevent invalid inputs in the first place and to allow for advanced formatting for things like telephone numbers etc. One major challenge at the moment is managing the keyboard cursor selection, so that the caret does not jump around due to updates to the field.

Chris Done recently worked on a new EDSL called Lucid for templating HTML in Haskell. It uses a with combinator in order to supply attributes to elements only when necessary. We have chosen not to go this route for now; however, it may be worth revisiting this design at some future time (probably as a new package).

Feedback and contributions are very welcome. In particular, documenting the do's and don'ts and adding examples for every semantic element is very time consuming. This doesn't require any sort of in-depth knowledge of the library though, just the ability to use a search engine and some patience researching what is considered best practice. All we ask is that you try and keep it short and to the point. Please help us to tidy up and flesh out the documentation!
Cartoonizing an image

Now that we know how to handle the webcam and keyboard/mouse inputs, let's go ahead and see how to convert a picture into a cartoon-like image. We can either convert an image into a sketch or a colored cartoon image. Following is an example of what a sketch will look like:

If you apply the cartoonizing effect to the color image, it will look something like this next image:

Let's see how to achieve this:

import cv2
import numpy as np

def cartoonize_image(img, ds_factor=4, sketch_mode=False):
    # Convert image to grayscale
    img_gray = ...
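The excerpt cuts off before the rest of the function; a real implementation typically smooths the grayscale image and applies adaptive thresholding with cv2. As a rough, NumPy-only stand-in for the sketch-mode idea (turn strong intensity gradients into black pencil strokes on white paper), here is a toy version; the function name and threshold value are assumptions for illustration, not the book's code:

```python
import numpy as np

def toy_sketch(gray, edge_thresh=30):
    # Mark pixels whose horizontal or vertical intensity jump exceeds the
    # threshold. The book's version would use cv2.medianBlur followed by
    # cv2.adaptiveThreshold for much cleaner strokes.
    g = gray.astype(int)
    gx = np.abs(np.diff(g, axis=1))  # horizontal gradient
    gy = np.abs(np.diff(g, axis=0))  # vertical gradient
    edges = np.zeros(gray.shape, dtype=bool)
    edges[:, :-1] |= gx > edge_thresh
    edges[:-1, :] |= gy > edge_thresh
    # Black strokes (0) on white paper (255), like a pencil sketch
    return np.where(edges, 0, 255).astype(np.uint8)
```

The same thresholded-edge map is also what the color cartoon effect multiplies against a smoothed (e.g. bilateral-filtered) version of the original image.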
The simplest and fastest solution for users who need to convert Zimbra to Lotus Notes with all details and attachments. The software supports all Lotus Notes editions, such as Lotus Notes 9.0, 8.5 and 8.0. A splitting option is also available to split the converted Lotus Notes file by size. You can try the trial edition to learn more about how to convert Zimbra to Lotus Notes: the trial converts the first 20 mails, and users who want to convert mails in batch from Zimbra to Lotus Notes can purchase the licensed edition of the software.

Get the tool to import TGZ files into Thunderbird with all meta details. This application can import TGZ files into Thunderbird in batch mode, including all Zimbra TGZ data such as contacts, calendars and briefcase items. The Zimbra Converter Tool is designed to import TGZ files into Thunderbird with the complete database and without loss of data. Users do not require any technical skills, so non-professional users can also import TGZ files into Thunderbird. The tool maintains emails, attachments, metadata, formatting, folder structure and so on, and there is no need for a Zimbra installation. A free download is available so that users can understand the complete working functionality; for an unlimited conversion process, users have to buy the license.

The software is able to convert IncrediMail to MBOX Mac Mail along with the folder structure - Sent Items, Deleted Items, Drafts, Inbox and Outbox. It comes with a batch mode for instant conversion. The conversion tool performs IncrediMail to Mac Mail conversion with all attachments, formatting, layout, header body and so on. It also offers a free demo version to convert 25 emails from IncrediMail to Mac Mail at no cost. To get 100% conversion of IncrediMail to Mac Mail, buy its licensed version at an affordable price of 65 USD.

How can free software open TGZ files successfully? You may already have faced a lot of problems while trying to open TGZ files. Now you can get an excellent tool that supports the process of opening TGZ files with free software and converting them into PST.

The built-in preview is a useful feature with which users can see the loaded data before continuing with the process to migrate Zimbra open source mail. The tool also provides two modes to add a TGZ folder; users can select the required option as per need and migrate Zimbra open source mail folders proficiently. The Zimbra Migration Tool offers a trial edition with which users can explore the complete process of migrating Zimbra open source mails.

The Zimbra Backup Restore Account Tool also keeps the integrity of data intact after a successful process, and the procedure can be carried out effortlessly without a Zimbra installation. The tool includes a preview property so that users can analyze the clicked email file before the final Zimbra backup restore. The tool comes with a trial version with limited functionality, included by the developers especially so that users can evaluate and explore the working process before buying the license edition.
This code can be copied and pasted into the footer area of any .aspx file in your website. The first thing you'll want to do is create an empty text file, call it counter.txt and save it to your root directory. The next step is even easier: copy and paste the code below into your .aspx file and save it. Be sure to import the System.IO namespace into your page, something like this:

<%@ Import Namespace="System.IO" %>

<script runat="server">
public string counter()
{
    StreamReader re = File.OpenText(Server.MapPath("counter.txt"));
    string input = null;
    string mycounter = "";
    while ((input = re.ReadLine()) != null)
    {
        mycounter = mycounter + input;
    }
    re.Close();

    int myInt = int.Parse(mycounter);
    myInt = myInt + 1;

    TextWriter tw = new StreamWriter(Server.MapPath("counter.txt"));
    tw.WriteLine(Convert.ToString(myInt));
    tw.Close();

    re = File.OpenText(Server.MapPath("counter.txt"));
    input = null;
    mycounter = "";
    while ((input = re.ReadLine()) != null)
    {
        mycounter = mycounter + input;
    }
    re.Close();
    return mycounter;
}
</script>

'copy this code to the bottom of your .aspx page:
<% Response.Write(counter() + " visited"); %>

A brief description of what is going on in this code goes as follows:

a. Create a method called counter
b. Call the StreamReader class from the System.IO library and read your text file
c. Store the value of the text file in a variable
d. Close the StreamReader
e. Open the StreamWriter
f. Add 1 to the variable holding the value from the text file
g. Write the new, incremented-by-1 value to the text file
h. Close the StreamWriter

The last line, <% Response.Write(counter() + " visited"); %>, writes the visit count onto the page.

Hi, you can use three technologies for this...

1. Application variable
2. Storing the counter in a database
3. Storing the counter in files

Storing the counter in a database will be the best method. To store it in an application variable:

Application["Counter"] = 1;

but this will be cleared when you restart IIS or your website.

this code is useful, thanks all
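The read/increment/write cycle in steps (a-h) is language-agnostic; here is the same idea as a small Python sketch (the file path and function name are just for illustration):

```python
from pathlib import Path

def bump_counter(path):
    # a-d: open the counter file and read the current value (0 if missing or empty)
    p = Path(path)
    text = p.read_text().strip() if p.exists() else ""
    count = int(text) if text else 0
    # e-f: add one to the stored value
    count += 1
    # g-h: write the new value back
    p.write_text(str(count))
    return count
```

Like the ASP.NET version above, this is not safe under concurrent hits (two simultaneous requests can read the same value); a production counter would lock the file or, as the second answer suggests, use a database.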
Miscellaneous functions for sending an email.

#include <stdbool.h>
#include <stdio.h>
#include "email/lib.h"

Bounce an email message. Definition at line 1390 of file sendlib.c.

Get the Fully-Qualified Domain Name. Definition at line 1194 of file sendlib.c.

Analyze file to determine MIME encoding to use. Also set the body charset, sometimes, or not. Definition at line 465 of file sendlib.c.

Find the MIME type for an attachment. Given a file at 'path', see if there is a registered MIME type. Returns the major MIME type, and copies the subtype to "d". First look in a system mime.types if we can find one, then look for ~/.mime.types. The longest match is used so that we can match 'ps.gz' when 'gz' also exists. Definition at line 465 of file sendlib.c.

Create a file attachment. Definition at line 1092 of file sendlib.c.

Create a message attachment. Definition at line 939 of file sendlib.c.

Convert an email's MIME parts to 7-bit. Definition at line 747 of file sendlib.c.

Prepare an email header. Encode all the headers prior to sending the email. For postponing (!final), do the necessary encodings only. Definition at line 1250 of file sendlib.c.

Timestamp an Attachment. Definition at line 895 of file sendlib.c.

Undo the encodings of mutt_prepare_envelope(). Decode all the headers of an email, e.g. when the sending failed or was aborted. Definition at line 1289 of file sendlib.c.

Update the encoding type. Assumes called from send mode where Body->filename points to actual file. Definition at line 907 of file sendlib.c.

Write email to FCC mailbox. Definition at line 1526 of file sendlib.c.

Handle FCC with multiple, comma separated entries. Definition at line 1478 of file sendlib.c.
You can use a lot of ES6 features today. Most of the features are already implemented in Node 4, 5 and even 0.12. You can transpile precisely the missing features when the client installs your module; see my blog post JavaScript needs the compile step (on install). The ES6 features already supported by the client are unchanged, and only the missing features are transpiled.

There was a lot of feedback to the blog post and the example implementation, ranging from "No way, don't do this" to "Wow, this is the way JavaScript should be written in 2016". A lot of people pointed out the major shortcoming of the proposed method: it adds a LOT of weight to the NPM install - the client has to download Babel and its plugins in order to transpile! I wanted a better solution.

Pre-compiled JavaScript

Here is an alternate solution. The range of NodeJS versions in the wild is pretty small. The most popular are 0.10, 0.12, and the newer versions 4 and 5. We can pre-build a single bundle for each of these versions during the build step (on the dev or CI machine). During the installation by the client, we can determine which bundle to use depending on the Node version. This obviously increases the NPM download size, but the increase is only in the source code size, and not in any additional module downloads.

I wrote the pre-compiled tool for the devs. The tool bundles the source (using Rollup) and transpiles the produced code for different Node platforms. I have collected the ES features available by default (without the --harmony flag) on each platform, see the features files. I still test the source code to see which ES features it actually uses, to minimize transpile times and avoid transpiling a feature the platform already supports.

Example

Here is an example of how one could use this approach in practice. I have created the precompiled-example repo with a few source files.
It uses my favorite ES6 features, for example template literals and object parameter shorthand notation. It uses ES6 module imports, of course, to allow efficient tree-shaking when producing the output bundle.

First, install the pre-compiled build tool

npm install --save-dev pre-compiled

Create the build step

We need to tell the pre-compiled tool which files to start with and where to output them. Add a config object to the package.json file. The precompile step will start with the src/main.js file and will roll all ES6 'imported' modules into one bundle, producing the dist/main.js bundle. Then it will produce several bundles (you can control this list via a config option, of course):

dist/main.compiled.for.0.10.js
dist/main.compiled.for.0.11.js
dist/main.compiled.for.0.12.js
dist/main.compiled.for.4.js
dist/main.compiled.for.5.js

We should add these bundles to the list of files included in our NPM package.

Second, we need to add a production dependency that will run on the client during install.

npm install --save pick-precompiled

And we need to call it during the postinstall step. We have multiple bundles, and at install time one of them will be picked and copied to dist/main.js (according to the output directory and the original bundle name). Thus we should also set the main script to point at the bundle, even if it does not exist yet.

Picking bundle in action

I have made a precompiled-example module to show the bundling and picking the right bundle in action.
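Before looking at the install output, here is a hypothetical sketch of the package.json pieces described above. The "main", "files" and "scripts" entries are standard npm fields; the "pre-compiled" key and its option names are guesses for illustration only - check the tool's own README for the real option names:

```json
{
  "main": "dist/main.js",
  "files": [
    "dist/main.compiled.for.0.10.js",
    "dist/main.compiled.for.0.11.js",
    "dist/main.compiled.for.0.12.js",
    "dist/main.compiled.for.4.js",
    "dist/main.compiled.for.5.js"
  ],
  "scripts": {
    "postinstall": "pick-precompiled"
  },
  "pre-compiled": {
    "input": "src/main.js",
    "output": "dist/main.js"
  }
}
```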
If we do npm install precompiled-example we get the following output

node version 0.12.9
for node version 0.12.9
picked bundle dist/main.compiled.for.0.12.js
copied bundle dist/main.js

Our bundle has a lot of transpiled features and it works

$ node node_modules/precompiled-example/
Adding object properties 12
binary literal 5
object foo and bar
10 + 2 = 12

We can inspect the bundle, notice the transpiled features for Node 0.12

$ cat node_modules/precompiled-example/dist/main.js
'use strict';
require('pick-precompiled').babelPolyfill()
var add = function (a, b) {
  return a + b;
};
var a = 10;
var b = 2;
Promise.resolve(add(a, b)).then(function (sum) {
  console.log(a + ' + ' + b + ' = ' + sum);
});
var objectAdd = function (_ref) {
  var a = _ref.a;
  var b = _ref.b;
  return a + b;
};
console.log('Adding object properties', objectAdd({ a: 10, b: 2 }));
...

Let us try installing from Node 4.

$ nvm use 4
Now using node v4.2.2 (npm v3.5.0)
for node version 4.2.2
picked bundle dist/main.compiled.for.4.js
copied bundle dist/main.js
$ node node_modules/precompiled-example/
Adding object properties 12
binary literal 5
object foo and bar
10 + 2 = 12

Same working module, but the code now runs without transpiled parts; Node 4 supports almost all features we needed (arrows, template literals)

$ cat node_modules/precompiled-example/dist/main.js
'use strict';
require('pick-precompiled').babelPolyfill()
const add = (a, b) => a + b;
const a = 10;
const b = 2;
Promise.resolve(add(a, b)).then(function (sum) {
  console.log(`${ a } + ${ b } = ${ sum }`);
});
const objectAdd = _ref => {
  let a = _ref.a;
  let b = _ref.b;
  return a + b;
};
console.log('Adding object properties', objectAdd({ a: 10, b: 2 }));

Finally, let us see if Node 0.10 is working.
$ nvm use 0.10
Now using node v0.10.40 (npm v1.4.28)
$ npm install precompiled-example
for node version 0.10.40
picked bundle dist/main.compiled.for.0.10.js
$ node node_modules/precompiled-example/
Adding object properties 12
binary literal 5
object foo and bar
10 + 2 = 12

Nice! We do have a nice bundle for each case.

Conclusion

With this approach you can admit that your NPM module is pre-rolled and pre-compiled :) There are shortcomings - building and including several bundles instead of one, and changing the Node version using nvm requires an npm install call. But overall I like this approach because the installed bundles have nice, clean source.
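The picking logic seen in the transcripts (0.12.9 picks main.compiled.for.0.12.js, 4.2.2 picks main.compiled.for.4.js) boils down to a version-to-key mapping. Here is an illustrative reimplementation of that idea, not pick-precompiled's actual code:

```python
def pick_bundle(node_version):
    # Node versions before 1.0 are distinguished by the minor number
    # ("0.10", "0.12"); from Node 4 onward the major number is enough.
    parts = node_version.lstrip("v").split(".")
    key = ".".join(parts[:2]) if parts[0] == "0" else parts[0]
    return "dist/main.compiled.for.%s.js" % key
```

On the client, the chosen file would then simply be copied over dist/main.js, which is what the "copied bundle" lines in the output report.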
08 February 2007 03:12 [Source: ICIS news]

By Steve Tan and Gina Myung

SINGAPORE (ICIS news)--Formosa Group's downstream companies expect their new plants to start up on schedule from late April onwards, despite an earlier ICIS news report that the group's cracker in Mailiao, Taiwan, will be delayed until end-2007, company sources said late on Wednesday.

But some were also bracing for a possible delay, keeping a close watch on developments in the spot market, which could go into a tailspin if they end up picking up large volumes of ethylene and propylene.

Formosa Petrochemical Corp's (FPCC) new No 3 cracker, Asia's largest, will supply 1.2m tonnes/year of ethylene and 850,000 tonnes/year of propylene to various downstream plants within the group in Mailiao and elsewhere.

The main contractor CTCI had earlier said that problems with land shortage could delay the start-up of the cracker until November or December, almost a year behind the original schedule.

FPCC officials declined to comment on the prospects of a delay. Most sources at the various downstream units said their start-up plans remained unchanged.

"It will proceed as scheduled and will start up at the end of April," said an FPCC company source on its 176,000 tonne/year butadiene (BD) extraction unit.

Formosa Chemicals and Fibre Corp's (FCFC) new 600,000 tonne/year styrene monomer (SM) unit will also be started up in mid-2007 as planned, one company source said. There was no indication of a delay, and Taiwanese players in the downstream styrenic plastics sector had also not received information on FCFC's new SM unit being delayed.

Other officials from FCFC and its sister company declined to comment.

FCFC has two phenol-acetone units in Mailiao, which can each produce 200,000 tonnes/year of phenol and 125,000 tonnes/year of acetone. FCFC plans to cut exports and allocate the phenol and acetone feedstock for Nan Ya's fourth 130,000 tonne/year BPA line when the facility starts up around April.
Nan Ya currently operates three BPA lines, with a combined production capacity of 290,000 tonnes/year.

In downstream polymer operations, a Formosa Plastics Corp (FPC) source echoed the view that the cracker would start up as scheduled, but noted that the company's 450,000 tonne/year plant in east China will not be able to start up as scheduled in June if the cracker is delayed.

Ethylene supply from the new cracker will be essential, as FPCC will not be able to ramp up production at its downstream polyethylene (PE) facilities at Mailiao to full capacity if the No 3 cracker start-up is delayed, a second FPCC source said. The facilities have been running as low as 80% since being debottlenecked a few years ago, due to a shortage of on-site feedstock ethylene, he said.

The facilities include a 350,000 tonne/year high density PE (HDPE) line, a 264,000 tonne/year linear low density PE unit and a 240,000 tonne/year low density PE/ethylene vinyl acetate (LDPE/EVA) plant, and would be looking forward to a much-needed boost in ethylene supply from the new cracker.

Nonetheless, some buyers were bracing themselves for a possible delay in the cracker start-up. Sources close to Nan Ya Plastics said the company's 700,000 tonne/year monoethylene glycol (MEG) plant downstream could run into further delays. The new plant's start-up date was already pushed back twice last year - first from early 2007 to April, and then finally to the middle of the year.

"It's difficult to gauge now, as we don't know if we should get ethylene from other sources, or simply delay the startup of the MEG plant," said one of the sources. When pressed, another source acknowledged that "a slight delay" would be the most likely outcome but declined to elaborate.

Olefin traders were keeping their ears to the ground for the latest developments, as a delay in the cracker start-up would result in a significant shift in the supply/demand balance in the region.
FPCC would also have to delay the start-up of the olefins conversion unit, which is able to produce 250,000 tonnes/year of propylene and was originally due to start up with the cracker.

"Big quantities of propylene were expected to be exported starting from June/July and this would bring propylene demand-supply close to balance in northeast Asia, but if it is delayed, northeast Asia would see a net shortage in propylene supply," a regional trader said.

Agreeing, a Japanese trader expressed concerns over the possible delay, saying "the delay in FPCC's cracker start-up will impact several things, and it would affect the Japanese producers who are exporting cargoes into the regional countries."

"Buyers especially in the Ningbo area have not secured feedstock as they were expecting FPCC to start up, but the delay may cause buyers to panic and purchase cargoes in bulk quantities, causing propylene prices to surge," a buyer in Taiwan said. He went on to say that the acrylic ester (AE) and PP plants in

Elsewhere in southeast Asia, traders agreed that the delay would cause a setback to the propylene market but said that it would not affect prices to a great extent, as consumers would still have to consider downstream PP prices when purchasing their feedstock.

The possible delay could tighten ethylene supply availability in the region, traders said. "About 600,000 tonnes of ethylene will disappear from the market. So

The change in scenario could prompt the company to turn into a net importer for the time being, instead of a net exporter, until such time as the cracker can successfully start, he added.

Clive Ong, Nurul Darni, Chow Bee Lin, Helen Yan, Peh Soo Hwee, and Chan Jingyi
http://www.icis.com/Articles/2007/02/08/9004694/analysisformosa-units-in-dark-on-cracker-delay.html
A few of my URL calls in my HAR > JS load test file are coming up with an error: WARN[0166] Request Failed error="Get wss://the_url : unsupported protocol scheme "wss"". Any ideas on how to fix this?

This seems like a websocket connection that was mistakenly converted to a HTTP request. I've added, so we fix our converter to not emit such broken requests. In the meantime, if you want to keep it, k6 supports websocket connections through the k6/ws module, so you can manually fix it.

I've decided not to use the browser recorder because it is creating a lot of unnecessary HTTP requests along with 3rd party calls I don't want. I am now having trouble wrapping my head around how to create a user scenario where the user goes to the login page, puts in their email and password, clicks sign in, gets taken to the next webpage, and we check if it's successful (200 status code). I have this thus far; I know it didn't run as first written, but it gives you a basic understanding of what I'm trying to do:

import { sleep, group, check } from "k6";
import http from "k6/http";

export let options = {
    maxRedirects: 0,
    stages: [
        { duration: "30s", target: 5 }, // simulate ramp-up of traffic from 1 to 5 users over 30 seconds
        { duration: "30s", target: 5 }, // stay at 5 users for 30 seconds
        { duration: "30s", target: 0 }, // ramp down to 0 users over 30 seconds
    ]
};

export default function () {
    group('visit page and login', function () {
        // load homepage and post the login credentials
        var url = '';
        var payload = JSON.stringify({
            email: 'aaa',
            password: 'bbb',
        });
        var params = {
            headers: {
                'Content-Type': 'application/json',
            },
        };
        var res = http.post(url, payload, params);
        check(res, {
            'is status 200': (r) => r.status === 200,
        });
    });
}

I'm not seeing any issues here?
Though keep in mind that you don't need to start writing the script from scratch - you can take the script generated from the .har conversion and start from there, deleting requests to external websites, adjusting headers, etc.

And, in general, if your login page returns the credentials as a JSON object, you can save the http.post() result in a variable and access them that way - see

And if the website uses cookies, then that should be handled mostly automatically by k6, though you can also manually adjust them, if necessary:

Are you downloading the HAR file directly from the recorder? When importing the recorded session into k6 Cloud, it should give you an option to filter out third-party requests (enabled by default, I think).
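For the wss:// entries that the converter mis-translated, the thread points at the k6/ws module. A hand-written replacement might look like the sketch below; the URL is a placeholder taken from the error message, and the script runs under the k6 runtime itself, not under Node:

```javascript
import ws from 'k6/ws';
import { check } from 'k6';

export default function () {
  // Placeholder endpoint - substitute the wss:// URL from your HAR file.
  const url = 'wss://the_url';

  const res = ws.connect(url, {}, function (socket) {
    socket.on('open', function () {
      // Send/receive here as your application does, then close.
      socket.close();
    });
  });

  // A successful websocket upgrade answers with HTTP status 101.
  check(res, { 'upgraded to websocket': (r) => r && r.status === 101 });
}
```

Run it with `k6 run script.js`; since k6's `ws.connect` is blocking, the default function only returns once the socket is closed.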
https://community.k6.io/t/k6-unsupported-protocol-scheme-wss/781
HCL is a configuration file format created by Hashicorp to "build a structured configuration language that is both human and machine friendly for use with command-line tools". They created it to use with their tools because they weren't satisfied with existing solutions, and I think they did a really good job with it. I share their opinion of the alternatives. JSON has parsers in just about any language one can think of, but is a really terrible format for humans to deal with since it doesn't support comments. YAML is sometimes mentioned as a better choice, but I think it's generally pretty terrible to use. I've only had to use YAML a few times, but in my opinion it's tricky to get YAML files of any complexity correct without a bit of futzing.

Instead, HCL is super easy to type out and read, and looks quite nice:

variable "ami" {
    # this is a comment
    description = "the AMI to use"
}

I liked the idea of HCL a lot, so I created a python implementation of the parser using ply, which I've called pyhcl. When you read an HCL file, pyhcl turns it into a python dictionary (just like JSON), and the dictionary representation for the above HCL is like this:

{
    "variable": {
        "ami": {
            "description": "the AMI to use"
        }
    }
}

To get that result, parsing HCL using pyhcl is super easy, and is pretty much the same as parsing JSON:

import hcl

with open('file.hcl', 'r') as fp:
    obj = hcl.load(fp)

So far, I've been really happy with HCL, and I've been using it for some projects with complex configuration requirements, and the end users of those projects have been quite happy with the simplicity that HCL provides. pyhcl is provided under the MPL 2.0, just like the golang parser, and can mostly be used in the same places one might use JSON. It's probably not terribly performant — but parsing files that humans read shouldn't really require performance. If you do need performance, python's JSON parser is written in C and should meet your needs quite nicely.
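Since pyhcl deliberately produces the same dictionary shape a JSON parser would, the structure it returns can be sanity-checked with nothing but the standard library: the JSON-equivalent form of the HCL snippet parses to the same nested dict. (This example uses only the stdlib json module, so it doesn't require pyhcl to be installed.)

```python
import json

# JSON equivalent of the HCL example from the post; parsing it yields
# the same dictionary that hcl.load() would return for the HCL version.
json_equivalent = """
{
  "variable": {
    "ami": {
      "description": "the AMI to use"
    }
  }
}
"""

obj = json.loads(json_equivalent)
print(obj["variable"]["ami"]["description"])  # -> the AMI to use
```

Code that consumes the parsed configuration can therefore be written against plain dict access and works identically whether the input came through hcl.load() or json.load().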
One thing that would be nice is if there were an actual specification for HCL; in particular, it isn't very well defined how to convert HCL to/from JSON. Lacking that, pyhcl currently tries to match the golang implementation bug for bug, and its test suite has stolen most of the fixtures from the golang parser to ensure maximum compatibility.

Github site for pyhcl
Pypi site for pyhcl
http://www.virtualroadside.com/blog/index.php/category/software/
The QDial class provides a rounded range control (like a speedometer or potentiometer). More...

#include <QDial>

Inherits QAbstractSlider.

The QDial class provides a rounded range control (like a speedometer or potentiometer). QDial makes the functions setValue(), addLine(), subtractLine(), addPage() and subtractPage() available as slots.

notchTarget
This property holds the target number of pixels between notches. The notch target is the number of pixels QDial attempts to put between each notch. The actual size may differ from the target size. The default notch target is 3.7 pixels. Access functions: notchTarget(), setNotchTarget().

notchesVisible
This property holds whether the notches are shown. If the property is true, a series of notches are drawn around the dial to indicate the range of values available; otherwise no notches are shown. By default, this property is disabled. Access functions: notchesVisible(), setNotchesVisible().

QDial(QWidget *parent = 0)
Constructs a dial. The parent argument is sent to the QAbstractSlider constructor.

~QDial()
Destroys the dial.

initStyleOption(QStyleOptionSlider *option) const
Initialize option with the values from this QDial. This method is useful for subclasses when they need a QStyleOptionSlider, but don't want to fill in all the information themselves.

See also QStyleOption::initFrom().
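As a quick illustration of the properties described above, a dial with visible notches might be set up as follows. This is a sketch, not part of the reference page: the range and value are arbitrary, and it needs a Qt development environment to compile:

```cpp
#include <QApplication>
#include <QDial>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QDial dial;
    dial.setRange(0, 100);        // inherited from QAbstractSlider
    dial.setValue(42);            // setValue() is available as a slot
    dial.setNotchesVisible(true); // draw notches around the dial
    dial.setNotchTarget(3.7);     // the documented default, set explicitly here

    dial.show();
    return app.exec();
}
```

Connecting the inherited valueChanged(int) signal to a slot is the usual way to react to the user turning the dial.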
http://doc.trolltech.com/4.5-snapshot/qdial.html
Lennart Regebro wrote: > On 2/13/06, Philipp von Weitershausen <[EMAIL PROTECTED]> wrote: > > Yet again looking for comments, this time at: > >. > > What happens if you want to add your own statements? Should you still > do that in your own namespace? No. But I don't think that it'll be much of a problem. I expect that not a lot of 3rd party packages will need their own set of ZCML directives. I would certainly not encourage it and I will continue not to document it in my book. ZCML is a good tool, but only with a certain limited functionality (that was its intention in the first place!). That includes a somewhat limited set of directives. I realize "somewhat" is fuzzy. Let me just say that I think ZCML has failed when Plone will have its own ZCML directives... > If not, how are we going to make sure we don't get conflicts? By choosing decent names for the few directives that will be necessary. I know, it sounds lame, but even *with* namespace you'd need decent names. Or does anything prevent me in my own package to register a ZCML directive called browser:viewlet? Nope. So, it doesn't make much of a difference with or w/o namespaces. Philipp ---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program. _______________________________________________ Zope3-dev mailing list Zope3-dev@zope.org Unsub:
https://www.mail-archive.com/zope3-dev@zope.org/msg04020.html
Last modified on 22 February 2011, at 19:01

If you have a document (image, JavaScript, CSS stylesheet) and you want to load it in a page, you pass an argument "src" or "href" to the tags <IMG>, <SCRIPT> or <LINK rel="stylesheet">.

If this argument is a relative URL, remember that the script URL has the form. So if the document to load is in the same folder as the script, the relative URL must begin with ../ : the relative URL ../foo.js will be resolved as the absolute URL

If you forget the leading ../, the relative URL foo.js would have been resolved as the absolute URL, resulting in a "File not found" error.

The same goes for images and stylesheets:

def index():
    style = LINK(rel="stylesheet", href="../default.css")
    body = IMG(src="../images/books.png")
    return HTML(HEAD(style) + BODY(body))
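The resolution rule described above can be checked with the standard library's urllib.parse.urljoin. The script URL below is a made-up example (the page elides the actual form), but the ../ behaviour it demonstrates is the same:

```python
from urllib.parse import urljoin

# Hypothetical Karrigell script URL: the trailing path segment behaves
# like a document "inside" the script.py folder, so bare relative URLs
# resolve there rather than next to the script.
script_url = "http://server/app/scripts/script.py/index"

# Without the leading ../ the browser looks inside the script path:
print(urljoin(script_url, "foo.js"))
# -> http://server/app/scripts/script.py/foo.js  (File not found)

# With ../ the URL resolves next to the script, as the page recommends:
print(urljoin(script_url, "../foo.js"))
# -> http://server/app/scripts/foo.js
```

The same resolution applies to the ../default.css and ../images/books.png references in the index() example.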
http://en.m.wikibooks.org/wiki/Karrigell/Insert_an_image,_a_stylesheet,_a_Javascript_in_a_document
Silverlight ComboBox SelectedItem.
- Tuesday, April 10, 2012 2:53 PM

Using EF4 + DDS, SL5. I have a combobox in XAML, and in the Loaded event I was trying to set the SelectedItem as a default. I just want to set the combobox selected item when it loads. What I tried is:

1) cbo.SelectedItem = "Test";
2) cbo.SelectedItem = ptx.entity.Select(o => o.entitycolumn);
3) List<LanguageList> AppLanguages = LanguageComboBox.ItemsSource as List<LanguageList>;
   var selectedLanguage = (from o in AppLanguages where o.LanguageCode == "en-US" select o).FirstOrDefault();
   LanguageComboBox.SelectedItem = selectedLanguage;
4) SelectedItem, SelectedValue, SelectedValuePath, SelectedIndex = 0, -1, -2, 1, everything.

but still not working....

All Replies
- Tuesday, April 10, 2012 4:17 PM

Try making properties in your xaml backing code class for the ItemsSource and the SelectedItem. Then bind to those properties in the xaml and then set the properties of the class instead of the control. Be sure to implement INotifyPropertyChanged.

- Wednesday, April 11, 2012 1:47 PM

I don't have any INotifyPropertyChanged in my code behind. (I am doing two-way binding.) I have a simple xaml page which contains a combobox (say I want to display column c from table 1), that's it!
in code behind what I was trying is:

private void cbo_Loaded(object sender, RoutedEventArgs e)
{
    PContext ptx = (PContext)pDomainDataSource.DomainContext;
    Binding itemsSource = new Binding("entitys") { Source = this.pDomainDataSource.DomainContext };
    Binding SelectedItem = new Binding("entity") { Mode = BindingMode.TwoWay }; //ItemsSource="{Binding Source=entity.entityname Path=Data}"
    cbo.SetBinding(ComboBox.ItemsSourceProperty, itemsSource);
    cbo.SetBinding(ComboBox.SelectedItemProperty, SelectedItem);
    cbo.DisplayMemberPath = "column c"; //{ DisplayMemberPath = "column c" };
    //cbo.SelectedValue = cbo.Items.SingleOrDefault(c => (c as ComboBoxItem).Content == "comboboxitem");
    //cbo.SelectedValue = cbo.Items[1];
    //cbo.Items.IndexOf("comboboxitem");
    //cbo.SelectedValuePath = "665";
    //cbo.SelectedIndex = 0;
    //Web.entity selectedProject = (from p in ptx.entity select p).FirstOrDefault();
    //IEnumerable<FMProject> projectList = PDataGrid.ItemsSource.OfType<entity>();
    //List<Web.entity> projectList = cbo.DataContext as List<Web.entity>;
    //MessageBox.Show(projectList.First().ToString());
    //Web.entity.selectedProject = (from p in projectList where p.column c == "string" select p).FirstOrDefault();
    //cbo.SelectedItem = ptx.entities.Select(o => o.column c);
    //cbo.SelectedItem = selectedProject;
    //cbo.SelectedItem = "string";
    //PContext ptx = (PContext)pDomainDataSource.DomainContext;
    //List<entity> AppLanguages = cbo.ItemsSource as List<entity>;
    //var selectedLanguage = (from o in AppLanguages where o.ProjDomainDataSource.Data == "string" select o).FirstOrDefault();
    //cbo.SelectedItem = selectedLanguage;
    //ComboBoxItem comboitem = new ComboBoxItem();
    //comboitem.IsSelected = true;
    //comboitem.Content = "Test";
    //cbo.SelectedItem = comboitem;
}

still not set the first item..

- Wednesday, April 11, 2012 4:50 PM

I don't think your example has given the SelectedItem a value anywhere.

- Thursday, April 12, 2012 3:21 PM

How do you want me to set the value?
in code behind or in xaml, either is not working. By the way, I found this.

- Thursday, April 12, 2012 4:43 PM

I don't use the RIA stuff because it seems too complex to me, and I think your problem MIGHT be that when the ItemsSource binding is done, there is nothing in the list yet. And setting the SelectedItem at that point wouldn't do anything because the list is empty. So, assuming you can find someplace where any async call has been completed for the ItemsSource of the combobox, you could set SelectedItem there. Could be that blog post will help as it seems to be talking about async calls. Sorry, I don't know anything about RIA to be able to contribute.

- Friday, April 13, 2012 5:06 PM

Thank you for sharing all your knowledge mtiede. From the beginning I have data in the combobox list; the only problem is I have to set the selected item, either in xaml or codebehind. The data is coming directly through DDS + EF4 from an entity in the database (I am also fetching through RIA services, which is basically a metadata file and a service file). Once again thank you for everything. I am still not able to fix the problem. Microsoft wants many people to visit these silverlight blogs to see how hardworking they are :) and guess what, you still can't find the answer. combobox selected item = all headache

- Thursday, April 26, 2012 8:42 AM

SL combobox and selecteditem work fine. I use it a lot. There is something about what you are doing that is causing the problem. Try making a new app from scratch. Just create the list and selected item totally in code without any RIA or database involved. Make sure you understand how that works first. Then go back to your actual app and try to make it similar to what works. Make sure you implement INotifyPropertyChanged.

- Thursday, April 26, 2012 4:14 PM

Even if I create a new project for the list and try there using a simple list, and it works, that will be great, but as for now I need to set the selected item based on RIA services,
but not on a simple list. My data is coming directly into the combobox through binding Mode=BindingMode.TwoWay. What I tried is: combobox.SelectedIndex = itemindex; it works locally but not in production. Thanks again mtiede@swtec..

- Thursday, April 26, 2012 7:18 PM

It's very difficult to say exactly what your issue is. I'm having trouble following your code and logic. But I can tell you this: What you want to do is possible. I just think you're going about it the wrong way. I think you should bind your SelectedItem property rather than trying to set it in code behind.

1) The combobox ItemsSource is one-way bound to your list of items.
2) The combobox SelectedItem is two-way bound to a property that you need to create (we'll call it MySelectedItem) that is of the same type as the items in your list.

I am not familiar with the domain data source, but I am familiar with RIA services, and I think some of the concepts should cross over. When you load your list of items that are used to populate the combo box through the DDS, there will be an event that is fired called LoadedData(). In this event, set your MySelectedProperty to whichever item you want to be selected from your list and it will be selected in the combobox, because your combobox SelectedItem is bound to that item.

- Friday, April 27, 2012 8:39 AM

Josh, I think you meant MySelectedItem in the last paragraph instead of MySelectedProperty (in order to agree with the name you mentioned earlier).

- Friday, April 27, 2012 9:59 AM

You should make sure the ItemsSource is set first before selecting the SelectedValue. More conveniently, use Kyle McClennan's [MSFT] ComboBox DataSource and set <ComboBox ... ... ex: This is the best way, and should have been part of the Silverlight release. People have had so much trouble with ComboBoxes because this doesn't exist by default.

- Sunday, April 29, 2012 7:26 AM

I wouldn't switch to another combobox type. Most likely it will just replace one set of problems with another.
The provided ComboBox DOES work. I was looking back at the code and had a couple of questions. Is LanguageList a single language? Since you are saying List<LanguageList>, it makes it look like LanguageList is a single language. If it isn't, that could be a source of problems.

And, fwiw, I would make a Viewmodel. The Viewmodel would have a property with notification for both the List and the SelectedItem. Then in a Page I would make the datacontext of the page be an instance of that Viewmodel. Then the View can have the combobox bindings to the List and the SelectedItem properties. And everything should happen automatically because of the bindings and notification.

- Sunday, April 29, 2012 10:01 AM

Here's an example. It is written in the Embarcadero Prism language, but C# is similar, just more complicated.

Here is the MainPage.xaml:

<UserControl x:
    <UserControl.DataContext>
        <viewmodels:MainViewmodel/>
    </UserControl.DataContext>
    <views:MainView/>
</UserControl>

Note how it instantiates the Viewmodel and the View and "hooks them up" through the DataContext.

Here is the MainView.xaml:

<UserControl x:
    <Grid x:
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>
        <TextBlock Text="Main View"/>
        <ComboBox Grid.
        <TextBlock Text="{Binding SelectedLanguage.Name}" Grid.
        <TextBlock Text="{Binding SelectedLanguage.Code}" Grid.
    </Grid>
</UserControl>

It expects the Viewmodel to have LanguageList and SelectedLanguage.

Here is the MainViewmodel, which instantiates the list and sets a default value for SelectedLanguage. It also establishes the two properties and NOTIFICATION (which is the "notify" directive in the Prism language and has to be an implementation of INotifyPropertyChanged in C#).
namespace MvvmBase8.Viewmodels;

interface

uses
  System.Collections.Generic;

type
  Language = public class
  public
    property Name : String; notify;
    property Code : String; notify;
  end;

  MainViewmodel = public class
  public
    property LanguageList : List<Language>; notify;
    property SelectedLanguage : Language; notify;
    constructor;
  end;

implementation

constructor MainViewmodel;
begin
  var temp := new List<Language>;
  temp.Add( new Language( Name:='US English', Code:='en-US' ) );
  temp.Add( new Language( Name:='British English', Code:='en-BR' ) );
  temp.Add( new Language( Name:='French', Code:='fr-FR' ) );
  LanguageList := temp;
  SelectedLanguage := temp[0];
end;

end.

Note that I only set LanguageList AFTER it is populated with the languages. Otherwise, the binding will not see a "property change" of the list address after the languages are added to the list. And then the selected item is just set to the first item in the list (which assumes there is at least one). Also note that there is no xaml code-behind code required.

Hope that helps even if it isn't using RIA or any data access. But the setup of the Viewmodel, View and Page should be similar.

- Friday, May 11, 2012 5:28 PM

.purchaseDomain me what's wrong in this above code.

- Friday, May 11, 2012 5:29 PM

This is my xaml and code behind as follows: .Domain what's wrong in this above code?

- Friday, May 11, 2012 6:29 PM

If at some point you are looping through your items to assign them to your ComboBox, you might be able to filter out the ComboBoxItem you want and set IsSelected=true;

- Saturday, May 12, 2012 8:23 AM

azam50004, Try setting the SelectedItem before setting the DisplayMemberPath. Also, setting SelectedItem to a string constant implies that the ItemsSource is a list of Strings. Is that true? OR is the TableNames some sort of other object class? I don't know Ria, but I would guess it is some sort of String Entity and not a String.
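The thread notes that C# is similar to the Prism Viewmodel; for readers following along in C#, here is a rough translation. It is a sketch, not code from the thread: the class and property names mirror the Prism version, and the "notify" directive becomes an explicit INotifyPropertyChanged implementation (Silverlight-era C#, so no nameof):

```csharp
using System.Collections.Generic;
using System.ComponentModel;

public class Language
{
    public string Name { get; set; }
    public string Code { get; set; }
}

public class MainViewmodel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private List<Language> _languageList;
    public List<Language> LanguageList
    {
        get { return _languageList; }
        set { _languageList = value; Raise("LanguageList"); }
    }

    private Language _selectedLanguage;
    public Language SelectedLanguage
    {
        get { return _selectedLanguage; }
        set { _selectedLanguage = value; Raise("SelectedLanguage"); }
    }

    public MainViewmodel()
    {
        // Populate the list first, THEN assign the property, so the
        // binding sees one property change for the completed list.
        var temp = new List<Language>
        {
            new Language { Name = "US English", Code = "en-US" },
            new Language { Name = "British English", Code = "en-BR" },
            new Language { Name = "French", Code = "fr-FR" }
        };
        LanguageList = temp;
        SelectedLanguage = temp[0];
    }

    private void Raise(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```

Binding the ComboBox's ItemsSource to LanguageList and its SelectedItem (TwoWay) to SelectedLanguage then selects the first entry on load, which is the behaviour the original poster was after.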
If that is the case, then you need to find the Entity in the TableNames that has a String property that is "Apple" and assign THAT object to the SelectedItem.

- Monday, May 14, 2012 5:56 PM

@mtiede@swtec... "setting SelectedItem to a string constant implies that the ItemsSource is a list of Strings. Is that true?" Yes, it's true that I have a list of strings in that column which I want to display in the combobox. (The data is coming; the combobox just appears blank; the user needs to click and select, then it's working.) TableNames is just a query in the service file, and TableName is an entity in the DB. Ex: GetFruits is a query to fetch data from that table based on our choice, a complete table or particular columns, and Fruit is just an entity in the Oracle DB. Hope it makes sense. Thank you once again for all your suggestions and help.

================================================

I referred to this article in my book and tried setting the primary key of the entity as the SelectedValuePath; still no help. Let me describe the problem step by step.

1) The Fruit entity is there in the Oracle DB. What I have in the service query to fetch data from the DB is:

[Query(IsDefault=true)]
public IQueryable<Animal> GetAnimals()
{
    return this.ObjectContext.Animals.Include("Fruit").Include("Employee");
    // Including other entities in the query to use this query as in DDS.
}

----------------------------------------------------------------------------------------------------------------------------------------

[Query(IsDefault=true)]
public IQueryable<Fruit> GetFruits()
{
    return this.ObjectContext.Fruits.OrderBy(p => p.FruitName); // column name to be displayed in combobox
}

[Query(IsDefault=true)]
public IQueryable<Animal> GetAnimals()
{
    return this.ObjectContext.Animals.OrderBy(o => o.AnimalAge); // column name to be displayed in combobox
}

***********************************************************************************************************

combobox codebehind and xaml code:
<ComboBox x:

private void FruitscomboBox_Loaded(object sender, RoutedEventArgs e)
{
    Binding itemsSource = new Binding("Entities") { Source = this.DomainDataSource.DomainContext };
    Binding SelectedItem = new Binding("Apple");
    FruitscomboBox.SetBinding(ComboBox.ItemsSourceProperty, itemsSource);
    FruitscomboBox.DisplayMemberPath = "ColumnName";
}

I tried again as follows: Binding SelectedItem = new Binding("Apple"); // before DisplayMemberPath; Apple is one of the items from the list of strings of dynamic data coming through the RIA service, but it results in an exception (Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED))).

Maybe I am missing something really simple. I just want to confirm that I need to use the combobox_Loaded event, right? For the Binding, SelectedItem, ItemsSource.

- Monday, May 14, 2012 6:04 PM
- Monday, May 14, 2012 6:55 PM

Azam- You do not bind the SelectedItem to a column, you bind it to an ENTITY, and in this case it needs to be an entity that is contained within the data that is loaded into the DomainDataSource control. This should be accessed through the DomainDataSource.DataView.CurrentItem property. " /> That will NOT work. Instead you need to bind the SelectedItem to the CurrentItem of the DomainDataSource. I've never used the DomainDataSource myself, but I believe it's something like this:

<ComboBox x:

What you are binding to here is the DataView.CurrentItem of your DomainDataSource. To control what field is displayed in the ComboBox, set the DisplayPath property. To control what field is used as the value of the selected item, use the ValuePath property. If I could recommend something, it would be to not use the DomainDataSource except for simple purposes. I would not use it in a production application that customers would use.
It would be better in my opinion to use a MVVM type approach where you at least use a ViewModel (just a class that you write that implements INotifyPropertyChanged, loads the data, and exposes it so that the UI can bind to it) instead of a DomainDataSource. This will give you more control over how it all works, and will be a lot more straightforward to implement advanced functionality if you need it.

- Tuesday, May 15, 2012 11:16 AM

I'm not sure a "Loaded" or "ValuePath" are needed either. I never use either of those. But maybe it is something RIA needs.

- Tuesday, May 15, 2012 3:32 PM

JoshSommers, it looks like your answer is very close, but when I set DisplayMemberPath="DataTableColumnName" and SelectedValuePath="Value ex: Apple" it throws an exception: Error 2 Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)). I wish I could use INotify to rectify this but I have no idea how it works in the MVVM approach. Basically the data is coming into the combobox but it appears blank; no selected value is displayed. After reading about INotify I went to this blog -> I wondered what to write in xaml and in codebehind for my combobox. I would really appreciate it if you could post some code for the MVVM approach. I have EF4, SL5, Oracle, .Net4, so that I can create a separate class for my column which I have to bind to my combobox. The Fruit entity has many columns (I just have to display one column from this entity in the annoying combobox). Thank you so much.

- Tuesday, May 15, 2012 3:43 PM

Ok, there is still some misunderstanding. Providing a full MVVM example is too much for me to do at the moment as I am very busy with my project, but there are a lot of resources out there for learning to do this when you are ready to take that step. But for now, you CAN get this to work with your domaindatasource.
First, let's try to fully understand these two properties:

1) DisplayMemberPath: This property controls what is DISPLAYED in the combobox, both in the dropdown and in the text area of the combobox. This property should contain the NAME of a field in your table. For example, say you have a field called "sName" in your table. Then you would set DisplayMemberPath="sName". This will cause the VALUE in that FIELD to be displayed in the combo box and in the dropdown.

2) ValueMemberPath: This property should contain the NAME of a field in your table that you want to represent the VALUE, and will control what is returned in the codebehind or binding for the SelectedValue property. For example, say you have a GUID column called "ID". If you set ValueMemberPath="ID" then in code behind when you get SelectedValue, you will get the ID for the selected item. If you do not set this property, the SelectedValue will return the same thing as SelectedItem: the entity itself (not a value of a property on the entity, but the entity itself).

I just want to make sure you understand that you should not put a hard-coded value in either of these properties (for example DO NOT set "apple" as the value for either of these properties, that is not how it works); they BOTH should contain the NAME of a FIELD.

- Tuesday, May 15, 2012 6:08 PM

Your biggest mistake is to use DomainDataSource as the ItemsSource of a combobox. It only works in the rarest of cases. Most of the time, you'll find the odd misbehavior. This is because of a bug in combobox. You must assign the ItemsSource before assigning myComboBox.SelectedValue. I've been bitten by this before, and the only solution is to avoid it. The reason your code works locally but not in production is due to timing. The production server takes a little longer to return the ItemsSource, but SelectedValue has already been set.
Use Kyle McClennan's [MSFT] ComboBox DataSource, and ComboBox.Mode="async", and it should work:

<ComboBox Name="AComboBox" ItemsSource="{Binding Data, ElementName=ASource}" SelectedItem="{Binding A, Mode=TwoWay}" ex:ComboBox.

It works by helping the ComboBox retain its SelectedValue when the ItemsSource is finally set.

- Tuesday, May 15, 2012 6:26 PM

In order to set the combobox to Async mode, I need to add that dll file to consume in my project. Would you recommend adding that dll just to set the selected item so that the combobox doesn't appear blank? I think I already tried that, but I will try again and get back. "Use Kyle McClennan's [MSFT] ComboBox DataSource, and ComboBox.Mode="async" and it should work" - this blog is for SL4 but not SL5; I am using SL5. Looks like @Joshsommer is correct; I will follow your solution, Josh. (I believe that it has nothing to do with DDS.) I have been trying this combobox thing for the last month only to set the selected item = some value. :) Thank you very much.

- Tuesday, May 15, 2012 6:32 PM

I don't think he is facing that issue. I have faced the bug you are speaking of before; it has nothing to do with using the DomainDataSource. It happens in any binding scenario when binding to the SelectedValue and possibly the SelectedIndex property. But I do not think it happens when binding to the SelectedItem property, and I also think that the bug you are referring to was resolved in SL5. Please correct me if you know this to be incorrect.

- Tuesday, May 15, 2012 6:35 PM

Azam: Did you try my previous suggestions? Do you understand what the DisplayMemberPath and ValueMemberPath properties are used for now? I do not advise you getting sidetracked by this other combobox. You can make the normal combobox work if you set it up correctly, which so far, you have not.

- Tuesday, May 15, 2012 7:06 PM

@Josh, sorry, I just got distracted by DDS and other answers.
I am able to understand what you are saying; from the beginning I have had DisplayMemberPath = "Field"; (Field is one column of the DB table). I don't have ValueMemberPath here in SL5; how do I get that?

- Tuesday, May 15, 2012 7:13 PM

NO!!!!!! But I am sorry, I DID use the wrong property name. SelectedValuePath (sorry, not ValueMemberPath) is the VALUE. DisplayMemberPath is what is DISPLAYED.

- Tuesday, May 15, 2012 7:30 PM

Thank you for the quick response, Josh. OK, I understood; looks like the problem lies in SelectedValuePath. Let me check that.

<ComboBox x:

CodeBehind modified as follows:

Binding itemsSource = new Binding("Entities") { Source = this.DomainDataSource.DomainContext };
Binding SelectedItem = new Binding("Entity") { Mode = BindingMode.TwoWay };
FruitcomboBox.SetBinding(ComboBox.ItemsSourceProperty, itemsSource);
FruitcomboBox.SetBinding(ComboBox.SelectedItemProperty, SelectedItem);
FruitcomboBox.DisplayMemberPath = "Field"; // (column of db table which is a list of strings)
FruitCombobox.SelectedValuePath = "Primarykeycolumn";

Still not fixed.

- Tuesday, May 15, 2012 7:42 PM

You can technically not set either of those properties. The important property to set for you right now is the binding on the SelectedItem property. First just make sure that that is working, then set the other properties. If you do not set the DisplayMemberPath, the combobox will display the .ToString value of your entity, which usually resolves to the primary key, which in my case is always a GUID. If you do not set the SelectedValuePath property, then nothing will change with the display, only what you get when you try to access the SelectedValue property. If the SelectedValuePath property is not set, then the SelectedValue property will return the full entity instead of a property on the entity. I suggest that at first you do not set either of these properties, and just get the bindings to work first. Then set the DisplayMemberPath property and confirm that it still works. Then, if and only if you NEED to set the SelectedValuePath property, you can set that property.
- Tuesday, May 15, 2012 7:59 PM we are on the same track @Josh,You are 100% right,I just check that If I do not set DisplayMemberPath="column"; then Its taking my PrimarykeyValues and displaying in combobox but even combobox appear as blank initially , I have to click it to check wheather it has data in it. I am 100% sure about the DisplayMemberPath and binding now. that I set correctly to a database table column [example table : emp{ empId,empage,empname etc} I set as combobox.DisplaymemberPath="empname"; and Binding SelectedItem=new Binding("empId"){Mode=BindingMode.TwoWay}; From here onward I will set everything in codebehind. - Tuesday, May 15, 2012 8:08 PM No buddy, you've still got it wrong. Go back a few posts and look again at what I said about binding the SelectedItem. You need to bind this to the selected item of the DomainDataSource. You need to specify WHICH ENTITY TO BIND TO, NOT WHICH FIELD. Please look at this again: <ComboBox x: DomainDataSource.DataView.CurrentItem: - Tuesday, May 15, 2012 8:15 PM I am sorry, I want to inform you that I remove selected item, Itemsource from XAML I just have the below exactly same code in xaml and in codebehind,I want to do everything in codebehind to avoid confusion. <combobox x: Exact Code Behind: Binding itemsSource = new Binding("Entities") { Source = this.DomainDataSource.DomainContext}; Binding SelectedItem = new Binding("EntityUID"){ Mode = BindingMode.TwoWay }; FruitcomboBox.SetBinding(ComboBox.ItemsSourceProperty, itemsSource); FruitcomboBox.SetBinding(ComboBox.SelectedItemProperty, SelectedItem); FruitcomboBox.DisplayMemberPath = "Field";//(column of db table which is a list of strings)"; - Tuesday, May 15, 2012 8:20 PM This is actual XAML from a sample project I just created. 
You still need to fill in the DomainDataSource properties to load the data, etc, but this is how the bindings should be setup: <riaControls:DomainDataSource <ComboBox DisplayMemberPath="FieldNameToDisplay" SelectedValuePath="FieldNameToUseAsValue" ItemsSource="{Binding ElementName=DomainDataSource1, Path=DataView}" SelectedItem="{Binding Path=DataView.CurrentItem, Mode=TwoWay, ElementName=DomainDataSource1}" /> - Tuesday, May 15, 2012 8:22 PM You're not avoiding any confusion, you're creating more confusion. Try to follow the instructions and once you have it working, then you can change it and try to do things your way. Setting up the bindings in the code behind can be ok if you know what you are doing, but it can be disasterous if you do not. - Tuesday, May 15, 2012 8:37 PM I remove the loaded event and set everything exactly as you mention in xaml. (so now I don't have codebehind for combobox) but I am not getting any data in combobox due to combobox.setbinding I can't do in xaml. need to fill the combobx. You hv done alot for this problem today, Can you please post code just for codebehind. so that I can set all the properties in codebehind. - Tuesday, May 15, 2012 8:43 PM Look, you're not really getting it my friend. I can only provide you with an example. You need to be smart enouogh to know which pieces you need to change to fit within your code, I can't really help you with that. I think you'd better step away from this and do some reading. Look at other examples of how to use binding. You clearly do not understand how it works so you need to start by learning, then move on to "doing". You do NOT need to do ANYTHING in code behind to get the bindings to work. You can do it all in the XAML, and my example shows you exactly how to do that, but you need to fill in the correct names of your fields. If yuo want to paste your XAML here I will try to look at it and fix it for you, but that is the last thing I can offer at this point. 
I strongly advise you to take some time to read a book on Silverlight that includes a section on Binding. I recommend Business Applications with Silverlight by Anderson. - Tuesday, May 15, 2012 8:43 PM 1. if you use combobox as a standalone control, you can set selecteditem, selectedvalue, selectedindex. They all worked. 2. if you use databinding, use code behind to changed the property/value which binds to the selectedvaluepath. It will work. (well at least it worked for me :) ) - Tuesday, May 15, 2012 8:49 PM Not helpful. He is not binding to the SelectedValue, so this is irrellevant and just adding to the confusion. - Tuesday, May 15, 2012 8:57 PM I have Pro Business Application with SL5 by Chris Anderson/SL4 in action by Pete Brown both the books right beside me. Yes you are right that May be I need to read a little. Anyhow Here is my code goes for xaml. <riaControls:DomainDataSource Name=DomainDataSource1 <ComboBox x:Name="FruitcomboBox" Height="23" Margin="10,6,5,0" HorizontalAlignment="Stretch" VerticalAlignment="Top" Grid.Column="1" Grid.Row="2" DisplayMemberPath="ColumnName" SelectedValuePath="FruitUID"(primarykeycolumnName) ItemsSource="{Binding Source/ElementNamepath=DomainDataSource1,Path=DataView}" //(I try with both source and ElementName} Thank you for everything. - Wednesday, May 16, 2012 3:18 AM Normally, the attributes that are set in xaml are not order dependent. But you are asking for trouble on a combobox to set them in the order you are setting them. (I know at least in some previous SL versions they didn't work right. Haven't tried in SL 5) So, if it doesn't know the ItemsSource and it doesn't know the SelectedItem, how could it know what to display? That might be a problem by ordering your parameters the way you have. I would recommend: 1. ItemsSource 2. SelectedItem 3. DisplayMemberPath 4. SelectedValuePath And maybe it works just fine now since Josh has them in a different order. 
I'm just making the suggestion that you put them in this order as I know it worked in the past. - Wednesday, May 16, 2012 12:41 PM Azam, Can you please paste in your actual XAML without changing anything? It's hard for me to help when there are parts that are not your actual code. PLease just put here your actual xaml without any notes or substitutions. - Thursday, May 17, 2012 6:45 PM I am sorry for the late reply, Josh I can't post the exact code here, we fixed the problem by selecting a row which by default select an Item in combobox for me, If we select some row in Datagrid that row value will be reflected in combobox. but I am still interested in fixing, I try to use kyle blog .dll in my project and use Mode=Async & AsyncEager When I download sample project I found that he define seperate DataSource just for combobox. mtiede@swtec... I try to follow your recommdn : as Itemsource SelectedItem DisplayMemberPath SelectedValuePath but no help. And Finally I am trying this -> InitializeComponent(); { ComboBox comboo = new ComboBox { DisplayMemberPath = "PhaseName" }; Binding SelectedValuePath = new Binding("PhaseUID"); Binding itemsSource = new Binding("Phases") { Source = this.phaseDomainDataSource.DomainContext }; comboo.SetBinding(ComboBox.ItemsSourceProperty, itemsSource); Binding selectedItem = new Binding("PhaseUID") { Mode = BindingMode.TwoWay }; comboo.SetBinding(ComboBox.SelectedItemProperty, selectedItem); //comboo.HorizontalAlignment = HorizontalAlignment.Stretch; comboo.VerticalAlignment = VerticalAlignment.Top; comboo.Height = 25; comboo.Width = 270; comboo.Margin = new Thickness(10, 6, 5, 0); LayoutRoot.Children.Add(comboo); and again data is coming but selected item is not working. } Josh which version are you using SL5 ? Thank you Josh & metde@ once again for all your comments and suggesitons.
http://social.msdn.microsoft.com/Forums/en-US/silverlightgen/thread/b816b7b9-e72b-4b79-aac5-78935dea73af/
Cython wrapper around GrailSort ()

Project description

Join the Discord channel for live tech support!

GrailSort for Python

GrailSort for Python is a Python API for the GrailSort algorithm.

Installation

You can install GrailSort for Python from source:

$ git clone
$ cd grailsort
$ python setup.py install

Or you can install it from PyPI:

$ python -m pip install GrailSort

Usage

GrailSort for Python comes with two modules: a strict one, and a slower one. The strict module (cGrailSort) only deals with array.array('d') objects, while the slower module (grailsort) deals with any Python sequence that contains comparable objects. It is generally unnecessary to deal with the grailsort module, as you might as well use the built-in list.sort method or the sorted function. However, TimSort is not in-place, while GrailSort is. cGrailSort is useful when you need to sort with speed.

Example

grailsort

import grailsort
import random

def print_out_of_order_index():
    index = next((i for i in range(len(l) - 1) if l[i] > l[i + 1]), None)
    print('Out of order index:', index)

l = list(range(1024))
print_out_of_order_index()
random.shuffle(l)
print_out_of_order_index()
grailsort.grailsort(l)
print_out_of_order_index()

cGrailSort

import cGrailSort
import array
import random

def print_out_of_order_index():
    index = next((i for i in range(len(a) - 1) if a[i] > a[i + 1]), None)
    print('Out of order index:', index)

a = array.array('d', range(1024))
print_out_of_order_index()
random.shuffle(a)
print_out_of_order_index()
cGrailSort.grailsort(a)
print_out_of_order_index()
https://pypi.org/project/GrailSort/
Looking at the System.Security.AccessControl Namespace

Description: API's to create your ACLs. Well no more! In this screencast Duane takes a look at using this brand new namespace and shows how you can even apply ACLs to your objects too!

The Discussion

PerfectPhase: Description is a bit misleading; it only deals with ACLs for file objects, and does not touch on securing your app's private objects.

kaush: Can I get the code please? Thanks a bunch.

aroberts55403: Hello, could you please send or post the code for this application? Thank you. Very informative post.

santy_123: Is it possible to remove inherited permission on a folder by using this namespace? If yes, then how? Please reply; I want it urgently. If you have the code, please mail it to me.
https://channel9.msdn.com/Blogs/trobbins/Looking-at-the-SystemSecurityAccessControl-Namespace
pantry

A JSON/XML resource caching library based on Request

npm install pantry

Introduction

Pantry is an HTTP client cache used to minimize requests for external JSON and XML feeds. Pantry will cache resources to minimize round trips, utilizing the local cache when available and refreshing the cache (asynchronously) as needed. (As of 0.7.x, Pantry can also proxy cache any raw resource.) As with any of our projects, constructive criticism is encouraged.

Installing

Just grab node.js and npm and you're set:

npm install pantry

Pantry uses the amazing Request library for all HTTP requests and xml2js to parse xml resources.

Using

Work in progress. See the examples for the time being. To utilize the pantry, simply require it, optionally override some default values, and then request your resource(s) via the fetch method:

var pantry = require('pantry');
pantry.configure({ shelfLife: 5 });

pantry.fetch({ uri: '' }, function (error, item, contentType) {
  console.log(item.results[0].text);
});

At this time, the following configuration options can be specified:

- shelfLife - number of seconds before a resource reaches its best-before date
- maxLife - number of seconds before a resource spoils
- caseSensitive - URI should be considered case sensitive when inferring cache key
- verbosity - possible values are 'silly', 'debug', 'verbose', 'info', 'warn' and 'error' (default is 'error' for production systems)
- parser - possible values are 'json', 'xml', or 'raw' (default is undefined, in which case auto-detection by content-type header is attempted)
- ignoreCacheControl - do not utilize the cache-control header to override the cache configuration if present (default is false)
- cacheBuster - adds the provided query string parameter name with a cachebusting timestamp to the request
- xmlOptions - options passed into the xml2js parser (default is {explicitRoot: false})

When you request a resource from the pantry, a couple interesting things happen. If the item is available in the pantry, and hasn't 'spoiled', it will be returned immediately via the callback. If it has expired (it's beyond its best before date) but hasn't spoiled, it will still be returned and then refreshed in the background.

Ode to my immigrant mother: The best before date is treated as a recommendation. If it hasn't visibly spoiled it's probably still good to use, so use it until we have a chance to go shopping. Especially if it's salad dressing, that stuff never goes bad as long as it's in the fridge.

If the resource isn't available in the pantry, or the item has spoiled, then the item will be retrieved immediately and won't be passed on to the callback method until we have the resource on hand. Pantry will also ensure that we don't fetch the same resource multiple times in parallel. If a resource is already being requested, additional requests for that same resource will hook into the same completion event for the original request.

Storage

The latest version of Pantry ( >= 0.3 ) supports the ability to plug in the caching storage engine of your choice. By default, pantry will utilize the MemoryStorage plugin, which will (of all things) cache items locally in memory. To specify an alternate storage engine, or to provide custom configuration for the default memory storage, simply assign the new storage engine to pantry via the .storage property.
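As an aside, the best-before versus spoiled behaviour described earlier boils down to a three-state freshness check. The sketch below is a plain-JavaScript illustration of that logic; the function name and shape are hypothetical and not part of pantry's API:

```javascript
// Hypothetical sketch of the freshness rules described above (not pantry's actual code).
// An item cached at `cachedAt` (ms since epoch), with shelfLife/maxLife in seconds,
// is either fresh, stale (usable, refresh in the background), or spoiled.
function freshness(cachedAt, shelfLife, maxLife, now) {
  const age = (now - cachedAt) / 1000; // age in seconds
  if (age < shelfLife) return 'fresh';   // serve from cache, no refresh needed
  if (age < maxLife) return 'stale';     // serve from cache, refresh asynchronously
  return 'spoiled';                      // must re-fetch before serving
}

const cachedAt = Date.now() - 10 * 1000; // cached 10 seconds ago
console.log(freshness(cachedAt, 5, 60, Date.now())); // 'stale'
```

With shelfLife 5 and maxLife 60, a 10-second-old item is past its best-before date but not yet spoiled, so it would still be served while a background refresh runs.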
MemoryStorage

The constructor for MemoryStorage takes two parameters:

- config - hash of configuration properties (see below)
- verbosity - controls the level of logging (default is 'info')

The following configuration properties are allowed for MemoryStorage:

- capacity - the maximum number of items to keep in the pantry (default is 1000)
- ideal - when cleaning up, the ideal number of items to keep in the pantry (default is 90% of capacity)

Example:

var pantry = require('pantry'),
    MemoryStorage = require('pantry/lib/pantry-memory');

pantry.storage = new MemoryStorage({ capacity: 18, ideal: 12 }, 'debug');

Note that the ideal must be set to a value which is between 10% and 90% of capacity. Every time an item is added to pantry, we ensure we haven't reached capacity. If we have, then we first start with throwing out any spoiled items. After that, if we are still above capacity we will get rid of the expired items, and if we're really desperate we will need to throw out some good items just to make room.

RedisStorage

A simple plugin for Redis is included with Pantry. Note that since use of Redis is optional, the required client (redis) is not included in the package dependencies. You must include it in your own application's dependencies and/or manually install it via

npm install redis

The constructor for RedisStorage takes four parameters:

- port - the redis server port (default is 6379)
- host - the redis server host name (default is 'localhost')
- options - hash of configuration properties (see below)
- verbosity - controls the level of logging (default is 'info')

The following configuration properties are allowed for RedisStorage:

- auth - the password / authentication key for the redis server (default is null)

Example:

var pantry = require('pantry'),
    RedisStorage = require('pantry/lib/pantry-redis');

pantry.storage = new RedisStorage(6379, 'localhost', null, 'debug');

MemcachedStorage

A simple plugin for Memcached is also included with Pantry. Note that since use of Memcached is optional, the required client (memcached) is not included in the package dependencies. You must include it in your own application's dependencies and/or manually install it via

npm install memcached

The constructor for MemcachedStorage takes three parameters:

- servers - a string or array of strings identifying the Memcached server(s) to use
- options - hash of memcached configuration properties (see here for more details)
- verbosity - controls the level of logging (default is 'info')

Example:

var pantry = require('../src/pantry'),
    MemcachedStorage = require('../src/pantry-memcached');

pantry.storage = new MemcachedStorage('localhost:11211', {}, 'debug');

SOAP Support

As of v0.5.1, Pantry contains experimental support for SOAP requests. To make SOAP requests, you must first configure Pantry by pointing it to the correct WSDL using the initSoap(name, url, callback) method like this:

pantry.initSoap('calculator', '', function(error, client) {
  // configuration completed. you can further configure the client (e.g. authentication) if needed
});

The name parameter can be any made up but valid host name. This allows you to configure and identify multiple SOAP services. SOAP requests are handled by the soap package as opposed to Request. If you attempt to configure a service under an already existing name, it will be ignored. The error and client parameters in this situation will both be undefined.

Once configured, you can use our custom 'soap' protocol and the host name you defined during configuration to make your SOAP requests like this:

var src = {
  uri: 'soap://calculator/add?x=2&y=3',
  maxLife: 60
}

The code above tells pantry you want to make a request to the SOAP service named 'calculator' (by you via initSoap) and call the add method, passing it parameters x and y. Putting it all together, you can execute a SOAP request using the following pattern (plus additional error handling of course):

pantry.initSoap('calculator', '', function(error, client) {
  if (client) {
    // additional one-time client configuration goes here
  }
  pantry.fetch('soap://calculator/add?x=2&y=3', function(error, item) {
    // handle the data here
  });
});

As of v0.5.2, Pantry also supports complex data types in soap requests. They can be passed in via the 'args' property like the following:

var src = {
  uri: 'soap://calculator/add',
  key: 'calculator/add/2/3',
  maxLife: 60,
  args: {
    x: 2,
    y: 3,
    'namespace:name': 'some value',
    myobject: {
      first: 'billy',
      last: 'bob'
    }
  }
};

When passing in values via args, please ensure you specify your own unique cache key like in the example above! In a future release we'll likely make it a requirement for SOAP and POST requests.

Upgrading

As of v0.4.x, we now use v0.2.x of the xml2js library. This has significantly changed the default parsing options. You can easily revert to the xml2js v0.1 parsing options as described here

Roadmap

- Better handling of non-GET requests
- Ability to execute array of requests in parallel
- Support for cookies (including cache key)

Created and managed by

- Edward de Groot
- Keith Benedict
https://www.npmjs.org/package/pantry
This hack shows how to create animated transitions that play whenever the user switches tabs on a JTabbedPane. One of Swing's great strengths is that you can hack into virtually anything. In particular, I love making changes to a component's painting code. The ability to do this is one of the reasons I prefer Swing over SWT. Swing gives me the freedom to create completely new UI concepts, such as transitions. With the standard paint methods, Swing provides most of what you will need to build the transitions. You will have to put together three additional things, however. First, you need to find out when the user actually clicked on a tab to start a transition. Next, you need a thread to control the animation. Finally, since some animations might fade between the old and new tabs, you need a way to provide images of both tabs at the same time. With those three things, you can build any animation you desire. To keep things tidy, I have implemented this hack as a subclass of JTabbedPane, except for the actual animation drawing, which will be delegated to a further subclass. By putting all of the heavy lifting into the parent class, you will be able to create new animations easily. The following listing is the basic skeleton of the parent class.

public class TransitionTabbedPane extends JTabbedPane
        implements ChangeListener, Runnable {

    protected int animation_length = 20;

    public TransitionTabbedPane() {
        super();
        this.addChangeListener(this);
    }

    public int getAnimationLength() {
        return this.animation_length;
    }

    public void setAnimationLength(int length) {
        this.animation_length = length;
    }

TransitionTabbedPane extends the standard JTabbedPane and also implements ChangeListener and Runnable. ChangeListener allows you to learn when the user has switched between tabs. Since the event is propagated before the new tab is painted, inserting the animation is very easy. Runnable is used for the animation thread itself. You could have split the thread into a separate class, but I think that keeping all of the code together makes the system more encapsulated and easier to maintain.

TransitionTabbedPane adds one new property, the animation length. This defines the number of steps used for the transition, and it can be set by the subclass or external code. Since the pane was added as a ChangeListener to itself, the stateChanged() method will be called whenever the user switches tabs. This is the best place to start the animation thread. Once started, the thread will capture the previous tab into a buffer, loop through the animation, and control the repaint speed:

// threading code
public void stateChanged(ChangeEvent evt) {
    new Thread(this).start();
}

protected int step;
protected BufferedImage buf = null;
protected int previous_tab = -1;

public void run() {
    step = 0;
    // save the previous tab
    if (previous_tab != -1) {
        Component comp = this.getComponentAt(previous_tab);
        buf = new BufferedImage(comp.getWidth(), comp.getHeight(),
                BufferedImage.TYPE_4BYTE_ABGR);
        comp.paint(buf.getGraphics());
    }

Notice that the run() method grabs the previous tab component only when the previous_tab index isn't -1. The component will always have a valid value, except for the first time the pane is shown on screen, but that's OK because the user won't have really switched from anything anyway. If there is a previous tab, then the code grabs the component and paints it into a buffer image. It's important to note that this is not thread-safe because the code is being executed on a custom thread, not the Swing thread. However, since the tab is about to be hidden anyway (and, in fact, the next real paint() call will only draw the new tab), you shouldn't have any problems. Any changes introduced by this extra paint() call won't show up on screen. With the previous component safely saved away, you can now loop through the animation:

    for (int i = 0; i < animation_length; i++) {
        step = i;
        repaint();
        try {
            Thread.sleep(100);
        } catch (Exception ex) {
            System.err.println("ex: " + ex);
        }
    }
    step = -1;
    previous_tab = this.getSelectedIndex();
    repaint();

This code shows a basic animation loop from 1 to N, with a 100-millisecond duration for each frame. A more sophisticated version of the code could have dynamic frame rates to adjust for system speed. Once the transition finishes, the animation step is set back to -1, the previous tab is stored, and the screen is repainted one last time, without the transition effects. The TransitionTabbedPane is now set up with the proper resources and repaints, but it still isn't drawing the animation. Because the animation is going to partially or completely obscure the tabs underneath, the best place to draw is right after the children are painted:

public void paintChildren(Graphics g) {
    super.paintChildren(g);
    if (step != -1) {
        Rectangle size = this.getComponentAt(0).getBounds();
        Graphics2D g2 = (Graphics2D) g;
        paintTransition(g2, step, size, buf);
    }
}

public void paintTransition(Graphics2D g2, int step, Rectangle size, Image prev) {
}

This code puts all of the custom drawing into the paintTransition() method, currently empty. It will only be called if step isn't -1, meaning during a transition animation. The paintTransition() method provides the drawing canvas, the current animation step, the size and position of the content area (excluding the tabs themselves), and the image buffer that stores the previous tab's content. By putting all of this in a single method, subclasses can build their own animations very easily. The following listing is a simple transition with a white rectangle that grows out of the center, filling the screen, then shrinking again to reveal the new tab content.

public class InOutPane extends TransitionTabbedPane {
    public void paintTransition(Graphics2D g2, int state, Rectangle size, Image prev) {
        int length = getAnimationLength();
        int half = length / 2;
        double scale = size.getHeight() / length;
        int offset = 0;
        // calculate the fade out part
        if (state >= 0 && state < half) {
            // draw the saved version of the old tab component
            if (prev != null) {
                g2.drawImage(prev, (int) size.getX(), (int) size.getY(), null);
            }
            offset = (int) ((half - state) * scale);
        }
        // calculate the fade in part
        if (state >= half && state < length) {
            offset = (int) ((state - half) * scale);
        }
        // do the drawing
        g2.setColor(Color.white);
        Rectangle area = new Rectangle((int) (size.getX() + offset),
                (int) (size.getY() + offset),
                (int) (size.getWidth() - offset * 2),
                (int) (size.getHeight() - offset * 2));
        g2.fill(area);
    }
}

InOutPane implements only the paintTransition() method, leaving all of the harder tasks to the parent class. First, it determines how long the animation will be, and then it calculates an offset to grow and shrink the white rectangle. If the drawing process is currently in the first half of the animation (state < half), then it draws the previous tab below the rectangle, creating the illusion that old tab content is still really on screen with the rectangle growing above it. For the second half of the animation, it just draws the rectangle, letting the real tab (the new one) shine through as the rectangle shrinks. Because TransitionTabbedPane is just a JTabbedPane subclass, it can be used wherever the original would be. The following listing creates a frame with two tabs, each containing a button. The running program looks like Figure. As you switch between the tabs, you will see an animation like that shown in Figure.

public class TabFadeTest {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Fade Tabs");
        JTabbedPane tab = new InOutPane();
        tab.addTab("t1", new JButton("Test Button 1"));
        tab.addTab("t2", new JButton("Test Button 2"));
        frame.getContentPane().add(tab);
        frame.pack();
        frame.show();
    }
}

Because TransitionTabbedPane makes it so easy to build new animations, I thought I'd add another one. This is the old venetian blinds effect, where vertical bars cover the old screen and uncover the new one; the following listing puts it together.

public class VenetianPane extends TransitionTabbedPane {
    public void paintTransition(Graphics2D g2, int step, Rectangle size, Image prev) {
        int length = getAnimationLength();
        int half = length / 2;
        // create a blind
        Rectangle blind = new Rectangle();
        // calculate the fade out part
        if (step >= 0 && step < half) {
            // draw the saved version of the old tab component
            if (prev != null) {
                g2.drawImage(prev, (int) size.getX(), (int) size.getY(), null);
            }
            // calculate the growing blind
            blind = new Rectangle((int) size.getX(), (int) size.getY(),
                    step, (int) size.getHeight());
        }
        // calculate the fade in part
        if (step >= half && step < length) {
            // calculate the shrinking blind
            blind = new Rectangle((int) size.getX(), (int) size.getY(),
                    length - step, (int) size.getHeight());
            blind.translate(step - half, 0);
        }
        // draw the blinds
        for (int i = 0; i < size.getWidth() / half; i++) {
            g2.setColor(Color.white);
            g2.fill(blind);
            blind.translate(half, 0);
        }
    }
}

Just like InOutPane, VenetianPane selectively draws the old tab and then calculates the placement of animated rectangles. In this case, there is a blind rectangle that spans the entire screen from top to bottom, but has the width of the current step. As a result of the step growing, this rectangle gets bigger with each frame. For the second half of the animation, it shrinks and moves to the right, making it appear to fade into nothing. Once the blind is calculated, VenetianPane draws the blind multiple times to cover the entire tab content area, creating the effect seen in Figure. This hack is quite extensible. With the power of Java2D you could add translucency, blurs, OS X-like genie effects, or anything else you can dream up. As a future enhancement, you could include more animation settings to control the frame rate and transition time. If you do create more, please post them on the Web for others to use.
CC-MAIN-2017-22
refinedweb
1,547
54.42
Alesis Midiverb4 Programchart Here you can find all about Alesis Midiverb4 Programchart like manual and other informations. For example: review. Alesis Midiverb4 Programchart manual (user guide) is ready to download for free. On the bottom of page users can write a review. If you own a Alesis Midiverb4 Programchart please write about it to help other people. [ Report abuse or wrong photo | Share your Alesis Midiverb4 Programchart photo ] Manual Preview of first few manual pages (at low quality). Check before download. Click to enlarge. Alesis Midiverb4 Programchart User reviews and opinions Comments posted on are solely the views and opinions of the people posting them and do not necessarily reflect the views or opinions of us. Documents: Press [EDIT/PAGE] until page 1 is selected. Footswitch Dry Defeat Press [B] to select the Footswitch parameter. Turn the [VALUE] knob to set the Footswitch parameter to Bypass mode (bYP). Connections Chapter 2 CHAPTER 2 CONNECTIONS AC Power Hookup The MidiVerb 4 comes with a power adapter suitable for the voltage of the country it is shipped to (either 110 or 220V, 50 or 60 Hz). With the MidiVerb 4 off, plug the small end of the power adapter cord into MidiVerb 4s [POWER] socket and the male (plug) end into a source of AC power. Its good practice to not turn the MidiVerb 4 on until all other cables are hooked up. Alesis cannot be responsible for problems caused by using the MidiVerb 4 or any associated equipment with improper AC wiring. Line Conditioners and Protectors Although the MidiVerb 4 is designed to tolerate typical voltage variations, in todays world the voltage coming from the AC line may contain spikes or transients that can possibly stress your gear and, over time, cause a failure. There are three main ways to protect against this, listed in ascending order of cost and complexity: Line spike/surge protectors. 
Relatively inexpensive, these are designed to protect against strong surges and spikes, acting somewhat like fuses in that they need to be replaced if theyve been hit by an extremely strong spike. Line filters. These generally combine spike/surge protection with filters that remove some line noise (dimmer hash, transients from other appliances, etc.).. Interfacing Directly with Instruments 18 MidiVerb 4 Reference Manual When connecting audio cables and/or turning power on and off, make sure that all devices in your system are turned off and the volume controls are turned down. The MidiVerb 4 has two 1/4 unbalanced inputs and two 1/4 unbalanced outputs. These provide three different (analog) audio hookup options: Mono. Connect a mono cord to the [LEFT/CH.1] INPUT of the MidiVerb 4 from a mono source, and another mono cord from the [LEFT/CH.1] output of the MidiVerb 4 to an amplification system or mixer input. (headphone) mix, and individual, post-fader effect sends. Typically, if a mixer has more than two sends per channel (4, 6 or 8, perhaps), the first two sends are reserved for the cue sends, while the remaining sends are used to feed effects, such as the MidiVerb 4.. Chapter 2 Connections Using Inserts By using individual channel inserts, you can dedicate the MidiVerb 4 MidiVerb 4 before the mixers channel input. However, some mixing consoles inserts come after the EQ section, and may therefore be different from the original signal. If nothing is connected to the channels Insert jack, the signal is not routed there. Usually, insert connections require a special, stereo-splitting Y-cord to be connected (one stereo plug provides both send and return while two mono plugs connect separately to an input and output). These are known as TRS connectors (tip-ringsleeve). The tip of the stereo plug carries the send or output of the insert jack, while the ring carries back the return. The sleeve represents a common ground for both signals. Mono. 
This involves connecting a 1/4" TRS (tip-ring-sleeve) Y-cable to the Insert jack of a single channel on a mixing console. The other end of the cable (which splits into two 1/4" mono connectors) is connected to the [LEFT/CH.1] input and [LEFT/CH.1] output, respectively. If you do not hear any audio after making these connections, swap the input and output cables at the MidiVerb 4, as these may be wired backwards. If the cable is color-coded, usually the red jack represents the send (which connects to the MidiVerb 4's input) and black is the return (which connects to the output).

MIDI (Musical Instrument Digital Interface) is an internationally accepted protocol that allows music-related data to be conveyed from one device to another. The MIDI connections on the MidiVerb 4 provide four different functions:

To recall Programs using MIDI program change messages
To control (modulate) parameters inside the MidiVerb 4 in real time via MIDI controllers (for example, a keyboard's mod wheel, or pedals)
To send and receive Sysex (System Exclusive) dumps of individual programs or the entire bank of programs for storage and retrieval purposes
To pass on MIDI information through the MidiVerb 4 to another MIDI device

To connect the MidiVerb 4's MIDI ports to another MIDI device: Connect a MIDI cable from the MidiVerb 4's MIDI [IN] connector to the other MIDI device's MIDI OUT connector. Connect another MIDI cable from the MidiVerb 4's MIDI [OUT/THRU] connector to the MIDI IN connector of the other MIDI device.

Note: It is not necessary to follow step 2 if you intend only to send information to the MidiVerb 4 and do not need to receive information back from it. Example: If you only want to be able to recall Programs using MIDI program change messages, there is no need to connect a cable to the MidiVerb 4's [OUT/THRU] connector. For more information about MIDI and modulation, refer to Chapter 6.

Footswitch

On the rear panel you will find a footswitch jack labeled [FOOTSWITCH].
This footswitch has three functions, which can be selected using the [UTIL] button:

A program advance function (Advance)
An effects bypass function (Bypass)
A tap tempo control for Delay effects (Control)

To set the [FOOTSWITCH] jack's mode: Press [UTIL]. Press the [EDIT/PAGE] button until page 1 is selected. The upper display will look like this:

Footswitch Dry Defeat

Press [B] to select the Footswitch parameter and turn the [VALUE] knob to select either Advance mode (Adv), Bypass mode (bYP) or Control mode (ctL). Any momentary single-pole/single-throw footswitch, normally open or normally closed, will work for the three footswitch functions. It should be plugged in prior to power-up so that the MidiVerb 4 can configure itself for the type of footswitch being used.

Advance. When the footswitch mode is set to the Advance function, each time the footswitch is pressed the MidiVerb 4 will advance to the next Program number.

Bypass. When set to the Bypass function, pressing the footswitch will toggle Bypass mode on and off (when Bypass mode is activated, the [BYPASS] LED will be lit).

Control. When using a Delay effect, the footswitch can serve as a way of programming the delay time using a feature called tap tempo. If the footswitch function is set to the Control function, you can program the delay time in two ways: press down on the footswitch repeatedly at the desired tempo you wish the delay time to follow; or hold down the footswitch, and the MidiVerb 4 will listen to the audio being fed to its input(s); now you can play your guitar, bang your drum, or sing some doot doots into your microphone (depending on what is plugged into the inputs), and the delay time will be set to a value that matches the tempo you are using. When the Footswitch parameter is set to the Control function and the Lezlie->Room Configuration is being used, pressing down on the footswitch will toggle the Speed parameter in the Lezlie effect between its slow and fast settings.
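The tap-tempo idea above (measure the time between footswitch presses and use it as the delay time) can be modeled in a few lines. This is an illustrative sketch in Python, not the MidiVerb 4's firmware; the function name and the 0–500 ms clamp are assumptions drawn from the delay ranges listed in this manual:

```python
def tap_tempo_delay_ms(tap_times_ms, min_delay=1, max_delay=500):
    """Estimate a delay time from footswitch tap timestamps (milliseconds).

    Averages the intervals between successive taps and clamps the result
    to the unit's delay range. Illustrative only, not Alesis' algorithm.
    """
    if len(tap_times_ms) < 2:
        raise ValueError("need at least two taps to measure an interval")
    intervals = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    avg = sum(intervals) / len(intervals)
    return max(min_delay, min(max_delay, round(avg)))

# Tapping steadily every 250 ms yields a 250 ms delay time.
print(tap_tempo_delay_ms([0, 250, 500, 750]))  # 250
```

Taps slower than the maximum delay simply pin the value at the top of the range, which is why the clamp is part of the sketch.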
For more information about tap tempo, see Chapter 3.

CHAPTER 3: OVERVIEW OF EFFECTS

Mono-in/mono-out. These effects have a single input (both inputs summed together) and a single output (routed to both outputs).
Mono-in/stereo-out. These effects have a single mono input and two outputs.
Stereo-in/stereo-out. These effects have two inputs and two outputs.

In each case, the dry, uneffected signal of both inputs is also routed to the outputs. […] effect's output, while the Right/Ch. 2 output provides the output of channel 1's effect routed through channel 2's effect.

Plate Reverb

This is a simulation of a classic echo plate, a 4' by 8' suspended sheet of metal with transducers at either end used to produce reverb. Popular in the 1970s, it suits drums and other percussive sounds, adding space without washing out the instrument.

The Reverb Decay determines how long the Reverb will sound before it dies away. When using the Reverse Reverb effect type, the Reverb Decay parameter controls the Reverse Time. Low Pass Filter. […]

MIDI: Chan Thru PChg

Page 4: Modulators. This is where you select the two MIDI modulation sources which will be used for all Programs to control their parameters. The parameters these control depend on the selected Program's Configuration. For example, in all Reverb Configurations, Modulator X controls the Reverb Decay Time, while Modulator Y controls the Wet/Dry Mix. Either Modulator can be assigned to: Pitch Bend, Aftertouch, Note Number, Velocity or a Controller from 000–119. Each Modulator's amplitude can be set between -99 and +99. The default settings are: Mod#X = 001 (modulation wheel), Mod#Y = 007 (volume), Amp X and Y = 000. For more information and a list of the modulated parameters in each Configuration, see Chapter 6.

Mod#X AmpX Mod#Y AmpY

Page 5: Program Table.
The Program Table allows you to intercept incoming program change messages and have them recall specific Programs (in either the Preset or User bank) which may not be the same number as the program change message received. There are 128 different possible MIDI program change messages (000–127). However, the MidiVerb 4 has 256 Programs to choose from. Therefore, the Program Table lets you choose which of the 256 Programs will be recalled when certain program change numbers are received. The first value in the display indicates the MIDI program change you wish to remap (000–127). The second value indicates the Program you wish to be recalled (000–127 Preset (Pset) or 000–127 User). You can remap each of the 128 program change numbers, if so desired.

Program Tbl: MIDI User

If the D parameter is lowered below User 000, the display will change from User to Pset to indicate that you are now assigning an incoming program change number to a Program in the Preset bank.

Program Tbl: MIDI Pset

Page 6: Send Sysex. This page lets you dump out all 128 User Programs, the current Program being used/edited, or the Program Change Table (see above). The data is sent as Sysex information. This can be sent to a MIDI storage device, or to another MidiVerb 4. Select either All, Buffer (the currently selected Program, which is in the edit buffer), or Table. When this page is selected, the [UTIL] button will flash to indicate that pressing the [UTIL] button starts the MIDI dump. The display will read "Transmitting Sysex." and the [UTIL] button will flash quickly, indicating that all 128 User Programs are being sent out the [MIDI OUT] connector. See Chapter 6 for more information regarding MIDI applications.

Send MIDI Sysex: All

STORE Button

The [STORE] button is used to permanently keep changes you make to a Program, or to copy a Program to a different location. When pressed for the first time, the [STORE] button will flash, to indicate that it is prepared to store the current Program.
At this point, you can choose to alter the Program's name, and/or choose a different location to store the Program into. When you're ready to store, press the [STORE] button a second time. To store an edited Program, see the section on the A/B/C/D Buttons, earlier in this chapter.

INPUT and OUTPUT Buttons

The [INPUT] button is used to view and adjust the input levels; the [OUTPUT] button is used to view and adjust the output levels. When either button is pressed by itself, the display will show either the current input or output settings, depending on which button was pressed. The [VALUE] knob can then be used to adjust the level setting. If the currently selected Program uses a Stereo Configuration, you will be able to adjust both channels simultaneously, as indicated by the fact that only one parameter (STEREO) appears in the display. If the currently selected Program uses a Dual Configuration, the Ch. 1 and Ch. 2 levels can be adjusted separately, as indicated by the fact that two parameters (Ch 1 and Ch 2) appear in the display. The currently selected channel's value will flash in the display. To select Ch. 1, press the [C] button. To select Ch. 2, press the [D] button.

Auto Level

When both the [INPUT] and [OUTPUT] buttons are pressed simultaneously, the Auto Level function is activated. This function listens to the signal present at the input jacks and sets the input level to an appropriate value. The Auto Level function listens for a period of five seconds. During this time, you should feed signal to the MidiVerb 4's inputs (i.e., play your guitar or keyboard, or play back tape). To cancel the Auto Level function once it has been engaged, press any button on the front panel. To extend Auto Level's listening time beyond the normal five-second period, hold down the footswitch pedal (connected to the [FOOTSWITCH] jack) during the listening process. The Auto Level function will continue listening until the footswitch pedal is released.
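Conceptually, Auto Level amounts to measuring the incoming signal's peak during the listening window and choosing an input level so that peak lands near full scale. A rough software analogy (the function, target value, and gain model are illustrative assumptions, not Alesis' implementation):

```python
def auto_level_gain(samples, target_peak=0.9):
    """Choose a linear gain so the observed peak lands at target_peak.

    samples: floats in [-1.0, 1.0] captured during the listening period.
    Returns 1.0 (unity gain) for silence to avoid dividing by zero.
    """
    peak = max((abs(s) for s in samples), default=0.0)
    if peak == 0.0:
        return 1.0
    return target_peak / peak

# A quiet signal peaking at 0.3 gets boosted by roughly 3x to reach 0.9.
print(auto_level_gain([0.1, -0.3, 0.2]))
```

A hot signal (peak above the target) yields a gain below 1.0, i.e. the level is turned down rather than up, which matches the "appropriate value" behavior described above.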
[Parameter chart, garbled in extraction: the pages of editable parameters and value ranges for the multi-effect Configurations (FLANGE->REALROOM, REALROOM->FLANGE, CHORUS->DLY->ROOM, FLANGE->DLY->ROOM, REALROOM+DELAY, REALROOM+CHORUS, REALROOM+FLANGE, CHORUS:DELAY, FLANGE:DELAY, PITCH:DELAY). The recoverable parameter names include Decay, Density (Dens), Rate, Trigger (Trig), Feedback (Fdbk), Low-Pass Filter (LPF), Hold, Depth, Wave (Sin/Tri), Pre-Delay (PDly), Gate, Release (Rel), Tap, HiCut, LoCut, and the various Mix controls; the column alignment of their ranges could not be reconstructed.]
CHAPTER 6: ADVANCED APPLICATIONS

Press [UTIL], then press the [EDIT/PAGE] button until page 5 is selected.

Program Tbl: MIDI 000 User 000

Press the [C] button to select the MIDI Program Number field. The MIDI Program Number field will flash to indicate it is selected for editing. Turn the [VALUE] knob to select a MIDI program change number from 000–127 to be remapped. Press the [D] button to select the Program field. The Program field will flash to indicate it is selected for editing. Turn the [VALUE] knob to select a MidiVerb 4 Program for the selected MIDI program change message to be remapped to (User 000–127 or Preset 000–127). If the [VALUE] knob is turned counterclockwise so that the value goes below User 000, the upper display will change to Pset, indicating you are selecting a Program in the Preset bank.

Sysex Storage

In order to send and receive Program information via Sysex (System Exclusive) dumps using a computer, some other Sysex storage device, or another MidiVerb 4: Connect the other device's MIDI OUT to the MidiVerb 4's [MIDI IN]. Connect the MidiVerb 4's [MIDI OUT] to the other device's MIDI IN. This provides two-way communication between the devices. Press [UTIL], then press the [EDIT/PAGE] button until page 6 is selected. The [UTIL] button will now be flashing, and the display will read:

Send MIDI Sysex: All

Use the [VALUE] knob to select All User Programs (All), the currently selected Program (Buffer), or the Program Change Table (Table). Set the receiving MIDI device to receive or record the MIDI information about to be sent from the MidiVerb 4. Press the flashing [UTIL] button to transmit. The [UTIL] button will briefly flash rapidly and the display will read: Transmitting Sysex.
When you send a Sysex dump back to the MidiVerb 4, it will automatically go into receive mode (you do not have to do anything special). When this occurs, the display will momentarily indicate that Sysex data is being received.

Press the [B] button to select the Amplitude X field, and turn the [VALUE] knob to set the amount of control Modulator X will have over the parameters it controls. This can be set anywhere from -99 to +99. Repeat the two steps above, substituting buttons [A] and [B] with buttons [C] and [D], to select the type of MIDI message for Modulator Y and adjust its amplitude.

Modulation Parameters Index

The following chart describes which parameters of each Configuration are controlled by Modulators X and Y. Use this chart to determine what control possibilities exist for each Program.

Configuration       Mod X                   Mod Y
CONCERT HALL        Decay                   Wet/Dry Mix
REAL ROOM           Decay                   Wet/Dry Mix
AMBIENCE            Decay                   Wet/Dry Mix
PLATE REVERB        Decay                   Wet/Dry Mix
NONLINEAR           Decay                   Wet/Dry Mix
MONO DELAY          Feedback                Wet/Dry Mix
STEREO DELAY        Feedback                Wet/Dry Mix
PING PONG DELAY     Feedback                Wet/Dry Mix
MULTI TAP DELAY     Master Feedback         Wet/Dry Mix
BPM MONO DELAY      Feedback                Wet/Dry Mix
DELAY:DELAY         Delay 1 Feedback        Delay 2 Feedback
STEREO CHORUS       Wet/Dry Mix             Depth*
QUAD CHORUS         Wet/Dry Mix             Depth*
CHORUS:CHORUS       Chorus 1 Wet/Dry Mix    Chorus 2 Wet/Dry Mix
STEREO FLANGE       Wet/Dry Mix             Depth*
FLANGE:FLANGE       Flange 1 Wet/Dry Mix    Flange 2 Wet/Dry Mix
LEZLIE->ROOM        Speed (slow/fast)       Motor (on/off)
STEREOPITCHSHFT     (none)                  (none)
PITCH:PITCH         (none)                  (none)
AUTO PAN            (none)                  (none)
DELAY->REALROOM     Delay Feedback          Reverb Decay
CHORUS->REALROOM    Chorus Wet/Dry Mix      Reverb Decay
FLANGE->REALROOM    Flange Wet/Dry Mix      Reverb Decay
REALROOM->FLANGE    Reverb Decay            Flange Wet/Dry Mix
CHORUS->DLY->ROOM   Chorus Wet/Dry Mix      Reverb Decay
FLANGE->DLY->ROOM   Flange Wet/Dry Mix      Reverb Decay
REALROOM+DELAY      Reverb Decay            Delay Feedback
REALROOM+CHORUS     Reverb Decay            Chorus Depth
REALROOM+FLANGE     Reverb Decay            Flange Depth
CHORUS:DELAY        Chorus Wet/Dry Mix      Delay Feedback
FLANGE:DELAY        Flange Wet/Dry Mix      Delay Feedback
PITCH:DELAY         (none)                  Delay Feedback

* Note: If audio is going through a chorus effect and the Depth parameter is changed, you will notice audible clicks. This is because the processor is making significant changes in the effect's algorithm. We recommend changing the setting of this parameter only while no audio is running through the effect.

Setting Modulation Amplitude
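The Program Table described in this chapter is, in effect, a 128-entry lookup from incoming MIDI program change numbers to one of the unit's 256 Programs. A small software model (hypothetical, for illustration only — not the device's firmware) makes the remapping concrete:

```python
# Model the 128-entry Program Table: index = incoming MIDI program change
# (0-127), value = (bank, program) where bank is "Pset" or "User".
# Assumed default mapping: program change N recalls User program N.
program_table = [("User", n) for n in range(128)]

def remap(midi_program_change):
    """Return the (bank, program) recalled for a MIDI program change."""
    return program_table[midi_program_change]

# Remap incoming program change 5 to Preset program 42, as you would by
# editing page 5 of the UTIL menu and turning the value below User 000.
program_table[5] = ("Pset", 42)

print(remap(5))  # ('Pset', 42) -- the remapped entry
print(remap(6))  # ('User', 6)  -- still the default
```

The point of the model is that 128 incoming message numbers can address any of the 256 Programs, which is exactly why the table exists.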
Why do lambdas defined in a loop with different values all return the same result?

I am unable to understand the logic here. Example:

squares = []
for x in range(5):
    squares.append(lambda: x**2)

Here x is not local to the lambda: it is defined in the enclosing scope and is looked up when the lambda is called rather than when it is defined. At the end of the loop the value of x is 4, so every lambda returns 16.

To get one square per loop iteration, save the value in a variable local to each lambda, for example by using a default argument:

squares = []
for x in range(5):
    squares.append(lambda n=x: n**2)
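The two behaviors above can be verified directly. The following sketch contrasts late binding of the loop variable with the default-argument fix:

```python
# Late binding: each lambda looks up x when it is CALLED, after the loop
# has finished, so they all see the final value x == 4.
squares_late = []
for x in range(5):
    squares_late.append(lambda: x**2)

# Early binding: the default argument n=x captures x's value at the
# moment each lambda is DEFINED.
squares_fixed = []
for x in range(5):
    squares_fixed.append(lambda n=x: n**2)

print([f() for f in squares_late])   # [16, 16, 16, 16, 16]
print([f() for f in squares_fixed])  # [0, 1, 4, 9, 16]
```

The same fix works for closures created by `def` inside a loop; the default argument is simply the most compact way to force per-iteration capture.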
BSEtunes is a MySQL-based, fully manageable, networkable single- or multiuser jukebox application. With BSEtunes, the user can listen to single or multiple selected songs, random songs, complete albums, songs in playlists, and so on. You can filter the random playback or create an unlimited number of playlists. For selections, drag the content from one panel to another. BSEtunes also contains an integrated WPF coverflow clone.

BSEtunes

What you see in the BSEtunes client is managed by integrated tools in BSEtunes or by BSEadmin, which is the integrated tool for data acquisition. Visually, BSEtunes is a mixture of the Microsoft Media Player and iTunes; the controls are based on the BSE.Windows.Forms controls. The program uses WMPlib but may also work with the Winamp player (if installed). Until the launch of WPF, BSEtunes worked even on Windows 2000.

When you use this application, you won't need any other audio application. The included BSEadmin tool can rip CDs, write the ID3 tags, and get album information from the freeDB database.

I started developing this application in the year 2002 for managing my LPs and CDs, for learning C#, and for testing several .NET features and namespaces. Whenever I saw a new .NET feature, I considered how to do something with it in BSEtunes. It was my intention to use only free or open-source components. For this reason, BSEtunes is based on MySQL. The code in the application grew over the years; some parts are well implemented and some aren't. Meanwhile, the application has grown complex yet runs stably enough that other users can participate in it. So take the code and do whatever you want with it.

You can play random songs over all the audio file content, or you can filter it by genre or year. You can have an unlimited number of playlists into which you can drag songs from all the other panels.
For exporting content to an audio player, drag in the songs from any of the panels. You can browse through your albums using the WPF coverflow window.

BSEadmin is the main window for managing all your content. In BSEadmin, you can manage all your audio content; for this, BSEadmin contains several forms and dialogs. You can rip your CDs (thanks, Idael Cardoso)... For ripping, select the tracks in the upper list and drag them into the lower list. ...or you can import your audio content from elsewhere. For importing, select the tracks in the upper list and drag them into the lower list. You can read out the contents of a CD via a freeDB request. To change the database host or the network share that contains the audio files, double-click the entry in the options form and change the value. For system information, several statistics dialogs are integrated.

BSEtunes can work as a single-computer system and can also grow into a multiuser client/server system. The database and file server can be a Windows or a Linux computer or server. If you get a new disk for your audio content, copy the files to the new disk and change the path to it in the options dialog. The database references to the files are relative: if a song is located at "C:\songs\interpret\album\song1.mp3", only the path part "interpret\album\song1.mp3" is stored in the database. The value for "C:\songs" is stored in the options.

After installation of the MySQL Connector, you should be able to build the solution. The additional needed DLLs are located in subdirectories named DLL and lame. Perhaps you will need to adapt the bat files in the BSE.Platten.Tunes and BSE.Platten.Admin.WinApp projects. The installation of the MySQL database server is described in the MySQL.chm file.
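The relative-path scheme described above (store "interpret\album\song1.mp3" in the database, keep the base "C:\songs" in the options) can be sketched as follows. This is an illustration of the design, not BSEtunes' actual C# code; the function names are invented, and Python's ntpath module is used to get Windows-style path handling on any host:

```python
import ntpath  # Windows-style path handling, regardless of host OS

MEDIA_ROOT = r"C:\songs"  # the base path kept in the options dialog

def to_db_path(full_path, root=MEDIA_ROOT):
    """Strip the media root so only the relative part is stored."""
    return ntpath.relpath(full_path, root)

def to_full_path(db_path, root=MEDIA_ROOT):
    """Rejoin the stored relative path with the (possibly new) root."""
    return ntpath.join(root, db_path)

rel = to_db_path(r"C:\songs\interpret\album\song1.mp3")
print(rel)                                # interpret\album\song1.mp3
print(to_full_path(rel, r"D:\new_disk"))  # D:\new_disk\interpret\album\song1.mp3
```

The payoff is exactly the migration case the article mentions: moving the library to a new disk only requires changing the root in the options, because the database rows never contain the base path.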
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
Getting started with OpenSSL: Cryptography basics

Need a primer on cryptography basics, especially regarding OpenSSL? Read on.

A quick history

Secure Socket Layer (SSL) is a cryptographic protocol that Netscape released in 1995. This protocol layer can sit atop HTTP, thereby providing the S for secure in HTTPS. The SSL protocol provides various security services, including two that are central in HTTPS:

- Peer authentication (aka mutual challenge): Each side of a connection authenticates the identity of the other side. If Alice and Bob are to exchange messages over SSL, then each first authenticates the identity of the other.
- Confidentiality: A sender encrypts messages before sending these over a channel. The receiver then decrypts each received message. This process safeguards network conversations. Even if eavesdropper Eve intercepts an encrypted message from Alice to Bob (a man-in-the-middle attack), Eve finds it computationally infeasible to decrypt this message.

These two key SSL services, in turn, are tied to others that get less attention. For example, SSL supports message integrity, which assures that a received message is the same as the one sent. This feature is implemented with hash functions, which likewise come with the OpenSSL toolkit.

SSL is versioned (e.g., SSLv2 and SSLv3), and in 1999 Transport Layer Security (TLS) emerged as a similar protocol based upon SSLv3. TLSv1 and SSLv3 are alike, but not enough so to work together. Nonetheless, it is common to refer to SSL/TLS as if they are one and the same protocol. For example, OpenSSL functions often have SSL in the name even when TLS rather than SSL is in play. Furthermore, calling OpenSSL command-line utilities begins with the term openssl.

The documentation for OpenSSL is spotty beyond the man pages, which become unwieldy given how big the OpenSSL toolkit is.
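The message-integrity service mentioned above rests on hash functions, which map input of any size to a fixed-length digest in which any change to the input is overwhelmingly likely to change the digest. A quick illustration with Python's hashlib (standing in here for OpenSSL's implementations of the same algorithms):

```python
import hashlib

msg = b"Alice to Bob: the deal is on."
tampered = b"Alice to Bob: the deal is off."

digest1 = hashlib.sha256(msg).hexdigest()
digest2 = hashlib.sha256(tampered).hexdigest()

print(len(digest1))        # 64 hex characters: a fixed 256-bit digest
print(digest1 == digest2)  # False: a small change flips the digest
```

If Alice sends the message together with its digest and Bob recomputes the digest on arrival, any in-flight tampering shows up as a mismatch; later sections of this article add the keyed and signed variants needed to stop an attacker from forging the digest too.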
Command-line and code examples are one way to bring the main topics into focus together. Let’s start with a familiar example—accessing a web site with HTTPS—and use this example to pick apart the cryptographic pieces of interest.

An HTTPS client

The client program shown here connects over HTTPS to Google:

/* compilation: gcc -o client client.c -lssl -lcrypto */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>      /* memset */
#include <openssl/bio.h> /* Basic Input/Output streams */
#include <openssl/err.h> /* errors */
#include <openssl/ssl.h> /* core library */

#define BuffSize 1024

void report_and_exit(const char* msg) {
  perror(msg);
  ERR_print_errors_fp(stderr);
  exit(-1);
}

void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}

void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}

void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];

  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");

  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");

  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");

  SSL* ssl = NULL;

  /* link bio channel, SSL session, and server endpoint */
  sprintf(name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl);                    /* session */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);    /* robustness */
  BIO_set_conn_hostname(bio, name);          /* prepare to connect */

  /* try to connect */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }

  /* verify truststore, check cert */
  if (!SSL_CTX_load_verify_locations(ctx,
                                     "/etc/ssl/certs/ca-certificates.crt", /* truststore */
                                     "/etc/ssl/certs/"))                   /* more truststore */
    report_and_exit("SSL_CTX_load_verify_locations...");

  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    fprintf(stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);

  /* now fetch the homepage as sample data */
  sprintf(request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);

  /* read HTTP response from server and print to stdout */
  while (1) {
    memset(response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 is end-of-stream, < 0 is an error */
    puts(response);
  }

  cleanup(ctx, bio);
}

int main() {
  init_ssl();

  const char* hostname = "www.google.com"; /* the Google web server */
  fprintf(stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);

  return 0;
}

This program can be compiled and executed from the command line (note the lowercase L in -lssl and -lcrypto):

gcc -o client client.c -lssl -lcrypto

This program tries to open a secure connection to the web site. As part of the TLS handshake with the Google web server, the client program receives one or more digital certificates, which the program tries (but, on my system, fails) to verify. Nonetheless, the client program goes on to fetch the Google homepage through the secure channel. This program depends on the security artifacts mentioned earlier, although only a digital certificate stands out in the code. The other artifacts remain behind the scenes and are clarified later in detail.

Generally, a client program in C or C++ that opened an HTTP (non-secure) channel would use constructs such as a file descriptor for a network socket, which is an endpoint in a connection between two processes (e.g., the client program and the Google web server). A file descriptor, in turn, is a non-negative integer value that identifies, within a program, any file-like construct that the program opens. Such a program also would use a structure to specify details about the web server’s address. None of these relatively low-level constructs occurs in the client program, as the OpenSSL library wraps the socket infrastructure and address specification in high-level security constructs.
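For comparison, the same one-way-authenticated fetch can be sketched with Python's standard ssl module, which wraps OpenSSL underneath. This is an illustrative equivalent of the article's C client, not part of it; unlike the C program, `create_default_context()` enforces certificate verification, so a verification failure aborts the handshake instead of merely printing a warning:

```python
import socket
import ssl

def make_context():
    """Build a client context that verifies server certificates against
    the system truststore, as the C client attempts to do by hand."""
    return ssl.create_default_context()  # CERT_REQUIRED + hostname check

def fetch_homepage(hostname="www.google.com", port=443):
    """Open a TLS connection, send a minimal GET, return the raw bytes."""
    ctx = make_context()
    request = ("GET / HTTP/1.1\r\nHost: %s\r\nConnection: Close\r\n\r\n"
               % hostname).encode()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            tls.sendall(request)
            chunks = []
            while True:
                data = tls.recv(1024)  # b"" signals end-of-stream
                if not data:
                    break
                chunks.append(data)
    return b"".join(chunks)

# fetch_homepage() performs the actual network round trip; run it from a
# machine with Internet access to mirror the C client's behavior.
```

The `server_hostname` argument plays the role of `BIO_set_conn_hostname` plus SNI, and the context stands in for `SSL_CTX` with the truststore already loaded.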
The result is a straightforward API. Here’s a first look at the security details in the example client program.

The program begins by loading the relevant OpenSSL libraries, with my function init_ssl making two calls into OpenSSL:

SSL_library_init();
SSL_load_error_strings();

The next initialization step tries to get a security context, a framework of information required to establish and maintain a secure channel to the web server. TLS 1.2 is used in the example, as shown in this call to an OpenSSL library function:

const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */

If the call succeeds, then the method pointer is passed to the library function that creates the context of type SSL_CTX:

SSL_CTX* ctx = SSL_CTX_new(method);

The client program checks for errors on each of these critical library calls, and the program terminates if either call fails.

Two other OpenSSL artifacts now come into play: a security session of type SSL, which manages the secure connection from start to finish; and a secured stream of type BIO (Basic Input/Output), which is used to communicate with the web server. The BIO stream is generated with this call:

BIO* bio = BIO_new_ssl_connect(ctx);

Three library calls then link the BIO channel, the SSL session, and the server endpoint:

BIO_get_ssl(bio, &ssl);                 /* get a TLS session */
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* for robustness */
BIO_set_conn_hostname(bio, name);       /* prepare to connect to Google */

The secure connection itself is established through this call:

BIO_do_connect(bio);

If this last call does not succeed, the client program terminates; otherwise, the connection is ready to support a confidential conversation between the client program and the Google web server.

During the handshake with the web server, the client program receives one or more digital certificates that authenticate the server’s identity. However, the client program does not send a certificate of its own, which means that the authentication is one-way. (Web servers typically are configured not to expect a client certificate.)
Despite the failed verification of the web server’s certificate, the client program continues by fetching the Google homepage through the secure channel to the web server. Why does the attempt to verify a Google certificate fail? A typical OpenSSL installation has the directory /etc/ssl/certs, which includes the ca-certificates.crt file. The directory and the file together contain digital certificates that OpenSSL trusts out of the box and accordingly constitute a truststore. The truststore can be updated as needed, in particular, to include newly trusted certificates and to remove ones no longer trusted. A received certificate is checked against this truststore: in particular, the client tries to verify the issuer’s digital signature embedded in the certificate. If that signature were trusted, then the certificate containing it should be trusted as well. Nonetheless, the client program goes on to fetch and then to print Google’s homepage. The next section gets into more detail.

The hidden security pieces in the client program

Let’s start with the visible security artifact in the client example, the digital certificate, and consider how other security artifacts relate to it. The dominant layout standard for a digital certificate is X509, and a production-grade certificate is issued by a certificate authority (CA) such as Verisign.

A digital certificate contains various pieces of information (e.g., activation and expiration dates, and a domain name for the owner), including the issuer’s identity and digital signature, which is an encrypted cryptographic hash value. A certificate also has an unencrypted hash value that serves as its identifying fingerprint.

A hash value results from mapping an arbitrary number of bits to a fixed-length digest. What the bits represent (an accounting report, a novel, or maybe a digital movie) is irrelevant. For example, the Message Digest version 5 (MD5) hash algorithm maps input bits of whatever length to a 128-bit hash value, whereas the SHA1 (Secure Hash Algorithm version 1) algorithm maps input bits to a 160-bit value.
Different input bits result in different—indeed, statistically unique—hash values. The next article goes into further detail and focuses on what makes a hash function cryptographic. Digital certificates differ in type (e.g., root, intermediate, and end-entity certificates) and form a hierarchy that reflects these types. As the name suggests, a root certificate sits atop the hierarchy, and the certificates under it inherit whatever trust the root certificate has. The OpenSSL libraries and most modern programming languages have an X509 type together with functions that deal with such certificates. The certificate from Google has an X509 format, and the client program checks whether this certificate is X509_V_OK. X509 certificates are based upon public-key infrastructure (PKI), which includes algorithms—RSA is the dominant one—for generating key pairs: a public key and its paired private key. A public key is an identity: Amazon’s public key identifies it, and my public key identifies me. A private key is meant to be kept secret by its owner. The keys in a pair have standard uses. A public key can be used to encrypt a message, and the private key from the same pair can then be used to decrypt the message. A private key also can be used to sign a document or other electronic artifact (e.g., a program or an email), and the public key from the pair can then be used to verify the signature. The following two examples fill in some details. In the first example, Alice distributes her public key to the world, including Bob. Bob then encrypts a message with Alice’s public key, sending the encrypted message to Alice. 
The message encrypted with Alice’s public key is decrypted with her private key, which (by assumption) she alone has, like so:

             +------------------+ encrypted msg  +-------------------+
Bob's msg--->|Alice's public key|--------------->|Alice's private key|---> Bob's msg
             +------------------+                +-------------------+

Decrypting the message without Alice’s private key is possible in principle, but infeasible in practice given a sound cryptographic key-pair system such as RSA. Now, for the second example, consider signing a document to certify its authenticity. The signature algorithm uses a private key from a pair to process a cryptographic hash of the document to be signed:

                    +-------------------+
Hash of document--->|Alice's private key|--->Alice's digital signature of the document
                    +-------------------+

Assume that Alice digitally signs a contract sent to Bob. Bob then can use Alice’s public key from the key pair to verify the signature:

                                             +------------------+
Alice's digital signature of the document--->|Alice's public key|--->verified or not
                                             +------------------+

It is infeasible to forge Alice’s signature without Alice’s private key: hence, it is in Alice’s interest to keep her private key secret. None of these security pieces, except for digital certificates, is explicit in the client program. The next article fills in the details with examples that use the OpenSSL utilities and library functions. OpenSSL from the command line In the meantime, let’s take a look at OpenSSL command-line utilities: in particular, a utility to inspect the certificates from a web server during the TLS handshake. Invoking the OpenSSL utilities begins with the openssl command and then adds a combination of arguments and flags to specify the desired operation. Consider this command:

openssl list-cipher-algorithms

The output is a list of associated algorithms that make up a cipher suite.
Here’s the start of the list, with comments to clarify the acronyms:

AES-128-CBC             ## Advanced Encryption Standard, Cipher Block Chaining
AES-128-CBC-HMAC-SHA1   ## Hash-based Message Authentication Code with SHA1 hashes
AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
...

The next command, using the argument s_client, opens a secure connection to the Google web server and prints screens full of information about this connection:

openssl s_client -connect -showcerts

The port number 443 is the standard one used by web servers for receiving HTTPS rather than HTTP connections. (For HTTP, the standard port is 80.) The network address also occurs in the client program's code. If the attempted connection succeeds, the three digital certificates from Google are displayed together with information about the secure session, the cipher suite in play, and related items. For example, here is a slice of output from near the start, which announces that a certificate chain is forthcoming. The encoding for the certificates is base64:

Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=
   i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
-----BEGIN CERTIFICATE-----
MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
...

A major web site such as Google usually sends multiple certificates for authentication. The output ends with summary information about the TLS session, including specifics on the cipher suite:

SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
...

The protocol TLS 1.2 is used in the client program, and the Session-ID uniquely identifies the connection between the openssl utility and the Google web server. The Cipher entry can be parsed as follows: ECDHE (Elliptic Curve Diffie Hellman Ephemeral) is an effective and efficient algorithm for managing the TLS handshake.
In particular, ECDHE solves the key-distribution problem by ensuring that both parties in a connection (e.g., the client program and the Google web server) use the same encryption/decryption key, which is known as the session key. The follow-up article digs into the details.

RSA (Rivest Shamir Adleman) is the dominant public-key cryptosystem and is named after the three academics who first described the system in the late 1970s. The key pairs in play are generated with the RSA algorithm.

AES128 (Advanced Encryption Standard) is a block cipher that encrypts and decrypts blocks of bits. (The alternative is a stream cipher, which encrypts and decrypts bits one at a time.) The cipher is symmetric in that the same key is used to encrypt and to decrypt, which raises the key-distribution problem in the first place. AES supports key sizes of 128 (used here), 192, and 256 bits: the larger the key, the better the protection. Key sizes for symmetric cryptosystems such as AES are, in general, smaller than those for asymmetric (key-pair based) systems such as RSA. For example, a 1024-bit RSA key is relatively small, whereas a 256-bit key is currently the largest for AES.

GCM (Galois Counter Mode) handles the repeated application of a cipher (in this case, AES128) during a secured conversation. AES128 blocks are only 128 bits in size, and a secure conversation is likely to consist of multiple AES128 blocks from one side to the other. GCM is efficient and commonly paired with AES128.

SHA256 (Secure Hash Algorithm 256 bits) is the cryptographic hash algorithm in play. The hash values produced are 256 bits in size, although even larger values are possible with SHA.

Cipher suites are in continual development. Not so long ago, for example, Google used the RC4 stream cipher (Ron’s Cipher version 4, after Ron Rivest of RSA). RC4 now has known vulnerabilities, which presumably accounts, at least in part, for Google’s switch to AES128.
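The cipher suites that the local OpenSSL build offers can also be inspected programmatically; here is a small sketch using Python's ssl module (the helper suites_using is mine, not an OpenSSL API):

```python
import ssl

def suites_using(token):
    # List the names of the cipher suites enabled in a default client
    # context that mention the given component, e.g. 'AES' or 'ECDHE'.
    ctx = ssl.create_default_context()
    return [cipher["name"] for cipher in ctx.get_ciphers() if token in cipher["name"]]

if __name__ == "__main__":
    # Suite names follow the same style as the s_client output,
    # e.g. ECDHE-RSA-AES128-GCM-SHA256 on many builds.
    for name in suites_using("AES"):
        print(name)
```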
Wrapping up This first look at OpenSSL, through a secure C web client and various command-line examples, has brought to the fore a handful of topics in need of more clarification. The next article gets into the details, starting with cryptographic hashes and ending with a fuller discussion of how digital certificates address the key distribution challenge.
(The code for this example is in InstallDir/examples/tracing.) Writing a class that provides tracing functionality is easy: a couple of functions, a boolean flag for turning tracing on and off, a choice for an output stream, maybe some code for formatting the output -- these are all elements that Trace classes have been known to have. Trace classes may be highly sophisticated, too, if the task of tracing the execution of a program demands it. But developing the support for tracing is just one part of the effort of inserting tracing into a program, and, most likely, not the biggest part. The other part of the effort is calling the tracing functions at appropriate times. In large systems, this interaction with the tracing support can be overwhelming. Plus, tracing is one of those things that slows the system down, so these calls should often be pulled out of the system before the product is shipped. For these reasons, it is not unusual for developers to write ad-hoc scripting programs that rewrite the source code by inserting/deleting trace calls before and after the method bodies. AspectJ can be used for some of these tracing concerns in a less ad-hoc way. Tracing can be seen as a concern that crosscuts the entire system and as such is amenable to encapsulation in an aspect. In addition, it is fairly independent of what the system is doing. Therefore tracing is one of those kind of system aspects that can potentially be plugged in and unplugged without any side-effects in the basic functionality of the system. Throughout this example we will use a simple application that contains only four classes. The application is about shapes. 
The TwoDShape class is the root of the shape hierarchy:

public abstract class TwoDShape {
    protected double x, y;

    protected TwoDShape(double x, double y) {
        this.x = x;
        this.y = y;
    }

    public double getX() { return x; }
    public double getY() { return y; }

    public double distance(TwoDShape s) {
        double dx = Math.abs(s.getX() - x);
        double dy = Math.abs(s.getY() - y);
        return Math.sqrt(dx*dx + dy*dy);
    }

    public abstract double perimeter();
    public abstract double area();

    public String toString() {
        return (" @ (" + String.valueOf(x) + ", " + String.valueOf(y) + ") ");
    }
}

TwoDShape has two subclasses, Circle and Square:

public class Circle extends TwoDShape {
    protected double r;

    public Circle(double x, double y, double r) { super(x, y); this.r = r; }
    public Circle(double x, double y)           { this(x, y, 1.0); }
    public Circle(double r)                     { this(0.0, 0.0, r); }
    public Circle()                             { this(0.0, 0.0, 1.0); }

    public double perimeter() { return 2 * Math.PI * r; }
    public double area()      { return Math.PI * r*r; }

    public String toString() {
        return ("Circle radius = " + String.valueOf(r) + super.toString());
    }
}

public class Square extends TwoDShape {
    protected double s; // side

    public Square(double x, double y, double s) { super(x, y); this.s = s; }
    public Square(double x, double y)           { this(x, y, 1.0); }
    public Square(double s)                     { this(0.0, 0.0, s); }
    public Square()                             { this(0.0, 0.0, 1.0); }

    public double perimeter() { return 4 * s; }
    public double area()      { return s*s; }

    public String toString() {
        return ("Square side = " + String.valueOf(s) + super.toString());
    }
}

To run this application, compile the classes. You can do it with or without ajc, the AspectJ compiler. If you've installed AspectJ, go to the directory InstallDir/examples and type:

ajc -argfile tracing/notrace.lst

To run the program, type java tracing.ExampleMain (we don't need anything special on the classpath since this is pure Java code).
You should see the following output:

c1.perimeter() = 12.566370614359172
c1.area() = 12.566370614359172
s1.perimeter() = 4.0
s1.area() = 1.0
c2.distance(c1) = 4.242640687119285
s1.distance(c1) = 2.23606797749979
s1.toString(): Square side = 1.0 @ (1.0, 2.0)

In a first attempt to insert tracing in this application, we will start by writing a Trace class that is exactly what we would write if we didn't have aspects. The implementation is in version1/Trace.java. Its public interface is:

public class Trace {
    public static int TRACELEVEL = 0;
    public static void initStream(PrintStream s) {...}
    public static void traceEntry(String str) {...}
    public static void traceExit(String str) {...}
}

If we didn't have AspectJ, we would have to insert calls to traceEntry and traceExit in all methods and constructors we wanted to trace, and to initialize TRACELEVEL and the stream. If we wanted to trace all the methods and constructors in our example, that would amount to around 40 calls, and we would hope we had not forgotten any method. But we can do that more consistently and reliably with the following aspect (found in version1/TraceMyClasses.java):

aspect TraceMyClasses {
    pointcut myClass(): within(TwoDShape) || within(Circle) || within(Square);
    pointcut myConstructor(): myClass() && execution(new(..));
    pointcut myMethod(): myClass() && execution(* *(..));

    before(): myConstructor() {
        Trace.traceEntry("" + thisJoinPointStaticPart.getSignature());
    }
    after(): myConstructor() {
        Trace.traceExit("" + thisJoinPointStaticPart.getSignature());
    }
    before(): myMethod() {
        Trace.traceEntry("" + thisJoinPointStaticPart.getSignature());
    }
    after(): myMethod() {
        Trace.traceExit("" + thisJoinPointStaticPart.getSignature());
    }
}

This aspect performs the tracing calls at appropriate times. According to this aspect, tracing is performed at the entrance and exit of every method and constructor defined within the shape hierarchy.
What is printed before and after each of the traced join points is the signature of the method executing. Since the signature is static information, we can get it through thisJoinPointStaticPart. To run this version of tracing, go to the directory InstallDir/examples and type:

ajc -argfile tracing/tracev1.lst

Running the main method of tracing.version1.TraceMyClasses should produce the output:

--> tracing.TwoDShape(double, double)
<-- tracing.TwoDShape(double, double)
--> tracing.Circle(double, double, double)
<-- tracing.Circle(double, double, double)
--> tracing.TwoDShape(double, double)
<-- tracing.TwoDShape(double, double)
--> tracing.Circle(double, double, double)
<-- tracing.Circle(double, double, double)
--> tracing.Circle(double)
<-- tracing.Circle(double)
--> tracing.TwoDShape(double, double)
<-- tracing.TwoDShape(double, double)
--> tracing.Square(double, double, double)
<-- tracing.Square(double, double, double)
--> tracing.Square(double, double)
<-- tracing.Square(double, double)
--> double tracing.Circle.perimeter()
<-- double tracing.Circle.perimeter()
c1.perimeter() = 12.566370614359172
--> double tracing.Circle.area()
<-- double tracing.Circle.area()
c1.area() = 12.566370614359172
--> double tracing.Square.perimeter()
<-- double tracing.Square.perimeter()
s1.perimeter() = 4.0
--> double tracing.Square.area()
<-- double tracing.Square.area()
s1.area() = 1.0
--> double tracing.TwoDShape.distance(TwoDShape)
--> double tracing.TwoDShape.getX()
<-- double tracing.TwoDShape.getX()
--> double tracing.TwoDShape.getY()
<-- double tracing.TwoDShape.getY()
<-- double tracing.TwoDShape.distance(TwoDShape)
c2.distance(c1) = 4.242640687119285
--> double tracing.TwoDShape.distance(TwoDShape)
--> double tracing.TwoDShape.getX()
<-- double tracing.TwoDShape.getX()
--> double tracing.TwoDShape.getY()
<-- double tracing.TwoDShape.getY()
<-- double tracing.TwoDShape.distance(TwoDShape)
s1.distance(c1) = 2.23606797749979
--> String tracing.Square.toString()
--> String tracing.TwoDShape.toString()
<-- String tracing.TwoDShape.toString()
<-- String tracing.Square.toString()
s1.toString(): Square side = 1.0 @ (1.0, 2.0)

When TraceMyClasses.java is not provided to ajc, the aspect does not have any effect on the system and the tracing is unplugged. Another way to accomplish the same thing would be to write a reusable tracing aspect that can be used not only for these application classes, but for any class. One way to do this is to merge the tracing functionality of Trace—version1 with the crosscutting support of TraceMyClasses—version1. We end up with a Trace aspect (found in version2/Trace.java) with the following public interface:

abstract aspect Trace {
    public static int TRACELEVEL = 2;
    public static void initStream(PrintStream s) {...}
    protected static void traceEntry(String str) {...}
    protected static void traceExit(String str) {...}
    abstract pointcut myClass();
}

In order to use it, we need to define our own subclass that knows about our application classes, in version2/TraceMyClasses.java:

public aspect TraceMyClasses extends Trace {
    pointcut myClass(): within(TwoDShape) || within(Circle) || within(Square);

    public static void main(String[] args) {
        Trace.TRACELEVEL = 2;
        Trace.initStream(System.err);
        ExampleMain.main(args);
    }
}

Notice that we've simply made the pointcut myClass, which was an abstract pointcut in the super-aspect, concrete. To run this version of tracing, go to the directory examples and type:

ajc -argfile tracing/tracev2.lst

The file tracev2.lst lists the application classes as well as this version of the files Trace.java and TraceMyClasses.java. Running the main method of tracing.version2.TraceMyClasses should output exactly the same trace information as that from version 1.
The entire implementation of the new Trace class is:

abstract aspect Trace {
    // implementation part
    public static int TRACELEVEL = 2;
    protected static PrintStream stream = System.err;
    protected static int callDepth = 0;

    public static void initStream(PrintStream s) { stream = s; }

    protected static void traceEntry(String str) {
        if (TRACELEVEL == 0) return;
        if (TRACELEVEL == 2) callDepth++;
        printEntering(str);
    }

    protected static void traceExit(String str) {
        if (TRACELEVEL == 0) return;
        printExiting(str);
        if (TRACELEVEL == 2) callDepth--;
    }

    private static void printEntering(String str) {
        printIndent();
        stream.println("--> " + str);
    }

    private static void printExiting(String str) {
        printIndent();
        stream.println("<-- " + str);
    }

    private static void printIndent() {
        for (int i = 0; i < callDepth; i++)
            stream.print(" ");
    }

    // protocol part
    abstract pointcut myClass();
    pointcut myConstructor(): myClass() && execution(new(..));
    pointcut myMethod(): myClass() && execution(* *(..));

    before(): myConstructor() {
        traceEntry("" + thisJoinPointStaticPart.getSignature());
    }
    after(): myConstructor() {
        traceExit("" + thisJoinPointStaticPart.getSignature());
    }
    before(): myMethod() {
        traceEntry("" + thisJoinPointStaticPart.getSignature());
    }
    after(): myMethod() {
        traceExit("" + thisJoinPointStaticPart.getSignature());
    }
}

This version differs from version 1 in several subtle ways. The first thing to notice is that this Trace class merges the functional part of tracing with the crosscutting of the tracing calls. That is, in version 1, there was a sharp separation between the tracing support (the class Trace) and the crosscutting usage of it (by the class TraceMyClasses). In this version those two things are merged. That's why the description of this class explicitly says that "Trace messages are printed before and after constructors and methods are executed," which is what we wanted in the first place.
That is, the placement of the calls, in this version, is established by the aspect class itself, leaving less opportunity for misplacing calls. A consequence of this is that there is no need for providing traceEntry and traceExit as public operations of this class. You can see that they were classified as protected. They are supposed to be internal implementation details of the advice. The key piece of this aspect is the abstract pointcut myClass, which serves as the base for the definition of the pointcuts myConstructor and myMethod. Even though myClass is abstract, and therefore no concrete classes are mentioned, we can put advice on it, as well as on the pointcuts that are based on it. The idea is "we don't know exactly what the pointcut will be, but when we do, here's what we want to do with it." In some ways, abstract pointcuts are similar to abstract methods. Abstract methods don't provide the implementation, but you know that the concrete subclasses will, so you can invoke those methods.
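For contrast (not part of the AspectJ example), the same entry/exit protocol, including the call-depth indentation, can be sketched in Python with a decorator. The difference is telling: the decorator must be applied to every function by hand, which is exactly the crosscutting burden the abstract aspect removes:

```python
import functools

call_depth = 0  # mirrors the callDepth field of the Trace aspect

def traced(func):
    # Print "--> name" on entry and "<-- name" on exit, indented by depth.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global call_depth
        call_depth += 1
        print("  " * call_depth + "--> " + func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            print("  " * call_depth + "<-- " + func.__name__)
            call_depth -= 1
    return wrapper

@traced
def area(side):
    return side * side

@traced
def total_area(sides):
    return sum(area(s) for s in sides)  # nested calls show the indentation
```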
This article describes a set of best practices for building containers. These practices cover a wide range of goals, from shortening the build time, to creating smaller and more resilient images, with the aim of making containers easier to build (for example, with Cloud Build), and easier to run in Google Kubernetes Engine (GKE). Advice about running and operating containers is available in Best practices for operating containers. Package a single app per container Importance: HIGH When you start working with containers, it's a common mistake to treat them as virtual machines that can run many different things simultaneously. A container can work this way, but doing so reduces most of the advantages of the container model. For example, take a classic Apache/MySQL/PHP stack: you might be tempted to run all the components in a single container. However, the best practice is to use two or three different containers: one for Apache, one for MySQL, and potentially one for PHP if you are running PHP-FPM. Because a container is designed to have the same lifecycle as the app it hosts, each of your containers should contain only one app. When a container starts, so should the app, and when the app stops, so should the container. The following diagram shows this best practice. Figure 1. The container on the left follows the best practice. The container on the right doesn't. If you have multiple apps in a container, they might have different lifecycles, or be in different states. For instance, you might end up with a container that is running, but with one of its core components crashed or unresponsive. Without an additional health check, the overall container management system (Docker or Kubernetes) cannot tell whether the container is healthy. In the case of Kubernetes, it means that your container will not be restarted by default if needed.
You might see the following actions in public images, but do not follow their example: - Using a process management system such as supervisord to manage one or several apps in the container. - Using a bash script as an entrypoint in the container, and making it spawn several apps as background jobs. For the proper use of bash scripts in containers, see Properly handle PID 1, signal handling, and zombie processes. Properly handle PID 1, signal handling, and zombie processes Importance: HIGH Linux signals are the main way to control the lifecycle of processes inside a container. In line with the previous best practice, in order to tightly link the lifecycle of your app to the container it's in, ensure that your app properly handles Linux signals. The most important Linux signal is SIGTERM because it terminates a process. Your app might also receive a SIGKILL signal, which is used to kill the process non-gracefully, or a SIGINT signal, which is sent when you type Ctrl+C and is usually treated like SIGTERM. Process identifiers (PIDs) are unique identifiers that the Linux kernel gives to each process. PIDs are namespaced, meaning that a container has its own set of PIDs that are mapped to PIDs on the host system. The first process launched when starting a Linux kernel has the PID 1. For a normal operating system, this process is the init system, for example, systemd or SysV. Similarly, the first process launched in a container gets PID 1. Docker and Kubernetes use signals to communicate with the processes inside containers, most notably to terminate them. Both Docker and Kubernetes can only send signals to the process that has PID 1 inside a container. In the context of containers, PIDs and Linux signals create two problems to consider. Problem 1: How the Linux kernel handles signals The Linux kernel handles signals differently for the process that has PID 1 than it does for other processes. 
Signal handlers aren't automatically registered for this process, meaning that signals such as SIGTERM or SIGINT will have no effect by default. By default, you must kill processes by using SIGKILL, preventing any graceful shutdown. Depending on your app, using SIGKILL can result in user-facing errors, interrupted writes (for data stores), or unwanted alerts in your monitoring system. Problem 2: How classic init systems handle orphaned processes Classic init systems such as systemd are also used to remove (reap) orphaned, zombie processes. Orphaned processes—processes whose parents have died—are reattached to the process that has PID 1, which should reap them when they die. A normal init system does that. But in a container, this responsibility falls on whatever process has PID 1. If that process doesn't properly handle the reaping, you risk running out of memory or some other resources. These problems have several common solutions, outlined in the following sections. Solution 1: Run as PID 1 and register signal handlers This solution addresses only the first problem. It is valid if your app spawns child processes in a controlled way (which is often the case), avoiding the second problem. The easiest way to implement this solution is to launch your process with the CMD and/or ENTRYPOINT instructions in your Dockerfile. For example, in the following Dockerfile, nginx is the first and only process to be launched.

FROM debian:9
RUN apt-get update && \
    apt-get install -y nginx
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]

Sometimes, you might need to prepare the environment in your container for your process to run properly. In this case, the best practice is to have the container launch a shell script when starting. This shell script is tasked with preparing the environment and launching the main process.
However, if you take this approach, the shell script has PID 1, not your process, which is why you must use the built-in exec command to launch the process from the shell script. The exec command replaces the script with the program you want. Your process then inherits PID 1. Solution 2: Use a specialized init system As you would in a more classic Linux environment, you can also use an init system to deal with those problems. However, normal init systems such as systemd or SysV are too complex and large for just this purpose, which is why we recommend that you use an init system such as tini, which is created especially for containers. If you use a specialized init system, the init process has PID 1 and does the following: - Registers the correct signal handlers. - Makes sure that signals work for your app. - Reaps any eventual zombie processes. You can use this solution in Docker itself by using the --init option of the docker run command. To use this solution in Kubernetes, you must install the init system in your container image and use it as entrypoint for your container. Optimize for the Docker build cache Importance: HIGH The Docker build cache can accelerate the building of container images. Images are built layer by layer, and in a Dockerfile, each instruction creates a layer in the resulting image. During a build, when possible, Docker reuses a layer from a previous build and skips a potentially costly step. Docker can use its build cache only if all previous build steps used it. While this behavior is usually a good thing that makes builds go faster, you need to consider a few cases. For example, to fully benefit from the Docker build cache, you must position the build steps that change often at the bottom of the Dockerfile. If you put them at the top, Docker cannot use its build cache for the other build steps that are changing less often. 
Because a new Docker image is usually built for each new version of your source code, add the source code to the image as late as possible in the Dockerfile. In the following diagram, you can see that if you are changing STEP 1, Docker can reuse only the layers from the FROM debian:9 step. If you change STEP 3, however, Docker can reuse the layers for STEP 1 and STEP 2. Figure 2. Examples of how to use the Docker build cache. In green, the layers that you can reuse. In red, the layers that have to be recreated. Reusing layers has another consequence: if a build step relies on any kind of cache stored on the local file system, this cache must be generated in the same build step. If this cache isn't being generated, your build step might be executed with an out-of-date cache coming from a previous build. You see this behavior most commonly with package managers such as apt or yum: you must update your repositories in the same RUN command as your package installation. If you change the second RUN step in the following Dockerfile, the apt-get update command isn't rerun, leaving you with an out-of-date apt cache.

FROM debian:9
RUN apt-get update
RUN apt-get install -y nginx

Instead, merge the two commands in a single RUN step:

FROM debian:9
RUN apt-get update && \
    apt-get install -y nginx

Remove unnecessary tools Importance: MEDIUM To protect your apps from attackers, try to reduce the attack surface of your app by removing any unnecessary tools. For example, remove utilities like netcat, which you can use to create a reverse shell inside your system. If netcat isn't in the container, the attacker has to find another way. This best practice is true for any workload, even if it isn't containerized. The difference is that it is much simpler to implement with containers than it is with classic virtual machines or bare-metal servers. Some of those tools might be useful for debugging.
For example, if you push this best practice far enough, exhaustive logs, tracing, profiling, and Application Performance Management systems become almost mandatory. In effect, you can no longer rely on local debugging tools because they are often highly privileged. File system content The first part of this best practice deals with the content of the container image. Keep as few things as possible in your image. If you can compile your app into a single statically linked binary, adding this binary to the scratch image allows you to get a final image that contains only your app and nothing else. By reducing the number of tools packaged in your image, you reduce what a potential attacker can do in your container. For more information, see Build the smallest image possible. File system security Having no tools in your image isn't sufficient: you must prevent potential attackers from installing their own tools. You can combine two methods here:

- Avoid running as root inside the container: this method offers a first layer of security and could prevent, for example, attackers from modifying root-owned files using a package manager embedded in your image (such as apt-get or apk). For this method to be useful, you must disable or uninstall the sudo command. This topic is more broadly covered in Avoid running as root.

- Launch the container in read-only mode, which you do by using the --read-only flag from the docker run command or by using the readOnlyRootFilesystem option in Kubernetes. You can enforce this in Kubernetes by using a PodSecurityPolicy.

Build the smallest image possible Importance: MEDIUM Building a smaller image offers advantages such as faster upload and download times, which is especially important for the cold start time of a pod in Kubernetes: the smaller the image, the faster the node can download it. However, building a small image can be difficult because you might inadvertently include build dependencies or unoptimized layers in your final image.
Use the smallest base image possible The base image is the one referenced in the FROM instruction in your Dockerfile. Every other instruction in the Dockerfile builds on top of this image. The smaller the base image, the smaller the resulting image is, and the more quickly it can be downloaded. For example, the alpine:3.7 image is 71 MB smaller than the centos:7 image. You can even use the scratch base image, which is an empty image on which you can build your own runtime environment. If your app is a statically linked binary, it's easy to use the scratch base image:

FROM scratch
COPY mybinary /mybinary
CMD [ "/mybinary" ]

The distroless project provides you with minimal base images for a number of different languages. The images contain only the runtime dependencies for the language, but don't include many tools you would expect in a Linux distribution, such as shells or package managers. Reduce the amount of clutter in your image To reduce the size of your image, install only what is strictly needed inside it. It might be tempting to install extra packages, and then remove them at a later step. However, this approach isn't sufficient. Because each instruction of the Dockerfile creates a layer, removing data from the image in a later step than the step that created it doesn't reduce the size of the overall image (the data is still there, just hidden in a deeper layer). Consider a Dockerfile that installs a build package, uses it, and then removes it in a later step: in the bad version of the Dockerfile, the [buildpackage] and files in /var/lib/apt/lists/* still exist in the layer corresponding to the first RUN. This layer is part of the image and must be uploaded and downloaded with the rest, even if the data it contains isn't accessible in the resulting image. In the good version of the Dockerfile, everything is done in a single layer that contains only your built app. The [buildpackage] and files in /var/lib/apt/lists/* don't exist anywhere in the resulting image, not even hidden in a deeper layer.
For more information on image layers, see Optimize for the Docker build cache.

Another great way to reduce the amount of clutter in your image is to use multi-stage builds (introduced in Docker 17.05). Multi-stage builds allow you to build your app in a first "build" container and use the result in another container, while using the same Dockerfile.

Figure 3. The Docker multi-stage build process.

In the following Dockerfile, the hello binary is built in a first container and injected in a second one. Because the second container is based on scratch, the resulting image contains only the hello binary and not the source file and object files needed during the build. The binary must be statically linked in order to work without the need for any outside library in the scratch image.

FROM golang:1.10 as builder
WORKDIR /tmp/go
COPY hello.go ./
RUN CGO_ENABLED=0 go build -a -ldflags '-s' -o hello

FROM scratch
CMD [ "/hello" ]
COPY --from=builder /tmp/go/hello /hello

Try to create images with common layers

When you download a Docker image, Docker first checks whether you already have some of the layers that are in the image. If you do have those layers, they aren't downloaded again. This situation can occur if you previously downloaded another image that has the same base as the image you are currently downloading. The result is that the amount of data downloaded is much less for the second image.

At an organizational level, you can take advantage of this reduction by providing your developers with a set of common, standard base images. Your systems must download each base image only once. After the initial download, only the layers that make each image unique are needed. In effect, the more your images have in common, the faster they are to download.

Figure 4. Creating images with common layers.

Use vulnerability scanning in Container Registry

Importance: MEDIUM

Software vulnerabilities are a well-understood problem in the world of bare-metal servers and virtual machines.
A common way to address these vulnerabilities is to use a centralized inventory system that lists the packages installed on each server. Subscribe to the vulnerability feeds of the upstream operating systems to be informed when a vulnerability affects your servers, and patch them accordingly.

However, because containers are supposed to be immutable (see statelessness and immutability of containers for more details), don't patch them in place in case of a vulnerability. The best practice is to rebuild the image, patches included, and redeploy it. Containers have a much shorter lifecycle and a less well-defined identity than servers, so using a similar centralized inventory system is a poor way to detect vulnerabilities in containers.

To help you address this problem, Container Registry has a vulnerability scanning feature. When enabled, this feature identifies package vulnerabilities for your container images. Images are scanned when they are uploaded to Container Registry and whenever there is an update to the vulnerability database. You can act on the information reported by this feature in several ways:

- Create a cron-like job that lists vulnerabilities and triggers the process to fix them, where a fix exists.
- As soon as a vulnerability is detected, use the Cloud Pub/Sub integration to trigger the patching process that your organization uses.

We recommend automating the patching process and relying on the existing continuous integration pipeline initially used to build the image. If you are confident in your continuous deployment pipeline, you might also want to automatically deploy the fixed image when ready. However, most people prefer a manual verification step before deployment. The following process achieves that:

- Store your images in Container Registry and enable vulnerability scanning.
- Configure a job that regularly fetches new vulnerabilities from Container Registry and triggers a rebuild of the images if needed.
- When the new images are built, have your continuous deployment system deploy them to a staging environment.
- Manually check the staging environment for problems.
- If no problems are found, manually trigger the deployment to production.

Properly tag your images

Importance: MEDIUM

Docker images are generally identified by two components: their name and their tag. For example, for the google/cloud-sdk:193.0.0 image, google/cloud-sdk is the name and 193.0.0 is the tag. The tag latest is used by default if you don't provide one in your Docker commands. The name and tag pair is unique at any given time. However, you can reassign a tag to a different image as needed.

When you build an image, it's up to you to tag it properly. Follow a coherent and consistent tagging policy. Document your tagging policy so that image users can easily understand it. Container images are a way of packaging and releasing a piece of software. Tagging the image lets users identify a specific version of your software in order to download it. For this reason, tightly link the tagging system on container images to the release policy of your software.

Tagging using semantic versioning

A common way of releasing software is to "tag" (as in the git tag command) a particular version of the source code with a version number. The Semantic Versioning Specification provides a clean way of handling version numbers. In this system, your software has a three-part version number: X.Y.Z, where:

- X is the major version, incremented only for incompatible API changes.
- Y is the minor version, incremented for new features.
- Z is the patch version, incremented for bug fixes.

Every increment in the minor or patch version number must be for a backward-compatible change. If you use this system, or a similar one, tag your images according to the following policy:

- The latest tag always refers to the most recent (possibly stable) image. This tag is moved as soon as a new image is created.
- The X.Y.Z tag refers to a specific version of your software. Don't move it to another image.
- The X.Y tag refers to the latest patch release of the X.Y minor branch of your software. It's moved when a new patch version is released.
- The X tag refers to the latest patch release of the latest minor release of the X major branch. It's moved when either a new patch version or a new minor version is released.

Using this policy offers users the flexibility to choose which version of your software they want to use. They can pick a specific X.Y.Z version and be guaranteed that the image will never change, or they can get updates automatically by choosing a less specific tag.

Tagging using the Git commit hash

If you have an advanced continuous delivery system and you release your software often, you probably don't use version numbers as described in the Semantic Versioning Specification. In this case, a common way of handling version numbers is to use the Git commit SHA-1 hash (or a short version of it) as the version number. By design, the Git commit hash is immutable and references a specific version of your software.

You can use this commit hash as a version number for your software, but also as a tag for the Docker image built from this specific version of your software. Doing so makes Docker images traceable: because in this case the image tag is immutable, you instantly know which specific version of your software is running inside a given container. In your continuous delivery pipeline, automate the update of the version number used for your deployments.

Carefully consider whether to use a public image

Importance: N/A

One of the great advantages of Docker is the sheer number of publicly available images, for all kinds of software. These images allow you to get started quickly. However, when you are designing a container strategy for your organization, you might have constraints that publicly provided images aren't able to meet.
Here are a few examples of constraints that might render the use of public images impossible:

- You want to control exactly what is inside your images.
- You don't want to depend on an external repository.
- You want to strictly control vulnerabilities in your production environment.
- You want the same base operating system in every image.

The response to all those constraints is the same, and is unfortunately costly: you must build your own images. Building your own images is fine for a limited number of images, but this number has a tendency to grow quickly. To have any chance of managing such a system at scale, think about using the following:

- An automated way to build images, in a reliable way, even for images that are built rarely. Build triggers in Cloud Build are a good way to achieve that.
- A standardized base image. Google provides some base images that you can use.
- An automated way to propagate updates to the base image to "child" images.
- A way to address vulnerabilities in your images. For more information, see Use vulnerability scanning in Container Registry.
- A way to enforce your internal standards on images created by the different teams in your organization.

Several tools are available to help you enforce policies on the images that you build and deploy:

- container-diff can analyze the content of images and even compare two images with each other.
- container-structure-test can test whether the content of an image complies with a set of rules that you define.
- Grafeas is an artifact metadata API, where you store metadata about your images to later check whether those images comply with your policies.
- Kubernetes has admission controllers that you can use to check a number of prerequisites before deploying a workload in Kubernetes.
- Kubernetes also has pod security policies that you can use to enforce the use of security options in the cluster.
You might also want to adopt a hybrid system: using a public image such as Debian or Alpine as the base image and building everything on top of that image. Or you might want to use public images for some of your noncritical images, and build your own images for other cases. Those questions have no right or wrong answers, but you have to address them.

A note about licenses

Before you include third-party libraries and packages in your Docker image, ensure that the respective licenses allow you to do so. Third-party licenses might also impose restrictions on redistribution, which apply when you publish a Docker image to a public registry.

What's next

- Build your first containers with Cloud Build.
- Spin up your first GKE cluster.
- Speed up your Cloud Build builds.
- Preparing a GKE environment for production.
- Docker has its own set of best practices, some of which are covered in this document.
ModifyFrame

std.ModifyFrame(clip clip, clip[] clips, func selector)

The selector function is called for every single frame and can modify the properties of one of the frames gotten from clips. The additional clips' properties should only be read and not modified, because only one modified frame can be returned. You must first copy the input frame to make it modifiable. Any frame may be returned as long as it has the same format as the clip. Failure to do so will produce an error. If for conditional reasons you do not need to modify the current frame's properties, you can simply pass it through. The selector function is passed n, the current frame number, and f, which is a frame or a list of frames if there is more than one clip specified. If you do not need to modify frame properties but only read them, you should probably be using FrameEval instead.

How to set the property FrameNumber to the current frame number:

def set_frame_number(n, f):
    fout = f.copy()
    fout.props['FrameNumber'] = n
    return fout
...
ModifyFrame(clip=clip, clips=clip, selector=set_frame_number)

How to remove a property:

def remove_property(n, f):
    fout = f.copy()
    del fout.props['FrameNumber']
    return fout
...
ModifyFrame(clip=clip, clips=clip, selector=remove_property)

An example of how to copy certain properties from one clip to another (clip1 and clip2 have the same format):

def transfer_property(n, f):
    fout = f[1].copy()
    fout.props['FrameNumber'] = f[0].props['FrameNumber']
    fout.props['_Combed'] = f[0].props['_Combed']
    return fout
...
ModifyFrame(clip=clip1, clips=[clip1, clip2], selector=transfer_property)
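The selector callbacks above are ordinary Python functions, so the copy-before-modify rule can be exercised without VapourSynth installed. FakeFrame below is hypothetical test scaffolding that mimics only the props/copy() surface used by the selectors; it is not part of the VapourSynth API.

```python
class FakeFrame:
    """Hypothetical stand-in for a frame: exposes only props and copy()."""
    def __init__(self, props=None):
        self.props = dict(props or {})

    def copy(self):
        # As with VapourSynth frames, copy() returns a modifiable duplicate.
        return FakeFrame(self.props)


def set_frame_number(n, f):
    # Same selector as in the documentation above.
    fout = f.copy()
    fout.props['FrameNumber'] = n
    return fout


src = FakeFrame()
out = set_frame_number(42, src)
print(out.props['FrameNumber'])    # 42
print('FrameNumber' in src.props)  # False: the input frame is left untouched
```

Because the returned frame is a copy, the source frame keeps its original properties, which is exactly why ModifyFrame requires the copy before any mutation.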
Back to: LINQ Tutorial For Beginners and Professionals

Linq Cross Join using Method and Query Syntax

In this article, I am going to discuss Linq Cross Join using both method and query syntax examples. Please read our previous article before proceeding to this article, where we discussed Left Outer Join in Linq.

What is Linq Cross Join?

When combining two data sources (or, you could say, two collections) using Linq Cross Join, each element in the first data source (i.e. the first collection) will be mapped with each and every element in the second data source (i.e. the second collection). So, in simple words, we can say that the Cross Join produces the Cartesian product of the collections or data sources involved in the join. In a Cross Join we don't require a common key or property, since the "on" keyword used to specify the join key is not needed. Moreover, there is no filtering of data. So, the total number of elements in the resultant sequence will be the product of the sizes of the two data sources involved in the join. If the first data source contains 5 elements and the second data source contains 3 elements, then the resultant sequence will contain (5*3) 15 elements.

Model classes and Data Sources:

We are going to use the following Student and Subject model classes in this demo. Please create a class file and then copy and paste the following code in it.
using System.Collections.Generic; namespace LINQJoin { public class Student { public int ID { get; set; } public string Name { get; set; } public static List<Student> GetAllStudnets() { return new List<Student>() { new Student { ID = 1, Name = "Preety"}, new Student { ID = 2, Name = "Priyanka"}, new Student { ID = 3, Name = "Anurag"}, new Student { ID = 4, Name = "Pranaya"}, new Student { ID = 5, Name = "Hina"} }; } } public class Subject { public int ID { get; set; } public string SubjectName { get; set; } public static List<Subject> GetAllSubjects() { return new List<Subject>() { new Subject { ID = 1, SubjectName = "ASP.NET"}, new Subject { ID = 2, SubjectName = "SQL Server" }, new Subject { ID = 5, SubjectName = "Linq"} }; } } } As you can see we created two methods to return the respective data sources. Example1: Cross Join Using Query Syntax Cross Join Students with Subjects using Query Syntax. using System.Linq; using System; namespace LINQJoin { class Program { static void Main(string[] args) { var CrossJoinResult = from employee in Student.GetAllStudnets() from subject in Subject.GetAllSubjects() select new { Name = employee.Name, SubjectName = subject.SubjectName }; foreach (var item in CrossJoinResult) { Console.WriteLine($"Name : {item.Name}, Subject: {item.SubjectName}"); } Console.ReadLine(); } } } Output: We have 5 students in the student’s collection and 3 subjects in the subject’s collection. In the result set, we have 15 elements, i.e. the Cartesian product of the elements involved in the joins. Example2: Cross Join using Method Syntax. In order to implement the Cross Join using method syntax, we need to use either the SelectMany() method or the Join() method as shown in the below example. 
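The 15-element result is pure Cartesian-product counting, independent of the language. As a quick cross-check outside C#, the same pairing can be reproduced in a few lines of Python with itertools.product (the names mirror the C# sample data above):

```python
from itertools import product

# Same data as the C# Student and Subject collections above.
students = ["Preety", "Priyanka", "Anurag", "Pranaya", "Hina"]
subjects = ["ASP.NET", "SQL Server", "Linq"]

# Every student is paired with every subject, with no join key and no filtering.
pairs = [(name, subject) for name, subject in product(students, subjects)]

print(len(pairs))  # 15 == 5 * 3
print(pairs[0])    # ('Preety', 'ASP.NET')
```

The count is always len(students) * len(subjects), which is exactly what the query-syntax example produces.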
using System.Linq; using System; namespace LINQJoin { class Program { static void Main(string[] args) { //Cross Join using SelectMany Method var CrossJoinResult = Student.GetAllStudnets() .SelectMany(sub => Subject.GetAllSubjects(), (std, sub) => new { Name = std.Name, SubjectName = sub.SubjectName }); //Cross Join using Join Method var CrossJoinResult2 = Student.GetAllStudnets() .Join(Subject.GetAllSubjects(), std => true, sub => true, (std, sub) => new { Name = std.Name, SubjectName = sub.SubjectName } ); foreach (var item in CrossJoinResult2) { Console.WriteLine($"Name : {item.Name}, Subject: {item.SubjectName}"); } Console.ReadLine(); } } } It will give you the same result as the previous example. In the next article, I am going to discuss the Element Operators in Linq. In this article, I try to explain how to implement Linq Cross Join using both Method and Query Syntax with some examples. 1 thought on “Linq Cross Join” Excellent example. Please correct spelling of Student
Code Coverage Reports for ASP.NET Core

Code coverage reports for ASP.NET Core projects are not provided out-of-the-box, but by using the right tools we can build decent code coverage reports. I needed code coverage reports in some of my projects and here is how I made things work using different free libraries and packages.

Getting Started

To get started, we need a test project and some NuGet packages. The test project can be a regular .NET Core library project. Add a reference to the web application project and write some unit tests if you are starting from scratch.

Note: In the project file, we need a tool reference to run the report generator using the dotnet utility:

<DotNetCliToolReference Include="dotnet-reportgenerator-cli" Version="x.y.z" />

After adding these packages it's time to make a test build and see if everything still works and we don't have any build issues.

Creating Reporting Folders

Now it's time to configure reporting. I decided to keep reports in the BuildReports folder of the test project. There are two subfolders:

- Coverage - for coverage reports.
- UnitTests - for unit test reports (for future use).

I also added the BuildReports folder to the .gitignore file because I don't want these files to wander from one developer box to another and be part of commits. The number of files in the Coverage folder is not small. It's not just two or three files that are easy to ignore. There can be hundreds or thousands of files depending on how many tests there are in test projects. Here we are working on a smaller scale, of course.

Getting Code Coverage Data

To generate reports we need coverage data, and this is why we added the coverlet.msbuild package to the test project.
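For orientation, the NuGet pieces mentioned so far might sit in the test project file roughly like this. The exact package set is an assumption: the text only names coverlet.msbuild and the dotnet-reportgenerator-cli tool explicitly, the xUnit packages are implied by the loggers used later, and the x.y.z versions are placeholders as in the original.

```xml
<ItemGroup>
  <!-- Collects coverage data during "dotnet test" -->
  <PackageReference Include="coverlet.msbuild" Version="x.y.z" />
  <!-- Assumed test framework, implied by the xUnit logger used later -->
  <PackageReference Include="xunit" Version="x.y.z" />
  <PackageReference Include="xunit.runner.visualstudio" Version="x.y.z" />
  <!-- Turns coverage data into HTML reports -->
  <DotNetCliToolReference Include="dotnet-reportgenerator-cli" Version="x.y.z" />
</ItemGroup>
```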
When tests are run, we gather code coverage information and publish it in the Cobertura output format. Cobertura is a popular code coverage utility in the Java world. Test data is transformed to the Cobertura format by Coverlet, a cross-platform code coverage library for .NET Core. With coverage data, I also output unit test results in Microsoft and xUnit formats to the UnitTests folder. As I said before, this is for future use and we don't do anything with files in these folders right now.

I added the run-tests.bat file to the root folder of my test project, and the first command there is for running unit tests (in your file you can put it all in one line without any line breaks):

dotnet test --logger "trx;LogFileName=TestResults.trx"
            --logger "xunit;LogFileName=TestResults.xml"
            --results-directory ./BuildReports/UnitTests
            /p:CollectCoverage=true
            /p:CoverletOutput=BuildReports\Coverage\
            /p:CoverletOutputFormat=cobertura
            /p:Exclude="[xunit.*]*"

This is what this command does:

- Using the Visual Studio logger, create the TestResults.trx file for test results.
- Using the xUnit logger, create the TestResults.xml file for test results.
- Put test results in the ./BuildReports/UnitTests folder.
- Enable collecting of code coverage data.
- Make Coverlet use the BuildReports\Coverage folder.
- Set Coverlet's output format to Cobertura and exclude the xUnit assemblies ([xunit.*]*) from coverage analysis.

It's time to try out run-tests.bat to see if everything still works and if files are generated to the expected locations.

Generating Code Coverage Reports

For code coverage reports we need to add another command to the run-tests.bat file. This command will run the report generator that generates reports based on the coverage.cobertura.xml file. The reports are generated to the same folder to keep the folder tree smaller. Here is the command (you can put it all on one line):

dotnet reportgenerator "-reports:BuildReports\Coverage\coverage.cobertura.xml" "-targetdir:BuildReports\Coverage" -reporttypes:HTML;HTMLSummary

I think this command is not very cryptic and I don't make additional comments on command line parameters here.
As a lazy guy, I expect the browser to automatically open with the newly generated reports, and this is why the last line of my run-tests.bat file is:

start BuildReports\Coverage\index.htm

It's time to run the script and see if it runs successfully to the end.

Code Coverage on Linux

On Linux, we need shell scripts. The shell script may also need execute permissions. Here's the command for this:

chmod +x run-tests.sh

Now we are good to go on Linux too.

Code Coverage Reports

After running the batch file in my playground test project folder, I see the following report in the browser. The report is longer than we can see here, but I'm still not very happy with it. I would like to have a better structural view of the tested code, so I have a better overview of how well the different system areas are covered with tests. Let's take a look at this grouping in the above tests and try to move it. Voila! To see an overall view of the system under test, I can click and close those bold namespaces. Now we see how much one or another namespace is covered. We can also go inside classes and see coverage statistics about specific classes. The nice thing is that we also get method-based statistics, and the source code view shows us which lines in the class are covered with tests and which lines are not covered. I think this kind of code coverage reporting is good enough for me.

Wrapping Up

The path to code coverage reporting is not always easy, but I got it to work as expected. All the tools I used are free and incurred no hidden expenses (besides my own time). We had to write a batch file to run tests, collect code coverage data, and generate reports. In the end, we got decent reports giving us a good overview of code coverage of our codebase.

Published at DZone with permission of Gunnar Peipman, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
You want your React app to have an intelligent form that validates the data being input before sending it off to your API. You want it to be easy and straightforward to implement with the features you'd expect. Features like setting initial values of your fields. Luckily for you, Redux Form makes all that a breeze. Even better, with the latest release candidate (Redux Form 6.0.0rc3) and later, you can upgrade to the latest version of React and not have those pesky unknown props errors bring your project to a screeching halt.

This tutorial is going to help you set up a Redux Form that uses the latest syntax, and how to get that form set up with some simple validation and initial values. We'll be pretending that we're creating an "update user information" form for a hypothetical application. The form is going to have access to actions to make submissions, and we'll work under the assumption that we've stored our user information in the user reducer. The complete source code for this tutorial can be found at this gist.

Install the Redux Form 6.0.0 release candidate (or later)

We're going to be using the latest Redux Form version (at the time of this writing), 6.0.0-rc.3, for this tutorial. There's no point in learning a syntax that's going to be obsolete as soon as the release candidate is published, so let's get a head start! Open up your console and use NPM to install the Redux Form release candidate:

npm install --save redux-form@6.0.0-rc.3

Our tutorial will also be dependent on react and react-redux, so make sure you have the latest versions of those installed as well.

Setting up your component

Opening up a blank document, we want to import the dependencies we're going to need to create our form. The form we're making is connected to our application state, and will have an awareness of the actions we've created elsewhere in the project. This allows our form to send values to an API or another service directly.
import React, { Component } from 'react';
import { Field, reduxForm, initialize } from 'redux-form';
import { connect } from 'react-redux';
import * as actions from '../../actions';

Next, let's scaffold out our component real quick:

class ReduxFormTutorial extends Component {
  //our other functions will go here

  render() {
    return (
      <div>
        {/* our form will go here */}
      </div>
    );
  }
}

function mapStateToProps(state) {
  return { user: state.user };
}

export default connect(mapStateToProps, actions)(form(ReduxFormTutorial));

You'll notice that we brought in our application state (user) and set it as props at the bottom. This is going to allow us to initialize our form with data that's already defined in our state.

Defining your form

The first thing we want to do is define our form. So just underneath our dependencies, outside of the scope of the component, add the following:

const form = reduxForm({
  form: 'ReduxFormTutorial'
});

Handling the validation of our form can get messy if we do it "inline" as part of the render function of our component. So to clean that up and make it reusable (plus easier to manage), create a const that returns the input and logic for any errors that our field receives:

const renderField = field => (
  <div>
    <label>{field.label}</label>
    <input {...field.input} type={field.type}/>
    {field.meta.touched && field.meta.error && <div className="error">{field.meta.error}</div>}
  </div>
);

Let's also create a similar constant that will serve our select input:

const renderSelect = field => (
  <div>
    <label>{field.label}</label>
    <select {...field.input}>{field.children}</select>
    {field.meta.touched && field.meta.error && <div className="error">{field.meta.error}</div>}
  </div>
);

Now we need to define the redux form required property handleSubmit inside the render function of the component. Without this, the form simply will not work at all, and you'll get a bunch of ugly errors.

const { handleSubmit } = this.props;

It's time to start making up our form! Inside the render() function return the following example form.
Make sure to use the <Field/> component imported from redux form in place of <input />.

return (
  <div>
    <form onSubmit={handleSubmit(this.handleFormSubmit.bind(this))}>
      <Field name="firstName" type="text" component={renderField}/>
      <Field name="lastName" type="text" component={renderField}/>
      <Field name="sex" component={renderSelect}>
        <option></option>
        <option value="male">Male</option>
        <option value="female">Female</option>
      </Field>
      <Field name="email" type="email" component={renderField}/>
      <Field name="phoneNumber" type="tel" component={renderField}/>
      <button type="submit">Save changes</button>
    </form>
  </div>
)

We now need something to happen when the onSubmit function is called. Generally, this would mean calling an action and sending that action the values from our form.

handleFormSubmit(formProps) {
  this.props.submitFormAction(formProps);
}

At this point, your form should function properly, that is if this.props.submitFormAction were to point to an existing action that we had created. However, our form doesn't have any sort of validation or initial data prefilled into the fields. And those are nice, we want those.

Initialize your form with data

For our example here we're pretending that our form handles updates to our user's information. We wouldn't want them to have to type in all of their information every time they switch email accounts or phone numbers. So we want the form to initialize with their existing information, which can then be altered in whatever way they wish. Inside our component, let's call a function we'll name handleInitialize when the component mounts.

componentDidMount() {
  this.handleInitialize();
}

Now, we can create our function and define the initial values. Afterward, we call the redux form property initialize and pass it our data. The object's keys must correlate with the name property of our <Field />s above.
handleInitialize() {
  const initData = {
    "firstName": this.props.user.firstName,
    "lastName": this.props.user.lastName,
    "sex": this.props.user.sex,
    "email": this.props.user.email,
    "phoneNumber": this.props.user.phoneNumber
  };

  this.props.initialize(initData);
}

It's as easy as that. When the component mounts, it will define our values and push them to the form's fields.

Adding form validation

The last thing we want to do now is add in some sort of form validation to make sure all the fields have something in them and it's of the appropriate format. Outside of our component, we want to define a function called validate that will take our formProps and run them through various tests.

function validate(formProps) {
  const errors = {};

  if (!formProps.firstName) {
    errors.firstName = 'Please enter a first name';
  }

  if (!formProps.lastName) {
    errors.lastName = 'Please enter a last name';
  }

  if (!formProps.email) {
    errors.email = 'Please enter an email';
  }

  if (!formProps.phoneNumber) {
    errors.phoneNumber = 'Please enter a phone number';
  }

  if (!formProps.sex) {
    errors.sex = 'You must enter your sex!';
  }

  return errors;
}

We're going to make a quick jog back up to where we defined our form and add one more line to check our properties against our validation criteria. This process will also check our fields against HTML validation that was defined when we set the <Field /> property type to "email" (for example).

const form = reduxForm({
  form: 'ReduxFormTutorial',
  validate
});

And there you have it! Your redux form is now set to initialize with values, validate itself before submitting, and then pass its approved values to your action. Redux form is truly one of the most pivotal dependencies you will integrate into your application, so getting a strong understanding of it is important. That's all for now and thanks for reading! Once again, you can view the complete source code here.
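A quick aside on the validation logic: because validate() is plain JavaScript with no React or Redux Form dependency, you can sanity-check it on its own under Node. The function is restated here so the snippet runs standalone.

```javascript
// The validate() function from the tutorial, framework-free.
function validate(formProps) {
  const errors = {};
  if (!formProps.firstName) {
    errors.firstName = 'Please enter a first name';
  }
  if (!formProps.lastName) {
    errors.lastName = 'Please enter a last name';
  }
  if (!formProps.email) {
    errors.email = 'Please enter an email';
  }
  if (!formProps.phoneNumber) {
    errors.phoneNumber = 'Please enter a phone number';
  }
  if (!formProps.sex) {
    errors.sex = 'You must enter your sex!';
  }
  return errors;
}

// An empty submission fails every check...
console.log(Object.keys(validate({})).length); // 5

// ...while a fully filled form produces an empty errors object,
// which is how Redux Form knows the submission may proceed.
console.log(Object.keys(validate({
  firstName: 'Jane', lastName: 'Doe', sex: 'female',
  email: 'jane@example.com', phoneNumber: '555-0100'
})).length); // 0
```

An empty errors object means the form is valid; any keys present block submission and surface as field.meta.error in the render helpers.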
And if you have any questions, comments, or want to suggest a topic for me write about next, just leave it in the section below!
Young, Widowed & Rebuilding (posts by Wendy)

It's time to re-name the blog! I'd say I'm no longer "rebuilding"... I've built a whole different life instead. Since Brian died, I've moved to Austin, then San Antonio. I've dated, married, and had 2 children. I've changed jobs. I've acquired another cat. I've sold our house, moved in with a new husband, sold his house, and picked out another one for our family. I think I've re-built plenty! And, arguably, at 35 I may not be classified as "young" anymore... but we don't have to render an official verdict on that, regardless of whether I ditch that part of the blog name.

So... I'm going to be rolling this over in my mind a bit, but I'm open to suggestions!

What's in a Name?

On June 13, Sheldon & I became the proud parents of two! Our second son, Waylon Luis, was born at 5:15 pm after about 10 hours of labor. He weighed in at 7 lbs, 15 oz.

One of the questions we get a lot is about his name - "Why Waylon?" Technically, it came from the baby book and was just one we both liked. But, of course, there is some significance that caused us to choose that name in particular out of the twenty or so we had winnowed out of the thousands.

"Like Waylon Jennings?" Yeah, sort of. But this isn't a tribute name; while Jennings inspired the moniker, the name wasn't chosen to honor him. It's what his name represents that mattered to us.

Waylon Jennings was one of The Highwaymen, a country supergroup of musicians who sang the epic song "The Highwayman" about reincarnation and continuing to live on after death. The song was a favorite of Brian's, and one that I only really started to appreciate after his death. Waylon's name, and his very existence, is a reminder that energy is a constant force and that we go on, even if those departed only return as "a single drop of rain." Finally, Waylon is a good mix of Wendy + Sheldon. Waylon being a Texan name, it seemed a good choice for this new life created here by us, a child whose very being exemplifies life after death.

His middle name is also reminiscent of second lives to us, though in the sense of second chances within the context of this physical life. Sheldon's late grandfather, whose funeral was held on the date that would have been my 10 year wedding anniversary with Brian, was named Luis. I was incredibly touched by the words penned by his wife to be read on the day of his funeral. Luis Gonzalez, or "Grandpa Lou," was by many accounts a bit of a stern man and could be difficult to get along with when he was younger. There were some disagreements, and at times, some estrangement, between Lou and his children. Eventually, he found religion and age found him. He mellowed enough, and time healed the wounds enough for all, that he made peace with everyone in the family. I love the idea that as long as we are alive, it's never too late to change, to forgive, to choose love. I myself am a very different person than I was 10 or 20 years ago. I am more patient (with others and myself), less judgmental, more inclined toward compassion and forgiveness, and more likely to let things be than carry a grudge. I'm working on being less defensive, on owning my actions and feelings and mistakes, and commanding respect. I think I'm a better wife this time around. By no means am I perfect, but I'm getting better. I'm a different person, and who I am now fits with Sheldon. He is a different person than he was as a young adult, and one who fits with me. We wouldn't have been a good match if we had met ten years sooner, but change brought us to the point where we complement each other and enjoy each other. To me, the name Luis represents the amazing capacity for change that we all hold within our hearts, and the possibilities in store for those who are willing to let go, let live, and let love in.

I don't know what my sweet little boy's soul has lived through before now, nor do I know what trials and tribulations -- and, God willing, redemptions -- await, but right now it is his destiny to be our Waylon Luis.

Branch on the Family Tree

So, we are having another baby boy! I'm due on June 14. As much as I'm physically ready to be done being pregnant, it would be great to have the kiddo on his actual due date. My aunt Amanda had her son on my birthday, and June 14 is her birthday, so that would be a cool connection. But whatever will be, will be.

What else? I mean, I haven't blogged in almost a year! So much...

Sheldon's sister Paige is living with us; she moved in right around the time we were settling into the new house. She is going to school and helps out as a nanny and "Tia" to Cooper. It's been amazing having some family here in town, and even better - right in our house! With Sheldon & I being self-employed and doing a lot from home, it's great having another set of adult hands in the family to help run the household and our businesses.

Speaking of businesses and growth, Sheldon is training a new person (a friend of ours) in the fundraising business. It's keeping him extra busy this spring, but will be a great move in the long-term.
We are really expanding in every way -- the size of our family, the size of our business, the size of our house. Except on the pet front, and I tried. We had a stray cat that was coming around daily and had even snuck into the house a couple times and would let us pet him sometimes. I managed to get him trapped, neutered and get him basic shots, and was able to re-home him with a friend. Maybe we'll get some fish sometime soon though...<br /><br />I think about Brian a lot, and miss him. I still dream about him. I still wear his shirts to sleep in, and can't part with any of our Emeril cookware, or even the Emeril kitchen towel he liked, even though it is worn and might have a small tear. I still worry about whether I got rid of too much of his stuff, and wish I'd kept more tees for sleeping. I still don't know what to do with his glasses and wallet. I still have a tote of things in the office closet that I can't bear to part with, or to look at. I still worry about how long it's been since I've been to his grave, and how long it will before I'm back.<br /><br />And I worry about what would happen if the same thing happened again. I sometimes panic when Sheldon is later coming home than I expected, or if he doesn't answer his phone. My mind goes to the worst thoughts quicker than most. I get fleeting thoughts like "I wonder how I'll die...will it be natural causes at an old age? Cancer in my 50s? Car accident in 2 years?" And I think the same things about everyone I love. I have to push those thoughts away, because thinking about it ahead of time doesn't do a damn thing except cause pain and anxiety. I suppose what will happen, will happen.<br /><br />I wonder if I think about Brian too little, or too much? I sometimes feel like I'm betraying Sheldon to say I miss him. But I feel like I'm betraying Brian if I don't. Mostly, I'm able to be happy thinking about good memories with him. 
But in the interest of honesty, I don't want to pretend times were always good, or that the good isn't tinged with aching to see him again and anger about his life being cut so short.<br /><br />I'll just never be someone who hasn't lost it all. I'll never not be a widow, and never be the same as someone who hasn't been through this.<br /><br />We still haven't come up with a name for the next baby. He will be our last child. I want to honor Brian in some way with the name, but I don't want to be weird. A music- or football-inspired name would be a good, subtle way to do that. But in a way that honors me and Sheldon as well. I mean, these are Sheldon's sons and part of his family...but they are also here because of Brian's life and how his affected mine.<br /><br />Cooper can pick out Brian from the pictures in our bedroom. I only had to tell him once, and he remembered. I don't even know if he knows some of the people in our families that well in pictures, people he's met many times. I wonder how that works -- does he see Brian sometimes, or somehow know him from when they were both in spirt form together, before Cooper gained a physical body and after Brian lost his? Will Brian's soul re-enter the physical world again before I'm able to reconnect with him? Is he looking after us? Is he happy for me? Proud? Have I done him right? How will I begin to tell the story to my kids about Brian, about the first husband I had? How do we talk about death in a non-scary, but honest way? How do I reassure them that the same thing won't happen to their daddy, or to me, when I don't really know that myself?<br /><br />I'm all over the place, I know. It still is all so overwhelming to think about what the last 6-7 years have been and how complicated and beautiful and painful life is. So many happy, glorious moments in that time, and also so much pain and confusion, so much hurt and loss. And it all comes together to build today. 
Today feels messy to me, probably just because I'm making the time to sit down and face all these things that float around inside my heart and my head and shape my soul. It's been a long time since I took a look in a spiritual mirror. I've been so caught up in the daily grind -- diapers, laundry, meal planning, work, dishes, game nights, visitors -- that I have been shutting out the really big stuff. <br /><br />All I know is, things are overwhelmingly good in my life right now. I probably think and worry about things that most people don't, because most people haven't walked my path. But maybe that can be an advantage to me, maybe it will help me appreciate what I have more than most. I know that I can make a point to try to do exactly that, so at least that's a good starting point.</div><img src="" height="1" width="1" alt=""/>Wendy a Move On<div dir="ltr" style="text-align: left;" trbidi="on">Sheldon and I (and Cooper) have moved! We found a larger house in San Antonio, not too far from our last, that we plan to have as our "forever home." It's a beautiful four bedroom home with a sunroom, an open kitchen/living space, formal dining room and a large yard with lots of mature trees. Deer roam the neighborhood and the community is special -- picnics, events, a park, a pool, etc. We couldn't be happier!<br /><br />It has been busy and stressful, of course, as all moves are. And the process of going through all your things and making a move tends to bring up some emotions. I think this happens for most people -- Sheldon even has said some of the clothes in his closet aren't really to wear, but are memories on hangers -- but it has hit me hard sometimes. I've had a lot go through my mind even without facing the objects that are packed with a sentimental punch.<br /><br />It's been five years since I first arrived in Austin, sans cats, for a three-month getaway to help me bounce back from losing Brian. 
I remember painfully and distinctly sitting on the patio of my East Sixth loft place, bawling my eyes out while I blogged on Memorial Day weekend of 2010. I felt guilty that I wasn't home, felt dread about dealing with the ongoing process of getting a headstone in place for Brian, and knew that the issue of being far away from his resting place would always be a struggle (even if I had stayed in Des Moines, that was 2.5 hours from where he is buried). <br /><br />This May, I got to toast the five-year Texas milestone with my good friends Erin and Chad, who moved to Austin that summer too (Erin and their cat stayed with me in that studio apartment for about a month). They have just moved to another apartment in Austin, which I'm anxious to see, and they recently visited our new house to take a break from their moving process and see our new home. They are some of my best friends and it's bittersweet to reflect on what we've been through together and what brought us so close. They are a tie to my Iowa life, a large string in the tapestry of my life that winds through many places and past many faces. I'm lucky to have them here and that we all took the leap of faith to Austin together, even though I ended up moving a bit further south when I fell in love with Sheldon.<br /><br />As we move into our final family home, I feel as though I'm moving one step further away from my first family home - the one I shared with Brian. I miss that house still, and that life. I still grieve for those losses, and this move has stirred up the emotional waters, muddied the surface of my life. I've thought about how many moves the cats have been through, and well they have handled it, and how grateful I am that I won't have to put them through the whole rigamarole again. I wonder if they remember the old house and the first man we called "Daddy." I wonder what kind of memories are being made in that house now and whether the family that I sold it to still lives there. 
Did they keep the bar in the basement? Do they use the front room as a dining room and play games around the table? Do they socialize and play yard games?<br /><br />And then there are the things. I recently unpacked the box that had the guest book from Brian's visitations. There were so many names; I didn't remember that many people being there. I think it was all such a blur at the time - but several times I thought to myself, "I didn't realize they were there." I was overwhelmed with gratitude to see the names of all those people from different phases of our life, and reminded of how lucky I was to have such a show of support. I remembered too how strong Brian's impact on this world was - how many people loved, admired, respected, and needed him. Hundreds of names filled those pages...hundreds of people who lost something, and many who lost almost everything, with his passing. It was immensely painful to think of that aspect.<br /><br />I still had the large poster board full of pictures of our life together, all pictures of him, that we put together and displayed at his services. Probably a good hundred pictures -- us on vacations, at weddings, him with friends and family, at concerts and so on. Some of the pictures had fallen off over the years and the display had been sitting in our office for the past few years, losing pictures here and there like a tree losing leaves in the early fall. I had been trying to keep it all intact, but my efforts weren't doing the job, and I also didn't know where to display this oversized tribute to my lost love. Where does that fit into my home and my life in a place where I'm the only one in the house who knew Brian? Where most of my friends that visit (except Erin & Chad) know him only from stories? And how to move such a thing (again)? I have been wresting with the idea of taking the photos off the board and putting them somewhere else, and I finally took that step a couple weeks before the move. 
The whole time I felt sad, guilty, and also had some good feelings thinking about all the fun stories and memories behind those pictures. Right now, the pictures are stacked up in a Ziploc bag in my top desk drawer -- there for me whenever I want to thumb through and remember.<br /><br />It seems like that is what's happening all over my life -- the trappings of my old life and of Brian's life are put away in secret places for me to visit, or forget about. Those moments and that life are further and further away from the present in the timeline of my life, and the ties to those times and places are stretched thinner and thinner, and grow fewer in number. The old gets pushed aside for the new, over and over. It's always most acute during a move.<br /><br />I still have some large things from our life together -- our bedroom furniture (now in the guest room and Cooper's room), our dining room table, Brian's car that I started driving after he passed. I have sentimental things too -- my engagement ring from Sheldon made with diamonds Brian and I wore in our rings, my tattoo, Brian's class ring. But with every move, the number of tangible reminders shrinks and there comes another life milestone that is one more mile marker away from the starting point of my journey in adulthood and love and away from my first camerado.<br /><br /><br /><br /></div><img src="" height="1" width="1" alt=""/>Wendy Next Generation<div dir="ltr" style="text-align: left;" trbidi="on">: left;">So I haven't blogged in forever, but for good reason -- this little guy keeps me plenty busy! Cooper Matthew was born Sept. 22 at 1:30 pm. He is five months old now and keeps us on our toes.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;".</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;". 
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;". </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;". </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;".</div><br /></div><img src="" height="1" width="1" alt=""/>Wendy After Death<div dir="ltr" style="text-align: left;" trbidi="on"><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="" height="320" width="240" /></a>Another big milestone for me....I'm having a baby!<br /><br />I'm actually six months along already, due in mid-September. We are having a boy.<br /><br />Sheldon and I are thrilled. We had been trying since we got married last year, and though it doesn't sound like a long time, I was starting to get discouraged when it took us six months to get a positive test result. The (small) bummer was that this test happened to fall one day before we took a trip to Las Vegas. On the plus side, I realized I REALLY love Vegas when I was able to have a blast drinking only ginger ale - and I think the fact that I wasn't drinking made me get carded a lot. At 33, I'll take that all day long!<br /><br />Overall, things have been going very well. I feel pretty good, and even ran 3.5 miles in a marathon relay race a few weeks ago. I am just starting to feel big and notice that my belly influences my mobility and the way I move, and I'm getting up a lot in the middle of the night to go to the bathroom and change positions. Overall, though, I can't complain. <br /><br />On the other hand, there have been some triggers or emotional challenges. 
Not long after we found out our good news, I had a dream that really shook me and had me feeling "off" for a while, and I cried for a good day or two. In my dream, I was pregnant and sharing my life with Sheldon, just like in real life. However, the baby was Brian's. We knew that I would have two children -- one Brian's and one Sheldon's -- and we were happy with that scenario. In the dream, I felt like it was perfect -- I would get one child with each of my loves. I woke up and was sad to remember that this wasn't the case, and I felt disappointed about that, and again had to grieve for the fact that I never had the opportunity to have a child with Brian (or, more accurately, that we never even ventured down that path because we thought we had time for that "later"). It's not fair that he died before he could have kids, that his genes weren't carried on. We had talked about down the road and I fantasized about having a red-headed, smart, mischievous boy like him. He was such an adorable kid. The fact that this never happened still leaves me with a sinking, empty, feeling....like you feel if you are holding a precious heirloom that means the world to someone and you just dropped it in front of them and saw it shatter at your feet, and you are frozen, staring down in shock at your empty fingers and the myriad glass fragments littering the ground. Broken chances, irreversible fate....an opportunity that literally slipped through my fingers and shattered in front of me, never to be whole or real or within my grasp again. This still pains me a great deal when I think about it. I think this is why I've waited so long to blog about this. The dream happened a good four months ago, but I'm crying as much today as I did the day after it happened.<br /><br />This pregnancy brings about another reality -- I am carrying a child who would not have existed if Brian had lived. This boy will owe his very existence to Brian's death. 
Of course, Brian dying changed a lot of things in many peoples' lives though the butterfly effect -- I have made new friends, friendships have been forged among people I connected, a couple I introduced is now engaged, people live in houses I found for them, etc. And I know I wouldn't even be married to Sheldon if Brian hadn't died. But this adds a whole new level of gravity to the impact of it all -- a human being is going to be born out of the aftermath of Brian's death. It's a sobering and heavy thought. <br /><br />I think about how I'm going to tell the little one about Brian. How will he understand? When is it too soon to talk about death? I know it will not be a one-time, sit-down conversation and that we will handle it in age-appropriate ways, but it's already something on my mind. Most parents at least get the luxury to delay this conversation for many years, until a death in the family occurs. In this case, a death in the family happened before he came along, and one that he'll ask questions about when he finds out my middle name, when he asks how he's related to his Boka cousins and relatives, when he sees my tattoo or pictures of Brian on the wall. Will he understand that I could love Brian and Daddy the same? Will he worry about Daddy dying too? Will he see Brian watching over him and us? <br /><br />I don't like feeling like I see the negative side of everything, because I'm generally a very positive person. And I do feel like I've made a lot of progress. Early in my grief, I would have to strain to see the silver linings amongst the big, dark clouds. Now, I feel like it's blue skies all the time, though I am aware of the dark clouds in the distance, clouds that are outside the vision of those who don't know what I know, who haven't been through what I've been through. 
<br /><br />Mostly though...I see skies of blue...and I think to myself, "What a wonderful world."</div><img src="" height="1" width="1" alt=""/>Wendy Years<div dir="ltr" style="text-align: left;" trbidi="on">It's been four years today since Brian died. Thankfully, the details of that horrible day have softened a little bit in my mind. If I choose to go back and remember it, it's pretty sharp and still cuts me to the core, but time has helped me add some distance and I no longer have flashbacks, nightmares, or persistent thoughts about the horrors that unfolded before my eyes and upon my life that awful day.<br /><br />In a way, it doesn't sound like a long time. Four years really isn't that long in the scheme of things - not to a normal person with a normal, happy life. When life is good, time goes quickly. It is true that "time flies when you're having fun." But when you're a grieving widow who's reeling with shock, hurting beyond belief, dreading upcoming holidays and occasions, and who is fearful and unsure about the near and far future, every day seems to drag on for an eternity. While the past couple years have gone by relatively quickly, the first year felt closer to a decade in time than one year. It's only lately that I've started to feel capable and ready to plan far in the future again. I don't know that I've planned anything for more than six months in the future since Brian died -- and that one thing I did plan that far in advance was my wedding. I'm still not the future-planner I once was. I'm too leery of unexpected change, too timid to dare to presume that I (or anyone else) will still be alive and well that far ahead.<br /><br />Yet so much has happened. I moved, I changed jobs, I picked up another (!) 
cat, I moved again, I bought a condo, I went to Europe, I had a breakdown and went back to therapy, I bounced back, I struggled to fit in, I made amazing friends, I ran a couple more half-marathons, I irreparably injured my ankle on a Mexican waterside (thus insuring I won't be doing any more full 26.2-milers), I traveled to Mexico three times, I went to Bonnaroo twice, I have made mistakes, I met a few celebrities, I took up golfing, and my online diary of grief has been viewed over 100,000 times. I literally could not have imagined any of this four years ago. At that point, all I knew was I was lost, I was shocked, I was devastated, and I knew life would never be the same again.<br /><br />Yet, on that day, I also knew that life would go on. I remember distinctly thinking, "I'm still breathing. I'm going to keep breathing. I'm going to wake up tomorrow, and the next day, and the next. I don't know what to do with this, but I know my life is going on." And from there, I just had to take it hour by hour, then day by day, and week by week, and finally - month by month. I'm finally able to think ahead and to dare to dream about what will happen years from now, what life will look like when I'm middle-aged, when I'm old. It's something a lot of people take for granted, this ability to dream and plan for a future. It's the thing that has taken the longest to build back up in my life. Some combination of fear and the cold reality of possibilities has kept me from daring to think long-term and to build toward an uncertain future.<br /><br />Brian was quite a planner. Not only did we always have a packed social calendar, but he was diligent about his professional and personal goals. He had a target income he wanted to hit by 40, and a position within his company. We started seeing a financial planner before I had even finished my schooling with the idea to set our long-term goals and take the steps needed to achieve them. 
I was like that to a lesser extent, but loved the structure of this way of thinking and happily participated in these discussions and plans, and we started socking away money into our IRAs and 401(k)s. Once he died, I was like a sailboat in a windless sea, drifting about deflated and without direction. I literally wrote about how I moved to Austin because "that's where the wind took me."<br /><br />Today, in Brian's honor, I resolve to get back to my forward-thinking, future-planning ways. I know that life is uncertain. I also know that the things I want in life aren't going to happen if I don't plan for them. If I don't dare to dream it, I won't achieve it. It's time to start dreaming, goal-setting, and forward-thinking again. I've let the wind take me where I needed to be, and I'm ready to use this place in life as my new launching pad. It's time to draw up a road map to the future I want. It's time to dream big again. <br /><br /><br /><br /><br /><br /></div><img src="" height="1" width="1" alt=""/>Wendyéjà Vu All Over Again<div dir="ltr" style="text-align: left;" trbidi="on">It's been a while since I've walked the early days of fresh, sudden grief. Well, it <i>had</i> been a while. Over the holidays, I treaded that footpath again and remembered what a hard and fierce walk it is with Sheldon's family dealing with the sudden loss of his beloved Uncle Matt. I have to admit, it was very hard to be in that place again, and this trip was very hard on me.<br /><br />I heard the story about how Sheldon's uncle died in the middle of the night, about what his fiancé went through calling first responders and doing chest compressions, how helpless she felt, how this experience was so utterly traumatizing. 
I heard family members talk about getting the phone call, about being told the news by the doctors, about what it was like to see someone you just talked to -- someone who was just walking around, breathing, living, and doing so on a grand scale -- now lifeless on a table in a small room where family and friends take turns paying their respects, saying private good-byes, and being forced to reckon with a cold reality that is undeniable once you have touched a cooling body and realize there's no breath coming back. I wasn't there when his uncle passed, but I can see it as clearly in my mind as if I was. It was eerie how similar the stories from everyone in Ohio were to my own experience four years ago in Iowa. <br /><br />I knew the family had to tell their stories, difficult they may have been for them to articulate and for me to hear. I remember reading in some grief book or literature how important it is for survivors to tell the story of the death, to say the word "dead" even, to help reality sink in. It was after reading this that I had started to tell my own story and even wrote about it on this blog, to help me accept reality. I knew that this was about acceptance and processing, so I listened. I offered comfort. I felt the familiar heave of heavy sobs of shock, confusion, helplessness and pain when Matt's fiancé cried into my arms. We talked about where the spirit goes, about signs that our departed leave for us to let us know they are okay.<br /><br />We put together a photo collage for the funeral on the long, oval-shaped, oak dining table. As we told the stories behind the pictures amidst a blend of tears and laughter, I was pulled back in time to the preparations for Brian's funeral. 
I remember someone at that time making a remark about how this task serves to not only honor the life of the deceased, but it gives the survivors something concrete to do, a chore to keep hands busy and to keep the hours passing in those first few, most difficult days.<br /><br />When we went through Matt's clothes, there were many tears shed. Still, we managed some chuckles when the words were said, "Matt had terrible taste." (He always looked nice, but he did have quite a few mock turtlenecks and Cosby sweaters that made the comment completely fair.) A box of tee-shirts was put aside for the making of a memory quilt, just like I did with Brian's Bears attire. Sheldon's mom is keeping a suitcase full of ugly sweaters so we can wear them around the holidays to keep Matt's presence with us in future years. I thought about how, desperate to be practical and knowing I was downsizing in my move, I got rid of a few shirts of Brian's that I wish I had back. On the other hand, I also know I'm okay without them and that no one will ever get the disposition of things just right. Life does go on, with or without the things.<br /><br />Still, it has taken time, a lot of effort, and a life full of love and support for me to come full circle, to get to where I feel mostly happy thinking about Brian. It took time for me to really understand how his presence and spirit live on, and how they don't (and I'm still sorting some of that out). There is no magic pill. Grief is not short-lived, nor is it simple. Most of all, it is not easy. I had to dig deep to think about what advice or insight was most relevant now, at this time when the loss is so fresh and when we haven't all really absorbed his death as fact quite yet.<br /><br />I kept coming back to one thing: One day at a time. Sometimes, one hour or one minute at a time.<br /><br />This mantra got me through the worst times. The other thing I would say to someone who is freshly grieving is to embrace the grief. 
That's not to say you should seek to <i>enjoy</i> it -- because no one will. It will completely suck. But, like a root canal, it is necessary and you just need to suck it up and deal with it, or the problem and pain will remain festering under the surface. When you feel the pain, lean in. When you want to cry, cry. When you want to tell a story about the loved one, go ahead and do it. Ignoring his life and memory won't help anyone. <br /><br />I think that living with grief is ultimately about how you are able to cope with what has happened. While a death happened, a life happened too. Remember that, celebrate it. Mourn the loss, both of what you had and what you will not have. You have to acknowledge those feelings, and feel them. But also celebrate the good times. Be honest about the persons flaws and laugh about them if you can -- such as the ugly sweaters or Hawaiian shirts (Brian was guilty of the latter).<br /><br />Eventually, the pain will be less. Eventually, the smiles will be more. It just takes time. And work. And love.<br /><br /><br /><br /> <br /><br /><br /><br /><br /><br /><br /></div><img src="" height="1" width="1" alt=""/>Wendy Blues<div dir="ltr" style="text-align: left;" trbidi="on">Today would have been Brian's 35th birthday. He would not have liked it. First off, he would have been less than excited about hitting that midway point between 30 and 40. Second of all, it's a Monday. He much preferred a Friday or Saturday birthday, or even a Thursday, so people could celebrate in style the whole night long. We probably would have had a blowout party over the weekend, followed by a day of recovery (and pizza) watching football yesterday. Still, he would have wanted more today. He'd probably have taken the day off work, stayed home and played video games or fooled around watching internet videos or music DVDs. Maybe we would have gone to Kenny's Pub in Waukee for steak night, if in fact Monday is still steak night there. 
All in all, even a birthday he didn't like very much still would have been pretty darn good.<br /><br />It's silly to think that someone would see 35 as being old, but he felt that way starting around age 28 or 29. He just wasn't excited about getting older. Maybe it was because he was a kid at heart; maybe it was because he was scared to get to the "kids or no kids" phase of our life; maybe he was afraid everything would change with our friends as we grew older and made such choices; maybe he just realized life is short and hated seeing it go by so quickly; or maybe it was because, deep down, his soul knew his time on this world was particularly limited. Looking back at his attitude on aging, I have mixed feelings. On the one hand, I think of the youthful ignorance behind not wanting to get older -- surely it beats <i>not</i> getting any older. Being 35 sure sounds better than never having the chance to reach that age. We should just relish every day and every year of our lives, and appreciate the bounty of friends, family, fellowship, food, drink, music, fun, faith, and so on. Every day that we get to do that is a day to enjoy, not to dread. The older you are, the more opportunity you've had to enjoy what this world offers. <br /><br />On the other hand, I have to admit that Brian was right. (God, he would love that I'm admitting this.) At least in his own case, he actually was nearing the end of his life at 28, 29 years old. We just didn't know it at the time. It's strange. 
If only we could have slowed down the clock, made him 29 or 30 for just a while longer…<br /><br />Perhaps a healthy dose of appreciation for enjoying every day needs to be tempered with the awareness that we are all getting older and that every day that goes by represents one less day of your life that remains -- one less day to achieve what you want to accomplish, to take a trip to the place you've always wanted to visit, to tell someone dear how much you love them, to take a chance you've always wanted to take. Whether or not you are objectively "young" or "old," life is short and our days on this earth are limited. That is true for all of us, whether we die young or last 100 years. It's still a finite number of days, and no one has any guarantees.<br /><br />Make the most out of today. That's what Brian would have wanted.<br /><br /></div>Wendy Hearted Holidays<div dir="ltr" style="text-align: left;" trbidi="on">This year marks the first time I won't be in Iowa for the holidays at all. I haven't always been there on Christmas Day since moving to Texas, but I have always spent some time there during the holiday season, had some kind of celebration. Not this year, though, and that's kind of hard. I made this choice with Sheldon a couple months ago, and with good reason - we went to Iowa in October for another wedding celebration and I returned again in November to attend a charity auction for Brian's animal shelter. Plus, it is always stressful to try to go to two different states up north. I have gotten quite sick over the holidays the past two years, probably in part due to the stress and travel. Not to mention the fact that the cats hate being left alone so long, even though our sweet neighbor Carol checks on them daily (actually, more than once a day).<br /><br />Still, I was thinking it would be strange this year. I was missing the idea of seeing everyone, the excitement of the season.
I knew we were making the right choice, but it still tugged at my heart a bit. Add that to the warm Texas weather, and I just haven't quite been very quick to get into the holiday spirit. I only started to come around a week or so ago, after we got all our decorations up and went to an ugly sweater party with some friends. I started to finally get excited about Christmas.<br /><br />Now, some bad news has come along that is going to make Christmas really, really hard this year. Sheldon's uncle Matt passed away of a heart attack this week. He was only 50 years old. Far too young. We are still in shock, and very much grieving the loss of this man, who was very close to Sheldon. Matt got Sheldon into the business he is in now, and we would see him on company trips. We just spent time with him in Colorado a couple months ago. He was always around when we were in Cincinnati. He helped Sheldon plan and orchestrate our engagement, and did a reading at our wedding. I can't imagine a trip to Cincinnati where I don't see his face, hear his voice, feel his arm around me in a hearty embrace, and smell his cologne. It just won't be right.<br /><br />This year, like last, we were to have Christmas dinner at Matt's house. He was going to make prime rib. It was amazing last year, one of the highlights of the trip. He was a great cook and host. <br /><br />Matt also had season tickets to the Cincinnati Bengals. Every time we went there, we'd try to go to a game as well. This year, we'd planned a big group outing to the last game of the season with over a dozen people going. Matt would have been the heart and soul of this, the one who had the best tailgating spot, who told the best stories, who brought the best food. He may have been but one of 15 or so people, but his presence (and now absence) was much bigger. It will not be remotely the same without him.<br /><br />Matt reminded me of Brian in a lot of ways. 
He was big-hearted, big in stature to match, he was outgoing, liked to have fun, liked to drink, not at all shy or reserved, spoke his mind, loved people, loved food, loved football, could be silly at times, and acted kind of like a big kid. They both liked dirty jokes and Jaegermeister and were the life of the party. They both had unique voices that I will remember clear as day for the rest of my life. They both died suddenly on winter mornings, and their deaths were followed by major snowstorms. These men were powerful forces in life, and their sudden takings from this earth seemed to literally suck the air out of the atmosphere and wreak the same havoc on the weather that their deaths were wreaking on our hearts. <br /><br />It will be with heavy hearts that we head north this week. Instead of having Christmas dinner at Matt's house and going to a football game with him, we'll be going to his funeral and comforting his fiancée the best we can, which will be helpful, but I know will never be enough to fill the hole in her heart and life. Thinking about what she is going through now and what lies ahead for her absolutely breaks my heart. I know this pain all too well, and wish to God she didn't have to go through it too. <br /><br />Please keep Matt's family and friends in your prayers this holiday season. And please, cherish the time you spend with your relatives and friends. You never know which Christmas will be someone's last. Live your life with love, have fun, host parties, go to football games or museums or whatever trips your trigger, engage in good conversation, tell funny stories and jokes, and hug one another tightly. And have a Jaegerbomb for Uncle Matt while you cheer on the Bengals.<br /><br /></div>Wendy (Grief) is Like a Roller Coaster, Baby Baby<div dir="ltr" style="text-align: left;" trbidi="on">My grief is largely under control now, something I carry with me, concealed and small.
I don't cry that much anymore and rather than being actively grieving all the time, I function more as a "normal person" whose past just happens to shape the way she thinks, feels, and acts. Most of the time.<br /><br />Sometimes, though, I get caught off guard. Sometimes grief still sneaks up on me and overwhelms me. My dark days may be less severe and far less frequent than they were two or three years ago, but they are not gone completely. Despite my overall improvement and well-being, I am not immune from crying spells and bad days. My grief is kind of like a wild animal that I've spent years training and domesticating. While it usually rides around with me inside my pocket, sometimes it returns to its feral ways and, when I'm not looking or I forget how strong and savage it can be, it gets out of its neat little spot and attacks me when I least expect it. It claws me up and sinks its teeth into my skin, but instead of drawing blood it brings a stream of tears.<br /><br />Obviously, I had a bad day recently. There was definitely a trigger, one I don't care to discuss, but I had a full day where I simply couldn't stop the tears. I knew there wasn't much I could do except let them come. I had to let the emotion out, to validate my feelings. Each tear was the anguish, the pain, the hurt coming out. It would do no good to try to fight to keep all that inside. Why would I? There was nothing to prove by not crying.<br /><br />Sheldon was understanding, as always. He couldn't rationally understand the pain, but he didn't have to. Emotions don't always listen to reason anyway. He just let me have space, and gave me lots of hugs. He let me talk if I wanted, but didn't push. I told him I just needed a day to process some things and to work through my feelings. I told him I needed one day to cry. And I did. I alternated between sobbing on the couch and silent tears that just flowed without permission while I went about my daily routine.
These tears were coming whether I "allowed" them to or not, and each one carried out a little of my pain. (That last statement is a scientific fact; tears that are produced from emotional crying actually contain more toxins than those produced from a physical stimulus such as chopping onions or having something in your eye.)<br /><br />What's nice is that now I know that I can handle the ups and downs of grief. I've lived with it so long that I know I can manage a bad day here and there. I know that crying and feeling bad are okay and are normal. I know this isn't permanent. I know that sometimes, the wild animal that is grief has to be a wild animal, but that it will tucker itself out and I can put a leash on it again eventually and put it back where it belongs.<br /><br /></div>Wendy Out<div dir="ltr" style="text-align: left;" trbidi="on">Perhaps you saw the commercials that were running recently featuring Sam Gordon, the girl who is a football phenom, promoting the "Together We Make Football" contest. The contest allowed people to submit an essay and five photos or a video telling their football story about why they love the sport. The grand prize was a trip to next year's Superbowl. Naturally, I was excited and set out writing my essay right away!<br /><br />I spent hours writing, proofreading, and editing my essay. I also spent considerable time rifling through years' worth of digital pictures (even busting out an old hard drive) to find the five best pictures that would illustrate my story. Unfortunately, the amount of time I spent on these tasks would be just the beginning, and I ended up spending just as much, if not <i>more</i>, time just trying to submit my entry due to repeated technical glitches and ended up feeling about as crazy as Ray Finkle in <i>Ace Ventura</i>…hence the name of this blog post.<br /><br />Here's exactly what happened.
The contest ended at midnight on Tuesday, November 5. By the 4th, my essay and pictures were ready to go! I started trying to submit them that morning. The website had you first fill out your personal information. Then, there was a box for uploading pictures and a box for submitting your essay. Once those things were done, a blue button below read "Submit Your Photos." The first several times, the blue box to submit the photos wouldn't light up - it remained pale. Eventually, I figured out that I needed to first copy and paste the essay, then upload the photos one by one. If I did that, the "Submit Your Photos" button would light up and could be clicked. Still, I kept getting error messages. The message said something to the effect of "Sorry, there was an error uploading one of your photos. Please try again later." This happened every. Damn. Time.<br /><br />I read and re-read the contest rules. My photos were well below the maximum size allowed. The rules said the photos could not have been edited at all, and I had cropped them, so I thought maybe that was the problem. I went back through my old files to dig up the un-cropped versions for submission. No luck.<br /><br />I thought maybe it was my computer. I was at a friend's house, so I emailed my essay and photos to her and tried it from her computer. Same result. I asked Sheldon to try from his computer. Same problem. Another friend offered to try from her computer. She also had the same problem.<br /><br />I tried using fewer than all five photos, tried using different photos. I figured maybe there was a glitch with one of them, so I tried systematically removing each photo, one by one, and only submitting four of them. I STILL got the same error message.<br /><br />I thought maybe web traffic was just too high on the site, so I tried in the middle of the night. Repeatedly. I got the error message. Repeatedly.
<br /><br />I thought maybe Internet Explorer was the issue….until I got the same error message using Firefox and Google Chrome.<br /><br />I bet that in all, at least 75 attempts were made over the course of 36 hours by four different people using four different computers and at least three different operating systems. We were ALL unable to submit my essay. I was getting incredibly frustrated, but I always like to try to plan for the worst-case scenario. I decided that since the error was photo-related, that I would just submit my essay without the photos and add a couple sentences explaining my technical issues and asking to submit photos another way, by email or something. This meant I had to pare down my essay a bit more though, to squeeze that explanation in and still stay under the word limit for the essay. I did that, and….STILL got the same error message!<br /><br />At this point, I had literally spent hours just trying to submit my entry and was very frustrated. I had no idea what to do, so I posted my essay and photos on Facebook, asking my friends to share the status to the NFL's Facebook page. The problem? You can't "share" something on a business page, only that of a friend. You have to post it directly, not using the "share" function. So I did that. I posted my story on the NFL page directly, and in the comments section of a post they had made promoting the contest. On the same thread, I reported my technical issues and found I was not the only one having this problem.<br /><br />Desperate, I even tweeted the NFL asking about the problem. I got some tweets in response suggesting various things to try (including, ironically, cropping the photos and saving them a special way with Photoshop). None of them worked. Eventually, I got a direct message from someone with the NFL saying he would try to submit my entry for me before the deadline. I thanked him profusely. 
The next day he told me he wasn't actually able to submit it after all, but would still see what he could do and told me to "stay tuned." I haven't heard anything since, so I decided I'd write this post. I plan to post a link to this post on the NFL's Facebook page, tweet it to the NFL, and send a direct message with the link to my contact at the NFL. I want someone in charge to see what this contest experience was like for me (and probably many other fans, though I doubt any were as manically rabid about continuing to try to post their entries scores of times using a network of friends and family). Most importantly, though, I wanted my story to be told. I wrote this essay hoping it would be read. I truly believe my football story is powerful and moving, and football means the world to me. I just want to tell my story one way or another. If this is my only platform, so be it.<br /><br /><br /><div style="text-align: center;"><b>THIS IS WHY I LOVE FOOTBALL</b></div><div style="text-align: center;"><b><br /></b></div><div class="MsoNormal">In the seventh grade, I made the football cheerleading squad. Not knowing too much about the game, I started watching college football on weekends and tried to learn the basics of football from the other girls on the squad. In high school, I continued cheering and started dating a football player.
Brian and I would spend Sundays watching games with his family. He taught me not just about downs and player positions, but also about Papa Bear Halas, Walter Payton, and the Superbowl Shuffle. The boy bled blue and orange, and quickly converted me into a Bears fan.</div>[Photo: My last Bears game with Brian]<div class="MsoNormal">Once we got to college, Brian and I had our own weekend ritual during football season. I would stay in his dorm room on Saturday nights, we’d have a frozen pizza for dinner, and on Sundays we would sleep in as late as we possibly could while allowing time to hit the cafeteria and be back in time for the noon kickoff.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">A few years later, Brian and I got married. By that point, I was as big a Bears fan as he was. My "something blue" on our wedding day was a Chicago Bears garter.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">In our first home, we converted our basement into a Chicago Bears bar - the Boka Bear Den (Boka being our last name). We filled the walls with banners and memorabilia, down to the Bears keg tapper. We loved having parties for Bears games and also cherished our annual trip with “Da Tailgating Crew” from Des Moines to Soldier Field for a game. My favorite memory at Soldier Field was witnessing Devin Hester return two touchdowns one frigid Chicago night to help the Bears defeat the Broncos in overtime.
Whether at home or at the stadium, we loved watching football together.</div>[Photo: Tattoo tribute]<div class="MsoNormal">Tragically, after five years of marriage, Brian passed away suddenly of a pulmonary embolism. As friends and family filled my house that cold Sunday in January, we turned on the television to the playoff games. As his brother said, it wouldn't be right to be at our house and not be watching football. I don't really remember much of that postseason, but I do remember the way our friends, family, and the members of his fantasy football league came together to support me. I had a Superbowl party at our house less than a month after his passing because we always had one and that's what he would have wanted. That fall, I hosted the annual draft for the fantasy league that he founded eight years prior. I was honored to be given Brian’s place in the league, as a player and as the commissioner. That year, we had the trophy named in his honor.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">In time, I decided to start anew. I moved 1,000 miles away to Austin, Texas. I wasn't going to abandon my team, though, or my husband's memory. I got a tattoo in remembrance of Brian -- a Chicago Bears "C" set against a shamrock background -- a tribute to the big, Irish guy who made me love football and whose mark on my life would never fade away. I remained active in his fantasy league, too, and won the trophy that had eluded him for over a decade. I went on our annual trip to Soldier Field with our friends, and we celebrated a bittersweet victory without him.</div>[Photo: Bengals game with Sheldon]<div class="MsoNormal"><br /></div><div class="MsoNormal">Eventually, I met another Midwest-to-Texas transplant. Sheldon was from Cincinnati, but lived in San Antonio. We began dating, and one of the first times I visited him was for the 2011 Superbowl…in part because he had a better TV than any of my friends. One Sunday, watching football together on the couch, he told me how much he loved that I was a fan of the game. He enjoyed watching with me and liked that I didn't feel ignored on Sundays (because I, too, was on my laptop, following fantasy scores and the Bears game blog). For my part, I was just glad he wasn’t a Packers fan!</div><div class="MsoNormal"><br /></div><div class="MsoNormal">This summer, Sheldon and I got married. Now I’m in two fantasy leagues – one started by my late husband, and one founded and run by my current husband – and I dream of winning both trophies in the same year.</div>[Photo: Playoff game - in Houston! (Tank top in January?! Okay!)]<div class="MsoNormal">While I’m no longer able to make an annual trek to Soldier Field, Sheldon and I see our teams whenever they play in Texas -- we gleefully watched the Bears destroy the Cowboys in Dallas last season, and were crushed by Bengals playoff losses in Houston the past two years. We also catch Bengals games when we visit his friends and family in Cincinnati.
These game day experiences together have given birth to a dream of ours to see a game in every NFL stadium. This fall, we were able to cross Mile High off our list.</div>[Photo: see my fantasy QB in Denver!]<div class="MsoNormal">Rooting for a different team than my husband is something new, but it has its perks. When the Bears and Bengals played earlier this year, the result was not just a Bears victory, but also that I got out of laundry for two weeks! For the most part, though, we enjoy getting to have two teams to root for, giving us twice the chances to celebrate a win.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">The past four years of my life have been filled with ups and downs, awful times and joyous moments. One of the things that got me through it all was football. Football provided a distraction when one was needed, an opportunity for my friends to surround me with love, fond memories of my time with Brian, and fertile ground for new love to take root. Football made me the person I am today and the person Sheldon fell in love with. I wouldn’t be who I am or where I am without football, and I love football for that.</div><br /><br /></div>Wendy Miles to go Before I Sleep<div dir="ltr" style="text-align: left;" trbidi="on">My car hit 100,000 miles recently. And by "my car," I mean the Mitsubishi SUV that used to belong to Brian. The car we took to Austin on our last trip there together, about 10 months before he died. The first, and only, brand new car he ever bought. It wasn't even paid off when he died, and had about half as many miles then as it does now.
I've put my fair share of miles on it with many trips between Iowa and Texas, plus miles accrued showing houses and driving between Austin and San Antone.<br /><br />The car's been good to me. I've had a few fender-benders in it, but she's in good shape overall. It's a little messier inside than Brian would have kept it, but that's okay; he wouldn't have really liked me driving it at all anyway. I did clean it out pretty thoroughly, complete with vacuuming, and then got it washed just before I hit the 100K mark. That was in part because of my awareness of how he would have kept the vehicle himself, and in part because it had gotten way too messy for my own standards.<br /><br />Sheldon got a new truck recently. Before that, he'd been talking about getting me a better car. He keeps saying that when we have kids, he wants to have them in the best, safest vehicle possible. He wants to spoil me and have me live and drive as comfortably as possible. I keep telling him I don't need or necessarily even want a new vehicle. So now he got himself one, and maybe we'll revisit the idea of me getting a new car down the road (haha) a ways, when his truck is paid off (I hate the idea of having more than one car payment). I still don't know if I will ever be ready to get rid of the Mitsubishi though -- I have a definite emotional connection, besides just loving its utility. It can fit a lot of stuff, drives well, has been solid mechanically. I like how high up I sit while driving it. I also love the Bears helmet bobble head guy hanging from the rear view mirror, left behind by Brian and now festooned with pins from my yoga studio and skeeball league in Austin.<br /><br />I know someday it won't make sense for me to keep this car….but I'm just glad that day is not today.
<br /><br /></div>Wendy Big D's<div dir="ltr" style="text-align: left;" trbidi="on">No, this isn't a post about my breasts, although 2 out of 2 husbands would agree...my chest is blog-worthy. If you're looking for boobie jokes, though, I recommend my friend Kristen's blog. (You might notice her top blog post is, in fact, about having sizeable chest melons.)<br /><br />Of course, my blog is less about dirty, witty humor and more about grief, struggle, and deep emotional issues. You know, the kind of thing that really makes you feel warm and fuzzy inside. So the big D's referred to in this post's title are....drumroll, please....Death and Divorce. Fun, right? I'm sure you can't wait to dive right in!<br /><br />When I was a new widow, I was desperate to find tools to help me cope with my loss and profound grief. I devoured books about grief, I struggled through some intensely painful sessions with a grief counselor (which helped as much as or more than they hurt), I joined an online community for widows and widowers for virtual support, and I went to a handful of grief support group meetings. At these meetings, I struggled to find people like me. Of course, in some ways, the experience of losing a spouse is universal - the loneliness, the loss of a life plan, the questions over things like whether to wear a wedding ring and what to say when people ask about your spouse. Yet, being so young, I was not the typical case. I wanted to find others who would relate to me more closely, so shortly after moving to Austin, I posted a couple times on different websites looking to form a group of young people dealing with grief. Only a couple people replied, not enough to form a group, so that idea fizzled out relatively quickly. One of the responses, though, was from a young woman whose long-time partner had left her, and she was grieving the end of the relationship.
She wanted to join a support group to help her cope with this. I don't honestly remember if I replied to her email or not, given the general lack of interest I found, but I knew I wasn't interested in being in a support group with her. I couldn't bring myself to compare our experiences, or to think that we were going through the same thing. I mean, she was going through a break-up, and I was a widow. She was dealing with life, and I was dealing with death. I certainly felt bad for her, but also was almost offended that she reached out to me. At that point, though, I was really looking for someone who had walked a mile in my shoes, and she hadn't. <br /><br />A few months later, I became good friends with a woman named Heather, who started to read my blog as a way to cope with her sense of loss after going through a divorce. Coincidentally, she moved to Austin to heal, just like I had. She was very careful about choosing her words when she talked about how my blog had helped her, and made a point to say she knew that our experiences didn't really compare. Still, in that first conversation with her, I realized that we were going through some of the same feelings and emotional aftermath as a result of our experiences, different as those might have been.<br /><br />Since then, I've had several friends and family members who've gone through divorces. At first, you'd think that death and divorce are very different experiences. And they are -- especially if the divorce is mutually agreed upon, or if you're talking about the party who wanted the divorce. But for those whose spouses made the unilateral decision that they wanted out of the marriage (or long-term relationship), our experiences are more similar than you'd think. Sadly, I've had several friends in this boat in the last few years -- their worlds, their lives, their futures upended and taken away, sometimes suddenly and sometimes painstakingly over months or years. None of us chose to have our spouses leave us.
None of us wanted to divert our life paths. None of us wanted to be alone. We all had to grieve the loss of our partner and mourn the fact that the future, the life, the plans we had will not happen.<br /><br />That being said, it is NOT a good idea to say to a new widow or widower, "I know how you feel. When I got divorced..." This would not have sat well with me when I was in the depth of my grief, and is not very sensitive to that person who is in so much pain. You'll just look like an asshole, because it's not the same. Tread very carefully when saying you know how someone feels because you went through an entirely different situation (this applies to everything, not just talking to widows and widowers). I used to be quite offended when people who were divorced would compare our experiences. I thought, "That's so different! In a divorce, someone made the choice for that to happen. Neither of us chose for Brian to die. God made that happen to us, not either of us. We were happy." Still, in time I started to see some similarities, and this was in part because I was witness to several unwanted divorces. In each case, it helped that my friends recognized that although we went through some of the same things, our experiences were different. They were all very good at saying, "This is nothing like what you went through, but..." And then I would say something along the lines of, "I know it is different, but I also know some of the feelings that result are the same." <br /><br />In sharing our experiences as friends, we can acknowledge the similarities and differences in our experiences and the feelings we have. I have come to realize that while both are very traumatic and painful, death and divorce present different challenges. <br /><br />If you get divorced, you lack the finality of death. In my case, Brian's body stopped working when he died. Science dictated that he was physically gone. No one and nothing could change that.
From that moment on, the grief began, and then the healing. In a divorce, things aren't so cut and dried. A lot of people end up second-guessing themselves, and sometimes a couple will give things another try, even in the midst of or after the divorce proceedings. There is no metaphysical barrier preventing you from working on the relationship, even if it seems dead. This can delay the realization that a relationship, a life as one knew it, is over. It can keep a person focused on rekindling the relationship and prevent them from mourning its demise. With a living ex-partner, there is also much more room for anger. Of course, it is normal for a grieving widow or widower to have anger -- not just at God, but also at their departed spouse for leaving them (not all emotions are rational, after all) -- but the fact is that for divorcees, this anger is more rooted in reality and can easily be fed by nasty divorce proceedings and ongoing issues between the parties, particularly if they have children together. Simply put, death is more of a clean break than divorce. The bandage gets ripped off, and then you start to heal. With divorce, the bandage is slo-o-wly removed, maybe put back on after you peek at the wound, maybe replaced, before it is eventually taken off. Only then can the recovery begin.<br /><br />Finality of loss is a double-edged sword, however. One of the hardest things to accept in coping with death is knowing that you will never EVER hear that person's voice or laugh again, that they are truly gone from your life on this earth. That is a hard realization, and one that you never have to embrace if you're mourning the loss of a relationship and not the loss of your spouse's life. It is hard to wrap your mind around the idea that this person you loved and spent your life with does not exist anymore and is gone from this world. <br /><br />Related to that is the fact that death will almost invariably cause you to examine your spiritual beliefs. 
When someone you love is gone, you wonder where they are, if they are with you, whether they are in a better place, etc. You might question everything you've ever been taught, you might be sick with anxiety over the soul of the departed, you might find faith anew in signs from beyond. Whatever your experience, death takes you down this journey whether you intended to think about such things or not. You can remain blissfully ignorant or choose to not worry about such things if you're divorced, because you don't have that feeling of responsibility for or a vested interest in the soul of someone who has left the physical world. <br /><br />Another difference is in the way death and divorce are treated by the rest of the world. Divorce carries a stigma and shame, while being a widow or widower causes people to bestow a strange mixture of pity and admiration on you. I was praised so much for being so "brave" and "strong," yet I don't see how I've done anything praiseworthy. Bravery is choosing to face daunting odds -- running into a burning house to save the children inside, rescuing a dog who's fallen through the ice into freezing cold water, etc. I just lived the life I was given; I'm no hero. I simply did what I had to do. What else could I do? On the other hand, the rules about how to move on are clearer for divorced folks. It's assumed that you'll date again and go on with life. You probably won't cry on your new partner's shoulder when an ex-husband's birthday rolls around, but you very well might do that on your late spouse's birthday. By the same token, only one of those is socially acceptable, so at least a widow can continue to grieve and heal while forging a new relationship. It might be that dating divorcees feel more pressure to keep their residual pain and emotional hang-ups hidden from new partners.<br /><br />I could go on and on about these losses, how they are similar and how they are different.
I will say that there are many similarities in how someone who chooses neither reacts when life hands them one of these anyway. I have had conversations with other widows and with divorcees about our feelings of loss, about how to cope with being suddenly alone, about having to grieve the futures we thought we were going to have, about how to re-enter the dating world as adults who never thought we'd be there again, etc. I think my experience has given me an insight on what my friends were going through, no matter the reason they were there. Although our experiences were different, some of our feelings were the same. We are all trying to walk the path of recovery, healing, and finding happiness again. In doing so, we have strengthened our resolve, our friendships, our emotional intelligence and our ability to support each other in hard times. What hasn't killed us has made us stronger. <br /><br />It's easy to get caught up in our differences, but sometimes it turns out that our similarities are stronger than they appear. Rather than worry about who has had it worse or whose pain was greater (How does one quantify that anyway? And why would you want to?), I have come to see that the path I've walked has made me a more empathic, compassionate person and I can relate to people a lot more than I could before. Having walked with pain and grief, I know what it is like and I know that, regardless of the source of one's woes, you can come out stronger and better for it. <br /><br />How have you dismissed someone's pain and hurt because you think you had it worse? Is it possible your experiences are more similar than you care to admit? If you focus on how people are feeling rather than the outward cause of their pain, you'll come to find that heartache and loss are the same for everyone. 
Sharing feelings doesn't have to be a competitive game of who has it the worst; instead, it should be about drawing on your own experiences to help you be compassionate and understanding toward others who are hurting. <br /><br /></div>Wendy to Popular Opinion<div dir="ltr" style="text-align: left;" trbidi="on">I've been struggling with some thoughts or experiences I've been having lately, and I think this is part of the reason I haven't been blogging as much (that, and planning a wedding takes a lot of time!). The reality is that I'm no longer struggling with how to manage my grief, but how to live my life and move forward. It's a different phase of widowhood, and in some ways, it's hard to acknowledge these experiences and feelings. <br /><br />I've worried about how or whether to share everything. I think about what other people will say or think - especially Brian's friends and relatives. Still, I started writing as a way to not only process my emotions and experiences, but also to share my journey with others who are in my shoes, to let them know that what they are feeling is normal. I feel like I'd be disingenuous if I didn't share some of these things that have been rolling around my head, things I've been afraid to write about for fear of being seen as a bad person or a less-than-admirable widow. I've wrestled with these fears and with the thought that I want to be sensitive to others who grieve Brian's death, but I've decided that it's time to share more about my journey now, in the interest of full disclosure. I know there are other widows and widowers who read this blog and who, like myself, are years out from their loss and who are navigating life and love in a different way than they were when the loss was fresh. I have to keep reporting from the field for them as well as for myself, so here goes....<br /><br />Sometimes I will go an entire day, or more, without thinking about Brian.
He is forever embedded in my soul and in that way, he is with me every moment of every day. That being said, I don't necessarily miss him or talk to him every single day. People tend to say things like, "Not a day goes by that I don't miss him and think about him." That's not the case for me, and I'm sure there are a lot of people in my shoes who would agree. Don't get me wrong -- I think about him a lot, and talk about him freely and frequently. But it's more in the manner of telling stories about an old friend and recalling memories than it is me mourning his death or longing to see him again. There is no joy or purpose in doing that, but telling stories keeps his memory alive and makes him a part of my life on an ongoing basis. In my mind, that is a better way to treat his memory and a healthier thing for me to do. It's also what he would have wanted. When he died, I had to mourn the loss of what would not come to be, and one part of that was crying over the fact that we wouldn't grow old with his friends (Hart in particular), telling the same stories of our silly youthful antics that we had already told and re-told a hundred times. I thought we'd all be old fogies together, telling those same tales. I realize now that the stories will live on, but now it will be Hart and I telling them to Sheldon.<br /><br />Another thing that I never thought would happen is that my memories of Brian are fading somewhat. There have been a few times when I think about a memory of my past and I can't remember if it happened with Brian or with Sheldon, or whether Brian was still alive when a certain thing happened or if it was after he died. For a long time, everything was starkly divided into two segments of my life: before Brian died, and after. Now, the line isn't as sharp. The other day, Sheldon asked me if Brian had liked a particular food as we cooked dinner together. I honestly wasn't sure. 
I no longer have every preference, every memory, every quirk of his embedded into the surface of my brain and at the top of my mind. <br /><br />It's weird to admit these things or acknowledge them, but they are part of the inevitable process of time moving forward, my brain getting more crowded, and the significance of the little details fading as the rest of my life unfolds. I don't remember if he liked bell peppers because it really isn't that important. I know I'll remember and cherish the most important things, but I'm finally able to see what is and isn't important. I think when someone dies, you put them on a pedestal for a while and everything connected to them takes on more importance, more than it even did when they were alive -- that's why I struggled to throw away his pomade and toothbrush, when they were things that would have made their way to the trash can without a second thought when he was alive and they were all used up. Now that the dust has settled a bit, things have fallen back into their natural order a bit more.<br /><br />Whether I think of Brian consciously or not, whether I remember the small details or not, he is always in my heart and has irreversibly guided the course of my life, from the city I chose to live in to my selection of a new life partner. I don't have to pretend to conform to certain expectations or ideals of what widowhood is to honor him, and I'm not going to anymore. Brian valued truth, and it's time for me to share some of the less romantic realities of what my life is now. This is part of my rebuilding.</div>Wendy, Widowed &</div><div style="text-align: left;">It's official - Sheldon and I are married! Okay, it's actually been official for over three months now, but I had to take some time to reflect on everything and decide what to share.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">We had a pretty short engagement -- about six months.
We wanted to get married as soon as we could because we saw no reason to wait -- we are in our thirties, we've been living together for a couple years, and we knew we were ready.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">One thing that made me so sure about Sheldon - although anyone who would meet him would understand why I love him - was the fact that I had been in love and married before. I knew what work went into running a household and into tending a marriage, and I knew we could do that well together. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">It felt a little weird to be planning a second wedding (though it was his first) -- I felt a little bashful or ashamed of the attention that is showered on brides-to-be. I didn't want an engagement party or bridal shower, although I did acquiesce when some friends wanted to throw us an engagement party (and I am glad I did). We did have bachelor and bachelorette parties, but nothing too crazy. Sheldon and some of his buds went to the beach for a weekend of fishing and golfing. I had a girls' weekend in at the house, making wedding decorations Friday night and wine-tasting on Saturday with my friends in Texas and our mothers. Sheldon drove the van we rented for the occasion. I had a lot of fun planning the wedding, particularly with the encouragement of my good friend Gabby, who was an enthusiastic personal attendant/co-planner, with a hot glue gun burn on her arm to show for it (oddly enough, it matches one I got the same night as we made centerpieces around my kitchen table together).
Still, I have to admit that I felt a little weird inviting people to my second wedding in a decade's time -- I was afraid to infringe on the lives of my family and friends by asking them to commit to another weekend of wedding activities on my behalf. I was ambivalent about having a gift registry, but in the end realized people would bring gifts anyway and we picked out a few things we could use or that needed to be replaced. We also picked a couple charities for people to donate to in lieu of gifts, one of them being the animal shelter where Brian had volunteered.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">We ended up having a fantastic wedding - and I have to say, I think part of that was also because I'd planned one before. It's funny - brides expect or feel pressured to create a "perfect day" on their first attempt at pulling off such an event! At least this time around, I knew what was important and what wasn't. I had consciously vowed to be more calm and to not worry so much about the details. I knew from having gone through it before that it doesn't matter if there are personalized napkins, or if the white of the cake doesn't match the shade of the dress, etc. It's about love, celebration, and the union of two lives into one family. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">That doesn't mean I didn't pay attention to the details though...we put in a new mantel and repainted the fireplace in anticipation of the reception, and Sheldon was very detail-oriented in getting the yard to look perfect. We hung white string and globe lights in the backyard, I acquired tablecloths and runners on Craigslist, I oversaw the making of centerpieces (painted bottle vases), yard lanterns and hanging lanterns, and the list goes on.
Instead of a guest book, my mom made a fingerprint tree -- she drew a tree and guests put green "leaves" on with an inkpad and their fingers or thumbs and signed next to those. We did the flowers and food ourselves, with me making up the gin lemonade the morning of the event. We had a port-a-potty brought in for outside, and the two bathrooms inside had flowers and baskets of toiletries and the like. We cleared out two rooms of our house to turn them into the buffet room and the coffee lounge. There was a dance floor and a photo booth. We had a bartender who served beer, wine, gin lemonade, old fashioneds, manhattans, and a fine selection of whiskeys and mixers, with cigars to go along. Oh, boy, were there details....</div><div style="text-align: left;"><br /></div><div style="text-align: left;">I struggled, too, with how to behave as a widow planning a wedding. Should I pay some tribute to Brian, such as a mention of him in the ceremony or flowers at the altar in his memory? I was afraid of insulting his memory if I ignored him, but afraid of drawing attention away from Sheldon and my union if I did. I worried about what people would think either way -- if I did honor him, or if I didn't. In the end, I decided that rather than a formal tribute or token mention of him in a few written or spoken words, I'd let his influence shape the day organically. Some of the musical selections were songs or artists he had liked, or that he had introduced to me. There was a photo from our wedding in the DVD slideshow of our lives that Sheldon and I played at the reception. Several members of his family were there, and many more friends who came into my life through him. My one big way of honoring him was more private - I found an antique locket for my "something old" and inserted photos of Brian and me on our wedding day in 2004; the locket was tied to my bouquet. In the end, I didn't feel the need to draw attention to him, but I also didn't feel the need to exclude him. 
I do feel that he was there with us.</div><div style="text-align: left;">Aside from the fine line I walked trying to plan a wedding celebration appropriately as a widow, there were the inner thoughts and feelings about what a marriage is, what it really meant to be traveling this road again, but with a new partner. I thought about what the vows mean, what a marriage is. I know Sheldon will be there in good times and in bad, because he has been a rock through some of the worst times of my life. I thought about how much more I understood the gravity of the promises we were making now as opposed to the first time, when I was so much younger and didn't really know what we were getting into. I thought about the fact that I can't just call Brian "my husband" anymore, because that title belongs to Sheldon now. I cried about that and struggled to figure out new terminology. (I alternate between "my late husband," "my first husband," and "Brian" depending on the context.) I wondered how Brian felt about all this, and sought some guidance to explore and handle these thoughts. I wondered how Brian's young nieces were interpreting all these events, and how I might be perceived by others. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Worse, I thought about the fact that the unthinkable could happen again, and I had a nightmare about it just the other night. But I realized that not getting married wouldn't change that risk -- just by loving him and sharing my life with him, I risk the pain of losing him, but I have chosen to be with him anyway because I couldn't go through life afraid to live just to avoid pain. I chose to go out on a limb and love again.
I thought of the Garth Brooks song "The Dance," which was played on the DVD tribute to Brian at his funeral. The chorus is:</div><div style="text-align: left;"><br /></div><div style="text-align: left;"><i>Now I'm glad I didn't know</i></div><div style="text-align: left;"><i>The way it all would end,</i></div><div style="text-align: left;"><i>The way it all would go.</i></div><div style="text-align: left;"><i>Our lives are better left to chance.</i></div><div style="text-align: left;"><i>I could have missed the pain,</i></div><div style="text-align: left;"><i>But I'd have had to miss the dance.</i></div><div style="text-align: left;"><i><br /></i></div><div style="text-align: left;">I knew that I had to keep dancing. So we rented a dance floor.</div></div>Wendy Wedded Widow<div dir="ltr" style="text-align: left;" trbidi="on">It's been interesting to be in my shoes, planning a wedding. Lots of things come up that wouldn't be an issue for a "normal" bride. Some random thoughts on that below.<br /><br />Am I going to change my name? Yes. I'll be taking Sheldon's last name, and will be making Brian's last name my new middle name. I plan to keep using "Boka" professionally, though, as all my professional references, degrees, certifications, publications, appearances, etc. are under that name. I don't want to have anyone who is called as a reference to say, "Who?!"
because they don't know me with my new name.<br /><br />Weddings bring out opinions from everyone. About everything. Who should or shouldn't be on the guest list. Who should walk me down the aisle. How many chairs should be on each side of the aisle. What everyone should wear. Whether we should have a cake or not. <br /><br />In all these opinions, what I really need to focus on is making the best decisions for Sheldon and me. Still, these decisions are not easy. On top of the normal 1,001 decisions to make, there is this: What is the best way to honor Brian's memory and role in my life, while not taking away from what the day is, which is a celebration of the love between me and Sheldon? <br /><br />And of course, wedding guests love to critique. At the corners of my mind is the question of what people will say about the ways I do (or don't) pay tribute to Brian. I don't think I'm going to share the details of how I plan to do so before the wedding, or in any written form in the program or anything. Those who are close enough to me will know from having talked to me how I will be honoring him that day. Those who were close to Brian may recognize his footprint on the day in some of the details, and those who weren't close to him don't need to be specifically told which details can be traced back to him, because they will all fit together beautifully whether you know the back story or not. <br /><br />So what's my wedding going to be like? It's less formal than many. A ceremony in the park and a reception at our house, in the backyard. There will be a dance floor and bar, and buffet-style bar-be-que. I'm buying my flowers from a grocery store and making bouquets and boutonnieres with help from family and friends. We are making most of our own food, with help from a neighbor. We'll have a friend of a friend as the bartender, another friend as DJ, and a friend is officiating the ceremony.
We are writing our own vows, which will certainly be shaped by what I've gone through before (because that has shaped my ability to love, and it set the stage for our love). We are making our own decorations, a process that has been in the works for a few months and is really coming together well now. My dress is white, and I bought it from a traditional bridal store, but it is not full length and fou-fou-y. The guys are wearing khakis and white shirts. No monogrammed napkins, no ice sculpture, no lighted waterfall on the cake (until recently, we were thinking no cake at all).<br /><br />Most of all? Love. Lots of love.<br /><br />I can't wait for the wedding day to arrive!<br /><br /></div>Wendy<div dir="ltr" style="text-align: left;" trbidi="on">Lately I've felt pulled in a million directions. Am I doing all I need to be, all I can be doing, all I <em>should</em> be as a real estate agent, a lawyer, a fiancée, a friend, a sister, a daughter, a daughter-in-law, a soon-to-be daughter-in-law, etc.? How about as ME? As Wendy? Am I taking care of myself? <br /><br />I've been having trouble sleeping and have had anxiety lately. Not panic attacks, but heartburn, loss of appetite, tearful spells, and insomnia. It seems that my to-do list grows and grows, even as I feebly cross things off. And every time I turn around, I get another text, email or phone call -- and more often than not, it's someone needing something else. Last night I went to bed at half past midnight (had to stay up to see the exciting double-overtime Spurs playoff win!) and after lying in bed awake until 3-something, finally got up and started doing things in the middle of the night/morning. I started making a to-do list, emailing, and organizing for today. Then I went to the couch to watch TV until I dozed off about 4 or 5...only to wake up a few hours later. <br /><br />One thing's for sure....I've neglected writing.
I've neglected self-reflection and me time.<br /><br />But enough about me -- I feel like I'm letting people down all the time. I know there are people waiting on me for things -- favors they've asked of me and that I agreed to do, cards I've been meaning to write, gifts I've been meaning to order and mail for various occasions, a guest room that remains unsuitable for the family visiting in a couple days, a stool sample I need to collect and get to the vet (nothing wrong - just routine testing), etc. How do I get all these things done, and fulfill my social obligations to all my family and friends, while working and planning a wedding? I feel like there are a lot of balls in the air, and it seems more keep getting thrown in my direction.<br /><br />I'm trying to find some balance again....but please, bear with me while I tread water for a while. Soon enough, I'll find my way back to a place where I can stand comfortably with my head above the water.</div>Wendy Much to Say, So Much to Say...<div dir="ltr" style="text-align: left;" trbidi="on">But, most importantly, this:<br /><br />Sheldon and I are engaged!<br /><br />I am one happy woman.<br /><br />There's been a lot of stuff happening that I haven't written about yet or that I am working on writing about -- the proposal, the holidays, a death in the family, the IRS, dental work, running. So I kind of stopped writing for a while because I wasn't keeping up with things as they were happening -- but I thought I'd just do a quick blurb to the blogworld to say this:<br /><br />I am a widow, and I am happy. I love my life. <br /><br />I still cry about Brian's death, I still talk to Brian at times, and I will always talk <em>about</em> Brian. Sometimes certain situations are made harder because of my grief or because of things that go along with being a widow.
But I've also learned and grown so much in the past three years, and I believe I am a better person now than I was when Brian was alive, when I was sort of blissfully ignorant of the realities of life and death. I have a greater capacity for love and compassion than I once did, and a greater appreciation for happiness and life. I am in a better place now than I ever have been.<br /><br />Let me be clear -- it is not as if getting engaged has "fixed" me or taken away my grief. Does it make me happy to be engaged to Sheldon? Yes, more than I can say. Does it put me in a better place than I was? Yes, because I love him and my life is better with him than it was without him. He is a wonderful person and I am lucky to have him as my partner. Does it mean my grieving is over? No. That will be a part of me forever. Does this mean I will no longer think about Brian, talk about him, talk about my loss, think of myself as a widow? Of course not. It's just that now, I will be a widow and a wife. And I am happy.<br /><br />We are getting married in a few months. I'm sure I'll be blogging a lot about the upcoming wedding, my feelings and emotions that this brings up, the practical questions for a soon-to-be wedded widow, etc. on top of a ton of other things I've not yet covered. For now, a blog icebreaker was in store to announce our happy news.<br /><br />I am in love and I am happy!! This year marks a new beginning for me, a new chapter in my life, and I look forward to writing the rest of my story.<br /></div>Wendy Years and Counting...<div dir="ltr" style="text-align: left;" trbidi="on">Today marks three years since Brian's passing. The day was so awful, so painful, so surreal. It will forever be burned into my being.<br /><br />Still, more than that day, I will always remember him.
Here is what I posted on Facebook today:<br /><br />"It's hard to believe three years have passed since we've heard you laugh, seen you smiling and playing air guitar, or felt your arms around us. I miss your voice, your zest for life, your common sense and quick wit, your musical stylings, the way my head fit in that space in your chest, and laughing until we cried. I thank you for loving me, for sharing your life with me, and for making me a better person. Brian Steven Boka, 12/16/78 - 1/17/10....but with us always."<br /><br />"Those of you who knew Brian should do something fun and indulgent to remember him today. Those of you who didn't....well, he'd want you to do the same! Honor him by enjoying whiskey, wine, music, your favorite foods, playing Rock Band, spending time with friends, board games, cuddling with pets, watching silly YouTube videos, playing some vinyl, or having great sex."<br /><br />I am having trouble posting pictures for some reason, but I also put up several pics on FB that were some of my favorites, that captured his spirit and joy.<br /><br />More on this day, and lots of other stuff, to come soon....<br /><br />In the meantime, enjoy life! Brian did that as much and as well as anyone you'd ever hope to meet. I plan to honor him by continuing to do just that, along with a bottle of one of his favorite wines.</div>Wendy Life of Brian<div dir="ltr" style="text-align: left;" trbidi="on">[Photo: Tending bar at the Boka Bear Den]<br /><br />Brian would have been 34 today. I have no question about what we'd be doing for his birthday if he were still alive -- getting together with friends for the Bears/Packers game. He was the biggest Chicago Bears fan you'd ever meet. The only question is whether he'd want to go out to a sports bar or have people over to watch in our living room and the "Boka Bear Den" Bears bar we had in the basement. Probably the latter, so we could share victory boots of keg beer from the Bears keg-o-rater, and possibly make up some kind of blue or orange shots.<br /><br />[Photo: Shots, anyone? (Mexico, 2008)]<br /><br />That was Brian. He was the life of any party, and particularly enjoyed getting quiet people out of their shells. He was big, loud, funny and smart. He had a soft side, though, and would cuddle our kitties and speak baby-talk to them. I'll never forget the day he was petting Ellie on his lap and said, "Oh, you're Daddy's little purr factory." By the same token, there was a time he referred to someone (not to their face) as a "monkey sack of shit." He had a knack for stringing together obscenities and insults into hilarious and oft-repeated catch phrases. He was very smart and had a large vocabulary. More than anything, his intelligence came in the form of common sense. He could look at any problem or situation and analyze it quickly, and simply, in a way that would make you think, "Wow, it really is that simple."
He would tell you what he thought whether you liked it or not, and whether it was what you wanted to hear or not.<br /><br />[Photo: Life was always more fun with Brian around]<br /><br />[Photo: a fundraising gala]<br /><br />Brian was honest, sometimes almost to a fault. He didn't pull any punches when it came to speaking his mind, and one of his less endearing traits was that he didn't care who he offended with what he said; he valued honesty over feelings. Still, he had a great way of using his strengths to bring out the best in people and situations. He fared quite well in business and quickly moved into positions of leadership and authority. He was a wonderful manager -- he had the ability to improve an organization at its lower levels by bringing out the most in his employees, while he also had a talent for thinking big-picture and improving a company by making sure departments worked together and sharing ideas for change and development in planning meetings with higher-ups. He took great pride in helping poor performers on his team turn things around, in mentoring team members to prepare them for promotions and career growth, and in being a leader.<br /><br />[Photo: The college days]<br /><br />He was a rock star at ING, where he took on more special projects than any other manager and managed to excel at each one. He had been identified as a top talent there, becoming part of a very small group (consisting of 1-3% of employees) who were being groomed for higher management and who would be sent to training and leadership camps around the country. I would have loved to have seen what he could have done for that company and for himself. He was savvy enough to convert his bosses' praises into compensation, and always fared well at review time. He was, simply, a business genius.
He wasn't afraid to ask why things were or weren't done a certain way, wasn't afraid to suggest new things, wasn't afraid to address any elephant in any room, and wasn't afraid to negotiate for the biggest raises and bonuses possible. After he died, people who had worked for him at other places, many years ago, came to pay tribute, and so many people said wonderful things about him. In a world where many people dislike their bosses, he had raving fans.

[Photo: With his best friend]

Brian had a unique voice. He didn't like it, but I loved it. I miss hearing it, though it's in my memories clear as day. He did some radio work in college, where he was a communications major. He had a radio show for a semester or two on the campus radio station, and he and his freshman year roommate (who went on to spend a few years in radio for a career) did play-by-play announcing of basketball games for the Simpson Storm. He also did an internship at the Muscatine radio station, impressing everyone he worked with there. He loved sports and also did an internship with a sports newspaper in central Iowa. This came easily to him, as he had written for the Simpsonian, the college newspaper of our liberal arts school.

Brian loved friends, family, food, fun, and life. Yet he was picky, and had funny tastes. He disliked fruit in general, and despised berries. I remember a big fight we had once when he wouldn't take even one bite of a strawberry-rhubarb cake I'd made (that was a labor-intensive dessert, I might add). He was stubborn; what can I say? He hated topiaries, barn quilts, and doilies. He never shied away from telling me if he didn't like an article of clothing or accessory I picked out, either. He liked my hair best when it was long and I didn't have bangs.

[Photo: Picaboo Whiskers Boka, our firstborn :)]

Brian loved animals. He doted on Princess, the dog his family had when I met him, and cried when she died.
Then, he doted on Murphy, the dog his parents never planned to have but couldn't resist when he showed up one day. He was a wonderful "pet parent" to Picaboo and Ellie, and loved having a kitty on his lap. He was on the board of directors of a no-kill animal shelter and enjoyed volunteering there as well, with events (dog washing and silent auction fundraisers, for example) and with animal socialization, such as when he worked with a puppy to get him through obedience classes to make him more adoptable.

[Photo: With our nieces, Lily and Lauren]

I wonder if we would have had children...if so, I wouldn't be surprised if they had names that harkened back to the Bears -- he always wanted a cat named Walter, but we ended up only having girls. He was great with kids, a gentle giant, though a bit unsure with babies. He was afraid of "breaking" them and felt awkward being such a big guy holding someone so tiny. He didn't know much about babies, either. When our first niece, Lily, was born, we went to see her in the hospital the day after she was born. He was holding her and commented, "Look - she already has fingernails!" I guess he thought those sometimes grew in later, like hair or teeth. I laughed until I cried....something Brian did a lot. Once they were kids, he did much better and could relate to them in a special way. I always was amazed at how well he could connect to my sister and cousins -- he did that with kids and adults, with ease.

Brian could read people like a book. He had an uncanny gut feeling about people that would prove, time and time again, to be spot on. If he warned me that someone was sleazy, they usually proved to be just that. He also had an ability to talk to anyone, to get people to open up, and to get people to leave their comfort zones.

[Photo: He was never afraid to jump in and sing along...]

Brian was passionate about music.
He was not a great musician, though he played trombone in the high school band and started playing guitar in college. I loved when he would play guitar and sing, even if he wasn't the most advanced player or the best singer out there. If he was looking at music online, I'd sit on the floor of the home office while he played. Sometimes, he'd sit on the corner of the bed while I lay there and listened. He had a three-ring binder and dozens of pages of loose music in his guitar case. Sometimes, he'd take his guitar and a bottle of red wine to the basement to relax. Other times, it was a book and a bottle, and he'd play his records down there.

[Photo: With his good friends at a Reckless Kelly show (Kansas City)]

Brian had a great collection of vinyl. It all started when my boss offered me a stereo system that he was going to get rid of -- we needed a sound system for the basement bar, so I took him up on it. One of the components of the 1990 Onkyo setup was a record player that had never even been out of the box. I declined that, but took the rest home. Once home, Brian and I were unloading components and getting the system set up when I told him of the LP player that also went with the setup. He said, "Why didn't you take that?!" I said, "Because we don't own any records." ...."So?" I had to call my boss that night to make sure he didn't throw away the record player, and thus began a record collection. It was mostly old rock and folk style music that he liked -- Gordon Lightfoot, Joan Baez, Alabama, CSNY, etc. I thank him for introducing me to the likes of James Taylor, Bob Schneider, Jakob Dylan, and the Avett Brothers. But he hated Anne Murray. He loved watching music DVDs, just hanging out with friends and acting like there was a concert in our living room. He spent hundreds of hours with his best friend, Mike Hart, doing this. They would watch Jim Croce, Harry Chapin, The Band, and Crash Test Dummies.
He could tell you all about what the songs meant, why they were written, and how the musicians got their starts. He dug deep into music, too -- he knew the real talents and had found the real gems produced by one-hit pop bands like the Dummies or Marcy Playground. When he loved a song or an artist, he would play a CD or just one song on repeat over and over again. I didn't mind that when he discovered Mason Jennings, but I never really liked Warren Zevon. Funny how I don't mind Zevon now.

[Photo: With his brother, in Jamaica]

Brian loved to travel, and always picked beach locations. He loved snorkeling and was a good swimmer, having spent many years as a lifeguard (and eventually manager) at the Weed Park pool in Muscatine. I spent one summer baby-sitting while he was a guard and I'd bring the kids to the pool as often as possible. When it was his break time, he'd jump in and swim and play with me and the kids, teaching the little boys wrestling moves (I remember especially the "European upper-cut"). He loved MMA fighting, and I loved being his guinea pig while he learned and practiced moves -- he could really get me with a figure-four leglock if I didn't pull off a triangle choke first (usually he'd have to sort of let me have that). Another summer, when he was managing the pool and I was working at Applebee's, I would bring him Taco Bell after my shift was over. I'd eat Long John Silver's.

[Photo: Post-tequila-shot picture!]

Brian loved having parties, and our house in Iowa was perfect for that. It had a big kitchen overlooking the living room. There was a formal dining room that was most often used for board game nights and Wine Club gatherings. Downstairs from the living room was the sports bar, complete with a keg and shelf upon shelf of liquor. There was a bar and stools, as well as a high-top bar table that converted to a poker table.
We had a tv mounted in the corner and the walls were covered in Bears memorabilia -- banners, flags, signed photographs, mounted cards (the Walter Payton rookie card being the prize among them), and posters. We had a nicely sized fenced-in yard, perfect for bags and bocce. We never did get the hot tub running though -- something that I predicted when we took it (one of those "free" but broken situations). All we did was spend a few hundred dollars for a new cover, and never did get the broken parts replaced. Sometimes home projects slipped to the bottom of the list, behind a 50-60 hour workweek and at least twice-weekly socializing. We had a great social life -- parties and nights out all the time. Even our wedding was planned around one idea -- it had to be the biggest party reception we could arrange, with late-night snacks to stem the tide of alcohol, and a DJ that played til 2:00 am (including some last-minute "Chug-A-Lug" karaoke performed by the groom himself...and no, the DJ did not even have a karaoke machine; Brian just asked him for the mic). We loved hosting the "Boka and Friends" fantasy football draft at our dining room table every year. He was commissioner of the league, which has been going strong for 10 years now, always with a waiting list of guys wanting to join.

[Photo: The league]

I'm proud to say this one-time Vanna-esque sticker girl/beer bitch is now the champion of the league and that the Brian Boka trophy is on my mantel. A fantasy championship in this league is something he never attained, though he would usually have more points than anyone else -- it's just that every week, his opponent would pull off a miracle win by having players with career and season-high stats against Brian. He loved the frustration of it, though, and all the trash-talk among the league.
I try to keep that tradition alive as well, even though this year I crashed and burned fantastically (thanks, Cam Newton).

[Photo: With his best friend Mike]

It's no surprise that people stayed in his fantasy football league -- if you were a friend of Brian Boka, you would stay that way. He was fiercely loyal to his friends, and that sentiment went both ways. When he made a friend, it stuck. He had the same best friend for 25 years -- he and Hart became buds in first grade, and I have no doubt they would have been best friends for 50 years if fate had allowed it. They were more like brothers. He was also very close with other childhood and high school friends, as well as college and work friends. He had many close friends from every phase of his life. Many people would name him as their best friend, and it was not uncommon for him to be a sort of big brother and friend to anyone who needed an ear to talk to, a shoulder to cry on, or just someone to have a beer with. As fun-loving as he was, he was there for the serious stuff too, and was often the one people would turn to in times of crisis or struggle. I miss his advice, his insights...

[Photo: Hart's 30th birthday -- sometimes they'd kind of dress alike unwittingly]

Words that describe Brian are funny, loud, big, lovable, honest, fun-loving, stubborn, smart, intuitive, tender, perverted, extroverted, indulgent, insecure, crass, leader, pragmatic, inspirational, goofy, and true to himself. He was someone who would push boundaries and was boisterous and provocative enough to almost get in trouble, but instead would always get away with it.
He was a rambunctious little boy, always getting into fights with his brother (that continued into high school, when he worked for his brother for a while) and finding ways to avoid trouble while doing things that should have gotten him some.

[Photo: With his friends]

I can't sum up a person and his entire life with one blog entry, and I'm sure I'm leaving out hundreds of words and anecdotes that would more fully paint the picture of Brian Boka. He was true to himself and sought to bring out the best in everyone around him. He forever changed me, made me a better person, and shaped me into the football-loving, music-appreciating, dirty-joke-telling woman I am today.

I miss you, Brian Steven Boka. I love you. Thank you for sharing your life with me.

[Photo: Our wedding day -- 06.19]
[Photo: 5 year anniversary party at our home in Iowa]

Wendy

What's In a Name?

I have never slipped and called Sheldon "Brian," though lately, I've done it in my head a few times. I'll go to say something to him and I have to mentally make sure that "Brian" doesn't come out of my mouth. I'm sure part of that is that we have a friend named Brian who spends a lot of time at our house and, indeed, I find I'm more likely to feel the potential for a slip of the tongue when he is around. It's weird -- I don't think Sheldon would get mad if that came out, and I would say it's actually a compliment. I think I feel the possibility of this happening because the comfort level between us is so great, and because our life together is so "old hat." Our year-plus of living together has gotten to the point that it feels totally natural, and like something that has just always been the case. He is very different from Brian, and our relationship is quite different too, but the familiar feeling is the same.
I think feeling that way has caused my brain to slip back into its habits from when I used to feel that way. Much as Brian and I used to have our routines, so do Sheldon and I. Just as Brian and I used to have inside jokes, and could communicate our thoughts with a split-second glance, so do Sheldon and I. We are best friends, we share our thoughts and fears, our judgments and opinions.

There is a level of comfort, a level of love, a level of "home" that you feel with a partner, and that's where we are. It's something that takes time to develop, to unfold. It happens while you brush your teeth together, while you fall asleep intertwined with someone over and over, while you start sharing the same language and meals, while you let your guard down more and more until you don't care if that person catches you picking your nose or hears you singing in the shower. It is when you stop feeling like people who live together (with cats) and start feeling like one family unit. This is where we are, and in some ways it takes me back to when I had that before with Brian. The details of our lives are different, but the feeling is the same. And sometimes it makes me go on "autopilot" and almost makes me say the old familiar name I spent so many years saying in that context.

I also worry that when I'm older, if I end up suffering from some affliction that confuses my senses, that I will resort back to that name, or that it will come out from time to time. What if I'm an old woman asking for my husband, Brian? How will that make Sheldon feel? I know this -- it wouldn't mean I love him any less, just as my close calls now don't mean that. In a way, it's like a mother who calls her children by the wrong names (we've all heard some version of this), running through them until she reaches the appropriate name. It doesn't speak at all to who she loves more (as though there is any competition of the sort!)
-- it means she views them similarly, and that talking to one is like talking to another. She gets the same warm, loving feeling from all her kids, and sometimes the brain and tongue don't work well together to get the correct names and words out. I feel the same way about Brian and Sheldon.

The name "Brian" is wrapped in love and comfort for me, and stands for a partner and friend. Sheldon makes me feel all those things, too, and that's where my brain and tongue sometimes get tied up.

Wendy

Long Holiday Road

"It's the best of times, and the worst of times." -- My therapist, today

Today was a tearful session. My therapist told me that she's very busy this time of year, and that it's the "best of times" for her because it was the "worst of times" for everyone else. That sounds sick, but I swear the comment was funny because it was delivered with an awareness of how dark it sounded. Okay, so I'm struggling right now, but I'm not alone.

These months, and this year, bring a lot to the table. Last month, the day went by that marked 10 years from when Brian proposed to me. I never would have imagined that my life would have taken a full circle of a detour and put me back into the same place -- cohabitating with a boyfriend, childless, and working on a path to my intended legal career. But alas, what I thought was a path to one destination was really a much longer and more winding road to somewhere else entirely. (I'm happy to say that, while it wasn't where I thought I'd be, I quite like where I've landed.)

There are the holidays themselves, of course, too. Christmas, New Year's Eve. There are the normal feelings of wanting to see everyone and balance time between immediate and extended families, though my situation is somewhat unique in that "family" includes three sets -- mine, my late husband's, and my boyfriend's.
Sheldon will be meeting Brian's extended family -- grandparents, aunts and uncles, etc. this year. I have no worries about families blending, though, and we are fortunate enough to be able to take off enough time to see everyone, so while things may be hectic, at least we will be able to join all the celebrations and enjoy the company of so many that we love. The only downside is that I'm already having anxiety about gift-giving, packing and shipping. I'm going to have to face this soon, and just start making lists and tackling tasks.

Again, not so bad. Brian and I used to spend weeks criss-crossing Iowa to see everyone and attend every family celebration, company party, and friendly holiday cocktail party; it could be exhausting, but I always tried to remind him that, if our biggest complaint was that we had too many holiday events, it was simply a reflection of how much love we had in our lives. I can't imagine what he'd think of me adding another family and another entire state to the equation, and doing it all from a distance to boot!

No, there's more. Brian would have been 34 this month. I will be turning 32, one year older than he ever got to be. This will be the first time I have a birthday that he didn't get.

January doesn't get too much easier. We started dating in early January (1996). He died in mid-January. It will be three years this year. It was two years ago this month that I said good-bye to our home in Iowa and packed up to move to Texas.

This is a lot to get through in the next five or six weeks. It's going to be a long holiday road. Of course, I guess there's more scenery to enjoy on the longer roads, and the destination can be unexpectedly fantastic.

Wendy

Good Shtuff

Read the most recent couple of posts. Good stuff.
I have wise and wonderful friends.

Wendy

Am I Losing My Mind? Am I Going Backwards in Time?

I had a tough day yesterday. It all started when Brian died. Let me explain...

I have been trying to get licensed to practice law in Texas. The ultimate plan is for me to do some freelance legal work (writing appeal briefs -- something I relish and most lawyers loathe) and continue doing real estate as a paying hobby, working with family, friends, and referrals. I love real estate, but I'm not crazy about the prospecting aspect of it -- calling FSBOs (For Sale By Owners) to try to convince them to list with me, hosting open houses for other agents to try to scoop up unrepresented buyers, setting up booths at trade shows, managing email databases of prospects, etc. Some agents do cold calls, door knocking, and even go to malls to approach strangers with business cards. Not my cup of tea. I love helping people find the right home, helping them negotiate a fair price, helping them through all the steps of the transaction, explaining all the contracts and documents, etc. I don't love competing with other agents for listings, trying to convince people why I'm the best agent to use, soliciting business from strangers, etc. It is hard to make a living as an agent without doing all those things, but it's not hard to be a great agent who is fully dedicated to a small number of clients and who makes a little bit of money doing something she loves.

My plan, then, is to be a "boutique agent" who focuses on quality, not quantity and target numbers for the number of appointments I can set in a given week or month. I can do this by also doing some freelance legal work. The beauty is that I can do the legal work from home, as well as a good deal of the real estate work (the searching, setting up appointments, phone calls, document preparation).
Being able to work from home and being able to largely control which days and hours I work would be ideal for raising a family, something that is on the horizon for us (though not the immediate horizon -- no big announcements yet!).

Yesterday, I put together a couple more documents that the Texas Board of Law Examiners needed to process my application. One was my 2010 tax return and one was an order from the Arizona Supreme Court accepting my resignation of membership in that state's bar (I am not planning to practice there, so there is no point in paying the dues required to maintain membership...but I needed to follow a certain process to have that treated as resignation and not suspension. Now that has been taken care of in a satisfactory manner). What I didn't have was my 2006 tax return. When I was putting together my (rather large) package of materials for the Board, I realized this was missing. I submitted the spare copy of my W-2 for that year, which I did have for some reason, and explained that I had filed taxes in 2006 but could not locate the return, and that the W-2 and my employment references would be able to verify my full-time employment as an attorney for that year. After placing a follow-up call to the Board yesterday, I was told that I would have to request the tax return from the IRS. The woman I spoke to couldn't tell me why the return itself was necessary (they have copies of all the other years from 2005-2010), given the other proof of income and employment I furnished, but did at least tell me that my application *might* be considered without the tax return if I submitted proof that I've requested it. One hundred fourteen dollars and two IRS forms later, the return has been requested and I've sent proof of that to the Board. The tax return might take 60 days to receive. I was also told that the Board of Law Examiners has 150 days to consider my application before it has to make a decision.
That puts us in February before I expect to know anything. I submitted my application (the first time) in September, after spending months completing the application, tracking down and collecting documents (which included my high school cheer coach mailing me the only copy of my old business card anyone seemed to have), and ensuring I had up-to-date phone numbers and addresses for all my dozens of references.

It has been a long, drawn-out process, and I expected to have some answers or resolution by now. Instead, there will be more waiting. And it's out of my hands.

I don't like the fact that this is out of my control. I don't like that the IRS is involved -- I inherently distrust and dislike them, and even having to fill out forms requesting old returns and having to copy my 2010 return (the one with the word "DECEASED" in all-caps prominently next to Brian's name) made me cry, tears of frustration and sadness and rage at the process and the red tape. Why does this all have to be so difficult?! Why is it taking so long for all these things in my life to come together? I have had this idea and plan of what I want for so long, and now I'm just waiting -- waiting for the IRS to send me copies of a tax return that's more than a half-decade old, waiting for the Texas Board of Law Examiners to decide my fate, waiting to know if I'll have to take another bar exam, waiting to get my professional life to where I want it, waiting to get married and have kids, waiting....

And then I started thinking about what I've done with the last year of my life. Today, I see it more clearly and can appreciate some of the non-Christmas-card items I've done, like writing this blog and having heart-to-hearts with hurting, grieving people. But yesterday, I broke down, upset that I've only sold a few houses and haven't put all the pieces together like I thought I would have by now.
I'm getting impatient.

Then, I started thinking about the "what if"s that I usually avoid....Where would I be if Brian hadn't died? Would I be a partner at my old firm? Would we have two kids? Would they be redheads? Would my social life be better, full of all my friends that are now 1,000 miles away? Would we have upgraded to a four-bedroom house with granite countertops? Would we be able to spend vacation days at tropical resorts instead of returning home to see family?

I know all of these things make it sound like I'm miserable and unhappy with my life now, but I'm not. That's the thing. I just had a lousy day and I let it get the best of me. I just could not stem the tide of tears yesterday. It wasn't because I am unhappy with my life -- it was because I was frustrated and upset with a few things, and then some other issues came spilling out. I don't grieve too much for what "could have been" anymore, but it happens every now and then. Yesterday was one of those days. They are few and far between, and get to be fewer and farther between as time goes on, but they still happen. The good thing is that I was able to share all of this with Sheldon and just having an ear to talk to and a shoulder to cry on, and arms to hold me tight, made me feel better. It can be a tough walk, to acknowledge these feelings and be honest about them, but also to make sure I don't offend Sheldon by making him feel that I don't love him and our life together. I explained that, and somehow he handles it all well and keeps it in perspective. A bad day here and there is nothing compared to our usual routine of happiness, kisses, and counting our blessings. He knows I am happy, even if I have those "widow days" now and again. We also know we can and will get through them.

Yesterday was a bad day. Today is better. Life goes on. I will go on, and I will do so with a smile on my face, appreciating the wonderful things and people I have in my life.
As for the IRS and red tape I have to deal with....well, I hope to learn a thing or two about patience and persistence from all this. There are lessons to be learned from everything, and goodness can come out of anything.

Wendy
15 July 2010 04:55 [Source: ICIS news]

SINGAPORE (ICIS news)--Taiwan's Formosa Chemical and Fibre Corp has brought forward its styrene monomer (SM) maintenance schedule to ensure that feedstock ethylene was sufficient for its many downstream operations following the shutdown of its No 1 cracker, market sources said on Thursday.

The 700,000 tonne/year No 1 cracker in Mailiao was shut on 7 July following an explosion and was expected to remain off line for around one month.

The company's 600,000 tonne/year No 3 SM unit was initially scheduled for a 40-day maintenance starting early September. However, end-users in ...

The shutdown was likely to start in the second half of August, a source close to the company said. Meanwhile, the re-start of the 250,000 tonne/year No 1 unit will be pushed back by around one week as well, from mid-July to late July, the source added. The company expects to have sufficient ethylene for its SM operations and no disruption in supply to its customers.
When you are writing applications, eventually you have to decide how to manage state. You can get far with React setState and lifting state up the component hierarchy as you go. Eventually that might become cumbersome, and you realize that using a state manager might save time and effort. This is the reason why solutions like Redux, MobX, and Cerebral are popular in the community.

To provide another point of view, you will hear this time from Nir Yosef, the author of controllerim. It's a solution that builds on top of MobX and has been designed with testability in mind.

My name is Nir, and I am a front-end developer at Wix.com, with over two years of experience in React and MobX, and now gaining some experience with React Native and Android.

Controllerim is a state management library. It gives you the ability to create logic controllers for your React components, and makes your components automatically reactive to any change in the controllers. All of this is done with almost zero boilerplate. Controllerim uses MobX observables behind the scenes, so all the optimizations of MobX in terms of performance are also relevant for Controllerim.

Controllerim brings back the idea of the well-known Controller, the C of MVC, and abandons the singleton Stores concept that Redux (using Flux terminology) gave birth to.

When I first came across React, I almost immediately came across Redux. It seemed like Redux was the only way to do React. Everyone was talking about it, so I decided to give it a try. After reading some tutorials, I was quite amazed by its complexity. All the different terms (thunk, reducers, selectors, map dispatch to props, etc.) weren't so clear to me, and it seemed like a considerable amount of boilerplate. Something just felt wrong. It seemed like a strange way to implement the good old MVC. I think the article by André Staltz says it all. After some playing around with a dummy project, trying to crack this Redux thing, I came across MobX and dumped Redux for good.
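Controllerim is described above as making components automatically reactive to plain mutations of controller state. As a rough, self-contained sketch of how that style of reactivity can work -- this is an illustration in plain JavaScript, not MobX's or Controllerim's actual implementation -- a Proxy can intercept nested writes and re-run a render callback:

```javascript
// Minimal sketch of mutation-driven reactivity using a Proxy.
// A real implementation (MobX) tracks which observables each reaction
// reads; here we simply invoke the callback on every write.
function observable(target, onChange) {
  return new Proxy(target, {
    get(obj, key) {
      const value = obj[key];
      // Wrap nested objects so deep writes are detected too.
      if (typeof value === "object" && value !== null) {
        return observable(value, onChange);
      }
      return value;
    },
    set(obj, key, value) {
      obj[key] = value;
      onChange();
      return true;
    },
  });
}

let renders = 0;
const state = observable({ some: { nested: { prop: false } } }, () => {
  renders += 1; // stand-in for a component re-render
});

state.some.nested.prop = true; // a plain mutation triggers the "render"
console.log(state.some.nested.prop, renders); // true 1
```

The point of the sketch is only that plain assignments can drive updates without an explicit setState call; the library's real machinery is far more selective about what it re-renders.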
MobX was much clearer and straightforward. I used MobX for over a year with my team, and it was pretty good, but some problems immediately came up:

- mobx.toJS() conversions all over the place.
- We used mobx.inject and mobx.provide, but those didn't play well with our tests.

So MobX wasn't perfect after all. At this point, I again started to wonder what happened to the good old MVC. Why were things getting so much more complicated on the web? And then I decided to write down all the pain points of our current architecture:

- No toJS thing. I want everything to be a plain JavaScript object.
- An AppStore will be the root.

After writing it down, I found out that I don't have a Store anymore. I have a Controller. The good old Controller. I knew I was on the right track. The API just wrote itself. I just needed to figure out the way to make it happen, and it wasn't so hard. The final result was Controllerim. If you wonder about the name, I tried to name it "Controllers" but it was already taken. I tried "React-controllers", but it was also taken. In Hebrew, the 'im' suffix is the plural suffix, so I just named it Controllerim. :)

Let's say we have an App component as the root of our web app, and that we have a Child component deeply nested in the app. Any data that we put on the AppController will be available to all other components in the app for as long as the app is alive, so let's create an AppController and put some application data on it:

```javascript
class AppController extends Controller {
  constructor(componentInstance) {
    super(componentInstance);
    this.state = {
      userName: "Bob"
    };
  }

  getUserName() {
    return this.state.userName;
  }

  setUserName(name) {
    this.state.userName = name;
  }
}
```

So a controller is just an ES2015 class that extends Controller and has some state and getter and setter methods.
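One consequence of this design is the testability mentioned earlier: a controller is just a class with plain getters and setters, so its logic can be exercised without rendering anything. The sketch below stubs a minimal Controller base class (a hypothetical stand-in for the real import from controllerim) just to show the shape of such a test:

```javascript
// Hypothetical stub standing in for Controllerim's Controller base class,
// so the controller logic can run in isolation (no React, no DOM).
class Controller {
  constructor(componentInstance) {
    this.component = componentInstance;
  }
}

class AppController extends Controller {
  constructor(componentInstance) {
    super(componentInstance);
    this.state = { userName: "Bob" };
  }
  getUserName() {
    return this.state.userName;
  }
  setUserName(name) {
    this.state.userName = name;
  }
}

// Exercise the controller as a plain object.
const controller = new AppController({});
console.log(controller.getUserName()); // "Bob"
controller.setUserName("Alice");
console.log(controller.getUserName()); // "Alice"
```

In a real test suite the assertions would live in Jest or Mocha, but the idea is the same: the state logic is reachable without mounting a component tree.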
Now let's connect the controller to the App component:

class App extends React.Component {
  componentWillMount() {
    this.controller = new AppController(this);
  }

  render() {
    return (
      <div>
        <h1>Welcome {this.controller.getUserName()}!</h1>
        <CompA/>
        <CompB/>
      </div>
    );
  }
}

export default observer(App);

Easy, right? We just need to init the controller in componentWillMount, and we need to make sure that we wrap the component with observer, and that's it! Every change in the controller will be reflected by the view. Now, let's say that Child is some deeply nested component and that it should allow us to preview and edit the userName when we click on a save button. Let's start by creating ChildController:

class ChildController extends Controller {
  constructor(componentInstance) {
    super(componentInstance);
    this.state = { input: "" };
  }

  getInput() {
    return this.state.input;
  }

  setInput(value) {
    this.state.input = value;
  }

  saveInput() {
    this.getParentController("AppController").setUserName(this.state.input);
  }
}

The only new thing here is the call to getParentController(). Controllerim allows you to get any parent controller, not only a direct parent, so we just save the userName, and because everything is reactive, this change will be reflected in all the views that make use of the userName prop from App. Let's finish by creating Child:

class Child extends React.Component {
  componentWillMount() {
    this.controller = new ChildController(this);
  }

  render() {
    return (
      <div>
        <input
          value={this.controller.getInput()}
          onChange={(e) => this.controller.setInput(e.target.value)}
        />
        <button onClick={() => this.controller.saveInput()}>
          Save
        </button>
      </div>
    );
  }
}

And that's it! Simple, isn't it? It depends. The ones that are already familiar with MobX are very supportive.
The Redux people are more suspicious and tend to recycle arguments they heard about MobX, so I think it would be nice to tackle the two most frequently recycled arguments once and for all: instead of writing this.setState({some: {nested: {prop: true }}}), you can just write this.state.some.nested.prop = true. And use Controllerim all over the place to make it battle tested. :) I think that Controllerim has the potential to be the best Redux alternative out there. In general, I think that React is here to stay, and the next giant step will be in the field of CSS. If something doesn't feel right, don't be fooled by its popularity. You should interview someone from the CSS community. That field of web development needs a little push. Thanks for the interview Nir! Controllerim looks like a great abstraction over MobX and I hope people find it. The code feels amazingly simple. Learn more about Controllerim on GitHub.
https://survivejs.com/blog/controllerim-interview/
package org.apache.lucene.search; import java.io.Serializable; /** Encapsulates sort criteria for returned hits. There are three possible kinds of term values which may be put into sorting fields: Integers, Floats, or Strings. Unless {@link SortField SortField} objects are specified, the type of value in the field is determined by parsing the first term in the field. Integer term values should contain only digits and an optional preceding negative sign. Float term values should conform to values accepted by {@link Float Float.valueOf(String)} (except that NaN and Infinity are not supported). Documents which should appear first in the sort should have low values, later documents high values. String term values can contain any valid String, but should not be tokenized. The values are sorted according to their {@link Comparable natural order}. The cache is cleared each time a new IndexReader is passed in, or if the value returned by maxDoc() changes for the current IndexReader. This class is not set up to be able to efficiently sort hits from more than one index simultaneously. Created: Feb 12, 2004 10:53:57 AM @author Tim Jones (Nacimiento Software) @since lucene 1.4 @version $Id: Sort.java,v 1.7 2004/04/05 17:23:38 ehatcher Exp $ */ public class Sort implements Serializable { /** Represents sorting by computed relevance. Using this sort criteria returns the same results as calling {@link Searcher#search(Query) Searcher#search()} without a sort criteria, only with slightly more overhead. */ public static final Sort RELEVANCE = new Sort(); /** Represents sorting by index order. */ public static final Sort INDEXORDER = new Sort (SortField.FIELD_DOC); // internal representation of the sort criteria SortField[] fields; /** Sorts by computed relevance. This is the same sort criteria as calling {@link Searcher#search(Query) Searcher#search()} without a sort criteria, only with slightly more overhead.
*/ public Sort() { this (new SortField[]{SortField.FIELD_SCORE, SortField.FIELD_DOC}); } /** Sorts by the terms in field then by index order (document number). The type of value in field is determined automatically. @see SortField#AUTO */ public Sort (String field) { setSort (field, false); } /** Sorts possibly in reverse by the terms in field then by index order (document number). The type of value in field is determined automatically. @see SortField#AUTO */ public Sort (String field, boolean reverse) { setSort (field, reverse); } /** Sorts in succession by the terms in each field. The type of value in field is determined automatically. @see SortField#AUTO */ public Sort (String[] fields) { setSort (fields); } /** Sorts by the criteria in the given SortField. */ public Sort (SortField field) { setSort (field); } /** Sorts in succession by the criteria in each SortField. */ public Sort (SortField[] fields) { setSort (fields); } /** Sets the sort to the terms in field then by index order (document number). */ public final void setSort (String field) { setSort (field, false); } /** Sets the sort to the terms in field possibly in reverse, then by index order (document number). */ public void setSort (String field, boolean reverse) { SortField[] nfields = new SortField[]{ new SortField (field, SortField.AUTO, reverse), SortField.FIELD_DOC }; fields = nfields; } /** Sets the sort to the terms in each field in succession. */ public void setSort (String[] fieldnames) { final int n = fieldnames.length; SortField[] nfields = new SortField[n]; for (int i = 0; i < n; ++i) { nfields[i] = new SortField (fieldnames[i], SortField.AUTO); } fields = nfields; } /** Sets the sort to the given criteria. */ public void setSort (SortField field) { this.fields = new SortField[]{field}; } /** Sets the sort to the given criteria in succession.
*/ public void setSort (SortField[] fields) { this.fields = fields; } /** Representation of the sort criteria. @return Array of SortField objects used in this sort criteria */ public SortField[] getSortFields() { return fields; } public String toString() { StringBuffer buffer = new StringBuffer(); for (int i = 0; i < fields.length; i++) { buffer.append(fields[i].toString()); if ((i + 1) < fields.length) buffer.append(','); } return buffer.toString(); } }
http://mail-archives.apache.org/mod_mbox/lucene-dev/200407.mbox/raw/%3C008401c46f5c$01ab62a0$6a00a8c0@Aviran%3E/2
I have this LinkedList called cardList. I have placed cards here using this: cardList.add(new Card(int suit, int rank)) I can display the contents = 00, 01, 02 and so on ... with my suit being... How do I convert an existing java program to GWT-MVP format program? It's not clear to me where the process begins. Please help. So what will be the output of that code if I do sys.out.println(str) if the string is KD? I have the code below and the corresponding output. void openTopCards(){ LinkedList<String> fliper =... This is a solitaire game (without graphics) that I am trying to write. I can't figure out how to write a class with that property because the cards are not always facing down. By default they are... Is it possible to convert one element (string) from one linkedlist with another element (string) from another linkedlist to have an output of one element that joined the two strings? I am trying to... Thank you Tim. I figured that out yesterday after a colleague pointed it to me. I should have declared the value first before using it. As a Java programming enthusiast, I was expecting people to... int m, mc; Foundations give = new Foundations(); LinkedList<String>[] maneuver = new LinkedList[m]; LinkedList<String> distribute(){ m = 7; for (int i = 0; i != m; i++){ maneuver[i] =... I am sorry about the long code. I thought it would be helpful for readers to see everything. I really don't know yet how to implement the Exceptions. I seem to have traced the problem. I just do not...
I have this: item = getitem.items(); LinkedList<String>[] distributedItems = new... @ ranjithfs1: I am trying to copy the whole method from another class to main. Even if I remove the return there is still error. "Multiple markers at this line - Syntax error on token(s),... this one is working on: public class Table { public int player() { Scanner in = new Scanner(System.in); p = in.nextInt(); return p; } but does not work on another class but under...
http://www.javaprogrammingforums.com/search.php?s=23dd6102fc3fdca83799411cca4f7937&searchid=1583587
Aug 09, 2017 10:18 PM|JoeyCrack|LINK I have 2 pages that I want to use Routing on, using the query string to create a friendly URL. I want /first?item1={0} to route to /{0}/ And /second?item1={0}&item2={0} to route to /{0}/article/{0} Can somebody please tell me how I can achieve this using WebForms and the built-in routing in ASP.NET 4.6? Contributor 6420 Points Aug 10, 2017 03:03 AM|Jean Sun|LINK Hi JoeyCrack, JoeyCrack I want /first?item1={0} to route to /{0}/ And /second?item1={0}&item2={0} to route to /{0}/article/{0} Usually when we create user-friendly URLs, it should route a URL like /{0}/ to /first?item1={0}. The above description confuses me. If you want to make the application show the content of /first?item1={0} when you request /{0}/, you add the following code into Global.asax.cs. Since a query string isn't allowed in routes, we will put the query-string parameters in the RouteValueDictionary. public class Global : HttpApplication { void Application_Start(object sender, EventArgs e) { // Code that runs on application startup RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); } public static void RegisterRoutes(RouteCollection routes) { routes.MapPageRoute("first", "{item1}/", "~/first.aspx", true, new RouteValueDictionary { { "item1", "defaultvalue" } }); routes.MapPageRoute("second", "{item1}/article/{item2}", "~/second.aspx", true, new RouteValueDictionary { { "item1", "defaultvalue" },{ "item2", "defaultvalue" } }); } } Then you can use the following code to get the value from the RouteValueDictionary. var test = Page.RouteData.Values["item1"]; If you want to make the application show the content of /{0}/ when you request /first?item1={0}, this can't be done using Routing. However you can use IIS URL Rewrite to achieve this. The following link shows how to use IIS URL Rewrite, please take it as reference. Best Regards, Jean
https://forums.asp.net/t/2126682.aspx?URL+Routing+with+ASP+NET+WebForms
Life is short, proclaim the authors of O'Reilly's Ruby Cookbook. You have real problems and this book is here to solve them, they go on. Weighing in at around 850 pages, there certainly is a chance that whatever problem you have could be addressed in this book. But is it all cartoon foxes or a tale with descriptions and plot twists worthy of reading in your bathrobe over a cup of chamomile? Thankfully no. In this book, O'Reilly delivers a densely packed tome filled with information, most of it unrelated, intended to solve real problems. If you've never used a recipe book before, the premise is simple. They teach you how to put together code to create a particular effect or output. Reading tons of code on paper can be awfully boring but thankfully Ruby lends itself well to these types of books, largely because it's a human-readable language and packs a lot of functionality in very little space. For instance, you can expect: <% 3.times do %> yeah <% end %> will output yeah yeah yeah, as you would expect when reading it as three times, do 'yeah'. In the Ruby Cookbook, authors Lucas Carlson and Leonard Richardson have divided the problems into these logical groups: 1. Strings 2. Numbers 3. Date & Time 4. Arrays 5. Hashes 6. Files & Directories 7. Code Blocks & Iterations 8. Objects & Classes 9. Modules & Namespaces 10. Reflections & Metaprogramming 11. XML & HTML 12. Graphics and Other File Formats 13. Databases & Persistence 14. Internet Services 15. Web Development: Ruby on Rails 16. Web Services and Distributed Programming 17. Testing, Debugging, Optimizing, and Documenting 18. Packaging and Distributing Software 19. Automating Tasks with Rake 20. Multitasking and Multithreading 21. User Interface 22. Extending Ruby with Other Languages 23. System Administration As you can see there is a lot of information in this book. <tangent>As I was flipping through it I began to wonder how I could possibly make room in my brain for all this knowledge.
If I absorbed it all I'd probably have to discard some pretty fundamental things like remembering to zip up my fly. Trust me, nobody wants that. My guess is that Messrs. Carlson & Richardson are in similar predicaments, and my thanks go to them and their Significant Others for potentially sacrificing valuable basic skills in order to bring you all this wonderful code. </tangent> Each recipe in the book is divided into three sections: Problem, Solution, and Discussion. As you can infer, the problem gives a concise description of the issue that the Solution will then resolve. The Solution portion is where most of the code is and, like all O'Reilly books, they do a good job of separating the text from code so there's never any confusion. The Discussion is often the longest part of the recipe and contains not only details on the solution but the occasional alternative approach as well. I can't say that every recipe contained in this book is staggeringly useful. For instance, I personally do not foresee ever needing 2.14: Doing Math with Roman Numbers. But I guess some would-be gladiators out there might find that helpful. But the recipes that I did find interesting were very valuable and made the price of the book well worthwhile. I particularly enjoyed the recipes on Classifying Text with a Bayesian Analyzer, Documenting Your Website, and Checking a Credit Card Checksum. Other recipes, while not urgently required, can add a polish and professionalism to your code to set you apart from your peers. For instance there is a very simple recipe for wrapping text which you can use as an alternative to truncating: def wrap(s, width=78) s.gsub(/(.{1,#{width}})(\s+|\Z)/, "\\1\n") end Using it like: puts wrap("This text is not too short to be wrapped.", 20) I did find some solutions a little lacking. The Sending Mails with Rails recipe is different enough from the similar recipe in Chad Fowler's Rails Recipes book to make it useful.
Neither solution independently helped me craft a newsletter function for my website, but both combined gave me the knowledge I needed to write my code the way I needed it. Likewise, Generating PDF Files recommended using PDF-Writer, which I found slow for the number of fields (> 200) and records (> 500) that I need for my application. Based on a recommendation from a fellow RubyDC member, I will be checking out Ruport and RTex to see if they help. Honestly, it could be that all solutions give me the same headaches. Overall I was really impressed with the book. As with any recipe book, the utility is directly proportional to your need. Ruby Cookbook is no different there. It makes sense that, since each program is unique, some proposed solutions won't fit perfectly. Yet they will still give you a great launching point toward finding your own answer! Most importantly, as the types of applications you need to code change and as their complexity grows, you will find methods to avoid many of the usual speedbumps along the way within these pages.
http://www.oreillynet.com/cs/catalog/view/cs_msg/87595
CherryPy1 helps with both the collection and the analysis of coverage data (for a good introduction to code coverage, see bullseye.com). Now, I'm a visual learner, so I'm going to skip right to the screenshot and explain it in detail afterward. This is a browser session with two frames: a menu frame on the left and a file frame on the right. Clicking on one of the filenames in the menu will show you that file, annotated with coverage data, in the right-hand frame. This stats-browser is included with CherryPy, and can be used for any application, not just CherryPy or CP apps. 1 All of this is present in CherryPy 2.1 beta, revision 543. Get it via SVN. You need to start by obtaining the coverage.py module, either the original from Gareth Rees, or Ned Batchelder's updated version. Drop it in site-packages. If you're collecting coverage statistics for CherryPy itself, just run the test suite with the --cover option. Coverage data will be collected in cherrypy/lib/coverage.cache. Example: mp5:/usr/lib/python2.3/site-packages# python cherrypy/test/test.py --cover If you write a test suite for your own applications, build it on top of the tools present in cherrypy/test. Here's a minimal example: import os, sys localDir = os.path.dirname(__file__) dbpath = os.path.join(localDir, "db") from cherrypy.test import test if __name__ == '__main__': # Place our current directory's parent (myapp/) at the beginning # of sys.path, so that all imports are from our current directory. curpath = os.path.normpath(os.path.join(os.getcwd(), localDir)) sys.path.insert(0, os.path.normpath(os.path.join(curpath, '../../'))) testList = ["test_directory", "test_inventory", "test_invoice", ] testConf = os.path.join(localDir, "test.conf") test.TestHarness(testList).run(testConf) By using the TestHarness from CherryPy's test suite, you automatically get access to the --cover command-line arg (and --profile and all the others, too, but that's for another day).
Again, coverage data will be collected in cherrypy/lib/coverage.cache by default. You can use the stats-browser even if you don't use the CherryPy framework to develop your applications. Just use coverage.py as it was originally intended: coverage.py -x yourapp.py The coverage data, in this case, will be collected by default into a .coverage file. You need to tell the stats-server where this file is (see below). Note that successive manual calls to coverage.py will accumulate stats; the CherryPy test suite, in contrast, erases the data on each run. Once you've got coverage data sitting around in a file somewhere, it's a snap to have CherryPy serve it in your browser. If you're covering the CherryPy test suite, or your own CP app using CP's TestHarness (see above), just execute: mp5:/usr/lib/python2.3/site-packages# python cherrypy/lib/covercp.py Then, point your browser to the server's address, and you should see an image similar to the above. By default, the server reads coverage data from cherrypy/lib/coverage.cache, the same file our collector wrote to by default. If you covered your own application and collected the data in another file, you can supply that path as a command-line arg: # python cherrypy/lib/covercp.py /path/to/.coverage 8088 If you supply a second arg, as in this example, it will change the port for you (from the default of 8080). You need to stop (Ctrl-C) and restart the server if you recollect coverage data.
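As an aside, if you just want to experiment with line-coverage collection and don't have coverage.py handy, Python's standard-library trace module provides a similar, if more basic, facility. This is not part of CherryPy's tooling; it's just a self-contained sketch of the same idea:

```python
import trace

def classify(x):
    # Only one of these branches will execute in the call below, so
    # the other would show up as uncovered in an annotated listing.
    if x > 0:
        return "positive"
    return "non-positive"

# count=True records how many times each line executed;
# trace=False suppresses the noisy line-by-line printout.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)

results = tracer.results()
# results.counts maps (filename, line_number) -> execution count
executed = sorted(line for (_fname, line) in results.counts)
print("executed lines:", executed)
```

The CoverageResults object returned by results() can also dump annotated listings to disk via its write_results() method, which is roughly what coverage.py's annotate output gives you.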
Percentages below the "threshold" value will be shown in red. The "Show %" feature isn't "sticky", by the way; that is, if you click on a different directory link, or refresh the page, the figures will disappear. That's a necessary evil due to the slowness of generating percentages for many files. Just hit the "Show %" button again as needed. As you can see from the screenshot, I've got some more tests to write! Hope you find this tool as useful as I do.
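The "percent covered" figure can be approximated directly from annotated listings like the one described above, where touched lines start with ">" and missed lines with "!". Here's a small self-contained helper; the prefix convention is taken from this article, and the threshold mirrors the red-highlighting behavior (this is an illustrative sketch, not CherryPy code):

```python
def percent_covered(annotated_source, threshold=70.0):
    """Compute percent covered from '>'/'!'-annotated source text.

    Lines starting with '>' were executed; lines starting with '!'
    were executable but never run. Everything else is ignored.
    Returns (percent, below_threshold_flag).
    """
    executed = missed = 0
    for line in annotated_source.splitlines():
        stripped = line.lstrip()
        if stripped.startswith(">"):
            executed += 1
        elif stripped.startswith("!"):
            missed += 1
    total = executed + missed
    # A file with no executable lines is treated as fully covered.
    percent = 100.0 if total == 0 else 100.0 * executed / total
    return percent, percent < threshold


sample = (
    "> def wrap(s):\n"
    ">     return s\n"
    "! def unused(s):\n"
    "!     return s[::-1]\n"
)
print(percent_covered(sample))  # (50.0, True)
```

Generating these figures lazily, as the stats-browser does, avoids parsing every file until the user actually asks for percentages.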
http://www.aminus.org/blogs/index.php/2005/08/19/code_coverage_with_cherrypy_2_1?blog=2
Error handling – also called exception handling – is a big part of Java, but it’s also one of the more divisive elements. Exception handling allows a developer to anticipate problems that may arise in their code to prevent them from causing issues for users down the line. The reason this can become a nuisance is that some methods in Java will actually force the user to handle exceptions. This is where “try catch” in Java comes into play. What is “try catch” Java? For someone new to programming, it can be hard to understand why you might write code that makes it possible for an error to occur. See also: NullPointerException in Java – Explaining the Billion Dollar Mistake A good example would be the FileNotFoundException. This does exactly what it says on the tin: this exception is “thrown” when Java looks for a particular file and can’t find it. So, what happens if someone is using your app, switches to their file browser, then deletes a save-file that the app was using? In that scenario, your application might understandably throw an exception. We say that this is an exception rather than an error because it’s a problem that we might reasonably anticipate and handle. So you use a “try catch” block. Try essentially asks Java to try and do something. If the operation is successful, then the program will continue running as normal. If it is unsuccessful, then you will have the option to reroute your code while also making a note of the exception. This happens in the “catch” block. Try catch Java example Here’s an example of using try catch in Java: try { int[] list = {1, 2, 3, 4, 5, 6}; System.out.println(list[10]); } catch (Exception e) { System.out.println("Oops!"); } Here, we create a list with 6 entries. The highest index is therefore 5 (seeing as “1” is at index 0). We then try to get the value from index 10. Try running this and you will see the message “Oops!”. Notice that we passed “Exception e” as an argument. 
That means we can also say: System.out.println(e); We will get the message: “java.lang.ArrayIndexOutOfBoundsException: 10” See also: Java beginner course – A free and comprehensive guide to the basics of Java Now that we have “handled” our exception, we can refer to it as a “checked exception.” Forced exception handling Notice that we could have written this code without handling the exception. This would cause the program to crash, but that’s our prerogative! In other cases, a method will force the user to handle an exception. So, let’s say that we create a little method that will check the tenth position of any list we pass in as an argument: public class MyClass { public static void main(String[ ] args) { int[] list = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}; System.out.println(checkTen(list)); } public static int checkTen (int[] listToCheck) { int outPut = listToCheck[10]; return outPut; } } This works just fine and will print “11” to the screen. But if we add the “throws” keyword to our method signature, we can force the user to deal with it. public static int checkTen (int[] listToCheck) throws ArrayIndexOutOfBoundsException { Now we can write our code like so: public class MyClass { public static void main(String[ ] args) { int[] list = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; try { System.out.println(checkTen(list)); } catch (ArrayIndexOutOfBoundsException e) { //System.out.println(e); System.out.println("Oops!"); } } public static int checkTen (int[] listToCheck) throws ArrayIndexOutOfBoundsException { int output = listToCheck[10]; return output; } } This will then force the user to deal with the exception. In fact, many Java editors will automatically populate the code with the necessary block. Note that we need to use the right type of exception! So, should you force other devs to handle exceptions when writing your own classes? That’s really up to you. 
Keep in mind that some scenarios really should cause a program to terminate, and forcing a developer to deal with such instances will only create more boilerplate code. In other cases, this can be a useful way to communicate potential issues to other devs and promote more efficient code. Of course, in the example given here, there are a number of other possibilities for exceptions. What happens if someone passes a list of strings into your method, for example? All I can say to that is, welcome to the wonderful world of Java! This is your chance to decide what type of developer you want to be! Once you’re ready to find out, check out our guide to the best resources to learn Java!
http://www.nochedepalabras.com/try-catch-java-exception-handling-explained.html
This tutorial introduces Python developers, of any programming skill level, to blockchain. You’ll discover exactly what a blockchain is by implementing a public blockchain from scratch and by building a simple application to leverage it. You’ll be able to create endpoints for different functions of the blockchain using the Flask microframework, and then run the scripts on multiple machines to create a decentralized network. You’ll also see how to build a simple user interface that interacts with the blockchain and stores information for any use case, such as peer-to-peer payments, chatting, or e-commerce. Python is an easy programming language to understand, so that’s why I’ve chosen it for this tutorial. As you progress through this tutorial, you’ll implement a public blockchain and see it in action. The code for a complete sample application, written using pure Python, is available on GitHub. Now, to understand blockchain from the ground up, let’s walk through it together. Prerequisites - Basic programming knowledge of Python - Knowledge of REST-APIs - Familiarity with the Flask microframework (not mandatory, but nice to have) Background In 2008, a whitepaper titled Bitcoin: A Peer-to-Peer Electronic Cash System was released by an individual (or maybe a group) named Satoshi Nakamoto. The paper combined several cryptographic techniques and a peer-to-peer network to transfer payments without the involvement of any central authority (like a bank). A cryptocurrency named Bitcoin was born. Apart from Bitcoin, that same paper introduced a distributed system of storing data (now popularly known as “blockchain”), which had far wider applicability than just payments or cryptocurrencies. Since then, blockchain has attracted interest across nearly every industry. 
Blockchain is now the underlying technology behind fully digital cryptocurrencies like Bitcoin, distributed computing technologies like Ethereum, and open source frameworks like Hyperledger Fabric, on which the IBM Blockchain Platform is built.

What is "blockchain"? You can think of it as a way of storing data that obeys the following constraints:

- Blocks can't be modified once added; in other words, it is append only.
- There are specific rules for appending data to it.
- Its architecture is distributed.

Enforcing these constraints yields the following benefits:

- Immutability and durability of data
- No single point of control or failure
- A verifiable audit trail of the order in which data was added

So, how can these constraints achieve these characteristics? We'll get more into that as we implement this blockchain. Let's get started.

About the application

Let's briefly define the scope of our mini-application. Our goal is to build an application that allows users to share information by posting. Since the content will be stored on the blockchain, it will be immutable and permanent. Users will interact with the application through a simple web interface.

Steps

- Store transactions into blocks
- Add digital fingerprints to the blocks
- Chain the blocks
- Implement a proof of work algorithm
- Add blocks to the chain
- Create interfaces
- Establish consensus and decentralization
- Build the application
- Run the application

We'll implement things using a bottom-up approach. Let's begin by defining the structure of the data that we'll store in the blockchain. A post is a message that's posted by any user on our application. Each post will consist of three essential elements:

- Content
- Author
- Timestamp

Store transactions into blocks

We'll be storing data in our blockchain in a format that's widely used: JSON.
Here’s what a post stored in blockchain will look like: { "author": "some_author_name", "content": "Some thoughts that author wants to share", "timestamp": "The time at which the content was created" }: class Block: def __init__(self, index, transactions, timestamp): """ Constructor for the `Block` class. :param index: Unique ID of the block. :param transactions: List of transactions. :param timestamp: Time of generation of the block. """ self.index = index self.transactions = transactions self.timestamp = timestamp Add digital fingerprints to the blocks: - It should be easy to compute. - It should be deterministic, meaning the same data will always result in the same hash. - It should be uniformly random, meaning even a single bit change in the data should change the hash significantly. The consequence of this is: - It is virtually impossible to guess the input data given the hash. (The only way is to try all possible input combinations.) - If you know both the input and the hash, you can simply pass the input through the hash function to verify the provided hash. This asymmetry of efforts that’s required to figure out the hash from an input (easy) vs. figuring out the input from a hash (almost impossible) is what blockchain leverages to obtain the desired characteristics. There are various popular hash functions. Here’s an example in Python that uses the SHA-256 hashing function: >>> from hashlib import sha256 >>> data = b"Some variable length data" >>> sha256(data).hexdigest() 'b919fbbcae38e2bdaebb6c04ed4098e5c70563d2dc51e085f784c058ff208516' >>> sha256(data).hexdigest() # no matter how many times you run it, the result is going to be the same 256 character string 'b919fbbcae38e2bdaebb6c04ed4098e5c70563d2dc51e085f784c058ff208516' >>> data = b"Some variable length data2" # Added one character at the end. '9fcaab521baf8e83f07512a7de7a0f567f6eef2688e8b9490694ada0a3ddeec8' # Note that the hash has changed entirely! 
We’ll store the hash of the block in a field inside our Block object, and it will act like a digital fingerprint (or signature) of data contained in it: from hashlib import sha256 import json def compute_hash(block): """ Returns the hash of the block instance by first converting it into JSON string. """ block_string = json.dumps(self.__dict__, sort_keys=True) return sha256(block_string.encode()).hexdig. Chain the blocks. from hashlib import sha256 import json import time class Block: def__init__(self, index, transactions, timestamp, previous_hash): """ Constructor for the `Block` class. :param index: Unique ID of the block. :param transactions: List of transactions. :param timestamp: Time of generation of the block. :param previous_hash: Hash of the previous block in the chain which this block is part of. """ self.index = index self.transactions = transactions self.timestamp = timestamp self.previous_hash = previous_hash # Adding the previous hash field def compute_hash(self): """ Returns the hash of the block instance by first converting it into JSON string. """ block_string = json.dumps(self.__dict__, sort_keys=True) # The string equivalent also considers the previous_hash field now return sha256(block_string.encode()).hexdigest() class Blockchain: def __init__(self): """ Constructor for the `Blockchain` class. """ self.chain = [] self.create_genesis_block() def create_genesis_block(self): """ A function to generate genesis block and appends it to the chain. The block has index 0, previous_hash as 0, and a valid hash. """ genesis_block = Block(0, [], time.time(), "0") genesis_block.hash = genesis_block.compute_hash() self.chain.append(genesis_block) @property def last_block(self): """ A quick pythonic way to retrieve the most recent block in the chain. 
        Note that the chain will always consist of at least one block
        (i.e., the genesis block).
        """
        return self.chain[-1]

Now, if the content of any of the previous blocks changes:

- The hash of that previous block would change.
- This will lead to a mismatch with the previous_hash field in the next block.
- Since the input data used to compute the hash of any block also includes the previous_hash field, the hash of the next block will also change.

Ultimately, the entire chain following the replaced block is invalidated, and the only way to fix it is to recompute the entire chain.

Implement a proof of work algorithm

To make tampering computationally expensive, we require the hash of every block to satisfy a constraint: it must start with a fixed number of leading zeroes (the difficulty). To achieve this, we introduce a new nonce field on the block — a number we keep changing until the block's hash satisfies the constraint:

class Blockchain:
    # difficulty of the PoW algorithm
    difficulty = 2

    """
    Previous code contd..
    """

    def proof_of_work(self, block):
        """
        Function that tries different values of the nonce to get a hash
        that satisfies our difficulty criteria.
        """
        block.nonce = 0

        computed_hash = block.compute_hash()
        while not computed_hash.startswith('0' * Blockchain.difficulty):
            block.nonce += 1
            computed_hash = block.compute_hash()

        return computed_hash

Notice that there is no specific logic for figuring out the nonce quickly; it’s just brute force. The only definite improvement you can make is to use hardware chips that are specially designed to compute the hash function in a smaller number of CPU instructions.

Add blocks to the chain

To add a block to the chain, we first have to verify that:

- The data has not been tampered with (the proof of work provided is correct).
- The order of transactions is preserved (the previous_hash field of the block to be added points to the hash of the latest block in our chain).

Let’s see the code for adding blocks to the chain:

class Blockchain:
    """
    Previous code contd..
    """

    def add_block(self, block, proof):
        """
        A function that adds the block to the chain after verification.
        Verification includes:
        * Checking if the proof is valid.
        * The previous_hash referred to in the block and the hash of the
          latest block in the chain match.
        """
        previous_hash = self.last_block.hash

        if previous_hash != block.previous_hash:
            return False

        if not Blockchain.is_valid_proof(block, proof):
            return False

        block.hash = proof
        self.chain.append(block)
        return True

    @classmethod
    def is_valid_proof(cls, block, block_hash):
        """
        Check if block_hash is a valid hash of block and satisfies
        the difficulty criteria.
        """
        return (block_hash.startswith('0' * Blockchain.difficulty) and
                block_hash == block.compute_hash())

Mining

Transactions are initially kept in a pool of unconfirmed transactions; the process of putting them into a block and computing proof of work is known as mining:

class Blockchain:
    def __init__(self):
        self.unconfirmed_transactions = []  # data yet to get into the blockchain
        self.chain = []
        self.create_genesis_block()

    """
    Previous code contd...
    """

    def add_new_transaction(self, transaction):
        self.unconfirmed_transactions.append(transaction)

    def mine(self):
        """
        This function serves as an interface to add the pending
        transactions to the blockchain by adding them to a block
        and figuring out the proof of work.
        """
        if not self.unconfirmed_transactions:
            return False

        last_block = self.last_block

        new_block = Block(index=last_block.index + 1,
                          transactions=self.unconfirmed_transactions,
                          timestamp=time.time(),
                          previous_hash=last_block.hash)

        proof = self.proof_of_work(new_block)
        self.add_block(new_block, proof)
        self.unconfirmed_transactions = []
        return new_block.index

Alright, we’re almost there. You can see the combined code up to this point on GitHub.

Create interfaces

We'll use the Flask microframework to create a REST API so that we can interact with our node:

from flask import Flask, request
import requests

# Initialize flask application
app = Flask(__name__)

# Initialize a blockchain object.
blockchain = Blockchain()

We need an endpoint for our application to submit a new transaction.
This will be used by our application to add new data (posts) to the blockchain:

# Flask's way of declaring end-points
@app.route('/new_transaction', methods=['POST'])
def new_transaction():
    tx_data = request.get_json()
    required_fields = ["author", "content"]

    for field in required_fields:
        if not tx_data.get(field):
            return "Invalid transaction data", 404

    tx_data["timestamp"] = time.time()
    blockchain.add_new_transaction(tx_data)
    return "Success", 201

Here’s an endpoint to return the node’s copy of the chain. Our application will be using this endpoint to query all of the data to display:

@app.route('/chain', methods=['GET'])
def get_chain():
    chain_data = []
    for block in blockchain.chain:
        chain_data.append(block.__dict__)
    # The peer list is included so that a newly registered node can sync it
    return json.dumps({"length": len(chain_data),
                       "chain": chain_data,
                       "peers": list(peers)})

Here’s an endpoint to request the node to mine the unconfirmed transactions (if any). We’ll be using it to initiate a command to mine from our application itself:

@app.route('/mine', methods=['GET'])
def mine_unconfirmed_transactions():
    result = blockchain.mine()
    if not result:
        return "No transactions to mine"
    return "Block #{} is mined.".format(result)

@app.route('/pending_tx')
def get_pending_tx():
    return json.dumps(blockchain.unconfirmed_transactions)

These REST endpoints can be used to play around with our blockchain by creating some transactions and then mining them.
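A minimal client-side walkthrough of these endpoints, using only the Python standard library, could look like the sketch below. The base URL (localhost:8000) and the payload fields are assumptions for illustration; point NODE at wherever your node is actually running and call run_demo() once it is up:

```python
# Hypothetical client sketch for the /new_transaction, /mine, and /chain
# endpoints above. Nothing here is invoked at import time, so the file
# can be loaded without a running node.
import json
from urllib import request

NODE = "http://localhost:8000"  # assumed node address, adjust as needed

def new_transaction_request(author, content):
    """Build the POST request that submits a post as a new transaction."""
    payload = json.dumps({"author": author, "content": content}).encode()
    return request.Request(NODE + "/new_transaction", data=payload,
                           headers={"Content-Type": "application/json"})

def run_demo():
    """Submit a post, mine it, and print the resulting chain length."""
    request.urlopen(new_transaction_request("alice", "Hello, blockchain!"))
    request.urlopen(NODE + "/mine")
    with request.urlopen(NODE + "/chain") as resp:
        chain = json.loads(resp.read())
    print(chain["length"], "blocks on the chain")

# With a node running on NODE, call run_demo() to exercise all three endpoints.
```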
Establish consensus and decentralization

So far, the blockchain runs on a single computer. To let multiple nodes participate, each node maintains a list of peers and exposes endpoints for registering new nodes:

# Contains the host addresses of other participating members of the network
peers = set()

# Endpoint to add new peers to the network
@app.route('/register_node', methods=['POST'])
def register_new_peers():
    # The host address of the peer node
    node_address = request.get_json()["node_address"]
    if not node_address:
        return "Invalid data", 400

    # Add the node to the peer list
    peers.add(node_address)

    # Return the blockchain to the newly registered node so that it can sync
    return get_chain()

@app.route('/register_with', methods=['POST'])
def register_with_existing_node():
    """
    Internally calls the `register_node` endpoint to register the current
    node with the remote node specified in the request, and syncs the
    blockchain as well with the remote node.
    """
    node_address = request.get_json()["node_address"]
    if not node_address:
        return "Invalid data", 400

    data = {"node_address": request.host_url}
    headers = {'Content-Type': "application/json"}

    # Make a request to register with the remote node and obtain information
    response = requests.post(node_address + "/register_node",
                             data=json.dumps(data), headers=headers)

    if response.status_code == 200:
        global blockchain
        global peers
        # update chain and the peers
        chain_dump = response.json()['chain']
        blockchain = create_chain_from_dump(chain_dump)
        peers.update(response.json()['peers'])
        return "Registration successful", 200
    else:
        # if something goes wrong, pass it on to the API response
        return response.content, response.status_code

def create_chain_from_dump(chain_dump):
    blockchain = Blockchain()
    blockchain.chain = []  # drop the locally generated genesis block; the dump's is used instead
    for idx, block_data in enumerate(chain_dump):
        block = Block(block_data["index"],
                      block_data["transactions"],
                      block_data["timestamp"],
                      block_data["previous_hash"])
        proof = block_data['hash']
        if idx > 0:
            added = blockchain.add_block(block, proof)
            if not added:
                raise Exception("The chain dump is tampered!!")
        else:
            # the block is a genesis block, no verification needed
            block.hash = proof
            blockchain.chain.append(block)
    return blockchain

A new
node participating in the network can invoke the register_with_existing_node method (via the /register_with endpoint) to register with existing nodes in the network. This will help with the following:

- Asking the remote node to add a new peer to its list of known peers.
- Initializing the blockchain of the new node with that of the remote node.
- Resyncing the blockchain with the network if the node goes off-grid.

Next, we implement a simple consensus algorithm: if a longer valid chain exists anywhere in the network, our chain is replaced with it. First, a helper to check whether an entire chain is valid:

class Blockchain:
    """
    previous code continued...
    """

    @classmethod
    def check_chain_validity(cls, chain):
        """
        A helper method to check if the entire blockchain is valid.
        """
        result = True
        previous_hash = "0"

        # Iterate through every block
        for block in chain:
            block_hash = block.hash
            # remove the hash field to recompute the hash again
            # using the `compute_hash` method.
            delattr(block, "hash")

            if not cls.is_valid_proof(block, block_hash) or \
                    previous_hash != block.previous_hash:
                result = False
                break

            block.hash, previous_hash = block_hash, block_hash

        return result

def consensus():
    """
    Our simple consensus algorithm. If a longer valid chain is found,
    our chain is replaced with it.
    """
    global blockchain

    longest_chain = None
    current_len = len(blockchain.chain)

    for node in peers:
        response = requests.get('{}/chain'.format(node))
        length = response.json()['length']
        chain = response.json()['chain']
        if length > current_len and blockchain.check_chain_validity(chain):
            # Longer valid chain found!
            current_len = length
            longest_chain = chain

    if longest_chain:
        blockchain = longest_chain
        return True

    return False

Finally, we need a way for a node to announce a newly mined block, and an endpoint for the other nodes to accept it:

# endpoint to add a block mined by someone else to
# the node's chain. The node first verifies the block
# and then adds it to the chain.
@app.route('/add_block', methods=['POST'])
def verify_and_add_block():
    block_data = request.get_json()
    block = Block(block_data["index"],
                  block_data["transactions"],
                  block_data["timestamp"],
                  block_data["previous_hash"])

    proof = block_data['hash']
    added = blockchain.add_block(block, proof)

    if not added:
        return "The block was discarded by the node", 400

    return "Block added to the chain", 201

def announce_new_block(block):
    """
    A function to announce to the network once a block has been mined.
    Other nodes can simply verify the proof of work and add it to their
    respective chains.
    """
    for peer in peers:
        url = "{}add_block".format(peer)
        requests.post(url, data=json.dumps(block.__dict__, sort_keys=True))

The announce_new_block method should be called after every block is mined by the node so that peers can add it to their chains:

@app.route('/mine', methods=['GET'])
def mine_unconfirmed_transactions():
    result = blockchain.mine()
    if not result:
        return "No transactions to mine"
    else:
        # Making sure we have the longest chain before announcing to the network
        chain_length = len(blockchain.chain)
        consensus()
        if chain_length == len(blockchain.chain):
            # announce the recently mined block to the network
            announce_new_block(blockchain.last_block)
        return "Block #{} is mined.".format(blockchain.last_block.index)

Build the application

Alright, the blockchain server is all set up. You can see the code up to this point on GitHub. Now it's time to start working on the frontend application:

import datetime
import json

import requests
from flask import render_template, redirect, request

from app import app

# Node in the blockchain network that our application will communicate with
# to fetch and add data.
CONNECTED_NODE_ADDRESS = ""

posts = []

The fetch_posts function gets the data from the node’s /chain endpoint, parses the data, and stores it locally:

def fetch_posts():
    """
    Function to fetch the chain from a blockchain node, parse the
    data, and store it locally.
    """
    get_chain_address = "{}/chain".format(CONNECTED_NODE_ADDRESS)
    response = requests.get(get_chain_address)
    if response.status_code == 200:
        content = []
        chain = json.loads(response.content)
        for block in chain["chain"]:
            for tx in block["transactions"]:
                tx["index"] = block["index"]
                tx["hash"] = block["previous_hash"]
                content.append(tx)

        global posts
        posts = sorted(content,
                       key=lambda k: k['timestamp'],
                       reverse=True)

The application has an HTML form to take user input and then makes a POST request to a connected node to add the transaction into the unconfirmed transactions pool. The transaction is then mined by the network, and then finally fetched once we refresh our web page:

@app.route('/submit', methods=['POST'])
def submit_textarea():
    """
    Endpoint to create a new transaction via our application
    """
    post_content = request.form["content"]
    author = request.form["author"]

    post_object = {
        'author': author,
        'content': post_content,
    }

    # Submit a transaction
    new_tx_address = "{}/new_transaction".format(CONNECTED_NODE_ADDRESS)

    requests.post(new_tx_address,
                  json=post_object,
                  headers={'Content-type': 'application/json'})

    # Return to the homepage
    return redirect('/')

Run the application

It’s done! You can find the final code on GitHub.

Clone the project:

$ git clone

Install the dependencies:

$ cd python_blockchain_app
$ pip install -r requirements.txt

Start a blockchain node server:

$ export FLASK_APP=node_server.py
$ flask run --port 8000

One instance of our blockchain node is now up and running at port 8000.

Run the application on a different terminal session:

$ python run_app.py

The application should be up and running at.

Figures 1 – 3 illustrate how to post content, request a node to mine, and resync with the chain.

Figure 1. Posting some content
Figure 2. Requesting the node to mine
Figure 3.
Resyncing with the chain for updated data

Running with multiple nodes

To play around by spinning off multiple custom nodes, use the /register_with endpoint to register a new node with the existing peer network. Here’s a sample scenario that you might want to try:

# already running
$ flask run --port 8000 &
# spinning up new nodes
$ flask run --port 8001 &
$ flask run --port 8002 &

You can use the following cURL requests to register the nodes at ports 8001 and 8002 with the one already running at port 8000:

$ curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"node_address": ""}'

$ curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"node_address": ""}'

This will make the node at port 8000 aware of the nodes at ports 8001 and 8002, and vice-versa. The newer nodes will also sync their chains with the existing node so that they are able to participate actively in the mining process.

To update the node with which the frontend application syncs (the default is localhost port 8000), change the CONNECTED_NODE_ADDRESS field in the views.py file.

Once you do all this, you can run the application (python run_app.py) and create transactions (post messages via the web interface); once you mine the transactions, all the nodes in the network will update their chains. The chains of the nodes can also be inspected by invoking the /chain endpoint using cURL or Postman:

$ curl -X GET
$ curl -X GET

Authenticate transactions

You might have noticed a flaw in the application: anyone can use any name and post any content. Also, the post is susceptible to tampering while submitting the transaction to the blockchain network. One way to solve this is by creating user accounts using public-private key cryptography. Every new user needs a public key (analogous to a username) and a private key to be able to post in our application. The keys are used to create and verify the digital signature.
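The sign/verify idea can be sketched with a toy example. Everything below is a hypothetical illustration using textbook RSA with tiny primes; it is NOT secure, and a real application would use a vetted signature library (e.g. an ECDSA implementation) instead:

```python
# Toy sign/verify with textbook RSA — for intuition only, never for production.
import hashlib

p, q = 61, 53                        # tiny demo primes
n = p * q                            # public modulus
e = 17                               # public exponent (shareable, like a username)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

def digest(message: bytes) -> int:
    """Hash the message and reduce it into the RSA modulus range."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """Sign with the private key: only the key holder can produce this."""
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verify with the public key: anyone can check the signature."""
    return pow(signature, e, n) == digest(message)

post = b'{"author": "alice", "content": "hello"}'
signature = sign(post)
assert verify(post, signature)                  # untampered: accepted
assert not verify(post, (signature + 1) % n)    # altered signature: rejected
```

During mining, a node would run the verify step on each pending transaction before including it in a block.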
Here’s how it works:

- Every new transaction submitted (i.e., every post submitted) is signed with the user’s private key. This signature is added to the transaction data along with the user information.
- During the verification phase, while mining the transactions, we can verify that the claimed owner of the post is the same as the one specified in the transaction data, and also that the message has not been modified. This can be done using the signature and the public key of the claimed owner of the post.

Conclusion

This tutorial covered the fundamentals of a public blockchain. If you’ve followed along, you should now be able to implement a blockchain from scratch and build a simple application that allows users to share information on the blockchain. This implementation is not as sophisticated as other public blockchains like Bitcoin or Ethereum (and still has some loopholes), but if you keep asking the right questions as per your requirements, you’ll eventually get there. The key thing to note is that most of the work in designing a blockchain for your needs is about the amalgamation of existing computer science concepts, nothing more!

Next steps

You can spin off multiple nodes on the cloud and play around with the application you’ve built. You can deploy any Flask application to the IBM Cloud. Alternatively, you can use a tunneling service like ngrok to create a public URL for your localhost server, and then you’ll be able to interact with multiple machines.

There’s a lot to explore in this space! Here are several ways to continue building your blockchain skills:

- Continue exploring blockchain technology by getting your hands on the new IBM Blockchain Platform. You can quickly spin up a blockchain pre-production network, deploy sample applications, and develop and deploy client applications. Get started!
- Stay in the know with the Blockchain Newsletter from IBM Developer. Check out recent issues and subscribe.
- Stop by the Blockchain hub on IBM Developer.
It’s your source for tools and tutorials, along with code and community support, for developing and deploying blockchain solutions for business. Continue building your blockchain skills through the IBM Developer Blockchain learning path, which gives you the fundamentals and then shows you how to start creating apps, and offers helpful use cases for perspective. And be sure to check out the many blockchain code patterns on IBM Developer, which provide roadmaps for solving complex problems, and include overviews, architecture diagrams, process flows, repo pointers, and additional reading.
https://developer.ibm.com/languages/python/tutorials/develop-a-blockchain-application-from-scratch-in-python/
This tutorial covers the required steps to get a Spark application up and running, including submitting an application to a Spark cluster.

Goal

The goal is to read in data from a text file, perform some analysis using Spark, and output the data. This will be done both as a standalone (embedded) application and as a Spark job submitted to a Spark master node.

Step 1: Environment setup

Before we write our application we need a key tool called an IDE (Integrated Development Environment). I've found IntelliJ IDEA to be an excellent (and free) IDE for Java. I also recommend PyCharm for Python projects.

- Download and install IntelliJ (community edition).

Step 2: Project setup

- With IntelliJ ready we need to start a project for our Spark application. Start IntelliJ and select File -> New -> Project...
- Select "Maven" on the left column and a Java SDK from the dropdown at top. If you don't have a Java SDK available you may need to download one from Oracle. Hit next.
- Select a GroupId and ArtifactId. Feel free to choose any GroupId, since you won't be publishing this code (typical conventions). Hit next.
- Give your project a name and select a directory for IntelliJ to create the project in. Hit finish.

Step 3: Including Spark

- After creating a new project IntelliJ will open the project. If you expand the directory tree on the left you'll see the files and folders IntelliJ created. We'll first start with the file named pom.xml, which defines our project's dependencies (such as Spark) and how to build the project. All of this is handled by a tool called Maven.
- Open IntelliJ Preferences and make sure "Import Maven projects automatically" and "Automatically download: |x| Sources |x| Documentation" are checked under Build, Execution, Deployment -> Build Tools -> Maven -> Importing, on the left. This tells IntelliJ to download any dependencies we need.
- Add the following snippet to pom.xml, above the </project> tag. See the complete example pom.xml file here.
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
</dependencies>

This tells Maven that our code depends on Spark and to bundle Spark with our project.

Step 4: Writing our application

- Select the "java" folder on IntelliJ's project menu (on the left), right click and select New -> Java Class. Name this class SparkAppMain.

To make sure everything is working, paste the following code into the SparkAppMain class and run the class (Run -> Run... in IntelliJ's menu bar):

public class SparkAppMain {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

You should see "Hello World" print out below the editor window.

Now we'll finally write some Spark code. Our simple application will read from a csv of National Park data. The data is here, originally from wikipedia. To make things simple for this tutorial I copied the file into /tmp. In practice such data would likely be stored in S3 or on a hadoop cluster.

Replace the main() method in SparkAppMain with this code:

public static void main(String[] args) throws IOException {
    SparkConf sparkConf = new SparkConf()
            .setAppName("Example Spark App")
            .setMaster("local[*]"); // Delete this line when submitting to a cluster
    JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
    JavaRDD<String> stringJavaRDD = sparkContext.textFile("/tmp/nationalparks.csv");
    System.out.println("Number of lines in file = " + stringJavaRDD.count());
}

Run the class again. Amid the Spark log messages you should see "Number of lines in file = 59" in the output. We now have an application running embedded Spark; next we'll submit the application to run on a Spark cluster.

Step 5: Submitting to a local cluster

To run our application on a cluster we need to remove the "Master" setting from the Spark configuration so our application can use the cluster's master node. Delete the .setMaster("local[*]") line from the app.
Here's the new main() method:

public static void main(String[] args) throws IOException {
    SparkConf sparkConf = new SparkConf()
            .setAppName("Example Spark App");
    JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
    JavaRDD<String> stringJavaRDD = sparkContext.textFile("/tmp/nationalparks.csv");
    System.out.println("Number of lines in file = " + stringJavaRDD.count());
}

We'll use Maven to compile our code so we can submit it to the cluster. Run the command mvn install from the command line in your project directory (you may need to install Maven). Alternatively you can run the command from IntelliJ by selecting View -> Tool Windows -> Maven Projects, then right click on install under Lifecycle and select "Run Maven Build". You should see the compiled jar at target/spark-getting-started-1.0-SNAPSHOT.jar in the project directory.

- Download a packaged Spark build from this page, select "Pre-built for Hadoop 2.6 and later" under "package type". Move the unzipped contents (i.e. the spark-1.6.1-bin-hadoop2.6 directory) to the project directory (spark-getting-started).

Submit the Job!

From the project directory run:

./spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
  --master local[*] \
  --class SparkAppMain \
  target/spark-getting-started-1.0-SNAPSHOT.jar

This will start a local spark cluster and submit the application jar to run on it. You will see the result, "Number of lines in file = 59", output among the logging lines.

Step 6: Submit the application to a remote cluster

Now we'll bring up a standalone Spark cluster on our machine. Although not technically "remote" it is a persistent cluster and the submission procedure is the same. If you're interested in renting some machines and spinning up a cluster in AWS see this tutorial from Insight.

To start a Spark master node, run this command from the project directory:

./spark-1.6.1-bin-hadoop2.6/sbin/start-master.sh

View your Spark master by going to localhost:8080 in your browser. Copy the value in the URL: field.
This is the URL our worker nodes will connect to. Start a worker with this command, filling in the URL you just copied for "master-url":

./spark-1.6.1-bin-hadoop2.6/sbin/start-slave.sh spark://master-url

You should see the worker show up on the master's homepage upon refresh. We can now submit our job to this cluster, again pasting in the URL for our master:

./spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
  --master spark://master-url \
  --class SparkAppMain \
  target/spark-getting-started-1.0-SNAPSHOT.jar

On the master homepage (at localhost:8080), you should see the job show up.

This tutorial is meant to show a minimal example of a Spark job. I encourage you to experiment with more complex applications and different configurations. The Spark project provides documentation on how to do more complex analysis. Below are links to books I've found helpful; it helps support Data Science Bytes when you purchase anything through these links.

Similar Posts

- Creating a Spark Streaming Application in Java, Score: 0.999
- Spark 1.2.0 released, Score: 0.980
- Spark 1.3.0 released, Score: 0.975
- IPython 3.0 released, Score: 0.972
- First Look at AWS Machine Learning, Score: 0.899
https://www.datasciencebytes.com/bytes/2016/04/18/getting-started-with-spark-running-a-simple-spark-job-in-java/
public class JDBCUtils extends Object

Although this class is not used directly by the org.geotools.jdbc classes, it is used downstream in GeoServer.

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public static void close(Statement statement)

statement - The statement to close. This can be null, since it makes it easy to close statements in a finally block.

public static void close(ResultSet rs)

rs - The ResultSet to close. This can be null, since it makes it easy to close result sets in a finally block.

public static void close(Connection conn, Transaction transaction, SQLException sqlException)

Connections are maintained by a Transaction, and we will need to manage them with respect to their Transaction.

Jody here - I am forcing this to be explicit, by requiring that you give the Transaction context when you close a connection; it seems to be the only way to hunt all the cases down. AttributeReaders based on QueryData rely on this. I considered accepting an error flag to control Transaction rollback, but I really only want to capture SQLExceptions that force transaction rollback.

conn - The Connection to close. This can be null, since it makes it easy to close connections in a finally block.
transaction - Context for the connection; we will only close the connection for Transaction.AUTO_COMMIT.
sqlException - Error status, null for no error
http://docs.geotools.org/stable/javadocs/org/geotools/data/jdbc/JDBCUtils.html
Section D.15 Exam Prep Sheet

This sheet was created by the tutors. The exercises are neither relevant nor irrelevant for the exam. This collection of exercises does not necessarily cover all the material subject to the exam. You should still revise all material from the lecture notes, exercise sheets, and lectures on your own.

1. Multiple Choice. For each of the following statements decide if they are true or false. If you think that a statement is false try to explain to yourself why it is false.

1. The MIPS calling convention specifies that the stack pointer and the s-registers need to be restored by the called function before jumping back.
2. To load a word its address needs to be divisible by 4.
3. int foo(int x); is a valid function declaration in C.
4. Every expression can be R-evaluated.
5. In C it always holds that sizeof(char) is smaller than or equal to sizeof(long).
6. Dividing by zero is undefined behaviour in Java.
7. boolean and int can be implicitly converted to each other in Java.
8. A method without visibility modifiers can be called by methods within the same package.
9. The load factor of a hash table can be computed by dividing the number of inserted elements by the size of the hash table.
10. To run through every position in a hash table using quadratic probing it is not important to use a surjective hash function.
11. Linear probing aims to prevent the formation of clusters.
12. Black box tests try to achieve full branch, path, and code coverage.
13. In a Hoare triple we can strengthen preconditions.
14. An assertion A is called stronger than an assertion B if \(B \Rightarrow A\text{.}\)

202207221400

1. True.
2. True.
3. True.
4. True.
5. True.
6. False, an exception is thrown.
7. False.
8. True.
9. True.
10. False, if the hash function isn't surjective it is not possible to reach every position.
11. False, clusters form when using linear probing; quadratic probing aims to prevent them.
12. False, black box testing tests whether a program satisfies a specification.
13. True.
14. False, A is stronger than B if \(A \Rightarrow B\text{.}\)

2.
Insertion Sort in MIPS. Look at the following code implementing insertion sort.

- Fill in the gaps such that swap swaps two elements in an array. The addresses of the elements to be swapped are passed via the registers $a0 and $a1.

.text
swap: # $a0, $a1 addresses of the elements to be swapped
    lw ____ ____
    lw ____ ____
    sw ____ ____
    sw ____ ____
    jr $ra

- Now fill in the gaps such that insertion_sort follows the calling convention.

    ____ ____ ____
    sw ____ ____
    sw ____ ____
    sw ____ ____
    sw ____ ____
    sw ____ ____
    sw ____ ____
    sw ____ ____
    sw ____ ____
    move $a0 $t1
    move $a1 $t2
    jal swap
    lw ____ ____
    lw ____ ____
    lw ____ ____
    lw ____ ____
    lw ____ ____
    lw ____ ____
    lw ____ ____
    lw ____ ____
    addiu $sp $sp 32
    addiu $t1 $t1 -4
    b loop_2
end:
    jr $ra

202207221400

# a)
.text
swap: # $a0, $a1 addresses of the elements to be swapped
    lw $t0 ($a0)
    lw $t1 ($a1)
    sw $t0 ($a1)
    sw $t1 ($a0)
    jr $ra

# b)
    addiu $sp $sp -32
    sw $a0 0($sp)
    sw $a1 4($sp)
    sw $t0 8($sp)
    sw $t1 12($sp)
    sw $t2 16($sp)
    sw $t5 20($sp)
    sw $t6 24($sp)
    sw $ra 28($sp)
    move $a0 $t1
    move $a1 $t2
    jal swap
    lw $a0 0($sp)
    lw $a1 4($sp)
    lw $t0 8($sp)
    lw $t1 12($sp)
    lw $t2 16($sp)
    lw $t5 20($sp)
    lw $t6 24($sp)
    lw $ra 28($sp)
    addiu $sp $sp 32
    addiu $t1 $t1 -4
    b loop_2
end:
    jr $ra

3. Pointergolf. Which output does the following program print? Try to figure out the solution without using a computer. You may find it helpful to write down an execution log, but you are not obligated to do so.

#include <stdio.h>

int main() {
    int a = 3;
    int b = 7;
    int c = 5;
    int m[5] = {1, 2, 3, 4, 5};
    int* pa = &a;
    int* pb = &b;
    int* pc = &c;
    int** ppa = &pa;

    a = *pb;
    pc = &m[3];
    *pc = *pb + 3;
    pb = pc - 1;
    *pb = **ppa;
    *(pc + 1) = c + *pa;
    m[0] = *(pb - 1) + 3;

    printf("a = %d\n", a);
    printf("b = %d\n", b);
    printf("c = %d\n", c);
    printf("m[0] = %d\n", m[0]);
    printf("m[1] = %d\n", m[1]);
    printf("m[2] = %d\n", m[2]);
    printf("m[3] = %d\n", m[3]);
    printf("m[4] = %d\n", m[4]);
    return 1;
}

Write down your solution here:
a =
b =
c =
m[0] =
m[1] =
m[2] =
m[3] =
m[4] =

202207221400

a = 7
b = 7
c = 5
m[0] = 5
m[1] = 2
m[2] = 7
m[3] = 10
m[4] = 12

4. Test Suite.
The following C program calculates the n-th tetranacci number. The tetranacci numbers are a generalization of the Fibonacci numbers.

int tetranacci(int n) {
    if (n < 4) {
        if (n < 3) {
            if (n == 2) {
                return 1;
            } else {
                if (n == 1) {
                    return 1;
                } else {
                    return 0;
                }
            }
        } else {
            return 2;
        }
    } else {
        return tetranacci(n - 4) + tetranacci(n - 3)
             + tetranacci(n - 2) + tetranacci(n - 1);
    }
}

Write a test suite for the program which covers all statements. Extend your test suite such that it covers all branches, if possible.

202207221400

#include <assert.h>

void test_n_zero() {
    int result = tetranacci(0);
    assert(result == 0);
}

void test_n_one() {
    int result = tetranacci(1);
    assert(result == 1);
}

void test_n_two() {
    int result = tetranacci(2);
    assert(result == 1);
}

void test_n_three() {
    int result = tetranacci(3);
    assert(result == 2);
}

void test_recursion() {
    int result = tetranacci(4);
    assert(result == 4);
}

Here, we cannot cover all statements without also covering all branches. We are therefore already done.

5. C0p(b). Draw the abstract syntax tree of the C0p program

{
    x = 1;
    y = &x;
    if (*y>0) abort(); else x = x + 41;
}

Execute the following C0pb program on the state \(\sigma = (\{\}; \{\})\text{.}\) Explicitly state the evaluation of the expressions \(\texttt{**z}\) and \(\texttt{*y - *(\&x)}\text{.}\)

{
    int x;
    int *y;
    int **z;
    x = 42;
    y = &x;
    z = &y;
    if (*y>0) **z = *y - *(&x); else abort();
}

202207221400

Abstract Syntax Tree: Figure D.15.1. To improve readability, we use the shorthand S to denote the if statement.

Execution: Figure D.15.2.
Let
\begin{equation*}
\sigma' := \rho_1 ,\rho_2' ;\mu' := \{ \},\{\texttt x \mapsto \adra, \texttt y \mapsto \adrb, \texttt z \mapsto \adrc\};\{\adra \mapsto 42, \adrb \mapsto \adra, \adrc \mapsto \adrb\}.
\end{equation*}
\begin{align*}
\denotL{\texttt{**z}} \sigma' \amp= \denotR{\texttt{*z}} \sigma'\\
\amp= \mu'(\denotL{\texttt{*z}}\sigma')\\
\amp= \mu'(\denotR{\texttt{z}}\sigma')\\
\amp= \mu'(\mu'(\denotL{\texttt{z}}\sigma'))\\
\amp= \mu'(\mu'(\rho_2' \texttt{z}))\\
\amp= \mu'(\mu'(\adrc))\\
\amp= \mu'(\adrb)\\
\amp= \adra
\end{align*}
\begin{align*}
\denotR{\texttt{*y-*(\&x)}} \sigma' \amp= \denotR{\texttt{*y}} \sigma' - \denotR{\texttt{*(\&x)}} \sigma'\\
\amp= \mu'(\denotL{\texttt{*y}} \sigma') - \mu'(\denotL{\texttt{*(\&x)}}\sigma')\\
\amp= \mu'(\denotR{\texttt{y}} \sigma') - \mu'(\denotR{\texttt{\&x}}\sigma')\\
\amp= \mu'(\mu'(\denotL{\texttt{y}} \sigma')) - \mu'(\denotL{\texttt{x}}\sigma')\\
\amp= \mu'(\mu'(\rho_2' \texttt{y})) - \mu'(\rho_2' \texttt{x})\\
\amp= \mu'(\mu'(\adrb)) - \mu'(\adra)\\
\amp= \mu'(\adra) - 42\\
\amp= 0
\end{align*}

6. Cars. Fridolin would really like to buy his first car. Because this is a big expense he wants to compare several cars beforehand. Please help him by first implementing a class Car.

- Add a reasonable constructor. Each car has an associated number plate, the remaining distance in kilometers, and a price (in euros). Implement getters for these fields.
- Add a method drive(int kilometers) which takes the number of kilometers driven. Update the remaining distance accordingly. If the car cannot drive that far, throw a generic exception.

Now create a second class ElectricCar extending Car.

- In addition to the properties from Car it has an associated maximum distance it can drive, which the constructor should take into consideration. Electric cars get delivered fully charged. Create a getter for the new property.
- Implement a method charge() which fully charges the electric car (sets the remaining distance to the maximum distance).

Make sure all methods in both classes are publicly accessible.
```java
public class Car {
    int remainingDistance;
    String numberPlate;
    int price;

    public Car(int remainingDistance, String numberPlate, int price) {
        this.remainingDistance = remainingDistance;
        this.numberPlate = numberPlate;
        this.price = price;
    }

    public int getRemainingDistance() { return this.remainingDistance; }

    public String getNumberPlate() { return this.numberPlate; }

    public int getPrice() { return this.price; }

    public void drive(int kilometers) throws Exception {
        if (this.remainingDistance < kilometers)
            throw new Exception();
        this.remainingDistance -= kilometers;
    }
}

public class ElectricCar extends Car {
    int maxDistance;

    public ElectricCar(int maxDistance, String numberPlate, int price) {
        super(maxDistance, numberPlate, price);
        this.maxDistance = maxDistance;
    }

    public int getMaxDistance() { return this.maxDistance; }

    public void charge() { this.remainingDistance = maxDistance; }
}
```

7. Inheritance Hierarchy. Which method is called in each case? State the output of the program.

```java
public class A {
    public void fun(B b) { System.out.println("A.fun(B)"); }
    public void fun(D d) { System.out.println("A.fun(D)"); }
}

public class B extends A {
    public void fun(A a) { System.out.println("B.fun(A)"); }
    public void fun(C c) { System.out.println("B.fun(C)"); }
    public void fun(E e) { System.out.println("B.fun(E)"); }
}

public class C extends B {
    public void fun(B b) { System.out.println("C.fun(B)"); }
}

public class D extends B { }

public class E extends D {
    public void fun(B b) { System.out.println("E.fun(B)"); }
    public void fun(C c) { System.out.println("E.fun(C)"); }
}

public class F extends A {
    public void fun(A a) { System.out.println("F.fun(A)"); }
    public void fun(D d) { System.out.println("F.fun(D)"); }
}

public class G extends F { }

public class Main {
    public static void main(String[] args) {
        A aa = new A(); A ab = new B(); A ac = new C(); A ag = new G(); A af = new F();
        B bb = new B(); B bc = new C(); B bd = new D(); B be = new E();
        C cc = new C(); D dd = new D(); D de = new E(); E ee = new E();
        F ff = new F(); F fg = new G(); G gg = new G();
        aa.fun(bd); ac.fun(bb); ac.fun(be); af.fun(dd); af.fun(ab); ag.fun(cc);
        bb.fun(ab); bb.fun(bc); bc.fun(bb); bc.fun(ff); bc.fun(dd); bd.fun(cc);
        be.fun(bc); be.fun(cc); be.fun(dd); cc.fun(de); cc.fun(ee); dd.fun(dd);
        dd.fun(bd); dd.fun(fg); ee.fun(ee); ee.fun(cc); ff.fun(fg); gg.fun(de);
        gg.fun(bc);
    }
}
```

8. Hashing Pets. Your best friend Konrad Klug would like to build up his own pet shop 'Prog Pets'. To keep track of which pets can be sold, he would like to have a list in his online shop where they are stored. Because Konrad tried to save some money, the computing power of his web server is not very high, and therefore searching in big data structures takes much time. No Hau heard of his problem and suggested using hash tables. Konrad has prepared the following classes.

```java
public class Snake {
    private short length;        // length in cm (max. 900cm)
    private short weight;        // weight in kg (max. 300kg)
    private boolean isPoisonous;
    // ...
}

public class Fish {
    private String name;
    private int shedNumber;

    @Override
    public boolean equals(Object o) {
        return shedNumber == o.shedNumber;
    }

    @Override
    public int hashCode() {
        return shedNumber / 100;
    }
    // ...
}
```

Implement the hashCode and equals methods for the class Snake. Konrad implemented the hashCode and equals methods for the class Fish himself, but he made a mistake. Specify the mistake he made and correct the methods. Hash the following fishes, represented by instances of Fish, with the corrected methods into a hash table of size 5. Use collision lists to resolve collisions.

```java
@Override
public boolean equals(Object o) {
    if (!(o instanceof Snake)) return false;
    Snake s = (Snake) o;
    return this.length == s.length
        && this.weight == s.weight
        && this.isPoisonous == s.isPoisonous;
}

@Override
public int hashCode() {
    return (this.isPoisonous ? 1 : 0) * 1000000 + this.weight * 1000 + this.length;
}
```

Konrad forgot to check if the given parameter o is of type Fish.
The hashCode method is perhaps not that well chosen, but it is sufficient for this exercise. The name attribute could also have been taken into account.

```java
public class Fish {
    private String name;
    private int shedNumber;

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Fish)) return false;
        return shedNumber == ((Fish) o).shedNumber;
    }

    @Override
    public int hashCode() {
        return shedNumber / 100;
    }
    // ...
}
```

9. Compiler Impersonation. Consider the following type environment: \(\Gamma := \{ \CVar{i} \mapsto \CInt, \CVar{it} \mapsto \CPtr\CChar, \CVar{len} \mapsto \CInt \}\)

Check whether the following C0t program has any type errors with respect to the type environment given above:

```c
while (i < len) {
    void* vp;
    vp = it + i;
}
```

Generate MIPS code for the subexpression vp = it + i;. Evaluate the left-hand side of binary operators first. Assume the offsets \(\{\CVar{i} \mapsto 0, \CVar{it} \mapsto 4, \CVar{len} \mapsto 8, \CVar{vp} \mapsto 12\}\text{.}\)

The program is well-typed. (The typing and code-generation derivation trees are omitted here.) The following listings contain the code used in the derivation trees.

Listing D.15.3, code \(c_1\):
```
addiu $t1 $sp 4
```

Listing D.15.4, code \(c_2\):
```
addiu $t1 $sp 4
lw $t1 ($t1)
```

Listing D.15.5, code \(c_3\):
```
addiu $t2 $sp 0
```

Listing D.15.6, code \(c_4\):
```
addiu $t2 $sp 0
lw $t2 ($t2)
```

Listing D.15.7, code \(c_5\):
```
addiu $t1 $sp 4
lw $t1 ($t1)
addiu $t2 $sp 0
lw $t2 ($t2)
addu $t1 $t1 $t2
```

Listing D.15.8, code \(c_6\):
```
addiu $t0 $sp 12
```

Listing D.15.9, code \(c_7\):
```
addiu $t0 $sp 12
addiu $t1 $sp 4
lw $t1 ($t1)
addiu $t2 $sp 0
lw $t2 ($t2)
addu $t1 $t1 $t2
sw $t1 ($t0)
```

10. Find the invariant. Take a look at the following loop, find an invariant, and prove that your invariant is valid using Hoare logic derivation rules.

```c
sum = 0;
i = 1;
while (i <= n) {
    sum = sum + i;
    i = i + 1;
}
```
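For exercise 10, one invariant that works is sum = i·(i−1)/2 together with 1 ≤ i ≤ n+1; this particular choice is ours, the sheet leaves it open. A small Python sketch can check that the candidate holds on entry and is preserved by every loop iteration:

```python
def check_invariant(n):
    """Run the loop and assert the invariant at every program point."""
    sum_, i = 0, 1
    # Invariant candidate: sum == i*(i-1)//2 and 1 <= i <= n+1
    assert sum_ == i * (i - 1) // 2          # holds on loop entry
    while i <= n:
        sum_ = sum_ + i
        i = i + 1
        assert sum_ == i * (i - 1) // 2      # preserved by the body
        assert 1 <= i <= n + 1
    # On exit, the invariant plus i > n gives sum == n*(n+1)//2
    return sum_

results = [check_invariant(n) for n in range(6)]
print(results)
```

If any assertion fired, the candidate would not be an invariant; since none do, it is a plausible basis for the Hoare logic proof.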
https://prog2.de/book/sheet_exam_prep.html
PROBLEM LINK:

Author: Kunal Demla
Editorialist: Kunal Demla

DIFFICULTY:
Easy

PREREQUISITES:
None

PROBLEM:
Given a number of palindromic binary strings and the game rules, find the optimal strategy to win the game.

QUICK EXPLANATION:
Count the number of 0s in the string. The answer depends only on that count.

EXPLANATION:
When the number of 0s = 0: Tie.
When the number of 0s = 1: the player who moves first loses.
When the number of 0s = 2: the player who moves first loses. Take players A and B and the string 1001:

A uses move 1 to change it to 1101.
B uses move 2 to change it to 1011.
A uses move 1 to change it to 1111.
B wins.

Using this, we can see that this case is a subproblem of all larger counts, and we can derive:

When the number of 0s is odd, the first player wins.
When the number of 0s is even, the second player wins.

ALTERNATE EXPLANATION:
The problem can be broken into parts and solved, but that would just be a waste of time.

SOLUTIONS:

Setter's Solution

```cpp
#include <bits/stdc++.h>
using namespace std;
#define ll long long int

void solve() {
    ll n = 0;
    string s;
    cin >> s;
    for (auto ch : s) {
        if (ch == '0') n++;
    }
    if (n == 0) cout << "Tie";
    else if (n == 1) cout << "Kunal";
    else if (n % 2) cout << "Anamika";
    else cout << "Kunal";
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
        cout << "\n";
    }
    cerr << "time taken : " << (float)clock() / CLOCKS_PER_SEC << " secs" << endl;
    return 0;
}
```
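For readers who prefer Python, the same counting logic can be sketched as follows (a re-implementation of the setter's solution; the output strings Tie, Kunal, and Anamika are taken from the C++ code above):

```python
def winner(s: str) -> str:
    """Decide the game from the number of 0s, mirroring the editorial."""
    zeros = s.count("0")
    if zeros == 0:
        return "Tie"
    if zeros == 1 or zeros % 2 == 0:
        return "Kunal"
    return "Anamika"

# e.g. the walked-through string "1001" has two 0s
print(winner("1111"), winner("101"), winner("1001"), winner("10001"))
```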
https://discuss.codechef.com/t/pallgame101-editorial/102535
J. Brisbin's blog on the Virtual Private Cloud, RabbitMQ, and Web 2.x – Asynchronous, messaging-oriented applications are pretty hard to visualize. In writing the vcloud session manager, I was finding I couldn't really see in my head what was happening when I dumped a message into queue "X", or why it wasn't showing up in consumer "Y". I wrote the Groovy DSL for RabbitMQ to help me with this. Now I want to show you a more in-depth example of how I'm using the DSL to rapidly prototype applications and help me see, in concrete terms, how exchanges, queues, and messages are wired together.

One of the hurdles I'm having to overcome with all these virtual cloud components (a dozen or more virtual machines that do various things) is keeping configuration files in sync. At the moment I'm using a combination of cron jobs and shared NFS mounts. Neither of these is what I'm really after, though. Sure, they work, but they're not very dynamic and they suffer from single points of failure. To get away from that, I'm writing some utilities (some in Python, some in Java) to keep these configuration files in sync. To visualize how this application is going to work, I'm prototyping it in Groovy, using the RabbitMQ DSL.

First Steps

One of the first things I know I need is a membership manager. In my prototype, I'll use an in-memory array and a very simple "fanout" exchange to listen for membership events:

```groovy
def members = []
mq.exchange name: "vcloud.config.membership", type: "fanout"
```

When we get a membership message, we'll push the node name onto the members stack:

```groovy
// Manage membership
queue(name: "master.membership", routingKey: "") {
  consume onmessage: {msg ->
    println "Received ${msg.envelope.routingKey} event " +
        "from ${msg.bodyAsString}..."
    if(msg.envelope.routingKey == "join") {
      members << msg.bodyAsString
      println "Members: ${members.toString()}"
      return msg.envelope.routingKey != "exit"
    }
  }
}
```

Now, I'll publish a couple membership messages to this queue to see my "println" messages in the console, and see the membership list change. The full Groovy DSL code to do this is:

```groovy
println "Creating membership exchange"
mq.exchange(name: "vcloud.config.membership", type: "fanout") {
  // Manage membership
  queue(name: "master.membership", routingKey: "") {
    consume onmessage: {msg ->
      println "Received ${msg.envelope.routingKey} event " +
          "from ${msg.bodyAsString}..."
      if(msg.envelope.routingKey == "join") {
        members << msg.bodyAsString
        println "Members: ${members.toString()}"
      }
      return msg.envelope.routingKey != "exit"
    }
  }
  // Join two nodes
  publish routingKey: "join", body: "dev1"
  publish routingKey: "join", body: "dev2"
}
```

Running this results in the following console output:

```
Creating membership exchange
Received join event from dev1...
Members: [dev1]
Received join event from dev2...
Members: [dev1, dev2]
```

This is very helpful because it lets me quickly comprehend how my queues exist under specific exchanges. It lets me visually connect message publishing to message consuming. If I wanted to, I could now take this Groovy code and code a real application against it. Of course, there's nothing stopping me from simply writing the "real" application entirely in Groovy, either!

More Consumers

Now that I've got a skeleton upon which to prototype my membership manager, I can create a couple queues that simulate the actual nodes that will be joining the cloud and listening for configuration file change events.
This exchange will be a topic exchange, so I can send messages to the entire group:

```groovy
// Listen for incoming config events on test node 1
queue(name: "dev1", routingKey: "dev1") {
  consume tag: "dev1.config", onmessage: {msg ->
    println " ***** INCOMING: ${msg.toString()}"
    if(msg.properties.headers["key"]) {
      println "(dev1) Config change for key: ${msg.properties.headers['key']}"
      // Also let node 2 know about this
      send("vcloud.config.events", "dev2", ["key": "firstconfig"], msg.body)
    }
    return false
  }
}
```

This dumps the incoming message to stdout and forwards the message on to a second node, which has a virtually identical consumer.
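If you want to experiment with the wiring without installing RabbitMQ or Groovy, the fanout membership logic above can be simulated with a toy in-memory exchange. The Python sketch below is purely illustrative; FanoutExchange and membership_consumer are made-up names, not part of the DSL or the AMQP client libraries:

```python
class FanoutExchange:
    """Toy fanout exchange: every bound consumer receives every message."""
    def __init__(self):
        self.consumers = []

    def bind(self, consumer):
        self.consumers.append(consumer)

    def publish(self, routing_key, body):
        for consumer in self.consumers:
            consumer(routing_key, body)

members = []

def membership_consumer(routing_key, body):
    # Mirrors the Groovy consumer: only "join" events grow the list
    print(f"Received {routing_key} event from {body}...")
    if routing_key == "join":
        members.append(body)
        print(f"Members: {members}")

exchange = FanoutExchange()
exchange.bind(membership_consumer)
exchange.publish("join", "dev1")
exchange.publish("join", "dev2")
```

Running it prints the same kind of console trace as the Groovy prototype and leaves members as ["dev1", "dev2"].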
http://www.itworld.com/article/2756776/software-as-a-service/rapid-application-prototyping-with-groovy-and-rabbitmq.html
The production version of TypeScript 3.4, the latest version of Microsoft’s typed superset of JavaScript, has arrived, with improvements for builds and type-checking.

Where to download TypeScript

You can download TypeScript through NuGet, or you can get it via NPM:

npm install -g typescript

Current version: The new features in TypeScript 3.4

- A new compilation flag, --incremental, provides faster subsequent builds. This flag tells TypeScript to save information about the project graph from the last compilation. When TypeScript is again invoked with --incremental, it will use the data to detect the least costly way to type-check and emit project changes.
- Type-checking is introduced for the ECMAScript globalThis global variable, providing a standard way of accessing the global scope that can be used across different environments.
- Read-only array types are now easier to express, with a new syntax for ReadonlyArray using a readonly modifier for array types.
- Support is introduced for read-only tuples; any tuple type can be prefixed with the readonly keyword to make it a read-only tuple.
- The readonly modifier in a mapped type automatically converts array-like types to a corresponding read-only array type.
- A new construct has been introduced for literal values, const assertions. The syntax is a type assertion with const in place of the type name. When literal expressions are constructed with const assertions, developers can signal that no literal type in that expression should be widened, that object literals get readonly properties, and that array literals become readonly tuples.
- As a breaking change, the type of top-level this is now typed as typeof globalThis instead of any. As a consequence, developers might receive errors for accessing unknown values on this under noImplicitAny.
- Another breaking change is that improved inference in TypeScript 3.4 might produce generic functions rather than functions that take and return constants.
Previous version: The new features in TypeScript 3.3 Released in late January 2019, TypeScript 3.3 has incremental file watching for composite projects, enabling faster builds by rechecking and re-emitting only files that have changed, rather than performing a complete build of the entire project. The --watch flag of --build mode automatically takes advantage of this new feature. In tests, the --build --watch functionality resulted in a 50 percent to 75 percent reduction in build times compared to the original --build --watch. TypeScript 3.3 also features improved behavior for calling union types. Also, the TypeScript plug-in for Sublime Text now supports editing in JavaScript files. Previous version: The new features in TypeScript 3.2 Released in November 2018, TypeScript 3.2 introduces stricter checking on the apply, bind, and call methods. These are methods on functions enabling capabilities such as binding this and partially applying arguments and call functions with a different value for this. Also, call functions can be fitted with an array for arguments. Earlier versions of TypeScript lacked the power to model these functions; apply, bind, and call were all typed to take any number of arguments and return any. Also, ECMAScript 2015’s arrow functions and rest/spread arguments have provided a new syntax to make it easier to express what some of these methods do. With the changes, bind, call, and apply are more strictly checked with the use of a flag, called strictBindCallApply. When this flag is used, methods on callable objects are described by a new global type, CallableFunction, that declares stricter signatures for bind, call, and apply. Also, any methods on objects that are construct-able but not callable are described by a new global type, NewableFunction. Other new features in TypeScript 3.2 include: - Type-checking is enabled for BigInt as well as support for emitting BigInt literals when targeting esnext. 
The new primitive type bigint can be accessed by calling the BigInt() function or by writing out a BigInt literal by adding an n to the end of any integer numeric literal. Also, bigints produce a new string when using the typeof operator: the string "bigint". BigInt support is only available for the esnext target.
- Object spreads are permitted on generics and are modeled using intersections. JavaScript supports copying properties from an existing object into a new one, called “spreads.”
- Object rest on generic types is featured, in which object rest patterns create a new object that lacks specified properties.
- For d.ts, certain parameters no longer accept null. Also, some WebKit-specific properties have been deprecated.

Version 3.2 and future releases will only ship as an MSBuild package and not a standalone compiler package. The MSBuild package depends on the presence of a globally invokable version of Node.js. Computers with Visual Studio 2017 Version 15.8 and later have that Node.js version. TypeScript 3.2 is the last version with editing support for Visual Studio 2015.

Previous version: The new features in TypeScript 3.1

Released in September 2018, TypeScript 3.1 adds properties on function declarations. Thus, for any function or const declaration that is initialized with a function, the type-checker analyzes the containing scope to track added properties. This change is intended to make TypeScript smarter about patterns. With JavaScript, functions are objects, with developers able to tack properties onto them. TypeScript’s traditional approach to this has been through the namespaces construct. But this construct has not aged well. ECMAScript modules have become the preferred mode for organizing new code in TypeScript and JavaScript, but namespaces are TypeScript-specific. Also, namespaces do not merge with var, let, or const declarations, which can make code conversions difficult. These issues can make it tougher to migrate to TypeScript.
Other new features in TypeScript 3.1 include:

- Version redirects for typesVersions, a field in a JSON file that tells TypeScript to check which version of TypeScript is running. This impacts Node module resolution and addresses a situation in which library maintainers have had to choose between supporting new features and not breaking older versions of TypeScript. Developers of TypeScript cited an example in which a library is being maintained that uses the unknown type from TypeScript 3.0; any consumers using earlier versions would be broken. The new capability provides a way to provide types for pre-3.0 versions of TypeScript while also providing types for later versions.
- A refactoring to convert functions that return promises constructed with chains of .then() and .catch() calls into async functions that use await.
- Mapped tuple and array types are featured, with mapped types such as Partial or Required from d.ts now automatically working on tuples and arrays.
- TypeScript 3.1 generates d.ts and other built-in declaration file libraries using Web IDL files from the WHATWG DOM specification. This means lib.d.ts will be easier to keep current, but many vendor-specific types have been removed. This change can break existing code.
- The use of the typeof foo === "function" type guard could also break existing code, providing different results when intersecting with questionable union types composed of {}, Object, or unconstrained generics.

Previous version: What’s new in TypeScript 3.0

Key to TypeScript Version 3.0, released in late July 2018, is a project references capability, which lets projects depend on other projects. With this capability, tsconfig.json files can reference other tsconfig.json files. By specifying these dependencies, it becomes easier to split code into smaller projects, with TypeScript and tools able to understand build order and output structure. As a result, builds become faster.
Developers also gain support for transparently navigating and editing across projects. Other new features in TypeScript 3.0 include:

- Project references allow mapping of an input source to outputs.
- A set of APIs for project references has been added. These references should in the future be able to integrate with a selection of build orchestrators.
- Improved error messages provide guidance so developers can better understand the cause and effect of an error.
- Richer tuple types, so tuples can model parameter lists. Previously, TypeScript could only model the order and count of a parameter set.
- Support for a type alias in the JSX namespace, with LibraryManagedAttributes acting as a helper to tell TypeScript what attributes a JSX tag accepts. This helps with using TypeScript with the React JavaScript UI library, enabling modeling of React behavior for capabilities such as React’s defaultProps property, for filling in values for props that are omitted.
- The unknown type, for describing the least-capable type in JavaScript. This is useful for APIs to signal that a type can be of any value and that type-checking must be performed before use. Returned values must be safely introspected. As a result, the unknown type no longer can be used in type declarations such as interfaces, type aliases, or classes.
- Two productivity features are added for JSX, providing completions for JSX closing tags and collapsible outlining spans.
- Named import refactorings, to help developers switch back and forth between qualifying imports with the modules they came from and individual imports.
- Quick fixes to remove unreachable code and unused labels.
- The deprecated internal method LanguageService#getSourceFile has been removed. So have the deprecated function TypeChecker#getSymbolDisplayBuilder and associated interfaces, as well as the deprecated functions escapeIdentifier and unescapeIdentifier.
Previous version: The new features in TypeScript 2.9

The release candidate released in June 2018 features support for object literals and numeric types, via the keyof operator and mapped object types. keyof, which is already part of TypeScript, provides a way to query the property names of an existing type. But this operator predates TypeScript’s ability to reason about unique symbol types, so it never recognized symbolic keys. TypeScript 2.9 changes the behavior of keyof to factor in unique symbols and literal types. Also, mapped object types such as Partial and Readonly can recognize symbolic and numeric property keys; properties named by symbols will no longer be dropped.

Also in TypeScript 2.9 are:

- Properties can be converted to get- and set-accessors.
- An unused span reporting capability enables two lint-like flags, --noUnusedLocals and --noUnusedParameters, to be surfaced as “unused” suggestion spans.
- Declarations can be moved to their own files, and files can be renamed within a project while keeping import paths current.
- The --pretty mode, to provide a fuller console experience, is now the default mode when TypeScript can figure out that it is printing output to a terminal. It can be turned off, however.
- Type arguments can be placed on tagged template strings.
- The import(…) type syntax, to address a shortcoming in which TypeScript cannot reference a type in another module, or the type of the module itself, without including an import at the top of the file.
- Support for passing generics to JSX elements and passing generics to tagged template calls.
- Adding suggestion diagnostics for open files.
- Showing unused declarations as suggestions.
- Resolving module names with modulename.json to JSON files when resolving node modules.

TypeScript 2.9’s changes can break existing code. Microsoft has cited these issues to be aware of:

- Where developers have assumed that for any type T, keyof T is always assignable to string.
Symbol- and numeric-named properties invalidate this assumption. There are workarounds, such as using Extract<keyof T, string> to restrict symbol and number. Developers can also revert to the old behavior under the --keyofStringsOnly compiler flag, which is meant as a transitionary flag.
- Trailing commas are not permitted on rest parameters; this was done for conformance with the ECMAScript specification for JavaScript.
- Unconstrained type parameters can no longer be assigned to object in strictNullChecks.
- Values of type never can no longer be iterated over, which might be useful in catching a class of bugs. This behavior can be avoided through the use of a type assertion to cast to the type any.

Previous version: The new features in TypeScript 2.8

Released in March 2018, Version 2.8 of TypeScript adds a conditional types construct for modeling. Based on JavaScript’s conditional syntax, conditional types help with modeling of simple choices based on types at runtime while allowing more expressive design-time constructs. The construct takes the following form: A extends B ? C : D. It should be read as “If the type A is assignable to B, then the type boils down to C and otherwise becomes D.” Conditional types also offer a new way to infer new types from the types being compared against, via the new infer keyword, which introduces a new type variable. TypeScript 2.8 also offers new type aliases that use conditional types.

Other new features in TypeScript 2.8 include:
https://www.infoworld.com/article/3249607/whats-new-in-typescript.html
Results 1 to 4 of 4

Thread: Python Problems

dr_springfield (Guest)

Python Problems

I'm very new to python... ran into this error:

```python
import urllib
proxies = {'http': ''}
filehandle = urllib.urlopen('', proxies=proxies)
```

error:

```
AttributeError: 'NoneType' object has no attribute 'read'
```

The problem is in the proxies variable... running without it (proxies=None) generates no error. Anyone know python?

p.s. I took this code straight off the python website describing the urllib module... so it *should* work...

dr_springfield (Guest)

solution

This went away when I upgraded my version of python.

Nice to know - I have not yet tried Python...

dr_springfield (Guest)

I've only used it for a few days now, and I already feel pretty comfortable in it. It's very easy to use. It used to be if I had something to do that I couldn't do by hand (like say... decrypting an encrypted message, or processing a file) I would write up a little terminal C++ program to do it for me, but now that I've used python that seems ridiculous. Python is not only a thousand times easier to do anything I could ever do in C++ (aside from real development), but it also makes stuff that would have been very difficult... such as http operations... very easy. I'd recommend it, you can pick it up in a few days. Oh, and make sure you update first.
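For anyone hitting this thread today: the proxies= keyword belongs to the old Python 2 urllib.urlopen. In modern Python 3, proxy configuration goes through urllib.request and a ProxyHandler instead. A minimal sketch (the proxy address below is a placeholder, not a real proxy):

```python
import urllib.request

# Build an opener that routes http traffic through a proxy.
# "http://proxy.example.com:3128" is a placeholder address.
proxy = urllib.request.ProxyHandler({"http": "http://proxy.example.com:3128"})
opener = urllib.request.build_opener(proxy)

# opener.open("http://...") would fetch a URL through the proxy;
# here we only build the opener, so no network access happens.
print(type(opener).__name__)
```

You can also install the opener globally with urllib.request.install_opener(opener) so that plain urlopen calls use the proxy.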
http://www.mac-forums.com/forums/os-x-development-darwin/6366-python-problems.html
Java is a powerful programming language that is right at home in standalone desktop applications and complex enterprise-level web applications. It is a popular programming language for computer science students because it incorporates just about every key concept of modern object-oriented programming yet remains relatively easy to learn. After you have graduated from console-based (command line) programs, you will want to start creating your own Graphical User Interfaces (GUIs). Fortunately, Java makes this easy thanks to the Swing GUI toolkit. Although creating GUIs with Java is relatively easy, you should still have a basic understanding of Java before starting. Check out the Introduction to Java Training Course if you are unfamiliar with basic Java programming concepts. The Swing toolkit offers developers a platform-independent, customizable, configurable, and lightweight solution that can easily be incorporated into practically any Java program. The toolkit offers basic components like buttons and labels as well as advanced features including trees and tables. The entire toolkit is written in Java and is part of the Java Foundation Classes (JFC). The JFC is a collection of packages designed to create fully featured desktop applications. Other components of the JFC include AWT, Accessibility, Java 2-D, and Drag and Drop. For the purposes of this tutorial, it is assumed you are using the Eclipse IDE. Although an IDE is not required, it makes programming, compiling, and running your program much easier. If you are unfamiliar with using an IDE to create Java programs, check out Java Programming Using Eclipse. Creating a New Swing Project From the homepage of Eclipse, click File and select New Java Project from the drop-down menu. In the setup window that appears, enter the name of your project and then click finish. At the Navigator (upper left-hand corner), right-click the SRC folder and select New Class. 
Enter the class name (in this example it should be HelloWorld) and click Finish. With the environment set up, you are now ready to start adding code and create a basic Hello World program using the Java Swing GUI toolkit. You should also consider checking out Mastering Java Swing for a more detailed overview of creating new projects and using the Swing toolkit.

Hello World GUI

As with any Java program, the first thing you need to do is import the libraries needed for the program. In this case, you only need to import one library using the statement:

```java
import javax.swing.*;
```

The next step is to define the main method and create the class object. If you are unfamiliar with the “main method” terminology, you can brush up on your basic Java skills in Java Programming for Beginners. The code to define the main method and create the class object should look like this:

```java
public class HelloWorld extends JFrame {
    public static void main(String args[]) {
        new HelloWorld();
    }
```

As you can see above, this code creates a public class called HelloWorld. It’s also important to note that this public class extends JFrame; the basis of any Java GUI program. The main method follows, and the third line creates a new instance of the HelloWorld class.

The JFrame is responsible for creating the window where your application “lives.” In its most basic form, a JFrame is a window with a title, border, and an optional menu bar. The frame is also where any components that your application uses are located, such as the JLabel you’ll see below:

```java
    HelloWorld() {
        JLabel jlbHelloWorld = new JLabel("Hello World");
        add(jlbHelloWorld);
```

This code creates a new JLabel within the JFrame that displays the text “Hello World.” The third line adds the JLabel to the visible frame. JLabels can be used to display messages or images as needed. It is a quick and easy way to add text to your program, which is why it works so well in this Hello World example.
The good news is that there are only a couple more lines of code and your first Java GUI program using the Swing toolkit will be complete. Here is what the completed program looks like:

```java
// import statements
import javax.swing.JFrame;
import javax.swing.JLabel;

public class HelloWorld extends JFrame {
    public static void main(String args[]) {
        new HelloWorld();
    }

    HelloWorld() {
        JLabel jlbHelloWorld = new JLabel("Hello World");
        add(jlbHelloWorld);
        this.setSize(100, 100);
        setVisible(true);
    }
}
```

Notice that the size of this program is set to 100 pixels by 100 pixels. The JFrame class allows you to set the size of your application by default, or you can choose to allow the user to change the size as needed. The next line – setVisible(true); – actually makes your application window show up on the user’s computer. As simple as this line seems, you would be surprised by how many new programmers forget to set the visibility of the JFrame. Without setting this value to true, the program will not be visible at runtime.

At this point, your Hello World program is complete and you can run it via the Eclipse IDE. Of course, there is a lot more you can do even with a basic example like this, and the best way to learn is to experiment on your own. You could add buttons, text fields, or even images (using JLabel) to this simple program to make it more interesting and teach valuable programming skills along the way.

If you have become bored with basic command-line Java applications, learning how to use the Swing GUI toolkit is guaranteed to make your programs more user-friendly, interesting, and above all else, fun.
https://blog.udemy.com/java-swing-tutorial/
Can someone pls. confirm that MQTT is running w/ latest FW

Hi all! I'm testing some sensors, and just wanted to push the measurements to my local MQTT broker. But I always get an error in the MQTT lib:

File "main.py", line 51, in <module>
File "/flash/lib/mqtt.py", line 84, in connect
IndexError: bytes index out of range

I did it a thousand times with WiPys 2.0 and LoPys, and it always worked seamlessly.

Cheers, Thomas

Arghh. Got it. It was a typo in the config file of my broker. The broker was running on the wrong port. Thanks. Thomas

@ledbelly2142 You don't use client.connect(). Is this right? The connect() method is the one that gives me the error.

- ledbelly2142

Yes, from this example. Your main.py should have something like:

import socket
import struct
import machine
import time
import ubinascii
import network
from network import WLAN
from network import LoRa
from mqtt import MQTTClient

wlan = network.WLAN(mode=network.WLAN.STA, antenna=WLAN.EXT_ANT)
# wlan.antenna(WLAN.EXT_ANT)  # You may be using the internal antenna
wlan.connect('WiFiSSID', auth=(WLAN.WPA2, 'WiFiPassword'), timeout=5000)
while not wlan.isconnected():
    machine.idle()
print(wlan.ifconfig())
print("Connected to Wifi\n")

client = MQTTClient("nameofMQTTserver", "IPaddressOrURLforMQTTserver",
                    port=1883,
                    user="EnterUserNameForMQTTserver",
                    password="EnterPasswordForMQTTserver")
client.settimeout = settimeout
# You may not be using port 1883; use 8883 if using SSL.

Then test with something like:

client.publish("house001/sensor/sensorId_001/temp/c", "23.48")
# the way this library is set up, it needs the topic, then the topic value to be written

If you are not using MQTTLens, I highly recommend it for Chrome. Simple and easy to use for testing. More on MQTTLens HERE.

The QoS for this library is set to 0 (best effort, similar to TCP/IP); to avoid ping-pong MQTT messaging, use QoS 2 (once and only once) on the subscriber for testing.
- BetterAuto Pybytes Beta

@ledbelly2142 said in "Can someone pls. confirm that MQTT is running w/ latest FW":

The MQTT lib from Pycom (on GitHub) is working for me.

This one?

- ledbelly2142

The MQTT lib from Pycom (on GitHub) is working for me. I did not get an index error.
https://forum.pycom.io/topic/1823/can-someone-pls-confirm-that-mqtt-is-running-w-latest-fw
The @protocol fundamentals

By Muralidharan Padmanaban & Anthony Prakash

Whether you're a developer looking to get started on the @protocol, an enterprise technologist trying to get a deeper understanding, or a casual Internet reader who randomly stumbled upon us, you've come to the right place. This article will give you a basic understanding of how our technology works. So let's get started.

What is the @protocol?

Developed by The @ Company, the @protocol is an open-source, P2P Internet protocol that enables developers and enterprises to handle personal data based on trust and permissions. On the @protocol, people will have the freedom to share, withhold, or retract their information at will through a unique identifier we've titled the @sign. They will no longer be unwittingly entrusting their data to companies.

Imagine that there are two entities with the @signs @alice and @bob. The @protocol gives @alice complete control over how information like email, websites, and credit card numbers, and data like location or preferences, is shared with other entities. To give @bob access to information, all @alice needs to do is share their @sign so that @bob can request specific data in the future.

In the future, business cards will use @signs for people and roles. For example, the @signs @alice and sales@acmeco allow you to always be able to contact Alice and the sales role at AcmeCo, even if Alice's contact details change or the people in the sales role change.

The "right to be forgotten" is another key aspect of the @protocol. Since we believe that it is everyone's fundamental right to change their choice of services on the Internet, people's data should be forgotten and removed from entities that they no longer want to be associated with.

@root & secondary servers

The @protocol has only two tiers of servers: the @root and @secondary servers.
@root servers are the only centralized part of the @protocol and are centralized to provide a single namespace and a globally dependable platform. No data beyond the @sign and the responding authoritative @secondary server is held on the root servers. This information is considered public, and no authentication is required to look up the @secondary server for a particular @sign. The @root servers have been designed to scale to billions of @signs and handle requests for @sign lookups in near real time, globally. To achieve this, in-memory databases are used and only the absolute minimum of data is stored.

@secondary (personal @sign) servers provide the second tier of the @protocol architecture and are responsible for answering lookups for specific @signs. A @secondary server provides the lookup service for a particular @sign and one name only. This ensures that the @secondary server will not mix @sign data with any other @sign's data. This is unlike web servers, which can provide service to multiple websites at a time.

Schema

The @protocol actually defines a secure URI (Universal Resource Identifier) for any data stored across the @protocol (i.e. phone@alice) with one important difference: the value returned for an identifier is polymorphic, which means that it depends on who is accessing the resource or asking for the information. In addition, the @scheme, <atsign://>, creates a URL (Universal Resource Locator) that can be securely shared and interpreted. For example, atsign://phone@alice can be identified and used to locate phone@alice with all the security and permissions features applied to that resource. This convention is easy to use and particularly useful for storing reference values to be returned by @protocol requests.

@protocol server verbs

A verb is a command used to communicate with an @server through a secure socket.

from verb

Purpose: The from verb is used to tell the @server what @sign you claim to be.
The @server will respond with a challenge in the form of a full @ address and a cookie to place at that address.

Example 1: In this case the @server will place the cookie 29741692-6c08-408c-b93b-24d2758cc0f9 in a private location _948da07a-01da-457f-a23b-aa851738e898@alice. The from verb response (challenge) will be signed by the pkam verb in this authentication flow.

Example 2: In this case the @server is asking for the cookie dfc6aedf-4618-4446-84bb-9d15838a7b10 to be placed at the location _64a27907-f555-44f8-bd86-b97838303805@bob with the header of proof:. The challenge here is that, to actually place that cookie on @bob, the @server requires access to the @bob @server with public access. The public access is important, as the @alice @server will want to look up that cookie to prove the request is actually from @bob.

The cookie and the location should be in the format of a UUID v4 to ensure that a namespace clash is mathematically unlikely, especially when coupled with the timeout of any cookies placed within a few minutes or cleared once used.

pol verb

Purpose: The pol verb, which stands for proof of life, verifies whether the correct cookie was placed in the requesting @server.

Example:

@pol
@bob@

In this case, the @alice server looks up the location of the @bob server through the root server. The @alice server verifies whether the cookie dfc6aedf-4618-4446-84bb-9d15838a7b10 is placed at the public location _64a27907-f555-44f8-bd86-b97838303805@bob on the @bob server. If the cookie is present and valid, a prompt of the requesting @sign (@bob) is returned.

pkam verb

Purpose: The pkam verb, short for public key authentication mechanism, is used to authenticate a client to an @server. The challenge returned by the from verb is cryptographically signed with a private key by the client. The cryptographic signature is validated by the @server using a public key.
Example:

// snippet to sign challenge with private key
var key = RSAPrivateKey.fromString(privateKey);
var challenge = '_948da07a-01da-457f-a23b-aa851738e898@alice:29741692-6c08-408c-b93b-24d2758cc0f9';
var signature = base64Encode(key.createSHA256Signature(utf8.encode(challenge)));

// send pkam request to server
@pkam:<signature>

// prompt is returned if authentication is successful on @alice server
@alice@

update verb

Purpose: The update verb updates a key with a value in the @sign's namespace. This value can be a user's public, private, or shared data.

Example:

1. Sharing data publicly

update:public:phone@alice +1-111-111

The key phone@alice can be looked up by anyone.

2. Store confidential information

update:@alice:creditcard@alice 123-456-789

The key creditcard@alice is visible only to @alice and not to anyone else.

3. Share data with a specific @sign

update:@bob:email@alice alice@atsign.com

The key email@alice is visible only to @bob and not to anyone else.

llookup verb

Purpose: An authenticated @sign can look up local keys using the llookup verb.

Example:

1. Local lookup of a public key

llookup:public:phone@alice
data:+1-111-111

2. Local lookup of a key in a private namespace

llookup:@alice:creditcard@alice
data:123-456-789

3. Local lookup of a shared key

llookup:@bob:email@alice
data:alice@atsign.com

plookup verb

Purpose: An authenticated @sign can look up public keys using the plookup verb.

Example: look up the public key phone@alice from @bob

@bob@plookup:phone@alice
data:+1-111-111

lookup verb

Purpose: The lookup verb is polymorphic in nature. If an @sign is authenticated, the lookup verb is used to retrieve data shared with the authenticated @sign. If an @sign is unauthenticated, the lookup verb returns the public value of the key.

Example:

1. @bob is authenticated

@bob@lookup:email@alice
data:alice@atsign.com

2.
@alice is unauthenticated

lookup:phone@alice
data:+1-111-111 // value of public:phone@alice

scan verb

Purpose: The scan verb lists all the keys stored in an @sign's server. Like the lookup verb, the scan verb is polymorphic in nature. When executed without authentication, the verb returns all public keys. Upon authentication, the scan verb returns all keys (public, private, and shared).

Example:

1. @alice is unauthenticated

@scan
data:phone@alice // returns only the key shared publicly

2. @alice is authenticated

@alice@scan
data:[public:phone@alice,@alice:creditcard@alice,@bob:email@alice] // returns all the keys on the @sign's server

delete verb

Purpose: The delete verb removes a key from the user's server.

Example:

delete:@bob:email@alice

There are several other verbs that we will explain further in subsequent articles. Stay tuned!

Join our Discord community to stay up to date.

Muralidharan Padmanaban (murali@atsign.com) is a technical architect on the @protocol core development team. Anthony Prakash (anthony@atsign.com) leads Developer Relations and Strategic Partnerships for The @ Company.

The @ Company is the creator of an open-source, P2P Internet protocol that enables developers and enterprises to handle personal data based on trust and permissions. Check out our GitHub repo.
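The from/pol challenge handshake described above can be illustrated with a toy model. The sketch below is not the real @protocol implementation — it mimics the described flow with in-memory dictionaries standing in for the key/value stores of two @servers and uuid4 values for the challenge cookie and its location, which is how the spec says they should be formatted:

```python
import uuid

# In-memory stand-ins for the key/value stores of two @servers.
alice_server = {}
bob_server = {}

def from_verb(requesting_sign, target_server):
    """Target server issues a challenge: place this cookie at this location."""
    cookie = str(uuid.uuid4())
    location = "_{}@{}".format(uuid.uuid4(), requesting_sign)
    # The target server remembers what it asked for (and would time it out).
    target_server["challenge"] = (location, cookie)
    return location, cookie

def pol_verb(target_server, requester_store):
    """Target server checks the cookie via a public lookup on the requester."""
    location, cookie = target_server["challenge"]
    return requester_store.get(location) == cookie

# @bob connects to @alice's server; @alice's server challenges @bob.
location, cookie = from_verb("bob", alice_server)
# @bob proves control of his own server by publishing the cookie there.
bob_server[location] = cookie
# @alice's server verifies the cookie through the (simulated) public lookup.
authenticated = pol_verb(alice_server, bob_server)
print(authenticated)  # True
```

In the real protocol the lookup crosses the network via the @root server, and the pkam signature binds the challenge to a key pair; here only the cookie round-trip is modeled.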
https://atsigncompany.medium.com/the-protocol-fundamentals-7629b232adcf?source=post_internal_links---------3-------------------------------
Working with the results of a thread without blocking tkinter

I have a minimal tkinter program which analyses some data. Some of the datafiles are quite large, so to ensure that the GUI remains responsive I load the data in a new thread. How can I run analysis on the data once the thread has terminated? Some example code is below.

import tkinter
from threading import Thread
from time import sleep

result = []

def func(result):
    sleep(10)
    ans = 1
    result.append(ans)

class myApp(tkinter.Tk):
    def __init__(self, parent):
        tkinter.Tk.__init__(self, parent)
        self.grid()
        self.myButton = tkinter.Button(self, text="Press me!", command=self.onButtonPress)
        self.myButton.grid(column=0, row=0)

    def onButtonPress(self):
        thread = Thread(target=func, args=(result,))
        thread.start()
        self.myButton["text"] = result

app = myApp(None)
app.mainloop()

How can I make the button text change only when func returns?

1 answer

See also questions close to this topic:

- Running a Window on its own Thread

I am working on an Excel add-in where it takes a very long time to create the Excel sheet. Because of this I am running a progress window (WinForm) on its own thread to notify the user what is going on. I am wondering if I am launching and closing the thread correctly, and if I can use something better to create a thread, like a Task or something else. This is my code:

private static Thread tPrgBarfrm;
private static frmProgressBar frmProgBar;

// Launching the window
(tPrgBarfrm = new Thread(() => System.Windows.Forms.Application.Run(new frmProgressBar()))).Start(); // working

// Update the label in the form
frmProgBar.UpdateStatusMessage("Completed Calc Sheet");

// Closing
tPrgBarfrm.Abort();

- QT, QML: force button enable change before function finishes

I have a QML image button. If the button is pressed, a long operation starts on the C++ side. I want the button to be disabled when it is clicked, then enabled again.

- hello friends i cant execute my else condition

The program must accept a string S as the input.
The program must replace every vowel in the string S by the next consonant (alphabetical order) and replace every consonant in the string S by the next vowel (alphabetical order). Finally, the program must print the modified string as the output.

s = input()
z = [let for let in s]
alpa = "abcdefghijklmnopqrstuvwxyz"
a = [let for let in alpa]
v = "aeiou"
vow = [let for let in v]
for let in z:
    if let == "a" or let == "e" or let == "i" or let == "o" or let == "u":
        index = a.index(let) + 1
        if index != "a" or index != "e" or index != "i" or index != "o" or index != "u":
            print(a[index], end="")
    else:
        for let in alpa:
            ind = alpa.index(let)
            i = ind + 1
            if i == "a" or i == "e" or i == "i" or i == "o" or i == "u":
                print(i, end="")

the output is: i/p orange, o/p pbf
the required output is: i/p orange, o/p puboif

- We are getting an error with this while loop but cannot see why

The error we are getting is that it is looping infinitely and does not appear to be picking a number to be the correct choice.

import random

print("Guess a random number between 1 and 10")
number = random.randint(1, 10)
guessTaken = 0
print("Guess!")
guess = int(input())
while guessTaken < 6:
    guess != guess + 1
    print("Wrong!, guess again")
    if guess == input():
        print("Correct")
        print()

- i-vector and PLDA

I started to read about i-vector and PLDA. I couldn't understand what their significance is. Also, I couldn't get the code for understanding purposes. If possible, can someone provide the code in Python for both, and explain when and where they are used?

- Python3 convert string words from Combobox to int value

Is there any way to convert or assign an int value to strings? If I use this line, self.months = [1,2,3,4,5,6,7,8,9,10,11,12], the program gets the year and month and returns the value to the combobox. But if I replace the int values in self.months with strings like in the code below, it will complain that it wants an int, as I understand it.
values specifies the list of values to display in the drop-down listbox; textvariable specifies a name whose value is linked to the widget value.

from tkinter import *
import calendar
from tkinter import ttk

class main:
    def __init__(self, master):
        self.master = master
        self.month = IntVar()
        self.year = IntVar()
        self.months = ["Jan","Feb","Mars","April","Maj","Jun","Juli","Aug","Sept","Okt","Nov","Dec"]
        print(self.months)
        self.years = (2014,2015,2016,2017,2018,2019,2020)
        self.widgets()

    def widgets(self):
        Label(self.master, text="Kalender", font=("freesansbold",30), bd=10).pack()
        f = Frame(self.master, pady=10, padx=10)
        Label(f, text="Year", font=("freesansbold",12)).grid(row=0, column=0)
        Label(f, text='Month', font=("freesansbold",12)).grid(row=0, column=3)
        year = ttk.Combobox(f, width=7, font=("freesansbold",12), values=self.years, textvariable=self.year)
        year.grid(row=0, column=2)
        year.current(4)
        mon = ttk.Combobox(f, width=7, font=("freesansbold",12), values=self.months, textvariable=self.month)
        mon.grid(row=0, column=4)
        mon.current(0)
        f.pack()
        self.area = Text(self.master, width=30, height=10, bd=5, font=("freesansbold",12))
        self.area.pack()
        Button(self.master, text="Get Kalender", font=("freesansbold",12), command=self.getcal).pack()

    def getcal(self):
        m = self.month.get()
        y = self.year.get()
        cal = calendar.month(y, m, 1, 2)
        self.area.delete(0.0, END)
        self.area.insert(0.0, cal)

root = Tk()
main(root)
root.title("just som stuff 1.0")
root.geometry('{}x{}'.format(460, 350))
root.mainloop()

- Python, Tkinter - How to run 'shelve.close()' when exiting the GUI program?

I have a simple GUI (tkinter) program that writes data to a file. It uses shelve. How do I run shelve.close() when I want to shut down the program?

- Changing opacity of a rectangle

How do I change the opacity/transparency of a black rectangle in python tkinter?

canvas.create_rectangle(200, 200, 600, 600, fill="black")

Above is what I used to create the box. I looked at attributes I could use but could not find any.
I also played with .setOpacity(0.5), as I saw this being used somewhere else, but it did not work. I have looked at different examples, and they either change the opacity for images using the PIL module or change the opacity in different languages which I am unfamiliar with.

- Python - Calling random function and creating a thread

I am trying to call a function randomly from a list of functions. When the function is called, I want to start a new thread. My code looks like this:

jobs = [func1, func2, func3, func4]

def run_threaded(job_func):
    info("Number of active threads: " + str(threading.active_count()))
    info("Threads list length: " + str(len(threading.enumerate())))
    job_thread = threading.Thread(target=job_func)
    job_thread.start()
    job_thread.join()

When I call the function without the parentheses, the same function is called over and over again every minute, i.e.

schedule.every(1).minutes.do(run_threaded, random.choice(jobs))

When I call the function with an extra pair of parentheses, i.e.

schedule.every(1).minutes.do(run_threaded, random.choice(jobs)())

I get the following error:

Exception in thread Thread-7:
Traceback (most recent call last):
  File "C:\Users\(USER)\AppData\Local\Programs\Python\Python37-32\lib\threading.py", line 917, in _bootstrap_inner
    self.run()
  File "C:\Users\(USER)\AppData\Local\Programs\Python\Python37-32\lib\threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
TypeError: 'bool' object is not callable

Does it expect something as a param? Do I have to override the run() method in a subclass?

- Python Process only spawning one process in loop

I have a Python 3.7 program I'm writing that has a main script a.py that creates an instance of a class in another script b.py. I want a.py to be able to create multiple simultaneous instances of my class in b.py, but the code I'm using to call b.py seems to only spawn one instance at a time.
The code I have looks like this:

import b
from multiprocessing import Process
from time import sleep

processes = []
while (True):
    dataFiles = getFilesFromDatabase()
    if len(dataFiles) > 0:
        for file in dataFiles:
            p = Process(target=b.class, args=(file,), name=file['name'])
            p.start()
            processes.append(p)
    sleep(60)

When dataFiles (a list of dictionaries) has multiple dictionaries, a Process is spawned for dataFiles[0], but dataFiles[1] is not started until after dataFiles[0] finishes. I'm not calling Process.join() anywhere, so I'm not sure what could be wrong. Do I need to use a Pool for this? Is there anything else I'm missing? Any help appreciated!

- Python class variable delayed outside of its callback function

I have written a Python script that subscribes to a ROS topic and then goes into a callback function that calls the test function, which returns a list of radii in ascending order by extracting the necessary data points from the topic. This is working correctly; however, I would like to access this list of radii throughout the whole class (and outside of it). I have made it a class variable "self.radii", but the console throws an error saying the instance has no attribute "radii" unless I tell it to sleep for 2 seconds using rospy.sleep(2), after which it returns a value. It's the same story if I try to call self.radii within the main function. I feel as though this is a Python threading issue rather than a ROS issue, as the actual subscriber is working correctly; there just seems to be a long delay I do not know how to remedy. If I instead print(self.radii) inside the callback function, it works fine, with values appearing immediately, but I want to access this variable outside of this. The code is below. Thanks!
#!/usr/bin/env python
import rospy
import numpy as np
from laser_line_extraction.msg import LineSegmentList

class z_laser_filter():
    def __init__(self):
        self.sub_ = rospy.Subscriber("/line_segments", LineSegmentList, self.callback)
        rospy.sleep(2)
        print(self.radii)

    def callback(self, line_segments):
        self.radii = self.test(line_segments)
        print(self.radii)

    def test(self, line_segments):
        number_of_lines = len(line_segments.line_segments) - 1
        i = 0
        radii = list()
        while (i != number_of_lines):
            radii.append(line_segments.line_segments[i].radius)
            radii = sorted(radii, key=float)
            i = i + 1
        return radii

if __name__ == '__main__':
    rospy.init_node('line_extraction_filter', anonymous=True)
    node = z_laser_filter()
    rospy.spin()
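Returning to the question at the top of the page: a common pattern (assumed here, not taken from this thread) is to have the worker thread put its result on a queue.Queue and have tkinter poll that queue, normally by rescheduling a check with widget.after(100, poll). A minimal sketch of the handoff, with a plain loop standing in for the after() callback so it runs outside a GUI:

```python
import queue
import threading
import time

result_q = queue.Queue()

def func():
    # Stands in for the slow data load from the original question.
    time.sleep(0.2)
    result_q.put(1)

threading.Thread(target=func, daemon=True).start()

def poll():
    # In the tkinter app this would run via self.after(100, poll)
    # and update self.myButton["text"] once a result appears.
    try:
        return result_q.get_nowait()
    except queue.Empty:
        return None  # not finished yet; reschedule and try again later

ans = None
while ans is None:
    time.sleep(0.05)
    ans = poll()
print(ans)  # 1
```

Because only the worker thread touches the queue's put side and only the GUI thread touches the get side, no tkinter call ever happens off the main thread, which is the constraint that rules out updating the button directly from func.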
http://quabr.com/47252998/working-with-the-results-of-a-thread-without-blocking-tkinter
Are there a toolbox of lenses for algebraic data types with multiple constructors?

2 Existing solutions

2.1 Partial lenses

The data-lens library provides partial lenses, which are isomorphic to

type PartialLens a b = (a -> Maybe b, a -> Maybe (b -> a))

The following partial lenses are defined for lists:

headLens :: PartialLens [a] a
headLens = (get, set) where
    get [] = Nothing
    get (h:t) = Just h
    set [] = Nothing
    set (h:t) = Just (:t)

tailLens :: PartialLens [a] [a]
tailLens = (get, set) where
    get [] = Nothing
    get (h:t) = Just t
    set [] = Nothing
    set (h:t) = Just (h:)

2.2 Other solutions

Please help to extend the list of known solutions.

3 ADT lenses

The proposed solution, summarized: use one lens for each ADT type, with reversed direction.

3.1 Example: List lens

The ADT lens for lists:

import Data.Lens.Common

listLens :: Lens (Bool, (a, [a])) [a]
listLens = lens get set where
    get (False, _) = []
    get (True, (l, r)) = l : r
    set [] (_, x) = (False, x)
    set (l: r) _ = (True, (l, r))

3.2 Usage

Suppose that we have a state

type S = (Bool, (Int, [Int]))

We can
https://wiki.haskell.org/index.php?title=LGtk/ADT_lenses&oldid=56174
Hi folks, this is my first ever Python post in any forum... I have been busy learning Python in conjunction with PyQt4 and find it very useful. However, I've come up against a brick wall... What I am trying to do is capture the terminal (stdout) output of a subprocess and display it in a QTextEdit box. The process I am interested in is an Arch Linux process called pacman. Normally you would type in a terminal:

pacman -Syy

to update the central package database. The process gives a streaming text output to the terminal. I want to capture this stream in a PyQt GUI and display it. Any help would be great... so far I have this code:

===============================================
#!/usr/bin/python
import sys
import io
from PyQt4 import QtGui, QtCore, uic

class MyWindow(QtGui.QMainWindow):
    def __init__(self):
        super(MyWindow, self).__init__()
        uic.loadUi('getout.ui', self)
        self.show()

    def doit(self):
        oldstdout = sys.stdout
        sys.stdout = io.StringIO()
        print("Here is some text")
        #process = subprocess.Popen(['python','-h'])
        process = subprocess.Popen(['pacman','-Syy'])
        self.textEdit.setText(sys.stdout.getvalue())
        sys.stdout = oldstdout

if __name__ == '__main__':
    import sys
    import subprocess
    app = QtGui.QApplication(sys.argv)
    window = MyWindow()
    sys.exit(app.exec_())
===============================================

I get the output from the print command, but the subprocess command output goes straight to the terminal....

Please help.
Thanks, Steve
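For reference, the usual fix (assumed here, not taken from the original thread) is that redirecting sys.stdout only affects Python's own print; a child process writes to the real file descriptor. Instead, pass stdout=subprocess.PIPE to Popen so the child's output comes back through a pipe, then read it line by line and append each line to the QTextEdit. A runnable sketch, with a stand-in command in place of pacman -Syy:

```python
import subprocess
import sys

# Stand-in for ['pacman', '-Syy'] so the sketch runs anywhere.
process = subprocess.Popen(
    [sys.executable, "-c", "print('synchronizing package databases...')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge stderr into the same stream
    text=True,
)

lines = []
for line in process.stdout:   # yields lines as the child writes them
    lines.append(line.rstrip())
    # In the GUI you would do: self.textEdit.append(line)
process.wait()
print(lines)
```

Note that reading the pipe in a loop like this would still block a PyQt event loop while the child runs; for a fully non-blocking GUI, Qt's QProcess with its readyReadStandardOutput signal is the idiomatic route.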
http://forums.devshed.com/python-programming/952772-help-redirecting-subprocess-stdout-last-post.html
Donald Knuth famously said, "We should forget about small efficiencies, say about 97% of the time". But when faced with the other 3%, it is good to know what's going on behind the scenes. So in this article we'll be taking a dive into the foreach loop.

Performance Testing foreach

A foreach loop doesn't actually require an IEnumerable; it only needs something that looks like one. This means we can obtain some surprising gains depending on how we declare our variables. Here is our starting point, a simple loop that calculates a sum.

private static void Test1(IEnumerable<int> list)
{
    var sw = Stopwatch.StartNew();
    var count = 0;
    foreach (var item in list)
        count += item;
    sw.Stop();
    Console.WriteLine("IEnumerable: " + sw.Elapsed.TotalMilliseconds.ToString("N2"));
}

So how long does this take for a list of 100 million? Using BenchmarkDotNet on my laptop, we're looking at an average of 870.4265 ms (12.4169 ms StdDev). Not bad given the amount of data, but what if we used IList<int> instead of IEnumerable<int>?

private static void Test2(IList<int> list)

With no other changes, this gives us an average run time of 870.7999 ms (11.9365 ms StdDev). So close that, given the margin of error, you could say it was exactly the same. And it makes sense, because either way we are getting the same enumerator. But what if we were to use a concrete class instead?

private static void Test3(List<int> list)

Again, we're testing on a noisy laptop, so some variance is to be expected. But surprisingly, I am getting an average runtime of 284.4808 ms (4.1080 ms StdDev). How do we account for a better than 67% improvement in runtime performance?

List<int>.GetEnumerator doesn't return an IEnumerator<int>

As I mentioned in the beginning, a foreach loop only needs something that looks like an IEnumerable. This means you can use tricks that would otherwise be unavailable to you. For example, you can return a value type instead of an object as your enumerator. This is a small gain, but it does mean one less piece of memory to allocate now and garbage collect later.
More importantly, since we are using a struct instead of a reference type or interface, we gain the ability to use a normal function call instead of a virtual function call. With a normal call, you simply push your parameters onto the stack and jump to the right instruction. With a virtual call, you must first locate the correct virtual table (v-table) for the interface you are using. Then you find the function you want to call in that v-table, after which you can proceed as if it were a normal call.

Note: In most statically typed, OOP-style languages, every class has multiple interfaces. There is the public interface, one or more base class interfaces, and any relevant abstract interfaces. Each of these gets its own v-table for looking up methods marked as abstract or virtual (hence the name v-table). Depending on how the compiler and/or runtime is designed, the protected and internal interfaces either get their own v-table or share with the public interface.

If you build your application with optimizations turned on (a.k.a. a release build), another factor comes into play. Because it knows exactly where the normal function is during the JIT process, the compiler has the option to skip the ceremony of a function call entirely and just copy the contents of the function you are calling into your function. This is known as "inlining" and is very important for the optimization process.

When I said "you simply push your parameters onto the stack", it was a bit of a simplification. You also have to push the CPU registers and the return address onto the stack for when the function is complete. With inlining, those extra steps are not necessary. Furthermore, with everything now in one place, the optimizer may see other ways to improve the code.

Note: Some runtimes, such as Java's HotSpot VM, can actually inline virtual methods if the runtime thinks the method being called is always from the same concrete class.
It is a rather advanced technique and requires the ability to "deoptimize" the code if it detects something has changed and the initial criteria no longer apply.

Disassembling the code

We now turn our attention to the disassembled IL code for the three tests. Here is the loop from Test 3 (List<int>):

IL_0009: callvirt instance valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<!0> class [mscorlib]System.Collections.Generic.List`1<int32>::GetEnumerator()
IL_000e: stloc.2
.try
{
    IL_000f: br.s IL_001d
    IL_0011: ldloca.s V_2
    IL_0013: call instance !0 valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::get_Current()
    IL_0018: stloc.3
    IL_0019: ldloc.1
    IL_001a: ldloc.3
    IL_001b: add
    IL_001c: stloc.1
    IL_001d: ldloca.s V_2
    IL_001f: call instance bool valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::MoveNext()
    IL_0024: brtrue.s IL_0011
    IL_0026: leave.s IL_0036
} // end .try
finally
{
    IL_0028: ldloca.s V_2
    IL_002a: constrained. valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>
    IL_0030: callvirt instance void [mscorlib]System.IDisposable::Dispose()
    IL_0035: endfinally
} // end handler
IL_0036: ldloc.0
IL_0037: callvirt instance void [System]System.Diagnostics.Stopwatch::Stop()

As you look at this, it is pretty easy to spot the calls to methods on structures: they always use call instead of callvirt. But there is more going on here to be aware of. Consider line IL_0009 in test 3. We said earlier this wouldn't be a virtual call, but it is in fact using callvirt. This is where IL code can be a bit misleading. The callvirt operator doesn't exactly mean "make a virtual call through a v-table". Here is the breakdown: C# emits a callvirt instead of a call so that the null reference check is performed. It then trusts the JIT compiler to recognize when a method isn't actually virtual, eliminate the v-table lookup, and possibly inline the code. This also has a backwards compatibility aspect.
If an instance method is changed into a virtual method, the code that calls it won't have to be recompiled.

Note: IL code allows you to use call on a virtual method in order to invoke a base class method (e.g. base.Foo()) when overriding a method, without having to introduce a specialized op-code.

For loops

So we know why using a foreach over a List<int> is faster than the same list wrapped in an IList<int> or IEnumerable<int>. But what if we were to use a for loop instead of a foreach?

private static void Test4(IList<int> list)
{
    var sw = Stopwatch.StartNew();
    var count = 0;
    for (int i = 0; i < list.Count; i++)
        count += list[i];
    sw.Stop();
    Console.WriteLine("List for: " + sw.Elapsed.TotalMilliseconds.ToString("N2"));
}

This time we're seeing an average runtime of 698.9598 ms (24.5606 ms StdDev). Slower than our foreach+List case, but still 20% faster than when we started.

private static void Test5(List<int> list)

Testing it again with a for loop over a List<int>, we see the numbers drop dramatically. With an average runtime of 105.6321 ms (1.1690 ms StdDev), we're 63% faster than our next best case and 88% faster than our worst case.

What about LINQ?

For our next set of tests, we'll only be counting the even numbered rows. Common knowledge is that LINQ is slower than normal loops because it uses delegates, but is that true? We ran these four tests with a 1000 item list:

- IList<int> with LINQ Where clause: list.Where(x => x % 2 == 0)
- IList<int> with an if-statement: if (item % 2 == 0)
- List<int> with LINQ Where clause
- List<int> with an if-statement

Common wisdom is that LINQ is slower than if statements, but that's not always the case. The slowest turned out to be IList<int> with an if-statement. When using LINQ, performance actually improved by 4 to 5%. But if we use a List<int> with an if-statement, then we see a 65% improvement over the worst case.
So when choosing between LINQ and an if-statement while using interface-typed variables, LINQ is the better choice. Why? In part because it discards the interface and goes back to the base class. Here is LINQ's Where function as shown in Reference Source.

public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)
{
    if (source == null) throw Error.ArgumentNull("source");
    if (predicate == null) throw Error.ArgumentNull("predicate");
    if (source is Iterator<TSource>) return ((Iterator<TSource>)source).Where(predicate);
    if (source is TSource[]) return new WhereArrayIterator<TSource>((TSource[])source, predicate);
    if (source is List<TSource>) return new WhereListIterator<TSource>((List<TSource>)source, predicate);
    return new WhereEnumerableIterator<TSource>(source, predicate);
}

As you can see, it has special cases for several concrete types including List<T>. In fact, this function had no idea we were trying to give it an IList<T>. That information was discarded, and then time was spent determining what the real type was. Hence the reason both LINQ versions have the same runtime characteristics. This is actually fairly common in the .NET Framework. A common misconception is that you can cast a list to IEnumerable<T> or IReadOnlyList<T> and that actually makes the object read-only. In fact, libraries such as LINQ and frameworks such as WPF sniff out the real types underneath and respond accordingly.

Strongly Named Classes

A recommendation in the .NET Framework Design Guidelines is that all collections exposed in a public API should be strongly named. For example, instead of exposing a generic IList<Customer>, you should have a CustomerCollection class. The primary reason for this rule is that it allows for future extensibility. You can later decide to expose additional properties and methods on the strongly named class in a way consumers can easily find. It also solves the problems caused by IList<T> not inheriting from IReadOnlyList<T>.
As of .NET 2.0, the recommended base class for strongly named collections is Collection<T>. This makes for a good look at trade-offs when designing an API. For example, Collection<T> has overridable methods that are fired when objects are added to or removed from the collection. This may make the class slower when calling add and remove methods, but it allows you to add interesting features such as validation.

Unlike List<T>, Collection<T> does not expose a struct based enumerator, so enumerating it with a foreach loop will be no different than exposing IEnumerable<T>. But you do get something in exchange. Consider the constructor for Collection<T>.

public Collection(IList<T> list)
{
    if (list == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.list);
    }
    items = list;
}

Rather than copying the source list, Collection<T> will use it as-is. This makes converting a List<T> into a Collection<T> very cheap, which is useful when you want to add additional functionality to a populated list before exposing it to consumers.

Immutable vs Read-Only Collections

Like Collection<T>, ReadOnlyCollection<T> is just a wrapper around another list, so you should see the same runtime performance characteristics. But what about the immutable collections: ImmutableList<T> and ImmutableArray<T>? In a test of 1,000 integers we saw these averages. (Note that the time is in microseconds.)

Performance wise, an ImmutableArray<T> is clearly a viable substitute for List<T>. And ReadOnlyCollection<T> is about the same speed as Collection<T>. But what's up with ImmutableList<T>? The source code isn't available on Reference Source, but if you understand the concept behind the immutable list it starts to make sense. While ReadOnlyCollection<T> and ImmutableArray<T> are merely wrappers around an IList<T> and an array respectively, ImmutableList<T> is a rather complex tree structure. Here is a decompiled version of the enumerator for ImmutableList<T>.
public bool MoveNext()
{
    this.ThrowIfDisposed();
    this.ThrowIfChanged();
    if (this._stack != null)
    {
        var stack = this._stack.Use<ImmutableList<T>.Enumerator>(ref this);
        if ((this._remainingCount > 0) && (stack.get_Count() > 0))
        {
            ImmutableList<T>.Node node = stack.Pop().Value;
            this._current = node;
            this.PushNext(this.NextBranch(node));
            this._remainingCount--;
            return true;
        }
    }
    this._current = null;
    return false;
}

The ImmutableList<T> is useful, but only for very specific use cases beyond the scope of this article.

Conclusions

If you want convenience or the flexibility to make future changes, expose a strongly named type that inherits from Collection<T> or ReadOnlyCollection<T>. If performance is a more important concern, design your API to use arrays, List<T>, or ImmutableArray<T>. Avoid IEnumerable<T>, IList<T>, and IReadOnlyList<T> for property and method return types if you can, and don't use ImmutableList<T> unless you are actively manipulating immutable collections.

The source code for this article is available at..

Community comments

Missing code by Juan Casanova /

Really interesting article, I liked the IL code snippets :) The code for the constructor for Collection<T> is missing and just shows the LINQ Where predicate again. Also in the conclusions you recommend to use List<T> and not to use List<T>. I think the "not to use" bit should be IList<T>.

Re: Missing code by Jonathan Allen /

Thank you for pointing that out. A couple of typos got introduced due to formatting issues. I'll get them fixed right away.
https://www.infoq.com/articles/For-Each-Performance/?useSponsorshipSuggestions=true&itm_source=articles_about_performance-scalability&itm_medium=link&itm_campaign=performance-scalability
On Mon, Nov 19, 2018 at 1:13 AM Daniel P. Smith <address@hidden> wrote: > > It would be great if the TPM commands that are using EFI protocol and > exposed to TPM command module be name spaced under efi, e.g. > grub_efi_tpm_log_event. As I lay in a TIS implementation, I can mimic a > similar set of tis name spaced functions that can be #ifdef/#else (or > any other mechanism GRUB maintainer's would prefer) switched between EFI > and TIS. I'm a little confused - if it's #ifdefed then surely there's no namespace collision (because only one implementation can be built at once)? If the goal is to allow one binary to support multiple implementations then that's not impossible, but it's going to require runtime registration of TPM callbacks rather than simply namespacing stuff.
https://lists.gnu.org/archive/html/grub-devel/2018-11/msg00165.html
Listen to this article in proper Spanish. I come from a very beautiful town in Colombia called Medellín. So, if you haven't gone there, please visit us. We have an amazing community, as Brian said. Right now, I work as a senior developer advocate for Heroku. So, I live in the United States. Sadly, I'm away from my country, but I'm always in constant communication with my community, and that's pretty much through the two main conferences that I organize. One is NodeConf Colombia and the other is JSConf Colombia. So, I know if you are like me right now, you are needing coffee. I'm needing coffee too. It's super early. So, please don't crash now. Let's wait until my talk finishes, and we can have some coffee to keep us awake. Julián: So, a little bit of background about this talk and why I presented this. These are pretty much lessons learned while I was working at NodeSource, previously. I was doing consulting work as a solutions architect, helping customers, making sure they were using Node.js properly and they were successfully using Node. And I saw a lot of different bad patterns out there in how other companies were doing error handling, especially when the processes were crashing or the processes were dying. They didn't have enough visibility. They didn't have logging strategies in place. They were missing very important information about why the Node processes were having issues or were crashing. They were experiencing downtime, and we started to collect a set of best practices and recommendations for them that are aligned with the overall Node.js community. Julián: If you go to the documentation, there are going to be pretty much the same recommendations that I'm going to be speaking about today. We added a couple of other things to make sure you have a very good exit strategy for your Node.js processes.
These best practices apply pretty much to web and network based applications, because we are going to cover graceful shutdowns as well, but you can use them for other types of Node.js applications that are constantly running. And Node, sadly, is not Erlang. If you know Erlang, "let it crash" is a term that is very common in that community. When I started learning Erlang back in 2014, I loved the fault tolerance options that this platform and language has. And I always think about how to bring the same experience into Node.js. It is not the same, because you can't do hot code reloading or function swapping on Node. You can do those things on Erlang, but still, Node is pretty lightweight, and you can easily restart and recover from a crash. Julián: First, before getting into the bad place, or when bad things happen, how do we make sure that everything is good? What do we need to do to our Node applications to make sure they are running properly? So, first, as a recommendation, and there is going to be a workshop later about this specific thing, cloud native JS, don't miss this workshop by Beth. She's going to also mention how to add health checks to your Node.js processes. So, pretty much as our recommendation, add a health check route. It's a simple route that is going to return a 200 status code, and you will need to set something up to monitor that route. You can do it at your load balancer level. If you are using a reverse proxy or a load balancer like nginx or HAProxy, or you're using ELB or ALB, any type of application that sits in the layer on top of your Node.js process can be constantly monitoring that the health check is returning okay. So you are making sure that everything is fine. Julián: And also, rely on APMs, tools that are going to monitor the performance and the health of your Node.js processes.
So, in order to make sure that everything is running fine, you will need to have tools. Some very well-known tools are New Relic, AppDynamics, Dynatrace, and N|Solid. A lot of them on the market will give you way more visibility into the health of your Node.js processes, and you can live in peace knowing your Node is running properly. But what to do if something bad and unexpected happens? So, what should we do with our Node.js processes? Let them crash. If something bad and unexpected happens, I will let my Node.js process crash, but in order to be able to do it right, we will need to implement a set of best practices and follow some steps to make sure that the application is going to restart properly and continue running and serving our customers and clients. Julián: Before letting it crash, we will need to learn about the process lifecycle, especially on the shutdown side of things, and some error handling best practices. There is going to be another very recommended workshop around that. I'm not going to be covering how to properly handle errors in Node.js, just on shutdown, and this is pretty much so you stop worrying about unexpected errors and increase visibility into your Node.js processes, increase visibility into what happened when your process crashes and what might be the reason, so you can fix it and iterate on your application. So, coming back to the Erlang concept, a Node.js process is very lightweight. It's small in memory. It doesn't have a very big memory footprint, and the idea is to keep the processes very lean at startup, so they can start super fast. If you have a lot of operations, like CPU intensive or synchronous operations, at startup, it might decrease the ability of your Node.js processes to restart super fast. Julián: So, try to keep your processes very lean on startup. Use strategies like prebuilding, so you are not going to build on startup or on the bootstrap of your process.
Do everything before you are going to start your process, and if something unexpected and bad happens, just exit and start a new Node.js process as soon as possible to avoid downtime. And pretty much this is called a restart. You let the process crash, and then start a new one. But we will need to have some tools and settings in place to be able to have something that restarts all our Node.js processes. So, let's learn how to exit a Node.js process. There are two common methods on the process module that will help you to shut down or terminate a Node.js process. The most common one is process.exit. You can pass an exit code: zero if it's a successful exit, or higher than zero, commonly one, if it's a failure. And this pretty much instructs Node.js to end the process with the specified exit code. Julián: And there is the other one, which is process.abort. process.abort is going to cause the Node.js process to exit immediately and generate a core file, if your operating system has core dumps enabled. This gives you more visibility for postmortem debugging, to be able to see what happened or what crashed your Node.js process. If there is a memory issue, you can call process.abort, it will generate a core dump, and then you can use tools like llnode, which is a plugin for lldb, to do C and C++ debugging of the core dump and to see what might have happened on the native side of Node.js when your process crashed. So those are the two options you have to exit the Node.js process. How to handle exit events? Node.js emits two different, or two main, events when your Node.js process is exiting. One is beforeExit. The beforeExit handler can make asynchronous calls, and the event loop will continue to work until they finish. So before the process is ending, you can schedule more work on the event loop, do more asynchronous tasks, and then you can clean up your process.
This event is not emitted on conditions that cause explicit termination, like an uncaught exception or when I explicitly call process.exit. So this is for all other exit scenarios. And the exit event: its handler can't make asynchronous calls. Only synchronous calls can happen in this part of the process lifecycle, because the event loop doesn't have any more work to do. The event loop is paused at this point. So, if you try to do any asynchronous call here, it is not going to be executed. Only synchronous calls can happen here, and this event is emitted when process.exit is called explicitly. It's commonly used if you want to log some information at the end, when the process exits with a specific exit code, and you want to add some more context about the state of your application at the time that the process exits. Julián: Some examples of how to use them. You attach those events on the process module. The beforeExit handler can run asynchronous code, like that setTimeout; even though the event loop is pausing at the moment you schedule more asynchronous work, it will resume and continue until there is no more work to do. There's one thing I want to mention here: normally, a Node.js process exits when no more work is scheduled on the event loop. When there is nothing else on the event loop, the process is going to exit. How does a server keep running? Because it has a handle registered on the event loop, like a socket waiting for connections, and that's why a web server is constantly running until you close the server or you interrupt the process. Otherwise, if there is something registered on the event loop, the Node.js process is going to continue running. So in this case, I execute setTimeout and schedule more work, and it will continue working until there is nothing left to do. Julián: On exit, pretty much just synchronous calls. I can't do anything here with the event loop.
The event loop is completely paused; this is useful for logging information or saving the state of the application on exit. There are a couple of signal events that are going to be useful on shutdown: SIGTERM and SIGINT. SIGTERM is normally emitted when a process monitor sends a termination signal to your Node.js process, to tell it that there is going to be an expected, graceful shutdown of the process. When you are on systemd or using upstart and you stop the service or stop the process, it's going to send that SIGTERM to your Node.js process, and you can handle that event and do some work in that specific part of the lifecycle. And SIGINT is an interruption. It is emitted when you interrupt the Node.js process, normally when you do control-C while running on the console. You can also capture that event and do some work around it. Julián: So these are two ways to expectedly finalize a Node.js process. These two events are considered a successful termination. This is why I'm exiting here with the exit code zero, because it is something that is expected: I am saying I don't want this process to continue running. And there are also the error events. There are two main error events. One is the uncaughtException, the famous uncaughtException, and, recently introduced to Node along with promises, the unhandledRejection. The uncaughtException is emitted when a JavaScript error is not properly handled. It pretty much represents a programmer error, or represents a bug in your code. If an uncaughtException happens, the recommendation is to always crash your process. Let it crash. Don't try to recover from an uncaughtException, because it might give you some trouble. And even though the community does not totally agree on the second one... Julián: I will say the same for an unhandledRejection. An unhandledRejection is emitted when a promise is rejected and there is no handler attached to the promise.
So there is no catch attached to the promise. It may represent an operational error, or it may represent a programmer error, so it depends on what happened. But in both of those cases, it's better to log as much information as possible. Treat those as P1 issues that need to be fixed in the next iteration or in the next release. If you don't have any strategy in place to identify why your processes are crashing, and you are not fixing and handling those properly, your applications are going to keep having bugs. So if it is an uncaught exception, that's a bug, that's a programmer error, that is something that is not expected. Please crash, log, and file an issue, so that it gets fixed. Julián: If it is an unhandled rejection, see if this is a programmer error or if it's an operational error that needs to be handled, and go update the code, add the proper handling to that promise, and continue with your job. So as I say, in both cases an error event is a cause of termination for your Node.js process. Always exit the process with an exit code different than zero, so it's going to be one. That way your process monitor and your logs know that it was a failure. And as I say, don't try to recover from an uncaught exception. While I was working as a consultant, I saw a lot of people trying to do a lot of magic to avoid their Node.js processes dying by adding some complex logic on uncaughtException. And that always left the application in a bad state. They were having memory leaks, or had sockets hanging, and it was a mess. So it's cheaper to let it crash, start a new process from scratch, and continue receiving more requests. Julián: So, a couple of examples on uncaughtException and unhandledRejection. The uncaughtException handler receives as an argument an error instance. So you get the information about the error that was thrown or that wasn't handled in your Node.js code.
And the unhandledRejection handler is going to give you a reason, which can be an error instance too, and it will give you the promise that was not properly handled. So that is useful information that you can have in your logs, to know where things are failing in your code. But we saw how to handle the events, how to handle the errors, some of the best practices; how do we do it properly? What do we need to do to have a very good shutdown strategy for Node.js processes? So the first one is running more than one process at the same time. Rely on scaling, on load balancing processes, on having more than one. In that way, if one of those processes crashes, there is another process that is alive and able to receive requests. Julián: So it will give you time to do the restart while requests keep coming in. And maybe the only issues you are going to have are with the requests that were already in flight in the Node.js process that crashed. But this is going to give you a little bit more leverage and prevent downtime. And what do you use for load balancing? Use whatever you have at hand: nginx or HAProxy as a reverse proxy for your Node.js applications. If you are on AWS or on the cloud, you can use their elastic load balancers, application load balancers, or the other load balancer solutions that the cloud offers. If you are on Kubernetes, you can use Ingress or other load balancing strategies for your application. So pretty much make sure that you have more than one Node.js process running, so you can be more at peace if one of those processes crashes. You will need to have process monitoring, and a process monitor is pretty much something that runs in your operating system, or an application, that is constantly checking whether your process is alive or not. Julián: If it crashes, if there is a failure, the process monitor is in charge of restarting the process.
So, the recommendation is to always use the native process monitoring that is available on your operating system. If it's Unix or Linux, you can use systemd or upstart, specifically adding restart on failure, or respawn when you are working with upstart. If you are using containers, use whatever is available: Docker has the restart option, Kubernetes has the restart policy, and you can also configure your processes to retry the restart only a number of times, so you don't run into a crazy error that constantly makes your application crash until you end up in a crash loop. So you can add some retries in there, but always have process monitoring in place. If you can't use any of these tools, as a last resort, but not recommended, use a Node.js process monitor like PM2 or forever. Julián: But I would not recommend these to any customer of mine or any friend. If you can't use the native stuff in your operating system, or if you are not using containers, you can go this way. These tools are good for development, don't get me wrong. When you are working in development, they're very good tools to restart your processes when they crash. But for production, they might not be the best. Let's talk a little bit about graceful shutdown. So we have a web server running. The web server is getting requests and it's getting connections. Sometimes we have established connections between our customers or clients and the server. But what happens when the process crashes? When the process crashes, if we are not doing a graceful shutdown, some of those sockets are going to be left hanging and are going to wait until a timeout is reached, and that might cause downtime and a degraded experience for your users. So it is better to do it properly: setting up an unreferenced timeout is going to let the server do its job.
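The restart-on-failure setup mentioned above can be sketched as a systemd unit (the service name and paths are illustrative assumptions, not from the talk):

```ini
# /etc/systemd/system/my-node-app.service -- name and paths are illustrative
[Unit]
Description=My Node.js application
After=network.target
# Avoid an endless crash loop: give up after 5 failures within 60 seconds
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
ExecStart=/usr/bin/node /srv/my-node-app/server.js
# Restart only when the process exits with a non-zero code
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target
```

The container equivalents are Docker's `--restart on-failure:5` flag and a `restartPolicy` on a Kubernetes pod spec.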
Julián: So, we will need to close the server. We explicitly tell the server to stop receiving connections, so it can reject new connections. New connections then go to the other Node.js process that is running, through the load balancer, and the server will be able to send a TCP packet to the clients that are already connected. So they are going to finish the connection immediately when the server dies. They are not going to stay waiting until a timeout is reached. They are going to close that connection, and on the next retry we expect that the process has restarted at that point, or they go to another process that is running. So, one example of that unreferenced timeout: when we are handling the signal or error event, which is the shutdown part of the lifecycle, what we can do is explicitly call server.close. If it is an instance of the net server, which is the same one that the http and https Node modules use, you can pass a callback. So when it finishes closing the connections, it will exit the process successfully. But we will need to have our timeout in place, because we don't want to wait for a long time. Imagine if we had a lot of different clients connected and it's taking a lot of time to clean up those connections. We need a way to have an internal timeout. So here, we are scheduling a new timeout, but that timeout is not on the event loop. That last part, the unref, means the timeout is not scheduled on the event loop, so it is not adding more work to the event loop. So when the timeout is reached, or the server.close callback is reached, either of those paths is going to close the Node.js process. So this is a race between the two: between your timeout that is not on the event loop and the server close, whichever happens first. And what timeout we need to put here depends on the needs of your application.
Julián: We had customers that needed a very small timeout, because they were doing a lot of real time trading and they needed the processes to restart as fast as possible. There are others that can have longer timeouts, to wait until the connections finish, so this depends on the use case. If you don't add the unref here, since this timeout is going to be scheduled on the event loop, the process is going to wait until it finishes before it ends. So this is like a safeguard, so there is no more work scheduled on the event loop while we are exiting our process. Logging: this is one of the most important parts of having a very good exit strategy for Node.js processes. So implement a robust logging strategy for your application, especially on shutdown. If an error happens, please log as much information as possible. An error object will contain not only the message or the cause of the error, but it will also contain the stack trace. Julián: So if you log the stack trace, you will be able to come back to your code and look specifically at why it failed and where it failed, and fix it. And you can rely on libraries like pino or winston, and use transports to store the logs in an external service. You can use Splunk or Papertrail, or whatever you like to store the logs. But have a way to always go back to the logs, search for those uncaught exceptions and unhandled rejections, and be able to identify why your processes are crashing. Fix those issues and continue with your work. So how can we put this all together? I have a pattern I use in my projects, but there are also a lot of modules on npm that do the same thing, even better than the approach I'm following here. So this is the pattern I use: I create a module called terminate, or I use a file called terminate.
I pass the server, the instance of the server that I'm going to be closing, and some configuration options: whether I want to enable core dumps or not, and the timeout. Julián: Usually, when I want to enable the core dump in Node, I use an environment variable. When I am going to do some performance testing on my application, or I want to replicate an error, I enable the core dump, I let it crash with process.abort, I check out the core dump, and I get more information from it. So here, I have an exit function that switches between abort and process.exit, depending on the configuration you have. And for the rest, I'm returning a function that returns a function, and that function is the one that I'm going to be using as the exit handler. This is pretty much the code that I'm going to be using for uncaught exceptions, unhandled rejections, and signals. And here, log as much as possible. I'm using console.log for simplicity, but please use a proper logging library here. And pretty much, if there is an error and it is an instance of Error, I want to get information about the message and the stack trace. And at the end, I'm going to try to do the graceful shutdown. Julián: So this is the same thing I explained before. I will close the server, and I will also have a timeout to close the server after that timeout happens. So it depends on whatever ends first. And how do you use this small module I have here? As an example, I have an HTTP server. I have my terminate code that I use for my project. I create an exit handler with the options, with the server I'm running, with the different parameters I want to pass into my exit handler, and I attach that function to the different events. So here, for the exit handler on uncaughtException and unhandledRejection, I'm going to return an exit code of one, and I can add a message to my logs to say what type of error or what type of handling this was, and the same with the signals.
And with the signals, I'm passing an exit code of zero, because it is something that is going to be successful. Julián: So this is pretty much what I have for today, and the presentation has some resources that are going to be useful for you. Please don't miss Ruben Bridgewater's workshop later today. It's going to be called "Error Handling: doing it right". Again, it's going to explain how to avoid getting here: how to avoid getting into the uncaught exception side of things, how to properly create the error objects to have more visibility, how to handle promise rejections. So, that is going to be a very good presentation, and also the cloud native JS one by Beth. She's going to be mentioning how to add monitoring and health checks to applications. So those are going to be good things if you want to run Node.js properly in production. Some npm modules to take a look at, that pretty much solve the issues I was talking about today. There is a module I like, terminus, by the team at GoDaddy. Julián: It supports adding health checks to your application. It has signal handlers too. It has a very good graceful shutdown strategy, way more complex than the one I presented to you. This is something that you can add to your projects pretty easily. Just create an instance of terminus, configure it, and add the different handlers there. There is another module called stoppable. Stoppable is a decorator over the server class that implements not a close function, but a stop function, and it also does a lot of things around graceful shutdown. And there is also a module that is pretty much what I presented today. It's called http-graceful-shutdown. You also pass an instance of your HTTP server, and it has different handlers, so you can configure what happens when there is an error, or what signals it's going to be monitoring. Julián: It's pretty much...
These are all resources that are going to simplify your life and make you better at running Node in production, and you will be able to let it crash. One last thing: I want to invite you to NodeConf Colombia, so save the date. It is going to happen June 26 and 27, 2020, in Medellín, Colombia. More information at nodeconf.co. The CFP is not open yet, but I expect a lot of you to send proposals to come to Medellín. We pay for travel, we pay for the hotel. And if you want to know a little bit about the experience of speaking at a conference in Colombia, you can ask James, you can ask Anna, and I think you can ask Brian. There are a couple of folks here that have spoken there. Thank you very much. This is it.

We started to assemble a collection of best practices and recommendations on error handling, to ensure they were aligned with the overall Node.js community. In this post, I'll walk through some of the background on the Node.js process lifecycle and some strategies to properly handle graceful shutdown and quickly restart your application after a catastrophic error terminates your program.

The Node.js process lifecycle

Let's first explore briefly how Node.js operates. A Node.js process is very lightweight and has a small memory footprint. Because crashes are an inevitable part of programming, your primary goal when architecting an application is to keep the startup process very lean, so that your application can quickly boot up. If your startup operations include CPU intensive work or synchronous operations, it might affect the ability of your Node.js processes to quickly restart. A strategy you can use here is to prebuild as much as possible. That might mean preparing data or compiling assets during the build process. It may increase your deployment times, but it's better to spend that time outside of the startup process.
Ultimately, this ensures that when a crash does happen, you can exit a process and start a new one without much downtime.

Node.js exit methods

Let's take a look at several ways you can terminate a Node.js process and the differences between them. The most common function to use is process.exit(), which takes a single argument, an integer. If the argument is 0, it represents a successful exit state. If it's greater than that, it indicates that an error occurred; 1 is a common exit code for failures here. Another option is process.abort(). When this method is called, the Node.js process terminates immediately. More importantly, if your operating system allows it, Node will also generate a core dump file, which contains a ton of useful information about the process. You can use this core dump to do some postmortem debugging using tools like llnode.

Node.js exit events

As Node.js is built on top of JavaScript, it has an event loop, which allows you to listen for events that occur and act on them. When Node.js exits, it also emits several types of events. One of these is beforeExit, and as its name implies, it is emitted right before a Node process exits. You can provide an event handler which can make asynchronous calls, and the event loop will continue to perform the work until it's all finished. It's important to note that this event is not emitted on process.exit() calls or uncaughtExceptions; we'll get into when you might use this event a little later. Another event is exit, which is emitted only when process.exit() is explicitly called. As it fires after the event loop has been terminated, you can't do any asynchronous work in this handler.
The code sample below illustrates the differences between the two events:

```javascript
process.on('beforeExit', code => {
  // Can make asynchronous calls
  setTimeout(() => {
    console.log(`Process will exit with code: ${code}`)
    process.exit(code)
  }, 100)
})

process.on('exit', code => {
  // Only synchronous calls
  console.log(`Process exited with code: ${code}`)
})
```

OS signal events

Your operating system emits events to your Node.js process, too, depending on the circumstances occurring outside of your program. These are referred to as signals. Two of the more common signals are SIGTERM and SIGINT. SIGTERM is normally sent by a process monitor to tell Node.js to expect a successful termination. If you're running systemd or upstart to manage your Node application, and you stop the service, it sends a SIGTERM event so that you can handle the process shutdown. SIGINT is emitted when a Node.js process is interrupted, usually as the result of a control-C (`^-C`) keyboard event. You can also capture that event and do some work around it. Here is an example showing how you may act on these signal events:

```javascript
process.on('SIGTERM', signal => {
  console.log(`Process ${process.pid} received a SIGTERM signal`)
  process.exit(0)
})

process.on('SIGINT', signal => {
  console.log(`Process ${process.pid} has been interrupted`)
  process.exit(0)
})
```

Since these two events are considered a successful termination, we call process.exit and pass an argument of 0 because it is something that is expected.

JavaScript error events

At last, we arrive at higher-level error types: the error events thrown by JavaScript itself. When a JavaScript error is not properly handled, an uncaughtException is emitted. These suggest the programmer has made an error, and they should be treated with the utmost priority. Usually, it means a bug occurred on a piece of logic that needed more testing, such as calling a method on a null type. An unhandledRejection error is a newer concept.
It is emitted when a promise is not satisfied; in other words, a promise was rejected (it failed), and there was no handler attached to respond. These errors can indicate an operational error or a programmer error, and they should also be treated as high priority. In both of these cases, you should do something counterintuitive and let your program crash! Please don't try to be clever and introduce some complex logic trying to prevent a process restart. Doing so will almost always leave your application in a bad state, whether that's having a memory leak or leaving sockets hanging. It's simpler to let it crash, start a new process from scratch, and continue receiving more requests. Here's some code indicating how you might best handle these events:

```javascript
process.on('uncaughtException', err => {
  console.log(`Uncaught Exception: ${err.message}`)
  process.exit(1)
})
```

We’re explicitly “crashing” the Node.js process here! Don’t be afraid of this! It is more likely than not unsafe to continue. The Node.js documentation says:

"Unhandled exceptions inherently mean that an application is in an undefined state...The correct use of 'uncaughtException' is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc) before shutting down the process. It is not safe to resume normal operation after 'uncaughtException'."

```javascript
process.on('unhandledRejection', (reason, promise) => {
  console.log('Unhandled rejection at ', promise, `reason: ${reason}`)
  process.exit(1)
})
```

unhandledRejection is such a common error that the Node.js maintainers have decided it should really crash the process, and they warn us that in a future version of Node.js unhandledRejections will crash the process:

[DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
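For contrast, here is a small sketch of a rejection that is handled, and therefore never triggers the unhandledRejection event (the `fetchUser` function is made up purely for illustration):

```javascript
// A hypothetical operation that fails by rejecting its promise.
function fetchUser (id) {
  return id > 0
    ? Promise.resolve({ id, name: 'Ada' })
    : Promise.reject(new Error(`invalid user id: ${id}`))
}

// Because a .catch handler is attached, this rejection is an
// ordinary, handled failure: no unhandledRejection event fires
// and the process keeps running.
fetchUser(-1)
  .then(user => console.log(user.name))
  .catch(err => console.error(`expected failure: ${err.message}`))
```

The rule of thumb: attach handlers to rejections you expect operationally, and let the truly unexpected ones crash the process.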
Run more than one process

Even if your process startup time is extremely quick, running just a single process is a risk to safe and uninterrupted application operation. We recommend running more than one process and using a load balancer to handle the scheduling. That way, if one of the processes crashes, there is another process that is alive and able to receive new requests. This is going to give you a little bit more leverage and prevent downtime. Use whatever you have on hand for the load balancing. You can configure a reverse proxy like nginx or HAProxy to do this. If you're on Heroku, you can scale your application to increase the number of dynos. If you're on Kubernetes, you can use Ingress or other load balancer strategies for your application.

Monitor your processes

You should have process monitoring in place: something running in your operating system or application environment that's constantly checking whether your Node.js process is alive or not. If the process crashes due to a failure, the process monitor is in charge of restarting the process. Our recommendation is to always use the native process monitoring that's available on your operating system. For example, if you're running on Unix or Linux, you can use systemd or upstart. If you're using containers, Docker has a --restart flag, and Kubernetes has restartPolicy, both of which are useful. If you can't use any existing tools, use a Node.js process monitor like PM2 or forever as a last resort. These tools are okay for development environments, but I can't really recommend them for production use. If your application is running on Heroku, don’t worry—we take care of the restart for you!

Graceful shutdowns

Let's say we have a server running. It's receiving requests and establishing connections with clients. But what happens if the process crashes?
If we're not performing a graceful shutdown, some of those sockets are going to hang around and keep waiting for a response until a timeout has been reached. That unnecessary time spent consumes resources, eventually leading to downtime and a degraded experience for your users. It's best to explicitly stop receiving connections, so that the server can disconnect its connections while it's recovering. Any new connections will go to the other Node.js processes running behind the load balancer. To do this, you can call server.close(), which tells the server to stop accepting new connections. Most Node servers implement this class, and it accepts a callback function as an argument. Now, imagine that your server has many clients connected, and the majority of them have not experienced an error or crashed. How can you close the server while not abruptly disconnecting valid clients? We'll need to use a timeout to indicate that if all the connections don't close within a certain limit, we will completely shut down the server. We do this because we want to give existing, healthy clients time to finish up but don't want the server to wait an excessively long time to shut down. Here's some sample code of what that might look like:

```javascript
process.on('<signal or error event>', _ => {
  server.close(() => {
    process.exit(0)
  })

  // If server hasn't finished in 1000ms, shut down process
  setTimeout(() => {
    process.exit(0)
  }, 1000).unref() // Prevents the timeout from registering on event loop
})
```

Logging

Chances are you have already implemented a robust logging strategy for your running application, so I won't get into that too much here. Just remember to log with the same rigorous quality and amount of information when the application shuts down! If a crash occurs, log as much relevant information as possible, including the errors and stack trace.
Rely on libraries like pino or winston in your application, and store these logs using one of their transports for better visibility. You can also take a look at our various logging add-ons to find a provider which matches your application’s needs.

Make sure everything is still good

Last, and certainly not least, we recommend that you add a health check route. This is a simple endpoint that returns a 200 status code if your application is running:

```javascript
// Add a health check route in express
app.get('/_health', (req, res) => {
  res.status(200).send('ok')
})
```

You can have a separate service continuously monitor that route. You can configure this in a number of ways, whether by using a reverse proxy, such as nginx or HAProxy, or a load balancer, like ELB or ALB. Any application that acts as the top layer of your Node.js process can be used to constantly monitor that the health check is returning. These will also give you way more visibility around the health of your Node.js processes, and you can rest easy knowing that your Node processes are running properly. There are some great monitoring services to help you with this in the Add-ons section of our Elements Marketplace.

Putting it all together

Whenever I work on a new Node.js project, I use the same function to ensure that my crashes are logged and my recoveries are guaranteed. It looks something like this:

```javascript
function terminate (server, options = { coredump: false, timeout: 500 }) {
  // Exit function
  const exit = code => {
    options.coredump ? process.abort() : process.exit(code)
  }

  return (code, reason) => (err, promise) => {
    if (err && err instanceof Error) {
      // Log error information, use a proper logging library here :)
      console.log(err.message, err.stack)
    }

    // Attempt a graceful shutdown
    server.close(exit)
    setTimeout(exit, options.timeout).unref()
  }
}

module.exports = terminate
```

Here, I've created a module called terminate.
I pass the instance of the server that I'm going to be closing, and some configuration options, such as whether I want to enable core dumps, as well as the timeout. I usually use an environment variable to control when I want to enable a core dump. I enable them only when I am going to do some performance testing on my application or whenever I want to replicate the error. This exported function can then be set to listen to our error events:

```javascript
const http = require('http')
const terminate = require('./terminate')

const server = http.createServer(...)

const exitHandler = terminate(server, { coredump: false, timeout: 500 })

process.on('uncaughtException', exitHandler(1, 'Unexpected Error'))
process.on('unhandledRejection', exitHandler(1, 'Unhandled Promise'))
process.on('SIGTERM', exitHandler(0, 'SIGTERM'))
process.on('SIGINT', exitHandler(0, 'SIGINT'))
```

Additional resources

There are a number of existing npm modules that pretty much solve the aforementioned issues in similar ways. You can check these out as well:

Hopefully, this information will simplify your life and enable your Node app to run better and safer in production!
https://blog.heroku.com/best-practices-nodejs-errors
So many social networks, so little time In recent months I've noticed that StumbleUpon is referring more people to this blog than any other single source. Richard McManus' recent interview with Garrett Camp, along with the reader commentary, nicely sums up what StumbleUpon is about and how it can complement a system like del.icio.us. This comment echoes my experience: I use SU [StumbleUpon] to dig around and just explore. Del [del.icio.us] though offers me my giant bookmark container and a very easy way of correlating that data through others' data. At this time I wouldn't use Del for SU stuff and I wouldnt use SU for the way Del works. Although I'd tried StumbleUpon several times over the past few years, it never really stuck. But since Stumblers are evidently interested in me I thought I'd try to learn more about them, their software, and their network. So I rejoined, reinstalled the Firefox extension, and started stumbling around. At this point, a couple of weeks into the experiment, I'm again ambivalent. In principle, I would use SU for a daily dose of serendipity. In practice, although the sites it suggests are often noteworthy, they're all pretty heavy-handedly based on the categories I've declared interest in. Now I do understand that the system expects me to refine those interests by rating a few sites a day with a thumbs up or thumbs down. And I haven't done much of that. What I have done, though, is import the 1600 thumbs ups I've recorded in del.icio.us over the past few years. My hope was that this would provide the kinds of collaborative recommendations I was working toward in some experiments last year. So far as I can tell, though, the importation of my del.icio.us bookmarks into StumbleUpon hasn't influenced its recommendations. If I've got that wrong, I hope somebody will chime in here and set things straight. The general problem, for me, is that I refuse to invest in closed social networks. 
Life's too short to participate actively in LinkedIn, StumbleUpon, Flickr, and all the rest. When I met Gary McGraw this summer, he said: "People keep asking me to join LinkedIn, but I tell them I'm already on a network: the Internet." I feel exactly the same way. I'm a citizen of the Internet, but beyond that I neither have nor want an allegiance to artificial communities defined arbitrarily by particular software and network architectures. However I do have, and would like to strengthen, allegiances to natural communities defined by common interest. Those natural communities don't respect the borders of arbitrarily-defined artificial communities. But if you want to behave as a citizen of the Internet, and affiliate with others in your natural communities on those terms, it's a hard slog. Once upon a time, Kim Cameron pioneered the idea of a metadirectory. Today, he's laying the foundations for the kind of metacommunity that the Internet has always needed to be. We'll get there, I hope. But meanwhile, please don't be offended that I haven't accepted the invitation you sent me from StumbleUpon or Flickr or LinkedIn or any of the others. Sorry, but life's too short.
http://weblog.infoworld.com/udell/2006/10/25.html
Improve OpenBSD support

Bug Description

Here is a patch (as of build17-rc2) that fixes some issues on OpenBSD:
- needs to be linked with execinfo
- <ctype.h> #defines _C to be 20, which conflicts with some templates
- nitems() conflicts with a macro definition in the system
- OpenBSD doesn't have O_NOATIME
- due to namespace issues <sys/types.h> must be #included before <sys/socket.h>
- default to x11 driver

Hi Anthony, welcome to Launchpad. (And thanks for your patch.) Note that Debian/kFreeBSD (Debian with the FreeBSD kernel instead of Linux) also had some problems with execinfo. Might be general *BSD issues? According to the changelog of widelands_17~rc2-3: "on kfreebsd, the execinfo library cannot be found for an unknown reason. Try to survive this fact to fix the resulting FTBFS." See http://

Just pushed this patch. Thanks Anthony!

Released in build-18 rc1. Thanks. Will be merged after b17

status: triaged
importance: medium
milestone: build18-rc1
https://bugs.launchpad.net/widelands/+bug/983448
Description

Many of the DOM nodes in your app exist solely for style purposes. Styling them can be cumbersome because of name fatigue (coming up with unique class names for nodes that don't need a name, like .outerWrapperWrapper), selector complexity, and constantly bouncing between your JS code and your CSS code in your editor.

jsxstyle alternatives and similar libraries

Based on the "PostCSS" category. Alternatively, view jsxstyle alternatives based on common mentions on social networks and blogs.

- styled-components: Visual primitives for the component age. Use the best bits of ES6 and CSS to style your apps without stress 💅
- PostCSS: Transforming styles with JS plugins
- emotion: 👩‍🎤 CSS-in-JS library designed for high performance style composition
- linaria: Zero-runtime CSS in JS library
- Radium: A toolchain for React component styling.
- JSS: JSS is an authoring tool for CSS which uses JavaScript as a host language.
- styled-jsx: Full CSS support for JSX without compromises
- React CSS Modules: Seamless mapping of class names to CSS modules inside of React components.
- Aphrodite: Framework-agnostic CSS-in-JS with support for server-side rendering, browser prefixing, and minimum CSS generation
- CSS Layout: A collection of popular layouts and patterns made with CSS. Now it has 100+ patterns and continues growing!
- glamor: inline css for react et al
- glamorous: React component styling solved 💄
- Neutrino: Create and build modern JavaScript projects with zero initial configuration.
- vue-virtual-scroll-list: ⚡️ A vue component support big amount data list with high render performance and efficient.
- styletron: ⚡ Toolkit for component-oriented styling
- Fela: State-Driven Styling in JavaScript
- React Figma: ⚛️ A React renderer for Figma
- ReactCSS: 💄 Inline Styles in JS
- Atomizer: A tool for creating Atomic CSS, a collection of single purpose styling units for maximum reuse.
- CSSX
- React Inline: Transform inline styles defined in JavaScript modules into static CSS code and class names so they become available to, e.g. the `className` prop of React elements.
- c2f

README

jsxstyle & friends

This repository is the monorepo for runtime versions of jsxstyle as well as a few tools designed to be used with jsxstyle.

<!-- prettier-ignore -->
| Package | Description |
| :-- | :-- |
| jsxstyle | stylable React/Preact components |
| jsxstyle-webpack-plugin | webpack plugin that extracts static styles from jsxstyle components at build time |
| jsxstyle-utils | utility functions used by jsxstyle-webpack-plugin and runtime jsxstyle |

jsxstyle

jsxstyle is an inline style system for React and Preact. It provides a best-in-class developer experience without sacrificing performance. Styles are written inline on a special set of components exported by jsxstyle. Inline styles on these components are converted to CSS rules and added to the document right as they’re needed. With jsxstyle, your component code looks like this:

```jsx
<Row alignItems="center" padding={15}>
  <Block
    backgroundColor="#EEE"
    boxShadow="inset 0 0 0 1px rgba(0,0,0,0.15)"
    borderRadius={5}
    height={64}
    width={64}
    marginRight={15}
  />
  <Col fontFamily="sans-serif" fontSize={16}>
    <Block fontWeight={600}>Justin Bieber</Block>
    <Block fontStyle="italic">Canadian</Block>
  </Col>
</Row>
```

⚡️ Style as fast as you can think. Jumping between JS and CSS in your editor is no longer necessary.
Since styles are inline, you can determine at a glance exactly how an element is styled. jsxstyle frees you up to do what you do best—write styles.

✅ Inline styles done right. Just because styles are written inline doesn’t mean they stay inline. jsxstyle’s approach to inline styles ensures that a best-in-class developer experience comes with no performance cost.

😪 No more naming fatigue. Naming components is hard enough, and there are only so many synonyms for “wrapper”. jsxstyle provides a set of stylable components, each with a few default styles set. These primitive stylable components form a set of building blocks that you can reuse throughout your application. You can still create named stylable components if you wish, by utilizing a paradigm you’re already familiar with: composition. No funky syntax necessary: `const RedBlock = (props) => <Block color="red" {...props} />;`

🍱 Scoped styles right out the box. Styles written on jsxstyle components are scoped to component instances instead of abstract reusable class names. That’s not to say we’ve abandoned class names, though; styles on jsxstyle components are extracted into CSS rules and assigned a hashed, content-based class name that is intentionally unlike a human-written name.

👯 Team friendly by design. jsxstyle’s mental model is easy to teach and easy to learn, which means onboarding new frontend contributors takes seconds, not hours. Since styles applied by jsxstyle are scoped to component instances, frontend contributors don’t need a complete knowledge of the system in order to be 100% productive right from the start.

🛠 Powerful build-time optimizations. Styles written inline on a set of components from a known source can very easily be statically analyzed, which opens up new possibilities for tooling and optimization. One such optimization is jsxstyle-webpack-plugin, a webpack plugin that extracts static styles from jsxstyle components at build time.
jsxstyle-webpack-plugin reduces and in some cases entirely removes the need for runtime jsxstyle.

Getting started

Install the jsxstyle package with your preferred node package manager. Components for React can be imported from jsxstyle, and components for Preact can be imported from jsxstyle/preact. jsxstyle provides the following seven components:

Most props passed to these components are assumed to be CSS properties. There are some exceptions to this rule:

- component: the underlying HTML tag or component to render. Defaults to 'div'.
- props: an object of props to pass directly to the underlying tag or component.
- mediaQueries: an object of media query strings keyed by prefix. More on that below.

Additionally, the following component props can also be set at the top level if the component you specify supports these props:

- checked
- className
- href
- id
- name
- placeholder
- style
- type
- value
- Any event handler prop starting with on

This list is fairly arbitrary. If there’s a prop that you think is missing, feel free to request an addition to this list.

Features

Pseudoelements and pseudoclasses

To specify a pseudoelement or pseudoclass on a style property, prefix the prop with the name of the applicable pseudoelement or pseudoclass. If you’d like to specify a pseudoelement and a pseudoclass for a style prop, start with the pseudoclass—i.e., hoverPlaceholderColor, not placeholderHoverColor.

```jsx
import { Block } from 'jsxstyle/preact';

<Block
  component="input"
  color="#888"
  activeColor="#333"
  placeholderColor="#BBB"
/>;
```

<!-- prettier-ignore -->
| Supported Pseudoclasses | Supported Pseudoelements |
| -- | -- |
| active, checked, disabled, empty, enabled, focus, hover, invalid, link, required, target, valid | placeholder, selection, before, after |

Media queries

Define a mediaQueries property with an object of media queries keyed by whatever prefixes you want to use. Prepend these media query keys to any style props that should be contained within media query blocks.
Note that only one media query prefix can be applied at a time.

```jsx
<Block
  mediaQueries={{
    sm: 'screen and (max-width: 640px)',
    lg: 'screen and (min-width: 1280px)',
  }}
/>
```

useMatchMedia hook

jsxstyle exports a hook, useMatchMedia, that enables the developer to subscribe to media query change events and react accordingly. Here’s the hook in action:

```jsx
import { Block, useMatchMedia } from 'jsxstyle';

export const RedOrBlueComponent = ({ children }) => {
  const isSmallScreen = useMatchMedia('screen and (max-width: 800px)');

  // text color is red when viewport <= 800px, blue when viewport > 800px
  return <Block color={isSmallScreen ? 'red' : 'blue'}>{children}</Block>;
};
```

When this hook is used in combination with jsxstyle-webpack-plugin, prop values will be extracted if the prop passed to the component is a ternary and if the alternate and consequent values of the ternary are both static.

Convenient animation support

You can define an animation inline using object syntax, where the key is the specific keyframe name and the value is an object of styles:

```jsx
<Block
  animation={{
    from: { opacity: 0 },
    to: { opacity: 1 },
  }}
/>
```

Shorthand properties for same-axis padding and margin

You can set margin or padding on the same axis—either horizontal or vertical—by setting marginH/marginV or paddingH/paddingV. Note: shortcut props should not be used in combination with -Top/Left/Bottom/Right variants. Prop names on jsxstyle components are sorted alphabetically before the styles are stringified, which means that styles will be applied alphabetically.

FAQs

Why write styles inline with jsxstyle?

Writing styles inline does away with name fatigue and constantly bouncing between CSS and component code in your editor, and jsxstyle’s approach to inline styles ensures that a best-in-class developer experience comes with no performance cost.

### Naming things is hard.

jsxstyle manages CSS and corresponding generated class names, which means that what those class names actually are becomes unimportant.
jsxstyle can generate short, production-optimized class names and retain a mapping of those class names to corresponding style objects. All you have to do is worry about actual style properties.

### Jumping between JS and CSS in your editor wastes time.

There’s no need to constantly jump between components and the CSS file(s) that define how those components are styled because styles are defined right at the component level. CSS has always been a language that describes what HTML elements look like. With jsxstyle, those descriptions are right where you need them.

### Styles are… inline.

With inline styles, any frontend contributor can look at an element and know in a matter of seconds exactly how it’s styled. Inline styles describe an element’s appearance better than CSS classes ever could, and because you don’t have to worry about the class abstraction, there’s no fear of you or another frontend contributor taking a pure CSS class (like .red { color: tomato }) and corrupting it by modifying its styles. Also, because styles are inline, when you delete a component, you delete its style properties along with it. Dead CSS is no longer a concern.

### Styles written inline don’t remain inline.

jsxstyle is first and foremost syntax for styling components at a particular scope. The styles you specify on jsxstyle components are added to the document, and a `div` or the component you specify is output with a class name that points to the added styles.

### Building tooling around inline styles is simple and straightforward.

Statically analyzing inline styles on known components is trivial. Most of the styles you’ll end up writing on jsxstyle primitive components are static. Once you’re done perusing this README, check out jsxstyle-webpack-plugin. It’s a webpack plugin that, at build time, extracts static styles defined on jsxstyle components into separate CSS files. jsxstyle-webpack-plugin reduces and in some cases entirely removes the need for runtime jsxstyle.
jsxstyle becomes nothing more than syntactic sugar for styling components, much like how JSX itself is syntactic sugar for nested function calls. Dude, that’s next level!

Why use jsxstyle instead of BEM/SMACSS/OOCSS/etc.?

jsxstyle provides all the benefits of a CSS class naming/organization system, but without the system. Writing CSS at scale is hard. Overly specific selectors cause specificity collisions. More generic selectors cause overstyling. Being a responsible frontend contributor in a shared codebase means you have to have a working knowledge of the system before you can contribute new code without introducing redundancies or errors. Countless systems have been developed to either solve or circumvent inherent problems with writing CSS in a team environment. Most of these systems attempt to solve the complexity of writing CSS with even more complex systems. Once a system is implemented it has to be closely adhered to. CSS systems are fantastic in theory, but in practice, a CSS system is only as good as the most negligent frontend contributor on your team. jsxstyle provides all the benefits of a good CSS class-naming system, with the added benefit of not having to learn or remember a CSS class-naming system.

- ### No more specificity issues, collisions, accidental overstyling, or inscrutable class names.

  jsxstyle manages class names and generated styles, leaving you to do what you do best… write styles. Selector complexity is a thing of the past. Each jsxstyle component gets a single class name based on the inline styles specified on the component. The class name is reused when repeat instances of that set of style props are encountered.

- ### No more bikeshedding!

  No more extended discussions about which CSS class naming strategy is best! I cannot emphasize enough how much time and mental energy this saves. Code review is simple as well. CSS-related nits only involve actual style properties.
Conversations about how to style a thing begin and end with the actual CSS properties that need to be written.

- ### Onboarding new frontend contributors takes seconds, not hours.

  A knowledge of existing styles is not required for a new frontend contributor to be 100% productive right from the start. In codebases without jsxstyle, in order for someone to be able to contribute, they usually have to know what styles to put where and where to look to put new styles. There are usually mixins and variables they don’t know exist because they don’t yet “know their way around the place”. With jsxstyle, you’re just writing styles on components.

Can I use jsxstyle with existing CSS?

Yes! jsxstyle is designed to work alongside existing styles and style systems. In order to avoid class name collisions, class names generated by jsxstyle are hashed names that are intentionally unlike class names that a human would write. As far as specificity is concerned, jsxstyle uses single class names as selectors, which makes overriding styles in your existing system easy (though not recommended).

Does jsxstyle support server rendering?

Yep! jsxstyle exports a cache object with a few functions that make adding support for server rendering a breeze. Two things you need to know:

- In a server environment, the function that adds styles to the document is a noop, but it can be replaced with any arbitrary function. When server rendering, you can aggregate jsxstyle-injected styles when rendering your app to a string, and then add those styles to the response you send to the client.
- jsxstyle builds a cache of styles that have been added to the document to ensure they’re added exactly once. When server rendering, this cache will need to be reset between each render.

Here’s a minimal (untested!)
example of jsxstyle server rendering with Koa:

```jsx
import { cache } from 'jsxstyle';
import * as Koa from 'koa';
import * as React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App';

// aggregate styles as they’re added to the document
let styles = '';
cache.injectOptions({
  onInsertRule(css) {
    styles += css;
  },
});

const app = new Koa();
app.use(async (ctx) => {
  // Reset cache and style string before each call to `renderToString`
  cache.reset();
  styles = '';
  const html = renderToString(<App path={ctx.request.path} />);
  ctx.body = `<!doctype html>
<style>${styles}</style>
<div id=".app-root">${html}</div>
<script src="/bundle.js"></script>
`;
});
```

## Does jsxstyle support autoprefixing?

Runtime jsxstyle does not bundle an autoprefixer, but autoprefixing is easily doable if you use webpack. We recommend combining jsxstyle-webpack-plugin with a CSS loader that provides autoprefixing. At Smyte, we use postcss-loader with postcss-cssnext. Not using webpack and you’d like to see runtime autoprefixing supported? Open an issue and let us know!

## What about global styles?

jsxstyle only manages styles written on jsxstyle components. Where you put global styles is entirely up to you. At Smyte, we use a separate shared style sheet that contains a few reset styles.

## Browser support

jsxstyle is tested on every push in a wide array of browsers, both old and new. Shout out to Sauce Labs for making cross-browser testing free for open source projects. Sauce Labs is shockingly easy to integrate with other services. I’m not gonna say it’s simple to get set up, because it’s not, but once it’s up and running, damn, it’s easy. They even make an SVG test matrix you can drop into your README:

## Contributing

Got an idea for jsxstyle? Did you encounter a bug? Open an issue and let’s talk it through. PRs welcome too!

## Alternatives

So you don’t think jsxstyle is the thing for you? That’s quite alright.
It’s a good time to be picky about exactly how and where your styles are written. We’re in the golden age of component-based web frameworks, and a lot of ancient “best practices” that were set in place by the old guard are being rethought, to everyone’s benefit. It’s a weird and exciting time to be making stuff for the web. Sorting through the myriad CSS-in-JS solutions out there can get tiring, but there are a few projects out there that have stuck out to me:

Tachyons by Adam Morse enables a lot of the same benefits as jsxstyle but allows you to still use CSS classes. I love the “no new CSS” concept behind Tachyons. Tachyons elegantly solves the issues that Adam covers in his excellent blog post on scalable CSS.

Rebass by Brent Jackson is “a functional React UI component library, built with styled-components”. Rebass has a similar API to jsxstyle, but is a bit more opinionated when it comes to separation of presentation and logic. Syntactically it’s more compact, and it has a few more tricks. We don’t like tricks over here at jsxstyle dot com, but we do give Rebass two meaty thumbs up.

styled-components and (more recently) emotion have both gained serious traction in the frontend JS community. I can’t do either system justice in a single sentence and I’ve never used either system, but they both seem like reasonable jsxstyle alternatives that embrace the funky things you can do with tagged template literals.
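Stepping back to the class-name reuse described earlier (one hashed class per unique set of style props, shared by repeat instances): the general idea can be illustrated with a toy cache. This is only a sketch, not jsxstyle's actual implementation; the function name, counter-based naming, and serialization scheme here are all invented for illustration.

```javascript
// Toy model of style-props -> class-name caching (illustrative only).
const styleCache = new Map();
let counter = 0;

function classNameForStyles(styleProps) {
  // Serialize props in a stable order so prop order doesn't matter.
  const key = Object.keys(styleProps)
    .sort()
    .map((k) => `${k}:${styleProps[k]}`)
    .join(';');
  if (!styleCache.has(key)) {
    // First time this style combination is seen: mint a new class name.
    styleCache.set(key, `_x${counter++}`);
  }
  return styleCache.get(key);
}

// Repeat instances of the same styles share one class name.
const a = classNameForStyles({ color: 'red', padding: 16 });
const b = classNameForStyles({ padding: 16, color: 'red' });
const c = classNameForStyles({ color: 'blue' });
```

Because the key is order-independent, `a` and `b` resolve to the same class name, while `c` gets a fresh one; a real implementation would also have to serialize pseudo-selectors and media queries into the key.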
https://js.libhunt.com/jsxstyle-alternatives
Scala and CodeKata

Recently, my buddy Quy has been raving about Scala and CodeKata. So after much arm-twisting, I decided to check them out. Scala is a language built on top of the JVM, focusing mainly on functional programming (with the option to do imperative as well) – all put together into a statically-typed, object-oriented package. After some reading and playing, I was sold on Scala and wanted to do some exercises to really get a feel for it. Enter CodeKata.

CodeKata is actually a really brilliant idea. Usually programmers practice and learn on the job, but really they should be practicing all the time and honing their skills. From the site:

So I decided to try the karate chop exercise. My first take on the binary chop was an iterative approach.

```scala
def chop(search: Int, searchSet: List[Int]): Int = {
  // base case -- empty set
  if (searchSet.length <= 0) {
    return -1
  }

  var min = 0
  var max = searchSet.length - 1

  do {
    // recalculate mid based on max and min
    val mid = min + (max - min) / 2
    val x = searchSet.apply(mid)

    if (x == search) {
      return mid
    } else if (x > search) {
      max = mid - 1
    } else if (x < search) {
      min = mid + 1
    }
  } while (min <= max)

  // nothing found
  return -1
}
```

It seems pretty standard, no functional programming at all. The next take on the algorithm was to do it in a recursive (slightly more functional) way.
```scala
def chop(search: Int, searchSet: List[Int]): Int = {
  // base case -- empty set
  if (searchSet.length <= 0) {
    return -1
  }

  val mid = (searchSet.length - 1) / 2
  val x = searchSet.apply(mid)

  // other base case -- found
  if (x == search) {
    return mid
  }

  if (x < search) {
    // drop the left side of the list
    chop(search, searchSet.drop(mid + 1)) match {
      case -1 => return -1
      /* since the function returns the position, have to
         restore the values chopped */
      case i => return (i + mid + 1)
    }
  }
  // x > search
  else {
    // drop the right side of the list
    // (don't need to worry about position since it's not altered)
    chop(search, searchSet.dropRight(mid + 1))
  }
}
```

After these first two exercises, I still feel like I’m coding with a Java mindset. You can see from the examples that Scala is somewhat more concise, but these examples don’t really show its true power. From reading about it and seeing glimpses of awesome code, it definitely has the potential to be the next big player on the heels of Java. I’m going to try and get another implementation of this algorithm that’s more concise and/or functional. If you’re reading this, I would love some feedback on either my implementation, or on Scala or CodeKata.
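For what it's worth, one possible more functional take is sketched below. It is my own suggestion, not from the post, and assumes a sorted input list: an inner tail-recursive helper carries an offset into the original list, so no position fix-up is needed afterwards and no `var` is required.

```scala
import scala.annotation.tailrec

def chop(search: Int, searchSet: List[Int]): Int = {
  // Helper carries the remaining slice plus its offset into the
  // original list, replacing the mutable min/max bookkeeping.
  @tailrec
  def go(xs: List[Int], offset: Int): Int = {
    if (xs.isEmpty) -1
    else {
      val mid = (xs.length - 1) / 2
      xs(mid) match {
        case x if x == search => offset + mid
        case x if x < search  => go(xs.drop(mid + 1), offset + mid + 1)
        case _                => go(xs.take(mid), offset)
      }
    }
  }
  go(searchSet, 0)
}
```

Both recursive calls are in tail position, so `@tailrec` compiles this down to a loop; note that `length`, `drop`, and `apply` on `List` are O(n), so a `Vector` would be the better carrier if performance mattered.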
http://arthur.gonigberg.com/2011/03/07/scala-and-codekata/
On 08/04/2013 at 07:49, xxxxxxxx wrote:

Hi everyone! I want to render a picture using a thread, so C4D isn't freezing during rendering. I use a CommandData plugin and the c4d.threading module. Here's my thread code:

```python
import c4d
from c4d import documents
from c4d.threading import C4DThread

class RenderThread(C4DThread):

    def __init__(self, doc):
        super(RenderThread, self).__init__()
        self.doc = doc

    def Main(self):
        renderData = self.doc.GetActiveRenderData()
        xResolution = int(round(renderData[c4d.RDATA_XRES]))
        yResolution = int(round(renderData[c4d.RDATA_YRES]))
        renderBmp = c4d.bitmaps.BaseBitmap()
        renderBmp.Init(x=xResolution, y=yResolution, depth=32,
                       flags=c4d.INITBITMAPFLAGS_0)
        result = documents.RenderDocument(
            self.doc, renderData.GetData(), renderBmp,
            c4d.RENDERFLAGS_EXTERNAL | c4d.RENDERFLAGS_CREATE_PICTUREVIEWER |
            c4d.RENDERFLAGS_OPEN_PICTUREVIEWER)
        print result
```

This is the call:

```python
class CommandDataExecute(plugins.CommandData):

    __dialog = None
    renderThread = None

    def Execute(self, doc):
        doc = documents.GetActiveDocument()
        if self.renderThread and self.renderThread.IsRunning():
            # todo
            pass
        else:
            self.renderThread = RenderThread(doc)
            print 'start thread successful: ', self.renderThread.Start()
        return True

    def RestoreLayout(self, sec_ref):
        pass
```

My goal is to render several pictures one after the other in the picture viewer. But the picture viewer doesn't open with this code (it does if I launch the code directly in the plugin's main thread). When I try to open the picture viewer from the menu after launching my plugin, I get an Access Violation and C4D terminates. So how can I reach my goal? Thanks for any advice in advance!
On 08/04/2013 at 08:30, xxxxxxxx wrote:

Threading does not allow GUI operations, and trying to launch the whole rendering operation from another thread than the C4D main thread seems pretty optimistic to me.

On 08/04/2013 at 08:42, xxxxxxxx wrote:

In this thread it is suggested to use threads for rendering. I'm confused, what's correct? Are there any other possibilities to prevent C4D from freezing during rendering? I'm talking about dozens of pretty big pictures (11k x 8k pixels). If there is no graphical response to the user, this will be a serious problem.

On 08/04/2013 at 10:03, xxxxxxxx wrote:

When it comes to rendering and working on the scene at the same time, it might be simpler to open a second instance of C4D to work in while your other instance is rendering the scene. Right click on the C4D shortcut icon and edit the Properties->Target field so you can run multiple instances of C4D. Example:

"C:\Program Files\MAXON\CINEMA 4D R13\CINEMA 4D 64 Bit.exe" -parallel

This will let you work on the same scene while it's rendering in the other instance of C4D. But you will probably have choppy performance. So to help with that you can limit the number of threads used for rendering. That will allow smoother editing of the scene in the other opened instance of C4D. In the preferences, go to Renderer->Custom Number of threads and set it to 1 or maybe 2 threads. How high you can set it will depend on your computer system.

-ScottA

On 08/04/2013 at 10:18, xxxxxxxx wrote:

Well, opening dialog boxes or even just printing to the console can, in a threaded environment, lead to crashing C4D. It is stated in the documents and I can also confirm it from personal experience. I cannot guarantee that using the RenderDocument method is a GUI operation, but it seems very likely IMHO. I am not quite sure Niklas has considered that in the linked thread, but I might be wrong.

On 08/04/2013 at 11:46, xxxxxxxx wrote:

Hi!
RenderDocument() is not a GUI operation and it should be safe to call it from a threaded context. However, using the RENDERFLAGS_OPEN_PICTUREVIEWER flag only works in a synchronous (non-threaded) context. You can use c4d.bitmaps.ShowBitmap() to manually show the bitmap in the Picture Viewer. Also remove the RENDERFLAGS_CREATE_PICTUREVIEWER flag in this case, otherwise your bitmap would be shown twice in the PV.

Even modifying the document to be rendered is fine while rendering, since the document is copied internally for rendering. Note that passing the C4DThread, which is normally used by RenderDocument() to check if it should stop rendering or continue by calling C4DThread.TestBreak(), will not work. It's a bug in the API and you will get a System Error with "bad argument to internal function". I can't reproduce that Cinema freezes.

```python
import c4d

class RenderThread(c4d.threading.C4DThread):

    def __init__(self, doc, bmp, flags=c4d.RENDERFLAGS_EXTERNAL):
        super(RenderThread, self).__init__()
        self.doc = doc
        self.bmp = bmp
        self.flags = flags
        self.result = None
        self._assert()

    def _assert(self):
        assert self.doc and self.bmp
        assert all(self.bmp.GetSize()), "Bitmap is not initialized."

    def Main(self):
        self._assert()
        rdata = self.doc.GetActiveRenderData()
        rdata[c4d.RDATA_SAVEIMAGE] = False
        self.result = c4d.documents.RenderDocument(
            self.doc, rdata.GetData(), self.bmp, self.flags)

def main():
    bmp = c4d.bitmaps.BaseBitmap()
    bmp.Init(600, 300, 32)
    thread = RenderThread(doc, bmp)
    thread.Start()
    thread.Wait(False)
    print thread.result == c4d.RENDERRESULT_OK
    c4d.bitmaps.ShowBitmap(bmp)

main()
```

On 08/04/2013 at 13:52, xxxxxxxx wrote:

Niklas, your example doesn't work for me. While the thread is running and rendering, everything is frozen and can't be used until the rendering is completed.

On 08/04/2013 at 14:17, xxxxxxxx wrote:

Hi Scott, yes, that is due to this line:

thread.Wait(False)

I need to keep the thread alive somehow, and there is no other option than waiting in the main loop until the thread is done in the script manager.
In a plugin, like Nachtmahr did, you can store the thread in the plugin object (CommandData in this case) to keep it alive. I should've added that the script freezes Cinema because, as a script, it can't be avoided (ok, with some really hacky stuff it could be avoided, but then, still, I couldn't open the PV from the thread).

-Niklas

On 08/04/2013 at 14:33, xxxxxxxx wrote:

I also tried it in a simple Command plugin. But I'm still getting the same results. Everything is frozen until the rendering is finished.

```python
import c4d
from c4d import gui
from c4d import documents
from c4d import utils, bitmaps, storage, plugins

# get an ID from the plugincafe
PLUGIN_ID = 1000001

class RenderThread(c4d.threading.C4DThread):

    def __init__(self, doc, bmp):
        super(RenderThread, self).__init__()
        self.doc = doc
        self.bmp = bmp

    def render(self):
        print "rendering"
        rdata = self.doc.GetActiveRenderData()
        rdata = rdata.GetClone(c4d.COPYFLAGS_NO_HIERARCHY)
        rdata[c4d.RDATA_SAVEIMAGE] = True
        self.bmp = c4d.documents.RenderDocument(
            self.doc, rdata.GetData(), self.bmp, c4d.RENDERFLAGS_EXTERNAL)

class SceneRender(c4d.plugins.CommandData):

    def Execute(self, doc):
        bmp = c4d.bitmaps.BaseBitmap()
        bmp.Init(600, 300, 32)
        w, h = bmp.GetSize()
        thread = RenderThread(doc, bmp)
        thread.Start()
        thread.render()
        # thread.Wait(False)
        c4d.bitmaps.ShowBitmap(bmp)
        return True

if __name__ == "__main__":
    help = "The text shown at the bottom of C4D when the plugin is selected in the menu"
    plugins.RegisterCommandPlugin(PLUGIN_ID, "Scene Render", 0, None, help, SceneRender())
```

On 09/04/2013 at 05:03, xxxxxxxx wrote:

Hi, you didn't overload the correct threading functions. Please use C4DThread.Main() instead of render(). Also make your thread variable a member of your SceneRender class, otherwise your RenderThread object will be destroyed when Execute(...) is left.

Cheers, Seb

On 09/04/2013 at 10:53, xxxxxxxx wrote:

Thanks Sebastien, I've got it working now. And it does render the scene while I'm also working on it. But I'm having trouble with ending the thread properly. I'm not 100% certain.
But it seems like the thread never stops?

```python
import c4d
from c4d import documents
from c4d import utils, bitmaps, storage, plugins

# get an ID from the plugincafe
PLUGIN_ID = 1000001

class RenderThread(c4d.threading.C4DThread):

    bmp = c4d.bitmaps.BaseBitmap()

    def Main(self):
        doc = c4d.documents.GetActiveDocument()
        rdata = doc.GetActiveRenderData()
        rdata = rdata.GetClone(c4d.COPYFLAGS_NO_HIERARCHY)
        w = rdata[c4d.RDATA_XRES]
        h = rdata[c4d.RDATA_YRES]
        rdata[c4d.RDATA_SAVEIMAGE] = True
        bmp = c4d.bitmaps.BaseBitmap()
        bmp.Init(int(w), int(h), 32)
        bmp = c4d.documents.RenderDocument(
            doc, rdata.GetData(), bmp, c4d.RENDERFLAGS_EXTERNAL)

class SceneRender(c4d.plugins.CommandData):

    doc = c4d.documents.GetActiveDocument()
    thread = RenderThread()
    image = thread.bmp

    def Execute(self, doc):
        self.thread.Start()
        self.thread.End(False)
        if self.thread and not self.thread.IsRunning():
            print "Render Finished"  # <------ This never happens!!! Thread never stops?
        return True

if __name__ == "__main__":
    help = "The text shown at the bottom of C4D when the plugin is selected in the menu"
    plugins.RegisterCommandPlugin(PLUGIN_ID, "SceneRender", 0, None, help, SceneRender())
```

On 09/04/2013 at 14:51, xxxxxxxx wrote:

Your thread is still running while the Execute() method has already reached its end. At the point you check if the thread is still running, the thread is actually still running, and you never happen to reach this statement again. Try this instead:

```python
if self.thread and self.thread.IsRunning():
    print "Cannot start render, still running ..."
else:
    self.thread.Start()
```

If you pressed the command in Cinema, it will start to render. If you then press it again and rendering is not done at this point, it will tell you about this, otherwise it will start a new render thread. You do not have to call End() on the thread object. If you wanted to wait until the thread is finished, you would again block Cinema's main thread, resulting in a temporary freeze.
PS: One usually explicitly binds a thread to a context; it is rather unusual to create one thread object and start it more than once.

Best, -Niklas

On 09/04/2013 at 15:29, xxxxxxxx wrote:

Thanks Nik, I did try it that way also. But it still doesn't give me any way to check the thread for its stopped state. Per your comment on CGTalk, you made it sound like we can't get the stopped state due to a bug. I'm not sure if that's what you meant?

The way I normally use threads is:

- Create a thread class.
- Run a loop inside that class with some task being run per loop iteration.
- Then end the thread.

All of this is done in the thread class. To make the thread class code execute, I would use a call from one of the plugin's methods, for example when a GUI button is pressed. I don't normally do any thread status checking from within the plugin's methods like in my example. So this is new territory for me.

At this point, I can create a thread and render the scene out while still working on the scene, which is sort of a success. But there is no visual feedback as to what's going on, and no way to even tell when the thread has finished running. And if I try to open the image viewer while rendering, all I get is a black image in the image viewer, instead of the sequence of renders being rendered. You mentioned you had similar problems and had to find a completely different way to hack around this without using threads at all. So maybe what I'm trying to do can't be done?

On 09/04/2013 at 22:56, xxxxxxxx wrote:

What I said is that when you start a render in the Picture Viewer using RenderDocument(), you can not stop the render anymore (eg. closing the PV doesn't ask you to stop the render and the "Stop Render" command in the PV is greyed out). I don't see where you are unable to check for its stopped state. IsRunning() is just fine for this job. A thread ends automatically when it quits its Main() method; you do not have to call End() on it.
What End() does is stated in the docs: it enables you to interrupt the thread before it is actually done (imagine, eg: when Cinema closes and your thread is still running, you want to tear it down, either waiting for it to finish or even interrupting it).

As for the visual feedback: what do you expect? You didn't implement any visual feedback.

Since my PV Render Queue plugin is only rendering in the Picture Viewer and does not require any reference to the rendered image, I've been using a simple CallCommand() to start the Picture Viewer rendering (as I said @CGSociety). Although I'm not a fan of using CallCommand(), it was necessary to get around this "I can't stop the rendering, what a sh*t plugin!" thing.

Best, -Niklas

On 10/04/2013 at 04:14, xxxxxxxx wrote:

Thanks for all the answers! I changed my code a bit to copy yours, but now I get this exception after calling ShowBitmap():

RuntimeError: illegal operation, invalid cross-thread call

This is my thread:

```python
class RenderThread(C4DThread):

    def __init__(self, doc):
        super(RenderThread, self).__init__()
        self.doc = doc

    def Main(self):
        renderData = self.doc.GetActiveRenderData()
        xResolution = int(renderData[c4d.RDATA_XRES])
        yResolution = int(renderData[c4d.RDATA_YRES])
        renderBmp = c4d.bitmaps.BaseBitmap()
        renderBmp.Init(x=xResolution, y=yResolution, depth=32)
        result = documents.RenderDocument(
            self.doc, renderData.GetData(), renderBmp, c4d.RENDERFLAGS_EXTERNAL)
        print result
        if result == c4d.RENDERRESULT_OK:
            c4d.bitmaps.ShowBitmap(renderBmp)
```

And this is the call:

```python
class CommandDataExecute(plugins.CommandData):

    __dialog = None
    renderThread = None

    # plugin started by user
    def Execute(self, doc):
        doc = documents.GetActiveDocument()
        if self.renderThread and self.renderThread.IsRunning():
            print 'Thread is still running. Please try again later'
        else:
            self.renderThread = RenderThread(doc)
            print 'start thread successful: ', self.renderThread.Start()
```

Any suggestions on this?
On 10/04/2013 at 06:13, xxxxxxxx wrote:

Hi Nachtmahr, it has already been said: GUI operations can not be invoked from a thread. ShowBitmap() opens the Picture Viewer and is therefore a GUI operation. You will have to invoke it from the main thread. Eg. Message() and CoreMessage() are usually called from the main thread, so it is safe to open the bitmap from one of these methods.

On 10/04/2013 at 06:50, xxxxxxxx wrote:

Hi Niklas, yes you said that before, I remember. But to call the PV from the main thread, I only see 2 possibilities: GeDialog.SendMessage and Message. But here the API (only the one for C++ -.-) states that I can only use these two: MSG_COMMANDINFORMATION and MSG_BODYPAINTEXCHANGE. Is there any other message I can catch? Where do I find information on how to send user messages, etc.? So many questions, and the documentation leaves them all unanswered...

On 10/04/2013 at 07:47, xxxxxxxx wrote:

No, GeDialog.SendMessage() is not what you need. Many roads lead to Rome, and so there are many ways this can be achieved. I think the best option would be to implement a MessageData plugin and override its ~.CoreMessage() method. Then you can send EventAdd() from your RenderThread and check for EVMSG_CHANGE in CoreMessage(). When this message is sent, you simply check the thread for the bitmap and open it in the Picture Viewer. Make sure you have a reference to the thread. You can either use global variables (not good design) or pass the thread to the MessageData plugin.

On 10/04/2013 at 08:14, xxxxxxxx wrote:

Thanks, I'll try that tomorrow! Looks like an interesting approach.

On 10/04/2013 at 12:10, xxxxxxxx wrote:

Nik, you're misunderstanding me. I can run the thread just fine. But I could not find a way to test for its finished condition. Then it just hit me today like a ton of bricks that the reason I probably can't do this is because I'm using a CommandData() plugin.
Which means after the plugin is executed from the menu and the thread starts, the plugin then probably closes. So it makes sense to me now that there would be no way to check the thread's finished status using a CommandData() plugin.

Do you happen to have an old version of your plugin where it renders to the image viewer with a thread, but does not stop? Or have you deleted it? I would like to see what type of plugin you used (GeDialog?) and how you called the c4d.bitmaps.ShowBitmap() method. If not, don't sweat it.
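The handoff Niklas describes, where the worker thread only computes and signals while the main loop performs the GUI action, can be sketched with stdlib threading alone. This is plain Python, not the c4d API; the queue stands in for the EventAdd()/CoreMessage() notification, and all names are invented for illustration.

```python
import threading
import queue

def render_worker(done_queue):
    # Does the slow work off the main thread, then signals completion.
    result = sum(range(1000))  # stand-in for the long render
    done_queue.put(result)     # analogous to EventAdd(): notify the main loop

def main_loop():
    # The main loop is the only place "GUI" work happens.
    done_queue = queue.Queue()
    worker = threading.Thread(target=render_worker, args=(done_queue,))
    worker.start()
    # ... the main loop keeps servicing the UI, then picks up the result ...
    result = done_queue.get()  # analogous to handling EVMSG_CHANGE
    worker.join()
    return result              # here a real plugin would call ShowBitmap()
```

The key property is the same one the thread converges on: the worker never touches the UI, it only hands data back, and the main loop decides when to display it.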
https://plugincafe.maxon.net/topic/7090/8038_exception-with-thread-rendering
[SOLVED] Help with erasing a Rect()

I'm coming close to the end of making my Agar.io game, which feels great! Here's the code:

```python
from scene import *
from random import *

class Particle(object):
    def __init__(self, wh):
        self.w = wh.w
        self.h = wh.h
        self.x = randint(0, self.w)
        self.y = randint(0, self.h)
        self.vx = randint(-10, 20)
        self.vy = randint(-10, 20)
        self.colour = Color(random(), random(), random())
        self.cells = Rect(self.x, self.y, 5, 5)
        global cells
        cells = self.cells

    def update(self):
        self.x += self.vx
        self.y += self.vy
        self.vx *= 0
        self.vy *= 0

    def draw(self):
        fill(self.colour.r, self.colour.g, self.colour.b)
        rect(*self.cells)

class Intro(Scene):
    def setup(self):
        self.psize = 16
        self.psize = 10
        global plocx
        global plocy
        global newplocx
        global newplocy
        plocx = 240
        plocy = 160
        newplocx = 0
        newplocy = 0
        self.player = Rect(plocx, plocy, 20, 20)
        self.colour = Color(random(), random(), random())
        self.particles = []
        for p in xrange(100):
            self.particles.append(Particle(self.size))

    def touch_began(self, touch):
        global x1
        global y1
        x1 = touch.location.x
        y1 = touch.location.y

    def touch_moved(self, touch):
        global plocx
        global plocy
        global newplocx
        global newplocy
        x = touch.location.x
        y = touch.location.y
        if x > x1:
            addx = x - x1
            newplocx = plocx + addx
        if x < x1:
            subx = x - x1
            newplocx = plocx + subx
        if y > y1:
            addy = y - y1
            newplocy = plocy + addy
        if y < y1:
            suby = y - y1
            newplocy = plocy + suby
        xmin = 215
        xmax = 265
        ymin = 140
        ymax = 190
        while xmax > plocx and newplocx > plocx:
            plocx = plocx + 1
            self.player = Rect(plocx, plocy, 16, 16)
        while xmin < plocx and newplocx < plocx:
            plocx = plocx - 1
            self.player = Rect(plocx, plocy, 16, 16)
        while ymax > plocy and newplocy > plocy:
            plocy = plocy + 1
            self.player = Rect(plocx, plocy, 16, 16)
        while ymin < plocy and newplocy < plocy:
            plocy = plocy - 1
            self.player = Rect(plocx, plocy, 16, 16)

    def draw(self):
        background(0, 0.05, 0.2)
        self.player = Rect(plocx, plocy, self.psize, self.psize)
        for p in self.particles:
            p.update()
            p.draw()
            cells = p.cells
            if not self.player.intersects(cells):
                ellipse(*self.player)
            if self.player.intersects(cells):
                self.newpsize = self.psize + 1
                self.psize = self.newpsize
                ellipse(*self.player)

run(Intro(), LANDSCAPE)
```

When the player eats a cell, I want the cell to be erased. I don't want it changing location because it might ruin what I have in mind later. Could someone please help with this? It's a Rect() so I don't know how it will work. Thanks in advance!

Look at Python lists, namely the part on `list.remove`. Also, as @JonB has stated, remove global variables where they are not required. Something to consider as well:

```python
if not self.player.intersects(cells):
    ellipse(*self.player)
if self.player.intersects(cells):
    self.newpsize = self.psize + 10
    self.psize = self.newpsize
    ellipse(*self.player)
```

can become

```python
if self.player.intersects(cells):
    self.newpsize = self.psize + 10
    self.psize = self.newpsize
ellipse(*self.player)
```

OK, I removed the useless global variables... However, it's not a list - it's a Rect(). Maybe you could help me with making it a list instead, if it can't be erased. @Webmaster4o @Phuket2 @Cethric @JonB

`self.particles` is a list. If an item in it intersects, then remove it. You are already checking if items intersect; tweaking the code a bit will allow you to remove it. In the iteration of self.particles (`for p in self.particles`) there is a check for intersection, `if self.player.intersects(cells):`, where `cells = p.cells`. If this check is `True`, then remove `p` from the list `self.particles`.

So from my code, I just have to find a way to remove p.cells, or do I have to check something else first? @Cethric

This would be a whole lot easier if there was a repo where pull requests could be submitted to fix the code. Everyone needs a common codebase to be fixing rather than English prose corrections to Python code.

`self.particles` is a list and the variable `p` is an item from the list. The function `list.remove` removes an item from the list. Hence, in your code, if a cell intersects then remove it from the list, ie:

```python
if self.player.intersects(cells):
    ...
    self.particles.remove(p)
```

I apologise in advance if I come across as being annoyed; it is not how I intend to come across.

No, it's fine. Also, thanks a bunch!

A different suggestion: try using layers. The advantage of layers is that most of the "hard work" is done for you, such as adding and removing objects, positioning objects relative to their parent, rendering them properly, etc. You also don't need to keep track of all objects manually in a list somewhere; instead you add them as sublayers of another layer, and they will be drawn automatically. Consider making a backup copy of your script before changing it to use layers; you might run into problems and want to look at your original code.
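One pitfall worth noting with the `remove(p)` advice earlier in the thread: calling `remove` on a list you are currently looping over makes Python skip the element after each removal. A minimal sketch of the usual fix, in plain Python with hypothetical stand-ins rather than Pythonista's scene objects, is to iterate over a copy:

```python
class Cell(object):
    """Hypothetical stand-in for a particle with an intersection test."""
    def __init__(self, eaten):
        self.eaten = eaten

    def intersects(self, player):
        return self.eaten

particles = [Cell(False), Cell(True), Cell(True), Cell(False)]
player = object()

# Iterate over a shallow copy so removal doesn't skip neighbours.
for p in list(particles):
    if p.intersects(player):
        particles.remove(p)

# Only the non-intersecting cells survive.
survivors = [p.eaten for p in particles]
```

A list comprehension (`particles = [p for p in particles if not p.intersects(player)]`) achieves the same thing in one line and is usually preferred.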
https://forum.omz-software.com/topic/2264/solved-help-with-erasing-a-rect
Back to: Java Tutorials For Beginners and Professionals

Thread Synchronization in Java with Examples

In this article, I am going to discuss Thread Synchronization in Java with examples. Please read our previous article where we discussed Daemon Thread in Java with Examples. At the end of this article, you will understand all about Java Thread Synchronization with examples.

Thread Synchronization in Java

The process of allowing multiple threads to modify an object in sequence is called synchronization. We can allow multiple threads to modify an object sequentially only by executing that object's mutator-method logic in sequence from multiple threads. This is possible by using an object locking concept. Thread synchronization is the process of allowing only one thread to use an object when multiple threads are trying to use that object at the same time. The general form of a synchronized block is:

```java
synchronized (objectidentifier) {
    // Access shared variables and other shared resources;
}
```

Why use Synchronization?

Synchronization is mainly used because:

- If you start at least two threads inside a program, there might be a chance that multiple threads attempt to access the same resource.
- Concurrent access can even produce an unexpected outcome because of concurrency issues.

Types of Synchronization

There are basically two types of synchronization available. They are:

- Process Synchronization: sharing system resources between processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data.
- Thread Synchronization: every access to data shared between threads is protected, so that when any thread starts operating on the shared data, no other thread is allowed access until the first thread is done.

Locks in Java

Synchronization is built around an internal entity known as the lock or monitor lock.
(The API specification often refers to this entity simply as a "monitor.") Locks play a role in both aspects of synchronization: enforcing exclusive access to an object's state and establishing happens-before relationships that are essential to visibility.

Every object has a lock associated with it. By convention, a thread that needs exclusive and consistent access to an object's fields has to acquire the object's lock before accessing them, and then release the lock when it's done with them. A thread is said to own the lock between the time it has acquired the lock and released the lock. As long as a thread owns a lock, no other thread can acquire the same lock; the other thread will block when it attempts to acquire the lock.

Reentrant Synchronization.

Understanding the problem without synchronization

In this example, we are not using synchronization and are creating multiple threads that access the shared method and produce random output. (The listing below restores the truncated example based on the description that follows: three threads share one object s and call the unsynchronized bookticket method.)

```java
public class Synchronization implements Runnable {
    int tickets = 3;
    static int i = 1, j = 2, k = 3;

    public void bookticket(String name) {
        if (tickets > 0) {
            System.out.println("Ticket " + tickets + " booked by " + name);
            tickets--;
        }
    }

    public void run() {
        bookticket(Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        Synchronization s = new Synchronization();
        Thread t1 = new Thread(s, "t1");
        Thread t2 = new Thread(s, "t2");
        Thread t3 = new Thread(s, "t3");
        t1.start();
        t2.start();
        t3.start();
    }
}
```

In the above program, object s of class Synchronization is shared by all three running threads (t1, t2, and t3) to call the shared method (void bookticket). Hence the result is non-synchronized; such a situation is called a race condition.

Synchronized keyword

The Java synchronized keyword marks a block or a method as a critical section. A critical section is where only one thread is executing at a time, and the thread holds the lock for the synchronized section. The synchronized keyword helps in writing concurrent parts of any application. It also protects shared resources within the block.

To overcome the above race condition, we must synchronize access to the shared method, making it available to only one thread at a time. This is done by using the keyword synchronized with the method, i.e.
synchronized void display(String msg)

Thread Synchronization in Java

There are two types of thread synchronization:

- Mutual Exclusive
- Cooperation (Inter-Thread Communication) (I will discuss this in the next article)

Mutual Exclusive: mutual exclusion helps keep threads from interfering with one another while sharing data. Mutual exclusion can be achieved in three ways in Java:

- Synchronized Method
- Synchronized Block
- Static Synchronization

Thread Synchronization using Synchronized Method in Java

Method-level synchronization is used for making a method's code thread-safe, i.e. only one thread may be executing this method's code at a time.

Syntax:

```java
<access modifier> synchronized method(parameter) {
    // synchronized code
}
```

In the case of a synchronized method, the lock object is:

- the class object, if the given method is static;
- the this object, if the method is non-static ('this' is the reference to the current object in which the synchronized method is invoked).

Example: Implementation of Synchronized Method in Java (this listing restores the truncated example: it is the earlier race-condition program with bookticket declared synchronized)

```java
public class Synchronization implements Runnable {
    int tickets = 3;
    static int i = 1, j = 2, k = 3;

    public synchronized void bookticket(String name) {
        if (tickets > 0) {
            System.out.println("Ticket " + tickets + " booked by " + name);
            tickets--;
        }
    }

    public void run() {
        bookticket(Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        Synchronization s = new Synchronization();
        Thread t1 = new Thread(s, "t1");
        Thread t2 = new Thread(s, "t2");
        Thread t3 = new Thread(s, "t3");
        t1.start();
        t2.start();
        t3.start();
    }
}
```

Thread Synchronization using Synchronized Block in Java

Block-level synchronization is used for making some portion of a method's code thread-safe. If we want to synchronize access to an object of a class, or want only part of a method to be synchronized, then we can use a synchronized block. It is capable of making any part of the object and method synchronized.
Syntax:

synchronized (objectReferenceExpression) {
    // code block
}

Example: Implementation of Synchronized Block in Java

class A implements Runnable {
    int token = 1;

    public void run() {
        synchronized (this) {
            Thread t = Thread.currentThread();
            String name = t.getName();
            System.out.println(token + ".....alloted to " + name);
            token++;
        }
    }
}

class SynchroBlock {
    public static void main(String[] args) {
        A a1 = new A();
        Thread t1 = new Thread(a1);
        Thread t2 = new Thread(a1);
        Thread t3 = new Thread(a1);
        t1.setName("t1");
        t2.setName("t2");
        t3.setName("t3");
        t1.start();
        t2.start();
        t3.start();
    }
}

Thread Synchronization using Static Synchronized in Java:

In simple words, a static synchronized method locks the class instead of the object; it locks the class because the keyword static means "class instead of instance". The keyword synchronized means that only one thread can access the method at a time, so static synchronized means that only one thread at a time can execute the class's static synchronized methods.

Example: Implementation of Static Synchronized in Java

class Table {
    synchronized static void printTable(int n) {
        for (int i = 1; i <= 10; i++) {
            System.out.println(n * i);
            try {
                Thread.sleep(400);
            } catch (Exception e) {
            }
        }
    }
}

class MyThread10 extends Thread {
    public void run() {
        Table.printTable(1);
    }
}

class MyThread21 extends Thread {
    public void run() {
        Table.printTable(10);
    }
}

class MyThread31 extends Thread {
    public void run() {
        Table.printTable(100);
    }
}

class MyThread41 extends Thread {
    public void run() {
        Table.printTable(1000);
    }
}

public class StaticSynchronization {
    public static void main(String[] args) {
        MyThread10 t1 = new MyThread10();
        MyThread21 t2 = new MyThread21();
        MyThread31 t3 = new MyThread31();
        MyThread41 t4 = new MyThread41();
        t1.start();
        t2.start();
        t3.start();
        t4.start();
    }
}

What is the difference between thread joining and thread synchronization?
The thread joining mechanism (join()) is applied when threads depend on one another's completion: one thread waits for another to finish. Synchronization, by contrast, is for locking shared data dynamically while threads run concurrently.

What happens when we declare methods as synchronized?

When we call a synchronized method, the current object of that method is locked by the current thread, so other threads cannot use the locked object to execute either the same synchronized method or other synchronized methods. It is still possible to call non-synchronized methods on the locked object, because executing a non-synchronized method does not require the object to be locked by the thread. The locked object is called a monitor. The current object is unlocked only after the synchronized method's execution completes, either normally or abnormally.

What is the difference between synchronized blocks and synchronized methods in Java?

If a method is declared as synchronized, that method's complete logic is executed sequentially by multiple threads using the same object. If we declare only a block as synchronized, only the statements written inside that block are executed sequentially, not the complete method logic. Using a synchronized method we can only lock the current object of the method. Using a synchronized block we can lock either the current object or an argument object of the method; we must pass the object's reference variable to the synchronized block as shown below.

Locking the current object:

class Example {
    void m1() {
        synchronized (this) {
        }
    }
}

Locking an argument object:

class Example {
    void m1(Sample s) {
        synchronized (s) {
        }
    }
}

Note: We can write any number of synchronized blocks in a method.

When should we write multiple synchronized blocks instead of declaring the complete method as synchronized?
We write multiple synchronized blocks in the following two cases:

- For executing parts of the method logic sequentially while modifying object state.
- For executing one part of the method logic by locking the current object and other parts of the method by locking an argument object.

What is the difference between declaring a non-static method and a static method as synchronized?

If we declare a non-static method as synchronized, its current object is locked, so within this method the non-static variables of that object are modified sequentially by multiple threads. If we declare a static method as synchronized, the class's java.lang.Class object is locked, so within this method the static variables are modified sequentially by multiple threads.

In the next article, I am going to discuss Inter-Thread Communication in Java with Examples. Here, in this article, I try to explain Thread Synchronization in Java with Examples. I hope you enjoy this Thread Synchronization in Java with Examples article.
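To tie the ideas above together, here is a small, self-contained program contrasting an unguarded counter with one protected by a synchronized block. The class name SyncDemo, the four worker threads, and the 100,000 iterations are illustrative choices for this sketch, not taken from the article.

```java
import java.util.ArrayList;
import java.util.List;

public class SyncDemo {
    static int unsafeCount = 0;
    static int safeCount = 0;
    static final Object lock = new Object();

    // Runs nThreads workers, each incrementing both counters iters times,
    // and returns {unsafeCount, safeCount} after all workers have finished.
    static int[] race(int nThreads, int iters) throws InterruptedException {
        unsafeCount = 0;
        safeCount = 0;
        List<Thread> workers = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            Thread th = new Thread(() -> {
                for (int i = 0; i < iters; i++) {
                    unsafeCount++;            // unguarded read-modify-write: a race condition
                    synchronized (lock) {     // critical section: one thread at a time
                        safeCount++;
                    }
                }
            });
            workers.add(th);
            th.start();
        }
        for (Thread th : workers) {
            th.join();                        // wait for every worker to finish
        }
        return new int[] { unsafeCount, safeCount };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counts = race(4, 100_000);
        System.out.println("unsafeCount = " + counts[0] + " (may be below 400000)");
        System.out.println("safeCount   = " + counts[1]);
    }
}
```

Running it a few times typically shows unsafeCount falling short of 400,000 because of lost updates, while safeCount is always exact; synchronizing only the increment, rather than a whole method, mirrors the block-versus-method trade-off discussed above.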
https://dotnettutorials.net/lesson/thread-synchronization-in-java/
Supreme Court Judgments

02/05/1962

MUDHOLKAR, J.R.
AIYYAR, T.L. VENKATARAMA
SINHA, BHUVNESHWAR P. (CJ)
SUBBARAO, K.
AYYANGAR, N. RAJAGOPALA

CITATION: 1963 AIR 151; 1963 SCR (3) 774

CITATOR INFO: F 1963 SC1890 (5) RF 1965 SC 646 (9) RF 1966 SC1788 (19A,21) D 1967 SC1074 (9) RF 1967 SC1081 (3) F 1968 SC 432 (15) F 1970 SC 984 (7) RF 1971 SC 306 (10) R 1971 SC1033 (8,9) F 1973 SC 974 (10) RF 1973 SC1461 (1071) E 1975 SC1182 (3) F 1977 SC 183 (6) R 1978 SC 515 (3,4,6) F 1979 SC1713 (5) R 1980 SC 214 (20) RF 1980 SC1678 (4) F 1984 SC 120 (4) F 1985 SC1622 (13) RF 1988 SC 501 (5) F 1988 SC 686 (18) F 1988 SC1353 (18) D 1989 SC 682 (4,7) R 1989 SC2105 (6) RF 1992 SC1456 (30)

ACT: Land Acquisition-Public purpose-Government declaration as to public purpose-If justiciable-"Conclusive evidence"-"Conclusive proof"-Meaning of-Compensation-Government's contribution to cost-If should be substantial-Indian Evidence Act, 1872 (1 of 1872), ss. 3, 4-Land Acquisition Act, 1894 (1 of 1894), ss. 4, 5A, 6-Constitution of India, Art. 14.

HEADNOTE: In February, 1961, the petitioners purchased over six acres of land situate in the State of Punjab for a sum of Rs. 4,50,000 and claim to have done so for the purpose of establishing a paper mill. The sixth respondent, a private limited company, which had a licence from the Government of India for starting a factory for the manufacture of various ranges of refrigeration compressors and ancillary equipment, requested the State of Punjab for the allotment of an appropriate site for the location of the factory. In the official Gazette of August 25, 1961, was published a notification of the Governor of Punjab dated August 18, 1961, under s. 4 of the Land Acquisition Act, 1894, to the effect that the land belonging to the petitioners was likely to be needed by the Government at public expense for a public purpose, namely, for setting up a factory for manufacturing various ranges of refrigeration compressors and ancillary equipment. The Government directed that action under s. 17 of the Act shall be taken because there was urgency and that the provisions of s. 5A shall not apply to the acquisition. In the same Gazette another notification under s. 6 of the Act, dated August 19, 1961, was published to the effect that the Governor of Punjab was satisfied that the land was required by the Government at public expense for the said purpose. The notification provided for the immediate taking of possession of the land under the provisions of s. 17 (2) (c) of the Act. On September 29, 1961, the Government of Punjab sanctioned an expense of Rs. 100 for the purpose of acquisition of the land. The petitioners filed an application under Art. 32 of the Constitution of India challenging the legality of the action taken by the Government on the grounds, inter alia, (1) that the acquisition was not for a public purpose either under s. 4 or s. 6 of the Land Acquisition Act; (2) that the land was in reality being acquired for the benefit of the sixth respondent and that the action of the Government amounted to discrimination against the petitioners and violated Art. 14 of the Constitution of India; (3) that the alleged contribution of Rs. 100 made by the Government was a colourable exercise of power inasmuch as the amount was so unsubstantial a sum compared to the value of the property that it could not raise an inference of Government participation in the proposed activity; and (4) that the notifications under ss. 4 and 6 could not have been made simultaneously and were, therefore, without efficacy. Held (per Sinha, C.
J., Rajagopala Ayyangar, Mudholkar and Venkatarama Aiyar, JJ.), (1) that the declaration made by the Government in the notification under s. 6 (1) of the Land Acquisition Act, 1894, that the land was required for a public purpose, was made conclusive by sub-s. 3 of s. 6 and that it was not open to a court to go behind it and try to satisfy itself whether in fact the acquisition was for a public purpose. Whether in a particular case the purpose for which land was needed was a public purpose or not was for the Government to be satisfied about, and the declaration of the Government would be final subject to one exception, namely, that where there was a colourable exercise of the power the declaration would be open to challenge at the instance of the aggrieved party. Hamabai Framjee Petit v. Secretary of State for India, (1914) L. R. 42 I. A. 44 and R. L. Arora v. The State of Uttar Pradesh, (1962) Supp. 2 S. C. R. 149, distinguished. Vedlapatla Suryanarayana v. The Province of Madras, I. L. R. (1946) Mad. 153, approved. (2) that there was no difference between the effect of the expression "conclusive evidence" in s. 6 (3) of the Act and that of "conclusive proof", the aim of both being to give finality to the establishment of the existence of a fact from the proof of another. (3) that the conclusiveness in s. 6 (3) must necessarily attach not merely to a "need" but also to the question whether the purpose was a public purpose. There could be no "need" in the abstract. (4) that the provisions of the Act which provided that the declaration made by the State that a particular land was needed for a public purpose shall be conclusive evidence of the fact that it was needed, did not infringe the Constitution. State of Bihar v. Maharajadhiraja Sir Kameshwar Singh of Darbhanga & Ors., [1952] S. C. R. 889, Babu Barkya Thakur v. State of Bombay & Ors., [1961] 1 S. C. R. 128, and State of Bombay v. Bhanji Munji & Anr., [1955] 1 S. C. R. 777, relied on.
(5) that it was for the State to say which particular industry might be regarded as beneficial to the public and to decide that its establishment would serve a public purpose; therefore, no question of discrimination would arise merely from the fact that the Government had declared that the establishment of a particular industry was a public purpose. Accordingly, the notifications in question did not contravene Art. 14 of the Constitution. (6) that as s. 5A was out of the way, the publication in the same issue of the Gazette of both the notifications, that is, the one dated August 18, 1961, and that dated August 19, 1961, was not irregular. Held, further (Subba Rao, J., dissenting), that the notification dated August 19, 1961, under s. 6 of the Land Acquisition Act, 1894, was not invalid on the ground that the amount contributed by the State towards the cost of the acquisition was only nominal compared to the value of the land. The expression "partly out of public revenues" in the proviso to s. 6 (1) of the Act did not necessarily mean that the State's contribution must be substantial; but whether a token contribution by the State towards the cost of acquisition would be sufficient compliance with the law would depend upon the facts of each case, and it was open to the court in every case which came before it to ascertain whether the action of the State was a colourable exercise of power. Sanja Naicken v. Secretary of State, (1926) I. L. R. 50 Mad. 308 and Vadlapatla Suryanarayana v. The Province of Madras, I. L. R. (1946) Mad. 153, approved. Ponnaia v. Secretary of State, A. I. R. 1926 Mad. 1099, disapproved. Chatterton v. Cave, (1878) 3 App. Cas. 483 and Maharajah Luchmeswar Singh v. Chairman of the Durbhanga Municipality, (1890) L. R. 17 I. A. 90, held inapplicable. Per Subba Rao, J.: In interpreting the proviso to s. 6 (1) of the Act a reasonable meaning should be given to the expression "wholly or partly."
The payment of a part of the compensation must have some rational relation to the compensation payable in respect of the acquisition for a public purpose. So construed, "part" can only mean a substantial part of the estimated compensation. What is a substantial part of the compensation depends upon the facts of each case. In the instant case, it was impossible to say that a sum of Rs. 100 out of an estimated compensation which might go even beyond Rs. 4,00,000 was in any sense of the term a substantial part of the said compensation. The Government had clearly broken the condition and, therefore, it had no jurisdiction to issue the declaration under s. 6 of the Act.

ORIGINAL JURISDICTION: Petitions Nos. 246 to 248 of 1961. Petitions under Art. 32 of the Constitution of India for the enforcement of Fundamental Rights.

G. S. Pathak, Rameshwar Nath, S. C. Andley and P. L. Vohra, for the petitioners (in petition No. 246 of 1961).

Rameshwar Nath, S. N. Andley and P. L. Vohra, for the petitioners (in petitions Nos. 247 and 248 of 1961).

S. M. Sikri, Advocate-General for the State of Punjab, N. S. Bindra and P. D. Menon, for respondent No. 1 (in all the petitions).

S. P. Varma, for respondent No. 6 (in all the petitions).

H. N. Sanyal, Additional Solicitor-General of India, R. H. Dhebar and P. D. Menon, for the State of Gujarat (Intervener) (in all the petitions).

1962. May 2. The following judgments were delivered. The judgment of Sinha, C. J., Rajagopala Ayyangar, Mudholkar and Venkatarama Aiyar, JJ., was delivered by Mudholkar, J.

MUDHOLKAR, J.-The petitioners, who acquired over six acres of land by purchase for Rs. 4,50,000 in February, 1961, under five sale deeds and one lease deed, claim to have done so for the purpose of establishing a paper mill in collaboration with Messrs. R. S. Madhoram and Sons, who had been granted a licence for the establishment of a paper plant in Ghaziabad in Uttar Pradesh.
The aforesaid land is situate in the village Meola Maharajpur, Tehsil Ballabhgarh, District Gurgaon, abuts on the Mathura Road, and is only about 10 or 12 miles from New Delhi. Respondent No. 6, Air Conditioning Corporation (P) Ltd., is a private limited concern and holds a licence from the Government of India for starting a factory for the manufacture of various ranges of refrigeration compressors and ancillary equipment. We may mention here that initially this project was allotted to the State of West Bengal but at the request of the State of Punjab its location was shifted to the State of Punjab.
Respondent No. 6 requested the State of Punjab for the allotment of an appropriate site for the location of the factory. The petitioners contend that respondent No. 6, being interested in acquiring land in the village Meola Maharajpur, approached the State of Punjab in or about the month of March, 1961, for the purpose of acquiring land for their factory under the Land Acquisition Act, 1894 (hereinafter referred to as the Act). One of the petitioners, having learnt of this, made an application on March 23, 1961, to the Deputy Commissioner, Gurgaon, requesting him that none of the lands purchased by the petitioners should be acquired for the benefit of respondent No. 6. Owners of adjacent lands, Mr. Om Prakash, Mr. Ram Raghbir, Mr. Atmaram Chaddha and Mr. Hari Kishen, who are petitioners in W. P. 247 and 248 of 1961 which were heard along with this petition, made similar requests. The petitioners allege that they were assured by the Deputy Commissioner that their lands would not be acquired for the benefit of respondent No. 6. Thereafter respondent No. 6 purchased by private treaty a plot of land measuring approximately 70,000 sq. yards contiguous to the land owned by the petitioners on or about April 21, 1961.
The petitioners' grievance is that notwithstanding the assurances given to them by the Deputy Commissioner, Gurgaon, the Governor of Punjab, by notification dated August 25, 1961, under s. 4 of the Act declared that the lands of the petitioners in this petition as well as those of the petitioners in the other two writ petitions were likely to be needed by Government at public expense for a public purpose, namely, for setting up a factory for manufacturing various ranges of refrigeration compressors and ancillary equipment. It accordingly notified that the land in the locality described in the notification was required for the aforesaid purpose. Similarly it authorised the Sub-Divisional Officer and Land Acquisition Officer, Palwal, to enter upon and survey the land in the locality and to do all other acts required or permitted by s. 4 of the Act. It further directed that action under s. 17 of the Act shall be taken because there was urgency and also directed that the provisions of s. 5A shall not apply to the acquisition. On August 19, the Governor of Punjab made a notification under s. 6 of the Act to the effect that he was satisfied that the land specified in the notification was required by Government at public expense for a public purpose, namely, for setting up a factory for the manufacture of refrigeration compressors and other ancillary equipment, and declared that the aforesaid land was required for the aforesaid purposes. This declaration was made "to all whom it may concern", and the Sub-Divisional Officer, Palwal, was directed to take all steps for the acquisition of this land. Finally the notification provided for the immediate taking of possession of the land under the provisions of s. 17 (2) (c) of the Act. Both these notifications were published in the Punjab Government Gazette of August 25, 1961.
The petitioners contend that these notifications and the land acquisition proceedings permitted to be taken under them violate their fundamental rights under Art. 19 (1) (f) and (g) to possess the said land and carry on their occupation, trade or business and that, therefore, they must be quashed. It is their contention that they have purchased this land bona fide for industrial purposes, as land in the vicinity of this land is being acquired by industrialists for establishing various industries. The purpose is said to be the establishment of a paper manufacturing plant. According to them, they have entered into an arrangement with Messrs. R. S. Madho Ram & Sons who hold industrial licence No. L/2-1/2 (1)/N-60/62. The proposed industry, according to them, would employ about 200 people. The industry they wish to start is a new one so far as they are concerned, whereas according to them, the respondent No. 6 is already engaged in the refrigeration industry and, as far as they know, it has established a factory for manufacturing refrigeration equipment at Hyderabad in the State of Andhra Pradesh. It may be mentioned that some time after the notification was published, that is, on September 29, 1961, the Government of Punjab sanctioned the expense of Rs. 100 for the purpose of acquisition of this land. According to the petitioners this was an after-thought and, besides, a token contribution of this kind is not sufficient to show that the acquisition is being made partly at public expense. The petition was opposed not only by respondent No. 6 but also by the State of Punjab, which is respondent No. 1 to the petition. The respondent No. 1 denied that the petitioners had purchased the land for a bona fide industrial purpose and would in fact use it for such purpose. It also denied that any assurance was given to the petitioners that their lands would not be acquired. It admitted that the respondent No.
6 had made an application in December, 1960, for acquiring land for setting up its factory and that, therefore, the Punjab Government agreed to do the needful. According to respondent No. 1 the acquisition proceedings have been undertaken for a public purpose and at public expense as stated in the notification, and the State Government would make a part contribution towards the payment of compensation for the land out of public revenues. In the circumstances it is contended that the petitioners would not be entitled to any relief whatsoever. They would of course get compensation for the land as determined by the Land Acquisition Officer. The action of the State Government is said to be legal and in accordance with the provisions of the law because what was done was permissible under ss. 4 and 6 of the Act, that it was done bona fide, that part of the compensation would be paid out of the public revenues, that the declaration made by the Government is conclusive evidence under sub-s. (3) of s. 6 that the land is needed for a public purpose, that the notifications were made on different dates though they were published in the same issue of the Gazette and are perfectly valid, that the land is not being acquired for a company but for a public purpose, that, therefore, the provisions of Part VII of the Act are inapplicable, and that the lands are lying vacant and their owners will be paid compensation. No question of depriving them of their fundamental rights under Art. 19(1)(f) and (g) or of violation of their right under Art. 14 therefore arises. According to respondent No. 1 it would be open to the petitioners to make their claim for compensation to the Land Acquisition Officer for such loss as the acquisition would entail on them. It also stated that as the land purchased by the respondent No.
6 through private negotiation has no access to the main road and as the land is inadequate to meet the minimum essential requirements, the acquisition of the lands in question became necessary. On behalf of the respondent No. 6 it is stated that the need for a factory like the one in its contemplation is acutely felt in India inasmuch as the manufacture of compressors and the components of big and small air-conditioners, refrigerators, water coolers and cold storage cabinets is not being carried out anywhere in the country so far. The import of these goods naturally drains away a considerable amount of foreign exchange. It was, therefore, felt that by starting manufacture of these articles in our country not only will foreign exchange be saved, but some foreign exchange will eventually be earned by the export of manufactured goods. They further contend that the purpose for which the factory is being set up must be regarded as a public purpose because, inter alia, it is intended, by manufacturing the aforesaid goods, to cater to the needs of the public at large. It is in view of these circumstances that the Government of India, accepting the recommendation made in this regard by the licensing committee under the Industries Development and Regulation Act, 1951, issued a licence in its favour on April 8, 1951. It then pointed out that it has secured the collaboration in this project of a well-known American company named Borg-Warner International Corporation of Chicago, which is the biggest manufacturer of air conditioning plants and equipment in the world, and that the collaboration agreement has been approved by the Government of India in the Ministry of Commerce. Its grievance is that this agreement has not been implemented so far because it has not been able to get the land for constructing the building in which the necessary machinery and implements could be installed.
Finally it says that originally the licence was issued for setting up a factory in the State of West Bengal and that it was at the instance of the Government of Punjab that the Central Government permitted the location of the factory to be shifted from West Bengal to Punjab. According to it, once the factory gets going it is likely to employ at least 1,000 workers. It is not necessary to refer to the other affidavits and the rejoinder affidavits except to some portions of the additional affidavit filed by Mr. M. B. Bhagat, Under Secretary, on behalf of the respondent No. 1. We are referring only to those portions which were relied on during the arguments before us. In that affidavit it is denied that any licence had been granted to Messrs. R. S. Madho Ram & Sons for the establishment of a paper plant in the Punjab. According to respondent No. 1, Messrs. R. S. Madho Ram & Sons were granted a licence on August 17, 1960, for the establishment of an industrial undertaking in Ghaziabad (U.P.) for the manufacture of writing and printing paper and pulp. It further stated that even this licence has been cancelled by the Government of India by their letter dated January 31, 1962, since the said licensee did not take any effective steps to establish the same. It then stated that the Air Conditioning Corporation, which was incorporated as a private limited company, has since, with the permission of the Central Government, been converted into a public limited company with the name and style of "York India Ltd.", and that the company has been granted a licence to manufacture refrigeration equipment by the Industrial Licensing Committee. There is an agreement between York India Ltd. and Messrs. York Corporation, U.S.A., a subsidiary of Borg-Warner of the U.S.A., whereunder the latter have undertaken to give all technical assistance and technical training to the Indian personnel as also to contribute 50% of the initial investment in the undertaking. The respondent No.
6 expects to manufacture 70% of the equipment in the very first year and cent per cent by the end of 1966. It further stated that the foreign collaborators also have agreed to sell the products of the firm outside India at prices and on terms and conditions most favourable to the Indian firm, thereby enabling it to obtain access to the foreign market. The foreign collaborator would make available to the Indian personnel the technical know-how and other information necessary for the manufacture of refrigeration materials, and such assistance will itself be very valuable. It denied that the respondent No. 6 has established a factory similar to the one now intended to be established, in Hyderabad, as alleged by the petitioners. It is admitted that licences have been granted to two other concerns in India for the manufacture of similar equipment. Neither of those licensees has actually started production, at any rate so far, and, therefore, it is not correct to say that similar equipment is already being manufactured in India. Then it stated: "the products that are to be manufactured by the respondent till now were being imported into India from foreign countries and goods worth about Rs. 3,83,70,000 in 1960 and for the first ten months in 1961 Rs. 3,56,50,000 were imported by the various licensees holding import licences." It also stated that the respondent No. 6 was granted a licence to establish a factory in West Bengal but since no one had been granted a licence to establish a factory of this kind in the Punjab its licence was transferred to Punjab. The proposed factory would employ a large number of persons and thus help to solve to some extent the existing problem of unemployment in Punjab. Finally it stated that the establishment of the factory as such is in furtherance of the industrial development of the Punjab State and is, therefore, for a public purpose. On behalf of the petitioners Mr.
Pathak has raised the following five contentions: (1) The acquisition is not for a public purpose either within s. 4 or s. 6 of the Land Acquisition Act or for a purpose useful to the public as contemplated in s. 41, and the action of the Government amounted to acquiring property from one person and giving it to another. (2) The alleged contribution of Rs. 100 made by the Government is a colourable exercise of power; no such intention was mentioned prior to the notification, and the amount of Rs. 100 is so unsubstantial a sum compared to the value of the property that it cannot raise an inference of Government participation in the proposed activity. (3) The property is in fact being acquired for a company and, therefore, the provisions of Part VII of the Act should have been complied with. Non-compliance with those provisions vitiates the acquisition. (4) The petitioners' proposed paper mill would be as good an industrial concern as the one intended to be established by respondent No. 6, and the Government, in preferring the latter to the former, has violated the guarantee of equal protection of law provided by Art. 14 of the Constitution. (5) The notifications under ss. 4 and 6 could not have been made simultaneously and are, therefore, without efficacy. We may deal with the third point raised by Mr. Pathak first, that is, regarding non-compliance with the provisions of Part VII. It is common ground that those provisions were not complied with. The reason for that is that, according to the respondents, the acquisition is not for a company but for a public purpose, partly at public expense. Indeed, the respondents at no stage have relied on the provisions of Part VII of the Act, and therefore the main question to be considered is whether the acquisition is for a public purpose partly at public expense or not. If it is not so, then, of course, the petitions must succeed. Therefore, it is the first two contentions raised by Mr.
Pathak which primarily need our consideration. According to learned counsel for the petitioners, the statements made in the affidavits on behalf of the State as well as on behalf of the respondent No. 6 make it perfectly clear that the land is being acquired for the respondent No. 6. Reliance is placed particularly upon that portion of the affidavit of the State where it is stated that the land is acquired for enabling the respondent No. 6 to have access to the main road and for meeting their minimum requirements for establishing their factory. It is further stated that the compensation for all the land which is being acquired is to come out of the pockets not of the State Government but of the respondent No. 6 itself. No doubt, the Government has said that it has sanctioned the payment of Rs. 100 towards the payment of compensation, but that is only an insignificant fraction of the total amount of compensation that would be payable in respect of these lands, the petitioners themselves having paid Rs. 4,50,000 to the persons from whom they acquired these lands. On behalf of the respondents the learned Advocate-General for Punjab contended that the declaration of the Government in the notification that the land is required for a public purpose is made conclusive by sub-s. 3 of s. 6 of the Act and, therefore, it is not open to this Court to go behind it and try to satisfy itself whether in fact the acquisition is for a public purpose or not. Alternatively he contended that the land is being acquired for a public purpose because the object of the acquisition is to establish a new industry and do away with imports of refrigeration equipment and to enable technical education to be imparted to Indian personnel in a new field. He further said that the acquisition will not only save foreign exchange by lessening imports but will enable foreign exchange to be earned from the export of goods manufactured in the proposed factory.
The new industry is said to be of great economic importance inasmuch as it will enable the preservation of food which will otherwise be destroyed. Refrigeration equipment also contributes towards the maintenance of health because it enables storage of medicines such as antibiotics which are liable to be decomposed at normal temperatures prevailing in our country. The industry proposed to be started will open a new avenue of employment and diminish unemployment and generally advance the industrial development of the country. Finally he said that a part of the land is required for building houses and quarters for the workers of the factory and to give amenities to them. All these purposes are, therefore, said to be public purposes. Reliance was placed by him on Vol. 19 of Encyclopedia Britannica, pp. 49 to 57 for showing the manifold applications of refrigeration in various industries and activities. Reference was also made to Vol. 18 of Encyclopedia Britannica, p. 745 wherein facilities for providing refrigeration have been grouped under the heading Public utility'. Reference was also made to be next page where it is stated "Every public utility must be in possession of natural resources upon which that industry is based. Their sites must have strategic locations. Limitation in the choice of this agent of production tends to make the cost of acquiring or leasing these facilities greater than it would be if the industry had a wider range of choice. Furthermore, utilities must make allowances in advance for probable increase in the required capacity. For these reasons utilities are provided 780 with the governmental power of eminent domain which makes possible the `compulsory sale of private property." Relying upon the affidavit of Mr. 
Bhagat, to which we have referred earlier, the learned Advocate-General of Punjab said that the object of the Government in acquiring these lands is to enable a new industry to be established not only for saving foreign exchange and earning foreign exchange but also for securing the industrial advancement of the country, enabling the citizens to obtain technical education in a new field, relieving to some extent the pressure of unemployment and so on. For all these reasons he contends that the acquisition must be deemed to be for a public purpose even though the bulk of the compensation for the acquisition will come from the pockets of the respondent No. 6. In our opinion the question whether any of the aforesaid purposes falls within the expression 'public purpose' would arise for consideration only if the declaration of the Government is not conclusive or if the action of the Government is colourable. If, as contended by the learned Advocate-General, sub-s. 3 of s. 6 concludes the matter (and the validity of this provision is not challenged) and the action of the Government is not colourable, the other question would not arise for consideration. It is strenuously contended on behalf of the petitioners that sub-s. 3 of s. 6 does not debar this Court from considering whether a proposed acquisition is for a public purpose or not. It is said, in the first place, that this provision only makes the declaration "conclusive evidence" and not "conclusive proof", and it is then contended that the declaration is conclusive evidence only of a need and nothing more. A distinction is sought to be made between "conclusive proof" and "conclusive evidence" and it is contended that where a law declares that a fact shall be conclusive proof of another, the Court is precluded from considering other evidence once such fact is established. Therefore, where the law makes a fact conclusive proof of another the fact stands proved and the Court must proceed on that basis.
But, the argument proceeds, where the law does not go that far and makes a fact only "conclusive evidence" as to the existence of another fact, other evidence as to the existence of the other fact is not shut out. In support of the argument reliance is placed on s. 4 of the Indian Evidence Act which in its third paragraph defines 'conclusive proof' as follows: "When one fact is declared by this Act to be conclusive proof of another, the Court shall, on proof of the one fact, regard the other as proved, and shall not allow evidence to be given for the purpose of disproving it." This paragraph thus provides that further evidence is barred where, under the Indian Evidence Act, one fact is regarded as proof of another. But it says nothing about what other laws may provide. There are a number of laws which make certain facts conclusive evidence of other facts: (see Companies Act, 1956, s. 132; the Indian Succession Act, 1925, s. 381; Christian Marriages Act, 1872, s. 61; Madras Revenue Act, 1869, s. 38; Oaths Act, 1873, s. 11). The question is whether such a provision also bars other evidence after that which is conclusive evidence is produced. The object of adducing evidence is to prove a fact. The Indian Evidence Act deals with the question as to what kind of evidence is permissible to be adduced for that purpose and states in s. 3 when a fact is said to be proved. That section speaks of every statement which the court permits or requires to be made. When the law says that a particular kind of evidence would be conclusive as to the existence of a particular fact it implies that that fact can be proved either by that evidence or by some other evidence which the Court permits or requires to be advanced. Where such other evidence is adduced it would be open to the Court to consider whether, upon that evidence, the fact exists or not. Where, on the other hand, evidence which is made conclusive is adduced, the Court has no option but to hold that the fact exists.
If that were not so, it would be meaningless to call a particular piece of evidence conclusive evidence. Once the law says that certain evidence is conclusive it shuts out any other evidence which would detract from the conclusiveness of that evidence. In substance, therefore, there is no difference between conclusive evidence and conclusive proof. Statutes may use the expression 'conclusive proof' where the object is to make a fact non-justiciable. Learned counsel contends that it is open to the Court to examine whether the action of the executive, even in the absence of an allegation that it is mala fide, is related to the section or not and for this purpose to consider whether the acquisition is for a public purpose. In support of this contention he has relied upon the decision in State of Bihar v. Maharajadhiraja Sir Kameshwar Singh of Darbhanga (1). There, Mahajan, J. (as he then was), has expressed the view that the exercise of the power to acquire compulsorily is conditional on the existence of a public purpose and that, this being so, the condition is not an express provision of Art. 31(2) but exists aliunde in the content of the power itself. That, however, was not the view of the other learned Judges who constituted the Bench. Thus according to Mukherjea, J. (as he then was), the condition of the existence of a public purpose is implied in Art. 31(2) (see pp. 957, 958). Das, J. (as he then was), was also of the same view (see pp. 986, 988). Similarly Patanjali Sastri, C.J., has also taken the view that the existence of a public purpose is an express condition of cl. 2 of Art. 31. The Constitution permits acquisition by the State of private property only if it is required for a public purpose. (1) [1952] S.C.R. 889, 935. But can it, therefore, be said that the provisions of a statute must be so construed that the declaration by the Government as to the existence of a public purpose is necessarily justiciable?
We are not concerned here with a post-Constitution law but with a pre-Constitution law. The Act has been in operation since 1894. The validity of the law was challenged before this Court in Babu Barkya Thakur v. The State of Bombay (1) on the ground that it infringes the provisions of Arts. 31(2) and 19(1)(f) of the Constitution. But this Court held that the law, being a pre-Constitution law, is protected from the operation of Art. 31(2) by the provisions of Art. 31(5)(a). It also held, following the decision in The State of Bombay v. Bhanji Munji (2) and that in Lilavati Bai v. The State of Bombay (3), that the attack under Art. 19(1)(f) of the Constitution is futile. (1) [1961] 1 S.C.R. 128. (2) [1955] 1 S.C.R. 777. (3) [1957] S.C.R. The argument, however, is that the protection which the Act enjoys is only to this extent that even though any of its provisions be in conflict with Art. 31(2) the Act cannot be challenged on that ground; the protection does not, however, extend to other provisions of Part III of the Constitution, such as Art. 19(1)(f). As we understand the decision in Bhanji Munji's case (2), what this Court has held is that for the right under Art. 19(1)(f) to hold property to be available to a person, he must have the property with respect to which he can assert such right. If the right to the possession of the property is taken away by a law protected by Art. 31(5)(a), Art. 19(1)(f) is not attracted. That is the decision of this Court and it has been followed in two other cases. All the decisions are binding upon us. It is contended that none of the decisions has considered the argument advanced before us that a law may be protected from an attack under Art. 31(2) but will still be invalid under Art. 13(2) if the restriction placed by it on the right of a person to hold property is unreasonable. In other words, for the law before us to be regarded as valid it must also satisfy the requirements of Art.
19(5) and that only thereafter can the property of a person be taken away. It is sufficient to say that though this Court may not have pronounced on this aspect of the matter we are bound by the actual decisions, which categorically negative an attack based on Art. 19(1)(f). We, therefore, hold that since the Act provides that the declaration made by the State that a particular land is needed for a public purpose shall be conclusive evidence of the fact that it is so needed, the Constitution is not thereby infringed. For ascertaining the extent to which the determination by the State is conclusive it would be desirable to examine the relevant provisions of the Act. The preamble states that the law is for the acquisition of land needed for public purposes and for companies and for incidental matters connected therewith. Section 3(f) defines public purpose as follows: "the expression 'public purpose' includes the provision of village sites in districts in which the appropriate Government shall have declared by notification in the Official Gazette that it is customary for the Government to make such provision". This is an inclusive definition. Then there is s. 4 which enables the State to publish a preliminary notification whenever it appears to it that land in any locality is needed or is likely to be needed for a public purpose. The other aspects of the section have no bearing upon the point before us and we need not refer to them. Then there is s. 5A which gives to the person interested in the land which has been notified as being needed or likely to be needed for a public purpose or for a company the right to object to the acquisition of the land. Such objection has to be heard by the Collector and after making such further enquiry as he thinks necessary the record has to be submitted to the appropriate Government along with a report containing the Collector's recommendations and the objections. Sub-section (2) of s.
5A makes the decision of the Government on the objections final. This is followed by s. 6, sub-s. (1) of which provides that when the Government is satisfied that any particular land is needed for a public purpose, or for a company, a declaration should be made to that effect and such declaration should be published in the Official Gazette. Sub-section (2) specifies the matters, including the purpose for which the land is needed, which are to be set out in the declaration. Sub-section (3) makes the declaration conclusive evidence of the fact that the land is needed for a public purpose or for a company, as the case may be. Section 17 of the Act confers special powers on the Government which are exercisable in cases of emergency. Sub-section (4) thereof provides that in those cases which fall under sub-s. (1) or sub-s. (2) the appropriate Government may direct that the provisions of s. 5A of the Act shall not apply and also empowers the Government to make a declaration under s. 6 in respect of the land to be acquired at any time after the publication of the notification under sub-s. (1) of s. 4. These are the provisions which have a bearing on the point under consideration. It is clear from these provisions that the object of the law is to empower Government to acquire land only for a public purpose or for a company, and, where it is for a company, the acquisition is subject to the provisions of Part VII. As has been pointed out by this Court in R. L. Arora v. The State of Uttar Pradesh (1), the acquisition for a company is contemplated by Part VII. (1) [1962] Supp. 2 S.C.R. 149. Sub-section (1) of s. 6 enables the Government to make a declaration provided that it is satisfied that a particular land is needed for a public purpose or for a company. No doubt, it is open to the State Government in an emergency, by exercising its powers under sub-s. (4) of s. 17, to say that the provisions of s. 5A would not apply. But for construing the provisions of s.
6 it would be relevant to bear in mind that section. The scheme of the Act is that normally the provisions of s. 5A have to be complied with. Where, in pursuance of those provisions, objections are lodged, these objections will have to be decided by the Government upon the material placed before it by the Collector. The provisions of sub-s. (2) of s. 5A make the decision of the Government on the objections final while those of sub-s. (1) of s. 6 enable the Government to arrive at its satisfaction. Sub-section (3) of s. 6 goes further and says that such a declaration shall be conclusive evidence that the land is needed for a public purpose or for a company. It is, however, argued by learned counsel that the conclusiveness or finality attached to the declaration of the Government is only as regards the fact that the land is "needed" but not as regards the question that the purpose for which the land is needed is in fact a public purpose or that what is said to be a company is really a company. Sub-section (1) does not effect a dichotomy between "need" and "public purpose or a company". There is no justification for making such a dichotomy. By making it, not only will the language of the section be strained but the purpose of the law will be stultified. The expression must be regarded as one whole and the declaration held to be with respect to both the elements of the expression. The Government has to be satisfied about both the elements contained in the expression "needed for a public purpose or a company". Where it is so satisfied, it is entitled to make a declaration. Once such a declaration is made sub-s. (3) invests it with conclusiveness. That conclusiveness is not merely regarding the fact that the Government is satisfied but also with regard to the question that the land is needed for a public purpose or is needed for a company, as the case may be.
Then again, the conclusiveness must necessarily attach not merely to the need but also to the question whether the purpose is a public purpose or whether what is said to be a company is a company. There can be no "need" in the abstract. It must be a need for a 'public purpose' or for a company. As we have already stated, the law permits acquisition only when there is a public purpose or when the land is needed for a company for the purposes set out in s. 40 of the Act. Therefore, it would be unreasonable to say that the conclusiveness would attach only to a need and not to the fact that that need is for a public purpose or for a company. No land can be acquired under the Act unless the need is for one or the other purpose and, therefore, it will be futile to give conclusiveness merely to the question of need dissociated from the question of public purpose or the purpose of a company. Upon the plain language of the relevant provisions it is not possible to accept the contention put forward by learned counsel. Learned counsel put the matter in a slightly different way and said that s. 6(3) presupposes that the jurisdictional fact exists, namely, that there is a public purpose or the purpose of a company behind the acquisition and, therefore, the question whether it exists or not is justiciable. The Act has empowered the Government to determine the question of the need of land for a public purpose or for a company and the jurisdiction conferred upon it to do so is not made conditional upon the existence of a collateral or extraneous fact. It is the existence of the need for a public purpose which gives jurisdiction to the Government to make a declaration under s. 6(1) and makes it the sole judge whether there is in fact a need and whether the purpose for which there is that need is a public purpose. The provisions of sub-s. (3) preclude a court from ascertaining whether either of these ingredients of the declaration exists.
It is, however, said that that does not mean that in so far as the meaning to be given to the expression public purpose is concerned the courts have no power whatsoever. In this connection the decision of the Privy Council in Hamabai Framjee Petit v. Secretary of State for India (1) was referred to. In that case certain land at Malabar Hill in Bombay was being acquired by the Government of Bombay for constructing residences for Government officers and the acquisition was objected to by the lessee of the land on the ground that the land was not being taken or made available to the public at large and, therefore, the acquisition was not for a public purpose. When the matter went up before the High Court, Batchelor, J., observed that the phrase 'public purpose', whatever else it may mean, must include a purpose, that is, an object or aim, in which the general interest of the community, as opposed to the particular interest of individuals, is directly and vitally concerned. (1) (1914) L.R. 42 I.A. 44. In that case what was being considered was a re-entry clause in a lease deed and not the provisions of the Land Acquisition Act. That clause left it absolutely to the lessor, the East India Company, to say whether possession should be resumed by it if the land was required for a public purpose. It was in this context that the question whether the land was needed for a public purpose was considered. The argument before the Privy Council rested upon the view that there cannot be a 'public purpose' in taking land if that land, when taken, is not in some way or other made available to the public at large. Rejecting it they held that the true view is that expressed by Batchelor, J., and observed: "That being so, all that remains is to determine whether the purpose here is a purpose in which the general interest of the community is concerned. Prima facie the Government are good judges of that. They are not absolute judges. They cannot say 'sic volo sic jubeo'." Mr. Pathak strongly relied on these observations and said that the Privy Council have held that the matter is justiciable.
It is enough to say that that was not a case under the Land Acquisition Act and, therefore, conclusiveness did not attach itself to the satisfaction of the Government that a particular purpose fell within the concept of public purpose. Mr. Pathak then contended that the question as to the meaning to be given to the phrase 'public purpose' is not given conclusiveness by sub-s. (3) of s. 6. According to him all that sub-s. (3) of s. 6 says is that the Government's declaration that particular land is needed for a public purpose or a company shall be conclusive and it does not say that the Government is empowered to define what is a public purpose and then say that the particular purpose falls within that definition. As already stated, no attempt has been made in the Act to define public purpose in a compendious way. Public purpose is bound to vary with the times and the prevailing conditions in a given locality and, therefore, it would not be a practical proposition even to attempt a comprehensive definition of it. It is because of this that the legislature has left it to the Government to say what is a public purpose and also to declare the need of a given land for a public purpose. It was contended on the basis of the decision of this Court in R. L. Arora v. The State of U. P. (1) that the Courts have power to consider whether the purpose for which land is being acquired is a public purpose. In that case land was being acquired, as already stated, for a company and the real question which arose for consideration was what is the meaning to be attached to the words "useful to the public" occurring in cl. (b) of sub-s. (1) of s. 40 of the Act. The land was required by the company to enable it to establish its works and it was contended before this Court that the products manufactured by the company will be useful to the public in general and, therefore, the acquisition would be covered by cl. (b) of sub-s. (1) of s. 40. (1) [1962] Supp. 2 S.C.R. 149.
Negativing this contention Wanchoo, J., who spoke for the Court, observed: "It is true that it is for the Government to be satisfied that the work to be constructed will be useful to the public but this does not mean that it is the Government which has the right to interpret the words used in s. 40(1)(b)... It is the Court which has to interpret what those words mean. After the court has interpreted these words, it is the Government which has to carry out the object of ss. 40 and 41 to its satisfaction. The Government cannot say that ss. 40 and 41 mean this and further say that they are satisfied that the meaning they have given to the relevant words in these sections has been carried out in the terms of the agreement provided by them... The Government cannot both give meaning to the words and also say that they are satisfied on the meaning given by them. The meaning has to be given by the Court and it is only thereafter that the Government's satisfaction may not be open to challenge... We have already indicated what these words mean and if it plainly appears that the Government are satisfied as a result of giving some other meaning to the words, the satisfaction of the Government is of no use, for then they are not satisfied about what they should be satisfied. In the present case the Government seems to have taken a wrong view that so long as the product of the works is useful to the public and so long as the public is entitled to go upon the works in the way of business, that is all that is required by the relevant words in ss. 40 and 41." It was no doubt argued before the Court that the declaration made by the Government under s. 6(1) that the land was needed for a company is conclusive and, therefore, the question as to the actual purpose of the acquisition is not justiciable. This Court pointed out that s. 6(3) makes the declaration under s.
6(1) conclusive evidence of the fact that the land is needed for a public purpose or for a company and that as the declaration stated that the land was needed for a company and that fact was not disputed by the parties, the provisions of s. 6(3) were of no assistance. We may point out that even according to that decision, before a declaration under s. 6(1) is made the Government should be satisfied that the land is required for one of the two purposes set out in s. 40(1) of the Act. The Government can consent to the making of a declaration under s. 6(1) after it is satisfied under s. 41 about the fact that the land is required for a company for the purposes set out in cls. (a) and (b) of that section. But the declaration made thereafter is confined only to one matter and that is that the land is required for a company and nothing more. The question whether in fact the land is required by the company for the purposes set out in cls. (a) and (b) of s. 40(1) is also concluded, because sub-s. (3) of s. 6 makes the declaration conclusive evidence not only of the fact that the land is required for a company but also of the fact that the land is required by a company for a purpose specified in s. 40(1) of the Act. The observations made by Wanchoo, J., therefore, do not assist the petitioners. Reliance was then placed on two decisions of this Court in which the meaning of the expression "public purpose" is considered. One is Babu Barkya Thakur v. The State of Bombay (1). There this Court observed: "It will thus be noticed that the expression 'public purpose' has been used in its generic sense of including any purpose in which even a fraction of the community may be interested or by which it may be benefited." Later in the same judgment this Court pointed out that where a large section of the community is concerned, its welfare is a matter of public concern. The other is Pandit Jhandu Lal v. The State of Punjab (2). There this Court has pointed out that the purposes of public utility referred to in ss.
40 and 41 are akin to a public purpose. No doubt in these decisions this Court stated what, broadly speaking, the expression 'public purpose' means. But in neither case did the question arise for consideration whether the meaning to be given to the expression 'public purpose' is justiciable. (1) [1961] 1 S.C.R. 128. (2) [1961] 2 S.C.R. 459. If the purpose is not one contemplated by the Act at all, the action of the Government would be colourable as not being relatable to the power conferred upon it by the Act and its declaration will be a nullity. Subject to this exception the declaration of the Government will be final. A number of decisions were cited before us by the learned Advocate-General in support of the contention that the declaration of the Government is final. One of those decisions is Wijeyesekera v. Festing (1). In that case, dealing with Ceylon Ordinance No. 3 of 1876 (Acquisition of Land Ordinance (Ceylon), 1876), which incidentally did not contain a provision similar to that of sub-s. (3) of s. 6, their Lordships observed: "The whole frame of the ordinance shows that what the District Court is concerned with is the assessment of compensation, but their Lordships do not desire to rest their opinion that the decision of the Governor is final merely upon the question of the Court before which the question is raised. It appears to their Lordships that the decision of the Governor that the land is wanted for public purposes is final, and was intended to be final, and could not be questioned in any Court." (1) [1919] A.C. 646. There, the land was required for a road and the contention was that the Government did not take the opinion of the Surveyor-General as to its fitness for such purpose. On this ground it was contended that the Governor's declaration could be questioned. But this was negatived by the Privy Council. Following this decision, in Vadlapatla Suryanarayana v.
The Province of Madras (1), a Full Bench of the Madras High Court held that a declaration by the Provincial Government under s. 6(1) of the Act that certain lands were required for a public purpose is final and, where there is no charge against the Provincial Government that it had acted in fraud of its powers, its action in directing the acquisition cannot be challenged in a Court of law. A similar view has been taken in Samruddin Sheikh v. Sub-Divisional Officer (2); V. Gopalakrishna v. The Secretary, Board of Revenue, Madras (3); S. Jagannadha Rao v. The State of Andhra Pradesh (4); Secretary of State for India in Council v. Akbar Ali (5). Several other decisions to the same effect, some of them post-Constitution, were also mentioned by the learned Advocate-General, which take the same view as these decisions. Not a single decision was, however, brought to our notice in which it has been held that the question as to what is a public purpose or whether it exists can be inquired into by the Courts, even in the absence of colourable exercise of power, because s. 6(3) has become void under Art. 13(2) of the Constitution. It was next contended that sub-s. (3) of s. 6 cannot stand in the way in a proceeding under Art. 226 or under Art. 32 of the Constitution and in support of this argument reliance was placed upon the decisions in Chudalmuthu Pillai v. State (6); Maharaja Luchmeshwar Singh v. Chairman of the Darbhanga Municipality (7); (1) I.L.R. [1946] Mad. 153. (2) A.I.R. 1954 Assam 81. (3) A.I.R. 1954 Mad. 362. (4) A.I.R. 1960 A.P. 343. (5) (1923) I.L.R. 45 All. 413. (6) I.L.R. [1952] Trav.-Cochin 488. (7) (1890) L.R. 17 I.A. 90. Rajindra Kumar Ruia v. Government of West Bengal (1); Major S. Arjan Singh v. State of Punjab (2). In the first mentioned case it was contended that the order was actuated by mala fides and also that there were various irregularities in the proceedings.
As we have already indicated, if the declaration is vitiated by fraud, then the declaration is itself bad and what is bad cannot be protected by sub-s. (3) of s. 6. In the next case the act of the Court of Wards in handing over the ward's lands for a nominal consideration for a public purpose was challenged in a suit. The challenge was upheld by the Privy Council on the ground that lawful possession could only be taken by the State in strict compliance with the provisions of the Land Acquisition Act. The question raised here did not arise for consideration in that case. In the other two cases the declaration was challenged under Art. 226 and in both the cases the challenge failed. In the first of the two latter-mentioned cases it failed on the ground that there was no fraud and in the second on the ground that the provisions of sub-s. (3) of s. 6 precluded the court from examining the validity of the declaration. None of these cases, therefore, supports the contention of the petitioners. Moreover, we are not concerned here with the powers of the High Court under Art. 226 but with those of this Court. It is said, however, that the bar created by s. 6(3) would not stand in the way of this Court while dealing with a petition under Art. 32 and, therefore, it is open to us to ascertain whether an acquisition is for a public purpose or not. While it is true that the powers of this Court cannot be taken away by any law which may hereafter be made unless the Constitution itself is amended, we are here faced with a provision of law which is a pre-Constitution law and which is protected by the Constitution to the extent indicated in Art. 31(5)(a), and an attack on its validity on the ground that it infringes the right guaranteed by Art. 19(1)(f) has failed. Therefore it is a good and valid law and the restriction placed by it on the powers of this Court under Art. 32 must operate. It is only to a case of fraud that the protection of s. 6(3) will not extend. (1) A.I.R. 1952 Cal. 573. (2) I.L.R. [1958] Punjab 1451.
For, the question whether a particular action was the result of a fraud or not is always justiciable, provisions such as s. 6(3) notwithstanding. We were referred by the learned Advocate-General to a recent decision of the House of Lords in Smith v. East Elloe Rural District Council (1). In that case their Lordships were considering the Acquisition of Land (Authorisation of Procedure) Act, 1946 (9 and 10 Geo. 6, c. 49), Sch. 1, Pt. IV, paras. 15 and 16. Paragraph 15(1) of Part IV, Sch. 1 to the Act provides as follows: "If any person aggrieved by a compulsory purchase order desires to question the validity thereof..... on the ground that the authorisation of compulsory purchase thereby granted is not empowered to be granted under this Act.......... he may, within six weeks from the date on which notice of the confirmation or making of the order.... is first published...... make an application to the High Court........" Paragraph 16 provides as follows: "Subject to the provisions of the last foregoing paragraph, a compulsory purchase order.... shall not.... be questioned in any legal proceedings whatsoever......" (1) [1956] A.C. 736. The land having been made the subject of compulsory purchase, the owner brought an action in which, among other things, a declaration was claimed that the order was made and confirmed wrongfully and in bad faith and that the clerk acted wrongfully and in bad faith in procuring the order and its confirmation. The House of Lords held by majority that the action could not proceed, except against the clerk for damages, because the plain prohibition in paragraph 16 precluded the Court from questioning the validity of the order. They also held that paragraph 15 gave no opportunity to a person aggrieved to question the validity of a compulsory purchase order on the ground that it was made or confirmed in bad faith.
As we have already said, the condition for the exercise of the powers by the State Government is the existence of a public purpose (or the purpose of a company) and if the Government makes a declaration under s. 6(1) in fraud of the powers conferred upon it by that section the satisfaction on which the declaration is made is not about a matter with respect to which it is required to be satisfied by the provision and, therefore, its declaration is open to challenge as being without any legal effect. We are not prepared to go as far as the House of Lords in the above case. This brings us to the second argument advanced before us on behalf of the petitioners. The learned counsel contends that there could be no acquisition for a public purpose unless the Government had made a contribution towards the acquisition at public expense. According to him the acquisition in question was merely for the benefit of a company and the action of the Government was only a colourable exercise by it of its power to acquire land for a public purpose. The contention is that before making a declaration under sub-s. (1) of s. 6 the Government ought to have taken a decision that it will contribute towards the acquisition. In the case before us no such decision was taken by the Government till September 29, 1961, that is, just one day after this writ petition was admitted by this Court and a stay order issued by it. It is then said that the contribution of the Government towards the cost of acquisition being a very small fraction of the total probable cost of acquisition, the inference must be that the acquisition was not even partly at public expense and, therefore, the declaration was a colourable exercise of the power conferred by law. Then it is said that not only does the declaration omit to state that the contribution of the State towards the cost of acquisition was to be Rs.
100 only but also omits to mention that what was decided was that the Government was to bear only a part of the cost of acquisition and not the whole of it. The notification is said to be thus misleading and to create the impression that the entire cost of the acquisition is to come out of the public exchequer. Finally it is contended that the establishment of an industry by a private party for manufacturing refrigeration equipment cannot fall within the meaning of the expression 'public purpose'. It is no doubt true that the financial sanction for the contribution of Rs. 100 as part of the expenses for acquisition was accorded by the Finance Department on September 29, 1961. No doubt also that a day prior to the according of sanction this petition had been admitted by this Court and a stay order issued. But from these two circumstances it would not be reasonable to draw the inference that the declaration made by the Government was a colourable exercise of its power. The provisions of sub-s. (1) of s. 6, however, do not require that the notification made thereunder must set out the fact that the Government had decided to pay a part of the expenses of acquisition or even to state the extent to which the Government is prepared to make part contribution to the cost of acquisition. It is then contended that before the Government could spend any money from the public exchequer for acquiring land a provision has to be made in the budget and the absence of such provision would be a circumstance relevant for consideration. It is sufficient to say that the absence of a provision in the budget in respect of the cost of acquisition, whole or part, cannot affect the validity of the declaration and that if Government does spend some money without allotment in the budget, its expenditure may perhaps entitle the Accountant-General to raise an audit objection or may enable the Public Accounts Committee of the State Legislature to criticise the Government. But that is all.
Again, where the expenditure is of a small amount like Rs. 100 it may be possible for the Government to make payment from Contingencies and thus avoid objections of this kind. Whatever that may be, these are not circumstances which would suffice to show that the declaration was colourable. It was stated at the bar by the learned Advocate-General that the entire scheme of establishing a refrigeration factory in Punjab was examined at various stages and at different levels of Government as well as by different ministries and it was then decided to make a part contribution towards the cost of acquisition from public funds. As required by the Financial Rules the consent of the Finance Department had to be obtained for this purpose. This particular stage occupied considerable time and that is why there was a delay in according sanction. The statement of the learned Advocate-General was not challenged on behalf of the petitioners. Moreover the declaration under sub-s. (1) of s. 6 is clear on the point that the land is being acquired at public expense, and the provisions of sub-s. (3) of s. 6 preclude a Court from going behind such a declaration unless it is shown that the Government has in fact decided not to contribute any funds out of the public revenues for that purpose. For, if the Government had in fact taken a decision of that kind then the exercise of the power to make an acquisition would be open to challenge as being colourable. Then it is contended that the contribution by the State towards the cost of acquisition must be substantial and not merely nominal or token as in this case. The argument is that though the law permits acquisition for a public purpose to be made by the State by contributing only a part of the cost of acquisition that part cannot be a particle and in this connection reliance was placed on the decision in Chatterton v. Cave (1) which was followed in Ponnaia v. Secretary of State (2).
In the latter case the High Court of Madras observed that "the Legislature, when they provided that a part of the compensation should be paid from public revenues, did not mean that this condition would be satisfied by payment of a particle, e.g. one anna in Rs. 5,985". In that case land was being acquired (1) (1878) 3 App. Cas. 483, 491, 492. (2) A.I.R. 1926 Mad. 1099. for making a road between two villages in Ramnad District. A sum of Rs. 5,985 was required for the acquisition. Out of this amount only one anna was agreed to be contributed by the Government and it was contended on its behalf that this contribution satisfied the requirements of s. 6 of the Act. It was also contended that the declaration made under sub-s. (1) of s. 6 could not be challenged in view of the provisions of sub-s. (3) of s. 6 and reliance was placed on the decision in Wijeyesekara v. Festing (1). According to the High Court, the Government's share in the cost of acquisition being a 1/90,000th part of the amount, there was no real and bona fide compliance with the terms of the section and this was an indication of the illusory character of the object for which the provisions of the Act were being made use of. The High Court then referred to the decision in Chatterton's case (2) and pointed out that the House of Lords were averse to putting such an interpretation on the words "or part thereof" occurring in the Dramatic Copyright Act (3 & 4 William IV, c. 15) as would make a part to mean a particle. The High Court also referred to the decision in Maharaja Luchmeswar Singh's case (3) and held that the acquisition was a colourable exercise of the power conferred by the Act. This decision was not followed by the same High Court in Senja Naicken v. Secretary of State (4) where it was held that the State's contribution of one anna out of Rs. 926-8-6 for acquiring land for a road, Rs. 926-7-6 having been contributed by the ryots, was sufficient compliance with s.
6(1) of the Act. Both these decisions came up for consideration in Vadlapatla Suryanarayana's case (5) and there Ponnaia's case (6) was over-ruled and the view taken in Senja Naicken's case (4) was approved. (1) (1926) I.L.R. 50 Mad. 308. (2) (1878) 3 App. Cas. 483, 491, 492. (3) (1890) L.R. 17 I.A. 90. (4) (1926) I.L.R. 50 Mad. 308. (5) I.L.R. [1946] Mad. 153. (6) A.I.R. 1926 Mad. 1099. Chatterton's case (1) was a case of infringement of copyright where two plays had been adapted from a common source by the parties to the litigation. In that case it was accepted before the Court that the Dramatic Copyright Act protected "parts" of dramatic works and prohibited their use by persons other than the proprietor of the copyright. It was pointed out that in the case of ordinary copyright of published works the protection was restricted only to the whole of the work and did not extend to portions of those works. The Dramatic Copyright Act also contained a provision directing that infringement of the copyright would entitle the proprietor to damages of not less than 40 shillings. It was suggested that these differences indicated an intention to prevent the invasion of the dramatic copyright independently of the quantity or materiality of the portion of dialogue or dramatic incident proved to have been copied by another. Dealing with this argument Lord Hatherley observed: "Now it appears to me, my Lords, that this argument goes much too far. As was said by the counsel for the respondent, the appellant would wish to read the word 'part' in the Dramatic Copyright Act as 'particle', so that the crowing of the cock in 'Hamlet', or the introduction of a line in the dialogue, might be held to be an invasion of the copyright entitling the plaintiff to 40s. damages and consequently, as the law stood I believe at the time of the passing of the statute of 3 & 4 Will. 4, to the costs of his action," (pp.
491-2) Then after pointing out that while in the case of an ordinary copyright of published works a fair use made by others would not amount to a wrong (1) (1878) 3 App. Cas. 483, 491, 492. justifying an action at law, the position of dramatic performance is not the same, he observed: "They are not intended to be repeated by others or to be used in such a way as a book may be used, but still the principle de minimis non curat lex applies to a supposed wrong in taking a part of dramatic works as well as in re-producing a part of a book". (p. 492) Finally he observed that the parts which were so taken were neither substantial nor material parts and as it was impossible to say that damage had accrued to the plaintiff from such taking, his action must fail. Lord O'Hagan observed: "'Part', as was observed, is not necessarily the same as 'particle', and there may be a taking so minute in its extent and so trifling in its nature as not to incur the statutory liability." It is clear, therefore, that the analogy of Chatterton's case (1) cannot possibly apply to a case under the Act. As was pointed out in Senja Naicken's case (2): "Admittedly both of the litigants had derived their compositions from a common source and it stands to reason that before you can compel a man to pay damages for stealing the product of your brain, time and labour, you must be able to point out that any resemblance between his production and yours is not merely accidental but is a designed theft of the product of your brain. Otherwise...... one might go to the absurdity of objecting to a man using the same words (1) (1878) 3 App. Cas. 483, 491, 492. (2) (1926) I.L.R. 50 Mad. 308. though in a different collocation as you have done." With these observations we agree. Now, as regards Maharaja Luchmeswar Singh's case (1). The facts were these. The plaintiff's land was under the management of the Court of Wards during his minority. A notification under s.
6(1) of the Land Acquisition Act, 1870 was made with respect to certain land belonging to the plaintiff for being acquired by the Government at the expense of the Darbhanga Municipality for a public purpose, that is, construction of a public ghat or landing place in the town of Darbhanga. But instead of complying with the provisions of the Land Acquisition Act and enquiring into the value of the land, the Collector, who was the Chairman of the Municipality and also a representative of the Court of Wards, took possession of the land and handed it over to the municipality. The compensation paid to the plaintiff was Re. 1/-, an amount agreed to by the Manager. The plaintiff, after attaining majority, instituted a suit for possession of the land and for mesne profits. His suit was dismissed by the courts below and he preferred an appeal before the Judicial Committee of the Privy Council. Allowing the appeal, their Lordships observed : "The offer and acceptance of the rupee was a colourable attempt to obtain a title under the Land Acquisition Act without paying for the land........" How this case could at all have any bearing upon the point which arose for consideration in Ponnaia's case we fail to see. This case is also relied on before us on behalf of the petitioners and we have referred to it earlier in this judgment. It has nothing whatsoever to do with the question of contribution by the State towards the cost of acquisition. (1) (1890) L.R. 17 I.A. 90. (2) A.I.R. 1926 Mad. 1099. We would like to add that the view taken in Senja Naicken's case (1) has been followed by the various High Courts. It is next contended that the declaration under s. 6(1) is merely a colourable device to enable the respondent No. 6 to do something which, under the terms of s. 6(1), could not be done. "Public purpose" as explained by this Court in Babu Barkaya Thakur's case (1) means a purpose which is beneficial to the community.
But whether a particular purpose is beneficial or is likely to be beneficial to the community or not is a matter primarily for the satisfaction of the State Government. In the notification under s. 6(1) it has been stated that the land is being acquired for a public purpose, namely, for setting up a factory for manufacturing various ranges of refrigeration compressors and ancillary equipment. It was vehemently argued before us that manufacture of refrigeration equipment cannot be regarded as beneficial to the community in the real sense of the word and that such equipment will at the most enable articles of luxury to be produced. But the State Government has taken the view that the manufacture of these articles is for the benefit of the community. No materials have been placed before us from which we could infer that the view of the Government is perverse or that its action based on it constitutes a fraud on its power to acquire land or is a colourable exercise by it of such power. Further, the notification itself sets out the purpose for which the land is being acquired. That purpose, if we may recall, is to set up a factory for the manufacture of refrigeration compressors and (1) (1961) 1 S.C.R. 128. ancillary equipment. The importance of the undertaking to a State such as the Punjab, which has a surplus of fruit, dairy products etc., the general effect of the establishment of this factory on foreign exchange resources, spread of education, relieving the pressure on unemployment etc., have been set out in the affidavit of the respondent and their substance appears in the earlier part of this judgment. The affidavits have not been controverted and we have, therefore, no hesitation in acting upon them. On the face of it, therefore, bringing into existence a factory of this kind would be a purpose beneficial to the public even though that is a private venture.
As has already been pointed out, facilities for providing refrigeration are regarded in modern times as public utilities. All the greater reason, therefore, that a factory which manufactures essential equipment for establishing public utilities must be regarded as an undertaking carrying out a public purpose. It is well established in the United States of America that the power of eminent domain can be exercised for establishing public utilities. Such a power could, therefore, be exercised for establishing a factory for manufacturing equipment upon which a public utility depends. It is, therefore, clear that quite apart from the provisions of sub-s. (3) of s. 6 the notification of the State Government under s. 6 cannot be successfully challenged on the ground that the object of the acquisition is not to carry out a public purpose. We cannot, therefore, accept the petitioners' contention that the action of the Government in making the notification under sub-s. (1) of s. 6 was a colourable exercise of the power conferred by the Act. The next argument to be considered is whether there has been a discrimination against the petitioners. They claim that as they intend to establish a factory for manufacturing paper, which is also an article useful to the community, they are as good an industrial concern as the respondent No. 6 and the State Government in taking away land from them and giving it to respondent No. 6 is practising discrimination against them. In the first place it is denied on behalf of the respondents that the petitioners are going to establish a paper factory. It is not disputed that no new factory can be established without obtaining a licence from the appropriate authority under the Industries (Development and Regulation) Act, 1951, and that the petitioners do not hold any licence of this kind. According to the petitioners, however, they had entered into an agreement with the firm of Messrs. R. S.
Madhoram & Sons for establishing such a factory and that in collaboration with them they propose to establish a factory on the lands which are now being acquired. It is true that a licence for erecting a paper factory was granted to Messrs. R. S. Madhoram and Sons but the location of that factory is to be in Uttar Pradesh and not in the State of Punjab. Without, therefore, obtaining the approval of the appropriate authority the location of the factory could not be shifted to the land in question which, as already stated, is situate in the State of Punjab. Moreover this licence has since been cancelled on the ground that Messrs. R. S. Madhoram and Sons have taken no steps so far for establishing a paper factory. It is necessary to mention that the petitioners allege that this cancellation was procured by the respondents with the object of impeding the present petitioners. With that, however, we need not concern ourselves because that licence as it stood on the date of the petitions did not entitle Messrs. R. S. Madhoram and Sons to establish a factory in the State of Punjab. Apart from that it is always open to the State to fix priorities amongst public utilities of different kinds, bearing in mind the needs of the State, the existing facilities and other relevant factors. In a State like the Punjab where there is a large surplus of fruit and dairy products there is need for preserving them. There are already in existence a number of cold storages in that State. The Government would, therefore, be acting reasonably in giving priority to a factory for manufacturing refrigeration equipment which would be available for replacement in these storages and which would also be available for equipping new cold storages. Apart from this it is for the State Government to say which particular industry may be regarded as beneficial to the public and to decide that its establishment would serve a public purpose.
No question of discrimination would, therefore, arise merely by reason of the fact that Government has declared that the establishment of a particular industry is a public purpose. The challenge to the notification based on Art. 14 of the Constitution must, therefore, fail. It is the last and final contention of the petitioners in these petitions that the notifications under ss. 4 and 6 cannot be made simultaneously and that since both the notifications were published in the Gazette of the same date, that is, August 25, 1961, the provisions of law have not been complied with. The argument is that the Act takes away from a person his inherent right to hold and enjoy his property and, therefore, the exercise of the statutory power by the State to take away such property for a public purpose by paying compensation must be subject to the meticulous observance of every provision of law entitling it to make the acquisition. It is pointed out that under sub-s. (1) of s. 4 a notification is first made stating that a particular land "is likely to be needed for a public purpose". Thereafter under s. 5A a person interested in the land has a right to object to the acquisition and the whole question has to be finally considered and decided by the Government after hearing such person. It is only thereafter that in a normal case the Government is entitled to make a notification under sub-s. (1) of s. 6 declaring that it is satisfied "after considering the report, if any, made under s. 5A, sub-s. (2)" that the land is required for a public purpose. This is the sequence in which the notifications have to be made. The reason why the sequence has to be followed is to make it clear that the Government has applied its mind to all the relevant facts and then come to a decision or arrived at its satisfaction even in a case where the provisions of s. 5A need not be complied with. Undoubtedly the law requires that the notification under sub-s. (1) of s.
6 must be made only after the Government is satisfied that a particular land is required for a public purpose. Undoubtedly also, where the Government has not directed under sub-s. (4) of s. 17 that the provisions of s. 5A need not be complied with, the two notifications, that is, under sub-s. (1) of s. 4 and sub-s. (1) of s. 6, cannot be made simultaneously. But where such a direction has been made it is difficult to see why the two notifications cannot, in such a case, be made simultaneously. A notification under sub-s. (1) of s. 4 is a condition precedent to the making of a notification under sub-s. (1) of s. 6. If the Government, therefore, takes a decision to........ In the case before us the preliminary declaration under s. 4(1) was made on August 18, 1961, and a declaration as to the satisfaction of the Government on August 19, 1961, though both of them were published in the Gazette of August 25, 1961. The preliminary declaration as well as the subsequent declaration are both required by law to be published in the official gazette. But the law does not make the prior publication of the notification under sub-s. (1) of s. 4 a condition precedent to the publication of the declaration under sub-s. (1) of s. 6. The serial numbers of the notifications are No. 5809/41 B(1)/61/18755 dated August 18, 1961, and 5809-4 IB (1)/61/18760 dated August 19, 1961, and it would appear from them that the preliminary notification did in fact precede the final declaration. These were the only objections raised before us and as every one of them has failed the petitions must be dismissed. We accordingly dismiss them with costs. As, however, all petitions were heard together there will be only one hearing fee. SUBBA RAO, J.-I have perused the judgment prepared by my learned brother, Mudholkar, J. With great respect, I cannot agree. The facts are fully stated by my learned brother and they need not be restated except to the extent relevant to the question I propose to consider. About six acres of land purchased by the petitioners in Writ Petition No. 246 of 1961 for a sum of Rs. 4,60,000 in February, 1961, is situate in village Meola Maharajpur, Tehsil Ballabhgarh, District Gurgaon.
On August 25, 1961, the Governor of Punjab published a notification dated August 18, 1961, in the Official Gazette under s. 4 of the Land Acquisition Act, 1894, hereinafter called the Act, to the effect that the said land was likely to be needed by the Government at public expense for a public purpose, namely, for setting up a factory for manufacturing various ranges of refrigeration compressors and ancillary equipment. Under s. 17 of the Act the appropriate Government directed that the provisions of s. 5A would not apply to the said acquisition. On the same day, another notification under s. 6 of the Act dated August 19, 1961, was published to the effect that the Governor of Punjab was satisfied that the land specified therein was required by the Government at public expense for the said purpose. On September 29, 1961, the Government of Punjab sanctioned an expense of Rs. 100 for the purpose of acquisition of the said land. The validity of the said notification is questioned on various grounds. But as I am in favour of the petitioners on the question of interpretation of the proviso to s. 6 of the Act, I do not propose to express my opinion on any other question raised in the case. The material part of s. 6(1) of the Act reads: "Subject to the provisions of Part VII of this Act, when the appropriate Government is satisfied, after considering the report, if any, made under section 5A, sub-section (2), that any particular land is needed for a public purpose, or for a Company, a declaration shall be made to that effect under the signature of a Secretary to such Government or of some officer duly authorized to certify its orders: Provided that no such declaration shall be made unless the compensation to be awarded for such property is to be paid by a Company, or wholly or partly out of public revenues or some fund controlled or managed by a local authority."
Under that section, the Government may declare that a particular land is needed for a public purpose or for a company; and the proviso imposes a condition on the issuance of such a declaration. The condition is that no such declaration shall be made unless the compensation to be awarded for such property is to be paid by the company or, wholly or partly, out of the public revenues. A reasonable construction of this provision, uninfluenced by decisions, would be that in the case of an acquisition for a company the entire compensation will be paid by the company, and in the case of an acquisition for a public purpose the Government will pay the whole or a substantial part of the compensation out of public revenues. The underlying object of the section is apparent: it is to provide for a safeguard against abuse of power. A substantial contribution from public coffers is ordinarily a guarantee that the acquisition is for a public purpose. But it is argued that the terms of the section are satisfied if the appropriate Government contributes a nominal sum, say a pie, even though the total compensation payable may run into lakhs. This interpretation would lead to extraordinary results. The Government may acquire the land of A for B for a declared public purpose, contributing a pie towards the estimated compensation of, say, Rs. 1,00,000. If that was the intention of the Legislature, it would not have imposed a condition of payment of part of the compensation, for that provision would not serve the purpose for which it must have been intended. Therefore, a reasonable meaning should be given to the expression "wholly or partly". The proviso says that the compensation shall be paid by the company or, wholly or partly, out of public revenues. A contrast between these two modes of payment suggests the idea that in one case the compensation must come out of the company's coffers and in the other case the whole or some reasonable part of it should come from public revenues.
This idea excludes the assumption that practically no compensation need come out of public revenues. The juxtaposition of the words "wholly or partly" and the disjunctive between them emphasize the same idea. It will be incongruous to say that public revenue shall contribute rupees one lakh or one pie. The payment of a part of the compensation must have some rational relation to the compensation payable in respect of the acquisition for a public purpose. So construed, "part" can only mean a substantial part of the estimated compensation. There cannot be an exhaustive definition of the words "substantial part of the compensation". What is a substantial part of a compensation depends upon the facts of each case, the estimate of the compensation and other relevant circumstances. While a court will not go meticulously into the question to strike a balance between a part and a whole, it will certainly be in a position to ascertain broadly whether in a particular case the amount contributed by the Government towards compensation is so illusory that it cannot conceivably be a substantial part of the compensation. There is some conflict of view on this question. The House of Lords in Chatterton v. Cave (1) defined the word "part" in the context of the provisions of the Dramatic Copyright Act. The words in the statute were "production or any part thereof". The plaintiffs therein were the proprietors of a drama called "The Wandering Jew" and it was alleged that the defendant produced a drama on the same subject. It was found that the drama of the defendant was not, except in respect of two scenes or points, a copy from, or a colourable imitation of, the drama of the plaintiffs. In that context the House of Lords construed the relevant words "production or any part thereof".
Lord O'Hagan observed: "'Part', as was observed, is not necessarily the same as 'particle', and there may be a taking so minute in its extent and so trifling in its nature as not to incur the statutable liability." This decision may not be directly in point, but the construction placed upon the expression "part" is of general application. In the context of that statute, the court found that the Legislature clearly intended by the words "any part" a real, substantial part. A division Bench of the Madras High Court, consisting of Spencer and Ramesam, JJ., directly considered this point in Ponnaia v. Secretary of State (2). There, a total sum of Rs. 5,985 was required for the acquisition of the property of the appellant therein and the Government contributed from Provincial revenues an amount of one anna towards that compensation. The learned Judges held that it was an indication of the illusory character of the object for which the provisions of the Act had been made use of. Adverting to the argument that any small contribution by the Government (1) (1878) 3 App. Cas. 483, 498. (2) A.I.R. 1926 Mad. 1099. would satisfy the requirement of s. 6 of the Act, Ramesam, J., observed at p. 1100: "We think that the Legislature, when they passed the Land Acquisition Act, did not intend that owners should be deprived of their ownership by a mere device of private persons employing the Act for private ends or for the gratification of private spite or malice." These are weighty observations of a judge of great experience, who was also the Government Pleader before he became a judge of the Madras High Court. The observations also indicate the statutory object in insisting on a substantial contribution from public revenues, for a strict insistence thereon would prevent to a large extent the abuse of power under the Act.
But unfortunately the correctness of this decision was not accepted by another division Bench of the same High Court, consisting of Odgers and Madhavan Nair, JJ., in Senja Naicken v. Secretary of State for India (1). I have carefully gone through the judgment in that case, and, with great respect to the learned Judges, I cannot see any acceptable reasons for departing from the earlier view of the same court. Odgers, J., concentrated his criticism of the earlier judgment more on the reliance by the earlier Bench on the decision of the House of Lords than on the intrinsic merits of the decision itself. It is true that the learned Judges in the earlier decision relied upon the observations of the House of Lords, but that was only in support of their conclusion why the expression "part" should not be understood as a particle. But the main reason they gave was that, having regard to the object of the proviso, the Legislature in using the word "part" could have only meant a substantial part or otherwise the object would be (1) (1926) I.L.R. 50 Mad. 308. defeated and the abuse of power which it intended to prevent could easily be perpetrated under the colour of the Act. The second reason given by Odgers, J., was stated by the learned Judge thus at p. 314: "I invited the learned Advocate for the appellant to say where a 'particle' would end and 'part' begin of this sum of Rs. 600. It is true an anna is a very small part of Rs. 600. But nevertheless it is a part." This adherence to the strict letter in complete disregard of the spirit of the section certainly defeats the purpose of the legislation. The word "partly" in the proviso should be construed in the setting in which it is used and not in vacuum, as the learned Judge sought to do. The third reason the learned Judge gives for his conclusion was stated at p. 315 thus: "Suppose on appeal the compensation had been enhanced.
There is no doubt the Government would have to defray the extra sum out of the public revenues and having once undertaken the acquisition they could not call on the constituents again." This comment again, in my view, is beside the point. It is not the duty of the Government to meticulously fix a figure; it may agree to bear a definite proportion of the compensation that may ultimately be awarded to a claimant and in that event subsequent variations by a hierarchy of tribunals would not cause any difficulty, for the proportion would attach itself to the varying figures. That apart, it need not be a particular fraction of the compensation ultimately awarded. If the Government agrees to contribute a substantial part of the estimated compensation that would meet the requirements of the section. The other learned Judge, Madhavan Nair, J., in substance agreed with the judgment of Odgers, J., and did not disclose any additional reasons for differing from the decision of the earlier Bench. In my view, the decision in Senja Naicken v. Secretary of State (1) is not correct. These two decisions were considered by a Full Bench of the Madras High Court in Suryanarayana v. Province of Madras (2). There Sir Lionel Leach, C.J., delivering the judgment of the Full Bench, noticed the judgment of the division Bench in Ponnaia v. Secretary of State (3) and the criticism offered on that judgment by the later division Bench in Senja Naicken v. Secretary of State (1) and observed: "We are in entire agreement with this criticism." Then the learned Chief Justice proceeded to observe: "In interpreting the proviso we can only have regard to the words used and, in our judgment, it is sufficient compliance with the proviso if any part of the compensation is paid out of public funds. One anna is a part of the compensation. It is true it is a small part, but it is nevertheless a part."
This literal interpretation of the word "part" de hors the setting in which that word appears in the section, in my view, makes the condition imposed on the exercise of the jurisdiction by the Government meaningless and also attributes to the Legislature an intention to impose a purposeless and ineffective formality. (1) (1926) I.L.R. 50 Mad. 308. (2) I.L.R. (1946) Mad. 153, 158. (3) A.I.R. 1926 Mad. 1099. For the reasons already given, I cannot accept the correctness of this judgment. I, therefore, hold that unless the Government agrees to contribute a substantial part of the compensation, depending upon the circumstances of each case, the condition imposed by the proviso on the exercise by the appropriate Government of its jurisdiction is not complied with. In the instant case it is impossible to say that a sum of Rs. 100 out of an estimated compensation which may go even beyond Rs. 4,00,000 is in any sense of the term a substantial part of the said compensation. The Government has clearly broken the condition and, therefore, it has no jurisdiction to issue the declaration under s. 6 of the Act. In this view it is not necessary to express my opinion on the other questions raised in this case. In the result the said notification is quashed and respondents 1 to 5 are hereby prohibited from giving effect to the said notification and taking any proceedings thereunder. It is common case that the order in Writ Petition No. 246 of 1961 would govern Writ Petitions Nos. 247 and 248 of 1961 also. A similar order will issue in these two petitions also. The respondents will pay the costs of the petitioners in all the petitions. By COURT: In view of the majority opinion the Court dismissed the Writ Petitions with costs. There will be one set of hearing fee. Petitions dismissed.
Handling character strings is supported through two final classes: String and StringBuffer. The String class implements immutable character strings, which are read-only once the string has been created and initialized, whereas the StringBuffer class implements dynamic character strings. Character strings implemented using these classes are genuine objects, and the characters in such a string are represented using 16-bit characters (see Section 2.1, p. 23). This section discusses the class String that provides facilities for creating, initializing, and manipulating character strings. The next section discusses the StringBuffer class. The easiest way of creating a String object is using a string literal: String str1 = "You cannot change me!"; A string literal is a reference to a String object. The value in the String object is the character sequence enclosed in the double quotes of the string literal. Since a string literal is a reference, it can be manipulated like any other String reference. The reference value of a string literal can be assigned to another String reference: the reference str1 will denote the String object with the value "You cannot change me!" after the assignment above. A string literal can be used to invoke methods on its String object: int len = "You cannot change me!".length(); // 21 The compiler optimizes handling of string literals (and compile-time constant expressions that evaluate to strings): only one String object is shared by all string-valued constant expressions with the same character sequence. Such strings are said to be interned, meaning that they share a unique String object if they have the same content. The String class maintains a private pool where such strings are interned. String str2 = "You cannot change me!"; Both String references str1 and str2 denote the same String object, initialized with the character string: "You cannot change me!". So does the reference str3 in the following code. 
The compile-time evaluation of the constant expression involving the two string literals results in a string that is already interned: String str3 = "You cannot" + " change me!"; // Compile-time constant expression In the following code, both the references can1 and can2 denote the same String object that contains the string "7Up": String can1 = 7 + "Up"; // Value of compile-time constant expression: "7Up" String can2 = "7Up"; // "7Up" However, in the code below, the reference can4 will denote a new String object that will have the value "7Up" at runtime: String word = "Up"; String can4 = 7 + word; // Not a compile-time constant expression. The sharing of String objects between string-valued constant expressions poses no problem, since the String objects are immutable. Any operation performed on one String reference will never have any effect on the usage of other references denoting the same object. The String class is also declared final, so that no subclass can override this behavior. The String class has numerous constructors to create and initialize String objects based on various types of arguments. The following shows two of them: String(String s) This constructor creates a new String object, whose contents are the same as those of the String object passed as argument. String() This constructor creates a new String object, whose content is the empty string, "". Note that using a constructor creates a brand new String object, that is, using a constructor does not intern the string. A reference to an interned string can be obtained by calling the intern() method in the String class; in practice, there is usually no reason to do so.
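To make the pooling behavior concrete, here is a small sketch (the class name InternDemo is our own) contrasting a string literal, a constructed String object, and the reference returned by intern():

```java
public class InternDemo {
    public static void main(String[] args) {
        String literal = "You cannot change me!";                 // interned in the private pool
        String constructed = new String("You cannot change me!"); // brand new String object

        // Same character sequence, but two distinct objects:
        System.out.println(literal == constructed);               // false
        System.out.println(literal.equals(constructed));          // true

        // intern() returns the reference to the pooled String object:
        System.out.println(literal == constructed.intern());      // true
    }
}
```

As noted above, there is usually no practical reason to call intern(); the sketch only demonstrates that a constructor bypasses the pool while intern() consults it.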
In the following code, the String object denoted by str4 is different from the String object passed as argument: String str4 = new String("You cannot change me!"); Constructing String objects can also be done from arrays of bytes, arrays of characters, or string buffers: byte[] bytes = {97, 98, 98, 97}; char[] characters = {'a', 'b', 'b', 'a'}; StringBuffer strBuf = new StringBuffer("abba"); //... String byteStr = new String(bytes); // Using array of bytes: "abba" String charStr = new String(characters); // Using array of chars: "abba" String buffStr = new String(strBuf); // Using string buffer: "abba" In Example 10.3, note that the reference str1 does not denote the same String object as references str4 and str5. Using the new operator with a String constructor always creates a new String object. The expression "You cannot" + words is not a constant expression and, therefore, results in a new String object. The local references str2 and str3 in the main() method and the static reference str1 in the Auxiliary class all denote the same interned string. Object value equality is hardly surprising between these references. It might be tempting to use the operator == for object value equality of string literals, but this is not advisable.
public class StringConstruction { static String str1 = "You cannot change me!"; // Interned public static void main(String[] args) { String emptyStr = new String(); // "" System.out.println("emptyStr: " + emptyStr); String str2 = "You cannot change me!"; // Interned String str3 = "You cannot" + " change me!"; // Interned String str4 = new String("You cannot change me!"); // New String object String words = " change me!"; String str5 = "You cannot" + words; // New String object System.out.println("str1 == str2: " + (str1 == str2)); // (1) true System.out.println("str1.equals(str2): " + str1.equals(str2)); // (2) true System.out.println("str1 == str3: " + (str1 == str3)); // (3) true System.out.println("str1.equals(str3): " + str1.equals(str3)); // (4) true System.out.println("str1 == str4: " + (str1 == str4)); // (5) false System.out.println("str1.equals(str4): " + str1.equals(str4)); // (6) true System.out.println("str1 == str5: " + (str1 == str5)); // (7) false System.out.println("str1.equals(str5): " + str1.equals(str5)); // (8) true System.out.println("str1 == Auxiliary.str1: " + (str1 == Auxiliary.str1)); // (9) true System.out.println("str1.equals(Auxiliary.str1): " + str1.equals(Auxiliary.str1)); // (10) true System.out.println("\"You cannot change me!\".length(): " + "You cannot change me!".length());// (11) 21 } } class Auxiliary { static String str1 = "You cannot change me!"; // Interned } Output from the program: emptyStr: str1 == str2: true str1.equals(str2): true str1 == str3: true str1.equals(str3): true str1 == str4: false str1.equals(str4): true str1 == str5: false str1.equals(str5): true str1 == Auxiliary.str1: true str1.equals(Auxiliary.str1): true "You cannot change me!".length(): 21 char charAt(int index) A character at a particular index in a string can be read using the charAt() method. The first character is at index 0 and the last one at index one less than the number of characters in the string. 
If the index value is not valid, a StringIndexOutOfBoundsException is thrown. void getChars(int srcBegin, int srcEnd, char[] dst, int dstBegin) This method copies characters from the current string into the destination character array. Characters from the current string are read from index srcBegin to the index srcEnd-1, inclusive. They are copied into the destination array, starting at index dstBegin and ending at index dstBegin+(srcEnd-srcBegin)-1. The number of characters copied is (srcEnd-srcBegin). An IndexOutOfBoundsException is thrown if the indices do not meet the criteria for the operation. int length() This method returns the number of characters in a string. Example 10.4 uses these methods at (3), (4), (5), and (6). The program prints the frequency of a character in a string and illustrates copying from a string into a character array. public class ReadingCharsFromString { public static void main(String[] args) { int[] frequencyData = new int [Character.MAX_VALUE];// (1) String str = "You cannot change me!"; // (2) // Count the frequency of each character in the string. for (int i = 0; i < str.length(); i++) // (3) try { frequencyData[str.charAt(i)]++; // (4) } catch(StringIndexOutOfBoundsException e) { System.out.println("Index error detected: "+ i +" not in range."); } // Print the character frequency. System.out.println("Character frequency for string: \"" + str + "\""); for (int i = 0; i < frequencyData.length; i++) if (frequencyData[i] != 0) System.out.println((char)i + " (code "+ i +"): " + frequencyData[i]); System.out.println("Copying into a char array:"); char[] destination = new char [str.length()]; str.getChars( 0, 7, destination, 0); // (5) "You can" str.getChars(10, str.length(), destination, 7); // (6) " change me!" // Print the character array. for (int i = 0; i < 7 + (str.length() - 10); i++) System.out.print(destination[i]); System.out.println(); } } Output from the program: Character frequency for string: "You cannot change me!"
(code 32): 3 ! (code 33): 1 Y (code 89): 1 a (code 97): 2 c (code 99): 2 e (code 101): 2 g (code 103): 1 h (code 104): 1 m (code 109): 1 n (code 110): 3 o (code 111): 2 t (code 116): 1 u (code 117): 1 Copying into a char array: You can change me! In Example 10.4, the frequencyData array at (1) stores the frequency of each character that can occur in a string. The string in question is declared at (2). Since a char value is promoted to an int value in arithmetic expressions, it can be used as an index in an array. Each element in the frequencyData array functions as a frequency counter for the character corresponding to the index value of the element: frequencyData[str.charAt(i)]++; // (4) The calls to the getChars() method at (5) and (6) copy particular substrings from the string into designated places in the destination array, before printing the whole character array. Characters are compared based on their integer values. boolean test = 'a' < 'b'; // true since 0x61 < 0x62 Two strings are compared lexicographically, as in a dictionary or telephone directory, by successively comparing their corresponding characters at each position in the two strings, starting with the characters in the first position. The string "abba" is less than "aha", since the second character 'b' in the string "abba" is less than the second character 'h' in the string "aha". The characters in the first position in each of these strings are equal. The following public methods can be used for comparing strings: boolean equals(Object obj) boolean equalsIgnoreCase(String str2) The String class overrides the equals() method from the Object class. The String class equals() method implements String object value equality as two String objects having the same sequence of characters. The equalsIgnoreCase() method does the same, but ignores the case of the characters. 
int compareTo(String str2) int compareTo(Object obj) The first compareTo() method compares the two strings and returns a value based on the outcome of the comparison: the value 0, if this string is equal to the string argument a value less than 0, if this string is lexicographically less than the string argument a value greater than 0, if this string is lexicographically greater than the string argument The second compareTo() method (required by the Comparable interface) behaves like the first method if the argument obj is actually a String object; otherwise, it throws a ClassCastException. Here are some examples of string comparisons: String strA = new String("The Case was thrown out of Court"); String strB = new String("the case was thrown out of court"); boolean b1 = strA.equals(strB); // false boolean b2 = strA.equalsIgnoreCase(strB); // true String str1 = new String("abba"); String str2 = new String("aha"); int compVal1 = str1.compareTo(str2); // negative value => str1 < str2 String toUpperCase() String toUpperCase(Locale locale) String toLowerCase() String toLowerCase(Locale locale) Note that the original string is returned if none of the characters need their case changed, but a new String object is returned if any of the characters need their case changed. These methods delegate the character-by-character case conversion to corresponding methods from the Character class. These methods use the rules of the (default) locale (returned by the method Locale.getDefault()), which embodies the idiosyncrasies of a specific geographical, political, or cultural region regarding number/date/currency formats, character classification, alphabet (including case idiosyncrasies), and other localizations. 
Example of case in strings: String strA = new String("The Case was thrown out of Court"); String strB = new String("the case was thrown out of court"); String strC = strA.toLowerCase(); // Case conversion => New String object: // "the case was thrown out of court" String strD = strB.toLowerCase(); // No case conversion => Same String object String strE = strA.toUpperCase(); // Case conversion => New String object: // "THE CASE WAS THROWN OUT OF COURT" boolean test1 = strC == strA; // false boolean test2 = strD == strB; // true boolean test3 = strE == strA; // false Concatenation of two strings results in a string that consists of the characters of the first string followed by the characters of the second string. The overloaded operator + for string concatenation is discussed in Section 3.6 on page 62. In addition, the following method can be used to concatenate two strings: String concat(String str) The concat() method does not modify the String object on which it is invoked, as String objects are immutable. Instead the concat() method returns a reference to a brand new String object: String billboard = "Just"; billboard.concat(" lost in space."); // (1) Returned reference value not stored. System.out.println(billboard); // (2) "Just" billboard = billboard.concat(" grooving").concat(" in heap."); // (3) Chaining. System.out.println(billboard); // (4) "Just grooving in heap." At (1), the reference value of the String object returned by the method concat() is not stored. This String object becomes inaccessible after (1). We see that the reference billboard still denotes the string literal "Just" at (2). At (3), two method calls to the concat() method are chained. The first call returns a reference value to a new String object whose content is "Just grooving". The second method call is invoked on this String object using the reference value that was returned in the first method call.
The second call results in yet another String object whose content is "Just grooving in heap." The reference value of this String object is assigned to the reference billboard. Because String objects are immutable, the creation of the temporary String object with the content "Just grooving" is inevitable at (3). The compiler uses a string buffer to avoid this overhead of temporary String objects when applying the string concatenation operator (p. 424). A simple way to convert any primitive value to its string representation is by concatenating it with the empty string (""), using the string concatenation operator (+) (see also (6c) in Figure 10.2): String strRepresentation = "" + 2003; // "2003" Some more examples of string concatenation follow: String motto = new String("Program once"); // (1) motto += ", execute everywhere."; // (2) motto = motto.concat(" Don't bet on it!"); // (3) Note that a new String object is assigned to the reference motto each time in the assignment at (1), (2), and (3). The String object with the contents "Program once" becomes inaccessible after the assignment at (2). The String object with the contents "Program once, execute everywhere." becomes inaccessible after (3). The reference motto denotes the String object with the following contents after execution of the assignment at (3): "Program once, execute everywhere. Don't bet on it!" The following overloaded methods can be used to find the index of a character, or the start index of a substring in a string. These methods search forward toward the end of the string. In other words, the index of the first occurrence of the character or substring is found. If the search is unsuccessful, the value -1 is returned. int indexOf(int ch) Finds the index of the first occurrence of the argument character in a string. int indexOf(int ch, int fromIndex) Finds the index of the first occurrence of the argument character in a string, starting at the index specified in the second argument.
If the index argument is negative, the index is assumed to be 0. If the index argument is greater than the length of the string, it is effectively considered to be equal to the length of the string, returning the value -1. The String class also defines a set of methods that search for a character or a substring, but the search is backward toward the start of the string. In other words, the index of the last occurrence of the character or substring is found. int lastIndexOf(int ch) int lastIndexOf(int ch, int fromIndex) int lastIndexOf(String str) int lastIndexOf(String str, int fromIndex) The following method can be used to create a string in which all occurrences of a character in a string have been replaced with another character: String replace(char oldChar, char newChar) Examples of search methods: String funStr = "Java Jives"; // 0123456789 String newStr = funStr.replace('J', 'W'); // "Wava Wives" int jInd1a = funStr.indexOf('J'); // 0 int jInd1b = funStr.indexOf('J', 1); // 5 int jInd2a = funStr.lastIndexOf('J'); // 5 int jInd2b = funStr.lastIndexOf('J', 4); // 0 String banner = "One man, One vote"; // 01234567890123456 int subInd1a = banner.indexOf("One"); // 0 int subInd1b = banner.indexOf("One", 3); // 9 int subInd2a = banner.lastIndexOf("One"); // 9 int subInd2b = banner.lastIndexOf("One", 10); // 9 int subInd2c = banner.lastIndexOf("One", 8); // 0 int subInd2d = banner.lastIndexOf("One", 2); // 0 String trim() This method can be used to create a string where white space (in fact all characters with values less than or equal to the space character '\u0020') from the front (leading) and the end (trailing) of a string has been removed. String substring(int startIndex) String substring(int startIndex, int endIndex) The String class provides these overloaded methods to extract substrings from a string. A new String object containing the substring is created and returned.
The first method extracts the string that starts at the given index startIndex and extends to the end of the string. The end of the substring can be specified by using a second argument endIndex that is the index of the first character after the substring, that is, the last character in the substring is at index endIndex-1. If the index value is not valid, a StringIndexOutOfBoundsException is thrown. Examples of extracting substrings: String utopia = "\t\n Java Nation \n\t "; utopia = utopia.trim(); // "Java Nation" utopia = utopia.substring(5); // "Nation" String radioactive = utopia.substring(3,6); // "ion" The String class overrides the toString() method in the Object class and returns the String object itself: String toString() The String class also defines a set of static overloaded valueOf() methods to convert objects and primitive values into strings. static String valueOf(Object obj) static String valueOf(char[] character) static String valueOf(boolean b) static String valueOf(char c) All these methods return a string representing the given parameter value. A call to the method with the parameter obj is equivalent to obj.toString(). The boolean values true and false are converted into the strings "true" and "false". The char parameter is converted to a string consisting of a single character. static String valueOf(int i) static String valueOf(long l) static String valueOf(float f) static String valueOf(double d) The static valueOf() method that accepts a primitive value as argument is equivalent to the static toString() method in the corresponding wrapper class for each of the primitive data types (see also (6a) and (6b) in Figure 10.2 on p. 393). Note that there are no valueOf() methods that accept a byte or a short. Examples of string conversions: String anonStr = String.valueOf("Make me a string."); // "Make me a string." 
String charStr = String.valueOf(new char[] {'a', 'h', 'a'});// "aha" String boolTrue = String.valueOf(true); // "true" String doubleStr = String.valueOf(Math.PI); // "3.141592653589793" Other miscellaneous methods exist for reading the string characters into an array of characters (toCharArray()), converting the string into an array of bytes (getBytes()), and searching for prefixes (startsWith()) and suffixes (endsWith()) of the string. The method hashCode() can be used to compute a hash value based on the characters in the string.
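The miscellaneous methods listed above can be demonstrated with a short sketch (the class name and sample values are our own; getBytes() uses the platform's default character encoding, which yields one byte per character for this ASCII-only string on typical platforms):

```java
public class MiscStringMethods {
    public static void main(String[] args) {
        String str = "Java Nation";

        // Read the string characters into an array of characters.
        char[] chars = str.toCharArray();           // 11 characters

        // Convert the string into an array of bytes (default encoding).
        byte[] bytes = str.getBytes();

        // Search for prefixes and suffixes of the string.
        boolean hasPrefix = str.startsWith("Java"); // true
        boolean hasSuffix = str.endsWith("tion");   // true

        // Compute a hash value based on the characters in the string.
        // Equal strings always produce equal hash codes.
        boolean sameHash = str.hashCode() == "Java Nation".hashCode(); // true

        System.out.println(chars.length + " " + hasPrefix + " " + hasSuffix + " " + sameHash);
        System.out.println("Byte count: " + bytes.length);
    }
}
```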
OpenCloseMixin

Purpose: tracks the opened/closed state of an element with open/close semantics, such as a Dialog, Drawer, Popup, or Toast. It allows for the possibility that the open/close operations are asynchronous.

This mixin works in the middle of the Elix render pipeline: events → methods → setState → updates → render DOM → post-render

Expects the component to provide:
- setState method, usually supplied by ReactiveMixin.

Provides the component with:
- state.opened member that is true if the element is open, false if closed.
- opened property that reflects the internal state of state.opened.
- closed property that is the inverse of opened.
- toggle method that toggles the opened/closed state.
- open method that opens the element.
- close method that closes the element.
- state.openCloseEffects member that defaults to true if this mixin is used in combination with TransitionEffectMixin.
- closeFinished property. By default, this is the same value as the closed property. If state.openCloseEffects (see above) is true, then closeFinished is true only after the close effect has finished (reached the after phase). See asynchronous effects, below.

Usage

import OpenCloseMixin from 'elix/src/OpenCloseMixin.js';
class MyElement extends OpenCloseMixin(HTMLElement) {}

You can use OpenCloseMixin to provide any component that conceptually opens and closes with a consistent open/close API. This can make your component easier for developers to use. Using OpenCloseMixin makes it easy to take advantage of related Elix mixins, including DialogModalityMixin, OverlayMixin, PopupModalityMixin, and TransitionEffectMixin. However, OpenCloseMixin can be used for components which aren't overlays, such as components that expand or open in place.

Example

An overlay is one type of component that can be opened and closed. When open, overlays appear on top of other elements.
A component might also choose to interpret the semantics of opening and closing as expanding or collapsing in place, as in ExpandablePanel. Use of OpenCloseMixin in these situations allows for a consistent open/close API.

Asynchronous open/close effects

Many components that open and close do so with asynchronous transitional effects: fading in, sliding in, etc., when opened, then fading out or sliding out when closed. Such effects can add considerable complexity to an element's state, making it hard to define exactly when a component has "opened" or "closed". To support such effects, OpenCloseMixin interoperates with TransitionEffectMixin. When used in combination, OpenCloseMixin will:

- Assume an element is completely closed by default. The default state.effect is "close", and the default state.effectPhase is "after".
- Define state.openCloseEffects to be true by default, i.e., that the element wants to use transition effects for open/close operations.
- Trigger the "open" effect when the element is opened by any means (the open or toggle methods, or setting opened to true). This sets state.effect to "open" and the phase to "before". TransitionEffectMixin will then manage the remaining phases of the effect.
- Conversely, trigger the "close" effect when the element is closed by any means.

API

Used by classes AlertDialog, Dialog, Drawer, DropdownList, ExpandablePanel, MenuButton, Overlay, Popup, PopupSource, and Toast.

closed property: True if the element is currently closed. Type: boolean

closed event: Raised when the component closes.

open() method: Open the element (if not already opened).

opened property: True if the element is currently open. Type: boolean

opened event: Raised when the component opens.

opened-changed event: Raised when the opened/closed state of the component changes.

toggle(opened) method: Toggle the open/close state of the element.
- opened (boolean): true if the element should be opened, false if closed.
This document is the initial draft of the specification of the XFDL facility. It is intended for review and comment and is subject to change. This document is a NOTE made available by the W3C for discussion only. This indicates no endorsement of its content, nor that the Consortium has, is or will be allocating resources to the issues addressed by the NOTE. This document is a Submission to W3C from UWI Unisoft Wares Inc. Please see Acknowledged Submissions to W3C regarding its disposition. This document describes a class of XML documents called Extensible Forms Description Language (XFDL) Forms and partially describes the behavior of computer programs that process them. An XFDL processor is a software program that reads, processes, and writes XFDL forms. Processing may include such tasks as GUI rendering, data extraction, or modification. From 1993 to 1998, UWI.Com developed the Universal Forms Description Language (UFDL). XFDL is the result of developing an XML syntax for the UFDL, thereby permitting the expression of powerful, complex forms in a syntax that promotes application interoperability and adherence to worldwide Internet standards. The current design goals of XFDL are to create a high-level computer language that This version of the XFDL specification may be distributed freely, as long as all text and notices remain intact. [1] Boyer, J. Lexical and Syntactic Specification for the Universal Forms Description Language (UFDL) Version 4.0. UWI.Com The Internet Forms Company. 6 SEP 1997. [2] Bray, T., Paoli, J. & Sperberg-McQueen, C.M. (Eds.) Extensible Markup Language (XML) 1.0. W3C Recommendation. 10 FEB 1998. [3] Gordon, M. (Ed.) UFDL v4.0.1 Specification. UWI.Com The Internet Forms Company. 1993-1998. Terms are defined in Section 1.2 of the XML specification [2]. To serve its purpose, XFDL requires comprehensive presentation control and data typing machinery. This document describes a set of elements and attributes that meet these requirements.
It may be the case that the presentation controls can be replaced by a W3C-specified stylesheet facility; however, it is not clear which one should be used for this purpose. In this specification, all elements and attributes that are candidates for replacement by a standardized style mechanism are marked [Display]. Similarly, it is almost certainly the case that XFDL's data typing controls can and should be replaced by a W3C-standardized set of data type specifiers when one becomes available. In this specification, all elements and attributes that are candidates for replacement by standardized data type specifiers are marked [Types]. An XFDL form is an XML document whose root element type is XFDL. The root element has a required version attribute, which is a numeric dotted triplet consisting of the major, minor, and maintenance versions of the XFDL to which the element content conforms. The XFDL element may also have a sid attribute; the sid attribute gives a scope identifier, which is discussed in Section 2.5. Here are the lexical constraints of the values of the version and sid attributes: An XFDL element contains zero or more option elements followed by one or more page elements. The option elements that occur before the first page are referred to as form global options. They typically contain information applicable to the whole form or default settings for options appearing in the element content of pages. A page element contains zero or more page global options followed by zero or more item elements. Page global options typically contain information applicable to the whole page or default settings for options appearing within the element content of items, and they take precedence over form global options. A page is also required to have a sid attribute. The intention of using multiple pages in a form is to show the user one page at a time.
Each page should contain items that describe GUI widgets which allow users to switch to different pages without contacting a server program. XFDL allows the page switching items to be defined in the form so the form developer can add computations that control the flow of pages based on context. An item is a single object in a page of a form. Some items represent GUI widgets, such as buttons, checkboxes, popup lists, and text fields. Other items are used to carry information such as an enclosed word processing document, a digital signature, daemon client-side actions, or application-specific job descriptions (such as workflow or ODBC requests). An item can contain zero or more option elements. The options define the characteristics of the item. An item with zero options is completely defined by the option defaults. Each item is required to have a sid attribute. The parameter entity reference to "%item;" could be defined partially as The details of each type of item listed in rule 10 are discussed in Section 5 and summarized here for convenience. This is only a partial list of items. XFDL allows application-defined items in the form. Simple, static application-specific information can be expressed using XML processing instructions, but many server side applications for workflow and ODBC require complex instructions that can include the use of the XFDL compute system to collect information from around the form. Options can appear as form globals, page globals, or as the contents of items. An option defines a named attribute of an item, page, or form. The parameter entity reference to "%option;" could partially be defined as follows: Again, the definition is partial because XFDL also supports application-defined options. 
Typically, application-defined options occur in application-defined items, but they are also sometimes used in XFDL-defined items to store intermediate results of complex computations, thereby allowing the form developer to arbitrarily break down a problem into manageable pieces. The options are fully discussed in Section 6 and summarized here:

If the content expresses a compute, then the content must be present with the value:

An example of an option that uses array element depth is bgcolor. XFDL does not often assign names to the array elements of such options (bgcolor, itemlocation, and format, for example), so the default tag name of ae is used. Since the form developer can assign names to array elements, the parameter entity reference to "%ae;" can only be partially defined as follows:

In XFDL, the mimedata option is used to store base-64 encoded binary data such as digital signatures, images, enclosed word processing or spreadsheet documents, etc. Base-64 encoding uses no characters that are illegal in character data, so mimedata content can be stored in a mimedata option element as simple character data. The only caveat is that since binary data tends to be long, XFDL processors are expected to "pretty print" the lines of base 64 using tabs, spaces and linefeeds such that the content appears to be indented with respect to the mimedata tags in text editors that wrap lines after 80 characters. However, since XML preserves whitespace in element content, base-64 decoders for XFDL must be able to ignore an arbitrary amount of whitespace in the data.

In XFDL, each option element is defined to be uniquely identified within the scope of the surrounding item element by its XML tag, which is why options (and array elements) do not require a sid attribute. In XFDL, there are two kinds of array elements, unnamed and named. An unnamed array element is surrounded by <ae> and </ae> tags. A named array element has its XML tag as its sid; consequently, a named array element cannot use the tag <ae>.
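To make the distinction between unnamed and named array elements concrete, a sketch follows. The particular option settings shown are illustrative assumptions:

```xml
<!-- Unnamed array elements: each uses the default ae tag -->
<bgcolor content="array">
   <ae>128</ae>
   <ae>128</ae>
   <ae>128</ae>
</bgcolor>

<!-- A named array element: the tag "range" is its sid, so it must be
     unique within the surrounding format option -->
<format content="array">
   <ae>dollar</ae>
   <range content="array">
      <ae>0</ae>
      <ae>100</ae>
   </range>
</format>
```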
Further, since the XML tag of a named array element is a sid, the XML tag of a named array element must be unique within its parent element. The lexical structure of a sid differs from the XML language rule Name, which is used to define the lexical structure of attribute values of type ID. The dash, period, and colon are not permitted in a sid due to conflicts with their use as the subtraction symbol, relative scope membership operator, and ternary conditional operator (?:), respectively. The lexical structure of a sid is not designed as a replacement for the XML ID feature, which assigns a globally unique name to an element.

XFDL processors are expected to preserve the XML prolog and epilog, the comments within the XFDL element, and all element attributes appearing in start tags but not specifically defined by XFDL. The attributes must be associated with their respective start tags, and the comments must be associated with the respective pages, items, options, or array elements to which they apply. The XFDL processor must be able to reproduce these language components for digital signatures and for saving or transmitting the form.

An XFDL compute can appear between <compute> and </compute>. This section defines the default infix notation for expressing computation expressions. As other appropriate XML languages are approved, they could be used in the content of a compute by defining a format attribute for the compute start tag. The default should be "infix", but the enumeration could be extended to include the names of supported formats. This version of XFDL only defines the default infix notation:

Most XFDL processors only need to preserve the compute as character data, but some applications must parse the text of computes, constructing a list of expression tree data structures to represent all computes in a form. This is necessary if the application must change the content of options or suboptions that are referred to by a compute.
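An option carrying a compute can be sketched as follows. The item names Quantity and Price and the cval shown are illustrative assumptions; the cval element holds the current value produced by the compute:

```xml
<value content="compute">
   <cval>42.00</cval>
   <compute>
      Quantity.value * Price.value
   </compute>
</value>
```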
This section describes the syntax and operation of computes. Except for some minor modifications, the language rules in this section appear in Figure 2 of [1] as rules 6 to 19. XFDL computes automatically support the notion of free form text found in most programming languages. With the exception of the contents of quoted strings (see Section 3.4), unlimited whitespace is permitted. Adding S? before and after every lexical token in every BNF rule in this section would unnecessarily obfuscate the presentation of what is essentially the standard BNF for mathematical and conditional expressions. Therefore, it is stated once here for the reader that all whitespace appearing outside of quoted strings is ignored. An XFDL compute can be either a mathematical or conditional expression. A conditional expression has three parts separated by the ternary ?: operator. The first part is a Decision, which yields a boolean result. The consequences for a true and false boolean result recurse to the definition of Compute, permitting arbitrary nesting of decision logic. The decision logic can apply logical-or (||), logical-and (&&), and logical negation (!) to the results of logical comparisons. The logical operators are left associative, and the comparators cannot be chained (e.g. a < b < c is illegal). The order of operations gives greatest precedence to negation, then logical-and, and least precedence to logical-or. To override this, parentheses can be used (e.g., the parentheses in (a<b || c<d) && e!=f cause the logical-or to occur first, and no parentheses are required if the logical-and should be performed first). Note that since Decision is capable of performing comparisons on the results of mathematical expressions, a Decision can ultimately start with an Expr. Therefore, a simple LR-type parser is required by XFDL computes. 
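As an illustration of this grammar, a compute combining parenthesized decision logic with the ternary operator might be written as follows. The references Qty, Member, and Paid are hypothetical, and note that & must be escaped as &amp;amp; in XML element content:

```xml
<category content="compute">
   <cval></cval>
   <compute>
      (Qty.value > "10" || Member.value == "on") &amp;&amp; Paid.value == "on"
         ? "discounted" : "regular"
   </compute>
</category>
```

Without the parentheses, the logical-and would bind more tightly than the logical-or, changing the result of the Decision.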
A mathematical expression, denoted Expr, can include addition, subtraction, string concatenation, multiplication, division, integer modulus, unary minus, and exponentiation. All mathematical operators are left associative except unary minus and exponentiation. Further, proper order of operations is observed. Parentheses can be used to override the order of operations as shown in the non-terminal Value (defined later). A value can be a compute in parentheses, which provides an override for the order of operations. A value can be a quoted string (Section 3.4). A value can be an XFDL reference to an element whose text data should be obtained when the compute is evaluated (Section 3.5). Finally, a value can be obtained as the result of a function call (Section 3.6). The rules for recognizing a quoted string are quite difficult to express in BNF, but they are the usual rules that many high-level programming languages use to process quoted strings. The language rules for computes permit the recognition of a quoted string token using the italicized token name qstring. An XFDL quoted string must be surrounded by double quotes. Whitespace before the open quote and after the close quote is ignored. Double quotes can be included by escaping them with a backslash (\). The escape sequences \n and \t result in a newline and a tab, respectively, in the quoted string content. Since the backslash is the escaping character, it must also be escaped to be inserted into the string content (e.g., \\). Finally, any byte value except 0 can be inserted into the quoted string content using \x followed by a two-digit hexadecimal number. Quoted strings can also be of arbitrary length in XFDL. To increase human readability, XFDL supports multiline string continuation. If the next non-whitespace character appearing after a closing double quote is an open double quote, then the closing quote, whitespace, and open quote are discarded from the input stream. 
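A compute illustrating these quoting rules might look like the following sketch; the two adjacent quoted segments are joined into a single string by the multiline continuation rule:

```xml
<message content="compute">
   <cval></cval>
   <compute>
      "First line\nSecond line with a \"quote\" and a backslash \\ "
         "and this text continues the same string."
   </compute>
</message>
```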
Because each XFDL element's scope identifier (sid) is unique only within the surrounding parent element, XFDL can support relative referencing. For example, in an element identified as Field1, if a computation includes the reference Field2.value, this means "obtain the character data of the value option in the item Field2 on the same page." If Field2 is on a separate page, say Page2, then a compute in Field1 can still access its value using Page2.Field2.value.

XFDL references can also grow arbitrarily in the opposite direction to describe unbounded array element depth. This is accomplished by introducing a second scoping operator, the square brackets, to describe depth below the option level. For example, given the following piece of XFDL for a format option:

<format content="array">
   <ae>dollar</ae>
   <range content="array">
      <ae content="compute">
         <cval>35</cval>
         <compute> Bill.value * "0.05" </compute>
      </ae>
      <ae content="compute">
         <cval>700</cval>
         <compute> Bill.value </compute>
      </ae>
   </range>
</format>

the reference format[0] yields dollar and the reference format[range][1] yields 700. If an array element is not named, then the zero-based numeric position of the array element is used in the square brackets. If the array element is named, then the scope identifier can be used in the square brackets. However, the numeric position can also be used, e.g. format[1][1] also yields 700.

The above description covers static references. Dynamic references are a second important component of the XFDL referencing model. The left associative operator ->, known as the indirect membership operator, expects to receive a static or dynamic reference as a left operand. The run-time value of the static or dynamic reference must conform to the syntax of the ItemRef non-terminal. The right operand of the indirect membership operator is an option reference. At run-time, the left operand is evaluated, yielding a static item reference to an XML element representing an XFDL item.
This run-time item reference is combined with the right operand of the indirect membership operator to yield an option or array element whose simple data is the result of the evaluation. The simplest example of a dynamic reference is retrieving the text of the selected cell in an XFDL listbox or popup. As is discussed in Section 6, the value option of a list or popup is equal to the item reference of the cell item for the selected cell. Thus, given an example popup that offers a selection of days of the week, the text for the day of week selected by the user is obtained by Popup_DayOfWeek.value->value.

Finally, note that XFDL references support forward referencing. An XFDL reference can refer to any option or array element.

Function calls run code that may be external to the XFDL form definition. A set of predefined functions for doing standard mathematical operations, string manipulations, etc. is given in Section 7. The LibName allows functions to be grouped into separate namespaces, but predefined functions do not require a LibName.

This section presents a high-level algorithm describing how an XFDL Compute System must run the computes in a form. When a form starts, it must run all computes to provide content for the current value tags. This is accomplished by passing nil to RunXFDLComputes() as the change list. Each time an event, such as user input or an API call, causes a change to the simple data content or current value of an option or array element, RunXFDLComputes() is called with a change list containing only the element that changed. If a string of simple data is assigned to an element via a public API call (e.g. as the result of user input or server-side processing), then the compute and its current value are destroyed.

The algorithm refers to the computes using one-based indexing (even though the computes may not be represented by an array in a given implementation). The symbol n denotes the number of computes in the form.
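The Popup_DayOfWeek example can be sketched as follows (the cell sids and the label item are illustrative assumptions). The label's compute dereferences the popup's value, which holds the item reference of the selected cell:

```xml
<popup sid="Popup_DayOfWeek">
   <group>days</group>
</popup>

<cell sid="MonCell">
   <group>days</group>
   <type>select</type>
   <value>Monday</value>
</cell>

<cell sid="TueCell">
   <group>days</group>
   <type>select</type>
   <value>Tuesday</value>
</cell>

<label sid="ChosenDay">
   <value content="compute">
      <cval></cval>
      <compute> Popup_DayOfWeek.value -> value </compute>
   </value>
</label>
```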
For each compute CI, the expression tree is checked to see whether it contains a static or dynamic reference to any element EJ in the change list. If so, then CI is dependent on EJ and must be evaluated. If the result of eval(CI) does not equal the current content of the element parent FP of CI, then FP is added to the new change list and the current value (the cval content) of FP is set equal to the result of eval(CI) using a non-public API call such that the compute is not destroyed.

The algorithm does not show the semantics for dealing with circular references. Circular references are defined to be invalid XFDL. The computational output that results from running a circular chain of references is undefined. However, the behavioral result is defined. An XFDL processor should terminate in a finite amount of time upon encountering a chain of circular references.

There is one exceptional case that the RunXFDLComputes algorithm is designed to permit. A compute that contains a self-reference is, in a graph theoretic sense, a circular reference. However, XFDL processors must support computes that use conditional logic to terminate computations after one iteration. Here is an example:

<user_email content="compute">
   <cval></cval>
   <compute>
      user_email == "" ? prefs.p1.ReturnAddress.value : user_email
   </compute>
</user_email>

In the first iteration, the current value of user_email is empty, so the compute runs and changes the current value to be equal to the content of a particular value option in another form called prefs. The change causes user_email to enter the NewChangeList. During the following iteration of the loop, the compute runs again, but the current value of user_email does not change, so user_email does not enter the new change list.

The evaluation function must perform run-time type identification on operands. The only permitted operation on strings is addition (by + or +.). Dates can only be added and subtracted.
Numeric addition should only be performed if both operands are numeric.

The first example in Figure 1 is designed to show a whole XFDL form. After the XML prolog, the root XFDL element declares a version of 4.1.0. There is a form global option stating that all pages should have a medium gray background color given by the RGB triplet (128, 128, 128). However, the page global background color is set to RGB (192, 192, 192), light gray. Since page globals override form globals, the page will have a light gray background. The background color option uses element depth to express an array. This is not needed if the color is given by name, but it is required if the background color is given as an RGB triplet. XFDL options and array elements are consistent in their use of element depth.

The page global options also contain a label option that declares the caption bar text for the window used to display the form page. Note that 'label' is one of those keywords that is used both as an item type and an option scope identifier. Widgets such as fields and comboboxes can have text labels associated with them, but image and text labels can also be placed anywhere on the form, so a separate label item is required in the language. The XFDL parser distinguishes a global option from an item based on the absence or presence, respectively, of the sid attribute.

After the global options, the page contains three fields: the first two collect side lengths for a right triangle; the third computes the length of the hypotenuse of the right triangle with the given side lengths. An editstate of readonly is given to prevent the user from accidentally destroying the compute by entering a value for field C.

The second example in Figure 2 does not include the XML prolog nor the declarations for the root XFDL element and page. The example only shows two items. It is designed to demonstrate deeper element depth and more computes than the form shown in Figure 1.
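Figure 1 itself is not reproduced in this text. A sketch of what its third field might look like follows; the field names A, B, and C and the use of ^ "0.5" as exponentiation to compute the square root are assumptions about the figure, not details stated in the surrounding prose:

```xml
<field sid="C">
   <label>Hypotenuse</label>
   <editstate>readonly</editstate>
   <value content="compute">
      <cval></cval>
      <compute>
         (A.value * A.value + B.value * B.value) ^ "0.5"
      </compute>
   </value>
</field>
```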
The first item is a field that purports to ask the user what portion of a bill, such as a credit card bill, will be paid. The format option contains a number of array elements. The first of them contains the word 'dollar' and represents the type of user input that will be permitted in the field. In the typical format option (see Section 6), the input type is not named and would therefore appear between the <ae> and </ae> tags. However, the form developer can assign names to array elements that are not required by the XFDL specification to have a name. The second and third array elements in the format option are unnamed. They provide additional information about the format, such as the fact that user input is mandatory (i.e., emptiness is not a permitted response), and that a dollar sign should be prepended to the user's input.

The last array element declared in the format option in Figure 2 is named 'range', and it contains an array of two elements that define the lower and upper bounds of the user's input. For a credit card bill, the range of payment is typically bounded above by what the cardholder owes and bounded below by some small percentage of the current balance. Thus, the format option shows the possibility of unlimited array element depth as well as the inclusion of computes deep within the element hierarchy. XFDL offers what is known as a fine-grain compute system.

The second item element in Figure 2 is a label that demonstrates a longer compute expression, including several array element references. Note that at the end of the compute, the 700 is concatenated to the end of the string rather than added to the 35. Because addition is left associative, the entire portion of the string prior to the 700 has already been constructed. Therefore, due to run-time type identification, the last + operator performs string concatenation.

Items are the basic elements of a page.
The syntax of an item definition is as follows:

The sid attribute uniquely identifies an item. Every item tag in a page must be unique. The ItemType element identifies the type of item to create. (For example, <field> defines the item as a field.)

This section contains information about XFDL-defined item types and the options available for each. Note: Defining an option more than once in an item's definition is not allowed. See the section "6. Details on Options and Array Elements" for descriptions of each option type.

Specifies form-initiated actions that execute automatically. The actions can be any of the following types: link, replace, submit, done, display, print, cancel. See the type description in the 'Options' section for a description of each of these actions. Action items can be defined to occur only once or repeat at specified time intervals. They can be defined to occur after the page opens but before the page appears. See the section on the delay option for information on timing options. Action items can trigger either background actions or actions involving user interaction. In fact, if the form contains only hidden items such as action items, then the whole form operates in the background. Such forms are called daemon forms.

Options: activated, active, data, datagroup, delay, transmitformat, type, url

Repeating automatic actions is one method of creating a sparse-stated connection. It allows the form to indicate periodically to a server application that it is still running. Use the delay option to specify repetition. Actions, by the form definition rules, reside on a page; therefore, actions occur only when the page is open, and repeating actions stop when the page closes. Actions defined to occur before the page displays occur each time the page opens.

The following action will send a status message to the server. The transaction happens automatically every 10 minutes (600 seconds).
<action sid="sendStatus_action">
   <delay content="array">
      <ae>repeat</ae>
      <ae>600</ae>
   </delay>
   <type>submit</type>
   <url content="array">
      <ae></ae>
   </url>
</action>

Specifies a square box on the form. Other items may be positioned on top of boxes (using itemlocation). The purpose of box items is simply to add visual variety to the form.

Options: bgcolor, borderwidth, fontinfo, itemlocation, size

To make the box more visible, assign a background color that differs from the page background color (the default). When setting the size option of a box, the height and width of the box will be based on the average character size for the font in use (set with the fontinfo option).

The following example shows a typical box description. The box is 25 characters wide and 4 characters high. The background color is blue.

<box sid="blue_box">
   <bgcolor content="array">
      <ae>blue</ae>
   </bgcolor>
   <size content="array">
      <ae>25</ae>
      <ae>4</ae>
   </size>
</box>

Specifies a click button that performs an action when selected. Buttons can request data from a web server, submit or cancel the form, sign the form, save it to disk, or enclose external files.

Options: activated, active, bgcolor, borderwidth, coordinates, data, datagroup, focused, fontcolor, fontinfo, format, help, image, itemlocation, justify, mouseover, next, signature, signdatagroups, signer, signformat, signgroups, signitemrefs, signitems, signoptionrefs, signoptions, size, transmitformat, type, url, value

The button's label is defined by the value option. If no value option exists, the default label is blank. When setting the size option of a button, the height and width of the button will be based on the average character size for the font in use (set with the fontinfo option). If a button's image option points to a data item that dynamically changes its mimedata (but not its item tag), then the button will update the image it displays. For information on how to update an image by enclosing a new one, see the data option description.
The format option is available in buttons in order to force users to sign forms before submitting them. There are two steps to making a signature button mandatory:

1. Assign the following elements to the format option: string and mandatory.
2. Set the button's value equal to the button's signer option setting.

Setting the format to mandatory specifies that the button must have a value setting that is not empty before the user submits the form. Equating the value to the setting of the signer option ensures that the only way a button's value is set is if somebody uses it to sign the form. (The signer option stores the identity of the person who signed the form using the button.)

Behavior of Buttons in Digital Signatures

A digital signature button is the means by which the user can digitally sign a form. To make a button a signature button, set its type to signature. A signature button can be set up to sign the whole form or just part of it by setting up filters on the signature, using the signdatagroups, signgroups, signitemrefs, signitems, signoptionrefs, and signoptions options.

Important: At a minimum, the triggeritem and coordinates options should always be filtered out. These options change when a submission is triggered or when a user clicks an image button, respectively. Filtering out parts of the form that a subsequent user will change, including subsequent signatures and signature buttons and custom options that might change (like odbc_rowcount), should also be taken into consideration.

Signature buttons allow users to do the following:

- Sign the form or portion of the form the button specifies.
- Delete their signatures (a signature can be deleted only by the user whose signature it is, and only if the signature is currently valid and not signed by some other signature).
- View the signature and view the XFDL text of what the signature applies to.

All option references, calculations, and other formulas in any signed portion of a form are frozen once they have been signed.
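Combining the two steps above, a mandatory signature button might be sketched as follows. The sid and the self-reference sign_button.signer used to equate the value to the signer option are illustrative assumptions:

```xml
<button sid="sign_button">
   <type>signature</type>
   <!-- step 1: make the value mandatory -->
   <format content="array">
      <ae>string</ae>
      <ae>mandatory</ae>
   </format>
   <!-- step 2: the value can only become non-empty by signing -->
   <value content="compute">
      <cval></cval>
      <compute> sign_button.signer </compute>
   </value>
</button>
```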
Their values will be frozen at the settings they contained at the moment the signature was created. If the user deletes the digital signature, however, then the formulas will become unfrozen, and will change dynamically as normal. The usual options for other buttons (i.e. size, image, value) can also be used with signature buttons.

Buttons that trigger form processing requests must have a type option setting of submit or done. The definition for such a button might look like this:

<button sid="submit_button">
   <value>Process Form</value>
   <fontinfo content="array">
      <ae>Helvetica</ae>
      <ae>18</ae>
      <ae>bold</ae>
      <ae>italic</ae>
   </fontinfo>
   <type>done</type>
   <url content="array">
      <ae></ae>
   </url>
</button>

The following button encloses an external file in the form. The action to enclose a file is enclose. The datagroup option identifies the list of datagroups, or folders, in which the user can store the enclosed file. An enclose button might take the following form:

<button sid="enclose_button">
   <value>Enclose File</value>
   <fontinfo content="array">
      <ae>Helvetica</ae>
      <ae>18</ae>
      <ae>bold</ae>
      <ae>italic</ae>
   </fontinfo>
   <type>enclose</type>
   <datagroup content="array">
      <ae>Images_Asia</ae>
      <ae>Images_Eur</ae>
      <ae>Images_SAmer</ae>
   </datagroup>
</button>

This button will allow users to enclose files into one of three datagroups (folders): Images_Asia, Images_Eur, Images_SAmer.

Populates combobox, list and popup items. A cell can belong to multiple comboboxes, lists and popups. See the combobox, list and popup item sections for information on associating cells with these items. Cells fall into two categories according to their behavior.

Options: activated, active, data, datagroup, group, label, transmitformat, type, url, value

The following example shows a popup with three cells. To learn how to get the value of the user's selection, see Usage Notes below.
<popup sid="CountryPopup">
   <label>Country</label>
   <group>country</group>
   <format content="array">
      <ae>string</ae>
      <ae>mandatory</ae>
   </format>
</popup>

<cell sid="albCell">
   <value>Albania</value>
   <group>country</group>
   <type>select</type>
</cell>

<cell sid="algCell">
   <value>Algeria</value>
   <group>country</group>
   <type>select</type>
</cell>

<cell sid="banCell">
   <value>Bangladesh</value>
   <group>country</group>
   <type>select</type>
</cell>

Use the type option to establish a cell's behavior. Select cells have a type of select (the default type). Cells can have both value and label options. These options affect the form differently depending on whether the cell is linked to a combobox, a popup, or a list. In general, the label of the cell will be displayed as a choice, while the value of the cell will be displayed if that cell is selected. For more information, refer to the appropriate item type. Cells take their color and font information from the combobox, list and popup items with which they are associated. In this way, a cell's appearance can vary according to the list the user is viewing.

Provides a simple check box to record a selected or not selected answer from a user. A selected check box appears filled while a deselected box appears empty. The exact appearance of the check box is platform dependent, but the shape is rectangular. The check box appears as a normal check box for the users of each platform.

Options: active, bgcolor, editstate, focused, fontcolor, fontinfo, help, itemlocation, label, labelbgcolor, labelborderwidth, labelfontcolor, labelfontinfo, mouseover, next, size, value

The value option setting indicates the user's answer. If the user selects or checks the check box, the value option contains on; otherwise it contains off. The default value is off. Check boxes do not belong to groups like radio buttons; each check box may be turned on or off independently of the others. The label option defines the label for the check box.
The label appears above the check box and aligned with the box's left edge. There is no default label. When setting the size option of a check box, the height and width of the bounding box will be based on the average character size for the font in use (set with the fontinfo option). The fontcolor option determines the color of the check box fill pattern (default is red).

The value option setting in this check box is on, so the check box will appear selected when it displays. The item's label is "Active Health Plan", and the label will display in a Times 14 Bold font colored blue.

<check sid="healthPlan_check">
   <value>on</value>
   <label>Active Health Plan</label>
   <labelfontinfo content="array">
      <ae>Times</ae>
      <ae>14</ae>
      <ae>bold</ae>
   </labelfontinfo>
   <labelfontcolor content="array">
      <ae>blue</ae>
   </labelfontcolor>
</check>

Comboboxes act like a hybrid of a field and a popup. Unopened, a combobox with a label occupies the same space as two labels, and a combobox without a label occupies the same space as a single label. After a user chooses a cell, the combobox closes (that is, returns to its unopened state). If none of the cells are appropriate, the user can type other information into the combobox. When information is typed in, it is stored in the value option of the combobox. When a cell is selected, the value option stores the value of that cell. A combobox's label appears above the combobox item.

Options: activated, active, bgcolor, editstate, focused, fontcolor, fontinfo, format, group, help, itemlocation, label, labelbgcolor, labelborderwidth, labelfontcolor, labelfontinfo, mouseover, next, previous, size, value

Place cells in a combobox by creating a group for the combobox and assigning cells to the group. Create a group using the group option in the combobox. Combobox, popup, and list items with the same group reference display the same group of cells. When first viewed, a combobox will display its value.
If no value is set, the combobox will be empty. The value option will contain one of the following:

- The value of the most recently chosen selection.
- Nothing, if an action was most recently chosen.
- The text entered, if something was typed in most recently.

When setting the size option of a combobox, the height and width of the combobox will be based on the average character size for the font in use (set with the fontinfo option). The label option sets the text displayed above the item, as with a field.

When setting the editstate option, the combobox will behave in the following manner:

- A readwrite setting will cause it to function normally.
- A readonly setting will cause the combobox to refuse all input, although it will function normally otherwise and formulas will still be able to change the value.
- A writeonly setting will cause the combobox to use "password" characters in its field contents, but the list of choices will still be displayed in plain text.

When a format is applied to a combobox, the formatting will be applied to the value of each cell linked to the combobox.

The following example shows a combobox containing a set of selections that allows users to choose a color.

<combobox sid="CATEGORY_POPUP">
   <group>combo_Group</group>
   <label>Choose a Color:</label>
</combobox>

The label is "Choose a Color:". This will display above the combobox. Until the user types in something or makes a selection, the field area of the combobox will be blank.

These are the cells that make up the combobox. They are select cells and they belong to the same group as the combobox: combo_Group.

<cell sid="RED_CELL">
   <group>combo_Group</group>
   <type>select</type>
   <value>Red</value>
</cell>

<cell sid="WHITE_CELL">
   <group>combo_Group</group>
   <type>select</type>
   <value>White</value>
</cell>

<cell sid="BLUE_CELL">
   <group>combo_Group</group>
   <type>select</type>
   <value>Blue</value>
</cell>

Stores an information object such as an image, a sound, or an enclosed file in an XFDL form.
Data in data items must be encoded in base64 format. Data items are created automatically when files are enclosed in a form. Enclose files using items with a type option setting of enclose.

Options: datagroup, filename, mimedata, mimetype

Store the data in the mimedata option, and the data's MIME type in the mimetype option. If a button or cell of type enclose contains a data option that points to a data item (as opposed to using the datagroup option), then special rules apply to the data item's behavior. If a user encloses a new data item using that button, the new information overwrites the old. For example, if the data item originally contained a jpeg image of a dog, and then a user enclosed a png image of a house, then the data item's mimedata, mimetype, and filename options update themselves to contain the information about the house image.

This is an example of a data item produced as the result of enclosing a file (the data component used here is artificial, and is only for demonstration purposes).

<data sid="Supporting_Documents_1">
   <filename>smithltr.doc</filename>
   <datagroup content="array">
      <ae>Supporting_Documents</ae>
   </datagroup>
   <mimetype>application/uwi_bin</mimetype>
   <mimedata>R0lGODdhYABPAPAAAP///wAAACwAAAAAYABPAAAC/4SPqcvtD02Y
      Art68+Y7im7ku2KkzXnOzh9v7qNw+k+TbDoLFTvCSPzMrS2YzmTE+p
      yai3YUk9R6hee2JFP2stju+uG0ptvdeKptb+cX8wfY1jdYU4ehKDi3pdJw
      44yAJEqcW28cA5M0oEKnqKasZwydrK9Wo6JTtLG9p5iwtWi8Tbi/b7E0
      rvKixzbHJyrDq2uNggaXUs1NlLi36AW3AGv7VWhIPA7TzvdOGi/vvr0Of
      ft3Nrx89JewCQJYTirxi2PwgnRpNoMV5FIIboOnqTszFLFIMhQVI0yOz
   </mimedata>
</data>

The field item creates a text area where users can display and enter one or more lines of data. The field's characteristics determine the number of lines, the width of each line, and whether the field is scrollable. Field data can be protected from modification, made to display in the system password format (typically, hidden from view), and forced to conform to data type and formatting specifications.
active, bgcolor, editstate, focused, fontcolor, fontinfo, format, help, itemlocation, justify, label, labelbgcolor, labelborderwidth, labelfontcolor, labelfontinfo, mouseover, next, size, value When setting the size option of a field, the height and width of the field will be based on the average character size for the font in use (set with the fontinfo option). The editstate option determines whether the field is read only, write only (for passwords, for example) or available for both reading and writing. The format option specifies the data type of the field's data. It also contains flags that allow the application of specified edit checks and formatting to the data. The label option defines the field's label. The label is placed above the field and aligned with the field's left edge. The scrollvert and scrollhoriz options govern a field's scrolling characteristics. They must be set to always to permit scrolling. With scrolling enabled, scroll bars display along the bottom (horizontal scrolling) and right (vertical scrolling) edges of the field. This is an example of a single line field item that allows 20 characters of input. An initial value of 23000 has been defined for the field. When the form appears, the field will contain this value. <field sid="income_field"> <label>Annual income</label> <value>23000</value> <size content="array"> <ae>20</ae> <ae>1</ae> </size> <fontinfo content="array"> <ae>Courier</ae> <ae>12</ae> <ae>plain</ae> </fontinfo> <labelfontinfo content="array"> <ae>Helvetica</ae> <ae>12</ae> <ae>plain</ae> </labelfontinfo> <labelfontcolor content="array"> <ae>blue</ae> </labelfontcolor> </field> Defines a help message that can be used to support various external items in the form. A separate help item can be created for every item supported, or one help item can be used to support several items. active, value The help item's value option contains the help message text.
The link between the help item and the supported item is created by the help option in the supported item's definition. The help option contains the help item's item reference. This is an example of a button for which help information is available. The following is the button definition with the help item's item reference in the help option: <button sid="fullPicture_button"> <value>View Full-Sized Picture</value> <help>button_help</help> <fontinfo content="array"> <ae>Times</ae> <ae>14</ae> <ae>plain</ae> </fontinfo> <type>link</type> <url content="array"> <ae></ae> </url> </button> The following example shows the help item referred to in the button definition. The contents of the value option are used as the help message when the user asks for help with the button. <help sid="button_help"> <value>Pressing this button will bring a full-sized image in a form down to your viewer.</value> </help> Defines a static text message or an image to display on the form. If both an image and a text message are defined for the label, the image takes precedence in viewers able to display images. active, bgcolor, fontcolor, fontinfo, format, help, image, itemlocation, justify, size, value To define the text for a label, use the value option. To define an image for a label, use the image option. To create a multiple line text message, add line breaks to the message text. Use the escape sequence \n to indicate a line break. When setting the size option of a label, the height and width of the label will be based on the average character size for the font in use (set with the fontinfo option). If a label's image option points to a data item that dynamically changes its mimedata (but not its item tag), then the label will update the image it displays. For information on how to update an image by enclosing a new one, see the data option description. The label's background color defaults to being transparent and thus the label will take the background color of whatever item it is over.
For example, it is possible to place a label inside a colored box (in order to make a title section that stands out) without specifying a background color for the label. This is an example of a multiple-line text label. <!--Specify right justification for this label.--> <label sid="RHYME_LABEL"> <value>Little miss Muffet Sat on her tuffet, Eating her curds and whey. When along came a spider, who sat down beside her, and frightened miss Muffet away!</value> <fontinfo content="array"> <ae>Times</ae> <ae>16</ae> <ae>italic</ae> </fontinfo> </label> Draws a simple vertical or horizontal line on the form. Lines are useful for visually separating parts of a page. fontcolor, fontinfo, itemlocation, size, thickness Specify the dimensions of a line using the size and thickness options. The size option determines whether the line is vertical or horizontal. If the horizontal dimension is set to zero, then the line is vertical. If the vertical dimension is set to zero, then the line is horizontal. Size is calculated in characters. The thickness option determines how thick the line will be. Thickness is calculated in pixels. The fontinfo option information is used when calculating the line's size. The size option's unit of measurement is characters; therefore, the choice of font can affect the size. See the size option for more information. The fontcolor option defines the color of the line. This is an example of a horizontal line with a thickness of five pixels. <line sid="BLUE_LINE"> <size content="array"> <ae>40</ae> <ae>0</ae> </size> <thickness>5</thickness> </line> Creates a list from which users can make selections (as in a list of names) and trigger actions (such as enclosing files and submitting the form). A list can contain both selections and actions. The entries in the list are cell items. Selections are cells with a type option setting of select. Actions are cells with any other type option setting.
active, bgcolor, editstate, focused, fontcolor, fontinfo, format, help, itemlocation, label, labelbgcolor, labelborderwidth, labelfontcolor, labelfontinfo, mouseover, next, size, value Place cells in a list by creating a group for the list and assigning cells to the group. Create a group using the group option in the list definition. Assign cells to the group using the group option in the cell definition. Cells that have a label option will display that label in the list. Otherwise, the value option of the cell will be displayed. List, combobox and popup items with the same group reference display the same group of cells. The value option will contain one of the following: The item reference of the most recently chosen cell if the cell was of type "select". Nothing if the cell most recently chosen was of any type other than "select". Define the list's label using the label option. When setting the size option of a list, the height and width of the list will be based on the average character size for the font in use (set with the fontinfo option). A vertical scroll bar will appear beside the list if the number of cells is greater than the height (defined with the size option) of the list. When a format is applied to a list, the formatting will be applied to the value of each cell linked to the list. This is an example of a list containing three actions: submit form, save form, and cancel form. Here is the list definition. <list sid="MAINMENU_LIST"> <group>list_Group</group> <label>Options Menu</label> <labelfontcolor content="array"> <ae>blue</ae> </labelfontcolor> <size content="array"> <ae>3</ae> <ae>20</ae> </size> </list> These are the cells that make up the list. They are action cells and they belong to the same group as the list: list_Group.
<cell sid="SUBMIT_CELL"> <group>list_Group</group> <type>submit</type> <url content="array"> <ae></ae> </url> <value>Submit Form</value> </cell> <cell sid="SAVE_CELL"> <group>list_Group</group> <type>save</type> <value>Save Form</value> </cell> <cell sid="CANCEL_CELL"> <group>list_Group</group> <type>cancel</type> <value>Cancel this Form</value> </cell> Creates a popup menu from which users can make selections (as in a list of names) and trigger actions (such as enclosing files and submitting the form). A popup can contain both selections and actions. The entries in the popup are cell items. Selections are cells with a type option setting of select. Actions are cells with any other type option setting. Popups act like a hybrid of a label, a button, and a list. Unopened, a popup occupies only the space required for its label. Open, the popup displays a list of selections and actions. After a user chooses a selection or an action, the popup closes (that is, returns to its unopened state). A popup's label displays inside the popup item. activated, active, bgcolor, borderwidth, editstate, focused, fontcolor, fontinfo, group, help, itemlocation, justify, label, mouseover, next, size, value Place cells in a popup by creating a group for the popup and assigning cells to the group. Create a group using the group option in the popup definition. Assign cells to the group using the group option in the cell definition. Cells that have a label option will display that label in the popup list; otherwise, the cell's value will be displayed. For example, if a cell had a value of "USA", and a label of "United States of America", the full version would be shown in the popup list. Once the cell was selected, the popup would display the abbreviation. Popup, combobox and list items with the same group reference display the same group of cells. The value option will contain one of the following: The item reference of the most recently chosen cell if the cell was of type "select". Nothing if the cell most recently chosen was of any type other than "select". When setting the size option of a popup, the height and width of the popup will be based on the average character size for the font in use (set with the fontinfo option). The label option contains the popup's default label. When the value option is empty, the default label displays.
Otherwise, the label of the cell identified in the value option appears. When a format is applied to a popup, the formatting will be applied to the value of each cell linked to the popup. This is an example of a popup containing a set of selections allowing users to choose a category. Here is the popup definition. The default label is "Choose a Category:". This will display until a user makes a selection. Afterwards, the cell's value will display as the label. <popup sid="CATEGORY_POPUP"> <group>popup_Group</group> <label>Choose a Category:</label> </popup> These are the cells that make up the popup. They are select cells and they belong to the same group as the popup: popup_Group. <cell sid="HISTORY_CELL"> <group>popup_Group</group> <type>select</type> <value>World History</value> </cell> <cell sid="SCIENCE_CELL"> <group>popup_Group</group> <type>select</type> <value>Physical Sciences</value> </cell> <cell sid="MUSIC_CELL"> <group>popup_Group</group> <type>select</type> <value>Music</value> </cell> Intended for use with one or more other radio button items. A group of radio buttons presents users with a set of mutually exclusive choices. Each radio button represents one choice the user can make. There is always one selected radio button in the group. As well, since radio buttons present a mutually exclusive set of choices, only one radio button in a group can be selected. When a user chooses a radio button, that radio button becomes selected. A selected radio button appears filled in some way. All other radio buttons in the group appear empty. active, bgcolor, borderwidth, editstate, focused, fontcolor, fontinfo, group, help, itemlocation, label, mouseover, next, size, value Group radio buttons by assigning them to the same group. Do this by including the group option in each radio button's definition, and using the same group reference in each case. The value option contains the status indicator. It can be either on or off. The value on indicates a status of chosen.
The value off indicates a status of not chosen. The default status is not chosen. When the form opens, if no radio button has the status chosen, then the last radio button defined for the group becomes chosen. If multiple radio buttons are chosen, then only the last chosen radio button retains that status. The label option defines a label to appear above the radio button and aligned with its left edge. When setting the size option of a radio button, the height and width of the bounding box will be based on the average character size for the font in use (set with the fontinfo option). The fontcolor option determines the color of the radio button fill pattern (default is red). This example shows a group of three radio buttons. The first radio button is the initial choice: the value option setting is on. The buttons all belong to the group search_Group. <radio sid="NAME_RADIO"> <value>on</value> <group>search_Group</group> <label>Search by Name</label> </radio> <radio sid="NUMBER_RADIO"> <group>search_Group</group> <label>Search by Number</label> </radio> <radio sid="OCCUPATION_RADIO"> <group>search_Group</group> <label>Search by Occupation</label> </radio> As shown here, only the chosen radio button needs to have a value option setting. The remaining radio buttons will receive the (default) value setting of off. Contains a digital signature and the data necessary to verify the authenticity of a signed form. It is created by a form viewer or other program when a user signs a form (usually using a digital signature button). The signature item contains an encrypted hash value that makes it impossible to modify the form without changing the hash value that the modified form would generate. To verify, one can generate the hash value and then see if it matches the one in the signature.
mimedata, signature, signdatagroups, signer, signformat, signgroups, signitemrefs, signitems, signoptionrefs, signoptions When a user signs a form using a signature button, the viewer creates the signature item as specified in the button's signature option. The viewer also associates the signature with the signature button, using the signature's signature option. When a user signs a form, the signer, signformat, signgroups, signitemrefs, signitems, signoptionrefs, and signoptions options are copied from the button description to the signature description. A copy of the XFDL description of the form or portion of the form that is signed is included in the signature's mimedata option. This data is encrypted using the hash algorithm specified in the button's signformat option. When a program checks a signed form, it compares the data in the mimedata option with that of the portion of the form that is apparently signed. If the descriptions match, then the signature remains valid. If they do not match, the signature breaks, and the user is prompted. An attempt to create a signature will fail if: The item named by the signature button's signature option already exists. The signature button is already signed by any signature in the form. The signer's private key is unavailable for signing. Filters can be used to indicate which items and options to keep and to omit. The explicit and implicit settings of an existing filter take precedence over an implication that might be drawn from a non-existing filter. Set up these filters in the signature button description. To use digital signatures, it is necessary for the user to obtain a digital signature certificate. This example shows a signature item below the signature button that created it.
<button sid="empSigButton"> <type>signature</type> <value content="compute"> <compute>signer</compute> </value> <signer></signer> <format content="array"> <ae>string</ae> <ae>mandatory</ae> </format> <signformat>application/x-xfdl;</signformat> <signoptions content="array"> <ae>omit</ae> <ae>triggeritem</ae> <ae>coordinates</ae> </signoptions> <signature>empSignature</signature> </button> ... <signature sid="empSignature"> <signformat>application/x-xfdl;</signformat> <signoptions content="array"> <ae>omit</ae> <ae>triggeritem</ae> <ae>coordinates</ae> </signoptions> <mimedata>MIIFMgYJKoZIhvcNAQcCoIIFIzCCBR8CAQExDzANBgkg AQUFADALB\ngkqhkiG9w0BBwGgggQZMCA36gAwSRiADjdhfHJl 6hMrc5DySSP+X5j\nANfBGSOI\n9w0BAQQwDwYDVQQHEwhJbn Rlcm5ldDEXMBUGA1UEChM\nOVmVyaVNpZ24sIEluYy4xNDAKn 1ZlcmlTaWduIENsYXNzIDEgQ0Eg\nLSJbmRdWFsIFN1YnNjcmliy ZXIwHhcNOTgwMTI3MwMDAwOTgwM\M1OTU5WjCCARExETA </mimedata> </signature> Creates space between items on a form. It can be any specified size. It is invisible. fontinfo, itemlocation, label, size A spacer can be sized either by giving it length and width dimensions (using size), by expanding the default size using the itemlocation option, or by giving it a label. If a label is used, the spacer's size equals the size of the text typed into the label. The label does not appear; it is simply used to determine the spacer's size. When setting the size option of a spacer, the height and width of the spacer will be based on the average character size for the font in use (set with the fontinfo option). This example shows a spacer item that uses the size option to define the amount of space it will occupy. <spacer sid="3_SPACER"> <size content="array"> <ae>1</ae> <ae>3</ae> </size> </spacer> This example shows a spacer item that uses a label to define the amount of space it will occupy. This sizing technique is useful when creating a spacer that is exactly the same size as a real label on the form. <spacer sid="WELCOME_SPACER"> <label>Welcome to Information Line</label> </spacer> Allows the definition of a toolbar for a page.
A toolbar is a separate and fixed area at the top of the page. It functions much like a toolbar in a word processing application. Typically, items placed in the toolbar are those users are to see no matter what portion of the page they are viewing. The toolbar is visible no matter what portion of the page body is visible. However, if the toolbar is larger than half the form window, it is necessary to scroll to see everything it contains. bgcolor, mouseover The background color of the toolbar becomes the default background color for items in the toolbar. Add items to the toolbar using the within modifier of the itemlocation option. Code the itemlocation option in each included item's definition. This example shows a toolbar that contains a label, a spacer, and two buttons. Here is the toolbar definition: <toolbar sid="TOOL_BAR"> <bgcolor content="array"> <ae>cornsilk</ae> </bgcolor> </toolbar> Here is an item that will appear in the toolbar. <label sid="COMPANY_NAME"> <value>My Company</value> <itemlocation content="array"> <ae content="array"> <ae>within</ae> <ae>TOOL_BAR</ae> </ae> </itemlocation> </label> Allows form designers to add application-specific information to the form definition. This is useful when submitting forms to applications requiring non-XFDL information. An example of non-XFDL information might be an SQL query statement. All XFDL options and any custom options can be used with custom items. The naming conventions for a custom item are as follows: It must begin with an alphabetic character. It must contain characters only from the following list: A-Z, a-z, 0-9 and underscore. It must contain an underscore. This is an example of a custom item definition. It includes both an XFDL and a custom option.
<ma_event sid="STATUS_EVENT"> <active>off</active> <ma_id>UF45567 /home/users/preferences01</ma_id> </ma_event> For simple character data content: <optionTag>content</optionTag> For computed options: <optionTag content="compute"> <cval>current value data</cval> <compute>expression</compute> </optionTag> For array options: <optionTag content="array"> <!-- suboption elements --> </optionTag> An option defines a characteristic given to a form, a page, or an item. For example, a fontinfo option set at the form global level defines the default font characteristics for the entire form. A fontinfo option set at the item level defines the font characteristics for only the item it is in. The definition of an option consists of content between start and end tags. The element tag defines the type of option. This type must be one of the option types defined in this specification, or a user-defined option that follows the rules in the "custom option" description in this specification. For example: <value>This is the value</value> If the content expresses a compute, then the content attribute must be present and be set equal to "compute". For example: <value content="compute"> <cval></cval> <compute> price1Field.value+price2Field.value*"0.07" </compute> </value> If the content expresses an array, then the content attribute must be present and be set equal to "array". For example: <fontinfo content="array"> <ae>Helvetica</ae> <ae>12</ae> <ae>plain</ae> </fontinfo> XFDL does not often assign names to the array elements, so the default tag name of ae is used. An option set at a lower level in the form hierarchy overrides a similar option set at a higher level. It overrides it for only the level it is in and any that come below it in the hierarchy. For example, the fontinfo option in the following example would override a global fontinfo setting for the page it is in, and also for any items in that page. <page sid="Page1"> <fontinfo content="array"> <ae>Helvetica</ae> <ae>12</ae> <ae>plain</ae> </fontinfo> Form global options are optional and must be defined after the XFDL start tag, but before the first page in a form.
Page global options are optional and must be defined after the page declaration, but before the first item in a page. To determine whether an option is a valid form global or page global option, see that option's description later in this specification. XFDL defines a set of data types that describe the type of content allowed in an option. Each option description in this specification uses one or more of the following data type designators:
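The three content forms described above (plain character data, compute, and array) are ordinary XML, so any XML library can tell them apart. The helper below is a hedged Python illustration, not part of any XFDL toolkit; it only handles the flat cases shown in this specification (nested arrays and the cval placeholder are deliberately ignored).

```python
import xml.etree.ElementTree as ET

def read_option(element):
    """Classify an XFDL option element and return (kind, value).

    kind is "array", "compute", or "literal", following the three
    content forms described above.
    """
    content = element.get("content")
    if content == "array":
        # Array options hold their values in ae sub-elements.
        return "array", [ae.text for ae in element.findall("ae")]
    if content == "compute":
        # Computed options carry their expression in a compute element.
        compute = element.find("compute")
        return "compute", compute.text if compute is not None else None
    # Otherwise the option is simple character data.
    return "literal", element.text

# A flat array option, like the size options used throughout the examples:
kind, value = read_option(
    ET.fromstring('<size content="array"><ae>20</ae><ae>1</ae></size>'))
# kind == "array", value == ["20", "1"]
```

The same function applied to `<value>This is the value</value>` would report a literal, and applied to the compute example it would return the expression text.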
http://www.w3.org/TR/NOTE-XFDL
It can take some time to wrap your head around channels. For this reason, I wanted to expand on the previous post with another, more advanced, example. It's a bit complicated, but quite neat. The problem we are trying to solve is simple though. We have a reverse proxy, in Go, and we want to avoid having multiple threads fetch the same file at the same time. Instead, we'd like one of the threads to download it and block the other threads until the file is ready. For this example, our entry point will be the Download function. This will get called by multiple threads:

func Download(remotePath string, saveAs string) {

}

To solve this problem we'll create a map which'll map the remotePath to a downloader. If no thread is currently downloading the file we'll register ourselves as the downloader. If someone is already downloading it, we'll block until they are done:

import (
  "time"
)

var downloaders = make(map[string]*downloader)

type downloader struct {
  saveAs string
}

func Download(remotePath string, saveAs string) {
  d, ok := downloaders[remotePath]
  if ok == false {
    d = &downloader{saveAs}
    downloaders[remotePath] = d
    d.download(remotePath)
  } else {
    //TODO 1: need to figure out how to block until d is done
  }
}

func (d *downloader) download(remotePath string) {
  time.Sleep(5 * time.Second) //simulate downloading the file to disk
  //the file is now available at d.saveAs
  delete(downloaders, remotePath)
  //TODO 2: need to signal all blocked threads
}

Our rough skeleton already has a major issue. It's completely possible for two threads to register themselves as a downloader for the same path at the same time. Access to downloaders must be synchronized. We can accomplish this with the synchronization primitives offered by the sync package, such as a mutex or, better yet, a read-write mutex.
For this post we'll keep our code unsafe so we can focus on channels (we could also use channels to synchronize access, but the sync package is great for this type of simple stuff). We have two missing pieces: blocking the latecomers and letting them know when the file is ready. What we want here is an unbuffered channel. An unbuffered channel synchronizes communication; or, put differently, it blocks the sender until the receiver reads from it. We'll add this channel to our downloader:

type downloader struct {
  saveAs string
  observers chan bool
}

And change the code that creates our downloaders:

//old
d = &downloader{saveAs}

//new
d = &downloader{saveAs, make(chan bool)}

With our channel created, the simplest way to block our latecomers is to write to our channel. Todo 1 becomes:

} else {
  d.observers <- true
  //once we get here, d.saveAs will be set
}

Remember, that first line (writing to the channel) will block until the other end reads from it. Nowhere in our code are we doing that yet, so for now, it'll block forever. Before we skip to that part, there's at least one way we can improve this. We can make it time out. The last thing we want is to have a bunch of threads blocked forever because something went wrong in the downloader. To do this, we'll use Go's select construct. select looks a lot like a switch statement, which can really throw you off at first. What select does is select a channel from a list. If no channel is ready, it blocks or executes default if one has been provided. If multiple channels are ready, it randomly picks one. Let's look at it:

select {
case d.observers <- true:
  //d.saveAs is ready
default:
  //handle the timeout (return an error or download the file yourself?)
}

This isn't very good. We know that the other end won't be reading from the channel until the download is done, so we'll immediately jump to the default, without blocking, and get an error. What we really want is to delay the execution of default.
To do this we use the time.After function, which gives us a channel that will receive a value after the specified time:

select {
case d.observers <- true:
  //d.saveAs is ready
case <- time.After(5 * time.Second): //adjust the time as needed
  //handle the timeout (return an error or download the file yourself?)
}

We'll go into select and block. We'll unblock under two conditions: our write to observers is read, or 5 seconds goes by and the channel created by time.After writes and unblocks us. The last part is Todo 2, which is to notify all our blocked latecomers. Now, since these are blocked waiting for a reader, we simply need to read from observers:

func (d *downloader) download(remotePath string) {
  time.Sleep(5 * time.Second) //simulate downloading the file to disk
  //the file is now available at d.saveAs
  delete(downloaders, remotePath)
  for _ = range d.observers {

  }
}

range, when applied to a channel, loops through and reads from the channel (when applied to an array or a map, it loops through the array/map). We don't actually care what we are reading, so we discard it by assigning it to _. The above code actually unblocks our latecomers, but it blocks this code. If we have 3 observers, we'll loop 3 times and set them free, but then we'll block indefinitely. The solution? select again:

func (d *downloader) download(remotePath string) {
  time.Sleep(5 * time.Second) //simulate downloading the file to disk
  //the file is now available at d.saveAs
  delete(downloaders, remotePath)
  for {
    select {
    case <- d.observers:
    default:
      return
    }
  }
}

This code will read all the values on d.observers and then, when there are no more values, it'll simply return. That's it. There's a bunch of improvements we could make. Access to downloaders needs to be safe. Our notification code (the last example we looked at) could be run in its own goroutine so that it doesn't delay download from returning.
A more advanced version could download chunks of data and broadcast them to all the observers as the chunks become available. As-is, our solution blocks until the entire file is downloaded. However, depending on what you are doing, and how big the files are, maybe it's better to stream the data back as it becomes available. Hopefully this helps you understand how channels can be used and where some of Go's language constructs, like range and select, fit.
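For readers coming from other languages, the same single-flight idea (one caller downloads while latecomers block, with a timeout) doesn't require channels. Here is a hedged Python sketch of the equivalent using a lock-protected map and a threading.Event. Every name in it is illustrative rather than taken from the post, and the map access is synchronized, which is the part the Go version deliberately left unsafe.

```python
import threading

_downloads = {}           # remote_path -> threading.Event for the in-flight fetch
_lock = threading.Lock()  # guards _downloads (the sync the Go sketch omits)

def download(remote_path, save_as, fetch, timeout=5.0):
    """Fetch remote_path once; concurrent callers block until it's done.

    Returns True when the file is ready, or False if we timed out
    waiting for another caller's download.
    """
    with _lock:
        event = _downloads.get(remote_path)
        owner = event is None
        if owner:
            # We are the downloader: register before releasing the lock
            # so nobody else starts the same fetch.
            event = _downloads[remote_path] = threading.Event()
    if owner:
        try:
            fetch(remote_path, save_as)  # the actual download
        finally:
            with _lock:
                del _downloads[remote_path]
            event.set()  # wake every latecomer, like draining observers
        return True
    # Latecomer: wait for the downloader, like the select with time.After.
    return event.wait(timeout)
```

Event.wait returns False on timeout, which plays the role of the time.After branch in the Go version.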
http://openmymind.net/Introduction-To-Go-Channels-Again/
If you are reading this blog I assume you are already familiar with DAG creation in Apache Airflow. If not, please visit “Dag in Apache Airflow”. This blog explains: – Sending email notifications using EmailOperator. – Sending an email notification when a DAG or task fails. Here, we will schedule a dag that consists of 3 tasks. Task_1 and Task_2 are using BashOperator while the sending_email task is using EmailOperator. After successful execution of Task_1, the sending_email task will send an email. Finally, a failure alert email will be sent for Task_2. Let’s start step by step… 1: Generate Google App Password This generates a unique 16-character password, authorized by Google, that stands in for the original password or two-factor authentication. Steps for generating a Google App Password: - Visit the App passwords page. After login, you will see the below window. - Click Select app and choose the app you’re using. - Click Select device and choose the device you’re using. - Select Generate. - The app password is generated (the 16-character code in the yellow bar), as shown in the image below. - Select Done. Note: Once you are finished, you won’t see that App password code again. Hence, please note the password somewhere carefully. 2 : Edit airflow.cfg The airflow.cfg file is available in the airflow directory. Try to update the SMTP section as shown here: [email] email_backend = airflow.utils.email.send_email_smtp [smtp] smtp_host = smtp.googlemail.com smtp_starttls = True smtp_ssl = False smtp_user = YOUR_EMAIL_ADDRESS smtp_password = 16_DIGIT_APP_PASSWORD smtp_port = 587 smtp_mail_from = YOUR_EMAIL_ADDRESS 3: Importing modules from datetime import datetime from airflow import DAG from airflow.operators.bash_operator import BashOperator from airflow.operators.email_operator import EmailOperator 4: Default Arguments Here, ‘email_on_failure‘ is set to True, so an email will be sent automatically on failure.
default_args = { "owner": "Kuldeep", "start_date": datetime(2022, 2, 16), 'email': ['kuldeep.swaroop@knoldus.com'], 'email_on_failure': True, } 5: Instantiate a DAG with DAG(dag_id="Sending_mail", schedule_interval="@once", default_args=default_args): 6: Set the Tasks Here, EmailOperator is used to perform the task of sending an email. sending_email = EmailOperator( task_id='sending_email', to='kuldeep.swaroop@knoldus.com', subject='Airflow Alert !!!', html_content="""<h1>Testing Email using Airflow</h1>""", ) task_2 = BashOperator( task_id='task_2', bash_command='cd temp_folder', ) 7: Setting up Dependencies task_1 >> sending_email >> task_2 Code The attached screenshot shows the complete example. Result - Task_1 succeeds - The sending_email task succeeds - Task_2 fails because there is no folder named temp_folder Viewing the DAG in Airflow After running the code, go to the browser and open localhost:8080. Click on your DAG. After clicking, you will get a detailed view of the tasks. Tree View Graph View Received email after successful execution of the sending_email task Received alert email after the failure of task_2 Note: If you are facing any issue related to the email operator, try to update the docker-compose.yml file and set: volumes: - ./dags:/usr/local/airflow/dags - ./config/airflow.cfg:/usr/local/airflow/airflow.cfg I hope you are now able to send emails in Apache Airflow. Stay tuned for the next part. Read the Apache Airflow documentation for more knowledge. To gain more information visit Knoldus Blogs.
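Besides the built-in email_on_failure flag, Airflow also accepts an on_failure_callback in default_args for custom alerts. The sketch below is illustrative only: the helper names are mine, only the context keys shown (task_instance with dag_id/task_id, and exception) are assumed, and the send argument stands in for a real mailer such as Airflow's send_email.

```python
def format_failure_alert(context):
    """Build (subject, html_body) from the context dict that Airflow
    passes to an on_failure_callback."""
    ti = context["task_instance"]
    subject = f"Airflow alert: {ti.dag_id}.{ti.task_id} failed"
    body = (
        "<h3>Task failed</h3>"
        f"<p>DAG: {ti.dag_id}</p>"
        f"<p>Task: {ti.task_id}</p>"
        f"<p>Error: {context.get('exception')}</p>"
    )
    return subject, body

def notify_failure(context, send):
    # Wire it up in a DAG as, for example:
    #   default_args = {..., 'on_failure_callback':
    #       lambda ctx: notify_failure(ctx, my_mailer)}
    # where my_mailer is whatever send function you use.
    subject, body = format_failure_alert(context)
    send(to="you@example.com", subject=subject, html_content=body)
```

Keeping the formatting in a plain function like this also makes the alert easy to unit test without a running Airflow instance.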
https://blog.knoldus.com/apache-airflow-sending-email-notifications/
Devise 3.1 ships with more secure defaults. To allow the old, insecure sign-in behavior after confirmation, set:

config.allow_insecure_sign_in_after_confirmation = true

To clear remember tokens when a request fails CSRF verification, override handle_unverified_request:

def handle_unverified_request
  super
  Devise.mappings.each_key do |key|
    cookies.delete "remember_#{key}_token"
  end
end

To allow the old, insecure token lookup behavior, set:

config.allow_insecure_token_lookup = true

If the attacker can access the confirmation e-mail, can’t he just confirm the account and then reset the password to get access? It depends on how he can access the confirmation e-mail. If he can access the target inbox, then surely. As well if he’s sniffing the target network. In those cases, there is nothing we can do. Now, in case he was mistakenly sent a confirmation e-mail (for example, reconfirmable), there is nothing he can do. In one of my apps I have some feature specs that relied on grabbing the confirmation/reset_password tokens directly off of the `User` in order to generate the correct URLs for confirming or resetting a password. Those specs suddenly broke. For now, I’ve resorted to extracting the “raw” tokens out of the sent emails. Is there a better way to do this? Steven, if you are sending the e-mail via the interface, there is nothing you can do. One would argue that your current approach is the most correct one, as you are effectively testing the proper e-mail is being sent too. In any case, you can always send the e-mail manually, which gives you access to the token. For recoverable, here is what you would call: Why would the user’s email randomly be in the parameters? Just a note, the article is missing the Devise tag 😉
http://blog.plataformatec.com.br/2013/08/devise-3-1-now-with-more-secure-defaults/
A couple of weeks ago I wrote about using a combination of tools and features of the Python language to help assess Django apps and write them a bit more defensively. One suggestion was to use a tool like flake8 to report not only on formatting issues but more importantly potential bugs identified through static analysis (including the kind of bugs that are discovered by a compiler or, in an interpreted language, at runtime). Reader Karl Goetz wrote back (shared with permission): I just ran flake8 on a project and was given a list of 1845 issues. Depressing! We discussed briefly and validated my hunch that most of these were style issues. That's not to say they should be ignored, but a list of a thousand or more issues isn't necessarily something to lose sleep over. It's all about one, understanding which problems are important and two, having a strategy in hand to deal with these issues. How you prioritize issues in an existing/legacy software project is one of the most significant stumbling blocks for developers and product teams working in Django apps (and pretty much any other type of project), so we have plenty of grist. Linter reporting is not where I'd normally start, but rather than switch gears let's start here anyhow. For our purposes we're going to assume flake8 as the tool of choice, although that's not strictly required. flake8 includes linting, complexity analysis, and static analysis, making it an excellent default choice. Every individual error will be reported back to you with an error code telling you exactly what it is (e.g. E225 or F403, missing whitespace around an operator and starred imports, respectively). These errors have different discovery sources (e.g. pycodestyle or pyflakes) and different implications. Missing whitespace around an operator has no impact on how the code runs. It does however make the code more difficult to read and thus is likely to slow down development or even lead to programmer errors due to misreading.
Starred imports similarly have no direct impact on how the code runs (e.g. from module import *) however they obscure the available namespace in a given module. This can result in shadowed imported names in a module’s namespace, and it also makes it more challenging for other tools to identify names in use. Both issues represent friction and increased potential for developer error, but the starred import could be hiding a code issue. On the other hand, F823 - local variable referenced before assignment - represents a runtime error waiting to happen. I find it helpful to think of the different issues you might run into, specifically reported back by linting and static analysis, into three categories: The most obvious and uncontroversial “definite” bugs are those that will result in runtime errors. This includes things like return or yield statements outside of functions, a break statement outside of a while or for loop, and a naked except block as not the last exception handler. These should result in errors when a module is imported. Other errors, like undefined names, will never be raised until that one line of code is run. Extensive test coverage will catch errors like this, but we don’t need to run the test suite to find these issues. In the event an undefined name needs to be imported then the solution is as simple as adding the necessary import. Often the name is presumed to have been defined in the code, as an empty list, for example. The solution here is to add in the initial definition, provided you can identify the type and expected initial value. Other “definite” bugs are those that don’t cause runtime errors but might be expected to create bugs involving different values. For example, F601 - dictionary key name repeated with different values - won’t cause your program to crash, but is likely to result in unexpected values being assigned. While definite bugs are a higher priority, this category tends to be much larger and require more extensive fixes. 
These are issues that are not going to directly cause errors in your app but pose significant risk by hiding potential bugs. We already mentioned starred imports, but the most common that comes to mind is plain, "naked" exceptions. flake8 will report these as E722, do not use bare except, specify exception instead. Exceptions get used for one of two purposes: expected control flow, and rescue from errors that unnecessarily crash the program. By control flow I mean using exceptions idiomatically where in other languages (such as Go) you'd use explicit checks on values. Instead of checking for list length and returning the 5th element from a list if its length is at least 5 and another value if it's not, the idiomatic solution is to try returning the fifth indexed element and in the event of an IndexError return the alternate value. By contrast, you might have a function that sits at the top of a call stack including various backing services (database) and HTTP APIs, where the goal is to ensure that no matter what happens, no error in this call stack is ever propagated back up to the user in the form of a 500 error. We may want to catch at least a base exception here, but nonetheless there is a rationale for catching everything. Now the reason for being specific, especially in the first instance, is two-fold. The explicitness of having the named exception shows intent so that other developers understand why there is a try block and also how the flow control works. In both cases, a naked exception just swallows everything so, for example, if there's an actual bug throwing an error then it's possible that even a test won't catch this because all exceptions were implicitly handled.
In a case like the latter where there may be too many possible exceptions to handle, from socket issues to third party API responses, the solution is to first add an exception log before continuing so that you have access to the full error information, and to separately handle known/expected exceptions first. That might look something like this:

try:
    do_something()
except ThirdPartyAPIISDown as e:
    logging.error(e)
    pass
except BaseException:
    logging.exception("Unhandled failure")
    pass

That third party API is known to be flaky and we don't need the full stack trace every time it's unavailable. Now we don't have our own code swallowing issues and hiding some potential bugs. Additional issues that may hide bugs include excessive complexity. This is a big topic and it straddles the third category, but ultimately overly complex code is hard to reason through, and the combination of numerous logical paths and levels of these paths means very complex functions and methods are often where hidden bugs are to be found. The solution here is to refactor (in the "pure" sense of the word). This is sometimes easier said than done, and the urgency of refactoring complex code depends a lot on particularities about the module and its use. Pretty much everything else falls into this category. Style and formatting issues are at the least distractions and at worst can obscure what the code actually does. These should be fixed but need not be top priority. The good news, or better news, is that in many instances these can be fixed automatically! Whether with a hard and fast formatter like black[0], a configurable one like autopep8[1], or a built-in tool like PyCharm[2], there are ways to address these without editing each extraneous space and each wayward tab individually. The further benefit of automated tools like these is that they can be added to your development process, and even automated build process, so that no one on the team has to think about them anymore.
Why not use an autoformatter? If your project has its own format guidelines or conventions you want to maintain, an existing autoformatter may either not work or require too much knob turning to be useful. In our own work we've made use of some tooling that we recently published[3] to help break down the issues. The flake8-csv formatter exports the results from flake8 in CSV format with an option for including some pre-configured categorization. This can be loaded into a spreadsheet (or DataFrame!) letting you examine not only which modules have trouble spots but what the breakdown of issues looks like. Coupled with test coverage data and Git churn information, you can start to prioritize in even a large, busy codebase based on the most serious issues against the most frequently edited modules.

Indentedly yours,
Ben

[0] Mentioned before, black is a Python3-only no-holds-barred formatter in the mold of the Go fmt command
[1] If you'd like more formatting options, autopep8:
[2] PyCharm is a Python IDE and a commercial product; it can also be configured to format with an external tool of your choice
[3] flake8-csv. Run flake8 with the --format=csv_categories flag to get the error code categorization.

Learn from more articles like this how to make the most out of your existing Django site.
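To make the triage idea concrete, here is a small, hedged sketch of bucketing flake8 error codes by their prefix into the three rough categories discussed above. The mapping below is an illustration only, not the actual categorization scheme used by flake8-csv:

```python
from collections import Counter

# Rough, assumed mapping from error-code prefix to triage bucket.
# Note it is deliberately coarse: e.g. E722 (bare except) lands in
# "style" by prefix even though the article treats it as a bug-hider.
def categorize(code):
    if code.startswith("F"):        # pyflakes findings: real or hidden bugs
        return "bug-or-bug-hider"
    if code.startswith("C"):        # mccabe: excessive complexity
        return "complexity"
    if code[0] in ("E", "W"):       # pycodestyle: style/formatting
        return "style"
    return "other"

# A handful of codes as they might appear in a flake8 report.
report = ["E225", "F403", "F823", "C901", "W291", "E722"]
print(sorted(Counter(categorize(code) for code in report).items()))
# [('bug-or-bug-hider', 2), ('complexity', 1), ('style', 3)]
```

Loaded into a spreadsheet or DataFrame alongside per-module counts, a breakdown like this is what lets you see at a glance whether a module's 200 findings are two real bugs and 198 whitespace complaints, or the reverse.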
https://wellfire.co/this-old-pony/starting-to-prioritize-and-triage-issues-in-cleaning-up-django-apps--this-old-pony-50/
[Solved] How to redirect the QT Application display to /dev/fb0

Dear All,

I have a very simple Qt GUI application like shown below:

#include <QApplication>
#include <QtWidgets/QLabel>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    QLabel *label = new QLabel("Hello World!");
    label->show();
    return app.exec();
}

Now when I run the application like this:

./myApp -qws -display "linuxfb:/dev/fb0"

I get the following error:

This application failed to start because it could not find or load the Qt platform plugin "xcb". Available platform plugins are: linuxfb (from :/home/GUI). Reinstalling the application may fix this problem. Aborted (core dumped)

Kindly help me out. I compiled Qt from source code and there was no XCB support. The only support that is available is LinuxFB.

Thanks
Sid

Hi Sid,

I assume you use Qt5? The QWS system does not exist anymore and is replaced by the QPA system, which by now is not documented very well. However you should be fine with the linuxfb plugin. What you need to change is:

- change your application to be a QGuiApplication:
  #include <QGuiApplication>
  ...
  QGuiApplication theApp(argc, argv);
- remove the -qws -display "linuxfb:/dev/fb0" command line parameters
- copy or move the platforms directory created by the Qt install to the directory containing your application (maybe it is also ok to just install the platforms directory's contents to /lib or /usr/lib?)

Cheers,
Lutz

Oh, and don't forget to pass -qpa linuxfb as an argument to configure when you build Qt5 to assure that the linuxfb plugin is used as default.

Dear Lutzhell,

Thanks for your quick reply. Actually while building Qt from source I got the following config summary:

QPA backends:
  DirectFB ............. no
  EGLFS ................ no
  KMS .................. no
  LinuxFB .............. yes
  XCB .................. no

so I think there is no need of doing the Qt configure with -qpa linuxfb. So I want to pass the command line argument. So to run the application, what arguments should I use so that it is redirected to LinuxFB?

Looking forward for your help. Thanks a lot.

Sid

Forgot to mention that I am using Qt 5.2.1.

Hi Sid,

my application starts nicely without any additional parameters. This is possible because with the configure flag I say that I want to use "offscreen" (i.e. in my case: -qpa offscreen, you would use -qpa linuxfb) as the default QPA platform plugin.

From ./configure --help:

-qpa <name> ......... Sets the default QPA platform (e.g. xcb, cocoa, windows).

If you don't want to use linuxfb as default you can still pass the platform parameter to your programme: -platform linuxfb

BTW I'm using Qt 5.1.1 but the instructions should do for your version as well.

Dear Lutzhell,

Thanks for your quick reply. I will work on your inputs and will update you about the progress :)

Good luck :)

The doc for QGuiApplication gives some clues which arguments can be passed to a QGuiApplication. Very interesting is:

-platform platformName[:options], specifies the Qt Platform Abstraction (QPA) plugin. Overridden by the QT_QPA_PLATFORM environment variable.

Hi lutzhell,

Thanks for the help. I regenerated all the libraries by passing -qpa LinuxFB during the configure. Now I am able to see the output on LinuxFB by giving the following command:

./gui_qt -platform linuxfb

Cheers :)

Marking it as solved.
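The resolution of the thread, distilled as build-and-run commands (a sketch only: the remaining configure options are elided, and the binary name is the one from the last post):

```shell
# When building Qt 5 from source, make linuxfb the default QPA platform
# (all other configure options elided):
./configure -qpa linuxfb ...

# Then run the application, either relying on that configured default
# or selecting the platform plugin explicitly:
./gui_qt -platform linuxfb
```

Either way, the application must be a QGuiApplication and the platforms plugin directory must be reachable at runtime, as described above.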
https://forum.qt.io/topic/42142/solved-how-to-redirect-the-qt-application-display-to-dev-fb0
How can I take a backup of the database without using the "Manage database" link, psql commands, or pgAdmin 3?

The "Manage database" link is shown at the time of login, so any user can click on it and, if he knows the password of the DB, can drop it. In my point of view, giving the "Manage database" link on the login screen is not that secure. I can hide that link by inheriting the web module. But the problem is, to take a backup of the DB I have to connect to the server and write pgsql commands.

What my actual requirement is: can anyone give the "path" of manage database? <a href="#" class="oe_login_manage_db">Manage Databases</a> is the manage database anchor tab. Where does it go when I click on this link? I saved that path somewhere in my mail. If I want to take a backup of the DB, I directly copy that path in the browser and take the backup. Or else, is there any way to take a backup without using the "Manage database" link, psql commands, or pgAdmin 3?

Put it in a file, then run a cron on it:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import time
from subprocess import Popen, PIPE
import glob

BACKUP_DIR = '/path/to/some/backup/directory/'
LOG_FILE = '{}backup.log'.format(BACKUP_DIR)
USER = 'username'
DATABASES = ['foo', 'bar']


def dump(db_name):
    dump_dir = BACKUP_DIR + db_name + '/'
    # check if dump_dir exists, create it if not
    dump_dir_glob = glob.glob(dump_dir)
    if len(dump_dir_glob) == 0:
        os.mkdir(dump_dir)
    # dump db
    now = int(time.time() * 1000000)
    dump_file_path = '{}{}_{}.dump'.format(dump_dir, db_name, now)
    with open(dump_file_path, 'wb') as dump_file, \
            open(LOG_FILE, 'a') as log_file:
        process = Popen(['pg_dump', '-Fc', db_name, '-U', USER],
                        stdout=dump_file, stderr=log_file)


def clean(days=2, hours=0, minutes=0, seconds=0):
    interval = seconds + minutes*60 + hours*60*60 + days*24*60*60
    x_days_ago = time.time() - interval
    unlinked = 0
    for db_name in DATABASES:
        glob_list = glob.glob('{}{}/*'.format(BACKUP_DIR, db_name))
        for filename in glob_list:
            file_info = os.stat(filename)
            if file_info.st_ctime < x_days_ago:
                os.unlink(filename)
                unlinked += 1
    return unlinked


def log(string):
    now = time.strftime('%d-%m-%Y_%H:%M:%S')
    string = '{} >> {}\n'.format(now, string)
    with open(LOG_FILE, 'a') as f:
        f.write(string)


def start():
    unlinked = clean(days=40)
    for db in DATABASES:
        dump(db)
    log('done | {} old archive(s) deleted | {} new archive(s) created'.format(unlinked, len(DATABASES)))


start()

After backup, what is the restore command?

DB=dbname && createdb $DB && pg_restore -n public --no-acl -O -d $DB
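A hedged sketch of the "run a cron on it" step from the answer above (all paths here are assumptions; adjust them to wherever you saved the script and configured BACKUP_DIR):

```shell
# Hypothetical crontab entry: run the backup script nightly at 02:00
# and append its output to the backup log.
0 2 * * * /usr/bin/env python /path/to/backup_script.py >> /path/to/some/backup/directory/backup.log 2>&1
```

With this in place, old dumps are pruned and fresh ones created every night without anyone touching the "Manage database" link.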
https://www.odoo.com/forum/help-1/question/how-can-i-take-backup-of-database-without-using-the-manage-database-link-psql-command-and-pgadmin-3-62329
Base viewer class which adds a decoration around the rendering area.

#include <Inventor/Wx/viewers/SoWxFullViewer.h>

This is a base class used by all viewer components. The class adds a decoration around the rendering area which includes thumb wheels, a zoom slider and push buttons. This base class also includes a viewer popup menu and a preference sheet with generic viewing functions. The constructors for the various subclasses of SoWxFullViewer provide a flag for specifying whether the decoration and popup menus should be built.

See also: SoWxViewer, SoWxComponent, SoWxRenderArea, SoWxExaminerViewer, SoWxPlaneViewer, BuildFlag.

Adds application push button, which will be placed in the left hand side decoration trim. Buttons are appended to the end of the list. Note: The button bitmaps should be 28-by-28 pixels to fit nicely into the decoration trim like the other viewer buttons.

Returns index of specified push button.

Returns application push button parent.

Returns the render area window handle.

This hides the component. Reimplemented from SoWxGLWidget.

Adds application push button, which will be placed in the left hand side decoration trim. Buttons are inserted at the desired index. Note: The button bitmaps should be 24-by-24 pixels to fit nicely into the decoration trim like the other viewer buttons.

Returns TRUE if an application-specific popup menu is installed.

Returns whether the viewer component trim is on or off.

Returns whether the viewer popup menu is enabled or disabled.

Returns number of application push buttons.

Removes specified application push button.

Sets the current buffering type in the main view (default SoWxViewer::BUFFER_DOUBLE). Reimplemented from SoWxViewer.

Sets the edited camera. Setting the camera is only needed if the first camera found in the scene when setting the scene graph isn't the one the user really wants to edit. Reimplemented from SoWxViewer. Reimplemented in SoWxExaminerViewer, and SoWxPlaneViewer.
Enables application-specific popup menu. SoWxViewer. Turns the headlight on/off (default on). The default value can be set using the environment variable OIV_USE_HEADLIGHT (0 = FALSE, 1 = TRUE). Reimplemented from SoWxViewer. Reimplemented in SoWxExaminerViewer, and SoWxPlaneViewer. Popup menu provided by a client (i.e. application program) of the viewer. In this version we track the currently "check marked" menu item in the Draw Style submenu (only one checked at a time). Tracks the checkmark for "Still" draw style. Pointer to the Draw Style submenu. Pointer to the Functions submenu. Pointer to the root of the popup menu. Pointer to the Preferences submenu.
https://developer.openinventor.com/refmans/9.9/RefManCpp/class_so_wx_full_viewer.html
Super job! Everything worked perfectly! Thanks!

Thanks drain. I've updated the code sample accordingly. SimpleMessageListenerContainer implements IDisposable, so I've wrapped it in a using block (I used Reflector to ensure the Dispose method calls Shutdown).

This was very useful, thanks for the example!

I've not seen this problem. Might be worth logging the issue on the ActiveMQ User Forum.

Great article! I followed the mentioned steps and everything worked perfectly as expected. Quick question: is there a way (or an article) to get similar notifications on a web (.aspx) page rather than a console window? Thanks in advance.

To compile the Program class, you need to have the Listener class in the same namespace. So, check to see that you have added a Listener class to the ListenerConsole project.

Thanks a lot. Working now as expected. Great article. Herman

Great example! The sender program does not exit properly after sending the message.

isaac, thanks for the feedback. Do you have any more information about how it's not exiting properly?

This is a fantastic example! Very useful. I have the same problem as Isaac. The sender program doesn't terminate after the message is sent. Ctrl+C is required to stop the Sender program.

ActiveMQ topics can be used via NMS. I posted an article about publish-subscribe using NMS and ActiveMQ that shows topics being used. Click here to read the full article. [...] ActiveMQ via .NET Messaging Overview [...]

Hi, I am having a problem running this example: the listener is not receiving all the messages. Am I missing some kind of configuration? Thanks for your help.

Hello everybody, this is a very nice introduction to ActiveMQ. I've tested it with the Apache 5.1.0 version and it works. Thanks in advance for this information. Best regards from Germany, Jan Bludau

Excellent! Thanks for the code. I have already tested the code and it's working. But my problem is I want to list all the queues in ActiveMQ.

Hi, this code is pretty useful. But I want to have the ListenerConsole as a Windows service with auto startup. Can you please help me, how to do it? Thanks in advance.

Great stuff! I've added a link to the Articles section of the ActiveMQ website.

Sorry, my fault. Moved the Console.Read call out of the using block. Thank you very much one more time.

[...] are some examples using the Spring framework (same as [...].

To Rod: Got the same problem after getting the newer 1.1.0.0 snapshot from svn, but fortunately I still have a working older version. NMS 1.1 seems to be a bit unstable through the time. Like ActiveMQ…

With .NET and ActiveMQ, is there any possible way to send/receive messages via a port with a security protocol? I have been looking for a way for a long time. Thank you.

Should have mentioned, the error is: "unable to connect…machine actively refused connection….{IPv6 of localhost}:61616"

Thank you, very useful article. But I am having trouble with the listener: it doesn't receive messages. I checked, the send queue is OK. Can anyone help me? Thanks in advance.

Sorry.. I solved it.. I didn't change any program source… Thanks
http://remark.wordpress.com/articles/messaging-with-net-and-activemq/
Getting the type of an object in .NET C#
December 24, 2016

You'll probably know that every object in C# ultimately derives from the Object base class. The Object class has a GetType() method which returns the Type of an object. Say you have the following class hierarchy:

public class Vehicle { }

public class Car : Vehicle { }

public class Truck : Vehicle { }

Then declare the following instances, all as Vehicle objects:

Vehicle vehicle = new Vehicle();
Vehicle car = new Car();
Vehicle truck = new Truck();

Let's output the type names of these objects:

So 'car' and 'truck' are not of type Vehicle. An object can only have a single type even if it can be cast to a base type, i.e. a base class or an interface. You can still easily get to the Type from which a given object is derived:

Type truckBase = truckType.BaseType;
Console.WriteLine("Truck base: {0}", truckBase.Name);

…which of course returns 'Vehicle'.

View all posts on Reflection here.

Hi Andras -> Thank you for yet another well explained post. I've learned a lot from you this year and hope this continues next year. Happy holidays, my best to you and your family, take care

Hello Pharaoh Ramesses, thanks for your kind words. It's flattering that even ancient rulers are getting useful information from this blog :-). Merry Christmas and all the best to you as well!

Andras
https://dotnetcodr.com/2016/12/24/getting-the-type-of-an-object-in-net-c-2/?replytocom=86851
20160702
+ improve test/list_keys.c, using $TERM if no parameters are given.

20160625
+ build-fixes for ncurses "test_progs" rule.
+ amend change to CF_CC_ENV_FLAGS in 20160521 to make multilib build work (report by Sven Joachim).

20160618
+ build-fixes for ncurses-examples with NetBSD curses.
+ improve test/list_keys.c, fixing column-widths and sorting the list to make it more readable.

20160611
+ revise fix for Debian #805618 (report by Vlado Potisk, cf: 20151128).
+ modify test/ncurses.c a/A screens to make exiting on an escape character depend on the start of keypad and timeout modes, to allow better testing of function-keys.
+ modify rs1 for xterm-16color, xterm-88color and xterm-256color to reset palette using "oc" string as in linux -TD
+ use ANSI reply for u8 in xterm-new, to reflect vt220-style responses that could be returned -TD
+ added a few capabilities fixed in recent vte -TD

20160604
+ correct logic for -f option in test/demo_terminfo.c
+ add test/list_keys.c

20160528
+ further workaround for PIE/PIC breakage which causes gpm to not link.
+ fix most cppcheck warnings, mostly style, in ncurses library.

20160521
+ improved manual page description of tset/reset versus window-size.
+ fixes to work with a slightly broken compiler configuration which cannot compile "Hello World!" without adding compiler options (report by Ola x Nilsson):
  + pass appropriate compiler options to the CF_PROG_CC_C_O macro.
  + when separating compiler and options in CF_CC_ENV_FLAGS, ensure that all options are split-off into CFLAGS or CPPFLAGS
  + restore some -I options removed in 20140726 because they appeared to be redundant. In fact, they are needed for a compiler that cannot combine -c and -o options.

20160514
+ regenerate HTML manpages.
+ improve manual pages for wgetch and wget_wch to point out that they might return values without names in curses.h (Debian #822426).
+ make linux3.0 entry the default linux entry (Debian #823658) -TD
+ modify linux2.6 entry to improve line-drawing so that the linux3.0 entry can be used in non-UTF-8 mode -TD
+ document return value of use_extended_names (report by Mike Gran).

20160507
+ amend change to _nc_do_color to restore the early return for the special case used in _nc_screen_wrap (report by Dick Streefland, cf: 20151017).
+ modify test/ncurses.c:
  + check return-value of putwin
  + correct ifdef which made the 'g' test's legend not reflect changes to keypad- and scroll-modes.
+ correct return-value of extended putwin (report by Mike Gran).

20160423
+ modify test/ncurses.c 'd' edit-color menu to optionally read xterm color palette directly from terminal, as well as handling KEY_RESIZE and screen-repainting with control/L and control/R.
+ add 'oc' capability to xterm+256color, allowing palette reset for xterm -TD

20160416
+ add workaround in configure script for inept transition to PIE vs PIC builds documented in

+ add "reset" to list of programs whose names might change in manpages due to program-transformation configure options.
+ drop long-obsolete "-n" option from tset.

20160409
+ modify test/blue.c to use Unicode values for card-glyphs when available, as well as improving the check for CP437 and CP850.

20160402
+ regenerate HTML manpages.
+ improve manual pages for utilities with respect to POSIX versus X/Open Curses.

20160326
+ regenerate HTML manpages.
+ improve test/demo_menus.c, allowing mouse-click on the menu-headers to switch the active menu.
This requires a new extension option O_MOUSE_MENU to tell the menu driver to put mouse events which do not apply to the active menu back into the queue so that the application can handle the event.

20160319
+ improve description of tgoto parameters (report by Steffen Nurpmeso).
+ amend workaround for Solaris line-drawing to restore a special case that maps Unicode line-drawing characters into the acsc string for non-Unicode locales (Debian #816888).

20160312
+ modified test/filter.c to illustrate an alternative to getnstr, that polls for input while updating a clock on the right margin as well as responding to window size-changes.

20160305
+ omit a redefinition of "inline" when traces are enabled, since this does not work with gcc 5.3.x MinGW cross-compiling (cf: 20150912).

20160220
+ modify test/configure script to check for pthread dependency of ncursest or ncursestw library when building ncurses examples, e.g., in case weak symbols are used.
+ modify configure macro for shared-library rules to use -Wl,-rpath rather than -rpath to work around a bug in scons (FreeBSD #178732, cf: 20061021).
+ double-width multibyte characters were not counted properly in winsnstr and wins_nwstr (report/example by Eric Pruitt).
+ update config.guess, config.sub from

20160213
+ amend fix for _nc_ripoffline from 20091031 to make test/ditto.c work in threaded configuration.
+ move _nc_tracebits, _tracedump and _tracemouse to curses.priv.h, since they are not part of the suggested ABI6.

20160206
+ define WIN32_LEAN_AND_MEAN for MinGW port, making builds faster.
+ modify test/ditto.c to allow $XTERM_PROG environment variable to override "xterm" as the name of the program to run in the threaded configuration.
20160130
+ improve formatting of man/curs_refresh.3x and man/tset.1 manpages
+ regenerate HTML manpages using newer man2html to eliminate some unwanted blank lines.

20160123
+ ifdef'd header-file definition of mouse_trafo() with NCURSES_NOMACROS (report by Corey Minyard).
+ fix some strict compiler-warnings in traces.

20160116
+ tidy up comments about hardcoded 256color palette (report by Leonardo Brondani Schenkel) -TD
+ add putty-noapp entry, and amend putty entry to use application mode for better consistency with xterm (report by Leonardo Brondani Schenkel) -TD
+ modify _nc_viscbuf2() and _tracecchar_t2() to trace wide-characters as a whole rather than their multibyte equivalents.
+ minor fix in wadd_wchnstr() to ensure that each cell has nonzero width.
+ move PUTC_INIT calls next to wcrtomb calls, to avoid carry-over of error status when processing Unicode values which are not mapped.

20160102
+ modify ncurses c/C color test-screens to take advantage of wide screens, reducing the number of lines used for 88- and 256-colors.
+ minor refinement to check versus ncv to ignore two parameters of SGR 38 and 48 when those come from color-capabilities.

20151226
+ add check in tic for use of bold, etc., video attributes in the color capabilities, accounting whether the feature is listed in ncv.
+ add check in tic for conflict between ritm, rmso, rmul versus sgr0.

20151219
+ add a paragraph to curs_getch.3x discussing key naming (discussion with James Crippen).
+ amend workaround for Solaris vs line-drawing to take the configure check into account.
+ add a configure check for wcwidth() versus the ncurses line-drawing characters, to use in special-casing systems such as Solaris.
20151212
+ improve CF_XOPEN_CURSES macro used in test/configure, to define as needed NCURSES_WIDECHAR for platforms where _XOPEN_SOURCE_EXTENDED does not work. Also modified the test program to ensure that if building with ncurses, that the cchar_t type is checked, since that normally is since 20111030 ifdef'd depending on this test.
+ improve 20121222 workaround for broken acs, letting Solaris "work" in spite of its misconfigured wcwidth which marks all of the line drawing characters as double-width.

20151205
+ update form_cursor.3x, form_post.3x, menu_attributes.3x to list function names in NAME section (patch by Jason McIntyre).
+ minor fixes to manpage NAME/SYNOPSIS sections to consistently use rule that either all functions which are prototyped in SYNOPSIS are listed in the NAME section, or the manual-page name is the sole item listed in the NAME section. The latter is used to reduce clutter, e.g., for the top-level library manual pages as well as for certain feature-pages such as SP-funcs and threading (prompted by patches by Jason McIntyre).

20151128
+ add option to preserve leading whitespace in form fields (patch by Leon Winter).
+ add missing assignment in lib_getch.c to make notimeout() work (Debian #805618).
+ add 't' toggle for notimeout() function in test/ncurses.c a/A screens
+ add viewdata terminal description (Alexandre Montaron).
+ fix a case in tic/infocmp for formatting capabilities where a backslash at the end of a string was mishandled.
+ fix some typos in curs_inopts.3x (Benno Schulenberg).

20151121
+ fix some inconsistencies in the pccon* entries -TD
+ add bold to pccon+sgr+acs and pccon-base (Tati Chevron).
+ add keys f12-f124 to pccon+keys (Tati Chevron).
+ add test/test_sgr.c program to exercise all combinations of sgr.
20151107
  + modify tset's assignment to TERM in its output to reflect the name by which the terminal description is found, rather than the primary name. That was an unnecessary part from the initial conversion of tset from termcap to terminfo. The termcap program in 4.3BSD did this to avoid using the short 2-character name (report by Rich Burridge).
  + minor fix to configure script to ensure that rules for resulting.map are only generated when needed (cf: 20151101).
  + modify configure script to handle the case where tic-library is renamed, but the --with-debug option is used by itself without normal or shared libraries (prompted by comment in Debian #803482).

20151101
  + amend change for pkg-config which allows build of pc-files when no valid pkg-config library directory was configured, to suppress the actual install if it is not overridden to a valid directory at install time (cf: 20150822).
  + modify editing script which generates resulting.map to work with the clang configuration on recent FreeBSD, which gives an error on an empty "local" section.
  + fix a spurious "(Part)" message in test/ncurses.c b/B tests due to incorrect attribute-masking.

20151024
  + modify MKexpanded.c to update the expansion of a temporary filename to "expanded.c", for use in trace statements.
  + modify layout of b/B tests in test/ncurses.c to allow for additional annotation on the right margin; some terminals with partial support did not display well.
  + fix typo in curs_attr.3x (patch by Sven Joachim).
  + fix typo in INSTALL (patch by Tomas Cech).
  + improve configure check for setting WILDCARD_SYMS variable; on ppc64 the variable is in the Data section rather than Text (patch by Michel Normand, Novell #946048).
  + using configure option "--without-fallbacks" incorrectly caused FALLBACK_LIST to be set to "no" (patch by Tomas Cech).
  + updated minitel entries to fix kel problem with emacs, and add minitel1b-nb (Alexandre Montaron).
  + reviewed/updated nsterm entry for Terminal.app in OSX -TD
  + replace some dead URLs in comments with equivalents from the Internet Archive -TD
  + update config.guess, config.sub from

20151017
  + modify ncurses/Makefile.in to sort keys.list in POSIX locale (Debian #801864, patch by Esa Peuha).
  + remove an early-return from _nc_do_color, which can interfere with data needed by bkgd when ncurses is configured with extended colors (patch by Denis Tikhomirov).
  > fixes for OS/2 (patches by KO Myung-Hun)
    + use button instead of kbuf[0] in EMX-specific part of lib_mouse.c
    + support building with libtool on OS/2
    + use stdc++ on OS/2 kLIBC
    + clear cf_XOPEN_SOURCE on OS/2

20151010
  + add configure check for openpty to test/configure script, for ditto.
  + minor fixes to test/view.c in investigating Debian #790847.
  + update autoconf patch to 2.52.20150926, which incorporates a fix for Cdk.
  + add workaround for breakage of POSIX makefiles by recent binutils change.
  + improve check for working poll() by using posix_openpt() as a fallback in case there is no valid terminal on the standard input (prompted by discussion on bug-ncurses mailing list, Debian #676461).

20150926
  + change makefile rule for removing resulting.map to distclean rather than clean.
  + add /lib/terminfo to terminfo-dirs in ".deb" test-package.
  + add note on portability of resizeterm and wresize to manual pages.

20150919
  + clarify in resizeterm.3x how KEY_RESIZE is pushed onto the input stream.
  + clarify in curs_getch.3x that the keypad mode affects the ability to read KEY_MOUSE codes, but does not affect KEY_RESIZE.
  + add overlooked build-fix needed with Cygwin for separate Ada95 configure script, cf: 20150606 (report by Nicolas Boulenguez).

20150912
  + fixes for configure/build using clang on OSX (prompted by report by William Gallafent).
    + do not redefine "inline" in ncurses_cfg.h; this was originally to solve a problem with gcc/g++, but is aggravated by clang's misuse of symbols to pretend it is gcc.
    + add braces to configure script to prevent unwanted add of "-lstdc++" to the CXXLIBS symbol.
    + improve/update test-program used for checking existence of stdc++ library.
    + if $CXXLIBS is set, the linkage test uses that in addition to $LIBS.

20150905
  + add note in curs_addch.3x about line-drawing when it depends upon UTF-8.
  + add tic -q option for consistency with infocmp, use it to suppress all comments from the "tic -I" output.
  + modify infocmp -q option to suppress the "Reconstructed from" header.
  + add infocmp/tic -Q option, which allows one to dump the compiled form of the terminal entry, in hexadecimal or base64.

20150822
  + sort options in usage message for infocmp, to make it simpler to see unused letters.
  + update usage message for tic, adding "-0" option.
  + documented differences in ESCDELAY versus AIX's implementation.
  + fix some compiler warnings from ports.
  + modify --with-pkg-config-libdir option to make it possible to install ".pc" files even if pkg-config is not found (adapted from patch by Joshua Root).

20150815
  + disallow "no" as a possible value for "--with-shlib-version" option, overlooked in cleanup-changes for 20000708 (report by Tommy Alex).
  + update release notes in INSTALL.
  + regenerate llib-* files to help with review for release notes.

20150810
  + workaround for Debian #65617, which was fixed in mawk's upstream releases in 2009 (report by Sven Joachim).
    See

20150808 6.0 release for upload to

20150808
  + build-fix for Ada95 on older platforms without stdint.h.
  + build-fix for Solaris, whose /bin/sh and /usr/bin/sed are non-POSIX.
  + update release announcement, summarizing more than 800 changes across more than 200 snapshots.
  + minor fixes to manpages, etc., to simplify linking from the announcement page.

20150725
  + updated llib-* files.
  + build-fixes for ncurses library "test_progs" rule.
  + use alternate workaround for gcc 5.x feature (adapted from patch by Mikhail Peselnik).
  + add status line to tmux via xterm+sl (patch by Nicholas Marriott).
  + fixes for st 0.5 from testing with tack -TD
  + review/improve several manual pages to break up wall-of-text: curs_add_wch.3x, curs_attr.3x, curs_bkgd.3x, curs_bkgrnd.3x, curs_getcchar.3x, curs_getch.3x, curs_kernel.3x, curs_mouse.3x, curs_outopts.3x, curs_overlay.3x, curs_pad.3x, curs_termattrs.3x, curs_trace.3x, and curs_window.3x.

20150719
  + correct an old logic error for %A and %O in tparm (report by "zreed").
  + improve documentation for signal handlers by adding a section in the curs_initscr.3x page.
  + modify logic in make_keys.c to not assume anything about the size of strnames and strfnames variables, since those may be functions in the thread- or broken-linker configurations (problem found by Coverity).
  + modify test/configure script to check for pthreads configuration, e.g., ncursestw library.

20150711
  + modify scripts to build/use test-packages for the pthreads configuration of ncurses6.
  + add references to ttytype and termcap symbols in demo_terminfo.c and demo_termcap.c to ensure that when building ncursest.map, etc., the corresponding names such as _nc_ttytype are added to the list of versioned symbols (report by Werner Fink).
  + fix regression from 20150704 (report/patch by Werner Fink).
20150704
  + fix a few problems reported by Coverity.
  + fix comparison against "/usr/include" in misc/gen-pkgconfig.in (report by Daiki Ueno, Debian #790548, cf: 20141213).

20150627
  + modify configure script to remove deprecated ABI 5 symbols when building ABI 6.
  + add symbols _nc_Default_Field, _nc_Default_Form, _nc_has_mouse to map-files, but marked as deprecated so that they can easily be suppressed from ABI 6 builds (Debian #788610).
  + comment-out "screen.xterm" entry, and inherit screen.xterm-256color from xterm-new (report by Richard Birkett) -TD
  + modify read_entry.c to set the error-return to -1 if no terminal databases were found, as documented for setupterm.
  + add test_setupterm.c to demonstrate normal/error returns from the setupterm and restartterm functions.
  + amend cleanup change from 20110813 which removed redundant definition of ret_error, etc., from tinfo_driver.c, to account for the fact that it should return a bool rather than int (report/analysis by Johannes Schindelin).

20150613
  + fix overflow warning for OSX with lib_baudrate.c (cf: 20010630).
  + modify script used to generate map/sym files to mark 5.9.20150530 as the last "5.9" version, and regenerated the files. That makes the files not use ".current" for the post-5.9 symbols. This also corrects the label for _nc_sigprocmask used when weak symbols are configured for the ncursest/ncursestw libraries (prompted by discussion with Sven Joachim).
  + fix typo in NEWS (report by Sven Joachim).

20150606 pre-release
  + make ABI 6 the default by updates to dist.mk and VERSION, with the intention that the existing ABI 5 should build as before using the "--with-abi-version=5" option.
  + regenerate ada- and man-html documentation.
  + minor fixes to color- and util-manpages.
  + fix a regression in Ada95/gen/Makefile.in, to handle the special case of Cygwin, which uses the broken-linker feature.
  + amend fix for CF_NCURSES_CONFIG used in test/configure to assume that ncurses package scripts work when present for cross-compiling, as the lesser of two evils (cf: 20150530).
  + add check in configure script to disallow conflicting options "--with-termlib" and "--enable-term-driver".
  + move defaults for "--disable-lp64" and "--with-versioned-syms" into CF_ABI_DEFAULTS macro.

20150530
  + change private type for Event_Mask in Ada95 binding to work when mmask_t is set to 32-bits.
  + remove spurious "%;" from st entry (report by Daniel Pitts) -TD
  + add vte-2014, update vte to use that -TD
  + modify tic and infocmp to "move" a diagnostic for tparm strings that have a syntax error to tic's "-c" option (report by Daniel Pitts).
  + fix two problems with configure script macros (Debian #786436, cf: 20150425, cf: 20100529).

20150523
  + add 'P' menu item to test/ncurses.c, to show pad in color.
  + improve discussion in curs_color.3x about color rendering (prompted by comment on Stack Overflow forum).
  + remove screen-bce.mlterm, since mlterm does not do "bce" -TD
  + add several screen.XXX entries to support the respective variations for 256 colors -TD
  + add putty+fnkeys* building-block entries -TD
  + add smkx/rmkx to capabilities analyzed with infocmp "-i" option.

20150516
  + amend change to ".pc" files to only use the extra loader flags which may have rpath options (report by Sven Joachim, cf: 20150502).
  + change versioning for dpkg's in test-packages for Ada95 and ncurses-examples for consistency with Debian, to work with package updates.
  + regenerate html manpages.
  + clarify handling of carriage return in the waddch manual page; it was discussed only in the portability section (prompted by comment on Stack Overflow forum).

20150509
  + add test-packages for cross-compiling ncurses-examples using the MinGW test-packages. These are only the Debian packages; RPM later.
  + cleanup format of debian/copyright files.
  + add pc-files to the MinGW cross-compiling test-packages.
  + correct a couple of places in gen-pkgconfig.in to handle renaming of the tinfo library.

20150502
  + modify the configure script to allow different default values for ABI 5 versus ABI 6.
  + add wgetch-events to test-packages.
  + add a note on how to build ncurses-examples to test/README.
  + fix a memory leak in delscreen (report by Daniel Kahn Gillmor, Debian #783486) -TD
  + remove unnecessary ';' from E3 capabilities -TD
  + add tmux entry, derived from screen (patch by Nicholas Marriott).
  + split-out recent change to nsterm-bce as nsterm-build326, and add nsterm-build342 to reflect changes with successive releases of OSX (discussion with Leonardo B Schenkel).
  + add xon, ich1, il1 to ibm3161 (patch by Stephen Powell, Debian #783806).
  + add sample "magic" file, to document ext-putwin.
  + modify gen-pkgconfig.in to add explicit -ltinfo, etc., to the generated ".pc" file when ld option "--as-needed" is used, or when ncurses and tinfo are installed without using rpath (prompted by discussion with Sylvain Bertrand).
  + modify test-package for ncurses6 to omit rpath feature when installed in /usr.
  + add OSX's "*.dSYM" to clean-rules in makefiles.
  + make extra-suffix work for OSX configuration, e.g., for shared libraries.
  + modify Ada95/configure script to work with pkg-config.
  + move test-package for ncurses6 to /usr, since filename-conflicts have been eliminated.
  + corrected build rules for Ada95/gen/generate; it does not depend on the ncurses library aside from headers.
  + reviewed man pages, fixed a few other spelling errors.
  + fix a typo in curs_util.3x (Sven Joachim).
  + use extra-suffix in some overlooked shared library dependencies found by 20150425 changes for test-packages.
  + update config.guess, config.sub from

20150425
  + expanded description of tgetstr's area pointer in manual page (report by Todd M Lewis).
  + in-progress changes to modify test-packages to use ncursesw6 rather than ncursesw, with updated configure scripts.
  + modify CF_NCURSES_CONFIG in Ada95- and test-configure scripts to check for ".pc" files via pkg-config, but add a linkage check since frequently pkg-config configurations are broken.
  + modify misc/gen-pkgconfig.in to include EXTRA_LDFLAGS, e.g., for the rpath option.
  + add 'dim' capability to screen entry (report by Leonardo B Schenkel).
  + add several key definitions to nsterm-bce to match preconfigured keys, e.g., with OSX 10.9 and 10.10 (report by Leonardo B Schenkel).
  + fix repeated "extra-suffix" in ncurses-config.in (cf: 20150418).
  + improve term_variables manual page, adding a section on the terminfo long-name symbols which are defined in the term.h header.
  + fix bug in lib_tracebits.c introduced in const-fixes (cf: 20150404).

20150418
  + avoid a blank line in output from tabs program by ending it with a carriage return as done in FreeBSD (patch by James Clarke).
  + build-fix for the "--enable-ext-putwin" feature when not using wide characters (report by Werner Fink).
  + modify autoconf macros to use scripting improvement from xterm.
  + add -brtl option to compiler options on AIX 5-7, needed to link with the shared libraries.
  + add --with-extra-suffix option to help with installing nonconflicting ncurses6 packages, e.g., avoiding header- and library-conflicts. NOTE: as a side-effect, this renames adacurses-config to adacurses5-config and adacursesw-config to adacursesw5-config.
  + modify debian/rules test package to suffix programs with "6".
  + clarify in curs_inopts.3x that window-specific settings do not inherit into new windows.

20150404
  + improve description of start_color() in the manual.
  + modify several files in ncurses- and progs-directories to allow const data used in internal tables to be put by the linker into the readonly text segment.

20150329
  + correct cut/paste error for "--enable-ext-putwin" that made it the same as "--enable-ext-colors" (report by Roumen Petrov).

20150328
  + add "-f" option to test/savescreen.c to help with testing/debugging the extended putwin/getwin.
  + add logic for writing/reading combining characters in the extended putwin/getwin.
  + add "--enable-ext-putwin" configure option to turn on the extended putwin/getwin.

20150321
  + in-progress changes to provide an extended version of putwin and getwin which will be capable of reading screen-dumps between the wide/normal ncurses configurations. These are text files, except for a magic code at the beginning:
      0  string  \210\210  Screen-dump (ncurses)

20150307
  + document limitations of getwin in manual page (prompted by discussion with John S Urban).
  + extend test/savescreen.c to demonstrate that color pair values and graphic characters can be restored using getwin.

20150228
  + modify win_driver.c to eliminate the constructor, to make it more usable in an application which may/may not need the console window (report by Grady Martin).
20150221
  + capture define's related to -D_XOPEN_SOURCE from the configure check and add those to the *-config and *.pc files, to simplify use for the wide-character libraries.
  + modify ncurses.spec to accommodate Fedora21's location of pkg-config directory.
  + correct sense of "--disable-lib-suffixes" configure option (report by Nicolas Boos, cf: 20140426).

20150214
  + regenerate html manpages using improved man2html from work on xterm.
  + regenerated ".map" and ".sym" files using improved script, accounting for the "--enable-weak-symbols" configure option (report by Werner Fink).

20150131
  + regenerated ".map" and ".sym" files using improved script, showing the combinations of configure options used at each stage.

20150124
  + add configure check to determine if "local: _*;" can be used in the ".map" files to selectively omit symbols beginning with "_". On at least recent FreeBSD, the wildcard applies to all "_" symbols.
  + remove obsolete/conflicting rule for ncurses.map from ncurses/Makefile.in (cf: 20130706).

20150117
  + improve description in INSTALL of the --with-versioned-syms option.
  + add combination of --with-hashed-db and --with-ticlib to configurations for ".map" files (report by Werner Fink).

20150110
  + add a step to generating ".map" files, to declare any remaining symbols beginning with "_" as local, at the last version node.
  + improve configure checks for pkg-config, addressing a variant found with FreeBSD ports.
  + modify win_driver.c to provide characters for special keys, like ansi.sys, when keypad mode is off, rather than returning nothing at all (discussion with Eli Zaretskii).
  + add "broken_linker" and "hashed-db" configure options to combinations used for generating the ".map" and ".sym" files.
  + avoid using "ld" directly when creating shared library, to simplify cross-compiles.
    Also drop "-Bsharable" option from shared-library rules for FreeBSD and DragonFly (FreeBSD #196592).
  + fix a memory leak in form library Free_RegularExpression_Type() (report by Pavel Balaev).

20150103
  + modify _nc_flush() to retry if interrupted (patch by Stian Skjelstad).
  + change map files to make _nc_freeall a global, since it may be used via the Ada95 binding when checking for memory leaks.
  + improve sed script used in 20141220 to account for wide-, threaded- variations in ABI 6.

20141227
  + regenerate ".map" files, using step overlooked in 20141213 to use the same patch-dates across each file to match ncurses.map (report by Sven Joachim).

20141221
  + fix an incorrect variable assignment in 20141220 changes (report by Sven Joachim).

20141220
  + updated Ada95/configure with macro changes from 20141213.
  + tie configure options --with-abi-version and --with-versioned-syms together, so that ABI 6 libraries have distinct symbol versions from the ABI 5 libraries.
  + replace obsolete/nonworking link to man2html with current one, regenerate html-manpages.

20141213
  + modify misc/gen-pkgconfig.in to add -I option for include-directory when using both --prefix and --disable-overwrite (report by Misty De Meo).
  + add configure option --with-pc-suffix to allow minor renaming of ".pc" files and the corresponding library. Use this in the test package for ncurses6.
  + modify configure script so that if pkg-config is not installed, it is still possible to install ".pc" files (report by Misty De Meo).
  + updated ".sym" files, removing symbols which are marked as "local" in the corresponding ".map" files.
  + updated ".map" files to reflect move of comp_captab and comp_hash from tic-library to tinfo-library in 20090711 (report by Sven Joachim).
20141206
  + updated ".map" files so that each symbol that may be shared across the different library configurations has the same label. Some review is needed to ensure these are really compatible.
  + modify MKlib_gen.sh to work around change in development version of gcc introduced here:

    (reports by Marcus Shawcroft, Maohui Lei).
  + improved configure macro CF_SUBDIR_PATH, from lynx changes.

20141129
  + improved ".map" files by generating them with a script that builds ncurses with several related configurations and merges the results. A further refinement is planned, to make the tic- and tinfo-library symbols use the same versions across each of the four configurations which are represented (reports by Sven Joachim, Werner Fink).

20141115
  + improve description of limits for color values and color pairs in curs_color.3x (prompted by patch by Tim van der Molen).
  + add VERSION file, using the first field in that to record the ABI version used for configure --with-libtool --disable-libtool-version.
  + add configure options for applying the ".map" and ".sym" files to the ncurses, form, menu and panel libraries.
  + add ".map" and ".sym" files to show exported symbols, e.g., for symbol-versioning.

20141101
  + improve strict compiler-warnings by adding a cast in TRACE_RETURN and making a new TRACE_RETURN1 macro for cases where the cast does not apply.

20141025
  + in-progress changes to integrate the win32 console driver with the msys2 configuration.

20141018
  + reviewed terminology 0.6.1, add function key definitions. None of the vt100-compatibility issues were improved -TD
  + improve infocmp conversion of extended capabilities to termcap by correcting the limit check against parametrized[], as well as filling in a check if the string happens to have parameters, e.g., "xm" in recent changes.
  + add check for zero/negative dimensions for resizeterm and resize_term (report by Mike Gran).

20141011
  + add experimental support for xterm's 1005 mouse mode, to use in a demonstration of its limitations.
  + add experimental support for "%u" format to terminfo.
  + modify test/ncurses.c to also show position reports in 'a' test.
  + minor formatting fixes to _nc_trace_mmask_t, make this function exported to help with debugging mouse changes.
  + improve behavior of wheel-mice for xterm protocol, noting that there are only button-presses for buttons "4" and "5", so there is no need to wait to combine events into double-clicks (report/analysis by Greg Field).
  + provide example xterm-1005 and xterm-1006 terminfo entries -TD
  + implement decoder for xterm SGR 1006 mouse mode.

20140927
  + implement curs_set in win_driver.c.
  + implement flash in win_driver.c.
  + fix an infinite loop in win_driver.c if the command-window loses focus.
  + improve the non-buffered mode, i.e., NCURSES_CONSOLE2, of win_driver.c by temporarily changing the buffer-size to match the window-size to eliminate the scrollback. Also enforce a minimum screen-size of 24x80 in the non-buffered mode.
  + modify generated misc/Makefile to suppress install.data from the dependencies if the --disable-db-install option is used, compensating for the top-level makefile changes used to add ncurses*-config in the 20140920 changes (report by Steven Honeyman).

20140920
  + add ncurses*-config to bin-directory of sample package-scripts.
  + add check to ensure that getopt is available; this is a problem in some older cross-compiler environments.
  + expanded on the description of --disable-overwrite in INSTALL (prompted by reports by Joakim Tjernlund, Thomas Klausner). See Gentoo #522586 and NetBSD #49200 for examples.
    which relates to the clarified guidelines.
  + remove special logic from CF_INCLUDE_DIRS which adds the directory for the --includedir from the build (report by Joakim Tjernlund).
  + add case for Unixware to CF_XOPEN_SOURCE, from lynx changes.
  + update config.sub from

20140913
  + add a configure check to ignore some of the plethora of non-working C++ cross-compilers.
  + build-fixes for Ada95 with gnat 4.9.

20140906
  + build-fix and other improvements for port of ncurses-examples to NetBSD.
  + minor compiler-warning fixes.

20140831
  + modify test/demo_termcap.c and test/demo_terminfo.c to make their options more directly comparable, and add "-i" option to specify a terminal description filename to parse for names to lookup.

20140823
  + fix special case where double-width character overwrites a single-width character in the first column (report by Egmont Koblinger, cf: 20050813).

20140816
  + fix colors in ncurses 'b' test which did not work after changing it to put the test-strings in subwindows (cf: 20140705).
  + merge redundant SEE-ALSO sections in form and menu manpages.

20140809
  + modify declarations for user-data pointers in C++ binding to use reinterpret_cast to facilitate converting typed pointers to void* in user's application (patch by Adam Jiang).
  + regenerated html manpages.
  + add note regarding cause and effect for TERM in ncurses manpage, having noted clueless verbiage in Terminal.app's "help" file which reverses cause/effect.
  + remove special fallback definition for NCURSES_ATTR_T, since macros have resolved type-mismatches using casts (cf: 970412).
  + fixes for win_driver.c:
    + handle repainting on endwin/refresh combination.
    + implement beep().
    + minor cleanup.
20140802
  + minor portability fixes for MinGW:
    + ensure WINVER is defined in makefiles rather than using headers
    + add check for gnatprep "-T" option
    + work around bug introduced by gcc 4.8.1 in MinGW which breaks "trace" feature:

  + fix most compiler warnings for Cygwin ncurses-examples.
  + restore "redundant" -I options in test/Makefile.in, since they are typically needed when building the derived ncurses-examples package (cf: 20140726).

20140726
  + eliminate some redundant -I options used for building libraries, and ensure that ${srcdir} is added to the include-options (prompted by discussion with Paul Gilmartin).
  + modify configure script to work with Minix3.2.
  + add form library extension O_DYNAMIC_JUSTIFY option which can be used to override the different treatment of justification for static versus dynamic fields (adapted from patch by Leon Winter).
  + add a null pointer check in test/edit_field.c (report/analysis by Leon Winter, cf: 20130608).

20140719
  + make workarounds for compiling test-programs with NetBSD curses.
  + improve configure macro CF_ADD_LIBS, to eliminate repeated -l/-L options, from xterm changes.

20140712
  + correct Charable() macro check for A_ALTCHARSET in wide-characters.
  + build-fix for position-debug code in tty_update.c, to work with or without sp-funcs.

20140705
  + add w/W toggle to ncurses.c 'B' test, to demonstrate permutation of video-attributes and colors with double-width character strings.

20140629
  + correct check in win_driver.c for saving screen contents, e.g., when NCURSES_CONSOLE2 is set (cf: 20140503).
  + reorganize b/B menu items in ncurses.c, putting the test-strings into subwindows. This is needed for a planned change to use Unicode fullwidth characters in the test-screens.
  + correct update to form status for _NEWTOP, broken by fixes for compiler warnings (patch by Leon Winter, cf: 20120616).

20140621
  + change shared-library suffix for AIX 5 and 6 to ".so", avoiding conflict with the static library (report by Ben Lentz).
  + document RPATH_LIST in INSTALLATION file, as part of workarounds for upgrading an ncurses library using the "--with-shared" option.
  + modify test/ncurses.c c/C tests to cycle through subsets of the total number of colors, to better illustrate 8/16/88/256-colors by providing directly comparable screens.
  + add test/dots_curses.c, for comparison with the low-level examples.

20140614
  + fix dereference before null check found by Coverity in tic.c (cf: 20140524).
  + fix sign-extension bug in read_entry.c which prevented "toe" from reading empty "screen+italics" entry.
  + modify sgr for screen.xterm-new to support dim capability -TD
  + add dim capability to nsterm+7 -TD
  + cancel dim capability for iterm -TD
  + add dim, invis capabilities to vte-2012 -TD
  + add sitm/ritm to konsole-base and mlterm3 -TD

20140609
  > fix regression in screen terminfo entries (reports by Christian Ebert, Gabriele Balducci) -TD
    + revert the change to screen; see notes for why this did not work -TD
    + cancel sitm/ritm for entries which extend "screen", to work around screen's hardcoded behavior for SGR 3 -TD

20140607
  + separate masking for sgr in vidputs from sitm/ritm, which do not overlap with sgr functionality.
  + remove unneeded -i option from adacurses-config; put -a in the -I option for consistency (patch by Pascal Pignard).
  + update xterm-new terminfo entry to xterm patch #305 -TD
  + change format of test-scripts for Debian Ada95 and ncurses-examples packages to quilted, to work around Debian #700177 (cf: 20130907).
  + build fix for form_driver_w.c as part of ncurses-examples package for older ncurses than 20131207.
  + add Hello World example to adacurses-config manpage.
  + remove unused --enable-pc-files option from Ada95/configure.
  + add --disable-gnat-projects option for testing.
  + revert changes to Ada95 project-files configuration (cf: 20140524).
  + corrected usage message in adacurses-config.

20140524
  + fix typo in ncurses manpage for the NCURSES_NO_MAGIC_COOKIE environment variable.
  + improve discussion of input-echoing in curs_getch.3x.
  + clarify discussion in curs_addch.3x of wrapping.
  + modify parametrized.h to make fln non-padded.
  + correct several entries which had termcap-style padding used in terminfo: adm21, aj510, alto-h19, att605-pc, x820 -TD
  + correct syntax for padding in some entries: dg211, h19 -TD
  + correct ti924-8 which had confused padding versus octal escapes -TD
  + correct padding in sbi entry -TD
  + fix an old bug in the termcap emulation; "%i" was ignored in tparm() because the parameters to be incremented were already on the internal stack (report by Corinna Vinschen).
  + modify tic's "-c" option to take into account the "-C" option to activate additional checks which compare the results from running tparm() on the terminfo expressions versus the translated termcap expressions.
  + modify tic to allow it to read from FIFOs (report by Matthieu Fronton, cf: 20120324).
  > patches by Nicolas Boulenguez:
    + explicit dereferences to suppress some style warnings.
    + when c_varargs_to_ada.c includes its header, use double quotes instead of <>.
    + samples/ncurses2-util.adb: removed unused with clause. The warning was removed by an obsolete pragma.
    + replaced Unreferenced pragmas with Warnings (Off). The latter, available with older GNATs, needs no configure test. This also replaces 3 untested Unreferenced pragmas.
+ simplified To_C usage in trace handling.  Using two parameters allows
  some basic formatting, and avoids a warning about security with some
  compiler flags.
+ for generated Ada sources, replace many snippets with one pure
  package.
+ removed C_Chtype and its conversions.
+ removed C_AttrType and its conversions.
+ removed conversions between int, Item_Option_Set, Menu_Option_Set.
+ removed int, Field_Option_Set, Item_Option_Set conversions.
+ removed C_TraceType, Attribute_Option_Set conversions.
+ replaced C.int with direct use of Eti_Error, now enumerated.  As it
  was used in a case statement, values were tested by the Ada compiler
  to be consecutive anyway.
+ src/Makefile.in: remove duplicate stanza
+ only consider using a project for shared libraries.
+ style.  Silent gnat-4.9 warning about misplaced "then".
+ generate shared library project to honor ADAFLAGS, LDFLAGS.

20140510
+ cleanup recently introduced compiler warnings for MingW port.
+ workaround for ${MAKEFLAGS} configure check versus GNU make 4.0,
  which introduces more than one gratuitous incompatibility.

20140503
+ add vt520ansi terminfo entry (patch by Mike Gran)
+ further improve MinGW support for the scenario where there is an
  ANSI-escapes handler such as ansicon running in the console window
  (patch by Juergen Pfeifer).

20140426
+ add --disable-lib-suffixes option (adapted from patch by Juergen
  Pfeifer).
+ merge some changes from Juergen Pfeifer's work with MSYS2, to
  simplify later merging:
  + use NC_ISATTY() macro for isatty() in library
  + add _nc_mingw_isatty() and related functions to windows-driver
  + rename terminal driver entrypoints to simplify grep's
+ remove a check in the sp-funcs flavor of newterm() which allowed only
  the first call to newterm() to succeed (report by Thomas Beierlein,
  cf: 20090927).
20140419
+ update config.guess, config.sub from

20140412
+ modify configure script:
  + drop the -no-gcc option from Intel compiler, from lynx changes.
  + extend the --with-hashed-db configure option to simplify building
    with different versions of Berkeley database using FreeBSD ports.
+ improve initialization for MinGW port (Juergen Pfeifer):
  + enforce Windows-style path-separator if cross-compiling,
  + add a driver-name method to each of the drivers,
  + allow the Windows driver name to match "unknown", ignoring case,
  + lengthen the built-in name for the Windows console driver to
    "#win32console", and
  + move the comparison of driver-names allowing abbreviation, e.g.,
    to "#win32con" into the Windows console driver.

20140329
+ add check in tic for mismatch between ccc and initp/initc
+ cancel ccc in putty-256color and konsole-256color for consistency
  with the cancelled initc capability (patch by Sven Zuhlsdorf).
+ add xterm+256setaf building block for various terminals which only
  get the 256-color feature half-implemented -TD
+ updated "st" entry (leaving the 0.1.1 version as "simpleterm") to
  0.4.1 -TD

20140323
+ fix typo in "mlterm" entry (report by Gabriele Balducci) -TD

20140322
+ use types from <stdint.h> in sample build-scripts for chtype, etc.
+ modify configure script and curses.h.in to allow the types specified
  using --with-chtype and related options to be defined in <stdint.h>
+ add terminology entry -TD
+ add mlterm3 entry, use that as "mlterm" -TD
+ inherit mlterm-256color from mlterm -TD

20140315
+ modify _nc_New_TopRow_and_CurrentItem() to ensure that the menu's
  top-row is adjusted as needed to ensure that the current item is
  on the screen (patch by Johann Klammer).
+ add wgetdelay() to retrieve _delay member of WINDOW if it happens to
  be opaque, e.g., in the pthread configuration (prompted by patch by
  Soren Brinkmann).

20140308
+ modify ifdef in read_entry.c to handle the case where
  NCURSES_USE_DATABASE is not defined (patch by Xin Li).
+ add cast in form_driver_w() to fix ARM build (patch by Xin Li).
+ add logic to win_driver.c to save/restore screen contents when not
  allocating a console-buffer (cf: 20140215).

20140301
+ clarify error-returns from newwin (report by Ruslan Nabioullin).

20140222
+ fix some compiler warnings in win_driver.c
+ updated notes for wsvt25 based on tack and vttest -TD
+ add teken entry to show actual properties of FreeBSD's "xterm"
  console -TD

20140215
+ in-progress changes to win_driver.c to implement output without
  allocating a console-buffer.  This uses a pre-existing environment
  variable NCGDB used by Juergen Pfeifer for debugging (prompted by
  discussion with Erwin Waterlander regarding Console2, which hangs
  when reading in an allocated console-buffer).
+ add -t option to gdc.c, and modify to accept "S" to step through the
  scrolling-stages.
+ regenerate NCURSES-Programming-HOWTO.html to fix some of the broken
  html emitted by docbook.

20140209
+ modify CF_XOPEN_SOURCE macro to omit followup check to determine if
  _XOPEN_SOURCE can/should be defined.  g++ 4.7.2 built on Solaris 10
  has some header breakage due to its own predefinition of this symbol
  (report by Jean-Pierre Flori, Sage #15796).

20140201
+ add/use symbol NCURSES_PAIRS_T like NCURSES_COLOR_T, to illustrate
  which "short" types are for color pairs and which are color values.
+ fix build for s390x, by correcting field bit offsets in generated
  representation clauses when int=32 long=64 and endian=big, or at
  least on s390x (patch by Nicolas Boulenguez).
+ minor cleanup change to test/form_driver_w.c (patch by Gaute Hope).

20140125
+ remove unnecessary ifdef's in Ada95/gen/gen.c, which reportedly do
  not work as is with gcc 4.8 due to fixes using chtype cast made for
  new compiler warnings by gcc 4.8 in 20130824 (Debian #735753, patch
  by Nicolas Boulenguez).

20140118
+ apply includesubdir variable which was introduced in 20130805 to
  gen-pkgconfig.in (Debian #735782).

20131221
+ further improved man2html, used this to fix broken links in html
  manpages.  See

20131214
+ modify configure-script/ifdef's to allow OLD_TTY feature to be
  suppressed if the type of ospeed is configured using the option
  --with-ospeed to not be a short.  By default, it is a short for
  termcap-compatibility (adapted from suggestion by Christian
  Weisgerber).
+ correct a typo in _nc_baudrate() (patch by Christian Weisgerber,
  cf: 20061230).
+ fix a few -Wlogical-op warnings.
+ updated llib-l* files.

20131207
+ add form_driver_w() entrypoint to wide-character forms library, as
  well as test program form_driver_w (adapted from patch by Gaute
  Hope).

20131123
+ minor fix for CF_GCC_WARNINGS to special-case options which are not
  recognized by clang.

20131116
+ add special case to configure script to move _XOPEN_SOURCE_EXTENDED
  definition from CPPFLAGS to CFLAGS if it happens to be needed for
  Solaris, because g++ errors with that definition (report by
  Jean-Pierre Flori, Sage #15268).
+ correct logic in infocmp's -i option which was intended to ignore
  strings which correspond to function-keys as candidates for piecing
  together initialization- or reset-strings.  The problem dates to
  1.9.7a, but was overlooked until changes in -Wlogical-op warnings for
  gcc 4.8 (report by David Binderman).
+ updated CF_GCC_WARNINGS to documented options for gcc 4.9.0, moving
  checks for -Wextra and -Wdeclaration-after-statement into the macro,
  and adding checks for -Wignored-qualifiers, -Wlogical-op and
  -Wvarargs
+ updated CF_CURSES_UNCTRL_H and CF_SHARED_OPTS macros from ongoing
  work on cdk.
+ update config.sub from

20131110
+ minor cleanup of terminfo.tail

20131102
+ use TS extension to describe xterm's title-escapes -TD
+ modify terminator and nsterm-s to use xterm+sl-twm building block -TD
+ update hurd.ti, add xenl to reflect 2011-03-06 change in
  (Debian #727119).
+ simplify pfkey expression in ansi.sys -TD

20131027
+ correct/simplify ifdef's for cur_term versus broken-linker and
  reentrant options (report by Jean-Pierre Flori, cf: 20090530).
+ modify release/version combinations in test build-scripts to make
  them more consistent with other packages.

20131019
+ add nc_mingw.h to installed headers for MinGW port; needed for
  compiling ncurses-examples.
+ add rpm-script for testing cross-compile of ncurses-examples.

20131014
+ fix new typo in CF_ADA_INCLUDE_DIRS macro (report by Roumen Petrov).

20131012
+ fix a few compiler warnings in progs and test.
+ minor fix to package/debian-mingw/rules, do not strip dll's.
+ minor fixes to configure script for empty $prefix, e.g., when doing
  cross-compiles to MinGW.
+ add script for building test-packages of binaries cross-compiled to
  MinGW using NSIS.
20131005
+ minor fixes for ncurses-example package and makefile.
+ add scripts for test-builds of cross-compiler packages for ncurses6
  to MinGW.

20130928
+ some build-fixes for ncurses-examples with NetBSD-6.0 curses, though
  it lacks some common functions such as use_env() which is not yet
  addressed.
+ build-fix and some compiler warning fixes for ncurses-examples with
  OpenBSD 5.3
+ fix a possible null-pointer reference in a trace message from newterm.
+ quiet a few warnings from NetBSD 6.0 namespace pollution by
  nonstandard popcount() function in standard strings.h header.
+ ignore g++ 4.2.1 warnings for "-Weffc++" in c++/cursesmain.cc
+ fix a few overlooked places for --enable-string-hacks option.

20130921
+ fix typo in curs_attr.3x (patch by Sven Joachim, cf: 20130831).
+ build-fix for --with-shared option for DragonFly and FreeBSD (report
  by Rong-En Fan, cf: 20130727).

20130907
+ build-fixes for MSYS for two test-programs (patches by Ray Donnelly,
  Alexey Pavlov).
+ revert change to two of the dpkg format files, to work with dpkg
  before/after Debian #700177.
+ fix gcc -Wconversion warning in wattr_get() macro.
+ add msys and msysdll to known host/configuration types (patch by
  Alexey Pavlov).
+ modify CF_RPATH_HACK configure macro to not rely upon "-u" option
  of sort, improving portability.
+ minor improvements for test-programs from reviewing Solaris port.
+ update config.guess, config.sub from

20130831
+ modify test/ncurses.c b/B tests to display lines only for the
  attributes which a given terminal supports, to make room for an
  italics test.
+ completed ncv table in terminfo.tail; it did not list the wide
  character codes listed in X/Open Curses issue 7.
+ add A_ITALIC extension (prompted by discussion with Egmont Koblinger).
20130824
+ fix some gcc 4.8 -Wconversion warnings.
+ change format of dpkg test-scripts to quilted to work around bug
  introduced by Debian #700177.
+ discard cached keyname() values if meta() is changed after a value
  was cached (report by Kurban Mallachiev).

20130816
+ add checks in tic to warn about terminals which lack cursor
  addressing, capabilities or having those, are marked as hard_copy or
  generic_type.
+ use --without-progs in mingw-ncurses rpm.
+ split out _nc_init_termtype() from alloc_entry.c to use in MinGW
  port when tic and other programs are not needed.

20130805
+ minor fixes to the --disable-overwrite logic, to ensure that the
  configured $(includedir) is not cancelled by the mingwxx-filesystem
  rpm macros.
+ add --disable-db-install configure option, to simplify building
  cross-compile support packages.
+ add mingw-ncurses.spec file, for testing cross-compiles.

20130727
+ improve configure macros from ongoing work on cdk, dialog, xterm:
  + CF_ADD_LIB_AFTER - fix a problem with -Wl options
  + CF_RPATH_HACK - add missing result-message
  + CF_SHARED_OPTS - modify to use $rel_builddir in cygwin and mingw
    dll symbols (which can be overridden) rather than explicit "../".
  + CF_SHARED_OPTS - modify NetBSD and DragonFly symbols to use ${CC}
    rather than ${LD} to improve rpath support.
  + CF_SHARED_OPTS - add a symbol to denote the temporary files that
    are created by the macro, to simplify clean-rules.
  + CF_X_ATHENA - trim extra libraries to work with -Wl,--as-needed
+ fix a regression in hashed-database support for NetBSD, which uses
  the key-size differently from other implementations (cf: 20121229).

20130720
+ further improvements for setupterm manpage, clarifying the
  initialization of cur_term.
20130713
+ improve manpages for initscr and setupterm.
+ minor compiler-warning fixes

20130706
+ add fallback defs for <inttypes.h> and <stdint.h> (cf: 20120225).
+ add check for size of wchar_t, use that to suppress a chunk of
  wcwidth.h in MinGW port.
+ quiet linker warnings for MinGW cross-compile with dll's using the
  --enable-auto-import flag.
+ add ncurses.map rule to ncurses/Makefile to help diagnose symbol
  table issues.

20130622
+ modify the clear program to take into account the E3 extended
  capability to clear the terminal's scrollback buffer (patch by
  Miroslav Lichvar, Redhat #815790).
+ clarify in resizeterm manpage that LINES and COLS are updated.
+ updated ansi example in terminfo.tail, correct misordered example
  of sgr.
+ fix other doclifter warnings for manpages
+ remove unnecessary ".ta" in terminfo.tail, add missing ".fi"
  (patch by Eric Raymond).

20130615
+ minor changes to some configure macros to make them more reusable.
+ fixes for tabs program (prompted by report by Nick Andrik):
  + corrected logic in command-line parsing of -a and -c predefined
    tab-lists options.
  + allow "-0" and "-8" options to be combined with others, e.g., "-0d".
  + make warning messages more consistent with the other utilities by
    not printing the full pathname of the program.
  + add -V option for consistency with other utilities.
+ fix off-by-one in columns for tabs program when processing an option
  such as "-5" (patch by Nick Andrik).

20130608
+ add to test/demo_forms.c examples of using the menu-hooks as well
  as showing how the menu item user-data can be used to pass a callback
  function pointer.
+ add test/dots_termcap.c
+ remove setupterm call from test/demo_termcap.c
+ build-fix if --disable-ext-funcs configure option is used.
+ modified test/edit_field.c and test/demo_forms.c to move the lengths
  into a user-data structure, keeping the original string for later
  expansion to free-format input/out demo.
+ modified test/demo_forms.c to load data from file.
+ added note to clarify Terminal.app's non-emulation of the various
  terminal types listed in the preferences dialog -TD
+ fix regression in error-reporting in lib_setup.c (Debian #711134,
  cf: 20121117).
+ build-fix for a case where --enable-broken_linker and
  --enable-reentrant options are combined (report by George R Goffe).

20130525
+ modify mvcur() to distinguish between internal use by the ncurses
  library, and external callers, preventing it from reading the content
  of the screen which is only nonblank when curses calls have updated
  it.  This makes test/dots_mvcur.c avoid painting colored cells in
  the left margin of the display.
+ minor fix to test/dots_mvcur.c
+ move configured symbols USE_DATABASE and USE_TERMCAP to term.h as
  NCURSES_USE_DATABASE and NCURSES_USE_TERMCAP to allow consistent
  use of these symbols in term_entry.h

20130518
+ corrected ifdefs in test/testcurs.c to allow comparison of mouse
  interface versus pdcurses (cf: 20130316).
+ add pow() to configure-check for math library, needed since
  20121208 for test/hanoi (Debian #708056).
+ regenerated html manpages.
+ update doctype used for html documentation.

20130511
+ move nsterm-related entries out of "obsolete" section to more
  plausible "ansi consoles" -TD
+ additional cleanup of table-of-contents by reordering -TD
+ revise fix for check for 8-bit value in _nc_insert_ch(); prior fix
  prevented inserts when video attributes were attached to the data
  (cf: 20121215) (Redhat #959534).
20130504
+ fixes for issues found by Coverity:
  + correct FNKEY() macro in progs/dump_entry.c, allowing kf11-kf63 to
    display when infocmp's -R option is used for HP or AIX subsets.
  + fix dead-code issue with test/movewindow.c
  + improve limited-checking in _nc_read_termtype().

20130427
+ fix clang 3.2 warning in progs/dump_entry.c
+ drop AC_TYPE_SIGNAL check; ncurses relies on c89 and later.

20130413
+ add MinGW to cases where ncurses installs by default into /usr
  (prompted by discussion with Daniel Silva Ferreira).
+ add -D option to infocmp's usage-message (patch by Miroslav Lichvar).
+ add a missing 'int' type for main function in configure check for
  type of bool variable, to work with clang 3.2 (report by Dmitri
  Gribenko).
+ improve configure check for static_cast, to work with clang 3.2
  (report by Dmitri Gribenko).
+ re-order rule for demo.o and macros defining header dependencies in
  c++/Makefile.in to accommodate gmake (report by Dmitri Gribenko).

20130406
+ improve parameter checking in copywin().
+ modify configure script to work around OS X's "libtool" program, to
  choose glibtool instead.  At the same time, change the autoconf macro
  to look for a "tool" rather than a "prog", to help with potential use
  in cross-compiling.
+ separate the rpath usage for c++ library from demo program
  (Redhat #911540)
+ update/correct header-dependencies in c++ makefile (report by Werner
  Fink).
+ add --with-cxx-shared to dpkg-script, as done for rpm-script.

20130324
+ build-fix for libtool configuration (reports by Daniel Silva Ferreira
  and Roumen Petrov).

20130323
+ build-fix for OS X, to handle changes for --with-cxx-shared feature
  (report by Christian Ebert).
+ change initialization for vt220, similar entries for consistency
  with cursor-key strings (NetBSD #47674) -TD
+ further improvements to linux-16color (Benjamin Sittler)

20130316
+ additional fix for tic.c, to allocate missing buffer space.
+ eliminate configure-script warnings for gen-pkgconfig.in
+ correct typo in sgr string for sun-color,
  add bold for consistency with sgr,
  change smso for consistency with sgr -TD
+ correct typo in sgr string for terminator -TD
+ add blink to the attributes masked by ncv in linux-16color (report
  by Benjamin Sittler)
+ improve warning message from post-load checking for missing "%?"
  operator by tic/infocmp by showing the entry name and capability.
+ minor formatting improvement to tic/infocmp -f option to ensure
  line split after "%;".
+ amend scripting for --with-cxx-shared option to handle the debug
  library "libncurses++_g.a" (report by Sven Joachim).

20130309
+ amend change to toe.c for reading from /dev/zero, to ensure that
  there is a buffer for the temporary filename (cf: 20120324).
+ regenerated html manpages.
+ fix typo in terminfo.head (report by Sven Joachim, cf: 20130302).
+ updated some autoconf macros:
  + CF_ACVERSION_CHECK, from byacc 1.9 20130304
  + CF_INTEL_COMPILER, CF_XOPEN_SOURCE from luit 2.0-20130217
+ add configure option --with-cxx-shared to permit building
  libncurses++ as a shared library when using g++, e.g., the same
  limitations as libtool but better integrated with the usual build
  configuration (Redhat #911540).
+ modify MKkey_defs.sh to filter out build-path which was unnecessarily
  shown in curses.h (Debian #689131).

20130302
+ add section to terminfo manpage discussing user-defined capabilities.
+ update manpage description of NCURSES_NO_SETBUF, explaining why it
  is obsolete.
+ add a check in waddch_nosync() to ensure that tab characters are
  treated as control characters; some broken locales claim they are
  printable.
+ add some traces to the Windows console driver.
+ initialize a temporary array in _nc_mbtowc, needed for some cases
  of raw input in MinGW port.

20130218
+ correct ifdef on change to lib_twait.c (report by Werner Fink).
+ update config.guess, config.sub

20130216
+ modify test/testcurs.c to work with mouse for ncurses as it does for
  pdcurses.
+ modify test/knight.c to work with mouse for pdcurses as it does for
  ncurses.
+ modify internal recursion in wgetch() which handles cooked mode to
  check if the call to wgetnstr() returned an error.  This can happen
  when both nocbreak() and nodelay() are set, for instance (report by
  Nils Christopher Brause) (cf: 960418).
+ fixes for issues found by Coverity:
  + add a check for valid position in ClearToEOS()
  + fix in lib_twait.c when --enable-wgetch-events is used, pointer
    use after free.
  + improve a limit-check in make_hash.c
  + fix a memory leak in hashed_db.c

20130209
+ modify test/configure script to make it simpler to override names
  of curses-related libraries, to help with linking with pdcurses in
  MinGW environment.
+ if the --with-terminfo-dirs configure option is not used, there is
  no corresponding compiled-in value for that.  Fill in "no default
  value" for that part of the manpage substitution.

20130202
+ correct initialization in knight.c which let it occasionally make
  an incorrect move (cf: 20001028).
+ improve documentation of the terminfo/termcap search path.

20130126
+ further fixes to mvcur to pass callback function (cf: 20130112),
  needed to make test/dots_mvcur work.
+ reduce calls to SetConsoleActiveScreenBuffer in win_driver.c, to
  help reduce flicker.
+ modify configure script to omit "+b" from linker options for very
  old HP-UX systems (report by Dennis Grevenstein)
+ add HP-UX workaround for missing EILSEQ on old HP-UX systems (patch
  by Dennis Grevenstein).
+ restore memmove/strdup support for antique systems (request by
  Dennis Grevenstein).
+ change %l behavior in tparm to push the string length onto the stack
  rather than saving the formatted length into the output buffer
  (report by Roy Marples, cf: 980620).

20130119
+ fixes for issues found by Coverity:
  + fix memory leak in safe_sprintf.c
  + add check for return-value in tty_update.c
  + correct initialization for -s option in test/view.c
  + add check for numeric overflow in lib_instr.c
  + improve error-checking in copywin
+ add advice in infocmp manpage for termcap users (Debian #698469).
+ add "-y" option to test/demo_termcap and test/demo_terminfo to
  demonstrate behavior with/without extended capabilities.
+ updated termcap manpage to document legacy termcap behavior for
  matching capability names.
+ modify name-comparison for tgetstr, etc., to accommodate legacy
  applications as well as to improve compatibility with BSD 4.2
  termcap implementations (Debian #698299) (cf: 980725).

20130112
+ correct prototype in manpage for vid_puts.
+ drop ncurses/tty/tty_display.h, ncurses/tty/tty_input.h, since they
  are unused in the current driver model.
+ modify mvcur to use stdout except when called within the ncurses
  library.
+ modify vidattr and vid_attr to use stdout as documented in manpage.
+ amend changes made to buffering in 20120825 so that the low-level
  putp() call uses stdout rather than ncurses' internal buffering.
  The putp_sp() call does the same, for consistency (Redhat #892674).
20130105
+ add "-s" option to test/view.c to allow it to start in single-step
  mode, reducing size of trace files when it is used for debugging
  MinGW changes.
+ revert part of 20121222 change to tinfo_driver.c
+ add experimental logic in win_driver.c to improve optimization of
  screen updates.  This does not yet work with double-width characters,
  so it is ifdef'd out for the moment (prompted by report by Erwin
  Waterlander regarding screen flicker).

20121229
+ fix coverity warnings regarding copying into fixed-size buffers.
+ add throw-declarations in the c++ binding per Coverity warning.
+ minor changes to new-items for consistent reference to bug-report
  numbers.

20121222
+ add *.dSYM directories to clean-rule in ncurses directory makefile,
  for Mac OS builds.
+ add a configure check for gcc option -no-cpp-precomp, which is not
  available in all Mac OS X configurations (report by Andras Salamon,
  cf: 20011208).
+ improve 20021221 workaround for broken acs, handling a case where
  that ACS_xxx character is not in the acsc string but there is a known
  wide-character which can be used.

20121215
+ fix several warnings from clang 3.1 --analyze, includes correcting
  a null-pointer check in _nc_mvcur_resume.
+ correct display of double-width characters with MinGW port (report
  by Erwin Waterlander).
+ replace MinGW's wcrtomb(), fixing a problem with _nc_viscbuf
> fixes based on Coverity report:
  + correct coloring in test/bs.c
  + correct check for 8-bit value in _nc_insert_ch().
  + remove dead code in progs/tset.c, test/linedata.h
  + add null-pointer checks in lib_tracemse.c, panel.priv.h, and some
    test-programs.
20121208
+ modify test/knight.c to show the number of choices possible for
  each position in automove option, e.g., to allow user to follow
  Warnsdorff's rule to solve the puzzle.
+ modify test/hanoi.c to show the minimum number of moves possible for
  the given number of tiles (prompted by patch by Lucas Gioia).
> fixes based on Coverity report:
  + remove a few redundant checks.
  + correct logic in test/bs.c, when randomly placing a specific type
    of ship.
  + check return value from remove/unlink in tic.
  + check return value from sscanf in test/ncurses.c
  + fix a null dereference in c++/cursesw.cc
  + fix two instances of uninitialized variables when configuring for
    the terminal driver.
  + correct scope of variable used in SetSafeOutcWrapper macro.
  + set umask when calling mkstemp in tic.
  + initialize wbkgrndset() temporary variable when extended-colors
    are used.

20121201
+ also replace MinGW's wctomb(), fixing a problem with setcchar().
+ modify test/view.c to load UTF-8 when built with MinGW by using
  regular win32 API because the MinGW functions mblen() and mbtowc()
  do not work.

20121124
+ correct order of color initialization versus display in some of the
  test-programs, e.g., test_addstr.c
> fixes based on Coverity report:
  + delete windows on exit from some of the test-programs.

20121117
> fixes based on Coverity report:
  + add missing braces around FreeAndNull in two places.
  + various fixes in test/ncurses.c
  + improve limit-checks in tinfo/make_hash.c, tinfo/read_entry.c
  + correct malloc size in progs/infocmp.c
  + guard against negative array indices in test/knight.c
  + fix off-by-one limit check in test/color_name.h
  + add null-pointer check in progs/tabs.c, test/bs.c,
    test/demo_forms.c, test/inchs.c
  + fix memory-leak in tinfo/lib_setup.c, progs/toe.c,
    test/clip_printw.c, test/demo_menus.c
  + delete unused windows in test/chgat.c, test/clip_printw.c,
    test/insdelln.c, test/newdemo.c on error-return.

20121110
+ modify configure macro CF_INCLUDE_DIRS to put $CPPFLAGS after the
  local -I include options in case someone has set conflicting -I
  options in $CPPFLAGS (prompted by patch for ncurses/Makefile.in by
  Vassili Courzakis).
+ modify the ncurses*-config scripts to eliminate relative paths from
  the RPATH_LIST variable, e.g., "../lib" as used in installing shared
  libraries or executables.

20121102
+ realign these related pages:
    curs_add_wchstr.3x
    curs_addchstr.3x
    curs_addstr.3x
    curs_addwstr.3x
  and fix a long-ago error in curs_addstr.3x which said that a -1
  length parameter would only write as much as fit onto one line
  (report by Reuben Thomas).
+ remove obsolete fallback _nc_memmove() for memmove()/bcopy().
+ remove obsolete fallback _nc_strdup() for strdup().
+ cancel any debug-rpm in package/ncurses.spec
+ reviewed vte-2012, reverted most of the change since it was incorrect
  based on testing with tack -TD
+ un-cancel the initc in vte-256color, since this was implemented
  starting with version 0.20 in 2009 -TD

20121026
+ improve malloc/realloc checking (prompted by discussion in Redhat
  #866989).
+ add ncurses test-program as "ncurses6" to the rpm- and dpkg-scripts.
+ updated configure macros CF_GCC_VERSION and CF_WITH_PATHLIST.
  The first corrects pattern used for Mac OS X's customization of gcc.

20121017
+ fix change to _nc_scroll_optimize(), which incorrectly freed memory
  (Redhat #866989).

20121013
+ add vte-2012, gnome-2012, making these the defaults for vte/gnome
  (patch by Christian Persch).

20121006
+ improve CF_GCC_VERSION to work around Debian's customization of gcc
  --version message.
+ improve configure macros as done in byacc:
  + drop 2.13 compatibility; use 2.52.xxxx version only since EMX port
    has used that for a while.
  + add 3rd parameter to AC_DEFINE's to allow autoheader to run, i.e.,
    for experimental use.
  + remove unused configure macros.
+ modify configure script and makefiles to quiet new autoconf warning
  for LIBS_TO_MAKE variable.
+ modify configure script to show $PATH_SEPARATOR variable.
+ update config.guess, config.sub

20120922
+ modify setupterm to set its copy of TERM to "unknown" if configured
  for the terminal driver and TERM was null or empty.
+ modify treatment of TERM variable for MinGW port to allow explicit
  use of the windows console driver by checking if $TERM is set to
  "#win32con" or an abbreviation of that.
+ undo recent change to fallback definition of vsscanf() to build with
  older Solaris compilers (cf: 20120728).

20120908
+ add test-screens to test/ncurses to show 256-characters at a time,
  to help with MinGW port.

20120903
+ simplify varargs logic in lib_printw.c; va_copy is no longer needed
  there.
+ modifications for MinGW port to make wide-character display usable.

20120902
+ regenerate configure script (report by Sven Joachim, cf: 20120901).

20120901
+ add a null-pointer check in _nc_flush (cf: 20120825).
+ fix a case in _nc_scroll_optimize() where the _oldnums_list array
  might not be allocated.
  + improve comparisons in configure.in for unset shell variables.

20120826
  + increase size of ncurses' output-buffer, in case of very small
    initial screen-sizes.
  + fix evaluation of TERMINFO and TERMINFO_DIRS default values as needed
    after changes to use --datarootdir (reports by Gabriele Balducci,
    Roumen Petrov).

20120825
  + change output buffering scheme, using buffer maintained by ncurses
    rather than stdio, to avoid problems with SIGTSTP handling (report
    by Brian Bloniarz).

20120811
  + update autoconf patch to 2.52.20120811, adding --datarootdir
    (prompted by discussion with Erwin Waterlander).
  + improve description of --enable-reentrant option in README and the
    INSTALL file.
  + add nsterm-256color, make this the default nsterm -TD
  + remove bw from nsterm-bce, per testing with tack -TD

20120804
  + update test/configure, adding check for tinfo library.
  + improve limit-checks for the getch fifo (report by Werner Fink).
  + fix a remaining mismatch between $with_echo and the symbols updated
    for CF_DISABLE_ECHO affecting parameters for mk-2nd.awk (report by
    Sven Joachim, cf: 20120317).
  + modify followup check for pkg-config's library directory in the
    --enable-pc-files option to validate syntax (report by Sven Joachim,
    cf: 20110716).

20120728
  + correct path for ncurses_mingw.h in include/headers, in case build
    is done outside source-tree (patch by Roumen Petrov).
  + modify some older xterm entries to align with xterm source -TD
  + separate "xterm-old" alias from "xterm-r6" -TD
  + add E3 extended capability to xterm-basic and putty -TD
  + parenthesize parameters of other macros in curses.h -TD
  + parenthesize parameter of COLOR_PAIR and PAIR_NUMBER in curses.h
    in case it happens to be a comma-expression, etc. (patch by Nick
    Black).

20120721
  + improved form_request_by_name() and menu_request_by_name().
  + eliminate two fixed-size buffers in toe.c
  + extend use_tioctl() to have expected behavior when use_env(FALSE) and
    use_tioctl(TRUE) are called.
  + modify ncurses test-program, adding -E and -T options to demonstrate
    use_env() versus use_tioctl().

20120714
  + add use_tioctl() function (adapted from patch by Werner Fink,
    Novell #769788):

20120707
  + add ncurses_mingw.h to installed headers (prompted by patch by
    Juergen Pfeifer).
  + clarify return-codes from wgetch() in response to SIGWINCH (prompted
    by Novell #769788).
  + modify resizeterm() to always push a KEY_RESIZE onto the fifo, even
    if screensize is unchanged.  Modify _nc_update_screensize() to push a
    KEY_RESIZE if there was a SIGWINCH, even if it does not call
    resizeterm().  These changes eliminate the case where a SIGWINCH is
    received, but ERR returned from wgetch or wgetnstr because the screen
    dimensions did not change (Novell #769788).

20120630
  + add --enable-interop to sample package scripts (suggested by Juergen
    Pfeifer).
  + update CF_PATH_SYNTAX macro, from mawk changes.
  + modify mk-0th.awk to allow for generating llib-ltic, etc., though
    some work is needed on cproto to work with lib_gen.c to update
    llib-lncurses.
  + remove redundant getenv() call in database-iterator leftover from
    cleanup in 20120622 changes (report by Sven Joachim).

20120622
  + add -d, -e and -q options to test/demo_terminfo and test/demo_termcap
  + fix caching of environment variables in database-iterator (patch by
    Philippe Troin, Redhat #831366).

20120616
  + add configure check to distinguish clang from gcc to eliminate
    warnings about unused command-line parameters when compiler warnings
    are enabled.
  + improve behavior when updating terminfo entries which are hardlinked
    by allowing for the possibility that an alias has been repurposed to
    a new primary name.
  + fix some strict compiler warnings based on package scripts.
  + further fixes for configure check for working poll (Debian #676461).

20120608
  + fix an uninitialized variable in -c/-n logic for infocmp changes
    (cf: 20120526).
  + corrected fix for building c++ binding with clang 3.0 (report/patch
    by Richard Yao, Gentoo #417613, cf: 20110409)
  + correct configure check for working poll, fixing the case where stdin
    is redirected, e.g., in rpm/dpkg builds (Debian #676461).
  + add rpm- and dpkg-scripts, to test those build-environments.
    The resulting packages are used only for testing.

20120602
  + add kdch1 aka "Remove" to vt220 and vt220-8 entries -TD
  + add kdch1, etc., to qvt108 -TD
  + add dl1/il1 to some entries based on dl/il values -TD
  + add dl to simpleterm -TD
  + add consistency-checks in tic for insert-line vs delete-line
    controls, and insert/delete-char keys
  + correct no-leaks logic in infocmp when doing comparisons, fixing
    duplicate free of entries given via the command-line, and freeing
    entries loaded from the last-but-one of files specified on the
    command-line.
  + add kdch1 to wsvt25 entry from NetBSD CVS (reported by David Lord,
    analysis by Martin Husemann).
  + add cnorm/civis to wsvt25 entry from NetBSD CVS (report/analysis by
    Onno van der Linden).

20120526
  + extend -c and -n options of infocmp to allow comparing more than two
    entries.
  + correct check in infocmp for number of terminal names when more than
    two are given.
  + correct typo in curs_threads.3x (report by Yanhui Shen on
    freebsd-hackers mailing list).

20120512
  + corrected 'op' for bterm (report by Samuel Thibault) -TD
  + modify test/background.c to demonstrate a background character
    holding a colored ACS_HLINE.  The behavior differs from SVr4 due to
    the thick- and double-line extension (cf: 20091003).
  + modify handling of acs characters in PutAttrChar to avoid mapping an
    unmapped character to a space with A_ALTCHARSET set.
  + rewrite vt520 entry based on vt420 -TD

20120505
  + remove p6 (bold) from opus3n1+ for consistency -TD
  + remove acs stuff from env230 per clues in Ingres termcap -TD
  + modify env230 sgr/sgr0 to match other capabilities -TD
  + modify smacs/rmacs in bq300-8 to match sgr/sgr0 -TD
  + make sgr for dku7202 agree with other caps -TD
  + make sgr for ibmpc agree with other caps -TD
  + make sgr for tek4107 agree with other caps -TD
  + make sgr for ndr9500 agree with other caps -TD
  + make sgr for sco-ansi agree with other caps -TD
  + make sgr for d410 agree with other caps -TD
  + make sgr for d210 agree with other caps -TD
  + make sgr for d470c, d470c-7b agree with other caps -TD
  + remove redundant AC_DEFINE for NDEBUG versus Makefile definition.
  + fix a back-link in _nc_delink_entry(), which is needed if ncurses is
    configured with --enable-termcap and --disable-getcap.

20120428
  + fix some inconsistencies between vt320/vt420, e.g., cnorm/civis -TD
  + add eslok flag to dec+sl -TD
  + dec+sl applies to vt320 and up -TD
  + drop wsl width from xterm+sl -TD
  + reuse xterm+sl in putty and nsca-m -TD
  + add ansi+tabs to vt520 -TD
  + add ansi+enq to vt220-vt520 -TD
  + fix a compiler warning in example in ncurses-intro.doc (Paul Waring).
  + added paragraph in keyname manpage telling how extended capabilities
    are interpreted as key definitions.
  + modify tic's check of conflicting key definitions to include extended
    capability strings in addition to the existing check on predefined
    keys.

20120421
  + improve cleanup of temporary files in tic using atexit().
  + add msgr to vt420, similar DEC vtXXX entries -TD
  + add several missing vt420 capabilities from vt220 -TD
  + factor out ansi+pp from several entries -TD
  + change xterm+sl and xterm+sl-twm to include only the status-line
    capabilities and not "use=xterm", making them more generally useful
    as building-blocks -TD
  + add dec+sl building block, as example -TD

20120414
  + add XT to some terminfo entries to improve usefulness for other
    applications than screen, which would like to pretend that xterm's
    title is a status-line. -TD
  + change use-clauses in ansi-mtabs, hp2626, and hp2622 based on review
    of ordering and overrides -TD
  + add consistency check in tic for screen's "XT" capability.
  + add section in terminfo.src summarizing the user-defined capabilities
    used in that file -TD

20120407
  + fix an inconsistency between tic/infocmp "-x" option; tic omits all
    non-standard capabilities, while infocmp was ignoring only the user
    definable capabilities.
  + improve special case in tic parsing of description to allow it to be
    followed by terminfo capabilities.  Previously the description had to
    be the last field on an input line to allow tic to distinguish
    between termcap and terminfo format while still allowing commas to be
    embedded in the description.
  + correct variable name in gen_edit.sh which broke configurability of
    the --with-xterm-kbs option.
  + revert 2011-07-16 change to "linux" alias, return to "linux2.2" -TD
  + further amend 20110910 change, providing for configure-script
    override of the "linux" terminfo entry to install and changing the
    default for that to "linux2.2" (Debian #665959).

20120331
  + update Ada95/configure to use CF_DISABLE_ECHO (cf: 20120317).
  + correct order of use-clauses in st-256color -TD
  + modify configure script to look for gnatgcc if the Ada95 binding
    is built, in preference to the default gcc/cc (suggested by
    Nicolas Boulenguez).
  + modify configure script to ensure that the same -On option used for
    the C compiler in CFLAGS is used for ADAFLAGS rather than simply
    using "-O3" (suggested by Nicolas Boulenguez)

20120324
  + amend an old fix so that next_char() exits properly for empty files,
    e.g., from reading /dev/null (cf: 20080804).
  + modify tic so that it can read from the standard input, or from
    a character device.  Because tic uses seek's, this requires writing
    the data to a temporary file first (prompted by remark by Sven
    Joachim) (cf: 20000923).

20120317
  + correct a check made in lib_napms.c, so that terminfo applications
    can again use napms() (cf: 20110604).
  + add a note in tic.h regarding required casts for ABSENT_BOOLEAN
    (cf: 20040327).
  + correct scripting for --disable-echo option in test/configure.
  + amend check for missing c++ compiler to work when no error is
    reported, and no variables set (cf: 20021206).
  + add/use configure macro CF_DISABLE_ECHO.

20120310
  + fix some strict compiler warnings for abi6 and 64-bits.
  + use begin_va_copy/end_va_copy macros in lib_printw.c (cf: 20120303).
  + improve a limit-check in infocmp.c (Werner Fink):

20120303
  + minor tidying of terminfo.tail, clarify reason for limitation
    regarding mapping of \0 to \200
  + minor improvement to _nc_copy_termtype(), using memcpy to replace
    loops.
  + fix no-leaks checking in test/demo_termcap.c to account for multiple
    calls to setupterm().
  + modified the libgpm change to show previous load as a problem in the
    debug-trace.
  > merge some patches from OpenSUSE rpm (Werner Fink):
    + ncurses-5.7-printw.dif, fixes for varargs handling in lib_printw.c
    + ncurses-5.7-gpm.dif, do not dlopen libgpm if already loaded by
      runtime linker
    + ncurses-5.6-fallback.dif, do not free arrays and strings from
      static fallback entries

20120228
  + fix breakage in tic/infocmp from 20120225 (report by Werner Fink).

20120225
  + modify configure script to allow creating dll's for MinGW when
    cross-compiling.
  + add --enable-string-hacks option to control whether strlcat and
    strlcpy may be used.  The same issue applies to OpenBSD's warnings
    about snprintf, noting that this function is weakly standardized.
  + add configure checks for strlcat, strlcpy and snprintf, to help
    reduce bogus warnings with OpenBSD builds.
  + build-fix for OpenBSD 4.9 to supply consistent intptr_t declaration
    (cf: 20111231)
  + update config.guess, config.sub

20120218
  + correct CF_ETIP_DEFINES configure macro, making it exit properly on
    the first success (patch by Pierre Labastie).
  + improve configure macro CF_MKSTEMP by moving existence-check for
    mkstemp out of the AC_TRY_RUN, to help with cross-compiles.
  + improve configure macro CF_FUNC_POLL from luit changes to detect
    broken implementations, e.g., with Mac OS X.
  + add configure option --with-tparm-arg
  + build-fix for MinGW cross-compiling, so that make_hash does not
    depend on TTY definition (cf: 20111008).

20120211
  + make sgr for xterm-pcolor agree with other caps -TD
  + make sgr for att5425 agree with other caps -TD
  + make sgr for att630 agree with other caps -TD
  + make sgr for linux entries agree with other caps -TD
  + make sgr for tvi9065 agree with other caps -TD
  + make sgr for ncr260vt200an agree with other caps -TD
  + make sgr for ncr160vt100pp agree with other caps -TD
  + make sgr for ncr260vt300an agree with other caps -TD
  + make sgr for aaa-60-dec-rv, aaa+dec agree with other caps -TD
  + make sgr for cygwin, cygwinDBG agree with other caps -TD
  + add configure option --with-xterm-kbs to simplify configuration for
    Linux versus most other systems.

20120204
  + improved tic -D option, avoid making target directory and provide
    better diagnostics.

20120128
  + add mach-gnu (Debian #614316, patch by Samuel Thibault)
  + add mach-gnu-color, tweaks to mach-gnu terminfo -TD
  + make sgr for sun-color agree with smso -TD
  + make sgr for prism9 agree with other caps -TD
  + make sgr for icl6404 agree with other caps -TD
  + make sgr for ofcons agree with other caps -TD
  + make sgr for att5410v1, att4415, att620 agree with other caps -TD
  + make sgr for aaa-unk, aaa-rv agree with other caps -TD
  + make sgr for avt-ns agree with other caps -TD
  + amend fix intended to separate fixups for acsc to allow "tic -cv" to
    give verbose warnings (cf: 20110730).
  + modify misc/gen-edit.sh to make the location of the tabset directory
    consistent with misc/Makefile.in, i.e., using ${datadir}/tabset
    (Debian #653435, patch by Sven Joachim).

20120121
  + add --with-lib-prefix option to allow configuring for old/new flavors
    of OS/2 EMX.
  + modify check for gnat version to allow for year, as used in FreeBSD
    port.
  + modify check_existence() in db_iterator.c to simply check if the
    path is a directory or file, according to the need.  Checking for
    directory size also gives no usable result with OS/2 (cf: 20120107).
  + support OS/2 kLIBC (patch by KO Myung-Hun).

20120114
  + several improvements to test/movewindow.c (prompted by discussion on
    Linux Mint forum):
    + modify movement commands to make them continuous
    + rewrote the test for mvderwin
    + rewrote the test for recursive mvwin
  + split-out reusable CF_WITH_NCURSES_ETC macro in test/configure.in
  + updated configure macro CF_XOPEN_SOURCE, build-fixes for Mac OS X
    and OpenBSD.
  + regenerated html manpages.

20120107
  + various improvements for MinGW (Juergen Pfeifer):
    + modify stat() calls to ignore the st_size member
    + drop mk-dlls.sh script.
    + change recommended regular expression library.
    + modify rain.c to allow for threaded configuration.
    + modify tset.c to allow for case when size-change logic is not used.

20111231
  + modify toe's report when -a and -s options are combined, to add
    a column showing which entries belong to a given database.
  + add -s option to toe, to sort its output.
  + modify progs/toe.c, simplifying use of db-iterator results to use
    caching improvements from 20111001 and 20111126.
  + correct generation of pc-files when ticlib or termlib options are
    given to rename the corresponding tic- or tinfo-libraries (report
    by Sven Joachim).

20111224
  + document a portability issue with tput, i.e., that scripts which work
    with ncurses may fail in other implementations that do no parameter
    analysis.
  + add putty-sco entry -TD

20111217
  + review/fix places in manpages where --program-prefix configure option
    was not being used.
  + add -D option to infocmp, to show the database locations that it
    could use.
  + fix build for the special case where term-driver, ticlib and termlib
    are all enabled.  The terminal driver depends on a few features in
    the base ncurses library, so tic's dependencies include both ncurses
    and termlib.
  + make the build work for term-driver when the --enable-wgetch-events
    option is enabled.
  + use <stdint.h> types to fix some questionable casts to void*.

20111210
  + modify configure script to check if thread library provides
    pthread_mutexattr_settype(), e.g., not provided by Solaris 2.6
  + modify configure script to suppress check to define _XOPEN_SOURCE
    for IRIX64, since its header files have a conflict versus
    _SGI_SOURCE.
  + modify configure script to add ".pc" files for tic- and
    tinfo-libraries, which were omitted in recent change (cf: 20111126).
  + fix inconsistent checks on $PKG_CONFIG variable in configure script.

20111203
  + modify configure-check for etip.h dependencies, supplying a temporary
    copy of ncurses_dll.h since it is a generated file (prompted by
    Debian #646977).
  + modify CF_CPP_PARAM_INIT "main" function to work with current C++.

20111126
  + correct database iterator's check for duplicate entries
    (cf: 20111001).
  + modify database iterator to ignore $TERMCAP when it is not an
    absolute pathname.
  + add -D option to tic, to show the database locations that it could
    use.
  + improve description of database locations in tic manpage.
  + modify the configure script to generate a list of the ".pc" files to
    generate, rather than deriving the list from the libraries which have
    been built (patch by Mike Frysinger).
  + use AC_CHECK_TOOLS in preference to AC_PATH_PROGS when searching for
    ncurses*-config, e.g., in Ada95/configure and test/configure (adapted
    from patch by Mike Frysinger).

20111119
  + remove obsolete/conflicting fallback definition for _POSIX_SOURCE
    from curses.priv.h, fixing a regression with IRIX64 and Tru64
    (cf: 20110416)
  + modify _nc_tic_dir() to ensure that its return-value is nonnull,
    i.e., the database iterator was not initialized.  This case is needed
    when tic is translating to termcap, rather than loading the
    database (cf: 20111001).

20111112
  + add pccon entries for OpenBSD console (Alexei Malinin).
  + build-fix for OpenBSD 4.9 with gcc 4.2.1, setting _XOPEN_SOURCE to
    600 to work around inconsistent ifdef'ing of wcstof between C and
    C++ header files.
  + modify capconvert script to accept more than exact match on "xterm",
    e.g., the "xterm-*" variants, to exclude from the conversion (patch
    by Robert Millan).
  + add -lc_r as alternative for -lpthread, allows build of threaded code
    on older FreeBSD machines.
  + build-fix for MirBSD, which fails when either _XOPEN_SOURCE or
    _POSIX_SOURCE are defined.
  + fix a typo in misc/Makefile.in, used in uninstalling pc-files.

20111030
  + modify make_db_path() to allow creating "terminfo.db" in the same
    directory as an existing "terminfo" directory.  This fixes a case
    where switching between hashed/filesystem databases would cause the
    new hashed database to be installed in the next best location -
    root's home directory.
  + add variable cf_cv_prog_gnat_correct to those passed to
    config.status, fixing a problem with Ada95 builds (cf: 20111022).
  + change feature test from _XPG5 to _XOPEN_SOURCE in two places, to
    accommodate broken implementations for _XPG6.
  + eliminate usage of NULL symbol from etip.h, to reduce header
    interdependencies.
  + add configure check to decide when to add _XOPEN_SOURCE define to
    compiler options, i.e., for Solaris 10 and later (cf: 20100403).
    This is a workaround for gcc 4.6, which fails to build the c++
    binding if that symbol is defined by the application, due to
    incorrectly combining the corresponding feature test macros
    (report by Peter Kruse).

20111022
  + correct logic for discarding mouse events, retaining the partial
    events used to build up click, double-click, etc., until needed
    (cf: 20110917).
  + fix configure script to avoid creating unused Ada95 makefile when
    gnat does not work.
  + cleanup width-related gcc 3.4.3 warnings for 64-bit platform, for the
    internal functions of libncurses.  The external interface of course
    uses bool, which still produces these warnings.

20111015
  + improve description of --disable-tic-depends option to make it
    clear that it may be useful whether or not the --with-termlib
    option is also given (report by Sven Joachim).
  + amend termcap equivalent for set_pglen_inch to use the X/Open
    "YI" rather than the obsolete Solaris 2.5 "sL" (cf: 990109).
  + improve manpage for tgetent differences from termcap library.

20111008
  + moved static data from db_iterator.c to lib_data.c
  + modify db_iterator.c for memory-leak checking, fix one leak.
  + modify misc/gen-pkgconfig.in to use Requires.private for the parts
    of ncurses rather than Requires, as well as Libs.private for the
    other library dependencies (prompted by Debian #644728).

20111001
  + modify tic "-K" option to only set the strict-flag rather than force
    source-output.  That allows the same flag to control the parser for
    input and output of termcap source.
  + modify _nc_getent() to ignore backslash at the end of a comment line,
    making it consistent with ncurses' parser.
  + restore a special-case check for directory needed to make termcap
    text files load as if they were databases (cf: 20110924).
  + modify tic's resolution/collision checking to attempt to remove the
    conflicting alias from the second entry in the pair, which is
    normally following in the source file.  Also improved the warning
    message to make it simpler to see which alias is the problem.
  + improve performance of the database iterator by caching search-list.

20110925
  + add a missing "else" in changes to _nc_read_tic_entry().

20110924
  + modify _nc_read_tic_entry() so that hashed-database is checked before
    filesystem.
  + updated CF_CURSES_LIBS check in test/configure script.
  + modify configure script and makefiles to split TIC_ARGS and
    TINFO_ARGS into pieces corresponding to LDFLAGS and LIBS variables,
    to help separate searches for tic- and tinfo-libraries (patch by Nick
    Alcock aka "Nix").
  + build-fix for lib_mouse.c changes (cf: 20110917).

20110917
  + fix compiler warning for clang 2.9
  + improve merging of mouse events (integrated patch by Damien
    Guibouret).
  + correct mask-check used in lib_mouse for wheel mouse buttons 4/5
    (patch by Damien Guibouret).

20110910
  + modify misc/gen_edit.sh to select a "linux" entry which works with
    the current kernel rather than assuming it is always "linux3.0"
    (cf: 20110716).
  + revert a change to getmouse() which had the undesirable side-effect
    of suppressing button-release events (report by Damien Guibouret,
    cf: 20100102).
  + add xterm+kbs fragment from xterm #272 -TD
  + add configure option --with-pkg-config-libdir to provide control over
    the actual directory into which pc-files are installed, do not use
    the pkg-config environment variables (discussion with Frederic L W
    Meunier).
  + add link to mailing-list archive in announce.html.in, as done in
    FAQ (prompted by question by Andrius Bentkus).
  + improve manpage install by adjusting the "#include" examples to
    show the ncurses-subdirectory used when --disable-overwrite option
    is used.
  + install an alias for "curses" to the ncurses manpage, tied to the
    --with-curses-h configure option (suggested by Reuben Thomas).

20110903
  + propagate error-returns from wresize, i.e., the internal
    increase_size and decrease_size functions through resize_term (report
    by Tim van der Molen, cf: 20020713).
  + fix typo in tset manpage (patch by Sven Joachim).

20110820
  + add a check to ensure that termcap files which might have "^?" do
    not use the terminfo interpretation as "\177".
  + minor cleanup of X-terminal emulator section of terminfo.src -TD
  + add terminator entry -TD
  + add simpleterm entry -TD
  + improve wattr_get macros by ensuring that if the window pointer is
    null, then the attribute and color values returned will be zero
    (cf: 20110528).

20110813
  + add substitution for $RPATH_LIST to misc/ncurses-config.in
  + improve performance of tic with hashed-database by caching the
    database connection, using atexit() to cleanup.
  + modify treatment of 2-character aliases at the beginning of termcap
    entries so they are not counted in use-resolution, since these are
    guaranteed to be unique.  Also ignore these aliases when reporting
    the primary name of the entry (cf: 20040501)
  + double-check gn (generic) flag in terminal descriptions to
    accommodate old/buggy termcap databases which misused that feature.
  + minor fixes to _nc_tgetent(), ensure buffer is initialized even on
    error-return.

20110807
  + improve rpath fix from 20110730 by ensuring that the new $RPATH_LIST
    variable is defined in the makefiles which use it.
  + build-fix for DragonFlyBSD's pkgsrc in test/configure script.
  + build-fixes for NetBSD 5.1 with termcap support enabled.
  + corrected k9 in dg460-ansi, add other features based on manuals -TD
  + improve trimming of whitespace at the end of terminfo/termcap output
    from tic/infocmp.
  + when writing termcap source, ensure that colons in the description
    field are translated to a non-delimiter, i.e., "=".
  + add "-0" option to tic/infocmp, to make the termcap/terminfo source
    use a single line.
  + add a null-pointer check when handling the $CC variable.

20110730
  + modify configure script and makefiles in c++ and progs to allow the
    directory used for rpath option to be overridden, e.g., to work
    around updates to the variables used by tic during an install.
  + add -K option to tic/infocmp, to provide stricter BSD-compatibility
    for termcap output.
  + add _nc_strict_bsd variable in tic library which controls the
    "strict" BSD termcap compatibility from 20110723, plus these
    features:
    + allow escapes such as "\8" and "\9" when reading termcap
    + disallow "\a", "\e", "\l", "\s" and "\:" escapes when reading
      termcap files, passing through "a", "e", etc.
    + expand "\:" as "\072" on output.
  + modify _nc_get_token() to reset the token's string value in case
    there is a string-typed token lacking the "=" marker.
  + fix a few memory leaks in _nc_tgetent.
  + fix a few places where reading from a termcap file could refer to
    freed memory.
  + add an overflow check when converting terminfo/termcap numeric
    values, since terminfo stores those in a short, and they must be
    positive.
  + correct internal variables used for translating to termcap "%>"
    feature, and translating from termcap %B to terminfo, needed by
    tctest (cf: 19991211).
  + amend a minor fix to acsc when loading a termcap file to separate it
    from warnings needed for tic (cf: 20040710)
  + modify logic in _nc_read_entry() and _nc_read_tic_entry() to allow
    a termcap file to be handled via TERMINFO_DIRS.
  + modify _nc_infotocap() to include non-mandatory padding when
    translating to termcap.
  + modify _nc_read_termcap_entry(), passing a flag in the case where
    getcap is used, to reduce interactive warning messages.

20110723
  + add a check in start_color() to limit color-pairs to 256 when
    extended colors are not supported (patch by David Benjamin).
  + modify setcchar to omit no-longer-needed OR'ing of color pair in
    the SetAttr() macro (patch by David Benjamin).
  + add kich1 to sun terminfo entry (Yuri Pankov)
  + use bold rather than reverse for smso in sun-color terminfo entry
    (Yuri Pankov).
  + improve generation of termcap using tic/infocmp -C option, e.g.,
    to correspond with 4.2BSD (prompted by discussion with Yuri Pankov
    regarding Schilling's test program):
    + translate %02 and %03 to %2 and %3 respectively.
    + suppress string capabilities which use %s, not supported by tgoto
    + use \040 rather than \s
    + expand null characters as \200 rather than \0
  + modify configure script to support shared libraries for DragonFlyBSD.

20110716
  + replace an assert() in _nc_Free_Argument() with a regular null
    pointer check (report/analysis by Franjo Ivancic).
  + modify configure --enable-pc-files option to take into account the
    PKG_CONFIG_PATH variable (report by Frederic L W Meunier).
  + add/use xterm+tmux chunk from xterm #271 -TD
  + resync xterm-new entry from xterm #271 -TD
  + add E3 extended capability to linux-basic (Miroslav Lichvar)
  + add linux2.2, linux2.6, linux3.0 entries to give context for E3 -TD
  + add SI/SO change to linux2.6 entry (Debian #515609) -TD
  + fix inconsistent tabset path in pcmw (Todd C. Miller).
  + remove a backslash which continued comment, obscuring altos3
    definition with OpenBSD toolset (Nicholas Marriott).

20110702
  + add workaround from xterm #271 changes to ensure that compiler flags
    are not used in the $CC variable.
  + improve support for shared libraries, tested with AIX 5.3, 6.1 and
    7.1 with both gcc 4.2.4 and cc.
  + modify configure checks for AIX to include release 7.x
  + add loader flags/libraries to libtool options so that dynamic loading
    works properly, adapted from ncurses-5.7-ldflags-with-libtool.patch
    at gentoo prefix repository (patch by Michael Haubenwallner).

20110626
  + move include of nc_termios.h out of term_entry.h, since the latter
    is installed, e.g., for tack while the former is not (report by
    Sven Joachim).

20110625
  + improve cleanup() function in lib_tstp.c, using _exit() rather than
    exit() and checking for SIGTERM rather than SIGQUIT (prompted by
    comments forwarded by Nicholas Marriott).
  + reduce name pollution from term.h, moving fallback #define's for
    tcgetattr(), etc., to new private header nc_termios.h (report by
    Sergio NNX).
  + two minor fixes for tracing (patch by Vassili Courzakis).
  + improve trace initialization by starting it in use_env() and
    ripoffline().
  + review old email, add details for some changelog entries.

20110611
  + update minix entry to minix 3.2 (Thomas Cort).
  + fix a strict compiler warning in change to wattr_get (cf: 20110528).

20110604
  + fixes for MirBSD port:
    + set default prefix to /usr.
    + add support for shared libraries in configure script.
    + use S_ISREG and S_ISDIR consistently, with fallback definitions.
  + add a few more checks based on ncurses/link_test.
  + modify MKlib_gen.sh to handle sp-funcs renaming of NCURSES_OUTC type.

20110528
  + add case to CF_SHARED_OPTS for Interix (patch by Markus Duft).
  + used ncurses/link_test to check for behavior when the terminal has
    not been initialized and when an application passes null pointers
    to the library.  Added checks to cover this (prompted by Redhat
    #707344).
  + modify MKlib_gen.sh to make its main() function call each function
    with zero parameters, to help find inconsistent checking for null
    pointers, etc.

20110521
  + fix warnings from clang 2.7 "--analyze"

20110514
  + compiler-warning fixes in panel and progs.
  + modify CF_PKG_CONFIG macro, from changes to tin -TD
  + modify CF_CURSES_FUNCS configure macro, used in test directory
    configure script:
    + work around (non-optimizer) bug in gcc 4.2.1 which caused
      test-expression to be omitted from executable.
    + force the linker to see a link-time expression of a symbol, to
      help work around weak-symbol issues.

20110507
  + update discussion of MKfallback.sh script in INSTALL; normally the
    script is used automatically via the configured makefiles.  However
    there are still occasions when it might be used directly by packagers
    (report by Gunter Schaffler).
  + modify misc/ncurses-config.in to omit the "-L" option from the
    "--libs" output if the library directory is /usr/lib.
  + change order of tests for curses.h versus ncurses.h headers in the
    configure scripts for Ada95 and test-directories, to look for
    ncurses.h, from fixes to tin -TD
  + modify ncurses/tinfo/access.c to account for Tandem's root uid
    (report by Joachim Schmitz).

20110430
  + modify rules in Ada95/src/Makefile.in to ensure that the PIC option
    is not used when building a static library (report by Nicolas
    Boulenguez):
  + Ada95 build-fix for big-endian architectures such as sparc.  This
    undoes one of the fixes from 20110319, which added an "Unused" member
    to representation clauses, replacing that with pragmas to suppress
    warnings about unused bits (patch by Nicolas Boulenguez).

20110423
  + add check in test/configure for use_window, use_screen.
  + add configure-checks for getopt's variables, which may be declared
    as different types on some Unix systems.
  + add check in test/configure for some legacy curses types of the
    function pointer passed to tputs().
  + modify init_pair() to accept -1's for color value after
    assume_default_colors() has been called (Debian #337095).
  + modify test/background.c, adding command-line options to demonstrate
    assume_default_colors() and use_default_colors().

20110416
  + modify configure script/source-code to only define _POSIX_SOURCE if
    the checks for sigaction and/or termios fail, and if _POSIX_C_SOURCE
    and _XOPEN_SOURCE are undefined (report by Valentin Ochs).
  + update config.guess, config.sub

20110409
  + fixes to build c++ binding with clang 3.0 (patch by Alexander
    Kolesen).
  + add check for unctrl.h in test/configure, to work around breakage in
    some ncurses packages.
  + add "--disable-widec" option to test/configure script.
  + add "--with-curses-colr" and "--with-curses-5lib" options to the
    test/configure script to address testing with very old machines.

20110404 5.9 release for upload to

20110402
  + various build-fixes for the rpm/dpkg scripts.
  + add "--enable-rpath-link" option to Ada95/configure, to allow
    packages to suppress the rpath feature which is normally used for
    the in-tree build of sample programs.
  + corrected definition of libdir variable in Ada95/src/Makefile.in,
    needed for rpm script.
  + add "--with-shared" option to Ada95/configure script, to allow
    making the C-language parts of the binding use appropriate compiler
    options if building a shared library with gnat.

20110329
  > portability fixes for Ada95 binding:
  + add configure check to ensure that SIGINT works with gnat.  This is
    needed for the "rain" sample program.  If SIGINT does not work, omit
    that sample program.
  + correct typo in check of $PKG_CONFIG variable in Ada95/configure
  + add ncurses_compat.c, to supply functions used in the Ada95 binding
    which were added in 5.7 and later.
  + modify sed expression in CF_NCURSES_ADDON to eliminate a dependency
    upon GNU sed.

20110326
  + add special check in Ada95/configure script for ncurses6 reentrant
    code.
  + regen Ada html documentation.
  + build-fix for Ada shared libraries versus the varargs workaround.
  + add rpm and dpkg scripts for Ada95 and test directories, for test
    builds.
  + update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and
    CF_X_ATHENA_LIBS.
  + add configure check to determine if gnat's project feature supports
    libraries, i.e., collections of .ali files.
  + make all dereferences in Ada95 samples explicit.
  + fix typo in comment in lib_add_wch.c (patch by Petr Pavlu).
  + add configure check for, and ifdef's for, math.h which is in a
    separate package on Solaris and potentially not installed (report by
    Petr Pavlu).
  > fixes for Ada95 binding (Nicolas Boulenguez):
  + improve type-checking in Ada95 by eliminating a few warning-suppress
    pragmas.
  + suppress unreferenced warnings.
  + make all dereferences in binding explicit.

20110319
  + regen Ada html documentation.
  + change order of -I options from ncurses*-config script when the
    --disable-overwrite option was used, so that the subdirectory include
    is listed first.
  + modify the make-tar.sh scripts to add a MANIFEST and NEWS file.
  + modify configure script to provide value for HTML_DIR in
    Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is
    distributed separately (report by Nicolas Boulenguez).
  + modify configure script to add "-g" and/or "-O3" to ADAFLAGS if the
    CFLAGS for the build has these options.
  + amend change from 20070324, to not add 1 to the result of getmaxx
    and getmaxy in the Ada binding (report by Nicolas Boulenguez for
    thread in comp.lang.ada).
  + build-fix Ada95/samples for gnat 4.5
  + spelling fixes for Ada95/samples/explain.txt
  > fixes for Ada95 binding (Nicolas Boulenguez):
  + add item in Trace_Attribute_Set corresponding to TRACE_ATTRS.
  + add workaround for binding to set_field_type(), which uses varargs.
    The original binding from 990220 relied on the prevalent
    implementation of varargs which did not support or need va_copy().
  + add dependency on gen/Makefile.in needed for *-panels.ads
  + add Library_Options to library.gpr
  + add Languages to library.gpr, for gprbuild

20110307
  + revert changes to limit-checks from 20110122 (Debian #616711).
  > minor type-cleanup of Ada95 binding (Nicolas Boulenguez):
  + corrected a minor sign error in a field of Low_Level_Field_Type, to
    conform to form.h.
  + replaced C_Int by Curses_Bool as return type for some callbacks, see
    fieldtype(3FORM).
  + modify samples/sample-explain.adb to provide explicit message when
    explain.txt is not found.

20110305
  + improve makefiles for Ada95 tree (patch by Nicolas Boulenguez).
  + fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes
    for compiler warnings (report by Nicolas Boulenguez).
  + modify Ada95/gen/gen.c to declare unused bits in generated layouts,
    needed to compile when chtype is 64-bits using gnat 4.4.5

20110226 5.8 release for upload to

20110226
  + update release notes, for 5.8.
  + regenerated html manpages.
  + change open() in _nc_read_file_entry() to fopen() for consistency
    with write_file().
  + modify misc/run_tic.in to create parent directory, in case this is
    a new install of hashed database.
  + fix typo in Ada95/mk-1st.awk which causes error with original awk.

20110220
  + configure script rpath fixes from xterm #269.
  + workaround for cygwin's non-functional features.h, to force ncurses'
    configure script to define _XOPEN_SOURCE_EXTENDED when building
    wide-character configuration.
  + build-fix in run_tic.sh for OS/2 EMX install
  + add cons25-debian entry (patch by Brian M Carlson, Debian #607662).

20110212
  + regenerated html manpages.
  + use _tracef() in show_where() function of tic, to work correctly with
    special case of trace configuration.

20110205
  + add xterm-utf8 entry as a demo of the U8 feature -TD
  + add U8 feature to denote entries for terminal emulators which do not
    support VT100 SI/SO when processing UTF-8 encoding -TD
  + improve the NCURSES_NO_UTF8_ACS feature by adding a check for an
    extended terminfo capability U8 (prompted by mailing list
    discussion).

20110122
  + start documenting interface changes for upcoming 5.8 release.
  + correct limit-checks in derwin().
  + correct limit-checks in newwin(), to ensure that windows have nonzero
    size (report by Garrett Cooper).
  + fix a missing "weak" declaration for pthread_kill (patch by Nicholas
    Alcock).
  + improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted
    by discussion with Kevin Martin).

20110115
  + modify Ada95/configure script to make the --with-curses-dir option
    work without requiring the --with-ncurses option.
  + modify test programs to allow them to be built with NetBSD curses.
  + document thick- and double-line symbols in curs_add_wch.3x manpage.
  + document WACS_xxx constants in curs_add_wch.3x manpage.
  + fix some warnings for clang 2.6 "--analyze"
  + modify Ada95 makefiles to make html-documentation with the project
    file configuration if that is used.
  + update config.guess, config.sub

20110108
  + regenerated html manpages.
  + minor fixes to enable lint when trace is not enabled, e.g., with
    clang --analyze.
  + fix typo in man/default_colors.3x (patch by Tim van der Molen).
  + update ncurses/llib-lncurses*

20110101
  + fix remaining strict compiler warnings in ncurses library ABI=5,
    except those dealing with function pointers, etc.

20101225
  + modify nc_tparm.h, adding guards against repeated inclusion, and
    allowing TPARM_ARG to be overridden.
  + fix some strict compiler warnings in ncurses library.

20101211
  + suppress ncv in screen entry, allowing underline (patch by Alejandro
    R Sedeno).
  + also suppress ncv in konsole-base -TD
  + fixes in wins_nwstr() and related functions to ensure that special
    characters, i.e., control characters are handled properly with the
    wide-character configuration.
  + correct a comparison in wins_nwstr() (Redhat #661506).
  + correct help-messages in some of the test-programs, which still
    referred to quitting with 'q'.

20101204
  + add special case to _nc_infotocap() to recognize the setaf/setab
    strings from xterm+256color and xterm+88color, and provide a reduced
    version which works with termcap.
  + remove obsolete emacs "Local Variables" section from documentation
    (request by Sven Joachim).
  + update doc/html/index.html to include NCURSES-Programming-HOWTO.html
    (report by Sven Joachim).

20101128
  + modify test/configure and test/Makefile.in to handle this special
    case of building within a build-tree (Debian #34182):
    mkdir -p build && cd build && ../test/configure && make

20101127
  + miscellaneous build-fixes for Ada95 and test-directories when built
    out-of-tree.
  + use VPATH in makefiles to simplify out-of-tree builds (Debian #34182).
  + fix typo in rmso for tek4106 entry -Goran Weinholt

20101120
  + improve checks in test/configure for X libraries, from xterm #267
    changes.
  + modify test/configure to allow it to use the build-tree's libraries
    e.g., when using that to configure the test-programs without the
    rpath feature (request by Sven Joachim).
  + repurpose "gnome" terminfo entries as "vte", retaining "gnome" items
    for compatibility, but generally deprecating those since the VTE
    library is what actually defines the behavior of "gnome", etc.,
    since 2003 -TD

20101113
  + compiler warning fixes for test programs.
  + various build-fixes for test-programs with pdcurses.
  + updated configure checks for X packages in test/configure from xterm
    #267 changes.
  + add configure check to gnatmake, to accommodate cygwin.

20101106
  + correct list of sub-directories needed in Ada95 tree for building as
    a separate package.
  + modify scripts in test-directory to improve builds as a separate
    package.

20101023
  + correct parsing of relative tab-stops in tabs program (report by
    Philip Ganchev).
  + adjust configure script so that "t" is not added to library suffix
    when weak-symbols are used, allowing the pthread configuration to
    more closely match the non-thread naming (report by Werner Fink).
  + modify configure check for tic program, used for fallbacks, to a
    warning if not found.  This makes it simpler to use additional
    scripts to bootstrap the fallbacks code using tic from the build
    tree (report by Werner Fink).
  + fix several places in configure script using ${variable-value} form.
  + modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders
    which do not support selectively linking against static libraries
    (report by John P. Hartmann)
  + fix an unescaped dash in man/tset.1 (report by Sven Joachim).

20101009
  + correct comparison used for setting 16-colors in linux-16color
    entry (Novell #644831) -TD
  + improve linux-16color entry, using "dim" for color-8 which makes it
    gray rather than black like color-0 -TD
  + drop misc/ncu-indent and misc/jpf-indent; they are provided by an
    external package "cindent".

20101002
  + improve linkages in html manpages, adding references to the newer
    pages, e.g., *_variables, curs_sp_funcs, curs_threads.
  + add checks in tic for inconsistent cursor-movement controls, and for
    inconsistent printer-controls.
  + fill in no-parameter forms of cursor-movement where a parameterized
    form is available -TD
  + fill in missing cursor controls where the form of the controls is
    ANSI -TD
  + fix inconsistent punctuation in form_variables manpage (patch by
    Sven Joachim).
  + add parameterized cursor-controls to linux-basic (report by Dae) -TD
  > patch by Juergen Pfeifer:
  + document how to build 32-bit libraries in README.MinGW
  + fixes to filename computation in mk-dlls.sh.in
  + use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven
    Joachim).
  + add a check in mk-dlls.sh.in to obtain the size of a pointer to
    distinguish between 32-bit and 64-bit hosts.  The result is stored
    in mingw_arch

20100925
  + add "XT" capability to entries for terminals that support both
    xterm-style mouse- and title-controls, for "screen" which
    special-cases TERM beginning with "xterm" or "rxvt" -TD
  > patch by Juergen Pfeifer:
  + use 64-Bit MinGW toolchain (recommended package from TDM, see
    README.MinGW).
  + support pthreads when using the TDM MinGW toolchain

20100918
  + regenerated html manpages.
  + minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages.
  + add manpage for sp-funcs.
  + add sp-funcs to test/listused.sh, for documentation aids.

20100911
  + add manpages for summarizing public variables of curses-, terminfo-
    and form-libraries.
  + minor fixes to manpages for consistency (patch by Jason McIntyre).
  + modify tic's -I/-C dump to reformat acsc strings into canonical form
    (sorted, unique mapping) (cf: 971004).
  + add configure check for pthread_kill(), needed for some old
    platforms.

20100904
  + add configure option --without-tests, to suppress building test
    programs (request by Frederic L W Meunier).

20100828
  + modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD
  + add check in terminfo source-reader to provide more informative
    message when someone attempts to run tic on a compiled terminal
    description (prompted by Debian #593920).
  + note in infotocap and captoinfo manpages that they read terminal
    descriptions from text-files (Debian #593920).
  + improve acsc string for vt52, show arrow keys (patch by Benjamin
    Sittler).

20100814
  + document in manpages that "mv" functions first use wmove() to check
    the window pointer and whether the position lies within the window
    (suggested by Poul-Henning Kamp).
  + fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch
    by Tim van der Molen).
  + modify configure script to transform library names for tic- and
    tinfo-libraries so that those build properly with Mac OS X shared
    library configuration.
  + modify configure script to ensure that it removes conftest.dSYM
    directory leftover on checks with Mac OS X.
  + modify configure script to cleanup after check for symbolic links.

20100807
  + correct a typo in mk-1st.awk (patch by Gabriele Balducci)
    (cf: 20100724)
  + improve configure checks for location of tic and infocmp programs
    used for installing database and for generating fallback data,
    e.g., for cross-compiling.
  + add Markus Kuhn's wcwidth function for compiling MinGW
  + add special case to CF_REGEX for cross-compiling to MinGW target.

20100731
  + modify initialization check for win32con driver to eliminate need for
    special case for TERM "unknown", using terminal database if available
    (prompted by discussion with Roumen Petrov).
  + for MinGW port, ensure that terminal driver is setup if tgetent()
    is called (patch by Roumen Petrov).
  + document tabs "-0" and "-8" options in manpage.
  + fix Debian "lintian" issues with manpages reported in

20100724
  + add a check in tic for missing set_tab if clear_all_tabs given.
  + improve use of symbolic links in makefiles by using "-f" option if
    it is supported, to eliminate temporary removal of the target
    (prompted by)
  + minor improvement to test/ncurses.c, reset color pairs in 'd' test
    after exit from 'm' main-menu command.
  + improved ncu-indent, from mawk changes, allows more than one of
    GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line.

20100717
  + add hard-reset for rs2 to wsvt25 to help ensure that reset ends
    the alternate character set (patch by Nicholas Marriott)
  + remove tar-copy.sh and related configure/Makefile chunks, since the
    Ada95 binding is now installed using rules in Ada95/src.

20100703
  + continue integrating changes to use gnatmake project files in Ada95
  + add/use configure check to turn on project rules for Ada95/src.
  + revert the vfork change from 20100130, since it does not work.

20100626
  + continue integrating changes to use gnatmake project files in Ada95
  + old gnatmake (3.15) does not produce libraries using project-file;
    work around by adding script to generate alternate makefile.

20100619
  + continue integrating changes to use gnatmake project files in Ada95
  + add configure --with-ada-sharedlib option, for the test_make rule.
  + move Ada95-related logic into aclocal.m4, since additional checks
    will be needed to distinguish old/new implementations of gnat.

20100612
  + start integrating changes to use gnatmake project files in Ada95 tree
  + add test_make / test_clean / test_install rules in Ada95/src
  + change install-path for adainclude directory to /usr/share/ada (was
    /usr/lib/ada).
  + update Ada95/configure.
  + add mlterm+256color entry, for mlterm 3.0.0 -TD
  + modify test/configure to use macros to ensure consistent order
    of updating LIBS variable.

20100605
  + change search order of options for Solaris in CF_SHARED_OPTS, to
    work with 64-bit compiles.
  + correct quoting of assignment in CF_SHARED_OPTS case for aix
    (cf: 20081227)

20100529
  + regenerated html documentation.
  + modify test/configure to support pkg-config for checking X libraries
    used by PDCurses.
  + add/use configure macro CF_ADD_LIB to force consistency of
    assignments to $LIBS, etc.
  + fix configure script for combining --with-pthread
    and --enable-weak-symbols options.

20100522
  + correct cross-compiling configure check for CF_MKSTEMP macro, by
    adding a check cache variable set by AC_CHECK_FUNC (report by
    Pierre Labastie).
  + simplify include-dependencies of make_hash and make_keys, to reduce
    the need for setting BUILD_CPPFLAGS in cross-compiling when the
    build- and target-machines differ.
  + repair broken-linker configuration by restoring a definition of SP
    variable to curses.priv.h, and adjusting for cases where sp-funcs
    are used.
  + improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment
    variable to override (prompted by report by Pablo Cazallas).

20100515
  + add configure option --enable-pthreads-eintr to control whether the
    new EINTR feature is enabled.
  + modify logic in pthread configuration to allow EINTR to interrupt
    a read operation in wgetch() (Novell #540571, patch by Werner Fink).
  + drop mkdirs.sh, use "mkdir -p".
  + add configure option --disable-libtool-version, to use the
    "-version-number" feature which was added in libtool 1.5 (report by
    Peter Haering).  The default value for the option uses the newer
    feature, which makes libraries generated using libtool compatible
    with the standard builds of ncurses.
  + updated test/configure to match configure script macros.
  + fixes for configure script from lynx changes:
    + improve CF_FIND_LINKAGE logic for the case where a function is
      found in predefined libraries.
    + revert part of change to CF_HEADER (cf: 20100424)

20100501
  + correct limit-check in wredrawln, accounting for begy/begx values
    (patch by David Benjamin).
  + fix most compiler warnings from clang.
  + amend build-fix for OpenSolaris, to ensure that a system header is
    included in curses.h before testing feature symbols, since they
    may be defined by that route.

20100424
  + fix some strict compiler warnings in ncurses library.
  + modify configure macro CF_HEADER_PATH to not look for variations in
    the predefined include directories.
  + improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work
    with gcc 4.x's c89 alias, which gives warning messages for cases
    where older versions would produce an error.

20100417
  + modify _nc_capcmp() to work with cancelled strings.
  + correct translation of "^" in _nc_infotocap(), used to transform
    terminfo to termcap strings
  + add configure --disable-rpath-hack, to allow disabling the feature
    which adds rpath options for libraries in unusual places.
  + improve CF_RPATH_HACK_2 by checking if the rpath option for a given
    directory was already added.
  + improve CF_RPATH_HACK_2 by using ldd to provide a standard list of
    directories (which will be ignored).

20100410
  + improve win_driver.c handling of mouse:
    + discard motion events
    + avoid calling _nc_timed_wait when there is a mouse event
    + handle 4th and "rightmost" buttons.
  + quote substitutions in CF_RPATH_HACK_2 configure macro, needed for
    cases where there are embedded blanks in the rpath option.

20100403
  + add configure check for exctags vs ctags, to work around pkgsrc.
  + simplify logic in _nc_get_screensize() to make it easier to see how
    environment variables may override system- and terminfo-values
    (prompted by discussion with Igor Bujna).
  + make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose.
  + improve handling of color-pairs embedded in attributes for the
    extended-colors configuration.
  + modify MKlib_gen.sh to build link_test with sp-funcs.
  + build-fixes for OpenSolaris aka Solaris 11, for wide-character
    configuration as well as for rpath feature in *-config scripts.

20100327
  + refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more
    reusable.
  + improve configure CF_REGEX, similar fixes.
  + improve configure CF_FIND_LINKAGE, adding a check between the system
    (default) and explicit paths, where we can find the entrypoint in the
    given library.
  + add check if Gpm_Open() returns a -2, e.g., for "xterm".  This is
    normally suppressed but can be overridden using $NCURSES_GPM_TERMS.
    Ensure that Gpm_Close() is called in this case.

20100320
  + rename atari and st52 terminfo entries to atari-old, st52-old, use
    newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan
    Hourihane).

20100313
  + modify install-rule for manpages so that *-config manpages will
    install when building with --srcdir (report by Sven Joachim).
  + modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks
    option is not the same as --disable-leaks (GenToo #305889).
  + modify #define's for build-compiler to suppress cchar_t symbol from
    compile of make_hash and make_keys, improving cross-compilation of
    ncursesw (report by Bernhard Rosenkraenzer).
  + modify CF_MAN_PAGES configure macro to replace all occurrences of
    TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders
    Kaseorg).

20100306
  + generate manpages for the *-config scripts, adapted from help2man
    (suggested by Sven Joachim).
  + use va_copy() in _nc_printf_string() to avoid conflicting use of
    va_list value in _nc_printf_length() (report by Wim Lewis).

20100227
  + add Ada95/configure script, to use in tar-file created by
    Ada95/make-tar.sh
  + fix typo in wresize.3x (patch by Tim van der Molen).
  + modify screen-bce.XXX entries to exclude ech, since screen's color
    model does not clear with color for that feature -TD

20100220
  + add make-tar.sh scripts to Ada95 and test subdirectories to help with
    making those separately distributable.
  + build-fix for static libraries without dlsym (Debian #556378).
  + fix a syntax error in man/form_field_opts.3x (patch by Ingo
    Schwarze).

20100213
  + add several screen-bce.XXX entries -TD

20100206
  + update mrxvt terminfo entry -TD
  + modify win_driver.c to support mouse single-clicks.
  + correct name for termlib in ncurses*-config, e.g., if it is renamed
    to provide a single file for ncurses/ncursesw libraries (patch by
    Miroslav Lichvar).

20100130
  + use vfork in test/ditto.c if available (request by Mike Frysinger).
  + miscellaneous cleanup of manpages.
  + fix typo in curs_bkgd.3x (patch by Tim van der Molen).
  + build-fix for --srcdir (patch by Miroslav Lichvar).

20100123
  + for term-driver configuration, ensure that the driver pointer is
    initialized in setupterm so that terminfo/termcap programs work.
  + amend fix for Debian #542031 to ensure that wattrset() returns only
    OK or ERR, rather than the attribute value (report by Miroslav
    Lichvar).
  + reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making
    _nc_screen_of() compatible between normal/wide libraries again (patch
    by Miroslav Lichvar)
  + review/fix include-dependencies in modules files (report by Miroslav
    Lichvar).

20100116
  + modify win_driver.c to initialize acs_map for win32 console, so
    that line-drawing works.
  + modify win_driver.c to initialize TERMINAL struct so that programs
    such as test/lrtest.c and test/ncurses.c which test string
    capabilities can run.
  + modify term-driver modules to eliminate forward-reference
    declarations.

20100109
  + modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS
    consistently to add new -D's while removing duplicates.
  + modify a few configure macros to consistently put new options
    before older in the list.
  + add tiparm(), based on review of X/Open Curses Issue 7.
  + minor documentation cleanup.
  + update config.guess, config.sub from
    (caveat - its maintainer put 2010 copyright date on files dated 2009)

20100102
  + minor improvement to tic's checking of similar SGR's to allow for the
    most common case of SGR 0.
  + modify getmouse() to act as its documentation implied, returning on
    each call the preceding event until none are left.  When no more
    events remain, it will return ERR.

20091227
  + change order of lookup in progs/tput.c, looking for terminfo data
    first.  This fixes a confusion between termcap "sg" and terminfo
    "sgr" or "sgr0", originally from 990123 changes, but exposed by
    20091114 fixes for hashing.  With this change, only "dl" and "ed" are
    ambiguous (Mandriva #56272).

20091226
  + add bterm terminfo entry, based on bogl 0.1.18 -TD
  + minor fix to rxvt+pcfkeys terminfo entry -TD
  + build-fixes for Ada95 tree for gnat 4.4 "style".

20091219
  + remove old check in mvderwin() which prevented moving a derived
    window whose origin happened to coincide with its parent's origin
    (report by Katarina Machalkova).
  + improve test/ncurses.c to put mouse droppings in the proper window.
  + update minix terminfo entry -TD
  + add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler)

20091212
  + correct transfer of multicolumn characters in multirow
    field_buffer(), which stopped at the end of the first row due to
    filling of unused entries in a cchar_t array with nulls.
  + updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta)
  + modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character
    nulls.
  + use strdup() in set_menu_mark(), restore .marklen struct member on
    failure.
  + eliminate clause 3 from the UCB copyrights in read_termcap.c and
    tset.c per
    (patch by Nicholas Marriott).
  + replace a malloc in tic.c with strdup, checking for failure (patch by
    Nicholas Marriott).
  + update config.guess, config.sub from

20091205
  + correct layout of working window used to extract data in
    wide-character configured by set_field_buffer (patch by Rafael
    Garrido Fernandez)
  + improve some limit-checks related to filename length in reading and
    writing terminfo entries.
  + ensure that filename is always filled in when attempting to read
    a terminfo entry, so that infocmp can report the filename (patch
    by Nicholas Marriott).

20091128
  + modify mk-1st.awk to allow tinfo library to be built when term-driver
    is enabled.
  + add error-check to configure script to ensure that sp-funcs is
    enabled if term-driver is, since some internal interfaces rely upon
    this.
3024 3025 20091121 3026 + fix case where progs/tput is used while sp-funcs is configure; this 3027 requires save/restore of out-character function from _nc_prescreen 3028 rather than the SCREEN structure (report by Charles Wilson). 3029 + fix typo in man/curs_trace.3x which caused incorrect symbolic links 3030 + improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT. 3031 3032 20091114 3033 3034 + updated man/curs_trace.3x 3035 + limit hashing for termcap-names to 2-characters (Ubuntu #481740). 3036 + change a variable name in lib_newwin.c to make it clearer which 3037 value is being freed on error (patch by Nicholas Marriott). 3038 3039 20091107 3040 + improve test/ncurses.c color-cycling test by reusing attribute- 3041 and color-cycling logic from the video-attributes screen. 3042 + add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form 3043 library which help make it compatible with interop applications 3044 (patch by Juergen Pfeifer). 3045 + add configure option --enable-interop, for integrating changes 3046 for generic/interop support to form-library by Juergen Pfeifer 3047 3048 20091031 3049 + modify use of $CC environment variable which is defined by X/Open 3050 as a curses feature, to ignore it if it is not a single character 3051 (prompted by discussion with Benjamin C W Sittler). 3052 + add START_TRACE in slk_init 3053 + fix a regression in _nc_ripoffline which made test/ncurses.c not show 3054 soft-keys, broken in 20090927 merging. 3055 + change initialization of "hidden" flag for soft-keys from true to 3056 false, broken in 20090704 merging (Ubuntu #464274). 3057 + update nsterm entries (patch by Benjamin C W Sittler, prompted by 3058 discussion with Fabian Groffen in GenToo #206201). 3059 + add test/xterm-256color.dat 3060 3061 20091024 3062 + quiet some pedantic gcc warnings. 
3063 + modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a 3064 SIGWINCH, and discard that value, to avoid confusing application 3065 (patch by Eygene Ryabinkin, FreeBSD #136223). 3066 3067 20091017 3068 + modify handling of $PKG_CONFIG_LIBDIR to use only the first item in 3069 a possibly colon-separated list (Debian #550716). 3070 3071 20091010 3072 + supply a null-terminator to buffer in _nc_viswibuf(). 3073 + fix a sign-extension bug in unget_wch() (report by Mike Gran). 3074 + minor fixes to error-returns in default function for tputs, as well 3075 as in lib_screen.c 3076 3077 20091003 3078 + add WACS_xxx definitions to wide-character configuration for thick- 3079 and double-lines (discussion with Slava Zanko). 3080 + remove unnecessary kcan assignment to ^C from putty (Sven Joachim) 3081 + add ccc and initc capabilities to xterm-16color -TD 3082 > patch by Benjamin C W Sittler: 3083 + add linux-16color 3084 + correct initc capability of linux-c-nc end-of-range 3085 + similar change for dg+ccc and dgunix+ccc 3086 3087 20090927 3088 + move leak-checking for comp_captab.c into _nc_leaks_tinfo() since 3089 that module since 20090711 is in libtinfo. 3090 + add configure option --enable-term-driver, to allow compiling with 3091 terminal-driver. That is used in MinGW port, and (being somewhat 3092 more complicated) is an experimental alternative to the conventional 3093 termlib internals. Currently, it requires the sp-funcs feature to 3094 be enabled. 3095 + completed integrating "sp-funcs" by Juergen Pfeifer in ncurses 3096 library (some work remains for forms library). 3097 3098 20090919 3099 + document return code from define_key (report by Mike Gran). 3100 + make some symbolic links in the terminfo directory-tree shorter 3101 (patch by Daniel Jacobowitz, forwarded by Sven Joachim).). 3102 + fix some groff warnings in terminfo.5, etc., from recent Debian 3103 changes. 
3104 + change ncv and op capabilities in sun-color terminfo entry to match 3105 Sun's entry for this (report by Laszlo Peter). 3106 + improve interix smso terminfo capability by using reverse rather than 3107 bold (report by Kristof Zelechovski). 3108 3109 20090912 3110 + add some test programs (and make these use the same special keys 3111 by sharing linedata.h functions): 3112 test/test_addstr.c 3113 test/test_addwstr.c 3114 test/test_addchstr.c 3115 test/test_add_wchstr.c 3116 + correct internal _nc_insert_ch() to use _nc_insert_wch() when 3117 inserting wide characters, since the wins_wch() function that it used 3118 did not update the cursor position (report by Ciprian Craciun). 3119 3120 20090906 3121 + fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not 3122 work. 3123 + add null-pointer checks to other opaque-functions. 3124 + add is_pad() and is_subwin() functions for opaque access to WINDOW 3125 (discussion with Mark Dickinson). 3126 + correct merge to lib_newterm.c, which broke when sp-funcs was 3127 enabled. 3128 3129 20090905 3130 + build-fix for building outside source-tree (report by Sven Joachim). 3131 + fix Debian lintian warning for man/tabs.1 by making section number 3132 agree with file-suffix (report by Sven Joachim). 3133 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3134 3135 20090829 3136 + workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on 3137 amd64 (Debian #542031). 3138 + fix typo in curs_mouse.3x (Debian #429198). 3139 3140 20090822 3141 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3142 3143 20090815 3144 + correct use of terminfo capabilities for initializing soft-keys, 3145 broken in 20090510 merging. 3146 + modify wgetch() to ensure it checks SIGWINCH when it gets an error 3147 in non-blocking mode (patch by Clemens Ladisch). 3148 + use PATH_SEPARATOR symbol when substituting into run_tic.sh, to 3149 help with builds on non-Unix platforms such as OS/2 EMX. 
3150 + modify scripting for misc/run_tic.sh to test configure script's 3151 $cross_compiling variable directly rather than comparing host/build 3152 compiler names (prompted by comment in GenToo #249363). 3153 + fix configure script option --with-database, which was coded as an 3154 enable-type switch. 3155 + build-fixes for --srcdir (report by Frederic L W Meunier). 3156 3157 20090808 3158 + separate _nc_find_entry() and _nc_find_type_entry() from 3159 implementation details of hash function. 3160 3161 20090803 3162 + add tabs.1 to man/man_db.renames 3163 + modify lib_addch.c to compensate for removal of wide-character test 3164 from unctrl() in 20090704 (Debian #539735). 3165 3166 20090801 3167 + improve discussion in INSTALL for use of system's tic/infocmp for 3168 cross-compiling and building fallbacks. 3169 + modify test/demo_termcap.c to correspond better to options in 3170 test/demo_terminfo.c 3171 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3172 + fix logic for 'V' in test/ncurses.c tests f/F. 3173 3174 20090728 3175 + correct logic in tigetnum(), which caused tput program to treat all 3176 string capabilities as numeric (report by Rajeev V Pillai, 3177 cf: 20090711). 3178 3179 20090725 3180 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3181 3182 20090718 3183 + fix a null-pointer check in _nc_format_slks() in lib_slk.c, from 3184 20090704 changes. 3185 + modify _nc_find_type_entry() to use hashing. 3186 + make CCHARW_MAX value configurable, noting that changing this would 3187 change the size of cchar_t, and would be ABI-incompatible. 3188 + modify test-programs, e.g,. test/view.c, to address subtle 3189 differences between Tru64/Solaris and HPUX/AIX getcchar() return 3190 values. 3191 + modify length returned by getcchar() to count the trailing null 3192 which is documented in X/Open (cf: 20020427). 3193 + fixes for test programs to build/work on HPUX and AIX, etc. 
3194 3195 20090711 3196 + improve performance of tigetstr, etc., by using hashing code from tic. 3197 + minor fixes for memory-leak checking. 3198 + add test/demo_terminfo, for comparison with demo_termcap 3199 3200 20090704 3201 + remove wide-character checks from unctrl() (patch by Clemens Ladisch). 3202 + revise wadd_wch() and wecho_wchar() to eliminate dependency on 3203 unctrl(). 3204 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3205 3206 20090627 3207 + update llib-lncurses[wt] to use sp-funcs. 3208 + various code-fixes to build/work with --disable-macros configure 3209 option. 3210 + add several new files from Juergen Pfeifer which will be used when 3211 integration of "sp-funcs" is complete. This includes a port to 3212 MinGW. 3213 3214 20090613 3215 + move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to 3216 make includes of term.h without curses.h work (report by "Nix"). 3217 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3218 3219 20090607 3220 + fix a regression in lib_tputs.c, from ongoing merges. 3221 3222 20090606 3223 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3224 3225 20090530 3226 + fix an infinite recursion when adding a legacy-coding 8-bit value 3227 using insch() (report by Clemens Ladisch). 3228 + free home-terminfo string in del_curterm() (patch by Dan Weber). 3229 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3230 3231 20090523 3232 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3233 3234 20090516 3235 + work around antique BSD game's manipulation of stdscr, etc., versus 3236 SCREEN's copy of the pointer (Debian #528411). 3237 + add a cast to wattrset macro to avoid compiler warning when comparing 3238 its result against ERR (adapted from patch by Matt Kraii, Debian 3239 #528374). 3240 3241 20090510 3242 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 
3243 3244 20090502 3245 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3246 + add vwmterm terminfo entry (patch by Bryan Christ). 3247 3248 20090425 3249 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3250 3251 20090419 3252 + build fix for _nc_free_and_exit() change in 20090418 (report by 3253 Christian Ebert). 3254 3255 20090418 3256 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3257 3258 20090411 3259 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3260 This change finishes merging for menu and panel libraries, does 3261 part of the form library. 3262 3263 20090404 3264 + suppress configure check for static/dynamic linker flags for gcc on 3265 Darwin (report by Nelson Beebe). 3266 3267 20090328 3268 + extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving 3269 function key definitions from emx-base for consistency -TD 3270 + correct missing final 'p' in pfkey capability of ansi.sys-old (report 3271 by Kalle Olavi Niemitalo). 3272 + improve test/ncurses.c 'F' test, show combining characters in color. 3273 + quiet a false report by cppcheck in c++/cursesw.cc by eliminating 3274 a temporary variable. 3275 + use _nc_doalloc() rather than realloc() in a few places in ncurses 3276 library to avoid leak in out-of-memory condition (reports by William 3277 Egert and Martin Ettl based on cppcheck tool). 3278 + add --with-ncurses-wrap-prefix option to test/configure (discussion 3279 with Charles Wilson). 3280 + use ncurses*-config scripts if available for test/configure. 3281 + update test/aclocal.m4 and test/configure 3282 > patches by Charles Wilson: 3283 + modify CF_WITH_LIBTOOL configure check to allow unreleased libtool 3284 version numbers (e.g. which include alphabetic chars, as well as 3285 digits, after the final '.'). 
3286 + improve use of -no-undefined option for libtool by setting an 3287 intermediate variable LT_UNDEF in the configure script, and then 3288 using that in the libtool link-commands. 3289 + fix an missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk 3290 from 20090321 changes. 3291 + improve mk-1st.awk script by writing separate cases for the 3292 LIBTOOL_LINK command, depending on which library (ncurses, ticlib, 3293 termlib) is to be linked. 3294 + modify configure.in to allow broken-linker configurations, not just 3295 enable-reentrant, to set public wrap prefix. 3296 3297 20090321 3298 + add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to 3299 build with tic and term libraries (patch by Charles Wilson). 3300 + add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX 3301 (report by Charles Wilson). 3302 + fix definition for c++/Makefile.in's SHLIB_LIST, which did not list 3303 the form, menu or panel libraries (patch by Charles Wilson). 3304 + add configure option --with-wrap-prefix to allow setting the prefix 3305 for functions used to wrap global variables to something other than 3306 "_nc_" (discussion with Charles Wilson). 3307 3308 20090314 3309 + modify scripts to generate ncurses*-config and pc-files to add 3310 dependency for tinfo library (patch by Charles Wilson). 3311 + improve comparison of program-names when checking for linked flavors 3312 such as "reset" by ignoring the executable suffix (reports by Charles 3313 Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing 3314 list). 3315 + suppress configure check for static/dynamic linker flags for gcc on 3316 Solaris 10, since gcc is confused by absence of static libc, and 3317 does not switch back to dynamic mode before finishing the libraries 3318 (reports by Joel Bertrand, Alan Pae). 3319 + minor fixes to Intel compiler warning checks in configure script. 3320 + modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works. 
3321 + modify set_curterm() to make broken-linker configuration work with 3322 changes from 20090228 (report by Charles Wilson). 3323 3324 20090228 3325 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3326 + modify declaration of cur_term when broken-linker is used, but 3327 enable-reentrant is not, to match pre-5.7 (report by Charles Wilson). 3328 3329 20090221 3330 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3331 3332 20090214 3333 + add configure script --enable-sp-funcs to enable the new set of 3334 extended functions. 3335 + start integrating patches by Juergen Pfeifer: 3336 + add extended functions which specify the SCREEN pointer for several 3337 curses functions which use the global SP (these are incomplete; 3338 some internals work is needed to complete these). 3339 + add special cases to configure script for MinGW port. 3340 3341 20090207 3342 + update several configure macros from lynx changes 3343 + append (not prepend) to CFLAGS/CPPFLAGS 3344 + change variable from PATHSEP to PATH_SEPARATOR 3345 + improve install-rules for pc-files (patch by Miroslav Lichvar). 3346 + make it work with $DESTDIR 3347 + create the pkg-config library directory if needed. 3348 3349 20090124 3350 + modify init_pair() to allow caller to create extra color pairs beyond 3351 the color_pairs limit, which use default colors (request by Emanuele 3352 Giaquinta). 3353 + add misc/terminfo.tmp and misc/*.pc to "sources" rule. 3354 + fix typo "==" where "=" is needed in ncurses-config.in and 3355 gen-pkgconfig.in files (Debian #512161). 3356 3357 20090117 3358 + add -shared option to MK_SHARED_LIB when -Bsharable is used, for 3359 *BSD's, without which "main" might be one of the shared library's 3360 dependencies (report/analysis by Ken Dickey). 3361 + modify waddch_literal(), updating line-pointer after a multicolumn 3362 character is found to not fit on the current row, and wrapping is 3363 done. 
Since the line-pointer was not updated, the wrapped 3364 multicolumn character was written to the beginning of the current row 3365 (cf: 20041023, reported by "Nick" regarding problem with ncmpc 3366). 3367 3368 20090110 3369 + add screen.Eterm terminfo entry (GenToo #124887) -TD 3370 + modify adacurses-config to look for ".ali" files in the adalib 3371 directory. 3372 + correct install for Ada95, which omitted libAdaCurses.a used in 3373 adacurses-config 3374 + change install for adacurses-config to provide additional flavors 3375 such as adacursesw-config, for ncursesw (GenToo #167849). 3376 3377 20090105 3378 + remove undeveloped feature in ncurses-config.in for setting 3379 prefix variable. 3380 + recent change to ncurses-config.in did not take into account the 3381 --disable-overwrite option, which sets $includedir to the 3382 subdirectory and using just that for a -I option does not work - fix 3383 (report by Frederic L W Meunier). 3384 3385 20090104 3386 + modify gen-pkgconfig.in to eliminate a dependency on rpath when 3387 deciding whether to add $LIBS to --libs output; that should be shown 3388 for the ncurses and tinfo libraries without taking rpath into 3389 account. 3390 + fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk, 3391 used in static libraries (report by Marty Jack). 3392 3393 20090103 3394 + add a configure-time check to pick a suitable value for 3395 CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen). 3396 + add configure --with-pkg-config and --enable-pc-files options, along 3397 with misc/gen-pkgconfig.in which can be used to generate ".pc" files 3398 for pkg-config (request by Jan Engelhardt). 3399 + use $includedir symbol in misc/ncurses-config.in, add --includedir 3400 option. 3401 + change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a 3402 configure check to detect whether a "-" is needed before "ar" 3403 options. 
3404 + update config.guess, config.sub from 3405 3406 3407 20081227 3408 + modify mk-1st.awk to work with extra categories for tinfo library. 3409 + modify configure script to allow building shared libraries with gcc 3410 on AIX 5 or 6 (adapted from patch by Lital Natan). 3411 3412 20081220 3413 + modify to omit the opaque-functions from lib_gen.o when 3414 --disable-ext-funcs is used. 3415 + add test/clip_printw.c to illustrate how to use printw without 3416 wrapping. 3417 + modify ncurses 'F' test to demo wborder_set() with colored lines. 3418 + modify ncurses 'f' test to demo wborder() with colored lines. 3419 3420 20081213 3421 + add check for failure to open hashed-database needed for db4.6 3422 (GenToo #245370). 3423 + corrected --without-manpages option; previous change only suppressed 3424 the auxiliary rules install.man and uninstall.man 3425 + add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from 3426 GenToo #250454). 3427 + fixes from NetBSD port at 3428 3429 patch-ac (build-fix for DragonFly) 3430 patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config). 3431 + improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH 3432 by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the 3433 search-lists. 3434 + correct title string for keybound manpage (patch by Frederic Culot, 3435 OpenBSD documentation/6019), 3436 3437 20081206 3438 + move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to 3439 work for progs/clear, progs/tabs, etc. 3440 + correct buffer-size after internal resizing of wide-character 3441 set_field_buffer(), broken in 20081018 changes (report by Mike Gran).
Search-as-you-type is an interesting feature of modern search engines that allows users to have instant feedback related to their search while they are still typing a query. In this tutorial, we discuss how to implement this feature in a custom search engine built with Elasticsearch and Python/Flask on the backend side, and AngularJS for the frontend.

The full code is available at. If you go through the code, have a look at the readme file first, in particular to understand the limitations of the code. This first part describes the details of the backend, i.e. Elasticsearch and Python/Flask.

Update: the second part of this tutorial has been published and it discusses the front-end in AngularJS.

Overall Architecture

As this demo was prototyped during International Beer Day 2015, we'll build a small database of beers, each of which will be defined by a name, the name of its producer, a list of beer styles and a textual description. The idea is to make all these data available for search in one coherent interface, so you can just type in the name of your favourite brew, or aspects like "light and fruity".

Our system is made up of three components:

- Elasticsearch: used as data storage and for its search capabilities.
- Python/Flask Microservice: the backend component that has access to Elasticsearch and provides a RESTful API for the frontend.
- AngularJS UI: the frontend that requests data from the backend microservice.

There are two types of documents – beers and styles. While styles are simple strings with the style name, beers are more complex. This is an example:

{
    "name": "Raspberry Wheat Beer",
    "styles": ["Wheat Ale", "Fruit Beer"],
    "abv": 5.0,
    "producer": "Meantime Brewing London",
    "description": "Based on a pale, lightly hopped wheat beer, the refreshingly crisp fruitiness, aroma and rich colour come from the addition of fresh raspberry puree during maturation."
}

(the description is taken from the producer's website in August 2015).
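Later in the article the sample data are indexed through Elasticsearch's _bulk API. As a sketch of what that payload looks like for a document such as the one above, here is a small Python helper; the helper name itself is made up for illustration, while the index and type names (cheermeapp, beers) follow the ones used in this article.

```python
import json

def bulk_index_lines(index, doc_type, docs):
    """Build the newline-delimited JSON body expected by Elasticsearch's
    _bulk API: one action line followed by one source line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    # the bulk body must end with a trailing newline
    return "\n".join(lines) + "\n"

beer = {
    "name": "Raspberry Wheat Beer",
    "styles": ["Wheat Ale", "Fruit Beer"],
    "abv": 5.0,
    "producer": "Meantime Brewing London",
}

payload = bulk_index_lines("cheermeapp", "beers", [beer])
print(payload)
```

The resulting string could then be POSTed to the _bulk endpoint, which is essentially what the Makefile's indexing step does with the sample data file.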
Setting Up Elasticsearch

The mapping for the Elasticsearch types is fairly straightforward. The key detail in order to enable the search-as-you-type feature is how to perform partial matching over strings. One option is to use wildcard queries over not_analyzed fields, similar to a ... WHERE field LIKE '%foobar%' query in SQL, but this is usually too expensive. Another option is to change the analysis chain in order to also index partial strings: this will result in a bigger index but in faster queries.

We can achieve our goal by using the edge_ngram filter as part of a custom analyser, e.g.:

{
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "analysis": {
            "filter": {
                "autocomplete_filter": {
                    "type": "edge_ngram",
                    "min_gram": 2,
                    "max_gram": 15
                }
            },
            "analyzer": {
                "autocomplete": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": [
                        "lowercase",
                        "autocomplete_filter"
                    ]
                }
            }
        }
    }
}

In this example, the custom filter will allow us to index substrings of 2-to-15 characters. You can customise these boundaries, but indexing unigrams (min_gram: 1) will probably cause any query to match any document, and words longer than 15 characters are rarely observed (e.g. we're not dealing with long compounds).

Once the custom analysis chain is defined, the mapping is easy:

{
    "mappings": {
        "beers": {
            "properties": {
                "name": {"type": "string", "index_analyzer": "autocomplete", "search_analyzer": "standard"},
                "styles": {"type": "string", "index_analyzer": "autocomplete", "search_analyzer": "standard"},
                "abv": {"type": "float"},
                "producer": {"type": "string", "index_analyzer": "autocomplete", "search_analyzer": "standard"},
                "description": {"type": "string", "index_analyzer": "autocomplete", "search_analyzer": "standard"}
            }
        },
        "styles": {
            "properties": {
                "name": {"type": "string", "index": "not_analyzed"}
            }
        }
    }
}

Populating Elasticsearch

Assuming you have Elasticsearch up-and-running locally on localhost:9200 (the default), you can simply type make index from the demo folder.
This will firstly try to delete an index called cheermeapp (you'll see a missing index error the first time, as there is of course no index yet). Secondly, the index is recreated by pushing the mapping file to Elasticsearch, and finally some data are indexed using the _bulk API. If you want to see some data, you can now type:

curl -XPOST -d '{"query": {"match_all": {}}}'

A Python Microservice with Flask

As the Elasticsearch service is by default open to any connection, it is common practice to put it behind a custom web-service. Luckily, Flask and its Flask-RESTful extension allow us to quickly set up a RESTful microservice which exposes some useful endpoints. These endpoints will then be queried by the frontend.

If you're following the code from the repo, the recommendation is to set up a local virtualenv as described in the readme, in order to install the dependencies locally.

You can see the full code for the backend microservice in the backend folder. In particular, in backend/__init__.py we declare the Flask application as:

from flask import Flask
from flask_restful import reqparse, Resource, Api
from flask.ext.cors import CORS
from . import config
import requests
import json

app = Flask(__name__)
CORS(app)  # required for Cross-origin Request Sharing
api = Api(app)

By setting up the backend app as a Python package (a folder with an __init__.py file), the script to run this app is extremely simple:

# runbackend.py
from backend import app

if __name__ == '__main__':
    app.run(debug=True)

This code just sets up an empty web-service: we need to implement the endpoints and the related resources. One nice aspect of Flask-RESTful is that it allows you to define the resources as Python classes, adding the endpoints with minimal effort. For example, in backend/__init__.py we can continue defining the following:

class Beer(Resource):
    def get(self, beer_id):
        # the base URL for a "beers" object in Elasticsearch, e.g.
        #<beer_id>
        url = config.es_base_url['beers']+'/'+beer_id
        # query Elasticsearch
        resp = requests.get(url)
        data = resp.json()
        # return the full Elasticsearch object as a result
        beer = data['_source']
        return beer

    def delete(self, beer_id):
        # same as above
        url = config.es_base_url['beers']+'/'+beer_id
        # query Elasticsearch
        resp = requests.delete(url)
        # return the response
        data = resp.json()
        return data

# The API URLs all start with /api/v1, in case we need to implement different versions later
api.add_resource(Beer, config.api_base_url+'/beers/<beer_id>')

class BeerList(Resource):
    def get(self):
        # same as above
        url = config.es_base_url['beers']+'/_search'
        # we retrieve all the beers (well, at least the first 100)
        # Limitation: pagination to be implemented
        query = {
            "query": {
                "match_all": {}
            },
            "size": 100
        }
        # query Elasticsearch
        resp = requests.post(url, data=json.dumps(query))
        data = resp.json()
        # build an array of results and return it
        beers = []
        for hit in data['hits']['hits']:
            beer = hit['_source']
            beer['id'] = hit['_id']
            beers.append(beer)
        return beers

api.add_resource(BeerList, config.api_base_url+'/beers')

The above code implements the GET and DELETE methods for /api/v1/beers/, which respectively retrieve and delete a specific beer, and the GET method for /api/v1/beers, which retrieves the full list of beers. In the repo, you can also observe the POST method implemented on the BeerList class, which allows you to create a new beer.

Design note: given that create-read-update operations, as well as the search, will work on the same data model, it's probably more sensible to de-couple the object model from the endpoint definition, e.g. by defining a BeerModel and calling it from the related resources.

From the repo, you can also see the implementation of the /api/v1/styles endpoint. Once the backend is running, the service will be accessible at localhost:5000 (the default option for Flask).
You can test it with:

curl -XGET

The Search Functionality

Besides serving "items", our microservice also incorporates a search functionality:

class Search(Resource):
    def get(self):
        # parse the query: ?q=[something]
        parser.add_argument('q')
        query_string = parser.parse_args()
        # base search URL
        url = config.es_base_url['beers']+'/_search'
        # query Elasticsearch
        query = {
            "query": {
                "multi_match": {
                    "fields": ["name", "producer", "description", "styles"],
                    "query": query_string['q'],
                    "type": "cross_fields",
                    "use_dis_max": False
                }
            },
            "size": 100
        }
        resp = requests.post(url, data=json.dumps(query))
        data = resp.json()
        # build an array of results
        beers = []
        for hit in data['hits']['hits']:
            beer = hit['_source']
            beer['id'] = hit['_id']
            beers.append(beer)
        return beers

api.add_resource(Search, config.api_base_url+'/search')

The above code makes a /api/v1/search endpoint available for custom queries. The interface with Elasticsearch is a custom multi_match and cross_fields query, which searches over the name, producer, styles and description fields, i.e. all the textual fields.

By default, Elasticsearch performs multi_match queries as best_fields, which means only the field with the best score will give the overall score for a particular document. In our case, we prefer to have all the fields contribute to the final score. In particular, we want to avoid longer fields like the description being penalised by the document length normalisation.

Design note: notice how we're duplicating the same code at the end of Search.get() and BeerList.get(); we should really decouple this.

You can test the search service with:

curl -XGET
# will retrieve all the beers matching "lon", e.g. containing the string "london"

The next step is to create the frontend to query the microservice and show the results in a nice UI. The implementation is already available in the repo, and will be discussed in the next article.
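Regarding the design note about the duplicated result-building loop in Search.get() and BeerList.get(), one way to decouple it is to extract the loop into a helper that both resources call. The sketch below uses a made-up helper name and a fake Elasticsearch response, so it is an illustration of the refactoring, not code from the repo:

```python
def parse_hits(data):
    """Turn an Elasticsearch search response into a list of beers,
    copying the document id into each result."""
    beers = []
    for hit in data['hits']['hits']:
        beer = dict(hit['_source'])
        beer['id'] = hit['_id']
        beers.append(beer)
    return beers

# a fake response, shaped like the output of Elasticsearch's /_search
fake_response = {
    "hits": {
        "hits": [
            {"_id": "1", "_source": {"name": "Raspberry Wheat Beer"}},
            {"_id": "2", "_source": {"name": "London Pale Ale"}},
        ]
    }
}

print(parse_hits(fake_response))
```

Both resources could then simply end with return parse_hits(data), keeping the response-parsing logic in one place.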
Summary

This article sets up the backend side of a search-as-you-type application. The scenario is the CheerMeApp application, a mini database of beers with names, styles and descriptions. The search application can match any of these fields while the user is still typing, i.e. with partial string matching.

The backend side of the app is based on Elasticsearch for the data storage and search functionality. In particular, by indexing the substrings (n-grams) we allow for partial string matching, increasing the size of the index on disk without hurting query-time performance.

The data storage is "hidden" behind a Python/Flask microservice, which provides endpoints for a client to query. In particular, we have seen how the Flask-RESTful extension allows you to quickly create RESTful applications by simply declaring the resources as Python classes.

The next article will discuss some aspects of the frontend, developed in AngularJS, and how to link it with the backend.
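To make the n-gram indexing discussed in this article concrete, here is a standalone Python sketch that simulates the lowercase and edge_ngram (min 2, max 15) steps of the autocomplete analyser. It is a local simulation for illustration only, not a call to Elasticsearch's actual analysis API:

```python
def edge_ngrams(token, min_gram=2, max_gram=15):
    """Simulate the edge_ngram filter: emit all prefixes of the
    lowercased token between min_gram and max_gram characters."""
    token = token.lower()  # the lowercase filter runs first in the chain
    return [token[:n] for n in range(min_gram, min(len(token), max_gram) + 1)]

# "London" is indexed as all of its 2..6 character prefixes,
# which is why a partial query like "lon" can match it
print(edge_ngrams("London"))  # ['lo', 'lon', 'lond', 'londo', 'london']
```

This is what makes the index bigger but the queries fast: at search time, a partial term like "lon" only needs an exact match against the stored prefixes.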
This blog post shows you, step by step, how to create a Vonage application that can handle an inbound phone call using Python. If you are new to Vonage, your account will be given some initial free credit to help get you started.

Prerequisites

This tutorial assumes you have:

- Python 3 installed
- Access to two phones

Source Code Repository

The source code for this project is available on the Vonage Community GitHub.

Overview

In a Vonage application an inbound call can be handled in various ways depending on the requirement. Here are three simple scenarios:

- Message - the call is out of hours, so you simply play a text-to-speech message.
- Forward call - in this case the call is forwarded to an agent so that the customer can be helped.
- Call waiting - the caller is put on hold, with a message and then soothing music, and then forwarded to an agent when one is available.

You will see how to implement all three of these scenarios in this article. The key is understanding Nexmo Call Control Objects, or NCCOs.

NCCOs

Nexmo Call Control Objects (NCCOs) provide a convenient way to control an inbound call. NCCOs essentially consist of some JSON configuration that describes how to handle the call. There is a detailed reference guide on NCCOs where the many actions that can be carried out are described.

There are three actions required here:

- Scenario 1 - the action is talk.
- Scenario 2 - the action is connect.
- Scenario 3 - the actions are talk, then stream. Then, when the agent can take the call, transfer (via a REST call) to a new NCCO that uses connect to connect the caller to the agent's phone.

NCCO actions can be linked together to meet the needs of more complex use cases. It is possible to do many other things when you handle an inbound call, including record the call, and then later download the recording. Recording calls is not covered in this article, but will be covered in a future blog post.
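As a concrete illustration of linking actions together, here is a sketch of what the on-hold NCCO for scenario 3 could look like as a Python structure: a talk action followed by a stream action that loops hold music. The streamUrl value is a placeholder, not a real audio file, and the wording of the message is invented for this example:

```python
import json

# Hypothetical hold-music URL -- replace with a real, publicly reachable audio file.
HOLD_MUSIC_URL = "https://example.com/hold-music.mp3"

# NCCO for the call-waiting scenario: a spoken message, then looping hold music.
# With "loop": 0 the stream action repeats indefinitely, until the call
# is later transferred to another NCCO.
hold_ncco = [
    {
        "action": "talk",
        "text": "All our agents are busy. Please hold and we will connect you shortly."
    },
    {
        "action": "stream",
        "streamUrl": [HOLD_MUSIC_URL],
        "loop": 0
    }
]

print(json.dumps(hold_ncco, indent=2))
```

Your answer webhook would return this structure as JSON, and the caller stays in the talk-then-music loop until the call is moved elsewhere.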
Install the Vonage CLI

```shell
npm install -g @vonage/cli
vonage config:set --apiKey=XXXXXX --apiSecret=XXXXXX
```

Install the Python Client Library

The client library is a useful tool to have installed if you are working with Vonage and Python. It simplifies the job of making Vonage API calls. In this article it is used to make a single REST API call, "Update Call". You can learn how to install the Python client in its repo. The simple process is to use pip:

```shell
pip install vonage
```

Please read the documentation for more details.

Create a Vonage Voice Application

Create a directory for your project and change into that new directory. Although you can create a Vonage application in the Dashboard, you can also create one on the command line if you have the Vonage CLI installed. After running the creation command, follow the prompts (replace COUNTRYCODE with US, or GB for British numbers, where a country code is required). Then link your Vonage number to the application:

```shell
vonage apps:link [APPLICATION_ID] --number=number
```

Write Your Python Code

The Python code is more or less the same in each scenario; it is mainly the NCCO that provides the different functionality. The code for scenario 3 is a little different in that the inbound call is handled in the usual way, but the agent being busy and then becoming available is simulated with a simple time delay. The code then transfers the waiting call to a new NCCO. The new NCCO connects the inbound caller to the now-free agent.

Scenario 1: Text-to-Speech Message

Add the following to a new file and save it as scenario-1.py:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

ncco = [
    {
        "action": "talk",
        "text": "Hello, our office hours are Monday to Friday nine until five thirty. Please call back then."
    }
]

# The webhook routes were elided in the original listing; they follow the
# same pattern as the scenario 3 code shown later in this article.
@app.route("/webhooks/answer")
def answer_call():
    return jsonify(ncco)

@app.route("/webhooks/event", methods=['POST'])
def events():
    return "200"

if __name__ == '__main__':
    app.run(host="localhost", port=9000)
```

You can run your code locally using:

```shell
python3 scenario-1.py
```

Here's the outline of what happens:

- You dial your Vonage number.
- Vonage receives the call.
- A callback is generated on the Answer webhook URL you specified.
- Your application receives the callback and responds with an NCCO.
- The NCCO controls the call; in this case the action plays a text-to-speech message into the call.
- At the end of the message the call is terminated by Vonage.

Try It Out

Try it out by calling your Vonage number: you should hear the message!

Scenario 2

The code to implement scenario 2 is similar to the first. The main difference is the addition of a new NCCO action, connect, to forward the call to the agent. Create a new file scenario-2.py and add the following code:

```python
from flask import Flask, request, jsonify

NEXMO_NUMBER = "44700000002"
YOUR_SECOND_NUMBER = "447700900001"

app = Flask(__name__)

ncco = [
    {
        "action": "talk",
        "text": "Hello, one moment please, your call is being forwarded to our agent."
    },
    {
        "action": "connect",
        "from": NEXMO_NUMBER,
        "endpoint": [{
            "type": "phone",
            "number": YOUR_SECOND_NUMBER
        }]
    }
]

# The webhook routes were elided in the original listing; they follow the
# same pattern as the scenario 3 code shown later in this article.
@app.route("/webhooks/answer")
def answer_call():
    return jsonify(ncco)

@app.route("/webhooks/event", methods=['POST'])
def events():
    return "200"

if __name__ == '__main__':
    app.run(host="localhost", port=9000)
```

NOTE: Make sure you replace NEXMO_NUMBER and YOUR_SECOND_NUMBER with your own values.

You can run your code locally using:

```shell
python3 scenario-2.py
```

Here's the outline of what happens:

- You dial the Vonage number.
- Vonage receives the call.
- A callback is generated on the Answer webhook URL you specified.
- Your application receives the callback and responds with an NCCO.
- The NCCO controls the call. The first action plays a text-to-speech message into the call.
- When the talk action in the NCCO completes, the connect action is invoked, forwarding the call.
- At this point the customer who called in is connected to the agent in a call, until one of them hangs up.

Try It Out

Try it out by calling your Vonage number. You should hear the message, then your call will be transferred to the second phone you specified (a spare mobile is always handy for testing when working with Vonage)!

Scenario 3

In this scenario a caller is put on hold (listening to music) until an agent becomes available. The waiting call is then forwarded to the agent.
To make things a bit simpler, the code simulates the agent being busy using a timer. After 40 seconds (configurable) the agent "becomes available", and the inbound call is then transferred to the agent.

Here's a summary of what the code does:

- When the call comes in, a suitable message is played.
- As no agent is available, the caller is played soothing audio.
- When an agent becomes free (after a time delay) the current call is transferred to a new NCCO.
- The new NCCO, which is ncco2 in the server code, plays a message and then transfers the call to the agent in the same way you saw in scenario 2.

Add the following code to a new file scenario-3.py:

```python
from flask import Flask, request, jsonify
from threading import Timer
import nexmo

UUID = ""
APPLICATION_ID = "YOUR_APP_ID"
PRIVATE_KEY = "private.key"
TIMEOUT = 40  # Agent becomes available after this period of time

NEXMO_NUMBER = "447009000002"        # Your Vonage number
YOUR_SECOND_NUMBER = "447009000001"  # Your second phone (agent)

audio_url = ""
ncco_url = ""

ncco = [
    {
        "action": "talk",
        "text": "Hello, I'm sorry, but all our agents are helping customers "
                "right now. Please hold, and we will put you through as soon "
                "as possible."
    },
    {
        "action": "stream",
        "streamUrl": [audio_url],
        "loop": 0
    }
]

ncco2 = [
    {
        "action": "talk",
        "text": "Now connecting you. Thanks for waiting."
    },
    {
        "action": "connect",
        "from": NEXMO_NUMBER,
        "endpoint": [{"type": "phone", "number": YOUR_SECOND_NUMBER}]
    }
]

def transfer_call():
    print("Transferring call...")
    client = nexmo.Client(application_id=APPLICATION_ID, private_key=PRIVATE_KEY)
    dest = {"type": "ncco", "url": [ncco_url]}
    response = client.update_call(UUID, action="transfer", destination=dest)

def register_timer_callback():
    t = Timer(TIMEOUT, transfer_call)
    t.start()

register_timer_callback()

app = Flask(__name__)

@app.route("/webhooks/answer")
def answer_call():
    global UUID
    UUID = request.args['uuid']
    print("UUID:====> %s" % UUID)
    return jsonify(ncco)

@app.route("/webhooks/event", methods=['POST'])
def events():
    return "200"

@app.route("/ncco")
def build_ncco():
    return jsonify(ncco2)

if __name__ == '__main__':
    app.run(host="localhost", port=9000)
```

You can also find the latest version of this code on the GitHub repo. There is also a version of this code that uses a GET request on the /agentfree path to transfer the call. That version requires a little manual intervention, in that you need to navigate your browser to localhost:9000/agentfree in order to transfer the call.

NOTE: Make sure you replace NEXMO_NUMBER and YOUR_SECOND_NUMBER with your own values. Also make sure you have a link to some suitable music for audio_url. Your ncco_url value will also depend on Ngrok or your method of deployment.

The transfer of an in-progress call to another NCCO is achieved with the "Update Call" REST API call. This REST API call, with an action of transfer, transfers the call identified by the UUID to a specified NCCO. The "Update Call" API call is made most conveniently using the Python client library, which you will need to have installed. You could specify a static NCCO via a URL here, but in this case a more flexible approach is used, which is to call the server code on the /ncco URL. This method builds the NCCO and responds with the JSON for the new controlling NCCO.
This new NCCO, ncco2 in the server code, performs a connect action as you saw in scenario 2, connecting the caller, who is currently listening to music, to the agent.

You can now run your application locally with the following command:

```shell
python3 scenario-3.py
```

Try It Out

To test it out:

- Call your Vonage number.
- You will hear a message and then music.
- After a short delay you will hear a message saying you will be connected to the agent. You are then connected to the agent in a call.

Summary

We have covered quite a lot of ground in this article, but you should now have a good understanding of how you might control inbound phone calls. We looked at playing a text-to-speech message, playing audio into a call, and forwarding an inbound call to a second number. You also saw how to update a call in progress, and specifically how to transfer control to a new NCCO.

Next Steps

- An interesting project would be to look into recording inbound calls. You can see an example to get started quickly.
- You could look into providing a keypad interface to allow the caller to control the call. You can see an example to get started quickly.

Resources

- You can check out the full documentation for NCCOs to learn many ways to control your call.
- You can learn about the REST API for Voice, known as VAPI.
https://developer.vonage.com/blog/19/03/28/handling-inbound-calls-with-python-dr
Methods

Topics: top-down design; built-in methods; methods that return a value; void methods; programmer-defined methods; scope.

Objectives

At the end of this topic, students should be able to:

- Break a problem into smaller pieces
- Write programs that use built-in methods
- Know how to use methods in the Math library
- Correctly write and use methods in a program
- Describe what scope is and how it affects the execution of a program
- Effectively use the pseudo-code programming process

You have already seen methods used in a couple of different places. In this set of slides we will explore the use of methods as a way of breaking a problem into smaller pieces, where each piece is easier to solve.

At this point you have learned to write quite complex programs that contain decisions and loops, but most of your programs are still quite small and easy to manage. What if I gave you an assignment to write a console program that would contain 50,000 lines of code? A problem that is hard to solve can be broken into several smaller problems, each of which is easier to solve.

We do this because it is easier to understand what goes on in a small block (piece) of code, because we can re-use the same block of code many times from within our program, and because it allows a team of programmers to work on different parts of a program in parallel. We call this functional decomposition: breaking the program down into more manageable blocks (pieces). In C#, these smaller blocks are called methods. We often write a program as a series of pieces or blocks.

You have already written quite a few methods. In the graphical user interface programs that you have written, the event handlers are methods.

As an example, consider a program to play a dice game. Let's do a top-down design.
- Tell the user what we are going to do
- Declare some variables
- Roll the dice
- Display the results
- See if the user wants to play again

You could write all of this code in one big, long Main() routine. Or you can break the problem up into smaller pieces and write a method to do each piece.

For "Display the results", the design expands to: if boxcars, display "boxcars"; if snake eyes, display "snake-eyes"; otherwise display the value of the dice.

For "See if the user wants to play again": ask "do you want to play again (y or n)?", get the user's input and save it in "again", and if the input is not y or n, display "invalid input".

A method will have one well-defined thing that it does. We will have the ability to give a method any data that it needs to do its job. If appropriate, a method can return the results of its work.

A method declaration looks like this:

```csharp
// method header: return type, name, and parameters
int WiggleYourEars(int parameter1, int parameter2)
{
    // statements
}
```

The first line is the method header. The int before the name is the type of data returned by this method; WiggleYourEars is the method's name; parameter1 and parameter2 are the parameters, each with a data type and a name. The body of the method (the method block) is made up of valid C# statements (providing a service), enclosed in curly braces.

Just as a reminder, Main() is a method which satisfies all the conditions specified earlier.
```csharp
static void Main()  // header
{
                    // block (body)
}
```

In general, if you can find some written and tested code that does what you want, it is better to use that already-existing code than to re-create the code yourself: it saves time, produces fewer errors, and has been tested under all conditions. Most programming languages, including C#, include libraries of pre-written and tested methods that do common programming tasks. In C#, these libraries are in the .NET library, accessed via using statements.

As an example of a method that returns a value, consider the Sqrt method in the Math class. To find the square root of the number 9, we would write:

```csharp
result = Math.Sqrt(9);
```

This is called a method invocation. It can be used anywhere an expression can be used. The Sqrt method belongs to the Math class, and 9 is the method's argument. The argument may be a literal value, a variable, a constant, or an expression. Some methods may take more than one argument; if so, they are separated by commas (a comma-delimited list). The value returned by the function is called its return value. A method can only have one return value.

Other common methods in the Math class:

| name | function (service) | return type |
| --- | --- | --- |
| Pow(double x, double y) | calculates x to the power y | double |
| Abs(double n) | absolute value of n | double |
| Ceiling(double n) | smallest integer >= n | double |
| Floor(double n) | largest integer <= n | double |

The .NET library provides a class that we can use to create a random number generator. To create a random number generator object, we write:

```csharp
Random randoms = new Random();
```

This creates the Random object on the heap, initializes it, and stores a reference to it in randoms. A random number generator generates a pseudo-random integer value between zero and 2,147,483,646. To get a random number, we call the Next() method that is declared as part of the Random class. To get a random number within a specific range, we scale the result.
For example, to get a number between 0 and 2, inclusive:

```csharp
// generates a value up to, but not including, 3 (i.e. 0-2)
int n = randoms.Next(3);
```

To shift the range of the random numbers, for example to get a random number between 1 and 3, use this form of the Next method:

```csharp
// starts at 1; generates values up to, but not including, 4 (i.e. 1-3)
int n = randoms.Next(1, 4);
```

To get a repeatable sequence of pseudo-random numbers, use the same seed when creating the Random object (on the same machine, with the same compiler):

```csharp
Random randoms = new Random(3);
```

Methods that don't return a value are called void methods. Void methods are written as statements; they cannot be used in an expression, as expressions must return a typed value. Void methods can have zero or more parameters.

When designing a method, ask: What job will the method do? What data does it need to do its work? What will the method return?

Here is the activity diagram for the method we need to write to display the output of a roll: if boxcars, display "boxcars"; if snake eyes, display "snake-eyes"; otherwise display the value of the dice. What is its job (the service it provides)? What data does it need? What should it return?

Every method should have a method prologue. The method prologue tells us:

- What the purpose of the method is
- What data the method needs to do its work
- What data the method returns

This method does not return a value, and takes two parameters, the values of the dice that were thrown:

```csharp
// The DisplayResults method
// Purpose: show the results of rolling the dice
// Parameters: two integer values, the dice
// Returns: nothing
static void DisplayResults(int d1, int d2)
{
    Console.Write("You rolled ");
    if (d1 == BOX && d2 == BOX)
        Console.WriteLine("Box Cars");
    else if (d1 == SNAKE && d2 == SNAKE)
        Console.WriteLine("Snake Eyes");
    else
        Console.WriteLine("{0} and {1}", d1, d2);
}
```
Here is the activity diagram for the method we need to see if the user wants to roll again: ask "do you want to play again (y or n)?", get the user's input, and if it is not y or n, display "invalid input" and ask again. What is its job (the service it provides)? What data does it need? What should it return?

```csharp
// The GoAgain method
// Purpose: get and validate the user's input
// Parameters: none
// Returns: the user's choice as a boolean
//          true  - go again
//          false - quit
static bool GoAgain()
{
    const char YES = 'y';
    const char NO = 'n';
    char yn = YES;
    do
    {
        Console.Write("Do you want to roll again (y or n)? ");
        yn = char.Parse(Console.ReadLine());
        yn = char.ToLower(yn);
        if (yn != YES && yn != NO)
            Console.WriteLine("Invalid input.");
    } while (yn != YES && yn != NO);

    if (yn == YES)
        return true;
    else
        return false;
}
```

Now with these methods, our Main() method just looks like this:

```csharp
static void Main(string[] args)
{
    int die1, die2;
    Random dice = new Random();
    do
    {
        die1 = dice.Next(1, BOX + 1);
        die2 = dice.Next(1, BOX + 1);
        DisplayResults(die1, die2);
    } while (GoAgain());
    Console.WriteLine("Thanks for playing ... goodbye");
    Console.ReadLine();
} // end of Main
```

We could combine these into a single method, but we like a method to do one thing.

A related term is storage class, or lifetime, which defines how long a variable exists within a program:

- Automatic variables come into existence when they are declared, and exist until the block in which they are declared is left.
- Static variables exist for the lifetime of the program.
- Class-level variables exist for the lifetime of the program (as do consts at the class level).

Global variables must be declared outside of any method. In C#, they need to be declared within a class as static.
Constants are automatically static. Example:

```csharp
using System;

class Program
{
    static string globalValue = "I was declared outside any method";

    static void Main()
    {
        Console.WriteLine("Entering main( ) ...");
        string localValue = "I was declared in Main( )";
        SomeMethod();
        Console.WriteLine("Local value = {0}", localValue);
        Console.ReadLine();
    } // End Main()

    static void SomeMethod()
    {
        Console.WriteLine("Entering SomeMethod( )...");
        string localValue = "I was declared in SomeMethod( )";
        Console.WriteLine("Global value = {0}", globalValue);
        Console.WriteLine("Local value = {0}", localValue);
    } // End SomeMethod()
} // End class Program
```

The name localValue is used twice. In Main(), the scope of localValue is inside Main(); it is a local variable. localValue is also declared in SomeMethod(), but its scope is just inside that method, and it cannot be seen outside of it. It is a different variable than the one declared in Main().

Any time we use curly braces to delineate a piece of code, that code is called a block. We can declare variables that are local to a block and have block scope. Local variables declared in a nested block are only known to the block that they are declared in. When we declare a variable as part of a loop, for example:

```csharp
for (int j = 0; j < MAX; j++) ...
```

the variable j will have the block of the loop as its scope.

A static variable comes into existence when it is declared and it lives until the program ends. A static variable has class scope; that is, it is visible to all of the methods in the class. Static variables live in the data segment.

The Pseudocode Programming Process (from "Code Complete" by Steve McConnell)

Step One: Before doing any work on the method itself, make sure that the method is really required, and that the job of the method is well defined. Methods should do one thing!

Step Two: Clearly state the problem that the method will solve:
- What does it do?
- What are its inputs?
- What are its outputs?

Step Three: Write a method prologue: the method name, purpose, parameters, and return value.

Step Four: Think about how you will test your method once it is written. Write down some test cases (input and output).

Step Five: Research available code libraries and algorithms... has someone else written the code that you need?

Step Seven: Walk through your pseudocode and see if it makes sense. Does it work? If not, revisit your design.

Step Eight: Write down the method declaration (the first line of the method).

Step Nine: Add your pseudocode to your program as comments.

Step Ten: Fill in the actual code below each set of comments (pseudocode).

Step Eleven: Walk through your code, mentally checking for errors.

Step Twelve: Compile your code and fix syntax errors.

Step Thirteen: Use your test cases to see if your method works correctly.

Exercise: Write a program that converts dollar values into another currency. The program should work as follows:

(1) Print an introduction to the program.
(2) Get a currency conversion factor and currency name from the user.
(3) Get a dollar value.
(4) Calculate and display the value in the new currency.
(5) Ask if the user wants to do another conversion.
(6) If the answer is yes, go back to step 3.
(7) Ask if the user wants to do a different conversion.
(8) If the answer is yes, go back to step 2.

Write a method to do the currency calculation.

Some sample exchange rates:

$1.00 = 0.679459 Euros
$1.00 = 13.3134 Mexican Pesos
$1.00 = 1.04338 Canadian Dollars

Assume that we have used functional decomposition to break this problem up into pieces, and have determined that we need a method that does the actual currency conversion. Use the Pseudocode Programming Process to develop the code for this method.
http://www.slideserve.com/vonda/methods
Learn how to validate objects in a Boolean context without the usual harmful side effects.

In C++, there are a number of ways to provide Boolean tests for classes. Such support is provided either to make usage intuitive, to support generic programming, or both. We shall examine four popular ways of adding support for the popular and idiomatic if (object) {} construct. To conclude, we will discuss a new solution, without the pitfalls and dangers of the other four. Let the games begin.

Some types, for example pointers, allow us to test their validity in Boolean contexts. Any rvalue of arithmetic, enumeration, pointer, or pointer-to-member type can be implicitly converted to an rvalue of type bool. We frequently use this property to select a branch of code to execute, for example when acquiring a resource:

```cpp
if (some_type* p = get_some_type()) {
    // p is valid, use it
} else {
    // p is not valid, take proper action
}
```

Of course, such usage is not only useful for built-in types; any type with an unambiguous meaning of validity could greatly benefit from such a Boolean conversion. The alternative is to use a member function for testing. As an example, consider testing a smart pointer (without an implicit conversion to the contained pointer) for validity:

```cpp
smart_ptr<some_type> p(get_some_type());
if (p.is_valid()) {
    // p is valid, use it
} else {
    // p is not valid, take proper action
}
```

Besides being more verbose, this version differs from the previous one in that the name p needs to be declared outside of the scope in which it is used. This is bad from a maintenance perspective. Also, the name is_valid will probably differ depending on the type of smart pointer in use; it can just as well be is_empty, Empty, Valid, or any other name a creative designer might have thought of when creating it. Finally, even when disregarding the naming issue and the problem with declaration scope, for smart pointers there's the very real requirement to support pointer-like use.
It should typically be possible to convert existing code to make use of smart pointers rather than raw pointers, with a minimum of change to the code base; e.g., code like this should work regardless of pointer smartness:

```cpp
template <typename T>
void some_func(const T& t) {
    if (t)
        t->print();
}
```

Without some conversion to a Boolean-testable type, the above if-statement won't compile for smart pointers. The goal that we set out to accomplish in this article is making that conversion safe. As we shall see, that's a bit harder than one would imagine at first glance.

operator bool

This classical approach has a straightforward implementation. I'll use the same class (Testable) throughout this article, as seen in the following code:

```cpp
// operator bool version
class Testable {
    bool ok_;
public:
    explicit Testable(bool b = true) : ok_(b) {}
    operator bool() const { return ok_; }
};

// operator! version
class Testable {
    bool not_ok_;
public:
    explicit Testable(bool b = true) : not_ok_(!b) {}
    bool operator!() const { return not_ok_; }
};

// operator void* version
class Testable {
    bool ok_;
public:
    explicit Testable(bool b = true) : ok_(b) {}
    operator void*() const { return ok_ == true ? this : 0; }
};

// nested class version
class Testable {
    bool ok_;
public:
    explicit Testable(bool b = true) : ok_(b) {}
    class nested_class {};
    operator const nested_class*() const {
        return ok_ ? reinterpret_cast<const nested_class*>(this) : 0;
    }
};
```

Note the implementation of the conversion function:

```cpp
operator bool() const { return ok_; }
```

Now we can use instances of the class in expressions like this:

```cpp
Testable test;
if (test)
    std::cout << "Yes, test is working!\n";
else
    std::cout << "No, test is not working!\n";
```

That's fine, but there's a nasty caveat, as the conversion function has just told the compiler that it's free to do things behind our backs (lesson 0: never trust a compiler to do your job for you; at least not to do it properly):

```cpp
test << 1;
int i = test;
```

These are both nonsense operations, yet allowed and legal C++ (we also have the issue of overloading to consider, which makes things even worse). So, operator bool is not a very good approach. We're also able to compare any types that utilize this technique with each other, although that rarely makes sense:

```cpp
Testable a;
AnotherTestable b;
if (a == b) { }
if (a < b) { }
```

What else can we do? Well, one improvement is to add another (private) conversion function to an integral type, and thereby disallow the nonsensical operations, even those for equality and ordering. Simply declaring a private conversion function to int does the trick. However, some drawbacks remain, making the solution less than satisfactory. The error messages when a user invokes the ambiguity aren't consistent or readable. Also, these conversion functions may interfere with perfectly valid conversions and overloads. So we must look elsewhere for a clean solution to this problem.

operator!

It's time to move on to safer ground, through operator!. Programmers are already accustomed to using this unary logical negation operator in Boolean contexts, which is a desirable property for intuitive usage. Still, some users might not be ready for what some people call the double-bang trick (see below), which is a requirement for checking the "good state" of such an object.
The implementation is trivial:

```cpp
bool operator!() const { return !ok_; }
```

This is a much better approach: no more implicit conversion or overloading issues to worry about, and two idiomatic ways of testing Testable:

```cpp
Testable test;
Testable test2(false);
if (!!test)
    std::cout << "Yes, test is working!\n";
if (!test2)
    std::cout << "No, test2 is not working!\n";
```

The first version utilizes a useful trick: if (!!test). It's sometimes called the double-bang trick [1], but alas, it is not nearly as elegant or straightforward as if (test). [Editor's note: This is an old C trick used to map non-zero values to the number 1, so you can have numeric integer values map into a binary-valued index (0 or 1) for use with an array of size two.] This is a pity, because if people don't understand how something works, it really doesn't matter whether it's safe or not. It's still a very useful technique, but it will typically be used in library code, where "ordinary" users never see it. Of course, it's still possible to compare different types, just as was the case with the first approach (although the obscure syntax should make it obvious that it rarely makes sense to do so). Are there better ways than this?

The safe bool idiom

```cpp
class safe_bool_base {
protected:
    typedef void (safe_bool_base::*bool_type)() const;
    void this_type_does_not_support_comparisons() const {}

    safe_bool_base() {}
    safe_bool_base(const safe_bool_base&) {}
    safe_bool_base& operator=(const safe_bool_base&) { return *this; }
    ~safe_bool_base() {}
};

// For use when the derived class provides boolean_test() itself
// (no virtual function call).
template <typename T = void>
class safe_bool : public safe_bool_base {
public:
    operator bool_type() const {
        return (static_cast<const T*>(this))->boolean_test()
            ? &safe_bool_base::this_type_does_not_support_comparisons : 0;
    }
protected:
    ~safe_bool() {}
};

// For use as a base with a virtual boolean_test() overridden by
// derived classes.
template <>
class safe_bool<void> : public safe_bool_base {
public:
    operator bool_type() const {
        return boolean_test() == true
            ? &safe_bool_base::this_type_does_not_support_comparisons : 0;
    }
protected:
    virtual bool boolean_test() const = 0;
    virtual ~safe_bool() {}
};

// The source listing breaks off at this operator==; the body below is a
// sketch completing it in the spirit of the idiom: comparing two
// safe_bool-testable objects calls a protected member, which fails to
// compile and so disables the comparison.
template <typename T, typename U>
void operator==(const safe_bool<T>& lhs, const safe_bool<U>& rhs) {
    lhs.this_type_does_not_support_comparisons();
}
```
http://www.artima.com/cppsource/safeboolP.html
- absolute error - The absolute value of the difference between the observed and the correct value. Absolute error is usually less useful than relative error.
- absolute path - A path that points to the same location in the filesystem regardless of where it is evaluated. An absolute path is the equivalent of latitude and longitude in geography. See also: relative path.
- abstract method - In object-oriented programming, a method that is defined but not implemented. Programmers will define an abstract method in a parent class to specify operations that child classes must provide.
- abstract syntax tree (AST) - A deeply nested data structure, or tree, that represents the structure of a program. For example, the AST might have a node representing a `while` loop with one child representing the loop condition and another representing the loop body.
- accidental complexity - The extra (avoidable) complexity introduced by poor design choices. The term is used in contrast with intrinsic complexity.
- accumulator - A variable that collects and/or combines many values. For example, if a program sums the values in an array by adding them all to a variable called `result`, then `result` is the accumulator.
- actual result (of test) - The value generated by running code in a test. If this matches the expected result, the test passes; if the two are different, the test fails.
- Adapter pattern - A design pattern that rearranges parameters, provides extra values, or does other work so that one function can be called by another.
- alias - A second or subsequent reference to the same object. Aliases are useful, but increase the cognitive load on readers who have to remember that all these names refer to the same thing.
- anonymous function - A function that has not been assigned a name. Anonymous functions are usually quite short, and are usually defined where they are used, e.g., as callbacks.
In Python, these are called lambda functions and are created through use of the `lambda` reserved word.
- Application Binary Interface (ABI) - The low-level layout that a piece of software must have to work on a particular kind of machine.
- Application Programming Interface (API) - A set of functions provided by a software library or web service that other software can call.
- argument - A value passed to a function when it is called. See also: parameter.
- ASCII - A standard way to represent the characters commonly used in the Western European languages as 7- or 8-bit integers, now largely superseded by Unicode.
- assembler - A compiler that translates software written in assembly code into machine instructions. See also: disassembler.
- assembly code - A low-level programming language whose statements correspond closely to the actual instruction set of a particular kind of processor.
- assertion - A Boolean expression that must be true at a certain point in a program. Assertions may be built into the language (e.g., Python's `assert` statement) or provided as functions (as with Node's `assert` library).
- associative array - See dictionary.
- asynchronous - Not happening at the same time. In programming, an asynchronous operation is one that runs independently of another, or that starts at one time and ends at another. See also: synchronous.
- attribute - A name-value pair associated with an object, used to store metadata about the object such as an array's dimensions.
- automatic variable - A variable that is automatically given a value in a build rule. For example, Make automatically assigns the name of a rule's target to the automatic variable `$@`. Automatic variables are frequently used when writing pattern rules. See also: Makefile.
- backward-compatible - A property of a system that enables interoperability with an older legacy system, or with input designed for such a system.
- bare object - An object that isn't an instance of any particular class.
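Two of the Python-specific entries above, anonymous functions and assertions, can be illustrated with a short sketch; the names `double` and `words` are invented for the example:

```python
# An anonymous (lambda) function, bound to a name only for demonstration:
# lambda creates a function without a def statement.
double = lambda x: 2 * x

# An assertion: a Boolean expression that must be true at this point.
assert double(21) == 42

# Anonymous functions are often used right where they are defined,
# e.g. as a key function when sorting.
words = ["banana", "fig", "cherry"]
words.sort(key=lambda w: len(w))
print(words)  # shortest word first
```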
- base class - In object-oriented programming, a class from which other classes are derived. See also: child class, derived class, parent class.
- binary - A system which can have one of two possible states, often represented as 0 and 1 or true and false.
- bit - A single binary digit (0 or 1). See also: binary, Boolean.
- bitwise operation - An operation that manipulates individual bits in memory. Common bitwise operations include and, or, not, and xor.
- block comment - A comment that spans multiple lines. Block comments may be marked with special start and end symbols, like /* and */ in C and its descendants, or each line may be prefixed with a marker like #.
- Boolean - Relating to a variable or data type that can have either a logical value of true or false. Named for George Boole, a 19th century mathematician.
- breadth-first - To go through a nested data structure such as a tree by exploring all of one level, then going on to the next level and so on, or to explore a problem by examining the first step of each possible solution, and then trying the next step for each. See also: depth-first.
- breakpoint - An instruction to a debugger telling it to suspend execution whenever a specific point in the program (such as a particular line) is reached. See also: watchpoint.
- bug - A missing or undesirable feature of a piece of software.
- build manager - A program that keeps track of how files depend on one another and runs commands to update any files that are out-of-date. Build managers were invented to compile only those parts of programs that had changed, but are now often used to implement workflows in which plots depend on results files, which in turn depend on raw data files or configuration files. See also: build rule, dependency, Makefile.
- build recipe - The part of a build rule that describes how to update something that has fallen out-of-date.
- build rule - A specification for a build manager that describes how some files depend on others and what to do if those files are out-of-date.
- build target - The file(s) that a build rule will update if they are out-of-date compared to their dependencies. See also: Makefile.
- byte code - A set of instructions designed to be executed efficiently by an interpreter.
- cache - Something that stores copies of data so that future requests for it can be satisfied more quickly. The CPU in a computer uses a hardware cache to hold recently-accessed values; many programs rely on a software cache to reduce network traffic and latency. Figuring out when something in a cache is out-of-date and should be replaced is one of the two hard problems in computer science.
- caching - To save a copy of some data in a local cache to make future access faster.
- call stack - A data structure that stores information about the active subroutines executed.
- callback function - A function A that is passed to another function B so that B can call it at some later point. Callbacks can be used synchronously, as in generic functions like map that invoke a callback function once for each element in a collection, or asynchronously, as in a client that runs a callback when a response is received in answer to a request.
- Cascading Style Sheets (CSS) - A way to control the appearance of HTML. CSS is typically used to specify fonts, colors, and layout.
- catch (an exception) - To handle an error or other unexpected event represented by an exception.
- Chain of Responsibility pattern - A design pattern in which each object either handles a request or passes it on to another object.
- character encoding - A specification of how characters are stored as bytes. The most commonly-used encoding today is UTF-8.
- child (in a tree) - A node in a tree that is below another node (called the parent).
- child class - In object-oriented programming, a class derived from another class (called the parent class).
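To make the callback function entry concrete, here is a minimal synchronous example in Python (the names apply_to_each and double are illustrative, not part of any standard API):

```python
def apply_to_each(values, callback):
    # Invoke the callback once per element, collecting the results.
    return [callback(v) for v in values]

def double(x):
    return x * 2

print(apply_to_each([1, 2, 3], double))  # [2, 4, 6]
```

The caller never invokes double directly; it hands the function over and apply_to_each decides when to call it.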
- circular dependency - A situation in which X depends on Y and Y depends on X, either directly or indirectly. If there is a circular dependency, then the dependency graph is not acyclic.
- class - In object-oriented programming, a structure that combines data and operations (called methods). The program then uses a constructor to create an object with those properties and methods. Programmers generally put generic or reusable behavior in parent classes, and more detailed or specific behavior in child classes.
- client - A program such as a web browser that gets data from a server and displays it to, or interacts with, users. The term is used more generally to refer to any program A that makes requests of another program B. A single program can be both a client and a server.
- closure - A set of variables defined in the same scope whose existence has been preserved after that scope has ended.
- code coverage (in testing) - How much of a library or program is executed when tests run. This is normally reported as a percentage of lines of code.
- cognitive load - The amount of working memory needed to accomplish a set of simultaneous tasks.
- collision - A situation in which a program tries to store two items in the same location in memory. For example, a collision occurs when a hash function generates the same hash code for two different items.
- column-major storage - Storing each column of a two-dimensional array as one block of memory so that elements in the same row are far apart. See also: row-major storage.
- combinatorial explosion - The exponential growth in the size of a problem or the time required to solve it that arises when all possible combinations of a set of items must be searched.
- comma-separated values (CSV) - A text format for tabular data in which each record is one row and fields are separated by commas. There are many minor variations, particularly around quoting of strings.
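A closure can be sketched in Python: the inner function below keeps access to count after make_counter has returned (the names are hypothetical):

```python
def make_counter():
    count = 0
    def increment():
        # `count` lives on in the enclosing scope even after
        # make_counter has returned.
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2
```

Each call to make_counter creates a fresh, independent count, which is what distinguishes a closure from a global variable.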
- command-line argument - A filename or control flag given to a command-line program when it is run.
- command-line interface (CLI) - A user interface that relies solely on text for commands and output, typically running in a shell.
- comment - Text written in a script that is not treated as code to be run, but rather as text that describes what the code is doing. These are usually short notes, often beginning with a # (in many programming languages).
- compile - To translate textual source into another form. Programs in compiled languages are translated into machine instructions for a computer to run, and Markdown is usually translated into HTML for display.
- compiled language - Originally, a language such as C or Fortran that is translated into machine instructions for execution. Languages such as Java are also compiled before execution, but into byte code instead of machine instructions, while interpreted languages like JavaScript are compiled to byte code on the fly.
- compiler - An application that translates programs written in some languages into machine instructions or byte code.
- confirmation bias - The tendency for someone to look for evidence that they are right rather than searching for reasons why they might be wrong.
- console - A computer terminal where a user may enter commands, or a program, such as a shell, that simulates such a device.
- constructor - A function that creates an object of a particular class.
- Coordinated Universal Time (UTC) - The standard time against which all others are defined. UTC is the time at longitude 0°, and is not adjusted for daylight savings. Timestamps are often reported in UTC so that they will be the same no matter what timezone the computer is in.
- corner case - Another name for an edge case.
- coupling - The degree of interaction between two classes, modules, or other software components. If a system's components are loosely coupled, changes to one are unlikely to affect others.
If they are tightly coupled, then any change requires other changes elsewhere, which complicates maintenance and evolution.
- cryptographic hash function - A hash function that produces an apparently-random value for any input.
- current working directory - The folder or directory location in which the program operates. Any action taken by the program occurs relative to this directory.
- cycle (in a graph) - A set of links in a graph that leads from a node back to itself.
- data frame - A two-dimensional data structure for storing tabular data in memory. Rows represent records and columns represent fields.
- data migration - Moving data from one location or format to another. The term refers to translating data from an old format to a newer one.
- Decorator pattern - A design pattern in which a function adds additional features to another function or a class after its initial definition. Decorators are a feature of Python and can be implemented in most other languages as well.
- defensive programming - A set of programming practices that assumes mistakes will happen and either reports or corrects them, such as inserting assertions to report situations that are never supposed to occur.
- dependency - See prerequisite.
- dependency graph - A directed graph showing how things depend on one another, such as the files to be updated by a build manager. If the dependency graph is not acyclic, the dependencies cannot be resolved.
- deprecation - To indicate that while a function, method, or class exists, its use is no longer recommended (for example, because it is going to be phased out in a future release).
- depth-first - A search algorithm that explores one possibility all the way to its conclusion before moving on to the next.
- derived class - In object-oriented programming, a class that is a direct or indirect extension of a base class. See also: child class.
- design by contract - A style of designing software in which functions specify the pre-conditions that must be true in order for them to run and the post-conditions they guarantee will be true when they return. A function can then be replaced by one with weaker pre-conditions (i.e., it accepts a wider set of input) and/or stronger post-conditions (i.e., it produces a smaller range of output) without breaking anything else. See also: Liskov Substitution Principle.
- design pattern - A recurring pattern in software design that is specific enough to be worth naming, but not so specific that a single best implementation can be provided by a library. See also: Iterator pattern, Singleton pattern, Template Method pattern, Visitor pattern.
- destructuring assignment - Unpacking values from data structures and assigning them to multiple variables in a single statement.
- dictionary - A data structure that allows items to be looked up by value, sometimes called an associative array. Dictionaries are often implemented using hash tables.
- directed acyclic graph (DAG) - A directed graph which does not contain any loops (i.e., it is not possible to reach a node from itself by following edges).
- directed graph - A graph whose edges have directions.
- directory - A structure in a filesystem that contains references to other structures, such as files and other directories.
- disassembler - A program that translates machine instructions into assembly code or some other higher-level language. See also: assembler.
- doc comment - A documentation comment ("doc comment" for short) is a specially-formatted comment containing documentation about a piece of code that is embedded in the code itself.
- Document Object Model (DOM) - A standard, in-memory representation of HTML and XML. Each element is stored as a node in a tree with a set of named attributes; contained elements are child nodes.
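Destructuring assignment is easiest to see in a short Python sketch (the values are made up for illustration):

```python
# Unpack a tuple into separate variables in a single statement.
first, last, born = ("Grace", "Hopper", 1906)

# A starred name collects whatever is left over into a list.
head, *rest = [1, 2, 3, 4]

print(first, born)  # Grace 1906
print(rest)         # [2, 3, 4]
```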
- driver - A program that runs other programs, or a function that drives all of the other functions in a program.
- dynamic loading - To import a module into the memory of a program while it is already running. Most interpreted languages use dynamic loading, and provide tools so that programs can find and load modules dynamically to configure themselves.
- dynamic lookup - To find a function or a property of an object by name while a program is running. For example, instead of getting a specific property of an object using obj.name, a program might use obj[someVariable], where someVariable could hold "name" or some other property name.
- dynamic scoping - To find the value of a variable by looking at what is on the call stack at the moment the lookup is done. Almost all programming languages use lexical scoping instead, since it is more predictable.
- eager matching - Matching as much as possible, as early as possible.
- easy mode - A term borrowed from gaming meaning to do something with obstacles or difficulties simplified or removed, often for practice purposes.
- edge - A connection between two nodes in a graph. An edge may have data associated with it, such as a name or distance.
- edge case - A problem that only comes up under unusual circumstances or when a system is pushed to its limits; also sometimes called a corner case. Programs intended for widespread use have to handle edge cases, but doing so can make them much more complicated.
- element - A named component in an HTML or XML document. Elements are usually written <name>…</name>, where "…" represents the content of the element. Elements often have attributes.
- encapsulate - To store data inside some kind of structure so that it is only accessible through that structure.
- entry point - Where a program begins executing.
- environment - A structure that stores a set of variable names and the values they refer to.
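In Python, the dynamic lookup described above is usually done with getattr; this sketch uses a hypothetical Config class purely for illustration:

```python
class Config:
    retries = 3
    timeout = 30

field = "retries"
# getattr looks the attribute up by a name computed at run time,
# instead of the name being fixed in the source as Config.retries.
print(getattr(Config, field))  # 3
```

The same idea applies to dictionaries, where obj[key] looks up a key held in a variable.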
- error (in a test) - Signalled when something goes wrong in a unit test itself rather than in the system being tested. In this case, we do not know anything about the correctness of the system.
- error handling - What a program does to detect and correct for errors. Examples include printing a message and using a default configuration if the user-specified configuration cannot be found.
- event loop - A mechanism for managing concurrent activities in a program. Tasks are represented as items in a queue; the event loop repeatedly takes an item from the front of the queue and runs it, adding any other tasks it generates to the back of the queue to run later.
- exception - An object that stores information about an error or other unusual event in a program. One part of a program will create and raise an exception to signal that something unexpected has happened; another part will catch it.
- exception handler - A piece of code that deals with an exception after it is caught, e.g., by recording a message, retrying the operation that failed, or performing an alternate operation.
- expected result (of test) - The value that a piece of software is supposed to produce when tested in a certain way, or the state in which it is supposed to leave the system. See also: actual result (of test).
- exploratory programming - A software development methodology in which requirements emerge or change as the software is being written, often in response to results from early runs.
- export - To make something visible outside a module so that other parts of a program can import it. In most languages a module must export things explicitly in order to avoid name collisions.
- fail (a test) - A test fails if the actual result does not match the expected result. See also: pass (a test).
- feature (in software) - Some aspect of software that was deliberately designed or built. A bug is an undesired feature.
- field - A component of a record containing a single value.
Every record in a database table has the same fields.
- filename extension - The last part of a filename, usually following the '.' symbol. Filename extensions are commonly used to indicate the type of content in the file, though there is no guarantee that this is correct.
- filesystem - The part of the operating system that manages how files are stored and retrieved. Also used to refer to all of those files and directories or the specific way they are stored (as in "the Unix filesystem").
- filter - As a verb, to choose a set of records (i.e., rows of a table) based on the values they contain. As a noun, a command-line program that reads lines of text from files or standard input, performs some operation on them (such as filtering), and writes to a file or stdout.
- finite state machine (FSM) - A theoretical model of computing consisting of a directed graph whose nodes represent the states of the computation and whose arcs show how to move from one state to another. Every regular expression corresponds to a finite state machine.
- fixed-width (of strings) - A set of character strings that have the same length. Databases often use fixed-width strings to make storage and access more efficient; short strings are padded up to the required length and long strings are truncated.
- fixture - The thing on which a test is run, such as the parameters to the function being tested or the file being processed.
- fluent interface - A style of object-oriented programming in which methods return objects so that other methods can immediately be called.
- folder - Another term for a directory.
- formal verification - Proving the correctness of an algorithm, program, or piece of hardware using mathematical techniques.
- garbage collection - The process of identifying memory that has been allocated but is no longer in use and reclaiming it to be re-used.
- generator function - A function whose state is automatically saved when it returns a value so that execution can be restarted from that point the next time it is called. One use of generator functions is to produce streams of values that can be processed by for loops. See also: Iterator pattern.
- generic function - A collection of functions with similar purpose, each operating on a different class of data.
- global variable - A variable defined outside any particular function or package namespace, which is therefore visible to all functions. See also: local variable.
- globbing - To specify a set of filenames using a simplified form of regular expressions, such as *.dat to mean "all files whose names end in .dat". The name is derived from "global".
- graph - A plot or a chart that displays data, or a data structure in which nodes are connected to one another by edges. See also: tree.
- greedy algorithm - An algorithm that consumes as much input as possible, as early as possible.
- handler - A callback function responsible for handling some particular event, such as the user clicking on a button or new data being received from a file.
- hash code - A value generated by a hash function. Good hash codes have the same properties as random numbers in order to reduce the frequency of collisions.
- hash function - A function that turns arbitrary data into a bit array, or a key, of a fixed size. Hash functions are used to determine where data should be stored in a hash table.
- hash table - A data structure that calculates a pseudo-random key (location) for each value passed to it and stores the value in that location. Hash tables enable fast lookup for arbitrary data, at the cost of extra memory: a hash table must always be larger than the amount of information it needs to store in order to reduce the chance of collisions, which occur when the hash function returns the same key for two different values.
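A generator function in Python marks its resumption points with yield; this countdown example is a minimal sketch:

```python
def countdown(n):
    # Execution pauses at each yield and resumes here the next
    # time a value is requested (e.g., by a for loop or list()).
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```

Calling countdown(3) does not run the body at all; it returns a generator that produces values one at a time on demand.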
- header file - In C and C++, a file that defines constants and function signatures but does not contain runnable code. Header files tell the including file what is defined in other files so that the compiler can generate correct code.
- heterogeneous - Containing mixed data types. For example, an array in JavaScript can contain a mix of numbers, character strings, and values of other types. See also: homogeneous.
- heuristic - A rule or guideline that isn't guaranteed to produce the desired result, but usually does.
- homogeneous - Containing a single data type. For example, a vector must be homogeneous: its values must all be numeric, logical, etc. See also: heterogeneous.
- HTTP request - A message sent from a client to a server using the HTTP protocol asking for data. A request usually asks for a web page, image, or other data. See also: HTTP response.
- HTTP response - A reply sent from a server to a client using the HTTP protocol in response to a request. The response usually contains a web page, image, or data.
- HyperText Markup Language (HTML) - The standard markup language used for web pages. HTML is represented in memory using the DOM (Document Object Model). See also: XML.
- HyperText Transfer Protocol (HTTP) - The standard protocol for data transfer on the World-Wide Web. HTTP defines the format of requests and responses, the meanings of standard error codes, and other features.
- idiomatic - To use a language in the same way as a fluent or native speaker. Programs are called idiomatic if they use the language the way that proficient programmers use it.
- immediately-invoked function expression (IIFE) - A function that is invoked once at the point where it is defined. IIFEs are typically used to create a scope to hide some function or variable definitions.
- immutable - Data that cannot be changed after being created.
Immutable data is easier to think about, particularly if data structures are shared between several tasks, but may result in higher memory requirements.
- import - To bring things from a module into a program for use. In most languages a program can only import things that the module explicitly exports.
- index (in a database) - An auxiliary data structure in a database used to speed up search for some entries. An index increases memory and disk requirements but reduces search time.
- inner function - A function defined inside another (outer) function. Creating and returning inner functions is a way to create closures.
- instance - An object of a particular class.
- instruction pointer - A special register in a processor that stores the address of the next instruction to execute.
- instruction set - The basic operations that a particular processor can execute directly.
- interpreted language - A high-level language that is not executed directly by the computer, but instead is run by an interpreter that translates program instructions into machine commands on the fly.
- interpreter - A program whose job it is to run programs written in a high-level interpreted language. Interpreters can run interactively, but may also execute commands saved in a file.
- intrinsic complexity - The unavoidable complexity inherent in a problem that any solution must deal with. The term is used in contrast with accidental complexity.
- introspection - Having a program examine itself as it is running; common examples are to determine the specific class of a generic object or to get the fields of an object when they are not known in advance.
- ISO date format - An international standard for formatting dates. While the full standard is complex, the most common form is YYYY-MM-DD, i.e., a four-digit year, a two-digit month, and a two-digit day, separated by hyphens.
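Python's built-ins type, vars, and hasattr cover the common introspection cases mentioned above; the Particle class here is a hypothetical example:

```python
class Particle:
    def __init__(self, mass):
        self.mass = mass

p = Particle(1.5)
# Ask the object what class it is and what fields it has,
# without knowing either in advance.
print(type(p).__name__)    # Particle
print(vars(p))             # {'mass': 1.5}
print(hasattr(p, "mass"))  # True
```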
- Iterator pattern - A design pattern in which a temporary object or generator function produces each value from a collection in turn for processing. This pattern hides the differences between different kinds of data structures so that everything can be processed using loops. See also: Visitor pattern.
- JavaScript Object Notation (JSON) - A way to represent data by combining basic values like numbers and character strings in lists and key/value structures. The acronym stands for "JavaScript Object Notation"; unlike better-defined standards like XML, it is unencumbered by a syntax for comments or ways to define a schema. See also: YAML.
- join - An operation that combines two tables, typically by matching keys from one with keys from another.
- key - A field or combination of fields whose value(s) uniquely identify a record within a table or dataset. Keys are often used to select specific records and in joins.
- label (address in memory) - A human-readable name given to a particular location in memory when writing programs in assembly code.
- layout engine - A piece of software that decides where to place text, images, and other elements on a page.
- lazy matching - Matching as little as possible while still finding a valid match. See also: eager matching.
- Least Recently Used cache (LRU cache) - A cache that discards items that have not been used recently in order to limit memory requirements.
- lexical scoping - To look up the value associated with a name according to the textual structure of a program. Most programming languages use lexical scoping instead of dynamic scoping because the latter is less predictable.
- library - An installable collection of software, also often called a module or package.
- lifecycle - The steps that something is allowed or required to go through. The lifecycle of an object runs from its construction through the operations it can or must perform before it is destroyed.
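Python's standard library provides an LRU cache as a decorator, which also ties together the caching and Decorator pattern entries; memoizing Fibonacci numbers is the classic demonstration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Cached results turn this exponential recursion into a linear one:
    # each fib(k) is computed once and then looked up.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

With a bounded maxsize, the least recently used entries are discarded first when the cache fills, which is exactly the eviction policy the glossary entry describes.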
- line comment - A comment in a program that spans part of a single line, as opposed to a block comment that may span multiple lines.
- link (a program) - To combine separately compiled modules into a single runnable program.
- linter - A program that checks for common problems in software, such as violations of indentation rules or variable naming conventions. The name comes from the first tool of its kind, called lint.
- Liskov Substitution Principle - A design rule stating that it should be possible to replace objects in a program with objects of derived classes without breaking the program. Design by contract is intended to enforce this rule.
- list - A vector that can contain values of many different (heterogeneous) types.
- literal - A representation of a fixed value in a program, such as the digits 123 for the number 123 or the characters "abc" for the string containing those three letters.
- literate programming - A programming paradigm that mixes prose and code so that explanations and instructions are side by side.
- loader - A function whose job is to read files containing runnable code into memory and make that code available to the calling program.
- local variable - A variable defined inside a function which is only visible within that function. See also: closure, global variable.
- log message - A status report or error message written to a file as a program runs.
- loop body - The statement or statements executed by a loop.
- loosely coupled - Components in a software system are said to be loosely coupled if they are relatively independent of one another, i.e., if any one of them can be changed or replaced without others having to be altered as well. See also: tightly coupled.
- macro - Originally short for "macro-instruction", an instruction to translate some of the text in a program into other text before using it.
- Makefile - A configuration file for the original build manager.
- manifest - A list that specifies the precise versions of a complete set of libraries or other software components.
- Markdown - A markup language with a simple syntax intended as a replacement for HTML.
- markup language - A set of rules for annotating text to define its meaning or how it should be displayed. The markup is usually not displayed, but instead controls how the underlying text is interpreted or shown. Markdown and HTML are widely-used markup languages for web pages. See also: XML.
- method - An implementation of a generic function that handles objects of a specific class.
- method chaining - A style of object-oriented programming in which an object's methods return that object as their result so that another method can immediately be called, as in obj.a().b().c().
- mock object - A simplified replacement for part of a program whose behavior is easy to control and predict. Mock objects are used in unit tests to simulate databases, web services, and other complex systems.
- module - A reusable software package, also often called a library.
- module bundler - A program that finds all the dependencies of a set of source files and combines them into a single loadable file.
- multi-threaded - Capable of performing several operations simultaneously. Multi-threaded programs are usually more efficient than single-threaded ones, but also harder to understand and debug.
- name collision - The ambiguity that arises when two or more things in a program that have the same name are active at the same time. Most languages use namespaces to prevent such collisions. See also: call stack.
- namespace - A collection of names in a program that exists in isolation from other namespaces. Each function, object, class, or module in a program typically has its own namespace so that references to "X" in one part of a program do not accidentally refer to something called "X" in another part of the program. Scope is a distinct, but related, concept.
See also: name collision, scope.
- nested function - A function that is defined inside another function.
- node - An element of a graph that is connected to other nodes by edges. Nodes typically have data associated with them, such as names or weights.
- non-blocking execution - To allow a program to continue running while an operation is in progress. For example, many systems support non-blocking execution for file I/O so that the program can continue doing work while it waits for data to be read from or written to the filesystem (which is typically much slower than the CPU).
- object - In object-oriented programming, a structure that contains the data for a specific instance of a class. The operations the object is capable of are defined by the class's methods.
- object-oriented programming (OOP) - A style of programming in which functions and data are bound together in objects that only interact with each other through well-defined interfaces.
- off-by-one error - A common error in programming in which the program refers to element i of a structure when it should refer to element i-1 or i+1, or processes N elements when it should process N-1 or N+1.
- op code - The numerical code for a particular instruction that a processor can execute.
- Open-Closed Principle - A design rule stating that software should be open for extension but closed for modification, i.e., it should be possible to extend functionality without having to rewrite existing code.
- operating system - A program that provides a standard interface to whatever hardware it is running on. Theoretically, any program that only interacts with the operating system should run on any computer that operating system runs on.
- package - A collection of code, data, and documentation that can be distributed and re-used. Also referred to in some languages as a library or module.
- pad (a string) - To add extra characters to a string to make it a required length.
- parameter - A variable specified in a function definition that is assigned a value when the function is called. See also: argument.
- parent (in a tree) - A node in a tree that is above another node (called a child). Every node in a tree except the root node has a single parent.
- parent class - In object-oriented programming, the class from which a subclass (called the child class) is derived.
- parser - A piece of software that translates a textual representation of something into a data structure. For example, a YAML parser reads indented text and produces nested lists and objects.
- pass (a test) - A test passes if the actual result matches the expected result. See also: fail (a test).
- patch - A single file containing a set of changes to a set of files, separated by markers that indicate where each individual change should be applied.
- path (in filesystem) - A string that specifies a location in a filesystem. In Unix, the directories in a path are joined using /. See also: absolute path, relative path.
- pattern rule - A generic build rule that describes how to update any file whose name matches a pattern. Pattern rules often use automatic variables to represent the actual filenames.
- pipe - To use the output of one computation as the input for the next, or the connection between the two computations responsible for the data transfer. Pipes were popularized by the Unix shell, and are now used in many different programming languages and systems.
- pipe (in the Unix shell) - The | used to make the output of one command the input of the next.
- plugin architecture - A style of application design in which the main program loads and runs small independent modules that do the bulk of the work.
- polymorphism - Having many different implementations of the same interface. If a set of functions or objects are polymorphic, they can be called interchangeably.
- post-condition - Something that is guaranteed to be true after a function runs successfully.
Post-conditions are often expressed as assertions that are guaranteed to be true of a function's results. See also: design by contract, pre-condition. - pre-condition - Something that must be true before a function runs in order for it to work correctly. Pre-conditions are often expressed as assertions that must be true of a function's inputs in order for it to run successfully. See also: design by contract, post-condition. - precedence - The priority of an operation. For example, multiplication has a higher precedence than addition, so a+b*c is read as "the sum of a with the product of b and c". - prerequisite - Something that a build target depends on. See also: dependency. - process - An operating system's representation of a running program. A process typically has some memory, the identity of the user who is running it, and a set of connections to open files. - promise - A way to represent the result of a delayed or asynchronous computation. A promise is a placeholder for a value that will eventually be computed; any attempt to read the value before it is available blocks, while any such attempt after the computation finishes acts like a normal read. See also: promisification. - promisification - In JavaScript, the act of wrapping a callback function in a promise for uniform asynchronous execution. - protocol - Any standard specifying how two pieces of software interact. A network protocol such as HTTP defines the messages that clients and servers exchange on the World-Wide Web; object-oriented programs often define protocols for interactions between objects of different classes. - prune - To remove branches and nodes from a tree, or to rule out partially-complete solutions when searching for an overall solution in order to reduce work. - pseudo-random number - A value generated in a repeatable way that resembles the true randomness of the universe well enough to fool observers.
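The pseudo-random number entry above hinges on repeatability. A minimal sketch (the function name and the classic linear congruential constants are my own choices, not from any particular library) shows how a seed makes the sequence reproducible:

```javascript
// A tiny linear congruential generator: the same seed always
// produces the same sequence, which is what "pseudo-random" means.
function makePrng(seed) {
  let state = seed >>> 0; // keep the state as an unsigned 32-bit value
  return () => {
    state = (state * 1664525 + 1013904223) >>> 0; // classic LCG step
    return state / 2 ** 32; // scale into [0, 1)
  };
}
```

Two generators built with the same seed yield identical streams, which is why seeded generators are useful for reproducible simulations and tests.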
- pseudo-random number generator (PRNG) - A function that can generate pseudo-random numbers. See also: seed. - query selector - A pattern that specifies a set of DOM nodes. Query selectors are used in CSS to specify the elements that rules apply to, or by JavaScript programs to manipulate web pages. - query string - The portion of a URL after the question mark ? that specifies extra parameters for the HTTP request as name-value pairs. - race condition - A situation in which a result depends on the order in which two or more concurrent operations are carried out. - raise (an exception) - To signal that something unexpected or unusual has happened in a program by creating an exception and handing it to the error-handling system, which then tries to find a point in the program that will catch it. See also: throw (exception). - read-eval-print loop (REPL) - An interactive program that reads a command typed in by a user, executes it, prints the result, and then waits patiently for the next command. REPLs are often used to explore new ideas, or for debugging. - record - A group of related values that are stored together. A record may be represented as a tuple or as a row in a table; in the latter case, every record in the table has the same fields. - register - A small piece of memory (typically one word long) built into a processor that operations can refer to directly. - regular expression - A pattern for matching text, written as text itself. Regular expressions are sometimes called "regexp", "regex", or "RE", and are powerful tools for working with text. - relational database - A database that organizes information into tables, each of which has a fixed set of named fields (shown as columns) and a variable number of records (shown as rows). See also: SQL. - relative error - The absolute value of the difference between the actual and correct value divided by the correct value. For example, if the actual value is 9 and the correct value is 10, the relative error is 0.1.
Relative error is usually more useful than absolute error. - relative path - A path that is interpreted relative to some other location, such as the current working directory. A relative path is the equivalent of giving directions using terms like "straight" and "left". See also: absolute path. - root (in a tree) - The node in a tree of which all other nodes are direct or indirect children, or equivalently the only node in the tree that has no parent. - row-major storage - Storing each row of a two-dimensional array as one block of memory so that elements in the same column are far apart. See also: column-major storage. - runnable documentation - Statements about code that can be executed to check their correctness, such as assertions or type declarations. - sandbox - A testing environment that is separate from the production system, or an environment that is only allowed to perform a restricted set of operations for security reasons. - SAT solver - A library or application that determines whether there is an assignment of true and false to a set of Boolean variables that makes an expression true (i.e., that satisfies the expression). - schema - A specification of the format of a dataset, including the name, format, and content of each table. - scope - The portion of a program within which a definition can be seen and used. See also: closure, global variable, local variable, namespace. - scope creep - Slow but steady increase in a project's goals after the project starts. - scoring function - A function that measures or estimates how good a solution to a problem is. - search path - The list of directories that a program searches to find something. For example, the Unix shell uses the search path stored in the PATH variable when trying to find a program whose name it has been given. - seed - A value used to initialize a pseudo-random number generator. - semantic versioning - A standard for identifying software releases.
In the version identifier major.minor.patch, major changes when a new version of software is incompatible with old versions, minor changes when new features are added to an existing version, and patch changes when small bugs are fixed. - server - Typically, a program such as a database manager or web server that provides data to a client upon request. - SHA-1 hash - A cryptographic hash function that produces a 160-bit output. - shell - A command-line interface that allows a user to interact with the operating system, such as Bash (for Unix and MacOS) or PowerShell (for Windows). - shell variable - A variable set and used in the Unix shell. Commonly-used shell variables include HOME (the user's home directory) and PATH (their search path). - side effect - A change made by a function while it runs that is visible after the function finishes, such as modifying a global variable or writing to a file. Side effects make programs harder for people to understand, since the effects are not necessarily clear at the point in the program where the function is called. - signature - The set of parameters (with types or meaning) that characterize the calling interface of a function or set of functions. Two functions with the same signature can be called interchangeably. - single-threaded - A model of program execution in which only one thing can happen at a time. Single-threaded execution is easier for people to understand, but less efficient than multi-threaded execution. - singleton - A set with only one element, or a class with only one instance. See also: Singleton pattern. - Singleton pattern - A design pattern that creates a singleton object to manage some resource or service, such as a database or cache. In object-oriented programming, the pattern is usually implemented by hiding the constructor of the class in some way so that it can only be called once. - slug - An abbreviated portion of a page's URL that uniquely identifies it. For example, in a URL ending in /post-name/, the slug is post-name.
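The semantic versioning entry above implies a numeric ordering of major.minor.patch identifiers. This comparator is my own sketch, not taken from any semver library:

```javascript
// Compare two major.minor.patch identifiers part by part.
// Returns -1, 0, or 1, so it can be used as a sort comparator.
function compareSemver(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i += 1) {
    if (pa[i] !== pb[i]) return pa[i] < pb[i] ? -1 : 1;
  }
  return 0;
}
```

The numeric comparison matters: a plain string comparison would wrongly put "10.0.0" before "2.0.0".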
- source map - A table used to translate a piece of code back to the lines in the original source. - sparse matrix - A matrix in which most of the values are zero (or some other value). Rather than storing many copies of the same values, programs will often use a special data structure that only stores the "interesting" values. - SQL - The language used for writing queries for a relational database. The term was originally an acronym for Structured Query Language. - stack frame - A section of the call stack that records details of a single call to a specific function. - stale (in build) - To be out-of-date compared to a prerequisite. A build manager's job is to find and update things that are stale. - standard error - A predefined communication channel for a process typically used to report errors. See also: standard input, standard output. - standard input - A predefined communication channel for a process, typically used to read input from the keyboard or from the previous process in a pipe. See also: standard error, standard output. - standard output - A predefined communication channel for a process, typically used to send output to the screen or to the next process in a pipe. See also: standard error, standard input. - static site generator - A software tool that creates HTML pages from templates and content. - stream - A sequential flow of data, such as the bits arriving across a network connection or the bytes read from a file. - streaming API - An API that processes data in chunks rather than needing to have all of it in memory at once. Streaming APIs usually require handlers for events such as "start of data", "next block", and "end of data". - string - A block of text in a program. The term is short for "character string". - string interpolation - The process of inserting text corresponding to specified values into a string, usually to make output human-readable. - synchronous - To happen at the same time. 
In programming, synchronous operations are ones that have to run simultaneously, or complete at the same time. See also: asynchronous. - tab completion - A technique implemented by most REPLs, shells, and programming editors that completes a command, variable name, filename, or other text when the TAB key is pressed. - table - A set of records in a relational database or data frame. - tagged data - A technique for storing data in a two-part structure, where one part identifies the type and the other part stores the bits making up the value. - Template Method pattern - A design pattern in which a parent class defines an overall sequence of operations by calling abstract methods that child classes must then implement. Each child class then behaves in the same general way, but implements the steps differently. - test harness - A program written to test some other program or set of functions, typically to measure their performance. - test runner - A program that finds and runs software tests and reports their results. - test subject - The thing being tested, sometimes also called the system under test (SUT). - test-driven development (TDD) - A programming practice in which tests are written before a new feature is added or a bug is fixed in order to clarify the goal. - throw (exception) - Another term for raising an exception. - tightly coupled - Components in a software system are said to be tightly coupled if they depend on each other's internals, so that if one is altered then others have to be altered as well. See also: loosely coupled. - Time of check/time of use (ToCToU) - A race condition in which a process checks the state of something and then operates on it, but some other process might alter that state between the check and the operation. - timestamp - A digital identifier showing the time at which something was created or accessed. Timestamps should use ISO date format for portability. 
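The timestamp entry above recommends ISO date format for portability; in JavaScript that is a single built-in call (the wrapper name isoTimestamp is my own):

```javascript
// Date.prototype.toISOString always yields the same portable layout,
// YYYY-MM-DDTHH:MM:SS.mmmZ, expressed in UTC.
function isoTimestamp(date = new Date()) {
  return date.toISOString();
}

console.log(isoTimestamp(new Date(0))); // "1970-01-01T00:00:00.000Z"
```

Because the format sorts lexicographically in chronological order, ISO timestamps also make good filename prefixes and log keys.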
- token - An indivisible unit of text for a parser, such as a variable name or a number. Exactly what constitutes a token depends on the language. - topological order - Any ordering of the nodes in a graph that respects the direction of its edges, i.e., if there is an edge from node A to node B, A comes before B in the ordering. There may be many topological orderings of a particular graph. - transitive closure - The set of all nodes in a graph that are reachable from a starting node, either directly or indirectly. - tree - A graph in which every node except the root has exactly one parent. - tuple - A value that has a fixed number of parts, such as the three color components of a red-green-blue color specification. - Turing Machine - A theoretical model of computation that manipulates symbols on an infinite tape according to a fixed table of rules. Any computation that can be expressed as an algorithm can be done by a Turing Machine. - two hard problems in computer science - Refers to a quote by Phil Karlton: "There are only two hard problems in computer science—cache invalidation and naming things." Many variations add a third problem as a joke, such as off-by-one errors. - type declaration - A statement in a program that a variable or value has a particular data type. Languages like Java require type declarations for all variables; they are optional in TypeScript and Python, and not allowed in pure JavaScript. - Unicode - A standard that defines numeric codes for many thousands of characters and symbols. Unicode does not define how those numbers are stored; that is done by standards like UTF-8. - Uniform Resource Locator (URL) - A unique address on the World-Wide Web. URLs originally identified web pages, but may also represent datasets or database queries, particularly if they include a query string. - unit test - A test that exercises one function or feature of a piece of software and produces pass, fail, or error. 
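The unit test entry above lists three outcomes: pass, fail, or error. A toy test runner (entirely my own sketch) makes the distinction explicit:

```javascript
// Run each zero-argument test function and tally the three outcomes:
// a truthy return is a pass, a falsy return is a fail,
// and a thrown exception is an error.
function runTests(tests) {
  const results = { pass: 0, fail: 0, error: 0 };
  for (const test of tests) {
    try {
      results[test() ? "pass" : "fail"] += 1;
    } catch {
      results.error += 1;
    }
  }
  return results;
}
```

For example, runTests([() => 1 + 1 === 2, () => 1 + 1 === 3, () => { throw new Error("boom"); }]) tallies one of each outcome.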
- UTF-8 - A way to store the numeric codes representing Unicode characters in memory that is backward-compatible with the older ASCII standard. - vector - A sequence of values, usually of homogeneous type. - version control system - A system for managing changes made to software during its development. - virtual machine - A program that pretends to be a computer. This may seem a bit redundant, but VMs are quick to create and start up, and changes made inside the virtual machine are contained within that VM so we can install new packages or run a completely different operating system without affecting the underlying computer. - Visitor pattern - A design pattern in which the operation to be done is taken to each element of a data structure in turn. It is usually implemented by having a generator "visitor" that knows how to reach the structure's elements, which is given a function or method to call for each in turn, and that carries out the specific operation. See also: Iterator pattern. - walk (a tree) - To visit each node in a tree in some order, typically depth-first or breadth-first. - watchpoint - An instruction for a debugger telling it to suspend execution whenever the value of a variable (or more generally an expression) changes. See also: breakpoint. - well formed - A piece of text that obeys the rules of a formal grammar is said to be well formed. - word (of memory) - The unit of memory that a particular processor most naturally works with. While a byte is a fixed size (8 bits), a word may be 16, 32, or 64 bits long depending on the processor. - XML - A set of rules for defining HTML-like tags and using them to format documents (typically data). XML was popular in the early 2000s, but its complexity led many programmers to adopt JSON instead. - YAML - Short for "YAML Ain't Markup Language", a way to represent nested data using indentation rather than the parentheses and commas of JSON.
YAML is often used in configuration files and to define parameters for various flavors of Markdown documents. - z-buffering - A drawing method that keeps track of the depth of what lies "under" each pixel so that it displays whatever is nearest to the observer.
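To close, the topological order entry above can be illustrated with a short sketch of Kahn's algorithm (the function and variable names are my own):

```javascript
// Kahn's algorithm: repeatedly emit a node with no remaining incoming
// edges, so every edge's source appears before its destination.
function topoSort(nodes, edges) {
  const incoming = new Map(nodes.map((n) => [n, 0]));
  for (const [, to] of edges) incoming.set(to, incoming.get(to) + 1);
  const order = [];
  const ready = nodes.filter((n) => incoming.get(n) === 0);
  while (ready.length > 0) {
    const node = ready.shift();
    order.push(node);
    for (const [from, to] of edges) {
      if (from === node) {
        incoming.set(to, incoming.get(to) - 1);
        if (incoming.get(to) === 0) ready.push(to);
      }
    }
  }
  return order; // shorter than nodes.length if the graph has a cycle
}
```

A graph generally has many valid topological orderings; this sketch returns whichever one its ready-queue discipline happens to produce.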
https://stjs.tech/glossary/
Executive Summary

India is tipped to be a rapidly growing economy heading towards occupying 3rd place in the world. However, it needs to be remembered that the growth of the economy is dependent upon a crucial assumption: that the requisite amount of energy will be available at a price that does not adversely affect the competitiveness of the nation's industry and service sector. Oil prices have hovered around $50/barrel for long enough to dispel any optimism about prices coming down to lower levels in the foreseeable future. The import bill for crude oil has jumped to almost double what it was a year ago. Some analysts predict that the oil peak is nearby and oil prices could touch even the $100/barrel mark. Even if one were not to be carried away by these doomsday predictions, we need to face the facts of life with open eyes and guide the country's energy policy to ensure that happenings in the world oil market do not succeed in derailing the juggernaut of the Indian economy. Geopolitical maneuvers to tie up higher quantities of oil and gas, acquisition of oil and gas equity abroad and intensive exploration of oil and gas at home are indeed welcome steps, but not sufficient to ensure energy security and economic robustness for India. A premium needs to be attached to indigenous exploitation of energy sources - not just oil and gas. India's energy use is mostly based on fossil fuels. Although the country has significant coal and hydro resource potential, it is relatively poor in oil and gas resources. As a result it has to depend on imports to meet its energy supplies. Coal is the major fossil fuel in India and continues to play a pivotal role in the energy sector. Present use of coal is inefficient and polluting. Hence there is a need for technologies to utilise coal efficiently and cleanly, substitute the lesser reserves of oil and gas with abundantly available coal, and prolong the reserves of all the fossil fuels for use of future generations.
These requirements can be met through application of coal gasification technology and following the principle of sustainable development. While thinking about the energy strategy of India, the role of coal cannot be wished away, however inconvenient it may be in terms of utilisation efficiency and environment. We have to devise technological solutions to make the most out of the indigenous resource. Here we shall talk about technologies that can not only augment indigenous energy resources, but also extract energy from coal in forms that can replace imported oil and gas products. This would call for a major change in the mindset of energy managers, so that companies previously seen as coal producers are now seen as energy producers. The prime technologies for achieving the objectives of replacement of liquid fuels and substitution of natural gas would be:

- CBM (Coal Bed Methane) extraction
- CMM (Coal Mine Methane) / AMM (Abandoned Mine Methane) / VAM (Ventilation Air Methane) capture and utilisation
- UCG (Underground Coal Gasification)
- IGCC (Integrated Gasification Combined Cycle power generation)
- CTL (Coal to Liquid)

CBM/CMM/AMM/VAM will provide gas in areas which are far away from gas supply sources and thus expand the usage of clean fuel. Mine gas utilisation schemes, therefore, significantly benefit the environment by:

- Reducing greenhouse gas emissions, as the carbon dioxide produced by combustion is more than 20 times less harmful to the atmosphere than methane
- Displacing coal use in environmentally sensitive areas

UCG has the potential to virtually eliminate methane emissions to the atmosphere from coal seams whilst allowing the energy stored in the coal to be recovered. UCG will help unearth the unreachable indigenous resource and substitute LNG imports. IGCC will provide a clean coal technology for power generation. CTL will provide a way of substituting liquid fuels which are imported either in the form of crude or the products themselves.
Especially if UCG and GTL are combined, it can provide energy security to India on a sustained basis by satisfying the largest user of energy - the transport sector. This, along with biodiesel, has the potential of lending total energy independence to India. Natural gas is being used for power generation in the country, and rightly so for accelerated growth of the power sector. There are plans for import of liquefied natural gas (LNG) and naphtha etc. for the power sector, mostly by independent power producers. This could be allowed as a short term measure as dictated by the market forces. But as a medium to long term measure, the natural gas and liquid fuels need to be replaced by coal gas. The medium to long term targets can be:

i) Replacing the natural gas with coal gas in the existing combined cycle power plants
ii) Establishment of advanced power generation technologies based on coal gas, i.e., fuel cells
iii) Commercial plants for coal to oil and coal refinery
iv) Self reliance and security in the energy sector
v) Substitution of exhaustible with renewable energy sources

Integrated Development of Coal Reserves

Ongoing mining activities:
1. Continue mining in zones already opened for mining. Recover methane gas from ventilation air and generate power.
2. Coal mine methane recovery.
3. For the zones which are yet to be opened, plan for full CBM recovery first and then open the zones for mining.
4. UCG production from "unmineable zones".

New mines yet to be opened:
1. First complete full CBM recovery.
2. Open mines for coal production.
3. UCG production from zones below mining range.

Abandoned mines:
1. Recover CBM from pillars and compartments by drilling wells.
2. If coal seams are available below the "mineable zone", evaluate possibilities of UCG.

Coal seams below "mineable zone":
1. CBM recovery
2.
UCG project

It can thus be seen that coal in solid form can continue to support power generation and other applications, CBM can supplement natural gas requirements and, through the UCG route, the syngas so generated can be used either for power generation (IGCC) or for chemicals or liquid petroleum fuel for the transportation network. An integrated development as proposed would make it imperative that exploration of oil and gas and exploitation of coal resources be carried out in unison. For example, in the Cambay basin in Gujarat more than 4000 wells have been drilled for oil exploration / production. Many of these exploratory and development wells were dry and abandoned where coal seams were encountered. If petroleum / coal activities were to be performed under a "single" licence, UCG operation could have started much sooner. As can be seen, exploration / exploitation of oil / gas and coal are both technologically and geologically linked.

Policy initiatives for proposed development of coal fuels:

- Unified license for Coal, CBM and UCG production along with CO2 sequestration.
- Unified license for Petroleum, CBM and UCG in those basins where hydrocarbon (crude) licenses exist, granting licenses for remaining blocks for exploration of CBM and UCG.
- Supervisory agency to co-ordinate and promote integrated development of all coal fuels.

Introduction

It is very aptly propounded that: "Sustainable development aims to promote economic growth, efficient use of natural resources and their secured long term supply and protection of environment to ensure survival of the future generations." India is tipped to be a rapidly growing economy heading towards occupying 3rd place in the world. However, it needs to be remembered that the growth of the economy is dependent upon a crucial assumption that the requisite amount of energy will be available at a price that does not adversely affect the competitiveness of the nation's industry and service sector.
Proper attention needs to be paid to the fragility of this crucial assumption in view of the current events and what is expected in the future. Oil prices have hovered around $50/barrel for long enough to dispel any optimism about prices coming down to lower levels in the foreseeable future. The import bill for crude oil has jumped to almost double what it was a year ago. Some analysts predict that the oil peak is nearby and oil prices could touch even the $100/barrel mark. Even if one were not to be carried away by these doomsday predictions, we need to face the facts of life with open eyes and guide the country's energy policy to ensure that happenings in the world oil market do not succeed in derailing the juggernaut of the Indian economy. Geopolitical maneuvers to tie up higher quantities of oil and gas, acquisition of oil and gas equity abroad and intensive exploration of oil and gas at home are indeed welcome steps, but not sufficient to ensure energy security and economic robustness for India. A premium needs to be attached to indigenous exploitation of energy sources - not just oil and gas. With a gross domestic product (GDP) growth of 8 per cent set for the Tenth Five Year Plan (2002-07), energy demand is expected to grow at 5.2 per cent. India's incremental energy demand for the next decade is projected to be among the highest in the world, spurred by sustained economic growth, rise in income levels and increased availability of goods and services. The projected requirement of commercial energy is estimated at about 412 MTOE and 554 MTOE respectively in 2007 and 2012; commercial energy demand is estimated to grow at an average rate of 6.6 per cent and 6.1 per cent respectively during the periods 2002-07 and 2007-12.
However, the demand may be less by 5 per cent and 10 per cent during 2006-07 and 2011-12 respectively, due to increasing use of information technology (IT) and the prevalence of e-Commerce, which will mainly affect the demand for energy in the transport sector. Estimated energy demand in India, based on extrapolation of the "Business as Usual" scenario as reported by the Asean India Business Portal, is:

Primary energy            Unit   Demand (original units)    Demand (MTOE)
                                 2006-07      2011-12       2006-07   2011-12
Coal                      Mt     460.50       620.00        190.00    254.93
Lignite                   Mt     57.79        81.54         15.51     22.05
Oil                       Mt     134.50       172.47        144.58    185.40
Natural Gas               BCM    47.45        64.00         42.70     57.60
Hydro Power               BKwh   148.08       215.66        12.73     18.54
Nuclear Power             BKwh   23.15        54.74         6.04      14.16
Wind Power                BKwh   4.00         11.62         0.35      1.00
Total Commercial Energy                                     411.91    553.68
Non-Commercial Energy                                       151.30    170.25
Total Energy Demand                                         563.21    723.93

India's energy use is mostly based on fossil fuels. Although the country has significant coal and hydro resource potential, it is relatively poor in oil and gas resources. As a result it has to depend on imports to meet its energy supplies. The geographical distribution of available primary commercial energy sources in the country is quite skewed, with 77 per cent of the hydro potential located in the northern and north-eastern regions of the country. Similarly, about 70 per cent of the total coal reserves are located in the eastern region while most of the hydrocarbon reserves lie in the west. As per current projections, India's dependence on oil imports is expected to increase. The demand for natural gas also outpaces supply, and efforts are being made to import natural gas in the form of liquefied natural gas (LNG) and piped gas. If the present trend continues, India's oil import dependency is likely to grow beyond the current level of 70 per cent.
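The projections above can be sanity-checked with simple compound growth; this is my own back-of-the-envelope sketch, not a calculation from the report itself:

```javascript
// Check the cited figures: 412 MTOE in 2007 growing at roughly
// 6.1% per year over the five years 2007-12 should land near the
// 554 MTOE projected for 2012.
function project(start, annualRate, years) {
  return start * (1 + annualRate) ** years;
}

console.log(Math.round(project(412, 0.061, 5))); // 554
```

The two numbers agree, which suggests the report's demand figures and its quoted growth rate were derived from the same compound-growth assumption.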
The success of the liberalization policy and economic reforms introduced in the country is largely dependent on adequate availability of energy resources at affordable prices, and oil has a significant place in it. Therefore any disruption in oil supplies would hamper the progress of the country. Thus, from considerations of national self reliance, security and assured energy supply, production of oil in India from an alternate source, i.e. coal, is justified. Coal is the major fossil fuel in India and continues to play a pivotal role in the energy sector. Oil and natural gas are very limited, hence India is a net importer of hydrocarbons. India is heavily dependent on oil imports and the trend is likely to remain the same. Economic growth of the country is tied up with regular supply of oil and any disruptions could drastically arrest the growth. Oil imports are a drain on foreign exchange reserves since they constitute about 26% of the import bill. More and more natural gas is being used for power generation, leaving lesser allocations for fertilisers and chemicals etc. where it is essential and convenient. Indian coals in general are of inferior quality. Present use of coal is inefficient and polluting. Hence there is a need for technologies to utilise coal efficiently and cleanly, substitute the lesser reserves of oil and gas with abundantly available coal, and prolong the reserves of all the fossil fuels for use of future generations. These requirements can be met through application of coal gasification technology and following the principle of sustainable development. While thinking about the energy strategy of India, the role of coal cannot be wished away, however inconvenient it may be in terms of utilisation efficiency and environment. We have to devise technological solutions to make the most out of the indigenous resource.
Here we shall talk about technologies that can not only augment indigenous energy resources, but also extract energy from coal in forms that can replace imported oil and gas products. This would call for a major change in the mindset of energy managers, so that companies previously seen as coal producers are now seen as energy producers. The prime technologies for achieving the objectives of replacement of liquid fuels and substitution of natural gas would be:

- CBM (Coal Bed Methane) extraction
- CMM (Coal Mine Methane) / AMM (Abandoned Mine Methane) / VAM (Ventilation Air Methane) capture and utilisation
- UCG (Underground Coal Gasification)
- CTL (Coal to Liquid)

CBM/CMM/AMM/VAM will provide gas in areas which are far away from gas supply sources and thus expand the usage of clean fuel. Mine gas utilisation schemes, therefore, significantly benefit the environment by:

- Reducing greenhouse gas emissions, as the carbon dioxide produced by combustion is more than 20 times less harmful to the atmosphere than methane
- Displacing coal use in environmentally sensitive areas

UCG has the potential to virtually eliminate methane emissions to the atmosphere from coal seams whilst allowing the energy stored in the coal to be recovered. UCG will help unearth the unreachable indigenous resource and substitute LNG imports. IGCC will provide a clean coal technology for power generation. CTL will provide a way of substituting liquid fuels which are imported either in the form of crude or the products themselves. Especially if UCG and GTL are combined, it can provide energy security to India on a sustained basis by satisfying the largest user of energy - the transport sector. This, along with biodiesel, has the potential of lending total energy independence to India.

India's Energy Puzzle

As per the available estimates, India's 2020 consumption of energy is expected to be approximately 800 Mtoe.
To realise the target, each segment of the value chain needs two and a half times growth between now and 2020, thus calling for massive investments in infrastructure creation on a grand scale through efforts from the public and private sectors and joint partnerships. The investments would also need the policy makers to work towards creating an environment with an appropriate policy, legislative and regulatory framework. Considering the limited reserve potential of petroleum and natural gas, eco-conservation restrictions on hydel projects and the geo-political perception of nuclear power, coal will continue to occupy centre-stage in India's energy scenario. All energy sources need to be explored and exploited to the hilt while determining the optimum fuel mix with options of coal, oil, gas, hydel, renewables and nuclear. It is essential to understand the internalization of environmental cost imposed by different forms of energy and what this means for the energy choices to be made, keeping the long-term perspective in mind. For India to join the league of developed nations, we must ensure that power is produced at affordable rates and competition is introduced in the sector to enhance efficiency, consumer responsiveness and reduced prices. Keeping electricity prices affordable and internationally competitive will depend on the price of fuel, viz. coal, oil products or gas, as fuel constitutes 60% of the cost of generation. India has relatively large reserves of coal (250 billion tonnes) compared to crude oil (728 million tonnes) and natural gas (686 billion cubic meters). Coal meets about 60% of the commercial energy needs and about 70% of the electricity produced in India comes from coal. Advanced technologies, when applied to Indian coal resources, can improve the efficiency and minimize the environmental impacts of coal utilisation. A balance is necessary between short term imperatives and long term possibilities to enable sustainable development.
To pursue such a strategy, technologies are available and are also under development. Since reserves of oil and natural gas are meagre, they need to be substituted with coal to the extent feasible. At the same time, all three fuels need to be used more efficiently, especially through the use of modern technologies. If gaseous fuels could be obtained on a large scale from mineable and unmineable coal resources, the versatility of coal as a fuel resource could be greatly enhanced. The major advantage of gasification is that coal is converted into a gaseous fuel which is easy to handle and is a clean form of energy. In gaseous form it enables the substitution of fertilisers and fuels, which also improves the economics of coal gasification.

India's Coal Resources

India is endowed with rich deposits of coal and lignite in sedimentary basins of varying dimensions. The bulk of the coal resource of 235 billion tonnes is contained in older basins such as the Gondwana basin. Large lignite deposits of 100 billion tonnes occur in younger basins of Gujarat, Rajasthan and Tamil Nadu. A characteristic feature of these basins is the development of very thick coal and lignite seams (20-80 m) over large stretches of the coal/lignite fields. In fact, one of the thickest seams in the world (138 m) is in the Indian coalfields. The updated total coal resource of the country as per the latest national inventory as on 1.1.2004 is 2,45,692.42 million tonnes, for coal seams of 0.9 m and above in thickness and up to 1200 m depth from the surface. The inventory is based on sub-surface data accrued from regional (including promotional) and detailed drilling carried out by GSI, CMPDI, SCCL and MECL. Of the total resources, the Gondwana coalfields contribute 2,44,785.47 million tonnes while the Tertiary coalfields account for 906.95 million tonnes.
The depth-wise breakup of the total resource reveals that about 65.6% of the coal resource is confined within the 0-300 m depth level, in which the maximum share comes from Orissa (43.9 bt), followed by Jharkhand (36.1 bt, excluding the Jharia coalfield), Chhattisgarh (31.4 bt), West Bengal (12.3 bt) and others. Resources available within the 300-600 m and 600-1200 m depth ranges are 61,836.31 mt and 17,882.30 mt respectively. In addition, there occur 14,212.42 mt of resources in the Jharia coalfield confined to 0-600 m depth.

TECHNOLOGY-WISE COAL PRODUCTION

Depth-Wise Coal Reserves in India (billion tonnes, as on 01/01/2003)

  Depth Range      Proved   Indicated   Inferred   Total   Share (%)
  0-300 m            68.6        64.5       15.6   148.7          62
  300-600 m           6.1        37.0       16.8    60.0          25
  0-600 m (JCF)      13.7         0.5        0.0    14.2           6
  600-1200 m          1.7        10.5        5.6    17.8           7
  Total              90.0       113.6       38.0   240.7         100
  Share (%)            37          47         16     100

(JCF = Jharia coalfield)

Demand and Availability of Raw Coal (million tonnes)

  Year       Demand   Availability   Gap
  2003-04       381            351    30
  2006-07       461            405    56
  2011-12       620            515   105

Underground production of coal peaked in the late seventies and has fallen slowly since then. Surface mining, on the other hand, has soared from 16 to 160 million tonnes per annum. Of the 588 mines in India, 355 are underground, but opencast mining accounts for 75 per cent of production and employs only 16 per cent of the total mining workforce. Productivity is higher in the opencast sector. Almost 80% of today's coal comes from surface strip mines (opencast mines), which are much safer. The above estimates do not include the large reserves of deep-seated coal in Gujarat.

Coalbed Methane Potential of India

India's coals have gas content values ranging from 1 to 23 m3/tonne. The CBM resources as per the Directorate General of Hydrocarbons (DGH), Ministry of Petroleum & Natural Gas (MoP&NG), are tabulated below.

Table: Prognosticated Resource of CBM
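The percentage shares in the depth-wise table above can be checked directly from the tonnage figures. A short sketch (row values are taken from the table; rounding to whole percent is an assumption about how the source table was prepared):

```python
# Recompute the depth-wise share column from the total-reserve figures
# quoted in the table above (billion tonnes).
totals_bt = {
    "0-300m": 148.7,
    "300-600m": 60.0,
    "0-600m (JCF)": 14.2,
    "600-1200m": 17.8,
}
grand_total = sum(totals_bt.values())            # 240.7 bt
shares = {rng: round(100 * t / grand_total) for rng, t in totals_bt.items()}
print(grand_total, shares)
# shares come out as 62, 25, 6 and 7 per cent, matching the table
```

The same check applied to the 0-300 m row also reproduces the "about 65.6%" figure quoted for the 0-300 m level once the 0-600 m Jharia tonnage is apportioned, which suggests the published percentages are simple ratios of these totals.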
Prognosticated CBM Resource as per DGH

  S.No.  State           Coalfield/Block       Area (sq km)   Resource (TCF)   Resource (BCM)   Remarks
  1      West Bengal     North Raniganj                 232            1.030            29.17   Marginal resource may
                         Eastern Raniganj               500            1.850            52.38   be in Jharkhand
                         Birbhum                        250            1.000            28.32
                         Sub-total                      982            3.880           109.87
  2      Jharkhand       Jharia                       69.20
                         East & West Bokaro           93.37
                         North Karanpura             340.54
                         Sub-total                    503.11           6.178           174.93
  3      Madhya Pradesh  Sohagpur                       495            3.030            85.79
                         Sohagpur                       500
                         Satpura                        500            1.000            28.32
  4      Gujarat         Cambay Basin             2400-3218*       11*-19.4      311*-549.39   May not be immediately
                                                                                               available because ONGC has
                                                                                               active conventional oil and
                                                                                               gas operations
         Grand Total                          2980.11-3798.11  25.088-33.488    710.39-948.73

  * As per Advanced Resources Inc.

In India, Reliance Gas has carried out a comprehensive geological assessment of coal/lignite basins, based on which about 20,000 km2 of area has been identified as prospective for CBM, with an estimated in-place resource of about 20,000 billion cubic metres. A recoverable reserve of about 800 billion cubic metres, and a gas production potential of about 105 million cubic metres per day over a period of 20 years, have been estimated. CBM potential is thus about 1.5 times the present natural gas production in India, capable of generating about 19,000 MW of electricity. The gas production potential of India is given in the table below.

CBM Production Potential in India
  Basin / Area               Ref. No.   Production Potential   Power Gen.   LNG
                                        (million m3/day)       (MW)         (MMtpa)
  Cambay Basin
    North Gujarat                  15                   30.0         5500     7.50
  Barmer Basin
    South Rajasthan                16                   19.0         3500     4.75
  Damodar Basin
    Raniganj                        3                   12.0         2200     3.00
    Jharia                          4                    3.5          650     1.00
    East Bokaro                     5                    2.5          450     0.60
    North Karanpura                 6                    6.0         1100     1.50
  Rajmahal Basin
    Rajmahal                        1                    4.5          800     1.20
    Birbhum                         2                    6.0         1100     1.50
  Others
    Singrauli                       7                    1.0          180     0.25
    Sohagpur                        8                    4.0          720     1.00
    Satpura                         9                    1.5          270     0.40
    Ib River                       10                    5.0          900     1.25
    Talcher                        11                    2.5          450     0.60
    Wardha Valley                  12                    1.5          270     0.40
    Godavari Valley                13                    4.0          720     1.00
    Cauvery Basin                  14                    2.5          450     0.60
  All India                      1-16                  105.5        19260    26.55

(Source: Coalbed Methane: A Survey by Reliance Gas (P) Limited)

A resource assessment undertaken by Dominion Energy/Advisors (USA) estimates the CBM resources at 30 trillion cubic feet, or 850 billion cubic metres. Reliance Industries Ltd has discovered reserves of 3.76 trillion cubic feet (TCF) of coal bed methane gas at one of its blocks in Madhya Pradesh. Essar has already drilled three wells to a depth of 1450 metres and is producing gas experimentally. Neyveli Lignite Corporation (NLC) proposes to take up an Underground Coal Gasification (UCG) project in a suitable lignite block in Rajasthan under the Ministry of Coal's S&T programme and Department of Science & Technology funding, at a total cost of Rs. 1,125 lakh, as part of a joint venture project with Coal India Limited (CIL). Great Eastern Energy Corporation Limited (GEECL) and Essar are also involved in initial field studies in Raniganj South and Gujarat respectively. The coal occurs in the Lower Gondwana (Permian) coal-bearing Karharbari/Barakar and Raniganj formations, where there can be in excess of 100 m of total coal thickness. The Barakar formation contains some 50 coal seams that are greater than 1.5 m thick, while the Raniganj formation includes 10 seams ranging from 1 m to 11 m thick.
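The "All India" row ties a gas flow of 105.5 million m3/day to about 19,260 MW of capacity. That conversion can be reproduced with a back-of-the-envelope calculation; the calorific value and plant efficiency below are assumed values, not figures from the source:

```python
# Convert the all-India CBM flow into electrical capacity. A calorific
# value of ~36 MJ/m3 for methane and a combined-cycle efficiency of 44%
# are assumptions; the source states only the resulting MW figure.
gas_flow_m3_per_day = 105.5e6
calorific_value_j_per_m3 = 36e6       # assumed
plant_efficiency = 0.44               # assumed

thermal_power_w = gas_flow_m3_per_day * calorific_value_j_per_m3 / 86400
electric_mw = thermal_power_w * plant_efficiency / 1e6
print(round(electric_mw))             # about 19,300 MW vs. 19,260 MW in the table
```

The close agreement suggests the published figure assumes a modern gas-fired combined-cycle plant rather than a simple-cycle turbine, whose efficiency would be closer to 30%.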
The Damodar Valley basin is the most heavily mined area in India, containing high-rank, gassy coal. It is suggested that, based on these characteristics, the Jharia, Bokaro, North Karanpura and Raniganj coalfields should be the primary targets for CBM development. Test results for the Barakar coals in the Jharia coalfield report the majority of gas contents to be between 7 and 17 m3/t (dry, ash-free). The results indicate that gas content increases uniformly with the depth of the coal. A number of encouraging factors are reported:

- Average cumulative coal thickness (surface to 1200 m) in the order of 90 m
- Measured gas content of between 7 and 17 m3/t
- Gas content and Langmuir isotherm data suggest coal near to 100% gas saturation
- Results from test wells show gas flows of 1000-2000 m3/d from a single seam at depth
- Average gas production from a single test well (5 coal seams fracced) of 6500 m3/d over a 1.5-year period (production testing continuing), with an initial maximum gas flow of 23,000 m3/d and a cumulative gas flow of 7 million m3
- Water production of about 6 m3/d, which has been shown to be of a quality suitable for agricultural use and also for recycling to field operations
- A local market for CBM for power generation, with a larger market identified within 30 km

The main Gondwana coal basins are rifted intra-cratonic grabens holding thick sequences of coal seams, and offer considerable prospects for coal bed methane. The major part of Indian Gondwana coal (mostly up to 300 m depth) is of low rank, far below the threshold for thermogenic methane generation. However, high-rank coals, amenable to the generation of coal bed methane, mostly occur in untapped deeper parts of basins covered by younger sediments. Tertiary coals in the petroliferous basins of Cambay, Upper Assam and Assam-Arakan may be prospective due to their reported higher gas content, which is probably stored in the coal after generation from deeper-lying hydrocarbon source beds, or may be of biogenic origin.
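The saturation claim in the list above comes from comparing measured gas content against the Langmuir isotherm, which gives the maximum gas a coal can hold at a given pressure. A sketch with purely illustrative numbers (the Langmuir parameters and reservoir pressure below are hypothetical, not Jharia data):

```python
# Langmuir isotherm: adsorbed-gas capacity of coal at reservoir pressure.
# V_L (Langmuir volume) and P_L (Langmuir pressure) are fitted per coal;
# the values used here are illustrative only.
def langmuir_capacity(p_mpa, v_l_m3_per_t, p_l_mpa):
    """Maximum adsorbed gas (m3/t) the coal can hold at pressure p_mpa."""
    return v_l_m3_per_t * p_mpa / (p_l_mpa + p_mpa)

v_l, p_l = 20.0, 3.0                  # hypothetical isotherm parameters
reservoir_p = 8.0                     # MPa, hypothetical
measured_content = 14.0               # m3/t, within the 7-17 range reported above

capacity = langmuir_capacity(reservoir_p, v_l, p_l)   # ~14.5 m3/t
saturation = measured_content / capacity
print(f"{saturation:.0%} gas saturated")
```

A saturation near 100% matters commercially because an undersaturated coal must be dewatered well below its initial pressure before any gas desorbs at all.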
The Government of India has awarded 16 CBM blocks for exploration and production of coal bed methane in different coalfields of India. Commercial production of CBM from a few of these awarded blocks may start by 2006-07. These blocks may yield a peak production of about 23 MMSCMD of CBM in the country.

CMM/AMM/VAM

An initial review of historic mining practices in India, and discussions with CIL and others, indicate that few opportunities exist for AMM development. This is due to the relatively shallow depth of mining, low gas contents and the use of non-caving methods of underground coal mining (bord and pillar). However, if longwall mining expands, the potential application of AMM could increase in the medium to long term. Methane emission studies from the working mines of India report that most of the degree-three gassy mines (10 cubic metres per tonne) are confined to the four Damodar Valley coalfields, viz. Raniganj, Jharia, Bokaro and North Karanpura in Bihar and West Bengal. In these areas, the thickest bituminous coals are extensively developed in the Barakar and Raniganj measures, of Lower and Upper Permian age respectively. The Barakar coal seams are superior to the Raniganj coal seams as coal bed methane targets and, based on thickness, burial depth, rank and quality of coal, this region has the greatest coalbed methane potential in India. Therefore, until deeper, gassier seams are tapped, India's potential for profitable VAM oxidation projects will remain modest at best. Singh (2001b) states that 66 per cent of the underground mines emit less than 1 m3 of methane per tonne of coal produced, 27 per cent emit from 1 to 10 m3 per tonne, and the remaining 7 per cent emit over 10 m3. In India, underground coal production currently comprises approximately 25 per cent of total production, and the annual tonnage of underground coal produced has remained essentially steady over the past two decades (World Coal, 1999).
Singh (2001a) observes India's trend toward a decreasing share of underground coal production. That trend, however, appears to derive primarily from a dramatic increase in surface production in recent years rather than from a drop in absolute production from underground mines (World Coal, 1999). The coal seams currently being exploited are not particularly gassy, and methane concentrations in ventilation airflows, even at the gassiest mines, are low, typically below 0.3 per cent.

Altering the Role of Coal in India's Energy Basket

Coal remains India's principal source for meeting its primary and secondary commercial energy requirements. Of the 1,04,917.50 MW of overall installed power generation capacity in the country (as on 31 March 2002), about 59,386 MW is coal based and 2,745 MW is lignite based, totalling 62,131 MW, or 59 per cent. Indigenous coal is likely to remain the most stable and least-cost option for the bulk of India's energy needs in the foreseeable future. This is because coal-based thermal power generation capacity has a shorter gestation period and lower specific investment costs when compared with other locally available commercial energy resources such as nuclear or hydropower. Thus, there is need for concerted effort towards the overall development of the sector in future Plans. Energy security concerns underscore the need to further develop indigenous coal production in the foreseeable future. When technologies like CBM, CMM, AMM, VAM and UCG are employed in conjunction with CTL technology, the reach of coal-based fuels widens to cover even transport fuels and the substitution of natural gas, both of which are imported and weigh heavily on the country's trade balance. Apart from this, these technologies will harness fuel resources hitherto considered unreachable or unusable.
Methane is a natural product arising from the decay of organic matter; as coal deposits formed with increasing depths of burial and rising temperatures and pressures over geological time, a proportion of the methane produced was adsorbed by the coal. Whereas in a natural gas reservoir such as sandstone the gas is held in the void spaces within the rock, methane in coal is retained on the surface of the coal within its micropore structure. Such adsorption is maintained by the lithostatic and hydrostatic pressures; the release of these pressures allows methane to escape from the coal. The presence of significant amounts of methane in coal is familiar to coal miners, as the gas is released by the relaxation of pressure and fracturing of the strata during mining activity, and can give rise to serious safety concerns if not managed properly. Many explosions have occurred over the years, leading to the development of "methane drainage", where the gas is drained from the strata by pumping from boreholes drilled above the working face. This practice often yielded significant quantities of methane, which was on occasion used to fire the colliery boilers. Methane build-up in coal mines has caused many mine explosions, killing thousands of miners worldwide. In gassy mining conditions, creating a safe work environment requires that coal mining companies develop practices that allow them to assess the amount of gas that will be liberated during the mining process and determine the best way to remove the gas from the mine. Whether the gas is drained from the seam or adjoining strata in advance of mining, or from the gob, the purpose is the same: remove enough gas from the mine so that the ventilation system can dilute the remaining gas emitted into the mine to acceptable levels. Gas drainage systems are often not designed with the goal of optimising gas recovery, because of budget constraints and the overriding concern of safety.
Furthermore, for the same reasons, the data available to an investigator for assessing the potential of a commercial coal mine methane resource may be limited. Successful development of a coal mine methane project requires a thorough understanding of the size and production potential of the gas resource. The coal mine methane resource comprises the volume of gas distributed throughout the coal and surrounding strata, often referred to as gas-in-place. 100% recovery of the gas-in-place is virtually impossible. Technically recoverable coal mine methane resources are the quantity of gas that is recoverable by proven modes of extraction using existing technology. The commercially extractable portion of the technically recoverable resources is the reserves. A developer's estimate of reserves will vary depending on assumptions regarding the technology used for recovery and changes that may take place in future economic conditions. Methane continues to emit from a mine after closure, and recently the concept of collecting gas from abandoned mines, to provide an energy source which would otherwise be wasted, has been developed. The concept is generally referred to as Coal Mine Methane (CMM). The amount of methane in a coal bed depends on the quality and depth of the coal deposit. In general, the higher the energy value of the coal and the deeper the coal bed beneath the surface (resulting in more pressure from overlying rock formations), the more methane the deposit holds. Coal stores six to seven times more gas than the equivalent rock volume of a conventional gas reservoir. Many mining companies will pre-drill to allow some of this gas to escape, but as the mining operation grows, new longwalls are constructed. When coal is extracted and the wall moved, gas escapes from the now-collapsing roof. Gas pockets could also be disrupted above and below the coal seam, something that must be monitored and measured.
All underground coal mines employ ventilation systems to ensure that methane levels remain within safe concentrations. These systems can exhaust significant amounts of methane to the atmosphere in low concentrations. There are three things to consider in this process: mine operators want to drain the coal mine methane for mine safety and efficiency; they want to sell the gas as fuel and feedstock; and they want to certify that it qualifies for greenhouse gas emission reductions. Methane from coal beds can be recovered by:

- Draining gas from working coal mines
- Extracting gas from abandoned coal mines
- Producing gas from unmined (virgin) coal using surface boreholes

Mine gas utilisation schemes are encouraged by governments and international agencies that recognise the energy benefit of a waste material and the net reductions in greenhouse gas emissions achievable. Virgin CBM production schemes, which are independent of mining, contribute indirectly to a reduction in greenhouse emissions by replacing coal burning.

CBM

Coalbed methane is located wherever coal is found. Only a small percentage of these resources can be recovered with current technology, and a still smaller percentage can be recovered profitably. Gas can be produced from coals of nearly every rank; however, some of the less attractive coals (e.g., lignite) may require substantial thicknesses of coal to develop adequate reserves. A typical one-foot thickness of coal six hundred feet deep is capable of containing as much gas as a typical sandstone reservoir five thousand feet deep. Another unique characteristic of coalbed production is its producing behaviour. In most cases, initial production of gas is quite low while water production may be high. As the water is withdrawn and the bottom-hole pressure decreases in the reservoir near the well bore, gas production gradually increases.
During the first few producing months the water-producing rate will continue to decrease, accompanied by an increase in the gas-producing rate, until a pseudo-steady state occurs for both phases. Water pressure holds methane in the coal bed. To release the gas, its partial pressure must be reduced by removing water from the coal beds. Once the pressure is lowered, the gas and water move through the coal bed and up the wells. At first, coalbed methane wells produce mostly water, but over time the amount of water declines and gas production rises as the bed is dewatered. Water removal may continue for several years. The water is usually discharged on the surface or injected into aquifers. Whether a coal bed will produce commercial quantities of methane gas depends on the coal quality, its content of natural gas per tonne of coal, the thickness of the coal bed(s), the reservoir pressure, and the natural fractures and permeability of the coal. CBM is generally purer straight out of the ground than gas from conventional natural gas reservoirs. CBM is recovered from virgin coal (for this reason it is sometimes referred to as VCBM) by releasing the gas located both within the coal and adsorbed onto its surface. Coal seams are injected with a high-pressure water, foam and sand mix. The high pressure fractures the coal for some distance around the borehole. The sand holds the fractures open, enabling the water and gas to flow to the well bore and hence to the surface. CBM offers a method of extracting methane from unworked coal without detrimentally affecting the physical properties of the coal. This provides many benefits:

- When carried out on its own, it facilitates exploitation of the coal resource in areas where the coal would be unlikely to be worked by traditional mining methods. As the coal remains in the ground, there is no surface subsidence.
- Alternatively, it facilitates extraction of gas from coal seams prior to mining the coal, thus reducing the potentially dangerous methane before traditional mining methods are carried out.
- Methane quality is such that it has the potential to be fed directly into the gas distribution network. This is one distinct difference from CMM, which has a higher carbon dioxide content and so is not suitable for direct introduction.

Coal bed methane development is, however, accompanied by a number of environmental problems and human health hazards.

1. Disposal of water removed from coal bed methane wells
CBM produced water may have high concentrations of dissolved salts and other solids. Water discharges may flood the property of landowners, causing erosion and damaging soils and plants. Coalbed methane water in Montana, USA has an average sodium adsorption ratio of 47, over 30 times the level that can damage soils, causing crop yields to decline.

2. Drop in drinking water levels in surrounding areas
The level of some drinking water wells near coalbed methane development has dropped as water has been removed from coal beds.

3. Contamination of aquifers
Contamination of aquifers (groundwater) from coalbed methane development represents another environmental problem. There is some evidence that natural gas can migrate up through vertical fissures and contaminate overlying aquifers.

4. Venting and seeping of methane and other chemicals
In the San Juan Basin, USA, methane gas is seeping up in fields, forests and rivers. Methane seeps often have companion "dead zones" where methane-saturated soils have starved the roots of vegetation, killing some trees nearly 100 years old. High levels of methane asphyxiate rodents in burrows near seeps. While such seeps are not new, they appear to be more frequent and severe since the advent of coalbed methane development. Some scientists and residents believe that coalbed methane development is aggravating the problem.
Methane seeping into drinking water wells and under people's homes has caused a health hazard. On the Pine River near Bayfield, Colorado, Amoco bought out and relocated several families because of the high levels of methane present in their basements and drinking water. Other chemicals may vent following coalbed methane development, including carbon dioxide and hydrogen sulfide.

5. Underground fires
Underground fires plague coal-rich areas. They often strike where extensive mining has occurred, because shafts and tunnels help circulate the oxygen needed for coal to burn below the earth's surface. Coalbed methane development can exacerbate this problem when water is removed to release the gas and oxygen gets in. Two underground coal fires are burning on the Southern Ute Reservation in southwest Colorado, in an area where coalbed methane has been extracted. In June 2002, an underground coal fire in Glenwood Springs, Colorado sparked a month-long wildfire in the area that destroyed people's homes and property.

6. Destruction of land and harm to wildlife
The wells are connected with pipelines, compressor stations and roads, leaving scars on the land that will last for decades. Wildlife habitat is fragmented, and migration corridors are disrupted. High road densities and the constant vehicular traffic needed to monitor and maintain wells and pipelines are especially disruptive to wildlife.

CBM Produced Water

Coalbed methane produced water often has high sodium adsorption ratio (SAR) values (the ratio of sodium to calcium and magnesium concentrations), high concentrations of metals (iron, manganese and barium) and variable salt content. These minerals may affect soil permeability or be toxic to certain plant species. Ideal conditions for using CBM produced water for irrigation are areas with coarse-textured soil and salt-tolerant crops.
Native high-salt-tolerant grasses and forbs can be planted around impoundments and discharge sites to maximise the use of CBM produced water and reduce erosion, and can also be used in bioremediation of brine-contaminated soils. The economics of CBM production depend on reducing the cost of handling produced water. Beneficial uses for produced water offer the best alternative to high-cost re-injection procedures. Various treatment or pretreatment applications may be necessary before produced water can be channelled to alternative uses. Alternatives to re-injection of CBM produced water fall into five main categories: water impoundments for stock and wildlife, irrigation, surface discharge, and recreational and industrial uses. Water management options for CBM produced water include use in the operational activities of industries in the producing region. Common industrial uses include coal mines, animal feedlots, cooling towers, car washes, enhanced oil recovery and fire protection.

CMM/AMM

Coal mining releases the gases naturally occurring in coal seams. The methane flow from mine workings depends on the gas content of the coal seams, the thickness and distance of adjacent coal seams from the worked seams, and the method and rate of mining. Atmospheric emissions can be reduced by capturing a proportion of the gas before it enters the mine airways, piping it to the surface and using it as a fuel gas or a chemical feedstock. CMM drainage technologies capture only a proportion of the gas released into mine workings. Captures achieved in individual mining panels typically range from 30% to 80%, depending on the drainage technology used, the geology and the mining conditions. Coal mine methane is produced as a result of the fracturing of coal and coal-measure strata by historical and current mining operations, releasing the methane which had been adsorbed within them.
However, the commercial exploitation of methane has the potential, now well proven, of harnessing the gas safely and beneficially to generate electricity, and can provide considerable benefits:

- An uncontrolled danger and potential surface hazard to individuals and property is harnessed and greatly reduced, if not removed.
- Harmful venting to the atmosphere is reduced, with a significant reduction in greenhouse gas emissions.
- Electricity becomes available to local users, especially in cases where former colliery sites are developed for industry and commerce.

The coal mining industry has made good progress in delivering high-grade CMM to natural gas markets. Using gob gas has proven more challenging, although pioneers in the coal, gas and power industries have identified several potentially beneficial gob gas uses, listed below:

- Fuel for coal dryers and other gas-fuelled mine equipment
- Fuel for electricity production
- Feedstock for gas enrichment systems that upgrade the gas to pipeline quality
- Supplemental fuel for industrial and utility boilers (delivered in dedicated pipelines)

Since gob gas (as well as any medium- to high-quality methane) may be cofired with the primary fuel in a variety of existing combustion units including boilers, furnaces and kilns, it can partially replace common fuels (e.g. coal, oil and natural gas). The fuel that cofired gob gas replaces is referred to herein as "avoided" fuel. Cofiring gob gas, as explained in the next section, can provide greater value to the buyer than that of the avoided (replaced) primary fuel. This report refers to an "enhanced" gob gas value, which is the sum of the avoided fuel plus associated environmental and operational benefits.

Environmental Benefits

The most important and valuable environmental benefits can be achieved by cofiring gob gas in quantities that are small compared with total boiler heat (Glickert 1997).
The benefits include reductions in NOx, SOx and particulates (opacity):

- NOx Reduction. When properly configured and optimised, gob gas cofiring may be able to reduce NOx emissions from the entire boiler.
- SOx Reduction. Cofiring methane reduces SOx emissions.
- Reduced Opacity. Utilities may be able to use gas to reduce stack opacity and thereby avoid plant derating.

Operational Benefits

- Improved Ash Quality. If a utility intends to sell its ash to the concrete industry to avoid high disposal costs, gob gas cofiring may enhance this possibility by reducing carbon levels in the ash to saleable limits. Utilities sometimes experience sparking problems in their electrostatic precipitators; studies show that gas cofiring may mitigate the condition.
- Derate Mitigation. If coal processing equipment inadequacies limit a boiler (either during pulverizer or feeder outages, or because the plant has been forced to use low-sulfur coal that contains less heat per pound), gob gas use may mitigate the derating condition by allowing more fuel to enter the boiler.
- Rating Increase. In some cases, a boiler's operating limit may be driven by its forced-draft fan rating, even though it may not have reached its total heat release capacity. In this event, the operator may be able to cofire small increments of gob gas without backing off the coal feed, thus ending up with an increased plant rating.
- Lower Turndown. If a boiler can rely primarily on gas during periods of low demand, the minimum operating load can be reduced by almost half of its coal-fired minimum (e.g. from 45 to 25 per cent of full load). Lower turndowns result in fewer shutdowns and reduced boiler start-up costs. Not only does gas retain its flame stability at low loads, its heat rate is much better than coal's in this range. To gain this benefit, however, the boiler operator must have access to larger gas flows than are typically available from a gob gas project.
- Reduction of Slag Buildup.
Some utilities have fired gas in coal boilers, for short periods or continuously, to remove harmful slag deposits. This removal strategy is much less expensive than shutting the boiler down and mechanically removing the deposits. As with the improved turndown ratio described above, however, the operator must have access to an adequate gas supply.

The following two benefits are intangible and probably minor:

- Increased Efficiency (Lower Heat Rate). Methane often burns in large coal boilers with somewhat better combustion characteristics than the coal itself. This results in a small efficiency gain, partially offset by the need to evaporate the water formed during methane combustion and by the fact that the boilers were built to maximise radiant heat transfer from coal, not gas.
- Reduced O&M Costs. Many ancillary systems in a coal-fired boiler process, handle and transport coal, as well as remove coal ash. In theory, these systems will cost less to operate and maintain when gas is fired as a partial substitute for coal, because they are handling less coal.

CO2 Sequestration and CMM Production

CO2 is preferentially adsorbed on coal, relative to methane and nitrogen. Therefore, if CO2 is injected into an abandoned coal mine, the CO2 will displace adsorbed methane. Injection and subsequent adsorption of CO2 onto the carbon contained in the coal remaining within and peripheral to an abandoned coal mine will trap the CO2, effectively sequestering it from the atmosphere and thereby reducing the amount of this greenhouse gas (GHG) in the atmosphere. The physical determinants of the effectiveness of this process are the adsorptive capacity of the coal for the gases, the permeability of the coal, the amount of coal exposed to the CO2, and the pressure at which the mine can hold the gas.
The economic feasibility of the envisioned project is determined by the unit cost of the CO2 sequestered versus the value of the greenhouse gas (GHG) reduction credits that could be generated. Abandoned coal mines could also be used as a carbon sink because CO2 has an affinity for adsorbing to coal that is greater than methane's, and will effectively displace the methane molecules from the adsorption sites within the micropore structure of the coal. The advantages of injection into an abandoned coal mine versus an unmined coal bed are identified below:

- The large exposed surface area in the mine workings will facilitate the adsorption of the CO2;
- The mining process enhances fracturing of the coal and therefore the permeability to the flow of gas into the unmined perimeter as well as into the coal remaining as pillars;
- The water saturation of the coal near the mine workings will be low because the mining activity has lowered the pressure and drained the water, facilitating movement of gas into the coal; and
- The injection pressure will be low, so the cost of compression will be low.

The following parameters are significant in determining the CO2 storage capacity of a mine:

- The size of the mine workings;
- The thickness of the coal;
- The permeability of the coal;
- The pressure at which the mine can be operated as a storage vessel;
- The pressure at which methane is contained in the coal;
- The adsorption isotherm of the coal for CO2, methane, and nitrogen; and
- The distance to which the CO2 will penetrate beyond the outer walls of the mine.

COAL MINE VENTILATION AIR METHANE (VAM)

Ventilation air methane (VAM), that is, methane in the exhaust air from underground coal mines, is the largest source of coal mine methane, accounting for about 60% of the methane emitted from coal mines. Unfortunately, because of the low concentration of methane (0.3 - 1.5%) in ventilation air, it is difficult to use the methane beneficially.
However, oxidizing methane to CO2 and water reduces its global warming potential by 87%. A potential way to oxidize the methane is by use of a thermal flow reversal reactor (TFRR). Different technologies for gainfully utilizing VAM, at different stages of development, are described below.

Thermal Flow-Reversal Reactor

The figure below shows a schematic of the Thermal Flow-Reversal Reactor (TFRR). The equipment consists of a bed of silica gravel or ceramic heat-exchange medium with a set of electric heating elements in the center. The TFRR process employs the principle of regenerative heat exchange between a gas and a solid bed of heat exchange medium. To start the operation, electric heating elements preheat the middle of the bed to the temperature required to initiate methane oxidation (above 1,000°C [1,832°F]) or hotter. Ventilation air at ambient temperature enters and flows through the reactor in one direction and its temperature increases until oxidation of the methane takes place near the center of the bed. The hot products of oxidation continue through the bed, losing heat to the far side of the bed in the process. When the far side of the bed is sufficiently hot, the reactor automatically reverses the direction of ventilation airflow. The ventilation air now enters the far (hot) side of the bed, where it encounters auto-oxidation temperatures near the center of the bed and then oxidizes. The hot gases again transfer heat to the near (cold) side of the bed and exit the reactor. Then, the process again reverses. As USEPA (2000) points out, TFRR units are effectively employed worldwide to oxidize industrial VOC streams. Furthermore, the ability of MEGTEC's VOCSIDIZER to oxidize VAM has been demonstrated in the field.
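The 87% figure quoted above can be reproduced from the molecular weights and the 100-year global warming potential of methane. The sketch below assumes a GWP of 21 for CH4 (the IPCC Second Assessment Report value; the text does not say which GWP it uses):

```python
# Rough check of the ~87% reduction in global warming potential obtained
# by oxidizing methane to CO2. Assumes a 100-year GWP of 21 for CH4
# (IPCC SAR value) and, by definition, 1 for CO2.
GWP_CH4 = 21.0
GWP_CO2 = 1.0
M_CH4 = 16.0   # g/mol
M_CO2 = 44.0   # g/mol

# Burning 1 tonne of CH4 yields 44/16 = 2.75 tonnes of CO2.
co2_per_ch4 = M_CO2 / M_CH4

# CO2-equivalent before and after oxidation, per tonne of CH4.
before = 1.0 * GWP_CH4
after = co2_per_ch4 * GWP_CO2
reduction = 1.0 - after / before
print(f"GWP reduction: {reduction:.1%}")  # about 87%
```

With a more recent GWP value for methane the reduction would come out slightly different, which is why the text's 87% should be read as approximate.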
Catalytic Flow-Reversal Reactor

Catalytic flow-reversal reactors adapt the thermal flow-reversal technology described above by including a catalyst to reduce the auto-oxidation temperature of methane by several hundred degrees Celsius (to as low as 350°C [662°F]). CANMET has demonstrated this system in pilot plants and is now in the process of licensing Neill and Gunter (Nova Scotia) Ltd. of Dartmouth, Nova Scotia, to commercialize the design (under the name VAMOX). CANMET is also studying energy recovery options for profitable turbine electricity generation. Injecting a small amount of methane (gob gas or other source) increases the methane concentration in ventilation air to make the turbine function efficiently. Waste heat from the oxidizer is also used to pre-heat the compressed air before it enters the expansion side of the gas turbine.

Energy Conversion from a Flow-Reversal Reactor

There are several methods of converting the heat of oxidation from a flow reversal reactor to electric power, which is the most marketable form of energy in most locations. The two methods being studied by MEGTEC and CANMET are:

Use water as a working fluid. Pressurize the water and force it through an air-to-water heat exchanger in a section of the reactor that will provide a nondestructive temperature environment (below 800°C [1472°F]). Flash the hot pressurized water to steam and use the steam to drive a steam turbine generator. If a market for steam or hot water is available, send exhausted steam to that market. If none is available, condense the steam and return the water to the pump to repeat the process.

Use air as a working fluid. Pressurize ventilation air or ambient air and send it through an air-to-air heat exchanger that is embedded in a section of the reactor that stays below 800°C (1472°F). Direct the compressed hot air through a gas turbine-generator.
If gob gas is available, use it to raise the temperature of the working fluid to more nearly match the design temperature of the turbine inlet. Use the turbine exhaust for cogeneration, if thermal markets are available. Since affordable heat exchanger temperature limits are below those used in modern prime movers, efficiencies for both of the energy conversion strategies listed above will be fairly modest. The use of a gas turbine, the second method listed, is the energy conversion technology assumed for the cost estimates in this assessment. At a VAM concentration of 0.5 percent, one vendor expects an overall plant efficiency in the neighborhood of 17 percent after accounting for power allocated to drive the fans that force ventilation air through the reactor.

Other Technologies

Other technologies that may prove to be able to play a role in and enhance opportunities for VAM oxidation projects are briefly described below.

Concentrators

Volatile organic compound (VOC) concentrators are one possibly economical option that is under evaluation by USEPA for its application to VAM. Ventilation air typically contains about 0.5 percent methane concentration by volume, or 5,000 ppm. Conceivably, a concentrator might be capable of increasing the methane concentration in ventilation air flows to about 20 percent. This highly reduced gas volume with a higher concentration of methane might serve beneficially as a fuel in a gas turbine, reciprocating engine, etc. The fluid bed concentrator consists of a series of perforated plates or trays supporting the adsorbent medium (activated carbon beads). The process exhaust stream enters from the bottom, passing upward through the adsorption trays, fluidizing the adsorbent medium to enhance capture of organic compounds. The adsorbent medium, which is now heavier because of the adsorbed organic material, falls to the bottom of the adsorber section and is fed to the desorber.
The desorber increases the temperature of the medium, causing it to release the concentrated organic material into a low volume, inert gas stream.

Lean-Fuel Gas Turbines

A number of engineering teams are striving to modify selected gas turbine models to operate directly on VAM or on VAM that has been enhanced with more concentrated fuels, including concentrated VAM (see "Concentrators" section above) or gob gas. These efforts include:

Carbureted gas turbine. A carbureted gas turbine (CGT) is a gas turbine in which the fuel enters as a homogeneous mixture via the air inlet to an aspirated turbine. It requires a fuel/air mixture of 1.6 percent by volume, so most VAM sources would require enrichment. Combustion takes place in an external combustor where the reaction is at a lower temperature (1200°C [2192°F]) than for a normal turbine, thus eliminating any NOx emissions.

Lean-fueled turbine with catalytic combustor. The CCGT technology being developed oxidizes VAM in conjunction with a catalyst. The turbine compresses a very lean fuel/air mixture and combusts it in a catalytic combustor. The catalyst allows the methane to ignite at a lower, more easily achieved temperature.

Lean-fuel microturbine. Ingersoll-Rand Energy Systems is developing a microturbine that is planned to operate on a methane-in-air mixture of less than 1 percent. The microturbine is rated at 70 kW and consists of a generator, gasifier turbine, combustor, recuperator, power turbine, and generator. The system is enclosed in a sound-attenuating enclosure and can be located indoors or outdoors. Ingersoll-Rand recently introduced a 250 kW microturbine to the power industry. Additional R&D effort is required to complete the system design on the 70 kW unit and to adapt the 250 kW unit to run in a lean-fuel mode. Ingersoll-Rand is seeking funding to further pursue this market.

Lean-fueled catalytic microturbine.
Two US companies, FlexEnergy and Capstone Turbine Corporation, are jointly developing a line of microturbines, starting at 30 kW, that will operate on a methane-in-air mixture of 1.3 percent. Each unit's components fit inside a compact container that requires no field assembly. The single moving part, rotating on an air bearing, is a shaft on which are mounted the compressor and the turbine expander. Other components include: a recuperator that preheats the VAM mixture, a catalytic combustion chamber with low-temperature ignition, a generator, and a generator cooling section. To better serve the VAM market, FlexEnergy is investigating designs that will reduce the required VAM concentration to below 1.0 percent and increase unit sizes to over 100 kW.

Hybrid coal and VAM-fueled gas turbine. CSIRO is also developing an innovative system to oxidize and generate electricity with VAM in combination with waste coal. CSIRO is constructing a 1.2-MW pilot plant that cofires waste coal and VAM in a rotary kiln, captures the heat in a high-temperature air-to-air exchanger, and uses the clean, hot air to power a gas turbine. Depending on site needs and economic conditions, VAM can provide from about 15 to over 80 percent (assuming a VAM mixture of 1.0 percent) of the system's fuel needs, while waste coal provides the remainder.

VAM Used as an Ancillary Fuel

While the primary focus of this assessment is on strategies that oxidize major fractions of global VAM emissions, a brief mention of technologies that use VAM only as an ancillary or supplemental fuel is in order. Such technologies rely on a primary fuel other than VAM and are able to accept VAM as all or part of their combustion air to replace a small fraction of the primary fuel.
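Several of the turbine options described above require enriching ventilation air before use, for example to the 1.6 percent fuel/air mixture quoted for the carbureted gas turbine. The sketch below shows the mixing arithmetic, assuming for simplicity that the enrichment gas is pure methane (real gob gas is typically much less than 100 percent CH4, so actual volumes would be larger):

```python
# Methane needed to enrich ventilation air from fraction c0 to target ct.
# Solving (c0*V + m) / (V + m) = ct for the added methane volume m,
# assuming the enrichment gas is pure CH4 (a simplifying assumption).

def enrichment_volume(v_air, c0, ct):
    """Volume of pure CH4 to add to v_air at fraction c0 to reach fraction ct."""
    return v_air * (ct - c0) / (1.0 - ct)

# Raise 100 m3 of ventilation air from 0.5% to the CGT's 1.6% requirement.
m = enrichment_volume(100.0, 0.005, 0.016)
print(f"Add {m:.2f} m3 CH4 per 100 m3 of ventilation air")
```

The result, a little over 1 m3 of methane per 100 m3 of air, illustrates why even modest gob gas flows can make a large ventilation stream usable in these turbines.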
The largest example of ancillary VAM use occurred at the Appin Colliery in Australia, where 54 one-MW Caterpillar engines used mine ventilation air containing VAM as combustion air. Similarly, the Australian utility, Powercoal, is installing a system to use VAM as combustion air for a large coal-fired steam power plant. A working example of this application is shown below:

Supplemental Fuel Example: Appin Colliery, Australia
- Installed in 1995
- 54 x 1 MW IC engines produce power from gob gas
- VAM used as feed air, supplies 7% of energy

Underground Coal Gasification

In comparison with conventional coal mining and a modern steam power plant, UCG with combined cycle power generation offers the overall environmental advantages of:

- Lower particulate emissions, noise and visual impact on the surface
- Less water used (this is important in many of the mining areas in China)
- Lower risk of surface water pollution
- Reduced methane emissions from coal mining
- No dirt handling and disposal at mine sites
- No coal washing and fines disposal at mine sites
- No ash handling and disposal at power station sites
- Less SO2 and NOx
- Lower energy consumption as less materials and product transport
- Less heavy surface transport
- Smaller land area occupied
- Fewer liabilities after mine abandonment.

Additional benefits of the UCG power generation approach are:

- Lower occupational health and safety risks (fewer miners underground)
- Lower capital and operating costs compared with conventional systems
- Flexibility of access to the mineral
- Larger coal resource exploitable.

There are coal reserves deep underground in the State of Gujarat. The 'in-situ' north Gujarat reserves, which are estimated at 63 billion tonnes, occur at a depth of 800 to 1700 metres, which is beyond the limits of conventional methods of mining in India. If this resource is exploited on a large scale by using the latest technologies of UCG, it could generate gas equivalent to 200,000 BCM.
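It is worth noting the gas yield implied by the Gujarat figures above. Dividing the quoted 200,000 BCM by the 63 billion tonnes of coal in place gives roughly 3,200 cubic metres of gas per tonne, a figure the reader can check directly:

```python
# Gas yield implied by the Gujarat UCG figures quoted above:
# 63 billion tonnes of coal generating gas equivalent to 200,000 BCM.
coal_tonnes = 63e9
gas_m3 = 200_000 * 1e9   # 200,000 billion cubic metres (BCM)
yield_m3_per_tonne = gas_m3 / coal_tonnes
print(f"Implied yield: {yield_m3_per_tonne:.0f} m3 of gas per tonne of coal")
```

This is an implied average for low-calorific UCG syngas, not a measured value, and assumes the entire 63 billion tonne resource could be gasified.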
Integrated Gasification Combined Cycle

The Integrated coal Gasification Combined Cycle (IGCC) power plant is the most environmentally friendly coal-fired power generation technology. Most importantly, coal gasification offers the immediate opportunity to generate power with near zero greenhouse gas emissions and the pathway to a future hydrogen economy.

Description

Coal gasification is the process of converting coal to a gaseous fuel through partial oxidation. The coal is fed into a high-temperature pressurized container along with steam and a limited amount of oxygen to produce a gas. The gas is known as synthesis gas or syngas and mainly consists of carbon monoxide and hydrogen. The gas is cooled and undesirable components, such as carbon dioxide and sulphur, are removed. The gas can be used as a fuel or further processed and concentrated into a chemical or liquid fuel. Integrated gasification combined-cycle (IGCC) systems combine a coal gasification unit with a gas fired combined cycle power generation unit. The first stage is the coal gasification process as mentioned above. The second stage takes the cleaned gas and burns it in a conventional gas turbine to produce electrical energy, and the hot exhaust gas is recovered and used to boil water, creating steam for a steam turbine which also produces electrical energy. In typical plants, about 65% of the electrical energy is produced by the gas turbine and 35% by the steam turbine.

In general the advantages of IGCC are:

- It can achieve up to 50% thermal efficiency. This is a higher efficiency compared to conventional coal power plants, meaning less coal is consumed to produce the same amount of energy, resulting in lower rates of carbon dioxide (CO2) emissions.
- It produces about half the volume of solid wastes as a conventional coal power plant.
- It uses 20-50% less water compared to a conventional coal power station.
- It can utilise a variety of fuels, like heavy oils, petroleum cokes, and coals.
- Up to 100% of the carbon dioxide can be captured from IGCC, making the technology suitable for carbon dioxide storage. Carbon capture is easier and costs less than capture from a pulverised coal plant.
- A minimum of 95% of the sulphur is removed, and this exceeds the performance of most advanced coal-fired generating units currently installed.
- Nitrogen oxides (NOx) emissions are below 50 ppm. This is lower than many of today's most advanced coal-fired generating units.
- The syngas produced from a gasifier unit can be burned in a gas turbine for electricity generation, or used as a fuel in other applications, such as hydrogen-powered fuel cell vehicles.

Coal to Liquid Technology

Once coal is gasified and converted to a mixture of CO + H2, the synthesis gas can be converted to liquids through the Fischer-Tropsch reaction. This aspect has been fully covered in our article of August 2005, "GTL taking on to markets". The Executive Summary of the same is reproduced below:

GTL taking on to markets: Executive Summary

Ever increasing consumption of fossil fuel and petroleum products has been a matter of concern for the country, for the huge out-go of foreign exchange on the one hand and increasing emissions causing environmental hazards on the other. The current annual import bill of crude oil in terms of foreign exchange is around Rs. 60,400 crores. Diesel is mainly consumed for transport; road transport eats up almost 75% while the Railways account for the rest. Oil provides energy for 95% of transportation and the demand for transport fuel continues to rise. The requirement of Motor Spirit is expected to grow from a little over 7 MMT in 2001-02 to over 10 MMT in 2006-07 and 12.848 MMT in 2011-12, and that of diesel (HSD) from 39.815 MMT in 2001-02 to 52.324 MMT in 2006-07 and just over 66 MMT in 2011-12.
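The diesel demand projections above imply compound annual growth rates that can be computed directly from the figures quoted:

```python
# Implied compound annual growth rates (CAGR) of HSD (diesel) demand,
# computed from the projections quoted above: 39.815, 52.324 and 66 MMT
# over the two five-year spans 2001-02 to 2006-07 and 2006-07 to 2011-12.

def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1.0 / years) - 1.0

first_span = cagr(39.815, 52.324, 5)
second_span = cagr(52.324, 66.0, 5)
print(f"2001-02 to 2006-07: {first_span:.1%} per year")
print(f"2006-07 to 2011-12: {second_span:.1%} per year")
```

The projections thus assume diesel demand growth of roughly 5 to 6 percent per year, moderating slightly in the second five-year span.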
The capitalization and infrastructure associated with diesel amount to hundreds of billions of dollars, and it is safe to say that diesel will remain the fuel of choice for some time to come. However, biodiesel's contribution could be substantial and well timed in providing an option which will help meet the environmental and strategic concerns of the country, while allowing the financial realities of infrastructural investments in diesel technology to be compensated. The same logic holds good for GTL-Diesel, which can not only provide a source of environmentally compliant fuel but also help avoid capital expenditure on setting up additional refining capacity and product upgradation schemes. To add to it, if GTL is produced by use of indigenous resources like CTL and BTL, it would do the yeoman service of giving the Indian energy basket a semblance of energy independence.

One of the hottest trends in the global petroleum industry in 1997 involved a technology that is three fourths of a century old. Economic conversion of natural gas to synthetic fuels, one of the "Holy Grails" of the energy industry for decades, took startling steps in 1997. For the first time since the discovery of the Fischer-Tropsch synthesis process in 1923, gas-to-liquids conversion processes may be competitive with conventional petroleum products on the world market. And the technology doesn't require an oil price of $30-40/bbl, as was the case with the failed synthetic fuels projects of the late 1970s and early 1980s. The oil price at present is close to touching $70/bbl and is not likely to return to previous lows for the foreseeable future. The GTL industry is poised for a major expansion based in Qatar, but also in Nigeria and Australia. The expansion is being funded by the major oil companies, in some cases in tandem with synthetic fuel companies and national oil companies.
The projected expansion of the industry is based on favourable market conditions in addition to advances in technology. High oil and natural gas prices, declining capital investment costs, and improvements in technology that allow large scale production facilities are important factors in the industry's expansion. India is slated to be a fast growth economy, with predictions that by 2050 it will be the third largest economy in the world. However, the achievement of the envisioned growth is subject to a number of enabling factors being in the right place. Energy is certainly one of the most important prime movers of the economy. Any disturbance in the availability of energy, in terms of either reliability or economics, can jam the wheels of the juggernaut of the economy. India will have to look at alternative energy with a greater urgency, and GTL/CTL/BTL certainly merits being one of them.

Small Sized GTL Plants

Most world-class GTL technology is in large plants associated with gas fields of 5-500 trillion cubic feet (Tcf). However, it is essential to find a cost effective solution for smaller GTL plants to monetise flared gas, associated gas, or coal based or biomass based production. M/s Syntroleum, Rentech and Synfuels International are working in this direction and are willing to license the technology.

F-T Conversion of Coal (CTL)

The main difference between processes for producing F-T liquids from coal compared to production from natural gas is in the syngas production step. The reforming step is replaced by a pressurized oxygen-blown gasifier when using coal.

F-T Conversion of Biomass (BTL)

According to Choren, it takes 5 tons of biomass to produce 1 ton of sundiesel and 1 hectare generates 4 tons of sundiesel. A plant producing 13,000 tons per year would need the biomass of 50,000 ha. In recent years the German set-aside area amounted to roughly 1 million ha.
This could generate 4 million tons of sundiesel, which is about 13 percent of current diesel use in Germany.

Relevance of GTL to India

Despite two encouraging discoveries of natural gas in India and the import of LNG through two terminals on the West Coast, India will remain a supply driven market. The available gas would better be transmitted and distributed by pipelines and used for energy efficient applications in power generation, industry, commercial establishments, the residential sector and the transport sector. For Gas to Liquids (GTL), India will have to look outside India for gas resources. Maybe the gas equity abroad could provide a suitable opportunity. Special political efforts could pay off well if the landlocked countries like Kazakhstan, Turkmenistan and Russia were targeted for booking gas resources. However, Coal to Liquids (CTL) and Biosyngas to Liquids (BTL) are the possibilities where India can use its own resources. While talking about CTL, India has the possibility of exploiting a large resource via Underground Coal Gasification (UCG), which is given up as unreachable. There are coal reserves deep underground in the State of Gujarat. The 'in-situ' north Gujarat reserves, which are estimated at 63 billion tonnes, occur at a depth of 800 to 1700 metres, which is beyond the limits of conventional methods of mining in India. If this resource is exploited on a large scale by using the latest technologies of UCG, it could generate gas equivalent to 200,000 BCM. UCG has a special attribute in that it helps enhance the quantity of indigenous fossil fuels that were hitherto considered virtually non-existent or unexploitable.
The technology of UCG is proven elsewhere to some extent, but the use of the latest and most cost effective technology available today (CRIP) needs to be imported and tried out right away (the zero date starts with the commencement of the first trial) so that at least in the foreseeable future we will be able to increase the indigenous content in the energy basket of India. UCG has the virtue that it will cut down the syngas production step (costing 50% of a GTL project) from the GTL project. With the price of unmineable coal taken as zero, UCG could provide an economic option for syngas. CBM is another source of gas which may not be large enough in size to afford its transmission and distribution. Here the small sized GTL technology being offered by a number of companies may be of good use. To top it all, BTL would be from renewable resources and hence would never get exhausted. India has a large base of agriculture and forests where this technology (BTL), being used in Germany, could provide a significant degree of energy independence as envisioned by the President of India.

Syngas, or synthesis gas, produced from fossil fuels or biomass, is shaping up to become a crucial intermediate in emerging energy and fuel solutions. Syngas can be combined with emerging downstream technologies for gas-to-liquids (GTL) processes, methanol-to-olefins (MTO) conversion, coal-to-liquids (CTL) conversion and fuel cells. It also is used as a feedstock for high-value chemical processes such as ammonia, hydrogen and methanol. We need to procure several very large so-called stranded gas fields, immense fields of natural gas that have been discovered but are too far from developed gas markets to have any value (the landlocked countries like Kazakhstan, Turkmenistan and Russia were targeted for booking gas resources). The Indian government would have to forge agreements with a few friendly nations to purchase rights to produce this gas and convert it to liquid fuels on location.
We need to commence UCG trials without any further loss of time and import the best possible technology to ensure commercial success of large scale UCG projects, which can then feed the GTL projects to produce diesel, the most dominant fuel in India's energy basket. We may collaborate with countries like Germany and immediately import the BTL technology and set up a number of plants based on biomass waste which presently poses a disposal problem. Obviously, the economics of GTL/CTL/BTL would improve over a period of time. (This may not be much of a problem at the current price of crude oil.) Requisite policy support may be given for the growth of this industry, which alone can give India the energy independence that it badly needs.

Conclusions

India is highly dependent on imported oil, with a heavy drain on foreign exchange earnings. This trend is not likely to change very much in the foreseeable future. The finding of any large reserves of oil in the country is not in sight. Oil can be substituted with coal, but for certain applications it has to be converted into liquid form. Several experts have already recommended the coal gasification route for liquefaction of Indian coals. Considering several aspects, the option of coal to oil seems to be an unavoidable strategy for India. Natural gas is being used for power generation in the country, and rightly so, for accelerated growth of the power sector. There are plans for import of liquefied natural gas (LNG) and naphtha etc. for the power sector, mostly by independent power producers. This could be allowed as a short term measure as dictated by the market forces. But as a medium to long term measure, the natural gas and liquid fuels need to be replaced by coal gas. The medium to long term targets can be:

i) Replacing the natural gas with coal gas in the existing combined cycle power plants
ii) Establishment of advanced power generation technologies based on coal gas, i.e., fuel cells.
iii) Commercial plants for coal to oil and coal refinery
iv) Self-reliance and security in the energy sector
v) Substitution of exhaustible with renewable energy sources.

The present use of coal, mostly through direct combustion, is inefficient, with high levels of pollution. The efficiency cannot be improved much due to technological limitations, and it is very expensive to control the pollution. India is looking for alternate technologies that are more efficient, environmentally benign and economically attractive. Coal gasification fits into these requirements. IGCC technology is the best alternate option for power generation in India. The setting up of coal to oil conversion plants should not be evaluated purely from a commercial angle; security, self-reliance and conserving oil should merit serious consideration. Coal to oil technology can be considered on the same footing as atomic energy, which has paid dividends by bringing the country to self-reliant status. A concept of a coal refinery is mooted now and may be put into practice as a long term strategy to substitute the imported oil. To progress on the technology front, for the next 20-30 years our country should take pro-active leads on technologies like in-situ coal gasification, clean coal technologies and coal bed methane. It is expected that hydrogen and other hybrid technologies will assume a significant role in the transportation sector.

CBM/CMM/AMM/VAM

Methane capture and its utilization from coal mines is not being undertaken in India due to:

- Lack of the latest technology
- Lack of expertise and experience
- The pervasive perception that the commercial viability of exploitation and utilization of methane is doubtful.

Opportunities exist for the development of a range of clean coal energies from in-situ coal seams focused on CBM and UCG. The development and exploitation of these fuels is likely to provide environmental, safety and financial benefits.
Technology is being developed for using low methane concentrations in mine ventilation air, but it is unlikely to be commercially viable without support from government. VAM utilisation might also divert effort from improving gas capture and utilisation at working coal mines, which could have safety implications. Evaluation of coal properties, construction of adsorption isotherms, and study of the geological setting of coal basins should be an integral part of initial research efforts. It is desirable to work out the techno-economic viability of a project after R&D efforts are completed and before exploration and exploitation are taken up. The potential production rate of a virgin CBM reservoir can be under-estimated if care is not taken to protect seam permeability from damage during drilling and testing. 'Clean drilling' techniques, as practiced by leading operators in the UK, should therefore be introduced to ensure that CBM prospects are correctly characterized and optimum CBM production rates are attained.

UCG

The technology is highly relevant and very promising for India. Two sites in India, one in Rajasthan and another in Bengal-Bihar, initially appear to be suitable for application of underground coal gasification. Many more areas could be amenable. There are coal reserves deep underground in the State of Gujarat. The 'in-situ' north Gujarat reserves, which are estimated at 63 billion tonnes, occur at a depth of 800 to 1700 metres, which is beyond the limits of conventional methods of mining in India. If this resource is exploited on a large scale by using the latest technologies of UCG, it could generate gas equivalent to 200,000 BCM. UCG provides a radical approach to mine mouth power generation that enables the energy in coal to be released without the need to extract, process, transport and combust it. UCG virtually eliminates greenhouse gas emissions associated with coal extraction.
Hitherto, the potential net greenhouse gas emission mitigation benefits of UCG power generation compared with conventional coal extraction and coal-fired power plant have received little attention.

Integrated Development of Coal Reserves

Ongoing mining activities
1. Continue mining in zones already opened for mining. Recover methane gas from the vent and generate power.
2. Coal mine methane recovery.
3. For the zones which are yet to be opened, plan for full CBM recovery first and then open zones for mining.
4. UCG production from "unmineable zones".

New mines yet to be opened
1. First complete full CBM recovery
2. Open mines for coal production
3. UCG production from zones below mining range

Abandoned mines
1. Recover CBM from pillars and compartments by drilling wells
2. If coal seams are available below the "mineable zone", evaluate possibilities of UCG.

Coal seams below the "mineable zone"
1. CBM recovery
2. UCG project

It can thus be seen that coal in solid form can continue to support power generation and other applications, CBM can supplement natural gas requirements, and through the UCG route the syngas so generated can be used either for power generation (IGCC) or for chemicals or liquid petroleum fuel for the transportation network. Such integrated development as proposed would make it imperative that exploration for oil and gas and exploitation of coal resources be carried out in unison. For example, in the Cambay basin in Gujarat more than 4000 wells have been drilled for oil exploration/production. Many of these exploratory and development wells were dry and abandoned where coal seams were encountered. If petroleum/coal activities were to be performed under a "single" licence, UCG operations could have started much sooner. As can be seen, exploration/exploitation of oil/gas and coal are both technologically and geologically linked.

Policy initiatives for proposed development of coal fuels

Unified license for coal, CBM and UCG production along with CO2 sequestration.
Unified license for Petroleum, CBM and UCG in those basins where hydrocarbon (crude) licenses already exist, and granting of licenses for remaining blocks for exploration of CBM and UCG. A supervisory agency to co-ordinate and promote integrated development of all coal fuels.
https://www.techylib.com/el/view/fallenleafblackbeans/executive_summary_india_is_tipped_to_be_a_rapidly_growing_economy
Now that you have a continuously running, replicated application you can expose it on a network. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other’s ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.

This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete Jenkins CI application.

We did this in a previous example, but let’s do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification. This makes it accessible from any node in your cluster. Check the nodes the pod is running on:

$ kubectl create -f ./run-my-nginx.yaml
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE
my-nginx-3800858182-jr4a2   1/1     Running   0          13s   10.244.3.4   kubernetes-minion-905m
my-nginx-3800858182-kna2y   1/1     Running   0          13s   10.244.2.5   kubernetes-minion-ljyd

Check your pods’ IPs:

$ kubectl get pods -l run=my-nginx -o yaml | grep podIP
    podIP: 10.244.3.4
    podIP: 10.244.2.5

You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are not using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node’s interfaces, but the need for this is radically diminished because of the networking model. You can read more about how we achieve this if you’re curious. So we have pods running nginx in a flat, cluster wide, address space.
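The file ./run-my-nginx.yaml referenced above is not reproduced in this page. A minimal sketch of what such a manifest could contain (the exact apiVersion depends on your cluster version; the name, labels and replica count are taken from the command output above, the rest is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80   # the container port specification mentioned above
```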
In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves. You can create a Service for your 2 nginx replicas with kubectl expose:

$ kubectl expose deployment/my-nginx
service "my-nginx" exposed

This is equivalent to kubectl create -f the following yaml:

$ kubectl get svc my-nginx
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-nginx   10.0.162.149   <none>        80/TCP    21s

$ kubectl describe svc my-nginx
Name:              my-nginx
Namespace:         default
Labels:            run=my-nginx
Annotations:       <none>
Selector:          run=my-nginx
Type:              ClusterIP
IP:                10.0.162.149
Port:              <unset>  80/TCP
Endpoints:         10.244.2.5:80,10.244.3.4:80
Session Affinity:  None
Events:            <none>

$ kubectl get ep my-nginx
NAME       ENDPOINTS                     AGE
my-nginx   10.244.2.5:80,10.244.3.4:80   1m

You should now be able to curl the nginx Service on <CLUSTER-IP>:<PORT> from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire; if you’re curious about how this works you can read more about the service proxy.

Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the kube-dns cluster addon.

When a Pod runs on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods (your pod name will be different):

$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443

Note there’s no mention of your Service: the replicas were created before the Service existed. To fix this, kill the 2 pods and wait for the Deployment to recreate them, so they pick up the Service environment variables:

$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE
my-nginx-3800858182-e9ihh   1/1     Running   0          5s    10.244.2.7   kubernetes-minion-ljyd
my-nginx-3800858182-j4rm4   1/1     Running   0          5s    10.244.3.8   kubernetes-minion-905m

You may notice that the pods have different names, since they are killed and recreated.
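The Service yaml that kubectl expose is said to be equivalent to is not shown in this page. A sketch, inferred from the kubectl describe output above (selector, port, and default ClusterIP type):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```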
$ kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
KUBERNETES_SERVICE_PORT=443
MY_NGINX_SERVICE_HOST=10.0.162.149
KUBERNETES_SERVICE_HOST=10.0.0.1
MY_NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443

Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign DNS names to other Services. You can check if it’s running on your cluster:

$ kubectl get services kube-dns --namespace=kube-system
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.0.0.10    <none>        53/UDP,53/TCP   8m

If it isn’t running, you can enable it. The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let’s run another curl application to test this:

$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
Hit enter for command prompt

Then, hit enter and run nslookup my-nginx:

[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx
Server:    10.0.0.10
Address 1: 10.0.0.10
Name:      my-nginx
Address 1: 10.0.162.149

Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure.
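The gethostbyname remark can be made concrete with a few lines of Python. This is an illustrative sketch, not part of the original page; my-nginx is the Service created above, and the lookup only resolves to the cluster IP when run inside a cluster pod:

```python
import socket

def service_addr(name="my-nginx", port=80):
    """Resolve a Kubernetes Service name to an (ip, port) pair.

    Inside a cluster pod the name is resolved by the kube-dns addon;
    on any other machine this is just an ordinary DNS lookup.
    """
    ip = socket.gethostbyname(name)
    return (ip, port)
```

Inside the curl pod shown above, service_addr() would return the virtual cluster IP (10.0.162.149 in this walkthrough), the same address nslookup reports.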
For this, you will need:

- self signed certificates for https (unless you already have an identity certificate)
- an nginx server configured to use the certificates
- a secret that makes the certificates accessible to pods

You can acquire all these from the nginx https example, in short:

$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
secret "nginxsecret" created
$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-il9rc   kubernetes.io/service-account-token   1         1d
nginxsecret           Opaque                                2         1m

Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443).

Noteworthy points about the nginx-secure-app manifest:

$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml

At this point you can reach the nginx server from any node.

$ kubectl get pods -o yaml | grep -i podip
    podIP: 10.244.3.5

node $ curl -k
...
<h1>Welcome to nginx!</h1>

Note how we supplied the -k parameter to curl in the last step; this is because we don’t know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.

Let’s test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):

$ kubectl create -f ./curlpod.yaml
$ kubectl get pods -l app=curlpod
NAME                               READY     STATUS    RESTARTS   AGE
curl-deployment-1515033274-1410r   1/1       Running   0          1m
$ kubectl exec curl-deployment-1515033274-1410r -- curl --cacert /etc/nginx/ssl/nginx.crt
...
<title>Welcome to nginx!</title>
...

For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used NodePort, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
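The nginx-secure-app manifest itself is not reproduced in this page. The Service portion that exposes both ports could look roughly like this (the port numbers match the nodePort output shown later in this page; everything else is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
```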
$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5
  uid: 07191fb3-f61a-11e5-8ae5-42010af00002
spec:
  clusterIP: 10.0.162.149
  ports:
  - name: http
    nodePort: 31704
    port: 8080
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 32453
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    run: my-nginx

$ kubectl get nodes -o yaml | grep ExternalIP -C 1
    - address: 104.197.41.11
      type: ExternalIP
    allocatable:
--
    - address: 23.251.152.56
      type: ExternalIP
    allocatable:
...

$ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
...
<h1>Welcome to nginx!</h1>

Let’s now recreate the Service to use a cloud load balancer; just change the Type of my-nginx Service from NodePort to LoadBalancer:

$ kubectl edit svc my-nginx
$ kubectl get svc my-nginx
NAME       CLUSTER-IP     EXTERNAL-IP       PORT(S)                AGE
my-nginx   10.0.162.149   162.222.184.144   80/TCP,81/TCP,82/TCP   21s
$ curl https://<EXTERNAL-IP> -k
...
<title>Welcome to nginx!</title>

The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.

Note that on AWS, type LoadBalancer creates an ELB, which uses a (long) hostname, not an IP. It’s too long to fit in the standard kubectl get svc output, in fact, so you’ll need to do kubectl describe service my-nginx to see it. You’ll see something like this:

$ kubectl describe service my-nginx
...
LoadBalancer Ingress:   a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
...

Kubernetes also supports Federated Services, which can span multiple clusters and cloud providers, to provide increased availability, better fault tolerance and greater scalability for your services. See the Federated Services User Guide for further information.
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
New effects can be added to the library pretty easily. Let’s create an Effect for a new “optional” type. We need:

- a base type: we use a Maybe data type with 2 cases, Just and Nothing
- a method to send values of type A into Eff[R, A]
- an interpreter

import cats._, implicits._
import org.atnos.eff._
import all._
import org.atnos.eff.interpret._

sealed trait Maybe[A]
case class Just[A](a: A) extends Maybe[A]
case class Nothing[A]() extends Maybe[A]

object MaybeEffect {
  type _maybe[R] = Maybe |= R

  def just[R :_maybe, A](a: A): Eff[R, A] =
    send[Maybe, R, A](Just(a))

  def nothing[R :_maybe, A]: Eff[R, A] =
    send[Maybe, R, A](Nothing())

  def runMaybe[R, U, A, B](effect: Eff[R, A])(implicit m: Member.Aux[Maybe, R, U]): Eff[U, Option[A]] =
    recurse(effect)(new Recurser[Maybe, U, A, Option[A]] {
      def onPure(a: A) = Some(a)

      def onEffect[X](m: Maybe[X]): X Either Eff[U, Option[A]] = m match {
        case Just(x)   => Left(x)
        case Nothing() => Right(Eff.pure(None))
      }

      def onApplicative[X, T[_]: Traverse](ms: T[Maybe[X]]): T[X] Either Maybe[T[X]] =
        Right(ms.sequence)
    })

  implicit val applicativeMaybe: Applicative[Maybe] = new Applicative[Maybe] {
    def pure[A](a: A): Maybe[A] = Just(a)

    def ap[A, B](ff: Maybe[A => B])(fa: Maybe[A]): Maybe[B] = (fa, ff) match {
      case (Just(a), Just(f)) => Just(f(a))
      case _                  => Nothing()
    }
  }
}

In the code above:

- the just and nothing methods use Eff.send to “send” values into a larger sum of effects Eff[R, A]
- runMaybe runs the Maybe effect by using interpret.recurse and a Recurser to translate Maybe values into Option values

When you create an effect you can define a sealed trait and case classes to represent different possibilities for that effect.
For example, for interacting with a database you might create:

trait DatabaseEffect {
  case class Record(fields: List[String])

  sealed trait Db[A]
  case class Get[A](id: Int) extends Db[Record]
  case class Update[A](id: Int, record: Record) extends Db[Record]
}

It is recommended to create the Db types outside of the DatabaseEffect trait. Indeed, during Member implicit resolution, depending on how you import the Db effect type (if it is inherited from an object or not) you could experience compiler crashes :-(.

Interpreting a given effect generally means knowing what to do with a value of type M[X] where M is the effect. If the interpreter can “execute” the effect: produce logs (Writer), execute asynchronously (Future), check the value (Either),… then extract a value X, then we can call a continuation to get the next effect and interpret it as well.

The org.atnos.eff.interpret object offers several support traits and functions to write interpreters. In this example we use a Recurser which will be used to “extract” a value X from Maybe[X] or just give up with Eff.pure(None).

The runMaybe method needs an implicit Member.Aux[Maybe, R, U]. This must be read in the following way:

- Maybe must be a member of the effect stack R
- its removal from R should be the effect stack U

Then we can use this effect in a computation:

import org.atnos.eff._
import org.atnos.eff.eff._
import MaybeEffect._

val action: Eff[Fx.fx1[Maybe], Int] = for {
  a <- just(2)
  b <- just(3)
} yield a + b

run(runMaybe(action))
> Some(5)
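The interpretation pattern can also be mimicked outside Scala. As a cross-language illustration only (this is not part of the Eff API), here is the same control flow in Python: a program yields Maybe values, and the interpreter either extracts the value of a Just and resumes the program, or short-circuits to None on Nothing, mirroring the Recurser's onPure/onEffect cases:

```python
# Illustrative Python analogue of the Maybe effect and its interpreter.
# A "program" is a generator that yields Maybe values and returns a result.

class Just:
    def __init__(self, a):
        self.a = a

class Nothing:
    pass

def run_maybe(program):
    """Drive a generator yielding Maybe values; return the final result,
    or None as soon as any step yields Nothing (like Eff.pure(None))."""
    gen = program()
    try:
        effect = next(gen)
        while True:
            if isinstance(effect, Nothing):
                return None              # short-circuit: onEffect, Nothing case
            effect = gen.send(effect.a)  # extract the value and continue
    except StopIteration as stop:
        return stop.value                # onPure: the final pure result

def action():
    a = yield Just(2)
    b = yield Just(3)
    return a + b

def failing():
    a = yield Just(2)
    b = yield Nothing()
    return a + b
```

Here run_maybe(action) plays the role of run(runMaybe(action)): it returns 5, while run_maybe(failing) short-circuits to None.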
http://atnos-org.github.io/eff/org.atnos.site.CreateEffects.html
Hi all,

I have used lapack in C++ on Red Hat previously and I had no problem linking the libraries with:

g++ test.C -L/usr/lib/ -llapack -lblas -lm -lg2c

Now I am working on my laptop which has Ubuntu with the lapack provided by Synaptic, and the previous command didn't work anymore. It gave me:

/usr/bin/ld: cannot find -llapack
collect2: ld returned 1 exit status

So after a look at the /usr/lib folder, I changed the command to:

g++ test.C -L/usr/lib/liblapack.so.3 -lblas -lm -lg2c

But now I have another error:

/tmp/ccfwmBwU.o: In function `main':
test.C:(.text+0x151): undefined reference to `dgesv_'
collect2: ld returned 1 exit status

It seems that I don't export the function correctly. Does anyone have any idea about this?

Thanks
Huy-Nam

ps: here is the code

#include <iostream>

#define MAX 10

extern "C" {
  extern void dgesv_(int *, int *, double *, int *, int *, double *, int *, int *);
}

int main() {
  // Values needed for dgesv
  int n;
  int nrhs = 1;
  double a[MAX][MAX];
  double b[1][MAX];
  int lda = MAX;
  int ldb = MAX;
  int ipiv[MAX];
  int info;

  // Other values
  int i, j;

  // Read the values of the matrix
  std::cout << "Enter n \n";
  std::cin >> n;
  std::cout << "On each line type a row of the matrix A followed by one element of b:\n";
  for (i = 0; i < n; i++) {
    std::cout << "row " << i << " ";
    for (j = 0; j < n; j++) std::cin >> a[j][i];
    std::cin >> b[0][i];
  }

  // Solve the linear system
  dgesv_(&n, &nrhs, &a[0][0], &lda, ipiv, &b[0][0], &ldb, &info);

  // Check for success
  if (info == 0) {
    // Write the answer
    std::cout << "The answer is\n";
    for (i = 0; i < n; i++)
      std::cout << "b[" << i << "]\t" << b[0][i] << "\n";
  } else {
    // Write an error message
    std::cerr << "dgesv returned error " << info << "\n";
  }
  return info;
}
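For context on what the call in the post computes (this is background, not an answer to the linking question): dgesv solves the linear system A x = b by Gaussian elimination, i.e. LU factorization with partial pivoting. A plain-Python sketch of the same computation, for illustration only:

```python
# What LAPACK's dgesv computes, sketched in plain Python.
# For real work use LAPACK itself; this just shows the algorithm.

def solve(a, b):
    """Solve A x = b. `a` is a list of rows, `b` a list; both are copied."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the row with the largest |a[i][k]| to row k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```

For the 2x2 system 2x + y = 3, x + 3y = 5 this yields x = 0.8, y = 1.4, the same result dgesv would return in b.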
http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=168
This is your resource to discuss support topics with your peers, and learn from each other.

09-24-2012 05:52 AM

When I try using the QML Design Mode in the QNX Momentics IDE it always shows an error message. For example in the photobomber example from github:

Problem loading qml file: 24:1: module "bb.cascades.multimedia" is not installed

How do I get the design mode to work? Or is this a bug in the QNX Momentics IDE? I am using the latest version 10.0.06 beta 2 of the BlackBerry 10 Native SDK.

09-24-2012 06:25 AM

As far as I know, this is a bug due to be fixed in Beta 3. I get the same error for any library that isn't the standard cascades library. It'll still work, but just won't be parsable by the QML editor.

09-30-2012 03:02 PM - edited 09-30-2012 03:17 PM

This problem still exists in beta 3.

EDIT: Just noticed this related thread: Will look into it...

EDIT 2: Did not work.

10-10-2012 09:18 PM

I'm having the same problem with all non-standard ones as well. Is there any update on this? Thanks!

11-01-2012 06:17 AM

The QML previewer is quite limited at the moment in that it will not load any custom components or controls defined outside the vanilla Cascades library (Labels, Buttons etc). There is no easy way round this, although one suggestion is to create a 'design mode view' which uses a placeholder graphic rather than the custom control. Bear in mind, it's still in beta!

11-01-2012 10:49 AM

The problem is I'm not using any custom components or controls. Stock cascades sample from the website. Some work, some don't. I can live without it but it's just a little annoying, that's all.

11-08-2012 11:47 PM

I am having the same problem. It used to work fine, but I needed to reformat my computer and re-install the SDK. I restored the same workspace, but this time it won't view the DESIGN because of the module created from the cpp.
cpp file:

qmlRegisterType<QTimer> ("my.library", 1, 0, "QTimer");

qml:

import my.library 1.0

error:

Problem loading QML file: 5:1: module "my.library" is not installed

12-13-2012 04:20 PM

My QML error is encountered through this code declaration of the library:

CPP code: qmlRegisterType<QTimer> ("my.library", 1, 0, "QTimer");
QML code: import my.library 1.0

I think the cascades design mode expects a concrete library and not a dynamically created one.

12-13-2012 04:33 PM

12-13-2012 05:55 PM

The problem still persists on the gold SDK, but it doesn't bother me in the preview, in my case.
http://supportforums.blackberry.com/t5/Native-Development/Problem-loading-qml-file-module-is-not-installed/m-p/1971943/highlight/true