Try 3 layers of fiberglass drywall mesh tape. It works great, but it's not very attractive unless painted or covered over the tape with linen.
First, yeah, safety. If/when the bow fails, it's not good to have pieces flung about. Oak shrapnel is no fun.
Second is to reduce string follow. Oak is a good, solid wood, which helps resist compression in the belly (string side) of the bow. However, it's missing a natural backing. A good, elastic backing is needed to make sure the bow returns to its full upright position. Over time the belly will compress and you will see more and more arc in the bow when unstrung. This is string follow. A good, elastic backing reduces this.
Third, the right backing could also provide a little more speed to the bow. Again, we go back to the elastic nature of a good backing. The best modern backing is fiberglass. Put a couple of coats of fiberglass on the back and you'll have a very snappy bow.
The string is 2 inches shorter than the nock to nock length. Yes you can use twisting to adjust the length.
I just bought a cheap 67" string off the net.
Hours of finish sanding and rubbing down with steel wool, and I'm at the point where I'm ready to apply the finish. I believe I'm going to use tung oil and a rub down of raw beeswax melted into cheesecloth.
((Melt the wax in a double boiler (a pot with water in it, and a smaller pot that fits inside the first one; you can use a saucepan and a steel mixing bowl). Fold cheesecloth into a nice palm-sized square, 4 or maybe 5 layers thick, or about a 1/2 inch thick of folds, then soak the cheesecloth in the wax, saturating it as much as possible, then let it cool off and solidify. Now you've got an ice-block of reinforced beeswax you can use to rub down your bow. It's great because it's nontoxic (so if you need to sand again, you're not putting toxic wax dust into the air) and it comes off easily with some paint thinner or wood cleaner.))
This information is not quite correct. Applied properly, a rawhide backing will also improve a bow's performance (this is how many Asiatic bows were given their reflex profiles), as will a backing made up of any fiber cordage that has a decent amount of stretch and return (such as silk). It would be accurate to say, however, that sinew is probably the most appropriate backing for this type of bow in most situations and is the hardest one to screw up.
#include <unistd.h>

long sysconf(int name);
At compile time this is done by including <unistd.h> and/or <limits.h> and testing the value of certain macros.
At run time, one can ask for numerical values using the present function sysconf().
First, the POSIX.1 compatible values.
These values also exist, but may not be standard.
Some returned values may be huge; they are not suitable for allocating memory.
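These values can also be queried without writing C: Python exposes the same call as os.sysconf on POSIX systems, which makes for a quick sketch of the variables described above (names such as SC_PAGE_SIZE are standard sysconf variables):

```python
import os

# Query a few POSIX.1 values by name; os.sysconf wraps the C sysconf() call.
page_size = os.sysconf("SC_PAGE_SIZE")   # bytes per memory page
open_max = os.sysconf("SC_OPEN_MAX")     # max open files per process
clk_tck = os.sysconf("SC_CLK_TCK")       # clock ticks per second

print(page_size, open_max, clk_tck)

# As the man page warns, some values can be huge -- do not use them
# directly as allocation sizes without sanity checks.
```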
Determine the arguments of an operation binding at runtime
By Frank Nimphius on May 24, 2011
On the OTN forum, Navaneetha Krishnan answered a question about how to access an Operation Binding at runtime and determine the arguments it expects. The background of the question is that calling getParamsMap() on the Operation Binding returns a map that does not contain the keys of the method arguments.
The example code shown below accesses an OperationBinding that is defined in the PageDef file for a method. It then dynamically parses the binding definition for configured method argument names.
For example, a method to relocate employees may be defined on the ADF Business Components model and exposed in the Data Control panel for drag-and-drop UI binding in ADF. The method signature of the sample may be as shown below:
relocateEmployee(Number departmentId, Number employeeId, Boolean withSalaryRaise, Long salaryRaiseInPercent)
The easiest way to create a method like the one below is to drag and drop the method from the Data Controls panel to the page and have it rendered as a button or link. A binding dialog opens for you to define default values or reference objects that provide the argument values at runtime.
Double click the command item to create a managed bean method that contains generated code for the operation to invoke ("relocateEmployee" in the sample). The following managed bean method was built this way and then extended with the code posted by Navaneetha.
The code sample has a place holder line for where your application would look up the argument values to pass to the method. For example, if there is a HashMap available (e.g. exposed by a managed bean, or passed as an input parameter to a bounded task flow) then you could check if it contains values for the operation argument names you read from the OperationBinding.
import java.util.Map;

import oracle.adf.model.BindingContext;
import oracle.adf.model.OperationParameter;
import oracle.adf.model.binding.DCInvokeMethod;
import oracle.adf.model.binding.DCInvokeMethodDef;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;

…

//method called from a command item in ADF Faces
public String onRelocateEmployee() {
  //generated binding access code
  BindingContainer bindings = getBindings();
  //access the method binding in the PageDef file
  OperationBinding operationBinding =
      bindings.getOperationBinding("relocateEmployee");
  //to pass arguments to an operation binding, a Map is used. The Map is
  //retrieved by a call to getParamsMap on the operation binding
  Map operationParamsMap = operationBinding.getParamsMap();
  //get access to the operation definition
  DCInvokeMethod method = (DCInvokeMethod) operationBinding.getOperationInfo();
  if (method != null) {
    DCInvokeMethodDef methodDef = method.getDef();
    if (methodDef != null) {
      OperationParameter[] operationParameters = methodDef.getParameters();
      if (operationParameters != null) {
        for (OperationParameter operationParameter : operationParameters) {
          String argumentName = operationParameter.getName();
          Object argumentType = operationParameter.getTypeName();
          Object defaultValue = operationParameter.getValue();
          if (argumentName != null) {
            //placeholder: look up the value for this argument name
            Object value = <determine value for argumentName>;
            operationParamsMap.put(argumentName,
                value != null ? value : defaultValue);
          }
        }
      }
    }
  }
  //the operation arguments are provided. Now it is time to execute it
  Object result = operationBinding.execute();
  if (!operationBinding.getErrors().isEmpty()) {
    //TODO log error
    //TODO handle error
    return null;
  }
  return null;
}
The State Machine Framework
The State Machine framework provides classes for creating and executing state graphs. The concepts and notation are based on those from Harel's Statecharts, which is also the basis of UML state diagrams. The semantics of state machine execution are based on State Chart XML (SCXML).
Statecharts provide a graphical way of modeling how a system reacts to stimuli. This is done by defining the possible states that the system can be in, and how the system can move from one state to another (transitions between states). A key characteristic of event-driven systems (such as Qt applications) is that behavior often depends not only on the last or current event, but also on the events that preceded it. With statecharts, this information is easy to express.
The State Machine framework provides an API and execution model that can be used to effectively embed the elements and semantics of statecharts in Qt applications. The framework integrates tightly with Qt's meta-object system; for example, transitions between states can be triggered by signals, and states can be configured to set properties and invoke methods on QObjects. Qt's event system is used to drive the state machines.
The state graph in the State Machine framework is hierarchical. States can be nested inside of other states, and the current configuration of the state machine consists of the set of states which are currently active. All the states in a valid configuration of the state machine will have a common ancestor.
The following snippet shows the code needed to create such a state machine. First, we create the state machine and states:
QStateMachine machine;
QState *s1 = new QState();
QState *s2 = new QState();
QState *s3 = new QState();
Then, we create the transitions by using the QState::addTransition() function:
s1->addTransition(button, SIGNAL(clicked()), s2);
s2->addTransition(button, SIGNAL(clicked()), s3);
s3->addTransition(button, SIGNAL(clicked()), s1);
Next, we add the states to the machine and set the machine's initial state:
machine.addState(s1);
machine.addState(s2);
machine.addState(s3);
machine.setInitialState(s1);
Finally, we start the state machine:
machine.start();
The state machine executes asynchronously, i.e. it becomes part of your application's event loop.
Doing Useful Work on State Entry and Exit

The above state machine merely transitions from one state to another; it doesn't perform any operations. The QState::assignProperty() function can be used to have a state set a property of a QObject when the state is entered. In the following snippet, the value that should be assigned to a QLabel's text property is specified for each state:
s1->assignProperty(label, "text", "In state s1");
s2->assignProperty(label, "text", "In state s2");
s3->assignProperty(label, "text", "In state s3");
When any of the states is entered, the label's text will be changed accordingly.

The QState::entered() signal is emitted when the state is entered, and the QState::exited() signal is emitted when the state is exited. In the following snippet, the button's showMaximized() slot will be called when state s3 is entered, and its showMinimized() slot will be called when s3 is exited:
QObject::connect(s3, SIGNAL(entered()), button, SLOT(showMaximized()));
QObject::connect(s3, SIGNAL(exited()), button, SLOT(showMinimized()));
Imagine that we wanted to add an "interrupt" mechanism to the example discussed in the previous section; the user should be able to click a button to have the state machine perform some non-related task, after which the state machine should resume whatever it was doing before (i.e. return to the old state, which is one of s11, s12 and s13 in this case).
Such behavior can easily be modeled using history states. A history state (QHistoryState object) is a pseudo-state that represents the child state that the parent state was in the last time the parent state was exited.
A history state is created as a child of the state for which we wish to record the current child state; when the state machine detects the presence of such a state at runtime, it automatically records the current (real) child state when the parent state is exited. A transition to the history state is in fact a transition to the child state that the state machine had previously saved; the state machine automatically "forwards" the transition to the real child state.
The following diagram shows the state machine after the interrupt mechanism has been added.
The following code shows how it can be implemented; in this example we simply display a message box when s3 is entered, then immediately return to the previous child state of s1 via the history state.
QHistoryState *s1h = new QHistoryState(s1);

QState *s3 = new QState();
s3->assignProperty(label, "text", "In s3");
QMessageBox *mbox = new QMessageBox(mainWindow);
mbox->addButton(QMessageBox::Ok);
mbox->setText("Interrupted!");
mbox->setIcon(QMessageBox::Information);
QObject::connect(s3, SIGNAL(entered()), mbox, SLOT(exec()));
s3->addTransition(s1h);
machine.addState(s3);

s1->addTransition(interruptButton, SIGNAL(clicked()), s3);
Using Parallel States to Avoid a Combinatorial Explosion of States
Assume that you wanted to model a set of mutually exclusive properties of a car in a single state machine. Let's say the properties we are interested in are Clean vs Dirty, and Moving vs Not moving. It would take four mutually exclusive states and eight transitions to be able to represent and freely move between all possible combinations.
If we added a third property (say, Red vs Blue), the total number of states would double, to eight; and if we added a fourth property (say, Enclosed vs Convertible), the total number of states would double again, to 16.
Using parallel states, the total number of states and transitions grows linearly as we add more properties, instead of exponentially. Furthermore, states can be added to or removed from the parallel state without affecting any of their sibling states.
To create a parallel state group, pass QState::ParallelStates to the QState constructor.
QState *s1 = new QState(QState::ParallelStates);
// s11 and s12 will be entered in parallel
QState *s11 = new QState(s1);
QState *s12 = new QState(s1);
When a parallel state group is entered, all its child states will be simultaneously entered. Transitions within the individual child states operate normally. However, any of the child states may take a transition which exits the parent state. When this happens, the parent state and all of its child states are exited.
The parallelism in the State Machine framework follows an interleaved semantics. All parallel operations will be executed in a single, atomic step of the event processing, so no event can interrupt the parallel operations. However, events will still be processed sequentially, since the machine itself is single threaded. As an example: Consider the situation where there are two transitions that exit the same parallel state group, and their conditions become true simultaneously. In this case, the event that is processed last of the two will not have any effect, since the first event will already have caused the machine to exit from the parallel state.
Detecting that a Composite State has Finished

A child state can be final (a QFinalState object); when a final child state is entered, the parent state emits the QState::finished() signal. When s1's final state is entered, s1 will automatically emit finished(). We use a signal transition to cause this event to trigger a state change:

s1->addTransition(s1, SIGNAL(finished()), s2);
Using final states in composite states is useful when you want to hide the internal details of a composite state; i.e. the only thing the outside world should be able to do is enter the state, and get a notification when the state has completed its work. This is a very powerful abstraction and encapsulation mechanism when building complex (deeply nested) state machines. (In the above example, you could of course create a transition directly from s1's final state rather than relying on s1's finished() signal, but with the consequence that implementation details of s1 are exposed and depended on.)
A transition need not have a target state. A transition without a target can be triggered the same way as any other transition; the difference is that when a targetless transition is triggered, it doesn't cause any state changes. This allows you to react to a signal or event when your machine is in a certain state, without having to leave that state. Example:
QStateMachine machine;
QState *s1 = new QState(&machine);

QPushButton button;
QSignalTransition *trans = new QSignalTransition(&button, SIGNAL(clicked()));
s1->addTransition(trans);

QMessageBox msgBox;
msgBox.setText("The button was clicked; carry on.");
QObject::connect(trans, SIGNAL(triggered()), &msgBox, SLOT(exec()));

machine.setInitialState(s1);
The message box will be displayed each time the button is clicked, but the state machine will remain in its current state (s1). If the target state were explicitly set to s1, however, s1 would be exited and re-entered each time.

A transition that triggers only on a custom event type can be defined by subclassing QAbstractTransition and reimplementing eventTest(). (Here StringEvent is a custom QEvent subclass with type QEvent::User+1 and a QString value member.)

class StringTransition : public QAbstractTransition
{
public:
    StringTransition(const QString &value)
        : m_value(value) {}

protected:
    virtual bool eventTest(QEvent *e) const
    {
        if (e->type() != QEvent::Type(QEvent::User+1)) // StringEvent
            return false;
        StringEvent *se = static_cast<StringEvent*>(e);
        return (m_value == se->value);
    }

    virtual void onTransition(QEvent *) {}

private:
    QString m_value;
};
In the eventTest() reimplementation, we first check if the event type is the desired one; if so, we cast the event to a StringEvent and perform the string comparison.
The following is a statechart that uses the custom event and transition:
Here's what the implementation of the statechart looks like:
QStateMachine machine;
QState *s1 = new QState();
QState *s2 = new QState();
QFinalState *done = new QFinalState();

StringTransition *t1 = new StringTransition("Hello");
t1->setTargetState(s2);
s1->addTransition(t1);
StringTransition *t2 = new StringTransition("world");
t2->setTargetState(done);
s2->addTransition(t2);

machine.addState(s1);
machine.addState(s2);
machine.addState(done);
machine.setInitialState(s1);
Once the machine is started, we can post events to it.
machine.postEvent(new StringEvent("Hello"));
machine.postEvent(new StringEvent("world"));
An event that is not handled by any relevant transition will be silently consumed by the state machine. It can be useful to group states and provide a default handling of such events; for example, as illustrated in the following statechart:
For deeply nested statecharts, you can add such "fallback" transitions at the level of granularity that's most appropriate.
Using Restore Policy To Automatically Restore Properties
In some state machines it can be useful to focus the attention on assigning properties in states, not on restoring them when the state is no longer active. If you know that a property should always be restored to its initial value when the machine enters a state that does not explicitly give the property a value, you can set the global restore policy to QStateMachine::RestoreProperties.
QStateMachine machine;
machine.setGlobalRestorePolicy(QStateMachine::RestoreProperties);
When this restore policy is set, the machine will automatically restore all properties. If it enters a state where a given property is not set, it will first search the hierarchy of ancestors to see if the property is defined there. If it is, the property will be restored to the value defined by the closest ancestor. If not, it will be restored to its initial value (i.e. the value of the property before any property assignments in states were executed.)
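The search-the-ancestors rule just described can be modeled as a simple upward walk over the state hierarchy. This is an illustrative sketch only (invented names, not the Qt implementation): the closest explicit assignment wins, otherwise the property falls back to its initial, pre-machine value.

```python
# Sketch of RestoreProperties resolution: walk from the current state up
# through its ancestors; the closest explicit assignment wins, otherwise
# fall back to the property's initial value.
def restored_value(state, prop, assignments, parents, initial):
    s = state
    while s is not None:
        if (s, prop) in assignments:
            return assignments[(s, prop)]
        s = parents.get(s)
    return initial[prop]

# Example hierarchy: s1 assigns 1.0; child s2 assigns 2.0; child s3
# assigns nothing and therefore inherits s1's value.
assignments = {("s1", "fooBar"): 1.0, ("s2", "fooBar"): 2.0}
parents = {"s1": None, "s2": "s1", "s3": "s1"}
initial = {"fooBar": 0.0}

print(restored_value("s2", "fooBar", assignments, parents, initial))  # 2.0
print(restored_value("s3", "fooBar", assignments, parents, initial))  # 1.0
```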
Take the following code:

QStateMachine machine;
machine.setGlobalRestorePolicy(QStateMachine::RestoreProperties);

QState *s1 = new QState();
s1->assignProperty(object, "fooBar", 1.0);
machine.addState(s1);
machine.setInitialState(s1);

QState *s2 = new QState();
machine.addState(s2);
Let's say the property fooBar is 0.0 when the machine starts. When the machine is in state s1, the property will be 1.0, since the state explicitly assigns this value to it. When the machine is in state s2, no value is explicitly defined for the property, so it will implicitly be restored to 0.0.
If we are using nested states, the parent defines a value for the property which is inherited by all descendants that do not explicitly assign a value to the property:

machine.setGlobalRestorePolicy(QStateMachine::RestoreProperties);

QState *s1 = new QState();
s1->assignProperty(object, "fooBar", 1.0);

QState *s2 = new QState(s1);
s2->assignProperty(object, "fooBar", 2.0);
s1->setInitialState(s2);

QState *s3 = new QState(s1);
Here s1 has two children: s2 and s3. When s2 is entered, the property fooBar will have the value 2.0, since s2 explicitly defines this value. When s3 is entered, no value is explicitly defined, so the property will be restored to 1.0, the value defined by the closest ancestor, s1.

Animating Property Assignments

Animations can be associated with a transition so that the property assignments of the target state take effect gradually rather than immediately:

QState *s1 = new QState();
QState *s2 = new QState();
s2->assignProperty(button, "geometry", QRectF(0, 0, 50, 50));
QSignalTransition *transition = s1->addTransition(button, SIGNAL(clicked()), s2);
transition->addAnimation(new QPropertyAnimation(button, "geometry"));
Adding an animation for the property in question means that the property assignment will no longer take immediate effect when the state has been entered. Instead, the animation will start playing when the state has been entered and smoothly animate the property assignment. Since we do not set the start value or end value of the animation, these will be set implicitly. The start value of the animation will be the property's current value when the animation starts, and the end value will be set based on the property assignments defined for the state.
If the global restore policy of the state machine is set to QStateMachine::RestoreProperties, it is possible to also add animations for the property restorations.
Detecting That All Properties Have Been Set In A State
When animations are used to assign properties, a state no longer defines the exact values that a property will have when the machine is in the given state. While the animation is running, the property can potentially have any value, depending on the animation.
In some cases, it can be useful to be able to detect when the property has actually been assigned the value defined by a state.
Say we have the following code:

QState *s1 = new QState();
QState *s2 = new QState();
s2->assignProperty(button, "geometry", QRectF(0, 0, 50, 50));
connect(s2, SIGNAL(entered()), messageBox, SLOT(exec()));
s1->addTransition(button, SIGNAL(clicked()), s2);
When button is clicked, the machine will transition into state s2, which will set the geometry of the button, and then pop up a message box to alert the user that the geometry has been changed.
In the normal case, where animations are not used, this will operate as expected. However, if an animation for the geometry of button is set on the transition between s1 and s2, the animation will be started when s2 is entered, but the geometry property will not actually reach its defined value before the animation is finished running. In this case, the message box will pop up before the geometry of the button has actually been set.
To ensure that the message box does not pop up until the geometry actually reaches its final value, we can use the state's propertiesAssigned() signal. The propertiesAssigned() signal will be emitted when the property is assigned its final value, whether this is done immediately or after the animation has finished playing:

QState *s1 = new QState();
QState *s2 = new QState();
s2->assignProperty(button, "geometry", QRectF(0, 0, 50, 50));
connect(s2, SIGNAL(propertiesAssigned()), messageBox, SLOT(exec()));
s1->addTransition(button, SIGNAL(clicked()), s2);
If a state has property assignments, and the transition into the state has animations for the properties, the state can potentially be exited before the properties have been assigned to the values defined by the state. This is true in particular when there are transitions out from the state that do not depend on the propertiesAssigned() signal, as described in the previous section.
The State Machine API guarantees that a property assigned by the state machine either:
- Has a value explicitly assigned to the property.
- Is currently being animated into a value explicitly assigned to the property.
When a state is exited prior to the animation finishing, the behavior of the state machine depends on the target state of the transition. If the target state explicitly assigns a value to the property, no additional action will be taken. The property will be assigned the value defined by the target state.
If the target state does not assign any value to the property, there are two options: By default, the property will be assigned the value defined by the state it is leaving (the value it would have been assigned if the animation had been permitted to finish playing). If a global restore policy is set, however, this will take precedence, and the property will be restored as usual.
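The decision rule in the two paragraphs above can be encoded as a tiny function (an illustrative model, not Qt API): the target state's own assignment wins; otherwise a global restore policy takes precedence; otherwise the property gets the value the exited state would have reached.

```python
# Toy encoding of "what value does an animated property end up with when
# its state is exited before the animation finishes?"
def final_value(prop, target_assignments, source_value,
                global_restore_policy, restore_target):
    if prop in target_assignments:
        return target_assignments[prop]   # target state's assignment wins
    if global_restore_policy:
        return restore_target             # restore policy takes precedence
    return source_value                   # else: the exited state's value

print(final_value("geometry", {"geometry": "target"}, "src", True, "restored"))
print(final_value("geometry", {}, "src", False, "restored"))
print(final_value("geometry", {}, "src", True, "restored"))
```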
Default Animations
As described earlier, you can add animations to transitions to make sure property assignments in the target state are animated. If you want a specific animation to be used for a given property regardless of which transition is taken, you can add it as a default animation to the state machine. This is particularly useful when the properties assigned (or restored) by specific states are not known when the machine is constructed.
QState *s1 = new QState();
QState *s2 = new QState();
s2->assignProperty(object, "fooBar", 2.0);
s1->addTransition(s2);

QStateMachine machine;
machine.setInitialState(s1);
machine.addDefaultAnimation(new QPropertyAnimation(object, "fooBar"));
When the machine is in state s2, the machine will play the default animation for the property fooBar since this property is assigned by s2.
Note that animations explicitly set on transitions will take precedence over any default animation for the given property.
Coverage: Qt library 4.7, 4.8, 5.0
My notes from today's sessions are below.
BizTalk Goes Mobile : Collecting Physical World Events from Mobile Devices
I have admittedly spent virtually no time looking at the BizTalk RFID bits, but working for a pharma company, there are plenty of opportunities to introduce supply chain optimization that both increase efficiency and better ensure patient safety.
- You have the “systems world” where things are described (how many items exist, attributes), but there is the “real world” where physical things actually exist
- Can’t find products even though you know they are in the store somewhere
- Retailers having to close their stores to “do inventory” because they don’t know what they actually have
- Trends
- 10 percent of patients given wrong medication
- 13 percent of US orders have wrong item or quantity
- RFID
- Provide real time visibility into physical world assets
- Put unique identifier on every object
- E.g. tag on device in box that syncs with receipt so can know if object returned in a box actually matches the product ordered (prevent fraud)
- Real time observation system for physical world
- Everything that moves can be tracked
- BizTalk RFID Server
- Collects edge events
- Mobile piece runs on mobile devices and feeds the server
- Manage and monitor devices
- Out of the box event handlers for SQL, BRE, web services
- Direct integration with BizTalk to leverage adapters, orchestration, etc
- Extendible driver model for developers
- Clients support “store and forward” model
- Supply Chain Demonstration
- Connected RFID reader to WinMo phone
- Doesn’t have to couple code to a given device; device agnostic
- Scan part and sees all details
- Instead of starting with paperwork and trying to find parts, started with parts themselves
- Execute checklist process with questions that I can answer and even take pictures and attach
- RFID Mobile
- Lightweight application platform for mobile devices
- Enables rapid hardware agnostic RFID and Barcode mobile application development
- Enables generation of software events from mobile devices (events do NOT have to be RFID events)
- Questions:
- How receive events and process?
- Create “DeviceConnection” object and pass in module name indicating what the source type is
- Register your handler on the NotificationEvent
- Open the connection
- Process the event in the handler
- How send them through BizTalk?
- Intermittent connectivity scenario supported
- Create RfidServerConnector object
- Initialize it
- Call post operation with the array of events
- How get those events from new source?
- Inherit DeviceProvider interface and extend the PhysicalDeviceProxy class
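The "store and forward" model mentioned in the Q&A is a general pattern worth spelling out: buffer events locally while disconnected, then flush the batch when connectivity returns. A language-agnostic sketch in Python (the real API is .NET — RfidServerConnector and friends — so every name below is illustrative only):

```python
# Generic store-and-forward sketch: buffer events while offline, flush to
# the server when a connection is available. Not the BizTalk RFID API.
class StoreAndForward:
    def __init__(self, post):
        self.post = post      # callable that sends a batch to the server
        self.buffer = []      # events held while disconnected

    def on_event(self, event):
        self.buffer.append(event)

    def flush(self):
        try:
            self.post(self.buffer)
            self.buffer = []  # clear only after a successful post
        except ConnectionError:
            pass              # stay buffered; retry on the next flush

sent = []
sf = StoreAndForward(post=lambda batch: sent.extend(batch))
sf.on_event({"tag": "A1"})
sf.on_event({"tag": "B2"})
sf.flush()
print(sent)  # [{'tag': 'A1'}, {'tag': 'B2'}]
```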
Low Latency Data and Event Processing with Microsoft SQL Server
I eagerly anticipated this session to see how much forethought Microsoft put into their first CEP offering. It was fairly sparsely attended, which surprised me a bit. That, and the folks who ended up leaving early, apparently means that most people here are unaware of this problem/solution space and don't immediately grasp the value. Key takeaway: this stuff has a fairly rich set of capabilities so far and looks well thought out from a "guts" perspective. There's definitely a lot of work left to do, and some things will probably have to change, but I was pretty impressed. We'll see if Charles agrees, based on my hodgepodge of notes ;)
- They define CEP as the continuous and incremental processing of event streams from multiple sources, based on declarative query and pattern specifications, with near-zero latency.
- Unlike DB app with ad hoc queries that have range of latency from seconds/hours/days and hundreds of events per second, with event driven apps, have continuous standing queries with latency measured in milliseconds (or less) and up to tens of thousands of events per second (or more).
- As latency requirements become stricter, or data rates reach a certain point, then most cost effective solution is not standard database application
- This is their sweet spot for CEP scenarios
- Example CEP scenarios …
- Manufacturing (sensor on plant floor, react through device controllers, aggregate data, 10,000 events per second); act on patterns detected by sensors such as product quality
- Web analytics, instrument server to capture click-stream data and determine online customer behavior
- Financial services listening to data feeds like news or stocks and use that data to run queries looking for interesting patterns that find opps to buy or sell stock; need super low latency to respond and 100,000 events per second
- Power orgs catch energy consumption and watch for outages and try to apply smart grids for energy allocation
- How do these scenarios work?
- Instrument the assets for data acquisitions and load the data into an operational data store
- Also feed the event processing engine where threshold queries, event correlation and pattern queries are run over the data stream
- Enrich data from data streams for more static repositories
- With all that in place, can do visualization of trends with KPI monitoring, do automated anomaly detection, real-time customer segmentation, algorithmic training and proactive condition-based maintenance (e.g. can tell BEFORE a piece of equipment actually fails)
- Cycle: monitor, manage, mine
- General industry trends (data acquisition costs are negligible, storage cost is cheap, processing cost is non-negligible, data loading costs can be significant)
- CEP advantages (process data incrementally while in flight, avoid loading while still doing processing you want, seamless querying for monitoring, managing and mining
- The Microsoft Solution
- Has a circular process where data is captured, evaluated against rules, and allows for process improvement in those rules
- Deployment alternatives
- Deploy at multiple places on different scale
- Can deploy close to data sources (edges)
- In mid tier where consolidate data sources
- At data center where historical archive, mining and large scale correlation happens
- CEP Platform from Microsoft
- Series of input adapters which accept events from devices, web servers, event stores and databases; standing queries existing in the CEP engine and also can access any static reference data here; have output adapters for event targets such as pagers and monitoring devices, KPI dashboards, SharePoint UIs, event stores and databases
- VS 2008 are where event driven apps are written
- So from source, through CEP engine, into event targets
- Can use SDK to write additional adapters for input or output adapters
- Capture in domain format of source and transform to canonical format that the engine understands
- All queries receive data stream as input, and generate data stream as output
- Queries can be written in LINQ
- Events
- Events have different temporal characteristics; may be point in time events, interval events with fixed duration or interval events with initially known duration
- Rich payloads capture all properties of an event
- Event types
- Use the .NET type system
- Events are structured and can have multiple fields
- Each field is strongly typed using .NET framework type
- CEP engine adds metadata to capture temporal characteristics
- Event SOURCES populate time stamp fields
- Event streams
- Stream is a possibly infinite series of events
- Inserting new events
- Changes to event durations
- Stream characteristics
- Event/data arrival patterns
- Steady rate with end of stream indication (e.g. files, tables)
- Intermittent, random or burst (e.g. retail scanners, web)
- Out of order events
- CEP engine does the heavy lifting when dealing with out-of-order events
- Event stream adapters
- Design time spec of adapter
- For event type and source/sink
- Methods to handle event and stream behavior
- Properties to indicate adapter features to engine
- Types of events, stream properties, payload spec
- Core CEP query engine
- Hosts “standing queries”
- Queries are composable
- Query results are computed incrementally
- Query instance management (submit, start, stop, runtime stats)
- Typical CEP queries
- Complex type describes event properties
- Grouping, calculation, aggregation
- Multiple sources monitored by same query
- Check for absence of data
- CEP query features …
- Calculations
- Correlation of streams (JOIN)
- Check for absence (EXISTS)
- Selection of events from stream (FILTER)
- Aggregation (SUM, COUNT)
- Ranking (TOP-K)
- Hopping or sliding windows
- Can add NEW domain-specific operators
- Can do replay of historical data
- LINQ examples shown (JOIN, FILTER)
from e1 in MyStream1
join e2 in MyStream2
    on e1.ID equals e2.ID
where e1.f2 == "foo"
select new { e1.f1, e2.f4 }
- Extensibility
- Domain specific operators, functions, aggregates
- Code written in .NET and deployed as assembly
- Query operations and LINQ queries can refer to user defined things
- Dev Experience
- VS.NET as IDE
- Apps written in C#
- Queries in LINQ
- Demos
- Listening on power consumption events from laptop with lots of samples per second
- Think he said that this client app was hosting the CEP engine in process (vs. using a server instance)
- Uses Microsoft.ComplexEventProcessing namespace (assembly?)
- Shows taking initial stream of just getting all events, and instead refining (through Intellisense!) query to set a HoppingWindow attribute of 1 second. He then aggregates on top of that to get average of the stream every second.
- This all done (end to end) with 5 total statements of code
- Now took that code, and replaced other aggregation with new one that does grouping by ID and then can aggregate by each group separately
- Showed tool with visualized query; you can step through the execution of that query as it previously ran; can set a breakpoint with a condition (event payload value) and run the tool until that scenario is reached
- Can filter each operator and only see results that match that query filter
- Can right click and do “root cause analysis” to see only events that potentially contributed to the anomaly result
- Same query can be bound to different data sources as long as they deliver the required event type
- If new version of upstream device became available, could deploy new adapter version and bind it to new equipment
- Query calls out what data type it requires
- No changes to query necessary for reuse if all data sources of same type
- Query binding is a configuration step (no VS.NET)
- Recap: Event driven apps are fundamentally different from traditional database apps because queries are continuous, consume and produce streams and compute results incrementally
- Deployment scenarios
- Custom CEP app dev that uses instance of engine to put app on top of it
- Embed CEP in app for ISVs to deliver to customers
- CEP engine is part of appliance embedded in device
- Put CEP engine into pipeline that populates data warehouse
- Demo from OSIsoft
- Power consumption data goes through CEP query to scrub data and reduce rate before feeding their PI System where then another CEP query run to do complex aggregation/correlation before data is visualized in a UI
- Have their own input adapters that take data from servers, run through queries, and use own output adapters to feed PI System
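The windowed-aggregation demo described in these notes can be sketched in StreamInsight-style LINQ. Everything below (the PowerReading type, its Watts and DeviceId properties, and the inputStream variable) is assumed for illustration and was not shown in the session:

```csharp
// Sketch of the demo's hopping-window average (assumed names throughout).
// inputStream is presumed to be a CepStream<PowerReading> obtained from an
// input adapter via the Microsoft.ComplexEventProcessing API.
var averaged =
    from win in inputStream.HoppingWindow(
        TimeSpan.FromSeconds(1),    // window size
        TimeSpan.FromSeconds(1))    // hop size: non-overlapping 1-second windows
    select new { AvgWatts = win.Avg(e => e.Watts) };

// The grouped variant mentioned above: a separate average per device ID.
var perDevice =
    from e in inputStream
    group e by e.DeviceId into g
    from win in g.HoppingWindow(TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1))
    select new { DeviceId = g.Key, AvgWatts = win.Avg(e => e.Watts) };
```

This matches the "5 total statements" feel of the demo: the standing query is just a LINQ expression over the stream, and refinement means composing another operator on top.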
I have lots of questions after this session. I’m not fully grasping the role of the database (if at all). Didn’t show much specifically around the full lifecycle (rules, results, knowledge, rule improvement), so I’d like to see what my tooling is for this. Doesn’t look like much business tooling is part of the current solution plan which might hinder doing any business driven process improvement. Liked the LINQ way of querying, and I could see someone writing a business friendly DSL on top.
All in all, this will be fun to play with once it’s available. When is that? SQL team tells us that we’ll have a TAP in July 2009 with product availability targeted for 1H 2010.
marcoseirio
May 18, 2009
Thanks a lot for these notes! Really interesting to see in what direction MS spins this.
Hosting .NET Windows Forms Controls in IE
By Thiru Thangarathinam. In this article, we will see how to create Windows Forms controls and deploy them within Internet Explorer. While using Windows Forms controls from within IE, we will also demonstrate how to provide a rich user experience on the client side by invoking a remote Web service from the Windows Forms control. Along the way, we will also see how to take advantage of the .NET security model to provide a seamless, secured execution environment for our control.
One of the great features of .NET is the seamless integration it provides with Internet Explorer. For example, we can activate a Windows Forms control from IE without even prompting the user. This is accomplished without having to do any registration while still utilizing all the features of Code Access Security provided by the .NET CLR.
When you build Windows Forms controls, you have all the features provided by the Windows Forms class hierarchy. For example, you can use Windows Forms control validation techniques to perform extensive validation on the input data entered by the user. Similarly, you can even invoke a remote Web service from your forms control. By using all of these techniques, you can create rich, powerful, dynamic state-of-the-art applications using the .NET platform.
Implementation
In this section, we will see how to create a simple Windows Forms control and host it in Internet Explorer. Five steps, summarized in the list at the end of this section, are required to activate a Windows Forms control within IE.
In this step, we will create a simple Windows Forms control. This control basically displays a "Hello World" message to the users. We will start by creating a new Visual C# Windows Control Library project named HelloWorldControl as shown in the following screenshot.
private void btnClick_Click(object sender, System.EventArgs e)
{
    lblDisplayMessage.Text = "Hello World";
}
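For context, a hand-written version of the whole control might look roughly like the following. In practice the Visual Studio designer generates most of this code, and the layout (positions, button text) shown here is an assumption:

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

namespace HelloWorldControl
{
    // Minimal sketch of the HelloWorldCtl user control.
    public class HelloWorldCtl : UserControl
    {
        private Button btnClick;
        private Label lblDisplayMessage;

        public HelloWorldCtl()
        {
            btnClick = new Button();
            btnClick.Text = "Click Here";
            btnClick.Location = new Point(10, 10);
            btnClick.Click += new EventHandler(btnClick_Click);

            lblDisplayMessage = new Label();
            lblDisplayMessage.Location = new Point(10, 50);

            Controls.Add(btnClick);
            Controls.Add(lblDisplayMessage);
        }

        private void btnClick_Click(object sender, System.EventArgs e)
        {
            lblDisplayMessage.Text = "Hello World";
        }
    }
}
```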
Now that we have created the control, let us compile the project and create the assembly.
In this step, we will create an HTML document and insert an object tag that is used to activate the Windows Forms control. The HTML page looks like the following.
<html>
<body>
<p>Hello World Control<br><br>
<object id="HelloWorldControl1"
classid="http:HelloWorldControl.dll#HelloWorldControl.HelloWorldCtl"
height="500" width="500" VIEWASTEXT>
</object>
<br><br>
</body>
</html>
In the classid attribute of the object tag, we specify the path to the control library assembly and the fully qualified name of the control. This fully qualified name of the control includes the namespace as well as the name of the control class. As you can see from the above code, the assembly and the fully qualified name of the control are separated by # sign. The combination of these two parameters serves as the unique identifier to identify the control. It is also possible to write client side script against the control since it is identified by the unique id named HelloWorldControl1.
Now that we have created the HTML page, let us create a new virtual directory named HelloWorldControlHost and add both the control (HelloWorldControl.dll) and the HTML document (HelloWorld.htm). While configuring the virtual directory, it is important to set the execution permissions on the virtual directory to Scripts. The control will not be properly activated if the execution permissions are set to Scripts & Executables. You can verify this by opening up the Properties window for your virtual directory as shown below.
If your control is from an intranet site, it will execute correctly. But if you want to run the control from an Internet site, you then need to configure Internet Explorer or alter security policy to allow it to run. You can do this by identifying your hosting page as belonging to the Trusted zone. To set your site as part of the Trusted zone, from IE choose Tools->Options->Security->Trusted Sites and then add your site to the list and then click OK. Next time, when you browse to that page, it will execute properly, since you have already set the Internet permissions.
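As an alternative to the Trusted Sites zone, code access security policy itself can be adjusted with the .NET Framework's caspol.exe tool. The group label, URL, and group name below are illustrative assumptions; run this from a Visual Studio .NET command prompt with administrative rights:

```shell
# Grant FullTrust to code downloaded from the control's virtual directory.
# "1" is typically the label of the machine-level All_Code root group;
# verify your group labels with "caspol -m -listgroups" before adding.
caspol -m -ag 1 -url http://yourserver/HelloWorldControlHost/* FullTrust -n "WinFormsControlHost"
```

Granting FullTrust to a URL is a broad grant; in production you would normally scope this to a named permission set that allows only what the control needs.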
To run the control, just navigate to the HTML page that hosts the control from the browser. In the displayed HTML page, if you click on the Click Here command button, the control displays a Hello World message as shown in the following screenshot.
- Create a Windows Forms control
- Create an HTML document with an object tag that identifies the Windows Forms control
- Configure the virtual directory for proper activation of the control
- Configure Code Access Permissions
- Run the control.
Accessing the Web Service from the Windows Forms Control
One of the main advantages of Windows Forms control is that it allows you to bring a rich user experience to the client machine. For example, you can access a Web service directly from the client machine and display the results to the user without even refreshing the page. To demonstrate this, we will first create a Web service and then show how to invoke the Web service from the Windows Forms control.
Creation of Web Service

To start with, we will create a Visual C# ASP.NET Web service named AuthorsWebService as shown below.
[WebMethod]
public DataSet GetAuthors()
{
    // Get the connection string from the configuration file
    string connString =
        System.Configuration.ConfigurationSettings.AppSettings["connectionString"];
    SqlConnection sqlConn = new SqlConnection(connString);
    DataSet dstAuthors = new DataSet("Authors");
    SqlDataAdapter adapter = new SqlDataAdapter("Select * from Authors", sqlConn);
    // Fill the DataSet with the results of the executed query
    adapter.Fill(dstAuthors, "Author");
    // Close and dispose the opened database connection
    sqlConn.Close();
    sqlConn.Dispose();
    // Return the Authors DataSet to the caller
    return dstAuthors;
}
The code for the GetAuthors method is straightforward. We start off by retrieving the connection string from the web.config file. The connection string is stored in the appSettings section of the web.config file.
<appSettings>
<add key="connectionString"
value="server=localhost;uid=sa;pwd=thiru;database=Pubs">
</add>
</appSettings>
Then we create an instance of SqlConnection object passing in the connection string as an argument. After that, we create an instance of the SqlDataAdapter object and supply the query to be executed and the SqlConnection object as arguments. Then we invoke the Fill method of SqlDataAdapter object to execute the query and fill the DataSet with the results. Finally, we release all the resources and return the DataSet to the callers of the Web service. Now that the Web service is created, you are ready to start creating the client application for the Web service.
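As a side note, disposal of the connection and adapter can be made exception-safe with using blocks. This variant is a sketch of that idea, not code from the article:

```csharp
// A more defensive variant of GetAuthors: 'using' guarantees that the
// connection and adapter are disposed even if Fill throws.
[WebMethod]
public DataSet GetAuthors()
{
    string connString =
        System.Configuration.ConfigurationSettings.AppSettings["connectionString"];
    DataSet dstAuthors = new DataSet("Authors");
    using (SqlConnection sqlConn = new SqlConnection(connString))
    using (SqlDataAdapter adapter =
        new SqlDataAdapter("Select * from Authors", sqlConn))
    {
        // Fill opens and closes the connection automatically when it is
        // handed a closed connection, so no explicit Open/Close is needed.
        adapter.Fill(dstAuthors, "Author");
    }
    return dstAuthors;
}
```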
In our case, since we want to invoke the Web service from the Windows Forms control, we will create a new Visual C# Control Library project named AuthorsWebServiceClientControl and add a Web reference to AuthorsWebService to generate the proxy class. Now that the proxy is created, we are ready to add code to invoke the Web service. We do this in the click event of the command button that we added earlier.
private void btnClick_Click(object sender, System.EventArgs e)
{
    this.Cursor = Cursors.WaitCursor;
    AuthorsWebServiceProxy.AuthorsService authorsSvc =
        new AuthorsWebServiceProxy.AuthorsService();
    gridAuthors.DataSource = authorsSvc.GetAuthors();
    this.Cursor = Cursors.Default;
}
In the above lines of code, we create an instance of the Web service proxy class and then invoke the GetAuthors method. We assign the DataSet that is returned from the Web service to the DataSource property of the DataGrid control. Now compile the project to create an assembly, which we can deploy to the virtual directory.
Creation of HTML page and Virtual Directory

In this step, we will create an HTML page that hosts the AuthorsWebServiceClientControl that we created earlier. The code for the HTML page looks like the following.
<html>
<body>
<p>Authors Display Control<br><br>
<object id="AuthorsControl1"
classid="http:AuthorsWebServiceClientControl.dll#AuthorsWebServiceClientControl.AuthorsControl"
height="500" width="500" VIEWASTEXT>
</object>
<br><br>
</body>
</html>
Now that we have created the HTML page, we need to create a virtual directory that can be used to host the HTML page as well as the control. Once the virtual directory is created, copy over the HTML page and the control to the physical folder that is mapped to the virtual directory. Now you can test the control by navigating to the HTML page that we created earlier. In the HTML page, you will see a command button that is part of the forms control. If you click on the command button, it will invoke the Web service from the client browser and display the results of the Web service in a DataGrid. The output from the HTML page looks like the following.
Debugging the Windows Forms Control
To debug the control, you need to perform the following steps.
- Open up the browser and make a request to the HTML page.
- Bring up Visual Studio.NET and choose Tools->Debug Processes from the menu to display the following dialog box.
- In the Processes dialog box, select IEXPLORE.EXE and click the Attach button. When you click Attach, it brings up the following dialog box in which you are prompted to choose the program types that you want to debug. In this dialog box, make sure Common Language Runtime is checked in the list, and then click OK.
- Clicking on OK in the above dialog box brings you back to the Processes dialog box again where you just need to click Close.
- Open up the user control file AuthorsWebServiceClientControl.cs from the File->Open->File menu, and set breakpoints in the click event of the command button.
- Go back to the browser and click on the command button. When you do that, you will automatically hit the breakpoint that you have already set up in your control. Once you hit the breakpoint, you can then debug your code using all of the features of Visual Studio .NET. This is shown in the following screenshot.
Code Access Permissions and Windows Forms Controls
As we have already discussed, when the control executes in IE, it utilizes the code access permissions provided by the .NET runtime. To understand how forms controls running in IE work with the code access security provided by the .NET runtime, let us go ahead and add a few lines of code to our Authors forms control and create a new event log source. After modification, the load event of the control looks like the following.
private void AuthorsControl_Load(object sender, System.EventArgs e)
{
    if (!EventLog.SourceExists("TestSource"))
        EventLog.CreateEventSource("TestSource", "TestLog");
    else
    {
        EventLog.DeleteEventSource("TestSource");
        EventLog.CreateEventSource("TestSource", "TestLog");
    }
}
In the above lines of code, we check for the existence of an EventLog source named TestSource. If the event source does not exist, we create one. Otherwise we delete the existing event source and create a new event source from scratch. As you might expect, performing this kind of operation requires more privileges, and the controls downloaded from the Internet should not be allowed to perform this kind of operation. To validate this, copy the output of the control to the virtual directory. After copying the output of the control to the virtual directory, if you navigate to the HTML page that hosts the control in the browser, you will see the following dialog.
The above dialog box shows that the code in our control is restricted by the code access security of the .NET runtime.
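A defensive pattern worth considering (an addition, not from the article) is to catch the SecurityException in the load event so the control degrades gracefully in a restricted zone instead of failing to load:

```csharp
using System.Diagnostics;
using System.Security;
using System.Windows.Forms;

// Inside the control class; the source/log names match the article's example.
private void AuthorsControl_Load(object sender, System.EventArgs e)
{
    try
    {
        if (!EventLog.SourceExists("TestSource"))
            EventLog.CreateEventSource("TestSource", "TestLog");
    }
    catch (SecurityException)
    {
        // Code access security denied event log access (e.g., Internet zone).
        // Fail soft with a message rather than letting the control load fail.
        MessageBox.Show("Event log access is not permitted in this security zone.");
    }
}
```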
Putting It All Together
However, before using Windows Forms controls in IE, you need to weigh the benefits against the constraints. The main benefit is the rich, dynamic client-side experience demonstrated in this article; the chief constraints are that the client must have the .NET Framework installed and that the control runs under the restrictions of code access security.
Conclusion
In this article, we have discussed how to host Windows Forms controls in IE and demonstrated the steps to be followed for debugging the control. We have also seen how to utilize .NET Code Access Security to configure what the control can do when running within Internet Explorer.
In article <slrn8reja0.2bt.scarblac-spamtrap at flits104-37.flits.rug.nl>, scarblac-rt at pino.selwerd.nl wrote: > Not repeating the expression surely makes it more readable. And splitting > it up into two commands make it two lines (more reading), gives a variable > in your namespace (so add a del, or keep readers of your code wondering > if it's used later on), is slower, and there's this thing called augmented > assignment which does exactly the same thing. > > I think += is great, have been missing it a long time. I was using languages other than C long before C appeared, and (and I didn't do anything with C for another 15 years or so). The augmented assignments are not "natural" for me. I know they are for others, and I don't mind Python having them (I would not have campaigned for them, and didn't). Fortunately, they are optional (as they are in C), so I can use them only when I want to. --John -- John W. Baxter Port Ludlow, WA USA jwbnews at scandaroon.com | https://mail.python.org/pipermail/python-list/2000-September/029857.html | CC-MAIN-2014-15 | en | refinedweb |
This chapter provides an overview of the Process Sales Order Fulfillment business flow and discusses order priorities and solution assumptions and constraints.
This chapter includes the following sections:
Section 8.1, "Process Sales Order Fulfillment Overview"
Section 8.2, "Supporting Order Priorities"
Section 8.3, "Solution Assumptions and Constraints"
This business flow is enabled using the Oracle Communications Order to Cash Siebel Customer Relationship Management (Siebel CRM) and Oracle Order and Service Management (Oracle OSM) pre-built integration options.
The process integration for order lifecycle management (OLM) provides the following integration flow, which enables the Process Sales Order Fulfillment business flow.
Submitting orders from Siebel CRM to Oracle OSM Central Order Management (COM) for order fulfillment processing.
A typical sales call center flow goes like this: a customer contacts a customer service representative (CSR) to place orders for new services or to make changes to existing services. The CSR must first determine whether the caller is an existing customer. If the customer is new, the CSR must set up an account for the customer before placing an order. If a customer is calling to change an existing service, the CSR can query the asset representing the customer's existing service and then use what is known in Siebel as asset-based ordering to modify or add to it. In this scenario, the CSR creates an order that references existing assets. When a CSR has captured an order, it is submitted for processing. Alternative sales channels follow a similar pattern.
In Siebel CRM, the submit order event enqueues the Siebel order message (Siebel order ABM or Application Business Message) in a Java Message Service (JMS) queue. After Siebel drops the message in the queue, the control is given back to the CSR, making the submit order event an asynchronous process. A JMS Consumer that listens to this queue, dequeues the message, and then invokes the Siebel Application Business Connector Service (ABCS).
Oracle OSM recognizes four kinds of customer orders:
New orders:
These are orders for new purchases or changes to delivered products. Products that have been delivered are known as customer assets.
Revision orders:
These are changed versions of orders that are still in fulfillment, also referred to as in-flight orders. You can submit revision orders to fulfillment while the revised order (also known as the base order) is in a fulfillment state that allows for order changes.
Follow-on orders:
These are orders that have a fulfillment completion dependency on other orders.
Future-dated orders:
These are orders that have a time-based dependency for the start of the fulfillment flow.
New orders include first-time purchases and changes to existing (asseted) service subscriptions and products. Siebel Order Capture captures new orders and submits them to Oracle OSM COM to deliver on the promises made to the customer.
Sales orders are primarily composed of two key parts: the order header and the order line. The order header includes attributes applicable to the customer and to all order lines. Order lines are composed of an action and a subject.
Order lines can include any combination of order line actions supported in Siebel CRM. Possible order line actions are:
Add
Update
Suspend
Resume
Move-Delete
Move-Add
Existing (no change is required)
Order lines can include a variety of subjects, including but not limited to simple product offerings, discounts (modeled as simple product offerings), bundled product offerings, promotional product offerings, and pricing event products (used with multi-event billing products).
The key function of the Oracle Application Integration Architecture (Oracle AIA) integration is to pass enough order header and order line attributes to facilitate order fulfillment and to establish the necessary cross-references.
Notice that an order in Siebel may be revised several times before it is submitted for fulfillment for the first time; all such revisions are internal to Siebel, each superseding the prior revision completely, and for Oracle OSM these do not count as revision orders.
The fulfillment of some services may take days and weeks, and some business-to-business (B2B) and infrastructure projects may take months to complete. During this period, customers change their minds and request changes to their orders, which then become revision orders in Siebel CRM. In many cases, continuing the base order when a revision is submitted is costly for the communications service provider (CSP), and sometimes the operation cannot be fully undone. For these reasons, support for revision orders provides the following benefits:
Enhances customer satisfaction by allowing customers to change their orders within an agreed-upon limit.
Reduces the costs associated with fulfilling unwanted goods and service requests and wasting system capacity, nonrecoverable resources, acquired stock, and so on.
Reduces human intervention to manually retrofit data records when recovery cannot be automated.
Revision orders are changes made to a previously submitted order. Siebel CRM allows users to revise an order line if the order line has not reached the point-of-no-return (PONR) or completed. A PONR is configured on the fulfillment flow of each product specification in Oracle OSM and is propagated to Siebel CRM to indicate that an order line cannot be revised beyond that point in time. Not all revisions are submitted to fulfillment; only submitted revisions factor into fulfillment.
To avoid problems associated with stale revisions (that is, revisions that do not progress in Siebel CRM and become out of sync with their underlying asset), Siebel allows only one pending revision for each order.
After a revision is submitted, Oracle OSM Order Change Management (Oracle OSM OCM) takes three actions:
Suspends the fulfillment flows associated with the revised order.
Computes the delta changes for each order line.
Leverages the metadata configured for the flow to devise a compensation plan for fulfillment activities that have occurred and that are affected by the revision. The compensation plan is woven into the fulfillment plan for the revision order, and fulfillment of the revision does not begin until compensation completes or another revision is submitted.
In Siebel CRM, for the sales order that is to be revised, a CSR navigates to the Sales Order screen, revises a base order, makes the required changes, and then submits the revision.
As mentioned previously with revision orders, the fulfillment of some services may take days and weeks, and some B2B and infrastructure projects may take months to complete. During this period, customers change their minds and request order changes that become revision orders in Siebel CRM if the subject order lines did not reach the PONR or otherwise become follow-on orders. In many cases, not taking an order pending the completion of in-flight orders is not acceptable; therefore, Siebel simulates the future state of in-flight orders and allows for the creation and submittal of follow-on orders that are nothing more than change orders based on the projected future state of a customer's assets.
Follow-on orders are change orders that involve a dependency on the future fulfillment of at least one other order line in an order that is currently in flight. The follow-on order line may change another in-flight order line that is beyond the hard PONR or that depends on the future asset state of that line, as through an explicit dependency established in Siebel CRM.
Follow-on orders are created and submitted to Oracle OSM immediately, and Oracle OSM provides for managing the fulfillment dependency between the follow-on order and other base orders. This responsibility is similar to the responsibility for determining the correct processing time for future-dated orders.
In Siebel CRM, a CSR navigates to the Sales Order screen (for the sales order that is supposed to undergo follow-on), and creates and submits the follow-on order.
After the follow-on order start-fulfillment dependencies are resolved, the follow-on order becomes like any other change order. It is also subject to revisions and other follow-on orders.
A variety of reasons require a CSP to take or place an order with a future-requested delivery date. Future-dated orders are submitted immediately to Oracle OSM when they are ready. Oracle OSM is responsible for computing the fulfillment start date-time.
When a CSR receives a request from the customer to submit an order on a future date, they set the Due Date attribute to the specified date before submitting the order.
For more information about handling current, past, future, and requested but not provided delivery date-time values, see the Oracle Communications Order and Service Management Cartridge Guide for Oracle Application Integration Architecture.
Avoid creating multiple future-dated orders against the same asset because they create a complex future asset state that is difficult for both the CSR and the customer to comprehend. We recommend that only a trained CSR be allowed to enter multiple future-dated orders against the same asset and only when required. When introducing an order line against the same asset with a Requested Delivery Date sooner than another created order, you must revise the latter to ensure that the order is based on an updated future state of the asset.
Order fulfillment priority is specified in Siebel CRM and honored by message queues, Oracle AIA, and Oracle OSM unless data integrity dictates a different processing sequence, such as with update sales orders from Oracle OSM to Siebel CRM.
Order priority affects the sequence in which orders are picked up from queues and processed in Oracle AIA and Oracle OSM. Orders with a higher priority take precedence over orders with a lower priority that have not yet started fulfillment.
Order priorities work as follows:
The submission process for orders is the same for new orders, revision orders, and follow-on orders. The CSR selects a priority for the order when they submit it.
As delivered, Siebel provides and maps these priority values:
The integration supports 10 priority values, 0-9, as dictated by JMS queuing technology. Implementers can extend Siebel to support priority values other than the four that are supported when delivered.
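To make the mapping concrete, here is a small sketch of spreading a few named order priorities across the 0-9 JMS priority range. The class name, method name, and the specific numeric values are illustrative assumptions, not the delivered product mapping.

```java
// Illustrative sketch only: mapping named order priorities onto the
// 0-9 JMS priority range. The numeric values chosen here are assumptions.
public class OrderPriorityMapper {

    // Returns a JMS priority (0-9) for a named order priority.
    public static int toJmsPriority(String orderPriority) {
        switch (orderPriority) {
            case "Urgent": return 9;
            case "High":   return 7;
            case "Medium": return 4;
            case "Low":    return 1;
            default:       return 4; // 4 is the JMS default message priority
        }
    }

    public static void main(String[] args) {
        System.out.println("Urgent -> " + toJmsPriority("Urgent")); // prints Urgent -> 9
    }
}
```

A real integration would set the computed value on the outbound message (for example via the JMS message priority) so that higher-priority orders are picked up from the queue first.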
These are the solution assumptions and constraints for this business flow.
Service points in Siebel are implemented as assets and are typically uploaded into Siebel from external sources. Ideally, service points are mastered in a common place and shared between Siebel CRM and Network Inventory (Service and Resource Inventory). The integration assumes that at least one of the following statements is true:
The determination of service point in Siebel CRM is irrelevant to Service and Resource Inventory.
The determination of service point in Siebel CRM is replicated in Service and Resource Inventory (for example, the same result is achieved).
The service point attribute value is unique and common across Siebel and Service and Resource Inventory, such that Service and Resource Inventory can use the value directly.
The service point attribute value is a cross-reference that is understood by Service and Resource Inventory; no Oracle AIA cross-reference exists for this attribute.
In Siebel CRM, order revisions are created as a copy of the previous revision and then changes are made to the revision. When created, the first order reflects the customer assets at the time. Revisions sometimes stay for a long period in Siebel CRM without submittal and may become stale if the customer assets change in the interim. The expectation for Siebel CRM is that it ensures that the revision order data is up to date with the customer assets at the time the order is submitted. Any customization of Siebel CRM or integration to a different CRM system must ensure that revision orders are brought up to date with the customer assets state before submitting the order to Oracle OSM.
Multiple future-dated orders require special care from the CSR to ensure that orders are submitted in the correct sequence and that new orders do not invalidate formerly submitted orders. We recommend that providers limit future orders to one per customer.
Follow-on orders, if submitted before base orders, are processed as base orders. CSRs must make sure they submit base orders first for the follow-on orders dependency on base orders to take effect in Oracle OSM.
Mixing future-dated, follow-on, and revision orders requires a well-trained CSR because some scenarios could produce unintended results. Ensure that:
You create follow-on orders only when base orders are past the PONR.
You create and submit revisions as soon as they are firm; when revisions are pending, you do not create follow-on orders before you discard pending (not submitted) revisions.
You can create future-dated orders against the same asset if you create them in chronological order.
Siebel CRM does not guarantee correct assets if follow-on orders are created before modified order lines reach the PONR. You should create follow-on orders only after modified order lines reach the PONR and any pending revisions are discarded.
Siebel CRM can capture revisions to order Due Date in Siebel CRM (Requested Delivery Date in Oracle AIA) and submit them to Oracle OSM.
Revising the requested delivery date for an order only affects Oracle OSM if the base order did not start fulfillment by the time the revision was received in Oracle OSM.
While in Siebel CRM, you can create an Oracle AIA follow-on order even before an order reaches the PONR. Oracle OSM only accepts follow-on orders when the base order is past the PONR.
Oracle OSM does not support revisions to base orders with follow-on orders.
For more information, see the Oracle Communications Order and Service Management Cartridge Guide for Oracle Application Integration Architecture.
This chapter describes the procedures you use to write and deploy Oracle9iAS Web Services that are implemented as Java classes.
This chapter covers the following topics:
Oracle9iAS Web Services can be implemented as any of the following:
This chapter shows sample code for writing Web Services implemented with Java classes and describes the difference between writing stateful and stateless Java Web Services.
Oracle9iAS supplies Servlets to access the Java classes which implement a Web Service. The Servlets handle requests generated by Web Services clients, run the Java methods that implement the Web Services and return results back to Web Services clients.
Writing Java class based Web Services involves building a Java class that includes one or more methods that a Web Services Servlet running under Oracle9iAS Web Services invokes when a Web Services client makes a request. A sample Java class based Web Service is supplied with Oracle9iAS Web Services in the directory
$ORACLE_HOME/j2ee/home/demo/web_services/java_services on UNIX or in
%ORACLE_HOME%\j2ee\home\demo\web_services\java_services on Windows.
Oracle9iAS Web Services supports stateful and stateless implementations for Java classes running as Web Services. For a stateful Java implementation, Oracle9iAS Web Services allows a single Java instance to serve the Web Service requests from an individual client.
For a stateless Java implementation, Oracle9iAS Web Services does not tie a Java instance to an individual client; any available instance may serve a given request.
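To see why the stateful/stateless distinction matters, consider a plain Java sketch (this is not an Oracle9iAS API; the class is hypothetical): a stateful service keeps per-client conversational data in instance fields, which is only safe when the same instance serves all of a client's requests.

```java
// Illustrative only: under the stateful model, one instance serves every
// request from a given client, so the counter below survives across that
// client's calls. A stateless service must not rely on instance fields
// this way, because each request may reach a different instance.
public class StatefulCounterExample {
    private int callCount = 0; // per-client conversational state

    public int nextCount() {   // each call sees the previous state
        return ++callCount;
    }

    public static void main(String[] args) {
        StatefulCounterExample svc = new StatefulCounterExample();
        System.out.println(svc.nextCount()); // prints 1
        System.out.println(svc.nextCount()); // prints 2
    }
}
```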
Create a Java Web Service by writing or supplying a Java class with methods that are deployed as a Web Service. In the sample supplied in the
java_services sample directory, the .ear file,
ws_example.ear contains the Web Service source, class, and configuration files. If you expand the .ear file, you can view the sample implementation class, StatelessExampleImpl:

public class StatelessExampleImpl {
   public StatelessExampleImpl() {
   }
   public String helloWorld(String param) {
      return "Hello World, " + param;
   }
}

Example 3-1 shows the public constructor StatelessExampleImpl. Table 4-1 lists the supported types for parameters and return values for Java methods that implement Web Services.
Oracle9iAS Web Services allows you to supply a Java interface that specifies exactly which methods to expose:

public interface StatelessExample {
   String helloWorld(String param);
}

Using an interface, only the methods with the specified method signatures are exposed when the Java class is prepared and deployed as a Web Service.
Use a Web Services interface for the following purposes:
When writing a Java class based Web Service, this step is optional. Any required support classes, packaged in a .jar file, could be added to simpleservice.jar and placed in WEB-INF/lib, or added to namex.jar in WEB-INF/lib (where namex is a file name).
To deploy a Java class as a Web Service you need to assemble a J2EE .ear file that includes the deployment descriptors for the Oracle9iAS Web Services Servlet and the Java class that supplies the Java implementation. A Web Service implemented using a Java class includes a .war file that provides configuration information for the Web Services Servlet running under Oracle9iAS Containers for J2EE (OC4J). This section describes the procedures you use to assemble the .ear file that contains a Java class to run as a Web Service.
This section covers the following topics:
The Oracle9iAS Web Services assembly tool,
WebServicesAssembler, assists in assembling Oracle9iAS Web Services. The Web Services assembly tool takes a configuration file which describes the location of the Java class and interface files and produces a J2EE .ear file that can be deployed under Oracle9iAS Web Services. This section describes how to assemble Oracle9iAS Web Services implemented as Java classes manually, without using
WebServicesAssembler.
To use a Java class as a Web Service, you need to add a
<servlet> entry and a corresponding
<servlet-mapping> entry in the
web.xml file for each Java class that is deployed as a Web Service. The resulting
web.xml file is assembled as part of a J2EE .war file that is included in the .ear file that defines the Web Service.
To modify
web.xml to support Web Services implemented as Java classes, perform the following steps:
To add Web Services based on Java classes you need to modify the
<servlet> tag in the
web.xml file. This supports using the Oracle9iAS Web Services Servlet to access the Java implementation for the Web Service. Table 3-1 describes the
<servlet> tag and the values to include in the tag to add a Web Service based on a Java class.
Example 3-5 shows a sample
<servlet> entry for Web Services implemented as a Java class running as a stateless Web Service. Example 3-6 shows a sample
<servlet> entry for a Web Service implemented as a Java class running as a stateful Web Service.
<servlet>
   <servlet-name>stateless java web service example</servlet-name>
   <servlet-class>oracle.j2ee.ws.StatelessJavaRpcWebService</servlet-class>
   <init-param>
      <param-name>class-name</param-name>
      <param-value>oracle.j2ee.ws_example.StatelessExampleImpl</param-value>
   </init-param>
   <init-param>
      <param-name>interface-name</param-name>
      <param-value>oracle.j2ee.ws_example.StatelessExample</param-value>
   </init-param>
</servlet>
<servlet>
   <servlet-name>stateful java web service example</servlet-name>
   <servlet-class>oracle.j2ee.ws.JavaRpcWebService</servlet-class>
   <init-param>
      <param-name>class-name</param-name>
      <param-value>oracle.j2ee.ws_example.StatefulExampleImpl</param-value>
   </init-param>
   <init-param>
      <param-name>interface-name</param-name>
      <param-value>oracle.j2ee.ws_example.StatefulExample</param-value>
   </init-param>
</servlet>
To add Web Services based on Java classes, you need to modify the
<servlet-mapping> tag in the
web.xml file. This tag specifies the URL for the Servlet that implements a Web Service.
Example 3-7 shows sample
<servlet-mapping> entries corresponding to the servlet entries shown in Example 3-5 and Example 3-6.
<servlet-mapping>
   <servlet-name>stateful java web service example</servlet-name>
   <url-pattern>/statefulTest</url-pattern>
</servlet-mapping>
<servlet-mapping>
   <servlet-name>stateless java web service example</servlet-name>
   <url-pattern>/statelessTest</url-pattern>
</servlet-mapping>
Web Services implemented with Java classes use a standard .war file to define J2EE Servlet configuration and deployment information. After modifying the
web.xml file, add the implementation classes and any required support classes or Jar files either under
WEB-INF/classes, or under
WEB-INF/lib (or in a classpath location available to OC4J).
To add Web Services based on Java classes, you need to include an
application.xml file and package the
application.xml and .war file containing the Java classes into a J2EE .ear file.
After creating the .ear file containing Java classes and the Web Services Servlet deployment descriptors, you can deploy the Web Service as you would any standard J2EE application stored in an .ear file (to run under OC4J).
Parameters and results sent between Web Service clients and a Web Service implementation go through the following steps:
Oracle9iAS Web Services supports the following encoding mechanisms:
org.w3c.dom.Element. When an
Element passes as a parameter to a Web Service, the server side Java implementation processes the
org.w3c.dom.Element. For return values sent from a Web Service, the Web Services client parses or processes the
org.w3c.dom.Element.
This article examines the implementation of upload and download functionality with progress indication (progress bar feature) using the Windows Communication Foundation. For this sample, you will need Visual Studio 2008.
The sample code consists of three projects bundled in a solution. A brief description of these projects follows. The attached sample code is available in C# and VB.NET (conversion to VB.NET was made by Lee Galloway of Project Time and Cost).
This is the main server project.
The File Server project includes FileTransferServiceContract.cs file, which contains the IFileTransferService interface. This interface describes the operations provided by our server. No actual work is done in this code file except in describing the operations provided. If you've worked with service-oriented applications before, you'll know that this job is important enough to spare a separate file for. Here are the two operations of our file transfer service:
Accepts a DownloadRequest instance that contains the name of the file to be downloaded by the client. It returns a RemoteFileInfo instance, defined in the same code file. RemoteFileInfo contains the name of the file to be downloaded, the file stream and the length of the file in bytes. This instance of the RemoteFileInfo class will be used by the client to download the file. You notice that filename and length are marked with the MessageHeader attribute in the RemoteFileInfo class. This is because when a message contract contains a stream, this can be the only body member of the contract.
Accepts an instance of the RemoteFileInfo message contract. This is the same as used in DownloadFile, only in this case the length property is not required.
[ServiceContract()]
public interface IFileTransferService
{
    [OperationContract()]
    void UploadFile(RemoteFileInfo request);

    [OperationContract]
    RemoteFileInfo DownloadFile(DownloadRequest request);
}

[MessageContract()]
public class RemoteFileInfo
{
    [MessageHeader(MustUnderstand = true)]
    public string FileName;

    [MessageHeader(MustUnderstand = true)]
    public long Length;

    [MessageBodyMember(Order = 1)]
    public System.IO.Stream FileByteStream;
}
<ServiceContract()> _
Public Interface IFileTransferService

    <OperationContract()> _
    Sub UploadFile(ByVal request As RemoteFileInfo)

    <OperationContract()> _
    Function DownloadFile(ByVal request As DownloadRequest) As RemoteFileInfo

End Interface

<MessageContract()> _
Public Class RemoteFileInfo
    Implements IDisposable

    <MessageHeader(MustUnderstand:=True)> _
    Public FileName As String

    <MessageHeader(MustUnderstand:=True)> _
    Public Length As Long

    <MessageBodyMember(Order:=1)> _
    Public FileByteStream As System.IO.Stream

End Class
File Server also includes the FileTransferService.cs code file, which contains the implementation of the contract, i.e. the actual code that does all the work. Naturally, the included class implements the IFileTransferService interface, which constitutes the service contract. If you have worked with streams before in .NET, you will find that the code that handles the stream and related information for upload or download is pretty straightforward. If you are new to .NET streams, please use Google for a quick introduction.
Note here that since actual downloading of the file starts after the execution of the DownloadFile method is completed (i.e. after the client gets the RemoteFileInfo instance returned by this method), the server must close the opened stream later, after the client completes the process. An elegant approach was suggested by Buddhike. To do this, the IDisposable interface is implemented by the RemoteFileInfo contract and the stream is disposed on the corresponding Dispose method. If this is not done, the stream will remain locked and the corresponding file will be locked for writing.
FileService is a class library and hence it cannot start as a window process. Therefore it needs another executable file-process that will host it. Several types of processes can host a WCF service, such as .NET executables, IIS processes, Windows Activation Services (new feature of Vista) and many more. Our example uses a .NET executable to host our service. So, ConsoleHost is a console application that does exactly this. It has a reference to the FileService project. However, it is not related in any way with the business our service is doing, i.e. transferring files. Actually, the code you will find in Program.cs would be the same even if our service was designed to host an online grocery. Take a quick look at this code file to understand how our service is started and closed.
static void Main(string[] args)
{
    ServiceHost myServiceHost = new ServiceHost(
        typeof(FileService.FileTransferService));
    myServiceHost.Open();
    Console.WriteLine("This is the SERVER console");
    Console.WriteLine("Service Started!");
    foreach (Uri address in myServiceHost.BaseAddresses)
        Console.WriteLine("Listening on " + address.ToString());
    Console.WriteLine("Click any key to close...");
    Console.ReadKey();
    myServiceHost.Close();
}
Public Shared Sub Main()
    Dim myServiceHost As New ServiceHost( _
        GetType(FileService.FileTransferService))
    myServiceHost.Open()
    Console.WriteLine("This is the SERVER console")
    Console.WriteLine("Service Started!")
    For Each address As Uri In myServiceHost.BaseAddresses
        Console.WriteLine("Listening on " + address.ToString())
    Next
    Console.WriteLine("Click any key to close...")
    Console.ReadKey()
    myServiceHost.Close()
End Sub
The configuration of ConsoleHost is what matters the most! It is divided into three sections, configuring the way our service will behave and how it will be exposed to the rest of the world. It is not the goal of this article to describe in detail how a WCF service is configured, so please refer to the WCF reference on MSDN for more information. Something noticeable in the configuration of our service is that it uses MTOM as message encoding and stream as transfer mode. See also the maxReceivedMessageSize property. This defines the maximum size of messages transferred by our service. Since we are transferring files, we want this property to have a large value.
<binding name ="FileTransferServicesBinding"
transferMode="Streamed"
messageEncoding="Mtom"
maxReceivedMessageSize="10067108864" >
</binding>
The Client project is a sample consumer of our service. You will notice that the Client project includes a folder called Service References. This folder contains a bunch of files created automatically by Visual Studio by right clicking on the Client project root and selecting "Add Service Reference." The files in this folder are the proxy of our file transfer service on the client side. The client uses these files to send requests to the server, hiding the complexity of the Web Service and SOAP protocols.
Again, if you have worked with streams before, you will notice that things are pretty simple in the TestForm file except for one small part, which is also the difference in implementing the progress indication when uploading rather than when downloading. When downloading, the client has control of the procedure. You can see in TestForm.cs that downloading is implemented using a loop that reads the server stream piece-by-piece. So, the client knows what part of the server stream is read and how many more remain. When uploading, that loop resides on the server. In order for the client to know how many bytes the server read, it uses the StreamWithProgress class, which inherits System.IO.Stream. An instance of this class is passed to the server, instead of the original file stream. Since this class overrides the default Read method of the stream (see code below), it can report the progress of the uploading process to the client!
public override int Read(byte[] buffer, int offset, int count)
{
    int result = file.Read(buffer, offset, count);
    bytesRead += result;
    if (ProgressChanged != null)
        ProgressChanged(this, new ProgressChangedEventArgs(bytesRead, length));
    return result;
}
Public Overloads Overrides Function Read(ByVal buffer As Byte(), _
        ByVal offset As Integer, ByVal count As Integer) As Integer
    Dim result As Integer = file.Read(buffer, offset, count)
    bytesRead += result
    RaiseEvent ProgressChanged(Me, New ProgressChangedEventArgs( _
        bytesRead, m_length))
    Return result
End Function
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Response.ContentType = "image/jpeg";
Response.AppendHeader("Content-Disposition","attachment; filename=myFile.jpg");
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
C# 6: First reactions | http://www.codeproject.com/Articles/20364/Progress-Indication-while-Uploading-Downloading-Fi | CC-MAIN-2014-15 | en | refinedweb |
(Random observation: Hmmm, strange, in the Data.Map version of primes above, we are missing 5 primes?)

Hi Chaddai,

Your algorithm does work significantly better than the others I've posted here :-) So much so, that we're going for a grid of 10000000 to get the timings in an easy-to-measure range. Here are the results:

J:\dev\haskell>ghc -O2 -fglasgow-exts -o PrimeChaddai.exe PrimeChaddai.hs
J:\dev\haskell>primechaddai
number of primes: 664579
30.984

J:\dev\test\testperf>csc /nologo primecs.cs
J:\dev\test\testperf>primecs
number of primes: 664579
elapsed time: 0,859375

So, only 30 times faster now, which is quite a lot better :-D

Here's the full .hs code:

module Main where

import IO
import Char
import GHC.Float
import List
import qualified Data.Map as Map
import Control.Monad
import System.Time
import System.Locale

...

calculateNumberOfPrimes max = length $ takeWhile ( < max ) primes

gettime :: IO ClockTime
gettime = getClockTime

main = do
    starttime <- gettime
    ... ( show(secondsfloat) )
    return ()

On 7/15/07, Chaddaï Fouché <chaddai.fouche at gmail.com> wrote:
> Or if you really want a function with your requirement, maybe you
> could take the painful steps needed to write :
> let numberOfPrimes = length $ takeWhile (< 200000) primes
> ?
Template Method pattern is one of the 23 design patterns explained in the famous Design Patterns book by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides. The intent of this pattern is stated as:
Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. TemplateMethod lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.
To explain in simple terms, consider the following scenario: Assume there is a workflow system in which 4 tasks have to be performed in the given order so as to successfully complete the workflow. Some of the tasks out of the 4 tasks can be customised by different workflow system implementations.
Template Method pattern can be applied to above scenario by encapsulating the workflow system into an abstract class with few of the tasks out of the 4 tasks implemented. And leave the implementation of remaining tasks to the subclasses of the abstract class.
So the above when implemented:
/**
 * Abstract Workflow system
 */
abstract class WorkflowManager2 {
    public void doTask1() {
        System.out.println("Doing Task1...");
    }
    public abstract void doTask2();
    public abstract void doTask3();
    public void doTask4() {
        System.out.println("Doing Task4...");
    }
}

/**
 * One of the extensions of the abstract workflow system
 */
class WorkflowManager2Impl1 extends WorkflowManager2 {
    @Override
    public void doTask2() {
        System.out.println("Doing Task2.1...");
    }
    @Override
    public void doTask3() {
        System.out.println("Doing Task3.1...");
    }
}

/**
 * Other extension of the abstract workflow system
 */
class WorkflowManager2Impl2 extends WorkflowManager2 {
    @Override
    public void doTask2() {
        System.out.println("Doing Task2.2...");
    }
    @Override
    public void doTask3() {
        System.out.println("Doing Task3.2...");
    }
}
Let me just go ahead and show how these workflow implementations are used:
public class TemplateMethodPattern {
    public static void main(String[] args) {
        initiateWorkFlow(new WorkflowManager2Impl1());
        initiateWorkFlow(new WorkflowManager2Impl2());
    }

    static void initiateWorkFlow(WorkflowManager2 workflowMgr) {
        System.out.println("Starting the workflow ... the old way");
        workflowMgr.doTask1();
        workflowMgr.doTask2();
        workflowMgr.doTask3();
        workflowMgr.doTask4();
    }
}
and the output would be..
Starting the workflow ... the old way
Doing Task1...
Doing Task2.1...
Doing Task3.1...
Doing Task4...
Starting the workflow ... the old way
Doing Task1...
Doing Task2.2...
Doing Task3.2...
Doing Task4...
So far so good. But the main intent of this post is not to create yet another blog post on the Template Method pattern, but to see how we can leverage Java 8 lambda expressions and default methods. I have already written before that only interfaces which have a Single Abstract Method can be written as lambda expressions. What this translates to in this example is that the WorkflowManager2 can only have one abstract/customizable task out of the 4 tasks.
So restricting to one abstract method is a major restriction and may not be applicable in many real-world scenarios. I don't wish to reiterate the same old Template Method pattern examples; instead my main intention in writing this is to show how lambda expressions and default methods can be leveraged in scenarios where you are dealing with abstract classes with single abstract methods.
If you are left wondering what lambda expressions and default methods in Java are, then please spend some time reading about lambda expressions and default methods before proceeding further.
Instead of an abstract class we would use an interface with default methods, so our workflow system would look like:
interface WorkflowManager {
    public default void doTask1() {
        System.out.println("Doing Task1...");
    }
    public void doTask2();
    public default void doTask3() {
        System.out.println("Doing Task3...");
    }
    public default void doTask4() {
        System.out.println("Doing Task4...");
    }
}
Now that we have the workflow system with customisable Task2, we will go ahead and initiate some customised workflows using Lambda expressions…
public class TemplateMethodPatternLambda {
    public static void main(String[] args) {
        /**
         * Using lambda expressions to create different
         * implementations of the abstract workflow
         */
        initiateWorkFlow(() -> System.out.println("Doing Task2.1..."));
        initiateWorkFlow(() -> System.out.println("Doing Task2.2..."));
        initiateWorkFlow(() -> System.out.println("Doing Task2.3..."));
    }

    static void initiateWorkFlow(WorkflowManager workflowMgr) {
        System.out.println("Starting the workflow ...");
        workflowMgr.doTask1();
        workflowMgr.doTask2();
        workflowMgr.doTask3();
        workflowMgr.doTask4();
    }
}
This is, in a small way, how lambda expressions can be leveraged in the Template Method pattern.
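The single-abstract-method restriction discussed above can also be worked around by accepting one lambda per customisable task instead of one interface implementation. A hypothetical sketch (the WorkflowRunner name and the string-log representation are mine, not part of the original example):

```java
import java.util.function.UnaryOperator;

// Hypothetical variation of the workflow: fixed tasks are inlined, and each
// customisable task is passed in as its own lambda transforming the log.
public class WorkflowRunner {

    public static String run(UnaryOperator<String> task2,
                             UnaryOperator<String> task3) {
        String log = "Task1;";      // fixed step
        log = task2.apply(log);     // customisable step
        log = task3.apply(log);     // customisable step
        return log + "Task4;";      // fixed step
    }

    public static void main(String[] args) {
        // prints Task1;Task2.1;Task3.1;Task4;
        System.out.println(run(s -> s + "Task2.1;", s -> s + "Task3.1;"));
    }
}
```

This keeps the skeleton fixed while allowing any number of steps to vary, at the cost of a longer parameter list than a single functional interface.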
iRewardType Struct Reference
The reward type is responsible for the creation of reward factories. More...
#include <tools/rewards.h>
Inheritance diagram for iRewardType:
Detailed Description
The reward type is responsible for the creation of reward factories.
Definition at line 77 of file rewards.h.
Member Function Documentation
Create a reward factory.
Return the name for this reward type.
The documentation for this struct was generated from the following file:
Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
member_offset
const_mem_fun_explicit and mem_fun_explicit
composite_keyin compilers without partial template specialization
ctor_args_list
multi_index_container
std::list
multi_index_container
In relational databases, composite keys depend on two or more fields of a given table.
The analogous concept in Boost.MultiIndex is modeled by means of
composite_key, as shown in the example:

typedef multi_index_container<
  phonebook_entry,
  indexed_by<
    ordered_non_unique<
      composite_key<
        phonebook_entry,
        member<phonebook_entry,std::string,&phonebook_entry::family_name>,
        member<phonebook_entry,std::string,&phonebook_entry::given_name>
      >
    >,
    ordered_unique<
      member<phonebook_entry,std::string,&phonebook_entry::phone_number>
    >
  >
> phonebook;

phonebook pb;
...
phonebook::iterator it=pb.find(boost::make_tuple(std::string("White"), ...));
member_offset
The
member key extractor poses some problems in compilers
that do not properly support pointers to members as non-type
template arguments, as indicated by the
Boost Configuration Library
defect macro
BOOST_NO_POINTER_TO_MEMBER_TEMPLATE_PARAMETERS.
Some compilers have been confirmed not to work correctly with member. The following test program can be used to check whether a given compiler properly handles pointers to members as non-type template parameters:
#include <iostream>

struct pair
{
  int x,y;

  pair(int x_,int y_):x(x_),y(y_){}
};

template<int pair::* PtrToPairMember>
struct foo
{
  int bar(pair& p){return p.*PtrToPairMember;}
};

int main()
{
  pair p(0,1);
  foo<&pair::x> fx;
  foo<&pair::y> fy;
  if(fx.bar(p)!=0||fy.bar(p)!=1)std::cout<<"KO"<<std::endl;
  else std::cout<<"OK"<<std::endl;

  return 0;
}
If you find a compiler that does not pass the test, and for which
BOOST_NO_POINTER_TO_MEMBER_TEMPLATE_PARAMETERS is not defined,
please report to the Boost developers mailing list.
To overcome this defect, a replacement utility
member_offset
has been provided that does the work of
member at the
expense of less convenient notation and the possibility of
non-conformance with the standard. Please consult
the reference for further information on
member_offset.
As an example of use, given the class

class A { int x; };

the instantiation member<A,int,&A::x> can then be simulated as member_offset<A,int,offsetof(A,x)>.
For those writing portable code, Boost.MultiIndex provides the ternary macro
BOOST_MULTI_INDEX_MEMBER. Continuing with the example above, the
expression
BOOST_MULTI_INDEX_MEMBER(A,int,x)
expands by default to
member<A,int,&A::x>
or alternatively to
member_offset<A,int,offsetof(A,x)>
if
BOOST_NO_POINTER_TO_MEMBER_TEMPLATE_PARAMETERS is defined.
const_mem_fun_explicit and mem_fun_explicit
MSVC++ 6.0 has problems with
const member functions as non-type
template parameters, and thus does not accept the
const_mem_fun
key extractor. A simple workaround, fortunately, has been found, consisting
in specifying the type of these pointers as an additional template
parameter. The alternative
const_mem_fun_explicit extractor
adopts this solution; for instance, given the type
struct A { int f()const; };
the extractor
const_mem_fun<A,int,&A::f> can be replaced by
const_mem_fun_explicit<A,int,int (A::*)()const,&A::f>. A similar
mem_fun_explicit class template is provided for non-constant
member functions.
If you are writing cross-platform code, the selection of either key extractor
is transparently handled by the macro
BOOST_MULTI_INDEX_CONST_MEM_FUN,
so that
BOOST_MULTI_INDEX_CONST_MEM_FUN(A,int,f)
expands by default to
const_mem_fun<A,int,&A::f>
but resolves to
const_mem_fun_explicit<A,int,int (A::*)()const,&A::f>
in MSVC++ 6.0. An analogous macro
BOOST_MULTI_INDEX_MEM_FUN is
provided as well.
composite_key in compilers without partial template specialization
Much of the power of
composite_key derives from the ability
to perform searches when only the first elements of the compound key are
given. In order to enable this functionality,
std::less and
std::greater are specialized for
composite_key_result instantiations to provide
overloads accepting tuples of values.
In those compilers that do not support partial template specialization,
tuple-based comparisons are not available by default. In this case,
multi_index_container instantiations using composite keys
will work as expected (elements are sorted lexicographically on the
results of the combined keys), except that lookup operations will not
accept tuples as an argument. The most obvious workaround
to this deficiency involves explicitly specifying the comparison
predicate with
composite_key_compare: this is tedious as
the comparison predicates for all the element key extractors must be
explicitly typed. For this reason, Boost.MultiIndex provides the replacement
class template
composite_key_result_less, that
acts as the missing specialization of
std::less for
composite_key_results:
typedef composite_key<
  phonebook_entry,
  member<phonebook_entry,std::string,&phonebook_entry::family_name>,
  member<phonebook_entry,std::string,&phonebook_entry::given_name>
> ckey_t;

typedef multi_index_container<
  phonebook_entry,
  indexed_by<
    ordered_non_unique<
      ckey_t,
      // composite_key_result_less plays the role of
      // std::less<ckey_t::result_type>
      composite_key_result_less<ckey_t::result_type>
    >,
    ordered_unique<
      member<phonebook_entry,std::string,&phonebook_entry::phone_number>
    >
  >
> phonebook;
There is also an analogous
composite_key_result_greater class to substitute for
specializations of
std::greater.
Index construction: ordered indices take a key extractor and a comparison predicate, while sequenced indices do not need any construction
argument. For instance, given the definition
typedef multi_index_container<
  unsigned int,
  indexed_by<
    ordered_unique<identity<unsigned int> >,
    ordered_non_unique<
      identity<unsigned int>,
      modulo_less<unsigned int>
    >,
    sequenced<>
  >
> modulo_indexed_set;
the corresponding
ctor_args_list type is equivalent to
boost::tuple<
  // ctor_args of index #0
  boost::tuple<identity<unsigned int>,std::less<unsigned int> >,
  // ctor_args of index #1
  boost::tuple<identity<unsigned int>,modulo_less<unsigned int> >,
  // sequenced indices do not have any construction argument:
  // this is default constructible (actually, an empty tuple)
  modulo_indexed_set::nth_index<2>::type::ctor_args
>

An instance of this list is then passed at construction time:

modulo_indexed_set m(args_list);
A program is provided in the examples section that puts these concepts into practice.
multi_index_container
Academic motivations aside, there is a practical interest in simulating standard
associative containers by means of
multi_index_container, namely to take
advantage of extended functionalities provided by
multi_index_container for
lookup, range querying and updating.
The interface of these simulated
maps does not exactly conform to that of
std::maps and
std::multimaps. The most obvious difference is the lack of
operator [], either in read or write mode; this, however, can be
simulated with appropriate use of
find and
insert.
These simulations of standard associative containers with
multi_index_container
are comparable to the original constructs in terms of space and time efficiency.
See the performance section for further details.
std::list
Unlike the case of associative containers, simulating
std::list
in Boost.MultiIndex does not add any significant functionality, so the following
is presented merely for completeness' sake.
Much as with standard maps, the main difficulty to overcome is the read-only nature of the stored elements.
MPL Forward Sequence with as many elements as indices comprise the
multi_index_container: for instance, the
n-th element of the sequence corresponds to the n-th index.
Revised June 28th 2004
© Copyright 2003-2004 Joaquín M López Muñoz. Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at)
Script Debugging
The Unigine Debugger allows you to inspect your UnigineScript code at run-time. For example, it can help you to determine when a function is called and with which values. Furthermore, you can locate bugs or logic problems in your scripts by executing them step by step.
Additional information from the Unigine scripting system is available if you use debug builds.
The engine may report errors of two types: compile-time and run-time.
Compile-Time Errors
Error Message
When a compile-time error occurs, it means that the interpreter could not parse and load the script. In this case, the log file will contain an error message with:
- A source code string with an invalid statement.
- An error description from the interpreter.
Mismatched curly braces are the most frequent compile-time error. For example:
class Foo {
	Foo(int a) {
		this.a = a;
	// missing closing curly brace
	~Foo() {}
	void print() {
		log.message("a is %d\n",a);
	}
};
Parser::check_braces(): some errors with count of '{' and '}' symbols
Parser::preprocessor(): wrong number of braces
Unigine-Specific Warnings
Besides obvious errors like incorrect syntax, there are some less evident and more confusing ones. The most important are:
- Recursive includes are not tracked, so be careful with file inclusions and use defines when in doubt.
- In user class definitions make sure that all member variables are declared before they are used in a constructor or a method. In the following source code (UnigineScript), variable a is not declared, so an error occurs:
class Foo {
	Foo(int a) {
		this.a = a;
	}
	~Foo() {}
	void print() {
		log.message("a is %d\n",a);
	}
};
Output:
Interpreter::parse(): unknown "Foo" class member "a"
- The following and similar complex expressions lead to errors:
object.doSomething().doSomethingElse();
object.doSomething()[4];
For the first expression, the following error message occurs:
Parser::expectSymbol(): bad '.' symbol expecting end of string
For the second expression, the error message is:
Parser::expectSymbol(): bad '[' symbol expecting end of string
The point is that due to dynamic typing the interpreter does not know what will be returned by object.doSomething() (maybe even nothing), and that may lead to a run-time error.
Run-Time Errors
Error Message
When a run-time error occurs, it usually means that you are trying to manipulate a corrupt or even a non-existent object. In this case, the log file will contain an error message with:
- An error description from the interpreter.
- Current stack of function calls.
- An assembly dump of an invalid statement.
Interpreter::run(): "int: 0" is not an extern class
Call stack:
00: 0x0001a84c update()
01: 0x00016565 nodesUpdate()
02: 0x0001612c Nodes::update()
Disassemble:
0x00016134: callecf Node.getName
0x00016137: pop
0x00016138: pushv parameters_tb
0x0001613a: callecf WidgetTabBox.getCurrentTab
Common Errors
Common run-time errors:
- NULL objects. Please, don't forget to initialize declared objects and check the values returned from functions.
- Stack underflow (or overflow) when less (or more) arguments than required are provided to a function.
- Dynamic typing allows assigning a value of one type to a variable of another, for example:
// the following is ok
Object object = new ObjectMesh("file.mesh");
object = new LightWorld(vec4_one);
However, people tend to forget what they've assigned to a variable and call inappropriate methods:
// the following is NOT ok, as object is not of type ObjectMesh any more
Material m = object.getMaterial(0);
In this case, the error message is:
ExternClass::run_function(): can't find "class Object * __ptr64" base class in "class LightWorld * __ptr64" class
When creating a vector, make sure you provide a correct (non-negative) capacity. If a negative capacity is specified, the vector size is set to 0 by default.
Also, when addressing vector contents, make sure the index being used exists; the same is true for maps and their keys. Also, a map key cannot be NULL.
int vector[2] = ( 1, 2, 3 ); // vector of 3 elements is defined
log.message("%d\n", vector[4]); // addressing index 4, which doesn't exist
In this case, the error message is:
UserArray::get(): bad array index 4
- Make sure you use correct swizzles with proper objects. For example, you cannot use swizzles with scalar types.
int a = 10; // value of the scalar type
log.message("%d\n", a.x); // swizzling
The example produces the following error message:
Variable::getSwizzle(): bad component 0 for int variable
- If a user class overloads some operator, do not forget to preserve the order of operands in the code:
class Foo {
	int f;
	Foo(int f = 0) { this.f = f; }
	int operator+(Foo arg1,int arg2) { return arg1.f + arg2; }
};
// this is ok
int ret = new Foo() + 5;
// this is NOT ok
ret = 5 + new Foo();
The example produces an error:
Variable::add(): bad operands int and user class
- Make sure that if you use the wait control structure in a class method, the method is called as a static one.
class Foo {
	void update() {
		while(true) wait 1;
	}
};
Foo f = new Foo();
Foo::update(); // this is valid, because the method is called as a static function
f.update(); // this would cause a crash, because a class instance is passed
The error message is:
Interpreter::runFunction(): depth of stack is not zero
- Use class_cast() with caution. Remember that it converts an instance of one user class to another without warnings, even if those classes have nothing in common.
- Lots of transient objects. This is not exactly an error, but if you create plenty of user class objects in a short period that soon become unused, the performance will drop every 32 frames because of garbage collector clean-ups until all unused objects are gone. In this case, the performance drop is caused by both the multitude of objects and expensive object destruction.
Debugger
The Unigine debugger allows you to:
- Set the breakpoints directly in the script
- Set and remove the breakpoints via the console
- View memory stack
- View function call stack
- View current variables values
- Step through instructions
To run the debug process, you can insert the breakpoint; instruction in your code or set the run-time breakpoint.
The console debugger window (in Windows):
Set a Breakpoint
To invoke the console debugger, insert a breakpoint; instruction in the script you are working on. This type of instruction is used for precise breakpoint placement. You can insert more than one breakpoint; instruction in your script.
For example:
int a = 10; breakpoint; // the breakpoint instruction int b = 1; forloop(int i = 0; a){ b += i; log.message("Iteration: %d\n",i); log.message("Value: %d\n",b); }
When a breakpoint is encountered, the engine execution stops, and the application stops responding to user actions. Instead, the external console starts receiving user input.
In this console, you can step through instructions using the next command. Also it is possible to set the breakpoint during the debug process by using the break command. It is useful, for example, when you debug a loop. Or you can run the other debugger commands, if necessary.
If the console is unavailable, as in Windows release builds, this will look like a hang-up of the engine. To avoid that, use the "breakpoint" macro defined in the file data/core/unigine.h. This macro also correctly preserves FPS values.
Set a Run-Time Breakpoint
There is also a way to set run-time per-function interpreter breakpoints with a specified number of arguments via the editor console. When the matching script instruction is triggered, the engine execution stops and the external console starts to receive user input. Moreover, it is also possible to set the breakpoint flag inside the debugger.
There are 3 types of such breakpoints: system_breakpoint, world_breakpoint and editor_breakpoint, used for system, world and editor scripts respectively.
The syntax to set the breakpoint is the following: system_breakpoint/world_breakpoint/editor_breakpoint set/remove function_name number_of_arguments.
For example, to set the breakpoint on the custom function with zero arguments printMyMessage(), type in the editor console the following:
world_breakpoint set printMyMessage 0
Features
The debugger also supports limited autocompletion and history of commands.
With the debugger, you cannot:
Commands
The debugger supports several console commands listed below.
Lists all available commands.
run
Continues interpreter execution until the next breakpoint or the end of the script is reached.
Here N is an optional argument specifying the number of breakpoints to skip. By default, N equals 0.
next
Executes the next instruction. This command is used to step through instructions starting from the breakpoint.
Here N is an optional argument specifying the number of instructions to skip. By default, N equals 0.
For example, you debug the following code:
breakpoint; // the breakpoint is set here
int a = 10;
int b = 1;
int vector[3] = ( 1, 2, 3, 4 );
forloop(int i = 0; a){
	b += i;
	log.message("Iteration: %d\n",i);
	log.message("Value: %d\n",b);
}
Breakpoint at 0x00000455: setvc a = int: 10
# next
Breakpoint at 0x00000458: setvc b = int: 1
stack
Dumps the memory stack.
calls
Dumps the function call stack.
Some notes:
- Function calls are listed starting from a C++ function call, which is not included in the stack. For example, if some script function is invoked as a callback, you will not see the invoking C++ function.
- If a function address is intact, a function name will be displayed, otherwise, you will see gibberish instead of the name. The address changes, if there are yield or wait instructions in the function body.
dasm
Disassembles a certain number of instructions starting from the current instruction.
Here N is an optional argument specifying the number of instructions to process. By default, N equals 8.
For example:
# dasm
Disassemble:
0x00000465: addvv b i
0x00000468: popv b
0x0000046a: pushv i
0x0000046c: pushc string: "Iteration: %d\n"
0x0000046e: callefr log.message
0x00000470: pushv b
0x00000472: pushc string: "Value: %d\n"
0x00000474: callefr log.message
See also: Assembly Mnemonics.
info
Displays contents of given variables.
Here var_list is a list of space-separated variable names. If a variable is not local to the current scope, its name should contain a namespace prefix.
Some notes:
- Both ordinary and array variables are supported
- Values of map and vector elements can be accessed via constant keys/indices in brackets ([]). Only int, float, and string keys are supported
- User class members can be accessed via a dot (.). Autocompletion of user class members is not supported
Usage example:
# info a b vector
a: int: 3
b: int: 1
vector: Vector of 4 elements
0: int: 1
1: int: 2
2: int: 3
3: int: 4
# info vector[1]
vector[1]: int: 2
This command is useful when you want to check, for example, which instruction changes variable values.
list
fault
Crashes the interpreter. Useful when the engine itself is run in a debugger (for example, gdb), as it allows seeing the stack of C++ function calls.
break
Toggles the current breakpoint. You can add or remove the breakpoint for each script instruction during the debug process.
For example, you debug the following code:
int a = 10;
int b = 1;
forloop(int i = 0; a){
	b += i;
	log.message("Iteration: %d\n",i);
	log.message("Value: %d\n",b);
}
Breakpoint at 0x00000455: setvc a = int: 10
# next
Breakpoint at 0x00000458: setvc b = int: 1
# break
Disassemble:
0x00000458: ! setvc b = int: 1
0x00000468: setvc i = int: 0
Breakpoint at 0x00000455: setvc a = int: 10
# next
Breakpoint at 0x00000458: setvc b = int: 1
# break
# break
Assembly Mnemonics
Here is a list of assembly mnemonics for the assembly dump.
Set operations:
Pop operations:
Push operations:
Call operations:
Math operations:
Branch operations:
Loops:
RFont subclass not generating "otf"
- RicardGarcia last edited by gferreira
Hello,
I'm working on a set of animations that are going to use some .ufos I have, and I'm facing a problem that I don't know how to solve. The point is that I want to generate otf files out of them, but when calling
generate on the Font object it says
The RFont subclass does not implement this method. Any idea what I'm doing wrong?
Here's a simplified version of the code that returns this error:
from fontParts.world import NewFont, OpenFont

ufos = ['Patufet-Black.ufo', 'Patufet-Light.ufo']

new_f = OpenFont(ufos[0], showInterface=False)
print(new_f)
new_f.generate("otfcff", "_Install-otf/new_f.otf")
Thanks.
Inside the app DrawBot an RFont has no implementation for the method generate.
Inside RoboFont you can use the DrawBot extension, where you have all the powers DrawBot has to draw and all the powers RoboFont has to edit font data, and you can also call font.generate(...).
- RicardGarcia last edited by RicardGarcia
I see. Then, if I want to make an animation with a long text interpolating between two .ufo files, which would be the best option? Generating a .designspace with an axis, exporting the .otf and using it in the script?
From what you say, it looks doable inside RoboFont itself, though.
Thanks!
hello @RicardGarcia,
for a simple interpolation between two masters you can use
font.interpolate(factor, font1, font2) – see this example script.
hope this helps!
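Under the hood, interpolating two masters is per-point linear interpolation; the factor math can be sketched in plain Python, independent of fontParts (the function name lerp is illustrative, not part of the API):

```python
def lerp(factor, a, b):
    # factor 0.0 -> value from the first master,
    # factor 1.0 -> value from the second master
    return a + (b - a) * factor

# e.g. a point x-coordinate of 100 in one master and 200 in the other
print(lerp(0.5, 100, 200))  # 150.0
```

font.interpolate applies this to every compatible point, width and kerning value in the two source fonts.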
there are several options:
- font.interpolate will return a ufo
- a designspace document with a given location will also return a ufo; use
mutatorMath
- generate a variable font
Generating variable fonts from design spaces is not difficult: in RoboFont you can use the Batch extension. In DrawBot you can pip install
ufo2ft to generate a var font and use it inside your type setting.
good luck!
- RicardGarcia last edited by
Thank you so much for your help, both of you. @gferreira, the example you linked is super helpful and clear. I've read a bunch of code samples on RoboFont's website and I don't know why I didn't run into this one. @frederik, thanks! I think the main thing I was doing wrong was doing it outside RoboFont. With the hints you both gave me I can manage to do what I wanted inside RoboFont.
Thanks!
- RicardGarcia last edited by RicardGarcia
Hi again. Even though both of your comments helped me a lot, I'm still facing one last problem that I don't know how to solve.
From my opened ufos, I'm generating a new interpolated font that I'm using in each frame to make an animation. The point is that if I write the factor right away, like 0, .25 or 1, it does the correct interpolation, but it doesn't if I use the variable interp_factor. Is it something about overwriting the new_f.otf file?
I've tried to clean the code up as much as possible and hope that helps point out what's going on here:
text2Use = """
A monospaced font, also called a fixed-pitch, fixed-width, or non-proportional font, is a font whose letters and characters each occupy the same amount of horizontal space.
"""

def animationInstagram(text=text2Use):
    # Size of the page in px
    w, h = 1200, 2134.4
    # Number of frames
    frames = 4
    # Step factor
    maxFactor = 1
    stepFactor = maxFactor / frames
    # Interpolation factor (starting point)
    interp_factor = 1

    # Accessing opened ufos
    f1, f2 = AllFonts()[0], AllFonts()[1]
    f1.generate("otfcff", "f1.otf")
    f_1 = OpenFont("f1.otf", showInterface=False)
    f2.generate("otfcff", "f2.otf")
    f_2 = OpenFont("f2.otf", showInterface=False)

    # Frames
    for f in range(frames):
        # Page settings
        newPage(w, h)
        fill(0)  # BG
        rect(0, 0, w, h)
        fill(1)  # FG

        # Creating font to interpolate
        new_f = NewFont(showInterface=False)
        new_f.interpolate(interp_factor, f_1, f_2, round=False)
        print("Interpolation factor: ", interp_factor)
        new_f.generate("otfcff", "new_f.otf")

        # Installing the interpolated (temporary) font
        fontName = installFont("new_f.otf")

        # ------------------
        # Text box
        extraSpace = -200
        boxX, boxY, boxW, boxH = extraSpace, extraSpace, w - extraSpace*2.7, h - extraSpace*2
        font(fontName, 100)
        textBox(text, (boxX, boxY, boxW, boxH), align="left")
        # ------------------

        # Subtracting step factor
        interp_factor -= stepFactor

        # Uninstalling
        uninstallFont("new_f.otf")

# Calling the function
animationInstagram()

# Saving the image
saveImage("Test-interpolation.gif")
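The frame/factor bookkeeping in the script above can be checked on its own in plain Python; with frames = 4 and a starting factor of 1, the per-frame subtraction visits these factors:

```python
frames = 4
max_factor = 1.0
step_factor = max_factor / frames

# same subtraction the loop performs once per frame
factors = [max_factor - i * step_factor for i in range(frames)]
print(factors)  # [1.0, 0.75, 0.5, 0.25]
```

So the factor variable itself is fine; the problem discussed below lies elsewhere.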
Thank you so much once again!
two things:
why generate a binary and read the binary back in? see
f1.generate("otfcff", "f1.otf")
Give your new font a unique name based on the
interp_factor. The font familyName and styleName make up the postScriptName, which has to be unique to set a font in DrawBot (and everywhere else). In your case the font familyName and styleName will be "None" "None". This is not good...
- RicardGarcia last edited by
why generate a binary and read the binary back in? see f1.generate("otfcff", "f1.otf")
You mean I can use the opened ufo files right away as f1 and f2 to interpolate with, right?
Give your new font a unique name based on the interp_factor. The font familyName and styleName make up the postScriptName, which has to be unique to set a font in DrawBot (and everywhere else). In your case the font familyName and styleName will be "None" "None". This is not good...
All right. I thought that after uninstalling the interpolated font I could generate another one to use in the new page afterwards. So, in this case, would it make sense to set the same familyName, while the styleName could be related to interp_factor as you say?
OpenCV 2.4.3rc Samples do not run in NDK r8b
I am using windows and trying to build OpenCV Tutorial 3 - Add Native OpenCV in OpenCV4Android 2.4.3. I have followed all the instructions in the following two tutorials
I am trying to build Tutorial 3 in Eclipse and am getting the following in the console:
17:45:42 * Auto Build of configuration Default for project OpenCV Tutorial 3 - Add Native OpenCV *
"C:\android-ndk-r8b\ndk-build.cmd"
Compile++ thumb : native_sample <= jni_part.cpp
In file included from jni/jni_part.cpp:1:0:
C:/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/jni.h:592:13: note: the mangling of 'va_list' has changed in GCC 4.4
SharedLibrary : libnative_sample.so
Install : libnative_sample.so => libs/armeabi-v7a/libnative_sample.so
17:45:44 Build Finished (took 2s.247ms)
The libnative_sample.so file has been correctly produced. Still, I am getting an error that "There is an error in the project". When I go to jni_part.cpp, it shows that it cannot find the header "vector", does not recognize the namespace "std", and cannot find FastFeatureDetector. How can I compile and run it?
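One frequent cause of exactly these symptoms (an assumption here, not something confirmed in this thread) is that no C++ STL is enabled for the NDK build: the NDK's default runtime ships no <vector> or std:: support. Declaring an STL in jni/Application.mk and refreshing the Eclipse index often resolves it:

```makefile
# jni/Application.mk -- hypothetical sketch; adjust to your project
APP_STL := gnustl_static   # enables <vector>, std::, etc.
APP_ABI := armeabi-v7a
```

Since the command-line build already succeeds, the remaining "error in the project" marker may also just be the Eclipse CDT indexer missing the NDK include paths.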
Hello @Chayan, I am also on OpenCV Tutorial 3 and got a lot of errors. I know you can help me with my problem. How did you resolve the error "Cannot run the program ndk-build.cmd: The system cannot find the file specified" in Tutorial 3? Please help. My original post
React Native WebView — Loading HTML in React Native
In this article, see a tutorial on how to load HTML in React Native.
In React Native, WebViews enable access to any web portal in the mobile app itself. In other words, a web view allows us to open web URLs inside the app interface. While React Native provides a built-in WebView component, we are going to use the react-native-webview plugin in this tutorial, since it is more powerful. React Native WebView is a modern, well-supported, and cross-platform WebView for React Native.
The built-in Webview from React Native is to be deprecated pretty soon based on this document. Hence, this plugin serves as the replacement for the built-in web view. This plugin is a third-party plugin supported by the react-native community.
Requirements
The requirements to follow this tutorial are:
- Node.js >=8.x.x with npm or yarn installed as a package manager.
- watchman, a file-watching service.
- react-native-cli.
Getting Started With React Native WebView
In order to get started with web view configuration, we need to install the actual plugin first. Here, we are using yarn to install the plugin but we can use NPM (Node Package Manager) as well. Hence, in order to install the plugin, we need to run the following command in the command prompt of our project folder:
- yarn add react-native-webview
If the react-native version is equal or greater than 0.60 then, the installation also takes care of auto-linking the plugin to native configurations. But, in the case of earlier versions, we may need to run:
- react-native link react-native-webview
iOS
In the case of iOS, we also need to run the following command:
- pod install
Android
In the case of Android, this module does not require any extra step after running the link command. But for the react-native-webview version >=6.X.X, we need to make sure AndroidX is enabled in our project. This can be done by editing
android/gradle.properties and adding the following lines:
- android.useAndroidX=true
- android.enableJetifier=true
This completes our installation steps. We can now use the plugin in our react native project.
First, we are going to load a simple HTML content into our app interface. For that, we need to add the following imports in our App.js file:
import React, { Component } from 'react';
import { WebView } from 'react-native-webview';
Here, we have imported the
WebView component from the react-native-webview plugin. Now, we can make use of this component in order to load the HTML content as shown in the code snippet below:
import React, { Component } from 'react';
import { WebView } from 'react-native-webview';
class MyInlineWeb extends Component {
render() {
return (
<WebView
originWhitelist={['*']}
source={{ html: '<h1>This is a static HTML source!</h1>' }}
/>
);
}
}
Here, we have defined the
MyInlineWeb class component. This class component has a
render() function that renders the
WebView component. The
WebView component has the HTML content configured to its
source prop. As a result, we can see the HTML content is rendered in the app interface as shown in the emulator screenshot below:
Now, instead of simple HTML content, we are going to load the entire website content from the remote URL. For that, we need to provide the
uri option to the
source prop of
WebView component as shown in the code snippet below:
import React, { Component } from 'react';
import { WebView } from 'react-native-webview';
class MyWeb extends Component {
render() {
return <WebView source={{ uri: '' }} />;
}
}
Hence, we will get the entire webpage of the website opened in the app’s web view itself as shown in the screenshot below:
Adding a Loading Spinner to React Native Webview
While accessing the URL from the
WebView component, it may take some time for entire HTML content on the website to load. So, in order to represent the delay, we are going to display a loading indicator until the website loads. For this, we need to import the
ActivityIndicator component from the react-native package as shown in the code snippet below:
import { Text, View, StyleSheet, ActivityIndicator } from 'react-native';
Now, we need to make use of the
ActivityIndicator component in our project. For that, we are going to create a function called
LoadingIndicatorView, as shown in the code snippet below:
import * as React from 'react';
import { Text, View, StyleSheet,ActivityIndicator } from 'react-native';
import { WebView } from 'react-native-webview';
import { Card } from 'react-native-paper';
function LoadingIndicatorView() {
return <ActivityIndicator color='#009b88' size='large' />
}
export default function App() {
return (
<WebView
originWhitelist={['*']}
source={{ uri: '' }}
renderLoading={LoadingIndicatorView}
startInLoadingState={true}
/>
);
}
Here, we have used the
ActivityIndicator with
color and
size props. Then, we have invoked the
renderLoading prop of the
WebView component. This allows us to display the loading indicator until the website fully loads. We can see that
startInLoadingState prop is also used here. This boolean value forces the
WebView to show the loading view on the first load. This prop must be set to
true in order for the
renderLoading prop to work.
As a result, we get the following result in our emulator simulation:
Conclusion
In this tutorial, we learned about the web view feature of React Native. Since the built-in web view of React Native is to be deprecated, we learned how to make use of the third-party plugin named react-native-webview. First, we learned how to render simple HTML content using the WebView component. Then, we got a detailed explanation of how to use the WebView component and its props to render the entire HTML content from a URL along with a loading indicator. In case you want to learn more, you can go ahead to the main repository for discussion regarding this web view plugin.
Published at DZone with permission of Krissanawat Kaewsanmuang . See the original article here.
Opinions expressed by DZone contributors are their own.
Source: Deep Learning on Medium
DC Comics logo classifier
Training an image classifier from scratch using TensorFlow 2.0. We will be training a CNN to classify the logo of a particular character. In this example, I took five different characters, namely Batman, Superman, Green Lantern, Wonder Woman and Flash. This will be an end-to-end article: it includes steps right from collecting data to saving the trained model.
Prerequisites
- Knowledge of Python
- Google Account: As we will be using Google Colab
So time to get our hands dirty!
First, we will collect data using GoogleImagesDownload, a very handy Python package to download images from Google search. We will download images for each class (here we have five classes: batman, superman, green lantern, wonder woman and flash). Please refer to the documentation about using the tool mentioned in the link above.
Here’s a link to ChromeDriver, if you face trouble finding it.
googleimagesdownload --keywords "batman logo" --chromedriver chromedriver --limit 300
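The same statement, parameterized per character, can be built in a small Python helper (a sketch; the flag names simply mirror the command above):

```python
characters = ["batman", "superman", "green lantern", "wonder woman", "flash"]

def download_command(name, limit=300):
    # equivalent of the command line run manually for each class
    return ["googleimagesdownload",
            "--keywords", "%s logo" % name,
            "--chromedriver", "chromedriver",
            "--limit", str(limit)]

for c in characters:
    print(" ".join(download_command(c)))
```

Each list could be passed to subprocess.run to automate the five downloads.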
I ran the above statement in the command prompt to obtain images for each class by changing the search keywords. Then I selected files with the .jpg extension, as the tool also downloads files with other extensions. I had to manually delete some irrelevant images, and then I renamed them. I did this for each class. For renaming and selecting only .jpg files I have provided scripts in the GitHub repository; you will just have to take care of paths before executing them.
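The .jpg filtering and renaming steps can be sketched like this (the helper names are mine, not taken from the repository scripts):

```python
def keep_jpgs(filenames):
    # keep only the .jpg files; other downloaded extensions are dropped
    return sorted(f for f in filenames if f.lower().endswith(".jpg"))

def renamed(class_name, filenames):
    # rename scheme: batman_0.jpg, batman_1.jpg, ...
    return ["%s_%d.jpg" % (class_name, i) for i in range(len(filenames))]

files = keep_jpgs(["logo.png", "b.jpg", "C.JPG", "note.txt"])
print(files)                      # ['C.JPG', 'b.jpg']
print(renamed("batman", files))   # ['batman_0.jpg', 'batman_1.jpg']
```

The real scripts additionally move the files on disk with os.rename.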
Finally, I made a folder named data that consisted of images for each class. The hierarchy of directories looked as shown in the snapshot below.
Now upload this folder to your Google Drive. After uploading, we will create a new Colab notebook from this link. Google Colab gives us a Jupyter environment. You can refer to the Jupyter notebook in the Github repository. Now we will start with preprocessing and then define the model and train it.
!pip install tensorflow==2.0
So in the first cell, we installed TensorFlow 2.0. Now we will import all the packages we need.
import cv2
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from tensorflow.keras import layers, models
from google.colab import drive
drive.mount('/content/drive')
We are using cv2 for processing images and os for dealing with paths. NumPy is used for numpy arrays. TensorFlow will be used for defining and training the model. Here I have used shuffle from sklearn.utils to shuffle the image data during the train-test split. Finally, drive from google.colab will be used to mount Google Drive in the Colab notebook.
After the last line of the above cell is executed, it will provide a link that gives a verification token. Once the token is entered, Google Drive will be mounted in the Colab notebook. I have defined two functions, loadTrain() and readData(); loadTrain() helps with preprocessing the images. Preprocessing includes resizing, normalizing and assigning labels to the corresponding images.
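Since loadTrain() and readData() themselves are not shown in the article, here is a rough, pure-Python sketch of the label assignment and train/validation split they might perform (the function and variable names here are my own assumptions, not the author's code; the real loadTrain() also reads each file with cv2, resizes it to imageSize x imageSize, and normalizes pixel values):

```python
import random

def assign_labels_and_split(filenames_by_class, validation_size=0.2, seed=1):
    # Build (filename, one_hot_label, class_name) samples for every image,
    # shuffle them, and split off a validation portion.
    classes = sorted(filenames_by_class)
    samples = []
    for index, name in enumerate(classes):
        one_hot = [0.0] * len(classes)
        one_hot[index] = 1.0
        for filename in filenames_by_class[name]:
            samples.append((filename, one_hot, name))
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * validation_size)
    return samples[cut:], samples[:cut]  # train, validation

train, valid = assign_labels_and_split(
    {"batman": ["b1.jpg", "b2.jpg"], "flash": ["f1.jpg", "f2.jpg"]},
    validation_size=0.5)
print(len(train), len(valid))  # 2 2
```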
validationSize = 0.2
imageSize = 128
numChannels = 3
dataPath = "/content/drive/My Drive/comic/data"
classes = os.listdir(dataPath)
numClasses = len(classes)
print("Number of classes are : ", classes)
print("Training data Path : ",dataPath)
Here validationSize is given the value 0.2, so 80% will be our training data and 20% our testing data. imageSize specifies the dimension of the images that will be input to the model. numChannels is given the value 3, as our images will be read into RGB channels.
data = readData(dataPath,classes,imageSize,validationSize)
X_train,y_train,names_train,cls_train = data.train.getData()
X_test,y_test,names_test,cls_test = data.valid.getData()
print("Training data X : " , X_train.shape)
print("Training data y : " , y_train.shape)
print("Testing data X : ",X_test.shape)
print("Testing data y : ",y_test.shape)
Now we have our train and test data ready. Time to define our model and train it.
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(5, activation='softmax'))
There's a Conv layer followed by a max-pooling layer. Here the input shape is 128*128*3, as we resized our images to 128*128 resolution and 3 is the number of channels. Then again we have a Conv layer followed by max-pooling, then another Conv layer, and the tensor is flattened in the next layer. We then have a dense layer connected to our output layer. The output layer consists of 5 units, as we have five classes for classification.
model.summary()
We get a summary of our defined model. Note that before calling fit the model also needs to be compiled (for example, model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']); this step is not shown in the snippets above). Now it's time to train.
history = model.fit(X_train,y_train, epochs=4,
validation_data=(X_test,y_test))
I have also included plots and accuracy metrics; you can check them in my Jupyter notebook.
model.save("comic.h5")
We save our model in an .h5 file, but this is local to Colab, so we will also save it to Google Drive.
!pip install -U -q PyDrive
model_file = drive.CreateFile({'title' : 'comic.h5'})
model_file.SetContentFile('comic.h5')
model_file.Upload()
drive.CreateFile({'id': model_file.get('id')})
So now our trained model will be saved to Google Drive, from where it can be easily downloaded.
Now, to classify, I have written a script named classify.py. Here we pass the path of our image as a CLI argument, and the output is the predicted class. We actually get probabilities for each class and select the one with the maximum probability.
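The classify.py script itself lives in the repository; as a sketch of just the "pick the class with the maximum probability" step (the class list and ordering here are assumptions — in the real project they come from os.listdir(dataPath), so names and order may differ):

```python
# Hypothetical class ordering; the real script would derive this from
# the data directory the model was trained on.
CLASSES = ["batman", "superman", "greenlantern", "wonderwoman", "flash"]

def pick_class(probabilities):
    # Return the class name whose predicted probability is highest.
    # In classify.py the probabilities would come from model.predict()
    # on the image whose path was passed as the CLI argument.
    best_index = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return CLASSES[best_index]

print(pick_class([0.05, 0.7, 0.1, 0.1, 0.05]))  # superman
```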
Here's a link to my Github repo.
Further, I will try to write about my experience deploying it as an API to a cloud platform. You can also deploy it on mobile devices by converting the model to a lite version and saving it to a .tflite file; refer to the TensorFlow Lite documentation for details. Feel free to connect with me on LinkedIn, Github, and Instagram. Let me know about any improvements.
Thank You!

Source: https://mc.ai/dc-comics-logo-classifier/
Making Custom Editor Plugins
With the plugin system, UnigineEditor can be easily customized for project-specific needs. While implementing custom functionality, you can add new tabs into the default windows or new editor modules, as well as change the set of editor features for different projects.
A custom plugin is implemented by means of UnigineScript and then can be loaded the same way as any other UnigineEditor plugin.
See Also
- A sample plugin implementation located in the <UnigineSDK>/data/editor/plugins/samples directory.
Preparing Plugin Files
A custom editor plugin is a script implemented in UnigineScript and stored in a *.cpp file. Every plugin has a *.plugin file with meta data. This file should be located in the data/editor/plugins directory of your project: in this case, the plugin will automatically appear in the list of available plugins on UnigineEditor loading.
The recommended file structure is the following:
data
editor
plugins
*.cpp - plugin source code.
*.plugin - plugin meta data.
my_project
Implementing a Plugin
The custom plugin logic stored in a *.cpp file should contain implementations of the following functions:
- A getName() function that returns an arbitrary namespace name for the plugin. You should use this namespace for all created GUI elements and callbacks, or if you need to call plugin functions from outside the plugin namespace.
- An init() function that receives plugin meta data as an instance of the PluginMeta class and will be called on editor initialization.
- A shutdown() function that will be called on editor shutdown.
- An update() function that receives an integer argument. This function will be called each frame while the editor is loaded. Notice: the engine will pass a need_reload flag to this function; if this flag is equal to 1, you may need to reload your custom resources. This function is optional: you should implement it only if you have code to be executed each frame.
- A show() function that shows a plugin window on plugin enabling.
- A save() function that will be called on world saving. Here you can save custom plugin data into the world.
If a custom plugin interface is implemented as an external window, you should also add the following function calls to your code (implementations can be found in <UnigineSDK>/data/editor/editor_plugins.h):
- pluginsAddWindow() into the plugin init() function to add a plugin name to the quick access list of currently enabled plugins.
- pluginsShowWindow() into the plugin show() function to show a plugin window.
- pluginsRemoveWindow() into the plugin shutdown() function to remove a plugin name from the quick access list of currently enabled plugins. Notice: this function should be called before you delete an instance of the Unigine::Widgets::Window widget.
#include <core/unigine.h>

// this function should return your plugin namespace name
string getName() {
    return "TestPlugin";
}

void init(PluginMeta meta) {
    TestPlugin::init(meta);
}

void shutdown() {
    TestPlugin::shutdown();
}

void update(int need_reload) {
    TestPlugin::update(need_reload);
}

void save() {
    TestPlugin::save();
}

void show() {
    TestPlugin::show();
}

/******************************************************************************\
*
* TestPlugin
*
\******************************************************************************/

namespace TestPlugin {

    using Unigine::Widgets;

    Window window;
    PluginMeta meta;

    void init(PluginMeta m) {
        meta = m;
        window = new Window("Title");
        window.setFlags(ALIGN_OVERLAP);
        // add a plugin to the quick access list of currently enabled plugins
        // and initialize a plugin window
        pluginsAddWindow(window,meta.title,meta.name);
    }

    // implement plugin shutdown logic
    void shutdown() {
        // remove a plugin from the quick access list of currently enabled plugins
        pluginsRemoveWindow(window,meta.name);
        removeChild(window);
        delete window;
        log.message("shutdown\n");
    }

    // implement plugin update logic
    void update(int need_reload) {
        // the flag of 1 indicates that the editor resources should be updated
        if(need_reload) {
            // update custom resources, if necessary
        }
    }

    void show() {
        // show a plugin window
        pluginsShowWindow(window,meta.name);
    }

    void save() {
        // implement a world save callback here
    }
}
If a custom plugin interacts with or somehow affects nodes, you should also implement the following node callbacks:
- nodeInit() that will be called on node initialization (for example, when creating a new node, selecting the existing node and so on).
- nodeUpdate() that will be called on node update.
- nodeShutdown() that will be called when shutdown logic is executed for a node (for example, when deleting or deselecting a node and so on).
- nodesUpdate() that will be called when changing nodes hierarchy.
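As a skeleton of these callbacks (the bodies and exact parameter lists here are illustrative assumptions, not from the UNIGINE samples — check the sample plugin in <UnigineSDK>/data/editor/plugins/samples for the authoritative signatures), they can be stubbed out inside the plugin namespace:

```
// inside the TestPlugin namespace
void nodeInit(Node node) {
    // called on node initialization (e.g. creating or selecting a node)
}

void nodeUpdate(Node node) {
    // called on node update
}

void nodeShutdown(Node node) {
    // called when shutdown logic is executed for a node
    // (e.g. deleting or deselecting a node)
}

void nodesUpdate() {
    // called when the nodes hierarchy changes
}
```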
Plugin Meta Data
Plugin meta data is stored in a *.plugin file in the XML format and includes the following:
<?xml version="1.0" encoding="utf-8"?>
<plugin name="editor_plugin" version="1.0">
    <text>Test Plugin</text>
    <description>Custom editor plugin</description>
    <dependencies>HAS_INTERFACE</dependencies>
    <source>editor_plugin.cpp</source>
</plugin>
A <plugin/> element contains 2 attributes:
- name - internal plugin name.
- version - plugin version.
In addition, a <plugin/> element defines the following:
- <text/> - plugin title that will be displayed in the list of plugins in the Plugins window.
- <description/> - text description of a plugin.
- <dependencies/> - external dependencies of a plugin. It can be any external #define required for plugin functioning. For example, the TestPlugin described above won't be loaded without the Interface plugin.
- <source/> - path to a file with a plugin source code relative to the folder where the .plugin file is located.
Meta data stored in the .plugin file is available in the script via the PluginMeta class that is defined as follows:
class PluginMeta {
    string name;        // corresponds to the "name" attribute of the <plugin/> element
    string version;     // corresponds to the "version" attribute of the <plugin/> element
    string title;       // corresponds to the <text/> element
    string description; // corresponds to the <description/> element
    string source;      // corresponds to the <source/> element
}
An instance of this class is passed as an argument to the init() function and then can be used anywhere in the code. For example:
// declare an instance of the PluginMeta class as a global variable
PluginMeta meta;

void init(PluginMeta m) {
    // assign meta data received from the .plugin file to the global variable
    // and then use it anywhere in the code
    meta = m;
}
If your .plugin file is located in the data/editor/plugins directory, it will be automatically added to the list of available UnigineEditor plugins. Otherwise, you will need to specify the editor_plugin command line option with a path to the meta data file as an argument on the engine start-up. The path to the meta data should be relative to the data directory.
main_x64 -editor_plugin "/path/to/custom_plugin.plugin"
To load the created editor plugin, perform the following:
- Choose Plugins -> Manage... on the Menu bar. The Plugins window will open:
- Enable the custom plugin.
You can choose multiple plugins to be loaded in UnigineEditor runtime. Once loaded, a plugin can be accessed via drop down menu that opens when clicking Plugins on the Menu bar.
Modifying a Plugin
After the editor plugin is modified, you can quickly see changes in action:
- Open the Plugins window by choosing Plugins -> Manage... on the Menu bar.
- Disable and then enable the changed plugin.

Source: https://developer.unigine.com/en/docs/2.2.1/tools/editor/plugins/custom/?rlang=cpp
Let's start out with the (possibly) obvious: when I code, I frequently make mistakes (and fix them); but while I am going through that process, function builders are frequently kicking my butt. When you are creating SwiftUI views, you use function builders intensely – and the compiler is often at a loss to explain how I screwed up. And yeah, even with the amazing new updates to the Diagnostic Engine alongside Swift 5.2, which I am loving.
What is a function builder? It is the thing that looks like a normal "do some work" code closure in Swift that you use as the declarative structure when you are creating a SwiftUI view. When you see code such as:
import SwiftUI

struct ASimpleExampleView: View {
    var body: some View {
        Text("Hello, World!")
    }
}
The bit after some View is the function builder closure, which includes the single line Text("Hello, World!").
The first mistake I make is assuming all closures are normal "workin' on the code" closures. I immediately start trying to put everyday code inside of function builders. When I do, the compiler – often immediately and somewhat understandably – freaks out. The error message that appears in Xcode:
Function declares an opaque return type, but has no return statements in its body from which to infer an underlying type
And sometimes there are other errors as well. It really depends on what I stacked together, how I grouped and composed the various underlying elements in that top-level view, and ultimately what I messed up deep inside all that.
I want to do some calculations in some of what I am creating, but doing them inline in the function builder closures is definitely not happening, so my first recommended strategy:
Strategy #1: Move calculations into a function on the view
Most of the reasons I’m doing a calculation is because I want to determine a value to hand in to a SwiftUI view modifier. Fiddling with the opacity, position, or perhaps line width. If you are really careful, you can do some of that work – often simple – inline. But when I do that work, I invariably screw it up – make a mistake in matching a type, dealing with an optional, or something. At those times when the code is inline in a function builder closure, the compiler is having a hell of a hard time figuring out what to tell me about how I screwed it up. By putting the relevant calculation/code into a function that returns an explicit type, the compiler gets a far more constrained place to provide feedback about what I screwed up.
As an example:
struct ASimpleExampleView: View { func determineOpacity() -> Double { 1 } var body: some View { ZStack { Text("Hello World").opacity(determineOpacity()) } } }
Some times you aren’t even doing calculations, and the compiler gets into a tizzy about the inferred type being returned. I have barked my shins on that particular edge repeatedly while experimenting with all the various options, seeing what I like in a visualization. The canvas assistant editor that is available in Xcode is a god-send for fast visual feedback, but I get carried away in assembling lots of blocks with
ZStacks,
HStacks, and
VStacks to see what I can do. This directly leads to my second biggest win:
Strategy #2: Ruthlessly refactor your views into subcomponents.
I am beginning to think that seeing repeated, multiple kinds of stacks together in a single view is possibly a code smell. But more than anything else, keeping the code within a single SwiftUI view as brutally simple as possible gives the compiler a better than odds chance of being able to tell me what I screwed up, rather than throwing up it’s proverbial hands with an inference failure.
There are a number of lovely mechanisms with
Binding that make it easy to compose and link to the relevant data that you want to use. When I am making a subcomponent that provides some visual information that I expect the enclosing view to be tracking, I have started using the
@Binding property wrapper to pass it in, which works nicely in the enclosing view.
TIP:
When you’re using
@Binding, remember that you can make a constant binding in the
PreviewProviderin that same file:
YourView(someValue: .constant(5.0))
While I was writing this, John Sundell has recently published a very in-depth look at exactly this topic. His article Avoiding Massive SwiftUI Views covers another angle of how and why to ruthlessly refactor your views.
On the topic of the mechanics of that refactoring, when we lean what to do, it leads to leveraging Xcode’s canvas assistant editor with
PreviewProvider – and my next strategy:
Strategy #3: use Group and multiple view instances to see common visual options quickly
This strategy is more or less obvious, and was highlighted in a number of the SwiftUI WWDC presentations that are online. The technique is immensely useful when you have a couple of variations of your view that you want to keep operational. It allows you to visually make sure they are working as desired while you continue development. In my growing example code, this looks like:
import SwiftUI struct ASimpleExampleView: View { let opacity: Double @Binding var makeHeavy: Bool func determineOpacity() -> Double { // maybe do some calculation here // mixing the incoming data opacity } func determineFontWeight() -> Font.Weight { if makeHeavy { return .heavy } return .regular } var body: some View { ZStack { Text("Hello World") .fontWeight(determineFontWeight()) .opacity(determineOpacity()) } } } struct ASimpleExampleView_Previews: PreviewProvider { static var previews: some View { Group { ASimpleExampleView(opacity: 0.8, makeHeavy: .constant(true)) ASimpleExampleView(opacity: 0.8, makeHeavy: .constant(false)) } } }
And the resulting canvas assistant editor view:
This does not always help you experiment with what your views look like in all variations. For sets of pre-defined options, or data that influences your view, it can make a huge difference. A good variation that I recommend anyone use is setting and checking the accessibility environment settings to make sure everything renders as you expect. Another that I have heard is in relatively more frequent use: verifying localization rendering.
The whole rapid experimentation and feedback capability is what is so compelling about using SwiftUI. Which leads pretty directly to my next strategy:
Strategy #4: Consider making throw-away control views to tweak your visualization effects
I am not fortunate enough to constantly work closely with a designer. Additionally, I often do not have the foggiest idea of how some variations will feel in terms of a final design. When the time comes, seeing the results on a device (or on multiple devices) makes a huge difference.
You do not want to do this for every possible variation. That is where mocks fit into the design and development process – take the time to make them and see what you think. Once you have narrowed down your options to a few, then this strategy can really kick in and be effective.
In the cases when I have a few number of variations to try out, I encapsulate those options into values that I can control. Then I make a throw-away view that will never be shown in the final code that allows me to tweak the view within the context of a running application. Then the whole thing goes into whatever application I am working on – macOS, iOS, etc – and I see how it looks
When I am making a throw-away control view, I often also make (another throw-away) SwiftUI view that composes the control and controlled view together, as I intend to display it in the application. This is primarily to see the combined effect in a single Preview within Xcode. The live controls are not active in the Xcode canvas assistant editor, but it helps to see how having the controls influences the rest of the view structure.
A final note: Do not be too aggressive about moving code in a SwiftPM package
You may (like me) be tempted to move your code into a library, especially with the lovely SwiftPM capabilities that now exist within Xcode 11. This does work, and it functionally works quite well, from my initial experiments. But there is a significant downside, at least with the current versions (including Xcode 11.4 beta 3) – while you are still doing active development on the library:
If you open the package to edit or tweak it with Xcode, and load and build from only
Package.swift without an associated Xcode project, the SwiftUI canvas assistant preview will not be functioning. If you use an Xcode project file, it works fine – so if you do go down this route, just be cautious about removing the Xcode project file for now. I have filed feedback to Apple to report the issue – both with Xcode 11.3 (FB7619098) and Xcode 11.4 beta 3 (FB7615026).
I would not recommend moving anything into a library until you used had it stable in case. There are also still some awkward quirks about developing code and a dependent library at the same time with Xcode. It can be done, but it plays merry havoc with Xcode’s automatic build mechanisms and CI. | https://rhonabwy.com/tag/xcode/ | CC-MAIN-2020-29 | en | refinedweb |
This section of the code will validate the form inputs
Below is the code you will use to validate whether the inputs have valid data or not. This can be customized for different form field validations.
Note! You can paste the entire code directly in the body section of your webpage to get it working.
<?php
if (isset($_REQUEST['submitted'])) {
// Initialize error array.
$errors = array();
// Check for a proper First name
if (!empty($_REQUEST['firstname'])) {
$firstname = $_REQUEST['firstname'];
$pattern = "/^[a-zA-Z0-9_]{2,20}$/";// This is a regular expression that checks that the whole name is valid characters
if (preg_match($pattern,$firstname)){ $firstname = $_REQUEST['firstname'];}
else{ $errors[] = 'Your Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your First Name.';}
// Check for a proper Last name
if (!empty($_REQUEST['lastname'])) {
$lastname = $_REQUEST['lastname'];
$pattern = "/^[a-zA-Z0-9_]{2,20}$/";// This is a regular expression that checks that the whole name is valid characters
if (preg_match($pattern,$lastname)){ $lastname = $_REQUEST['lastname'];}
else{ $errors[] = 'Your Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your Last Name.';}
//Check for a valid phone number
if (!empty($_REQUEST['phone'])) {
$phone = $_REQUEST['phone'];
$pattern = "/^[0-9]{7,20}$/";
if (preg_match($pattern,$phone)){ $phone = $_REQUEST['phone'];}
else{ $errors[] = 'Your Phone number can only be numbers.';}
} else {$errors[] = 'You forgot to enter your Phone number.';}
if (!empty($_REQUEST['redmapleacer']) || !empty($_REQUEST['chinesepistache']) || !empty($_REQUEST['raywoodash'])) {
$check1 = empty($_REQUEST['redmapleacer']) ? 'Unchecked' : 'Checked';
$check2 = empty($_REQUEST['chinesepistache']) ? 'Unchecked' : 'Checked';
$check3 = empty($_REQUEST['raywoodash']) ? 'Unchecked' : 'Checked';
} else {$errors[] = 'You forgot to select at least one tree.';}
}
//End of validation
Sends the email if validation passes
The following code is what sends the email. The inputs must pass the previous validation in order for the email to send. You will need to replace the “to” email address with the email address you want to receive the email to.
if (isset($_REQUEST['submitted'])) {
if (empty($errors)) {
$from = "From: Our Site! <noreply@example.com>"; //Site name - use an address on your own domain
// Change this to your email address you want to form sent to
$to = "your@email.com";
$subject = "Admin - Our Site! Comment from " . $firstname . " " . $lastname . "";
$message = "Message from " . $firstname . " " . $lastname . "
Phone: " . $phone . "
Red Maple Acer: " . $check1 ."
Chinese Pistache: " . $check2 ."
Raywood Ash: " . $check3 ."";
mail($to,$subject,$message,$from);
}
}
?>
Error Reporting Code
<?php
//Print Errors
if (isset($_REQUEST['submitted'])) {
// Print any error messages.
if (!empty($errors)) {
echo '<hr /><h3>The following occurred:</h3><ul>';
// Print each error.
foreach ($errors as $msg) { echo '<li>'. $msg . '</li>';}
echo '</ul><h3>Your mail could not be sent due to input errors.</h3><hr />';}
else{echo '<hr /><h3 align="center">Your mail was sent. Thank you!</h3><hr /><p>Below is the message that you sent.</p>';
echo "Message from " . $firstname . " " . $lastname . "
Phone: ".$phone."
";
echo "
Red Maple Acer: " . $check1 . "";
echo "
Chinese Pistache: " . $check2 . "";
echo "
Raywood Ash: " . $check3 . "";
}
}
//End of errors array
?>
Prints the contact form
This is the form that will display for the visitor to fill out.
Fill out the form below.
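The form markup itself did not survive in this copy of the article; a minimal sketch that matches the field names the validation code expects (the names must match the $_REQUEST keys exactly, and the hidden "submitted" field is what triggers the PHP above) might look like this:

```html
<!-- Minimal sketch of the contact form; field names must match the
     $_REQUEST keys used in the validation code above. -->
<form action="" method="post">
  <p>First Name: <input type="text" name="firstname" /></p>
  <p>Last Name: <input type="text" name="lastname" /></p>
  <p>Phone: <input type="text" name="phone" /></p>
  <p><input type="checkbox" name="redmapleacer" value="yes" /> Red Maple Acer</p>
  <p><input type="checkbox" name="chinesepistache" value="yes" /> Chinese Pistache</p>
  <p><input type="checkbox" name="raywoodash" value="yes" /> Raywood Ash</p>
  <input type="hidden" name="submitted" value="true" />
  <p><input type="submit" value="Send" /></p>
</form>
```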
You can paste the entire code directly in the body section of your webpage to get it working. We have more tutorials with other methods to send email from your website at the following links.
Using phpMailer to Send Mail through PHP
Thoughts on “How to Create a Custom PHP Contact Form with Validation”
Excellent post! It has been massively helpful in my business site development. It works like a charm.
Thanks.
Where is the PHP code located on the server? Is an application required to run the PHP code?
If you’re referring to the website files, then they will be in the PUBLIC_HTML folder by default. The web server recognizes PHP by default. If you’re not familiar with coding websites using PHP, then you should speak with an experienced website developer/designer.
You Should Upload Your Files On A Real Domain To See The Result Of The PHP Code
Thank you for the awesome tutorial. Hard to find an example with working validation.
Stephanie, your question is unclear, but if you’re trying to obtain old code, you may need to contact your old web host.
Hello. Can this code be adapted so that, instead of patterns, it checks the user input in a form field agains values stored in a mysql database?
Thank you!
It may be possible, but you will have to custom-code/develop a solution. We have helpful examples of PHP code that interacts with a MySQL Database, and how to use PHP to connect to and retrieve data from MySQL.
Thank you,
John-Paul
Hello good evening. How can i get this kind of chat forum on a site?
Hello,
Thanks for the question, but we’re not really sure what you’re asking. Are you referring to the answer response format being used for this page? If have any further questions or comments, please let us know.
Regards,
Arnel C.
hi,
i am confused about which mail id the mail will get sent from.
Hello Lesio x,
The FROM field is the string representing your website you’re sending from after an email is validated. If ANY id is given that does not give a permission problem, then the site name will work. It’s using the previous validation provided by the code at the beginning of the article.
I hope that helps to answer your question! If you require further assistance, please let us know!
Regards,
Arnel C.
I just read the question from sharath. I will explain what he wants to ask you. His problem is: at run time, when he runs his HTML file, control has to transfer to the mail.php file and perform all the operations (like sending mail), but instead of getting output he got the whole code as output, whatever he wrote in the php file. So why is the code displayed and not the output?
Hello,
Is he using our code specifically? Or is he using a different set of code? If he’s using different code, then I suggest having the issue reviewed by a developer/programmer. Unfortunately, we cannot troubleshoot custom code. If you’re using our specific code, then please provide any error messages or an example of the output you’re seeing.
If you have any further questions, please let us know.
Kindest regards,
Arnel C.
Hello,
i have some problems with your code (thank you anyway for it). I added some parameters to fit my site best, but when i click submit, i always get the checkbox-not-checked error (i want ALL my checkboxes to be checked to send the email) and also the first parameter "owner_name" always gives me the error "You forgot to enter your Video Owner Name."
My php code (situated in another php file):
<?php
if (isset($_REQUEST['submitted'])) {
// Initialize error array.
$errors = array();
// Check for a proper Owner name
if (!empty($_REQUEST['owner_name'])) {
$owner_name = $_REQUEST['owner_name'];
$pattern = "/^[a-zA-Z0-9\_]{2,20}/";// This is a regular expression that checks if the name is valid characters
if (preg_match($pattern,$owner_name)){ $owner_name = $_REQUEST['owner_name'];}
else{ $errors[] = 'Your Video Owner Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your Video Owner Name.';}
// Check for a proper Band name
if (!empty($_REQUEST['band_name'])) {
$band_name = $_REQUEST['band_name'];
} else {$errors[] = 'You forgot to enter your Band Name.';}
// Check for a proper Video Name
if (!empty($_REQUEST['band_video'])) {
$band_video = $_REQUEST['band_video'];
} else {$errors[] = 'You forgot to enter your Video Name.';}
// Check for a proper Band email
if (!empty($_REQUEST['band_email'])) {
$band_email = $_REQUEST['band_email'];
if (!filter_var($band_email, FILTER_VALIDATE_EMAIL)) {
$errors[] = 'Your Band Email is wrong or you missed it.';}
}
else{ $band_email = $_REQUEST['band_email'];}
// Check for a proper Video Download Link
if (!empty($_REQUEST['video_link'])) {
$video_link = $_REQUEST['video_link'];
} else {$errors[] = 'You forgot to enter your Video Download Link.';}
// Check for a proper Video thnumbnail Link
if (!empty($_REQUEST['thumbnail_link'])) {
$thumbnail_link = $_REQUEST['thumbnail_link'];
} else {$errors[] = 'You forgot to enter your Thumbnail Download Link.';}
// Check for a proper Video Description
if (!empty($_REQUEST['video_description'])) {
$video_description = $_REQUEST['video_description'];
} else {$errors[] = 'You forgot to enter your Video Description.';}
if (!empty($_REQUEST['accept_tos']) && !empty($_REQUEST['accept_rights']) && !empty($_REQUEST['accept_publish'])) {
$check1 = $_REQUEST['accept_tos'];
if (empty($check1)){$check1 = 'Unchecked';}else{$check1 = 'Checked';}
$check2 = $_REQUEST['accept_rights'];
if (empty($check2)){$check2 = 'Unchecked';}else{$check2 = 'Checked';}
$check3 = $_REQUEST['accept_publish'];
if (empty($check3)){$check3 = 'Unchecked';}else{$check3 = 'Checked';}
} else {$errors[] = 'You forgot to check all checkboxes.';}
}
//End of validation
if (isset($_REQUEST['submitted'])) {
if (empty($errors)) {
$from = "From: eztv.altervista.org"; //Site name
// Change this to your email address you want to form sent to
$to = "ezcoretv@gmail.com";
$subject = "PUBLISH VIDEO BY " . $band_name . "";
$message = "Owner name: " . $owner_name . "
Band Name: " . $band_name . "
Video Name: " . $band_video ."
Band Email: " . $band_email . "
Video Download Link: " . $video_link . "
Thumbnail Link: " . $thumbnail_link . "
Video Description: " . $video_description . "";
mail($to,$subject,$message,$from);
}
}
?>
<?php
//Print Errors
if (isset($_REQUEST['submitted'])) {
// Print any error messages.
if (!empty($errors)) {
echo '<hr /><h3>The following errors occurred:</h3><ul>';
// Print each error.
foreach ($errors as $msg) { echo '<li>'. $msg . '</li>';}
echo '</ul><h3>Your mail could not be sent due to input errors. Please go back and solve those errors.</h3><hr />';}
else{echo '<hr /><h3 align="center">Your mail was sent. Thank you!</h3><hr />';}
}
?>
Hello Matteo,
Have you tried to error trap the variable failing with something like var_dump($variable);
Best Regards,
TJ Edens
Hi
I am able to run this code, and it shows "Your mail was sent. Thank you!", but I'm confused about where this mail is being sent; there are no fields regarding that. Kindly help me out.
Hello Maryann,
Thank you for contacting us. If you are following the above guide, it will send to the address you put in this line:
Thank you,
John-Paul
Notice: Undefined variable: email2 in D:\XAMPP\htdocs\secure_email_code.php on line 24
Notice: Undefined variable: email2 in D:\XAMPP\htdocs\secure_email_code.php on line 25
Help please.
Hello Szabolcs,
From the error you have shown us, the file secure_email_code.php uses a variable called $email2 that has not been previously defined. Please remember that PHP works top down, so you have to declare variables above where you use them.
Best Regards,
TJ Edens
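A sketch of the fix TJ describes: initialize the variable before any branch that might reference it. The $email2 name mirrors the error message above; adapt it to your own file:

```php
<?php
// Give $email2 a default value at the top of the script so it is
// defined even when the form field is missing from the request.
$email2 = '';
if (isset($_REQUEST['email2'])) {
    $email2 = $_REQUEST['email2'];
}
// Later lines (e.g. lines 24-25 in the notice) can now use $email2
// without triggering an "Undefined variable" notice.
echo $email2;
```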
I'm confused. You say to put this all directly into the web page, but in the comments you talk about making a .php file of some parts and an .html file of others. Please clarify.
Tim
Most commonly you will want to put the form in the html file and the processing in a separate php file.
What would I set up for the $pattern if I need to make sure the field would work for an email address?
Hello Brad,
Thank you for contacting us. That is not covered in the above guide, but I did find a helpful guide via online search titled PHP 5 Forms – Validate E-mail and URL.
Thank you,
John-Paul
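Since the guide itself doesn't show one, here is a hedged sketch of two common approaches: PHP's built-in `filter_var()`, and a `$pattern` in the same style as the guide's name/phone checks. The regex is deliberately loose and illustrative only:

```php
<?php
$errors = array();
$email  = isset($_REQUEST['email']) ? $_REQUEST['email'] : '';

// Option 1: the built-in validator (available since PHP 5.2)
if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    $errors[] = 'Please enter a valid email address.';
}

// Option 2: a loose regex in the same style as the guide's other checks
// (anything@anything.anything - not a full RFC check)
$pattern = "/^[^@\s]+@[^@\s]+\.[^@\s]+$/";
if (!preg_match($pattern, $email)) {
    $errors[] = 'Please enter a valid email address.';
}
```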
When I click the send button the values are cleared, and I do not receive them in my mail.
I advise checking your SMTP settings to make sure everything is configured properly. Likewise, you can follow your mail logs for specific errors. Or, if you’re on a shared server I suggest you contact Live Support, so we can check the logs for you.
I want to send a feedback form to my Gmail from a localhost server. It shows the message that it sent successfully, but I did not get the mail.
*code removed by moderator*
Hello Saran,
You will want to contact your Support department to see what the email logs say about that particular email.
Kindest Regards,
Scott M
Hello Rikhil,
You will want to contact your Support department to see what the email logs say about that particular email. Since the function gave a successful result, they will check to see how the server handled the email to see if there was an error.
Kindest Regards,
Scott M
This code is working. It shows Your mail was sent. Thank you!
..
but it didn't send the mail
It is really helpful. Just wanted to say THANKS
Thank you author. 🙂
Hello, this is the error message I get:
Notice: Undefined variable: name in C:\wamp\www\kk\mail.php on line 45
Warning: mail(): SMTP server response: 550 The address is not valid. in C:\wamp\www\kk\mail.php on line 52
Your mail was sent. Thank you!
My code is:
<h2>Contact us</h2>
<p>Fill out the form below.</p>
<form action="mail.php" method="post">
<label>First Name: <br />
<input name="firstname" type="text" value="- Enter First Name -" /><br /></label>
<label>Last Name: <br />
<input name="lastname" type="text" value="- Enter Last Name -" /><br /></label>
<label>Phone Number: <br />
<input name="phone" type="text" value="- Enter Phone Number -" /><br /></label>
<label>Red Maple Acer:
<input name="redmapleacer" type="checkbox" value="Red Maple Acer" /><br /></label>
<label>Chinese Pistache:
<input name="chinesepistache" type="checkbox" value="Chinese Pistache" /><br /></label>
<label>Raywood Ash:
<input name="raywoodash" type="checkbox" value="Raywood Ash" /><br /></label>
<input name="" type="reset" value="Reset Form" /><input name="submitted" type="submit" value="Submit" />
</form>
mail.php is:
<?php
if (isset($_REQUEST['submitted'])) {
// Initialize error array.
$errors = array();
// Check for a proper First name
if (!empty($_REQUEST['firstname'])) {
$firstname = $_REQUEST['firstname'];
$pattern = "/^[a-zA-Z0-9\_]{2,20}/";// This is a regular expression that checks if the name is valid characters
if (preg_match($pattern,$firstname)){ $firstname = $_REQUEST['firstname'];}
else{ $errors[] = 'Your Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your First Name.';}
// Check for a proper Last name
if (!empty($_REQUEST['lastname'])) {
$lastname = $_REQUEST['lastname'];
$pattern = "/^[a-zA-Z0-9\_]{2,20}/";// This is a regular expression that checks if the name is valid characters
if (preg_match($pattern,$lastname)){ $lastname = $_REQUEST['lastname'];}
else{ $errors[] = 'Your Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your Last Name.';}
//Check for a valid phone number
if (!empty($_REQUEST['phone'])) {
$phone = $_REQUEST['phone'];
$pattern = "/^[0-9\_]{7,20}/";
if (preg_match($pattern,$phone)){ $phone = $_REQUEST['phone'];}
else{ $errors[] = 'Your Phone number can only be numbers.';}
} else {$errors[] = 'You forgot to enter your Phone number.';}
if (!empty($_REQUEST['redmapleacer']) || !empty($_REQUEST['chinesepistache']) || !empty($_REQUEST['raywoodash'])) {
$check1 = $_REQUEST['redmapleacer'];
if (empty($check1)){$check1 = 'Unchecked';}else{$check1 = 'Checked';}
$check2 = $_REQUEST['chinesepistache'];
if (empty($check2)){$check2 = 'Unchecked';}else{$check2 = 'Checked';}
$check3 = $_REQUEST['raywoodash'];
if (empty($check3)){$check3 = 'Unchecked';}else{$check3 = 'Checked';}
} else {$errors[] = 'You forgot to enter your Phone number.';}
}
//End of validation
if (isset($_REQUEST['submitted'])) {
if (empty($errors)) {
$from = "iykmoses@gmail.com"; //Site name
// Change this to your email address you want to form sent to
$to = "iykmoses@gmail.com";
$subject = "Admin - Our Site! Comment from " . $name . "go";
$message = "Message from " . $firstname . " " . $lastname . "
Phone: " . $phone . "
Red Maple Acer: " . $check1 ."
Chinese Pistache: " . $check2 ."
Raywood Ash: " . $check3 ."";
mail($to,$subject,$message,$from);
}
}
?>
<?php
//Print Errors
if (isset($_REQUEST['submitted'])) {
// Print any error messages.
if (!empty($errors)) {
echo '<hr /><h3>The following occurred:</h3><ul>';
// Print each error.
foreach ($errors as $msg) { echo '<li>'. $msg . '</li>';}
echo '</ul><h3>Your mail could not be sent due to input errors.</h3><hr />';}
else{echo '<hr /><h3 align="center">Your mail was sent. Thank you!</h3><hr />';
}
}
//End of errors array
?>
I don't understand the error, please help.
Hello King,
You will need to ensure your SMTP settings are correct for the email to process. WAMPs are not able to send emails unless you have set it up to be a mail server. This may be something you need to test live.
Kindest Regards,
Scott M
Hi,
I am looking for a tool which creates single-site and multi-site setups in IIS.
This tool should create the site, FTP, site directory, and site app pool.
The tool needs to check whether the site user exists, whether the site directory already exists, whether the IIS user for this particular site exists, and do all validation.
Hello Himanshu,
Thank you for contacting us. Since we run all Linux servers instead of IIS, we do not provide any such solution.
Here is a helpful link to the official Microsoft guide on How to set up your first IIS Web site.
Thank you,
John-Paul
Hi, please send a zip folder link. I am a fresher and need the validation form.
Hello freshar,
Thank you for contacting us. We do not have any form available via zip folder link. Instead, the above guide provides the code as an example. You can paste the entire code directly in the body section of your webpage to get it working.
Keep in mind you will have to customize the code to meet your specific needs.
Thank you,
John-Paul
Please help me. I pasted the script into my website (it works very well), but when I refresh the site, it sends another mail to us. What can I do about this problem?
Hello Gero,
That happens because the 'submitted' state is still in effect. When the page is initially created, the state is not 'submitted'. Once you submit the form, the browser has it set and does not turn it off. Contact form pages were not made to be refreshed.
Kindest Regards,
Scott M
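One common way around this, not from the guide itself, is the Post/Redirect/Get pattern: after a successful send, redirect so a refresh re-requests the page with GET instead of re-posting the form. A minimal sketch (thankyou.php is a hypothetical page; $to, $subject, $message and $from come from the validation code as in the guide):

```php
<?php
if (isset($_REQUEST['submitted'])) {
    if (empty($errors)) {
        mail($to, $subject, $message, $from);
        // Redirect so refreshing the result page does not re-send the mail.
        // header() must run before the script prints any HTML.
        header('Location: thankyou.php');
        exit;
    }
}
```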
Hi Arnel,
I put the code in the html and while no errors were reported it failed to send the email. Do you know of any code that is 5.4 compatible?
Thanks,
Dan
Hello Dan,
Thank you for contacting us. You may want to try our other guide: Creating a Contact Form with FormMail.
Alternately, if you are using a CMS such as WordPress, Drupal, or Joomla, it may be easier to use a 3rd party plugin or extension.
Thank you,
John-Paul
Hi Arnel,
The HTML code is:
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Untitled Document</title>
</head>
<body>
<h2>Contact us</h2>
<p>Fill out the form below.</p>
<form action="mail.php" method="post">
<label>Name: <br />
<input name="name" type="text" value="" /><br /></label>
<label>Email: <br />
<input name="email" type="text" value="" /><br /></label>
<label>Message: <br />
<input name="message" type="text" value="" /><br /></label>
<input name="" type="reset" value="Reset Form" /><input name="submitted" type="submit" value="Submit" />
</form>
</body>
</html>
It calls mail.php which is:
<?php
if (isset($_REQUEST['submitted'])) {
if (empty($errors)) {
$from = "From: Our Site!"; //Site name
// Change this to your email address you want to form sent to
$to = "danvoils@indy.net";
$subject = "Website Comment from " . $name . "";
$message = "Message from " . $name . " " . $email . "
Message: " . $message . "";
mail($to,$subject,$message,$from);
}
}
?>
Where do I put the error reporting code? As I said the above files work great on PHP 5.2.17 but not on 5.3.24.
Dan
Hello Dan,
Our apologies; the person who originally created this post is no longer with us, so the code has not been updated. It would have to be reviewed and re-written at this point. If you still have problems with its placement, then please let us know.
I hope this helps to answer your question, please let us know if you require any further assistance.
Regards,
Arnel C.
Where do I put it? Here is what is in my mail.php:
<?php
if (isset($_REQUEST['submitted'])) {
if (empty($errors)) {
$from = "From: Our Site!"; //Site name
// Change this to your email address you want to form sent to
$to = "email address removed";
$subject = "Website Comment from " . $name . "";
$message = "Message from " . $name . " " . $email . "
Message: " . $message . "";
mail($to,$subject,$message,$from);
}
}
?>
I tried putting `int error_reporting ([ int $level ] )` on line two but Dreamweaver reported a syntax error.
Hello Dan,
The code would need to go into the body of your website , NOT in the MAIL.PHP application. I’m not familiar with your website code, but if you have an index page or main page that is used for the form , then the code should be appearing in that page. I hope that helps to answer your question! If you require further assistance, please let us know!
Regards,
Arnel C.
No errors given, but it will not send the email. I do have a server running 5.2 (which will be upgraded shortly against my will) that it works perfectly on. However, the server I need it to run on is running 5.3.24. The one it works on is running 5.2.17. I uploaded the exact same files to both. I am calling mail.php from the html file.
Hello Dan,
I would suggest trying the following php code to see if any errors are being suppressed. Other than that it should work the same. There may be some compatibility issues but the error reporting will let us know.
Best Regards,
TJ Edens
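TJ's snippet did not survive in this thread; a typical error-reporting block, placed at the very top of mail.php while debugging, looks something like this:

```php
<?php
// Show every category of PHP error while debugging,
// so failures are not silently suppressed.
error_reporting(E_ALL);
ini_set('display_errors', '1');

// ...rest of mail.php follows...
```

Remember to remove or disable this once the form works, so visitors never see raw error output.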
How do I make this compatible with PHP 5.4? It will work on 5.2 but my server is 5.3 and will soon be 5.4.
Thank you.
Hello Dan,
Are you receiving any errors while trying to use this script on PHP 5.4? Everything in here should be backwards compatible to PHP 5.2.
Best Regards,
TJ Edens
Great post! I am looking forward to reading your blog.
Hello, I copied the same code mentioned above and created a separate PHP file called mail.php, and in my HTML file I used <form action="mail.php" method="post">, but it goes to my mail.php page showing the code I copied.
Please help me resolve this error.
Hello sharath,
Thank you for contacting us today. We are happy to help, but will require some additional information since it is not clear what you are asking.
Can you provide a link to the form, so we can test the error?
Please include any additional information that will help us replicate the problem.
Thank you,
John-Paul
Say I made a file for this called email.php. I have a website written in HTML, CSS, and Javascript. How would I put this into the rest of the code so it shows up on the website?
Hello Justice,
Thank you for your question. I recommend that you paste the entire code directly in the body section of your webpage as described above. This will allow it to show up on the website.
Alternately, you could navigate to the file PHP file directly, and it should run. Such as:
Thank you,
John-Paul
No, I have only tested from Google Drive. I’ve only picked up HTML and CSS in the last month, so I’m still relatively new to all of this and I don’t really know where to begin when it comes to hosting.
Hello Kyle,
I did some checking on Google Drive and did find the following statement on their page:
“Google Drive does not support web resources that make use of server-side scripting languages like PHP.”
So it looks like Drive is just for basic HTML and CSS, you would need to set up a WAMP environment on your local computer or get a free/cheap hosting plan in order to be able to test your php pages.
Kindest Regards,
Scott M
I am completely new to PHP so please forgive me if this sounds stupid.
I copied/pasted the php to its own file, put in my email address, and named it mail.php. In the HTML I set the action to mail.php. The problem is when I click submit, I get 404’d.
Right now the HTML, CSS, PHP, and the image files are all in Google Drive instead of just being stored locally; they are not on a real website. I do this so that I can access the pages from any device and see how they react.
Does anyone know why I am getting the 404 error?
Thanks in advance.
Hello Kyle,
What you described sounds normal as far as the code goes. Have you tested it out on a webserver environment like WAMP? I am not familiar with testing on Google Drive, so I do not have any troubleshooting steps to take on that environment.
Kindest Regards,
Scott M
Hi,
I am having problems trying to require email format. I can't seem to find a pattern that just allows a simple email, or is it more complex than that? I just need email format and a required email field; I don't need to validate the actual email address. Thanks in advance for any help. So far everything else is working, I just can't get this part!
Kyle
Hello Kyle,
You can simply take the email address field and check to see that it has something in it. If you like, you can also check to ensure it has an @ symbol. This is not super detailed, but it gives an idea of whether they have entered an email address format in the field.
Kindest Regards,
Scott M
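A minimal sketch of exactly the check Scott describes: required field plus a bare @ test. This intentionally does not fully validate the address:

```php
<?php
$errors = array();
$email  = isset($_POST['email']) ? $_POST['email'] : '';

if ($email === '') {
    $errors[] = 'You forgot to enter your email address.';
} elseif (strpos($email, '@') === false) {
    // strpos() returns false when the @ symbol is absent,
    // so this only catches input with no @ at all.
    $errors[] = 'Your email address must contain an @ symbol.';
}
```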
This is the code i used
<?php
if (isset($_POST['submitted'])) {
// Initialize error array.
$errors = array();
// Check for a proper First name
if (!empty($_POST['firstname'])) {
$firstname = $_POST['firstname'];
$pattern = "/^[a-zA-Z0-9\_]{2,20}/";// This is a regular expression that checks if the name is valid characters
if (preg_match($pattern,$firstname)){ $firstname = $_POST['firstname'];}
else{ $errors[] = 'Your Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your First Name.';}
// Check for a proper Last name
if (!empty($_POST['lastname'])) {
$lastname = $_POST['lastname'];
$pattern = "/^[a-zA-Z0-9\_]{2,20}/";// This is a regular expression that checks if the name is valid characters
if (preg_match($pattern,$lastname)){ $lastname = $_POST['lastname'];}
else{ $errors[] = 'Your Name can only contain _, 1-9, A-Z or a-z 2-20 long.';}
} else {$errors[] = 'You forgot to enter your Last Name.';}
//Check for a valid phone number
if (!empty($_POST['phone'])) {
$phone = $_POST['phone'];
$pattern = "/^[0-9\_]{7,20}/";
if (preg_match($pattern,$phone)){ $phone = $_POST['phone'];}
else{ $errors[] = 'Your Phone number can only be numbers.';}
} else {$errors[] = 'You forgot to enter your Phone number.';}
if (!empty($_POST['redmapleacer']) || !empty($_POST['chinesepistache']) || !empty($_POST['raywoodash'])) {
$check1 = $_POST['redmapleacer'];
if (empty($check1)){$check1 = 'Unchecked';}else{$check1 = 'Checked';}
$check2 = $_POST['chinesepistache'];
if (empty($check2)){$check2 = 'Unchecked';}else{$check2 = 'Checked';}
$check3 = $_POST['raywoodash'];
if (empty($check3)){$check3 = 'Unchecked';}else{$check3 = 'Checked';}
} else {$errors[] = 'You forgot to enter your Phone number.';}
}
?>
To me it is perfect. It just saved me a lot of trouble.
Hello Brody,
I'm sorry that you're having problems with your contact form code. We unfortunately cannot provide support for your code. You will need to speak with a developer (if you were not the author of the code) to see where the problem is occurring. You should look at your error logs and see if anything has been generated. Apologies that we can't help directly with coding issues of this nature.
Regards,
Arnel C.
Hello, when I used this code nothing seems to happen; it goes to my contact.php page, but nothing is displayed. Here is what I have for the HTML portion.
<form action="contact.php" method="post">
<dl>
<dt>First Name: <input type="text" name="firstname"></dt><br>
<dt>Last Name: <input type="text" name="lastname"></dt><br>
<dt>Phone Number: <input type="text" name="phone"></dt><br>
</dl>
<input name="submitted" type="submit">
</form>
Warning: mail(): Failed to connect to mailserver at "localhost" port 25, verify your "SMTP" and "smtp_port" setting in php.ini or use ini_set() in C:\wamp\www\train\mail.php on line 54 .
What does this error mean and how can I rectify it?
Try changing the port number from 25 to 587 and see if that helps.
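On a WAMP setup those values live in php.ini, or can be set at run time before calling mail(). A sketch (the host name and address are placeholders for your own mail server):

```php
<?php
// Point PHP's mail() at an SMTP server that accepts your connections.
ini_set('SMTP', 'smtp.example.com');        // placeholder host
ini_set('smtp_port', '587');                // submission port instead of 25
ini_set('sendmail_from', 'you@example.com'); // placeholder sender
```

Note that PHP's built-in mail() on Windows cannot authenticate to the server; if your mail host requires SMTP authentication (as port 587 usually does), a library such as PHPMailer is the usual route instead.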
<html>
<body bgcolor="#999900">
<center>
<form method="post">
<p>Owner name:
<input type="text" name="user">
<br>
<br>
Enter Your Email ID :
<input type="text" name="email">
<br>
<br>
Enter one Security Question :
<input type="<input type img style="background-image:url( Site 2/10342910_1426737800919777_3695156238055597333_n.jpg)"" name="secu">
</p>
<p>property size:
<label>
<select name="select" accesskey="3" tabindex="3">
<post>550</option>
<post>800</option>
<post>1000</option>
<post>1200/option>
<input type onKeyDown=""
</label>
<br>
<br>
<input type="submit" >
<br>
<br>
</p>
</form>
***************************************************************************
<br>
<marquee>
<?php
<method="post" action="">
$a = $_POST["user"];
$d = $_POST["email"];
$e = $_POST["secu"];
$f = strlen($a);
$g = "thirteen" ;
if("$f" >=10);
<?
{
echo " Plz Enter UserID , less then 10 character "; }
{ echo " Security Answer is incorrect "; }
else
{
echo " Thank you for submission "; }
?>
<br>
</marquee>
***********************************************************************
</body>
</html>
The PHP script is not supported, please help us with this page design. My logic is: enter the owner name, and the property size value selected in the list goes to a text box.
Hello nilesh,
We do not normally support custom code. What do you mean it is not supported? Are you getting an error message of some kind?
Kindest Regards,
Scott M
good good
Hello all,
I really need help. I don’t know where to put my email, so when someone fills in the contact form, they click on the send button and it appears in my mailbox. So my question is; where in this script do I have to add something to receive messages in my mailbox? This script I use is also linked on a jquery. So I have a php form and a jquery. Please help me intelligent people! 😀
In my html I have added this:
<form id="form" action="#">
="phone">
<input type="tel" value="Telephone:">
<br class="clear">
<span class="error error-empty">*This is not a valid phone number.<">send</a>
<div class="clear"></div>
</div>
</fieldset>
</form>
The linked Jquery code is this:
//forms
;(function($){
$.fn.forms=function(o){
return this.each(function(){
var th=$(this)
,_=th.data('forms')||{
errorCl:'error',
emptyCl:'empty',
invalidCl:'invalid',
notRequiredCl:'notRequired',
successCl:'success',
successShow:'4000',
mailHandlerURL:'#',
ownerEmail:'#',
stripHTML:true,
smtpMailServer:'localhost',
targets:'input,textarea',
controls:'a[data-type=reset],a[data-type=submit]',
validate:true,
rx:{
".name":{rx:/^[a-zA-Z'][a-zA-Z-' ]+[a-zA-Z']?$/,target:'input'},
".state":{rx:/^[a-zA-Z'][a-zA-Z-' ]+[a-zA-Z']?$/,target:'input'},
".phone":{rx:/^\+?(\d[\d\-\+\(\) ]{5,}\d$)/,target:'input'},
".message":{rx:/.{20}/,target:'textarea'}
},
preFu:function(){
_.labels.each(function(){
var label=$(this),
inp=$(_.targets,this),
defVal=inp.val(),
trueVal=(function(){
var tmp=inp.is('input')?(tmp=label.html().match(/value=['"](.+?)['"].+/),!!tmp&&!!tmp[1]&&tmp[1]):inp.html()
return defVal==''?defVal:tmp
})()
trueVal!=defVal
&&inp.val(defVal=trueVal||defVal)
label.data({defVal:defVal})
inp
.bind('focus',function(){
inp.val()==defVal
&&(inp.val(''),_.hideEmptyFu(label),label.removeClass(_.invalidCl))
})
.bind('blur',function(){
_.validateFu(label)
if(_.isEmpty(label))
inp.val(defVal)
,_.hideErrorFu(label.removeClass(_.invalidCl))
})
.bind('keyup',function(){
label.hasClass(_.invalidCl)
&&_.validateFu(label)
})
label.find('.'+_.errorCl+',.'+_.emptyCl).css({display:'block'}).hide()
})
_.success=$('.'+_.successCl,_.form).hide()
},
isRequired:function(el){
return !el.hasClass(_.notRequiredCl)
},
isValid:function(el){
var ret=true
$.each(_.rx,function(k,d){
if(el.is(k))
ret=d.rx.test(el.find(d.target).val())
})
return ret
},
isEmpty:function(el){
var tmp
return (tmp=el.find(_.targets).val())==''||tmp==el.data('defVal')
},
validateFu:function(el){
el.each(function(){
var th=$(this)
,req=_.isRequired(th)
,empty=_.isEmpty(th)
,valid=_.isValid(th)
if(empty&&req)
_.showEmptyFu(th.addClass(_.invalidCl))
else
_.hideEmptyFu(th.removeClass(_.invalidCl))
if(!empty)
if(valid)
_.hideErrorFu(th.removeClass(_.invalidCl))
else
_.showErrorFu(th.addClass(_.invalidCl))
})
},
getValFromLabel:function(label){
var val=$('input,textarea',label).val()
,defVal=label.data('defVal')
return label.length?val==defVal?'nope':val:'nope'
}
,submitFu:function(){
_.validateFu(_.labels)
if(!_.form.has('.'+_.invalidCl).length)
$.ajax({
type: "POST",
url:_.mailHandlerURL,
data:{
name:_.getValFromLabel($('.name',_.form)),
phone:_.getValFromLabel($('.phone',_.form)),
state:_.getValFromLabel($('.state',_.form)),
message:_.getValFromLabel($('.message',_.form)),
owner_email:_.ownerEmail,
stripHTML:_.stripHTML
},
success: function(){
_.showFu()
}
})
},
showFu:function(){
_.success.slideDown(function(){
setTimeout(function(){
_.success.slideUp()
_.form.trigger('reset')
},_.successShow)
})
},
controlsFu:function(){
$(_.controls,_.form).each(function(){
var th=$(this)
th
.bind('click',function(){
_.form.trigger(th.data('type'))
return false
})
})
},
showErrorFu:function(label){
label.find('.'+_.errorCl).slideDown()
},
hideErrorFu:function(label){
label.find('.'+_.errorCl).slideUp()
},
showEmptyFu:function(label){
label.find('.'+_.emptyCl).slideDown()
_.hideErrorFu(label)
},
hideEmptyFu:function(label){
label.find('.'+_.emptyCl).slideUp()
},
init:function(){
_.form=_.me
_.labels=$('label',_.form)
_.preFu()
_.controlsFu()
_.form
.bind('submit',function(){
if(_.validate)
_.submitFu()
else
_.form[0].submit()
return false
})
.bind('reset',function(){
_.labels.removeClass(_.invalidCl)
_.labels.each(function(){
var th=$(this)
_.hideErrorFu(th)
_.hideEmptyFu(th)
})
})
_.form.trigger('reset')
}
}
_.me||_.init(_.me=th.data({forms:_}))
typeof o=='object'
&&$.extend(_,o)
})
}
})(jQuery)
$(window).load(function(){
$('#form').forms({
ownerEmail:'#'
})
})
Hello.
From skimming the code, ownerEmail:'#' is what you would change to be your email, but I would suggest contacting the developers of the script just to be sure.
Best Regards,
TJ Edens
I am running this script on a localhost but the email is not being delivered even though the message says mail has been sent. Do I need to make changes to php.ini?
Hello Bob,
Thank you for your comment. PHP errors can be enabled to display and log errors using your local php.ini file.
This can provide more details into what is happening.
Also, be sure to check your spam box/email filter for the test emails.
If you have any further questions, feel free to post them below.
Thank you,
John-Paul
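One more thing worth knowing here: mail() returns a boolean, so a "sent" message in the guide's code only means the function was reached, not that a mail server accepted the message. A sketch of checking it ($to, $subject, $message, $headers are assumed to be set as in the guide):

```php
<?php
$sent = mail($to, $subject, $message, $headers);
if (!$sent) {
    // mail() returned false: PHP could not hand the message
    // to any mail server. Check the SMTP settings in php.ini.
    error_log('mail() failed - check SMTP settings in php.ini');
}
```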
Hi,
Thanks in advance for your code. Everything works fine, but I couldn't receive the information at my email. I used one PHP file for the entire code. If you could help it would be appreciated.
Kumar
Hello Kumar,
Thank you for your question. We are happy to help, but will need some additional information. Have you confirmed the email settings you entered are correct?
Have you followed the above guide? Are you having any problems with a step?
Have you reviewed the email logs to confirm if your script is communicating with the email server. (Live support can help you with that if you are on a shared server).
Thank you,
John-Paul
Hello Anandita,
Probably, you are writing the php code and the html code into the same xyz.html file.
If you are using this instruction with the php script and html code in one file:
Then you will certainly get the error which you have reported.
I'll suggest you make a separate PHP file and name it "mail.php", write the HTML code in a separate HTML file, and use the following instruction:
Once you've done this, you will not receive any errors. It is a tested method; I experienced the same, but when I added mail.php, everything became fine.
I hope this will help you. Though my reply to your comment is late, I tried to help you and the many others who are facing the same error.
Sincerely,
Malik
Thank you Arnel C.
I really appreciate your prompt reply. I followed your suggestion and now all the emails are coming to the inbox folder instead of going to spam. BINGO! Thank you.
I’d really like to share the part of php code which I changed.
The original php code has:
Now, just change “From: Our Site!” with a valid email address like below:
By doing this, email will come to inbox with a valid reference email address and this will prevent email going into spam folder.
Just thought I'd share, so that someone might get help from this. Anyway, brilliant work from the admins.
Thanks and Sincerely,
Malik
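To make Malik's change concrete: the exact original line isn't preserved in his comment, but based on the guide's code earlier in the thread, the shape of the fix is to replace the site-name From header with a valid mailbox, ideally on your own domain:

```php
<?php
// Before (the guide's original style - not a valid From header):
$from = "From: Our Site!";

// After (Malik's fix - a real address; example.com is a placeholder):
$from = "From: webmaster@example.com";

mail($to, $subject, $message, $from);
```

Spam filters weigh the From header heavily, which is why this small change moves the mail out of the spam folder.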
Hello,
I appreciate the admin sharing this useful PHP-based code. It really helped me a lot in designing my PHP contact form.
I have designed it and tested it successfully. Emails are coming to my email account but unfortunately in the SPAM folder with the tag (Unknown Sender).
I’ll be grateful to you for your generous help to me.
Thanks and Sincerely,
Malik
Hello Malik,
Sorry for the problem with the form. It looks like the issue is a result of the “FROM” setting for the form. I would recommend that you set it to an email address instead of a website. If you have any further questions or comments, please let us know.
Regards,
Arnel C.
I want to make a website using PHP for my relatives; they have a factory. Which site should I refer to for the knowledge? I want something like this only, just copy and paste, so it is easy for me to create the website. Please reply soon.
Hello Ash,
Thanks for the comment. Creating a website using only PHP will not be a copy and paste process no matter what tool you use. There are many applications out there that make it easy to create a website that uses primarily PHP, but I would suggest using WordPress. Our WordPress Education Channel provides a lot of information on how to install and start using the application.
I hope this helps to answer your question, please let us know if you require any further assistance.
Regards,
Arnel C.
I want to design an email web page using PHP; how do I start?
Hello Hamman,
Can you provide some more information on what you are wanting to create? Generally a good place to start coding simple websites is W3 Schools.
Kindest Regards,
TJ Edens
I have the same issue as Anandita. I even cut and pasted it as-is, yet portions of the PHP script are getting printed to the page. Any idea what else could be causing this?
Thanks.
It sounds like your code is not being correctly parsed as PHP. Does the file have a .php extension?
the php runs off the server, the functionality should not be affected. The browser incompatibility will likely come in display inconsistencies. These are normally found in the CSS files/statements.
Thanks for the reply Scott.
The websites and forms DISPLAY well in every browser, but they seem to only actually send the message in Firefox and Chrome. Most messages in IE and some in Safari don't actually go through. This is driving me nuts.
As Scott stated, with it being PHP, the functionality of the form itself should not be affected, as the code is executed on the server rather than in the browser. Are you using this same contact form that we have provided within this article?
This article is very interesting. Thanks!
I “inherited” the maintenance of some websites that use similar PHP functions for the contact forms, so this helps me understand what is going on.
BUT, I’m having a big problem with browser compatibility. Any idea what I should check on? This is driving me nuts!
Hello Lou,
Since the php runs off the server, the functionality should not be affected. The browser incompatibility will likely come in display inconsistencies. These are normally found in the CSS files/statements in the page.
Kindest Regards,
Scott M
Hello, can I create a home page using PHP? If yes, then can I please get the know-how?
thanks
If you are not familiar with PHP, you may want to review some of the PHP articles on W3Schools which will teach you the basics of PHP to get you started.
What if you used radio buttons instead of checkboxes. How would the code look?
I get the HTML part, but the radio validation is not sending the value of the radio button clicked.
Thanks
Hello Betty,
The checkbox is for when someone may select one or more of the options. Radio buttons force only one selection. Did you have code you wanted us to look at for errors? Replacing the checkboxes with radio buttons needs code modification not only on the form itself, but up where the validation is done.
Kindest Regards,
Scott M
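A hedged sketch of what Scott describes: both the form and the validation block change. A radio group shares one name and submits only the selected value; the tree names below just reuse the checkbox example from earlier in the thread:

```php
<!-- One name, several values: the browser sends only the selected one -->
<label><input type="radio" name="tree" value="Red Maple Acer"> Red Maple Acer</label>
<label><input type="radio" name="tree" value="Chinese Pistache"> Chinese Pistache</label>
<label><input type="radio" name="tree" value="Raywood Ash"> Raywood Ash</label>

<?php
// Validation side: a radio group arrives as a single value,
// or is absent entirely when nothing was selected.
$errors = array();
if (!empty($_POST['tree'])) {
    $tree = $_POST['tree'];
} else {
    $errors[] = 'You forgot to choose a tree.';
}
?>
```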
hi
Will I have to write this whole code in one page or two pages? It has three partitions, and the PHP code starts in partition one and ends in partition two.
Hello Ravi,
You can put all three partitions in the same file. It was only split up on the article to show what purpose each partition serves.
Kindest Regards,
Scott M
Hi,
I am using this entire code but the only problem is that the form is being emailed on refreshing/reloading the page
Hello Sak,
Thank you for pointing that out. I found the error and corrected it. It has been tested and will not send an email if it is refreshed before the Submit button is pressed. A more updated version of this article should come out soon with a bit more polish.
Kindest Regards,
Scott M
Hi, I've a problem making a contact-us page. I'm using your web example for 'post a comment'. It works and there are no errors with that, but the comment cannot be sent to the email; the email does not receive the comment from the contact-us page. Can you help me solve it?
Thanks.
Mira
Hello Mira,
If you are using just this article, then the email should send through fine. Are you one of our customers? Are you using additional SMTP settings to send your email through the server? If so, they will need to be correct in order to send out.
Kindest Regards,
Scott M
I am using the above code for sending mail but am getting an error like this:
The following occurred:
‘; // Print each error. echo ‘
Your mail could not be sent due to input errors.
‘;} else{echo ‘
Your mail was sent. Thank you!
Below is the message that you sent.
‘; echo “Message from ” . $firstname . ” ” . $lastname . ”
Phone: “.$phone.”
“; echo “
Red Maple Acer: ” . $check3 . “”; echo “
Chinese Pistache: ” . $check2 . “”; echo “
Raywood Ash: ” . $check3 . “”; } } //End of errors array ?>
I don't know why this error comes when we click the submit button. Please help me soon.
Hello Anandita,
Be sure to check your code for syntax errors, particularly the placement of the quotes. Did you cut and paste it as is? It seems to work fine for me when doing that. Did you make any changes?
Kindest Regards,
Scott M
very nice..just copy and paste and it works 🙂
Here is my Strategy design pattern tutorial. You use this pattern if you need to dynamically change an algorithm used by an object at run time. Don't worry, just watch the video and you'll get it.
The pattern also allows you to eliminate code duplication. It separates behavior from super and subclasses. It is a super design pattern and is often the first one taught.
All of the code follows the video to help you learn.
If you liked this video, tell Google so more people can see it.
Code & Comments from the Video
ANIMAL.JAVA
public class Animal {

    private String name;
    private double height;
    private int weight;
    private String favFood;
    private double speed;
    private String sound;

    // Instead of using an interface in a traditional way
    // we use an instance variable that is a subclass
    // of the Flys interface.
    // Animal doesn't care what flyingType does, it just
    // knows the behavior is available to its subclasses.
    // This is known as Composition: instead of inheriting
    // an ability through inheritance, the class is composed
    // with Objects with the right ability.
    // Composition allows you to change the capabilities of
    // objects at run time!
    public Flys flyingType;

    // Setter used by the subclasses (the other getters and
    // setters follow the same pattern)
    public void setSound(String sound){
        this.sound = sound;
    }

    /* BAD
     * You don't want to add methods to the super class.
     * You need to separate what is different between subclasses
     * and the super class
    public void fly(){
        System.out.println("I'm flying");
    }
    */

    // Animal pushes off the responsibility for flying to flyingType
    public String tryToFly(){
        return flyingType.fly();
    }

    // If you want to be able to change the flyingType dynamically
    // add the following method
    public void setFlyingAbility(Flys newFlyType){
        flyingType = newFlyType;
    }
}
DOG.JAVA
public class Dog extends Animal{

    public void digHole(){
        System.out.println("Dug a hole");
    }

    public Dog(){
        super();
        setSound("Bark");

        // We set the Flys interface polymorphically
        // This sets the behavior as a non-flying Animal
        flyingType = new CantFly();
    }

    /* BAD
     * You could override the fly method, but we are breaking
     * the rule that we need to abstract what is different to
     * the subclasses
    public void fly(){
        System.out.println("I can't fly");
    }
    */
}
BIRD.JAVA
public class Bird extends Animal{

    // The constructor initializes all objects
    public Bird(){
        super();
        setSound("Tweet");

        // We set the Flys interface polymorphically
        // This sets the behavior as a flying Animal
        flyingType = new ItFlys();
    }
}
FLYS.JAVA
// The interface is implemented by many other
// subclasses that allow for many types of flying
// without effecting Animal, or Flys.
// Classes that implement new Flys interface
// subclasses can allow other classes to use
// that code, eliminating code duplication.
// I'm decoupling: encapsulating the concept that varies.

public interface Flys {
    String fly();
}

// Class used if the Animal can fly
class ItFlys implements Flys{
    public String fly() {
        return "Flying High";
    }
}

// Class used if the Animal can't fly
class CantFly implements Flys{
    public String fly() {
        return "I can't fly";
    }
}
ANIMALPLAY.JAVA
public class AnimalPlay{

    public static void main(String[] args){

        Animal sparky = new Dog();
        Animal tweety = new Bird();

        System.out.println("Dog: " + sparky.tryToFly());
        System.out.println("Bird: " + tweety.tryToFly());

        // This allows dynamic changes for flyingType
        sparky.setFlyingAbility(new ItFlys());
        System.out.println("Dog: " + sparky.tryToFly());
    }
}
It is the first time I have encountered this design pattern, which uses an interface in a nice way.
I’m glad to hear that. I’ll cover many more. If you are into thinking about building great software, you’ll love the tutorials I make over the next few months 🙂
You’re awesome! Thank you so much for your tutorials.
You’re very welcome 🙂 Thank you for the kind words
Your tutorials are pretty awesome, the best stuff I've seen on YouTube, hands down. Anyway, thanks a lot for taking the time to post. I am dutifully reviewing the code as you recommended, and I came across something I wasn't sure of…
In the comments line above you mention the following:
// without effecting Animal, or Flys.
Not sure if it is my novice thinking or not, but wanted to know if “effecting” or “affecting” was the word intended here.
Thanks for clarifying,
Allan.
Thank you very much 🙂 I do my best to make the best videos I can. In regards to your question, honestly I don’t know the difference between the two words. I’m really good at a few things, but grammar has never been my strong point. Sorry about that
Alan, it’s clearly “effecting” meaning – “to have consequence”
Derek was right the first time, maybe you should look up “pedantic” 🙂
Actually, “affecting” would be right in this case.
“Effecting” means to bring about, or to cause/create. “Affecting” means to influence, or to cause/create a change in. You are not bringing Animal about. You are causing a change in Animal. If you wanted to use “effecting”, you could, but you would have to say it like this: “without effecting a change in Animal”.
Thank you very much for the tutorials they are just awesome !!! Learnt them pretty fast.
You’re very welcome 🙂
Nice tutorials. I think, the best video tutorials I have ever seen in YouTube. Thank you so much boss. 🙂 Keep up the good work.
Thank you very much 🙂 most people don’t know about my videos so I’m happy you found and liked them
Thank you for the awesome videos on design pattern. well done!!!
You have done amazing videos and very simple to understand with the code. Thanks a lot 🙂
You’re very welcome 🙂 I’m glad you liked them
First of all; great tutorials!!
Watched most of the design patterns. Had just one question about the explanation of this pattern. You're talking about composition (Animal.java), but I had the feeling it was aggregation? I find it hard to tell the difference. (I know their definitions.)
Thank you 🙂 yes I misspoke and you are correct. Most people just refer to everything as composition, but you understand the difference.
Thank you for the quick reply!!
It was probably you that told me the differences in some tutorial ;). I'll continue with the refactoring tutorials, and I am looking forward to the upcoming Android tuts! Your videos are easy to follow and nicely backed by the code on the website. Amazing job!
Thank you very much 🙂 I do my best to present everything in an understandable format. I glad you like the videos
Firstly, thanks for your great video and enclosed source code, but I also have a question for you:
Why don't you encapsulate the flyingType field in the Animal class with the protected modifier, so that the subclasses can directly access this field but the clients can't? They would have to use the setFlyingAbility method instead.
You could definitely do that. There are many ways to create each design pattern. They are but a guide for writing flexible code
Great Video.
Question:
I want to design an object for exporting a file to XLS, CSV, or Word in ASP.NET. They have the same properties but different values. The initial reaction is to use a switch statement, which breaks the Open-Closed principle. However, I don't want to create an object for each export type during implementation. Instead, I want to take the value from the button click event for the export process and, using that value, create a reference that evaluates the value being passed and determines which object to retrieve, like an XLS export object, CSV export, etc.
Feedback on design would be great
I’m sorry, but I haven’t used ASP for many years and I don’t think I could help you. Sorry
I’m confused.. how does the Animal class know about Flys.. maybe I need to watch the video again.
Flys flyingType is stored in every Animal object as a field. In the Bird class we then give it the ability to fly with flyingType = new ItFlys().
I cover the strategy pattern again here Code Refactoring Strategy
I hope that helps
Great stuff ! Thanks
Thank you very much 🙂
Thank you so much for your time and effort! 🙂 This really helps!
You’re very welcome 🙂 I’m glad I was able to help
Hello Derek,
Very good explanation of Strategy Design Pattern. I have one question regarding this. Can strategy class have access to members of animal class? If yes how should animal class access should be given to strategy class?
Thank you 🙂 If by doing that you are increasing coupling, then that should be avoided. I hope that helps.
Darek,
In my application I need to access members of the Animal class in the strategy class. I am planning to use an interface which gives the strategy access to only a few members. Is that good for avoiding coupling, or do I need something else?
I may not be understanding the question. The strategy pattern is used to separate behavior from the super and subclasses. So, it would defeat the point if they communicated directly with each other.
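For the question above about giving a strategy access to Animal's members, here is one low-coupling option sketched in Java (my own illustration, not code from the video; the NamedFlys names are hypothetical): the Animal passes the strategy only the values it needs as method arguments, so the strategy never holds a reference to an Animal.

```java
// Hypothetical variant of the video's Flys interface: the strategy
// receives only the data it needs (here, a name) as an argument,
// so it never needs a reference back to the Animal object.
interface NamedFlys {
    String fly(String animalName);
}

class NamedItFlys implements NamedFlys {
    public String fly(String animalName) {
        // Uses only the value it was handed, nothing from Animal itself
        return animalName + " is flying high";
    }
}

class NamedCantFly implements NamedFlys {
    public String fly(String animalName) {
        return animalName + " can't fly";
    }
}
```

Animal's tryToFly() would then call something like flyingType.fly(name), and nothing in the strategy depends on Animal's internals.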
Thanks for this perfect tutorial, and for your effort.
You’re very welcome 🙂 Thank you for visiting!
amazing explanation…. 🙂 Thanx 🙂
Thank you 🙂
Very good explanation. I really liked the way you presented the patterns. Keep up the good work Sir.
Thank you very much 🙂
Great Tutorial Brian. Thanks a lot!
Congratulations. This is the first video I watched, and now I will carry on watching all the rest. Great stuff!
Thank you very much 🙂 I did my best to make the design patterns easy to understand and fun to learn about
Great stuff, sir. Awesome voice too!
Thank you 🙂
Great tutorial. Thanks for sharing your wisdom.
Thank you for the compliment 🙂 You’re very welcome
Hi Derek,
These days I am working in C#, and I had used Java around 7 years back in my college days, but you explained things with such beauty that it helped me a lot.
Hats off to you… Excellent Job
I am a fan of yours… Cheers
Thank you very much 🙂 I try to do the best I can.
Awesome work !
Thank you very much 🙂
I've just started with design patterns. To begin with, I was told by my friends to start with some book, but after going through the first four videos on your YouTube channel, I have cancelled the order for that book. I find this sufficient to start implementing the patterns.
Thanks for all the stuff. And, please accept my congratulations for the same.
I’m very happy that you are enjoying the design patterns videos. I also cover Refactoring, which is normally covered after design patterns.
You’re very welcome – Derek
I’m working through memorizing every design pattern I can and you’ve made it that much easier. I really appreciate the work on the videos. Thank you!
I’m very happy I have been able to help 🙂 You’re very welcome.
Great video!
I've started reading a book about design patterns, but it lacks the simple examples you show in your tutorials!
Would it make any difference if I made the Flys interface an abstract class, and ItFlys/CantFly derived classes of it?
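As an aside, here is a minimal sketch of the abstract-class variant this question describes (the class names are my own, not from the video). It behaves the same at run time; the practical difference is that a Java class can extend only one class, so strategy classes written this way give up the ability to extend anything else, while implementing an interface leaves that open.

```java
// Abstract-class variant of the Flys strategy (hypothetical names).
abstract class FlysBase {
    abstract String fly();
}

// Each concrete strategy now EXTENDS instead of IMPLEMENTS,
// using up its single superclass slot.
class ItFlysBase extends FlysBase {
    String fly() { return "Flying High"; }
}

class CantFlyBase extends FlysBase {
    String fly() { return "I can't fly"; }
}
```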
Awesome!… I referred multiple examples but I was not able to understand the real benefit of this pattern. But your video gave me a very clear and thorough understanding. Great work and thanks for all your effort on this!…
Thank you 🙂 I’m very happy that I was able to clear it up
Well, this is so much better than reading those theoretical books which do not give much insight into the code. The best tutorial for sure! Thanks 🙂
Thank you 🙂 I’m glad you enjoyed it.
Waw! Another great video!
I have a final exam tomorrow at 9, and you did a tutorial about every pattern we covered during the semester!
Definitely the most effective study method I've used in many years!
Thanks again for using those simple examples; every teacher should be like you!
I’m very happy that I was able to help. Good luck on your exams 🙂
Your explanation and easy-to-remember technique are something that can't be thanked enough. Thanks a lot…
Thank you for the nice compliment 🙂 You’re very welcome.
Hey, thanks for bringing up this tutorial. It is very easy to understand; I liked it. Now I can learn design patterns very fast.
You’re very welcome 🙂 I’m glad I was able to clear up this topic for so many people
Sir,
I was wondering how Animal.java knows about the Flys interface??? I couldn't understand how they are related…
Thank you for clarifying
Take a look at this line
// Composition allows you to change the capabilities of objects at run time
public Flys flyingType;
The Flys object is stored in every Animal. It can then be changed if needed without disturbing the code
I have only one word for you: YourTutorialsAreGreat!! Keep it up. I’m going to watch all of them..
Thank you very much 🙂 I’m happy that you like them.
You’re very welcome 🙂 I have all of them here. They aren’t neat because I never expected to give them away, but every slide is here.
I have started the design pattern series. Thanks Derek.
Now I am clear on the Strategy pattern; I have just one question.
What are association and aggregation, and what is the difference between these and composition?
Thanks
Pradeep
Hi Pradeep,
You’re very welcome 🙂 This diagram explains everything you asked about with examples as well. I hope it helps.
Thanks Derek, the diagram helps a lot. Now I'm clear on association, aggregation, and composition 🙂
Thanks
Pradeep
Great I’m glad it helped 🙂
Hi Derek,
thank you very much for your amazing videos… I'm learning so much thanks to you. I just can't tell you how glad I am to have come across your videos!
♥
Thank you 🙂 I’m very happy that you enjoy them. Many more are coming.
Derek, thank you buddy. You are the best!!!
Thank you 🙂
Very good series! You are truly demystifying an important topic. Above, in the comments here on the Strategy pattern, you say, "The strategy pattern is used to separate behavior from the super and subclasses." With that one sentence, I finally clearly understood the purpose of it. Often, with design patterns, teachers get lost in explaining the mechanics, and the reason-to-use gets lost in the wash of information. Can you make a list of these kinds of succinct formulations and post them on your site? I.e., one sentence per design pattern. I think for certain patterns, like Strategy, the "why" is not obvious whereas the mechanics are simple. I would truly appreciate it if you focused like a laser on the whys, so that we know when to use a given pattern. And, as you do in your video, please state the whys in different ways (perhaps two or more sentences instead of one) so that the clarification may come from different angles.
Thanks again for your excellent efforts!
Thank you 🙂 It is always great to hear that I was able to clear everything up for people. I’ll see what I can do about that list. These tutorials became popular many months after I originally posted them, so I stopped short of making a video like you requested. I’ll see what i can do now.
Derek,
Glad to hear! I will look forward to that list very much.
I was thinking of a more complete sentence to describe the Strategy pattern, “The strategy pattern is used to separate behavior from the super and subclasses, when you want to invoke this dissimilar behavior with the same function in the client (making use of polymorphism).” Also, I was reading on the Strategy pattern page (), and it says there that the Strategy pattern and the Bridge pattern have the same UML diagram, “but differ in their intent”. Again, another example of the teacher being clear on the easy stuff (mechanics/structure) but using jargon, almost maniacally trying to keep secret the main point: why. Why would you use one or the other? This is where the esoteric side of design patterns starts to lose us regular coders who are not academics, but simply want to write better code. The jargon starts to get too thick, and the reasons for using a particular pattern are obscured. We want credible examples and, more importantly, scenarios indicating when a given pattern is appropriate.
Thanks for listening to my mini rant.
I’ll try to explain them in simple terms.
The Bridge Pattern allows you to create 2 types of abstract objects that can interact with each other in numerous extendable ways. In my bridge design pattern example I created a way for an infinite variety of remote controls to interact with an infinite variety of different devices.
The strategy pattern allows you to change what an object can do while the program is running. In my strategy design pattern example I showed how you can add an object that represents whether an Animal can fly or not to the Animal class. This attribute was then added to every class that extended from Animal. Then at run time I was able to decide if an Animal could fly or not.
I hope that helps 🙂
I am a little bit confused about the use of strategy and factory, both created at runtime. Kindly elaborate on the difference.
Thank you in advance and I love your videos.
In my strategy design pattern tutorial I demonstrated how you can at run time give any Animal subclass the ability to fly.
1. I added the Flys object to Animal which all subclasses then receive with : public Flys flyingType;
2. ItFlys and CantFly implement the Flys interface
3. At runtime I can change the version of Flys : sparky.setFlyingAbility(new ItFlys());
4. Now sparky can fly
With the Factory pattern tutorial, I showed how the EnemyShipFactory pops out a different type of ship based on user input:
EnemyShip newShip = null;

if (newShipType.equals("U")){
    return new UFOEnemyShip();
} else if (newShipType.equals("R")){
    return new RocketEnemyShip();
} else if (newShipType.equals("B")){
    return new BigUFOEnemyShip();
} else return null;
I hope that helps 🙂
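One way to see the two patterns side by side is a small combined sketch (my own, with hypothetical names; not code from either video): a factory method decides which Flys-style strategy object to hand back, and that strategy then supplies the behavior that runs.

```java
// Strategy side: interchangeable behaviors behind one interface.
interface FlysChoice {
    String fly();
}

class FlierChoice implements FlysChoice {
    public String fly() { return "Flying High"; }
}

class NonFlierChoice implements FlysChoice {
    public String fly() { return "I can't fly"; }
}

// Factory side: picks WHICH strategy object to create based on
// input, just like EnemyShipFactory picks a ship type.
class FlysFactory {
    static FlysChoice make(String type) {
        if (type.equals("F")) {
            return new FlierChoice();
        }
        return new NonFlierChoice();
    }
}
```

The factory's responsibility ends once the object is created; the strategy's responsibility is what the object does afterward, which is why the two patterns look so similar yet serve different intents.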
Thank you for your explanation. I have another question if you don't mind. Why do some examples use a factory in a calculator program and others use strategy?
Most patterns have the same goal which is to add flexibility or to streamline the code. They can very often be used interchangeably.
Thank you very much. For almost 12 hours I have been going back and forth trying to understand strategy and factory. They are almost exactly the same, even their UML.
Your videos are good, and I also like the code refactoring.
When are you going to start J2EE, Spring, Hibernate…? I guess many are eager for those videos. I hope soon, and hopefully it will happen this year.
You’re very welcome 🙂 Before Java Enterprise I want to make an Android tutorial that teaches everything in a format in which anyone will be able to make any Android app they can imagine. Sorry it is taking so long, but I’m very serious about teaching Android right now.
You are like the best Tutor I’ve ever seen. The quality of the lessons and the way you teach is amazing. Thanks a ton
Thank you very much 🙂
Hi, our professor taught us this pattern, but I still don't get how to access the Dog digHole() method if I use the Animal type, other than forcing it:
((Dog) Sparky).digHole(); which is not appropriate 🙂
I'm confused. That code you have there does work. You wouldn't have to cast it, though, if you had the original method in the super class.
Great video and awesome teaching!! Thanks a lot for taking the effort and making things very interesting 😀
Thank you very much 🙂
Awesome tutorial, Clearly described. Thanks a lot for your videos.
Thank you 🙂 You’re very welcome
Hey Derek! I have been watching most of your OOP videos. Well, I am not proud of myself for not having the courtesy to thank you until now.
You are an amazing teacher. I will soon convert all the code you used in the design pattern videos into PHP and put it on GitHub. I will let you know when I finish 🙂
Thank you for taking the time to tell me that you have found them useful. I greatly appreciate that. Yes definitely post your link.
Hello Derek,
Thank you for these videos they are very informative and helpful.
Can you please explain why the flyingType variable is public? If it is public, why do you need the setter method?
You’re very welcome 🙂 It didn’t need to be public.
Really really nice tutorials, I love watching your tutorials on Design Patterns, I have planned that I’ll watch all of your tutorials on youtube.
Thanks a lot for all the tutorials 🙂
Can you please tell me the screencasting software you used to create these tutorials
Thank you 🙂 I use Camtasia 2 to record.
It would be really nice if you redid the videos in C#. I understand that most concepts are the same, but it would still be better for a whole lot of newbies. Thank you so much for the videos, really great stuff, although you are talking a little bit fast 🙂
I’m working on a C# tutorial. I’ll see what I can do patterns wise as well.
As a current Design Patterns student going through Head First Design Patterns, your videos are a helpful reinforcement. Thanks!
Thank you 🙂 I’m very glad that I could help
Thanks! These videos are many times more interesting for me than the boring books and lectures))) In addition, they help me learn English (it's not my native language).
You’re very welcome 🙂
Banas for president,
Thank you for your help
That’s funny 🙂 You’re very welcome.
What if you just had an interface Flys and had Bird implement it directly? Can you elaborate on why this option leads to duplicate code and is not good? Thanks.
You could definitely do that instead. I just wanted to demonstrate the pattern in a simple way.
Tuning SQL Server Management Operations
Dr Scripto
Summary: Microsoft PFE, Thomas Stringer, talks about using Windows PowerShell and tuning SQL Server Management operations.
Microsoft Scripting Guy, Ed Wilson, is here. I would like to welcome back guest blogger, Thomas Stringer…
I spend a lot of time talking with customers and working with them regarding how they can manage their large enterprise SQL Server environments seamlessly with Windows PowerShell and automation engineering in general. Fun puzzles and obstacles always come along, and they cause me to think outside of the box, and possibly veer off “the norm” when I am reaching out to a large number of machines.
Automation and coded management is much like SQL Server development. If you have a single table, with only 10 rows, and you write a query that runs once a year, it probably doesn’t make a difference if that query completes in 2 milliseconds or 2000 milliseconds. But if the table is 10 billion rows, and users or applications are running this query hundreds of times a second, you can surely bet that you would want to make sure you have tuned and optimized that query to the nth degree.
Managing enterprise environments with Windows PowerShell is just like that. If you have a script that is run once a year against 10 servers, it probably doesn’t make much of a difference if that script takes a few extra minutes. But with the current environments that we’re working with, we’re talking about thousands and thousands of servers, and even tens and hundreds of thousands of databases. You can probably imagine that a small cumbersome operation gets quite magnified rather quickly.
The performance tuning arena is no longer restricted to SQL Server developers or application developers. Because enterprise environments are growing quickly, operations developers are now forced to benchmark and optimize their own code. That’s a good thing. There’s nothing quite like an illustrative and code-filled example to show the type of methodology I take when tuning management operations with Windows PowerShell (in my case—being a SQL Server person—the end result is almost always SQL related).
Let’s say that I’m a SQL Server DBA managing an extremely large environment and I have now been tasked to pull a handful of metrics from all of the transaction logs, from all of the databases, from all of the instances in my environment. I think to myself, “Excellent. I’m a SQL-slash-PowerShell guy. I’ll have no problem with this at all.”
So like many of my automated SQL tasks, I jump straight into utilizing SMO (Microsoft.SqlServer.Management.Smo namespace). For those of you who may not be familiar with SMO, it exposes SQL Server management in a very easy-to-use, object-oriented manner. In the case of my aforementioned task, this is as simple as pulling values from the Database.LogFiles property (a LogFileCollection object). Following is some sample script that would accomplish this:
$SqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerName)
foreach ($Database in $SqlServer.Databases) {
    foreach ($LogFile in $Database.LogFiles) {
        $LogFile |
            Select-Object @{Name = "DatabaseName"; Expression = {$Database.Name}},
                @{Name = "SizeKB"; Expression = {$_.Size}},
                @{Name = "UsedSpaceKB"; Expression = {$_.UsedSpace}},
                @{Name = "FreeSpaceKB"; Expression = {$_.Size - $_.UsedSpace}},
                @{Name = "Used %"; Expression = {$_.UsedSpace / $_.Size * 100}}
    }
}
Nothing to it. Sit back and watch the log-file usage metrics get retrieved and calculated. In my test environment (like with most things we do, these numbers will drastically vary from environment to environment), I get an average of about 160 milliseconds for a run.
You may be thinking, “Sure that’s pretty quick, no big deal.” And you’d be absolutely right—if you’re running this against a single instance, or ten instances, or even 100 instances. But what happens when we start running this against thousands of servers, and we are pressed for time on this diagnostic retrieval? Great question, because it could become a very real issue.
Now I want to simulate what this would look like when we run this methodology against 5,000 servers. To test this and time it, I have bundled this logic (in addition to the code to time the operation with the System.Diagnostics.StopWatch class) in a function:
function Test-SqlBenchmarkSmo {
param (
[Parameter(Mandatory = $true)]
[string]$SqlServerName
)
Add-Type -Path "C:\Program Files\Microsoft SQL Server\110\SDK\Assemblies\Microsoft.SqlServer.Smo.dll"
$StopWatch = New-Object System.Diagnostics.Stopwatch
$StopWatch.Start()
# [SMO] — retrieve log file consumption
#
$SqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerName)
$SqlServer.ConnectionContext.ApplicationName = "SqlBenchmarkSMO"
foreach ($Database in $SqlServer.Databases) {
    foreach ($LogFile in $Database.LogFiles) {
        $LogFile |
            Select-Object @{Name = "DatabaseName"; Expression = {$Database.Name}},
                @{Name = "SizeKB"; Expression = {$_.Size}},
                @{Name = "UsedSpaceKB"; Expression = {$_.UsedSpace}},
                @{Name = "FreeSpaceKB"; Expression = {$_.Size - $_.UsedSpace}},
                @{Name = "Used %"; Expression = {$_.UsedSpace / $_.Size * 100}} |
            Out-Null
    }
}
$StopWatch.Stop()
return $StopWatch.ElapsedMilliseconds
}
All I’ve really done here is the same SMO operation of getting our log-file metrics by looping through the databases on a given instance (passed as a parameter to the function), all the while timing this. The return of the function is going to be the amount of milliseconds that a single run has taken.
To loop through 5,000 servers in my environment, I needed to load up a string array ($SqlServerNameArray) with a list of instance names (basically two distinct servers that are duplicated 2,500 times). Then I loop through this list of names, calling my Test-SqlBenchmarkSmo function, adding up the durations, and storing it in the $CumulativeDuration variable:
$CumulativeDuration = 0
foreach ($SqlServerIndividualName in $SqlServerNameArray) {
$CumulativeDuration += Test-SqlBenchmarkSmo -SqlServerName $SqlServerIndividualName
}
Write-Host "SMO Benchmark — exhaustive test, $($SqlServerNameArray.Count) servers — :: Total: $($CumulativeDuration / 1000) seconds Average: $($CumulativeDuration / $SqlServerNameArray.Count) ms" -ForegroundColor Yellow
In my case I get the following output:
Wow! For 5,000 servers, it takes over 13 minutes to get what I need. Now, I’m going to preface the continuation of this blog post by saying: If this meets your business requirements and works for you, leave it at that and go forward with your acceptable solution. I am not one for needless optimizations, especially when there are dozens of other priorities that need to be complete today before you close shop.
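The 13-minute total squares with the roughly 160 ms per-server average measured earlier; here is a quick sanity check of that arithmetic (written in Java only for consistency with the rest of this page; the helper name is mine):

```java
class BenchmarkMath {
    // Total fleet run time in minutes, given a server count and an
    // average per-server duration in milliseconds.
    static double totalMinutes(int servers, double avgMs) {
        return servers * avgMs / 60000.0;
    }
}
```

totalMinutes(5000, 160) works out to about 13.3 minutes, and at the later sub-10 ms ADO.NET average the same 5,000 servers come in under a minute, matching the roughly 49-second result reported further down.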
If my 13-minute operation is simply unacceptable, I need to start exploring other ways to get this information with a shorter runtime. There are dozens of ways to pull this transaction log-file information from all of these instances, and SMO provides only one of those solutions. Albeit a solution that is relatively simple, quick, and perfect for 95% (estimated, of course) of the requirements. But if we're in that 5% slice, we may need to step away from SMO to cut down this runtime duration.
Now, being a SQL Server guy, I know that behind the scenes, SMO is just writing normal T-SQL queries against my instance. Nothing special, but it is very well packaged, has a lot of handling for annoying cases (such as different versions), and other instance specifics that we wouldn’t normally think of code for—or even want to concern ourselves with. The power of SMO is that it takes a lot of the nitty gritty operations development and makes it an abstraction. But that abstraction may indeed come with additional duration, like we are seeing here.
If I want to take out the extra steps that SMO is performing, all I really need to do is be the direct composer of the T-SQL that is getting executed on the instances. One way that I can do this is through directly leveraging the ADO.NET .NET Framework Data Provider for SQL Server (System.Data.SqlClient). There are most definitely a lot of downsides to taking this route instead of the SMO way:
- I need to know T-SQL, and SQL Server in general, relatively well.
- I forfeit all safeguards and checks that may have been happening previously.
- Anything that needs to be done, now has to explicitly be done by me and my own code.
I know all this, and I still want to move forward and see what kind of gains I can get from working directly with System.Data.SqlClient. So I mimic my previous benchmark against 5,000 servers. Instead, this time I use SqlConnection, SqlCommand, and the other ADO.NET classes:
function Test-SqlBenchmarkAdoDotNet {
param (
[Parameter(Mandatory = $true)]
[string]$SqlServerName
)
$StopWatch = New-Object System.Diagnostics.Stopwatch
$StopWatch.Start()
# [ADO.NET] — retrieve log file consumption
#
$ConnectionString = "data source = $SqlServerName; initial catalog = master; trusted_connection = true; application name = SqlBenchmarkADOdotNET"
$SqlConnection = New-Object System.Data.SqlClient.SqlConnection($ConnectionString)
$GetLogSpaceCmd = New-Object System.Data.SqlClient.SqlCommand
$GetLogSpaceCmd.Connection = $SqlConnection
$GetLogSpaceCmd.CommandText = "
declare @sql varchar(max) = '';
select @sql +=
'use ' + quotename(name) + ';' +
char(13) + char(10) +
'select database_name = ''' + name + ''',
size_kb = size * 8,
used_space_kb =
fileproperty(name, ''spaceused'') * 8,
free_space_kb =
(size - fileproperty(name, ''spaceused'')) * 8,
used_percentage =
fileproperty(name, ''spaceused'') * 1.0 / size * 100
from sys.database_files
where type_desc = ''log'';'
from sys.databases
where state_desc = 'online';
exec (@sql);"
$SqlDataAdapater = New-Object System.Data.SqlClient.SqlDataAdapter($GetLogSpaceCmd)
$ResultSet = New-Object System.Data.DataSet
try {
$SqlDataAdapater.Fill($ResultSet) | Out-Null
foreach ($DataTable in $ResultSet.Tables) {
$DataTable |
Select-Object @{Name = "DatabaseName"; Expression = {$_.database_name}},
@{Name = "SizeKB"; Expression = {$_.size_kb}},
@{Name = "UsedSpaceKB"; Expression = {$_.used_space_kb}},
@{Name = "FreeSpaceKB"; Expression = {$_.free_space_kb}},
@{Name = "Used %"; Expression = {$_.used_percentage}} |
Out-Null
}
}
catch {
Write-Error $_.Exception
}
finally {
$SqlDataAdapater.Dispose()
$GetLogSpaceCmd.Dispose()
$SqlConnection.Dispose()
}
$StopWatch.Stop()
return $StopWatch.ElapsedMilliseconds
}
Note The potential pitfalls become extremely evident when I look at my new function. That T-SQL query may seem trivial to experienced SQL Server DBAs, but the person mocking up this Windows PowerShell script may not necessarily have the expertise and knowledge of these finer details. Not to mention, what happens if we are trying to hit a system catalog view or dynamic management view that existed in SQL Server 2012, where we did all of our testing, but that same object didn't exist in the same form, or at all for that matter, in SQL Server 2005? Now we have a breaking dependency that we didn't think of.
Much like our initial SMO testing, let’s hammer this function 5,000 times and see what kind of duration we get:
$CumulativeDuration = 0
foreach ($SqlServerIndividualName in $SqlServerNameArray) {
$CumulativeDuration += Test-SqlBenchmarkAdoDotNet -SqlServerName $SqlServerIndividualName
}
Write-Host "ADO.NET Benchmark — exhaustive test, $($SqlServerNameArray.Count) servers — :: Total: $($CumulativeDuration / 1000) seconds Average: $($CumulativeDuration / $SqlServerNameArray.Count) ms" -ForegroundColor Yellow
Following is the output I get in my environment:
Excellent! I’m at an average time of less than 10 milliseconds for each server, and a total of about 49 seconds. So my processing has brought the runtime duration from 13+ minutes down to a sub-minute time.
One of the reasons that this happens is because my ADO.NET example is lightweight and it assumes nothing (which, as I said before, could actually be a very bad thing and make the development and testing process much longer). We can see this directly by looking at the actual T-SQL statements that are being executed on the instance.
If you are a SQL Server professional, you will be very familiar with what I’m about to do. But if you are not, I’m basically setting up a tracing mechanism to watch the SQL statements that hit the instance.
If you happened to notice in my previous examples, I set the Application Name parameter of my connection string to “SqlBenchmarkSMO” and “SqlBenchmarkADOdotNET” respectively. The reason behind that is two-fold: First, I don’t want to see any chatter from other SQL statements outside of my testing. And secondly, I want to be able to distinguish commands that come from my SMO test from commands that come from my ADO.NET test, so I can get a count of the volume of statements. Here is the definition (Transact-SQL) of my tracing mechanism:
if exists (select 1 from sys.server_event_sessions where name = 'SMOvsADOdotNET')
begin
drop event session SMOvsADOdotNET
on server;
end
create event session SMOvsADOdotNET
on server
add event sqlserver.sql_statement_completed
(
action
(
sqlserver.client_app_name
)
where
(
sqlserver.client_app_name = 'SqlBenchmarkSMO'
or sqlserver.client_app_name = 'SqlBenchmarkADOdotNET'
)
)
add target package0.event_file
(
set filename = N'<path to my XEL file>'
);
go
alter event session SMOvsADOdotNET
on server
state = start;
go
Note The code that is included in this blog post, both Windows PowerShell and T-SQL, is intended only for illustration. It is not meant to be run in production without understanding it and reworking it for your environment.
When I run my test again while monitoring (I’m only going to run it against one server because the statements count is going to be relative), I see the following output:
For each run in my SMO testing (this will vary based on the count of databases in the instance, but you get the idea), there were roughly 122 SQL statements. Conversely, my ADO.NET testing had only 3 SQL statements. The extra statements in the SMO example aren’t necessarily bad. In fact, they’re making wise checks. But for this particular scenario I’m looking to shave time off my management operation, even at the cost of those checks. Mission successful.
In summary, if you are racing against the clock with your management operation, it is wise to take a few steps back and analyze where performance can be gained. It could be in a single line of code, a function, the namespaces you consume, or even in your entire approach.
I showed one example for how this can be done with SQL Server and a single alternative. Don’t needlessly sacrifice the benefits that great solutions, like SMO, provide with an extra layer of checks and ease-of-use. But sometimes we need to step further and control more aspects of the operation if there is a warranted demand.
You can download the full script from the TechNet Gallery: Sample Benchmark Test on SMO vs. System.Data.SqlClient.
I hope you have enjoyed this blog post, and please feel free to reach out to me at sqlsalt@outlook.com if you have any questions. Likewise, feel free to post comments in the Comments box that follows. Thanks!
Thank you, Thomas.
Breaking the Space-Time Barrier with Haskell: Time-Traveling and Debugging in CodeWorld (A Google Summer of Code Odyssey)
How does one explore the unknown using a dream, ingenuity and human will to conquer the impossible? The following is a journey involving Time Travel, Maths, Space, and of course, Haskell.
This summer, as part of Google Summer of Code, I created debugging tools to be used by students programming in the CodeWorld environment. As a current learner of Haskell and of CodeWorld, I believe tools that help users reason about logic are very useful. I wanted to help users identify breaks in logic, and reason about mathematics and code. The tools I built can decompose a larger, more ambiguous problem (“Help! My programme doesn’t work!”) into a smaller, more precise one (“The starting coordinate for our spacecraft’s trajectory is not correct”).
When a student builds a drawing, animation or game in CodeWorld, my new debugging tools help them deconstruct what the programme does in
- time: by enabling slow-motion, fast-forward and scrolling through the history of the programme.
- position : by enabling the user to zoom in and out and move around to get a different perspective.
- organization: by giving the user a deeper understanding of the properties of parts of their pictures.
This video gives a cursory view of these tools.
Features I created:
I built four different features for debugging. They are useful explicitly and in conjunction with each other.
1. Viewable Properties in Inspect Mode
Last summer, Eric Roberts created the Inspect Button for debugging for CodeWorld. I expanded on this tool by showing the viewable properties and their evaluated arguments.
Eric created an Inspect button that shows the highlighted code to the left and the display of each component (that is tied to the source code) on the coordinate plane. However, the source code does not currently show what the expression evaluated to unless we use the new viewable properties functionality within the Inspect button. This new feature allows a user to see the evaluated expressions for attributes of the picture, which is very helpful in debugging pictures with multiple functions and arguments that work together.
In the above example, we can see that the actual thick polyline expressions are composed of points and thicknesses, and the precise values that give the results shown. We can create a pipeline of expected values vs actual values with this method of debugging.
2. Zooming and Panning
I created buttons that can zoom in, zoom out and reset to the default view this summer. I also created a slider that can zoom in and out, and panning functionality for the drawing canvas. In this example (below), details of a fractal composition are viewable because of these new tools.
In the video, we can see more detail by using the zoom-in and zoom-out buttons, and by panning. This allows us to see smaller patterns within a larger pattern.
3. Time Travel Debugger (History Slider)
Have you ever played a game and not liked the result? Let’s fix that! I’m curious to see what it would look like if I won. I want to go back in time to “debug” what would have happened if I had not lost.
Cheating…it’s a feature, not a bug.
The way this works is by having two lists: one that represents the list of past states and one that represents future states. We can then pop off the most recent value from either the stack of past states or future states to travel back or forward in time.
This uses the zipper data structure where the state is a pair of lists ([],[]).
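The debugger itself is written in Haskell, but the two-stack zipper idea is language-neutral. Here is a minimal sketch in Python (the names are illustrative, not CodeWorld's actual API):

```python
# Time-travel state holder: past states on one stack, future states on
# another. Stepping forward normally discards any recorded future, just
# like editing after an undo discards the redo history.

class TimeTravel:
    def __init__(self, initial):
        self.past = []        # states we can rewind to
        self.present = initial
        self.future = []      # states we can fast-forward back into

    def step(self, new_state):
        # normal simulation step: remember the present, invalidate the future
        self.past.append(self.present)
        self.present = new_state
        self.future = []

    def back(self):
        # pop the most recent past state, pushing the present onto the future
        if self.past:
            self.future.append(self.present)
            self.present = self.past.pop()
        return self.present

    def forward(self):
        # pop the most recent future state, pushing the present onto the past
        if self.future:
            self.past.append(self.present)
            self.present = self.future.pop()
        return self.present
```

Scrubbing the history slider is then just a sequence of back and forward calls, each of which is O(1).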
4. Speed Slider
The new speed slider allows you to speed up an animation. Playing through an animation at different speeds allows a user to identify patterns in an animation, as well as inconsistencies in those patterns. These inconsistencies can be broken down as either intentional and useful, or as breaks in logic or bugs.
“Why moments” in Programming (bugs)
In the example below, we have a bouncing ball that has a defined boundary in the code.
import CodeWorld

main = debugSimulationOf initial step picture

initial = (0, 20)

step dt state = bounce (inertia dt state)

inertia dt (x, vx) = (x + vx * dt, vx)

bounce (x, vx)
  | x < -9 = (x, -vx)
  | x > 9 = (x, -vx)
  | otherwise = (x, vx)

picture (x, vx) = translated x 0 (solidCircle 1) & rectangle 20 20
It seems to work just fine initially, but when I speed it up by using speedSlider, we see moments when the ball is stuck. This is a bug.
To find out how and why this happens, I paused CodeWorld when the bug occurred, and then stepped backwards and panned to see where this occurs and why. By panning, I am not limited to the viewing boundary of the ball as dictated by the viewing pane, so I get a different perspective. Here, we see that the ball bounces, but is stuck and bounces again, but that wedges the ball in further. We can even rewind to the point at which the ball was bouncing as expected.
Revealing Illusion in an Animation
This animation shows the journey of a hot-air balloon into Space, whereby the traveler becomes a NASA astronaut. When the window is constrained, it is not apparent how the animation is made. It appears as delightful magic.
By using the new debugging, this magic is deconstructed, like a Noh Theatre act that is unmasked.
Challenges Along the Way
Getting Started
One of the other issues we discovered was that the CodeWorld install script did not work correctly on 32 bit machines. My first pull request made a change to fix this issue. It wasn’t long before I also realized that smaller screens (such as that of my $20 refurbished computer, which runs 32-bit Ubuntu 16.04) didn’t have room for the debug controls I was trying to add. So I had to make another side-trip to add resizing of the programme before I could get started on debugging. This involved CSS and HTML.
In retrospect, these were good ways to get my feet wet with the project.
Hardware
Next, there were a series of hardware issues I encountered. In all, I wiped two operating systems and installed Ubuntu 18.04 three times in three months on two machines.
In July, I received a Helium Grant. I used the funding to finally get reliable hardware. This was pivotal in completing the project, as I was able to screen-share and pair-programme with my mentors with video conferencing. Without the support of Nadia Eghbal and the great sponsors at heliumgrant.org, my experience would have been a lot more painful.
We have liftoff!
These new tools make debugging a more pleasant experience for the user. They help users to reason better about code and break larger problems into smaller ones. More importantly, they give the user more control, so that their intentions are interpreted more precisely, giving a better feedback loop between the user and the programme.
“Imagination will often carry us to worlds that never were. But without it we go nowhere.” - Carl Sagan
Far away from Earth, the astronaut looked back. Earth was just a pale, blue dot in a sea of stars. I hope you enjoyed this journey through Time and Space with me, and that it has brought you as much joy as writing Haskell has for me this summer.
Last but not least
- If you’d like to learn more about the nuts and bolts, feel free to look at the Pull requests here.
- Thank you also to the previous work by Eric Roberts in Summer of Haskell, and to the organizers of BayHac (an event at which I was able to meet my mentors in person). Thank you for Haskell.org, my host organization, for giving me the opportunity to promote and use Haskell this summer.
- Finally, thank you to my two mentors, Chris Smith and Gabriel Gonzalez, for your guidance and patience, and for a great Summer.
I have some questions regarding UIC.
1. I tried including #include "uic_dds_dec.h" and it complains about the lack of DDS namespace. It seems that the uic_dds_dec.h should include it (or what should be included should be described in more detail):
#ifndef __UIC_DDS_H__
#include "uic_dds.h"
#endif
2. Microsoft compiler is throwing C4291 on uic_new.h:
uic\\src\\core\\uic\\include\\uic_new.h(50) : warning C4291: 'void *operator new(size_t,const UIC::NewBuffer &)' : no matching operator delete found; memory will not be freed if initialization throws an exception
Is it safe to ignore that warning (I presume not)?
3. Are those DLL files compiled with CPU dispatching, if so for what CPUs?
4. Are those DLL files standalone or do they require IPP (or compiler) DLLs? If so, which ones?
5. Can UIC be used statically and if so how?
6. I am writing image viewing application and I intend to support BMP, DDS, JPEG, and PNG image formats. I have a single user Parallel Studio license. What are the terms for redistributing the binary files needed for my application to run?
Regards,
Igor
Getting Started with Amazon Cloud Search (NoSQL) in Python
Too Long; Did not Read
Check out the python code samples below in order to interface with Amazon Cloud Search. Uses the python boto library.
What is CloudSearch?
CloudSearch is a service hosted by Amazon that allows you to index documents. Like a lot of other Amazon services, you pay for only what you use so you can scale easily, and the costs are about as low as you could ask for. You could use it in conjunction with a database as a caching layer for faster searching, or in some cases there’s nothing stopping you from using it as your storage engine entirely.
The most fitting use case probably involves full text searching. In this case, a SQL database is not optimal for searching for substrings, and it definitely doesn’t support stemming (i.e. make it so that “runner” will return a result with “running”).
Why CloudSearch?
Right now at work, one of the projects is to move off of our own SOLR servers and instead use Amazon CloudSearch. The advantages:
- We don’t have to manage our own SOLR instances. This might be relatively trivial but it’s something
- It’s faster than SOLR. In one instance it was allegedly 50x faster
- In order to interface with SOLR, we’re using Haystack, which we’d like to move away from.
- After doing the implementation, I found that the learning curve for CloudSearch is really low
Initial Setup
Log in to your Amazon Console. Sign up if you haven’t, and it’s free. Then navigate to Cloud Search:
The next steps are fairly self-explanatory, and you can just follow the wizard. You’ll create a new domain that’s identified by a unique string which you’ll later use in your python code in conjunction with whatever Amazon region you chose:
Any number of attributes can be added to a document. You can see blow some of the examples:
The Code
You could make raw HTTP requets, but you can save yourself a lot of trouble if you just install boto:
pip install boto==2.35.1
From here, we start making some web requests in order to initialize a client. This is costly because of the nature of a network request, but we also want to avoid getting throttled by Amazon because of excessive and unnecessary requests. Therefore, you want to cache your initialized domain client.
This can be done with something like memcached in order to share an instance across multiple processes, but the poor man’s method is to just cache your domain instance in a mutable object inside of a class attribute. In this way, each process or worker that you have will initialize the client exactly once and will subsequently live in memory.
My choice was to create a class for the purpose of inheriting since a cloudsearch domain will be used both for querying and indexing. To ensure every class had only one responsibility, I chose to make a class for each of those two cases, and both of those classes would inherit the class below:
Base Amazon Client
import boto
from django.conf import settings


class AmazonClient(object):
    REGION = settings.AWS_CLOUDSEARCH_REGION
    _cls_domain_cache = {}

    def get_domain(self, domain_index):
        try:
            return self._cls_domain_cache[domain_index]
        except KeyError:
            self._cls_domain_cache[domain_index] = boto.connect_cloudsearch2(
                region=self.REGION, sign_request=True).lookup(domain_index)
            return self._cls_domain_cache[domain_index]
The above class has the sole responsibility of caching a domain instance. The line to connect to cloudsearch has an HTTP request involved.
The next step is to index our documents. I wrote a simple class that’s a context manager that manages the batching of requests to Amazon. Context managers can be useful when you have a case that always requires setup, some action, and then teardown. In this case, the teardown is the actual POST to Amazon with a batch of data.
Therefore, the usage of this class is either a simple call to “add_document” or “delete_document.”
Indexer
from .amazon_client import AmazonClient

DEFAULT_BATCH_SIZE = 500


class CloudSearchIndexer(AmazonClient):

    def __init__(self, domain_index, batch_size=DEFAULT_BATCH_SIZE):
        self.domain = self.get_domain(domain_index)
        self.document_service_connection = self.domain.get_document_service()
        self.batch_size = batch_size
        self.items_in_batch = 0

    @classmethod
    def for_domain_index(cls, domain_index):
        return cls(domain_index)

    def __enter__(self):
        return self

    def __exit__(self, *args, **kwargs):
        if len(args) > 1 and isinstance(args[1], Exception):
            raise args[1]
        self._commit_to_amazon()

    def _commit_to_amazon(self):
        self.document_service_connection.commit()
        self.document_service_connection.clear_sdf()
        self.items_in_batch = 0

    def add_document(self, cloud_search_document):
        cloud_search_json = cloud_search_document.to_cloud_search_json()
        cloud_search_json = self._nullify_falsy_values(cloud_search_json)
        self.document_service_connection.add(
            cloud_search_document.cloud_search_id,
            cloud_search_json
        )
        self._update_batch()

    def _nullify_falsy_values(self, json_dict):
        return {k: v for k, v in json_dict.items() if v}

    def delete_document(self, cloud_search_document):
        self.document_service_connection.delete(cloud_search_document.cloud_search_id)
        self._update_batch()

    def _update_batch(self):
        self.items_in_batch += 1
        if self.items_in_batch == self.batch_size:
            self._commit_to_amazon()
You can see a few other additions besides just a single post to Amazon. Data is chunked out into 500 items at a time, and data is cleaned of null values before a POST. Of note, that might not be the absolute best decision. As far as I can tell, you can’t set something to “null” or “None” with CloudSearch, so you should probably be explicit about how you’re representing empty data.
From here, you might notice that my code is just passing a “cloud_search_document” which I haven’t defined so far. In reality, the only thing you need to pass to Amazon is a unique identifier which is a string, and a serialized JSON blob. I made this explicit by creating an abstract cloud search document that all other documents should inherit from, thus guaranteeing they can be indexed:
Abstract Amazon Document
from abc import ABCMeta
from abc import abstractmethod
from abc import abstractproperty


class AbstractCloudSearchDocument(object):
    __metaclass__ = ABCMeta

    @abstractproperty
    def cloud_search_id(self):
        '''
        A string that represents a unique identifier; should mimic the
        primary key of a model
        '''
        pass

    @abstractmethod
    def to_cloud_search_json(self):
        '''
        A JSON representation of the document that should match up with
        the index schema in Amazon
        '''
        pass
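For illustration, a concrete document class might look like the following. This is a hypothetical example (the "person" fields and id scheme are made up and would need to match whatever index schema you configured); it is written standalone here, though in the real code it would subclass the abstract document above:

```python
# Hypothetical concrete document for a "person" index. The two members
# mirror the abstract interface: a unique string id, plus a JSON-able dict
# whose keys must line up with the fields defined in the CloudSearch console.

class PersonCloudSearchDocument(object):

    def __init__(self, person_id, name, age):
        self.person_id = person_id
        self.name = name
        self.age = age

    @property
    def cloud_search_id(self):
        # mimic the primary key of the backing model
        return "person-%s" % self.person_id

    def to_cloud_search_json(self):
        return {"name": self.name, "age": self.age}
```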
With the above two classes defined, it’s very simple to index documents. Just ensure that the json representation of your document corresponds to what you set up in Amazon. Here’s an example:
Sample Usage
with CloudSearchIndexer.for_domain_index("my_domain_index_string") as cloud_search_indexer:
    # ConcreteCloudSearchDocument is some implementation of the abstract cloud
    # search document
    cloud_search_document = ConcreteCloudSearchDocument(some_data)
    cloud_search_indexer.add_document(cloud_search_document)

# because of the context manager, data will be committed to amazon after the
# above block in a batch
In order to search for documents, you’ll need to write your own queries. For a comparable service, you could use something like Haystack where the generation of queries is abstracted away between multiple backends. The problem with that approach is that Haystack is decent at everything, but excels at nothing (sorry, I hope there are no hardcore Haystack fans reading this).
I also found that it’s easier to learn Amazon’s straightforward language for querying than it is to learn about all the different quirks and boilerplate code of a third party library. The below class is a stripped down version of what I’d use to query a document:
Searcher / Queryer
from abc import ABCMeta

from .amazon_client import AmazonClient


class AbstractCloudSearchSearcher(AmazonClient):
    __metaclass__ = ABCMeta

    DEFAULT_PARSER = "structured"

    def __init__(self, domain_index):
        self.domain = self.get_domain(domain_index)
        self.search_connection = self.domain.get_search_service()

    def execute_query_string(self, query_string):
        amazon_query = self.search_connection.build_query(
            q=query_string, parser=self.DEFAULT_PARSER)
        json_search_results = [json_blob for json_blob in
                               self.search_connection.get_all_hits(amazon_query)]
        return [json_blob['fields'] for json_blob in json_search_results]
From here, you would just need to pass in strings, and this class will query Amazon and return results in the form of a list of dictionaries.
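For reference, a structured query string combines boolean operators with field predicates. The field names below are made up for illustration and would need to exist in your index schema:

```
(and genre:'Sci-Fi' (range field=year [2000,2013]))
```

A string like that is what you would pass as query_string to execute_query_string.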
You can learn how to write queries from Amazon's documentation. Note that all of Amazon's documented example queries require that you pass in a "structured" parser, as I did in the sample code. (The documentation also explains the differences between the parsers.)
I don't know the inner workings of C, so this is quite confusing. I've been playing with arrays and pointers and a lot of weird stuff is going on. I hope someone can point me to a resource that explains how this stuff works under the hood. I was looking at the C programming reference but could not find answers.
So here is a piece of code:
#include <stdio.h>
#include <stdlib.h>
#define PATH "/home/jack/Desktop/Cpractice/hangman.txt"
int main()
{
FILE *file;
file = fopen(PATH, "r");
int c;
int size;
//char *word = (char *)malloc(0);
char word[30];
//printf("\tSize: %lu\n",sizeof(char));
int i =0;
while(1)
{
c = getc(file);
if(c == EOF)
break;
word[i] = c;
i++;
}
printf("%s\n", word);
printf("I: %i\n", i);
//free(word);
return 0;
}
It's my first comment, so I'll try to be as clear as possible.
To be quite short, memory in the C language is a bit special, but not complicated until you dig into the subject a bit.
The malloc function (dynamic allocation) and static allocation use a system call: sbrk (I advise you to read the man page to understand what exactly the function does).
Your question is "why can I read tab[size + 1]?" It's just because the memory given to your array isn't limited to exactly size elements: if there happens to be free space right after it, you'll be able to access it, but BE CAREFUL, because this will probably lead to errors in your program after some time...
Errors are possible because if you declare another array after this one (let's say an array of 6 chars named tab2), and in memory it starts right after the first one, then writing to tab1[size + 1] silently overwrites tab2's data. And if the address just past your array isn't mapped at all, reading or writing it will give you a segmentation fault (though sometimes computers are too kind and let you get away with it).
Whatever, I hope it's quite clear; if it isn't, don't hesitate to ask me questions!
Some of the language mechanisms may need more precise wording. This manual is constantly evolving until the 1.0 release and is not to be considered as the final proper specification.
Multiline comments start with #[ and end with ]#.
This unorthodox way to do identifier comparisons is called partial case-insensitivity and has some advantages over the conventional case sensitivity:
It allows programmers to mostly use their own preferred spelling style, be it humpStyle or snake_style. Historically, Nim was a fully style-insensitive language: it was not case-sensitive and underscores were ignored.
FLOAT32_SUFFIX = ('f' | 'F') ['32']
FLOAT32_LIT = HEX_LIT '\'' FLOAT32_SUFFIX | (FLOAT_LIT | DEC_LIT | OCT_LIT | BIN_LIT) ['\''] FLOAT32_SUFFIX
FLOAT64_SUFFIX = ( ('f' | 'F') '64' ) | 'd' | 'D'
FLOAT64_LIT = HEX_LIT '\'' FLOAT64_SUFFIX | (FLOAT_LIT | DEC_LIT | OCT_LIT | BIN_LIT) ['\''] FLOAT64_SUFFIX
As can be seen in the productions, numerical constants can contain underscores for readability. Integer and floating point literals may be given in decimal (no prefix), binary (prefix 0b), octal (prefix 0o) and hexadecimal (prefix 0x) notation.
Literals are bounds checked so that they fit the datatype. Non base-10 literals are used mainly for flags and bit pattern representations, therefore bounds checking is done on bit width, not value range. If the literal fits in the bit width of the datatype, it is accepted. Hence: 0b10000000'u8 == 0x80'u8 == 128, but, 0b10000000'i8 == 0x80'i8 == -1 instead of causing an overflow error.
Operators
Nim allows user-defined operators.
proc `^/`(x, y: float): float =
  # a right-associative division operator
  result = x / y

echo 12 ^/ 4 ^/ 8 # 24.0 (4 / 8 = 0.5, then 12 / 0.5 = 24.0)
echo 12 / 4 / 8   # 0.375 (12 / 4 = 3.0, then 3 / 8 = 0.375)
Most native Nim types support conversion to strings with the special $ proc. When calling the echo proc, for example, the built-in stringify operation for the parameter is called:
echo 3 # calls `$` for `int`
Whenever a user creates a specialized object, implementation of this procedure provides for string representation.
type Person = object
  name: string
  age: int

proc `$`(p: Person): string = # `$` always returns a string
  # we *need* the `$` in front of p.age, which is natively an integer,
  # to convert it to a string
  result = p.name & " is " & $p.age & " years old."
While $p.name can also be used, the $ operation on a string does nothing. Note that we cannot rely on automatic conversion from an int to a string like we can for the echo proc.
The of operator is similar to the instanceof operator in Java:
type
  Person = object of RootObj
    name*: string # the * means that `name` is accessible from other modules
    age: int      # no * means that the field is hidden

  Student = ref object of Person # a student is a person
    id: int                      # with an id field

var
  student: Student
  person: Person
assert(student of Student) # is true
assert(student of Person)  # also
Note that, unlike tuples, objects require the field names along with their values. Also, when the fields of a particular branch are specified during object construction, the correct value for the discriminator must be supplied at compile-time.
Set type
The set type models the mathematical notion of a set. The set's basetype can only be an ordinal type of a certain size, namely:
- int8-int16
- uint8/byte-uint16
- char
- enum
or equivalent. The reason is that sets are implemented as high performance bit vectors. Attempting to declare a set with a larger type will result in an error:
var s: set[int64] # Error: set is too large
In general, a ptr T is implicitly convertible to the pointer type.
Auto type
The auto type can only be used for return types and parameters. For return types it causes the compiler to infer the type from the routine body:
proc returnsInt(): auto = 1984
For parameters it currently creates implicitly generic routines:
proc foo(a, b: auto) = discard
Is the same as:
proc foo[T1, T2](a: T1, b: T2) = discard
However later versions of the language might change this to mean "infer the parameters' types from the body". Then the above foo would be rejected as the parameters' types cannot be inferred from an empty discard statement.
Covariance
Convertible relation
A type a is implicitly convertible to type b iff the following algorithm returns true:
# XXX range types?
proc isImplicitlyConvertible(a, b: PType): bool =
  if isSubtype(a, b) or isCovariant(a, b):
    return true
proc sayHi(x: int): string =
  # matches a non-var int
  result = $x

proc sayHi(x: var int): string =
  # matches a var int
  result = $(x + 10)

proc sayHello(x: int) =
  var m = x # a mutable version of x
  echo sayHi(x) # matches the non-var version of sayHi
  echo sayHi(m) # matches the var version of sayHi

sayHello(3) # 3
            # 13
Automatic dereferencing
If the experimental mode is active and no other match is found, the first argument a is dereferenced automatically if it's a pointer type and overloading resolution is tried with a[] instead.
Automatic self insertions
Lazy type resolution for untyped
Void context
In if statements new scopes begin immediately after the if/elif/else keywords and ends after the corresponding then block. For visualization purposes the scopes have been enclosed in {| |} in the following example:
if {| (let m = input =~ re"(\w+)=\w+"; m.isMatch):
  echo "key ", m[0], " value ", m[1] |}
elif {| (let m = input =~ re""; m.isMatch):
  echo "new m in this scope" |}
else: {|
  echo "m not declared here" |}
When nimvm statement
nimvm is a special symbol, that may be used as expression of when nimvm statement to differentiate execution path between runtime and compile time.
Example:
proc someProcThatMayRunInCompileTime(): bool =
  when nimvm:
    # This code runs in compile time
    result = true
  else:
    # This code runs in runtime
    result = false

const ctValue = someProcThatMayRunInCompileTime()
let rtValue = someProcThatMayRunInCompileTime()
assert(ctValue == true)
assert(rtValue == false)
when nimvm statement must meet the following requirements:
var pw = readLine(stdin)
while pw != "12345":
  echo "Wrong password! Next try: "
  pw = readLine(stdin)
A procedure declaration consists of an identifier, zero or more formal parameters, a return value type and a block of code. Formal parameters are declared as a list of identifiers separated by either comma or semicolon. A parameter is given a type by : typename. The type applies to all parameters immediately before it, until either the beginning of the parameter list, a semicolon separator or an already typed parameter, is reached. The semicolon can be used to make separation of types and subsequent identifiers more distinct.
# Using only commas
proc foo(a, b: int, c, d: bool): int

# Using semicolon for visual distinction
proc foo(a, b: int; c, d: bool): int

# Will fail: a is untyped since ';' stops type propagation.
proc foo(a; b: int; c, d: bool): int
A parameter may be declared with a default value which is used if the caller does not provide a value for the argument.
# b is optional with 47 as its default value
proc foo(a: int, b: int = 47): int
Parameters can be declared mutable and so allow the proc to modify those arguments, by using the type modifier var.
# "returning" a value to the caller through the 2nd argument
# Notice that the function uses no actual return value at all (ie void)
proc foo(inp: int, outp: var int) =
  outp = inp + 47
If the proc declaration has no body, it is a forward declaration. If the proc returns a value, the procedure body can access an implicitly declared variable named result that represents the return value. Procs can be overloaded. The overloading resolution algorithm determines which proc is the best match for the arguments.
A procedure may call itself recursively.
stdout.writeLine("Hallo") # the same as writeLine(stdout, "Hallo")
Another way to look at the method call syntax is that it provides the missing postfix notation.
The method call syntax conflicts with explicit generic instantiations: p[T](x) cannot be written as x.p[T] because x.p[T] is always parsed as (x.p)[T].
Future directions: p[.T.] might be introduced as an alternative syntax to pass explicit types to a generic, and then x.p[.T.] can be parsed as x.(p[.T.]).
Creating closures in loops
Since closures capture local variables by reference, this is often not the desired behavior inside loop bodies. See closureScope for details on how to change this behavior.
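A short sketch of the problem and the fix (assuming the closureScope template from the system module):

```nim
var procs: seq[proc (): int] = @[]
for i in 0..2:
  closureScope:            # without this, every closure would see the final value of i
    let captured = i       # captured fresh for each iteration
    procs.add(proc (): int = captured)

for p in procs:
  echo p()                 # 0, 1, 2
```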
Anonymous Procs

Methods

For dynamic dispatch to work on an object, it should be a reference type as well.
type
  Expression = ref object of RootObj ## abstract base class for an expression
  Literal = ref object of Expression
    x: int
  PlusExpr = ref object of Expression
    a, b: Expression

method eval(e: Expression): int {.base.} =
  # ...
As can be seen in the example, base methods have to be annotated with the base pragma. The base pragma also acts as a reminder for the programmer that a base method m is used as the foundation to determine all the effects that a call to m might cause.
In a multi-method all parameters that have an object type are used for the dispatching:
type
  Thing = ref object of RootObj
  Unit = ref object of Thing
    x: int

method collide(a, b: Thing) {.base.} =
  # ...
- Neither inline nor closure iterators can be recursive.
Iterators that are neither marked {.closure.} nor {.inline.} explicitly default to being inline.

We call such type classes bind once types. Such type classes are called bind many types. To pass typedesc params, you must prefix the type with an explicit type modifier. The named instance of the type, following the concept keyword, is also considered an explicit typedesc value.
Converter type classes
VTable types

Templates

A template parameter can be untyped, typed or typedesc (stands for type description). These are "meta types"; they can only be used in certain contexts. Real types can be used too; this implies that typed expressions are expected.
Typed vs untyped parameters.
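The distinction can be sketched with a small example (declareInt and useInt here are illustrative names, close to the manual's own example):

```nim
template declareInt(x: untyped) =
  var x: int       # 'x' need not resolve to an existing symbol

declareInt(foo)    # valid: 'foo' is introduced by the template
foo = 3

template useInt(x: typed) =
  echo x + 1

useInt(foo)        # valid only because 'foo' already exists and is typed
```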
Passing a code block to a template
You can pass a block of statements as a last parameter to a template via a special : syntax:
template withFile(f, fn, mode, actions: untyped): untyped =
  var f: File
  if open(f, fn, mode):
    try:
      actions
    finally:
      close(f)
  else:
    quit("cannot open: " & fn)

withFile(txt, "ttempl3.txt", fmWrite):
  txt.writeLine("line 1")
  txt.writeLine("line 2")
In the example the two writeLine statements are bound to the actions parameter.
Varargs of untyped
Symbol binding in templates
A template is a hygienic macro and so opens a new scope. Most symbols are bound from the definition scope of the template:
# Module A
var lastId = 0

template genId*: untyped =
  inc(lastId)
  lastId

template withFile(f: untyped, fn: string, mode: FileMode, actions: untyped): untyped =
  block:
    var f: File  # since 'f' is a template param, it's injected implicitly
    ...

withFile(txt, "ttempl3.txt", fmWrite):
  txt.writeLine("line 1")
  txt.writeLine("line 2")

macro debug(n: varargs[untyped]): untyped =
  # build a statement list that writes each argument to stdout
  result = newNimNode(nnkStmtList, n)
  for i in 0..<n.len:
    result.add(newCall("write", newIdentNode("stdout"), toStrLit(n[i])))
    result.add(newCall("write", newIdentNode("stdout"), newStrLitNode(": ")))
    result.add(newCall("writeLine", newIdentNode("stdout"), n[i]))
Arguments that are passed to a varargs parameter are wrapped in an array constructor expression. This is why debug iterates over all of n's children.
BindSym
The above debug macro relies on the fact that write, writeLine and stdout are declared in the system module and are thus visible in the instantiating context.
However, the symbols write, writeLine and stdout can instead be bound explicitly with the bindSym builtin.

Macros as pragmas

Whole routines (procs, iterators etc.) can also be passed to a template or a macro via the pragma notation:

template m(s: untyped) = discard

proc p() {.m.} = discard
This is a simple syntactic transformation into:
template m(s: untyped) = discard

m:
  proc p() = discard
Note: Dot operators are still experimental and so need to be enabled via {.experimental.}.

proc `[]=`*(t: var Table, key: string, val: string) =
  ## puts a (key, value)-pair into `t`.
  let idx = findInsertionPosition(key)
  t[idx].key = key
  t[idx].val = val

proc `[]=`*(t: var Table, key: string{call}, val: string{call}) =
  ## puts a (key, value)-pair into `t`. Optimized version that knows that
  ## the strings are unique and thus don't need to be copied:
  let idx = findInsertionPosition(key)
  shallowCopy t[idx].key, key
  shallowCopy t[idx].val, val
Note on paths
In module related statements, if any part of the module name/path begins with a number, you may have to quote it in double quotes. In the following example, it would be seen as a literal number '3.0' of type 'float64' if not quoted; if uncertain, quote it:
import "gfx/3d/somemod, in that case it takes a list of, see type bound operations instead.. Since version 0.12.0 of the language, a proc that uses system.NimNode within its parameter types is implictly declared compileTime:
proc astHelper(n: NimNode): NimNode =
  result = n
Is the same as:
proc astHelper(n: NimNode): NimNode {.compileTime.} =
  result = n
used pragma
experimental pragma
The noDecl pragma can be applied to almost any symbol (variable, proc, type, etc.) and is sometimes useful for interoperability with C: It tells Nim that it should not generate a declaration for the symbol in the C code. For example:
var EACCES {.importc, noDecl.}: cint # pretend EACCES was a variable, as # Nim does not know its value
However, the header pragma is often the better alternative.
Note: This will not work for the LLVM backend.
Header pragma
The header pragma is very similar to the noDecl pragma: It can be applied to almost any symbol and specifies that it should not be declared and instead the generated code should contain an #include:
type PFile {.importc: "FILE*", header: "<stdio.h>".} = distinct pointer # import C's FILE* type; Nim will treat it as a new pointer type
The header pragma always expects a string constant. The string constant contains the header file: as usual for C, a system header file is enclosed in angle brackets: <>. If no angle brackets are given, Nim encloses the header file in "" in the generated C code.
Note: This will not work for the LLVM backend.
IncompleteStruct pragma
The incompleteStruct pragma tells the compiler to not use the underlying C struct in a sizeof expression:
type DIR* {.importc: "DIR", header: "<dirent.h>", final, pure, incompleteStruct.} = object
Compile pragma
The compile pragma can be used to compile and link a C/C++ source file with the project:
{.compile: "myfile.cpp".}
Note: Nim computes a SHA1 checksum and only recompiles the file if it has changed. You can use the -f command line option to force recompilation of the file.
Link pragma
The link pragma can be used to link an additional file with the project:
{.link: "myfile.o".}
PassC pragma
The passC pragma can be used to pass additional parameters to the C compiler like you would using the commandline switch --passC:
{.passC: "-Wall -Werror".}
Note that you can use gorge from the system module to embed parameters from an external command at compile time:
{.passC: gorge("pkg-config --cflags sdl").}
PassL pragma
The passL pragma can be used to pass additional parameters to the linker like you would using the commandline switch --passL:
{.passL: "-lSDLmain -lSDL".}
Note that you can use gorge from the system module to embed parameters from an external command at compile time:
{.passL: gorge("pkg-config --libs sdl").}
Emit pragma
The emit pragma can be used to directly affect the output of the compiler's code generator. So it makes your code unportable to other code generators/backends. Its usage is highly discouraged! However, it can be extremely useful for interfacing with C++ or Objective C code.
Example:
{.emit: """ static int cvariable = 420; """.} {.push stackTrace:off.} proc embedsC() = var nimVar = 89 #.
For a toplevel emit statement the section where in the generated C/C++ file the code should be emitted can be influenced via the prefixes /*TYPESECTION*/ or /*VARSECTION*/ or /*INCLUDESECTION*/:
{.emit: """/*TYPESECTION*/ struct Vector3 { public: Vector3(): x(5) {} Vector3(float x_): x(x_) {} float x; }; """.} type Vector3 {.importcpp: "Vector3", nodecl} = object x: cfloat proc constructVector3(a: cfloat): Vector3 {.importcpp: "Vector3(@)", nodecl}
ImportCpp pragma
Note: c2nim can parse a large subset of C++ and knows about the importcpp pragma pattern language. It is not necessary to know all the details described here.
Similar to the importc pragma for C, the importcpp pragma can be used to import C++ methods or C++ symbols in general. The generated code then uses the C++ method calling syntax: obj->method(arg). In combination with the header and emit pragmas this allows sloppy interfacing with libraries written in C++:
# Horrible example of how to interface with a C++ engine ... ;-)

{.link: "/usr/lib/libIrrlicht.so".}

{.emit: """
using namespace irr;
using namespace core;
using namespace scene;
using namespace video;
using namespace io;
using namespace gui;
""".}

const
  irr = "<irrlicht/irrlicht.h>"

type
  IrrlichtDeviceObj {.final, header: irr, importcpp: "IrrlichtDevice".} = object
  IrrlichtDevice = ptr IrrlichtDeviceObj

proc createDevice(): IrrlichtDevice {.header: irr, importcpp: "createDevice(@)".}
proc run(device: IrrlichtDevice): bool {.header: irr, importcpp: "#.run(@)".}
The compiler needs to be told to generate C++ (command cpp) for this to work. The conditional symbol cpp is defined when the compiler emits C++ code. @ is replaced by the remaining arguments, separated by commas.
For example:
proc cppMethod(this: CppObj, a, b, c: cint) {.importcpp: "#.CppMethod(@)".}

var x: ptr CppObj
cppMethod(x[], 1, 2, 3)
Produces:
x->CppMethod(1, 2, 3)
As a special rule to keep backwards compatibility with older versions of the importcpp pragma, if there is no special pattern character (any of # ' @) at all, C++'s dot or arrow notation is assumed, so the above example can also be written as:
proc cppMethod(this: CppObj, a, b, c: cint) {.importcpp: "CppMethod".}
Note that the pattern language naturally also covers C++'s operator overloading capabilities:
proc vectorAddition(a, b: Vec3): Vec3 {.importcpp: "# + #".}
proc dictLookup(a: Dict, k: Key): Value {.importcpp: "#[#]".}
- An apostrophe ' followed by an integer i in the range 0..9 is replaced by the i'th parameter type. The 0th position is the result type. This can be used to pass types to C++ function templates. Between the ' and the digit an asterisk can be used to get to the base type of the type. (So it "takes away a star" from the type; T* becomes T.) Two stars can be used to get to the element type of the element type etc.
For example:
type Input {.importcpp: "System::Input".} = object

proc getSubsystem*[T](): ptr T {.importcpp: "SystemManager::getSubsystem<'*0>()", nodecl.}

let x: ptr Input = getSubsystem[Input]()
Produces:
x = SystemManager::getSubsystem<System::Input>()
- #@ is a special case to support a cnew operation. It is required so that the call expression is inlined directly, without going through a temporary location. This is only required to circumvent a limitation of the current code generator.
For example C++'s new operator can be "imported" like this:
proc cnew*[T](x: T): ptr T {.importcpp: "(new '*0#@)", nodecl.}

# constructor of 'Foo':
proc constructFoo(a, b: cint): Foo {.importcpp: "Foo(@)".}

let x = cnew constructFoo(3, 4)
Produces:
x = new Foo(3, 4)
However, depending on the use case new Foo can also be wrapped like this instead:
proc newFoo(a, b: cint): ptr Foo {.importcpp: "new Foo(@)".}

let x = newFoo(3, 4)
Wrapping constructors
Sometimes a C++ class has a private copy constructor and so code like Class c = Class(1,2); must not be generated but instead Class c(1,2);. For this purpose the Nim proc that wraps a C++ constructor needs to be annotated with the constructor pragma. This pragma also helps to generate faster C++ code since construction then doesn't invoke the copy constructor:
# a better constructor of 'Foo':
proc constructFoo(a, b: cint): Foo {.importcpp: "Foo(@)", constructor.}
Wrapping destructors
Since Nim generates C++ directly, any destructor is called implicitly by the C++ compiler at the scope exits. This means that often one can get away with not wrapping the destructor at all! However when it needs to be invoked explicitly, it needs to be wrapped. But the pattern language already provides everything that is required for that:
proc destroyFoo(this: var Foo) {.importcpp: "#.~Foo()".}
Importcpp for objects
Generic importcpp'ed objects are mapped to C++ templates. This means that you can import C++'s templates rather easily without the need for a pattern language for object types:
type
  StdMap {.importcpp: "std::map", header: "<map>".} [K, V] = object

proc `[]=`[K, V](this: var StdMap[K, V]; key: K; val: V) {.
  importcpp: "#[#] = #", header: "<map>".}

var x: StdMap[cint, cdouble]
x[6] = 91.4
Produces:
std::map<int, double> x;
x[6] = 91.4;
- If more precise control is needed, the apostrophe ' can be used in the supplied pattern to denote the concrete type parameters of the generic type. See the usage of the apostrophe operator in proc patterns for more details.
type
  VectorIterator {.importcpp: "std::vector<'0>::iterator".} [T] = object

var x: VectorIterator[cint]
Produces:
std::vector<int>::iterator x;
ImportObjC pragma
Similar to the importc pragma for C, the importobjc pragma can be used to import Objective C methods. The generated code then uses the Objective C method calling syntax: [obj method param1: arg]. In combination with the header and emit pragmas this allows sloppy interfacing with libraries written in Objective C:
# horrible example of how to interface with GNUStep ...

{.passL: "-lobjc".}
{.emit: """
#include <objc/Object.h>
@interface Greeter:Object
{
}
- (void)greet:(long)x y:(long)dummy;
@end

#include <stdio.h>
@implementation Greeter
- (void)greet:(long)x y:(long)dummy
{
  printf("Hello, World!\n");
}
@end

#include <stdlib.h>
""".}

type
  Id {.importc: "id", header: "<objc/Object.h>", final.} = distinct int

proc newGreeter: Id {.importobjc: "Greeter new", nodecl.}
proc greet(self: Id, x, y: int) {.importobjc: "greet", nodecl.}
proc free(self: Id) {.importobjc: "free", nodecl.}

var g = newGreeter()
g.greet(12, 34)
g.free()
The compiler needs to be told to generate Objective C (command objc) for this to work. The conditional symbol objc is defined when the compiler emits Objective C code.
CodegenDecl pragma
InjectStmt pragma
The injectStmt pragma can be used to inject a statement before every other statement in the current module. It is only supposed to be used for debugging:
{.injectStmt: gcInvariants().} # ... complex code here that produces crashes ...
Compile time define pragmas

Unchecked pragma

The unchecked pragma can be used to mark a named array as unchecked, meaning its bounds are not checked. This can be used to implement customized flexibly sized arrays. Additionally an unchecked array is translated into a C array of undetermined size:
type
  ArrayPart {.unchecked.} = array[0..0, int]
  # ...

The only way to create a thread is via spawn or createThread.
To override the compiler's gcsafety analysis a {.gcsafe.} pragma block can be used:
var
  someGlobal: string = "some string here"
  perThread {.threadvar.}: string

proc setPerThread() =
  {.gcsafe.}:
    deepCopy(perThread, someGlobal)
“Plans are useless, but Planning is indispensable”
Thanks Dwight D. Eisenhower, you nailed it. So it turns out I was overly ambitious to think I could explore all the features of a new platform like HoloLens in a single week. Even with two weeks I've maybe explored half the features, if I'm lucky – I mean I haven't even got to camera, voice, sound, sharing, networking and multi-user, and probably more. Holey moley.
So here’s a slight revision to my original plan:
Networking
I want to get started with networking on Hololens, but get started slowly and build up and verify what works.
UDP Broadcast Network Logging
So if you know me, I am a big fan of UDP networking. The ease and light weight of a session-less protocol work well in many use-cases.
I have some code I wrote that I like to drop in to broadcast debug logs on the network so I can quickly see the debug traces of wireless devices like the Hololens. Now I want to see if that runs on Hololens.
The code is quite simple using the Application.logMessageReceived event to listen for new log messages, and then it broadcasts it over UDP with ASCII to the whole network segment.
You can use a free tool such as SocketTest to listen to UDP packets on any port. Just launch the app, go to UDP tab, set port to 9999 and press Start Listening.
Now my code used UdpClient, which is not supported under UWP. Instead I had to switch to DatagramSocket and all the extra (ugly) asynchronous code that entails.
But here is the final script below. Drag it onto any game object in your scene:
Note: You must enable the networking capabilities ( InternetClient, InternetClientServer and PrivateNetworkClientServer) in your project settings:
Here’s what it looks like viewing through the Hololens looking at SocketTest on my desktop screen:
Now that is very cool (and useful to drag in one script and debug wirelessly)!
DebugLogBroadcaster.cs:
using UnityEngine;
using System;
using System.Text;
using System.Net.Sockets;
using System.Net;
#if WINDOWS_UWP
using Windows.Networking.Sockets;
using Windows.Networking.Connectivity;
using Windows.Networking;
#endif

/*
 * Broadcast all Debug Log messages on the current WiFi network
 * By Peter Koch <peterept@gmail.com>
 *
 * Use this with any UDP Listener on your PC, eg: SocketTest.
 * Launch the app, go to UDP tab, set port to 9999 and press Start Listening.
 *
 * Important Note:
 * - Callstacks are only sent in non-editor builds when "Development Build"
 *   is checkmarked in Build Settings
 */
public class DebugLogBroadcaster : MonoBehaviour
{
    public int broadcastPort = 9999;

#if WINDOWS_UWP
    HostName hostName;
    DatagramSocket client;
#else
    IPEndPoint remoteEndPoint;
    UdpClient client;
#endif

    void OnEnable()
    {
#if WINDOWS_UWP
        hostName = new Windows.Networking.HostName("255.255.255.255");
        client = new DatagramSocket();
#else
        remoteEndPoint = new IPEndPoint(IPAddress.Broadcast, broadcastPort);
        client = new UdpClient();
#endif
        Application.logMessageReceived += HandlelogMessageReceived;
        Debug.Log("DebugLogBroadcaster started on port:" + broadcastPort);
    }

    void OnDisable()
    {
        Application.logMessageReceived -= HandlelogMessageReceived;
#if !WINDOWS_UWP
        client.Close();
        remoteEndPoint = null;
#endif
        client = null;
    }

    // DatagramSocket needs to run in an async routine to dispatch an async threading task
#if WINDOWS_UWP
    async
#endif
    void HandlelogMessageReceived(string condition, string stackTrace, LogType type)
    {
        string msg = string.Format("[{0}] {1}{2}", type.ToString().ToUpper(), condition,
            "\n " + stackTrace.Replace("\n", "\n "));
#if WINDOWS_UWP
        await SendUdpMessage(msg);
#else
        byte[] data = Encoding.UTF8.GetBytes(msg);
        client.Send(data, data.Length, remoteEndPoint);
#endif
    }

#if WINDOWS_UWP
    private async System.Threading.Tasks.Task SendUdpMessage(string message)
    {
        using (var stream = await client.GetOutputStreamAsync(hostName, broadcastPort.ToString()))
        {
            using (var writer = new Windows.Storage.Streams.DataWriter(stream))
            {
                var data = Encoding.UTF8.GetBytes(message);
                writer.WriteBytes(data);
                await writer.StoreAsync();
            }
        }
    }
#endif
}
Web Server
I’m impressed with how many smart choices were made in the creation of the HoloLens. One of those I hadn’t heard of before I started this exploration is the built-in web server that lets you access and manage your HoloLens portal from any web browser.
Now I love that, and I wanted to have the same capability in my Unity app so that I and others could connect into it via a web browser to enable some collaboration and improvements in how you can get content into HoloLens.
And I can report it works! Here is what I came up with – In the example below I can add game objects in front of the user in the HoloLens:
How did I do this? I couldn’t find an off the shelf web server in the Unity asset store, and most of the source code I found used HttpListener – which isn’t supported in UWP.
But I did find this simple c# http server on github which uses StreamSocketListener (which is supported on HoloLens).
I forked this and added support for Unity3d.
I added calling back on the main thread and using custom attributes to make it simpler to add in a route handler.
Just drag in my code from github, and then here is an example of how to have a page that adds cubes in front of the user:
DemoRoute.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using StreamSocketHttpServer;
using System.Text;

[UnityHttpServer]
public class DemoRoute : MonoBehaviour
{
    [UnityHttpRoute("/addcube")]
    public void RouteAddCube(HttpRequest request, HttpResponse response)
    {
        // spawn a cube 1m in front of where the user is looking
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = Camera.main.transform.position + Camera.main.transform.forward;
        cube.transform.localScale = new Vector3(0.1f, 0.1f, 0.1f);
        response.Redirect("/");
    }
}
Wow! That works really well !
Shared Experiences
One area I am most excited about in AR/VR/MR is multi-user collaboration. Everything becomes much more engaging when you are not alone in your virtual world. Last year I took 400 kids (16 at a time) on a vircursion to the moon.
Now I’d heard HoloLens had multi-user capability, and I’d seen some videos, but now was finally the time to see what they have and how it works (and importantly how can I use it).
So it turns out there isn’t really a multi-user capability built in.
But that actually makes sense, because depending on your application you will have different needs for a multi-user experience, and you will choose the appropriate multi-player framework.
Microsoft’s Mixed Reality Toolkit for Unity3d gives us two solutions to get started: one a client/server system, and the other based on Unity’s built-in networking, UNET.
Shared Anchors and Local Co-ordinate Space
No matter which framework or middleware you choose for multi-player networking, the same basic approach is required to create a shared experience between multiple users in the same space.
A few weeks ago I looked at the world-anchor system in HoloLens, and how you can not only attach them to game objects to “lock” them into a particular position and orientation in your physical space – but how you can also save that anchor to the Anchor Store so you can reload it later to keep that content at the same anchor even if your app is relaunched.
Well the great news is those saved anchors can be shared between devices, and so those anchors can be used to inform multiple headsets about where content should be physically located in your space.
This is critical because we also saw when I started this journey that when you first launch your app the origin is set to be (0,0,0) wherever your HoloLens headset is positioned at that time. So if two users on different HoloLens’ launched the same app in different rooms, they would both have the origin set differently. If one user sent the other player the position of content (say 0,0,-2) then it would not be in the same physical space.
So the other benefit of anchors is that we can create a new co-ordinate system relative to that anchor. So below we have a dinosaur that is positioned and rotated relative to the anchor. We can send that transform to other users and they will position it relative to the anchor in their scene, and bam: content is in identical position and rotation for all users.
The simplest way to manage content is to attach your anchor to a game object that will be the parent of all content. Then whenever content is added, you need to send its transform relative to that parent. Parenting your content is not always possible, so if it can't be a child, another way is to translate from its co-ordinate system to the local one of that anchor using transform.InverseTransformPoint().
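As a rough sketch (my own code, with anchorRoot standing in for whatever game object holds the shared anchor):

```csharp
using UnityEngine;

public class AnchorRelativeSync : MonoBehaviour
{
    public Transform anchorRoot;  // the game object the shared world anchor is attached to

    // Sender side: express a world-space pose relative to the anchor before networking it.
    public void ToAnchorSpace(Transform content, out Vector3 localPos, out Quaternion localRot)
    {
        localPos = anchorRoot.InverseTransformPoint(content.position);
        localRot = Quaternion.Inverse(anchorRoot.rotation) * content.rotation;
    }

    // Receiver side: rebuild the world-space pose from the anchor-relative one.
    public void FromAnchorSpace(Transform content, Vector3 localPos, Quaternion localRot)
    {
        content.position = anchorRoot.TransformPoint(localPos);
        content.rotation = anchorRoot.rotation * localRot;
    }
}
```

Because both devices agree on the anchor's physical location, the reconstructed pose lands in the same physical spot even though each device has a different world origin.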
Unity has its own documentation about Anchor Sharing.
That’s it, that’s the most important part of sharing content. Now I’ll look at some example of multi-user networking implementation.
Sharing Service (Client/Server)
If you’ve followed the Holograms 240 course online then you know how difficult this looks – I’ve watched it several times and I still couldn’t understand what it does and how it works, let alone jump in and use it. The amount of contextual knowledge you need seems massive.
What I did in the end was start with the example SharingTest scene in the Mixed Reality Toolkit for Unity3d and step through all the code, and then jump to the native code in the Mixed Reality Toolkit to understand what the networking is doing.
I do finally have a grasp on it all – and it’s pretty impressive – albeit overly complex.
Before I explain how it works, I encourage you to run it yourself – it’ll help! You only need a single HoloLens (or Emulator).
The sharing service uses a client-server architecture. In this case the server is also authoritative, in that it is not just a dumb generic server routing content: it has some smarts specifically implemented for HoloLens – it’s not much – but it does understand physical rooms and anchors.
So first thing you want to do is launch the server. It comes pre-built in the toolkit. Run it by going to the HoloToolkit menu -> Sharing Service -> Launch Service:
And you’ll see the service is running in a terminal window:
If you want you can also install it as a windows service so it automatically starts up when you reboot your windows machine.
Next, open up the Sharing Test scene in Unity. Go to the sharing prefab in the scene and set the IP address of your server. If you are using the emulator you can keep it as localhost, but if you are using a real HoloLens you want to set the IP address of your server – you can easily get it by looking at the terminal window running the Sharing Service.
Build for your device and run it. The app will create an anchor and then position the white sphere at the anchor. (It also puts a cube where the heads of players are.) The debug text will show that your HoloLens is connected to the server, joined the default session, created a physical room, and then uploaded its anchor to the room.
Now press PLAY in Unity and you will see the editor will connect to the server also, but this time it will join the existing room and download the existing anchor. If you walk around your physical room with the HoloLens you will find a purple cube that is the head of the player in Unity. Wow. It works.
I’ve recorded a more detailed overview:
RakNet
The sharing service is built upon an open-source high performance multiplayer game network engine called RakNet. RakNet was acquired by Oculus in 2014 and immediately open-sourced. It has many great features and is very flexible to build games with; it lets you fine-tune how messages are sent between players, create sessions, synchronise shared data, use VOIP – just to name a few. It is written in C++ but it has wrappers for most platforms, including C#.
Microsoft chose it for its NASA experiences, and then later added Unity support in the Mixed Reality Toolkit – which is how we get to enjoy it. You can find the full source to it in the non-Unity3d Mixed Reality Toolkit. That is good if you want to add new features to the Sharing Service itself.
Architecturally you have the server running and clients connect into that server. They can join the same session as other users so they can communicate in that group. Any data “elements” created through the RakNet SDK will automatically be synchronised to all users in that session (if a player joins later, they will also automatically download all elements so they can get up to date). As I said above, Microsoft have added some specific features for creating rooms and anchors. That way clients can choose to enter a room and get the physical anchor for that space. It’s up to your app to decide what to do if the user is not in the same physical room; you can reposition the content somewhere in front of the user without the anchor so they can participate as well.
Connecting
To connect from Unity, all you need to do is add the Sharing prefab to your scene and set the IP address (here I replaced localhost with the IP address of my PC):
Oh yes, I know what you are thinking: “I don’t want to hard code my server address, why not just tick that Auto Discover Server checkbox?”. Unfortunately that feature seems to have been broken about 9 months ago, and there doesn’t seem to be any fix (or anyone in a hurry to fix it). The current recommendation is to load the IP address some other way, or build your own discovery capability.
Now that lets your HoloLens connect to the server and join a session with other users.
The next thing you want to do is establish a shared anchor. In the toolkit there is a script named ImportExportAnchorManager.
That will detect if a physical room is already registered on the sharing service, and if it has an anchor it will download and install the anchor (and apply the anchor to a gameobject). If no anchor is found, the script will make its own anchor, then save it in the anchor store so it can export it and upload it to the sharing service.
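Under the hood that import/export goes through Unity's WorldAnchorTransferBatch API. A rough sketch of the export side (simplified from what the toolkit script does – the class name, method names and the "roomAnchor" id here are mine):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.VR.WSA;          // WorldAnchor (namespace as of Unity 5.x/2017)
using UnityEngine.VR.WSA.Sharing;  // WorldAnchorTransferBatch

public class AnchorExporter : MonoBehaviour
{
    private List<byte> exportedData = new List<byte>();

    public void ExportAnchor(GameObject anchorObject)
    {
        var batch = new WorldAnchorTransferBatch();
        batch.AddWorldAnchor("roomAnchor", anchorObject.GetComponent<WorldAnchor>());
        WorldAnchorTransferBatch.ExportAsync(batch, OnExportDataAvailable, OnExportComplete);
    }

    private void OnExportDataAvailable(byte[] data)
    {
        exportedData.AddRange(data);  // serialized anchor chunks arrive incrementally
    }

    private void OnExportComplete(SerializationCompletionReason reason)
    {
        if (reason == SerializationCompletionReason.Succeeded)
        {
            // upload exportedData to the sharing service room here
        }
    }
}
```

The import side mirrors this with WorldAnchorTransferBatch.ImportAsync, then locks the target game object to the downloaded anchor.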
Spawning shared game objects
Strap yourself in, now it gets complicated – you want to synchronise the world-state between your players.
Actually, if you use the scripts provided it's not that complicated once you understand exactly what is going on.
I couldn’t find any decent documentation, but I was able to learn a lot looking through the other sharing test scene in Unity, which is called Spawn Sharing Test.
Basically you now have a magic friend the Prefab Spawn Manager. It will maintain a list of named prefabs that can be instantiated into the scene. If your code tells the Prefab Spawn Manager to instantiate one by name (and give it the transform details for where to spawn it), then the prefab manager will do 2 things:
1. Spawn the prefab locally
2. Use the network to save those details on the server, so all other players are notified of the name and location and also spawn the prefab.
Now there is one additional feature of the Prefab Spawn Manager – and that is you can associate your own custom data with that prefab that can also be replicated on the network. To do this you create a data class derived from SyncSpawnedObject. You can use various data types to store data. Then when you ask the Prefab Spawn Manager to instantiate you a prefab you can also pass it your data class instance. The prefab manager will take care of also replicating that around the network.
Your scripts on the prefab can access your data object via the DefaultSyncModelAccessor component.
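A hedged sketch of what that looks like. SyncSpawnedObject, SyncInteger, the [SyncData] attribute and PrefabSpawnManager come from the toolkit, but the SyncEnemy class, its field and the exact Spawn argument list here are illustrative – check them against the toolkit's Spawn Sharing Test scene:

```csharp
using HoloToolkit.Sharing.Spawning;
using HoloToolkit.Sharing.SyncModel;
using UnityEngine;

// Custom data replicated alongside the spawned prefab (illustrative).
public class SyncEnemy : SyncSpawnedObject
{
    [SyncData] public SyncInteger Health;
}

public class EnemySpawner : MonoBehaviour
{
    public PrefabSpawnManager spawnManager;

    public void SpawnEnemy()
    {
        var data = new SyncEnemy();
        // Spawns locally and replicates to every other user in the session.
        spawnManager.Spawn(data,
            Camera.main.transform.position + Camera.main.transform.forward,
            Quaternion.identity, null, "Enemy", false);
    }
}
```

On the prefab itself, a script would then fetch the replicated model through the DefaultSyncModelAccessor component and cast it back to SyncEnemy.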
Wow – that all works and is pretty neat.
Sharing Service Review
So the sharing service is a good way to quickly get a multi-user experience running once you understand the above.
The downside is your users need to install and run the sharing service.
You could recompile it and run it on the cloud, but then you probably need some kind of user login system.
Another downside is that this solution is quite complicated and not well documented. So you can also look at Unity's networking, which among other features enables you to run peer-to-peer, with one user being assigned the job of server.
Anyway, I hope this has been useful. Let me know any feedback!
October 24, 2017 at 1:35 am
Thank you for this article. From what I found this is the clearest piece of information on how the Sharing system works. | http://talesfromtherift.com/hololens-contest-week-9/?replytocom=22709 | CC-MAIN-2017-47 | en | refinedweb |
An error happens when compiling the Android source code (source code version: 6.0.1; computer memory: 6 GB; host system: Ubuntu 14.04). The log is below:
including ./system/netd/Android.mk ...
including ./system/security/keystore-engine/Android.mk ...
including ./system/security/keystore/Android.mk ...
including ./system/security/softkeymaster/Android.mk ...
including ./system/tools/aidl/Android.mk ...
including ./system/update_engine/Android.mk ...
including ./system/vold/Android.mk ...
including ./system/weaved/Android.mk ...
including ./system/webservd/Android.mk ...
including ./tools/external/fat32lib/Android.mk ...
Starting build with ninja
ninja: Entering directory `.'
[ 0% 1/21275] Ensure Jack server is installed and started
Jack server already installed in "/home/eddy/.jack-server"
Launching Jack server java -Djava.io.tmpdir=/tmp -Dfile.encoding=UTF-8 -XX:+TieredCompilation -cp /home/eddy/.jack-server/launcher.jar com.android.jack.launcher.ServerLauncher
[ 0% 17/21275] host Java: conscrypt-host (out/host/common/obj/JAVA_LIBRARIES/conscrypt-host_intermediates/classes)
warning: [options] bootstrap class path not set in conjunction with -source 1.7
external/conscrypt/src/openjdk/java/org/conscrypt/Platform.java:39: warning: AlgorithmId is internal proprietary API and may be removed in a future release
import sun.security.x509.AlgorithmId;
^
external/conscrypt/src/openjdk/java/org/conscrypt/Platform.java:243: warning: AlgorithmId is internal proprietary API and may be removed in a future release
return AlgorithmId.get(oid).getName();
^
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
3 warnings
[ 0% 18/21275] host Java: signapk (out/host/common/obj/JAVA_LIBRARIES/signapk_intermediates/classes)
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
[ 0% 73/21275] Building with Jack: out/target/common/obj/JAVA_LIBRARIES/framework_intermediates/with-local/classes.dex
FAILED: /bin/bash out/target/common/obj/JAVA_LIBRARIES/framework_intermediates/with-local/classes.dex.rsp
GC overhead limit exceeded
Try increasing heap size with java option '-Xmx<size>'
Warning: This may have produced partial or corrupted output.
ninja: build stopped: subcommand failed.
make: *** [ninja_wrapper] Error 1
#### make failed to build some targets (14:09 (mm:ss)) ####
eddy@eddy-OptiPlex-390:~/WORKING_DIRECTORY$
Same problem here. I tried setting JACK_SERVER_VM_ARGUMENTS to include -Xmx=4g, but when building again the log output showed that this was not included in the startup. Dunno why, seems like the env vars do not get passed to the build script correctly.
Solution: before starting a clean android build set the JACK_SERVER_VM_ARGUMENTS to include -Xmx=4g, then stop and stat the jack server manually. Given you're in the main source tree of AOSP run the following:
export JACK_SERVER_VM_ARGUMENTS="-Dfile.encoding=UTF-8 -XX:+TieredCompilation -Xmx4g" ./prebuilts/sdk/tools/jack-admin kill-server ./prebuilts/sdk/tools/jack-admin start-server
This resolved the issue for me. GL! | https://codedump.io/share/eFctX7UiJEHY/1/android-source-code-compile-error-quottry-increasing-heap-size-with-java-option-39-xmxltsizegt39quot | CC-MAIN-2017-47 | en | refinedweb |
asdf
import datetime def plot_avg_spd(df, t): """ Plot the average speed of all recorded buses within t minute intervals Args: df (pd.DataFrame): dataframe of bus data t (int): the granularity of each time period (in minutes) for which an average is speed is calculated """ def modIt(a): return (a - pd.Timedelta(minutes=(a.minute % t))).time() df['tmstmp'] = df['tmstmp'].apply(modIt) groups = df.groupby(['tmstmp']) y = [] x = [] for key, group in groups: a = group['spd'].mean() y.append(a) x.append(key) area = np.pi * 10 # e = return plt.scatter(x,y) pass | https://codedump.io/share/wYdaM5WDULuP/1/plot-the-average-speed-of-all-recorded-buses-within-t-minute-intervals | CC-MAIN-2017-47 | en | refinedweb |
This is an English version of some articles I posted a while ago in my blog. XML Literals are a great way to handle XML files and the community doesn't use it as much as it should.
XML Literals allow you to use XML syntax in your code. It’s easy to work with XML files this way, since you have that Tag in the code, but it’s also quicker to access information rather than the traditional methods using XmlDocument and XmlElement. It’s available in the namespace System.Xml.Linq, since the .NET Framework 3.5/Visual Studio 2008 (for Visual Basic Only) supports most of the Extensible Markup Language (XML) 1.0 specification, and together with Lambda Expressions and/or LINQ, gives you a better experience with XML files. It’s also available in the intellisense, and it automatically indents the code.
The basic concept looks like this:
Dim msg = <msg>
This is a test!
This is a test!
</msg>.Value
MessageBox.Show(msg, "XML Literals")
This will show you a MessageBox, preserving the spaces, tabs and page breaks. Notice that it doesn't need the line continuation character “_” (which will not even be necessary in Visual Studio 2010 for most of the code).
MessageBox
You can create your XML file in runtime mode. Here’s an example of how to achieve that:
Dim bookList = _
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- List of books and magazines -->
<library>
<books>
<book name="The Hunger Games " author="Suzanne Collins"/>
<book name="Breaking Dawn" author="Stephenie Meyer"/>
<book name="The Last Song" author="Nicholas Sparks"/>
</books>
<magazine>
<magazineName>"MSDN Magazine"</magazineName>
<magazineName>"Code Magazine"</magazineName>
</magazine>
</library>
The variable bookList is now an XDocument that you can work as an XML file. To save the file on the disk, you just need to use the Save() method:
bookList
Save()
bookList.Save("c:\library.xml")
The previous example generates an easy XML file and saves it to disk. To load the file and handle it, you can use the Load() event. This will load the file and show the magazine name, using the Descendants property, which allow access to descendant nodes by name from an XElement or XDocument object:
Load()
Dim xmlFile = XDocument.Load("c:\library.xml")
Debug.WriteLine(xmlFile...<magazineName>.Value)
This will show in the Immediate Window “MSDN Magazine” because it’s the first name that it found. We can also get the second magazine (in this case), using Lambda Expressions:
Dim xmlFile = XDocument.Load("c:\library.xml")
Debug.WriteLine(xmlFile...<magazineName>.Where(Function(f) _
f.Value = "Code Magazine").Value)
This is for “regular” node elements but if you need to show attributes, then you should use the following syntax:
Dim xmlFile = XDocument.Load("c:\library.xml")
Debug.WriteLine(xmlFile...<book>.@author)
And for the Lambda Expression to filter for a specific value (this will search on the book name and show the author name):
Dim xmlFile = XDocument.Load("c:\library.xml")
Debug.WriteLine(xmlFile...<book>.Where(Function(f) _
f.@name = "Breaking Dawn").@author)
You have several ways to show information, looping on the values, using LINQ to XML or Lambda Expressions. You can list all the magazines this way:
For Each m In From element In bookList.<library>.<magazine>.<magazineName>
Debug.WriteLine(m.Value)
You can also use the Descendants property and significantly simplify the code:
Descendants
For Each m In From element In bookList...<magazineName>
Debug.WriteLine(m.Value)
To show the book names you can use the same method, but since you now deal with attributes, you use the “@” to define that you want the attribute, plus the attribute name.
@
For Each book In From element In bookList...<book>
Debug.WriteLine("Book: " & book.@name.ToString)
Debug.WriteLine("Author: " & book.@author.ToString)
' Separation line
Debug.WriteLine(New String("-"c, 40))
But you can also filter the information before showing it. This example uses LINQ to XML to check all of the books that have a name containing the keyword “Song”.
Song
' Using LINQ to XML to filter the information
Dim bookSearch = From b In bookList...<book> _
Where b.@name.ToString.Contains("Song") _
Select b.@name, b.@author
' Show the results
For Each book In From element In bookSearch
Debug.WriteLine("Book: " & book.name)
Debug.WriteLine("Author: " & book.author)
' Separation line
Debug.WriteLine(New String("-"c, 40))
Embedded expressions are expressions that you can use in the XML code, using the tags <%= expression %>, like it’s available in ASP.NET. You can use them to build or modify the XML file and that makes it really easy to create a file from a DataTable, List(Of T), Dictionary, etc.
<%= expression %>
DataTable
List(Of T)
Dictionary
Here’s a very straight forward example, using a Func() delegate (Lambda Expression) that adds two values:
Dim f As Func(Of Integer, Integer, Integer) = Function(x, y) x + y
Dim example = _
<test>
<value><%= f(125, 125).ToString() %></value >
</test>
The result will be:
<test>
<value>250</value>
</test>
If you have another datasource, like a List(Of T), you can list all the values, using embedded expressions, to an XML file:
' Creates a list with some book names
Dim bookList As New List(Of String)
bookList.AddRange(New String() {"The Hunger Games", "Breaking Dawn", "The Last Song"})
' Creates the XML e saves it to disk
Dim newBookList1 = _
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<library>
<books>
<%= From b In bookList Select <book><%= b %></book> %>
</books>
</library>
newBookList1.Save("c:\result.xml")
This will be the result:
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<library>
<books>
<book>The Hunger Games</book>
<book>Breaking Dawn</book>
<book>The Last Song</book>
</books>
</library>
Another example of using embedded expressions is using a DataTable. In this case, it will create an XML file with the attributes “name” and “author”:
name
author
' For this example is created a DataTable manually but
' could be the result of a SQL query or stored procedure
Dim dt As New DataTable("Books")
dt.Columns.Add("Book", GetType(String))
dt.Columns.Add("Author", GetType(String))
dt.Rows.Add("The Hunger Games", "Suzanne Collins")
dt.Rows.Add("Breaking Dawn", "Stephenie Meyer")
dt.Rows.Add("The Last Song", "Nicholas Sparks")
Dim ds As New DataSet
ds.Tables.Add(dt)
' Creates the XML e with two attributes: "name" and "author"
Dim newBookList2 = _
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- my book list -->
<library>
<books>
<%= From b In ds.Tables("Books") Select _
<book name=<%= b.Item("Book") %>
author=<%= b.Item("Author") %>/> %>
</books>
</library>
' Saves it to disk
newBookList2.Save("c:\library.xml")
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<library>
<books>
<book name=" The Hunger Games" author="Suzanne Collins" />
<book name=" Breaking Dawn" author="Stephanie Meyer" />
<book name=" The Last Song" author="Nicholas Sparks" />
</books>
</library>
To modify any information in an XML file, using XML Literals, you just need to read the file, change the value and then save it to disk.
Here’s an example:
Dim xmlFile = XDocument.Load("c:\library.xml")
xmlFile...<magazineName>.Value = "New Value"
xmlFile.Save("c:\library.xml")
But using this approach, only the first value that is found will be changed. If you need to change a specific value, like you normally do, you need to filter the information first to indicate what value to change:
Dim xmlFile = XDocument.Load("c:\library.xml")
Dim element = xmlFile.<library>.<books>.<book>.Where(Function(f) _
f.@name = "The Last Song")
element.@author = "Jorge Paulino"
xmlFile.Save("c:\library.xml")
This will change the name of the author for the book “The Last Song”, from “Nicholas Sparks” to “Jorge Paulino” (that was nice!). But once again, using the Descendants property saves you some code:
Dim xmlFile = XDocument.Load("c:\library.xml")
xmlFile...<book>.Where(Function(f) _
f.@name = "The Last Song").@author = "Jorge Paulino"
xmlFile.Save("c:\library.xml")
To insert a new node into the XML file, you first need to build the new element (XElement) and then add it to the right position. We can do that in two ways:
XElement
Dim xmlFile = XDocument.Load("c:\library.xml")
Dim element = New XElement("book", _
New XAttribute("name", "XML Literals"), _
New XAttribute("author", "Jorge Paulino"))
Dim parent = xmlFile...<books>.FirstOrDefault()
parent.Add(element)
xmlFile.Save("c:\library.xml")
Or, the easy way:
Dim xmlFile = XDocument.Load("c:\library.xml")
Dim element = <book name="XML Literals" author="Jorge Paulino"/>
Dim parent = xmlFile...<books>.FirstOrDefault()
parent.Add(element)
xmlFile.Save("c:\library.xml")
In this example, we can change the values of the attributes dynamically, using embedded expressions like we saw before.
Deleting a node is very similar to the modification method. You can remove all the nodes:
Dim xmlFile = XDocument.Load("c:\library.xml")
xmlFile...<magazineName>.Remove()
xmlFile.Save("c:\library.xml")
Or remove a specific node:
Dim xmlFile = XDocument.Load("c:\library.xml")
xmlFile...<book>.Where(Function(f) f.@author = "Suzanne Collins").Remove()
xmlFile.Save("c:\library.xml")
You can also use XML Literals to read information from the web, like RSS. This example reads my personal blog RSS and filters by the “VB.NET” category (that is defined by the tags). This show how powerful and easy it is to work with XML Literals.
Dim xmlFile = XDocument.Load("")</a />
Dim blogList = xmlFile...<item>.Where(Function(f) _
f.<category>.Value = "VB.NET").<title>.ToList()
For Each item As XElement In blogList
Console.WriteLine(item.Value)
Console.ReadKey()
And the console result:
XML literals provide several methods to work with XML files. Today, you have XML files for everything (reports, configurations, data storage, RSS, etc.) and it’s so important to handle it right, fast and in a easy way.
I hope this article helps you to become better with XML Literals, and use them more!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Thank you for your response.
As I said, I have ONLY been able to get your examples to work. When I try substituting a real-world xml file using your code examples I get only empty reponses.
I did notice early on that the literals were case sensitive so that is not my problem.
What imports do I need. I have only added System.Xml.Linq.
The following is the first couple of lines from the real-world xml files I am trying to get to work:
File #1
<?xml version="1.0"?>
<NameDetail xmlns="com.rovicorp.metadataservice" xmlns:
File #2
<NameDiscography xmlns="com.rovicorp.metadataservice" xmlns:
File #3
<NameSongs xmlns="com.rovicorp.metadataservice" xmlns:
Does any of these lines suggest anything I should be doing?
As you can see only the first file has the xml version info.
Are there any other articles elsewhere that might help me that you are aware of?
Thank you
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/71954/XML-Literals?fid=1568057&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None | CC-MAIN-2017-47 | en | refinedweb |
Introduction to Stream Control Transmission Protocol).
Listing 1. echo_client.c
#define USE_SCTP #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #ifdef USE_SCTP #include <netinet/sctp.h> #endif #define SIZE 1024 char buf[SIZE]; char *msg = "hello\n"; #define ECHO_PORT 2013 int main(int argc, char *argv[]) { int sockfd; int nread; struct sockaddr_in serv_addr; if (argc != 2) { fprintf(stderr, "usage: %s IPaddr\n", argv[0]); exit(1); } /* create endpoint using TCP or SCTP */ sockfd = socket(AF_INET, SOCK_STREAM, #ifdef USE_SCTP IPPROTO_SCTP #else IPPROTO_TCP #endif );); } /* write msg to server */ write(sockfd, msg, strlen(msg) + 1); /* read the reply back */ nread = read(sockfd, buf, SIZE); /* write reply to stdout */ write(1, buf, nread); /* exit gracefully */ close(sockfd); exit(0); }
Jan Newmarch has written many books and papers about software engineering, network programming, user interfaces and artificial intelligence, and he is currently digging into the I
Excellent!
An excellent article concerning introduction to SCTP.
Very good!
/Best regards
J | http://www.linuxjournal.com/article/9748?page=0,1&quicktabs_1=0 | CC-MAIN-2017-47 | en | refinedweb |
Check for bad line end characters on the shebang line. A new line character (lf) is expected - a carriage return (cr) or cr/lf pair will cause grief of the nature you describe.
Maybe the NFS share is mounted as "noexec"? (I think that would produce the exact error you're getting, whereas some trailing junk char(s) in the shebang line would rather produce "... No such file or directory").
What does mount show for the share in question?
Some ideas to try
cu
P. | http://www.perlmonks.org/?node_id=801698 | CC-MAIN-2017-47 | en | refinedweb |
Also published on Microsoft’s MSDN Network at
Applies to:
- Microsoft ASP.NET 2.0
- Microsoft Visual Studio 2005
- Microsoft Internet Information Services
Link To Part 1: Security and Configuration
Contents
Introduction
Technologies Used
The Application and Project
The ObjectDataSource in Detail
The Return Value of the Select Method (Type Collection)
The Select Method Itself
The Custom Sort Criteria
ObjectDataSource In GridView (Data Control)
Conclusion
Introduction
Figure 1. Membership Editor.
The tiers of this solution are defined as follows. The first tier, the ASP.NET page (also known as the presentation layer), interfaces with two business objects through the object data source. These business objects function as the middle tier, and they are wrappers for members and roles. The third tier, or back end, consists of the Membership and Role Manager APIs provided by ASP.NET. The middle tier objects can easily be dropped into any ASP.NET 2.0 project and used directly, with almost no changes.
This article explains in depth the implementation of the middle tier—that is, the data objects, as well as the ObjectDataSource that is associated with them. It then explains how to use these objects in an ASP.NET Web project that uses Microsoft SQL Server Express 2005, which comes bundled with Visual Studio 2005. However, because the Membership API provided by Microsoft uses its provider technology, the solution presented here is database independent: membership and role information could just as easily come from LDAP, SQL Server, or Oracle.
Technologies Used
The ObjectDataSource
There are two ObjectDataSource instances defined. One is for Membership Data (User Names, Creation Date, Approval, and so on), and the other is for Roles (Administrator, Friends, and so on). Both of these data sources are completely populated with all of the data access methods—that is, they both have Member functions that perform inserts, updates, deletes, and selects. Both ObjectDataSource instances return a Generic List type, which means that in the GridView, the column names are automatically set to the property value names of the ObjectDataSource. In addition, custom sorting is implemented so that users can click the column headers in the GridView in order to sort the data forwards or backwards, as desired.
SQL Server Express 2005 and Web.Config
The data provider source for the Membership and Role databases is SQL Server Express 2005. The appropriate entries are set in the web.config file in order to make this happen. A short discussion is given later in this article of how to set up a new project from scratch. The connection string for SQL Server Express 2005 is not mentioned in the web.config file, because it is already defined in the Machine.Config file that is included as a default part of the Microsoft .NET 2.0 Framework.
IIS (5.1 and 6.0) Compatible
The Web server can be either IIS 5.1 or 6.0. To test multiple users logged in to your Web application at the same time, you must use IIS, because the built-in development Web server does not correctly maintain the state of the different logged-in users. Although the ASP.NET Web configuration tool could be made to work with IIS, the additional security work necessary to enable this was not done.
The GridView Control
The GridView is used to present the data for both membership and roles. As mentioned earlier, because of the use of a Generic type for the ObjectDataSource, the column names of the GridView are automatically named after the property values of the ObjectDataSource. Without the use of Generics, the column names revert to meaningless default values and must each be edited by hand.
The Application and Project
The project necessary in order to run this utility is very simple and self-contained. The project files, which are available for download, contain a full working example. Because there is no direct database access to the users and roles, all that is needed is to grab the three data objects (MembershipDataObject.cs, MembershipUserSortable.cs, and RoleDataObject.cs; see Figure 2).
Figure 2. Membership Editor project
In the SamplePages folder there are several other samples that demonstrate the use of the previously mentioned modules. As one example, Membership.aspx is the example shown in Figure 1. It can be used for selecting, updating, inserting, and deleting Members and Roles, as well as for assigning roles to members.
With a working ASP.NET 2.0 application that already has a working membership module, these pages should need no external configuration beyond what has already been done. These files can be copied directly into a project and they will just work.
If this is the first implementation of Membership and Role Management in an application, the process to follow to create a solution using these objects is as follows:
- Using Visual Studio 2005, create a new Web project of the type ASP.NET Web Site.
- Click Website / ASP.NET Configuration on the menu.
- Follow the wizard steps (1 to 7) to create some sample users and roles. This will effectively create a valid web.config file in the current project that has enough information to have Member Management up and running. By default, it will use SQL Server Express 2005 in its default configuration.
- Include the three .cs files in the project, and then include the sample .aspx pages as samples.
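After step 3, the wizard-generated web.config will typically contain entries along the following lines. This fragment is illustrative only; the exact values depend on your machine configuration, and no connection string is shown because the SQL Server Express 2005 default is inherited from Machine.Config.

```xml
<configuration>
  <system.web>
    <!-- Forms authentication drives the login and membership pages -->
    <authentication mode="Forms" />
    <!-- Role Manager must be enabled for the role-editing pages -->
    <roleManager enabled="true" />
    <!-- No membership provider or connection string is declared here:
         the default AspNetSqlMembershipProvider from Machine.Config
         is used, which targets SQL Server Express 2005 -->
  </system.web>
</configuration>
```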
The ObjectDataSource in Detail
The ObjectDataSource technology enables the creation of a datasource that behaves very similarly to the SqlDataSource—that is, it exposes interfaces that allow for selecting, updating, inserting, and deleting records (or record-like objects) from a persistent data store (such as a database). The next several sections of this article will discuss the object (or class file) that the ObjectDataSource uses to manipulate membership. Its name in the project is MembershipUserODS.cs.
The Class (MembershipUserODS)
Because the data is retrieved from the Microsoft Membership API, an ObjectDataSource is used to solve the problem. The first step in doing this is to create a stand-alone class that wraps MembershipUser so that it can be associated with the ObjectDataSource. The example below shows a typical set of methods that need to be implemented, and the next several sections of this article will discuss the implementation of each member function. Many of the details are left out here, but they are included in the source code provided with this article.
[DataObject(true)]
public class MembershipUserWrapper
{
    [DataObjectMethod(DataObjectMethodType.Select, true)]
    static public Collection<MembershipUserWrapper> GetMembers(string sortData)
    {
        return GetMembers(true, true, null, sortData);
    }

    [DataObjectMethod(DataObjectMethodType.Insert, true)]
    static public void Insert(string userName, bool isApproved, string comment,
        DateTime lastLockoutDate, ...)
    {
    }

    [DataObjectMethod(DataObjectMethodType.Delete, true)]
    static public void Delete(object UserName, string Original_UserName)
    {
        Membership.DeleteUser(Original_UserName, true);
    }

    [DataObjectMethod(DataObjectMethodType.Update, true)]
    static public void Update(string original_UserName, string email, ...)
    {
    }
}
The Class Declaration
The class declaration shown above is special because of the attribute [DataObject(true)]. This attribute tells the Visual Studio 2005 ObjectDataSource Creation Wizard to look only for classes marked with this special attribute when searching for data objects. See the example in the section showing where this class is assigned to a GridView component.
The Insert Method
The details of each section involve a very straightforward use of the Membership API provided by Microsoft. For example, here is what might be a typical Insert method in more detail.
[DataObjectMethod(DataObjectMethodType.Insert, true)]
static public void Insert(string userName, string password, ...)
{
    MembershipCreateStatus status;
    Membership.CreateUser(userName, password, ..., out status);
}
This class Insert is polymorphic, which means there can be multiple Insert methods used for different purposes. For example, it may be necessary to dynamically decide whether a created user should be approved depending on the circumstances. For example, a new user created in an admin screen may want to create users defaulted to approved, whereas a user register screen might default to not approved. To do this, another Insert method is needed, with an additional parameter. Here is what an Insert method that would achieve this goal might look like.
[DataObjectMethod(DataObjectMethodType.Insert, false)]
static public void Insert(string userName, string password, bool isApproved)
{
    MembershipCreateStatus status;
    // Email, password question, and password answer are not needed here,
    // so nulls are passed to the overload that accepts isApproved.
    Membership.CreateUser(userName, password, null, null, null,
        isApproved, out status);
}
As with the other methods listed here, the examples shown are not what will actually be found in the accompanying source. The examples here are meant to be illustrations of typical uses. More complete and commented uses are included in the source.
The Update Method
The Update method is a very straightforward implementation of the Membership API. Just like the Insert method, there can be multiple implementations of Update. Only one implementation is shown here. In the code available for download, there are more polymorphic implementations of Update, including one that just sets the IsApproved property (shown in the following example).
[DataObjectMethod(DataObjectMethodType.Update, false)]
static public void Update(string userName, bool isApproved)
{
    bool dirtyFlag = false;
    MembershipUser mu = Membership.GetUser(userName);
    if (mu.IsApproved != isApproved)
    {
        dirtyFlag = true;
        mu.IsApproved = isApproved;
    }
    if (dirtyFlag == true)
    {
        Membership.UpdateUser(mu);
    }
}
The Delete Method
The Delete method is the simplest; it takes a single parameter, UserName.

static public void Delete(string UserName)
{
    Membership.DeleteUser(UserName, true);
}
The Select Method with a Sort Attribute
The Select method—GetMembers, in this case—has multiple components, each of them worthy of discussion. First, what it returns is discussed, and then the actual method itself, and finally, how it sorts what it returns.
The Return Value of the Select Method (Type Collection)
The return value of the Select method (which also is referred to as Get) is a Generic Collection class. Generics are used because the ObjectDataSource ultimately associated with the class uses reflection to determine the column names and types. These names and types are associated with each row of data that is returned. This is the same way that a SqlDataSource uses the database metadata of a table or stored procedure to determine the column names of each row. Since the return type of the Select method is MembershipUserWrapper, which inherits from MembershipUser, most of the properties of this class are the same properties that are associated with MembershipUser. Those properties include:
- ProviderUserKey
- UserName
- LastLockoutDate
- CreationDate
- PasswordQuestion
- LastActivityDate
- ProviderName
- IsLockedOut
- LastLoginDate
- IsOnline
- LastPasswordChangedDate
- Comment
Jumping ahead of ourselves a little, one very nice feature of property values is that they can be read-only (no set method), write-only (no get method), or read/write. The ObjectDataSource wizard recognizes this and builds the appropriate parameters, so that when the data control is rendered (using the ObjectDataSource), only the fields that are updatable (read/write) are enabled for editing. This means, for example, that you cannot change the UserName property. If this does not make sense now, it will later, when the ObjectDataSource and the data components are discussed in more detail.
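To make this concrete, wrapper properties along the following lines would produce that behavior. This is an illustrative sketch, not the article's actual listing; the private _membershipUser backing field is an assumption introduced here.

```csharp
private MembershipUser _membershipUser;  // assumed backing field

// Read-only: no set accessor, so the wizard generates no update
// parameter for it and the GridView will not allow editing it.
public string UserName
{
    get { return _membershipUser.UserName; }
}

// Read/write: this property becomes editable in the GridView.
public string Comment
{
    get { return _membershipUser.Comment; }
    set { _membershipUser.Comment = value; }
}
```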
The Select Method Itself
Just like Insert and Update, the Select method is polymorphic. There can be as many different Select methods as there are data scenarios. For example, it may be desirable to use the Select method to select users based on whether they are approved, not approved, or both. Typically, there is one Get method that has the most parameters, and the other Get methods call it. In our case, there are three Get methods: one to retrieve all records, one to retrieve records based on approval, and one to retrieve an individual record based on a select string. In the following example, the method that returns all users is being called. By setting both Booleans to true, all users will be returned.
[DataObjectMethod(DataObjectMethodType.Select, true)]
static public List<MembershipData> GetMembers(string sortData)
{
    return GetMembers(true, true, null, sortData);
}
The next example shows a more detailed Get method. This example shows only the beginning of the method. The details of the method not shown include finishing the property assignments, filtering for approval status and rejecting the records not meeting the criteria, and applying the sort criteria. Following this example is more discussion about the sort criteria. (Note that calling GetAllUsers on a database with more than a few hundred users [the low hundreds] is quickly going to become an expensive operation.)
[DataObjectMethod(DataObjectMethodType.Select, true)]
static public List<MembershipData> GetMembers(bool AllApprUsers,
    bool AllNotApprUsers, string UserToFind, string sortData)
{
    List<MembershipData> memberList = new List<MembershipData>();
    MembershipUserCollection muc = Membership.GetAllUsers();
    foreach (MembershipUser mu in muc)
    {
        MembershipData md = new MembershipData();
        md.Comment = mu.Comment;
        md.CreationDate = mu.CreationDate;
        ...
The Custom Sort Criteria
Notice that, in the preceding code, a parameter string named sortData is passed into GetMembers. If, in the ObjectDataSource declaration, a SortParameterName is specified as one of its attributes, this parameter will be passed automatically to all Select methods. Its value will be the name specified by the attribute SortExpression in the column of the datacontrol. In our case, the datacontrol is the GridView.
The Comparer method is invoked based on the sortData parameter coming into the GetMembers method. Since these ASP.NET Web pages are stateless, the direction of the current sort (either forward or backward) is stored in the viewstate. Each call reverses the direction of the previous call; that is, it toggles between a forward sort and a reverse sort as the user clicks the column header.
Assuming that a GridView is used, the sortData parameter passed into GetMembers contains the value of the SortExpression attribute of the GridView column that was clicked. If a reverse sort is being requested, the word "DESC" is appended to the end of the sort string. So, for example, the first time the user clicks the Email column, the sortData passed into GetMembers is "Email." The second time the user clicks that column, sortData becomes "Email DESC," then "Email," then "Email DESC," and so on. As a special note, the first time the page is loaded, the sortData parameter is passed in as a zero-length string (not null). Below is the core of the GetMembers method that retrieves and sorts the data so that it is returned in the correct order.
[DataObjectMethod(DataObjectMethodType.Select, true)]
static public List<MembershipData> GetMembers(string sortData)
{
    List<MembershipData> memberList = new List<MembershipData>();
    MembershipUserCollection muc = Membership.GetAllUsers();
    foreach (MembershipUser mu in muc)
    {
        MembershipData md = new MembershipData(mu);
        memberList.Add(md);
    }
    ... // code that implements Comparison ...
    memberList.Sort(comparison);
    return memberList;
}
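The elided "code that implements Comparison" could be sketched as follows. This is a hypothetical reconstruction, not the implementation shipped with the article: the MembershipData properties used (Email, CreationDate, UserName) are assumed to match the wrapper shown earlier, and only a few sortable columns are handled.

```csharp
// Build a Comparison<MembershipData> delegate from the sortData string.
// "Email" sorts forward; "Email DESC" sorts backward.
bool descending = sortData.EndsWith(" DESC");
string column = descending
    ? sortData.Substring(0, sortData.Length - " DESC".Length)
    : sortData;

Comparison<MembershipData> comparison = delegate(MembershipData a, MembershipData b)
{
    int result;
    switch (column)
    {
        case "Email":
            result = string.Compare(a.Email, b.Email);
            break;
        case "CreationDate":
            result = a.CreationDate.CompareTo(b.CreationDate);
            break;
        default: // includes the zero-length string on first page load
            result = string.Compare(a.UserName, b.UserName);
            break;
    }
    // Negating the result reverses the sort direction.
    return descending ? -result : result;
};
```

The same pattern extends to the remaining columns by adding one case per SortExpression value defined in the GridView.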
In the next section, when this is incorporated into a GridView, it will become more clear.
The ObjectDataSource Declaration
The easiest way to declare an ObjectDataSource is to drag and drop one from the datacontrols on the toolbar, after first creating an empty ASP.NET page with the Visual Studio 2005 wizard. After creating the ObjectDataSource, a little tag in the upper-right corner of the newly created ObjectDataSource can be grabbed; then, clicking Configure Data Source opens a wizard saying "Configure Data Source—ObjectDataSource1" (see Figure 3).
Figure 3. Configuring ObjectDataSource
At this point, two classes that are available for associating with an ObjectDataSource will be seen. MembershipUserODS is the primary subject of this article. RoleDataObject is basically the same thing, but it encapsulates Membership Roles. Also, remember that what is shown here are just the objects that are declared with the special class attribute [DataObject(true)] that was described in "The Class Definition."
After choosing MembershipUserODS, a dialog box with four tabs appears. The methods to be called from the MembershipUserODS class will be defined on these tabs. Methods for Select, Update, Insert, and Delete will be associated with member functions in the MembershipUserODS. In many cases, there will be multiple methods available in the class for each of these. The appropriate one must be chosen, based on the data scenario desired. All four tabs are shown in Figure 4. By default, the members that are marked with the special attribute [DataObjectMethod(DataObjectMethodType.Select, false)] will be populated on the tabs. Of course, however, this particular attribute is the default for Select. Changing the expression DataObjectMethodType.Select to DataObjectMethodType.Insert, DataObjectMethodType.Update, and DataObjectMethodType.Delete will make the defaults appropriate for the different tabs. The second parameter, a Boolean, signifies that this method (remembering that it may be defined polymorphically) is the default method, and that it should be used in the tab control.
The Select Method
As mentioned earlier, in the section describing the MembershipUserODS class, the GetMembers function returns a Generic Collection class. This enables the ObjectDataSourceMembershipUser control defined here to use reflection and ascertain the calling parameters associated with this GetMembers call. In this case, the parameters used to call GetMembers are returnAllApprovedUsers, returnAllNotApprovedUsers, userNameToFind, and sortData. Based on this, the actual definition of the new ObjectDataSource will be as follows.
Figure 4. Assigning the Select method
The Insert Method
The Insert method, in this case, is assigned to the member function Insert(). Notice that this method is called with only two parameters: UserName and Password (see Figure 5). The number of parameters must equal the number of parameters declared in the ObjectDataSource. The parameter declaration from the ObjectDataSource is shown below. There is a second Insert Member function defined that adds a third parameter: approvalStatus. If the functionality of this ObjectDataSource is to include inserting while setting the approvalStatus, then the other insert method should be chosen from the drop-down list. That would cause the following InsertParameters to be inserted into your .aspx page. If the one with two parameters is chosen, the block would not include the asp:Parameter with the name isApproved in it. Again, keep in mind that this example may not agree with the source code enclosed, and that it is here only as an example. The source enclosed is much more complete.
Figure 5. Assigning the Insert method
Also, keep in mind that using an Insert method with minimal parameters will require a default password to be set in the method. In a production system, this would be a bad idea. See the attached source code for a better example of how to handle inserts. Specifically, see the page Membership.aspx for this functionality.
The Update Method
The Update method, in this case, is assigned to the member function Update(). Notice that this method is called with multiple parameters: UserName, Email, isApproved, and Comment (see Figure 6). In addition, there is another Update method that has all the updatable parameters. This is useful for creating a control that has the most possible update capabilities. Just like Insert, the appropriate Update method is chosen for this ObjectDataSource. When the wizard is finished, it will automatically create UpdateParameters, as shown below.
Figure 6. Assigning the Update method
<asp:ObjectDataSource ...>
    <UpdateParameters>
        <asp:Parameter Name="UserName" />
        <asp:Parameter Name="Email" />
        <asp:Parameter Name="isApproved" />
        <asp:Parameter Name="Comment" />
    </UpdateParameters>
    ...
</asp:ObjectDataSource>
The Delete Method
The Delete method, in this case, is assigned to the member function Delete(). There is, of course, only one Delete method necessary (see Figure 7). Below is the declaration of the ObjectDataSource that supports this Delete method.
Figure 7. Assigning the Delete method
<asp:ObjectDataSource ...>
    <DeleteParameters>
        <asp:Parameter Name="UserName" />
    </DeleteParameters>
    ...
</asp:ObjectDataSource>
The Class (RoleDataObject)
Just like Membership, Roles are set up with their own DataObject. Since there is nothing special about Roles, there are no details regarding their setup in this article. An understanding of how the Membership DataObjects are set up is transferable to how Roles are set up. In Membership, the Microsoft C# object that encapsulates the Membership API is MembershipDataObject.cs. The analogous class for encapsulating the Role API is RoleDataObject.cs.
ObjectDataSource In GridView (Data Control)
Class declarations for Membership Users and Roles have been established in the previous sections of this article. Also, a complete ObjectDataSource object has been placed on an ASP.NET page. The final step is to create the user interface, also known as the user-facing tier of the application or the presentation layer. Because so much of the work is done by the objects created, all that is necessary is to create a simple GridView and associate it with the ObjectDataSource. The steps are as follows:
- In visual mode of the ASP.NET page designer, drag and drop the GridView data component onto the page associated with the ObjectDataSource created earlier.
- Enable selecting, deleting, updating, inserting, and sorting.
Figure 8 shows the dialog box associated with configuring the GridView.
Figure 8. Configuring GridView
A special mention should be made here that DataKeyNames in the GridView control shown below is automatically set. This is because the primary key has been tagged in the MembershipUserSortable class with the attribute [DataObjectField(true)], as shown below. Notice also that since UserName is a property of the MembershipUser class, it was necessary to provide a default property in the class extending MembershipUser. Since this is a read-only property, only a Get method is declared. (UserName is public virtual on MembershipUser.)
[DataObjectField(true)]
public override string UserName
{
    get { return base.UserName; }
}
There is one attribute in the GridView that must be set by hand: the primary key must be set in the control. To do this, associate the attribute DataKeyNames with UserName. The GridView declaration is shown below.
<asp:GridView ...>
    <Columns>
        ...
    </Columns>
</asp:GridView>
Conclusion
To wrap things up, you should now be familiar with how to build your own three-tier architected ASP.NET application. In addition, you now have two objects that you can freely use that encapsulate Members and Roles. You could now, for example, use the DetailView control, and in only a few minutes build a complete DetailView interface to Members that performs Navigation, Inserting, Updating, and Deleting of Members. Give it a try!
I have specifically not gone into the implementations of adding, updating, and deleting Members or Roles. If you look at the source code, you will find that I have used the APIs in a very straightforward way. Not much will be gained by describing those calls in much detail here, because I’m sure that if you are still reading this, you, like me, are probably learning this material as you go.
I was fortunate enough to be at MS TechEd in Orlando and PDC in LA this year, and was able to ask many questions of the ASP.NET team. In particular, I would like to thank Brad Millington and Stefan Schackow for putting up with my many questions during those weeks, and Jeff King and Brian Goldfarb for all their help in making this a better article. In some way, this article is payback, so that hopefully they won't have to answer as many questions in the future.
Thanks, Peter. Saved me lots of time. The only issue I had is that I allowed user names with spaces, and the role management fails badly in this case because you’re splitting the button text on spaces.
I modified Membership.aspx.cs to use single-smart-quote characters not generally included in role or user names:
// Modified in ShowInRoleStatus
result = "Unassign ‘" + userName + "’ From Role ‘" + roleName + "’";
result = "Assign ‘" + userName + "’ To Role ‘" + roleName + "’";
// Modified in ToggleInRole_Click
char[] seps = new char[] { '‘', '’' };
string[] buttonTextArray = buttonText.Split(seps);
string roleName = buttonTextArray[3];
Excellent Work.. Tons of time saved. Thanks for sharing
Like someone above said, it is difficult to implement this kind of system, especially if the work is done by a beginner (like I am).
Without any knowledge, even a copy-paste tutorial would seem difficult.
Thanks Pete, this helped a lot with a project I’m working on!
Thanks Peter! It’s already the 5.5th year now and your article still saves OUR vital hours all around the world!!!
Great stuff. Any ideas where I can get a guide on how to utilize the WebAdmin pages within an asp.net app outside of the dev environment (for example, adding them to the app and then establishing the necessary links afterwards)?
thanks, troy. !
Have been postponing this for a long time but always have had a need in all my applications.
Finally a good, easy solution in which you've done all the heavy lifting.
Works perfectly in VS2008 with SQL2005.
Thanks
Works perfect! Have been looking for this for many weeks! Regards Hans
Classic ASP was so much simpler to manage in IIS6. !
Fantastic!
Thanks a bunch for the write-up!
This information and facts actually helped me, I am sharing with a couple of friends. I will probably be checking back regularly to look for updates.
You need to update more you do a good job
Thank you so much for this code! It was unbelievably easy to implement and would have taken weeks for me to get this working on my remote site on my own!
Much Thanks!
In my page the ObjectDataSource returns a DataSet. How do I do sorting in that method, which doesn't return a Collection?
great piece of code, saves a load of ground work.
was wondering if there was a quick way to filter the output of the grids using a related table to the users.
eg i have a profile table related by userid, i would like to filter the users by country which is a field on the profile table.
How is this easily plugged in?
Tx
Thanks for sharing this important article…
Thanks,
about – ObjectDataSourceRoleObject
I got the same problem but solved it (after 3 ___ days!)
I solved it when I added the designer code using Convert to Web Application, but for each individual file – not the whole project
I love the example you’ve created, but I’ve run into an issue and I’m not sure of the cause.
I cannot delete Roles. When I try, I get the following error:
ObjectDataSource ‘ObjectDataSourceRoleObject’ could not find a non-generic method ‘Delete’ that has parameters: RoleName, original_RoleName.
In RoleDataObject.cs, if I changed the parameter for the Delete method from string roleName to string original_roleName, then it works correctly.
Anyone have any ideas?
I’ve used this before in a Web Site project model site before with no trouble but I’m trying to convert it to a Web Application project model and I just can’t seem to get it to work.
“The type or namespace name ‘ProfileCommon’ could not be found (are you missing a using directive or an assembly reference?)”
I tried nicki’s advice above with no success. Is the ProfileCommon class generated from the App_Code files and not available to the Web Application?
Ok, I got the answer from another website: right-click and click Convert to Web application.
I have an issue getting the samples to compile, it seems the samples are missing the .designer.cs files, as the objects defined on the .aspx page are not defined as properties in the .cs file. VS2005 is complaining at compile time.
Error 1 The name ‘ObjectDataSourceRoleObject’ does not exist in the current context C:\Projects\Procurement.Site\Procurement.Web\Admin\SecurityAdmin\Membership.aspx.cs 66 4 Procurement.Web
Am I missing something?
when i edit email addresses it changes the loweredemail field and not the email field in the db???????
You rock. I spent all day trying to figure out why I couldn’t use the Membership class directly. Not only does this work, but I learned a thing or two. Thanks.
thanks a lot
mustafa
This is great. Thanks for the code. I spent a few days looking for a solution and it works perfectly. The downside is that I am not good in C# and had to redo my project from VB to C#. Having said this, it would be good if there were an example in VB code. Thanks again
Hi, I am Martin here. Just picked up a few stones of Microsoft .NET 2.0.
I just explored the Membership APIs and realized that we couldn't sort the grid if we return the collection 'MembershipUserCollection' to the ObjectDataSource of the GridView.
For sorting purposes, instead of using custom sorting, why can't we use the default sorting of the GridView by returning a DataTable to the ObjectDataSource?
And we could still keep the custom paging of the ObjectDataSource.
The code will be like this.
public static DataTable GetAllMembers(int maximumRows, int startRowIndex)
{
//TODO: should try to avoid sessions reference.
//startRowIndex/PageSize – if we are taking from the gridview startRowIndex param of object data source.
//But here we are taking the gridview page index from the session to sort the current selected page and not to reset from the begining.
if (HttpContext.Current.Session["PageIndex"] == null || string.IsNullOrEmpty(HttpContext.Current.Session["PageIndex"].ToString()) || !(int.TryParse(HttpContext.Current.Session["PageIndex"].ToString(), out startRowIndex)))
{
// PageIndex session variable is not found/set.
startRowIndex = startRowIndex / int.Parse(GetConfigValue("GridViewPageSize"));
}
MembershipProvider mp = Membership.Provider;
MembershipUserCollection muCollection = mp.GetAllUsers(startRowIndex, maximumRows, out totalMembersCount);
//moving the data to data table. max records will be the GridView PageSize.
DataTable dt = new DataTable("Users");
dt.Columns.Add("UserName");
dt.Columns.Add("Email");
dt.Columns.Add("IsApproved");
dt.Columns.Add("IsLockedOut");
dt.Columns.Add("IsOnline");
foreach (MembershipUser mu in muCollection)
{
DataRow dr = dt.NewRow();
dr["UserName"] = mu.UserName;
dr["Email"] = mu.Email;
dr["IsApproved"] = mu.IsApproved;
dr["IsLockedOut"] = mu.IsLockedOut;
dr["IsOnline"] = mu.IsOnline;
dt.Rows.Add(dr);
}
dt.Rows.Add(dr);
}
return dt;
}
public static int GetTotalMembersCount(int maximumRows, int startRowIndex)
{
return totalMembersCount;
}
.aspx
This was good, but the role management classes are not working properly when we publish the website. We are using Oracle as the database and the provider classes were written by us. The classes work fine when we are using the default Visual Studio environment.
Thank you very much Peter!
This was incredibly simple to install and use!
Peter,
this is brilliant, thank you so much for providing this. Your follow up articles on MDSN are even better – The article about membership with profiles () is a god send, as is your ODS generator ()
Andrew
Lot of work saved! Thanks. But I have a question. I have a requiement where I need to get users given a starting letter when the user presses the letter hyperlink. Example I need to pass ‘a%’ to the database and get all userid starting with a. do you have any suggestions?
Thank you for your help.
Hi Peter,
I am new to ASP.NET but this looks like a fantastic piece of Code saves lot of headache…
But how does this run with Oracle…
Anyone here tried it…please mail me at chirag_97@yahoo.com
Hi Peter,
Thanks, for your great code!
Just what I was looking for – you saved me a lot of time…
Hi Peter,
I plugged your code into my project and it worked perfectly, thanks! I have two questions, not really related to Membership, more to the GridView component.
1. Instead of ‘edit’, ‘delete’ and ‘select’ (and ‘update’ and ‘cancel’) hyperlinks, I want to show my own images. Is that possible?
2. I want to be able to disable or hide the ‘delete’ / ‘update’ hyperlinks per user.
For example, my users database table is related to a couple of other tables. If the user is still used in one of those tables, I want to disable or hide the delete link. Similarly, I want to disable deletion of some fixed system user accounts that come with my application.
Can this be done, and how? I can’t really find an answer, I was hoping you could help me out.
Thanks,
Guido
Do you have a sample of this in VB? Because I don't know how to implement the App_Code for VB. It will be a big help if you can help. Anyway, thanks for the sample code
My understanding is that converting the webadmin pages to an asp.net app causes security concerns and is not a good idea (though I understand it is very doable).
Excellent article and code. They saved me a lot of time and google searching.
I’m surprised no one has written an article describing how to utilize the WebAdmin pages within an asp.net app outside of the dev environment (such as adding them to the app and establishing the necessary links).
Great code! You saved a lot of my time. I am very pleased.
04 April 2007 08:39 [Source: ICIS news]
SINGAPORE (ICIS news)--Formosa Petrochemical has targeted a 45% rise in ethylene production this year to reach 2.758m tonnes after posting higher 2006 olefins operating profit, a company official said on Wednesday.
Operating profit at its olefins segment rose 4% to New Taiwan dollar (NT$) 18.1bn ($546.5m) from a year ago on higher volume and prices.
The Taiwanese refining and chemicals major operated its crackers at more than 100% last year and ethylene production reached 1.902m tonnes.
It will start up its new 1.2m tonne/year cracker in Mailiao in the second quarter.
Other segments in the company did not perform well. Its refining operating profit fell 19% to NT$29.5bn even though revenue rose 19% to NT$424.7bn.
Operating profit at another segment, which covers other products such as butadiene, liquefied petroleum gas (LPG) and methyl tertiary butyl ether (MTBE), fell 46% to NT$200m.
At group level, its operating profit fell 13% to NT$53.4bn while revenue rose 19% to NT$529.5bn.
Capacity of the refinery will reach 540,000 bbl/day in the second quarter of 2008. The company will also complete its 10,000 bbl/day base oil project in Mailiao at the same time.
($1 = NT$33.12)
-Bsymbolic option or Sun Studio compiler's -xldscope=symbolic option, all symbols of a library can be made non-interposable (those symbols are called protected symbols, since no one else can interpose on them). If the targeted routine is interposable, the dynamic linker simply passes control to whatever symbol it encounters first that matches the function call (the callee). Now, with the preloaded library in force, the hacker gets control over the routine. At this point, it is up to the hacker whether to pass control to the actual routine that the client intended to call. If the intention is just to collect data and let go, the required data can be collected and control passed to the actual routine with the help of libdl routines. Note that control has to be passed explicitly to the actual routine; as far as the dynamic linker is concerned, it is done with its job once it passes control to the function (the interposer, in this case). If the idea is to completely change the behavior of the routine (it is easy to write a new routine with the new behavior, but the library and the clients have to be re-built to make use of it), the new implementation will be part of the interposing routine and control will never be passed to the actual routine. In worse cases, a malicious hacker can intercept data that is supposed to be confidential (e.g., passwords, account numbers, etc.) and do more harm at will.
fopen(). The idea is to collect the number of calls to fopen() and to find out the files being opened. Our interceptor simply prints a message on the console with the file name to be opened every time there is a call to fopen() from the application. Then it passes control to the fopen() routine of libc. For this, we first need the signature of fopen(), which is declared in stdio.h as follows:
FILE *fopen(const char *filename, const char *mode);
% cat interceptfopen.c
#include <stdio.h>
#include <dlfcn.h>
FILE *fopen(const char *filename, const char *mode) {
FILE *fd = NULL;
static void *(*actualfunction)();
if (!actualfunction) {
actualfunction = (void *(*)()) dlsym(RTLD_NEXT, "fopen");
}
printf("\nfopen() has been called. file name = %s, mode = %s \n"
       "Forwarding the control to fopen() of libc", filename, mode);
fd = actualfunction(filename, mode);
return(fd);
}
% cc -G -o libfopenhack.so interceptfopen.c
% ls -lh libfopenhack.so
-rwxrwxr-x 1 build engr 3.7K May 19 19:02 libfopenhack.so*
actualfunction is a function pointer to the actual fopen() routine, which is in libc. dlsym is part of libdl, and the RTLD_NEXT argument directs the dynamic linker (ld.so.1) to find the next reference to the specified function, using the normal dynamic linker search sequence.
% cat fopenclient.c
#include <stdio.h>
int main () {
FILE * pFile;
char string[30];
pFile = fopen ("myfile.txt", "w");
if (pFile != NULL) {
fputs ("Some Random String", pFile);
fclose (pFile);
}
pFile = fopen ("myfile.txt", "r");
if (pFile != NULL) {
fgets (string , 30 , pFile);
printf("\nstring = %s", string);
fclose (pFile);
} else {
perror("fgets(): ");
}
return 0;
}
% cc -o fopenclient fopenclient.c
% ./fopenclient
string = Some Random String
% setenv LD_PRELOAD ./libfopenhack.so
% ./fopenclient
fopen() has been called. file name = myfile.txt, mode = w
Forwarding the control to fopen() of libc
fopen() has been called. file name = myfile.txt, mode = r
Forwarding the control to fopen() of libc
string = Some Random String
% unsetenv LD_PRELOAD
fopen(), instead of the actual implementation in libc. The advantages of this technique are evident from this simple example, and it is up to the hacker to take advantage of, or abuse, the flexibility of symbol interposition.
The module and version in which the handler runs is determined by:
- The "Host" header parameter in the TaskOptions that you include in your call to the Queue.add() method.
- The target directive in the queue.xml or queue.yaml file.
The following example routes a task to version 1 of a module named backend1, using the target directive:
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.modules.ModulesServiceFactory;

// ...
queue.add(TaskOptions.Builder.withUrl("/path/to/my/worker")
        .param("key", key)
        .header("Host",
                ModulesServiceFactory.getModulesService()
                        .getInstanceHostname("module1", null, 1)));
Deferred tasks

// ...
Queue queue = QueueFactory.getDefaultQueue();
queue.add(TaskOptions.Builder.withUrl("/path/to/my/worker"));
queue.add(TaskOptions.Builder…

You can read about restricting URLs at Security and Authentication. An example you would use in web.xml to restrict everything starting with /tasks/ to admin-only is:
<security-constraint> <web-resource-collection> <web-resource-name>tasks</web-resource-name> <url-pattern>/tasks/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> </security-constraint>
For more on the format of web.xml, see the documentation on the deployment descriptor.
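Returning to the truncated deferred-task snippet earlier in this section: for deferred tasks the payload is typically an object implementing the SDK's DeferredTask interface, which in the real SDK is essentially a serializable Runnable. The shape-only sketch below re-declares an equivalent interface locally so it can run without the App Engine SDK; the class and names here are illustrative, not from the original docs.

```java
import java.io.Serializable;

public class DeferredSketch {
    // Stand-in for com.google.appengine.api.taskqueue.DeferredTask:
    // a serializable task the queue deserializes and runs in the worker.
    interface DeferredTaskLike extends Serializable, Runnable {}

    static class ExpireSessions implements DeferredTaskLike {
        private final String key;
        ExpireSessions(String key) { this.key = key; }

        static String describe(String key) {
            return "expiring sessions for " + key;
        }

        @Override
        public void run() {
            // Work the task-queue worker would perform after
            // deserializing the payload.
            System.out.println(describe(key));
        }
    }

    public static void main(String[] args) {
        // In a real app this object would be enqueued, e.g. with
        // TaskOptions.Builder.withPayload(new ExpireSessions("user42")).
        new ExpireSessions("user42").run();
    }
}
```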
Uri.EscapeComponent | escapeComponent method
Converts a Uniform Resource Identifier (URI) string to its escaped representation.
Syntax
Parameters
- toEscape
Type: String [JavaScript] | Platform::String [C++]
The string to convert.
Return value
Type: String [JavaScript] | Platform::String [C++]
The escaped representation of toEscape.
Remarks
Use EscapeComponent as a utility to escape any URI component that requires escaping in order to construct a valid Uri object. For example, if your app is using a user-provided string and adding it to a query that is sent to a service, you may need to escape that string in the URI because the string might contain characters that are invalid in a URI. This includes characters as simple as spaces; even input that seems to be pure ASCII may still need encoding to be valid as a component of a URI.
You can append the string you get from EscapeComponent onto other strings before calling the Uri(String) constructor. You'll want to encode each component separately, because you do not want to escape the characters that are significant to how the Uri(String) constructor parses the string into components, such as the "/" between host and path or the "?" between path and query.
EscapeComponent might also be useful for other scenarios where a URI-escaped string is needed for an HTTP request scenario, such as using APIs in the Windows.Web.Http namespace.
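The Remarks above amount to a rule: escape each user-supplied component with escapeComponent, then concatenate, rather than escaping the assembled URI. Outside WinRT, the same pattern applies with standard JavaScript's encodeURIComponent, which is used below only as an analogue; it is not the WinRT API itself.

```javascript
// Escape each user-supplied component separately, then compose the URI.
// Escaping the finished URI instead would also escape the separators
// ("?", "&", "/") that give the URI its structure.
function buildSearchUri(base, params) {
  const query = Object.keys(params)
    .map(k => encodeURIComponent(k) + "=" + encodeURIComponent(params[k]))
    .join("&");
  return base + "?" + query;
}

console.log(buildSearchUri("https://example.com/search", { q: "a b&c" }));
// → https://example.com/search?q=a%20b%26c
```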
Requirements (device family)
Requirements (operating system)
See also
namespace IssueVision.Common
{
public sealed class Notifications
{
public const string CleanupIssueEditorViewModel = "CleanupIssueEditorViewModel";
public const string CleanupNewIssueViewModel = "CleanupNewIssueViewModel";
public const string CleanupMyIssuesViewModel = "CleanupMyIssuesViewModel";
public const string CleanupAllIssuesViewModel = "CleanupAllIssuesViewModel";
public const string CleanupBugReportViewModel = "CleanupBugReportViewModel";
public const string CleanupMyProfileViewModel = "CleanupMyProfileViewModel";
public const string CleanupUserMaintenanceViewModel = "CleanupUserMaintenanceViewModel";
}
}
Tip-Convert array to Comma delimited string
I needed to convert a string array into a comma-delimited string, and the old way to do that is with a for loop. But I was sure there should be some ready-made function that does this very easily.
After doing some research I found string.Join. With its help we can easily convert an array into a comma-delimited string.
String.Join takes two arguments: one is the separator and the other is either an array or an enumerable.
Following is the code for that.
namespace ConsoleApplication3
{
    class Program
    {
        static void Main()
        {
            string[] name = { "Jalpesh", "Vishal", "Tushar", "Gaurang" };
            string commaDelimitedName = string.Join(",", name);
            System.Console.WriteLine(commaDelimitedName);
        }
    }
}
Let’s run that example and here is the output as expected.
That's it. Hope you like it. Stay tuned for more.
Original Blog: http://weblogs.asp.net/jalpeshpvadgama/tip-convert-array-to-comma-delimited-string
01 October 2012 04:05 [Source: ICIS news]
The region may be the first in
Local major refiners are producing 8,000 tonnes/day of bitumen while minor refiners are producing around 2,000 tonnes/day at present, ICIS data showed.
On average, local refiners can sell around 6,000 tonnes/day of bitumen, with approximately half delivered by trucks and the rest via rail and ships. Thus, there are 4,000 tonnes of excess output each day, said a trader based in
The insufficient carrying capacity of railways and ships are also impacting sales, the trader added.
The sales of major refiners were further curbed as products from minor refiners have presented a price advantage since late September. The mainstream traded prices of product from minor refiners were at CNY4,800-4,850/tonne ($763-771/tonne) on 28 September, CNY150-200/tonne lower than major refiners’ prices, market sources said.
The prices of domestic bitumen are at CNY4,975/tonne currently, up by CNY275/tonne on the back of increasing demand and stronger international crude prices from late July, ICIS data showed.
One of the more subtle aspects of converting (n)varchar or (n)text data to XML is the fact that XML is choosy about which characters are permitted and (n)/varchar/(n)text is not. Any T-SQL programmer who runs conversions of this type is likely to run into this issue. Here's a code block that resolves the issue.
The characters in question are what are commonly called "lower-order ASCII" characters, those below CHAR(32). Of these, only TAB (CHAR(9)), LF (CHAR(10)), and CR (CHAR(13)) are valid within XML. This solution uses trigger code to call a user-defined function to scrub the nvarchar columns, and a loop within the trigger for an ntext column.
Here's the UDF code:
CREATE
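Only the leading CREATE of the UDF survived extraction. Based on the strategy described in the paragraphs below (replace every character below CHAR(32), other than TAB, LF, and CR, with NCHAR(164)), a hypothetical sketch of such a function follows; the name and signature are invented, not the author's original.

```sql
-- Hypothetical reconstruction; the original function did not survive.
CREATE FUNCTION dbo.ScrubInvalidXmlChars (@input nvarchar(4000))
RETURNS nvarchar(4000)
AS
BEGIN
    DECLARE @i int
    SET @i = 1
    WHILE @i <= LEN(@input + N'x') - 1   -- LEN ignores trailing spaces; guard
    BEGIN
        -- Replace any character below CHAR(32) except TAB(9), LF(10), CR(13)
        IF UNICODE(SUBSTRING(@input, @i, 1)) < 32
           AND UNICODE(SUBSTRING(@input, @i, 1)) NOT IN (9, 10, 13)
            SET @input = STUFF(@input, @i, 1, NCHAR(164))
        SET @i = @i + 1
    END
    RETURN @input
END
```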
Here's the trigger code:
The trigger code is built to maximize performance in that the NULLIF tests in the UPDATE statement will only run the (relatively expensive) UDF if the inserted and deleted images of a particular column differ (if they don't differ, we can guarantee that the value has already been scrubbed). The UDF and the loop in the trigger for the ntext SupplementDescription column employ the same basic strategy of looping through the source value looking for any invalid character and replacing it with a new character (NCHAR(164)) until the last invalid character is found.
This code was developed for a SQL Server 2000 environment. It would function in a SQL Server 2005 environment, but better performance would likely be had with a CLR-based UDF.
Note also that if you're interested in translating the reserved characters <>& etc., you can do that with a series of nested REPLACE statements.
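A sketch of that nested-REPLACE approach (hypothetical variable @s; the ampersand must be translated first, so the entities produced for < and > are not re-escaped):

```sql
SELECT REPLACE(REPLACE(REPLACE(@s, N'&', N'&amp;'),
                       N'<', N'&lt;'),
               N'>', N'&gt;')
```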
Thanks! Worked for me when receiving char #x0001.
Instead, you can widen the set of valid characters by adding this to the header of your XML document:
<?xml version="1.0" encoding="ISO-8859-1"?>
You can also remove the following characters with this Python script:
def clean():
    file = open('C:/where your file is located', 'r')
    myfile = file.readlines()
    file.close()
    file = open('C:/where you want to save your file/location', 'w')
    for r in myfile:
        r = r.replace("\r\n", "")
        r = r.replace("\r", "")
        r = r.replace("\n", "")
        r = r.replace("\\r\\n", "")
        r = r.replace("\\r", "")
        r = r.replace("\\n", "")
        r = r.replace("\u0085", "")
        r = r.replace("\u000A", "")
        r = r.replace("\u000B", "")
        r = r.replace("\u000C", "")
        r = r.replace("\u000D", "")
        r = r.replace("\u2028", "")
        r = r.replace("\u2029", "")
        r = r.replace("\\\"", "\\\\\\\"")
        file.write(r)
    file.close()

(Python 3.2)
Allows for a wider range of valid characters.
Introduction to AIO
Linux asynchronous I/O is a relatively recent addition to the Linux kernel. It's a standard feature of the 2.6 kernel, but you can find patches for 2.4.
I/O models
Before digging into the AIO API, let's explore the different I/O models that are available under Linux. This isn't intended as an exhaustive review, but rather aims to cover the most common models to illustrate their differences from asynchronous I/O. Figure 1 shows synchronous and asynchronous models, as well as blocking and non-blocking models.
Figure 1. Simplified matrix of basic Linux I/O models
Each of these I/O models has usage patterns that are advantageous for particular applications. This section briefly explores each one.
Synchronous blocking I/O
One of the most common models is the synchronous blocking I/O model. In this model, the user-space application performs a system call that results in the application blocking. This means that the application blocks until the system call is complete (data transferred or error). The calling application is in a state where it consumes no CPU and simply awaits the response, so it is efficient from a processing perspective.
Figure 2 illustrates the traditional blocking I/O model, which is also the
most common model used in applications today. Its behaviors are well
understood, and its usage is efficient for typical applications. When the
read system call is invoked, the application blocks
and the context switches to the kernel. The read is then initiated, and when
the response returns (from the device from which you're reading), the data is
moved to the user-space buffer. Then the application is unblocked (and the
read call returns).
Figure 2. Typical flow of the synchronous blocking I/O model
From the application's perspective, the
read call
spans a long duration. But, in fact, the application is actually blocked while
the read is multiplexed with other work in the kernel.
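To make the model concrete, here is a minimal sketch of a blocking read (the helper name and error handling are illustrative, not part of any standard API):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Read up to len bytes from path into buf, blocking until the kernel
 * has transferred the data. Returns the byte count, or -1 on error. */
ssize_t blocking_read( const char *path, char *buf, size_t len )
{
    int fd = open( path, O_RDONLY );
    if (fd < 0) return -1;

    /* The calling process sleeps here until the data arrives. */
    ssize_t n = read( fd, buf, len );

    close( fd );
    return n;
}
```

While the read is outstanding, the process consumes no CPU; the cost is that nothing else happens in this thread of execution.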
Synchronous non-blocking I/O
A less efficient variant of synchronous blocking is synchronous non-blocking
I/O. In this model, a device is opened as non-blocking. This means that instead of completing an I/O immediately, a
read may return an error code indicating that the command could not be immediately
satisfied (
EAGAIN or
EWOULDBLOCK), as shown in Figure 3.
Figure 3. Typical flow of the synchronous non-blocking I/O model
The implication of non-blocking is that an I/O command may not be satisfied
immediately, requiring that the application make numerous calls to await
completion. This can be extremely inefficient because in many cases the application
must busy-wait until the data is available or attempt to do other work while
the command is performed in the kernel. As also shown in Figure 3,
this method can introduce latency in the I/O because any gap between the data
becoming available in the kernel and the user calling
read to return it can reduce the overall data
throughput.
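The busy-wait described above can be sketched as follows (a simplified example, assuming the descriptor was opened with O_NONBLOCK; the retry bound is my own addition):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Attempt a non-blocking read, retrying while the kernel reports
 * EAGAIN/EWOULDBLOCK. max_tries bounds the busy-wait. */
ssize_t nonblocking_read( int fd, char *buf, size_t len, int max_tries )
{
    for (int i = 0; i < max_tries; i++) {
        ssize_t n = read( fd, buf, len );
        if (n >= 0)
            return n;   /* data arrived (or end-of-file) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;  /* a real error */
        /* Nothing ready yet; a real application would do other
         * work here instead of simply retrying. */
    }
    errno = EAGAIN;
    return -1;
}
```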
Asynchronous blocking I/O
Another blocking paradigm is non-blocking I/O with blocking notifications.
In this model, non-blocking I/O is configured, and then the blocking
select system call is used to determine when there's
any activity for an I/O descriptor. What makes the
select call interesting is that it can be used to
provide notification for not just one descriptor, but many. For each
descriptor, you can request notification of the descriptor's ability to write
data, availability of read data, and also whether an error has occurred.
Figure 4. Typical flow of the asynchronous blocking I/O model (select)
The primary issue with the
select call is that it's not
very efficient. While it's a convenient model for asynchronous
notification, its use for high-performance I/O is not advised.
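For reference, here is a small sketch of this model; the wrapper function is hypothetical, but the select usage follows the pattern in Figure 4:

```c
#include <sys/select.h>
#include <unistd.h>

/* Block until fd is readable or timeout_secs elapse.
 * Returns 1 if readable, 0 on timeout, -1 on error. */
int wait_readable( int fd, int timeout_secs )
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = timeout_secs, .tv_usec = 0 };

    FD_ZERO( &rfds );
    FD_SET( fd, &rfds );

    /* select sleeps until a watched descriptor has activity; with
     * many descriptors, the first argument is the highest fd + 1. */
    int ret = select( fd + 1, &rfds, NULL, NULL, &tv );
    if (ret < 0) return -1;
    return FD_ISSET( fd, &rfds ) ? 1 : 0;
}
```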
Asynchronous non-blocking I/O (AIO)
Finally, the asynchronous non-blocking I/O model is one of overlapping
processing with I/O. The read request returns immediately, indicating that the
read was successfully initiated. The application
can then perform other processing while the background read operation completes.
When the
read response arrives, a signal or a thread-based callback can be generated to complete the I/O transaction.
Figure 5. Typical flow of the asynchronous non-blocking I/O model
The ability to overlap computation and I/O processing in a single process for potentially multiple I/O requests exploits the gap between processing speed and I/O speed. While one or more slow I/O requests are pending, the CPU can perform other tasks or, more commonly, operate on already completed I/Os while other I/Os are initiated.
The next section examines this model further, explores the API, and then demonstrates a number of the commands.
Motivation for asynchronous I/O
From the previous taxonomy of I/O models, you can see the motivation for AIO. The blocking models require the initiating application to block when the I/O has started. This means that it isn't possible to overlap processing and I/O at the same time. The synchronous non-blocking model allows overlap of processing and I/O, but it requires that the application check the status of the I/O on a recurring basis. This leaves asynchronous non-blocking I/O, which permits overlap of processing and I/O, including notification of I/O completion.
The functionality provided by the
select function
(asynchronous blocking I/O) is similar to AIO, except that it still
blocks. However, it blocks on notifications instead of the I/O call.
Introduction to AIO for Linux
This section explores the asynchronous I/O model for Linux to help you understand how to apply it in your applications.
In a traditional I/O model, there is an I/O channel that is identified by a unique handle. In UNIX®, these are file descriptors (which are the same for files, pipes, sockets, and so on). In blocking I/O, you initiate a transfer and the system call returns when it's complete or an error has occurred.
In asynchronous non-blocking I/O, you have the ability to initiate multiple
transfers at the same time. This requires a unique context for each transfer so you can identify it when it completes. In
AIO, this is an
aiocb (AIO I/O Control Block)
structure. This structure contains all of the information about a transfer,
including a user buffer for data. When notification for an I/O occurs (called
a completion), the
aiocb structure is provided to
uniquely identify the completed I/O. The API
demonstration shows how to do this.
AIO API
The AIO interface API is quite simple, but it provides the necessary functions for data transfer with a couple of different notification models. Table 1 shows the AIO interface functions, which are further explained later in this section.
Table 1. AIO interface APIs
Each of these API functions uses the
aiocb
structure for initiating or checking. This structure has a number of
elements, but Listing 1 shows only the ones that you'll need to
(or can) use.
Listing 1. The aiocb structure showing the relevant fields
struct aiocb {

  int aio_fildes;               // File Descriptor
  int aio_lio_opcode;           // Valid only for lio_listio (r/w/nop)
  volatile void *aio_buf;       // Data Buffer
  size_t aio_nbytes;            // Number of Bytes in Data Buffer
  struct sigevent aio_sigevent; // Notification Structure

  /* Internal fields */
  ...

};
The
sigevent structure tells AIO what to do when
the I/O completes. You'll explore this structure in the AIO demonstration.
Now I'll show you how the individual API functions for AIO work
and how you can use them.
aio_read
The
aio_read function requests an asynchronous
read operation for a valid file descriptor. The file descriptor can represent
a file, a socket, or even a pipe. The
aio_read
function has the following prototype:
int aio_read( struct aiocb *aiocbp );
The
aio_read function returns immediately after
the request has been queued. The return value is zero on success or -1
on error, where
errno is defined.
To perform a read, the application must initialize the
aiocb structure. The following short example
illustrates filling in the
aiocb request structure and using
aio_read to perform an asynchronous read request (ignore notification for now).
It also shows use of the
aio_error function, but
I'll explain that later.
Listing 2. Sample code for an asynchronous read with aio_read
#include <aio.h>

...

  int fd, ret;
  struct aiocb my_aiocb;

  fd = open( "file.txt", O_RDONLY );
  if (fd < 0) perror("open");

  /* Zero out the aiocb structure (recommended) */
  bzero( (char *)&my_aiocb, sizeof(struct aiocb) );

  /* Allocate a data buffer for the aiocb request */
  my_aiocb.aio_buf = malloc(BUFSIZE+1);
  if (!my_aiocb.aio_buf) perror("malloc");

  /* Initialize the necessary fields in the aiocb */
  my_aiocb.aio_fildes = fd;
  my_aiocb.aio_nbytes = BUFSIZE;
  my_aiocb.aio_offset = 0;

  ret = aio_read( &my_aiocb );
  if (ret < 0) perror("aio_read");

  while ( aio_error( &my_aiocb ) == EINPROGRESS ) ;

  if ((ret = aio_return( &my_aiocb )) > 0) {
    /* got ret bytes on the read */
  } else {
    /* read failed, consult errno */
  }
In Listing 2, after the file from which you're reading data is opened, you zero
out your
aiocb structure, and then allocate a data
buffer. The reference to the data buffer is placed into
aio_buf. Subsequently, you initialize the size
of the buffer into
aio_nbytes. The
aio_offset is set to zero (the first offset in
the file). You set the file descriptor from which you're reading into
aio_fildes. After these fields are set, you call
aio_read to request the read. You can then make a
call to
aio_error to determine the status of the
aio_read. As long as the status is
EINPROGRESS, you busy-wait until the status
changes. At this point, your request has either succeeded or failed.
Note the similarities to reading from the file with the standard library functions. In addition to the
asynchronous nature of
aio_read, another
difference is setting the offset for the read. In a typical
read call, the offset is maintained for you in the
file descriptor context. For each read, the offset is updated so that
subsequent reads address the next block of data. This isn't possible with
asynchronous I/O because you can perform many read requests simultaneously, so
you must specify the offset for each particular read request.
aio_error
The
aio_error function is used to determine the
status of a request. Its prototype is:
int aio_error( struct aiocb *aiocbp );
This function can return the following:

- 0, indicating that the request completed successfully
- EINPROGRESS, indicating that the request has not yet completed
- ECANCELED, indicating that the request was cancelled by the application
- -1, indicating that an error occurred, for which you can consult errno
aio_return
Another difference between asynchronous I/O and standard blocking I/O is that you don't
have immediate access to the return status of your function because you're not blocking on the
read call. In a standard
read call, the return status is provided upon
return of the function. With asynchronous I/O, you use the
aio_return function. This function has the following
prototype:
ssize_t aio_return( struct aiocb *aiocbp );
This function is called only after the
aio_error
call has determined that your request has completed (either successfully or
in error). The return value of
aio_return is
identical to that of the
read or
write system call in a synchronous context (number
of bytes transferred or
-1 for error).
aio_write
The
aio_write function is used to request an
asynchronous write. Its function prototype is:
int aio_write( struct aiocb *aiocbp );
The
aio_write function returns immediately,
indicating that the request has been enqueued (with a return of
0 on success
and
-1 on failure, with
errno properly set).
This is similar to the
read system call, but
one behavior difference is worth noting. Recall that the offset to be used
is important with the
read call. However, with
write, the offset is important only if used in a
file context where the
O_APPEND option is not set.
If
O_APPEND is set, then the offset is ignored
and the data is appended to the end of the file. Otherwise, the
aio_offset field
determines the offset at which the data is written to the file.
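The article provides no listing for aio_write, so here is a minimal sketch that mirrors Listing 2 (the helper name and the busy-wait on aio_error are illustrative; a real application would overlap other work or use the notification mechanisms described later):

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <strings.h>
#include <unistd.h>

/* Asynchronously write len bytes to fd at offset 0, then wait for
 * completion. Returns the number of bytes written, or -1 on error. */
ssize_t async_write( int fd, const char *data, size_t len )
{
    struct aiocb cb;

    /* Zero out the aiocb structure (recommended) */
    bzero( (char *)&cb, sizeof(struct aiocb) );
    cb.aio_fildes = fd;
    cb.aio_buf    = (volatile void *)data;
    cb.aio_nbytes = len;
    cb.aio_offset = 0;   /* ignored if fd was opened with O_APPEND */

    if (aio_write( &cb ) < 0)
        return -1;

    /* Busy-wait for completion, as in Listing 2 */
    while (aio_error( &cb ) == EINPROGRESS)
        ;

    return aio_return( &cb );
}
```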
aio_suspend
You can use the
aio_suspend function to suspend (or block)
the calling process until an asynchronous I/O request has completed, a signal
is raised, or an optional timeout occurs. The caller provides a list of
aiocb references for which the completion of at
least one will cause
aio_suspend to return. The
function prototype for
aio_suspend is:
int aio_suspend( const struct aiocb *const cblist[], int n, const struct timespec *timeout );
Using
aio_suspend is quite simple. A list of
aiocb references is provided. If any of
them complete, the call returns with
0. Otherwise,
-1 is returned, indicating
an error occurred. See Listing 3.
Listing 3. Using the aio_suspend function to block on asynchronous I/Os
struct aiocb *cblist[MAX_LIST];

/* Clear the list. */
bzero( (char *)cblist, sizeof(cblist) );

/* Load one or more references into the list */
cblist[0] = &my_aiocb;

ret = aio_read( &my_aiocb );

ret = aio_suspend( cblist, MAX_LIST, NULL );
Note that the second argument of
aio_suspend is
the number of elements in
cblist, not the
number of
aiocb references. Any
NULL element in the
cblist is ignored by
aio_suspend.
If a timeout is provided to aio_suspend and the timeout occurs, then -1 is returned and errno contains EAGAIN.
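Putting the timeout to use looks like this (the wrapper is a sketch of mine, not part of the AIO API; the extra headers support exercising it):

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <strings.h>
#include <time.h>
#include <unistd.h>

/* Wait up to ms milliseconds for the request in cb to complete.
 * Returns 0 if a request completed, -1 on timeout or error
 * (on timeout, errno contains EAGAIN). */
int wait_for_aio( struct aiocb *cb, long ms )
{
    const struct aiocb *list[1] = { cb };
    struct timespec ts;

    ts.tv_sec  = ms / 1000;
    ts.tv_nsec = (ms % 1000) * 1000000L;

    /* Returns immediately if the request has already completed */
    return aio_suspend( list, 1, &ts );
}
```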
aio_cancel
The
aio_cancel function allows you to cancel one
or all outstanding I/O requests for a given file descriptor. Its prototype
is:
int aio_cancel( int fd, struct aiocb *aiocbp );
To cancel a single request, provide the file descriptor and the
aiocb
reference. If the request is successfully cancelled, the function returns
AIO_CANCELED. If the request completes, the function returns
AIO_NOTCANCELED.
To cancel all requests for a given file descriptor, provide that file
descriptor and a
NULL reference for
aiocbp. The
function returns AIO_CANCELED if all requests were canceled, AIO_NOTCANCELED if at least one request could not be canceled, and AIO_ALLDONE if all of the requests had already completed. You can then evaluate each individual AIO request using
aio_error. If the request was canceled,
aio_error returns
-1, and
errno is set to
ECANCELED.
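As a sketch, a cancellation helper might translate those return codes like this (the helper and its return convention are my own, not part of the API; the extra headers support exercising it):

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <strings.h>
#include <unistd.h>

/* Try to cancel an in-flight request. Returns 0 if it was cancelled,
 * 1 if it had already completed, and -1 if it could not be cancelled. */
int cancel_request( int fd, struct aiocb *cb )
{
    switch (aio_cancel( fd, cb )) {
    case AIO_CANCELED:
        /* aio_error on this request will now report ECANCELED */
        return 0;
    case AIO_ALLDONE:
        return 1;
    case AIO_NOTCANCELED:
    default:
        return -1;
    }
}
```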
lio_listio
Finally, AIO provides a way to initiate multiple transfers at the same time
using the
lio_listio API function. This function
is important because it means you can start lots of I/Os in the context of a
single system call (meaning one kernel context switch). This is great from a
performance perspective, so it's worth exploring. The
lio_listio API function has the following prototype:
int lio_listio( int mode, struct aiocb *list[], int nent, struct sigevent *sig );
The
mode argument can be
LIO_WAIT or
LIO_NOWAIT.
LIO_WAIT blocks the call until all I/O has completed.
LIO_NOWAIT returns after
the operations have been queued. The
list is a
list of
aiocb references, with the maximum number of
elements defined by
nent. Note that elements of
list may be
NULL, which
lio_listio ignores. The
sigevent reference defines the method for signal
notification when all I/O is complete.
The request for
lio_listio is slightly
different than the typical
read or
write request in that the operation must be
specified. This is illustrated in Listing 4.
Listing 4. Using the lio_listio function to initiate a list of requests
struct aiocb aiocb1, aiocb2;
struct aiocb *list[MAX_LIST];

...

/* Prepare the first aiocb */
aiocb1.aio_fildes = fd;
aiocb1.aio_buf = malloc( BUFSIZE+1 );
aiocb1.aio_nbytes = BUFSIZE;
aiocb1.aio_offset = next_offset;
aiocb1.aio_lio_opcode = LIO_READ;

...

bzero( (char *)list, sizeof(list) );

list[0] = &aiocb1;
list[1] = &aiocb2;

ret = lio_listio( LIO_WAIT, list, MAX_LIST, NULL );
The read operation is noted in the
aio_lio_opcode
field with
LIO_READ. For a write operation,
LIO_WRITE is used, but
LIO_NOP is also valid for no operation.
AIO notifications
Now that you've seen the AIO functions that are available, this section digs into the methods that you can use for asynchronous notification. I'll explore asynchronous notification through signals and function callbacks.
Asynchronous notification with signals
The use of signals for interprocess communication (IPC) is a traditional mechanism in UNIX and is
also supported by AIO. In this paradigm, the application defines a signal
handler that is invoked when a specified signal occurs. The application then
specifies that an asynchronous request will raise a signal when the request
has completed. As part of the signal context, the particular
aiocb request is provided to keep track of
multiple potentially outstanding requests. Listing 5
demonstrates this notification method.
Listing 5. Using signals as notification for AIO requests
void setup_io( ... )
{
  int fd;
  struct sigaction sig_act;
  struct aiocb my_aiocb;

  ...

  /* Set up the signal handler */
  sigemptyset(&sig_act.sa_mask);
  sig_act.sa_flags = SA_SIGINFO;
  sig_act.sa_sigaction = aio_completion_handler;  /* the Signal Handler */

  my_aiocb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
  my_aiocb.aio_sigevent.sigev_signo = SIGIO;
  my_aiocb.aio_sigevent.sigev_value.sival_ptr = &my_aiocb;

  /* Map the Signal to the Signal Handler */
  ret = sigaction( SIGIO, &sig_act, NULL );

  ...

  ret = aio_read( &my_aiocb );
}


void aio_completion_handler( int signo, siginfo_t *info, void *context )
{
  struct aiocb *req;

  /* Ensure it's our signal */
  if (info->si_signo == SIGIO) {

    req = (struct aiocb *)info->si_value.sival_ptr;

    /* Did the request complete? */
    if (aio_error( req ) == 0) {

      /* Request completed successfully, get the return status */
      ret = aio_return( req );
    }
  }

  return;
}
In Listing 5, you set up your signal handler to catch the
SIGIO signal
in the
aio_completion_handler function. You then initialize the
aio_sigevent structure to raise
SIGIO for notification (which is specified via the
SIGEV_SIGNAL definition in
sigev_notify). When your read completes, your signal
handler extracts the particular
aiocb from the
signal's
si_value structure and checks the error
status and return status to determine I/O completion.
For performance, the completion handler is an ideal spot to continue the I/O by requesting the next asynchronous transfer. In this way, when one transfer has completed, you immediately start the next.
Asynchronous notification with callbacks
An alternative notification mechanism is the system callback. Instead of
raising a signal for notification, this mechanism calls a function in
user-space for notification. You initialize the
aiocb reference into the
sigevent structure to uniquely identify the
particular request being completed; see Listing 6.
Listing 6. Using thread callback notification for AIO requests
void setup_io( ... )
{
  int fd;
  struct aiocb my_aiocb;

  ...

  /* Set up the thread callback */
  my_aiocb.aio_sigevent.sigev_notify = SIGEV_THREAD;
  my_aiocb.aio_sigevent.sigev_notify_function = aio_completion_handler;
  my_aiocb.aio_sigevent.sigev_notify_attributes = NULL;
  my_aiocb.aio_sigevent.sigev_value.sival_ptr = &my_aiocb;

  ...

  ret = aio_read( &my_aiocb );
}


void aio_completion_handler( sigval_t sigval )
{
  struct aiocb *req;

  req = (struct aiocb *)sigval.sival_ptr;

  /* Did the request complete? */
  if (aio_error( req ) == 0) {

    /* Request completed successfully, get the return status */
    ret = aio_return( req );
  }

  return;
}
In Listing 6, after creating your
aiocb request, you request a thread callback
using
SIGEV_THREAD for the notification method.
You then specify the particular notification handler and load the context
to be passed to the handler (in this case, a reference to the
aiocb request itself). In the handler, you simply
cast the incoming
sigval pointer and use the AIO
functions to validate the completion of the request.
System tuning for AIO
The proc file system contains two virtual files that can be tuned for asynchronous I/O performance:
- The /proc/sys/fs/aio-nr file provides the current number of system-wide asynchronous I/O requests.
- The /proc/sys/fs/aio-max-nr file is the maximum number of allowable concurrent requests. The maximum is commonly 64KB, which is adequate for most applications.
Summary
Using asynchronous I/O can help you build faster and more efficient I/O applications. If your application can overlap processing and I/O, then AIO can help you build an application that more efficiently uses the CPU resources available to you. While this I/O model differs from the traditional blocking patterns found in most Linux applications, the asynchronous notification model is conceptually simple and can simplify your design.
Resources
Learn
- The POSIX.1b implementation explains the internal details of AIO from the GNU Library perspective.
- Realtime Support in Linux explains more about AIO and a number of real-time extensions, from scheduling and POSIX I/O to POSIX threads and high resolution timers (HRT).
- In the Design Notes for the 2.5 integration, learn about the design and implementation of AIO in Linux.
java.lang.Object
  org.netlib.lapack.SPOTRI
public class SPOTRI
SPOTRI is a simplified interface to the JLAPACK routine spotri.

*  Purpose
*  =======
*
*  SPOTRI computes the inverse of a real symmetric positive definite
*  matrix A using the Cholesky factorization A = U**T*U or A = L*L**T
*  computed by SPOTRF.
*
*  Arguments
*  =========
*
*  UPLO    (input) CHARACTER*1
*          = 'U':  Upper triangle of A is stored;
*          = 'L':  Lower triangle of A is stored.
*
*  N       (input) INTEGER
*          The order of the matrix A.  N >= 0.
*
*  A       (input/output) REAL array, dimension (LDA,N)
*          On entry, the triangular factor U or L from the Cholesky
*          factorization A = U**T*U or A = L*L**T, as computed by
*          SPOTRF.
*          On exit, the upper or lower triangle of the (symmetric)
*          inverse of A, overwriting the input factor U or L.
*
*  LDA     (input) INTEGER
*          The leading dimension of the array A.  LDA >= max(1,N).
*
*  INFO    (output) INTEGER
*          = 0:  successful exit
*          < 0:  if INFO = -i, the i-th argument had an illegal value
*          > 0:  if INFO = i, the (i,i) element of the factor U or L is
*                zero, and the inverse could not be computed.
*
*  =====================================================================
public SPOTRI()
public static void SPOTRI(java.lang.String uplo, int n, float[][] a, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SPOTRI.html | CC-MAIN-2015-22 | en | refinedweb |
20 March 2013 22:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Wednesday’s end-of-day markets summary:
CRUDE: Apr WTI: $92.96/bbl, up $0.80; May Brent: $108.72/bbl, up $1.27
NYMEX light sweet crude (WTI) rose in response to the weekly supply statistics from the US Energy Information Administration (EIA) showing a contrary-to-forecast drawdown in crude stocks. A rally in the stock market boosted by a statement from the Federal Reserve to continue the stimulus program also provided support. The April WTI contract went off the board at the end of the session.
RBOB: Apr: $3.1163/gal, up 7.12 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures settled higher as it tracked crude futures and an EIA storage report that showed a drop in gasoline inventories.
NATURAL GAS: Apr: $3.960/MMBtu, down 0.9 cents
The front month on the NYMEX natural gas market finished the day down from yesterday’s 18-month high despite rallying in the afternoon session, as milder weather forecasts for April lowered expectations for near-term winter heating demand.
AROMATICS: toluene flat at $3.90-4.00/gal, mixed xylene tighter at $3.90-4.00/gal
Prompt n-grade toluene and mixed xylene prices were discussed at parity during the day, sources said. Toluene spot prices were stable from $3.90-4.00/gal FOB (free on board) the previous session. Meanwhile, mixed xylene (MX) prices discussions were tighter compared with $3.90-4.05/gal FOB the previous day.
OLEFINS: May ethylene done 63.25-63.75 cents/lb, Mar PGP offered lower at 63.25 cents/lb
Two deals for May ethylene were heard done at 63.25 and 63.75 cents/lb, putting material in backwardation compared to March product. Offers for March polymer-grade propylene (PGP) fell to 63.250 cents/lb, compared with deals done the previous day at 65.375 and 65.500 | http://www.icis.com/Articles/2013/03/20/9652254/evening-snapshot-americas-markets-summary.html | CC-MAIN-2015-22 | en | refinedweb |
An easy-to-use, extensible jQuery wrapper around Ooyala's Javascript Player
jQuery-Ooyala provides a dead-simple interface for creating and working with Ooyala's V3 Javascript Player. It is:
OOobject for a player, as well as the underlying Ooyala player itself
Built by your friends on the Refinery29 Mobile Web Team
You can install jquery-ooyala using either bower
$ bower install jquery-ooyala
or component

$ component install refinery29/jquery-ooyala
or npm
$ npm install jquery-ooyala
In your html, add an element with class "oo-player", as well as a script tag pointing to jquery and jquery-ooyala
```html
<!-- An element for the player; playerId and contentId are required -->
<div class="oo-player"
     data-player-id="your_player_id"
     data-content-id="your_content_id"></div>

<script src="jquery.js"></script>
<!-- use dist/jquery.ooyala.min.js for a minified version -->
<script src="dist/jquery.ooyala.js"></script>
```
And that's all there is to it! jquery-ooyala will take care of all the plumbing around loading the Ooyala v3 player represented by the id specified in
data-player-id, as well as create the Player with the video specified by
data-content-id.
You can, of course, also use Javascript to initialize the plugin. Assuming you have the following html
```html
<div id="your-ooyala-player-element"></div>

<script src="jquery.js"></script>
<!-- use dist/jquery.ooyala.min.js for a minified version -->
<script src="dist/jquery.ooyala.js"></script>
```
Then you can instantiate the plugin like so:
```javascript
$( "#your-ooyala-player-element" ).ooyala({
  playerId: "your_player_id",
  contentId: "your_content_id"
});
```
Note that in either case, playerId and contentId are required.
Often times, especially on mobile, you'll want to wait until a user interacts with an element on your page before you load all of the assets needed for the ooyala player. You can easily achieve this functionality using jquery-ooyala by specifying an event it should listen for before loading the player.
```html
<div class="oo-player"
     data-player-id="your_player_id"
     data-content-id="your_content_id"
     data-lazy-load-on="touchend">
  Tap me to load a video!
</div>
```
Now the ooyala player will not be loaded until a
touchend event is triggered on that element.
You can achieve the same result using javascript:
```javascript
$( "#your-ooyala-player-element" ).ooyala({
  playerId: "your_player_id",
  contentId: "your_content_id",
  lazyLoadOn: "touchend"
});
```
By default, jquery-ooyala opts for loading Ooyala's HTML5-based video player, rather than its Flash version. If you'd like to disable this behavior and use the HTML5 player only as a fallback, set `data-favor-html5="false"` on an `.oo-player` element, or specify `favorHtml5: false` as an option to the plugin call in Javascript. Note that there are other "platforms" you can use for the player, which can be specified by a custom URI param to the player script tag. For more info on this, see the section on Custom script tag / player params below.
jquery-ooyala allows you to control where exactly the player gets placed within its containing element via the
data-player-placement attribute (html) /
playerPlacement option (javascript). This option can have 3 possible values:
`this` in this function will be set to the jQuery object representing the element the plugin was called on.
Here's how you could specify player placement using HTML
This text will appear below the ooyala player
And here's how you could specify it within javascript using a custom placement function:
```javascript
$( "#your-ooyala-player-element" ).ooyala({
  playerId: "your_player_id",
  contentId: "your_content_id",
  playerPlacement: function( $videoContainer ) {
    this.find( ".some-inner-container" ).append( $videoContainer );
  }
});
```
jquery-ooyala provides the following css hooks for styling:
- `oo-player` is added to every element which the plugin is called on
- `oo-player-loading` is applied during initialization (before the player/content has loaded) as well as when different videos are being loaded
- `oo-player-ready` is applied when content has been loaded and is ready to be played
- `oo-player-playing` is applied when the player is currently playing content
- `oo-player-paused` is applied when currently playing content is paused
- `oo-player-error` is applied if there is an error loading the player script, or if there is an error when loading content for a player
We use SMACSS-style naming conventions and don't apply any styling ourselves. Therefore you can apply universal styles using `oo-player`, and then apply more specific styles using the `oo-player-*` state specifiers.
jquery-ooyala provides an interface to subscribe to Player events in the same way that you would subscribe to any other event on a jquery object. The format for subscribing to these events is
"ooyala.event.<EVENT_KEY>", where
<EVENT_KEY> corresponds to a property name on the
OO.EVENTS object. All arguments sent by the player message bus will be passed onto the event handlers.
```javascript
$( "#your-ooyala-player-element" )
  .ooyala( /* ... */ )
  .on( "ooyala.event.WILL_PLAY_ADS ooyala.event.WILL_PLAY_SINGLE_AD", showAdBanner )
  .on( "ooyala.event.WILL_RESUME_MAIN_VIDEO", hideAdBanner )
  .on( "ooyala.event.FULLSCREEN_CHANGED", function( evt, isFullscreen ) {
    console.debug( isFullscreen ? "Fullscreen mode on" : "Fullscreen mode off" );
  });
```
For more information on the events you can hook into, take a look at Ooyala's Player Message Bus Events
A common use case with any video player is the ability to change the current video that is playing. This can be accomplished by using what we call "triggers". Triggers are elements that, when interacted with, will change the state of the player. A trigger looks something like this:
Play different video
The
data-oo-player-trigger param lets jquery-ooyala know that when the button is clicked, then the player within
`<div id="player"/>` should load the content
represented by id
"456". These triggers are registered when jquery-ooyala first initializes.
Triggers can also be used to seek within the same video. This is useful for longer videos, in which you may want something similar to "Chapters"
Chapter 1 (0:00)
Chapter 2 (5:00)
The properties that can be specified within this object are as follows:
If the trigger's `contentId` is the same as the current `contentId` for the content that's playing, this will cause the player to seek to the specified offset (in seconds) within that video. Note that the offset property will do nothing if the `contentId`s differ.
Note that this functionality can be emulated relatively easily using Javascript
```javascript
var $el = $( "#your-ooyala-player-element" ).ooyala( /* ... */ ),
    ooyala = $el.data( "ooyala" );

$( "#btn-play-some-video" ).on( "click", function() {
  ooyala.loadContent( "content_id_of_some_video" );
});

$( "#btn-seek-some-video" ).on( "click", function() {
  if ( ooyala.settings.contentId === "content_id_of_some_video" ) {
    ooyala.seek( 60 );
  }
});
```
If you need to provide any query paramaters to the player script call, or need to provide any additional embedded parameters to the ooyala player, you can do so by using
data-url-params and
data-player-params, respectively.
```javascript
$( "#your-ooyala-player-element" ).ooyala({
  playerId: "your_player_id",
  contentId: "your_content_id",
  urlParams: {
    platform: "html5-fallback"
  },
  playerParams: {
    autoplay: true,
    initialTime: 30
  }
});
```
Note that for url params,
namespace will be ignored, as we control that internally. Additionally,
onCreate is ignored in player params. See the next section for info on how to hook into
onCreate.
When the ooyala script has completely loaded and the player has been created, the element containing the ooyala plugin will trigger an
"ooyala.ready" event on itself, and pass along both the instantiated player, as well as that player's global
OO object. You can hook into this event to perform low-level interactions with the player and the object.
$( "#your-ooyala-player-element" )
  .ooyala( /* ... */ )
  .on( "ooyala.ready", function( ev, player, OO ) {
    // work with OO and player
  });
The plugin instance itself can be retrieved by calling
.data( "ooyala" ) on the element. This will return an instance of
OoyalaWrapper, which is what we use to encapsulate all of the functionality of the plugin.
var ooyala = $( "#your-ooyala-player-element" ).data( "ooyala" ),
    player, OO;

// If you feel you need to, you can access the actual ooyala player using the `_player` property
player = ooyala._player;

// Additionally, you can access the global object for the player using the `_ooNamespace` property
OO = window[ ooyala._ooNamespace ];

// Using the javascript API to play some content
ooyala.play();
See the plugin javascript API for more info
When jquery-ooyala initially loads, it automagically checks for all elements of class
oo-player and instantiates the plugin on all of them. It also wires up any trigger elements it finds. If you want to disable this functionality, add a
data-auto-init="false" attribute to the script tag that loads the player. Note that within the code we simply check for
script[data-auto-init], so that attribute can be attached to any script tag, such as a built
vendor.js file.
<!-- vendor.js contains jquery-ooyala bundled with it -->
<script src="vendor.js" data-auto-</script>
You may also want to manually trigger this event, such as in a single-page application where elements are dynamically generated. You can do this by triggering a
"jquery.ooyala.initialize" on
document, which jquery-ooyala will listen to and perform that initialization logic when it's triggered.
$( document ).on( "dataVideosResponseReceived", function( ev, videos ) {
  var html = videosTemplateFunction( videos );
  $( "#view-container" ).append( html );
  $( document ).trigger( "jquery.ooyala.initialize" );
});
You can access the
OoyalaWrapper constructor function by invoking
$.data( document.body, "_jquery.ooyala" ). Note, however, that in most cases there are better ways to achieve what you are trying to do. That said, if you'd rather use a different pattern in your code than a traditional jquery plugin pattern, this will give you that flexibility.
For those who need to get closer to the metal, the plugin provides a complete javascript API that is used under the hood to create/manipulate the player.
This is the constructor that is used to instantiate the plugin on each element it's called on.
var ooyala = $( "#some-player" ).data( "ooyala" ); // OoyalaWrapper instance
opts: an object containing plugin options (outlined below)
options.playerId. Returns a promise that is fulfilled when the player script loads and is executed
contentId
OoyalaWrapper proxies the ooyala player's
play(),
pause(),
seek(), and
skipAd() functions. See the ooyala docs for information on how to use those methods.
To try the demos, serve the repository with any static file server (e.g. python's SimpleHTTPServer):

$ cd /path/to/jquery-ooyala
$ python -m SimpleHTTPServer
Then visit localhost:8000/demo/DEMO_FILE_NAME.html, where you should see the demo. Note that we load bootstrap and jquery from a CDN on these pages, so you'll need a working internet connection.
I'm not seeing anything when I use the plugin on an empty element
Ensure that you set at least a
min-height and (if you're not using a block element) a
min-width on the element you're placing the plugin into. Ooyala sets styles on their html5 player to
width:100%;height:100%, so if your element has no width/height, it won't show. You can also use the
.oo-player css hook to apply global styles to all ooyala players on your page.
I'm trying to specify a certain namespace using
urlParams, but I'm not seeing the Ooyala object in that namespace
jquery-ooyala does not honor the namespace url param if passed in. This is the only way we can reliably support having multiple players on one page. If you really need to assign a global
OO object for a player to a specific window property, you can accomplish this by using the
"ooyala.ready" event:
$( "#element" )
  .ooyala( /* ... */ )
  .on( "ooyala.ready", function( ev, player, OO ) {
    window.MY_OO_NAMESPACE = OO;
  });
If you're experiencing other problems or have found a bug, please let us know by creating an issue
MIT License © Refinery29, Inc. 2014

Source: https://www.npmjs.com/package/jquery-ooyala
The stepper controllers that I have seem to work by having two control pins. Would it be possible to use something similar to the Servo library to send out the necessary pulses and keep the rest of the code "unblocked"?
#include <digitalWriteFast.h>

#define stepPin1a 2
#define stepPin1b 3
#define stepPin2a 4
#define stepPin2b 5
// Add more as needed.

int minStepTime = 25; // delay in microseconds between step pulses. lower = faster.
boolean stepState = LOW;
unsigned long stepTimeOld = 0;
long stepper1Pos = 0;
long stepper2Pos = 0;
long stepper1Goto = 0;
long stepper2Goto = 0;

void stepLight() {
  unsigned long curTime = micros();
  if (curTime - stepTimeOld >= (minStepTime / 2)) {
    stepState = !stepState;
    // Add more as needed here as well. (Must match #define section.)
    if (stepper1Pos > stepper1Goto) { digitalWriteFast2(stepPin1a, stepState); stepper1Pos--; }
    if (stepper1Pos < stepper1Goto) { digitalWriteFast2(stepPin1b, stepState); stepper1Pos++; }
    if (stepper2Pos > stepper2Goto) { digitalWriteFast2(stepPin2a, stepState); stepper2Pos--; }
    if (stepper2Pos < stepper2Goto) { digitalWriteFast2(stepPin2b, stepState); stepper2Pos++; }
    stepTimeOld = curTime;
  }
}

// call from loop routine like this:
void loop() {
  stepper1Goto = 12345; // update as often as you like.
  stepper2Goto = 67890;
  stepLight();
}
Source: http://forum.arduino.cc/index.php?topic=97654.msg733831
Generating PDF reports - JSP-Servlet

Question

I am trying to generate a PDF report using JSP. I want to export values stored in a SQL Server database to a PDF file, so please do reply.

pdf generate from jsp

How do I generate a PDF using JSP that queries the data from the database, writes it into a PDF, and downloads the same?
What is JSP?

Hi, what is JSP? What is the use of JSP? Thanks.

Hi, JSP stands for Java Server Pages. It is a Java technology for developing web applications. JSP is very easy to learn, and it allows developers to use Java in their web pages.
create pdf from jsongrid

I need to create a PDF from a JSON grid in Java/Struts2; otherwise I need to create the PDF from the result.
Create PDF from java

Good afternoon, I have a problem: how to create code that creates a PDF file from a database call, from Java programming. Thank you, Hendra

Create PDF from Java

import java.io.*;
import java.sql.*;
reading from pdf

How can I read specific words from a PDF file?

Java Read pdf file

import java.io.*;
import java.util.*;
import com.lowagie.text.*;
import com.lowagie.text.pdf.*;

public class ReadPDF {
    public
jsp

How do I edit only one row out of multiple rows from a single JSP page dynamically?

<%@ page language="java" import="java.sql.*"%>
...
("Disconnected from database");
%>
<TABLE style="background-color: #E3E4FA
Source: http://roseindia.net/tutorialhelp/comment/11342
#include "petscdmshell.h"
PetscErrorCode DMLocalToLocalBeginDefaultShell(DM dm,Vec g,InsertMode mode,Vec l)
Note: This is not normally called directly by user code, generally user code calls DMLocalToLocalBegin() and DMLocalToLocalEnd(). If the user provides their own custom routines to DMShellSetLocalToLocal() then those routines might have reason to call this function.
Level: advanced

Location: src/dm/impls/shell/dmshell.c
Index of all DM routines
Table of Contents for all manual pages
Index of all manual pages

Source: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/DM/DMLocalToLocalBeginDefaultShell.html
Axis 1.1 used to be the SOAP framework that was integrated with RIFE. The integration was very hackish to say the least since there were no easily replaceable and reusable parts that could be used with another servlet-like gateway in front of it. Since Axis 1.1 doesn't work with Java 5.0, we really had to remove it and either support version 1.2 or use another solution. We decided to move over to XFire and made creating SOAP elements even easier thanks to it.
RIFE version 1.5.1 requires the XFire 1.0 release, later versions support the XFire 1.2 API (as of 1.6 snapshots).
All that's needed to setup a SOAP webservice in RIFE with XFire now are two properties, like this:
<element id="ECHO" extends="rife/soap/xfire.xml" url="/soap" >
<property name="home-class">com.uwyn.rife.engine.testwebservices.soap.xfire.Echo</property>
<property name="home-api">com.uwyn.rife.engine.testwebservices.soap.xfire.EchoApi</property>
</element>
The Java source code is very simple:
public class Echo implements EchoApi
{
public String echo(String value)
{
return "I got : '"+value+"'";
}
}
public interface EchoApi
{
public String echo(String value);
}
The webservice can now be accessed at the "/soap" URL declared in the element above. To see the WSDL, just fetch "/soap?wsdl".
SOAP services now also support the ElementService interface.

Source: http://rifers.org/wiki/display/RIFE/Support+for+SOAP+web+services
Get DirectBuffers from a visual
Name
ggiDBGetNumBuffers, ggiDBGetBuffer : Get DirectBuffers from a visual
Synopsis
#include <ggi/ggi.h> int ggiDBGetNumBuffers(ggi_visual_t vis); const ggi_directbuffer *ggiDBGetBuffer(ggi_visual_t vis, int bufnum);
Description
Dependent on the visual and runtime environment found, applications may be granted direct access to hardware and/or library internal buffers. This may significantly enhance performance for certain pixel oriented applications or libraries.
The DirectBuffer is a mechanism in which a LibGGI program can use to determine all the characteristics of these buffers (typically the framebuffer), including the method of addressing, the stride, alignment requirements, and endianness.
However, use not conforming to this specification will have undefined effects and may cause data loss or corruption, program malfunction or abnormal program termination. So you don't really want to do this.
ggiDBGetNumBuffers returns the number of DirectBuffers available to the application. ggiDBGetBuffer obtains the DirectBuffer at the specified position.
Use ggiDBGetBuffer to obtain the DirectBuffers at 0 to n-1, where n is the number returned by ggiDBGetNumBuffers.
Pixel-linear buffers have 'type==GGI_DB_SIMPLE_PLB | GGI_DB_NORMAL'. You're on your own now.
DirectBuffers where ggiResourceMustAcquire(3) is true need to be 'acquired' (i.e. locked) before using. An acquire is done by using ggiResourceAcquire(3) and is released by calling ggiResourceRelease(3). Beware that the read, write and stride fields of the DirectBuffer may be changed by an acquire, and that they may be NULL or invalid when the DirectBuffer is not acquired.
Return value
ggiDBGetNumBuffers returns the number of DirectBuffers available. 0 indicates that no DirectBuffers are available.
ggiDBGetBuffer returns a pointer to a DirectBuffer structure.
Types of Buffers
Only the framebuffer is defined currently. Other types of buffers, such as stencil, z will be defined by appropriate GGI extensions.
A frame buffer may be organized as several distinct buffers. Each buffer may have a different layout. This means both the addressing scheme to be used as well as the addressing parameters may differ from buffer to buffer.
A framebuffer is denoted by ggi_directbuffer.type`==`GGI_DB_NORMAL. Each frame has its own buffer, and its number is indicated in ggi_directbuffer.frame.
Examples
How to obtain a DirectBuffer:
ggi_visual_t vis;
ggi_mode mode;
int i;

/* Framebuffer info */
unsigned char *fbptr[2] = { NULL, NULL };
int stride[2];
int numbufs;

mode.frames = 2;          /* Double-buffering */
mode.visible.x = 640;     /* Screen res */
mode.visible.y = 480;
mode.virt.x = GGI_AUTO;   /* Any virtual resolution. Will usually be set */
mode.virt.y = GGI_AUTO;   /* to be the same as visible but some targets */
                          /* may have restrictions on virtual size. */
mode.graphtype = GT_8BIT; /* Depend on 8-bit palette. */
mode.dpp.x = mode.dpp.y = GGI_AUTO; /* Always 1x1 but we don't care. */

if (ggiInit()) {
    /* Failed to initialize library. Bomb out. */
}

vis = ggiOpen(NULL);
if (!vis) {
    /* Opening default visual failed, quit. */
}

if (ggiSetMode(vis, &mode)) {
    /* Set mode has failed; should check if the suggested mode is
       o.k. for us, and try the call again. */
}

numbufs = ggiDBGetNumBuffers(vis);

for (i = 0; i < numbufs; i++) {
    const ggi_directbuffer *db;
    int frameno;

    db = ggiDBGetBuffer(vis, i);

    if (!(db->type & GGI_DB_SIMPLE_PLB)) {
        /* We don't handle anything but simple pixel-linear buffers.
           Fall back to ggiPutBox() or something. */
        continue;
    }

    frameno = db->frame;

    if (fbptr[frameno] != NULL
        && (db->buffer.plb.pixelformat->flags & GGI_PF_REVERSE_ENDIAN)) {
        continue;
    }

    fbptr[frameno] = db->write; /* read == write for simple plbs */

    /* Stride of framebuffer (in bytes). */
    stride[frameno] = db->buffer.plb.stride;

    /* Check pixel format, be portable... */
}

Source: http://www.ggi-project.org/documentation/libggi/current/ggiDBGetNumBuffers.3.html
#include <config_processor.hh>
Inheritance diagram for mpcl::net::cgi::TConfigProcessor:
In CGI programs, the parameters come in this format:
<name_1>=[<value_1>]&<name_2>=[<value_2>]&...<name_n>=[<value_n>]
Furthermore, options can have more than one value. One of the most useful techniques used in HTMLL is coding the identifier and value as the NAME attribute of THtmlInputImage (input type=image). This must be done due to the fact that, when that type of input is used, the value provided by the server is:
<name.x>=<position_x>&<name.y>=<position_y>
As you can see, there is no value attribute, so HTMLL bypasses this problem encoding name as a pair of identifier and value. Then, the server will give us, something like this:
<identifier/value.x>=<position_x>&<identifier/value.y>=<position_y>
Next this is translated to:
<identifier>=<value>&<identifier>=<value>
Definition at line 75 of file net/cgi/config_processor.hh. | http://www.uesqlc.org/doc/mpcl/classmpcl_1_1net_1_1cgi_1_1_t_config_processor.html | crawl-001 | en | refinedweb |
#include <general-functions.hh>
List of all members.
Definition at line 50 of file general-functions.hh.
[inline]
Builds a new instance for appending rktSOURCE_ITEM to a provided TTargetItem instance (that must be a pointer).
Definition at line 65 of file general-functions.hh.
References mpcl::util::MAppend< TTargetItem, TSourceItem >::rktSourceItem.
Appends the item rktSourceItem to rktTARGET_ITEM (which must support the operator '<<').
Definition at line 73 of file general-functions.hh. | http://www.uesqlc.org/doc/mpcl/structmpcl_1_1util_1_1_m_append.html | crawl-001 | en | refinedweb |
a series to demonstrate how to integrate Cell/B.E. functionality into existing projects
Level: Intermediate
Jonathan Bartlett, Director of Technology, New Medio
04 Sep 2007
Traditional porting requires identifying and abstracting out the
architecture-dependent code: making code endian-independent, working through minor
API differences, and including the appropriate header files and libraries. While
this procedure works for getting code to run on the Cell Broadband
Engine™ processor, it leaves the Synergistic Processing Elements unutilized. This
article looks at how to take advantage of them with minimal impact on existing code.
Because the Cell/B.E. processor includes a Power Processing Element (called the PPE), any
program can be easily ported using the same procedures that would be used for any other
PowerPC®-based processor. However, this leaves all of the Synergistic Processing
Elements (the SPEs) completely unutilized. Several issues must be dealt with when porting an application to take advantage of the SPEs:

- deciding which pieces of code are worth offloading to the SPEs
- moving data between main memory and the SPEs' local stores with DMA transfers, including the alignment requirements that come with them
- modifying the build process as little as possible while still compiling and loading the SPE programs
The following article examines some ways that programmers can take advantage of the SPEs in existing code with minimal impact to the code and build process.
Identifying optimal workloads for the SPEs
The first thing to do is to identify code that you want to run on the SPEs. The three
things that need to be considered for this are:

- how well the code can be parallelized
- how heavily the code branches
- how the code accesses its data
Of these, the ability for the code to be parallelized is probably least important. The
fact is that in most running conditions on UNIX® operating systems, the PPE will
always have plenty to do; therefore, even if the amount of processor time used is the same whether or not the SPE is used, a program running on the SPE frees the PPE to run other programs.
The biggest speed obstacle for the SPEs is code with a lot of branches. On the SPE, only
one branch hint can be active at a time. The SPE has no hardware for branch prediction (it simply predicts, in the absence of a hint, that the branch will not be taken), and a mispredicted branch can cost 18 cycles. This means that code with lots of vtable lookups (like object-oriented code), lots of function calls, or lots of conditionals could in fact run slower on the SPE than on the PPE.
In addition, the SPE's power comes in the fact that it utilizes a SIMD architecture—it
processes 128 bits at a time. Therefore, if processing multiple values, it is best if all
of the values can be processed through the same instructions. That is, if you are
processing an array of values, you would want to have all of the values processable by the
exact same set of statements—not having "if" statements change the direction of the
processing. For more on methods for performance SPE programming, see Resources at the end of the article.
Another issue is the data. The SPE cannot access main memory directly; it must be moved in and out with DMA transfers. This means that pointers to main memory cannot be just dereferenced. The data must be explicitly transferred to the local store before it is evaluated. Essentially all main memory pointer lookups must be handled by explicit loading and unloading instructions. This is not only difficult for the programmer, it also does not utilize the SPE's resources efficiently.
Therefore, optimal data structures for SPE processing are structures which have the following properties:

- they are laid out contiguously (arrays rather than pointer-chased structures), so they can be moved with a few large DMA transfers
- they are aligned on at least 16-byte boundaries, as DMA transfers require
- they can be processed in chunks that fit within an SPE's local store
This doesn't mean that if your data structures don't possess these properties you are out of luck. These are simply what will allow you to get the most mileage out of the SPE. If you are trying to choose which parts of your program to offload to an SPE or how to rearrange your data structures to get better performance, these guidelines will go a long way to helping you make good design choices.
Here's a sample program to port
The application that we will port is a little contrived, but it should serve to point out
most of the issues involved in porting. The application will simply take a file of floating-point numbers (the first number is an integer telling how many numbers are in the file), compute the standard deviation of the numbers in that file, and print it out. Even though this could easily be a one-file project, I will break it out into multiple files to help show what kind of changes would be required in larger-scale projects.
#ifndef MY_MATH_H
#define MY_MATH_H
float calculate_standard_deviation(int num_values, float *values);
#endif
#include <math.h>
#include <stdlib.h>
#include "my_math.h"
float calculate_standard_deviation(int num_values, float *values) {
int i; /* counter */
float sum = 0.0, sum_squares = 0.0;
float avg, variance, std_dev;
/* Loop through all the values */
for(i = 0; i < num_values; i++) {
sum += values[i];
sum_squares += values[i]*values[i];
}
avg = sum / (float)num_values;
variance = (sum_squares - (sum * avg)) / (float)num_values;
std_dev = sqrt(variance);
return std_dev;
}
#include <stdio.h>
#include <stdlib.h>
#include "my_math.h"
int main(int argc, char **argv) {
int i, res; /* temporaries */
FILE *f;
int num_values;
float *all_values;
float std_dev;
if(argc != 2) {
fprintf(stderr, "Usage: stddev input_file\n");
exit(1);
}
/* Open the File */
f = fopen(argv[1], "r");
if(f == NULL) {
perror("Unable to open file");
exit(1);
}
/* Get the total number of values to read */
res = fscanf(f, "%d", &num_values);
if(res != 1) {
fprintf(stderr, "Invalid file format.");
exit(1);
}
/* Allocate memory for all values */
all_values = (float *)malloc(sizeof(float) * num_values);
/* Read in all of the values */
for(i = 0; i < num_values; i++) {
res = fscanf(f, "%f", &all_values[i]);
if(res != 1) {
fprintf(stderr, "Invalid file format.");
exit(1);
}
}
/* Perform calculation */
std_dev = calculate_standard_deviation(num_values, all_values);
/* Print result */
printf("The standard deviation is %f\n", std_dev);
return 0;
}
OBJS = my_math.o main.o
LIBS = -lm
CFLAGS = -m32 -O3
LDFLAGS = -L.
stddev: $(OBJS)
$(CC) $(LDFLAGS) $(OBJS) $(LIBS) -o stddev
.c.o:
$(CC) -c $(CFLAGS) -c $<
clean:
rm -rf *.o
test: stddev
./stddev test.dat
8
0.51659
0.40238
0.81590
0.14230
0.00324
0.99185
0.81089
0.00253
To build, just issue a make command; to test, just do make test.
make
make test
Now when you port the code to use the SPEs, it is good to keep in mind the maintainers of the other platforms and to try not to cause problems for them. Here are a few guidelines to follow to make the impact for other platform maintainers minimal:
- Guard SPE-specific sections of the existing .c files with a preprocessor symbol (this article uses USE_SPE), so builds for other platforms compile the original code unchanged.
- Keep SPE program sources in their own files; giving them their own extension (such as .spuc) keeps existing build rules from picking them up.
- Load SPE programs at run time with spe_image_open rather than embedding them in the executable, so the main program's link step is unaffected.
- Rather than changing every allocation site, redefine malloc() with the preprocessor so that allocations are automatically aligned for DMA transfers.
A simplistic SPE RPC library
The first part of the port will be creating a simplistic SPE remote procedure call library. This library will handle loading in SPE programs and sending them requests. The easiest and most general way to do this is to give each SPE context/program one function to perform. The SPE program will merely sit in an infinite loop waiting for data to process. On each iteration, it will wait for a pointer to arrive in its mailbox, DMA in the input parameters from the pointer, marshall those parameters to the real function, and then the real function will execute and hand the result back to the marshaller which will DMA the result back to main memory and signal the completion of the operation.
The PPE program will have a stub function that will do the following:

- on the first invocation, load and start the SPE program with spe_remote_function_start
- marshall its arguments into an aligned parameter structure
- invoke the remote function with spe_remote_call, either blocking or nonblocking
- unpack the result from the parameter structure and return it
Here is the library header file:
#ifndef SPE_PORT_H
#define SPE_PORT_H
/* Since we rewrite malloc with the preprocessor, we need to make sure */
/* we include its header file first */
#include <stdlib.h>
/* Alignment macros */
#define SPE_ALIGNMENT 16
#define SPE_ALIGNMENT_FULL 128
#define SPE_ALIGN __attribute__((aligned(16)))
#define SPE_ALIGN_FULL __attribute__((aligned(128)))
#define ROUND_UP_ALIGN(value, alignment)\
(((value) + ((alignment) - 1))&(~((alignment)-1)))
/* Redefine malloc to use our own version */
#define malloc(x) spe_aligned_malloc((x))
/* Hide the PPE header info from the SPE */
#ifndef __SPU__
/* Makes it easier to dereference integers from a */
/* pointer and a local store base address */
#define SPE_DEREF_UINT32(base, offset) *((unsigned int *)(((char *)(base)) + (offset)))
#include <pthread.h>
#include <libspe2.h>
/* Basic process information */
typedef struct {
char *spe_filename;
void *initialization_data;
spe_program_handle_t *spe_image;
spe_context_ptr_t spe_context;
pthread_t spe_thread;
} spe_remote_function_t;
typedef spe_remote_function_t *spe_remote_function_ptr_t;
/* Functions */
/* Malloc() for PPE programs allocating data to pass to SPE programs */
void *spe_aligned_malloc(unsigned int size);
/* Initialize an SPE function - initialization_data is passed to the */
/* SPE program as argp */
spe_remote_function_ptr_t spe_remote_function_start(char *spe_program_filename,
void *initialization_data);
/* Terminate an SPE function (not normally used; */
/* it is automatically terminated on exit) */
void spe_remote_function_kill(spe_remote_function_ptr_t);
/* Run the SPE function; must be passed an integer pointer for status and can be */
/* called in blocking or nonblocking mode */
int spe_remote_call(spe_remote_function_ptr_t spe_func, void *arguments, int runflags,
int *status_ptr);
/* For non-blocking calls, use this function to wait for them to complete */
int spe_wait_completion(volatile int *status_ptr, int busy_wait);
/* Run flags */
#define SPE_RUN_NONBLOCK 1
#define SPE_RUN_BLOCKING 0
#endif
#endif
So now look at how you would use this in the standard deviation program to make calls to
the SPE processor. First of all, you would need a header file to define the parameter structure used to pass the data. The file looks like this:
#ifndef MY_MATH_SPE_H
#define MY_MATH_SPE_H
#include "speport.h"
typedef struct {
int num_values SPE_ALIGN;
float *values SPE_ALIGN;
float result SPE_ALIGN;
} spe_std_dev_params_t;
#endif
Then, to call the function from the PPE, you would do this:
...include files...
/* USE_SPE is a define we can pass to tell it to use or not use SPE-specific functions */
#ifndef USE_SPE
float calculate_standard_deviation(int num_values, float *values) {
....original function here for other platforms....
}
#else
/* SPE-specific includes */
#include "speport.h"
#include "my_math_spe.h"
/* Stub for SPE function */
float calculate_standard_deviation(int num_values, float *values) {
/* Initialize to NULL so we know on the first run to initialize it */
static spe_remote_function_ptr_t std_dev_func = NULL;
/* Parameter struct to call the SPE function */
spe_std_dev_params_t params SPE_ALIGN;
/* Status variable for the RPC */
int status SPE_ALIGN;
/* Start up the SPE process if this is our first run */
if(std_dev_func == NULL) {
std_dev_func = spe_remote_function_start("./spe_std_dev", NULL);
if(std_dev_func == NULL) {
fprintf(stderr, "Error starting thread!");
exit(1);
}
}
/* Make parameters */
params.num_values = num_values;
params.values = values;
/* Call the function */
if(spe_remote_call(std_dev_func, ¶ms, SPE_RUN_BLOCKING, &status) < 0)
{
fprintf(stderr, "Error running function\n");
exit(1);
}
/* Return the result */
return params.result;
}
#endif
As you can see, this is a very thin but intuitive interface for making SPE function calls. It requires that both sides decide on how the data will be formatted, but overall it provides convenience where it is most needed.
The SPE side follows the following procedure:

- block until a parameter-structure address and a status address arrive in its mailbox
- DMA the parameter structure from main memory into the local store
- marshall the parameters to the real function and run it
- DMA the results back to main memory, then write the status word to signal completion
Now take a look at how the SPE program is coded:
#include <spu_intrinsics.h>
#include <spu_mfcio.h>
#include <math.h>
#include "my_math.h"
#include "my_math_spe.h"
/* The maximum number of values is limited by the size of the DMA transfer */
#define MAX_VALUES (16384 / sizeof(float))

/* All of our DMA transfers will use this tag */
#define DEFAULT_DMA_TAG 0

float spe_calculate_standard_deviation(int num_values, float *values_ea);

int main(unsigned long long spe_id, unsigned long long argvp) {
    int status SPE_ALIGN;

    /* All DMA transfers in this program are using tag 0 */
    mfc_write_tag_mask(1 << DEFAULT_DMA_TAG);

    /* This just sits in a loop and waits for requests to perform this function */
    while(1) {
        /* Block until we are given mailbox parameters */
        unsigned int address = spu_read_in_mbox();
        unsigned int status_address = spu_read_in_mbox();

        /* Marshall in parameters */
        spe_std_dev_params_t params;
        spu_mfcdma32(&params, address, sizeof(spe_std_dev_params_t),
                     DEFAULT_DMA_TAG, MFC_GET_CMD);
        spu_mfcstat(MFC_TAG_UPDATE_ALL);

        /* Check boundaries */
        if(params.num_values > MAX_VALUES) {
            /* Report error */
            status = -1;
            spu_mfcdma32(&status, status_address, sizeof(int),
                         DEFAULT_DMA_TAG, MFC_PUT_CMD);
        } else {
            /* Perform task */
            params.result = spe_calculate_standard_deviation(params.num_values,
                                                             params.values);
            /* Write back results */
            spu_mfcdma32(&params, address, sizeof(spe_std_dev_params_t),
                         DEFAULT_DMA_TAG, MFC_PUTB_CMD);
            /* Send status notification */
            status = 1;
            spu_mfcdma32(&status, status_address, sizeof(int),
                         DEFAULT_DMA_TAG, MFC_PUTB_CMD);
        }
    }
}

/* Actual function implementation */
float spe_calculate_standard_deviation(int num_values, float *values_ea) {
    int i;
    float sum = 0.0, sum_squares = 0.0;
    float avg, variance, std_dev;
    static float ls_values[MAX_VALUES];

    /* Load in values from main memory pointer */
    spu_mfcdma32(ls_values, (unsigned int)values_ea, num_values * sizeof(float),
                 DEFAULT_DMA_TAG, MFC_GET_CMD);
    spu_mfcstat(MFC_TAG_UPDATE_ALL);

    /* Loop through all the values */
    for(i = 0; i < num_values; i++) {
        sum += ls_values[i];
        sum_squares += ls_values[i] * ls_values[i];
    }
    avg = sum / (float)num_values;
    variance = (sum_squares - (sum * avg)) / (float)num_values;
    std_dev = sqrt(variance);
    return std_dev;
}
So, while the marshalling code is a little annoying to write, it is all fairly straightforward.
Now that you've seen what the library can do, look at how it is coded:
#include <stdlib.h>
#include "speport.h"

static void *spe_remote_function_thread(void *);

/* Allocate aligned memory - same call interface as malloc() for */
/* easy use in existing programs */
void *spe_aligned_malloc(unsigned int size) {
    void *data;
    /* Align the memory and make sure that we round up the size to */
    /* an aligned multiple */
    posix_memalign(&data, SPE_ALIGNMENT, ROUND_UP_ALIGN(size, SPE_ALIGNMENT));
    return data;
}

/* Initialize the SPE program from the given file */
spe_remote_function_ptr_t spe_remote_function_start(char *spe_program_filename,
                                                    void *initialization_data) {
    int retval;
    spe_remote_function_ptr_t spe_func;

    /* Allocate the structure */
    spe_func = (spe_remote_function_ptr_t)malloc(sizeof(spe_remote_function_t));

    /* Save the filename (we don't currently use it, but we keep it anyway) */
    spe_func->spe_filename = spe_program_filename;

    /* Save the initialization data for when we create the thread */
    spe_func->initialization_data = initialization_data;

    /* Save the SPE image */
    spe_func->spe_image = spe_image_open(spe_program_filename);
    if(spe_func->spe_image == NULL) {
        return NULL;
    }

    /* Create the context */
    spe_func->spe_context = spe_context_create(SPE_EVENTS_ENABLE|SPE_MAP_PS, NULL);
    if(spe_func->spe_context == NULL) {
        return NULL;
    }

    /* Load the SPE image into the context */
    spe_program_load(spe_func->spe_context, spe_func->spe_image);

    /* Create and start the thread for the SPE to execute in */
    retval = pthread_create(&spe_func->spe_thread, NULL,
                            spe_remote_function_thread, spe_func);
    if(retval) {
        return NULL;
    } else {
        return spe_func;
    }
}

/* This is the function that pthread_create calls */
void *spe_remote_function_thread(void *data) {
    spe_remote_function_ptr_t spe_func = (spe_remote_function_ptr_t)data;
    unsigned int entry_point = SPE_DEFAULT_ENTRY;
    int retval;

    /* Switch to running on the SPE */
    retval = spe_context_run(spe_func->spe_context, &entry_point, 0,
                             spe_func->initialization_data, NULL, NULL);
    if(retval != 0) {
        perror("Error running SPE thread");
    }
    pthread_exit(NULL);
}

/* Force kill an SPE function (normally not needed) */
void spe_remote_function_kill(spe_remote_function_ptr_t spe_func) {
    pthread_cancel(spe_func->spe_thread);
    pthread_join(spe_func->spe_thread, NULL);
    spe_context_destroy(spe_func->spe_context);
}

/* Perform the thunk to call the SPE */
int spe_remote_call(spe_remote_function_ptr_t spe_func, void *argument_ptr,
                    int runflags, int *status_ptr) {
    /* Initialize the status pointer */
    *status_ptr = 0;

    /* Send a pointer to the arguments through the mailbox */
    spe_in_mbox_write(spe_func->spe_context, &argument_ptr, 1,
                      SPE_MBOX_ALL_BLOCKING);

    /* Send a pointer to the status through the mailbox */
    spe_in_mbox_write(spe_func->spe_context, &status_ptr, 1,
                      SPE_MBOX_ALL_BLOCKING);

    /* If this is a blocking call, wait until it completes */
    if(runflags == SPE_RUN_BLOCKING) {
        return spe_wait_completion(status_ptr, 0);
    } else {
        return 0;
    }
}

/* Wait until a call has finished by monitoring *status_ptr */
/* (finished when *status_ptr != 0) */
int spe_wait_completion(volatile int *status_ptr, int busy) {
    int status;
    while(1) {
        status = *status_ptr;
        if(status) {
            return status;
        } else {
            if(!busy) {
                /* If we are not in busy wait mode, yield the processor */
                sched_yield();
            }
        }
    }
}
Again, mostly straightforward stuff if you are familiar with libspe2. (If you are not, see Resources.)
Now, for main.c, you just need to conditionally include speport.h to override the malloc() function:
#ifdef USE_SPE
#include "speport.h"
#endif
...rest of program...
Now, the Makefile just needs a few more entries:
OBJS = my_math.o main.o
LIBS = -lm
CFLAGS = -m32 -O3
LDFLAGS = -L.

ifdef USE_SPE
CFLAGS += -DUSE_SPE
LIBS += -lspe2 -lspeport
#NOTE - the "-x c" is because we changed the extension, so we have to tell the compiler
#to use the C frontend.
SPU_CC = spu-gcc -x c
SPU_CFLAGS = -O3
SPU_LDFLAGS = -L.
SPU_LIBS = -lm
endif

stddev: $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) $(OBJS) $(LIBS) -o stddev

libspeport.a: speport.c
	$(CC) $(CFLAGS) -c speport.c
	ar rc libspeport.a speport.o

spe_std_dev: spe_std_dev.spuc
	$(SPU_CC) $(SPU_CFLAGS) spe_std_dev.spuc $(SPU_LDFLAGS) $(SPU_LIBS) -o spe_std_dev

.c.o:
	$(CC) -c $(CFLAGS) $<

clean:
	rm -rf *.o *.a spe_std_dev

test: stddev
	./stddev test.dat
So now you can build the project with the following commands:
make clean
USE_SPE=1 make libspeport.a
USE_SPE=1 make spe_std_dev
USE_SPE=1 make stddev
make test
This will rebuild the program using the new SPE function. And since you have written it as a nice port, if you want to rebuild it without the SPE functionality, you can rerun the same process without the USE_SPE=1 and it will build for the PPE only (and on other processors).
In conclusion
This article covered the basics about how to port existing applications to the
Cell/B.E. processor's SPEs. We implemented a small porting library (which you may feel free to use in your own projects), showed the necessary Makefile changes, and showed how to create a program which can be compiled with or without SPE support by simply modifying an environment variable.
This program still has numerous problems. We do not yet have a great speed advantage over
the PPE code—they are fairly equivalent. It does allow the PPE to be available for
other processing, but there are still numerous tweaks that can be done to speed up the
code, not the least of which is splitting it up among multiple SPEs. In addition, the SPE
code is currently limited to data sets that are a multiple of four values and max out at
4,096 values. In the next article, I will show you how to rectify each of these
limitations. However, I hope this gave you a good idea of where to get started in porting an application to the Cell/B.E. processor.
Resources
About the author
Jonathan Bartlett is the author of the book
Programming from the Ground Up
which is an introduction to programming using Linux assembly language. He is the lead developer at New Medio.
Cell Broadband Engine and Cell/B.E. are trademarks of Sony Computer Entertainment, Inc., in the United States, other countries, or both, and are used under license therefrom. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.
practical checklist, tips, and insight drawn from experience
Level: Advanced
Ajay Sood (r1sood@in.ibm.com), Staff Software Engineer, IBM Global Services, Bangalore, India
17 Feb 2005
Much of today's enterprise-level software on UNIX® caters to the business needs of large companies. And so it must support emerging technologies and follow the rapidly evolving market trends, such as the proliferation of the powerful, flexible Linux™ operating system. Because much of this software is large, multi-threaded, and multi-process, porting it to Linux presents challenges. In this article, get a checklist and advice derived from a real-world port of one piece of enterprise-level software to Linux.
One of the realities of current business IT practices is that many organizations are moving IT to Linux, given its flexibility and stability as a system platform. Another reality is that existing enterprise-level software is too valuable to be discarded. These two realities often crash into each other, but it is critical that they be resolved.
Porting enterprise-level software to Linux can present some interesting challenges. Care has to be taken at all stages -- from making design choices to getting the build system to work to finally getting the OS-specific code to cooperate with Linux.
This article is based on my experiences on RHEL and SLES distributions running C applications on Intel and IBM eServer zSeries architectures, but the lessons could easily be extended to other distributions and architectures as well. I'll talk about some of the planning and technical issues that need to be considered to make your application run on Linux, including the following:
Get the build system working
Products that support multiple platforms typically require code that is specific to the particular operating system on which the product is running. This OS-specific code is typically held in a separate code component within the source directory structure.
For example the OS-specific code layout could look something like this:
src/operating_system_specific_code_component/aix (for AIX).
src/operating_system_specific_code_component/solaris (for Solaris).
src/operating_system_specific_code_component/UNIX (for other flavors of Unix).
The following figure presents a more "graphic" view of the OS-specific code layout.
Make a Linux build system
For the first step, you'll create a directory for Linux-specific code and populate it with files from one of the platforms. When you introduce a new directory for Linux, the layout might look like this:
src/operating_system_specific_code_component/linux (for Linux).
This in turn would give us a new code layout that would look like the following.
Normally, much of the application code is common across all Unix flavors and will work on Linux, too. For Linux-specific code, experience shows that picking Solaris-specific files as an initial drop minimizes the effort to port the platform-specific code to Linux.
Next, change the makefiles and introduce Linux-specific items:
Many changes in the source files are related to changing the include file paths. For example, for the variable errno, <sys/errno.h> needs to be specifically included.
Care must be taken wherever possible that you don't directly include architecture-specific include files, but do include the recommended files instead. For example, as mentioned in <bits/dlfcn.h>:
#ifndef _DLFCN_H
# error "Never use <bits/dlfcn.h> directly; include <dlfcn.h> instead."
#endif
You should use the directive -Dlinux or word "linux" carefully. The preprocessor on Linux translates the word "linux" to the numeral 1. For example, if there is a path in the file like /home/linux and the file is preprocessed using cpp, the path in the output file will look like /home/1. To avoid this substitution, the preprocessor directive could look like this: /lib/cpp -traditional -Ulinux <file_name>.
Common compilation commands
The compiler that programmers normally use is gcc. A typical compile line might look like this: gcc -fPIC -D_GNU_SOURCE -ansi -O2 -c <file_name.c> -I<include_path>. -fPIC helps generate position-independent code and is equivalent to -KPIC on Solaris. -ansi is equivalent to -Xa on Solaris.
For shared objects, a typical link time directive could be gcc -fPIC -shared -o <shared_object> <object_file> -L<library_search_path> -l<library_name>. -shared is equivalent to -G on Solaris.
For relocatable objects with entry points, a typical directive might be gcc -fPIC -shared -o <name> <object_file> -e entry_point -L<library_search_path> -l<library_name>.
Before moving on to choosing the best operating environment, I'll examine the issues surrounding compiling the code on other architectures.
Compiling on other architectures
Another important consideration is that the programmer should be able to get the code to compile on other architectures as easily as possible. The build system should have separate definition files for each architecture involved. For example, the compiler directive for an x86 architecture could have a flag -DCMP_x86 with a directive like -DCMP_PSERIES for some code specific to Linux on pSeries servers. The compile lines in the specific build-definition files will look like this for compiling on an x86-architecture system:
gcc -fPIC -D_GNU_SOURCE -ansi -O2 -c <file_name.c> -I<include_path> -DCMP_x86
and this for compiling on the pSeries architecture:
gcc -fPIC -D_GNU_SOURCE -ansi -O2 -c <file_name.c> -I<include_path> -DCMP_PSERIES
Both -DCMP_x86 and -DCMP_PSERIES are user-defined flags and should be used wherever the program is going to have architecture-specific code within Linux-specific code. My experience has been that most of the application code for Linux is architecture-independent, and the architecture-specific code comes into play in areas in which assembly code needs to be written. For example, architecture-specific code will be used if you're going to exploit locks using an implementation of compare and swap instructions.
Code should be arranged so that there are no architecture-specific subdirectories within the Linux-specific directory in the code layout. Why? Because Linux already does a great job of segregating the architecture specifics and an application programmer typically should not have to care about which architectures the application is going to be compiled on. The aim should be to have the program written for a particular architecture to compile on another architecture with minimal effort and minimum changes to the code, the code layout, and the makefiles. By avoiding architecture-specific subdirectories within the linux directory, makefiles are greatly simplified.
Source files in the linux subdirectory could have a code layout with preprocessor directives as follows:
#ifdef CMP_x86
<x86 specific code>
#elif CMP_PSERIES
<p-series specific code>
#else
#error No code for this architecture in __FILE__
#endif
Decide on a viable operating environment
Key to the planning process is determining to which distribution of Linux the application is to be ported. You should make sure that all of the required software is available for the level to which you are planning to port. For example, it may not be possible to release a middleware product for the Linux 2.6 distribution, because a key third-party database used in the most typical configuration is not available on that same distribution. The initial offering of your product or application might have to be based on a Linux 2.4 distribution instead.
It's also possible that some of the software with which the application interacts may not be available for all distributions or architectures for which the application is intended. Make a careful study of the viability of your chosen operating environment.
Another issue to consider is whether the application is going to be 32- or 64-bit and whether it is going to coexist with other third-party software that could also operate in a 32- or 64-bit mode.
Architecture-specific changes
Architecture-specific code in an application is typically limited to a few areas. I'll look at some examples in this section.
Determining endian-ness
The programmer does not have to worry about which architecture the code is being written for. Linux provides a way to determine the endian-ness in /usr/include/endian.h. Following is a typical code snippet you can use to determine if the operating environment is big- or little-endian; you can set a specific flag for your convenience.
/* Are we big-endian? */
#include <endian.h>
#if __BYTE_ORDER == __LITTLE_ENDIAN
#define MY_BIG_ENDIAN
#elif __BYTE_ORDER == __BIG_ENDIAN
#undef MY_BIG_ENDIAN
#endif
Determining the stack pointer
Inline assembly can be written to determine the stack pointer.
int get_stack(void **StackPtr)
{
    *StackPtr = 0;
#ifdef CMP_x86
    /* Store the current stack pointer through StackPtr */
    __asm__ __volatile__ ("movl %%esp, %0" : "=m" (*StackPtr));
#else
#error No code for this architecture in __FILE__
#endif
    return(0);
}
Implementing compare and swap
Here's an example of implementing compare and swap for the Intel architecture.
bool_t My_CompareAndSwap(IN int *ptr, IN int old, IN int new)
{
#ifdef CMP_x86
    unsigned char ret;
    /* Note that sete sets a 'byte' not the word */
    __asm__ __volatile__ (
        "  lock\n"
        "  cmpxchgl %2,%1\n"
        "  sete %0\n"
        : "=q" (ret), "=m" (*ptr)
        : "r" (new), "m" (*ptr), "a" (old)
        : "memory");
    return ret;
#else
#error No code for this architecture in __FILE__
#endif
}
Choose an IPC mechanism
Typically the choice for an interprocess communication (IPC) mechanism -- mechanisms for facilitating communications and data sharing between applications -- is between using signals, writing a loadable kernel extension, or using process-shared mutexes and condition variables.
Signals are the easiest to implement, but in a multi-threaded environment, care has to be taken that all the threads spawned have similar signal masks. The process structure should normally be modeled such that only one thread should handle the signals, or else the signal could be delivered to any of the threads and the results could be unpredictable. It is possible that threads are spawned in a process by other participating entities outside the control of the application and it might not be possible to control their signal masks. Signals might be an unpopular way of doing IPC in a large multi-threaded application for that reason. For example, applications running under an application server could spawn their own threads and could catch signals actually meant for the application server process.
Kernel extensions are not very easy to write and may not be easily portable across various architectures where Linux is supported.
With the advent of the POSIX draft 10 standard and its available implementation on Linux 2.6, process-shared mutexes (mutual exclusion object: a program that allows multiple programs to share the same resources but not simultaneously) and condition variables are a good choice for implementing the IPC mechanism in a multi-process environment. This mechanism would require setting up shared memory in which the mutexes and condition variables reside and all processes will have a common reference for these constructs.
Select the threading model
It is quite possible that some old applications being ported to Linux are based on draft 4 of pthreads. The latest versions of Linux support pthreads draft 10, so care needs to be taken to map the calls appropriately. If the application is using some exception-handling mechanisms based on a third-party implementation (for example, TRY-CATCH macros provided by DCE), then the programmer needs to make sure that the exception-handling code is also compatible with draft 10 of pthreads.
Some examples of calls that have changed from draft 4 to 10 are in the following table.
Table 1. Calls changed from draft 4 to 10 of pthreads
pthread_setcancel(CANCEL_ON)         ->  pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL)
pthread_setcancel(CANCEL_OFF)        ->  pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL)
pthread_setasynccancel(CANCEL_ON)    ->  pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL)
pthread_setasynccancel(CANCEL_OFF)   ->  pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL)
pthread_getspecific(key, value)      ->  *value = pthread_getspecific(&key)
The choice of threading model lies somewhere between native Linux threads and the Native POSIX Thread Library (NPTL) implementation, which provides a POSIX-compliant implementation for the threads in Linux. The Linux kernel (from 2.5 version onwards) has been modified to provide POSIX-compliant support. NPTL is available on SLES9. Red Hat has backported NPTL support for RHEL3 (which is based on the Linux 2.4 kernel). RHEL 3 has support for both NPTL and native Linux threads. You can switch to native Linux threads by setting an environment variable LD_ASSUME_KERNEL=2.4.1, but not many vendors have ported their software on RHEL3 using NPTL support.
Major drawbacks to using native Linux threads are the following:
getpid() returns a different value in each thread, because every thread is actually created as a separate process.
In short, each thread looks like (and in some ways behaves like) a separate process.
There are also problems on the kernel side:
On the other hand, with NPTL:
Based on the improvements provided by various distributions, it would generally be advisable to use the version of Linux that supports NPTL.
File system, usage parameters, stacks
My group discovered a number of diverse things during its porting activities, and because they are relatively concise, I've gathered them together here.
Support for the file system
If your application needs to use facilities such as logging and writing data files, file system-based support is easier to install, configure, and administer compared with the raw I/O.
System-usage parameters
Direct system calls to gather information about parameters (like memory heap usage) don't seem to exist. /proc filesystem support needs to be used to determine such parameters.
Stackwalk
Currently, support for calls like pstack is available only on the Intel architecture; it is in the process of being developed on other architectures. To get stack traces programmatically, programmers might have to implement their own versions using the ABI definitions for the yet-to-be-supported architectures.
Another option is to use gdb-based scripts to get the stack information. The stack information is usually required for serviceability of the product. gdb is more standardized across different architectures and distributions.
Memory maps and using shared memory segments
If the application uses shared memory segments, care has to be taken to place the starting addresses of the shared memory segments appropriately unless the user wants to rely on the system-provided initial addresses. Also, different architectures will have different memory-map support; the areas available for shared memory could be different.
For example, on Intel every process has the bottom three-quarters of the address space allocated to user land; the top piece is allocated to the kernel. This means that the total memory that any Linux process could ever have is 2 GB (390) or 3 GB (Intel). This total has to include the text, data, and stack segments, plus all shared memory regions. On the Linux/390, the area for shared memory starts at 0x50000000 and must end before 0x7fffa000. You have to consider all architectures before deciding on the addresses if you want to keep the starting addresses common for all the architectures that are going to be supported by the application.
Signaling
Signaling -- sending control signals that start and stop a transmission or other operation -- is not much different on Linux as compared with other Unix platforms except that the signal numbers might differ or some of the signals, such as SIGEMT, are not available on some distributions (such as RHEL AS 3). (For more details on differences in signaling between Solaris and Linux, see the reference in the Resources section.)
Configure kernel parameters
The programmer might be required to tune some of the kernel parameters so that the application can scale at runtime. If that's the case, some of the important kernel parameters to consider are threads-max (maximum threads per process), shmmax, msgmax, and so on. The list of configurable parameters is available in /proc/sys/kernel. The parameters can be configured using the /sbin/sysctl utility. The threads-max parameter can be particularly important if you are porting a large multi-threaded application.
Parser tools like lex/yacc
Be prepared that some portions of your grammar written on AIX or Solaris might not work directly on Linux. For example, some variables like yylineno (an undocumented lex scanner internal variable) might not be directly available on Linux by default. The following code snippet can be used to check if yylineno is directly supported or not. Open a file named a.l with following contents:
%{
%}
%%
%%
Then enter lex a.l. Search for "yylineno" in lex.yy.c. If the variable is not available, two possible solutions to supporting yylineno are to use the -l option for lex in Linux (in other words, do lex -l a.l) or change the code to the following:
%{
%}
%option yylineno
%%
%%
Some distributions (such as SLES 9) do not come packaged with yacc but come packaged with bison by default. If the requirement is for yacc, it may need to be downloaded.
Globalization issues
It is possible that some of the code pages are named differently on Linux. For example, IBM-850 on AIX could be aliased to ibm850 on Linux, and ISO8859-1 could be aliased to ISO-8859-1. Scripts might have to be changed if the application-message catalogues depend on some of these code pages and code-page conversions are required (which can be accomplished by using the iconv tool). Most of the common locales, like ja_JP, en_US, etc., are available on Linux.
Security concerns
Communication over sockets is protected by default in the new distributions (RHEL AS 3), so if you are implementing an IP-based server kind of process listening on a particular port, you will need to add the new service into iptables. Iptables are used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel.
For example, for the first time you might have to add a new chain like /sbin/iptables -N RH-Firewall-1-INPUT and then add the new service in the chain like so: iptables -I RH-Firewall-1-INPUT -s 0/0 -i eth0 -m state --state NEW -p TCP --dport 60030 -j ACCEPT (where the new destination port 60030 is mapped to a service in /etc/services).
Locating installed packages and variable data
It is a good idea to follow the packaging recommendations produced by the Linux community -- the recommendations help prevent the cluttering of /opt by code and /var by application data. These recommendations suggest including the vendor and package name in the location of the code and data. As an illustration, consider the following example.
Suppose IBM develops a new application called "abc." The package should ideally be installed in /opt/ibm/abc. The related data should be located in /var/opt/ibm/abc and not simply /var.
Testing
Porting a new product to a new platform requires that the product be tested extensively. Some of the areas that can require special attention are interprocess communication, packaging, intersystem communication (client-server between AIX and Linux or Solaris and Linux), persistent storage (because of endian-ness), data-exchange formats, and so on.
It is possible that external documentation might change, so a thorough documentation review should be carried out.
The test-case porting to Linux needs to be staged properly along with the development work. A list of intermediate deliverables to be tested should be prepared before moving on to full product testing. This will help you find problems at earlier stages of product development. (For more on highly effective general testing methods, see reference to XP in the Resources section.)
There's a port in every storm
In this article I have touched upon various stages of porting, including design choices for various OS-specific areas, creating a suitable directory structure, creating a build system, making the code changes, and testing. I've highlighted those areas where effort needs to be concentrated, areas such as signaling, shared memory, mutexes and condition variables, threading, and architecture-specific changes. This article is based on real experience gained while porting a large multi-threaded application onto a Linux 2.6-based system, so I hope this checklist can be helpful in saving you time and effort.
The details will change with each port, but the principles I've outlined (combined with the material in the referenced below) will allow you to go through the process more easily.
Resources
About the author
Ajay Sood has been with IBM for more than eight years and has been a developer on such product-development team efforts as DB2-DataLinks and TXSeries-CICS. He has experience in transaction processing, middleware, and system programming on UNIX platforms.
By Chris Sells, Ian Griffiths
// Microsoft (R) Visual C# 2005 Compiler version 8.00.50215.44 for Microsoft (R) Windows (R) 2005 Framework version 2.0.50215 Copyright (C) Microsoft Corporation 2001-2005. All rights reserved.
The build references the three WPF assemblies (WindowsBase, PresentationCore, and PresentationFramework), along with the core .NET System assembly, and compiles the MyApp.cs source file.
NavigationApplication in Example 1-14 serves as the base of your custom application class instead of the Application class.
// MyApp.xaml.cs
using System;
using System.Windows;
using System.Windows.Navigation;

namespace MyNavApp {
    public partial class MyApp : NavigationApplication {}
}
NavigationApplication itself derives from the Application class and provides additional services such as navigation, history, and tracking the initial page to show when the application first starts, which is specified in the application's XAML file, as in Example 1-15.
<!-- MyApp.xaml -->
<NavigationApplication x:Class="MyNavApp.MyApp"
    xmlns=""
    xmlns:x=""
A Button is a control, providing content and behavior, and a Window is a container. There are two things that may surprise you about content containment in WPF, however.
<Window ...>
  <Button Width="100" Height="100">Hi</Button>
</Window>
<Window ...>
  <Button Width="100" Height="100">
    <Image Source="tom.png" />
  </Button>
</Window>
The content can even be a TextBox, as shown in Figure 1-11 and implemented in Example 1-21.
<Window ...>
  <Button Width="100" Height="100">
    <TextBox Width="75">edit me</TextBox>
  </Button>
</Window>
With both the TextBlock and the Image as content for the Button, we don't really have enough information to place them inside the area of the button. Should they be stacked left-to-right or top-to-bottom? Should one be docked on one edge and one docked to the other? How are things stretched or arranged if the button resizes? These are questions best answered with a panel.
The sample calls for two TextBox controls, one for the name and one for the nickname; the actual nickname entries in a ListBox in the middle; and a Button to add new entries. The core data of such an application could easily be built with a class, as shown in Example 1-27.
public class Nickname : INotifyPropertyChanged {
    // INotifyPropertyChanged member
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propName) {
        if( PropertyChanged != null ) {
            PropertyChanged(this, new PropertyChangedEventArgs(propName));
        }
    }

    string name;
    public string Name {
        get { return name; }
        set {
            name = value;
            OnPropertyChanged("Name"); // notify consumers
        }
    }

    private string nick;
    public string Nick {
        get { return nick; }
        set {
            nick = value;
            OnPropertyChanged("Nick"); // notify consumers
        }
    }

    public Nickname() : this("name", "nick") { }
    public Nickname(string name, string nick) {
        this.name = name;
        this.nick = nick;
    }
}
The Nickname class implements the INotifyPropertyChanged interface to let consumers of this data know when it has changed.
Nicknameobject made its data available via standard .NET properties, we need something special to support data binding on the target element. While the
TextContentproperty of the
TextBlockelement is exposed with a standard property wrapper, for it to integrate with WPF services such as data binding, styling and animation, it also needs to be a dependency property . A dependency property provides several features not present in .NET properties, including the ability to inherit its value from a container element, support externally set defaults, provide for object-independent storage (providing a potentially huge memory savings), and change tracking.
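The change-notification half of this pattern is not WPF-specific. For readers coming from Java, roughly the same observer idea lives in the standard java.beans package; the sketch below is an analogy to INotifyPropertyChanged only (property names mirror the Nickname class above; WPF's dependency properties add value inheritance, defaults, and storage optimizations on top of plain notification):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Java analogy to the Nickname class above: setters fire change
// events so bound consumers can react when the data changes.
public class Nickname {
    private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    private String name = "name";
    private String nick = "nick";

    public void addPropertyChangeListener(PropertyChangeListener l) {
        changes.addPropertyChangeListener(l);
    }

    public void setName(String value) {
        String old = name;
        name = value;
        changes.firePropertyChange("Name", old, value); // notify consumers
    }

    public void setNick(String value) {
        String old = nick;
        nick = value;
        changes.firePropertyChange("Nick", old, value); // notify consumers
    }

    public static void main(String[] args) {
        Nickname n = new Nickname();
        n.addPropertyChangeListener(
            e -> System.out.println(e.getPropertyName() + " -> " + e.getNewValue()));
        n.setName("Ada");   // prints: Name -> Ada
        n.setNick("Addy");  // prints: Nick -> Addy
    }
}
```

A data-binding engine is essentially a listener registered this way, pushing new values into the UI.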
Nickname objects in XAML in Example 1-32.
<!-- Window1.xaml -->
<?Mapping XmlNamespace="local" ClrNamespace="DataBindingDemo" ?>
<Window x:
  <Window.Resources>
    <local:Nicknames x:
      <local:Nickname Name="..." Nick="..." />
      <local:Nickname Name="..." Nick="..." />
      <local:Nickname Name="..." Nick="..." />
    </local:Nicknames>
  </Window.Resources>
  <DockPanel DataContext="{StaticResource names}">
    <StackPanel DockPanel.
      <TextBlock VerticalAlignment="Center">Name: </TextBlock>
      <TextBox Text="{Binding Path=Name}" />
      <TextBlock VerticalAlignment="Center">Nick: </TextBlock>
      <TextBox Text="{Binding Path=Nick}" />
    </StackPanel>
    ...
  </DockPanel>
</Window>
Window.Resources, which is property-element syntax to set the
Resources property of the
Window1 class. Here, we can add as many named objects as we like, with the name coming from the
Key attribute and the object coming from the XAML elements (remember that XAML elements are just a mapping to .NET class names). In this example, we're creating a Nicknames collection named
names to hold three
Nickname objects, each constructed with the default constructor, and then setting each of the
Name and
Nick properties.
TextBlock controls from our
Nickname sample, each of which was set to the same
VerticalAlignment (see Example 1-35).
<!-- Window1.xaml --> <Window ...> <DockPanel ...> <StackPanel ...> <TextBlock VerticalAlignment="Center">Name: </TextBlock> <TextBox Text="{Binding Path=Name}" /> <TextBlock VerticalAlignment="Center">Nick: </TextBlock> <TextBox Text="{Binding Path=Nick}" /> </StackPanel> ... </DockPanel> </Window>
VerticalAlignment setting into a style, we could do this with a
Style element in a
Resources block, as shown in Example 1-36.
<Window ...>
  <Window.Resources>
    <Style x:Key="myStyle" TargetType="{x:Type TextBlock}">
      <Setter Property="VerticalAlignment" Value="Center" />
      <Setter Property="FontWeight" Value="Bold" />
      <Setter Property="FontStyle" Value="Italic" />
    </Style>
  </Window.Resources>
  <DockPanel ...>
    <StackPanel ...>
      <TextBlock Style="{StaticResource myStyle}">Name: </TextBlock>
      <TextBox Text="{Binding Path=Name}" />
      <TextBlock Style="{StaticResource myStyle}">Nick: </TextBlock>
      <TextBox Text="{Binding Path=Nick}" />
    </StackPanel>
    ...
  </DockPanel>
</Window>
Setter elements for a specific class (specified with the
Type markup extension). The
TextBlock myStyle style centers the vertical alignment property and, just for fun, sets the text to bold italic as well. With the style in place, it can be used to set the
Style property of any
TextBlock that references the style resource. Applying this style as in Example 1-36 yields Figure 1-18.
<Button>
  <Button.LayoutTransform>
    <ScaleTransform ScaleX="3" ScaleY="3" />
  </Button.LayoutTransform>
  <StackPanel Orientation="Horizontal">
    <Canvas Width="20" Height="18" VerticalAlignment="Center">
      <Polyline Points="0,0 0,18 13,10" Stroke="Black" />
    </Canvas>
    <TextBlock VerticalAlignment="Center">Click!</TextBlock>
  </StackPanel>
</Button>
LayoutTransform property on the button, produces Figure 1-21.
Window or
Page objects you may have, are most often split between a declarative XAML file for the look and an imperative code file for the behavior. Your applications can be normal, like a standard Windows application, or navigation-based, like the browser. In fact, the latter can be integrated into the browser, and both can be deployed and kept up to date over the Web using ClickOnce.
DockPanel is useful for describing the overall layout of a simple user interface. You can carve up the basic structure of your window using a
DockPanel and then use the other panels to manage the details.
DockPanel arranges each child element so that it fills a particular edge of the panel. If multiple children are docked to the same edge, they simply stack up against that edge in order. You may also optionally specify that the final child fills any remaining space not occupied by controls docked to the panel's edges. Although this may sound like a small feature set, the
DockPanel provides a surprisingly flexible way of laying out elements, particularly as
DockPanels can be nested.
DockPanel-based layout. Five buttons have been added to illustrate each of the options. Notice that four of them have a
DockPanel.Dock attribute applied. This attached property is defined by
DockPanel to allow elements inside a
DockPanel to specify their position.
<DockPanel LastChildFill="True">
  <Button DockPanel.Dock="Top">Top</Button>
  <Button DockPanel.Dock="Bottom">Bottom</Button>
  <Button DockPanel.Dock="Left">Left</Button>
  <Button DockPanel.Dock="Right">Right</Button>
  <Button>Fill</Button>
</DockPanel>
StackPanel is a very simple panel. It simply arranges its children in a row or a column. We've seen it once already in Example 2-1, but that was a somewhat unrealistic example. You will rarely use
StackPanel to lay out your whole user interface. It is at its most useful for small-scale layout—you use
DockPanel or
Grid to define the overall structure of your user interface, and then
StackPanel to manage the details.
DockPanel in Example 2-4 for the basic layout of our documentation viewer. We will now use
StackPanel to arrange the contents of the search panel on the left-hand side. The markup in Example 2-5 replaces the placeholder
TextBlock that contained the text "Search panel goes here."
<StackPanel DockPanel.Dock="Left">
  <TextBlock>Look for:</TextBlock>
  <ComboBox />
  <TextBlock>Filtered by:</TextBlock>
  <ComboBox />
  <Button>Search</Button>
  <CheckBox>Search in titles only</CheckBox>
  <CheckBox>Match related words</CheckBox>
  <CheckBox>Search in previous results</CheckBox>
  <CheckBox>Highlight search hits (in topics)</CheckBox>
</StackPanel>
Margin property, which is present on all WPF elements. It indicates the amount of space that should be left around the edges of the element when it is laid out. The
StackPanel is difficult, because it is not designed with two-dimensional alignment in mind. We could try to use nesting: Example 2-7 shows a vertical
StackPanel with three rows, each with a horizontal
StackPanel.
<StackPanel Orientation="Vertical"> <StackPanel Orientation="Horizontal"> <TextBlock>Protocol:</TextBlock> <TextBlock>Unknown Protocol</TextBlock> </StackPanel> <StackPanel Orientation="Horizontal"> <TextBlock>Type:</TextBlock> <TextBlock>Not available</TextBlock> </StackPanel> <StackPanel Orientation="Horizontal"> <TextBlock>Connection:</TextBlock> <TextBlock>Not encrypted</TextBlock> </StackPanel> </StackPanel>
Grid panel solves this problem. Rather than working on a single row or a single column at a time, it aligns all elements into a grid that covers the whole area of the panel. This allows consistent positioning from one row to the next. Example 2-8 shows the same elements as Example 2-7, but arranged with a
Grid rather than
StackPanels.
DockPanel,
StackPanel, or
Grid will not enable the look you require, and it will be necessary to take complete control of the precise positioning of every element. For example, when you want to build an image out of graphical elements, the positioning of the elements is dictated by the picture you are creating, not by any set of automated layout rules. For these scenarios, you will want to use the
Canvas.
Canvas is the simplest of the panels. It allows the location of child elements to be specified precisely relative to the edges of the canvas. The
Canvas doesn't really do any layout at all—it simply puts things where you tell it to.
Canvas will seem familiar and natural. However, it is strongly recommended that you avoid it unless you really need this absolute control. The automatic layout provided by the other panels will make your life very much easier, because they can adapt to changes in text and font. They also make it far simpler to produce resizable user interfaces. Moreover, localization tends to be much easier with resizable user interfaces, because different languages tend to produce strings with substantially different lengths. Don't opt for the
Canvas simply because it seems familiar.
Canvas, you must specify the location of each child element. If you don't, all of your elements will end up at the top left-hand corner.
Canvas defines four attached properties for setting the position of child elements. Vertical position is set with either the
Top or
Bottom property, and horizontal position is determined by either the
Left or
Right property, an approach that is illustrated in Example 2-16.
<Canvas Background="Yellow">
  <TextBlock Canvas.Left="10" Canvas.Top="10">Hello</TextBlock>
  <TextBlock Canvas.Right="10" Canvas.Bottom="10">world!</TextBlock>
</Canvas>
Viewbox element automatically scales its content to fill the space available.
Viewbox is not strictly speaking a panel—it derives from
Decorator. This means that, unlike most panels, it can only have one child. However, its ability to adjust the size of its content in order to adapt to its surroundings makes it a useful layout tool.
Viewbox but probably should. The window's content is a
Canvas containing a rather small drawing. The markup is shown in Example 2-17.
<Window xmlns=""> <Canvas Width= 90 0 0 13,10" Stroke="Black" /> </Canvas> </Window>
Viewbox to resize the content automatically. The
Viewbox will expand it to be large enough to fill the space, as shown in Figure 2-25. (If you're wondering why the drawing doesn't touch the edges of the window, it's because the
Canvas is slightly larger than the drawing it contains.)
Canvas element in a
Viewbox element, as is done in Example 2-18.
<Window xmlns=""> <Viewbox> <Canvas Width="20" Height="18" VerticalAlignment="Center"> ...as before </Canvas> </Viewbox> </Window>
TextBlock, which is efficient but has limited functionality.
TextFlow offers more advanced typography and layout functionality but with slightly more overhead.
TextBlock is useful for putting short blocks of text into a user interface. If you don't require any special formatting of the text, you can simply wrap your text in a
TextBlock:
<TextBlock>Hello, world!</TextBlock>
TextBlock is the simplest text-handling element, it is capable of a little more than this example shows. For example, it has a set of properties for controlling the font, shown in Table 2-2.
FrameworkElement base class. We have seen a few of these in passing in the preceding section, but we will now look at them all in a little more detail.
Width or
Height, the layout system will always attempt to honor your choices. Of course, if you make an element wider than the screen, WPF can't make the screen any wider, but as long as what you request is possible, it will be done.
Width and
Height where possible. By specifying upper and lower limits, you can still allow WPF some latitude to automate the layout.
MinWidth of 10000, WPF won't be able to honor that request unless you have some very exotic display hardware. In these cases, your element will be truncated to fit the space available.
StackPanel will be as wide as the widest element, meaning that any narrower elements are given excess space. A
DockPanel will provide enough space for an element to fill an edge or all the remaining space. Alignment is for these sorts of scenarios, enabling you to determine what the child element does with the extra space.
StackPanel with a
Height of 100, which contains a
Button with a
Height of 195.
<StackPanel Height="100" Background="Yellow" Orientation="Horizontal"> <Button>Foo</Button> <Button Height="30">Bar</Button> <Button Height="195">Spong</Button> </StackPanel>
StackPanel has dealt with the anomaly by truncating the element that was too large. When confronted with contradictory hardcoded sizes like these, most panels take a similar approach and will crop content where it simply cannot fit.
TextBlock and its content into a
StackPanel, a
DockPanel, and a
Grid cell.
<Grid>
  <Grid.RowDefinitions>
    <RowDefinition />
    <RowDefinition />
    <RowDefinition />
  </Grid.RowDefinitions>
  <StackPanel Height="100" Background="Yellow" Orientation="Horizontal">
    <TextBlock TextWrap="Wrap" FontSize="20">
      This is some text that is too long to fit.
    </TextBlock>
  </StackPanel>
  <DockPanel Grid.Row="1">
    <TextBlock TextWrap="Wrap" FontSize="20">
      This is some text that is too long to fit.
    </TextBlock>
  </DockPanel>
  <TextBlock Grid.Row="2" TextWrap="Wrap" FontSize="20">
    This is some text that is too long to fit.
  </TextBlock>
</Grid>
ScrollViewer.) Moreover, even when there is enough space onscreen, your panel's parent could still choose not to give it to you. For example, if your custom panel is nested inside a
Grid, the
Grid may well have been set up with a hardcoded width for the column your panel occupies, in which case that's the width you'll get regardless of what you asked for during the measure phase.

Source: http://www.oreilly.com/catalog/9780596101138/toc.html
#include <action_handler.hh>
Inheritance diagram for mpcl::automaton::TActionHandler< TState >:
Action Handler is the part of a Deterministic Finite Automaton (DFA) that executes the list of actions of a given state. Every individual action has a 'reaction', or error-handling, function.
A DFA that lacks a list of actions behaves as a classic, pattern recognizing DFA.
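To make the description above concrete, here is a minimal sketch of the idea transliterated to Java. All names here are invented for illustration (the real TActionHandler is a C++ template parameterized on TState): each state carries a list of actions run on entry, every action is paired with a 'reaction' that handles its failure, and with no actions registered the machine degenerates into a plain pattern-recognizing DFA.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class ActionDfa {
    // One action plus its 'reaction' (error handler).
    static class Action {
        final Runnable act;
        final Consumer<RuntimeException> reaction;
        Action(Runnable act, Consumer<RuntimeException> reaction) {
            this.act = act;
            this.reaction = reaction;
        }
    }

    final Map<Integer, List<Action>> actions = new HashMap<>();
    final Map<Integer, Map<Character, Integer>> delta = new HashMap<>();

    // Execute the action list of a state; failures go to the reaction.
    void enter(int state) {
        for (Action a : actions.getOrDefault(state, List.of())) {
            try {
                a.act.run();
            } catch (RuntimeException e) {
                a.reaction.accept(e);
            }
        }
    }

    int run(int start, String input) {
        int state = start;
        enter(state);
        for (char c : input.toCharArray()) {
            Integer next = delta.getOrDefault(state, Map.of()).get(c);
            if (next == null) return -1;  // no transition: reject
            state = next;
            enter(state);
        }
        return state;
    }

    public static void main(String[] args) {
        ActionDfa d = new ActionDfa();
        d.delta.put(0, Map.of('a', 1));
        d.delta.put(1, Map.of('b', 2));
        d.actions.put(2, List.of(new Action(
            () -> System.out.println("accepted"),
            e -> System.out.println("reaction: " + e.getMessage()))));
        d.run(0, "ab");  // entering state 2 runs its action list
    }
}
```

Leaving every action list empty turns this into the classic recognize-only automaton the note describes.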
Definition at line 62 of file action_handler.hh.

Source: http://www.uesqlc.org/doc/mpcl/classmpcl_1_1automaton_1_1_t_action_handler.html
System.Object. The CTS supports the general concept of classes, interfaces, and delegates (which support callbacks).
class Hello
{
    static void Main( )
    {
        // Use the system console object
        System.Console.WriteLine("Hello World");
    }
}
& operator). Also, pointers aren't normally used (but see Chapter 22 for the exception to this rule).
x and
y are variables. Variables can have values assigned to them, and those values can be changed programmatically.
#region Using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion

namespace InitializingVariables
{
    class Values
    {
        static void Main( )
        {
            int myInt = 7;
            System.Console.WriteLine("Initialized, myInt: {0}", myInt);
            myInt = 5;
            System.Console.WriteLine("After assignment, myInt: {0}", myInt);
        }
    }
}

Output:
Initialized, myInt: 7
After assignment, myInt: 5
myVariable = 57;
This statement assigns the value 57 to the variable myVariable. The assignment operator (=) doesn't test equality; rather, it causes whatever is on the right side (57) to be assigned to whatever is on the left side (myVariable). All the C# operators (including assignment and equality) are discussed later in this chapter (see "Operators").
#region Using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion

namespace CallingAMethod
{
    class CallingAMethod
    {
        static void Main( )
        {
            Console.WriteLine("In Main! Calling SomeMethod( )...");
            SomeMethod( );
            Console.WriteLine("Back in Main( ).");
        }

        static void SomeMethod( )
        {
            Console.WriteLine("Greetings from SomeMethod!");
        }
    }
}
The intrinsic types (such as int) support a number of operators such as assignment, increment, and so forth.
The addition (+), subtraction (-), multiplication (*), and division (/) operators work as you might expect, with the possible exception of integer division.
Dividing 17 by 4 as integers yields 4 (17/4 = 4, with a remainder of 1). C# provides a special operator (modulus, %, which is described in the next section) to retrieve the remainder.
To find the remainder in integer division, use the modulus operator (%). For example, the statement 17%4 returns 1 (the remainder after integer division).
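The integer operators described above behave exactly the same way in the other C-family languages, so here is a quick runnable check (shown in Java; the C# semantics for these two expressions are identical):

```java
public class IntegerDivision {
    public static void main(String[] args) {
        // Integer division truncates: 17 divided by 4 yields 4...
        System.out.println("17/4 = " + (17 / 4));

        // ...and modulus retrieves the remainder.
        System.out.println("17%4 = " + (17 % 4));

        // Quotient and remainder together reconstruct the dividend.
        System.out.println("check = " + (4 * (17 / 4) + (17 % 4)));
    }
}
```

Together the two operators satisfy the identity a == b*(a/b) + (a%b) for nonzero b.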
[attributes] [access-modifiers] class identifier [:base-class [,interface(s)]] {class-body}
public as an access modifier.) The
identifier is the name of the class that you provide. The optional
base-class is discussed in Chapter 5. The member definitions that make up the
class-body are enclosed by open and closed curly braces ({}).
The variable t doesn't actually contain the value for the
Time object; it contains the address of that (unnamed) object, which is created on the heap.
t itself is just a reference to that object.
Although VB6 imposed a performance penalty when Dim and New were combined on the same line, in C# this penalty has been removed. Thus, in C# there is no drawback to using the
new keyword when declaring an object variable.
The syntax for creating a Time object looks as though it is invoking a method:
Time t = new Time();
A constructor is a method with the same name as its type. The
Time class of Example 4-1 doesn't define a constructor. If a constructor is not declared, the compiler provides one for you. The default constructor creates the object but takes no other action.
Don't confuse the static keyword in C# with the Static keyword in VB6 and VB.NET. In VB, the
Static keyword declares a variable that is available only to the method it was declared in. In other words, the
Static variable is not shared among different objects of its class (i.e., each
Static variable instance has its own value). However, this variable exists for the life of the program, which allows its value to persist from one method call to another.
In C#, the static keyword indicates a class member. In VB, the equivalent keyword is
Shared.
~MyClass( ){ }
Under the hood, this destructor is converted into a Finalize( ) method that chains up to its base class. Thus, when you write a destructor, the C# compiler translates it to:

protected override void Finalize( )
{
    try
    {
        // destructor body
    }
    finally
    {
        base.Finalize( );
    }
}
A method can return only a single value, such as an int (integer). To return multiple values, use reference parameters instead.
#region Using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion

namespace ReturningValuesInParams
{
    public class Time
    {
        // private member variables
        private int Year;
        private int Month;
        private int Date;
        private int Hour;
        private int Minute;
        private int Second;

        // public accessor methods
        public void DisplayCurrentTime( )
        {
            System.

that takes a DateTime object, and the other that takes six integers.
#region Using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion

namespace OverloadedConstructor
{
#region Using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion

namespace UsingAProperty
{
Time class that is responsible for providing public static values representing the current time and date. Example 4-12 illustrates a simple approach to this problem.
#region Using directives
using System;
using System.Collections.Generic;
using System.Text;
#endregion

namespace StaticPublicConstants
{
RightNow.Year value can be changed, for example, to
2006. This is clearly not what we'd like.
readonly for exactly this purpose. If you change the class member variable declarations as follows:
public static readonly int Year;
public static readonly int Month;
public static readonly int Date;
public static readonly int Hour;
public static readonly int Minute;
public static readonly int Second;

Source: http://www.oreilly.com/catalog/9780596006990/toc.html
compiler error with import
I am trying to compile this code using JDK 1.4.2_11. It compiled successfully with 1.3.1_17, but with 1.4.2_11 I get a compiler error on the import IVServ and import ServletIncompleteException statements below. Both classes are located in the same directory as the source code.
Do I need to bundle those referred classes in a package?
package test.ip.server;
import java.io.*;
import java.util.*;
import java.net.*;
import IVServ;
import ServletIncompleteException;

Source: http://www.java-index.com/java-technologies-archive/512/java-compiler-5125853.shtm
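A likely cause, given that the same code compiled under 1.3 but not 1.4: starting with JDK 1.4, javac no longer accepts an import of a class that lives in the unnamed (default) package, which is exactly what import IVServ; is once IVServ.java sits beside the source with no package line. So yes, the helper classes need a package. A sketch of the usual fix, reusing the poster's own package name (the class body is elided, not invented):

```java
// test/ip/server/IVServ.java
// (ServletIncompleteException.java gets the same treatment)
package test.ip.server;   // previously: no package declaration at all

public class IVServ {
    // ... existing body unchanged ...
}
```

With both helpers moved into the caller's own package, the two import lines can simply be deleted; classes in the same package are visible without any import. Alternatively, put them in some other named package and import them by that fully qualified name.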
Return to where it all began
Level: Introductory
Elliotte Rusty Harold (elharo@metalab.unc.edu), Adjunct Professor, Polytechnic University
19 Dec 2006
The annual IDEAlliance XML conference took place the first week in December in Boston MA. Markup was generated, specifications were debated, and much Samuel Adams was quaffed. Looking back, a few topics stand out, including XQuery, native XML databases, the Atom Publishing Protocol, Web 2.0, and the extraction of implicit metadata from data.
Last month marked ten years since the World Wide Web Consortium (W3C) Standard Generalized Markup Language (SGML) on the Web Editorial Review Board
publicly unveiled the first draft of Extensible Markup Language (XML) 1.0 at the SGML 96 conference. In November 1996, in the same hotel, Tim Bray threw the printed 27-page XML spec into the audience from the stage, from whence it fluttered lightly down; then, he said, "If that had been the SGML spec, it would have taken out the first three rows." The point was made. Although SGML remains in production to this day, as a couple of sessions reminded attendees, the markup community rapidly moved on to XML and never looked back.
Ten years later, IDEAlliance's annual conference is still going strong, although today it's called XML 2006. It's the major North American XML conference and the largest pure XML show still running. However, many of the players remain the same. Quite a few attendees (and one keynoter) could and did
give first-hand reports of the early days to others like myself who only discovered the power of descriptive markup from XML.
The conference was smaller than in previous years (as almost all conferences are, post-dotcom implosion) -- about 400 people. However, repeat attendees reported that this was the most exciting and active iteration in several years. Despite running four concurrent tracks over three days, limiting speakers to somewhere between 400 seconds (for the least interesting subjects) and 45 minutes (for the most interesting subjects), and not covering speakers' travel expenses, the referees still had to choose from about four times as many submissions as they had slots for. For the six late-breaking sessions, that ratio was more like 10:1. It looks like the XML world is accelerating once again.
In addition to the final emergence from the post dot-bomb malaise and the possible expansion of Bubble 2.0, several factors converged to make this one of the most interesting XML conferences since the late 90s:
XQuery
Without a doubt, XQuery was the big story of this year's show.
That the W3C XQuery and Extensible Stylesheet Language Transformation (XSLT) working groups released eight proposed recommendations two weeks before the conference (after years of development) didn't hurt. Barring any last-minute spec bugs, the final recommendations are expected to be released within weeks, not months. There are already over a dozen mostly conformant implementations of the specs, and four of them were represented at the show: IBM® DB2® 9, Oracle Database 10g Release 2, Mark Logic, and DataDirect XQuery.
On Wednesday morning, Darin McBeath of Reed Elsevier gave a keynote address titled "Unleashing the Power of XML" in which he described numerous cases of publishers like Oxford University Press, O'Reilly, and even JetBlue implementing successful XQuery projects on top of either hybrid or pure XML databases.
Hybrid XML databases are those such as DB2 and Oracle that support both Structured Query Language (SQL) and XQuery. Native XML databases are those such as Mark Logic that support only XQuery.
I talked to several database vendors on the exhibit floor and attended several more XQuery presentations to try to make some sense of this. In brief, hybrid databases are still composed of relational tables. However, fields aren't limited to the usual SQL types like INT and DATE.
They can also be declared to have type XML. An XML-type field contains a complete, well-formed, optionally schema-valid document (or null). You can select, insert, and update the XML values using SQL statements with embedded XQuery subqueries. For example, the code in Listing 1 inserts an Extensible Hypertext Markup Language (XHTML) formatted comment into the comments table. This table has two fields, a CHAR(16) username and an XML comment.
Listing 1: Inserting XML into a hybrid SQL-XML database
INSERT INTO comments (username, comment)
VALUES ("FP",
"<div xmlns='' class='comment'>
<p>
The relational model rules <strong>supreme</strong>.
It can do anything an XML model can do. Unfortunately, no one's ever
listened to me and implemented a true relational database.
</p>
<p>
I will now go sulk in my corner until the world accepts Codd
and the <a href=''>12 commandments</a>.
</p>
</div>");
The database parses the XML data before insertion and stores it in a form that's amenable to searching and querying without having to reparse the data for every query. Indexes can even be defined on particular paths in the XML trees to improve performance.
To SQL, this data looks like one CLOB. However, XQuery expressions can exploit the structure of the XML. For example, Listing 2 shows a query that extracts img elements from the comments table.
Listing 2: An XQuery select
XQUERY
declare namespace html = "http://www.w3.org/1999/xhtml";
for $img in db2-fn:xmlcolumn('comments.comment')//html:img
return $img;
Some parts of this process are standardized in either the XQuery specs, JSR 225: XQuery API for Java, or SQL/XML. A few pieces are still in development, notably updates and full-text search. However, many pieces remain deliberately unspecified. Consequently, most applications will use some vendor-specific code. Listing 2 uses the DB2-specific function xmlcolumn to find the right field.
All of this violates the relational model in about half a dozen different ways. On the other hand, no major production database has ever fully implemented the relational model anyway, the cries of the relational purists not withstanding. Normalization is also left by the wayside (as it usually is in any large database in which performance is a major concern).
Despite the ideological impurity, this is an incredibly useful way to organize data. In particular, publishing and Web applications that need to store large documents rather than small fields and unmarked-up strings should see major benefits from this approach. The current strategy of storing marked-up text in BLOBs, CLOBs, and VARCHARs isn't nearly as natural or efficient. Shredding the document into individual nodes to be stored in separate records is even nastier. Allowing the document to be stored as a single unit in one field in one record while still letting it be searched with XQuery fits the structure of most traditional and Web publishing applications much more neatly.
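To make the division of labor concrete, here is a minimal sketch of what the insert in Listing 1 looks like from application code, with the XML value bound as an ordinary string parameter and parsed by the database on the way in. The table and column names are carried over from Listing 1; the JDBC URL is a placeholder:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertCommentSketch {
    // Same statement shape as Listing 1, parameterized. The XML travels
    // as a string; the database parses it into its XML storage format.
    static final String SQL =
        "INSERT INTO comments (username, comment) VALUES (?, ?)";

    static String comment() {
        return "<div xmlns='http://www.w3.org/1999/xhtml' class='comment'>"
             + "<p>Stored as a parsed XML value, not an opaque CLOB.</p>"
             + "</div>";
    }

    public static void main(String[] args) throws SQLException {
        System.out.println(SQL);
        System.out.println(comment());
        // Against a live database it would run like this (URL is a
        // placeholder; driver registration omitted):
        // Connection con = DriverManager.getConnection("jdbc:db2:SAMPLE");
        // PreparedStatement st = con.prepareStatement(SQL);
        // st.setString(1, "FP");
        // st.setString(2, comment());
        // st.executeUpdate();
        // con.close();
    }
}
```

The point of the sketch is only that nothing exotic happens on the client side; the XML-awareness lives in the database.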
Authoring
As Jon Bosak reminded listeners in the closing keynote, we still struggle with issues that go back to the SGML days and even earlier. Not surprisingly, a lot of these problems are people issues masquerading as technical issues.
Many of these problems revolve around authoring: Who creates the markup, and how do they create it? The three traditional approaches are:

1. Have authors work in a familiar word processor whose documents are saved as, or converted to, XML.
2. Have authors work in a dedicated structured editor that guides them through the markup.
3. Have authors type the markup directly into a text editor.
Although option 3 is my preferred method (it's how I'm writing this article, for instance), it's a little too simple to justify conference presentations and probably not appropriate for nonprogrammer end users. This year, the competition between Microsoft's Office Open XML Formats and OpenOffice's OpenDoc format (ODF) focused attention on option 1.
ODF versus OpenXML
Both Microsoft and the open-document partisans seem to believe they're involved in a struggle to the death. They see themselves as waging nothing less than a war for truth, justice, and the American way on the one hand and liberté, égalité, and fraternité on the other. Consequently, neither side distinguished itself at this show. Both could benefit from toning down the hype several notches and recognizing their own limits.
When teased out from the invective, most of the technical discussion this year focused on the Microsoft formats -- probably because they're newer, and this conference thrives on the new. The critical factor seems to be that Microsoft is dead-set on maintaining pixel-perfect compatibility with at least 10 years of
legacy Microsoft Office binary formats. This means the newly minted ECMA Open XML standard is really just an XML encoding of a legacy format. Consequently, the specification was severely constrained by the requirements of compatibility.
The result is a specification that's about 6,000 pages long. It's almost 10 times as large as the OpenDoc specification, for pretty much the same functionality. It's possibly the single largest XML specification I've ever encountered. I'm not sure even the combined family of WS-* specs matches it. This is more complete
than any similar spec Microsoft has published in the past, and it will help anyone who needs to read or generate Microsoft Office documents. However, I can't see that any other project will ever adopt this format for anything other than exchange with Microsoft Office. It's too big and too full of legacy detail. If you don't already have 10 years of legacy code that reads, writes, and displays something close to this, you have no hope of implementing it.
By contrast, the competing OpenDoc format, although originally derived from StarOffice legacy formats, is much simpler and more independent of its ancestry. It's already been adopted as the native file format by separate office suites and programs with independent code bases. I'd be surprised if anyone besides Microsoft tried that with Office Open XML and even more surprised if they succeeded. Even Microsoft itself hasn't yet been able to implement this format in Microsoft Office for the Mac -- and that's not an independent code base.
By the way: An extra demerit goes to Microsoft for naming its format Office Open XML, thereby thoroughly confusing it with OpenOffice's OpenDoc format.
DITA
Another ongoing development in the XML world is Darwin Information Typing Architecture (DITA), an XML format for modular documentation. Rather than books and articles, DITA documents are divided into topics, concepts, and tasks. Map documents indicate how the different topics are rearranged to make magazine articles, Web pages, tutorials, conference presentations, man pages, and more. Different map documents can reuse the same topics to make new and different collections of documentation. Even individual paragraphs, sentences, and words can be pointed to and transcluded into output documents.
For technical authors like myself, this sounds like a godsend. I don't have to keep rewriting or cutting and pasting the same content. After all, how many different ways can I explain a linked list? This is the writer's version of structured programming and the DRY principle (Don't Repeat Yourself). At least, that's the theory.
The reality might fail to meet expectations. Without mentioning DITA by name, Jon Bosak shot it down pretty conclusively in his closing keynote.
As he pointed out, we've been here before. He first encountered this idea in the late 1970s and was very enamored of it for a time.
The problem is that you can't cut and paste topics freely between documents. Doing so produces frankenbooks that don't have a consistent authorial voice, target audience, or flow. One of the worst problems is that when you write a topic, you don't know what you can or can't assume the reader already understands from previous chapters, because the chapters are always moving. Thus you end up either constantly repeating all prerequisites or leaving out necessary prerequisites.
Bosak noted that this technique was invented several times before, and it hasn't worked yet. Sprinkling magic XML pixie dust on a flawed concept won't make it work now. He suspects that writers like this approach because it helps them be treated like important, exciting software developers rather than lowly, boring tech writers. (It worked for him.) That's why this bad idea keeps getting reinvented as soon as its previous failure is forgotten.
The Web
In 2006, it's hard to find a technical conference or subject that doesn't involve the Web. XML 2006 was no exception. An entire track was devoted to XML and the Web. Web 2.0 themes like mashups, Asynchronous JavaScript and XML (Ajax), and user interactivity were especially prominent; but this part of the conference ranged widely from cell phones to servers, from RSS to Atom to HTML.
Atom and APP
Atom might be old hat for the markup geeks at this show, but the newer Atom Publishing Protocol (APP) is bleeding-edge enough to attract a lot of interest. APP may be a sleeper technology like XML was 10 years ago. Like XML, it's starting small with a simple use case. (For XML, the use case was putting SGML on the Web. For APP, it's posting blog entries.)
Not all the players in this space were present at this conference, but IBM was there with the Abdera library. Abdera is an Apache Incubator project that implements
APP as a Java™ class library that other applications can invoke on either the client or server side. Wrap a user interface around this, and you have a blog editor. Attach a database to the backend, and you have a content management system. Abdera looks promising; but even if it doesn't pan out, numerous other developers are working on similar libraries in many languages and environments.
REST
APP is also the poster child for Representational State Transfer (REST), the architecture on which Hypertext Transfer Protocol (HTTP) and the Web are based. REST wasn't specifically on the program, but it kept coming up. For instance, in a session on the Google Checkout API, Google evangelist Patrick Chanezon mentioned that only the earliest public APIs they designed were done with the WS-* stack in mind. These days, they try to design their APIs RESTfully. Maybe they implement a SOAP gateway somewhere for the developers who prefer those tools, but behind the scenes it's all REST.
I also noted that the distinction between WS-* and REST is blurring. REST seems to be winning in actual code, even if not yet in developers' minds. A lot of people are doing REST even when they don't have any idea that's what it's called. More than once, I saw "WS-*" or equivalent on a speaker's slides, only to find out in further conversation that all they were doing was sending plain old XML over HTTP -- not even using SOAP or Web Services Description Language (WSDL), much less the whole
family of specs that sit on top of that. I suppose I don't care what they call their designs, as long as they're doing it the right way. Web services has always been a fuzzy term. However, going forward, developers need to be aware that unqualified terms like Web services can have very different meanings to different people.
Metadata
Metadata might be another classic people problem masquerading as a technical problem. In brief, how do you get authors to create and enter reliable metadata for their content? Google has created one of the world's most effective search engines by ignoring metadata completely and focusing exclusively on the data.
The Semantic Web zealots aren't ready to give up on Resource Description Framework (RDF) yet (although topic maps were conspicuous by their absence), so several presentations focused on means of deriving RDF metadata from HTML data, relational databases, and other unannotated systems.
The most promising approach (probably because it promised the least) was a W3C effort called Gleaning Resource Descriptions from Dialects of Languages (GRDDL), presented by Harry Halpin. He's developing XSL transforms that produce RDF metadata from a variety of XML, HTML, and microformats. Ronald Reck and Ken Sall's effort to infer metadata from the CIA World Factbook, Wikipedia, and Project Gutenberg aimed for more but achieved less, in my opinion.
What was shocking about both these proposals, compared to earlier years, was that neither required any effort by or even cooperation from the original document authors. I wonder if this means the metadata community is finally coming to learn Google's lesson? If they can extract the implicit metadata embedded in the documents rather than ask authors to add explicit metadata outside the documents, they can get both more and better metadata.
Summary
XML 2006 was one of the most exciting and active conferences I've been to in several years. Every time slot had at least two and often three or four presentations I wanted to see. Hallway conversations and the exhibit floor were equally active. Even after the official end of the conference for the day, a lot of activity continued in the hotel lobby and bar. (Cold Boston weather may have contributed to this.)
Looking forward to 2007, I think XQuery and native XML databases will be very, very hot. Many large publishing businesses have already successfully implemented XQuery systems to excellent effect. Smaller publishers might want to wait until simpler, cheaper, possibly open source solutions become available; but they might not have to wait long. The biggest rumor at the conference was that principals from one of the largest pure XML database players and one of the largest hybrid XML database players have joined forces to create a new open source XQuery engine that will be competitive with the big boys.
Resources
About the author
Elliotte Rusty Harold is originally from New Orleans, to which he returns periodically in search of a decent bowl of gumbo. However, he resides in the Prospect Heights neighborhood of Brooklyn with his wife Beth, their dog Shayna, and cats Charm and Marjorie. He will be talking about Web Forms 2.0 at the SD West 2007 conference in Santa Clara in March.
Internally, JE databases are organized as BTrees. This means that most database operations (inserts, deletes, reads, and so forth) involve BTree node comparisons. This comparison most frequently occurs based on database keys, but if your database supports duplicate records then comparisons can also occur based on the database data.
By default, JE performs all such comparisons using a byte-by-byte lexicographic comparison. This mechanism works well for most data. However, in some cases you may need to specify your own comparison routine. One frequent reason for this is to perform a language-sensitive lexical ordering of string keys.
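To see why byte-by-byte ordering is not always what an application wants, consider the following sketch. It uses only the JDK, not the JE API: java.text.Collator applies locale-sensitive rules, while a raw byte comparison orders keys by their UTF-8 encodings. The compareBytes() helper and the sample strings are invented for this illustration.

```java
import java.nio.charset.Charset;
import java.text.Collator;
import java.util.Locale;

public class CollationDemo {

    // Byte-by-byte lexicographic comparison, in the spirit of JE's
    // default BTree comparison over the stored key bytes.
    public static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int ua = a[i] & 0xff;   // treat bytes as unsigned
            int ub = b[i] & 0xff;
            if (ua != ub) {
                return ua - ub;
            }
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        String accented = "c\u00f4te";  // "côte"
        String plain = "cotz";

        Charset utf8 = Charset.forName("UTF-8");
        byte[] b1 = accented.getBytes(utf8);
        byte[] b2 = plain.getBytes(utf8);

        // Byte-wise, the multibyte UTF-8 encoding of 'ô' (0xC3 0xB4)
        // sorts after every ASCII letter, so "côte" > "cotz".
        System.out.println(compareBytes(b1, b2) > 0);  // true

        // A locale-sensitive collator treats 'ô' as a variant of 'o',
        // so "côte" < "cotz" in dictionary order.
        Collator collator = Collator.getInstance(Locale.US);
        System.out.println(collator.compare(accented, plain) < 0);  // true
    }
}
```

The two orderings disagree, which is exactly the situation where supplying your own comparator to JE is worthwhile.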
You override the default comparison function by providing a Java Comparator class to the database. The Java Comparator interface requires you to implement the Comparator.compare() method (see the java.util.Comparator Javadoc for details).
JE passes your Comparator.compare() method the byte arrays that you stored in the database. If you know how your data is organized in the byte array, then you can write a comparison routine that directly examines the contents of the arrays. Otherwise, you have to reconstruct your original objects, and then perform the comparison.
For example, suppose you want to perform Unicode lexical comparisons instead of UTF-8 byte-by-byte comparisons. Then you could provide a comparator that uses String.compareTo(), which performs a Unicode comparison of two strings. (Note that for single-byte Roman characters, Unicode comparison and UTF-8 byte-by-byte comparisons are identical – this is something you would only want to do if you were using multibyte Unicode characters with JE.) In this case, your comparator would look like the following:
package je.gettingStarted;

import java.io.UnsupportedEncodingException;
import java.util.Comparator;

public class MyDataComparator implements Comparator {

    public MyDataComparator() {}

    public int compare(Object d1, Object d2) {
        byte[] b1 = (byte[])d1;
        byte[] b2 = (byte[])d2;
        try {
            String s1 = new String(b1, "UTF-8");
            String s2 = new String(b2, "UTF-8");
            return s1.compareTo(s2);
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is always supported; wrap the checked exception
            // to satisfy compare()'s signature.
            throw new RuntimeException(e);
        }
    }
}
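The decode-then-compare logic above can be sanity-checked without a JE environment at all. The following sketch re-implements the same comparison as a plain java.util.Comparator over byte arrays and sorts a few sample keys with it; the ComparatorCheck class and its sortedKeys() helper are invented for this illustration:

```java
import java.nio.charset.Charset;
import java.util.Arrays;
import java.util.Comparator;

public class ComparatorCheck {

    private static final Charset UTF8 = Charset.forName("UTF-8");

    // Same logic as MyDataComparator above, minus the JE dependency:
    // decode the stored key bytes back to Strings and compare those.
    public static final Comparator<byte[]> STRING_ORDER = new Comparator<byte[]>() {
        public int compare(byte[] b1, byte[] b2) {
            return new String(b1, UTF8).compareTo(new String(b2, UTF8));
        }
    };

    // Encode the keys, sort the raw byte arrays with the comparator
    // (as JE would order its BTree), and decode them back for display.
    public static String[] sortedKeys(String... keys) {
        byte[][] raw = new byte[keys.length][];
        for (int i = 0; i < keys.length; i++) {
            raw[i] = keys[i].getBytes(UTF8);
        }
        Arrays.sort(raw, STRING_ORDER);
        String[] out = new String[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = new String(raw[i], UTF8);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sortedKeys("pear", "apple", "banana")));
        // [apple, banana, pear]
    }
}
```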
You specify a Comparator using the following methods. Note that by default these methods can only be used at database creation time; they are ignored for normal database opens. Also note that JE instantiates these comparators using their no-argument constructor. Further, these comparators must not contain mutable state, or unpredictable results will occur.
DatabaseConfig.setBtreeComparator()
Sets the Java Comparator class used to compare two keys in the database.
DatabaseConfig.setDuplicateComparator()
Sets the Java Comparator class used to compare the data on two duplicate records in the database. This comparator is used only if the database supports duplicate records.
You can use the above methods to set a database's comparator after database creation time if you explicitly indicate that the comparator is to be overridden. You do this by using the following methods:

DatabaseConfig.setOverrideBtreeComparator()

If set to true, causes the database's Btree comparator to be overridden with the Comparator specified on DatabaseConfig.setBtreeComparator(). This method can be used to change the comparator after the database has been created.

DatabaseConfig.setOverrideDuplicateComparator()

If set to true, causes the database's duplicate comparator to be overridden with the Comparator specified on DatabaseConfig.setDuplicateComparator().

If you override your comparator, the new comparator must preserve the sort order implemented by your original comparator. That is, the new comparator and the old comparator must return the same value for the comparison of any two valid objects. Failure to observe this constraint will cause unpredictable results for your application. If you want to change the fundamental sort order for your database, back up the contents of the database, delete the database, recreate it, and then reload its data.
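The danger of overriding with an incompatible comparator can be simulated with nothing more than Arrays.binarySearch(), which, like a BTree, assumes the data is already ordered by the comparator it is handed. In this illustrative sketch (not JE code), a structure sorted under case-sensitive order is searched under case-insensitive order, and an existing key is not found:

```java
import java.util.Arrays;

public class OrderMismatchDemo {

    public static void main(String[] args) {
        // "Tree" built under the original, case-sensitive ordering:
        // 'B' (code point 66) sorts before 'a' (code point 97).
        String[] keys = { "B", "a" };
        Arrays.sort(keys);  // natural (case-sensitive) order

        // Now search the same structure with an incompatible comparator.
        // binarySearch assumes the array is sorted by the comparator it
        // is given, just as a BTree assumes its nodes obey the current
        // comparator. The existing key "a" is no longer found.
        int idx = Arrays.binarySearch(keys, "a", String.CASE_INSENSITIVE_ORDER);
        System.out.println(idx < 0);                           // true: lookup misses
        System.out.println(Arrays.asList(keys).contains("a")); // true: but it's there
    }
}
```

The same mismatch inside a JE BTree would silently lose access to existing records, which is why the sort order must be preserved.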
For example, to use the Comparator described in the previous section:
package je.gettingStarted;

import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseException;

import java.util.Comparator;

...

// Environment open omitted for brevity

try {
    // Get the database configuration object
    DatabaseConfig myDbConfig = new DatabaseConfig();
    myDbConfig.setAllowCreate(true);

    // Set the duplicate comparator class
    myDbConfig.setDuplicateComparator(MyDataComparator.class);

    // Open the database that you will use to store your data
    myDbConfig.setSortedDuplicates(true);
    Database myDatabase = myDbEnv.openDatabase(null, "myDb", myDbConfig);
} catch (DatabaseException dbe) {
    // Exception handling goes here
}
In previous chapters in this book, we built applications that load and display several JE databases. In this example, we will extend those examples to use secondary databases. Specifically:
In Stored Class Catalog Management with MyDbEnv we built a class that we can use to open and manage a JE Environment and one or more Database objects. In Opening Secondary Databases with MyDbEnv we will extend that class to also open and manage a SecondaryDatabase.
In Cursor Example we built an application to display our inventory database (and related vendor information). In Using Secondary Databases with ExampleInventoryRead we will extend that application to show inventory records based on the index we cause to be loaded using ExampleDatabasePut.
Before we can use a secondary database, we must implement a class to extract secondary keys for us. We use ItemNameKeyCreator for this purpose.
Example 6.1 ItemNameKeyCreator.java
This class assumes the primary database uses Inventory objects for the record data. The Inventory class is described in Inventory.java.
In our key creator class, we make use of a custom tuple binding called InventoryBinding. This class is described in InventoryBinding.java.
You can find the following class in:
JE_HOME/examples/je/gettingStarted/ItemNameKeyCreator.java
where JE_HOME is the location where you placed your JE distribution.
package je.gettingStarted;

import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.SecondaryDatabase;
import com.sleepycat.je.SecondaryKeyCreator;
import com.sleepycat.bind.tuple.TupleBinding;

import java.io.IOException;

public class ItemNameKeyCreator implements SecondaryKeyCreator {

    private TupleBinding theBinding;

    // Use the constructor to set the tuple binding
    ItemNameKeyCreator(TupleBinding binding) {
        theBinding = binding;
    }

    // Abstract method that we must implement
    public boolean createSecondaryKey(SecondaryDatabase secDb,
                                      DatabaseEntry keyEntry,    // From the primary
                                      DatabaseEntry dataEntry,   // From the primary
                                      DatabaseEntry resultEntry) // Set the key data on this
        throws DatabaseException {

        try {
            // Convert dataEntry to an Inventory object
            Inventory inventoryItem =
                (Inventory) theBinding.entryToObject(dataEntry);
            // Get the item name and use that as the key
            String theItem = inventoryItem.getItemName();
            resultEntry.setData(theItem.getBytes("UTF-8"));
        } catch (IOException willNeverOccur) {}

        return true;
    }
}
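For readers new to secondary indices, the role of a key creator can be sketched without JE at all: derive a secondary key (here, the item name) from each record's data, and map it back to the record's primary key (the SKU). The toy classes below (ToyIndexDemo and Item) are invented for illustration and stand in for the real Inventory records:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyIndexDemo {

    // Stand-in for the Inventory record; sku is the primary key.
    public static class Item {
        public final String sku;
        public final String name;
        public Item(String sku, String name) {
            this.sku = sku;
            this.name = name;
        }
    }

    // Builds a secondary "index" mapping item name -> primary keys.
    // The loop body plays the role of createSecondaryKey(): it derives
    // the secondary key (the name) from each record's data.
    public static Map<String, List<String>> buildIndex(Map<String, Item> primary) {
        Map<String, List<String>> byName = new HashMap<String, List<String>>();
        for (Item item : primary.values()) {
            List<String> skus = byName.get(item.name);
            if (skus == null) {
                skus = new ArrayList<String>();
                byName.put(item.name, skus);
            }
            skus.add(item.sku);
        }
        return byName;
    }

    public static void main(String[] args) {
        // Primary "database": sku -> record
        Map<String, Item> primary = new HashMap<String, Item>();
        primary.put("SKU1", new Item("SKU1", "Oranges"));
        primary.put("SKU2", new Item("SKU2", "Apples"));
        primary.put("SKU3", new Item("SKU3", "Oranges"));

        // Query by name; duplicate secondary keys are allowed, just as
        // in a JE secondary opened with setSortedDuplicates(true).
        System.out.println(buildIndex(primary).get("Oranges").size()); // 2
    }
}
```

JE automates this bookkeeping: once the key creator is registered, the index is maintained as records are inserted, updated, and deleted.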
Now that we have a key creator, we can use it to generate keys for a secondary database. We will now extend MyDbEnv to manage a secondary database, and to use ItemNameKeyCreator to generate keys for that secondary database.
In Stored Class Catalog Management with MyDbEnv we built MyDbEnv as an example of a class that encapsulates Environment and Database opens and closes. We will now extend that class to manage a SecondaryDatabase.
Example 6.2 SecondaryDatabase Management with MyDbEnv
We start by importing two additional classes needed to support secondary databases. We also add a global variable to use as a handle for our secondary database.
// File MyDbEnv.java
package je.gettingStarted;

import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.bind.serial.StoredClassCatalog;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.SecondaryConfig;
import com.sleepycat.je.SecondaryDatabase;

import java.io.File;

public class MyDbEnv {

    private Environment myEnv;

    // The databases that our application uses
    private Database vendorDb;
    private Database inventoryDb;
    private Database classCatalogDb;
    private SecondaryDatabase itemNameIndexDb;

    // Needed for object serialization
    private StoredClassCatalog classCatalog;

    // Our constructor does nothing
    public MyDbEnv() {}
Next we update the MyDbEnv.setup() method to open the secondary database. As a part of this, we have to pass an ItemNameKeyCreator object on the call to open the secondary database. Also, in order to instantiate ItemNameKeyCreator, we need an InventoryBinding object (we described this class in InventoryBinding.java). We do all this work together inside of MyDbEnv.setup().
public void setup(File envHome, boolean readOnly)
    throws DatabaseException {

    EnvironmentConfig myEnvConfig = new EnvironmentConfig();
    DatabaseConfig myDbConfig = new DatabaseConfig();
    SecondaryConfig mySecConfig = new SecondaryConfig();

    // If the environment is read-only, then
    // make the databases read-only too.
    myEnvConfig.setReadOnly(readOnly);
    myDbConfig.setReadOnly(readOnly);
    mySecConfig.setReadOnly(readOnly);

    // If the environment is opened for write, then we want to be
    // able to create the environment and databases if
    // they do not exist.
    myEnvConfig.setAllowCreate(!readOnly);
    myDbConfig.setAllowCreate(!readOnly);
    mySecConfig.setAllowCreate(!readOnly);

    ...
    // Environment and database opens omitted for brevity
    ...

    // Open the secondary database. We use this to create a
    // secondary index for the inventory database.
    // We want to maintain an index for the inventory entries based
    // on the item name. So, instantiate the appropriate key creator
    // and open a secondary database.
    ItemNameKeyCreator keyCreator =
        new ItemNameKeyCreator(new InventoryBinding());

    // Set up the secondary properties
    mySecConfig.setAllowPopulate(true); // Allow autopopulate
    mySecConfig.setKeyCreator(keyCreator);
    // Need to allow duplicates for our secondary database
    mySecConfig.setSortedDuplicates(true);

    // Now open it
    itemNameIndexDb =
        myEnv.openSecondaryDatabase(null,
                                    "itemNameIndex", // Index name
                                    inventoryDb,     // Primary database handle.
                                                     // This is the db that
                                                     // we're indexing.
                                    mySecConfig);    // The secondary config
}
Next we need an additional getter method for returning the secondary database.
public SecondaryDatabase getNameIndexDB() {
    return itemNameIndexDb;
}
Finally, we need to update the MyDbEnv.close() method to close the new secondary database. We want to make sure that the secondary is closed before the primaries. While this is not necessary for this example because our closes are single-threaded, it is still a good habit to adopt.
public void close() {
    if (myEnv != null) {
        try {
            // Close the secondary before closing the primaries
            itemNameIndexDb.close();
            vendorDb.close();
            inventoryDb.close();
            classCatalogDb.close();

            // Finally, close the environment.
            myEnv.close();
        } catch(DatabaseException dbe) {
            System.err.println("Error closing MyDbEnv: " +
                               dbe.toString());
            System.exit(-1);
        }
    }
}
}
That completes our update to MyDbEnv. You can find the complete class implementation in:
JE_HOME/examples/je/gettingStarted/MyDbEnv.java
where JE_HOME is the location where you placed your JE distribution.
Because we performed all our secondary database configuration management in MyDbEnv, we do not need to modify ExampleDatabasePut at all in order to create our secondary indices. When ExampleDatabasePut calls MyDbEnv.setup(), all of the necessary work is performed for us.
However, we still need to take advantage of the new secondary indices. We do this by updating ExampleInventoryRead to allow us to query for an inventory record based on its name. Remember that the primary key for an inventory record is the item's SKU. The item's name is contained in the Inventory object that is stored as each record's data in the inventory database. But our new secondary index now allows us to easily query based on the item's name.
In the previous section we changed MyDbEnv to cause a secondary database to be built using inventory item names as the secondary keys. In this section, we will update ExampleInventoryRead to allow us to query our inventory records based on the item name. To do this, we will modify ExampleInventoryRead to accept a new command line switch, -s, whose argument is the name of an inventory item. If the switch is present on the command line call to ExampleInventoryRead, then the application will use the secondary database to look up and display all the inventory records with that item name. Note that we use a SecondaryCursor to seek to the item name key and then display all matching records.
Remember that you can find the following class in:
JE_HOME/examples/je/gettingStarted/ExampleInventoryRead.java
where JE_HOME is the location where you placed your JE distribution.
Example 6.3 SecondaryDatabase usage with ExampleInventoryRead
First we need to import a few additional classes in order to use secondary databases and cursors, and then we add a single global variable:
package je.gettingStarted;

import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.SecondaryCursor;
import com.sleepycat.bind.EntryBinding;
import com.sleepycat.bind.serial.SerialBinding;
import com.sleepycat.bind.tuple.TupleBinding;

import java.io.File;
import java.io.IOException;

public class ExampleInventoryRead {

    private static File myDbEnvPath = new File("/tmp/JEDB");

    // Encapsulates the database environment and databases.
    private static MyDbEnv myDbEnv = new MyDbEnv();

    private static TupleBinding inventoryBinding;
    private static EntryBinding vendorBinding;

    // The item to locate if the -s switch is used
    private static String locateItem;
Next we update ExampleInventoryRead.run() to check whether the locateItem global variable has a value. If it does, then we show just those records related to the item name passed on the -s switch.
private void run(String args[]) throws DatabaseException {
    // Parse the arguments list
    parseArgs(args);

    myDbEnv.setup(myDbEnvPath, // path to the environment home
                  true);       // is this environment read-only?

    // Setup our bindings.
    inventoryBinding = new InventoryBinding();
    vendorBinding =
        new SerialBinding(myDbEnv.getClassCatalog(),
                          Vendor.class);

    if (locateItem != null) {
        showItem();
    } else {
        showAllInventory();
    }
}
Finally, we need to implement ExampleInventoryRead.showItem(). This is a fairly simple method that opens a secondary cursor, and then displays every primary record that is related to the secondary key identified by the locateItem global variable.
private void showItem() throws DatabaseException {

    SecondaryCursor secCursor = null;
    try {
        // searchKey is the key that we want to find in the
        // secondary db.
        DatabaseEntry searchKey =
            new DatabaseEntry(locateItem.getBytes("UTF-8"));

        // foundKey and foundData are populated from the primary
        // entry that is associated with the secondary db key.
        DatabaseEntry foundKey = new DatabaseEntry();
        DatabaseEntry foundData = new DatabaseEntry();

        // open a secondary cursor
        secCursor =
            myDbEnv.getNameIndexDB().openSecondaryCursor(null, null);

        // Search for the secondary database entry.
        OperationStatus retVal =
            secCursor.getSearchKey(searchKey, foundKey,
                                   foundData, LockMode.DEFAULT);

        // Display the entry, if one is found. Repeat until no more
        // secondary duplicate entries are found.
        while (retVal == OperationStatus.SUCCESS) {
            Inventory theInventory =
                (Inventory)inventoryBinding.entryToObject(foundData);
            displayInventoryRecord(foundKey, theInventory);
            retVal = secCursor.getNextDup(searchKey, foundKey,
                                          foundData, LockMode.DEFAULT);
        }
    } catch (Exception e) {
        System.err.println("Error on inventory secondary cursor:");
        System.err.println(e.toString());
        e.printStackTrace();
    } finally {
        if (secCursor != null) {
            secCursor.close();
        }
    }
}
The only other thing left to do is to update ExampleInventoryRead.parseArgs() to support the -s command line switch. To see how this is done, see:
JE_HOME/examples/je/gettingStarted/ExampleInventoryRead.java
where JE_HOME is the location where you placed your JE distribution.
How RapidMind delivers a single-source development platform for Cell/B.E. applications
Level: Intermediate
Michael McCool (mmccool@rapidmind.net), Founder and Chief Scientist, RapidMind
01 May 2007
This article introduces the RapidMind Development Platform.
Before discussing the RapidMind Development Platform -- which can be used to develop applications for the Cell/B.E. processor that are able to effectively exploit the architecture by letting you write a single, single-threaded C++ program using an existing C++ compiler -- let me explain why Cell/B.E. processors and the RapidMind platform combine to make a good application development environment.
Introducing RapidMind and Cell/B.E.
The nine cores in the Cell/B.E. processor include:

One Power Processor Element (PPE), whose processing unit is commonly referred to as the PPU
Eight Synergistic Processor Elements (SPEs)
The PPU is a general-purpose processor and is capable of running traditional operating system and application code. However, most of the computational performance of the Cell/B.E. processor resides in the more specialized SPEs. Each SPE core has a number of unique features, including a large vector register file, a vector ALU, and an explicitly managed high-speed local memory. The Cell/B.E. processor also includes a number of features that allow the PPU and the SPEs to communicate and synchronize with each other.
Why RapidMind on the Cell/B.E. processor?
To use the RapidMind platform on the Cell/B.E. processor, it is not necessary to understand the details of the SPE cores or perform any SPE-specific programming. It is only necessary to write a single-source, single-threaded program using an existing C++ compiler (such as g++) and run it on the PPU. This program only needs to include a single header file and link to the RapidMind platform library.
To execute a computation on the SPEs, it should be expressed as a data-parallel computation and executed using the RapidMind platform. The interface to the RapidMind platform is implemented as a library that integrates cleanly with existing IDEs and build environments. Although it is used as a library, the RapidMind platform can also be thought of as a high-performance, embedded parallel programming language with its own code management and parallel runtime. It is possible to express arbitrary computations with the RapidMind interface and have these execute at performance levels rivaling that of native hardware-dependent tools, but with a simple, maintainable, and portable programming model.
The RapidMind interface is based on a small set of types. These types mimic standard C++ types -- floating point numbers, arrays, and functions -- making it straightforward to port existing code. However, computations expressed using RapidMind types can be collected into computationally intense kernels, dynamically compiled directly to optimized SPE machine language, and run in parallel under the management of a sophisticated runtime system that includes automatic load balancing and synchronization with the host. The RapidMind platform automates most of the low-level tasks involved in using the additional high-performance SPE cores, making it easier to take advantage of the extreme computational performance provided by the Cell/B.E. processor for a range of applications.
Making C++ high performance
The C++ programming language is not usually considered suitable for high-performance programming. The C++ modularity mechanisms and memory model unfortunately incur overhead that is hard to eliminate with traditional compilation strategies. However, by using a dynamic compilation strategy, the RapidMind platform architecture bypasses these problems and makes it possible to eliminate overhead while targeting non-traditional architectures such as the SPEs.
In fact, using dynamic compilation, RapidMind implementations of benchmark applications can match or even significantly exceed the performance of the same applications written at a lower level with native tools. The RapidMind implementation is also often significantly simpler and more portable than comparable implementations.
In the following sections, I present a simple introduction to the RapidMind platform, the basic types and programming model, and an example showing how a simple loop kernel can be converted to run on the SPEs.
RapidMind interface and programming model
The RapidMind interface is based on three main types:
Value
Array
Program
These are defined in the rapidmind/platform.hpp include file. Including this file and linking to the rmplatform library is all that is needed to use the platform. RapidMind type declarations are protected from name collisions with user-defined types by the rapidmind namespace. Also, to express control flow, some macros are used that are usually protected with an "RM_" prefix. By including rapidmind/shortcuts.hpp, some aliases can be defined that omit these prefixes. For brevity I will use these aliases in my examples, and will also omit namespace qualifications.
The Value type
The Value<N,T> type represents a fixed-size container, a homogeneous tuple, with N instances of type T. The element type T can be any basic C++ numerical type, such as float or int. The type T may also be a bool.
There are also short forms for value tuples of up to length four. For example, a Value3f is a 3-tuple of single-precision floating point numbers, and a Value4ui is a 4-tuple of unsigned integers. The Value<1,T> type is in most cases a direct drop-in replacement for a single instance of type T. For example, single-precision complex numbers can be implemented using the RapidMind platform with std::complex<Value1f>.
All the usual operators are overloaded on values; they operate component by component. Most of the standard C library functions are also extended to values and execute in parallel on each element. The value types also provide support for swizzling and writemasking.
For example, given a 4-tuple Value4f c, you can extract the first three components of c and reverse their order using the swizzle expression c(2,1,0). In a single operation this extracts components 2, 1, and 0 and packs them together into a 3-tuple.
The value type permits the explicit specification of vectorized computations which are a good match to the vector register architecture of the SPEs.
The Array type
The Array<D,V> type represents a variable-sized multidimensional container in which D is the dimensionality (1, 2, or 3) and V is the element type (a RapidMind value). This type is intended for managing large amounts of data. Like the normal pointer arrays used in C++, RapidMind arrays can be assigned to one another in O(1) time. However, RapidMind arrays use by-value rather than by-reference semantics. This simplifies memory management and avoids unnecessary side effects.
The unique Program type
Finally, RapidMind supports a unique Program type which represents a computation. It is literally a container for program code and can be constructed dynamically. It can be thought of as a function that, unlike ordinary C++ functions, can be created and manipulated at run time. Basically, sequences of computations on RapidMind values can be "recorded" and stored in a RapidMind program object. Following is a simple code example of how a program object can be constructed:
Program prog = BEGIN {
    In<Value3f> a, b;
    Out<Value3f> c;
    Value3f d = func(a, b);
    c = d + a * 2.0f;
} END;
In this example, a C++ function func() is called. This function can be defined with ordinary C++ mechanisms and many other C++ modularity constructs, such as classes, can also be used. The value d is a local variable only visible inside the program. The values a and b are marked as inputs, and the value c is marked as an output. Inputs are initialized with the values of actual arguments, and values marked as outputs will be copied into actual outputs when the program is later bound and executed. Programs can have more than one input but also more than one output.
The BEGIN macro switches from "immediate" to "retained" mode. By default, operations on RapidMind types actually execute on the host just like the C++ numerical types they emulate. In retained mode, however, operations are not executed, they are recorded and stored in the associated program object. When the END tag is encountered, the RapidMind platform switches back to immediate mode but also prepares the captured operations stored in the program object for execution on the SPEs.
Note that only operations on RapidMind types are captured when a program object is built. This is the mechanism that allows the platform to avoid C++ overhead. In effect, normal C++ modularity constructs act as scaffolding for generating intense sequences of numerical operations, but the overhead for these modularity constructs only has to be executed once when the program is created, not every time it is used.
Basically, C++ constructs are "baked" into the program object -- pointers are de-referenced, functions are inlined, loops are unrolled, and C++ variables are interpreted as constants, resulting in very dense and efficient code. The RapidMind platform, however, has dynamic versions of all these constructs, so they can be used when -- and only when -- they are necessary.
Dynamic code generation has other implications for performance. It is easy to create alternative versions of program objects whenever necessary to exploit special cases in the input data. It is also straightforward to tune program objects (either explicitly or automatically) by modifying construction parameters, such as a blocking factor or the number of times a loop is unrolled, until performance is maximized.
Syntactically, the construction of a program object looks like a dynamic function definition and the resulting program object can in fact be used much like an ordinary function. However, RapidMind program objects are normally applied to entire arrays. The computation represented by a program is executed concurrently on all elements of the array, using all the cores on the Cell/B.E. processor in parallel. If A, B, and C are RapidMind array objects, then parallel execution is initiated as follows: C = prog(A,B);.
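As a mental model in plain Python (not RapidMind code), applying a program object to whole arrays behaves like an element-wise map over corresponding elements; prog here is an invented stand-in for the captured computation:

```python
# C[i] = prog(A[i], B[i]) for every index i -- the platform runs these
# instances concurrently across the cores; here, a sequential sketch:
def prog(a, b):
    return a + 2.0 * b  # placeholder for the recorded operations

A = [1.0, 2.0, 3.0]
B = [10.0, 20.0, 30.0]
C = [prog(x, y) for x, y in zip(A, B)]
print(C)  # [21.0, 42.0, 63.0]
```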
Program objects can include arbitrary computations, including dynamic data-dependent control flow and random access reads from other arrays. An extension of the previous example, including dynamic control flow and random access into a 1D array B rather than streaming access, could be expressed as follows:
Program p = BEGIN {
In<Value3f> a;
In<Value1i> u;
Out<Value3f> c;
Value3f d = func(a, B[u]);
IF (all(a > 0.0f)) {
c = d + a * 2.0f;
} ELSE {
c = d - a * 2.0f;
} ENDIF;
} END;
The inclusion of control flow makes the programming model very general. Technically, RapidMind uses an SPMD (single program, multiple data) data-parallel programming model. A number of collective operations are also available, including higher-order reductions which can take an associative "combiner" program as an argument.
Access patterns also make it possible to read from only part of an input array and update only part of an output array. The combination of control flow in kernels and general collective operations on arrays makes it possible to use a bulk-synchronous style of parallel programming which has been shown to apply to a wide range of problems.
The data-parallel programming model used by the RapidMind platform allows the system to scale gracefully to variable numbers of cores. For example, under Yellow Dog Linux (from Terra Soft), a RapidMind-enabled program can execute in parallel on the six available SPE cores on the Sony® PLAYSTATION® 3. On an IBM QS20 Cell/B.E. blade which has two connected Cell/B.E. processors, the same program can automatically take advantage of all 16 available SPE cores.
Note that the amount of parallelism expressed in a RapidMind computation is usually much larger than the number of cores. After dividing the work over the available cores, the "extra" parallelism is used internally for various additional optimizations, including vectorization and latency hiding. The platform runtime also includes automatic load-balancing, a necessary feature since control flow can lead to differences in execution times for different instances of the computational kernel represented by a RapidMind program object.
Loop conversion example
This additional example should make the RapidMind interface concepts clearer. In the following section, I'll convert a nested loop kernel to a parallel implementation that runs on all SPEs:
#include <cmath>
const int w = 512, h = 512;
float f;
float a[w][h][4], b[w][h][4];
void compute() {
for (int i = 0; i < w; i++)
for (int j = 0; j < h; j++)
for (int k = 0; k < 4; k++) {
a[i][j][k] += f * b[i][j][k];
}
}
The conversion process has three steps as Figure 1 illustrates.
To enable use of the RapidMind platform, you'll need to include the RapidMind header files. For brevity in these examples, I'll also load the short forms of the macros and specify the global use of the rapidmind namespace:
#include <rapidmind/platform.hpp>
#include <rapidmind/shortcuts.hpp>
using namespace rapidmind;
Now I'll convert the data to use the RapidMind types. Later I'll create a RapidMind program to implement the innermost loop using component-wise operations on the value tuples, so here I'll declare the arrays to hold value tuples of the appropriate length:
const int w = 512, h = 512;
Value1f f;
Array<2,Value4f> a(w,h), b(w,h);
You should build the program object representing the computation during an initialization phase. After you build it, you can use it repeatedly to perform computation. In more complex programs, you normally build program objects in class constructors; here, I'll use a simple initialization function to build the program object and will store it in a global variable. This function should be called during system initialization:
Program compute_prog;
void init_compute_prog () {
compute_prog = BEGIN {
In<Value4f> r, s;
Out<Value4f> t;
t = r + f * s;
} END;
}
Note that the program object actually refers to a non-local variable f of type Value1f. When the program object runs on the SPEs, it will use whatever the current value of this variable is when it runs, just like an ordinary C++ function. The RapidMind platform automatically tracks dependencies between program objects and references to non-local variables declared using RapidMind types and sets up the appropriate communication without further intervention from the developer. The length of the f 1-tuple is also automatically promoted to a 4-tuple by duplicating its scalar value before applying the component-wise multiplication operation.
Finally, the program can be executed whenever necessary:
a = compute_prog(a,b);
This example illustrates one further point. Note that the array a is both an input and an output of the program. The RapidMind platform uses parallel assignment semantics -- inputs are always bound to the "previous" value, not the "new" value.
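These parallel assignment semantics can be mimicked in plain Python by evaluating the entire right-hand side from the previous contents of a before storing anything back (an illustrative sketch only; f and compute stand in for the variable and program built earlier):

```python
f = 0.5  # plays the role of the non-local Value1f variable

def compute(r, s):
    return r + f * s

a = [1.0, 2.0]
b = [4.0, 8.0]

# All outputs are computed from the *previous* value of `a`;
# no element ever sees a partially updated array.
a = [compute(r, s) for r, s in zip(a, b)]
print(a)  # [3.0, 6.0]
```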
If you want to know more
This article gave a brief introduction to the RapidMind platform. For more information, including benchmark results and more detailed code examples, please visit the RapidMind Web site (see Resources), where a number of white papers are available. You can also sign up for an evaluation version of the RapidMind platform, complete with full documentation and sample code for a variety of applications.
Resources
About the author
An Associate Professor at the University of Waterloo and co-founder of RapidMind, Dr. McCool continues to perform research at the university and sit on the Board of Directors at RapidMind Inc. His research interests include high-quality real-time rendering, global and local illumination, hardware algorithms, parallel computing, reconfigurable computing, interval and Monte Carlo methods and applications, end-user programming and metaprogramming, image and signal processing, and sampling.
Measure the time it takes to execute a request
Sample code
Level: Introductory
Andre Tost, Senior Certified IT Specialist, IBM Software GroupRussell Butek, Software Engineer, IBM Software Group
04 Nov 2003
When developing a Web service, typically you would not want to put Web service-specific code in the implementation. In many cases, you would take existing code and simply add another access layer to it, namely a way to invoke it via SOAP over HTTP. This means that the service implementation knows nothing about SOAP, it knows nothing even about XML, and it certainly doesn't matter that it is invoked from a client that sits in another process, on another machine, possibly on the other side of the world! While this is a well-known advantage of Web services technology (that is, the possibility to invoke it across a network or the Internet), it also creates challenges. You may want to measure the response time of the server; you may have to encrypt a message before you can send it across the network; or you may want to charge a client for using your Web service each time it is invoked. Luckily, the JAX-RPC specification provides us with a feature that can help us do these things: handlers. Below, we will describe how you can build a simple handler that measures the time it takes to execute a particular request and logs the result in a file.
JAX-RPC handler basics
JAX-RPC handlers allow you to intercept a SOAP message at various times during a service invocation. Handlers can exist on both the client and the server, and multiple handlers can be combined into a handler chain, where each handler processes the message in turn. We'll get back to this later.
To develop a JAX-RPC handler, you simply create a class that implements the javax.xml.rpc.handler.Handler interface. It has three methods to handle SOAP requests, responses and faults, respectively.
Handler lifecycle
Handlers are defined in the JAX-RPC specification. However, the "Enterprise Web Services" (JSR109) specification describes how they are used in a J2EE environment and adds some clarification to the way handlers are managed by the application server (see Resources for more information on this specification). In this article, we will assume that your Web service runs on a J2EE application server and hence we will follow the definitions of JSR109 as well as JAX-RPC.
Handlers are shared across multiple service invocations. In other words, they can store information that is only valid for a particular client or server instance. You can compare this to the way servlets are handled. When a new instance of a handler is created, its init() method is called. That allows you to set up things that you can use for multiple invocations. Before the handler is removed, the destroy() method is called, so that you can do cleanup in there. As a rule of thumb, however, you should avoid storing any state in a handler altogether.
Handler configuration

Listing 1 shows the deployment descriptor for our sample service, showing how a handler class called handler.PerformanceHandler is registered for the HelloWorld service:
<webservices id="WebServices_1066491732483">
<webservice-description>
<webservice-description-name>HelloWorldService</webservice-description-name>
<wsdl-file>WEB-INF/wsdl/HelloWorld.wsdl</wsdl-file>
<jaxrpc-mapping-file>WEB-INF/HelloWorld_mapping.xml</jaxrpc-mapping-file>
<port-component>
<port-component-name>HelloWorld</port-component-name>
<wsdl-port>
<namespaceURI></namespaceURI>
<localpart>HelloWorld</localpart>
</wsdl-port>
<service-endpoint-interface>pack.HelloWorld</service-endpoint-interface>
<service-impl-bean>
<servlet-link>pack_HelloWorld</servlet-link>
</service-impl-bean>
<handler id="Handler_1066493401322">
<handler-name>handler.PerformanceHandler</handler-name>
<handler-class>handler.PerformanceHandler</handler-class>
</handler>
</port-component>
</webservice-description>
</webservices>
Multiple handlers would be defined here to form a chain as mentioned above.
Storing state
If multiple handlers exist in a handler chain, or if a handler needs to pass information from the request to the response, that state can be stored in the MessageContext object, which the application server passes to the handler methods of the same invocation. We will get to this in our example later.
Example: A service performance handler
Now let's look at how you can create a handler that measures the response time of your service implementation. We will assume that you have created a HelloWorld web service, which simply returns a String message. Listing 2 shows the code for the service implementation bean (you can find an EAR file with all the sources, including a test client in the Resources section):
public class HelloWorld {
public String helloWorld(String message) {
return "Hello "+message;
}
}
The handler will be configured for the server that hosts the Web service. It will be invoked on both the request and the response message, so that we can measure the elapsed time.
Initializing the handler
As mentioned earlier, each handler must implement the javax.xml.rpc.handler.Handler interface. Or, you can make your life a bit easier by simply inheriting from the javax.xml.rpc.handler.GenericHandler class, which provides default implementations for all the methods. For storing performance results, we use a class called Logger, which we set up in the init() method. You can find the source for the Logger class in the Resources section. Moreover, the application server passes a javax.xml.rpc.handler.HandlerInfo object into this method, which we need to cache as well (see Listing 3):
public class PerformanceHandler extends GenericHandler {
protected HandlerInfo info = null;
protected Logger logger = null;
public void init(HandlerInfo arg) {
info = arg;
logger = Logger.getLogger("c://temp//HelloWorldServiceLog");
}
public void destroy() {
try {
logger.close();
} catch (Exception x) {}
}
Note that we close the Logger object when the handler instance is destroyed.
Handling requests and responses
Each handler implements the handleRequest method, which is invoked when a request message arrives, as shown in Listing 4.
public boolean handleRequest(MessageContext context) {
try {
Date startTime = new Date();
context.setProperty("startTime", startTime);
} catch (Exception x) {
// insert error handling here
x.printStackTrace();
}
return true;
}
Here you can see that we store the current time in the message context as a property called "startTime". The application server will guarantee that the same message context object is passed into the handleResponse method, so that we can measure the elapsed time there, as shown in Listing 5.
public boolean handleResponse(MessageContext context) {
try {
Date startTime = (Date)context.getProperty("startTime");
Date endTime = new Date();
long elapsedTime = endTime.getTime()-startTime.getTime();
logger.write("Elapsed time is "+elapsedTime+"\n");
} catch (Exception x) {
// insert error handling here
x.printStackTrace();
}
return true;
}
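Stripped of the JAX-RPC machinery, the handler pair implements a simple pattern: stash the start time in a per-invocation context on the way in, and compute the delta on the way out. A language-neutral sketch in Python (MessageContext here is a minimal invented stand-in, not the javax API):

```python
import time

class MessageContext:
    """Minimal stand-in for the per-invocation message context."""
    def __init__(self):
        self._props = {}

    def set_property(self, key, value):
        self._props[key] = value

    def get_property(self, key):
        return self._props[key]

def handle_request(context):
    # Record when the request passed through the handler.
    context.set_property("startTime", time.monotonic())

def handle_response(context):
    # The same context instance is seen again on the response path.
    return time.monotonic() - context.get_property("startTime")

ctx = MessageContext()
handle_request(ctx)
time.sleep(0.05)                # stand-in for the actual service work
elapsed = handle_response(ctx)  # roughly 0.05 seconds here
```

The crucial property, guaranteed by the application server in the JAX-RPC case, is that the request and response handlers see the same context object for a given invocation.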
Now you can deploy your service. Make sure you have configured the handler as shown in the deployment descriptor above, and you can measure how long it takes to execute a service request.
Summary

JAX-RPC handlers offer a convenient way to intercept and process messages as they pass through your system.
Download
Resources
About the authors.
Russell Butek is one of the developers of IBM's WebSphere Web services engine. He is also IBM's representative on the JAX-RPC Java Specification Request (JSR) expert group. He was involved in the implementation of Apache's AXIS SOAP engine, driving AXIS 1.0 to comply with JAX-RPC 1.0. Previously, he was a developer of IBM's CORBA ORB and an IBM representative on a number of OMG task forces: the portable interceptor task force (of which he was chair), the core task force, and the interoperability task force. You can contact Russell at butek at us.ibm.com.
#include <archetype.h>
Element classes must contain a public default constructor, copy constructor, assignment operator, and destructor. Note that primitive types such as int and double meet this specification.
In particular, the copy constructor is used to allow elements to be passed by value to a function.
Note that the documentation below of the specific methods describes them as implemented in the archetypic element class.
#include <Function.h>
List of all members.
Functions are used in various ways in PDF, including device-dependent rasterization information for high-quality printing (halftone spot functions and transfer functions), color transform functions for certain color spaces, and specification of colors as a function of position for smooth shadings. Functions in PDF represent static, self-contained numerical transformations.
PDF::Function represents a single, flat interface around all PDF function types.
Create a PDF::Function object from an existing SDF function dictionary.
If funct_dict is null, a non-valid Function object is created.
Evaluate the function at a given point.
#include <ffpack.h>
Inheritance diagram for FFPACK:
This class only provides a set of static member functions. No instantiation is allowed.
It enlarges the set of BLAS routines of the class FFLAS, with higher level routines based on elimination.
[inline, static]
Computes the rank of the given matrix using a LQUP factorization. The input matrix is modified.
using rank computation with early termination.
using LQUP factorization with early termination.
Solve linear system using LQUP factorization.
LQUPtoInverseOfFullRankMinor: Suppose A has been factorized as L.Q.U.P, with rank r. Then Qt.A.Pt has an invertible leading principal r x r submatrix. This procedure efficiently computes the inverse of this minor and puts it into X. NOTE: It changes the lower entries of A_factors in the process (NB: unless A was nonsingular and square)
[static]
FfpackLQUP
NULL
FfpackHybrid
[inline, static, protected]
[static, protected]
#include <rtt/Timer.hpp>
List of all members.
In Order to use this class, derive your class from Timer and implement the timeout() method. The resolution of this class depends completely on the timer resolution of the underlying operating system.
Definition at line 23 of file Timer.hpp.
Create a timer object which can hold max_timers timers.
A Timer must be executed in a SingleThread or it will refuse to start. If scheduler is set to -1 (default) no thread is created and you need to attach a thread yourself to this Timer.
Definition at line 83 of file Timer.cpp.
References TimeService::Instance(), Timer::mThread, Timer::mtimers, Timer::mTimeserv, and ThreadInterface::start().
The method that will be executed once when this class is run in a non periodic thread.
The default implementation calls step() once.
Reimplemented from RunnableInterface.
Definition at line 18 of file Timer.cpp.
References TimeService::getNSecs(), TimeService::InfiniteNSecs, Timer::m, Timer::mdo_quit, Timer::msem, Timer::mtimers, Timer::mTimeserv, Timer::timeout(), and Semaphore::waitUntil().

Reimplemented from RunnableInterface.
Definition at line 76 of file Timer.cpp.
References Timer::mdo_quit, Timer::msem, and Semaphore::signal().
This function is called each time an armed or periodic timer expires.
The user must implement this method to catch the time outs.
Definition at line 100 of file Timer.cpp.
Referenced by Timer::loop().
Change the maximum number of timers in this object.
Any added timer with id >= max will be removed.
Definition at line 105 of file Timer.cpp.
References Timer::m, and Timer::mtimers.
Start a periodic timer which starts first over period seconds and then every period seconds.
Definition at line 111 of file Timer.cpp.
References TimeService::getNSecs(), Timer::m, Timer::msem, Timer::mtimers, Timer::mTimeserv, RTT::Seconds_to_nsecs(), and Semaphore::signal().
Arm a timer to fire once over wait_time seconds.
Definition at line 127 of file Timer.cpp.
References TimeService::getNSecs(), Timer::m, Timer::msem, Timer::mtimers, Timer::mTimeserv, RTT::Seconds_to_nsecs(), and Semaphore::signal().
Returns the remaining time before this timer elapses.
Definition at line 152 of file Timer.cpp.
References TimeService::getNSecs(), Timer::m, Timer::mtimers, Timer::mTimeserv, and RTT::nsecs_to_Seconds().
Check if a given timer id is armed.
Definition at line 144 of file Timer.cpp.
References Timer::m, and Timer::mtimers.
Disable an armed timer.
Definition at line 165 of file Timer.cpp.
References Timer::m, and Timer::mtimers.
Get the thread this object is run in.
Definition at line 69 of file RunnableInterface.cpp.
Referenced by Timer::initialize().
Is accessing Kubernetes dashboard remotely possible?
To access the Kubernetes Dashboard via proxy from a remote machine, you will need to grant a ClusterRole that allows access to the dashboard.
Create a new file and insert the following details.
vi dashboard-access.yaml
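The YAML contents of the original post were not preserved here. As an illustrative sketch only (the exact binding depends on your dashboard version and service account names), a ClusterRoleBinding that grants the dashboard's service account cluster-wide access might look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```

Note that binding cluster-admin is convenient on a test cluster but far too broad for production use.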
Now apply the changes to the Kubernetes cluster to grant access to the dashboard.
kubectl create -f dashboard-access.yaml
If the Kubernetes API server is exposed and accessible from outside, you can directly access the dashboard at:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
It seems only API requests from localhost to the dashboard service are accepted in proxy mode. The question then becomes how to make a request from a remote host look like it comes from localhost on the machine where the proxy is running (the k8s master node, in my case).
Here are the steps:
1. Deploy the dashboard service on the master node:
Package org.eclipse.swt.events
Class SegmentEvent
- java.lang.Object
- java.util.EventObject
- org.eclipse.swt.events.TypedEvent
- org.eclipse.swt.events.SegmentEvent
- All Implemented Interfaces:
Serializable
- Direct Known Subclasses:
BidiSegmentEvent
public class SegmentEvent extends TypedEvent

This event is sent to SegmentListeners when a text content is to be modified. The segments field can be used in conjunction with the segmentsChars field or by itself. Setting only the segmentsChars field has no effect. When used by itself, the segments field specify text ranges that should be treated as separate segments.
The elements in the segments field specify the start offset of a segment relative to the start of the text. They must follow the following rules:
- elements must be in ascending order and must not have duplicates
- elements must not exceed the text length
The segments field may be left null if the entire text content doesn't require segmentation.

A segmentation example:

text = "R1R2R3" + "R4R5R6"

R1 to R6 are right-to-left characters. The quotation marks are part of the text. The text is 13 characters long.

segments = null: the entire text content will be reordered and thus the two R2L segments swapped (as per the bidi algorithm). visual display (rendered on screen) = "R6R5R4" + "R3R2R1"

segments = [0, 5, 8]: "R1R2R3" will be reordered, followed by [blank]+[blank] and "R4R5R6". visual display = "R3R2R1" + "R6R5R4"

When the segmentsChars field is used in conjunction with the text, the character at segmentsChars[i] is inserted at the offset specified by segments[i]. When both fields are set, the rules for the segments field are less restrictive:
- elements must be in ascending order, duplicates are allowed
- elements must not exceed the text length
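The effect of the segments values in the example can be simulated with a toy sketch (plain Python; each Rn is modeled as a single letter, and plain reversal stands in for the real bidi algorithm, which works here because each segment is uniformly right-to-left or neutral):

```python
def reorder(text, segments):
    # Reverse each segment independently; segment i spans
    # text[segments[i]:segments[i+1]] (the last segment runs to the end).
    bounds = list(segments) + [len(text)]
    pieces = [text[bounds[i]:bounds[i + 1]] for i in range(len(segments))]
    return "".join(piece[::-1] for piece in pieces)

text = '"abc" + "def"'  # models "R1R2R3" + "R4R5R6" (13 characters)

# One segment covering everything: the whole run is reordered as one block.
print(reorder(text, [0]))        # "fed" + "cba"

# segments = [0, 5, 8]: each segment is reordered on its own.
print(reorder(text, [0, 5, 8]))  # "cba" + "fed"
```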
- Since:
- 3.8
- See Also:
- Serialized Form
Field Summary
Fields inherited from class org.eclipse.swt.events.TypedEvent
data, display, time, widget
Fields inherited from class java.util.EventObject
source
Field Detail
lineOffset
public int lineOffset

The start offset of the lineText relative to text (always zero for single line widget)
segments
public int[] segments

Text ranges that should be treated as separate segments (e.g. for bidi reordering)
segmentsChars
public char[] segmentsChars

Characters to be used in the segment boundaries (optional)
Problem is not reproducible or outdated
Ok so I'm making an endless runner and I need it to instantiate the next part of the track every time the player reaches a "Marker" object. The collisions work and the next track section instantiates for the first marker. The position where the track is created is based on an object that is a child of the marker (which is also part of the track prefab). My problem is that the 2nd section of track does not get instantiated, and I do not believe the collisions are working properly either (the debug I have running does not show anything for the second marker). I also changed the speed of the player to make sure that wasn't a problem.
Here's the code:
using UnityEngine;
using System.Collections;

public class MarkerCollisionCheck : MonoBehaviour {

    public GameObject objectToSpawn = null;
    //public GameObject spawnPoint = null;
    public bool spawning = false;

    void OnTriggerEnter(Collider other)
    {
        if(other.tag == "Player" && spawning == false)
        {
            spawning = true;
            Debug.Log (spawning);
            //Instantiate(objectToSpawn, spawnPoint.transform.Find("GroundSpawnPoint").position, Quaternion.identity);
            Instantiate(objectToSpawn, transform.FindChild("GroundSpawnPoint").position, Quaternion.identity);
            spawning = false;
        }
    }
}
So what could be wrong? If you have any questions just ask :D
Thank You, expat1999
So, have you debugged whether the trigger happens or not?
void OnTriggerEnter(Collider other)
{
    Debug.Log (other);
    Debug.Log (other.tag);
    Debug.Log (spawning);
    if(other.tag == "Player" && spawning == false)
    {
Also, if you are not using multithreading, your variable spawning is not needed.
Thank you! While I may not have been very thorough when trying to figure this out for myself, I definitely will remember that in the future.
It turns out the "spawning" variable (which I accidentally left in there for a now-obsolete script) was always true when the prefabs were spawned. I am still not sure why that is, but getting rid of the variable fixed the original problem.
Thank You: expat.
if __name__ == "__main__"
On 06/04/2016 at 17:54, xxxxxxxx wrote:
I seem to be having a hard time understanding why this line is here.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    print doc

if __name__ == "__main__":
    main()
I understand that this quick sample will execute the main() function. What's the difference then if the code does this?
import c4d

if __name__ == "__main__":
    doc = c4d.documents.GetActiveDocument()
    print doc
Obviously it's not going to run a main() function because now it doesn't exist.
You get the same result though.
Is one way preferred over another?
What would be the reason to use one over the other?
I feel like I'm mostly confident in programming but this seems like a glaring oversight in fundamentals on my part as to why I don't understand this.
On 07/04/2016 at 02:06, xxxxxxxx wrote:
Hello,
The line "if __name__ == "__main__"" is a typical Python idiom. It is used to distinguish between loading the code (in a module) and executing the code.
Using a main() function makes no difference, but like any other pattern it is useful for organizing the code.
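Outside of Cinema 4D, the difference between importing and executing can be demonstrated in a few lines of plain Python (the module name mymodule is made up for the demo):

```python
import textwrap, types

src = textwrap.dedent("""
    ran_main = False

    def main():
        global ran_main
        ran_main = True

    if __name__ == "__main__":
        main()
""")

# Simulate `import mymodule`: __name__ is the module's name,
# so the guard is False and main() is NOT called.
imported = types.ModuleType("mymodule")
exec(compile(src, "mymodule.py", "exec"), imported.__dict__)
print(imported.ran_main)  # False

# Simulate running the file directly: __name__ is "__main__",
# so the guard is True and main() IS called.
executed = types.ModuleType("__main__")
exec(compile(src, "mymodule.py", "exec"), executed.__dict__)
print(executed.ran_main)  # True
```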
See also
best wishes,
Sebastian
On 15/04/2016 at 09:12, xxxxxxxx wrote:
Hello Herbie,
was your question answered?
Best wishes,
Sebastian
flutter_timer 0.0.5
flutter_timer #
A flutter timer that shows time duration from initial time.
Screenshot #
Example #
import 'package:flutter/material.dart';
import 'package:flutter_timer/flutter_timer.dart';

class TimerPage extends StatefulWidget {
  @override
  _TimerPageState createState() => _TimerPageState();
}

class _TimerPageState extends State<TimerPage> {
  bool running = false;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            TikTikTimer(
              initialDate: DateTime.now(),
              running: running,
              height: 150,
              width: 150,
              backgroundColor: Colors.indigo,
              timerTextStyle: TextStyle(color: Colors.white, fontSize: 20),
              borderRadius: 100,
              isRaised: true,
              tracetime: (time) {
                // print(time.getCurrentSecond);
              },
            ),
            Row(
              mainAxisAlignment: MainAxisAlignment.spaceAround,
              children: <Widget>[
                RaisedButton(
                  child: Text(
                    'Start',
                    style: TextStyle(color: Colors.white),
                  ),
                  color: Colors.green,
                  onPressed: () {
                    try {
                      if (running == false)
                        setState(() {
                          running = true;
                        });
                    } on Exception {}
                  },
                ),
                RaisedButton(
                  child: Text(
                    'Stop',
                    style: TextStyle(color: Colors.white),
                  ),
                  color: Colors.red,
                  onPressed: () {
                    if (running == true)
                      setState(() {
                        running = false;
                      });
                  },
                ),
              ],
            ),
          ],
        ),
      ),
    );
  }
}
How to delete all the subnets from a VPC using boto3?
You can refer to this question here:
You can use the code for deleting the subnets as follows:
import boto3
ec2 = boto3.resource('ec2')
ec2client = ec2.meta.client
vpc = ec2.Vpc('vpc-01250c74f7a4d1236')
for subnet in vpc.subnets.all():
    subnet.delete()
Hope this helps.
GitHub repository
MicroPython works great on ESP32, but the most serious issue is still (as on most other MicroPython boards) the limited amount of free memory. The repository can be used to build MicroPython for modules/boards with SPIRAM as well as for regular ESP32 modules/boards without SPIRAM.
As of Sep 18, 2017 full support for psRAM is included into esp-idf and xtensa toolchain.
Building on Linux, MacOS and Windows is supported.
You can support this project by donating via PayPal
ESP32 can use external SPI RAM (psRAM) to expand available RAM up to 16MB.
Currently, there are several modules & development boards which incorporate 4MB of SPIRAM:
ESP-WROVER-KIT boards from Espressif, AnalogLamb or Electrodragon.
ESP-WROVER from Espressif, AnalogLamb or Electrodragon.
ALB32-WROVER from AnalogLamb.
S01 and L01 OEM modules from Pycom.
The repository contains all the tools and sources necessary to build working MicroPython firmware which can fully use the advantages of 4MB (or more) of SPIRAM
There is a huge difference between MicroPython running with less than 100KB of free memory and running with 4MB of free memory.
Some basic documentation specific to this MicroPython port is available. It will soon be updated to include the documentation for all added/changed modules.
Some examples can be found in modules_examples directory.
The MicroPython firmware is built as an esp-idf component
This means the regular esp-idf menuconfig system can be used for configuration. Besides the ESP32 configuration itself, some MicroPython options can also be configured via menuconfig.
This way many features not available in standard ESP32 MicroPython are enabled, like unicore/dualcore, all Flash speed/mode options etc. No manual sdkconfig.h editing and tweaking is necessary.
Features and some differences from standard MicroPython ESP32 build
- MicroPython build is based on latest build (1.9.2) from main MicroPython repository.
- ESP32 build is based on MicroPython's ESP32 build with added changes needed to build on ESP32 with SPIRAM and with the esp-idf build system.
- Default configuration for SPIRAM build has 2MB of MicroPython heap, 20KB of MicroPython stack, ~200KB of free DRAM heap for C modules and functions
- MicroPython can be built in unicore (FreeRTOS & MicroPython task running only on the first ESP32 core, or dualcore configuration (MicroPython task running on ESP32 App core).
- ESP32 Flash can be configured in any mode, QIO, QOUT, DIO, DOUT
- BUILD.sh script is provided to make building MicroPython firmware as easy as possible
- Internal filesystem is built with esp-idf wear leveling driver, so there is less danger of damaging the flash with frequent writes. File system parameters (start address, size, ...) can be set via menuconfig.
- sdcard module is included which uses the esp-idf sdmmc driver and can work in SD mode (1-bit and 4-bit) or in SPI mode (the sd card can be connected to any pins). On ESP-WROVER-KIT it works without changes; for information on how to connect the sdcard on other boards, check the documentation.
- Native ESP32 VFS support is used for spi Flash (FatFS or SPIFFS) & sdcard filesystems.
- SPIFFS filesystem support, can be used instead of FatFS on SPI Flash. Configurable via menuconfig
- RTC Class is added to machine module, including methods for synchronization of system time to ntp server, deepsleep, wakeup from deepsleep on external pin level, ...
- Time zone can be configured via menuconfig and is used when synchronizing time from an NTP server
- File timestamps are correctly set to system time both on the internal FAT filesystem and on sdcard
- Built-in ymodem module for fast transfer of text or binary files of any size to/from host. Uses the same uart on which REPL runs.
- Some additional frozen modules are added, like pye editor, urequests, functools, logging, ...
- Btree module is included, can be Enabled/Disabled via menuconfig.
- _threads module is greatly improved, inter-thread notifications and messaging included
- Neopixel module using ESP32 RMT peripheral with many new features, unlimited number of pixels
- i2c module uses ESP32 hardware i2c driver
- spi module uses ESP32 hardware spi driver
- curl module added, http/https get,post, send mail (including gMail), ftp client (get, put, list)
- ssh module added, sftp get, put, list, mkdir, execute any command on host
- display module added with full support for spi TFT displays
- DHT module implemented using ESP32 RMT peripheral
- mqtt module added, implemented in C, runs in separate task
- telnet module added, connect to **REPL via WiFi** using telnet protocol
- ftp server module added, runs as separate ESP32 task
- GSM module with PPPoS support; all network functions work the same as with WiFi; SMS, AT commands, ...
- NVS support in machine module
- Eclipse project files are included. To include the project into Eclipse goto File->Import->Existing Projects into Workspace->Select root directory->[select MicroPython_BUILD directory]->Finish. After opening, rebuild the index.
How to Build
Clone the repository:
git clone
Go to the MicroPython_BUILD directory.
To change some ESP32 & MicroPython options, run:
./BUILD.sh menuconfig
Then run the build:
./BUILD.sh
If N is too high the build may fail; if that happens, run the build again or run it without the -j option.
If no errors are detected, you can now flash the MicroPython firmware to your board. Run:
./BUILD.sh flash
You can also run ./BUILD.sh monitor to use esp-idf's terminal program, it will reset the board automatically.
BUILD.sh
Included BUILD.sh script makes building MicroPython firmware easy.
Usage:
- ./BUILD.sh - run the build, create MicroPython firmware
- ./BUILD.sh -jn - run the build on multicore system, much faster build. Replace n with the number of cores on your system
- ./BUILD.sh menuconfig - run menuconfig to configure ESP32/MicroPython
- ./BUILD.sh clean - clean the build
- ./BUILD.sh flash - flash MicroPython firmware to ESP32
- ./BUILD.sh erase - erase the whole ESP32 Flash
- ./BUILD.sh monitor - run esp-idf terminal program
- ./BUILD.sh makefs - create SPIFFS file system image which can be flashed to ESP32
- ./BUILD.sh flashfs - flash SPIFFS file system image to ESP32, if not created, create it first
- ./BUILD.sh copyfs - flash the default SPIFFS file system image to ESP32
- ./BUILD.sh makefatfs - create FatFS file system image which can be flashed to ESP32
- ./BUILD.sh flashfatfs - flash FatFS file system image to ESP32, if not created, create it first
- ./BUILD.sh copyfatfs - flash the default FatFS file system image to ESP32
To build with SPIRAM support:
In menuconfig select → Component config → ESP32-specific → Support for external, SPI-connected RAM
In menuconfig select → Component config → ESP32-specific → SPI RAM config → Make RAM allocatable using heap_caps_malloc
After the successful build the firmware files will be placed into firmware directory. flash.sh script will also be created which can be used for flashing the firmware without building it first.
Using SPIFFS filesystem
The SPIFFS filesystem can be used on the internal spi Flash instead of FatFS.
If you want to use it, enable it via menuconfig → MicroPython → File systems → Use SPIFFS
A prepared image file can be flashed to the ESP32; if no image is flashed, the filesystem will be formatted after first boot.
A SPIFFS image can be prepared on the host and flashed to the ESP32:
Copy the files to be included on spiffs into components/spiffs_image/image/ directory. Subdirectories can also be added.
To create the image, execute:
./BUILD.sh makefs
To flash the created image to the ESP32, execute:
./BUILD.sh flashfs
To flash the default image to the ESP32, execute:
./BUILD.sh copyfs
Some examples
Using new machine methods and RTC:
import machine

rtc = machine.RTC()
rtc.init((2017, 6, 12, 14, 35, 20))
rtc.now()

# <ntp_server> can be an empty string, then the default server is used ("pool.ntp.org")
rtc.ntp_sync(server="<ntp_server>" [, update_period=])
rtc.synced()  # returns True if time synchronized to NTP server

# wake up from deepsleep on pin level
rtc.wake_on_ext0(Pin, level)
rtc.wake_on_ext1(Pin, level)
machine.deepsleep(time_ms)

machine.wake_reason()       # returns tuple with reset & wakeup reasons
machine.wake_description()  # returns tuple with strings describing reset & wakeup reasons
Mounting the sdcard:
import uos
uos.mountsd()
uos.listdir('/sd')
>>> import uos
>>> uos.mountsd(True)
---------------------
 Mode: SD (4bit)
 Name: NCard
 Type: SDHC/SDXC
Speed: default speed (25 MHz)
 Size: 15079 MB
  CSD: ver=1, sector_size=512, capacity=30881792 read_bl_len=9
  SCR: sd_spec=2, bus_width=5
>>> uos.listdir()
['overlays', 'bcm2708-rpi-0-w.dtb', ......
rst:0x1 (POWERON_RESET),boot:0x30010,len:4
load:0x3fff0014,len:5656
load:0x40078000,len:0
ho 12 tail 0 room 4
load:0x40078000,len:13220
entry 0x40078fe4
W (36) rtc_clk: Possibly invalid CONFIG_ESP32_XTAL_FREQ setting (40MHz). Detected 40 MHz.
I (59) boot: ESP-IDF ESP32_LoBo_v1.9.1-13-gfecf988-dirty 2nd stage bootloader
I (60) boot: compile time 21:07:29
I (108) boot: Enabling RNG early entropy source...
I (108) boot: SPI Speed : 40MHz
I (108) boot: SPI Mode : DIO
I (115) boot: SPI Flash Size : 4MB
I (128) boot: Partition Table:
I (139) boot: ## Label Usage Type ST Offset Length
I (162) boot: 0 nvs WiFi data 01 02 00009000 00006000
I (185) boot: 1 phy_init RF data 01 01 0000f000 00001000
I (209) boot: 2 MicroPython factory app 00 00 00010000 00270000
I (232) boot: 3 internalfs Unknown data 01 81 00280000 00140000
I (255) boot: End of partition table
I (268) esp_image: segment 0: paddr=0x00010020 vaddr=0x3f400020 size=0x48a74 (297588) map
I (613) esp_image: segment 1: paddr=0x00058a9c vaddr=0x3ffb0000 size=0x07574 ( 30068) load
I (650) esp_image: segment 2: paddr=0x00060018 vaddr=0x400d0018 size=0xc83f4 (820212) map
0x400d0018: _stext at ??:?
I (1525) esp_image: segment 3: paddr=0x00128414 vaddr=0x3ffb7574 size=0x052d0 ( 21200) load
I (1551) esp_image: segment 4: paddr=0x0012d6ec vaddr=0x40080000 size=0x00400 ( 1024) load
0x40080000: _iram_start at /home/LoBo2_Razno/ESP32/MicroPython/MicroPython_ESP32_psRAM_LoBo/Tools/esp-idf/components/freertos/./xtensa_vectors.S:1675
I (1553) esp_image: segment 5: paddr=0x0012daf4 vaddr=0x40080400 size=0x1a744 (108356) load
I (1711) esp_image: segment 6: paddr=0x00148240 vaddr=0x400c0000 size=0x0006c ( 108) load
I (1712) esp_image: segment 7: paddr=0x001482b4 vaddr=0x50000000 size=0x00400 ( 1024) load
I (1794) boot: Loaded app from partition at offset 0x10000
I (1794) boot: Disabling RNG early entropy source...
I (1800) spiram: SPI RAM mode: flash 40m sram 40m
I (1812) spiram: PSRAM initialized, cache is in low/high (2-core) mode.
I (1834) cpu_start: Pro cpu up.
I (1846) cpu_start: Starting app cpu, entry point is 0x400814e4
0x400814e4: call_start_cpu1 at /home/LoBo2_Razno/ESP32/MicroPython/MicroPython_ESP32_psRAM_LoBo/Tools/esp-idf/components/esp32/./cpu_start.c:219
I (0) cpu_start: App cpu up.
I (4612) spiram: SPI SRAM memory test OK
I (4614) heap_init: Initializing. RAM available for dynamic allocation:
I (4615) heap_init: At 3FFAE2A0 len 00001D60 (7 KiB): DRAM
I (4633) heap_init: At 3FFC30C0 len 0001CF40 (115 KiB): DRAM
I (4653) heap_init: At 3FFE0440 len 00003BC0 (14 KiB): D/IRAM
I (4672) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (4692) heap_init: At 4009AB44 len 000054BC (21 KiB): IRAM
I (4712) cpu_start: Pro cpu start user code
I (4777) cpu_start: Starting scheduler on PRO CPU.
I (2920) cpu_start: Starting scheduler on APP CPU.

FreeRTOS running on BOTH CORES, MicroPython task started on App Core.

uPY stack size = 19456 bytes
uPY heap size = 2097152 bytes (in SPIRAM using heap_caps_malloc)

Reset reason: Power on reset
Wakeup: Power on wake

I (3130) phy: phy_version: 359.0, e79c19d, Aug 31 2017, 17:06:07, 0, 0
Starting WiFi ...
WiFi started
Synchronize time from NTP server ...
Time set

MicroPython ESP32_LoBo_v2.0.2 - 2017-09-19 on ESP32 board with ESP32
Type "help()" for more information.
>>>
>>> import micropython, machine
>>> micropython.mem_info()
stack: 736 out of 19456
GC: total: 2049088, used: 6848, free: 2042240
 No. of 1-blocks: 37, 2-blocks: 9, max blk sz: 329, max free sz: 127565
>>> machine.heap_info()
Free heap outside of MicroPython heap: total=2232108, SPISRAM=2097108, DRAM=135000
>>>
>>> a = ['esp32'] * 200000
>>>
>>> a[123456]
'esp32'
>>>
>>> micropython.mem_info()
stack: 736 out of 19456
GC: total: 2049088, used: 807104, free: 1241984
 No. of 1-blocks: 44, 2-blocks: 13, max blk sz: 50000, max free sz: 77565
>>>
Tested on ESP-WROVER-KIT v3, Adafruit HUZZAH32 - ESP32 Feather
Serial port programming
This is a step-by-step guide to using the serial port from a program running under Linux; it was written for the Raspberry Pi serial port with the Raspbian Wheezy distribution. However, the same code should work on other systems.
Step 1: Connect to a terminal emulator using a PC
This step is not essential, but it is invaluable for checking that the hardware is working on a new system.
Follow the instructions at RPi_Serial_Connection#Connection_to_a_PC, so that you end up with your Pi's serial port connected to a PC, running a terminal emulator such as minicom or PuTTY.
The default Wheezy installation sends console messages to the serial port as it boots, and runs getty so you can log in using the terminal emulator. If you can do this, the serial port hardware is working.
Troubleshooting
Step 2: Test with Python and a terminal emulator
You will now need to edit the files /etc/inittab and /boot/cmdline.txt so that the console no longer uses the serial port.
We will now write a simple Python program which we can talk to with the terminal emulator. You will need to install the PySerial package:
sudo apt-get install python-serial
Now, on the Raspberry Pi, type the following code into a text editor, taking care to get the indentation correct:
import serial

port = serial.Serial("/dev/ttyAMA0", baudrate=115200, timeout=3.0)
while True:
    port.write("\r\nSay something:")
    rcv = port.read(10)
    port.write("\r\nYou sent:" + repr(rcv))
Save the result as file serialtest.py, and then run it with:
python serialtest.py
If all is working, you should see the following lines appearing repeatedly, one every 3 seconds, on the terminal emulator:
Say something:
You sent:''
Try typing some characters in the terminal emulator window. You will not see the characters you type appear straight away - instead you will see something like this:
Say something:
You sent:'abcabc'
If you typed Enter in the terminal emulator, it will appear as the character sequence \r - this is Python's way of representing the ASCII "CR" (Control-M) character.
Troubleshooting
For other problems (e.g. text appears corrupted) refer to the troubleshooting table in Step 1.
More about reading serial input
The serial connection we are using above is:
- bi-directional - the PC transmits characters (actually, 8-bit values which are interpreted as ASCII characters) which are received by the Pi, and the Pi can transmit characters which are received by the PC.
- full-duplex - meaning that the PC-to-Pi transmission can happen at the same time as the Pi-to-PC transmission
- byte-oriented - each byte is transmitted and received independently of the next byte. In other words, the serial communication does not group transmitted data into packets, or lines of text; if you want to send messages longer than one byte, you will need to add your own means of grouping bytes together.
So, the line rcv = port.read(10) will wait for characters to arrive from the PC, and:
- if it has read 10 characters, the call to read() will finish, returning those 10 characters as a string.
- if it has been waiting for the timeout period given to serial.Serial() - in this case, 3 seconds - it will return whatever characters have arrived so far. (If no characters arrive, this will return an empty string).
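Because the connection is byte-oriented, grouping received bytes into lines (or other messages) is left to the application. One way to do this is sketched below in pure Python, so it runs without serial hardware; the `LineFramer` helper is our own invention, not part of PySerial, and in a real program you would feed it the chunks returned by `port.read()`:

```python
class LineFramer:
    """Accumulates arbitrary chunks (as returned by port.read())
    and yields complete CR-terminated lines."""

    def __init__(self, delimiter="\r"):
        self.delimiter = delimiter
        self.buffer = ""

    def feed(self, chunk):
        # Append the newly received chunk, then split out any complete lines.
        self.buffer += chunk
        lines = []
        while self.delimiter in self.buffer:
            line, self.buffer = self.buffer.split(self.delimiter, 1)
            lines.append(line)
        return lines

framer = LineFramer()
print(framer.feed("abc"))      # no delimiter seen yet -> []
print(framer.feed("def\rgh"))  # one complete line -> ['abcdef']
print(framer.buffer)           # leftover, waiting for more input -> 'gh'
```

Partial reads caused by the timeout are handled naturally: whatever arrives is buffered until a delimiter completes the line.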
Any characters which arrive after the read() call has finished will be saved (buffered) by the kernel and can be retrieved the next time you call read(). However, there is a limit to how many characters can be saved; once the buffer is full characters will be lost. | https://elinux.org/index.php?title=Serial_port_programming&oldid=244046 | CC-MAIN-2019-39 | en | refinedweb |
Liftbridge
Table of Contents
Key Features
- Log-based API for NATS
- Replicated for fault-tolerance
- Horizontally scalable
- Wildcard subscription support
- At-least-once delivery support
- Message key-value support
- Log compaction by key
- Single static binary (~16MB)
- Designed to be high-throughput (more on this to come)
- Supremely simple
What is Liftbridge?
Why was it created?
Liftbridge was designed to bridge the gap between sophisticated log-based messaging systems like Apache Kafka and Apache Pulsar and simpler, cloud-native systems. There is no ZooKeeper or other unwieldy dependencies, no JVM, and no complicated API.
Why not NATS Streaming?
NATS Streaming provides a similar log-based messaging solution. However, it is an entirely separate protocol built on top of NATS. NATS is simply the transport for NATS Streaming. This means there is no "cross-talk" between messages published to NATS and messages published to NATS Streaming.
Liftbridge was built to augment NATS with durability rather than providing a completely separate system. NATS Streaming also provides a broader set of features such as durable subscriptions, queue groups, pluggable storage backends, and multiple fault-tolerance modes. Liftbridge aims to have a small API surface area.
The key features that differentiate Liftbridge are the shared message namespace, wildcards, log compaction, and horizontal scalability. NATS Streaming replicates channels to the entire cluster through a single Raft group. Liftbridge allows replicating to a subset of the cluster, and each stream is replicated independently. This allows the cluster to scale horizontally.
How does it scale?
Liftbridge scales horizontally by adding more brokers to the cluster and creating more streams which are distributed among the cluster. In effect, this splits out message routing from storage and consumption, which allows Liftbridge to scale independently and eschew subject partitioning. Alternatively, streams can join a load-balance group, which effectively load balances a NATS subject among the streams in the group without affecting delivery to other streams.
What about HA?
High availability is achieved by replicating the streams. When a stream is created, the client specifies a replicationFactor, which determines the number of brokers to replicate the stream. Each stream has a leader that is responsible for handling reads and writes. Followers then replicate the log from the leader. If the leader fails, one of the followers steps up to replace it. The replication protocol closely resembles that of Kafka, with much nuance to avoid data consistency problems. See the replication protocol documentation for more details.
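The leader-plus-followers selection can be pictured with a small sketch. This is illustrative only (written in Python rather than Liftbridge's Go, and the name-derived offset is purely an assumption, not Liftbridge's actual placement logic):

```python
def assign_replicas(brokers, stream_name, replication_factor):
    """Pick a leader and followers for a stream from the cluster's brokers.

    Illustrative only: a deterministic offset derived from the stream name
    spreads leaders across brokers; real Liftbridge placement may differ.
    """
    if replication_factor > len(brokers):
        raise ValueError("replication factor exceeds cluster size")
    start = sum(ord(c) for c in stream_name) % len(brokers)
    chosen = [brokers[(start + i) % len(brokers)]
              for i in range(replication_factor)]
    # The first chosen broker leads; the rest replicate its log.
    return {"leader": chosen[0], "followers": chosen[1:]}

print(assign_replicas(["a", "b", "c"], "foo", 2))
# {'leader': 'a', 'followers': ['b']}
```

Because each stream is assigned independently, different streams end up with different leaders, which is what lets the cluster scale horizontally.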
What about performance?
Benchmarks soon to come...
Is it production-ready?
No, this project is early and still evolving.
Installation
$ go get github.com/liftbridge-io/liftbridge
Quick Start
Liftbridge currently relies on an externally running NATS server. By default, it will connect to a NATS server running on localhost. The --nats-servers flag allows configuring the NATS server(s) to connect to.
Also note that Liftbridge is clustered by default and relies on Raft for coordination. This means a cluster of three or more servers is normally run for high availability, and Raft manages electing a leader. A single server is actually a cluster of size 1. For safety purposes, the server cannot elect itself as leader without using the --raft-bootstrap-seed flag, which will indicate to the server to elect itself as leader. This will start a single server that can begin handling requests. Use this flag with caution as it should only be set on one server when bootstrapping a cluster.
$ liftbridge --raft-bootstrap-seed INFO[2019-06-28 01:12:45] Server ID: OoVo48CniWsjYzlgGtKLB6 INFO[2019-06-28 01:12:45] Namespace: liftbridge-default INFO[2019-06-28 01:12:45] Retention Policy: [Age: 1 week, Compact: false] INFO[2019-06-28 01:12:45] Starting server on :9292... INFO[2019-06-28 01:12:46] Server became metadata leader, performing leader promotion actions
Once a leader has been elected, other servers will automatically join the cluster. We set the --data-dir and --port flags to avoid clobbering the first server.
$ liftbridge --data-dir /tmp/liftbridge/server-2 --port=9293 INFO[2019-06-28 01:15:21] Server ID: zsQToZyzR8WZfAUBiHSFvX INFO[2019-06-28 01:15:21] Namespace: liftbridge-default INFO[2019-06-28 01:15:21] Retention Policy: [Age: 1 week, Compact: false] INFO[2019-06-28 01:15:21] Starting server on :9293...
We can also bootstrap a cluster by providing the explicit cluster configuration. To do this, we provide the IDs of the participating peers in the cluster using the --raft-bootstrap-peers flag. Raft will then handle electing a leader.
$ liftbridge --raft-bootstrap-peers server-2,server-3
Configuration
In addition to the command-line flags, Liftbridge can be fully configured using a configuration file which is passed in using the --config flag.
$ liftbridge --config liftbridge.conf
An example configuration file is shown below.
listen: localhost:9293 data.dir: /tmp/liftbridge/server-2 log.level: debug # Define NATS cluster to connect to. nats { servers: ["nats://localhost:4300", "nats://localhost:4301"] } # Specify message log settings. log { retention.max.age: "24h" } # Specify cluster settings. clustering { server.id: server-2 raft.logging: true raft.bootstrap.seed: true replica.max.lag.time: "20s" }
See the configuration documentation for full details on server configuration.
Client Libraries
Currently, there is only a high-level Go client library available. However, Liftbridge uses gRPC for its client API, so client libraries can be generated quite easily using the Liftbridge protobuf definitions .
Roadmap
Acknowledgements
Garbage Collector
The garbage collector API is intended to serve a number of complementary needs. First, it should be possible for all C-compatible code in an application to interact with the garbage collector, not just the D code specifically. This both simplifies the interaction of mixed-language applications, and allows the D runtime to more easily leverage existing garbage collectors with little or no rewriting or wrapping in a D code layer. Second, while the bulk of D applications desire and expect the existence of a garbage collector, some applications may not, be it for code size or for other reasons. For these reasons, the garbage collector interface was designed as a set of specifically defined C routines, and requirements are such that not all API calls must be fully functional for an implementation to be considered conforming.
Interface Definition
This is a concise definition of the routines that every garbage collector implementation must expose:
extern (C) void gc_init();
extern (C) void gc_term();

extern (C) void gc_enable();
extern (C) void gc_disable();
extern (C) void gc_collect();
extern (C) void gc_minimize();

extern (C) uint gc_getAttr( void* p );
extern (C) uint gc_setAttr( void* p, uint a );
extern (C) uint gc_clrAttr( void* p, uint a );

extern (C) void* gc_malloc( size_t sz, uint ba = 0 );
extern (C) void* gc_calloc( size_t sz, uint ba = 0 );
extern (C) void* gc_realloc( void* p, size_t sz, uint ba = 0 );
extern (C) size_t gc_extend( void* p, size_t mx, size_t sz );
extern (C) size_t gc_reserve( size_t sz );
extern (C) void gc_free( void* p );

extern (C) void* gc_addrOf( void* p );
extern (C) size_t gc_sizeOf( void* p );

struct BlkInfo
{
    void*  base;
    size_t size;
    uint   attr;
}

extern (C) BlkInfo gc_query( void* p );

extern (C) void gc_addRoot( void* p );
extern (C) void gc_addRange( void* p, size_t sz );
extern (C) void gc_removeRoot( void* p );
extern (C) void gc_removeRange( void* p );
Bit significance of the uint value used for block attribute passing is as follows, by position:
0: Block contains a class - finalize this block on collection
1: Block contains no pointers - do not scan through this block on collections
2: Block is pinned - do not move this block during collections
3-15: Reserved for future use by the D standard library
16-31: Reserved for internal use by the garbage collector and compiler
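For illustration, the bit layout above can be modeled as flag constants. This Python sketch is hypothetical, merely mirroring the documented bit positions (the real interface is the C-compatible gc_getAttr/gc_setAttr/gc_clrAttr routines defined above):

```python
# Block attribute bits, mirroring the layout documented above.
FINALIZE = 1 << 0  # block contains a class - finalize on collection
NO_SCAN  = 1 << 1  # block contains no pointers - do not scan on collections
NO_MOVE  = 1 << 2  # block is pinned - do not move during collections

def set_attr(attrs, bits):
    """Return the attribute word with the given bits set (cf. gc_setAttr)."""
    return attrs | bits

def clr_attr(attrs, bits):
    """Return the attribute word with the given bits cleared (cf. gc_clrAttr)."""
    return attrs & ~bits

attrs = set_attr(0, FINALIZE | NO_MOVE)
print(attrs)                  # 5 (bits 0 and 2)
print(bool(attrs & NO_SCAN))  # False
attrs = clr_attr(attrs, NO_MOVE)
print(attrs)                  # 1 (only FINALIZE remains)
```

Bits 3-15 and 16-31 would be represented the same way, but are reserved per the list above.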
Implementation Requirements
The package name "gc" is reserved for use by the garbage collector, and all modules defined should live within this namespace to avoid collisions with user code. More requirements coming soon. | http://www.dsource.org/projects/druntime/wiki/GarbageCollectorD2_0 | CC-MAIN-2019-39 | en | refinedweb |
getusershell, setusershell, endusershell - get permitted user shells
Current Version:
Linux Kernel - 3.80
Synopsis
#include <unistd.h>

char *getusershell(void);
void setusershell(void);
void endusershell(void);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
getusershell(), setusershell(), endusershell():
- _BSD_SOURCE || (_XOPEN_SOURCE && _XOPEN_SOURCE < 500)
Description
Return Value
The getusershell() function returns NULL on end-of-file.
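For illustration only, the lookup behavior can be mimicked in Python. Treating blank lines and `#` comments as skippable is an assumption of this sketch, not something the text above specifies:

```python
def user_shells(text):
    """Yield one shell path per iteration, like repeated getusershell()
    calls walking /etc/shells. Skipping blank lines and '#' comments is
    an assumption of this sketch."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            yield line

sample = "# /etc/shells: valid login shells\n/bin/sh\n/bin/bash\n"
print(list(user_shells(sample)))  # ['/bin/sh', '/bin/bash']
```

Exhausting the generator corresponds to getusershell() returning NULL at end-of-file; creating a fresh generator plays the role of setusershell() rewinding the file.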
Files
/etc/shells
Attributes
Conforming To:
Modified 17:53 1993 by Rik Faith (faith@cs.unc.edu)
Graphs and Sessions
This guide will be most useful if you intend to use the low-level programming
model directly. Higher-level APIs such as
tf.estimator.Estimator and Keras
hide the details of graphs and sessions from the end user, but this guide may
also be useful if you want to understand how these APIs are implemented.
Why dataflow graphs?
Dataflow is a common
programming model for parallel computing. In a dataflow graph, the nodes
represent units of computation, and the edges represent the data consumed or
produced by a computation. For example, in a TensorFlow graph, the
tf.matmul
operation would correspond to a single node with two incoming edges (the
matrices to be multiplied) and one outgoing edge (the result of the
multiplication).
Dataflow has several advantages that TensorFlow leverages when executing your programs:
Parallelism. By using explicit edges to represent dependencies between operations, it is easy for the system to identify operations that can execute in parallel.
Distributed execution. By using explicit edges to represent the values that flow between operations, it is possible for TensorFlow to partition your program across multiple devices (CPUs, GPUs, and TPUs) attached to different machines. TensorFlow inserts the necessary communication and coordination between devices.
Compilation. TensorFlow's XLA compiler can use the information in your dataflow graph to generate faster code, for example, by fusing together adjacent operations.
Portability. The dataflow graph is a language-independent representation of the code in your model. You can build a dataflow graph in Python, store it in a SavedModel, and restore it in a C++ program for low-latency inference.
What is a tf.Graph?
A tf.Graph contains two relevant kinds of information:
Graph structure. The nodes and edges of the graph, indicating how individual operations are composed together, but not prescribing how they should be used. The graph structure is like assembly code: inspecting it can convey some useful information, but it does not contain all of the useful context that source code conveys.
Graph collections. TensorFlow provides a general mechanism for storing collections of metadata in a tf.Graph. The tf.add_to_collection function enables you to associate a list of objects with a key (where tf.GraphKeys defines some of the standard keys), and tf.get_collection enables you to look up all objects associated with a key. Many parts of the TensorFlow library use this facility: for example, when you create a tf.Variable, it is added by default to collections representing "global variables" and "trainable variables". When you later come to create a tf.train.Saver or tf.train.Optimizer, the variables in these collections are used as the default arguments.
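The collection mechanism can be pictured as a keyed registry. The following pure-Python sketch only illustrates the idea behind tf.add_to_collection / tf.get_collection; it is not TensorFlow's implementation:

```python
class Graph:
    """Toy stand-in showing the idea behind graph collections:
    a mapping from string keys to lists of associated objects."""

    def __init__(self):
        self._collections = {}

    def add_to_collection(self, key, value):
        self._collections.setdefault(key, []).append(value)

    def get_collection(self, key):
        # Like TensorFlow, return an empty list for unknown keys,
        # and a copy so callers cannot mutate the registry.
        return list(self._collections.get(key, []))

g = Graph()
g.add_to_collection("trainable_variables", "w")
g.add_to_collection("trainable_variables", "b")
print(g.get_collection("trainable_variables"))  # ['w', 'b']
print(g.get_collection("unknown"))              # []
```

A consumer such as an optimizer would simply call get_collection with a well-known key to obtain its default working set.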
Building a tf.Graph
Most TensorFlow programs start with a dataflow graph construction phase. In this phase, you invoke TensorFlow API functions that construct new tf.Operation (node) and tf.Tensor (edge) objects and add them to a tf.Graph instance. TensorFlow provides a default graph that is an implicit argument to all API functions in the same context. For example:
- Calling tf.constant(42.0) creates a single tf.Operation that produces the value 42.0, adds it to the default graph, and returns a tf.Tensor that represents the value of the constant.
- Calling tf.matmul(x, y) creates a single tf.Operation that multiplies the values of tf.Tensor objects x and y, adds it to the default graph, and returns a tf.Tensor that represents the result of the multiplication.
- Executing v = tf.Variable(0) adds to the graph a tf.Operation that will store a writeable tensor value that persists between tf.Session.run calls. The tf.Variable object wraps this operation, and can be used like a tensor, which will read the current value of the stored value. The tf.Variable object also has methods such as tf.Variable.assign and tf.Variable.assign_add that create tf.Operation objects that, when executed, update the stored value. (See Variables for more information about variables.)
- Calling tf.train.Optimizer.minimize will add operations and tensors to the default graph that calculate gradients, and return a tf.Operation that, when run, will apply those gradients to a set of variables.
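The construction-then-execution split can be made concrete with a toy dataflow graph. This sketch is illustrative only (the `Op`, `constant`, and `add` names are our own, not TensorFlow's API):

```python
class Op:
    """A node in a toy dataflow graph: a computation plus its input edges."""

    def __init__(self, fn, inputs, name):
        self.fn, self.inputs, self.name = fn, inputs, name

    def run(self):
        # Evaluate the input edges first, then apply this node's computation.
        return self.fn(*[op.run() for op in self.inputs])

def constant(value, name):
    # A source node with no inputs: always produces the same value.
    return Op(lambda: value, [], name)

def add(x, y, name):
    return Op(lambda a, b: a + b, [x, y], name)

# Graph construction phase: nothing is computed yet.
total = add(constant(40, "a"), constant(2, "b"), "total")

# Execution phase (the role tf.Session.run plays for real graphs).
print(total.run())  # 42
```

The point of the split is that `total` is just a description of work: it can be inspected, optimized, or partitioned across devices before anything actually runs.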
Most programs rely solely on the default graph. However,
see Dealing with multiple graphs for more
advanced use cases. High-level APIs such as the
tf.estimator.Estimator API
manage the default graph on your behalf, and--for example--may create different
graphs for training and evaluation.
Naming operations
A tf.Graph object defines a namespace for the tf.Operation objects it contains. TensorFlow automatically chooses a unique name for each operation in your graph, but giving operations descriptive names can make your program easier to read and debug. The TensorFlow API provides two ways to override the name of an operation:
- Each API function that creates a new tf.Operation or returns a new tf.Tensor accepts an optional name argument. For example, tf.constant(42.0, name="answer") creates a new tf.Operation named "answer" and returns a tf.Tensor named "answer:0". If the default graph already contains an operation named "answer", then TensorFlow would append "_1", "_2", and so on to the name, in order to make it unique.
- The tf.name_scope function makes it possible to add a name scope prefix to all operations created in a particular context. The current name scope prefix is a "/"-delimited list of the names of all active tf.name_scope context managers. If a name scope has already been used in the current context, TensorFlow appends "_1", "_2", and so on.
The graph visualizer uses name scopes to group operations and reduce the visual complexity of a graph. See Visualizing your graph for more information.
Note that tf.Tensor objects are implicitly named after the tf.Operation that produces the tensor as output. A tensor name has the form "<OP_NAME>:<i>" where:

- "<OP_NAME>" is the name of the operation that produces it.
- "<i>" is an integer representing the index of that tensor among the operation's outputs.
Placing operations on different devices
If you want your TensorFlow program to use multiple different devices, the
tf.device function provides a convenient way to request that all operations
created in a particular context are placed on the same device (or type of
device).
A device specification has the following form:
/job:<JOB_NAME>/task:<TASK_INDEX>/device:<DEVICE_TYPE>:<DEVICE_INDEX>
where:
- <JOB_NAME> is an alpha-numeric string that does not start with a number.
- <DEVICE_TYPE> is a registered device type (such as GPU or CPU).
- <TASK_INDEX> is a non-negative integer representing the index of the task in the job named <JOB_NAME>. See tf.train.ClusterSpec for an explanation of jobs and tasks.
- <DEVICE_INDEX> is a non-negative integer representing the index of the device, for example, to distinguish between different GPU devices used in the same process.
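As a sketch, a full specification of this form can be parsed as follows (illustrative only; TensorFlow's own parser also accepts partial specs, which this sketch does not handle):

```python
def parse_device_spec(spec):
    """Parse "/job:<J>/task:<T>/device:<TYPE>:<IDX>" into a dict.
    Only the full form shown above is handled in this sketch."""
    parts = spec.strip("/").split("/")
    job = parts[0].split(":", 1)[1]
    task = int(parts[1].split(":", 1)[1])
    # The device component has two colons: "device:<TYPE>:<IDX>".
    _, dev_type, dev_index = parts[2].split(":")
    return {"job": job, "task": task,
            "device_type": dev_type, "device_index": int(dev_index)}

print(parse_device_spec("/job:ps/task:0/device:GPU:1"))
# {'job': 'ps', 'task': 0, 'device_type': 'GPU', 'device_index': 1}
```

Mapping the string form to structured fields like this is what lets the runtime route each operation to a concrete device.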
You do not need to specify every part of a device specification. For example,
if you are running in a single-machine configuration with a single GPU, you
might use
tf.device to pin some operations to the CPU and GPU:
# Operations created outside either context will run on the "best possible"
# device. For example, if you have a GPU and a CPU available, and the operation
# has a GPU implementation, TensorFlow will choose the GPU.
weights = tf.random_normal(...)

with tf.device("/device:CPU:0"):
  # Operations created in this context will be pinned to the CPU.
  img = tf.decode_jpeg(tf.read_file("img.jpg"))

with tf.device("/device:GPU:0"):
  # Operations created in this context will be pinned to the GPU.
  result = tf.matmul(weights, img)
If you are deploying TensorFlow in a typical distributed configuration, you might specify the job name and task ID to place variables on a task in the parameter server job ("/job:ps"), and the other operations on a task in the worker job ("/job:worker"):
with tf.device("/job:ps/task:0"):
  weights_1 = tf.Variable(tf.truncated_normal([784, 100]))
  biases_1 = tf.Variable(tf.zeros([100]))

with tf.device("/job:ps/task:1"):
  weights_2 = tf.Variable(tf.truncated_normal([100, 10]))
  biases_2 = tf.Variable(tf.zeros([10]))

with tf.device("/job:worker"):
  layer_1 = tf.matmul(train_batch, weights_1) + biases_1
  layer_2 = tf.matmul(layer_1, weights_2) + biases_2
tf.device gives you a lot of flexibility to choose placements for individual
operations or broad regions of a TensorFlow graph. In many cases, there are
simple heuristics that work well. For example, the
tf.train.replica_device_setter API can be used with
tf.device to place
operations for data-parallel distributed training. For example, the
following code fragment shows how
tf.train.replica_device_setter applies
different placement policies to
tf.Variable objects and other operations:
with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
  # tf.Variable objects are, by default, placed on tasks in "/job:ps" in a
  # round-robin fashion.
  w_0 = tf.Variable(...)  # placed on "/job:ps/task:0"
  b_0 = tf.Variable(...)  # placed on "/job:ps/task:1"
  w_1 = tf.Variable(...)  # placed on "/job:ps/task:2"
  b_1 = tf.Variable(...)  # placed on "/job:ps/task:0"

  input_data = tf.placeholder(tf.float32)     # placed on "/job:worker"
  layer_0 = tf.matmul(input_data, w_0) + b_0  # placed on "/job:worker"
  layer_1 = tf.matmul(layer_0, w_1) + b_1     # placed on "/job:worker"
Tensor-like objects
Many TensorFlow operations take one or more
tf.Tensor objects as arguments.
For example,
tf.matmul takes two
tf.Tensor objects, and
tf.add_n takes
a list of
n
tf.Tensor objects. For convenience, these functions will accept
a tensor-like object in place of a
tf.Tensor, and implicitly convert it
to a
tf.Tensor using the
tf.convert_to_tensor method. Tensor-like objects
include elements of the following types:
- tf.Tensor
- tf.Variable
- numpy.ndarray
- list (and lists of tensor-like objects)
- Scalar Python types: bool, float, int, str
You can register additional tensor-like types using
tf.register_tensor_conversion_function.
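As a rough illustration of what such a conversion registry does (a plain-Python sketch, not TensorFlow's actual implementation — the real mechanism is tf.convert_to_tensor together with tf.register_tensor_conversion_function):

```python
# Illustrative sketch only: a minimal stand-in for a tensor-conversion
# registry. Converters are tried in registration order, and the first one
# whose predicate matches performs the conversion.
_conversion_functions = []

def register_conversion(predicate, converter):
    """Register a converter for values matching `predicate`."""
    _conversion_functions.append((predicate, converter))

def convert_to_tensor_like(value):
    """Return `value` converted by the first matching registered converter."""
    for predicate, converter in _conversion_functions:
        if predicate(value):
            return converter(value)
    raise TypeError("no conversion registered for %r" % (value,))

# Scalars and lists become (nested) tuples standing in for dense tensors.
register_conversion(lambda v: isinstance(v, (bool, float, int)),
                    lambda v: (v,))
register_conversion(lambda v: isinstance(v, list),
                    lambda v: tuple(v))

print(convert_to_tensor_like(3.0))         # (3.0,)
print(convert_to_tensor_like([1.0, 2.0]))  # (1.0, 2.0)
```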
Executing a graph in a tf.Session
TensorFlow uses the
tf.Session class to represent a connection between the
client program---typically a Python program, although a similar interface is
available in other languages---and the C++ runtime. A
tf.Session object
provides access to devices in the local machine, and remote devices using the
distributed TensorFlow runtime. It also caches information about your
tf.Graph so that you can efficiently run the same computation multiple times.
Creating a tf.Session
If you are using the low-level TensorFlow API, you can create a
tf.Session
for the current default graph as follows:
```python
# Create a default in-process session.
with tf.Session() as sess:
  # ...

# Create a remote session.
with tf.Session("grpc://example.org:2222"):
  # ...
```
Since a
tf.Session owns physical resources (such as GPUs and
network connections), it is typically used as a context manager (in a
with
block) that automatically closes the session when you exit the block. It is
also possible to create a session without using a
with block, but you should
explicitly call
tf.Session.close when you are finished with it to free the
resources.
tf.Session.__init__ accepts three optional arguments:
- target. If this argument is left empty (the default), the session will only use devices in the local machine. However, you may also specify a grpc:// URL to specify the address of a TensorFlow server, which gives the session access to all devices on machines that this server controls. See tf.train.Server for details of how to create a TensorFlow server. For example, in the common between-graph replication configuration, the tf.Session connects to a tf.train.Server in the same process as the client. The distributed TensorFlow deployment guide describes other common scenarios.
- graph. By default, a new tf.Session will be bound to---and only able to run operations in---the current default graph. If you are using multiple graphs in your program (see Programming with multiple graphs for more details), you can specify an explicit tf.Graph when you construct the session.
- config. This argument allows you to specify a tf.ConfigProto that controls the behavior of the session. For example, some of the configuration options include:
  - allow_soft_placement. Set this to True to enable a "soft" device placement algorithm, which ignores tf.device annotations that attempt to place CPU-only operations on a GPU device, and places them on the CPU instead.
  - cluster_def. When using distributed TensorFlow, this option allows you to specify what machines to use in the computation, and provide a mapping between job names, task indices, and network addresses. See tf.train.ClusterSpec.as_cluster_def for details.
  - graph_options.optimizer_options. Provides control over the optimizations that TensorFlow performs on your graph before executing it.
  - gpu_options.allow_growth. Set this to True to change the GPU memory allocator so that it gradually increases the amount of memory allocated, rather than allocating most of the memory at startup.
Using tf.Session.run to execute operations
The
tf.Session.run method is the main mechanism for running a
tf.Operation
or evaluating a
tf.Tensor. You can pass one or more
tf.Operation or
tf.Tensor objects to
tf.Session.run, and TensorFlow will execute the
operations that are needed to compute the result.
tf.Session.run requires you to specify a list of fetches, which determine
the return values, and may be a
tf.Operation, a
tf.Tensor, or
a tensor-like type such as
tf.Variable. These fetches
determine what subgraph of the overall
tf.Graph must be executed to
produce the result: this is the subgraph that contains all operations named in
the fetch list, plus all operations whose outputs are used to compute the value
of the fetches. For example, the following code fragment shows how different
arguments to
tf.Session.run cause different subgraphs to be executed:
```python
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
w = tf.Variable(tf.random_uniform([2, 2]))
y = tf.matmul(x, w)
output = tf.nn.softmax(y)
init_op = w.initializer

with tf.Session() as sess:
  # Run the initializer on `w`.
  sess.run(init_op)

  # Evaluate `output`. `sess.run(output)` will return a NumPy array containing
  # the result of the computation.
  print(sess.run(output))

  # Evaluate `y` and `output`. Note that `y` will only be computed once, and its
  # result used both to return `y_val` and as an input to the `tf.nn.softmax()`
  # op. Both `y_val` and `output_val` will be NumPy arrays.
  y_val, output_val = sess.run([y, output])
```
tf.Session.run also optionally takes a dictionary of feeds, which is a
mapping from
tf.Tensor objects (typically
tf.placeholder tensors) to
values (typically Python scalars, lists, or NumPy arrays) that will be
substituted for those tensors in the execution. For example:
```python
# Define a placeholder that expects a vector of three floating-point values,
# and a computation that depends on it.
x = tf.placeholder(tf.float32, shape=[3])
y = tf.square(x)

with tf.Session() as sess:
  # Feeding a value changes the result that is returned when you evaluate `y`.
  print(sess.run(y, {x: [1.0, 2.0, 3.0]}))  # => "[1.0, 4.0, 9.0]"
  print(sess.run(y, {x: [0.0, 0.0, 5.0]}))  # => "[0.0, 0.0, 25.0]"

  # Raises `tf.errors.InvalidArgumentError`, because you must feed a value for
  # a `tf.placeholder()` when evaluating a tensor that depends on it.
  sess.run(y)

  # Raises `ValueError`, because the shape of `37.0` does not match the shape
  # of placeholder `x`.
  sess.run(y, {x: 37.0})
```
tf.Session.run also accepts an optional
options argument that enables you
to specify options about the call, and an optional
run_metadata argument that
enables you to collect metadata about the execution. For example, you can use
these options together to collect tracing information about the execution:
```python
y = tf.matmul([[37.0, -23.0], [1.0, 4.0]], tf.random_uniform([2, 2]))

with tf.Session() as sess:
  # Define options for the `sess.run()` call.
  options = tf.RunOptions()
  options.output_partition_graphs = True
  options.trace_level = tf.RunOptions.FULL_TRACE

  # Define a container for the returned metadata.
  metadata = tf.RunMetadata()

  sess.run(y, options=options, run_metadata=metadata)

  # Print the subgraphs that executed on each device.
  print(metadata.partition_graphs)

  # Print the timings of each operation that executed.
  print(metadata.step_stats)
```
Visualizing your graph
TensorFlow includes tools that can help you to understand the code in a graph.
The graph visualizer is a component of TensorBoard that renders the
structure of your graph visually in a browser. The easiest way to create a
visualization is to pass a
tf.Graph when creating the
tf.summary.FileWriter:
```python
# Build your graph.
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
w = tf.Variable(tf.random_uniform([2, 2]))
y = tf.matmul(x, w)
# ...
loss = ...
train_op = tf.train.AdagradOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
  # `sess.graph` provides access to the graph used in a `tf.Session`.
  writer = tf.summary.FileWriter("/tmp/log/...", sess.graph)

  # Perform your computation...
  for i in range(1000):
    sess.run(train_op)
    # ...

  writer.close()
```
You can then open the log in
tensorboard, navigate to the "Graph" tab, and
see a high-level visualization of your graph's structure. Note that a typical
TensorFlow graph---especially training graphs with automatically computed
gradients---has too many nodes to visualize at once. The graph visualizer makes
use of name scopes to group related operations into "super" nodes. You can
click on the orange "+" button on any of these super nodes to expand the
subgraph inside.
For more information about visualizing your TensorFlow application with TensorBoard, see the TensorBoard guide.
Programming with multiple graphs
As noted above, TensorFlow provides a "default graph" that is implicitly passed to all API functions in the same context. For many applications, a single graph is sufficient. However, TensorFlow also provides methods for manipulating the default graph, which can be useful in more advanced use cases. For example:
- A tf.Graph defines the namespace for tf.Operation objects: each operation in a single graph must have a unique name. TensorFlow will "uniquify" the names of operations by appending "_1", "_2", and so on to their names if the requested name is already taken. Using multiple explicitly created graphs gives you more control over what name is given to each operation.
- The default graph stores information about every tf.Operation and tf.Tensor that was ever added to it. If your program creates a large number of unconnected subgraphs, it may be more efficient to use a different tf.Graph to build each subgraph, so that unrelated state can be garbage collected.
You can install a different
tf.Graph as the default graph, using the
tf.Graph.as_default context manager:
```python
g_1 = tf.Graph()
with g_1.as_default():
  # Operations created in this scope will be added to `g_1`.
  c = tf.constant("Node in g_1")

  # Sessions created in this scope will run operations from `g_1`.
  sess_1 = tf.Session()

g_2 = tf.Graph()
with g_2.as_default():
  # Operations created in this scope will be added to `g_2`.
  d = tf.constant("Node in g_2")

# Alternatively, you can pass a graph when constructing a `tf.Session`:
# `sess_2` will run operations from `g_2`.
sess_2 = tf.Session(graph=g_2)

assert c.graph is g_1
assert sess_1.graph is g_1

assert d.graph is g_2
assert sess_2.graph is g_2
```
To inspect the current default graph, call
tf.get_default_graph, which
returns a
tf.Graph object:
```python
# Print all of the operations in the default graph.
g = tf.get_default_graph()
print(g.get_operations())
```
Your message dated Wed, 6 Jun 2007 11:03:22 +0200 with message-id <20070606090322.GA7341@.intersec.eu> and subject line Bug#427722: pthread_kill() declaration disappears when compiling with -ansi
- From: "Daniel F. Smith" <dfsmith@almaden.ibm.com>
- Date: Tue, 5 Jun 2007 19:04:27 -0700
- Message-id: <[🔎] 20070606020427.GA14395@porter.almaden.ibm.com>
- Reply-to: dfsmith@almaden.ibm.com

Package: libc6-dev
Version: 2.5-9+b1

When compiling with the -ansi flag in gcc, pthread_kill() is implicitly defined. The old behavior worked with -ansi. See this example.

cat <<EOF >test.c
#include <pthread.h>
#include <signal.h>
void *start(void *arg) {return NULL;}
void test(void)
{
  pthread_t t;
  (void)pthread_create(&t,NULL,start,NULL);
  pthread_kill(t,SIGKILL);
}
EOF

$ gcc -Wall -c test.c
(no warnings)
$ gcc -ansi -Wall -c test.c
test.c: In function 'test':
test.c:9: warning: implicit declaration of function 'pthread_kill'

$ uname -a
Linux porter 2.6.18-4-686 #1 SMP Mon Mar 26 17:17:36 UTC 2007 i686 GNU/Linux
$ ls -l /lib/libc.so.6
lrwxrwxrwx 1 root root 11 Jun 4 13:26 /lib/libc.so.6 -> libc-2.5.so
$ dpkg -s libc6 | grep Version
Version: 2.5-9+b1
$ ls -l /usr/include/signal.h
-rw-r--r-- 1 root root 13312 May 30 03:04 /usr/include/signal.h
$ ls -l /usr/include/bits/pthreadtypes.h
-rw-r--r-- 1 root root 4395 May 30 03:04 /usr/include/bits/pthreadtypes.h
$ tail -20 /usr/include/signal.h | head -6
#if defined __USE_POSIX199506 || defined __USE_UNIX98
/* Some of the functions for handling signals in threaded programs
   must be defined here. */
# include <bits/pthreadtypes.h>
# include <bits/sigthread.h>
#endif /* use Unix98 */
--- End Message ---
--- Begin Message ---
- To: dfsmith@almaden.ibm.com, 427722-done@bugs.debian.org
- Subject: Re: Bug#427722: pthread_kill() declaration disappears when compiling with -ansi
- From: Pierre Habouzit <madcoder@debian.org>
- Date: Wed, 6 Jun 2007 11:03:22 +0200
- Message-id: <20070606090322.GA7341@.intersec.eu>
- In-reply-to: <[🔎] 20070606020427.GA14395@porter.almaden.ibm.com>
- References: <[🔎] 20070606020427.GA14395@porter.almaden.ibm.com>

On Tue, Jun 05, 2007 at 07:04:27PM -0700, Daniel F. Smith wrote:
> Package: libc6-dev
> Version: 2.5-9+b1
>
> When compiling with the -ansi flag in gcc, pthread_kill() is implicitly
> defined. The old behavior worked with -ansi. See this example.

Old behaviour is wrong. pthread_kill is defined in IEEE Std 1003.1 with threads extensions. Using -ansi, you use plain old C, with no POSIX extensions, hence need to "define" some features, see feature_test_macros(7). To have pthread_kill you need to -D_POSIX_C_SOURCE=199506 or -D_XOPEN_SOURCE=500 at the strict minimum. (see man page and/or <features.h> for explanations).

--
·O· Pierre Habouzit
··O madcoder@debian.org
OOO
Attachment: pgpNi34wp3Gcd.pgp
Description: PGP signature
--- End Message ---
Host a Custom Skill as a Web Service
You can build a custom skill for Alexa by extending a servlet that accepts requests from and sends responses to the Alexa service in the cloud.
The servlet must meet certain requirements to handle requests sent by Alexa and adhere to the Alexa Skills Kit interface standards. For more information, see Host a Custom Skill as a Web Service in the Alexa Skills Kit technical documentation.
ASK SDK Servlet Support
The Alexa Skills Kit SDK (ASK SDK) for Java provides boilerplate code for request verification and timestamp verification through the ask-sdk-servlet-support package. This package provides the verification components and SkillServlet for skill invocation.
Installation
You can import the latest version of ask-sdk-servlet-support by adding it as a Maven dependency in your project's pom.xml.
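For illustration, the dependency entry could look like the following (the coordinates below are the ones published to Maven Central; the version shown is a placeholder — substitute the current release):

```xml
<dependency>
  <groupId>com.amazon.alexa</groupId>
  <artifactId>ask-sdk-servlet-support</artifactId>
  <!-- placeholder: replace with the latest released version -->
  <version>2.x.x</version>
</dependency>
```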
Skill Servlet
The SkillServlet class registers the skill instance from the SkillBuilder object, and provides a doPost method which is responsible for deserialization of the incoming request, verification of the input request before invoking the skill, and serialization of the generated response.
Usage
```java
public class HelloWorldSkillServlet extends SkillServlet {

    public HelloWorldSkillServlet() {
        super(getSkill());
    }

    private static Skill getSkill() {
        return Skills.standard()
                .addRequestHandlers(
                        new CancelandStopIntentHandler(),
                        new HelloWorldIntentHandler(),
                        new HelpIntentHandler(),
                        new LaunchRequestHandler(),
                        new SessionEndedRequestHandler())
                // Add your skill id below
                //.withSkillId("")
                .build();
    }
}
```
Sample skill with servlet support can be found here.
personality - set the process execution domain
Synopsis
Description
Errors
Colophon
```c
#include <sys/personality.h>

int personality(unsigned long persona);
```
On success, the previous persona is returned. On error, -1 is returned, and errno is set appropriately.
personality() is Linux-specific and should not be used in programs intended to be portable.
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
const rather than #define. There's plenty of utilities I see used in C++ that came from C though. I could be wrong, but it seems that those who start in C++ eventually get better at coding and are more ready to use C in their code. If people like to accuse C++ of being dangerous though, well C is just as (and to some, more) tedious. I don't use C a whole lot but I have found it useful from time to time. I know a guy who's a hobbyist game programmer though, and he uses A LOT of C, even though he started out in C++.
#define and const: defines run the risk of adding hard-to-find bugs into your program due to them not having a type. It is better practice to use const instead, since a const has a type.
#include <cstdlib>
Between #define and const, a const is actually better for constants that will actually be USED in your program, but #define is still usable for conditional compilation.
/*...*/ would be better than using #ifdefs
Write operation Hadoop 2.0
- The client creates the file by calling create() method on DistributedFileSystem.
- DistributedFileSystem makes an RPC call to the namenode to create a new file in the filesystem’s namespace, with no blocks associated with it.
The namenode performs various checks to make sure the file doesn't already exist and that the client has the right permissions to create the file. If all these checks pass, the namenode makes a record of the new file; otherwise, file creation fails and an IOException is thrown to the client. DistributedFileSystem returns an FSDataOutputStream for the client to start writing data to the datanodes. FSDataOutputStream wraps a DFSOutputStream, which handles communication with the datanodes and the namenode.
- As the client writes data, DFSOutputStream splits it into packets and writes them to an internal data queue. The namenode allocates new blocks by picking a list of suitable datanodes to store the replicas (the default replication factor is 3).
- The chosen datanodes form a pipeline: each packet is streamed to the first datanode, which stores it and forwards it to the second datanode, and so on. DFSOutputStream also maintains an ack queue of packets waiting to be acknowledged by all the datanodes in the pipeline.
When the client has finished writing data, it calls close() on the stream. This flushes the remaining packets to the datanode pipeline and waits for acknowledgments before contacting the namenode to signal that the file is complete.
The diagram below summarises the file write operation in Hadoop.
Read operation Hadoop 2.0
- The client opens the file by calling open() method on DistributedFileSystem.
- DistributedFileSystem makes an RPC call to the namenode to determine the locations of the datanodes where the file is stored in the form of blocks. For each block, the namenode returns the addresses of the datanodes (metadata of blocks and datanodes) that have a copy of that block. Datanodes are sorted according to proximity (based on network-topology information). DFSInputStream, which has stored the datanode addresses for the first few blocks in the file, then connects to the first (closest) datanode for the first block in the file.
- Data is streamed from the datanode back to the client (in the form of packets), and read() is repeatedly called on the stream by the client.
- When the end of the block is reached, DFSInputStream will close the connection to the datanode, then find the best datanode for the next block (steps 5 and 6).
- When the client has finished reading, it calls close() on the FSDataInputStream.
Reference: Hadoop: The Definitive Guide by Tom White.
I am trying to create a program that uses * to make triangles depending on how big the user wishes to make them. This is the code I have so far; however, I have no idea where to go from here. Can anyone help? Thank you.
```python
def triangle():
    totalRows = int(input("Please enter a number: "))
    for currentRow in range(1, totalRows + 1):
        for currentCol in range(1, currentRow + 1):
            print("*", end=" ")
        print()

triangle()
```
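Once the loop version works, a more compact variant (a suggestion, not from the thread) builds each row with string repetition, which also makes the function easy to test:

```python
def triangle_rows(total_rows):
    """Return the rows of a left-aligned asterisk triangle."""
    return [" ".join("*" * row) for row in range(1, total_rows + 1)]

# Print a triangle with 3 rows: "*", "* *", "* * *"
for line in triangle_rows(3):
    print(line)
```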
Enhancing Coordination in Cloud Infrastructures with an Extendable Coordination Service
1 Enhancing Coordination in Cloud Infrastructures with an Extendable Coordination Service Tobias Distler 1, Frank Fischer 1, Rüdiger Kapitza 2, and Siqi Ling 1 1 Friedrich Alexander University Erlangen Nuremberg 2 TU Braunschweig ABSTRACT With application processes being distributed across a large number of nodes, coordination is a crucial but inherently difficult task in cloud environments. Coordination middleware systems like Chubby and ZooKeeper approach this problem by providing mechanisms for basic coordination tasks (e. g., leader election) and means to implement common data structures used for coordination (e. g., distributed queues). However, as such complex abstractions still need to be implemented as part of the distributed application, reusability is limited and the performance overhead may be significant. In this paper, we address these problems by proposing an extendable coordination service that allows complex abstractions to be implemented on the server side. To enhance the functionality of our coordination service, programmers are able to dynamically register high-level extensions that comprise a sequence of low-level operations offered by the standard coordination service API. Our evaluation results show that extension-based implementations of common data structures and services offer significantly better performance and scalability than their state-of-the-art counterparts. Categories and Subject Descriptors D.4.7 [Organization and Design]: Distributed Systems General Terms Keywords Design, Performance, Reliability Cloud, Coordination Service, ZooKeeper 1. INTRODUCTION Large-scale applications running on today s cloud infrastructures may comprise a multitude of processes distributed over a large number of nodes. Given these circumstances, fault-tolerant coordination of processes, although being an essential factor for the correctness of an application, is difficult to achieve. 
As a result, and to facilitate their design, This work was partially supported by an IBM Ph.D. Fellowship for T. Distler, and by the European Union s 7th Framework Programme (FP7/ ) under grant agreement n (TCloud. SDMCMM 12, December 3-4, 2012, Montreal, Quebec, Canada. Copyright 2012 ACM /12/12...$ fewer and fewer of such applications implement coordination primitives themselves, instead they rely on external coordination services. Large-scale distributed storage systems like BigTable [6] and HBase [2], for example, do not provide means for leader election but perform this task using the functionality of Chubby [5] and ZooKeeper [7], respectively. However, instead of implementing more complex services (e. g., leader election) directly, state-of-the-art coordination middleware systems only provide a basic set of low-level functions including file-system like access to key-value storage for small chunks of data, a notification-based callback mechanism, and rudimentary access control. On the one hand, this approach has several benefits: Based on this lowlevel functionality, more complex services and data structures for the coordination of application processes (e. g., distributed queues) can be implemented. Furthermore, the fact that state-of-the-art coordination services are replicated frees applications developers from the need to deal with fault-tolerance related problems, as the coordination service does not represent a single point of failure. On the other hand, this flexibility comes at a price: With more complex services being implemented at the coordination-service client (i. e., as part of the distributed application), reusability is limited and maintenance becomes more difficult. In addition, there is a performance overhead for cases in which a complex operation requires multiple remote calls to the coordination service. As we show in our evaluation, this problem gets worse the more application processes access a coordination service concurrently. 
To address the disadvantages of current systems listed above, we propose an extendable coordination service. In contrast to existing solutions, in our approach, more complex services and data structures are not implemented at the client side but within modules ( extensions ) that are executed at the servers running the coordination service. As a result, implementations of coordination-service clients can be greatly simplified; in fact, in most usage scenarios, only a single remote call to the coordination service is required. In our service, an extension is realized as a sequence of regular coordination-service operations that are processed atomically. This way, an extension can benefit from the flexibility offered by the low-level API of a regular coordination service while achieving good performance under contention. Besides enhancing the implementation of abstractions already used by current distributed applications, extensions also allow programmers to introduce new features that cannot be provided based on the functionality of traditional coordination services: By registering custom extensions, for example, it is possible to integrate assertions into our ex-
2 tendable coordination service that perform sanity checks on input data, improving protection against faulty clients. Furthermore, extensions may be used to execute automatic conversion routines for legacy clients, supporting scenarios in which the format of the coordination-related data managed on behalf of an application differs across program versions. In particular, this paper makes the following three contributions: First, it proposes a coordination service whose functionality can be enhanced dynamically by introducing customized extensions. Second, it provides details on our prototype of an extendable coordination service based on ZooKeeper [7], a coordination middleware widely used in industry. Third, it presents two case studies, a priority queue and a quota-enforcement service, illustrating both the flexibility and efficiency of our approach. 2. BACKGROUND This section provides background information on the basic functionality of a coordination service and presents an example of a higher-level abstraction built on top of it. 2.1 Coordination Services Despite their differences in detail, coordination services like Chubby [5] and ZooKeeper [7] expose a similar API to the client (i. e., a process of a distributed application, see Figure 1). Information is stored in nodes which can be created (create 1 ) and deleted (delete) by a client. Furthermore, there are operations to store (setdata) and retrieve (getdata) the data assigned to a node. In general, there are two different types of nodes: ephemeral nodes are automatically deleted when the session of the client who created the node ends (e. g., due to a fault); in contrast, regular nodes persist after the end of a client session. Besides managing data, current coordination services provide a callback mechanism to inform clients about certain events including, for example, the creation or deletion of a node, or the modification of the data assigned to a node (see Figure 1). 
On the occurrence of an event a client has registered a watch for, the coordination service performs a callback notifying the client about the event. Using this functionality, a client is, for example, able to implement failure detection of another client by setting a deletion watch on an ephemeral node created by the client to monitor. 2.2 Usage Example: Priority Queue Based on the low-level API provided by the coordination service, application programmers can implement more complex data structures to be used for the coordination of processes. Figure 2 shows an example implementation of a distributed priority queue (derived from the queue implementation in [13]) that can be applied to exchange data between two processes running on different machines: a producer and a consumer. New elements are added to the queue by the producer calling insert; the element with the highest priority is dequeued by the consumer calling remove. To insert an element b into the queue, the producer creates a new node and sets its data 2 to b (L. 8). The priority p of the element is thereby encoded in the node name by appending p to a default name prefix (L. 5). To remove the 1 Note that we use the ZooKeeper terms here as our prototype is based on this particular coordination service. 2 The ZooKeeper API allows a client to assign data to a node at creation time. Otherwise an additional setdata call would be necessary for setting the node data. Distributed Process p 1 Process p 2. Process p m CS Client CS Client CS Client 1. Register watch. 3. Notify. 2. Set node data. Coordination Service (CS) CS Replica r 1 CS Replica r 2. CS Replica r n Figure 1: Callback mechanism usage example: An application process p 1 registers a data watch on a node; when the node s data is updated, the coordination service notifies p 1 about the modification. head element of the queue, the consumer queries the coordination service to get the names of all nodes matching the default name prefix (L. 13). 
From the result set of node names, the consumer then locally determines the head of the queue by selecting the node name indicating the highest priority (L. 14). Knowing the head node, the consumer is able to retrieve its data from the coordination service (L. 17) before removing the node from the queue (L. 18). Note that the priority-queue implementation in Figure 2 has two major drawbacks: First, while the insert operation involves only a single remote call to the coordination service (L. 8), the remove operation requires three remote calls (L. 13, 17, and 18), resulting in additional latency. Second, the implementation does not scale for multiple consumer processes: In order to prevent different consumers from returning the same element, entire remove operations would either have to be executed sequentially (which is difficult to achieve when consumer processes run on different machines) or they would have to be implemented optimistically; that is, if the delete call (L. 18) aborts due to a concurrent remove operation already having deleted the designated head node, a consumer must retry its remove (omitted in Figure 2). In Section 5.1, we show that the performance of the optimistic variant suffers from contention when multiple consumer processes access the queue concurrently. 1 CoordinationService cs = establish connection; 3 void insert(byte[] b, Priority p) { 4 / Encode priority in node name. / 5 String nodename = "/node-" + p; 7 / Create node and set its data to b. / 8 cs.create(nodename, b); 9 } 11 byte[] remove() { 12 / Find the node with the highest priority. / 13 String[] nodes = get node names from cs; 14 String head = node from nodes with highest priority according to its name; 16 / Get node data and remove node. / 17 byte[] b = cs.getdata(head); 18 cs.delete(head); 19 return b; 20 } Figure 2: Pseudo-code implementation of a priorityqueue client (ZooKeeper): an element is represented by a node, the priority is encoded in the node name.
3 3. ENHANCING COORDINATION The priority-queue example discussed in Section 2.2 illustrates the main disadvantage of state-of-the art coordination services: With implementations of higher-level data structures and services being a composition of multiple low-level remote calls to the coordination service, performance and scalability become a major concern. We address this issue with an extendable coordination service that provides means to implement additional functionality directly at the server. 3.1 Basic Approach To add functionality to our coordination service, programmers write extensions that are integrated via software modules. Depending on the mechanism an extension operates on, we distinguish between the following three types: During integration, a node extension registers a virtual node through which the extension will be accessible to the client. In contrast to a regular node, client operations invoked on a virtual node (or one of its sub nodes) are not directly executed by the coordinationservice logic; instead, such requests are intercepted and redirected to the corresponding node extension. A watch extension may be used to customize/overwrite the behavior of the coordination service for a certain watch. Such an extension is executed each time a watch event of the corresponding type occurs. A session extension is triggered at creation and termination of a client session and is therefore suitable to perform initialization and cleanup tasks. Note that an extension module providing additional functionality may be a composition of multiple extensions of possibly different types. In general, an extension is free to use the entire API provided by the coordination service. As a consequence, a stateful extension, for example, is allowed to create own regular nodes to manage its internal state. Furthermore, a complex node extension, for example, may translate an incoming client request into a composite request comprising a sequence of low-level operations. 
Note that, in such a case, our coordination service guarantees that low-level operations belonging to the same composite request will be executed atomically (see Section 4.3).

3.2 Usage Example: Enhanced Priority Queue

Figure 3 shows how the implementation of the priority-queue client from Figure 2 can be greatly simplified by realizing the queue as a node extension that is accessed via a virtual node /queue. In contrast to the traditional implementation presented in Section 2.2, our extension variant only requires a single remote call for the removal of the head element from the queue. When a client inserts an element into the queue by creating a sub node of /queue (L. C5), the request is forwarded to the queue extension, which in turn processes it without any modifications (L. E5); that is, the extension creates the sub node as a regular node. To dequeue the head element, a client issues a getdata call to a (non-existent) sub node /queue/next (L. C10). On the reception of a getdata call using this particular node name, the extension removes the head element and returns its data to the client (L. E10-E17).

    Client Implementation
    C1  CoordinationService cs = establish connection;

    C3  void insert(byte[] b, Priority p) {
    C4      /* Create node and set its data to b. */
    C5      cs.create("/queue/node-" + p, b);
    C6  }

    C8  byte[] remove() {
    C9      /* Remove head node and return its data. */
    C10     return cs.getdata("/queue/next");
    C11 }

    Coordination Service Extension Implementation
    E1  CoordinationServiceState local = local state;

    E3  void create(string name, byte[] b) {
    E4      /* Process request without modifications. */
    E5      local.create(name, b);
    E6  }

    E8  byte[] getdata(string name) {
    E9      if("/queue/next".equals(name)) {
    E10         /* Find the node with the highest priority. */
    E11         String[] nodes = get node names from local;
    E12         String head = node from nodes with highest priority
                    according to its name;

    E14         /* Get node data and remove node. */
    E15         byte[] b = local.getdata(head);
    E16         local.delete(head);
    E17         return b;
    E18     } else {
    E19         /* Return data of regular node. */
    E20         return local.getdata(name);
    E21     }
    E22 }

Figure 3: Pseudo-code implementation of a priority queue in our extendable coordination service: the extension is represented by a virtual node /queue.

Although the steps executed during the dequeuing of the head element are identical to the corresponding procedure in the traditional priority-queue implementation (cf. Figure 2), there is an important difference: the calls for learning the node names of queue elements (L. E11), for retrieving the data of the head element (L. E15), and for deleting the head-element node (L. E16) are all local calls with low performance overhead. Furthermore, with these three calls being processed atomically, the implementation does not suffer from contention, as shown in Section 5.1.

4. EXTENDABLE ZOOKEEPER

In this section, we present details on the implementation of Extendable ZooKeeper (EZK), our prototype of an extendable coordination service, which is based on ZooKeeper [7].

4.1 Overview

EZK relies on actively-replicated ZooKeeper for fault tolerance. At the server side, EZK (like ZooKeeper) distinguishes between client requests that modify the state of the coordination service (e.g., by creating a node) and read-only client requests that do not (e.g., as they only read the data of a node). A read-only request is only executed on the server replica that has received the request from the client. In contrast, to ensure strong consistency, a state-modifying request is distributed using an atomic broadcast protocol [8] and then processed by all server replicas.
For EZK, we introduce an extension manager component into each server replica which is mainly responsible for redirecting the control and data flow to the registered extensions. The extension manager performs different tasks for different types of extensions (see Section 3.1): On the reception of a client request, the extension manager checks whether the request accesses the virtual node of a node extension and, if this is the case, forwards the request to the corresponding extension. This way, a node extension is able to control the behavior of an incoming request before the request has had any impact on the system. In addition, the extension manager intercepts watch events and, if available, redirects them to the watch extensions handling the specific events, allowing the extension to customize callbacks to the client. Finally, the extension manager also monitors ZooKeeper's session tracker and notifies the registered session extensions about the start and end of client sessions.

4.2 Managing an Extension

For extension management in EZK we provide a built-in management extension that is accessible through a virtual node /extensions. To register a custom extension, a client creates a sub node of /extensions and assigns all necessary configuration information as data to this management node. For a node extension, for example, the configuration information includes the name of the virtual node through which the extension can be used by a client, and a Java class containing the extension code to execute when a request accesses this virtual node. Furthermore, a client is able to provide an ephemeral flag indicating whether the extension should be automatically removed by EZK when the session of the client who registered the extension ends; apart from that, an extension can always be removed by explicitly deleting its corresponding management node.
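The routing performed by the extension manager, together with a node extension like the priority queue of Section 3.2, can be sketched in a few lines. The class names, the single lock standing in for atomic server-side execution, and the prefix-matching rule are assumptions of this sketch, not EZK's actual interfaces:

```python
import threading

class ExtensionManager:
    """Routes requests either to a registered node extension or to the
    default store. Illustrative sketch; not EZK's real dispatch logic."""
    def __init__(self):
        self.store = {}
        self.extensions = {}          # virtual-node path -> handler
        self.lock = threading.Lock()  # stands in for atomic execution

    def register(self, virtual_path, handler):
        self.extensions[virtual_path] = handler

    def create(self, name, data):
        with self.lock:
            self.store[name] = data

    def getdata(self, name):
        with self.lock:
            # Redirect if the request targets a virtual node or a sub node.
            for path, handler in self.extensions.items():
                if name == path or name.startswith(path + "/"):
                    return handler(self.store, name)
            return self.store[name]


def queue_extension(store, name):
    """Node extension for /queue: getdata on /queue/next dequeues the head."""
    if name == "/queue/next":
        head = min(n for n in store if n.startswith("/queue/node-"))
        return store.pop(head)   # read and delete are local, atomic steps
    return store[name]
```

With the extension registered under /queue, a client-visible dequeue is a single call to getdata("/queue/next"), mirroring the reduction from three remote calls to one.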
When EZK's management extension receives a request from a client to register an extension, it verifies that the extension code submitted is a valid Java class, and then distributes the request to all server replicas. By treating the request like any other state-modifying request, EZK ensures that all server replicas register the extension in a consistent manner. After registration is complete, the extension manager starts to make use of the extension.

4.3 Atomic Execution of an Extension

Traditional implementations of complex operations comprising multiple remote calls to the coordination service (as, for example, removing the head element of a priority queue, see Section 2.2) require the state they operate on not to change between individual calls. As a consequence, such an operation may be aborted when two clients modify the same node concurrently, resulting in a significant performance penalty (see Section 5.1). We address this problem in EZK by executing complex operations atomically.

In ZooKeeper, each client request modifying the state of the coordination service is translated into a corresponding transaction which is then processed by all server replicas. In the default implementation, a single state-modifying request leads to a single transaction. To support more complex operations, we introduce a new type of transaction in EZK, the container transaction, which may comprise a batch of multiple regular transactions. EZK guarantees that all transactions belonging to the same container transaction will be executed atomically on all server replicas, without interfering with other transactions. By including all transactions of the same extension-based operation in the same container transaction, EZK prevents concurrent state changes during the execution of an extension.

Figure 4: Throughput (i.e., successful dequeue operations) for different priority-queue implementations for different numbers of consumer processes.

5. CASE STUDIES

In this section, we evaluate the priority-queue extension introduced in Section 3.2. Furthermore, we present an additional example of how extensions can be used in our coordination service to efficiently provide more complex functionality. All experiments are conducted using a coordination-service cell comprising five server replicas (i.e., a typical configuration for ZooKeeper), each running in a virtual machine in Amazon EC2 [1]; coordination-service clients are executed in an additional virtual machine. As in practice distributed applications usually run in the same data center as the coordination service they rely on [5], we allocate all virtual machines in the same EC2 region (i.e., Europe).

5.1 Priority Queue

Our first case study compares a traditional priority-queue implementation (see Section 2.2) against our extension-based EZK variant (see Section 3.2). For both implementations, we measure the number of successful dequeue operations per second for a varying number of consumer processes accessing the queue concurrently. At all times during the experiments, we ensure that there are enough producer processes to prevent the queue from running empty. As a result, no dequeue operation will fail due to lack of items to remove.

Figure 4 presents the results of the experiments: For a single consumer process, the priority queues achieve an average throughput of 139 (ZooKeeper variant) and 195 (EZK) dequeue operations per second, respectively. The difference in performance is due to the fact that in the ZooKeeper implementation the remove operation comprises three (i.e., two read-only and one state-modifying) remote calls to the coordination service, whereas the extension-based EZK variant requires only a single (state-modifying) remote call.
Our results also show that for multiple consumer processes the ZooKeeper priority queue suffers from contention: Due to its optimistic approach, a dequeue operation may be aborted when issued concurrently with another dequeue operation (see Section 2.2), causing the success rate to decrease for an increasing number of consumers. In contrast, dequeue operations in our EZK implementation are executed atomically and therefore always succeed on a non-empty queue. As a result, the extension-based EZK variant achieves better scalability than the traditional priority queue.
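The atomicity behind EZK's contention-free dequeues (Section 4.3) can be sketched as a container transaction applied all-or-nothing: a batch of operations either fully commits or leaves the state untouched. The copy-then-commit strategy and single lock below are illustrative assumptions, not the mechanism EZK uses over its replicas:

```python
import threading

class Store:
    """Toy node store supporting atomic batches of operations."""
    def __init__(self):
        self.nodes = {}
        self.lock = threading.Lock()  # serializes whole batches

    def apply_container(self, ops):
        """Apply a batch of (kind, name, data) operations atomically:
        either every operation succeeds, or the store is unchanged."""
        with self.lock:
            staged = dict(self.nodes)   # work on a copy
            for kind, name, data in ops:
                if kind == "create":
                    staged[name] = data
                elif kind == "delete":
                    if name not in staged:
                        raise RuntimeError("abort: %s missing" % name)
                    del staged[name]
                else:
                    raise ValueError("unknown operation: %s" % kind)
            self.nodes = staged         # commit only on full success
```

Because the batch runs under one critical section, no other client can observe or interleave with its intermediate steps, which is the property that keeps the extension-based dequeue free of retries.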
     1  CoordinationService cs = establish connection;

     3  void allocate(int amount) {
     4      do {
     5          /* Determine free quota and node version. */
     6          (int free, int version) = cs.getdata("/memory");

     8          /* Retry if there is not enough quota. */
     9          if(free < amount) sleep and continue;

    11          /* Calculate and try to set new free quota. */
    12          cs.setdata("/memory", free - amount, version);
    13      } while(setdata call aborted);
    14  }

Figure 5: Pseudo-code implementation of a quota-server client in ZooKeeper: the current amount of free quota is stored in the data of /memory; to release quota, allocate is called with a negative amount.

5.2 Quota Enforcement Service

Our second case study is a fault-tolerant quota enforcement service guaranteeing upper bounds for the overall resource usage (e.g., number of CPUs, memory usage, network bandwidth) of a distributed application [3]. In order to enforce a global quota, each time an application process wants to dynamically allocate additional resources, it is required to ask the quota service for permission. The quota service only grants this permission in case the combined resource usage of all processes of the application does not exceed a certain threshold; otherwise the allocation request is declined and the application process is required to wait until additional free quota becomes available, for example, due to another process terminating and therefore releasing its resources.

Traditional Implementation

Figure 5 illustrates how to implement a quota service based on a state-of-the-art coordination service. In this approach, information about free resource quotas (in the example: the amount of free memory available) is stored in the data assigned to a resource-specific node (i.e., /memory). To request permission for using additional quota, an application process invokes the quota client's allocate function indicating the amount of quota to be allocated (L. 3).
Due to the traditional coordination service only providing functionality to get and set the data assigned to a node, but lacking means to modify node data based on its current value, the quota client needs to split up the operation into three steps: First, the client retrieves the data assigned to /memory (L. 6), thereby learning the application's current amount of free quota. Next, the quota client checks whether the application has enough free quota available to grant the permission (L. 9). If this is the case, the client locally computes the new amount of free quota and updates the corresponding node data at the coordination service (L. 12).

Note that the optimistic procedure described above is only correct as long as the data assigned to /memory does not change between the getdata (L. 6) and setdata (L. 12) remote calls. However, as different quota clients could invoke allocate for the same resource type concurrently, this condition may not always be justified. To address this problem, state-of-the-art coordination services like Chubby [5] and ZooKeeper [7] use node-specific version counters (which are incremented each time the data of a node is reassigned) to provide a setdata operation with compare-and-swap semantics. Such an operation only succeeds if the current version matches an expected value (L. 12), in this case, the version number that corresponds to the contents the quota client has retrieved (L. 6). If the two version numbers differ, the setdata operation aborts and the quota client retries the entire allocation procedure (L. 13).

    Client Implementation
    C1  CoordinationService cs = establish connection;

    C3  void allocate(int amount) {
    C4      do {
    C5          /* Issue quota demand. */
    C6          cs.setdata("/memory-quota", amount);
    C7      } while(setdata call aborted);
    C8  }

    Coordination Service Extension Implementation
    E1  CoordinationServiceState local = local state;

    E3  void setdata(string name, int amount) {
    E4      if("/memory-quota".equals(name)) {
    E5          int free = local.getdata("/memory");

    E7          /* Abort if there is not enough quota. */
    E8          if(free < amount) abort;

    E10         /* Calculate and set new free quota. */
    E11         local.setdata("/memory", free - amount);
    E12     } else {
    E13         /* Set data of regular node. */
    E14         local.setdata(name, amount);
    E15     }
    E16 }

Figure 6: Pseudo-code implementation of a quota server in our extendable coordination service: a call to setdata only aborts if there is not enough quota.

Extension-based Implementation

In contrast to the traditional implementation presented above, where remote calls issued by a quota client may be aborted due to contention, allocation requests in our extension-based EZK variant of the quota enforcement service (see Figure 6) are always granted when enough free quota is available. Here, to issue an allocation request, a client invokes a setdata call on the virtual /memory-quota node, passing the amount of quota to allocate as data (L. C6). In the absence of network and server faults, this call only aborts if the amount requested exceeds the free quota currently available (L. E8), in which case the quota client retries the procedure (L. C7) after a certain period of time (omitted in Figure 6). At the EZK server, the quota enforcement extension functions as a proxy for a regular node /memory: For each incoming setdata call to the virtual /memory-quota node (L. E4), the extension translates the request into a sequence of operations (i.e., a read (L. E5), a check (L. E8), and an update (L. E11)) that are processed atomically.

Evaluation

We evaluate both implementations of the quota enforcement service, varying the number of quota clients accessing the service concurrently from 1 to 40. During a test run, each client repeatedly requests 100 quota units, and when the quota is granted (possibly after multiple retries), immediately releases it again. In all cases, the total amount of quota available is limited to 1500 units.
As a consequence, in scenarios with more than 15 concurrent quota clients, allocation requests may be aborted due to lack of free quota.
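The versioned compare-and-swap setdata that the traditional quota client of Figure 5 relies on can be simulated with a single in-memory node. The class and helper names are hypothetical; only the version-counter semantics mirror the paper's description:

```python
class VersionedNode:
    """Node with a version counter and compare-and-swap setdata,
    mirroring the semantics used by the Figure 5 client (in-memory sketch)."""
    def __init__(self, data):
        self.data = data
        self.version = 0

    def getdata(self):
        return self.data, self.version

    def setdata(self, data, expected_version):
        # Succeed only if no other client updated the node in between.
        if expected_version != self.version:
            raise RuntimeError("setdata aborted: version mismatch")
        self.data = data
        self.version += 1


def allocate(node, amount, max_retries=10):
    """Optimistic allocation: re-read and retry on version conflicts.
    A negative amount releases quota, as in Figure 5."""
    for _ in range(max_retries):
        free, version = node.getdata()
        if free < amount:
            raise RuntimeError("not enough quota")
        try:
            node.setdata(free - amount, version)
            return
        except RuntimeError:
            continue  # another client won the race; retry
    raise RuntimeError("gave up after repeated conflicts")
```

Under contention, each lost race costs another getdata/setdata round trip, which is the source of the high per-operation call counts reported for the ZooKeeper variant.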
Figure 7: Throughput (i.e., successful allocation and release operations) and costs (i.e., remote calls per operation) for different quota-server variants; the total quota is limited to the demand of 15 clients. (a) Overall throughput; (b) costs per operation.

The throughput results for this experiment presented in Figure 7a show that our EZK quota server provides better scalability than the state-of-the-art ZooKeeper variant. For a small number of concurrent clients, the fact that the total amount of quota is limited has no effect: As in the priority-queue experiment (see Section 5.1), the ZooKeeper implementation suffers from contention, whereas the throughput of the EZK quota server improves for multiple quota clients. For more than 15 quota clients, the fraction of aborted allocation requests increases in both implementations with every additional client, leading to an observable throughput decrease for the EZK quota server for more than 20 clients.

Figure 7b shows that the costs for a single quota allocation greatly differ between both quota-service implementations: For 40 clients, due to contention and the limited amount of total quota, it takes a ZooKeeper client more than 57 remote calls to the coordination service to be granted the quota requested; an EZK quota client on average has to issue less than 2 remote calls in the same scenario. Note that in the ZooKeeper variant, release operations are also subject to contention, requiring up to 28 remote calls per successful operation. In contrast, the release operation in our EZK implementation always succeeds using a single remote call.

6. RELATED WORK

With the advent of large distributed file systems emerged the need to coordinate read and write accesses on different nodes.
This problem was solved by distributed lock managers [11], the predecessors of current coordination services.

In contrast to the file-system-oriented coordination middleware systems Chubby [5] and ZooKeeper [7], DepSpace [4] is a Byzantine fault-tolerant coordination service which implements the tuple-space model. As the tuple-space abstraction does not provide an operation to alter stored tuples, in order to update the data associated with a tuple, the tuple has to be removed from the tuple space, modified, and reinserted afterwards. In consequence, implementations of high-level data structures and services built over DepSpace are expected to also suffer from contention for multiple concurrent clients. Note that, with our approach not being limited to a specific interface, this problem could be addressed by an extension-based variant of DepSpace.

Boxwood [9] shares our goal of freeing application developers from the need to deal with issues like consistency, dependability, or efficiency of complex high-level abstractions. However, unlike our work, Boxwood focuses on storage infrastructure, not coordination middleware systems. In addition, the set of abstractions and services exposed by Boxwood is static, whereas our extendable coordination service allows clients to dynamically customize the behavior of existing operations and/or introduce entirely new functionality.

Relational database management systems rely on stored procedures [12] (i.e., compositions of multiple SQL statements) to reduce network traffic between applications and the database, similar to our use of extensions to minimize the number of remote calls a client has to issue to the coordination service. In active database systems [10], triggers (i.e., a special form of stored procedures) can be registered to handle certain events, for example, the insertion, modification, or deletion of a record. As such, triggers are related to watches in coordination services.
The main difference is that in general a trigger is a database-specific mechanism which is transparent to applications. As a result, applications are not able to change the behavior of a trigger. In contrast, our extendable coordination service offers applications the flexibility to customize the service using a composition of extensions operating on nodes, watches, and sessions.

7. CONCLUSION

This paper proposed to enhance coordination of distributed applications by relying on an extendable coordination service. Such a service allows programmers to dynamically introduce custom high-level abstractions which are then executed on the server side. Our evaluation shows that by processing complex operations atomically, an extendable coordination service offers significantly better performance and scalability than state-of-the-art implementations.

8. REFERENCES

[1] Amazon EC2.
[2] Apache HBase.
[3] J. Behl, T. Distler, and R. Kapitza. DQMP: A decentralized protocol to enforce global quotas in cloud environments. In Proc. of SSS '12.
[4] A. N. Bessani, E. P. Alchieri, M. Correia, and J. Fraga. DepSpace: A Byzantine fault-tolerant coordination service. In Proc. of EuroSys '08.
[5] M. Burrows. The Chubby lock service for loosely-coupled distributed systems. In Proc. of OSDI '06.
[6] F. Chang, J. Dean, S. Ghemawat, W. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. E. Gruber. Bigtable: A distributed storage system for structured data. In Proc. of OSDI '06.
[7] P. Hunt, M. Konar, F. P. Junqueira, and B. Reed. ZooKeeper: Wait-free coordination for Internet-scale systems. In Proc. of ATC '10.
[8] F. P. Junqueira, B. C. Reed, and M. Serafini. Zab: High-performance broadcast for primary-backup systems. In Proc. of DSN '11.
[9] J. MacCormick, N. Murphy, M. Najork, C. A. Thekkath, and L. Zhou. Boxwood: Abstractions as the foundation for storage infrastructure. In Proc. of OSDI '04.
[10] N. W. Paton and O. Díaz. Active database systems. ACM Computing Surveys, 31(1):63–103.
[11] W. Snaman and D. Thiel. The VAX/VMS distributed lock manager. Digital Technical Journal, 5:29–44.
[12] M. Stonebraker, J. Anton, and E. Hanson. Extending a database system with procedures. ACM Transactions on Database Systems, 12(3).
[13] ZooKeeper Tutorial: Queues. org/hadoop/zookeeper/tutorial.
On Nov 11, 2006, at 9:16 AM, Rice Yeh wrote:
> Hi,
> It seems for now the function attribute in <map:call> must be a
> function in the global object. I think it might be better if this
> function could be within a scope object because our project groups all
> javascript objects like java packages (or just like what dojo does).
Agreed.
Here's a patch you can try. I couldn't figure out how to do it the
"real" way, so... this way feels kind of sleazy, but it works :-)
Index: src/java/org/apache/cocoon/components/flow/javascript/fom/FOM_JavaScriptInterpreter.java
===================================================================
--- src/java/org/apache/cocoon/components/flow/javascript/fom/FOM_JavaScriptInterpreter.java (revision 472699)
+++ src/java/org/apache/cocoon/components/flow/javascript/fom/FOM_JavaScriptInterpreter.java (working copy)
@@ -69,6 +69,7 @@
 import java.io.InputStreamReader;
 import java.io.OutputStream;
 import java.io.PushbackInputStream;
+import java.io.StringReader;
 import java.io.Reader;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -744,9 +745,15 @@
         }
         cocoon.setParameters(parameters);
-        Object fun = ScriptableObject.getProperty(thrScope, funName);
-        if (fun == Scriptable.NOT_FOUND) {
-            throw new ResourceNotFoundException("Function \"javascript:" + funName + "()\" not found");
+        try {
+            final Object fun =
+                context.compileReader(
+                    thrScope, new StringReader(funName), null, 1, null
+                ).exec(context, thrScope);
+        } catch (EcmaError ee) {
+            throw new ResourceNotFoundException(
+                "Function \"javascript:" + funName + "()\" not found"
+            );
         }
If some other Cocoon developers think this is not too lame :-), then I
will make it official and fire up a JIRA issue.
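The lookup trick in the patch (evaluating the configured name as an expression so that dotted paths work, instead of requiring a top-level global) can be sketched outside Cocoon too. Here is a rough Python analogue; Namespace, app and resolve are illustrative names, not Cocoon API:

```python
from functools import reduce

class Namespace:
    """A tiny stand-in for a scope of nested flowscript-style namespaces."""
    pass

app = Namespace()
app.flow = Namespace()
app.flow.start = lambda: "started"

def resolve(scope, dotted_name):
    # Walk the path one attribute at a time; fail with a lookup error
    # (the analogue of ResourceNotFoundException) if a segment is missing.
    try:
        return reduce(getattr, dotted_name.split("."), scope)
    except AttributeError:
        raise LookupError('Function "%s" not found' % dotted_name)

print(resolve(app, "flow.start")())  # started
```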
> There are more javascript codes than java codes in our project, so
> grouping them in different naming spaces is needed. Cocoon itself is a
> good environment for easily javascript coding at the server side with
> its flowscript and continuation.
Yes :-)
> However, it seems not that care about naming space at javascript code
> itself. Maybe some ideas from dojo can be borrowed here. Like dojo has
> dojo naming space, cocoon has cocoon naming space, and within cocoon,
> there are cocoon.flow, cocoon.flow.continuation,...
I agree, Cocoon's own Javascript style is kind of flat that way.
—ml— | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200611.mbox/%3C2d3892273f15bea520130bf35d9f7b32@wrinkledog.com%3E | CC-MAIN-2017-26 | en | refinedweb |
I was writing a twitter program using tweepy. When I run this code, it prints the Python ... values for them, like
<tweepy.models.Status object at 0x95ff8cc>
import tweepy, tweepy.api
key = XXXXX
sec = XXXXX
tok = XXXXX
tsec = XXXXX
auth = tweepy.OAuthHandler(key, sec)
auth.set_access_token(tok, tsec)
api = tweepy.API(auth)
pub = api.home_timeline()
for i in pub:
print str(i)
In general, you can use the dir() builtin in Python to inspect an object.
It would seem the Tweepy documentation is very lacking here, but I would imagine the Status objects mirror the structure of Twitter's REST status format, see (for example)
So -- try
print dir(status)
to see what lives in the status object
or just, say,
print status.text
print status.user.screen_name
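Since tweepy may not be installed everywhere, the same inspection technique can be demonstrated on a stand-in object (the Status class below is a dummy, not tweepy's model):

```python
class Status:
    """Dummy stand-in for tweepy.models.Status."""
    def __init__(self, text, screen_name):
        self.text = text
        self.screen_name = screen_name

status = Status("hello world", "someuser")

# dir() lists every attribute; dropping dunder names leaves the data fields.
fields = [name for name in dir(status) if not name.startswith("__")]
print(fields)        # ['screen_name', 'text']
print(status.text)   # hello world
```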
In previous posts in this series I described a mixed-integer programming model for resource constrained project scheduling, introduced some project data structures, and wrote a class that encapsulates Solver Foundation code to create and solve the model. Now let’s tie it all together with a short sample program to try it out. First, let’s write a function to generate a random project. CreateProject takes as input a task count and resource count. The specified number of resources are created (all with 100% availability). Then the specified number of tasks are created. Each task has one resource assignment. We cycle through the resources one at a time, and we randomly generate a task duration. Then, we create a few links between the tasks. We don’t do this completely at random, since that may generate a cycle between tasks: we wouldn’t want both a link A –> B and B->A. So we take care always to have links going from lower-numbered tasks to higher-numbered tasks. You can modify this function to generate more interesting random projects if you like.
class ProjectUtilities {
    // (Replace this with code that creates a "real" project. This project is randomly generated.)
    public static Project CreateProject(int taskCount, int resourceCount) {
        System.Random random = new Random(0);
        int maxDuration = 5;
        int linkCount = Math.Max(taskCount / 5, 1);
        Resource[] resources = new Resource[resourceCount];
        for (int i = 0; i < resources.Length; i++) {
            resources[i] = new Resource("R" + i, 1);
        }
    }
Looking at the SchedulingModel.Solve() method from last time, we see that it returns a dictionary that maps task IDs to start times. Let’s write a method that prints out the schedule in an easy-to-read format. PrintProjectSchedule prints the ID, start, and finish for each task.
public static void PrintProjectSchedule(Project project, IDictionary<int, double> schedule) {
    Console.WriteLine();
    Console.WriteLine("SCHEDULE:");
    foreach (var taskSchedule in schedule) {
        Task task = project.Tasks[taskSchedule.Key];
        double start = Math.Round(taskSchedule.Value, 3);
        Console.WriteLine("{0}: [{1} - {2}]", task.ID, start, start + task.Duration);
    }
}
The last thing I would like to do is to be able to export my schedule to Microsoft Project. To do this, I can write out the schedule in CSV (comma separated) format, and import into Microsoft Project. Microsoft Project’s CSV format is very easy to figure out – just open Project, and use “Save As” to save a project as CSV (go here for more). Stare at it for a bit and you will get an idea. The only trick is that we need to convert our start and finish times from doubles to DateTimes. We will interpret the doubles as representing the number of working days since the start of the project. Most people work Monday through Friday, so in order to do this conversion we will have to skip weekends. Also we want to honor the standard working hours of 8AM – 5PM. To pull this off I will introduce two more utility routines. If you don’t care about Microsoft Project, you can just copy the code and move on 🙂
private static DateTime AddDays(DateTime start, double days, bool isStart) {
    while (days > 0) {
        if (days > 1) {
            start = start.AddDays(1);
            start = NextWorkingTime(start, false);
            days -= 1.0;
        } else {
            start = start.AddHours(9 * days);
            if (start.TimeOfDay >= TimeSpan.FromHours(isStart ? 17 : 17.01)) {
                TimeSpan duration = start.TimeOfDay - TimeSpan.FromHours(17);
                start = NextWorkingTime(start, isStart).Add(duration);
            }
            days = 0;
        }
    }
    start = NextWorkingTime(start, isStart);
    return start;
}

private static DateTime NextWorkingTime(DateTime start, bool isStart) {
    if (start.TimeOfDay >= TimeSpan.FromHours(isStart ? 17 : 17.01)) {
        start = start.Date.AddHours(24 + 8);
    } else if (start.Hour < 8) {
        start = start.Date.AddHours(8);
    }
    while (start.DayOfWeek == DayOfWeek.Saturday || start.DayOfWeek == DayOfWeek.Sunday) {
        start = start.AddDays(1);
    }
    return start;
}

public static string ToCsv(Project project, IDictionary<int, double> schedule) {
    DateTime projectStart = new DateTime(2011, 2, 1, 8, 0, 0);
    StringBuilder build = new StringBuilder(40 + project.Tasks.Count * 30);
    build.AppendLine("ID,Name,Duration,Start_Date,Finish_Date,Predecessors,Resource_Names");
    Dictionary<string, int[]> depMap = project.Dependencies
        .GroupBy(d => d.Destination.Name)
        .ToDictionary(g => g.Key, g => g.Select(d => d.Source.ID + 1).ToArray()); // need to add 1 for MS Project.
    foreach (Task task in project.Tasks) {
        string predNames = "";
        int[] predIds = null;
        if (depMap.TryGetValue(task.Name, out predIds)) {
            predNames = "\"" + string.Join(",", predIds) + "\"";
        }
        string resourceNames = "\"" + string.Join(",", task.Assignments.Select(a => a.Resource.Name)) + "\"";
        double startDay = schedule[task.ID];
        DateTime start = AddDays(projectStart, startDay, true);
        DateTime finish = AddDays(start, task.Duration, false);
        build.AppendFormat("{0},{1},{2}d,{3},{4},{5},{6}",
            task.ID + 1, task.Name, task.Duration, start, finish, predNames, resourceNames);
        build.AppendLine();
    }
    return build.ToString();
}
}
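The whole-day part of that arithmetic, advancing a date by working days while skipping Saturdays and Sundays, can be sketched briefly in Python. This is an illustration of the idea, not a port of the C# above (it ignores the 8AM to 5PM fractional-day handling):

```python
from datetime import datetime, timedelta

def next_working_day(d):
    # weekday(): Monday == 0 ... Sunday == 6, so 5 and 6 are the weekend.
    while d.weekday() >= 5:
        d += timedelta(days=1)
    return d

def add_working_days(start, days):
    """Advance start by a whole number of working days."""
    d = next_working_day(start)
    for _ in range(days):
        d = next_working_day(d + timedelta(days=1))
    return d

start = datetime(2011, 2, 1, 8, 0)   # a Tuesday
print(add_working_days(start, 4))    # 2011-02-07 08:00:00 (the weekend is skipped)
```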
We’re almost done. All we need now is a Main() routine that creates a project, solves it, and prints out the results. Here goes:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;

namespace ProjectScheduling {
    /// <summary>Driver.</summary>
    class Program {
        static void Main(string[] args) {
            int taskCount = args.Length > 0 ? Int32.Parse(args[0]) : 5;
            int resourceCount = args.Length > 1 ? Int32.Parse(args[1]) : 2;
            bool verbose = (args.Length > 2 ? Int32.Parse(args[2]) : 0) != 0;
            Project project = ProjectUtilities.CreateProject(taskCount, resourceCount);
            Console.WriteLine(project);
            Stopwatch stopwatch = new Stopwatch();
            SchedulingModel m = new SchedulingModel(verbose);
            stopwatch.Start();
            m.Initialize(project);
            Console.WriteLine("Init time = " + stopwatch.Elapsed);
            IDictionary<int, double> schedule = m.Solve();
            Console.WriteLine("Total time = " + stopwatch.Elapsed);
            ProjectUtilities.PrintProjectSchedule(project, schedule);
            Console.WriteLine(ProjectUtilities.ToCsv(project, schedule));
        }
    }
}
As you can see, you can supply the task count, resource count, and whether to print debug output through the command line. Here is the output for “schedulingmodel 5 2”:
----------------------------------------
PROJECT
TASKS
0: t0 duration = 4
1: t1 duration = 5
2: t2 duration = 4
3: t3 duration = 3
4: t4 duration = 2
LINKS
2 -> 4
RESOURCES
0: R0 max units = 1
1: R1 max units = 1
----------------------------------------
Init time = 00:00:00.2296806
Total time = 00:00:00.8855667

SCHEDULE:
0: [0 - 4]
1: [0 - 5]
2: [4 - 8]
3: [5 - 8]
4: [8 - 10]
ID,Name,Duration,Start_Date,Finish_Date,Predecessors,Resource_Names
1,t0,4d,2/1/2011 8:00:00 AM,2/4/2011 5:00:00 PM,,"R0"
2,t1,5d,2/1/2011 8:00:00 AM,2/7/2011 5:00:00 PM,,"R1"
3,t2,4d,2/7/2011 8:00:00 AM,2/10/2011 5:00:00 PM,,"R0"
4,t3,3d,2/8/2011 8:00:00 AM,2/10/2011 5:00:00 PM,,"R1"
5,t4,2d,2/11/2011 8:00:00 AM,2/14/2011 5:00:00 PM,"3","R0"
The nice thing is that you can copy and paste the bottom part of this output into a CSV file and then view the results in Project.
One last important point. If you enable the debug switch and examine the output from the Gurobi solver, you will notice that typically the solver is able to get a pretty good solution relatively quickly, and then spends a long time proving that the solution is optimal. Here is a typical output log: notice that the incumbent solution goes down to 41 days in only 4 seconds, even though the run takes over 30 seconds.
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Tim
0 0 0.02924 0 227 48.00000 0.02924 100% - 0s
H 0 0 47.0000000 0.02924 100% - 0s
0 0 0.17738 0 167 47.00000 0.17738 100% - 0s
0 0 0.17835 0 174 47.00000 0.17835 100% - 0s
0 0 0.18278 0 168 47.00000 0.18278 100% - 1s
0 0 0.18509 0 170 47.00000 0.18509 100% - 1s
H 0 0 44.0000000 0.18509 100% - 1s
0 0 0.22003 0 246 44.00000 0.22003 99.5% - 1s
0 0 0.22474 0 254 44.00000 0.22474 99.5% - 1s
0 0 0.22670 0 258 44.00000 0.22670 99.5% - 1s
0 0 0.22670 0 267 44.00000 0.22670 99.5% - 1s
0 0 0.26301 0 233 44.00000 0.26301 99.4% - 2s
H 0 0 42.0000000 0.26301 99.4% - 2s
0 0 0.26435 0 262 42.00000 0.26435 99.4% - 2s
H 0 0 41.0000000 0.26435 99.4% - 2s
0 0 0.26440 0 283 41.00000 0.26440 99.4% - 2s
0 0 0.26646 0 272 41.00000 0.26646 99.4% - 2s
0 0 0.26695 0 269 41.00000 0.26695 99.3% - 3s
0 0 0.26695 0 288 41.00000 0.26695 99.3% - 3s
0 0 0.28377 0 314 41.00000 0.28377 99.3% - 3s
0 0 0.28476 0 329 41.00000 0.28476 99.3% - 3s
0 0 0.28476 0 336 41.00000 0.28476 99.3% - 3s
0 0 0.28476 0 276 41.00000 0.28476 99.3% - 4s
0 0 0.28476 0 289 41.00000 0.28476 99.3% - 4s
0 0 0.28476 0 296 41.00000 0.28476 99.3% - 4s
0 0 0.28476 0 260 41.00000 0.28476 99.3% - 4s
0 2 0.28476 0 260 41.00000 0.28476 99.3% - 5s
H 59 39 40.0000000 0.96732 97.6% 37.9 6s
822 556 1.50000 16 317 40.00000 1.00000 97.5% 14.4 10s
884 598 1.66667 24 84 40.00000 1.00000 97.5% 26.2 15s
1377 841 12.35714 69 39 40.00000 1.00000 97.5% 24.4 20s
2308 1201 11.00000 44 40 40.00000 1.00000 97.5% 21.2 25s
2900 1531 15.00000 60 32 40.00000 1.00000 97.5% 18.9 30s
You can use the properties on GurobiDirective to terminate earlier if you want. For example, if I want to solve for at most 30 seconds, I can change the first part of Solve() to:
GurobiDirective directive = new GurobiDirective();
directive.OutputFlag = verbose;
directive.TimeLimit = (int)TimeSpan.FromSeconds(30).TotalMilliseconds;
GurobiDirective has lots of options for controlling the solver behavior: check the documentation for details.
2 thoughts on “Project scheduling and Solver Foundation revisited, Part IV”
Dear Nathan, 3 years ago I decided to stop using the OML language, because I could not model the RCPSP scheduling problem. Up to now, even reading your posts, I don't have a clue how you have handled a resource capacity constraint when the start time of a given task is yet unknown.
Hi there – if you look at the paper I referenced in the first post in the series (), you will see that resource capacity constraints are handled in equation (2), which is implemented by the code. The idea is to simply make sure that the resource usage for each time period does not exceed the capacity. The resource capacity is calculated through other constraints that relate to the schedules for the project tasks. | https://nathanbrixius.wordpress.com/2011/01/10/project-scheduling-and-solver-foundation-revisited-part-iv/ | CC-MAIN-2017-26 | en | refinedweb |
I can't figure out a way to return a false boolean in this function. The function: nestedListContains(NL, target): takes a nested list of integers (NL) and an integer(target) and returns true if target is in NL. I can't get it to return false when the target integer is not in any part of the list. So nestedListContains([1,2,3,4],3) = true, but nestedListContains([1,2,3,4],5) would be false.
Everything I've tried returns a false statement before the entire nested list has been checked, so it returns false when actually true.
For example, nestedListContains([[9, 4, 5], [3, 8]], 3) is returning false but nestedListContains([2, 3, 5, 7, 9], 7) will actually work properly and return True.
I'm not sure how to think about this. Can you share some wisdom please?
BTW, this is an exercise, so I'm running this in an interpreter embedded in a website that gives programmed errors.
- Code: Select all
def nestedListContains(NL, target):
if NL == target:
return NL
if isinstance(NL,list):
for i in range(0,len(NL)):
x=nestedListContains(NL[i], target)
if x==target:
return True
return False
Program executed without crashing.
The grader said:
Running nestedListContains([2, 3, 5, 7, 9], 7) … its value True is correct!
Running nestedListContains([2, 3, 5, 7, 9], 8) … its value False is correct!
Running nestedListContains([[9, 4, 5], [3, 8]], 3) … Error: nestedListContains([[9, 4, 5], [3, 8]], 3) has wrong value False, expected True | http://www.python-forum.org/viewtopic.php?p=5898 | CC-MAIN-2017-26 | en | refinedweb |
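For reference, here is one way to repair that recursion. The bug is that the recursive call returns True or False, but the code compares that result against target, so a match found deeper in the list is only recognised when target happens to equal True (i.e. 1). Returning the boolean directly fixes it:

```python
def nestedListContains(NL, target):
    if NL == target:
        return True
    if isinstance(NL, list):
        for item in NL:
            if nestedListContains(item, target):  # propagate the boolean as-is
                return True
    return False

print(nestedListContains([2, 3, 5, 7, 9], 7))      # True
print(nestedListContains([2, 3, 5, 7, 9], 8))      # False
print(nestedListContains([[9, 4, 5], [3, 8]], 3))  # True
```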
Given I installed pytorch by conda install pytorch torchvision -c soumith
Problem 1: _rebuild_tensor inside _utils/ does not appear in the following output. Why is that?
", ".join([t for t in dir(torch._utils)])
### inside _utils we have the following function, but why not shown up???
# def _rebuild_tensor(storage, storage_offset, size, stride):
# class_name = storage.__class__.__name__.replace('Storage', 'Tensor')
# module = importlib.import_module(storage.__module__)
# tensor_class = getattr(module, class_name)
# return tensor_class().set_(storage, storage_offset, size, stride)
# output is the following
# '__builtins__, __cached__, __doc__, __file__, __loader__, __name__, __package__, __spec__,
# _accumulate, _cuda, _import_dotted_name, _range, _type, torch'
Problem 2: _rebuild_tensor does not appear in the following output either, even though it is imported inside the torch.tensor.py file. Why is that?
", ".join([t for t in dir(torch.tensor)])
# output is the following
# '_TensorBase, __builtins__, __cached__, __doc__, __file__, __loader__, __name__, __package__,
# __spec__, _cuda, _range, _tensor_str, _type, sys, torch'
I think the two problems above are really the same one: I can't get access to torch._utils._rebuild_tensor, and I wonder why.
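The underlying question here is whether a module object actually exposes a given attribute, and that is easy to probe directly. A stdlib example (using json rather than torch, so it runs anywhere):

```python
import importlib

def module_has(module_name, attr):
    """Import a module by name and report whether it exposes attr."""
    mod = importlib.import_module(module_name)
    return hasattr(mod, attr)

print(module_has("json", "dumps"))            # True
print(module_has("json", "_rebuild_tensor"))  # False
```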
Guesses
1. I just did conda install pytorch torchvision -c soumith to install, was this the problem?
2. when I ran ./run_test.sh, I got the following error:
ERROR: test_serialization_map_location (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_torch.py", line 2711, in test_serialization_map_location
tensor = torch.load(test_file_path, map_location=map_location)
File "/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py", line 222, in load
return _load(f, map_location, pickle_module)
File "/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py", line 370, in _load
result = unpickler.load()
AttributeError: Can't get attribute '_rebuild_tensor' on <module 'torch._utils' from '/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/_utils.py'>
Solution worked this time
1. git clone pytorch
2. create a new env
3. conda install numpy setuptools cmake cffi
4. pip install -r requirements.txt
5. python setup.py install
but the 5th step has a lot of failures and warnings, I can't run import torch
then I did
6. pip install -e .
then I can run import torch and the problems above are gone. but when I run cd test/ and ./run_test.sh, previous error is gone, but I got the following Exception:
Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x114c5da60>
Traceback (most recent call last):
File "/Users/Natsume/miniconda2/envs/pytorch-experiment/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable
should I be worried about this one?
Thanks a lot!
I met the same problem. The reason may be the version of the pretrained model you load doesn't match with your pytorch. | https://discuss.pytorch.org/t/attributeerror-on--rebuild-tensor-caused-conda-install-avoided-by-pip-install/1226 | CC-MAIN-2017-26 | en | refinedweb |
Dear all,
I need your help on this programming assignment I have.
I need to save all the attributes' information to a single file.
I need to have 2 options:
Save to a default name backup.bak, and save to a filename of the user's choice.
I have found this code in an ebook
#include <fstream.h>

main()
{
    ofstream file_ptr; // Declares a file pointer.
    file_ptr.open("house.dat", ios::out); // Creates the file.
    if (!file_ptr)
    {
        cout << "Error opening file.\n";
    }
    else
    {
        // Rest of output commands go here.
The pseudocode I wrote has the following structure.
I will call a function called backup()
I will create a menu with 2 options
Save to a default location with default name : "C:\Backup\backup.bak"
Save to a default location C:\Backup\ with a user's filename choice
so
cout << "Enter your choice" << endl;
cin >> choice;
if (choice == 1) {
}
else if (choice == 2) {
}
else if (choice == 3) {
    return 0;
}
else {
    cout << "Please enter a valid choice" << endl;
}
I need your help with how I am going to save all the objects I have created in all the classes I have created.
OSGi Abandons Snapshot Proposal
With the recently-released OSGi Release 5 early access documents, one of the most anticipated features of the upcoming specification – that of SNAPSHOT style versions for OSGi – has been dropped from the specification because of concerns with existing tooling:
But the big concern come around interactions with existing tooling, management and provisioning systems. These systems would not understand a bundle having a pre-release version string. They would require a lot of changes to properly handle support the pre-release version syntax.
The problem stems from the differences between the way that Maven (and thus, Maven-compatible resolvers/build systems such as Ivy and Gradle) and the way that OSGi handle the empty qualifier are reversed. In Maven, 1.2.3.2012 <= 1.2.3, but in OSGi, 1.2.3.2012 >= 1.2.3.
This causes problems when building components which are able to work in both a non-OSGi environment (say, consumed with Maven) and with those destined to run inside an OSGi container. The convention, in Maven terms, is to work against (a number of) 2.1-SNAPSHOT builds, before finally replacing the version number as 2.1. Often a repository manager (such as Artifactory or Nexus) will rewrite the snapshots to a dated file when published for traceability.
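The OSGi side of that comparison is easy to model: major, minor and micro compare numerically, the qualifier compares as a string, and a missing qualifier is the empty string, which sorts lowest. A rough sketch (not a full version parser; it assumes three numeric segments are present):

```python
def osgi_key(version):
    parts = version.split(".", 3)
    major, minor, micro = (int(p) for p in parts[:3])
    qualifier = parts[3] if len(parts) > 3 else ""  # empty qualifier sorts lowest
    return (major, minor, micro, qualifier)

# Under OSGi rules the dated qualifier sorts above the bare release;
# Maven treats a trailing qualifier the opposite way.
print(osgi_key("1.2.3.2012") > osgi_key("1.2.3"))  # True
```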
Eclipse PDE build, and thus Maven Tycho, works by explicitly naming each built component, typically with a changing date/timestamp as part of the build. Since each built component gets a new name, the versions may be incrementally installed into an OSGi runtime with the new version overriding another.
Unfortunately, this means that the versions of the 'final' released components also contain a build qualifier, which can in some cases be longer than the name of the artifact itself (e.g. org.junit_4.8.2.v4_8_2_v20110321-1705). They also aren't consistent in the format of the qualifier (e.g. org.eclipse.jdt.ui_3.7.1.r371_v20110824-0800.jar).
Some producers, such as SpringSource, create versions in the form 1.2.3.M1, 1.2.3.M2, 1.2.3.RELEASE which work for both OSGi and Maven consumption.
Supporting -SNAPSHOT style versions for OSGi would have solved this problem. It was proposed (and even implemented in Equinox) that the Bundle-Version syntax could have been upgraded to permit 1.2.3-456 to sort lower than 1.2.3. This could have enabled bundle developers to use the -SNAPSHOT style variant (with tooling like Tycho and PDE using -SNAPSHOT as the 'magic replacement value' instead of .qualifier) for development purposes, and then release 1.2.3 as the only build in that sequence, followed by a bump to 1.2.4-SNAPSHOT.
Unfortunately, the concerns raised were speculative rather than empirical.
The complication was related to how version ranges would be handled. Existing use cases such as [1.0,2.0) were considered. In this case, the 1.0 would have permitted snapshot 1.0-* versions, whereas the 2.0 at the other end would not. Ultimately, this rule boils down to "if it's an inclusive range, it includes snapshots, and if it's an exclusive range it excludes snapshots", which hardly seems difficult to recall.
The risk is that this decision actually dissuades the use of OSGi-generated content. For about a year, there have been discussions on how to represent the Eclipse-built artifacts in a Maven namespace, by mapping the components into an org.eclipse.* style group and with the full artifact name. However, the proposal has been to just drop all the qualifiers to make it easier for humans to consume, at least from the Maven front end.
This highlights a problem with the use-case of using a qualifier for everything. Developers forget to update the version number and just use the auto-generated number when providing builds, with the result that the Eclipse repository contains multiple artifacts with the same major/minor/patch version number, but different build qualifiers. From a local Eclipse 3.7 instance, upgraded to 3.7.2, the following are just a subset of the plugins with duplicate major/minor/patch versions:
- org.eclipse.cdt.codan.checkers.ui_1.0.0.201109151620.jar
- org.eclipse.cdt.codan.checkers.ui_1.0.0.201202111925.jar
- org.eclipse.core.filebuffers_3.5.200.v20110505-0800.jar
- org.eclipse.core.filebuffers_3.5.200.v20110928-1504.jar
- org.eclipse.core.variables_3.2.500.v20110511.jar
- org.eclipse.core.variables_3.2.500.v20110928-1503.jar
- org.eclipse.emf.ecore_2.7.0.v20110912-0920.jar
- org.eclipse.emf.ecore_2.7.0.v20120127-1122.jar
- org.eclipse.equinox.frameworkadmin.equinox_1.0.300.v20110506.jar
- org.eclipse.equinox.frameworkadmin.equinox_1.0.300.v20110815-1438.jar
- org.eclipse.equinox.p2.updatesite_1.0.300.v20110510.jar
- org.eclipse.equinox.p2.updatesite_1.0.300.v20110815-1419.jar
- org.eclipse.jdt.compiler.tool_1.0.100.v_B76_R37x.jar
- org.eclipse.jdt.compiler.tool_1.0.100.v_B79_R37x.jar
- org.eclipse.jface_3.7.0.I20110522-1430.jar
- org.eclipse.jface_3.7.0.v20110928-1505.jar
- org.eclipse.ltk.core.refactoring_3.5.201.r371_v20110824-0800.jar
- org.eclipse.ltk.core.refactoring_3.5.201.r372_v20111101-0700.jar
- org.eclipse.pde.runtime_3.4.201.v20110819-0851.jar
- org.eclipse.pde.runtime_3.4.201.v20110928-1516.jar
- org.eclipse.ui_3.7.0.I20110602-0100.jar
- org.eclipse.ui_3.7.0.v20110928-1505.jar
The problem is that these numbers are generally meaningless to anyone who looks at the file system or tries to remember the number. This may not matter if you're using a repository tool such as P2 or OBR to materialise your artefacts, but the majority of the world's build tools are still built upon a Require-Bundle type of dependency with an explicit version number and name. It also complicates management of many OSGi runtimes where an easy comparison of installed bundles becomes more difficult.

The -SNAPSHOT/release/-SNAPSHOT model solves this problem, because at promotion time the version number is explicitly incremented. (This doesn't rule out the possibility of doing a further testing stage after the version number has been finalised; but typically problems found at that stage lead to a bump in the patch level anyway.) This process is used successfully by the Apache Felix project who have a list of releases with short numbers and are easy to re-use in an existing build. Arguably it is easier to build against Apache Felix for headless OSGi builds than Equinox, both because of this and because the artifacts are available in Maven central already.
In InfoQ's opinion this is a missed opportunity. By not following through with the proposed implementation, based on minor concerns, the OSGi core platform expert group have turned a tooling problem into a human problem.
Case point: Scala
by Daniel Sobral
Scala has just gone through a long discussion of how to do versioning because of this very problem. The end result is that Scala has now three different versions. For example:
[echo] maven version: 2.10.0-SNAPSHOT
[echo] OSGi version: 2.10.0.v20120403-064802-f2bc58ce1b
[echo] canonical version: 2.10.0-20120403-064802-f2bc58ce1b
Dont understand snapshots
by Peter Kriens
(((develop build test )+ qa )+ production )+
Snapshots forces you to:
(((snapshot build test)+ qa)+ ((release build test)+ qa)+ production )+
Since release requires you to modify the metadata (the versions) you run the chance that there are changes in your code that cause errors. Since any prudent company will require that whatever goes into production is tested, this seems to require an extra test cycle after a release cycle. Since it is unlikely that versions generate errors, this extra test/qa cycle is mostly wasted ... I think.
I personally prefer to have multiple repositories and ALWAYS making the final version in a repository that is only shared with my co-developers. (Package) versions are baselined against the master repository. After QA approves the result, the tested artifacts are moved to the master repository and can then go into production since they are identical (this can be verified with digests/signing).
This means that for any major.minor.micro release there is just ONE instance in the master repository. In bnd(tools) we already ignore the qualifier for resolution reasons since semantically there must not be a difference. The qualifier was intended to describe the build instance, not become everybody's favorite place to let as many angels dance as possible.
In this model, the cycle then becomes.
(((develop build test)+ qa)+ promote production)+
I've no real experience with maven so maybe I am totally wrong ...
Because the multiple repositories sounds so much more attractive I lost the urge to fight for the negative qualifiers. If you really feel strong about them anyway, I think it is fairer to join the alliance and participate in the specification work. It is hard to advocate for something that you do not feel strong about ... These things need to be driven by interested parties.
Peter Kriens
semver
by Barrie Treloar
It is concerning that multiple versions on an artefact only differ in the qualifier.
Forcing a "snapshot" style avoids this pollution of your production namespace as you can only consume released versions. | https://www.infoq.com/news/2012/04/osgi-snapshot/ | CC-MAIN-2017-26 | en | refinedweb |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Fill automatically some fields when customer enter a new record
Hi everyone,
I have a new table with differents fields.
Inside the Customer View ( Sale), OpenERP users can create a new record.
However, I would like to fill automatically the two first fields:
- One is the name of the customer
- The other one is the date
Is it possible to do it? If yes, how?
Many thanks for your help,
Selverine
Hello Selverineuse defaults in field declaration like
def _selectCustomer(self):
Customer_id = Customer id you want to set default
return Customer_id
order_date = fields.Datetime('Order Date', default=fields.datetime.now())
customer_id = fields.Many2one('res.partner', 'Customer', default=_selectCustomer)
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/fill-automatically-some-fields-when-customer-enter-a-new-record-100518 | CC-MAIN-2017-26 | en | refinedweb |
CodePlexProject Hosting for Open Source Software
Hi,
I downloaded a fresh copy of the orchard source from codeplex ().
I extracted the zip file and opened the solution in visual studio (with mvc3 installed). The project Orchard.Web is set as the startup project. When trying to build and play the solution, it throws 657 errors, all pertaining to unknown types or namespaces
and possible missing assembly references.
Am I missing a step? Just following a pluralsight tutorial. Any help would be appreciated.
Thanks
This should work just fine; something must have gone wrong while you extracted the files.
http://orchard.codeplex.com/discussions/362539
in reply to Heisenberg Uncertainty Hash
#!perl
use warnings;
use strict;
use List::Util 'first';

# Descend through nested hash refs one key at a time. Returns a ref to
# the final value if every key exists, or an empty list otherwise.
# (Caveat: a false intermediate value such as 0 or '' stops the descent.)
sub safe_exists {
    my ($href, @list_of_descent) = @_;
    if (defined( first {
        not (exists $href->{$_} and $href = $href->{$_})
    } @list_of_descent )) {
        return ();
    }
    else {
        return \$href;
    }
}
my %hash = ('a' => {
'value' => 1,
'foo' => 'bar',
},
'b' => {
'value' => 2,
'foo' => 'bar',
},
'c' => {
'value' => 3,
'foo' => 'bar',
},
);
for (qw(a b c d)) {
    if (my $href = safe_exists(\%hash, $_, 'value')) {
        print "$_ value is $$href\n";
    }
    else {
        print "Key $_ doesn't exist. Valid keys are: ";
        print join(", ", keys %hash), ".\n";
    }
}
http://www.perlmonks.org/index.pl?node_id=451848
I have to enter a three digit number and then get an output of
the first digit
the second digit
the third digit
~Sam Mixter~
Please post your code. It's not easy to guess what's going on.
use an array
if you don't know how to do an array, here is a little bit of code...
hope this helps. Code:
#include <iostream.h>
#include <stdlib.h>
int main()
{
int array[2]; // remember that C++ regards 0 as one of the parts of the array
cout << "Enter your 3 digit number: ";
cin >> array[2];
cout << "\n";
cout << array[0];
cout <<"\n";
cout << array[1];
cout << "\n";
cout << array[2];
system("pause"); // system calls are crap, I know
return 0;
}
Hi
just a few remarks about the code above:
int array[2]; -
declares an array with only 2 elements of type int.
They can be accessed via indexes 0 and 1, so the last cout with array[2] is illegal; it will print garbage.
cin >> array[2]; -
is it supposed to read several ints from cin? If true - then you are in trouble.
This expression just reads one integer value from the keyboard and places it in a nonexistent element - array[2];
so to correct the code above - enlarge the array enough to hold all three elements and read them one by one.
damyan
in this little code, when it inputs array[2], it gets all three numbers. eg. if you enter 111, it will output:
1
1
1
but if you input 1111 it will output a whole lot of garbage. In other words: it works, end of story.
Hi face_master,
:-) did you ever try this code? It seems you didn't.
just for example
let's declare a variable of the same type as the elements of the array - int - just before the declaration of the array
....
int foo = 5;
int array[2];
...
and output that variable - foo - after your
cin >> array[2];
cout << foo;
These are all local variables allocated on the stack.
When you modify an element outside the array's bounds - in our case array[2] -
you change the memory just above the array elements: the variable declared just before the array, or, if no such variable exists, you overwrite the return address of the function and it will not exit normally.
damyan
Damyan is right on declaring arrays. Declaring
int array[2]; will declare a 2 element array, with values array[0] and array[1].
As for inputting cin >> array[2] to input the entire number into each array element, maybe, but I've never seen that. You usually use a loop to input multiple values. cin >> array[2] would input only into the specific variable array[2], which in this case doesn't exist. It may be compiler dependent, but this will give out of bounds errors in msvc.
i'd write this program like this:
is this right? or is there some reason i shouldn't do this? Code:
#include<iostream.h>
#include<stdio.h>
int main()
{
int x;
char y[2];
cout<<"Enter a three digit number\n";
cin>>x;
cout<<"\n";
sprintf(y, "%i", x);
cout<<y[0]<<"\n";
cout<<y[1]<<"\n";
cout<<y[2]<<"\n";
return 0;
}
Quote:
is this right?
Probably (if you enlarge the array to hold the three digits plus the terminating '\0', i.e. char y[4]), but you can do it without the function call and array -
#include <iostream>
using namespace std;
int main()
{
int a;
cout << "Enter 3 digit number: ";
cin>>a;
for(int i=100;i>0;i/=10)
{
cout <<a/i << " ";
a%=i;
}
return 0;
}
https://cboard.cprogramming.com/cplusplus-programming/1674-i-need-help-printable-thread.html
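For comparison, the same most-significant-digit-first divide-and-modulo loop translates directly to Python. This is a sketch for illustration, not code from the thread:

```python
def digits(n, width=3):
    """Return the digits of an n-digit number, most significant first."""
    out = []
    divisor = 10 ** (width - 1)   # 100 for a three-digit number
    while divisor > 0:
        out.append(n // divisor)  # peel off the leading digit
        n %= divisor              # drop that digit from the number
        divisor //= 10
    return out

print(digits(123))  # [1, 2, 3]
print(digits(406))  # [4, 0, 6]
```

The core trick is identical to the C++ loop: integer division by 100, 10, 1 extracts each digit in turn, and the modulo discards it.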