text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
. The bean implementation class is a regular Java class, also sometimes called a POJO, or a plain-old Java object. It does not implement an EnterpriseBean type. The declaration and the configuration in the deployment descriptor can be defined within the Java code, using the annotations metadata facility. In addition, default values are provided for most configurations, thus minimizing the bean-specific configuration requirements. Under the new specification, one could deploy session beans without any ejb-jar.xml deployment descriptors, though they still exist and could be used if the developer prefers that to the annotations model. In the case of EJB 3.0 session beans that implement a Web service, the methods exposed as Web service operations are annotated with the WebMethod descriptor. Session beans that serve as Web service endpoints are annotated as a WebService. Listing 2 illustrates the earlier example (from Listing 1) of the stateful session bean using the EJB 3.0 specification. Listing 2. An EJB 3.0-based banking service stateful session bean @Remote public interface BankingService { public void deposit(int accountId, float amount); public void withdraw(int accountId, float amount); public float getBalance(int accountId); publlic void doServiceLogout(); } @Stateful public class BankingServiceBean implements BankingService { public void deposit(int accountId, float amount) { //Business logic to deposit the specified amount and update the balance } public void withdraw(int accountId, float amount) { //Business logic to withdraw the desired amount and update the balance } public float getBalance(int accountId) { //Business logic to get the current balance } @Remove public void doServiceLogout () { //Service completion and logout logic } } In EJB 2.1, a message-driven bean class implemented the MessageDrivenBean interface and the message listener interface. The callback methods were implemented in the bean class and the container, on a particular event called the corresponding method. Message-driven beans never involved the concept of a home and a remote, or local, interface. Free Download - 5 Minute Product Review. When slow equals Off: Manage the complexity of Web applications - Symphoniq Free Download - 5 Minute Product Review. Realize the benefits of real user monitoring in less than an hour. - Symphoniq
http://www.javaworld.com/javaworld/jw-08-2006/jw-0814-ejb.html
crawl-001
en
refinedweb
, Meet poor Billy.. Monday's article, the Anti-Estrogens Report Card, was so popular that some of you asked to hear Billy's story all over again. So here it is... To understand what happened to poor Billy, in this issue of EliteFitness.com News, we'll examine estrogen and its relationship to male use of anabolic steroids - brought to you by EliteFitness.com and my friend Grendel from Anabolic Extreme. Also in this week's EFN, we'll look at drugs you can use to annihilate estrogen in a blinding burst of anabolic goodness! Read on, unless of course you actually want a career in the exotic dancing arts like our friend Billy. Why Billy Has Breasts: The Story of Estrogen! Before we begin to talk about all the great benefits of anabolic steroids I think it is important to take a moment to talk about side effects and how to prevent them.. Most of the actions of estrogens appear to be exerted via the estrogen receptor (ER) of target cells, an intracellular receptor that is a member of a large super family of proteins that function as ligand-activated transcription factors, regulating the synthesis of specific RNAs and proteins. This process is almost identical to the action by which anabolic steroids affect protein synthesis. (parabolan, or in most people's cases Finaplex) convert into progesterone. High dosages of steroids for prolonged periods also shut down the body's natural production of certain hormones (particularly testosterone) when steroid therapy is stopped the body attempts to establish homeostasis by adjusting hormonal levels. The average ratio of testosterone to estrogen in a healthy male is 100:1. When drugs increase the testosterone in the body, the body will respond by increasing the estrogen in the body. Additionally, estrogen circulates in the body bound to the protein SHBG (sex hormone binding globulin) as does the testosterone. SHBG is produced in the liver and use of steroids increases the production of this protein; which has a very high receptor affinity for testosterone. With more SHBG in the body, more testosterone is bound, becoming inactive as only free testosterone can activate an androgen receptor. SHBG, however, has poorer receptor affinity for estrogen and more active free estrogen circulates in the body, further altering the hormonal balance. These effects of steroids (i.e. the potential for conversion into estrogen, as well as the disruption of the hormonal balance in the body) can cause serious side effects in male users. Thus, steroid users seek ways to block this estrogen from affecting them. That is all a very nice and formal way of saying that you need to be taking anti-estrogens when you are using steroids. See, without the anti-estrogens you get all sorts of pleasant side effects, not limited to a nice pair of breasts (with oh -so tender nipples) and extra body fat! Without anti-estrogens you will end up like poor Billy, shaking his titties in the face of wealthy Japanese businessmen. No, seriously, this chapter will explore how to effectively use anti-estrogens to prevent many of the side effects that accompany anabolic steroid usage. The Drugs Are Your Friends Oral clomiphene citrate (Clomid) is an ovulation stimulant used to treat ovulatory failure in women. Oral tamoxifen citrate (Nolvadex) belongs to a class of antineoplastics called antiestrogens. It is used to treat breast cancer. Body builders use both of these drugs. Why on earth would they do that? The answer is that both of these drugs are anti-estrogens. The term anti-estrogen is a little inaccurate. This class of pharmaceutical does not engage in some sort of matter/anti-matter reaction, annihilating estrogen in a blinding burst of anabolic goodness. Rather, let us think of the classical anti-estrogen drugs (such as nolvadex and clomid) as estrogen receptor antagonists (ERA). These ERAs are chemicals that are close enough in structure to estrogen to fit into the estrogen receptor site; however these chemicals do not have the same chemical effect as estrogen. The result is that any estrogen produced by the body or exogenous estrogen cannot find an open receptor site to attach to. The free-floating estrogen then presents far less problems to homeostasis. There is a lot of conflict over using nolvadex, clomid and other ERAs. The regulation of estrogen-induced cellular effects is a multi-step molecular process. The diversity of estrogen and anti-estrogen effects on cellular functions is also modulated by tissue and gene specificity. This diversity of reaction may be explained by different levels of molecular regulation, including the presence of two distinct estrogen receptor isoforms (ER alpha and ER beta), their binding to activator or co-repressor transcriptional proteins, and their affinity to different DNA binding domains of target genes (estrogen responsive element or API). These mechanisms may account for the specific responses to estrogens or anti-estrogens according to tissue, cell or gene level. Therefore, in English, a drug like nolvadex, which targets breast tissues, is going to do a better job of preventing gynocomastia than is clomid. However clomid has the benefit of boosting the levels of follicle stimulating hormone, which helps restore the bodies natural testosterone levels and protects against testicular atrophy. Many people stop using their ERA drugs when they end the cycle. That is a terrible idea. Clomid, as we have already discussed, helps immensely with your recovery processes. But remember, there is almost always an estrogen backlash to having been using testosterone drugs for so long. Therefore, many symptoms of high estrogen levels appear after the cycle. I would continue to use both Clomid and Nolvadex for up to 3 weeks after the last of the drugs have left your body. Remember, if on Friday you take 500 mg of a longer acting drug like Sustanon, then don't consider the following few weeks as truly off time. That is why it is important to know how long the drugs are effective in your body and yet another reason to switch to faster acting drugs in the last few weeks of a cycle. "Who! Learn Everything at:! Effective dosages of these two drugs are debated. I would recommend that the two drugs be used together, Nolvadex at 20 mg per day, and clomid at 50 mg per day. If Nolvadex is used by itself, 20-40 mg are sufficient. 50-100 mg of clomid can be used if clomid is the only ERA drug. Clomid should be used for two weeks after the last steroid injection to help return your body to its natural hormonal state. Nolvadex and Clomid are mildly expensive, but very available because they are not scheduled drugs and can be legally imported. There is a second class of drug used to combat estrogen side effects from what is grandly called steroid therapy; there are aromatase inhibitors. As mentioned previously in this chapter, the body can convert testosterone into estrogen using the enzyme aromatase. This second group of drugs, which I will call the inhibitors, prevents this process from occurring at all. This class of medication is generally only prescribed for severe conditions and is generally more expensive then any of the ERA. Teslac, (testolactone), has fallen out of favor for several reasons. First of all, almost one gram daily is needed to achieve sufficient estrogen synthesis inhibition. This makes this a very expensive drug to use. Also, it is currently a scheduled drug because it is a testosterone derivate. Cytadren (aminoglutethimide) is a better choice, requiring dosages of between 250-500 mg per day to suppress estrogen synthesis. 250mg cytadren doesn't cause significant desmolase inhibition, so there would still be cortisol and other steroids, while estrogen is minimized! Cytadren is used therapeutically to combat Cushing's syndrome because it also interferes with the body's ability to synthesis cortisol. Sounds like fun, huh ... no cortisol, no estrogen. What a fantastic environment. Tell that to Andreas Munzer! Cytadren can cause cysts as well as effect things like blood clotting. It is reported that Munzer used 1-2g(!) of cytadren/day! Therefore cytadren use should be done with precision. Arimidex (anastrozole) is a drug designed to combat second stage breast cancer. It is an extremely potent drug; one pill per day is sufficient to almost entirely inhibit estrogen in the body. However, the draw back is that this one pill per day can cost you around ten dollars. The final conclusion about inhibitors is that these are far more powerful drugs then the ERA. All the drugs listed above effect a much wider hormonal spread then the anti-estrogens and they are also going to cost you a lot more. Of all the drugs mentioned, I think that arimidex is the most useful drug for the body builder. Duchaine helped promote cytadren, particularly because of its anti-catabolic ability to suppress cortisol. But, even he acknowledged the double-edged sword that this drug was. Too little cortisol is painful to the joints and in the end, extremely dangerous. I would not recommend the use of cytadren, but I have provided the moderate dosage schemes. The bottom line: These are not drugs to pop like M&Ms. The Argument Against Our Little Friends But these drugs decrease your gains right? Damn it. I hate hearing that phrase clutched to... you guessed it... peoples' breast like a mantra. First of all, there is no way of telling what your gains would have been like without nolvadex or clomid. The scientific evidence that gave rise to this whole dispute (which I believe Duchaine had a hand in too) is that in addition to its anti-estrogenic action requiring estrogen receptors (ER) and leading to growth arrest of breast cancers, studies have previously shown that the anti-hormone tamoxifen (nolvadex) is able to block EGF, insulin and IGF-I mitogenic activities in total absence of estrogens. Thus the excessive use of anti-estrogens will actually result in a loss of some of the most anabolic of hormones (insulin and insulin-like growth factor 1). Steroid antagonists can inhibit not only the action of agonist ligands of the receptors they are binding to, but can also modulate the action of growth factors by decreasing their receptor concentrations or altering their functionality. Translation: Yes, you are probably compromising your anabolic state by using ERA. But does that mean they shouldn't be used? No. I have heard statements so ridiculous as "Don't use anti-estrogens, they cut into your gains and cost too much. Just get surgery". Lovely, just fucking brilliant. Sure, like surgery isn't going to cut into your workouts or your gains. Anti-Estrogens Lets consider the top drugs used to combat the estrogen based side effects of anabolic steroids. Clomid Taken daily during a cycle as an anti-estrogen, dosages range between 50-100 mg per day if used exclusively. If combined with Nolvadex, 50 mg per day is sufficient. For more information on this drug, see the chapter entitled Some Specific Drugs Considered. Nolvadex If used alone then 20-40 mg are needed. Some athletes, because of evidence that it negatively impacts various growth factors in the body, dislike this drug. If combined with clomid, 10-20 mg are sufficient. For more information on this drug, see the chapter entitled Some Specific Drugs Considered. Proviron This drug binds to androgen receptors but is also helps prevent excess testosterone from converting into estrogen. I consider this effective when stacked with either clomid or nolvadex. 1 pill will do if combined with either 50 mg of clomid or 20 mg or nolvadex. On its own, I suggest at least 2 pills. Arimidex This is a very potent drug that prevents the body from converting testosterone into estrogen. The drawback is that is very expensive. The minimum effective dosage is around between a quarter and a half of a milligram/day. This drug does not need to be combined with any other during the cycle; however I recommend you begin using arimidex two weeks prior to commencing your cycle so that the drug can effectively eliminate the enzyme that permits conversion of testosterone to estrogen. Clomid is still useful in the post cycle period. For more information on this drug, see the chapter entitled Some Specific Drugs Considered. Remember, anti-estrogens are not scheduled. It is perfectly legal to import them and there are many online resources that do this with accuracy and reliability. Don't gamble with this aspect of your health and don't start until you have all that you need to cover yourself throughout the whole cycle. There is no excuse for not having plenty of clomid and nolvadex on hand. So, back to our fiend Billy. If he had only taken a few simple precautions, he would not find himself in the predicament he is in now. What has Billy been up to lately? Well, when last I heard from him, he was working at the Hooters - and not in the kitchen I might add. Yours in sport, George Spellwin.
http://www.elitefitness.com/articledata/efn/032906.html
crawl-001
en
refinedweb
I obtained a Persistance of Vision kit from ladyada.net . I soldered it together, and amazingly it worked immediately (I am horrible at soldering, I nead to practice more). Here's a picture of it in action, displaying ">KRW<": Here is some code that converts a text file into the binary values for the minipov.c program. make_pov.c To use it, first compile (only tested on Linux). Run like: ./make_pov < test > test.h (Sample input test and resulting output test.h ). Then change the minipov.c file to do an: #include "test.h" where the old const static int image[] line was. Then just compile and upload the code as described in the pov2 documentation. Back to VMW projects...
http://www.deater.net/weave/vmwprod/hardware/pov2/
crawl-001
en
refinedweb
After-Cop Report Curing Health Care Focus On The Corporation From Seattle to Tacoma: The Retreat of Organized Labor The General and the Judge Jon Youngdahl and the Tacoma Police--A Sorry Story Smearing the WTO Protests Volunteerus Chompus Eat These Shorts! Backtalk Activist Calendar Reclaim Our History Nature and Politics Goring Al In the recent somnambulistic primaries, the plodding Bradley drew blood from an unexpected flank: Gore's reputation as an honest broker. Bradley exposed Gore as a political transvestite, a lifelong conservative Democrat, who only adopts the mantle of liberalism when it's convenient (such as in Democratic primaries). He reeled off a litany of Gore flip-flops on abortion, gun control, tobacco, the Comprehensive Test Ban Treaty, affirmative action, welfare reform, and civil rights. This was, Bradley tried to remind people, the man who in his 1988 campaign race-baited Jesse Jackson and first raised the specter of Willie Horton against Michael Dukakis. Many observers were caught off-guard when Bradley also ridiculed Gore's reputation as an environmentalist. The corporate press snickered. "Attacking Gore on the environment is like questioning Mother Teresa's faith," said Jonathan Alter, Newsweek's chief talking head on the cable news shows. In the 1992 campaign, Gore used the environment as a sledgehammer against Bush and Quayle. One issue raised over and over was the WTI hazardous waste incinerator slated for East Liverpool, Ohio, which Gore vowed to block. But within weeks of taking office, the EPA, run by Gore's former staffer, Carol Browner, reversed course and issued the permit to the toxic plant. This was a sign of things to come. It was swiftly followed by capitulations on The Everglades, ancient forests, fuel efficiency standards, pesticides in foods, wetland protection, oil development in Alaska and the Gulf of Mexico, subsidies for nuclear power, organic food standards, and ozone-depleting chemicals. Connoisseurs of Gore's career aren't shocked by any of this. His voting record on environmental matters during his tenure in the House and Senate was mediocre by any standard. Gore, ever ready with an excuse, puts the blame on his home state of Tennessee, which he suggests is retrograde in environmental matters. But the people who know Gore best say he was rarely if ever there for them on pressing matters on the homefront, ranging from strip mining to radioactive contamination at Oak Ridge International. "More often than not Al Gore sided with the polluters against the people," said Maddy Cochrane, a longtime environmental organizer in Chattanooga. "We came to learn that he just followed the money." When confronted with the zigzagging pattern of his positions on these matters, Gore becomes petulant. Moments after he learned that Friends of the Earth had endorsed Bradley, Gore was on the phone to the CEOs of the other big green groups, claiming that he had been personally hurt by the decision. The ploy largely worked. Within days executives from the Sierra Club and NRDC issued statements vouching for Gore's green bona fides and chiding Friends of the Earth for its political heterodoxy. The move by the big groups to provide cover for Gore dismays America's premier green, David Brower. "Environmentalists and progressives cannot endorse rhetoric, and that's the greenest thing we have seen from the Vice President," says Brower, chairman of Earth Island Institute. "I first thought that he was just keeping bad company. So I created the bumpersticker `Free Al Gore!' and even got him delivered a sweatshirt with the slogan on it. But things have gotten worse since then and the Clinton-Gore Administration even seems to have fumbled the ball on an issue as non-controversial as offshore oil drilling." Gore hopes to pin the responsibility for the lame record of the last eight years on Clinton. But it won't sell. Clinton was indifferent to environmental issues and gave Gore free rein on green matters. In large measure, Gore's people were tapped to fill the key environmental posts. Aside from Browner, Katie McGinty, another Gore senate aide, headed the powerful Council on Environmental Quality until last year. Former Gore staffers were also at the Department of Energy, the Commerce Department, and the Office of Management and Budget. Gore intimate Tim Wirth, the former senator from Colorado, served as Assistant Secretary of State for the Environment, where he spearheaded the outrageous move to loosen protections for dolphins from industrial tuna-fishing fleets. Then there's George Frampton, who became Assistant Secretary of Interior, resigned in 1997, served for a year as Gore's lawyer during the campaign finance scandal, then went back to work in the administration in McGinty's old position at the CEQ. The Gore team ran the show from the beginning. The vice president himself has been caught red-handed on several occasions going to bat for corporations against the interests of environmentalists. A little reported example is Gore's fervent efforts on behalf of Monsanto, the St. Louis-based chemical giant. The vice president made a series of forceful calls to heads of state, including the presidents of Ireland and France, stressing his opposition to moves by the European Union to ban import of genetically engineered seeds and food products. The lesson of Al Gore's political career is that he is an ideologue. He gravitates toward the side that offers him the greatest advantage. Now that Bradley has been vanquished and the key progressive constituencies already sewn up, Gore will start his natural migration back to the right, stiff-arming blacks, working people, and greens. By the time he gets to LA in August, he'll be reading from the DLC pro-business playbook once again. The environmentalists could throw a monkey-wrench in Gore's plans by massing their support behind Ralph Nader's run on the Green Party ticket and making clear that they did so mainly because Gore was AWOL on the environment when it counted most. Nader won't win, but he could garner just enough votes to make Gore lose key states, such as California, New York, and Washington. Inflicting this kind of political pain is the only sure way to get the Democrats' (and Republicans') attention and redeem the credibility of the environmental movement. As Brower said: "It's time to start standing up for what we stand on." subscribe / donate / tiny print / guidelines for writers / index
http://eatthestate.org/04-16/NaturePolitics.htm
crawl-001
en
refinedweb
I orginal C# projects So for that and the subsequent VB .NET projects that you will find here I ask you to thank Robert Ranck. Cheers Robert, your contributions will surely make this series more open to all .NET developers. And another thanks also goes out Karl Shifflett (AKAs new series will be excellent and I urge you all to encourage Karl on this series. Its not easy obligating ones self to write an entire series in one language let alone 2. Karls 1st article is located right here, have a look for yourself. Personally I love it. This article is the 5th in my series of beginners articles for WPF. In this article we will discuss databinding. The proposed schedule for this series will still be roughly as follows: In this article I'm aiming to cover, is a brief introduction into the following: I will NOT be covering the following collection based binding areas, so if you want to know more about these you'll have to do a bit of extra research using the links provided (see i'm 1/2 ok, at least I researched the correct links for you) Databinding is actually not that new (ok how its done in WPF is new) but we have has binding in ASP .NET and Winforms for some time now. WPF has borrowed from both of these to create a really really good binding framework. But what is this binding stuff, for those of you that have never heard of it. Well basically binding allows UI Elements to obtain their data either from other UI Elements or business logic objects/classes. So in theory its very simply, we have a source that provides a value, and we have a target that wants to use the sources value, and we kind of glue them together using binding. A typical binding arrangement is as follows: Typically, each binding has these four components: a binding target object, a target property, a binding source, and a path to the value in the binding source to use. The target property must be a Depenedency Property (which I've just covered). Most UIElement properties are dependency properties and most dependency properties, except read-only ones, support data binding by default. That's a very simplified version of whats going on in binding. Of course to facilitate these binding operations there are many seperate considerations and bits of syntax that need to be considered. In the next sub sections you will look at some (no not all, i'd be here all year) of the binding syntax and ideas/approaches to creating happy bindings. There is one important thing to know before we get into the ins and outs of Databinding, and that is the DataContext property, that every FrameworkElement has. DataContext is a concept that allows elements to inherit information from their parent elements about the data source that is used for binding, as well as other characteristics of the binding, such as the path. DataContext. In XAML, DataContext is most typically set to as a Binding declaration. You can use either property element syntax or attribute syntax. And is normal set something like this DataContext FrameworkElement <Window.Resources> <src:LeagueList x: ... ... </Window.Resources> ... ... <DockPanel DataContext="{Binding Source={StaticResource MyList}}"> You can also use code to set DataContext, simply by using a <someElement>.DataContext = <someValue> Another thing to note is that if some object that inherits from a parents DataContext ommits which fields it will use to bind to, such as <MenuItem Tag="{Binding}"> This means that the entire object that is used as its parents DataContext will be used to assign to the Tag property Before we can proceed onto looking at the nitty griity of databinding there are several key areas which we need to cover first. Namely As shown above in the the general idea behind databinding section, flow of binding could be 2 way. There are several possibilities that can be configured when using databinding. These are controlled using the Binding.Mode values: Use the Binding.Mode property to specify the direction of the data flow. To detect source changes in one-way or two-way bindings, the source must implement a suitable property change notification mechanism such as INotifyPropertyChanged. For an example, see How to: Implement Property Change Notification. Change notification is such an important lesson to learn in databinding that we need to look at that right now. So lets have a look at an example of using this interface INotifyPropertyChanged. To support OneWay or TwoWay binding to enable your binding target properties to automatically reflect the dynamic changes of the binding source, your class needs to provide the proper property changed notifications, this is where INotifyPropertyChanged is used. using System.ComponentModel; namespace SDKSample { // This class implements INotifyPropertyChanged // to support one-way and two-way bindings // (such that the UI element updates when the source // has been changed dynamically) public class Person : INotifyPropertyChanged { private string name; // Declare the event public event PropertyChangedEventHandler PropertyChanged; public Person() { } public Person(string value) { this.name = value; } public string PersonName { get { return name; } set { name = value; // Call OnPropertyChanged whenever the property is updated OnPropertyChanged("PersonName"); } } // Create the OnPropertyChanged method to raise the event protected void OnPropertyChanged(string name) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(name)); } } } } And heres a VB .NET version Imports System.ComponentModel ' This class implements INotifyPropertyChanged ' to support one-way and two-way bindings ' (such that the UI element updates when the source ' has been changed dynamically) Public Class Person Implements INotifyPropertyChanged Private personName As String Sub New() End Sub Sub New(ByVal Name As String) Me.personName = Name End Sub ' Declare the event Public Event PropertyChanged As PropertyChangedEventHandler Implements INotifyPropertyChanged.PropertyChanged Public Property Name() As String Get Return personName End Get Set(ByVal value As String) personName = value ' Call OnPropertyChanged whenever the property is updated OnPropertyChanged("Name") End Set End Property ' Create the OnPropertyChanged method to raise the event Protected Sub OnPropertyChanged(ByVal name As String) RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs(name)) End Sub End Class To implement INotifyPropertyChanged you need to declare the PropertyChanged event and create the OnPropertyChanged method. Then for each property you want change notifications for, you call OnPropertyChanged whenever the property is updated. I can not tell you how important the INotifyPropertyChanged interface is, but believe me its very very important, and if you plan to use Binding in WPF, just get used to using the INotifyPropertyChanged interface. Binding.Mode property of the binding. However, does your source value get updated while you are editing the text or after you finish editing the text and point your mouse away from the TextBox? The Binding.UpdateSourceTrigger property of the binding determines what triggers the update of the source. The options available are as follows : The following table provides an example scenario for each Binding.UpdateSourceTrigger value using the TextBox as an example: There are many properties that may be used within the Binding class, as such I will not have time to cover all of them, though I shall attempt to go through the most common buts of syntax. As most binding will usually be set in XAML I will be concentrating on the XAML syntax, though it should be noted that anything that can be done in XAML can also be done in C#/VB .NET code behind. Ok so lets have a look at the basic syntax (we will cover more advanced stuff in the sections below). The most basic form of binding if to create a binding that binds to a value of an existing element (this is covered in more detail below also) I just wanted to introduce the syntax and go back and show you how to do the Binding.Mode and Binding.UpdateSourceTrigger stuff first. So here is probably one of the simplist Bindings that you will see. This example has 2 buttons, the 1st button (btnSource) has a Yellow Background property. The 2nd button uses the 1st button (btnSource), as the source for a Binding where the 1st button (btnSource) Background value is being used to set the 2nd buttons Background. <Button x:Yellow BUtton</Button> <Button Background="{Binding ElementName=btnSource, Path=Background}" Width="150" Height="30">I am bound to be Yellow Background</Button> So that'd fairly simple right. But I just wanted to go back and have a quick look at how we could also use the Binding.Mode and Binding.UpdateSourceTrigger properties within the binding sytnax. Well as it turns out, its fairly easy, we just add the extra property and its desired value into the binding expression such as <TextBox x: <TextBox Width="150" Height="30" Text="{Binding ElementName=txtSource, Path=Text, Mode=TwoWay, UpdateSourceTrigger=LostFocus }"/> An Important Note : Recall from part 2 that I mentioned that Binding was a markup MarkupExtension. As such the XAML parser knows how to treat the { } sections. But really this is just shorthand, that can (if you prefer) be expressed using the longer more verbose syntax as shown below. <Button Margin="10,0,0,0" Content="Im bound to btnSource, using long Binding syntax"> <Button.Background> <Binding ElementName="btnSource" Path="Background"/> </Button.Background> </Button> This is a decision you will have to make yourself, me personally I prefer the { } syntax, though you dont get any intellisense help within Visual Studio if you do use the {} syntax. When you set out to set up a Binding there are several different things you need to consider Once you know or have considered all this, its really as easy as ABC. As part of the demo solution, you will find a project entitled "BindingToUIElements" which when run, will look like the following: This simple demo application shows 3 different bindings going on. Ill briefly discuss each of these now. This simple example uses the 1st buttons Background as the source value for the other 2 Buttons Background The code for which is as follows: <!-- Simple Element Binding--> <Label Content="Simple Element Binding" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <Label Content="Simple Element Binding"/> <Button x: <Button Margin="10,0,0,0" Background="{Binding ElementName=btnSource, Path=Background}" Content="Im bound to btnSource"/> <Button Margin="10,0,0,0" Content="Im bound to btnSource, using long Binding syntax"> <Button.Background> <Binding ElementName="btnSource" Path="Background"/> </Button.Background> </Button> </StackPanel> This simple example uses the SelectedItem.Content of a ComboBox as the source for a Binding. Where the Background of a Button is changed dependant on the SelectedItem.Content of the ComboBox SelectedItem.Content ComboBox Button <Label Content="More Elaborate Binding" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <Label Content="Choose a color"/> <ComboBox Name="cmbColor" SelectedIndex="0"> <ComboBoxItem>Green</ComboBoxItem> <ComboBoxItem>Blue</ComboBoxItem> <ComboBoxItem>Red</ComboBoxItem> </ComboBox> <Button Margin="10,0,0,0" Background="{Binding ElementName=cmbColor, Path=SelectedItem.Content}" Content="Im bound to btnSource"/> </StackPanel> This simple example uses 2 TextBoxs, where there is a TwoWay Binding.Mode applied and the as the Binding.UpdateSourceTrigger is set to PropertyChanged, which means that the source of the Binding will be updated when the 2nd TextBoxs value changes TextBox Binding.Mode Binding.UpdateSourceTrigger <!-- Using UpdateSourceTrigger/Mode--> <Label Content="Using UpdateSourceTrigger/Mode" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <TextBlock TextWrapping="Wrap" Text="This uses TwoWay Binding and UpdateSourceTrigger=PropertyChanged. Type in one textbox then the other,and see them update each other" Width="400"/> <TextBox x: <TextBox Width="50" Height="25" Margin="5,0,0,0" Text="{Binding ElementName=txtSource, Path=Text, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged }"/> </StackPanel> XML is used a lot these days, both for configuration information and data exchange, and indeed these days even for UI design. Remember XAML is a XML derivative. But that's not all we can use XML data for. We can infact bind to XML data. This is fairly easy to do in XAML, we can either have the XAML hold the XML data (though this is probably not the norm) or use external XML files. Either way the normal proceeding is to use a XmlDataProvider within the XAML/code. Like I said earlier as I thinking most Bindings will be done in XAML, ill stick to that. XmlDataProvider As part of the demo solution, you will find a project entitled "BindingToXML" which when run, will look like the following: The top 2 sections of this demo app using XAML held XML data, and the bottom 2 use an external XML file. It is entirely possible to hold all the XML data within the XAML file, and use this data as a source for Binding. Lets see an example of this <Window x:Class="BindingToXML.Window1" xmlns="" xmlns: <Window.Resources> <!-- inline XML data--> <XmlDataProvider x: <x:XData> <Films xmlns=""> <Film Type="Horror" Year="1987"> <Director>Sam Raimi</Director> <Title>Evil Dead II</Title> </Film> <Film Type="Murder" Year="1991"> <Director>Jonathan Demme</Director> <Title>Silence Of The Lambs</Title> </Film> <Film Type="Sc" Year="1979"> <Director>Ridley Scott</Director> <Title>Alien</Title> </Film> </Films> </x:XData> </XmlDataProvider> </Window.Resources> <ScrollViewer> <StackPanel Orientation="Vertical"> <!-- Simple XPath Binding using inline XML data--> <Label Content="Show all Films (using inline XML data)" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <ListBox ItemsSource="{Binding Source={StaticResource xmlData}, XPath=Film/Title}"/> </StackPanel> <!-- More Complicated XPath Binding using inline XML data--> <Label Content="Show Only Films After 1991 (using inline XML data)" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <ListBox ItemsSource="{Binding Source={StaticResource xmlData}, XPath=*[@Year>\=1991]}"/> </StackPanel> </StackPanel> </ScrollViewer> </Window> It can be seen from this example, that we use an in line (in the XAML) xml dataset, for the XmlDataProvider. And that we use the XmlDataProvider as the BindingSource for the ListBox. In this example the 1st ListBox shows all Films/Titles as we are just fetching the Film/Title nodeset using the Binding.XPath=Film/Title so we get all Titles shows. ListBox Binding.XPath=Film/Title The 2nd ListBox is a bit fussier and uses a bit of XPath notation to traverse the attribute axis and ony fetch those nodes that have Year > 1991, so we get less nodes returned As I say though its going to be more common to use external XML files with the XmlDataProvider, which can be done as follows. Where the XmlDataProvider Source property is set to the external XML file. <XmlDataProvider x: </XmlDataProvider> Using this arrangement is much the same as we saw before where we can use XPath to fetch all the nodes or use XPath to only match those nodes where the attribute match the requirements. In this example the 2nd ListBox only shows "Mexican" resteraunts from the XML file, using the following XPath <ListBox ItemsSource="{Binding Source={StaticResource xmlDataSeperateFile}, XPath=*[@Type\=\'Mexican\']}"/> And heres the full example <Window x:Class="BindingToXML.Window1" xmlns="" xmlns: <Window.Resources> <!-- external XML data--> <XmlDataProvider x: </XmlDataProvider> </Window.Resources> <ScrollViewer> <StackPanel Orientation="Vertical"> <!-- Simple XPath Binding using seperate XML data--> <Label Content="Show all Resteraunts (using seperate XML data)" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <ListBox ItemsSource="{Binding Source= {StaticResource xmlDataSeperateFile}, XPath=Resteraunt/Name}"/> </StackPanel> <!-- More Complicated XPath Binding using seperate XML data--> <Label Content="Show Only Mexican Resteraunts (using inline XML data)" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <ListBox ItemsSource="{Binding Source={StaticResource xmlDataSeperateFile}, XPath=*[@Type\=\'Mexican\']}"/> </StackPanel> </StackPanel> </ScrollViewer> </Window> And heres the associated XML file (should you be curious about its structure) <?xml version="1.0" encoding="utf-8" ?> <Resteraunts xmlns=""> <Resteraunt Type="Meat"> <Name>The MeatHouse</Name> <Phone>01237 78516</Phone> </Resteraunt> <Resteraunt Type="Veggie"> <Name>VegHead</Name> <Phone>99999</Phone> </Resteraunt> <Resteraunt Type="Mexican"> <Name>Mexican't (El Mariachi)</Name> <Phone>464654654</Phone> </Resteraunt> </Resteraunts> NOTE : Using XML to provide Binding values, is fine, but dont expect that you will be able to update XML simply by using a Binding.Mode set to TwoWay. That wont work. XML data binding is a simple one way/not updatable type of arrangement. Although I'm not going to go into this, Beatriz The Binding Queen Costa has a good blog entry right here if you are curious Typically WPF development will more than likely involve binding to an entire collection at some point. Now this is very very easy to do in WPF. As with most things there is many ways to do this, i'll outline 2 possible way, but of course there will be more. I just like these ways thats all. One important thing to ALWAYS keep in mind is change notification, recall we addressed that for individual classes by using the INotifyPropertyChanged interface. But what about Collections that will hold objects, what should we do about that? Well as luck would have it, these days there is a nice ObserverableCollection that fills this spot quite nicely. This collection takes care of notifying the UI everytime an element in the collection is added/removed. We still need to make sure that each of the held objects does its own change notification using the INotifyPropertyChanged interface. But by using ObserverableCollection and classes that implement INotifyPropertyChanged we are sitting pretty, no change will pass us by. ObserverableCollection ObserverableCollection As part of the demo solution, you will find a project entitled "BindingToCollections" which when run, will look like the following: So binding to such a collection becomes a snap. Heres 2 possible ways to do this with using ListBoxes. The 1st ListBox has its ItemSource set in code behind as follows: ItemSource BindingToCollections { /// <summary> /// Interaction logic for Window1.xaml /// </summary> public partial class Window1 : Window { private People people = new People(); public Window1() { InitializeComponent(); people.Add(new Person { PersonName = "Judge Mental" }); people.Add(new Person { PersonName = "Office Rocker" }); people.Add(new Person { PersonName = "Sir Real" }); this.lstBox1.ItemsSource = people; } } } And in VB .NET Imports System Imports System.Collections.Generic Imports System.Linq Imports System.Text Imports System.Windows Imports System.Windows.Controls Imports System.Windows.Data Imports System.Windows.Documents Imports System.Windows.Input Imports System.Windows.Media Imports System.Windows.Media.Imaging Imports System.Windows.Navigation Imports System.Windows.Shapes Imports System.Collections.ObjectModel ''' <summary> ''' Interaction logic for Window1.xaml ''' </summary> Partial Public Class Window1 Inherits Window Private people As New People() Public Sub New() InitializeComponent() Dim _Person As New Person _Person.PersonName = "Judge Mental" people.Add(_Person) _Person.PersonName = "Office Rocker" people.Add(_Person) _Person.PersonName = "Sir Real" people.Add(_Person) lstBox1.ItemsSource = people End Sub End Class And the matching XAML for this Binding is as follows <Window x:Class="BindingToCollections.Window1" xmlns="" xmlns:x="" xmlns: <ScrollViewer HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto"> <StackPanel Orientation="Vertical"> <!-- ListBox Source Set In Code Behind--> <StackPanel Orientation="Vertical"> <Label Content="ListBox Source Set In Code Behind" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <ListBox x: </StackPanel> </StackPanel> </ScrollViewer> </Window> The 2nd ListBox has its ItemSource set in XAML as follows: <Window x:Class="BindingToCollections.Window1" xmlns="" xmlns:x="" xmlns: <Window.Resources> <local:People x: <local:Person <local:Person </local:People> </Window.Resources> <ScrollViewer HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto"> <StackPanel Orientation="Vertical"> <!-- ListBox Source Set By Using Resources --> <StackPanel Orientation="Vertical"> <Label Content="ListBox Source Set By Using Resources" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <ListBox x: </StackPanel> </StackPanel> </ScrollViewer> </Window> You can see that we have an instance of the People object directly in the XAML, within a resources section, and that the declared People object has several children of type Person. This is all thanks to the XAML parser which knows that children should be added using the Add() method of the ObserverableCollection. People Person I should point out that these examples are merely demonstrating how to Bind to collections. Probably in a production system, the collection would be part of a BAL layer/ or maybe part of a model within a MVC/MVVM pattern. Im going for quick an dirty to get the point across. One other thing that I would like to bring your attention to is the rendering of the items within the ListBox. See how they simply show "BindingToCollection.Persons", as plain text. This is obviously not the desired effect, but happens because WPF does not know what properties to show and how they should be shown for a Person object. This will be the subject of my next article Templates. I wont write any more about this, but just know that we can change the way a data item looks using a DataTemplate. If you really can't wait you can have a look at these links DataTemplate As I stated earlier I did not cover grouping/sorting/filtering etc etc, and if you really want to know more about these, please use the links provided at the start of this article. Imagine a situation where we have a bound data value that we wish to format in some way, say by using a short date format instead of a long date fomat. Up until now we have simply used the raw Binding value. So this wouldnt be possible, we would have to make sure the bound value had what we want to display. Luckily WPF has a trick up its sleeve, we can use a class that implements the IValueConverter interface to provide a new value for the binding. ValueConverters are like the sprintf of the WPF world. You can use a ValueConverter to literally provide a new object to a Binding. This may be an object of the same type as the orginal object, or could be a totally new object. For example in WPF there is a principle of a Freezable Object which is an imutable object. If you try and animate a Frozen object, you will have problems. I have in the past circumnaviagted this issue with using a Clone ValueConverter to provide a cloned object to a Binding which is then used in an animation. But more typically you may use ValueConverters for small formatting changes. ValueConverters sit between the source value and the bound target property, and their sole purpose is so take a value and supply a different value. The IValueConverter interface contains the following methods that must be implemented. object Convert(object value, Type targetType, object parameter, CultureInfo culture) Which is used to convert from the source object into the new value which will be used by the target object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) Which is used to convert from the target object back to the source. This will be used where a Bindings Binding.Mode has been set to TwoWay or OneWayToSource. More often than not this will not be used and will simply throw an exception. So how do we use of of these ValueConverters in our code. Well quite simply we use the normal Bindings expression but we also state which converter to use for the Binding.Converter property. Lets see an example shall we. Binding.Converter As part of the demo solution, you will find a project entitled "ValueConverters" which when run, will look like the following: This small example actually uses 2 ValueConverters, the top one converts from words to a Brush that is used to color a Rectangle. The 2nd value converter uses an explicitly set DataContext in the code behind where 2 labels have their DataContext set to a new DateTime. Lets see the code Brush Rectangle DataContext DateTime First the XAML <Window x:Class="ValueConverters.Window1" xmlns="" xmlns:x="" xmlns: <Window.Resources> <local:WordToColorConverter x: <local:DateConverter x: </Window.Resources> <ScrollViewer HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto"> <StackPanel Orientation="Vertical"> <!-- Using Vaue Converter To Convert Text Into Colors --> <Label Content="Using Vaue Converter To Convert Text To Fill Color" Margin="5,0,0,0" FontSize="14" FontWeight="Bold" /> <StackPanel Orientation="Horizontal" Margin="10,10,10,10" Background="Gainsboro"> <TextBlock TextWrapping="Wrap" Text="Using Vaue Converter. Type 'hot' or 'cold' into the textbox and watch the rectangle change color" Width="400"/> <TextBox x: <Rectangle Width="50" Height="25" Margin
http://www.codeproject.com/KB/WPF/BeginWPF5.aspx
crawl-001
en
refinedweb
- Articles - Documentation - Distributions - Forums - Sponsor Solutions. Expect-lite is itself an expect script, so to use it you will need to install expect from your distribution's package repository. Once you have it installed, expand the expect-lite tarball and copy the expect-lite.proj/expect-lite file to somewhere in your $PATH and give it appropriate permissions. # tar xzvf /.../expect-lite_3.0.5.tar.gz # cp expect-lite.proj/expect-lite /usr/local/bin # chown root.root /usr/local/bin/expect-lite # chmod 555 /usr/local/bin/expect-lite The main goal of expect and expect-lite is to automate an interactive session with a command. I'll refer to the command that expect(-lite) is talking to as the spawned command. A basic expect(-lite) script might spawn a command, wait for the command to ask for some input, and send some data in response to that request. In expect, the send command sends information to the spawned command, and the expect command is used to wait for the spawned command to send a particular piece of information. Normally there is a timeout set to handle the case where the spawned command does not provide the expected output within a given time; in other words, when the spawned program does not behave as planned by the expect(-lite) script. I'll use gdb as an example to demonstrate the difference between using expect and expect-lite to automate a simple interaction. Shown below is the code for a trivial C++ application and the start of a normal gdb debug session for this application. $ cat main.cpp #include <iostream> using namespace std; int main( int, char** ) { int x = 2; x *= 7; cout << x << endl; return 0; } $ g++ -g main.cpp -o main $ gdb ./main GNU gdb Red Hat Linux (6.6-35.fc8rh) ... (gdb) br main Breakpoint 1 at 0x400843: file main.cpp, line 7. (gdb) r Breakpoint 1, main () at main.cpp:7 7 int x = 2; (gdb) The expect program below will automate the execution of the above program with gdb printing out the value of x at a point in the program execution. $ cat ./gdb-main-expect #!/usr/bin/expect spawn gdb ./main; send "br main\n"; expect "Breakpoint 1 at"; send "r\n"; expect "Breakpoint 1, main"; expect "(gdb)"; send "step\n"; expect "(gdb)"; send "step\n"; expect "(gdb)"; send "print x\n"; send "quit\n"; expect "Exit anyway"; send "y\n"; $ ./gdb-main-expect spawn gdb ./main br main GNU gdb Red Hat Linux (6.6-35) $ The same automated interaction is expressed in expect-lite below. Notice that the quoting and other syntax is now missing and the expect-lite program is much closer to what a user would actually see and type. Lines starting with >> are things that expect-lite will send to the spawned command. Lines starting with < are things that expect-lite will wait to see in the output of the spawned command. $ cat gdb-main-expect-lite.elt > gdb /home/ben/expect-lite-examples/main >>br main <Breakpoint 1 at >>r <Breakpoint 1, main >>step <(gdb) >>step <(gdb) >>print x >>quit <Exit anyway >>y Expect-lite was created primarily for running automated software tests. A side effect of this heritage is that any script execution requires the specification of the host where the test is to be executed. In the example below I have set up SSH to allow me to log in to the same user on localhost without a password, as detailed after the example. There should be no security implications with this. At the end of the expect-lite interaction you can see an overall result as passed, which is also due to expect-lite's primary purpose being software testing. $ expect-lite remote_host=localhost cmd_file=gdb-main-expect-lite.elt spawn ssh localhost Last login: ... 2008 from localhost.localdomain $ bash ->Check Colour Prompt Timed Out! $ gdb /home/ben/expect-lite-examples/main br main ... ) y ##Overall Result: PASS The below commands set up SSH to allow you to connect to the localhost without a passphrase. In this case the "elo" hostname is also available as a shortcut to connect using the correct SSH identity file. In the above session I could have used expect-lite remote_host=elo cmd_file=... to execute the command on localhost. $ mkdir -p ~/.ssh $ chmod 700 ~/.ssh $ cd ~/.ssh $ ssh-keygen -f expect-lite-key $ cat expect-lite-key.pub >> authorized_keys2 $ vi ~/.ssh/config Host localhost IdentityFile ~/.ssh/expect-lite-key Host elo HostName localhost IdentityFile ~/.ssh/expect-lite-key $ chmod 600 config $ chmod 600 authorized_keys2 $ ssh elo For the gdb example I could not use the more normal > to describe what expect-lite should send, because the > expect-lite command has some built-in logic to first detect a prompt from the spawned command. >> on the other hand just sends what you have specified right now. The prompt detection logic in expect-lite did not detect the gdb prompt in the gdb-main-expect-lite.elt example above and so would time out waiting for a prompt from the spawned command. Expect-lite did not detect the gdb prompt because the regular expressions that it uses in wait_for_prompt did not handle that prompt format. One way to change that situation would be to customize those regular expressions in the expect-lite file if you are using expect-lite against gdb often. Because I was forced to use >> to send data to gdb, I had to also include lines at some places which explicitly waited for gdb to send a prompt. If the prompt detection regular expressions can detect your prompts, then using the > command to send data will make a less cluttered script, as you will not need to explicitly wait for prompts from the spawned program. You can assign values to variables by capturing part of the output of the spawned command. When your expect-lite script starts executing, the spawned command will be bash, so you can directly run programs by sending the command line to bash with the > expect-lite command. As seen above, if you execute an interactive program such as gdb, then the > expect-lite command talks with gdb. The below script captures the output of the id command and throws away everything except the user name. The conditional expression in expect-lite has a similar syntax to that of a similar expression in C; the difference is that in expect-lite the IF portion is prefixed and terminated with a question mark and the separator between the THEN and ELSE statement is a double colon instead of a single colon. The reason that I wait for the final "..." at the last line of the script is so expect-lite will not close the session after issuing the echo command and before the results of echo are shown. $ id uid=777(ben) gid=777(ben) $ cat branches.elt >id +$user=[a-z]+=[0-9]+\(([a-z]+) ? $user == ben ? >echo "howdee ben..." :: >echo "A stranger eh?..." <... Expect-lite supports loops by using labels and jump to label together with conditionals. Because expect-lite starts a bash shell on the remote host, you can also use the shell's conditionals and looping support. Shown below is an expect-lite script that uses bash to perform a loop: $ cat shell-conditionals.elt >for if in `seq 1 10`; do > echo $if >done >echo ... <... A caveat is that to get at shell variables from expect-lite you have to first >>echo $bashvar and then read it into an expect-lite variable with something like +$expectvar=.*\n. For instance: $ cat pwd-test.elt >echo $PWD +$expectpwd=\n(.*)\n >echo $expectpwd... <... You can also directly embed expect code into an expect-lite script. This might come in handy if you already have some familiarity with expect and need to perform a more advanced interaction at some stage in the script. Expect-lite lines starting with ! are embedded expect. Needing to specify the host name to expect-lite all the time requires a level of preparation work before one can start using expect-lite. The usage message for expect-lite informs you that you can set EL_REMOTE_HOST to define a default value for the host, but if you try this then you will discover that expect-lite/read_args{} only works when passed two command-line arguments (version 3.0.5), so if you define a default host with the environment variable, you will need to pass a dummy argument to expect-lite in order for its argument parsing to allow script execution. Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on libferris, filesystems, and search solutions. Note: Comments are owned by the poster. We are not responsible for their content. Command line automation with Expect-litePosted by: soloport on March 11, 2008 02:00 AM # chown colon or periodPosted by: Anonymous [ip: 129.129.129.76] on March 11, 2008 08:49 AM In your chown command you used the "period" to separate username from group. I think it is more portable if you use the "colon". Apparently some UNIX allow for a period to be used in the username: Cheers, E # Python Command line automation with PexpectPosted by: Anonymous [ip: 66.92.218.124] on March 11, 2008 07:06 PM # This story has been archived. Comments can no longer be posted.
http://www.linux.com/feature/128384
crawl-001
en
refinedweb
If you are looking for car insurance in the UK at a low price, look no further than Adrian Flux. We are the largest broker of specialist car insurance for UK drivers. Our highly-knowledgeable staff can save you time and money by finding the lowest-priced policy to suit your needs. Get a quote for car insurance. Adrian Flux has been providing car insurance in the UK for 30 years. We have the experience, staff and technology to provide you with a first class service. All our policies are designed to order by our specialist departments so no matter what you drive, from an Alfetta to a Zodiac, we can insure it. Even if you feel your car is not particularly out of the ordinary, our team of expert insurance brokers can still save you money. Because our staff are used to finding policies to suit unusual and even completely unique cars and motorbikes, they know which insurers offer the best policies. There's nothing simpler than getting a quote online from Adrian Flux. Click on Quote me to bring up the quotes page where you can select the type of insurance you want. Fill in the required fields with your details and submit the form. All Internet quotes are handled by the quotations team, who will email you back with a competitive quote within 24 hours. If you would like to know more about car insurance, call Adrian Flux FREE on 08000 83 88 33. car insurance uk | holiday home insurance | import car insurance | kit car insurance landlords' insurance | performance car insurance | scooter insurance | womens car insurance Cheap car insurance from Adrian Flux © 2005
http://www.adrianflux.co.uk/_info/car-insurance.php
crawl-001
en
refinedweb
libmathevallibrary. Copyright © 2002, 2003, 2004, 2005, 2006, 2007, 2008 Aleksandar Samardzic Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with the Invariant Sections being “Rationale and history”, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled “Copying”. libmathevalmanual GNU libmatheval is small library of procedures for evaluating mathematical functions. This manual documents how to use the library; this is manual edition 1.1.7, last updated 26 March 2008, corresponding to library version 1.1.7. GNU libmatheval is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. GNU libmathe libmatheval. If not, see <>. GNU libmatheval is library comprising several procedures that makes possible to create in-memory tree representation of mathematical functions over single or multiple variables and later use this representation to evaluate function for specified variable values, to create corresponding tree for function derivative over specified variable or to get back textual representation of in-memory tree. This section discuss use of programming interface exposed by library from C programs. Readers interested in Fortran interface should switch immediately to Fortran interface section. In order to use GNU libmatheval library from C code, it is necessary first to include header file matheval.h from all files calling GNU libmatheval procedures and then refer to libmatheval library among other linker option. Thus, command to compile C program using library and stored in file example.c using GNU C compiler would look like (supposing that library is installed using default prefix /usr/local/lib): gcc example.c -I/usr/local/include -L/usr/local/lib -lmatheval -o example Alternatively, pkg-config metadata file for libmatheval is installed along with the library too, thus on system with pkg-config installed following command could be used instead: gcc example.c $(pkg-config --cflags --libs) -o example First step in actually utilizing library after including appropriate header file would be to declare variable of void * type to point to evaluator object that will represent given mathematical function: void *f; Then, given that textual representation of function is stored into string buffer, evaluator object corresponding to given mathematical function could be created using evaluator_create procedure (see evaluator_create) as follows (see documentation for this procedure also for description of notation that should be used to describe mathematical functions): f = evaluator_create (buffer); assert (f); Return value should be always checked, because above procedure will return null pointer if there exist syntax errors in notation. After that, one could utilize evaluator_get_variables (see evaluator_get_variables) procedure to obtain a list of variable names appearing in function: { char **names; int count; int i; evaluator_get_variables (f, &names, &count); for (i = 0; i < count; i++) printf ("%s ", names[i]); printf ("\n"); } Procedure evaluator_evaluate (see evaluator_evaluate) could be used to evaluate function for specific variable values. Say that above function is over variable “x” only, then following code will evaluate and print function value for x = 0.1: { char *names[] = { "x" }; double values[] = { 0.1 }; printf ("f(0.1) = %g\n", evaluator_evaluate (f, 1, names, values)); } Or alternatively, since function is over variable with standard name “x”, convenience procedure evaluator_evaluate_x (evaluator_evaluate_x) could be used to accomplish same by following: printf ("f(0.1) = %g\n", evaluator_evaluate_x (f, 0.1)); Evaluator object for function derivative over some variable could be created from evaluator object for given function. In order to accomplish this, a declaration for derivative evaluator object should be added to variable declarations section: void *f_prim; After that (supposing that “x” is used as derivation variable), derivative evaluator object could be created using evaluator_derivative procedure (see evaluator_derivative): f_prim = evaluator_derivative (f, "x"); or alternatively using evaluator_derivative_x convenience procedure (see evaluator_derivative_x): f_prim = evaluator_derivative_x (f); Derivative evaluator object could be used to evaluate derivative values or say textual representation of derivative could be written to standard output through utilizing evaluator_get_string procedure (see evaluator_get_string) to get string representing given evaluator. Following code would accomplish this: printf (" f'(x) = %s\n", evaluator_get_string (f_prim)); All evaluator objects must be destroyed after finished with using them and evaluator_destroy procedure (see evaluator_destroy) is intended for this: evaluator_destroy (f); evaluator_destroy (f_prim); Here follows complete program connecting above fragments. Program read from standard input string representing function over variable “x”, create evaluators for function and its first derivative, print textual representation of function derivative to standard output, then read value of variable “x” and finally print to standard output values of function and its first derivative for given value of variable “x”. #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <matheval.h> /* Size of input buffer. */ #define BUFFER_SIZE 256 /* Program is demonstrating use of GNU libmatheval library of procedures for evaluating mathematical functions. */ int main (int argc, char **argv) { char buffer[BUFFER_SIZE]; /* Input buffer. */ int length; /* Length of above buffer. */ void *f, *f_prim; /* Evaluators for function and function derivative. */ char **names; /* Function variables names. */ int count; /* Number of function variables. */ double x; /* Variable x value. */ int i; /* Loop counter. */ /* Read function. Function has to be over variable x, or result may be undetermined. Size of textual represenatation of function is bounded here to 256 characters, in real conditions one should probably use GNU readline() instead of fgets() to overcome this limit. */ printf ("f(x) = "); fgets (buffer, BUFFER_SIZE, stdin); length = strlen (buffer); if (length > 0 && buffer[length - 1] == '\n') buffer[length - 1] = '\0'; /* Create evaluator for function. */ f = evaluator_create (buffer); assert (f); /* Print variable names appearing in function. */ evaluator_get_variables (f, &names, &count); printf (" "); for (i = 0; i < count; i++) printf ("%s ", names[i]); printf ("\n"); /* Create evaluator for function derivative and print textual representation of derivative. */ f_prim = evaluator_derivative_x (f); printf (" f'(x) = %s\n", evaluator_get_string (f_prim)); /* Read variable x value. */ printf ("x = "); scanf ("%lf", &x); /* Calculate and print values of function and its derivative for given value of x. */ printf (" f(%g) = %g\n", x, evaluator_evaluate_x (f, x)); printf (" f'(%g) = %g\n", x, evaluator_evaluate_x (f_prim, x)); /* Destroy evaluators. */ evaluator_destroy (f); evaluator_destroy (f_prim); exit (EXIT_SUCCESS); } Above example exercise most of library main procedures (see Main entry points), as well as some of convenience procedures (see Convenience procedures). For full documentation, see Reference. This section documents procedures constituting GNU libmatheval library. The convention is that all procedures have evaluator_ prefix. evaluator_create #include <matheval.h> void *evaluator_create (char *string);. Pointer to evaluator object if operation successful, null pointer otherwise. Evaluator object is opaque, one should only use return pointer to pass it to other functions from library. evaluator_destroy, evaluator_evaluate, evaluator_get_string, evaluator_get_variables, evaluator_derivative evaluator_destroy #include <matheval.h> void evaluator_destroy (void *evaluator); Destroy evaluator object pointer by evaluator pointer. After returning from this call evaluator pointer must not be dereferenced because evaluator object gets invalidated. None. evaluator_create evaluator_evaluate #include <matheval.h> double evaluator_evaluate (void *evaluator, int count, char **names, double *values); Calculate value of function represented by evaluator object for given variable values. Evaluator object is pointed by evaluator pointer. Variable names and corresponding values are given by names and values array respectively. Length of arrays is given by count argument.. evaluator_create, evaluator_destroy, evaluator_evaluate_x, evaluator_evaluate_x_y, evaluator_evaluate_x_y_z evaluator_get_string #include <matheval.h> char *evaluator_get_string (void *evaluator); Return textual representation (i.e. mathematical function) of evaluator object pointed by evaluator. For notation used, see evaluator_create documentation. String with textual representation of evaluator object. This string is stored in evaluator object and caller must not free pointer returned by this function. Returned string is valid until evaluator object destroyed. evaluator_create, evaluator_destroy, evaluator_get_variables evaluator_get_variables #include <matheval.h> void evaluator_get_variables (void *evaluator, char ***names, int *count); Return array of strings with names of variables appearing in function represented by evaluator. Address of array first element is stored by function in location pointed by second argument and number of array elements is stored in location pointed by third argument. Array with function variable names is stored in evaluator object and caller must not free any of strings returned by this function nor array itself. Returned values are valid until evaluator object destroyed. None. evaluator_create, evaluator_destroy, evaluator_get_string evaluator_derivative #include <matheval.h> void *evaluator_derivative (void *evaluator, char *name); Create evaluator for derivative of function represented by given evaluator object. Evaluator object is pointed to by evaluator pointer and derivation variable is determined by name argument. Calculated derivative is in mathematical sense correct no matters of fact that derivation variable appears or not in function represented by evaluator. Pointer to evaluator object representing derivative of given function. evaluator_create, evaluator_destroy, evaluator_derivative_x, evaluator_derivative_y, evaluator_derivative_z evaluator_evaluate_x #include <matheval.h> double evaluator_evaluate_x (void *evaluator, double x); Convenience function to evaluate function for given variable “x” value. Function is equivalent to following: char *names[] = { "x" }; double values[] = { x }; evaluator_evaluate (evaluator, sizeof (names) / sizeof(names[0]), names, values); See evaluator_evaluate for further information. Value of function for given value of variable “x”. evaluator_create, evaluator_destroy, evaluator_evaluate evaluator_evaluate_x_y #include <matheval.h> double evaluator_evaluate_x_y (void *evaluator, double x, double y); Convenience function to evaluate function for given variables “x” and “y” values. Function is equivalent to following: char *names[] = { "x", "y" }; double values[] = { x, y }; evaluator_evaluate (evaluator, sizeof (names) / sizeof(names[0]), names, values); See evaluator_evaluate for further information. Value of function for given values of variables “x” and “y”. evaluator_create, evaluator_destroy, evaluator_evaluate evaluator_evaluate_x_y_z #include <matheval.h> double evaluator_evaluate_x_y_z (void *evaluator, double x, double y, double z); Convenience function to evaluate function for given variables “x”, “y” and “z” values. Function is equivalent to following: char *names[] = { "x", "y", "z" }; double values[] = { x, y, z }; evaluator_evaluate (evaluator, sizeof (names) / sizeof(names[0]), names, values); See evaluator_evaluate for further information. Value of function for given values of variables “x”, “y” and “z”. evaluator_create, evaluator_destroy, evaluator_evaluate evaluator_derivative_x #include <matheval.h> void *evaluator_derivative_x (void *evaluator); Convenience function to differentiate function using “x” as derivation variable. Function is equivalent to: evaluator_derivative (evaluator, "x"); See evaluator_derivative for further information. Evaluator object representing derivative of function over variable “x”. evaluator_create, evaluator_destroy, evaluator_derivative evaluator_derivative_y #include <matheval.h> void *evaluator_derivative_y (void *evaluator); Convenience function to differentiate function using “y” as derivation variable. Function is equivalent to: evaluator_derivative (evaluator, "y"); See evaluator_derivative for further information. Evaluator object representing derivative of function over variable “y”. evaluator_create, evaluator_destroy, evaluator_derivative evaluator_derivative_z #include <matheval.h> void *evaluator_derivative_z (void *evaluator); Convenience function to differentiate function using “z” as derivation variable. Function is equivalent to: evaluator_derivative (evaluator, "z"); See evaluator_derivative for further information. Evaluator object representing derivative of function over variable “z”. evaluator_create, evaluator_destroy, evaluator_derivative Fortran interface to GNU libmatheval library is very similar to C interface; still, complete documentation from Reference is reproduced here using Fortran terms in order to have Fortran programmer not to mess with C terms that he may not understand. Besides documentation for all library exported procedures, an example Fortran program of structure similar to sequence of code fragments presented for C programmers in Introduction section as well as notes on how to link library with Fortran programs are presented here. Since passing arguments between C and Fortran is not (yet) standardized, Fortran interface of library applies only to GNU Fortran 77 compiler; but note that same interface is working fine for GNU Fortran 95 compiler. Requests to adapt interface to other Fortran compilers are welcome (see section Bugs for contact information), under condition that access to corresponding compiler is provided. evaluator_create integer*8 function evaluator_create (string) character(len=*) :: string end function evaluator_create. Positive 64-bit integer representing evaluator object unique handle if operation successful, 0 otherwise. Return value should be used only to pass it to other functions from library. Fortran evaluator_destroy, Fortran evaluator_evaluate, Fortran evaluator_get_string_length, Fortran evaluator_get_string_chars, Fortran evaluator_get_variables_length, Fortran evaluator_get_variables_chars, Fortran evaluator_derivative evaluator_destroy subroutine evaluator_destroy (evaluator) integer*8 :: evaluator end subroutine evaluator_destroy Destroy evaluator object denoted by evaluator handle. After returning from this call evaluator object gets invalidated, so value of evaluator handle should not be used any more. None. Fortran evaluator_create evaluator_evaluate double precision function evaluator_evaluate (evaluator, count, names, values) integer*8 :: evaluator integer :: count character(len=*) :: names double precision :: values dimension values(*) end function evaluator_evaluate Calculate value of function represented by evaluator object for given variable values. Evaluator object is identified by evaluator handle. Variable names are given by names string and corresponding values are given by values array respectively. Number of variables is given by count argument. Variable names in names string should be delimited by one or more blank characters.. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_evaluate_x, Fortran evaluator_evaluate_x_y, Fortran evaluator_evaluate_x_y_z evaluator_get_string_length integer function evaluator_get_string_length (evaluator) integer*8 :: evaluator end function evaluator_get_string_length Return length of textual representation (i.e. mathematical function) of evaluator object pointed by evaluator. Evaluator textual representation string length. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_get_string_chars evaluator_get_string_chars subroutine evaluator_get_string_chars (evaluator) integer*8 :: evaluator character(len=*) :: string end subroutine evaluator_get_string_chars Write textual representation (i.e. mathematical function) of evaluator object pointed by evaluator to string specified. For notation used, see Fortran evaluator_create documentation. In order to declare string of appropriate length to be passed to this function, Fortran evaluator_get_string_length function should be utilized. None. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_get_string_length evaluator_get_variables_length integer function evaluator_get_variables_length (evaluator) integer*8 :: evaluator end function evaluator_get_variables_length Return length of string with names of all variables (separated by a blank character) appearing in evaluator object pointed by evaluator. Variable names string length. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_get_variables_chars evaluator_get_variables_chars subroutine evaluator_get_variables_chars (evaluator) integer*8 :: evaluator character(len=*) :: string end subroutine evaluator_get_variables_chars Write names of all variables appearing in evaluator object pointed by evaluator into given string (separated by a blank character). In order to declare string of appropriate length to be passed to this function, Fortran evaluator_get_variables_length function should be utilized. None. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_get_variables_length evaluator_derivative integer*8 function evaluator_derivative (evaluator, name) integer*8 :: evaluator character(len=*) :: name end function evaluator_derivative Create evaluator for derivative of function represented by given evaluator object. Evaluator object is identified by evaluator handle and derivation variable is determined by name argument. Calculated derivative is in mathematical sense correct no matters of fact that derivation variable appears or not in function represented by evaluator. 64-bit integer uniquely identifying evaluator object representing derivative of given function. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_derivative_x, Fortran evaluator_derivative_y, Fortran evaluator_derivative_z evaluator_evaluate_x double precision function evaluator_evaluate_x (evaluator, x) integer*8 :: evaluator double precision :: x end function evaluator_evaluate_x Convenience function to evaluate function for given variable “x” value. Function is equivalent to following: evaluator_evaluate (evaluator, 1, 'x', (/ x /)) See Fortran evaluator_evaluate for further information. Value of function for given value of variable “x”. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_evaluate evaluator_evaluate_x_y double precision function evaluator_evaluate_x_y (evaluator, x, y) integer*8 :: evaluator double precision :: x, y end function evaluator_evaluate_x_y Convenience function to evaluate function for given variables “x” and “y” values. Function is equivalent to following: evaluator_evaluate (evaluator, 2, 'x y', (/ x, y /)) See Fortran evaluator_evaluate for further information. Value of function for given values of variables “x” and “y”. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_evaluate evaluator_evaluate_x_y_z double precision function evaluator_evaluate_x_y_z (evaluator, x, y, z) integer*8 :: evaluator double precision :: x, y, z end function evaluator_evaluate_x_y_z Convenience function to evaluate function for given variables “x”, “y” and “z” values. Function is equivalent to following: evaluator_evaluate (evaluator, 2, 'x y z', (/ x, y, z /)) See Fortran evaluator_evaluate for further information. Value of function for given values of variables “x”, “y” and “z”. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_evaluate evaluator_derivative_x integer*8 function evaluator_derivative_x (evaluator) integer*8 :: evaluator end function evaluator_derivative_x Convenience function to differentiate function using “x” as derivation variable. Function is equivalent to: evaluator_derivative (evaluator, 'x'); See Fortran evaluator_derivative for further information. Evaluator object representing derivative of function over variable “x”. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_derivative evaluator_derivative_y integer*8 function evaluator_derivative_y (evaluator) integer*8 :: evaluator end function evaluator_derivative_y Convenience function to differentiate function using “y” as derivation variable. Function is equivalent to: evaluator_derivative (evaluator, 'y'); See Fortran evaluator_derivative for further information. Evaluator object representing derivative of function over variable “y”. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_derivative evaluator_derivative_z integer*8 function evaluator_derivative_z (evaluator) integer*8 :: evaluator end function evaluator_derivative_z Convenience function to differentiate function using “z” as derivation variable. Function is equivalent to: evaluator_derivative (evaluator, 'z'); See Fortran evaluator_derivative for further information. Evaluator object representing derivative of function over variable “z”. Fortran evaluator_create, Fortran evaluator_destroy, Fortran evaluator_derivative Here follows sample program demonstrating use of library Fortran interface. Hopefully, comments throughout code will be enough for Fortran programmer to get acquainted with library usage. Basic functioning of program is equivalent to code presented for C programmer in Introduction sequence, except that textual representation of function derivative is not printed to standard output and this is avoided simply because of Fortran 77 ugly string handling. Following code is written in Fortran 77 with GNU Fortran 77 compiler extensions (most notable of these certainly is free form of source code). ! Program is demonstrating use of GNU libmatheval library of procedures ! for evaluating mathematical functions. program evaluator implicit none ! Declarations of GNU libmatheval procedures used. integer*8 evaluator_create integer*8 evaluator_derivative_x double precision evaluator_evaluate_x external evaluator_destroy ! Size of input buffer. integer :: BUFFER_SIZE parameter(BUFFER_SIZE = 256) character(len = BUFFER_SIZE) :: buffer ! Input buffer. integer*8 :: f, f_prim ! Evaluators for function and function derivative. double precision :: x ! Variable x value. ! Read function. Function has to be over variable x, or result may ! be undetermined. Size of textual represenatation will be truncated ! here to BUFFER_SIZE characters, in real conditions one should ! probably come with something smarter to avoid this limit. write (*, '(A)') 'f(x) = ' read (*, '(A)') buffer ! Create evaluator for function. f = evaluator_create (buffer); if (f == 0) stop ! Create evaluator for function derivative. f_prim = evaluator_derivative_x (f); if (f_prim == 0) stop ! Read variable x value. write (*, '(A)') 'x = ' read (*, *) x ! Calculate and print values of function and its derivative for given ! value of x. write (*,*) ' f (', x, ') = ', evaluator_evaluate_x (f, x) write (*,*) ' f'' (', x, ') = ', evaluator_evaluate_x (f_prim, x) ! Destroy evaluators. call evaluator_destroy (f) call evaluator_destroy (f_prim) end program evaluator In order to be able to reference GNU libmatheval procedures from Fortran code, declarations of procedures that will be used should be repeated, like demonstrated by Fortran sample program (once when interface upgraded to Fortran 90, modules and use statement will be employed here). Command for compilation Fortran program using library and stored in file example.f using GNU Fortran 77 compiler would look like (again supposing that library is installed using default prefix /usr/local/lib): f77 example.f -ff90 -ffree-form -L/usr/local/lib -lmatheval -o example As usual with an open source project, ultimate reference for anyone willing to hack on it is its source code. Every effort is put to have source code properly commented; having in mind that GNU libmatheval is rather simple project, it is reasonable to expect that this would be enough for anyone interested in project internals to get acquainted with it. Still, this section will briefly explain project design. See Project structure section for description of where each functionality is located in source code. Mathematical functions are represented as trees in computer memory. There are five different nodes in such a tree: number, constants, variables, functions, unary operations and binary operations. Single data structure is employed for tree nodes, while union is used over what is different among them. Numbers have unique value, unary and binary operations have unique pointer(s) to their operand(s) node(s). To represent constants, variables and functions, a symbol table is employed; thus constants, variables and functions have unique pointers to corresponding symbol table records (functions also have unique pointer to their argument node). All operations related to functions (e.g. evaluation or derivative calculation) are implemented as recursive operations on tree nodes. There exist a node operation that is not visible as external procedure and this is node simplification; this operation is very important regarding overall efficiency of other operations and is employed each time when new tree created. Symbol table is implemented as hash table, where each bucket has linked list of records stored in it. Records store information of symbol name and type (variable or function), as well as some unique information related to evaluation: variable records store temporary variable value and function records store pointer to procedure used to actually calculate function value during evaluation. Hashing function described in A.V. Aho, R. Sethi, J.D. Ullman, “Compilers - Principle, Techniques, and Tools”, Addison-Wesley, 1986, pp 435-437 is used. Symbol tables are reference counted objects, i.e. could be shared. Evaluator objects actually consists of function tree and reference to symbol table. Most of operations on evaluator objects are simply delegated to function tree root node. For parsing strings representing mathematical functions, Lex and Yacc are employed. Scanner is creating symbol table records for variables, while for constants and functions it is only looking up existing symbol table records (before starting scanning, symbol table should be populated with records for constants and functions recognized by scanner). Parser is responsible for building function tree representation. Couple error reporting procedures, as well as replacements for standard memory allocation routines are also present. These are rather standard for all GNU projects, so deserve no further discussion. Further present in project are couple procedures for mathematical functions not implemented by C standard library, like cotangent, inverse cotangent and some hyperbolic and inverse hyperbolic functions. Also present in project are stubs for Fortran code calling library. These stubs uses knowledge of GNU Fortran 77 compiler calling conventions, take parameters from Fortran 77 calls, eventually mangle them to satisfy primary C library interface and call library procedures to actually do the work, finally eventually mangling return values to satisfy Fortran 77 calling conventions again. Most important thing to know before criticizing library design is that it is intentionally left as simple as it could be. Decision is now that eventual library usage should direct its improvements. Some obvious and intended improvements if enough interest for library arise are enumerated in Intended improvements section. If having further suggestions, pleas see Bugs sections for contact information. Interesting source files are mostly concentrated in lib subdirectory of distribution. Basic arrangement is rather standard for GNU projects, thus scanner is in scanner.l file, parser in parser.y, error handling routines are in error.c and error.h files, replacements for standard memory allocation routines are in xmalloc.c and xmalloc.h, additional mathematical functions are in xmath.c and xmath.c. Project specific files are: node.h and node.c files for tree representing mathematical function data structures and procedures, symbol_table.c and symbol_table.h for symbol table data structures and procedures and finally evaluator.c and matheval.h for evaluator object data structures and procedures (evaluator object data structure is moved to .c file because matheval.h is public header file and this data structure should be opaque). Fortran interface is implemented in f77_interface.c file. File libmatheval.texi under doc subdirectory of distribution contains Texinfo source of project documentation (i.e. what you are reading now). Subdirectory tests contains library test suite. Kind of mixed design is employed here - GNU autotest is used for test framework in order to achieve more portability, while number of small Guile scripts are performing tests. File matheval.c in tests subdirectory contains program extending Guile interpreter with GNU libmatheval procedures. Files with .at extension in same subdirectory in turn consist of fragments of Guile code that this extended Guile interpreter executes in order to conduct tests. File matheval.sh is shell wrapper for program contained in matheval.c file; this wrapper is used by autotest during testing instead of original program. Most interesting aspect of code from tests subdirectory is certainly Guile interface for library that is implemented in matheval.c file; anyone intending to write more tests must before approaching this task become familiar with this interface. As stated in Design notes section, GNU libmatheval is designed with intention to be simple and understandable and to eventually have its usage to govern improvements. Thus, further work will be primarily directed by user requests and of course, as usual with open source projects, with amount of spare time of primary developer (see Bugs for contact information). However, there exist several obvious improvements that I'm willing to work on immediately if any interest of library arise and these are (in random order) listed below: asserts left in code that may be replaced with some other mechanism, also probably error handling of more error conditions should be added to library. There exists also an improvement that is obvious and necessary but because I'm not native speaker I'm unfortunately not able to accomplish it anything more than I already tried: If you encounter something that you think is a bug, please report it immediately. Try to include a clear description of the undesired behavior. A test case that exhibits the bug or maybe even patch fixing it, would too be of course very useful. Suggestions on improving library would be also more than welcome. Please see Hacking, for further information. Please direct bug reports and eventual patches to bug-libmatheval@gnu.org mailing list. For suggestions regarding improvements and other libmatheval related conversation use author e-mail address asamardzic@gnu.org. The library is developed as a back-end for “Numerical Analysis” course taught during 1999/2000, 2000/2001 and 2001/2002 school years at Faculty of Mathematics, University of Belgrade. Most numerical libraries (library accompanying “Numerical Recipes” book most notably example) are asking programmer to write corresponding C code when it comes to evaluate mathematical functions. It seemed to me that it would be more appropriate (well, at least for above mentioned course) to have library that will make possible to specify functions as strings and then have them evaluated for given variable values, so I wrote first version of library during November 1999. Fortran interface is added to the library later; during January 2001 interface for Pacific Sierra VAST Fortran 90 translator was implemented and during September 2001 it was replaced by interface for Intel Fortran 90 compiler 1. This library eventually went into rather stable state and was tested by number of other programs implementing various numerical methods and developed for the same course. After completing engagement with this course, I thought it may be interesting for someone else to use this code and decided to make it publicly available. So, having some spare time during June 2002, I re-wrote whole library in preparation for public release, now employing simpler overall design and also using GNU auto-tools and what else was necessary according to GNU guidelines. The benefit is that final product looks much better now (well, at least to me and at least at the very moment of this writing), the drawback is that code is not thoroughly tested again. But certainly author would be more than happy to further improve and maintain it. Please see Bugs, for contact information. The library source code was hosted on Savannah () since Septembar 2002. In September 2003, library officially became part of GNU project.. evaluator_create: evaluator_create evaluator_derivative: evaluator_derivative evaluator_derivative_x: evaluator_derivative_x evaluator_derivative_y: evaluator_derivative_y evaluator_derivative_z: evaluator_derivative_z evaluator_destroy: evaluator_destroy evaluator_evaluate: evaluator_evaluate evaluator_evaluate_x: evaluator_evaluate_x evaluator_evaluate_x_y: evaluator_evaluate_x_y evaluator_evaluate_x_y_z: evaluator_evaluate_x_y_z evaluator_get_string: evaluator_get_string evaluator_get_variables: evaluator_get_variables Fortran, evaluator_create: Fortran evaluator_create Fortran, evaluator_derivative: Fortran evaluator_derivative Fortran, evaluator_derivative_x: Fortran evaluator_derivative_x Fortran, evaluator_derivative_y: Fortran evaluator_derivative_y Fortran, evaluator_derivative_z: Fortran evaluator_derivative_z Fortran, evaluator_destroy: Fortran evaluator_destroy Fortran, evaluator_evaluate: Fortran evaluator_evaluate Fortran, evaluator_evaluate_x: Fortran evaluator_evaluate_x Fortran, evaluator_evaluate_x_y: Fortran evaluator_evaluate_x_y Fortran, evaluator_evaluate_x_y_z: Fortran evaluator_evaluate_x_y_z Fortran, evaluator_get_string_chars: Fortran evaluator_get_string_chars Fortran, evaluator_get_string_length: Fortran evaluator_get_string_length Fortran, evaluator_get_variables_chars: Fortran evaluator_get_variables_chars Fortran, evaluator_get_variables_length: Fortran evaluator_get_variables_length libmathevalmanual evaluator_create evaluator_destroy evaluator_evaluate evaluator_get_string evaluator_get_variables evaluator_derivative evaluator_evaluate_x evaluator_evaluate_x_y evaluator_evaluate_x_y_z evaluator_derivative_x evaluator_derivative_y evaluator_derivative_z evaluator_create evaluator_destroy evaluator_evaluate evaluator_get_string_length evaluator_get_string_chars evaluator_get_variables_length evaluator_get_variables_chars evaluator_derivative evaluator_evaluate_x evaluator_evaluate_x_y evaluator_evaluate_x_y_z evaluator_derivative_x evaluator_derivative_y evaluator_derivative_z
http://www.gnu.org/software/libmatheval/manual/libmatheval.html
crawl-001
en
refinedweb
2003 Celebrating Penn Central How the Supreme Court's preservation of Grand Central Terminal helped preserve planning nationwide. By Jerold S. Kayden Can a local government legally rezone land if that action decreases the land's market value by 75 percent? Can the federal government prohibit an owner from building on wetlands totaling 80 percent of the property? Is a four-year building moratorium valid? What happens when a builder buys a dilapidated warehouse in order to demolish and replace it with new housing, only to be stopped when the historic preservation commission later designates the building as a landmark and prohibits the demolition? For planners and lawyers, answering these questions, and hundreds like them, requires the wisdom of Solomon, the patience of Job, and the calculation of U.S. Supreme Court Justice William J. Brennan. For the man who once wrote, "After all, if a policeman must know the Constitution, then why not a planner?" also wrote the most encyclopedic, influential judicial opinion on what is now known as regulatory takings. Penn Central Transportation Co. v. New York City, the 1978 decision that established the modern framework for constitutional takings analysis of all land-use and environmental laws, celebrates its 25th anniversary Despite criticism from expected (pro-private property) and, occasionally, from unexpected (pro-government regulation) quarters, this agenda-setting opinion remains in remarkably good health and has been fortified by the Court's latest decision on regulatory takings, the 2002 Tahoe-Sierra Preservation Council v. Tahoe Regional Planning Agency case, involving a government-imposed development moratorium at Lake Tahoe. From out of left field In many ways, Penn Central came out of nowhere. After a flurry of cases in the 1920s addressing the constitutionality of zoning (Euclid, Zahn, Gorieb, Nectow) and other land-use restrictions (Pennsylvania Coal), the Court stayed off the playing field for the next half century. That made sense, as long as planning regulations more or less followed the traditional zoning model approved in the Court's 1926 decision, Village of Euclid v. Ambler Realty Co. Even when such innovations as planned unit developments, cluster zoning, subdivision exactions, and incentive zoning emerged in the middle of the 20th century, the Court appeared satisfied with the treatments administered by state judges. But historic preservation laws pushed new buttons. The dramatic losses of such architectural masterworks as New York City's Pennsylvania Station, in 1963, propelled political support for a regulatory technique preventing owners from demolishing, or even modifying, historically significant structures without the permission of the local agency that had designated them or their districts as such. To be sure, lawmakers proclaimed that historic preservation laws were just as litigation-proof as zoning laws, but in their hearts there must have been some question about whether the Supreme Court would agree. After all, it is one thing to impose uniform rules across a large, continuous zoning district, and quite another to pick out one building for special, harsher treatment than its immediate neighbors. Only after a favorable Supreme Court decision would local historic preservation laws gain the traction needed for enactments around the country. Whose rights are these, anyway? The facts of the Penn Central case reveal the high stakes at issue. In 1965, New York City enacted its Landmarks Preservation Law, creating a locally appointed commission with the power to designate buildings and neighborhoods as landmarks and historic districts. The law defined landmarks as buildings 30 years old or older, with special character or special historical or aesthetic interest. Historic districts were areas containing buildings with special character or historical or aesthetic interest, and representing styles of architecture typical of the city's history. The crux of the law, though, was the impact of designation on what owners could subsequently do with their private property. An owner would lose the right to change a property's exterior appearance without commission approval. Activities ranging from replacing windows to total demolition would now be subject to the control of government overseers. Grand Central Terminal, that 1913 Beaux Arts masterpiece sitting astride Park Avenue in the heart of midtown Manhattan, was designated a landmark by the preservation commission in 1967. The terminal's owner, the Penn Central Transportation Company, entered into an agreement with a private developer called UGP Properties, Inc., to build an office skyscraper above the terminal that would generate millions of dollars annually in extra rental income for Penn Central. The tower complied with applicable zoning, but required permission under the landmarks law. Penn Central and the developer submitted two Marcel Breuer-designed proposals to the landmarks preservation commission in 1968. In one, a 55-story tower would sit cantilevered above the terminal and its 42nd Street facade. In another, a 53-story structure would rise above the terminal, but would also destroy part of the terminal and the facade. The commission rejected both proposals, citing harm to the terminal's landmark qualities. Penn Central and its development partner then sued New York City in state court, claiming a taking of property in violation of the federal constitution's Fifth Amendment command, "nor shall private property be taken for public use, without just compensation." The trial court ruled in favor of Penn Central; the two state appellate courts ruled for the city. In court By the time the case arrived at the Supreme Court in 1978, the arguments were crisply drawn. Penn Central emphasized the unfairness of being burdened with losing millions of dollars annually so the public could enjoy the benefits of its landmark building. If the public wanted to preserve landmarks, argued Penn Central, let the public pay for it. For its part, New York City emphasized the importance of public efforts to preserve selected elements of its built environment. It maintained that the burden of historic preservation was spread comprehensively enough throughout the city to escape the company's claim of unfair impact. Comprehensiveness was a main sticking point between Justice Brennan's majority and Justice William Rehnquist's dissent. (Rehnquist is now Chief Justice of the Court.) Generally in land-use cases, the broader the application of a rule, the more acceptable it becomes under the constitution because the wider scope reduces the chances of arbitrary picky-choosy outcomes. At the time the Penn Central litigation began, the commission had designated not only Grand Central Terminal, but also more than 400 other landmarks and 31 historic districts. Justice Rehnquist wasn't satisfied. Scoffing at any notion of comprehensiveness, he observed that New York City imposed landmark designations on "less than one one-tenth of one percent of the buildings in New York City for the general benefit of all its people." The landmark owner's pride of designation would quickly turn to horror once he or she understood the true import of designation, Justice Rehnquist said. With candor, Justice Brennan's six-to-three majority opinion conceded that the Court had been unable to "develop any 'set formula' for determining when 'justice and fairness' require that economic injuries caused by public action be compensated by the government, rather than remain disproportionately concentrated on a few persons." He could have added that there had been little movement beyond Justice Oliver Wendell Holmes's statement in the 1922 Pennsylvania Coal Co. v. Mahon decision that "if regulation goes too far it will be recognized as a taking." Right off the bat, the Brennan decision swiftly disposed of the public interest issue, declaring that landmarks preservation laws are an appropriate means of preserving the character and aesthetic features of a city. If there had been any lingering question about whether aesthetic goals were legally legitimate underpinnings for regulatory action, that doubt was ended once and for all. As to the meatier question of whether the New York law placed too great a burden on Penn Central, the Court found that the city's actions did not breach the economic protections afforded property rights by the just compensation clause. In place of set formulas, the majority announced that three factors would have to be taken into account on a case-by-case basis. These were the economic impact of the regulation on the claimant; its effect on his or her distinct investment-backed expectations; and finally, the character of the governmental action. The Court provided no general guidance about how to weight such factors; it instead demonstrated their application to facts in the Penn Central case itself. Relying on the first two factors, the Court enumerated three facts that worked in the city's favor. First, the Court noted that Penn Central had conceded in state court that rental income from existing terminal activities produced a reasonable return on its terminal investment. Second, the Court emphasized that Penn Central's primary investment-backed expectation was tied to the terminal itself, not to the construction of an office building above it, and that the landmarks law did not affect that expectation. Third, the Court noted the possibility that Penn Central might be able to gain some revenue by using a feature of the city's landmarks law known as "transfer of development rights," allowing the owner to take the zoning-defined air rights above the terminal and have them used on surrounding sites. The majority also insisted that its three-factor analysis should be applied to the "parcel as a whole," meaning that courts should consider the economic value associated with the usable portion of the parcel (the terminal), as well as the economic value, or lack thereof, associated with the restricted portion of the parcel (the air rights above the terminal). In many cases, the "parcel as a whole" concept would mean that the owner had not suffered as much as he or she claimed. How would Penn Central change the definition of private property? The short answer is, very little. After half a century of inaction, the Court recapitulated its core conclusion, registered in the 1920s, that government regulation could significantly cut into private property rights in order to bolster public interests without violating the constitution. Consistent with the 1926 Village of Euclid v. Ambler Realty Co. case and its progeny under the due process clause, Penn Central (under the just compensation clause) would treat the financial value of private property as its most malleable aspect, especially when that value was speculative rather than firmly grounded. Newly ascertained public goals, such as preservation of historic buildings and neighborhoods, would be no more troubling to the justices than the newly ascertained goals served by zoning in the 1920s. Teeth gnashing Over time, the opinion would be criticized from right, left, and center. Not surprisingly, the idea that millions of dollars of value could be eliminated by government regulation to procure public benefits rankled property rights advocates. Acceptance by the Court of the constitutional concept of a regulatory taking, in effect giving gold-plated legitimacy to Justice Holmes's "goes too far" formulation articulated more than 50 years before, troubled a number of planning and environmental activists and some constitutional scholars. The notion that the best the Court could do was a three-factor inquiry — what some would call situated judgment and others might deride as the "what the judge had for breakfast" test — would exasperate anyone seeking predictability, if not certainty, in judicial decision-making. If political action is the guide, this opinion most disturbed the property rights supporters. They shook all three branches of government, starting with the judiciary. A series of lawsuits challenged property rights infringements by land-use and environmental regulation. Starting in 1987, almost a decade after Penn Central, the Court appeared receptive, issuing a steady stream of opinions that, with few exceptions, favored private property owners. The list would include First English Evangelical Lutheran Church v. County of Los Angeles, Nollan v. California Coastal Commission, Lucas v. South Carolina Coastal Council, Dolan v. City of Tigard, Suitum v. Tahoe Regional Planning Agency, City of Monterey v. Del Monte Dunes, Ltd., and Palazzolo v. Rhode Island. Although close readings of the Court's opinions showed that they did not erode the fundamental architecture of Fifth Amendment jurisprudence as erected in Penn Central, their cumulative impact did imply a new judicial concern for private property rights. That genie has been put back in the bottle, at least for now, by the 2002 Tahoe-Sierra Preservation Council v. Tahoe Regional Planning Agency case. There, the Court held that a 32-month moratorium, temporarily denying a landowner all economically viable use of land, is not an automatic taking. Although the decision surely pleases moratorium practitioners, its true significance lies in its jurisprudential choices and rhetorical positioning. Indeed, Tahoe-Sierra is an unapologetic, unabashed champion of Penn Central's ad hoc, no-set-formula, factual decision-making approach. Writing for the majority, Justice John Paul Stevens invoked Justice Sandra Day O'Connor's words of praise for Penn Central, first announced in her concurring opinion in Palazzolo v. Rhode Island. Anything less than a complete elimination of value must be adjudicated under the Penn Central multi-factor inquiry, Stevens wrote. The "parcel as a whole" rule was, and is, good law. For Justice Stevens, who had joined Justice Rehnquist's dissent in Penn Central, and who now is the Court's most reliable vote in favor of land-use planning and regulatory values, this could be called judicial redemption. Tahoe-Sierra is a beacon, in and out of court, signaling that land-use regulation is constitutionally alive and well, and that owners cannot count on reflexive obedience to property rights from a majority of the justices. For a quarter century, Penn Central has remained the Supreme Court's most prominent and complete constitutional statement on the meaning of the just compensation clause. Later cases have elaborated on its basic analytical approach, even taken bites from its constitutional margins, but none have contradicted it. That property rights advocates eventually sought legislative and administrative remedies from the other branches of government merely underscores Penn Central's enduring constitutional power. In short, when it affirmed New York City's decision to save Grand Central Terminal, the Supreme Court also affirmed a nation's broad ability to use land-use and environmental planning to enhance — and preserve — its built and natural environment. Jerold S. Kayden is an associate professor of urban planning at the Harvard Design School. He was Justice Brennan's law clerk during the Supreme Court's 1980-1981 term, two years after Penn Central was decided. Court decisions. Justice Brennan wrote "After all, if a policeman must know the Constitution, then why not a planner?" in a dissenting opinion in San Diego Gas & Electric Co. v. City of San Diego, 450 U.S. 621 (1981). Other important decisions are:) APA amicus briefs. Click here for APA's amicus curiae briefs in the cases of Tahoe-Sierra, Palazzolo, and City of Monterey. Readings. Jerold Kayden and Charles Haar coauthored Landmark Justice: The Influence of William J. Brennan on America's Communities (Preservation Press, 1989). They coedited Zoning and the American Dream: Promises Still to Keep (American Planning Association, 1989).
http://www.planning.org/25anniversary/planning/2003jun.htm
crawl-001
en
refinedweb
Dear type them over again but I don't know how to do it. I'd appreciated very much if you could help me on this. Warmest regards, Kim Dear Kim, What you need to do, is first export the contacts from your old Outlook and then import it to your new Outlook. First, to export: In the Outlook window, click on "File" and then "Import & Export". In the import & export wizard window, click "Export to a File" and then click on For the type of file you want to export to, chose "Comma Separated Values" and click You will see a list of your Outlook folders, chose your contacts folder and click The next window prompts you to enter a file name and location to save the file to. Click the "Browse" button to select a folder. "My Documents" is a great place to save it. Now type a name for your file. I always name it addbk. Click "Next", and then "Finish". Once the export has run, go and check your "My Documents" folder to see if the file is there. It should be named "addbk.csv" and will probably be small enough to fit on a floppy. Transfer the file to your new computer, and follow these directions to import the addresses. In the import & export wizard window, click "Import from another program or file" and then click on "Next". For the file type, chose "comma separated values" and click "Next". Click the Browse button and find the file you exported. Now, you need to select the folder you want to import to, this would be your contacts folder. Click "Next" and then "Finish". Wait while the computer runs the import, and you will have your contacts in your new computer. Elizabeth
http://www.asktcl.com/page79.html
crawl-001
en
refinedweb
Library Python Imaging Library Handbook Tutorial ::: Prev Next: import sys import Image for infile in sys.argv[1:]: try: im = Image.open(infile) print infile, im.format, "%dx%d" % im.size, im.mode except IOError: pass). The Python Imaging Library provides a number of methods and modules that can be used to enhance images. The ImageFilter module contains a number of pre-defined enhancement filters that can be used with the filter method. import ImageFilter out = im.filter(ImageFilter.DETAIL). For more advanced image enhancement,.
http://www.pythonware.com/library/pil/handbook/introduction.htm
crawl-001
en
refinedweb
March 14, 2003 What is the difference between Class.forName() and ClassLoader.loadClass()? Both methods try to dynamically locate and load a java.lang.Class object corresponding to a given class name. However, their behavior differs regarding which java.lang.ClassLoader they use for class loading and whether or not the resulting Class object is initialized. The most common form of Class.forName(), the one that takes a single String parameter, always uses the caller's classloader. This is the classloader that loads the code executing the forName() method. By comparison, ClassLoader.loadClass() is an instance method and requires you to select a particular classloader, which may or may not be the loader that loads that calling code. If picking a specific loader to load the class is important to your design, you should use ClassLoader.loadClass() or the three-parameter version of forName() added in Java 2 Platform, Standard Edition (J2SE): Class.forName(String, boolean, ClassLoader). Additionally, Class.forName()'s common form initializes the loaded class. The visible effect of this is the execution of the class's static initializers as well as byte code corresponding to initialization expressions of all static fields (this process occurs recursively for all the class's superclasses). This differs from ClassLoader.loadClass() behavior, which delays initialization until the class is used for the first time. You can take advantage of the above behavioral differences. For example, if you are about to load a class you know has a very costly static initializer, you may choose to go ahead and load it to ensure it is found in the classpath but delay its initialization until the first time you need to make use of a field or method from this particular class. The three-parameter method Class.forName(String, boolean, ClassLoader) is the most general of them all. You can delay initialization by setting the second parameter to false and pick a given classloader using the third parameter. I recommend always using this method for maximum flexibility. Just because you successfully load a class does not mean there won't be any more problems. Recollect that static initialization code can throw an exception, and it will get wrapped in an instance of java.lang.ExceptionInInitializerError, at which point, the class becomes unusable. Thus, if it is important to process all such errors at a known point in code, you should use a Class.forName() version that performs initialization. Furthermore, if you handle ExceptionInInitializerError and take measures so that the initialization can be retried, it will likely not work. This code demonstrates what happens: public class Main { public static void main (String [] args) throws Exception { for (int repeat = 0; repeat < 3; ++ repeat) { try { // "Real" name for X is outer class name+$+nested class name: Class.forName ("Main$X"); } catch (Throwable t) { System.out.println ("load attempt #" + repeat + ":"); t.printStackTrace (System.out); } } } private static class X { static { if (++ s_count == 1) throw new RuntimeException ("failing static initializer..."); } } // End of nested class private static int s_count; } // End of class This code attempts to load the nested class X three times. Even though X's static initializer fails only on the first attempt, all of them fail: >java Main load attempt #0: java.lang.ExceptionInInitializerError at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:140) at Main.main(Main.java:17) Caused by: java.lang.RuntimeException: failing static initializer... at Main$X.<clinit>(Main.java:40) ... 3 more load attempt #1: java.lang.NoClassDefFoundError at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:140) at Main.main(Main.java:17) load attempt #2: java.lang.NoClassDefFoundError at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:140) at Main.main(Main.java:17) It is slightly surprising that the errors on subsequent load attempts are instances of java.lang.NoClassDefFoundError. What happens here is that the JVM has already noted the fact that X has been loaded (before the initialization is attempted), and the class cannot be unloaded until the current classloader is garbage collected. So, on subsequent calls to Class.forName(), the JVM does not attempt initialization again but, rather misleadingly, throws an instance of NoClassDefFoundError. The proper way to reload such a class is to discard the original classloader instance and create a new one. Of course, this can be done only if you had anticipated that and used the proper three-parameter form of forName(). I am sure you have used Java's X.class syntax to obtain a Class object for a class whose name is known at compile time. Less well known is how this is implemented at the byte-code level. The details are different across compilers, but all of them generate code that uses the one-parameter form of Class.forName() behind the scenes. For example, javac from J2SE 1.4.1 translates Class cls = X.class; into the following equivalent form: ... // This is how "Class cls = X.class" is transformed: if (class$Main$X == null) { class$Main$X = class$ ("Main$X"); } Class cls = class$Main$X; ... static Class class$ (String s) { try { return Class.forName (s); } catch (ClassNotFoundException e) { throw new NoClassDefFoundError (e.getMessage()); } } static Class class$Main$X; // A synthetic field created by the compiler Of course, everything mentioned above about Class.forName()'s short form always initializing the class in question applies to X.class syntactic form as well. The details are different when such syntax is used to get Class objects for primitive and array types, and I leave that as an exercise for curious readers. In the previous example, you saw that the result of loading the class was cached in a special package-private static field artificially created by the compiler, and a synthetic helper method executed Class.forName(). The reason this is convoluted may be because the syntax used was unavailable in early Java versions, so the feature was added on top of the Java 1.0 byte-code instruction set. Free Download - 5 Minute Product Review. When slow equals Off: Manage the complexity of Web applications - Symphoniq Free Download - 5 Minute Product Review. Realize the benefits of real user monitoring in less than an hour. - Symphoniq
http://www.javaworld.com/javaworld/javaqa/2003-03/01-qa-0314-forname.html
crawl-001
en
refinedweb
A Account receivable - see debtor Acquittal - Discharge of defendant when found not guilty Act - Law, as an act of parliament Adjudication - Judgment or decision of a Court or tribunal Administration charge - An ad valorem charge made by the invoice financier (often by deduction from the purchase price of debts) calculated on the amount of each debt purchased by the invoice financier. The charge is made to compensate the invoice financier for taking over certain administrative functions and /or the risk of bad debts Administration order - An order by a County Court directing a debtor to pay a specified monthly installment into Court in respect of outstanding debts. Then distributes the money between the creditors on a pro-rata basis Adoption - An act by which the rights and duties of the natural parents of a child are extinguished and equivalent rights and duties become vested in the adopter(s), to whom the child then stands in all respects as if born to them in marriage Adultery - Voluntary sexual intercourse between a married person and another person who is not the spouse, while the marriage is still valid Advance - The percentage of an invoice's face value which a invoice financier pays upon its purchase. Advocate - A barrister or solicitor representing a party in a hearing before a Court Affirmation - Declaration by a witness who has no religious belief, or has religious beliefs that prevent them taking the oath, that the evidence they are giving is the truth Aged balance report - A schedule of outstanding debts by reference of their due dates Agency factoring - Factoring disclosed to the debtors but with the sales accounting and collection functions retained by the client Alternative Dispute Resolution (ADR) - An alternative method to the Courts by which parties can resolve their dispute - could be arbitration Ancillary relief - Additional claims (e.g. in respect of maintenance) attached to the petition for divorce Ancillary rights - All rights under the contract of sale or service giving rise to a debt (including the right to return goods) and all guarantees and insurance in relation to a debt Annul - To declare no longer valid Appeal - Application to a higher Court or authority for review of a decision of a lower Court or authority Appellant- Person who appeals Applicant - Person making the request or demand Approved - In relation to a debt:. Asset Based Lending - A business loan where the borrower pledges as collateral for the loan any assets, such as invoices, purchase orders or equipment, used in the conduct of his or her business. Funds are used for business related expenses. All asset-based loans are secured. Assignment notice - The written instruction to the debtor to pay the invoice financier normally placed on the face of each invoice issued by the client other than in invoice discounting or undisclosed factoring Associate - A person connected with the client by reason of common control or relationship Attachment of earnings - An order that directs an employer of a debtor to deduct regularly an amount, fixed by the Court, from the debtor's earnings and pay that sum into Court Award - Result of an arbitration hearing or the amount of damages assessed by a Court Availability - The amount payable, at any time, by the invoice financier to the client for or on account of the purchase price of debts sold to the invoice financier Return to top B Back-to-back factoring - the provision of factoring services to a debtor in order to provide security for approval of the debtor's indebtedness arising from the sales of another client Balance payment - Payment, of the purchase price of a debt purchased by the invoice financier, on the due date for payment of the purchase price (the maturity date or collection date) after deducting any prepayment made in respect of that purchase Bail - Release of a defendant from custody, until their next appearance in Court, subject sometimes to security being given and/or compliance with certain conditions Bailiff - Officer of the County Court empowered to serve Court documents and execute warrants Bankrupt (insolvent) - unable to pay creditors and having all goods/effects administered by a liquidator or trustee and sold for the benefit of those creditors Bar - The collective term for barristers Barrister (see Counsel; Silk) - The branch of the legal profession which has rights of audience before all Courts Batch - A bundle of copy invoices accompany a notification Bulk factoring - see Agency factoring Bill of costs -(see Taxation of costs and Summary assessment) Bill of Exchange - An instrument similar to a time or sight draft which the buyer signs. This is acknowledgement of debt for the goods he is buying Bill of Lading - A shipping document which gives instructions to the company transporting the goods Bill of Sale - document used to transfer the title of certain goods from seller to buyer Bona fide (in good faith) - A Bona Fide agreement is one entered into genuinely without attempt to fraud Bona vacantia - Denotes the absence of any known person entitled to the estate of a deceased person Brief - Written instructions to counsel to appear at a hearing on behalf of a party prepared by the solicitor and setting out the facts of the case and any case law relied upon Return to top C Case number - A unique reference number allocated to each case by a Court Caveat - A formal notice given to the registrar, that asks a court to suspend action until the party which filed the challenge can be heard Chain - An international association of correspondent factors Charge - A formal accusation against a person that a criminal offence has been committed Charge back - An amount payable by the client to the invoice financier in respect of a debt the subject of recourse and debited to the clients account with the invoice financier Charging order - An order directing that a charge be registered at the Land Registry on property owned by the debtor. This is also a form of enforcing civil debt, preventing the sale or disposal of a property until the charge has been cleared Civil - Matters concerning private rights and not offences against the state Claim - Proceedings issued in the County or High Court. Previously known as an Action Claimant - The person issuing the claim. Previously known as the Plaintiff Claim form - The form that a claim is issued on. Previously known as a Summons Client - A business with which the invoice financier has entered into a factoring agreement Codicil - An document signed and executed which amends or adds something to a will Collection date - The date on which the invoice financier has received payment (by way of cleared funds) for a debt. Collection only - in relation to a factoring agreement, the invoice financier is responsible to pay the purchase price only on the collection date with no prepayments. Committal - Common law - The law established, by precedent, from Court decisions and established within a community Commission - Administration charge Compensation - Sum of money to make up for or make amends for loss, breakage, hardship, inconvenience or personal injury caused by another Compos mentis (of sound mind) - Legally fit to conduct/defend proceedings Concurrent sentence - A direction by a Court that a number of sentences of imprisonment should run at the same time Conditional discharge - A discharge of a convicted defendant without sentence on condition that he/she does not re-offend within a specified period of time Confidential factoring - factoring (either with or without recourse), without notices of the assignments to the debtors, whereby the client administers the sales ledger and collects as agent for the invoice financier Consecutive sentence - An order for a subsequent sentence of imprisonment to commence as soon as a previous sentence expires. Can apply to more than two sentences Contempt of Court - Disobedience or wilful disregard to the Court process Cor (Coram) in the presence of Co-respondent - A person named as an adulterer (or third person) in a petition for divorce Corroboration - Evidence to confirm evidence by another or supporting evidence, for example forensic evidence (bloodstain, fibres etc) in murder cases Correspondent invoice financier - a invoice financier who is prepared to act as an import invoice financier Counsel - A barrister Count - An individual offence Counterclaim - A claim made by a defendant against a claimant in an action. There is no limit imposed on a counterclaim, but a fee is payable according to the amount counterclaimed County Court - Sometimes inaccurately referred to as the Small Claims Court, it deals with civil matters including all monetary claims up to £15,000. Many County Courts have extra powers which enable them to deal with divorce and other family proceedings, bankruptcy actions, matters relating to children and cases involving ships/boats known as admiralty actions Court of appeal - Divided into: i) civil and, ii) criminal divisions and hears appeals: i) from decision in the High Court and County Courts and, ii) against convictions or sentences passed by the Crown Court Court of protection - The branch of the High Court with jurisdiction over the estates of people mentally incapable of handling their own financial affairs Covenant - A formal agreement or a contract constituting an obligation to perform an act Credit approval - The approval of a debt of in a non-recourse factoring arrangement whereby the credit risk in relation to that debt is accepted by the invoice financier without recourse to the client Credit limit -A limit established by the invoice financier in relation to a debtor within which outstanding debts are deemed to approved Credit risk - The risk of the inability of the debtor to pay for a debt, purchased by a invoice financier, solely by reason of financial inability to pay Creditor - A person to whom money is owed by a debtor Crown Court - The Crown Court deals with all crime committed for trial by Magistrates Courts Cases and are heard before a judge and jury. The Crown Court also acts as an appeal Court for cases heard and dealt with by the Magistrates. The Crown Court is divided into tiers, depending on the type of work dealt with.. Current account - An account between the invoice financier and the client for the recording of some or all of the transactions between them (according to the accounting system in use) Customer - Debtor Cut-off period - A clause in a contract of sale or service by which each invoice shall be deemed to arise from a separate contract i) civil and, ii) criminal divisions and hears appeals: i) from decision in the High Court and County Courts and, ii) against convictions or sentences passed by the Crown Court. · Defended High Court Civil work. · All classes of offence in criminal proceedings. · Committals for sentence from the Magistrates' Court. · Appeals against convictions and sentences imposed at Magistrates' Court. · All classes of offence in criminal proceedings. · Committals for sentence from Magistrates' Court. · Appeals against convictions and sentences imposed at Magistrates' Court. · Class 4 offences only in criminal proceedings. · Committals for sentence from Magistrates' Court. · Appeals against convictions and sentences. Return to top D Damages - An amount of money claimed as compensation for physical/material loss, e.g. personal injury, breach of contract Dated invoices - Invoices to which forward dating has been applied Debit back - Charge back Debt - All debtors obligations under a contract of sale or service Debtor - Person owing money to another party Debts purchased account - an account maintained by the invoice financier on which are recorded the value of all un matured debts purchased by the invoice financier Debt turn - The average period of credit taken by the debtors of a client Decree absolute - Certificate De Facto (in fact) - "As a matter of fact" Default judgment - Obtained by the claimant as a result of the failure of a defendant to comply with the requirements of a claim i.e. reply or pay within a 14 day period after service of the claim Defendant - Person sued; person standing trial or appearing for sentence Deposition - A statement of evidence written down and sworn on oath, or by affirmation Direct import factoring - The provision of factoring services by a invoice financier in the country of the debtors to an exporter in another country without the intervention of a correspondent invoice financier in that country Disapproved - In relation to a debt (see unapproved) Disclosed invoice discounting - term for agency factoring Dismissal - To make order or decision that a claim be ceased Discount/Discounting charge - The charge made by a invoice financier (often as a deduction in calculating the purchase price of debts) for the provision of prepayments Dispute- The failure of the debtor to accept the goods or services and the invoice the subject of a debt purchased by the invoice financier for any reason whatsoever Dispute notice - A written notice from the invoice financier to the client (or, in the two invoice financier system, from the import invoice financier) advising the latter of a dispute District judge - A judicial officer of the Court whose duties involve hearing applications made within proceedings and final hearings subject to any limit of jurisdiction Divorce - Dissolution or nullity of marriage Domestic factoring - The factoring of debts arising from sales of a client to debtors in the same country Drop-in - A provision in non-recorse factoring whereby a debt (or part of) in excess of a credit limit may fall within it to the extent that an item within the limit is paid or credited E Early payment - Prepayment EAT - Employment Appeal Tribunal Eligible debt - A debt in relation to which the invoice financier has prescribed that the client may draw a prepayment on account of the purchase price. Enforcement - Method of pursuing a civil action after judgment has been made in favour of a party. Process carried out by Magistrates Court to collect fines and other monetary orders made in the Crown Court Estate - The rights and assets of a person in property Execution - (see Levy) Seizure of debtors goods following non payment of a Court order Exempt - To be freed from liability Exhibit - Item or document referred to in a statement or used as evidence during a Court trial or hearing Export invoice financier - A invoice financier (normally in the country of his client) providing factoring for exports and using the two invoice financier system. Ex Parte (by a party) - An ex parte application is made to the Court during proceedings by one party in the absence of another or without notifying the other party Expert witness - Person employed to give evidence on a subject in which they are qualified or have expertise Return to top F Facultative agreement - An agreement which provides for each debt to be offered by the client to the invoice financier who may exercise his discretion as to whether or not to accept it. Fast track - The path that defended claims of more than £5000 but not more than £15000 are allocated to Full factoring - A factoring arrangement whereby the invoice financier takes on the administrative functions of the sales ledger and collection from debtors, relieves the client from dad debts and provides finance by way of prepayments. Funds in use - The aggregate amount, at any one time, of prepayments and amounts charged to the client by the invoice financier un recovered by prepayments from the debtors Return to top G Garnishee - A summons issued by a plaintiff, against a third party, for seizure of money or other assets in their keeping, but belonging to the defendant Guarantor - Someone who promises to pay if payment is not made by the person responsible for repayments of a loan Guardian - A person appointed to safeguard/protect/manage the interests of a child or person under mental disability H Habeas corpus (produce the body) - A writ which directs a person to produce someone held in custody before the court High Court - A civil Court which consists of 3 Import invoice financier - A correspondent factor (normally in the country of the debtors) who is prepared to take sub-assignment of debts owing by such debtor and consequently be responsible for collection and/or the credit risk Indictable offence - A criminal offence triable only by the Crown Court. The different types of offence are classified 1, 2, 3 or 4. Murder is a class 1 offence Indirect payment - A payment made by a debtor to the client for a debt purchased by the invoice financier contrary to a notice of the assignment of that debt Ineligible debt - A debt a invoice financier is not prepared to make a prepayment on account of the purchase price e.g. for reason of breach of warranty by the client or the the amount is in excess of a prescribed limit Initial payment - Prepayment Injunction - An order by a Court either restraining a person or persons from carrying out a course of action or directing a course of action be complied with. Failure to carry comply may be punishable by imprisonment Inter-factor agreement - An agreement between corresponding invoice financiers whereby they mutually agree to act as import and export invoice financiers in accordance with a code of practice (e.g. factors chain international) Introductory letter - Letter sent to each of the debtors by a client at the start of a factoring arrangement to explain the factoring arrangement and instructing the debtor to pay the invoice financier for all supplies until further notice Invoice discounting - Confidential factoring, normally without recourse Insolvent - see Bankrupt Inter Alia (among other things) - Indicates that the details given are only an extract, not the full picture Intestate - Without leaving a will Issue - To initiate legal proceedings in pursuit of a claim J Judgment - The final decision of a Court Judicial/Judiciary - i) Relating to the administration of justice or to the judgment of a Court ii) A judge or other officer empowered to act as a judge Juror - (see Jury) A person who has been summoned by a Court to be a member of the jury Juvenile - Person under 17 years of age K L Law Lords - Describes the judges of the House of Lords who are known as the Lords of Appeal Lease - The letting of land, e.g. rent etc, for property for a prescribed period) A duty carried out by a bailiff or sheriff under the authority of a warrant for a sum of money, whereby goods of value belonging to the debtor are claimed with a view to removal and sale at a public auction in an attempt to obtain payment Libel - A written and published statement/article which infers damaging remarks on a persons reputation Litigation - Legal proceedings Lord Chief Justice - Senior judge of the Court of Appeal (Criminal Division) who also heads the Queens Bench division of the High Court Lord Justice of Appeal - Title given to certain judges sitting in the Court of Appeal M Maintenance pending suit - A temporary order for financial provision made within divorce proceedings until the proceedings are finalised Master of the Rolls - Senior judge of the Court of Appeal (Civil Division) Maturity fixed date the due date for a payment of the purchase price of a debt purchased by the invoice financier after deduction of any prepayment made in respect of that debt. Maturity factoring - A factoring arrangement where the purchase price of each and every debt is made on a maturity date often without any provision for prepayments. Maturity date -The number of days after a invoice date (or the end of the month in which the invoice is dated or the date of the receipt by the invoice financier of copy invoices) fixed by reference to a historical average period of credit taken by debtors. The period is the basis for fixing a maturity date Magistrates Court - A Court where criminal and some civil proceedings are commenced before Justices of the Peace who examine the evidence/statements and either deal with the case themselves or commit to the Crown Court for trial or sentence Minimum balance - Retention Minor - Someone below 18 years of age and unable to sue or be sued without representation, other than for wages Mitigation - Reasons submitted on behalf of a guilty party in order to excuse or partly excuse the offence committed, in an attempt to minimise the sentence Mortgage - A loan of money advanced to purchase property. The property is used as security for payment Mortgagor - The party obtaining the loan Mortgage - The party that gives the loan Motion - An application by one party to the High Court for an order in their favour Multi track - The path that defended claims over £15000 are allocated to N Non-molestation - An order within an injunction to prevent one person physically attacking another Non-notification factoring - Confidential factoring Non recourse factoring - Full factoring or any variation by which the invoice financier has no right of recourse in relation of approved unpaid by reason only of the financial inability of the debtor to pay Non-suit - Proceedings where the plaintiff has failed to establish to the Court's satisfaction that there is a case for the defendant to answer Notary public - Someone who is authorised to swear oaths and certify the execution of deeds Notice to quit - Gives prior notice, when served in possession proceedings, of termination of a tenancy Notification - A report by the client to the invoice financier of the coming into existence of debts (already purchased by the invoice financier pursuant to a whole turnover type agreement often by submission of copy invoices. The expression is also used to denote notice to the debtor of the assignment of his indebtedness. Nullity - Application to the Court for a declaration that a marriage be declared 'void' or be annulled, i.e. declared never to have existed until the Court dissolved it O Oath - (see Affirmation) A verbal promise by a person with religious beliefs to tell the truth Open account credit -Credit not covered by a bill of exchange or promissory note Order - A direction by a Court i) a liquidator when a company is being wound up; ii) a trustee when an individual is made bankrupt. The duties of an official receiver will include examining the company/bankrupt's property which is available to pay the debts and distributing the money amongst the creditors P Particulars - Details relevant to a claim Party - Any of the participants in a Court action or proceedings Party and party - Costs that one party must pay to another Penal notice - Directions attached to an order of a Court stating the penalty for disobedience may result in imprisonment Permitted limit - Credit limit Petition - A method of commencing proceedings whereby the order from the petitioner to the Court is expressed as a prayer, e.g. the petitioner prays that the marriage be dissolved (divorce proceedings) Petitioner - A person who presents the petition Plaintiff - see Claimant Plea - A defendant's reply to a charge put to him by a court; i.e. guilty or not guilty Pleadings - Documents setting out claim/defence of parties involved in civil proceedings Possession proceedings - Legal proceedings by a landlord to recover land/property e.g. house Precedent - The decision of a case which established principles of law that act as an authority for future cases of a similar nature Probate - The legal recognition of the validity of a will Prepayment - A payment, made by the invoice financier to the client on receipt of a notification or schedule offer, on account of the purchase price of the debts included in the notification or offer Pro Forma (a matter of form) - A procedure performed subject to and following an agreed manner Pro Rata (in proportion) - Money distributed on a pro rata basis would be according to the amount of investment Prosecution - The conduct of criminal proceedings against a person Prosecutor - Person who prosecutes i) Trustee for Trusts managed by the Public Trust Office; ii) Accountant General for Court Funds; iii) Receiver (of last resort) for Court of Protection patients Putative father - The alleged or supposed father of an illegitimate child Q Quantum - In a damages claim the amount to be determined by the court Quash - To annul; i.e. to declare no longer valid Queens Counsel - Barristers of 10 years experience may apply to become queen's counsel. QCs undertake work of an important nature and are referred to as 'silks' which is derived from the Courts gown they wear. Will be known as king's counsel if a king assumes the throne R Receivable - Debt Receiver - Person appointed by the Court of Protection to act on behalf of a patient Recorder - Members of the legal profession (barristers or solicitors) who are appointed to act in a judicial capacity on a part time bases. They may progress to become a full time judge Recourse - The right of the invoice financier to be guaranteed the due payment of a purchased debt by the debtor or to have an unpaid debt repurchased by the client. Refactoring charge - An additional administration charge made by some invoice financiers in respect of any debt, which is subject to recourse and remains unpaid after a specified period, in consideration for the invoice financiers not exercising his right of recourse Related rights - Ancillary rights Registrar - (see District Judge) Registrars and deputy registrars were renamed District Judges and Deputy District Judges respectively in the Courts and Legal Services Act 1990 Remand - To order an accused person to be kept in custody or placed on bail pending further Court appearance Respondent - The person on whom a petition is served Right of Audience - Entitlement to appear before a Court in a legal capacity and conduct proceedings on behalf of a party S Schedule of offer - a list of invoices constructing an offer under a facultative type of agreement to the invoice financier by the client of the debts represented by the invoices Self billing - the buyer raises invoices on the basis of orders placed and goods delivered Seller - Client Service charge - Administration charge Sheriff - An officer of the Crown whose duties, amongst other things, consist of the enforcement of High Court writs Silk - Queens Counsel, a senior barrister sometimes referred to as leading counsel Slander - Spoken words which have a damaging effect on a person's reputation Small claims track - The path that defended claims of no more than £5000 (and Personal Injury and Housing Disrepair claims of no more than £1000) are allocated Solicitor - Member of the legal profession chiefly concerned with advising clients and preparing their cases and representing them in some Courts. May also act as advocates before certain Courts or tribunals Squatter - A person occupying land or property without the owners consent Statutory instrument - A document issued by the delegated authority (usually a Government Minister or committee) named within an act of parliament which affects the workings of the original Act, eg The County Courts Act 1984 confers authority on to the County Court Rule Committee to make rules relating to the operation of the County Courts act Stay of execution - An order stating judgment cannot be enforced without instruction from the court Suit - Legal proceedings commenced by petition Summary assessment (of costs) - Where the question of costs is dealt with at the conclusion of the hearing Summary judgment - Judgment obtained by a plaintiff where there is no defence to the case or the defence contains no valid grounds Summary offence - (see Indictable offence) A criminal offence which is triable only by a Magistrates Court Summing up - A review of the evidence and directions as to the law by a judge immediately before a jury retires to consider its verdict Summons - Order to appear or to produce evidence to a Court, the old name for a claim form Summons (jury) - Order to attend for jury service Summons (witness) - Order to appear as a witness at a hearing Supplier - Client Surety - A person's undertaking to be liable for another's default or non-attendance at Court Suspended sentence - A custodial sentence which will not take effect unless there is a subsequent offence within a specified period Survey - An investigation by a invoice financier as to the suitability of a prospective client T Take-on - The start of an agreement. All debts are purchased by the invoice financier at the date of commencement of the factoring agreement. Taxation of costs - (see Summary assessment) An examination of a solicitor's bill in civil proceedings by a Court to ensure that all charges against the legal aid fund are fair and reasonable Testator - A person who makes a will Tort - A civil wrong committed against a person for which compensation may be sought through a civil court, e.g. personal injury, negligent driving, libel etc Tribunal - A group of people consisting of a chairman (normally solicitor/barrister) and others who exercise a judicial function to determine matters related to specific interests, eg) TUC - Trade Union's Congress - With member unions representing over six and a half million working people, TUC campaigns for a fair deal at work and for social justice at home and abroad. TUPE - Transfer of Undertakings Protection of Employment regulations U Unapproved - Debts subject to recourse and are not eligible for prepayment Undisclosed factoring - Confidential invoice discounting V Verdict - The finding of guilty or not guilty by a jury Vice Chancellor - Senior judge and head of the Chancery Division of the High Court of Justice (although the Lord Chancellor is the nominal head) W Waiver - A release of debts from a charge or other encumbrance, eg transfer to a different school, medical treatment etc Whole turnover agreement - An agreement which itself provides for the assignment to the invoice financier of all existing and future debts (or all of a class of debts) of the client without any further act of transfer of the individual debts Will - A declaration of a person's intentions to distribute his/her estate and assets Winding up - The voluntary or compulsory closure of a company and the subsequent realisation of assets and payment to creditors Witness - A person who gives evidence in Court (see also Expert witness) X Y Z
http://www.firstfactoruk.com/Glossary.asp
crawl-001
en
refinedweb
Of course, this is largely true of TP6, too, but in real mode we can do segment arithmetic with impunity. In Windows' protected mode environment, 32-bit pointers no longer consist of a 16-bit segment and a 16-bit offset, but of a 16-bit selector and a 16-bit offset. The selector is essentially a pointer into an relatively small array of segment descriptors, so that while any 32-bit value represents a valid address in real mode, the same is not true in protected mode. (In fact, since a selector is a 16-bit pointer to an eight byte descriptor, the odds are 7 to 1 that any given 16-bit value will be an invalid selector. What's more, since all 8192 possible descriptors will probably not be filled in, the odds against an arbitrary 16-bit value mapping to a valid selector are significantly higher than 7 to 1.) While, of course, this hardware pickiness about selector values is what makes invalid pointer bugs so much easier to find under Windows than under DOS, it also complicates random access to huge structures. We can't just increment the segment part of a pointer to slide our 64K window 16 bytes forward; we can only use selectors that correspond to descriptors that Windows has already built. As it happens, whenever you allocate a block of memory larger than 64K, Windows defines only as many selectors as are necessary to completely 'cover' the block with 64K "tiles". That is, even though byte $10000 within a large structure is logically and (may be) physically contiguous with byte $FFFF, we have to treat them almost as if they were in two completely different 64K structures: We can not use one selector to refer to both bytes. Similarly, we have to be careful that all references to multibyte types (like numbers, or records, or the 640 byte arrays in Figure 1) are completely contained within a single segment. Trying to read or write past the end of a segment will cause a UAE: We either have to make sure no multibyte structures straddle segment boundaries; refer to them byte-at-a-time; or use multiple block moves to move data to and from an intermediate buffer. Working within this rigid framework of 64K peepholes into large blocks of memory complicates and slows down any code that has to deal with huge arrays, but certainly shouldn't deter anyone who really deserves their 'programming license'. What does cause trouble is the " __ahincr double whammy": not only does TPW not provide a Pascal binding for __ahincr, the standard Windows constant that you can add to a pointer's selector to step it 64K forward, the SDK manuals only talk about how to use __ahincr, not how to obtain it, since that is normally handled by the C runtime library, or by the file MACROS.INC that comes with MASM, not the SDK. Since I use TASM, not MASM, I had to ask around until I found someone who could tell me "Oh, __ahincr is just the offset of a routine you import from the KERNEL DLL just like you'd import any other API function." (Rule Number 1 of Windows programming is Know A Guru: sooner or later, you're bound to run into something that's not in the manuals and that you can't find by trial and error.) As you can see, given __ahincr, the HugeModl unit in Listing 1 is pretty straightforward. The unit supports huge model programming on a variety of levels: It provides enhanced GetMem and FreeMem routines that allow you to allocate blocks larger than $FFFF bytes, and it provides three levels of tools for manipulating large data objects. The lowest level is, of course, a Pascal binding for __ahincr. This is used throughout the unit, and you may find your own uses for it, too. The middle level is a set of functions and macros that can step a pointer by any amount or calculate the offset of any byte within an object, while the top level is a Move routine that can move huge blocks without any apparent concern for the 64K tile boundaries. The enhanced GetMem and FreeMem routines look exactly the same as the System unit routines they "shadow" except that the block size parameter can be greater than $FFFF. This lets you use the same set of routines for small data as for large data, without having to do anything but put HugeModl in your uses clause(s). GetMem and FreeMem pass 'small' (less than 64K) requests on to the System unit routines, and call GetHuge and FreeHuge to handle the 'large' (at least 64K) requests. (Bear in mind that in "Standard" (286) mode, Windows won't allocate any blocks larger than a megabyte.) The GetHuge routine uses the GlobalAlloc function, which returns a handle to the allocated memory block, and then uses the GlobalLock routine to convert the handle into a pointer. The FreeHuge routine in turn uses the GlobalHandle function to convert the pointer's selector back to a handle, and then uses the GlobalFree call to free the handle. One important thing to note is that FreeHuge (and, transitively, HugeModl.FreeMem) can only free pointers that came from GetHuge/GetMem: You cannot allocate a block then free some bytes from the end (you'll have to use the Windows GlobalRealloc function for that) nor can you FreeMem a largish typed constant, say, from the middle of your data segment. Of course, once you lay claim to the continent, you have to get over the Appalachians! HugeModl supplies three pointer math routines that can take you safely past the 'mountains at the end of the segment': OfsOf, which adds a long offset to a base pointer; PostInc, which steps a pointer by a long and returns the original value; and PreInc, which steps a pointer by a long and returns the new value. All three add the (32-bit) step to the base pointer's (16-bit) offset, using the low word of the result as the new offset, and using the high word to step the selector by the appropriate number of 64K "tiles". (With the {$X+} enhancements, PreInc and PostInc can be used as procedures that modify the pointer argument. PreInc is better for this than PostInc, as it does two fewer memory accesses and is thus a little faster.) All three pointer math routines are defined as inline macros and as assembler routines. If the HugeModl unit is compiled with Huge_InLine $define-d, the routines are defined as macros, while otherwise each operation requires a far call. It's probably best to use the macros because virtually every routine that takes a huge parameter will need to do some pointer calculations, even though that very ubiquity also means that using the macros can add quite a lot to your object code's size. Whether you use OfsOf or PostInc/PreInc is largely a matter of taste, though it is often simpler and/or cheaper to step a pointer (using PostInc/PreInc) by the array element's size than it is to keep multiplying a stepped array index by the element size. In either case, you'll quickly find that there is one major drawback to using huge arrays "by hand" instead of letting the compiler do it all for you transparently: the routines in the HugeModl unit all return untyped pointers, and you'll end up having to use a lot of casts, if you don't want to use Move for everything. For example, something like for Idx := 1 to ArrayEls do SmallArray[Idx] := Idx; will become Ptr := HugeArrayPtr; for Idx := 1 to ArrayEls do LongPtr(PostInc(Ptr, SizeOf(Long)))^ := Idx;. Of course, untyped pointers are no problem when you do use the Move routine as it, too, is meant to extend the range of the System unit routine it shadows without breaking any existing code. Thus, the first two arguments are untyped var parameters pointing to the data's source and destination, while the third argument is the number of bytes to copy. (Naturally, unlike the System unit's Move routine, HugeModl's can move any number of bytes from 0 to $7FFFFFFF.) You may find that reading the code for the huge Move routine will help you to write your own huge model code. It breaks a potentially huge block move, which might span several segments, into a succession of <= 64K moves, each entirely within a single segment. For each submove, it uses the larger of the source and destination offsets to decide how many bytes it can move without walking off the end of the source or destination segments. It then calls a word move routine to move that many bytes, increments the pointers, and decrements the byte count. Since the block move is by words, not bytes, it can easily handle a 64K byte move (when both the source and destination offsets are 0) as a single 32K word move. Now, while the HugeModl.Move routine can handle structures that straddle segment boundaries, compiler-level pointer expressions like Ptr^ := Fn(Ptr^); can not. This means that you should only use pointer expressions when you know that they will not cause an UAE by hitting the end of a segment. If the base structure's size is a power of two (words, longs, doubles, &c), you can generally use pointer expressions so long as the array starts an integral number of elements from the high end of the initial segment. That is, an array of words should start on a word boundary (the offset must be even) while an array of doubles should start on a qword boundary (the offset must be a multiple of eight). Since you have to process unaligned data byte-at-a-time or via a buffer you Move data to and from, it may be worth adding blank fields to "short" structures so that their size will be a power of two. Since GetHuge always returns a pointer with an offset of 0, you don't have to worry about the alignment of "simple" (fixed-length, headerless) arrays. However, most variable length arrays will have a fixed length header and a variable length tail, which can leave the tail unaligned. Rather than using the object-oriented technique of Hax #?? [PC Techniques volume 2, #1], where we declare the header as an object then declare the actual variable length object as a descendant of the header object that adds a large "template" array, we can retain control over our huge tails' alignment by simply putting a tail pointer in the header. Then, when we allocate the object, we can simply allocate as many extra bytes as we might need to insert between the header and the tail to maintain proper alignment. Thus, with a array of longint tail, we would just allocate three extra bytes and say something like TailPtr := PChar(HdrPtr) + ((Ofs(HdrPtr^) + SizeOf(Hdr) + 3) and $FFFC);. If you've gotten the impression that writing 'huge model' code under TPW is a lot more work than writing normal, '16-bit' code, you're both right and wrong. Yes, you will be making a lot of 'calls' to PostInc/PreInc, and you will be making a lot of casts of their results, but if you already write code that sweeps arrays by stepping pointers, you will probably find that using PostInc/PreInc makes for fewer source lines, which in turn tends to counter the legibility lost in all the calls and casts. Not to mention that huge model Pascal code is a lot easier to write and read than its 32-bit assembler equivalent! Copyright © 1992 by Jon Shemitz - jon@midnightbeach.com - html markup 8-25-94 Return to Publications list
http://www.midnightbeach.com/jon/pubs/huge-model.htm
crawl-001
en
refinedweb
React Native Logging ToolsReact Native Logging Tools A react native module that lets you: - Connect your app to reactotron easily - Send logs to multiple services in one time - Send crash/error reports to multiple services in one time - Register a global error handler which will capture fatal JS exceptions and send a report to your crash reporter libraries - Can be plugged to Flipper to display all events sent to different service. and all this, as easily as possible - Getting started - Status of supported libraries - Usage Getting startedGetting started $ yarn add react-native-logging-tools or $ npm install react-native-logging-tools Status of supported librariesStatus of supported libraries UsageUsage ImportsImports To start, you have to import methods from react-native-logging-tools which will be used. import { init, createFirebaseLogger, createCrashlyticsLogger, createSentryLogger, createTealiumLogger, createAdobeLogger, setupReactotron, logEvent, } from 'react-native-logging-tools'; And the others external libraries to plug to react-native-logging-tools import Reactotron from 'reactotron-react-native'; import { reactotronRedux } from 'reactotron-redux'; import Instabug from 'instabug-reactnative'; import analytics from '@react-native-firebase/analytics'; import crashlytics from '@react-native-firebase/crashlytics'; import * as Sentry from "@sentry/react-native"; import AsyncStorage from '@react-native-community/async-storage'; import { ACPCore } from '@adobe/react-native-acpcore'; import { addPlugin } from 'react-native-flipper'; InitializationInitialization Before any call to react-native-logging-tools's features, you have to initialize it (eg. in your App.ts or store.ts) init({ config: { reportJSErrors: !__DEV__, }, analytics: [createFirebaseLogger(analytics())], errorReporters: [createCrashlyticsLogger(crashlytics())], }); FeaturesFeatures LoggersLoggers Debug EventsDebug Events You can call this function where do you want/need to send logs to each plugged libraries to analytics during the initialization step logEvent('EVENT_NAME', { your_key: 'value', ... }); logDebugEvent('EVENT_NAME', { your_key: 'value', ... }); logWarningEvent('EVENT_NAME', { your_key: 'value', ... }); logNetworkEvent('EVENT_NAME', { your_key: 'value', ... }); logErrorEvent('EVENT_NAME', { your_key: 'value', ... }); If you use react-navigation and you want send to analytics navigation events e.g, you can add logEvent to his event handler (React-navigation docs) Error EventsError Events You can call this function where do you want/need to send logs to each plugged libraries to errorReporters during the initialization step recordError('EVENT_NAME', { your_key: 'value', ... });
https://preview.npmjs.com/package/react-native-logging-tools
CC-MAIN-2021-04
en
refinedweb
Let’s start with a couple of statements: - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. - AWS Fargate is a compute engine for Amazon. If you aren’t sure when to use ECS with a Fargate or an EC2 model, then the quick rule of thumb to be applied is: - EC2 for large constant workloads. - Fargate for small, batch or occasional burst workloads. Hopefully by now you are well on your way to refactoring any monolithic EC2 instance applications onto a new microservices architecture utilising containers on ECS and possibly Fargate. Testing your new microservices has been going well and now you want to extend its reachability to outside your AWS account. This could be access over the internet or a Private link (Private links are great for secure SaaS connectivity) to a verified 3rd party’s AWS account. However, you don’t want to create a new service for each load balancer, and why would you. You already have an internal load balancer connected to a target group that is registered to your ECS service within a Cluster running your tasks according to your task definition utilizing Fargate. As that was a bit of a mouthful so let’s see that in a simple diagram. For simplicity I’ve captured the ECS Cluster, Service and Task within the “Fargate task”. The descriptions of the inner working will be a topic for another blog post This topology allows: - Inbound internal traffic via your internal ALB. - External outbound traffic via your NAT gateway. As per the subject of this post it does not yet allow any external inbound connectivity. To enable this, we need to: - Create a new external ALB - Associate the Target Group with the Fargate task ENI. This will then give us a target topology as below: Great we can now see what we need. Creating the ALB is a simple task, however it’s not possible to connect the external load balancers target group to an existing cluster, you can’t even create a new cluster and associate more than one Target Group within the AWS console. Try it, it’s just not possible. AWS do state that your Amazon ECS service can serve traffic from multiple load balancers and expose multiple load balanced ports when you specify multiple target groups in a service definition. But…. If you want to create a service specifying multiple target groups, you must create the service using the Amazon ECS API, SDK, AWS CLI, or an AWS CloudFormation template. After the service is created, you can view the service and the target groups registered to it with the AWS Management Console. Just like this: OK so we just jumped to the end of this story, lets rewind a little and take stock of what AWS are telling us (I’ve bolded the salient points): - Your Amazon ECS service can serve traffic from multiple load balancers. - If you want to create a service specifying multiple target groups, you must create the service using the Amazon ECS API, SDK, AWS CLI, or an AWS CloudFormation template. I’ll assume you know how know to set up an external ALB and associate a Target Group to it. However, this time don’t add any listeners as this task is taken care of when the Target Group is associated to the service. For my example I’m going to show you how to take the configuration from your existing ECS setup and create a new service via the Python Boto3 SDK. Create an instance with Python 3.x installed with boto3, even better run as a Jupyter notebook and execute the following Python code to get a list of your existing clusters (replacing <region> with your region): import boto3 session = boto3.Session(region_name="<region>") ecs = session.client('ecs') ecslist = ecs.list_clusters() for cluster in ecslist['clusterArns']: print (cluster) Find the existing Cluster ARN you are interested in and your existing Service Name and issue the command below: ecs.describe_services(cluster="<ClusterARN>",services=["<ServiceName>"]) You will now receive the JSON output of your current service which you then need for the next command. Remember you are creating a new service, so use a new name for <new-servicename>. All variables below within <> need to be updated: ecs.create_service( cluster='<clusterarn>, serviceName='<new-servicename>', taskDefinition='<taskdefinition>', loadBalancers=[ { 'targetGroupArn': ‘<internal-loadbalancer-arn>’, 'containerName': '<Task definition>', 'containerPort': <port> }, { 'targetGroupArn': ‘<externall-loadbalancer-arn>’, 'containerName': '<Task definition>', 'containerPort': <port> }, ], serviceRegistries=[], desiredCount=1, clientToken='<justauniquetoken>', launchType='FARGATE', platformVersion='LATEST', deploymentConfiguration={ 'maximumPercent': 200, 'minimumHealthyPercent': 100 }, placementConstraints=[], placementStrategy=[], networkConfiguration={'awsvpcConfiguration': {'subnets': ['<subnet-id1>', '<subnet-id2>', '<subnet-id3>'], 'securityGroups': ['<sg-id>'], 'assignPublicIp': 'DISABLED' } }, healthCheckGracePeriodSeconds=0, schedulingStrategy='REPLICA' ) If you have entered all your variables correctly then it should echo the full JSON of the configuration without error. The AWS console will now reflect these changes as per the earlier snip. Now delete your old service (with just the single internal load balancer) and you’ll have a brand-new service that allows internal and external connectivity at the same time using the two load balancers. No responses yet
https://www.rucloud.technology/2019/11/15/musings-with-containers/
CC-MAIN-2021-04
en
refinedweb
Plotting a frequency chart from a set of values Hello, I have written a code now to count elements in a list named "results". It looks something like def count_element(): Y = [] for x in results: y = results.count() Y.append(y) return(y) It seems it doesn't work properly, because it just shows 0 as a result. Can I define y axis in a similar way and then define x axis as values itself and make them interact?
https://ask.sagemath.org/question/39677/plotting-a-frequency-chart-from-a-set-of-values/?sort=oldest
CC-MAIN-2021-04
en
refinedweb
C++ Program to check String Palindrome Hello Everyone! In this tutorial, we will learn how to demonstrate how to check if the String is Palindrome or not, in the C++ programming language. Condition for a String to be Palindrome: A String is considered to be a Palindrome if it is the same as its reverse. Steps to check for String Palindrome: Take the String to be checked for Palindrome as input. Initialize another array of characters of the same length to store the reverse of the string. Traverse the input string from its end to the beginning and keep storing each character in the newly created array of char. If the characters at each of the positions of the old chararray are the same as the new chararray, then the string is a palindrome. Else it isn't. Code: #include <iostream> #include <stdio.h> //This header file is used to make use of the system defined String methods. #include <string.h> using namespace std; int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to Determine whether String is Palindrome or not, in CPP ===== \n\n"; //String Variable Declaration char s1[100], c = 'a'; int n1, i = 0; cout << "\n\nEnter the String you want to check : "; cin >> s1; //Computing string length without using system defined method while (c != '\0') { c = s1[i++]; } n1 = i-1; char s2[n1+1]; cout << "Length of the entered string is : " << n1 << "\n\n"; i = 0; //Computing reverse of the String without using system defined method while (i != n1 + 1) { s2[i] = s1[n1 - i - 1]; i++; } cout << "Reverse of the entered string is : " << s2 << "\n\n\n"; i = 0; //Logic to check for Palindrome while (i != n1) { if (s2[i] != s1[i]) break; i++; } if (i != n1) cout << "The String \"" << s1 << "\"" << " is not a Palindrome."; else cout << "The String \"" << s1 << "\"" << " is a Palindrome."; cout << "\n\n"; return 0; } Output: We hope that this post helped you develop a better understanding of how to check string is palindrome or not in C++. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
https://studytonight.com/cpp-programs/cpp-program-to-check-string-palindrome
CC-MAIN-2021-04
en
refinedweb
Generate a torus. More... #include <vtkParametricTorus.h> Generate a torus. vtkParametricTorus generates a torus. For further information about this surface, please consult the technical description "Parametric surfaces" in in the "VTK Technical Documents" section in the VTk.org web pages. Definition at line 37 of file vtkParametricTorus.h. Definition at line 41 of file vtkParametricTorus. Construct a torus with the following parameters: MinimumU = 0, MaximumU = 2*Pi, MinimumV = 0, MaximumV = 2*Pi, JoinU = 1, JoinV = 1, TwistU = 0, TwistV = 0, ClockwiseOrdering = 1, DerivativesAvailable = 0, RingRadius = 1, CrossSectionRadius = 0.5. Set/Get the radius from the center to the middle of the ring of the torus. Default is 1.0. Set/Get the radius of the cross section of ring of the torus. Default is 0.5. Return the parametric dimension of the class. Implements vtkParametricFunction. Definition at line 76 of file vtkParametricTorus.h. A tor 108 of file vtkParametricTorus.h. Definition at line 109 of file vtkParametricTorus.h.
https://vtk.org/doc/nightly/html/classvtkParametricTorus.html
CC-MAIN-2021-04
en
refinedweb
Materialize CSS has been a great tool for quick and elegant front-end development. I originally wrote this post to show you how to install the vanilla Materialize CSS in Angular (still here in the second section ) however, I discovered the open source project: ngx-materalize and I highly recommend using this package if you are using Materialize CSS with Angular. It may not be as quick to get started, but in the long run you will end up saving a bunch of time and frustration using it’s built-in features. It makes development much cleaner and easier than the vanilla version. Here’s how you can install it. Once you’ve created your project, open up your terminal and run: npm install --save ngx-materialize This installs Materailize CSS, jQuery and the added features of ngx-materalize." ] Add jQuery and materalize-css types to your .tsconfig file: { "extends": "../tsconfig.json", "compilerOptions": { "outDir": "../out-tsc/app", "module": "es2015", "types": [ "jquery", // Add "materialize-css" // Add ] }, "exclude": [ "src/test.ts", "**/*.spec.ts" ] } Install the icons if you choose by running this in your terminal: npm install --save @mdi/font Add the icons to your angular.json file: "styles": [ "src/styles.scss", "node_modules/materialize-css/dist/css/materialize.min.css", "node_modules/@mdi/font/css/materialdesignicons.min.css" // Add ], Lastly, you’ll probably want to install Animations, so run this in your terminal: npm install --save @angular/animations You use animations by adding them to every module you want them to be used. If you want them to be used throughout your entire app then you could add them to your app.module.ts file like this: import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; // Add line below import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, AppRoutingModule, // Add line below: BrowserAnimationsModule, ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } Now you should be on your way to using Materialize CSS with Angular using ngx-materalize. Don’t forget to restart your server! Once you’ve created your project, open up your terminal and run: npm i materialize-css jquery --S This installs Materailize CSS and jQuery If you want to use the Material Design icons add this to the your index.html file: <head> ... ... <link href="" rel="stylesheet"> </head> You can host the icons yourself but I prefer a CDN as it’s likely people have it downloaded to their browser’s cache and any new fonts are automatically updated/added, but you may want to host them yourself if you are building an app that needs to work offline (e.g. a Progressive Web App (PWA)). If that’s the case you can go to the Material Design site for directions on how to do that." ] Restart your server and you should be ready to go!
https://colinstodd.com/posts/code/how-to-install-materialize-css-in-angular.html
CC-MAIN-2021-04
en
refinedweb
You are given a rod of length N units along with an array that contains the prices of all pieces of size smaller than N (1 to N). You need to find out the maximum profit that can be obtained by cutting the rod into pieces and then selling it. Sample Input: Prices – [ 1, 5, 8, 9, 10, 17, 17, 20 ], N = 8 Expected Output: 22 Explanation: The rod can be cut into two pieces of size 2 and 6 with total profit as 22. Algorithm: - We will use dynamic programming to solve this problem. Create a DP array of size N+1 intialized to 0. - Iterate for all sizes starting from 1 to N. - For each size, consider all sizes less than or equal to the current size. - If we take size j ( j<=i, where i is the length of the rod ) and cut the rod into two pieces of size j and ( i – j ), the result will be maximum of DP[ i ] and ( prices[ j ] + DP[ i – j ] ). We do this for all j from 1 to i and for all i from 1 to N. We keep storing the result for each subproblem in the DP array and return the final result stored in DP[ N ]. #include <bits/stdc++.h> using namespace std; int helper(vector<int> prices, int n){ vector<int> dp(n+1, 0); for(int i=1; i<=n; i++){ for(int j=1; j<=i; j++){ dp[i] = max(dp[i], prices[j-1] + dp[i-j]); } } return dp[n]; } int main(){ vector<int> prices = {1, 5, 8, 9, 10, 17, 17, 20}; int n=4; cout<<helper(prices, n); return 0; } Also take a look at another popular interview question Maximum Sum Subarray.
https://nerdycoder.in/2020/08/06/rod-cutting-problem-dp-02/
CC-MAIN-2021-04
en
refinedweb
C++ Performing basic Operations using Class Hello Everyone! In this tutorial, we will learn how to perform basic operations using Class and its members, in the C++ programming language. To understand the concept of Class and its members, we will recommend you to visit here: C++ Class Concept, where we have explained it from scratch. Code: #include <iostream> #include <vector> using namespace std; //defining the class operations to implement all the basic operations class operations { //declaring member variables public: int num1, num2; //defining member function or methods public: void input() { cout << "Enter two numbers to perform operations on: \n"; cin >> num1 >> num2; cout << "\n"; } public: void addition() { cout << "Addition = " << num1 + num2; cout << "\n"; } void subtraction() { cout << "Subtraction = " << num1 - num2; cout << "\n"; } void multiplication() { cout << "Multiplication = " << num1 * num2; cout << "\n"; } void division() { cout << "Division = " << (float) num1 / num2; cout << "\n"; } }; //Defining the main method to access the members of the class int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to perform basic operations using Class, in CPP ===== \n\n"; //Declaring class object to access class members from outside the class operations op; cout << "\nCalling the input() function from the main() method\n"; op.input(); cout << "\nCalling the addition() function from the main() method\n"; op.addition(); cout << "\nCalling the subtraction() function from the main() method\n"; op.subtraction(); cout << "\nCalling the multiplication() function from the main() method\n"; op.multiplication(); cout << "\nCalling the division() function from the main() method\n"; op.division(); cout << "\nExiting the main() method\n\n\n"; return 0; } Output: We hope that this post helped you develop a better understanding of the concept of Class in C++. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
https://studytonight.com/cpp-programs/cpp-performing-basic-operations-using-class
CC-MAIN-2021-04
en
refinedweb
Python program to find the factorial of a number using recursion : The factorial of a number is the product of all the numbers from 1 to that number. e.g. factorial of 5 is 1 * 2 * 3 * 4 * 5 i.e. 120 . In this tutorial, we will learn how to find out the factorial of a number using a recursive method. Factorial is denoted by “!”: 5 factorial is denoted by 5! Recursive method : The recursive method calls itself to solve a problem. This is called a recursion process. These types of methods will call itself again and again till a certain condition is satisfied. Finding out the factorial of a number is one of the classic problem used for recursion. The factorial of a number ’n’ is the product of all numbers from ‘1’ to ‘n’. Or, we can say that the factorial of ’n’ is equal to ’n’ times the factorial of n - 1. If the value of ’n’ is ’1’, its factorial is ‘1’. def fact(x): if x == 0 : return 1 return x * fact(x - 1) print(fact(5)) We can implement it in python like below : - fact() method is used to find out the factorial of a number. It takes one number as its argument. The return value of this method is the factorial of the argument number. This method calls itself recursively to find out the factorial of the argument number. - Inside this method, we are checking if the value of the argument is ’1’ or not. If it is ’1’, we are returning ’1’. Else, we are returning the multiplication of the argument number to fact(x -1) or the factorial of the number_ (x - 1)_. This line calls the same method again. - fact(x -1) will again call the method fact(). If the value of (x-1) is ’1’, it will return ’1’. Else, it will return (x -1) * fact(x -2) . So, the same method will be called again and again recursively. - This product chain will continue until the value of ’x’ is ’1’. It will return ‘x * (x - 1) * (x - 2)…1’ or the factorial of ’x‘. The output of the above program is “120“ Explanation : In the above example, - fact() function takes one argument “x“ - If “x ” is “1“, it will return 1. Because we don’t need to find the factorial of_ ‘1’. The factorial of _‘1’ is_ ‘1’_ itself. - Else it will return x * fact(x-1) i.e. fact(x-1) will call fact() function one more time with_ (x-1)_ as argument . If ’x _’ is _10, it will call _fact(9). _ - It will continue till x is 1, i.e. the function will return 1 and no more steps we need to move inside. So, for 5, - it will call 5 * fact (4) - fact(4 ) will be 4 * fact (3) - fact(3) will be 3 * fact (2 ) - fact(2) will be 2 * fact (1) - fact(1) will be 1 - That means, the final output is 5 * fact(4) = 5 * 4 * fact(3) = 5 * 4 * 3 * fact(2) = 5 * 4 * 3 * 2 * fact(1) = 5 * 4 * 3 * 2 * 1 * fact(0) = 5 * 4 * 3 * 2 * 1 * 1 = 120 Try changing the input number to different and check the result. Conclusion : In this example, we have learned how to find the factorial of a number in python recursively. The recursive method comes in handy if you need to execute the same process again and again. Try to run the above example and try it with different numbers to find the factorial. You can download the program from the GitHub link mentioned above. If you have any queries, don’t hesitate to drop one comment below. Similar tutorials : - Python program to find the gcd of two numbers using fractions module -
https://www.codevscolor.com/python-program-find-factorial-python-tutorial/
CC-MAIN-2020-29
en
refinedweb
select(2) select(2) NAME select(), pselect(), FD_CLR(), FD_ISSET(), FD_SET(), FD_ZERO() - syn‐ chronous I/O multiplexing SYNOPSIS For UNIX 2003: For Standards prior to UNIX 2003: For Backward Compatibility Only: (_XOPEN_SOURCE_EXTENDED not defined) DESCRIPTION Prototypes for are available by including as required by standards prior to UNIX 2003. The UNIX 2003 standard requires including to expose these prototypes along with Refer to the man page for information on how to compile programs by using the correct namespace for different UNIX standards. The and functions indicate which of the specified file descriptors is ready for reading, ready for writing, or has an error condition pend‐ ing. If the specified condition is false for all of the specified file descriptors, and block, up to the specified timeout interval, until the specified condition is true for at least one of the specified file descriptors. The and functions support regular files, terminal and pseudo-terminal devices, STREAMS-based files, FIFOs and pipes. The behaviour of and on file descriptors that refer to other file types is unspecified. The nfds argument specifies the range of file descriptors to be tested. The and functions test file descriptors in the range of 0 to nfds −1. File descriptor f is represented by the bit 1<<f in the masks. More formally, a file descriptor is represented by:.. If the timeout argument is not a null pointer, it specifies a maximum interval to wait for the selection to complete. The timeout argument points to an object of type for and to an object of type for If the members of these structures are 0, or will not block. If the timeout argument is a null pointer, or will block until an event causes one of the masks to be returned with a valid (non-zero) value. If the time limit expires before any event occurs that would cause one of the masks to be set to a non-zero value, or completes successfully and returns 0. Implementations may place limitations on the maximum timeout interval supported. On all implementations, the maximum timeout interval sup‐ ported will be at least 31 days. If the timeout argument specifies a timeout interval greater than the implementation-dependent maximum value, the maximum value will be used as the actual timeout value. Implementations may also place limitations on the granularity of time‐ out intervals. If the requested timeout interval requires a finer granularity than the implementation supports, the actual timeout inter‐ val will be rounded up to the next supported value. If sigmask is not a null pointer, then the function shall replace the signal mask of the process by the set of signals pointed to by sigmask before examining the descriptors, and shall restore the signal mask of the process before returning. If the readfds, writefds, and errorfds arguments are all null pointers and the timeout argument is not a null pointer, or blocks for the time specified, or until interrupted by a signal. If the readfds, writefds, and errorfds arguments are all null pointers and the timeout argument is a null pointer, or blocks until interrupted by a signal. File descriptors associated with regular files always select true for ready to read, ready to write, and error conditions. On failure, the objects pointed to by the readfds, writefds, and errorfds arguments are not modified. If the timeout interval expires without the specified condition being true for any of the specified file descriptors, the objects pointed to by the readfds, writefds, and errorfds arguments have all bits set to 0. Ttys and sockets are ready for reading if a would not block for one or more of the following reasons: · input data is available. · an error condition exists, such as a broken pipe, no carrier, or a lost connection. Similarly, ttys and sockets are ready for writing if a would not block for one or more of the following reasons: · output data can be accepted. · an error condition exists, such as a broken pipe, no carrier, or a lost connection. TCP sockets select true on reads only for normal data. They do not select true on reads if out-of-band data ("urgent" data) arrives. TCP sockets select true on exceptions for out-of-band data. AF_CCITT sockets select true on reads for normal and out-of-band data and information, including supervisory frames. Pipes are ready for reading if there is any data in the pipe, or if there are no writers left for the pipe. Pipes are ready for writing if there is room for more data in the pipe AND there are one or more read‐ ers for the pipe, OR there are no readers left for the pipe. returns the same results for a pipe whether a file descriptor associated with the read-only end or the write-only end of the pipe is used, since both file descriptors refer to the same underlying pipe. So a of a read- only file descriptor that is associated with a pipe can return ready to write, even though that particular file descriptor cannot be written to. File descriptor masks of type fd_set can be initialized and tested with and It is unspecified whether each of these is a macro or a function. If a macro definition is suppressed in order to access an actual func‐ tion, or a program defines an external identifier with any of these names, the behaviour is undefined. Clears the bit for the file descriptor fd in the file descriptor set fdset. Returns a non-zero value if the bit for the file descriptor fd is set in the file descriptor set pointed to by fdset, and 0 otherwise. Sets the bit for the file descriptor fd in the file descriptor set fdset. Initializes the file descriptor set fdset to have zero bits for all file descriptors. The behaviour of these macros is undefined if the fd argument is less than 0 or greater than or equal to The use of a timeout does not affect any pending timers set up by or On successful completion, the object pointed to by the timeout argument of may be modified. The is used in the definition of structure. It is set to a value of 2048 to accommodate 2048 file descriptors. Any user code that uses or the structure should redefine to a smaller value (greater than or equal to the number of open files the process will have) in order to save space and execution time. Similarly, any user code that wants to test more than 2048 file descriptors should redefine to the required higher value. The user can also allocate the space for structure dynamically, depend‐ ing upon the number of file descriptors to be tested. The following code segment illustrates the basic concepts. RETURN VALUE and return no value. returns a non-zero value if the bit for the file descriptor fd is set in the file descriptor set pointed to by fdset, and 0 otherwise. On successful completion, and return the total number of bits set in the bit masks. Otherwise, −1 is returned, and is set to indicate the error. ERRORS Under the following conditions, and fail and set to: One or more of the file descriptor sets specified a file descriptor that is not a valid open file descriptor. This could happen either if the file descriptor sets are not ini‐ tialized or nfds argument is greater than The function was interrupted before any of the selected events occurred and before the timeout interval expired. If has been set for the interrupting signal, it is implementation-dependent whether the function restarts or returns with One or more of the pointers was invalid. The reliable detection of this error is implementation dependent. An invalid timeout interval was specified. Valid values for the number of nanoseconds (in should be a non-negative integer less than 1,000,000,000. For the number of microseconds (in struct timeval), a non-nega‐ tive integer less than 1,000,000 should be used. Sec‐ onds should be specified using a non-negative integer. The nfds argument is less than 0, or is greater than or equal to the value of which specifies the absolute maxi‐ mum number of files a process can have open at one time. If the resource limit for a process is less than or equal to 2048, is considered to be 2048. One of the specified file descriptors refers to a STREAM or multiplexer that is linked (directly or indirectly) downstream from a multiplexer. EXAMPLES The following call to checks if any of 4 terminals are ready for read‐ ing. times out after 5 seconds if no terminals are ready for reading. Note that the code for opening the terminals or reading from the termi‐ nals is not shown in this example. Also, note that this example must be modified if the calling process has more than 32 file descriptors open. Following this first example is an example of select with more than 32 file descriptors. The following example is the same as the previous example, except that it works for more than 32 open files. Definitions for and are in WARNINGS The file descriptor masks are always modified on return, even if the call returns as the result of a timeout. DEPENDENCIES supports the following devices and file types: · pipes · fifo special files (named pipes) · all serial devices · All ITEs (internal terminal emulators) and HP-HIL input devices · lan(7) special files · pty(7) special files · sockets AUTHOR was developed by HP and the University of California, Berkeley. SEE ALSO fcntl(2), poll(2), read(2), sigprocmask(2), write(2), thread_safety(5), standards(5), select(2)[top]
http://www.polarhome.com/service/man/?qf=FD_ISSET&tf=2&of=HP-UX&sf=2
CC-MAIN-2020-29
en
refinedweb
KDECore KCharsets Class ReferenceCharset font and encoder/decoder handling. More... #include <kcharsets.h> Detailed DescriptionCharset font and encoder/decoder handling. This is needed, because Qt's font matching algorithm gives the font family a higher priority than the charset. For many applications this is not acceptable, since it can totally obscure the output, in languages which use non iso-8859-1 charsets. Definition at line 43 of file kcharsets.h. Constructor & Destructor Documentation Protected constructor. If you need the kcharsets object, use KGlobal::charsets() instead. Definition at line 360 of file kcharsets.cpp. Destructor. Definition at line 365 of file kcharsets.cpp. Member Function Documentation Lists all available encodings as names. - Returns: - the list of all encodings Definition at line 475 of file kcharsets.cpp. Tries to find a QTextCodec to convert the given encoding from and to Unicode. If no codec could be found the latin1 codec will be returned an ok will be set to false. - Returns: - the QTextCodec. If the desired codec could not be found, it returns a default (Latin-1) codec Definition at line 528 of file kcharsets.cpp. Provided for compatibility. - Parameters: - - Returns: - the QTextCodec. If the desired codec could not be found, it returns a default (Latin-1) codec Definition at line 522 of file kcharsets.cpp. Lists the available encoding names together with a more descriptive language. - Returns: - the list of descriptive encoding names Definition at line 509 of file kcharsets.cpp. Returns the encoding for a string obtained with descriptiveEncodingNames(). - Parameters: - - Returns: - the name of the encoding Definition at line 492 of file kcharsets.cpp. Overloaded member function. Tries to find an entity in the QString str. - Parameters: - - Returns: - a decoded entity if one could be found, QChar::null otherwise Definition at line 406 of file kcharsets.cpp. Converts an entity to a character. The string must contain only the entity without the trailing ';'. - Parameters: - - Returns: - QChar::null if the entity could not be decoded. Definition at line 370 of file kcharsets.cpp. Returns the language the encoding is used for. - Parameters: - - Returns: - the language of the encoding Definition at line 485 of file kcharsets.cpp. Scans the given string for entities (like &) and resolves them using fromEntity. - Parameters: - - Returns: - the clean string - Since: - 3.1 Definition at line 429 of file kcharsets.cpp. Converts a QChar to an entity. The returned string does already contain the leading '&' and the trailing ';'. - Parameters: - - Returns: - the entity Definition at line 422 of file kcharsets.cpp. The documentation for this class was generated from the following files:
https://api.kde.org/3.5-api/kdelibs-apidocs/kdecore/html/classKCharsets.html
CC-MAIN-2020-29
en
refinedweb
I want to get a list of the data contained in a histogram bin. I am using numpy, and Matplotlib. I know how to traverse the data and check the bin edges. However, I want to do this for a 2D histogram and the code to do this is rather ugly. Does numpy have any constructs to make this easier? For the 1D case, I can use searchsorted(). But the logic is not that much better, and I don’t really want to do a binary search on each data point when I don’t have to. Most of the nasty logic is due to the bin boundary regions. All regions have boundaries like this: [left edge, right edge). Except the last bin, which has a region like this: [left edge, right edge]. Here is some sample code for the 1D case: import numpy as np data = [0, 0.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 3] hist, edges = np.histogram(data, bins=3) print 'data =', data print 'histogram =', hist print 'edges =', edges getbin = 2 #0, 1, or 2 print '---' print 'alg 1:' #for i in range(len(data)): for d in data: if d >= edges[getbin]: if (getbin == len(edges)-2) or d < edges[getbin+1]: print 'found:', d #end if #end if #end for print '---' print 'alg 2:' for d in data: val = np.searchsorted(edges, d, side='right')-1 if val == getbin or val == len(edges)-1: print 'found:', d #end if #end for Here is some sample code for the 2D case: import numpy as np xdata = [0, 1.5, 1.5, 2.5, 2.5, 2.5, \ 0.5, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, \ 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 3] ydata = [0, 5,5, 5, 5, 5, \ 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, \ 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 30] xbins = 3 ybins = 3 hist2d, xedges, yedges = np.histogram2d(xdata, ydata, bins=(xbins, ybins)) print 'data2d =', zip(xdata, ydata) print 'hist2d =' print hist2d print 'xedges =', xedges print 'yedges =', yedges getbin2d = 5 #0 through 8 print 'find data in bin #', getbin2d xedge_i = getbin2d % xbins yedge_i = int(getbin2d / xbins) #IMPORTANT: this is xbins for x, y in zip(xdata, ydata): # x and y left edges if x >= xedges[xedge_i] and y >= yedges[yedge_i]: #x right edge if xedge_i == xbins-1 or x < xedges[xedge_i + 1]: #y right edge if yedge_i == ybins-1 or y < yedges[yedge_i + 1]: print 'found:', x, y #end if #end if #end if #end for Is there a cleaner / more efficient way to do this? It seems like numpy would have something for this. digitize, from core NumPy, will give you the index of the bin to which each value in your histogram belongs: import numpy as NP A = NP.random.randint(0, 10, 100) bins = NP.array([0., 20., 40., 60., 80., 100.]) # d is an index array holding the bin id for each point in A d = NP.digitize(A, bins) how about something like: In [1]: data = numpy.array([0, 0.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 3]) In [2]: hist, edges = numpy.histogram(data, bins=3) In [3]: for l, r in zip(edges[:-1], edges[1:]): print(data[(data > l) & (data < r)]) ....: ....: [ 0.5] [ 1.5 1.5 1.5] [ 2.5 2.5 2.5] In [4]: with a bit of code to handle the edge cases.
https://pythonpedia.com/en/knowledge-base/2275924/how-to-get-data-in-a-histogram-bin
CC-MAIN-2020-29
en
refinedweb
Opened 6 years ago Closed 5 years ago Last modified 5 years ago #7407 enhancement closed fixed (fixed) Port twisted.web.client.Agent to Python 3 Description For the patches, please refer to GitHub. They enable successful run of the examples from client docs. I need help with testing this on Python 2 I am not sure if I do not break anything. Attachments (2) Change History (25) comment:1 Changed 6 years ago by comment:2 Changed 6 years ago by comment:3 Changed 6 years ago by Changed 6 years ago by comment:4 Changed 6 years ago by comment:5 Changed 6 years ago by Changed 6 years ago by comment:6 Changed 6 years ago by comment:7 Changed 6 years ago by I work on porting twisted.web.client to Python 3 too. And I have a question. There are stubs for "context factories for https" that are used in tests, but they are now deprecated in favor of IPolicyForHTTPS providers. Should tests be adjusted to reflect this change? comment:8 Changed 6 years ago by Should tests be adjusted to reflect this change? Not if doing so would remove test coverage for supported functionality - even if that functionality is deprecated. comment:9 Changed 5 years ago by comment:10 Changed 5 years ago by UPDATE: Please disregard. I was confused. My original comment is preserved below. But see my comment:12. My guess is that you also want the following in t.w.c.CookieAgent.request: --- twisted/web/client.py +++ twisted/web/client.py @@ -1768,7 +1768,7 @@ class CookieAgent(object): # cookies. if not headers.hasHeader(b'cookie'): self.cookieJar.add_cookie_header(lastRequest) - cookieHeader = lastRequest.get_header('Cookie', None) + cookieHeader = lastRequest.get_header(b'Cookie', None) if cookieHeader is not None: headers = headers.copy() headers.addRawHeader(b'cookie', networkString(cookieHeader)) comment:11 Changed 5 years ago by Also, per @adiroiban's comment in #6197 regarding the use of encode('utf-8') in twisted/web/test/test_webclient.py, I noticed that the same encode call appears in the line modified in [45485]. You may want to make the same change here to avoid a potentially confusing merge conflict later, depending on which branch is merged first. comment:12 Changed 5 years ago by Regarding my (now retracted) question regarding t.w.c.CookieAgent.request, there's something a bit unsettling about having literals out there that are interpreted as different types depending on which interpreter is used. I realize that's what twisted.python.compat is for, but is there any reason not to use from __future__ import unicode_literals in as many modules as possible as part of the Python 3 general strategy? This risks being off-topic, but I think the following is appropriate: --- twisted/python/compat.py +++ twisted/python/compat.py @@ -439,6 +439,8 @@ if _PY3: def networkString(s): + if isinstance(s, bytes): + return s if not isinstance(s, unicode): raise TypeError("Can only convert text to bytes on Python 3") return s.encode('ascii') @@ -454,11 +456,9 @@ else: lazyByteSlice = buffer def networkString(s): - if not isinstance(s, str): + if not isinstance(s, basestring): # falls through for both str and unicode raise TypeError("Can only pass-through bytes on Python 2") - # Ensure we're limited to ASCII subset: - s.decode('ascii') - return s + return s.encode('ascii') # always a str, if successful iterbytes.__doc__ = """ Return an iterable wrapper for a C{bytes} object that provides the behavior of A side effect of such a change is that one could submit bytes literals to t.w.c._FakeUrllib2Request.get_header (per my original comment:10), which (if I understand correctly), one probably should be able to do anyway? comment:13 Changed 5 years ago by What about updating the docs, or should that be handled under a separate ticket? More specifically, the code samples in docs/web/howto/client.rst will only run on Python 2 (they don't call --- docs/web/howto/client.rst +++ docs/web/howto/client.rst @@ -371,6 +371,7 @@ an *HTTPS* URL with no certificate verification. .. code-block:: python + from __future__ import print_function, unicode_literals from twisted.python.log import err from twisted.web.client import Agent from twisted.internet import reactor @@ -381,13 +382,13 @@ an *HTTPS* URL with no certificate verification. return ClientContextFactory.getContext(self) def display(response): - print "Received response" - print response + print("Received response") + print(response) def main(): contextFactory = WebClientContextFactory() agent = Agent(reactor, contextFactory) - d = agent.request("GET", "") + d = agent.request(b"GET", b"") d.addCallbacks(display, err) d.addCallback(lambda ignored: reactor.stop()) reactor.run() The __future__ import is probably unnecessary for this particular example, but I find it useful for narrowing behavioral gaps between 2 and 3 (e.g., generating similar errors in both environments when one makes a mistake). Either way, it looks like many of the sub-listings may be affected as well: % grep -l Agent docs/web/howto/listings/client/* docs/web/howto/listings/client/cookies.py docs/web/howto/listings/client/endpointconstructor.py docs/web/howto/listings/client/filesendbody.py docs/web/howto/listings/client/gzipdecoder.py docs/web/howto/listings/client/request.py docs/web/howto/listings/client/response.py docs/web/howto/listings/client/responseBody.py docs/web/howto/listings/client/sendbody.py comment:14 Changed 5 years ago by comment:15 Changed 5 years ago by This is merged forward. The twistedchecker error is spurious, the static analysis can't pick up that it's set on __doc__ later. Please review (or rather, Glyph, please review, rgd previous discussions). comment:16 Changed 5 years ago by comment:17 Changed 5 years ago by Merged forward again, builders spun. comment:18 Changed 5 years ago by Thanks for the constant slog towards 3, hawkowl. All this work is very helpful. There are a couple of things that should really be addressed before merging. shouldRetryshould get a @paramon its methodparameter now that it's being explicitly checked against bytes. (I mean, it always should have.) _requestWithEndpoint: nativeStringis really intended to be called with an ASCII-only string literal, not arbitrary user bytes. In fairness its docstring doesn't say this, but it should; the idea is that sometimes you need a literal to use for native-str processing as with i.e. identifiers. By using it with uri.hostin various places you're exposing it to input from users, which means that you're exposing it to potential unicode input, which means you are inviting inscrutable exceptions when non-ASCII is passed in. Of course, the right solution to this is (drum roll please...) #5388. In the meanwhile, I can see why nativeStringappeals; nothing else converts to the requisite ASCII-only value for the hostname, but please deal with the returned errors and emit something meaningful, like ValueError("The URI b'\xe2\x80\x94bar/' contains non-ASCII octets in its hostname")or something. _FakeUrllib2Requestshould really update its documentation to say native L{str}on all its @ivars to be more explicit. (Given that _FakeUrllib2Requestis effectively always constructed by a string literal in the tests, nativeStringis appropriate here, in contrast to the above criticism of its use in _requestWithEndpoint; the exception raised will be directly where the problem string was passed.) I think you can address these and land; although there's a lot to think about with the second point, there's no really a whole lot to do - just one test to check an error condition. It looks like the rest of the branch is just sprinkling b prefixes around all the strings in the tests, so otherwise pretty good. Some things I noticed that weren't really related to this branch but I should nevertheless raise, though: - Why are we using TCP4ClientEndpointand not HostnameEndpointin the httpcase? I realize that `TLSEndpoint` isn't quite ready yet, so we still need SSL4ClientEndpoint, but there should probably be a ticket for fixing this. - Looking at the code in this branch I am really starting to want the methods on Headersto be more permissive in terms of what types they take. - I don't like the idiom used for redirections in compat. Conditional definitions like this should take into account the fact that twistedcheckeris going to complain about them; there are ways to do this so it doesn't complain (which actually look nicer to read), and we should do it that way. However, I don't want to hold up this branch on its copying of a bad pattern; I'll do a separate branch that fixes the pattern overall within that module. - thank you for getting rid of the usage of splithostand splittype. Pretty sure those were not supposed to be public APIs. - It seems slightly inconsistent that urlquote, urlunquote, and cookielibare in compat, but urlunparse, urljoin, and urldefragare in twisted.web.clientdirectly. Not really worth addressing though; I'll probably get rid of these from twisted.web.cliententirely and do the conditional imports in twisted.python.url(to ensure it's easier to vendor out). Thanks again! comment:19 follow-up: 20 Changed 5 years ago by comment:20 Changed 5 years ago by T GLYPH THANX FOR THE REVIEW Have you? I don't see fixes on the github link. but I don't understand #2. _requestWithEndpointdoesn't seem to have a call to nativeString. Could you give me some line numbers? Sorry. endpointForURI. 1441. What I mean is just that the encoding error from nativeString isn't sufficiently expressive, and you should catch it to have an error that is intentionally chosen and displays a comprehensible representation of the entire URL. Otherwise, I'll put this back up for review in the meantime, as in fixing #3 I added more docstrings. I don't see these docstrings either. Did the mirroring hook break or is there supposed to be a new branch? comment:21 Changed 5 years ago by Hi Glyph, all those things are actually pushed up now. Builders spun, please review. comment:22 Changed 5 years ago by OK, looks pretty good. Apropos IRC discussion, I guess nativeString is OK for now. Please land. Thanks. Please split this up into multiple tickets. The patch that finally ports Agentshouldn't involve changes to twisted/internet/and twisted/python/. Please refer to (in particular the Tips section) for a guideline on how to contribute porting effort. Thanks again.
https://twistedmatrix.com/trac/ticket/7407
CC-MAIN-2020-29
en
refinedweb
Request for Comments: 6071 Obsoletes: 2411 Category: Informational ISSN: 2070-1721 NIST S. Krishnan Ericsson February 2011 IP Security (IPsec) and Internet Key Exchange (IKE) Document Roadmap. IPsec/IKE Background Information ................................5 2.1. Interrelationship of IPsec/IKE Documents ...................5 2.2. Versions of IPsec ..........................................6 2.2.1. Differences between "Old" IPsec (IPsec-v2) and "New" IPsec (IPsec-v3) ..............................6 2.3. Versions of IKE ............................................7 2.3.1. Differences between IKEv1 and IKEv2 .................8 2.4. IPsec and IKE IANA Registries ..............................9 3. IPsec Documents .................................................9 3.1. Base Documents .............................................9 3.1.1. "Old" IPsec (IPsec-v2) ..............................9 3.1.2. "New" IPsec (IPsec-v3) .............................11 3.2. Additions to IPsec ........................................11 3.3. General Considerations ....................................14 4. IKE Documents ..................................................15 4.1. Base Documents ............................................15 4.1.1. IKEv1 ..............................................15 4.1.2. IKEv2 ..............................................16 4.2. Additions and Extensions ..................................17 4.2.1. Peer Authentication Methods ........................17 4.2.2. Certificate Contents and Management (PKI4IPsec) ....18 4.2.3. Dead Peer Detection ................................19 4.2.4. Remote Access ......................................19 5. Cryptographic Algorithms and Suites ............................21 5.1. Algorithm Requirements ....................................22 5.2. Encryption Algorithms .....................................23 5.3. Integrity-Protection (Authentication) Algorithms ..........27 5.4. Combined Mode Algorithms ..................................30 5.5. Pseudo-Random Functions (PRFs) ............................33 5.6. Cryptographic Suites ......................................34 5.7. Diffie-Hellman Algorithms .................................35 6. IPsec/IKE for Multicast ........................................36 7. Outgrowths of IPsec/IKE ........................................38 7.1. IPsec Policy ..............................................38 7.2. IPsec MIBs ................................................39 7.3. IPComp (Compression) ......................................39 7.4. Better-Than-Nothing Security (BTNS) .......................39 7.5. Kerberized Internet Negotiation of Keys (KINK) ............40 7.6. IPsec Secure Remote Access (IPSRA) ........................41 7.7. IPsec Keying Information Resource Record (IPSECKEY) .......42 8. Other Protocols That Use IPsec/IKE .............................42 8.1. Mobile IP (MIPv4 and MIPv6) ...............................42 8.2. Open Shortest Path First (OSPF) ...........................44 8.3. Host Identity Protocol (HIP) ..............................45 8.4. Stream Control Transmission Protocol (SCTP) ...............46 8.5. Robust Header Compression (ROHC) ..........................46 8.6. Border Gateway Protocol (BGP) .............................47 8.7. IPsec Benchmarking ........................................47 8.8. Network Address Translators (NAT) .........................48 8.9. Session Initiation Protocol (SIP) .........................48 8.10. Explicit Packet Sensitivity Labels .......................49 9. Other Protocols That Adapt IKE for Non-IPsec Functionality .....49 9.1. Extensible Authentication Protocol (EAP) ..................49 9.2. Fibre Channel .............................................49 9.3. Wireless Security .........................................50 10. Acknowledgements ..............................................50 11. Security Considerations .......................................50 12. References ....................................................50 12.1. Informative References ...................................50 Appendix A. Summary of Algorithm Requirement Levels ..............61 1. Introduction. In addition to the base documents for IPsec and IKE, there are numerous RFCs that reference, extend, and in some cases alter the core specifications. This document obsoletes [RFC2411]. It attempts to list and briefly describe those RFCs, providing context and rationale where indicated. The title of each RFC is followed by a letter that indicates its category in the RFC series [RFC2026], as follows: o S: Standards Track (Proposed Standard, Draft Standard, or Standard) - E: Experimental - B: Best Current Practice - I: Informational For each RFC, the publication date is also given. This document also categorizes the requirements level of each cryptographic algorithm for use with IKEv1, IKEv2, IPsec-v2, and IPsec-v3. These requirements are summarized in Appendix A. These levels are current as of February 2011; subsequent RFCs may result in altered requirement levels. This document does not define requirement levels; it simply restates those found in the IKE and IPsec RFCs. If there is a conflict between this document and any other RFC, then the other RFC takes precedence. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. 2. IPsec/IKE Background Information 2.1. Interrelationship of IPsec/IKE Documents The main documents describing the set of IPsec protocols are divided into seven groups. This is illustrated in Figure 1. There is a main Architecture document that broadly covers the general concepts, security requirements, definitions, and mechanisms defining IPsec technology. There are an Encapsulating Security Payload (ESP) Protocol document and an Authentication Header (AH) Protocol document that cover the packet format and general issues regarding the respective protocols. The "Encryption Algorithm" document set, shown on the left, is the set of documents describing how various encryption algorithms are used for ESP. The "Combined Algorithm" document set, shown in the middle, is the set of documents describing how various combined mode algorithms are used to provide both encryption and integrity protection for ESP. The "Integ-Protection Algorithm" document set, shown on the right, is the set of documents describing how various integrity-protection algorithms are used for both ESP and AH. The "IKE" documents, shown at the bottom, are the documents describing the IETF Standards-Track key management schemes. +--------------+ | Architecture | +--------------+ v v +<-<-<-<-<-<-<-<-+ +->->->->->->->->+ v v +----------+ +----------+ | ESP | | AH | | Protocol | | Protocol | +----------+ +----------+ v v v v v +->->->->->->->->+->->->->->->->->+ v v v v v v v v v v v v v v v +------------+ +-----------+ +----------------+ v v | +------------+ | +------------+ | +----------------+ v v | | Encryption | | | Combined | | |Integ-Protection| v v +-| Algorithm | +-| Algorithm | +-| Algorithm | v v +------------+ +------------+ +----------------+ v v v v v v v v v v v +>->->->-+->->->->->->->->->--<-<-<-<-<-<-<-<-<-+-<-<-<-<-+ ^ ^ +------------+ | IKE | | Protocol | +------------+ Figure 1. IPsec/IKE Document Interrelationships 2.2. Versions of IPsec Two versions of IPsec can currently be found in implementations. The "new" IPsec (referred to as IPsec-v3 in this document; see Section 3.1.1 for the RFC descriptions) obsoleted the "old" IPsec (referred to as IPsec-v2 in this document; see Section 3.1.2 for the RFC descriptions); however, IPsec-v2 is still commonly found in operational use. In this document, when the unqualified term IPsec is used, it pertains to both versions of IPsec. An earlier version of IPsec (defined in RFCs 1825-1829), obsoleted by IPsec-v2, is not covered in this document. 2.2.1. Differences between "Old" IPsec (IPsec-v2) and "New" IPsec (IPsec-v3) IPsec-v3 incorporates "lessons learned" from implementation and operational experience with IPsec-v2 and its predecessor, IPsec-v1. Knowledge was gained about the barriers to IPsec deployment, the scenarios in which IPsec is most effective, and the requirements that needed to be added to IPsec to facilitate its use with other protocols. In addition, the documentation for IPsec-v3 clarifies and expands details that were underspecified or ambiguous in IPsec-v2. Changes to the architecture document [RFC4301] include: - More detailed descriptions of IPsec processing, both unicast and multicast, and the interactions among the various IPsec databases - In IPsec-v2, an SA (Security Association) is uniquely identified by a combination of the SPI (Security Parameters Index), protocol (ESP or AH) and the destination address. In IPsec-v3, a unicast SA is uniquely identified by the SPI and, optionally, by the protocol; a multicast SA is identified by a combination of the SPI and the destination address and, optionally, the source address. - More flexible SPD (Security Policy Database) selectors, including ranges of values and ICMP message types as selectors o Decorrelated (order-independent) SAD (Security Association Database) replaced the former ordered SAD - Extended sequence numbers (ESNs) were added - Mandatory algorithms defined in standalone document Changes to ESP [RFC4303] include: - Combined mode algorithms were added, necessitating changes to packet format and processing o NULL authentication, mandatory (MUST) in ESP-v2, is optional (MAY) in ESP-v3 2.3. Versions of IKE Two versions of IKE can currently be found in implementations. The "new" IKE (generally referred to as IKEv2) obsoleted the "old" IKE (generally referred to as IKEv1); however, IKEv1 is still commonly found in operational use. In this document, when the unqualified term IKE is used, it pertains to both versions of IKE. 2.3.1. Differences between IKEv1 and IKEv2 As with IPsec-v3, IKEv2 incorporates "lessons learned" from implementation and operational experience with IKEv1. Knowledge was gained about the barriers to IKE deployment, the scenarios in which IKE is most effective, and the requirements that needed to be added to IKE to facilitate its use with other protocols as well as in general-purpose use. The documentation for IKEv2 replaces multiple, at times contradictory, documents with a single document; it also clarifies and expands details that were underspecified or ambiguous in IKEv1.. Changes to IKE include: - Replaced multiple alternate exchange types with a single, shorter exchange - Streamlined negotiation format to avoid combinatorial bloat for multiple proposals - Protect responder from committing significant resources to the exchange until the initiator's existence and identity are confirmed - Reliable exchanges: every request expects a response - Protection of IKE messages based on ESP, rather than a method unique to IKE - Add traffic selectors: distinct from peer IDs and more flexible - Support of EAP-based authentication methods and asymmetric authentication (i.e., initiator and responder can use different authentication methods) 2.4. IPsec and IKE IANA Registries Numerous IANA registries contain values that are used in IPsec, IKE, and related protocols. They include: - IKE Attributes (): values used during IKEv1 Phase 1 exchanges, defined in [RFC2409]. - "Magic Numbers" for Internet Security Association and Key Management Protocol (ISAKMP) (): values used during IKEv1 Phase 2 exchanges, defined in [RFC2407], [RFC2408], and numerous other cryptographic algorithm RFCs. - IKEv2 Parameters (): values used in IKEv2 exchanges, defined in [RFC5996] and numerous other cryptographic algorithm RFCs. 3. IPsec Documents 3.1. Base Documents IPsec protections are provided by two special headers: the Encapsulating Security Payload (ESP) Header and the Authentication Header (AH). In IPv4, these headers take the form of protocol headers; in IPv6, they are classified as extension headers. There are three base IPsec documents: one that describes the IP security architecture, and one for each of the IPsec headers. 3.1.1. "Old" IPsec (IPsec-v2) 3.1.1.1. RFC 2401, Security Architecture for the Internet Protocol (S, November 1998) [RFC2401] specifies the mechanisms, procedures, and components required to provide security services at the IP layer. It also describes their interrelationship and the general processing required to inject IPsec protections into the network architecture. The components include: - SA (Security Association): a one-way (inbound or outbound) agreement between two communicating peers that specifies the IPsec protections to be provided to their communications. This includes the specific security protections, cryptographic algorithms, and secret keys to be applied, as well as the specific types of traffic to be protected. - SPI (Security Parameters Index): a value that, together with the destination address and security protocol (AH or ESP), uniquely identifies a single SA. - SAD (Security Association Database): each peer's SA repository. The RFC describes how this database functions (SA lookup, etc.) and the types of information it must contain to facilitate SA processing; it does not dictate the format or layout of the database. SAs can be established in either transport mode or tunnel mode (see below). - SPD (Security Policy Database): an ordered database that expresses the security protections to be afforded to different types and classes of traffic. The three general classes of traffic are traffic to be discarded, traffic that is allowed without IPsec protection, and traffic that requires IPsec protection. RFC 2401 describes general inbound and outbound IPsec processing; it also includes details on several special cases: packet fragments, ICMP messages, and multicast traffic. 3.1.1.2. RFC 2402, IP Authentication Header (S, November 1998) [RFC2402] defines the Authentication Header (AH), which provides integrity protection; it also provides data-origin authentication, access control, and, optionally, replay protection. A transport mode AH SA, used to protect peer-to-peer communications, protects upper- layer data, as well as those portions of the IP header that do not vary unpredictably during packet delivery. A tunnel mode AH SA can be used to protect gateway-to-gateway or host-to-gateway traffic; it can optionally be used for host-to-host traffic. This class of AH SA protects the inner (original) header and upper-layer data, as well as those portions of the outer (tunnel) header that do not vary unpredictably during packet delivery. Because portions of the IP header are not included in the AH calculations, AH processing is more complex than ESP processing. AH also does not work in the presence of Network Address Translation (NAT). Unlike IPsec-v3, IPsec-v2 classifies AH as mandatory to implement. 3.1.1.3. RFC 2406, IP Encapsulating Security Payload (ESP) (S, November 1998) [RFC2406] defines the IP Encapsulating Security Payload (ESP), which provides confidentiality (encryption) and/or integrity protection; it also provides data-origin authentication, access control, and, optionally, replay and/or traffic analysis protection. A transport mode ESP SA protects the upper-layer data, but not the IP header. A tunnel mode ESP SA protects the upper-layer data and the inner header, but not the outer header. 3.1.2. "New" IPsec (IPsec-v3) 3.1.2.1. RFC 4301, Security Architecture for the Internet Protocol (S, December 2005) [RFC4301] obsoletes [RFC2401], and it includes a more complete and detailed processing model. The most notable changes are detailed above in Section 2.2.1. IPsec-v3 processing incorporates an additional database: - PAD (Peer Authorization Database): contains information necessary to conduct peer authentication, providing a link between IPsec and the key management protocol (e.g., IKE) 3.1.2.2. RFC 4302, IP Authentication Header (S, December 2005) 3.1.2.3. RFC 4303, IP Encapsulating Security Payload (ESP) (S, December 2005) 3.2. Additions to IPsec Once the IKEv1 and IPsec-v2 RFCs were finalized, several additions were defined in separate documents: negotiation of NAT traversal, extended sequence numbers, UDP encapsulation of ESP packets, opportunistic encryption, and IPsec-related ICMP messages. Additional uses of IPsec transport mode were also described: protection of manually configured IPv6-in-IPv4 tunnels and protection of IP-in-IP tunnels. These documents describe atypical uses of IPsec transport mode, but do not define any new IPsec features. Once the original IPsec Working Group concluded, additional IPsec- related issues were handled by the IPsecME (IPsec Maintenance and Extensions) Working Group. One such problem is the capability of middleboxes to distinguish unencrypted ESP packets (ESP-NULL) from encrypted ones in a fast and accurate manner. Two solutions are described: a new protocol that requires changes to IKEv2 and IPsec-v3 and a heuristic method that imposes no new requirements. Another issue that was addressed is the problem of using IKE and IPsec in a high-availability environment. 3.2.1. RFC 3947, Negotiation of NAT-Traversal in the IKE (S, January 2005) [RFC3947] defines an optional extension to IKEv1. It enables IKEv1 to detect whether there are any NATs between the negotiating peers and whether both peers support NAT traversal. It also describes how IKEv1 can be used to negotiate the use of UDP encapsulation of ESP packets for the IPsec SA. For IKEv2, this capability is described in [RFC5996]. 3.2.2. RFC 3948, UDP Encapsulation of IPsec ESP Packets (S, January 2005) [RFC3948] is an optional extension for IPsec-v2 and IPsec-v3. It defines how to encapsulate ESP packets in UDP packets to enable the traversal of NATs that discard packets with protocols other than UDP or TCP. This makes it possible for ESP packets to pass through the NAT device without requiring any change to the NAT device itself. The use of this solution is negotiated by IKE, as described in [RFC3947] for IKEv1 and [RFC5996] for IKEv2. 3.2.3. RFC 4304, Extended Sequence Number (ESN) Addendum to IPsec Domain of Interpretation (DOI) for Internet Security Association and Key Management Protocol (ISAKMP) (S, December 2005) The use of ESNs allows IPsec to use 64-bit sequence numbers for replay protection, but to send only 32 bits of the sequence number in the packet, enabling shorter packets and avoiding a redesign of the packet format. The larger sequence numbers allow an existing IPsec SA to be used for larger volumes of data. [RFC4304] describes an optional extension to IKEv1 that enables IKEv1 to negotiate the use of ESNs for IPsec SAs. For IKEv2, this capability is described in [RFC5996]. 3.2.4. RFC 4322, Opportunistic Encryption using the Internet Key Exchange (IKE) (I, December 2005) Opportunistic encryption allows a pair of end systems to use encryption without any specific pre-arrangements. [RFC4322] specifies a mechanism that uses DNS to distribute the public keys of each system involved and uses DNS Security (DNSSEC) to secure the mechanism against active attackers. It specifies the changes that are needed in existing IPsec and IKE implementations. The majority of the changes are needed in the IKE implementation and these changes relate to the handling of key acquisition requests, the lookup of public keys and TXT records, and the interactions with firewalls and other security facilities that may be co-resident on the same gateway. 3.2.5. RFC 4891, Using IPsec to Secure IPv6-in-IPv4 Tunnels (I, May 2007) [RFC4891] describes how to use IKE and transport-mode IPsec to provide security protection to manually configured IPv6-in-IPv4 tunnels. This document uses standard IKE and IPsec, without any new extensions. It does not apply to tunnels that are initiated in an automated manner (e.g., 6to4 tunnels [RFC3056]). 3.2.6. RFC 3884, Use of IPsec Transport Mode for Dynamic Routing (I, September 2004) [RFC3884] describes the use of transport-mode IPsec to secure IP-in- IP tunnels, which constitute the links of a multi-hop, distributed virtual network (VN). This allows the traffic to be dynamically routed via the VN's trusted routers, rather than routing all traffic through a statically routed IPsec tunnel. This RFC has not been widely adopted. 3.2.7. RFC 5840, Wrapped Encapsulating Security Payload (ESP) for Traffic Visibility (S, April 2010) ESP, as defined in [RFC4303], does not allow a network device to easily determine whether protected traffic that is passing through the device is encrypted or only integrity protected (referred to as ESP-NULL packets). [RFC5840] extends ESPv3 to provide explicit notification of integrity-protected packets, and extends IKEv2 to negotiate this capability between the IPsec peers. 3.2.8. RFC 5879, Heuristics for Detecting ESP-NULL packets (I, May 2010) [RFC5879] offers an alternative approach to differentiating between ESP-encrypted and ESP-NULL packets through packet inspection. This method does not require any change to IKE or ESP; it can be used with ESP-v2 or ESP-v3. 3.3. General Considerations 3.3.1. RFC 3715, IPsec-Network Address Translation (NAT) Compatibility Requirements (I, March 2004) [RFC3715] "describes known incompatibilities between NAT and IPsec, and describes the requirements for addressing them". This is a critical issue, since IPsec is frequently used to provide VPN access to the corporate network for telecommuters, and NATs are widely deployed in home gateways, hotels, and other access networks typically used for remote access. 3.3.2. RFC 5406, Guidelines for Specifying the Use of IPsec Version 2 (B, February 2009) [RFC5406] offers guidance to protocol designers on how to ascertain whether IPsec is the appropriate security mechanism to provide an interoperable security solution for the protocol. If this is not the case, it advises against attempting to define a new security protocol; rather, it suggests using another standards-based security protocol. The details in this document apply only to IPsec-v2. 3.3.3. RFC 2521, ICMP Security Failures Messages (E, March 1999) [RFC2521] specifies an ICMP message for indicating failures related to the use of IPsec protocols (AH and ESP). The specified ICMP message defines several codes for handling common failure modes for IPsec. The failures that are signaled by this message include invalid or expired SPIs, failure of authenticity or integrity checks on datagrams, decryption and decompression errors, etc. These messages can be used to trigger automated session-key management or to signal to an operator the need to manually reconfigure the SAs. This RFC has not been widely adopted. Furthermore, [RFC4301] discusses the pros and cons of relying on unprotected ICMP messages. 3.3.4. RFC 6027, IPsec Cluster Problem Statement (I, October 2010) [RFC6027] describes the problems of using IKE and IPsec in a high availability environment, in which one or both of the peers are clusters of gateways. It details the numerous types of stateful information shared by IKE and IPsec peers that would have to be available to other members of the cluster in order to provide high- availability, load sharing, and/or failover capabilities. 4. IKE Documents 4.1. Base Documents 4.1.1. IKEv1 IKE is the preferred key management protocol for IPsec. It is used for peer authentication; to negotiate, modify, and delete SAs; and to negotiate authenticated keying material for use within those SAs. The standard peer authentication methods used by IKEv1 (pre-shared secret keys and digital certificates) had several shortcomings related to use of IKEv1 to enable remote user authentication to a corporate VPN: it could not leverage the use of legacy authentication systems (e.g. RADIUS databases) to authenticate a remote user to a security gateway; and it could not be used to configure remote users with network addresses or other information needed in order to access the internal network. Automatic key distribution is required for IPsec-v2, but alternatives to IKE may be used to satisfy that requirement. Several Internet Drafts were written to address these problems: two such documents include "Extended Authentication within IKE (XAUTH)" [IKE-XAUTH] (and its predecessor, "Extended Authentication within ISAKMP/Oakley (XAUTH)" [ISAKMP-XAUTH]) and "The ISAKMP Configuration Method" [IKE-MODE-CFG] (and its predecessor [ISAKMP-MODE-CFG]). These Internet Drafts did not progress to RFC status due to security flaws and other problems related to these solutions. However, many current IKEv1 implementations incorporate aspects of these solutions to facilitate remote user access to corporate VPNs. These solutions were not standardized, and different implementations implemented different versions. Thus, there is no assurance that the implementations adhere fully to the suggested solutions or that one implementation can interoperate with others that claim to incorporate the same features. Furthermore, these solutions have known security issues. All of those problems and security issues have been solved in IKEv2; thus, use of these non-standardized IKEv1 solutions is not recommended. 4.1.1.1. RFC 2409, The Internet Key Exchange (IKE) (S, November 1998) This document defines a key exchange protocol that can be used to negotiate authenticated keying material for SAs. This document implements a subset of the Oakley protocol in conjunction with ISAKMP to obtain authenticated keying material for use with ISAKMP, and for other security associations such as AH and ESP for the IETF IPsec DOI. While, historically, IKEv1 was created by combining two security protocols, ISAKMP and Oakley, in practice, the combination (along with the IPsec DOI) has commonly been viewed as one protocol, IKEv1. The protocol's origins can be seen in the organization of the documents that define it. 4.1.1.2. RFC 2408, Internet Security Association and Key Management Protocol (ISAKMP) (S, November 1998) This document defines procedures and packet formats to establish, negotiate, modify, and delete Security Associations (SAs). It is intended to support the negotiation of SAs for security protocols at all layers of the network stack. ISAKMP can work with many different key exchange protocols, each with different security properties. 4.1.1.3. RFC 2407, The Internet IP Security Domain of Interpretation for ISAKMP (S, November 1998) Within ISAKMP, a Domain of Interpretation is used to group related protocols using ISAKMP to negotiate security associations. Security protocols sharing a DOI choose security protocol and cryptographic transforms from a common namespace and share key exchange protocol identifiers. This document defines the Internet IP Security DOI (IPSEC DOI), which instantiates ISAKMP for use with IP when IP uses ISAKMP to negotiate security associations. 4.1.1.4. RFC 2412, The OAKLEY Key Determination Protocol (I, November 1998) [RFC2412] describes a key establishment protocol that two authenticated parties can use to agree on secure and secret keying material. The Oakley protocol describes a series of key exchanges -- called "modes" -- and details the services provided by each (e.g., perfect forward secrecy for keys, identity protection, and authentication). This document provides additional theory and background to explain some of the design decisions and security features of IKE and ISAKMP; it does not include details necessary for the implementation of IKEv1. 4.1.2. IKEv2 4.1.2.1. RFC 4306, Internet Key Exchange (IKEv2) Protocol (S, December 2005) This document contains the original description of version 2 of the Internet Key Exchange (IKE) protocol. It covers what was previously covered by separate documents: ISAKMP, IKE, and DOI. It also addresses NAT traversal, legacy authentication, and remote address acquisition. IKEv2 is not interoperable with IKEv1. Automatic key distribution is required for IPsec-v3, but alternatives to IKE may be used to satisfy that requirement. This document has been superseded by [RFC5996]. 4.1.2.2. RFC 4718, IKEv2 Clarifications and Implementation Guidelines (I, October 2006) [RFC4718] clarifies many areas of the original IKEv2 specification [RFC4306] that were seen as potentially difficult to understand for developers who were not intimately familiar with the specification and its history. It does not introduce any changes to the protocol, but rather provides descriptions that are less prone to ambiguous interpretations. The goal of this document was to encourage the development of interoperable implementations. The clarifications in this document have been included in the new version of the IKEv2 specification [RFC5996]. 4.1.2.3. RFC 5996, Internet Key Exchange Protocol Version 2 (IKEv2) (S, September 2010) [RFC5996] combines the original IKEv2 RFC [RFC4306] with the Clarifications RFC [RFC4718], and resolves many implementation issues discovered by the community since the publication of these two documents. This document was developed by the IPsecME (IPsec Maintenance and Extensions) Working Group, after the conclusion of the original IPsec Working Group. Automatic key distribution is required for IPsec-v3, but alternatives to IKE may be used to satisfy that requirement. 4.2. Additions and Extensions 4.2.1. Peer Authentication Methods 4.2.1.1. RFC 4478, Repeated Authentication in Internet Key Exchange (IKEv2) Protocol (E, April 2006) [RFC4478] addresses a problem unique to remote access scenarios. How can the gateway (the IKE responder) force the remote user (the IKE initiator) to periodically reauthenticate, limiting the damage in the case where an unauthorized user gains physical access to the remote host? This document defines a new status notification, that a responder can send to an initiator, which notifies the initiator that the IPsec SA will be revoked unless the initiator reauthenticates within a specified period of time. This optional extension applies only to IKEv2, not to IKEv1. 4.2.1.2. RFC 4739, Multiple Authentication Exchanges in the Internet Key Exchange (IKEv2) Protocol (E, November 2006) IKEv2 supports several mechanisms for authenticating the parties but each endpoint uses only one of these mechanisms to authenticate itself. [RFC4739] specifies an extension to IKEv2 that allows the use of multiple authentication exchanges, using either different mechanisms or the same mechanism. This extension allows, for instance, performing certificate-based authentication of the client host followed by an EAP authentication of the user. This also allows for authentication by multiple administrative domains, if needed. This optional extension applies only to IKEv2, not to IKEv1. 4.2.1.3. RFC 4754, IKE and IKEv2 Authentication Using the Elliptic Curve Digital Signature Algorithm (ECDSA) (S, January 2007) [RFC4754] describes how the Elliptic Curve Digital Signature Algorithm (ECDSA) may be used as the authentication method within the IKEv1 and IKEv2 protocols. ECDSA provides many benefits including computational efficiency, small signature sizes, and minimal bandwidth compared to other available digital signature methods like RSA and DSA. This optional extension applies to both IKEv1 and IKEv2. 4.2.1.4. RFC 5998, An Extension for EAP-Only Authentication in IKEv2 (S, September 2010) IKEv2 allows an initiator to use EAP for peer authentication, but requires the responder to authenticate through the use of a digital signature. [RFC5998] extends IKEv2 so that EAP methods that provide mutual authentication and key agreement can also be used to provide peer authentication for the responder. This optional extension applies only to IKEv2, not to IKEv1. 4.2.2. Certificate Contents and Management (PKI4IPsec) The format, contents, and interpretation of Public Key Certificates (PKCs) proved to be a source of interoperability problems within IKE and IPsec. PKI4IPsec was an attempt to set in place some common procedures and interpretations to mitigate those problems. 4.2.2.1. RFC 4809, Requirements for an IPsec Certificate Management Profile (I, February 2007) [RFC4809] enumerates requirements for Public Key Certificate (PKC) lifecycle transactions between different VPN System and PKI System products in order to better enable large scale, PKI-enabled IPsec deployments with a common set of transactions. This document discusses requirements for both the IPsec and the PKI products. These optional requirements apply to both IKEv1 and IKEv2. 4.2.2.2. RFC 4945, The Internet IP Security PKI Profile of IKEv1/ISAKMP, IKEv2, and PKIX (S, August 2007) [RFC4945] defines a profile of the IKE and Public Key Infrastructure using X.509 (PKIX) frameworks in order to provide an agreed-upon standard for using PKI technology in the context of IPsec. It also documents the contents of the relevant IKE payloads and further specifies their semantics. In addition, it summarizes the current state of implementations and deployment and provides advice to avoid common interoperability issues. This optional extension applies to both IKEv1 and IKEv2. 4.2.2.3. RFC 4806, Online Certificate Status Protocol (OCSP) Extensions to IKEv2 (S, February 2007) When certificates are used with IKEv2, the communicating peers need a mechanism to determine the revocation status of the peer's certificate. OCSP is one such mechanism. [RFC4806] defines the "OCSP Content" extension to IKEv2. This document is applicable when OCSP is desired and security policy (e.g., firewall policy) prevents one of the IKEv2 peers from accessing the relevant OCSP responder directly. This optional extension applies only to IKEv2, not to IKEv1. 4.2.3. Dead Peer Detection 4.2.3.1. RFC 3706, A Traffic-Based Method of Detecting Dead Internet Key Exchange (IKE) Peers (I, February 2004) When two peers communicate using IKE and IPsec, it is possible for the connectivity between the two peers to drop unexpectedly. But the SAs can still remain until their lifetimes expire, resulting in the packets getting tunneled into a "black hole". [RFC3706] describes an approach to detect peer liveliness without needing to send messages at regular intervals. This RFC defines an optional extension to IKEv1; dead peer detection (DPD) is an integral part of IKEv2, which refers to this feature as a "liveness check" or "liveness test". 4.2.4. Remote Access The IKEv2 Mobility and Multihoming (MOBIKE) protocol enables two additional capabilities for IPsec VPN users: 1) moving from one address to another without re-establishing existing SAs and 2) using multiple interfaces simultaneously. These solutions are limited to IPsec VPNs; they are not intended to provide more general mobility or multihoming capabilities. The IPsecME Working Group identified some missing components needed for more extensive IKEv2 and IPsec-v3 support for remote access clients. These include efficient client resumption of a previously established session with a VPN gateway, efficient client redirection to an alternate VPN gateway, and support for IPv6 client configuration using IPsec configuration payloads. 4.2.4.1. RFC 4555, IKEv2 Mobility and Multihoming Protocol (MOBIKE) (S, June 2006) IKEv2 assumes that an IKE SA is created implicitly between the IP address pair that is used during the protocol execution when establishing the IKEv2 SA. IPsec-related documents had no provision to change this pair after an IKE SA was created. [RFC4555] defines extensions to IKEv2 that enable an efficient management of IKE and IPsec Security Associations when a host possesses multiple IP addresses and/or where IP addresses of an IPsec host change over time. 4.2.4.2. RFC 4621, Design of the IKEv2 Mobility and Multihoming (MOBIKE) Protocol (I, August 2006) [RFC4621] discusses the involved network entities and the relationship between IKEv2 signaling and information provided by other protocols. It also records design decisions for the MOBIKE protocol, background information, and records discussions within the working group. 4.2.4.3. RFC 5266, Secure Connectivity and Mobility Using Mobile IPv4 and IKEv2 Mobility and Multihoming (MOBIKE) (B, June 2008) [RFC5266] describes a solution using Mobile IPv4 (MIPv4) and mobility extensions to IKEv2 (MOBIKE) to provide secure connectivity and mobility to enterprise users when they roam into untrusted networks. 4.2.4.4. RFC 5723, Internet Key Exchange Protocol Version 2 (IKEv2) Session Resumption (S, January 2010) [RFC5723] enables a remote client that has been disconnected from a gateway to re-establish SAs with the gateway in an expedited manner, without repeating the complete IKEv2 negotiation. This capability requires changes to IKEv2. This optional extension applies only to IKEv2, not to IKEv1. 4.2.4.5. RFC 5685, Re-direct Mechanism for the Internet Key Exchange Protocol Version 2 (IKEv2) (S, November 2009) [RFC5685] enables a gateway to securely redirect VPN clients to another VPN gateway, either during or after the IKEv2 negotiation. Possible reasons include, but are not limited to, an overloaded gateway or a gateway that needs to shut down. This requires changes to IKEv2. This optional extension applies only to IKEv2, not to IKEv1. 4.2.4.6. RFC 5739, IPv6 Configuration in Internet Key Exchange Protocol Version 2 (IKEv2) (E, February 2010) In IKEv2, a VPN gateway can assign an internal network address to a remote VPN client. This is accomplished through the use of configuration payloads. For an IPv6 client, the assignment of a single address is not sufficient to enable full-fledged IPv6 communications. [RFC5739] proposes several solutions that might remove this limitation. This optional extension applies only to IKEv2, not to IKEv1. 5. Cryptographic Algorithms and Suites Two basic requirements must be met for an algorithm to be used within IKE and/or IPsec: assignment of one or more IANA values and an RFC that describes how to use the algorithm within the relevant protocol, packet formats, special considerations, etc. For each RFC that describes a cryptographic algorithm, this roadmap will classify its requirement level for each protocol, as either MUST, SHOULD, or MAY [RFC2119]; SHOULD+, SHOULD-, or MUST- [RFC4835]; optional; undefined; or N/A (not applicable). A designation of "optional" means that the algorithm meets the two basic requirements, but its use is not specifically recommended for that protocol. "Undefined" means that one of the basic requirements is not met: either there is no relevant IANA number for the algorithm or there is no RFC specifying how it should be used within that specific protocol. "N/A" means that use of the algorithm is inappropriate in the context (e.g., NULL encryption for IKE, which always requires encryption; or combined mode algorithms, a new feature in IPsec-v3, for use with IPsec-v2). This document categorizes the requirement level of each algorithm for IKEv1, IKEv2, IPsec-v2, and IPsec-v3. If an algorithm is recommended for use within IKEv1 or IKEv2, it is used either to protect the IKE SA's traffic (encryption and integrity-protection algorithms) or to generate keying material (Diffie-Hellman or DH groups, Pseudorandom Functions or PRFs). If an algorithm is recommended for use within IPsec, it is used to protect the IPsec/child SA's traffic, and IKE is capable of negotiating its use for that purpose. These requirements are summarized in Table 1 (Appendix A). These levels are current as of February 2011; subsequent RFCs may result in altered requirement levels. For algorithms, this could mean the introduction of new algorithms or upgrading or downgrading the requirement levels of current algorithms. The IANA registries for IKEv1 and IKEv2 include IANA values for various cryptographic algorithms. IKE uses these values to negotiate IPsec SAs that will provide protection using those algorithms. If a specific algorithm lacks a value for IKEv1 and/or IKEv2, that algorithm's use is classified as "undefined" (no IANA #) within IPsec-v2 and/or IPsec-v3. 5.1. Algorithm Requirements Specifying a core set of mandatory algorithms for each protocol facilitates interoperability. Defining those algorithms in an RFC separate from the base protocol RFC enhances algorithm agility. IPsec-v3 and IKEv2 each have an RFC that specifies their mandatory- to-implement (MUST), recommended (SHOULD), optional (MAY), and deprecated (SHOULD NOT) algorithms. For IPsec-v2, this is included in the base protocol RFC. That was originally the case for IKEv1, but IKEv1's algorithm requirements were updated in [RFC4109]. 5.1.1. RFC 4835, Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (S, April 2007) [RFC4835] specifies the encryption and integrity-protection algorithms for IPsec (both versions). Algorithms for IPsec-v2 were originally defined in [RFC2402] and [RFC2406]. [RFC4305] obsoleted those requirements, and was in turn obsoleted by [RFC4835]. Therefore, [RFC4835] applies to IPsec-v2 as well as IPsec-v3. Combined mode algorithms are mentioned, but not assigned a requirement level. 5.1.2. RFC 4307, Cryptographic Algorithms for Use in the Internet Key Exchange Version 2 (IKEv2) (S, December 2005) [RFC4307] specifies the encryption and integrity-protection algorithms used by IKEv2 to protect its own traffic, the Diffie- Hellman (DH) groups used within IKEv2, and the pseudorandom functions used by IKEv2 to generate keys, nonces, and other random values. [RFC4307] contains conflicting requirements for IKEv2 encryption and integrity-protection algorithms. Where there are contradictory requirements, this document takes its requirement levels from Section 3.1.1, "Encrypted Payload Algorithms", rather than from Section 3.1.3, "IKEv2 Transform Type 1 Algorithms", or Section 3.1.4, "IKEv2 Transform Type 2 Algorithms". 5.1.3. RFC 4109, Algorithms for Internet Key Exchange version 1 (IKEv1) (S, May 2005) [RFC4109] updates IKEv1's algorithm specifications, which were originally defined in [RFC2409]. It specifies the encryption and integrity-protection algorithms used by IKEv1 to protect its own traffic; the Diffie-Hellman (DH) groups used within IKEv1; the hash and pseudorandom functions used by IKEv1 to generate keys, nonces and other random values; and the authentication methods and algorithms used by IKEv1 for peer authentication. 5.2. Encryption Algorithms The encryption-algorithm RFCs describe how to use these algorithms to encrypt IKE and/or ESP traffic, providing confidentiality protection to the traffic. They. When any encryption algorithm is used to provide confidentiality, the use of integrity protection is strongly recommended. If the encryption algorithm is a stream cipher, omitting integrity protection seriously compromises the security properties of the algorithm. DES, as described in [RFC2405], was originally a required algorithm for IKEv1 and ESP-v2. Since the use of DES is now deprecated, this roadmap does not include [RFC2405]. 5.2.1. RFC 2410, The NULL Encryption Algorithm and Its Use With IPsec (S, November 1998) [RFC2410] is a tongue-in-cheek description of the no-op encryption algorithm (i.e., using ESP without encryption). In order for IKE to negotiate the selection of the NULL encryption algorithm for use in an ESP SA, an identifying IANA number is needed. This number (the value 11 for ESP_NULL) is found on the IANA registries for both IKEv1 and IKEv2, but it is not mentioned in [RFC2410]. Requirement levels for ESP-NULL: IKEv1 - N/A IKEv2 - N/A ESP-v2 - MUST [RFC4835] ESP-v3 - MUST [RFC4835] NOTE: RFC 4307 erroneously classifies ESP-NULL as MAY for IKEv2; this has been corrected in an errata submission for RFC 4307. 5.2.2. RFC 2451, The ESP CBC-Mode Cipher Algorithms (S, November 1998) [RFC2451] describes how to use encryption algorithms in cipher-block- chaining (CBC) mode to encrypt IKE and ESP traffic. It specifically mentions Blowfish, CAST-128, Triple DES (3DES), International Data Encryption Algorithm (IDEA), and RC5, but it is applicable to any block-cipher algorithm used in CBC mode. The algorithms mentioned in the RFC all have a 64-bit blocksize and a 64-bit random Initialization Vector (IV) that is sent in the packet along with the encrypted data. Requirement levels for 3DES-CBC: IKEv1 - MUST [RFC4109] IKEv2 - MUST- [RFC4307] ESP-v2 - MUST [RFC4835] ESP-v3 - MUST- [RFC4835] Requirement levels for other CBC algorithms (Blowfish, CAST, IDEA, RC5): IKEv1 - optional IKEv2 - optional ESP-v2 - optional ESP-v3 - optional 5.2.3. RFC 3602, The AES-CBC Cipher Algorithm and Its Use with IPsec (S, September. 2003) [RFC3602] describes how to use AES in cipher block chaining (CBC) mode to encrypt IKE and ESP traffic. AES is the successor to DES. AES-CBC is a block-mode cipher with a 128-bit blocksize, a random IV that is sent in the packet along with the encrypted data, and keysizes of 128, 192 and 256 bits. If AES-CBC is implemented, 128-bit keys are MUST; the other sizes are MAY. [RFC3602] includes IANA values for use in IKEv1 and ESP-v2. A single IANA value is defined for AES-CBC, so IKE negotiations need to specify the keysize. Requirement levels for AES-CBC with 128-bit keys: IKEv1 - SHOULD [RFC4109] IKEv2 - SHOULD+ [RFC4307] ESP-v2 - MUST [RFC4835] ESP-v3 - MUST [RFC4835] Requirement levels for AES-CBC with 192- or 256-bit keys: IKEv1 - optional IKEv2 - optional ESP-v2 - optional ESP-v3 - optional 5.2.4. RFC 3686, Using Advanced Encryption Standard (AES) Counter Mode With IPsec Encapsulating Security Payload (ESP) (S, January 2004) [RFC3686] describes how to use AES in counter (CTR) mode to encrypt ESP traffic. AES-CTR is a stream cipher with a 32-bit random nonce (1/SA) and a 64-bit IV. If AES-CTR is implemented, 128-bit keys are MUST; 192- and 256-byte keys are MAY. Reuse of the IV with the same key and nonce compromises the data's security; thus, AES-CTR should not be used with manual keying. AES-CTR can be pipelined and parallelized; it uses only the AES encryption operations for both encryption and decryption. Requirement levels for AES-CTR: IKEv1 - undefined (no IANA #) IKEv2 - optional [RFC5930] ESP-v2 - SHOULD [RFC4835] ESP-v3 - SHOULD [RFC4835] 5.2.5. RFC 5930, Using Advanced Encryption Standard Counter Mode (AES- CTR) with the Internet Key Exchange version 02 (IKEv2) Protocol (I, July 210). [RFC5930] extends [RFC3686] to enable the use of AES-CTR to provide encryption and integrity protection for IKEv2 messages. 5.2.6. RFC 4312, The Camellia Cipher Algorithm and Its Use with IPsec (S, December 2005) [RFC4312] describes how to use Camellia in cipher block chaining (CBC) mode to encrypt IKE and ESP traffic. Camellia-CBC is a block- mode cipher with a 128-bit blocksize, a random IV that is sent in the packet along with the encrypted data, and keysizes of 128, 192, and 256 bits. If Camellia-CBC is implemented, 128-bit keys are MUST; the other sizes are MAY. [RFC4312] includes IANA values for use in IKEv1 and IPsec-v2. A single IANA value is defined for Camellia-CBC, so IKEv1 negotiations need to specify the keysize. 5.2.7. RFC 5529, Modes of Operation for Camellia for Use with IPsec (S, April 2009) [RFC5529] describes the use of the Camellia block-cipher algorithm in conjunction with several different modes of operation. It describes the use of Camellia in cipher block chaining (CBC) mode and counter (CTR) mode as an encryption algorithm within ESP. It also describes the use of Camellia in Counter with CBC-MAC (CCM) mode as a combined mode algorithm in ESP. This document defines how to use IKEv2 to generate keying material for a Camellia ESP SA; it does not define how to use Camellia within IKEv2 to protect an IKEv2 SA's traffic. However, this RFC, in conjunction with IKEv2's generalized description of block-mode encryption, provide enough detail to allow the use of Camellia-CBC algorithms within IKEv2. All three modes can use keys of length 128 bits, 192 bits, or 256 bits. [RFC5529] includes IANA values for use in IKEv2 and IPsec-v3. A single IANA value is defined for each Camellia mode, so IKEv2 negotiations need to specify the keysize. Requirement levels for Camellia-CBC: IKEv1 - optional IKEv2 - optional ESP-v2 - optional ESP-v3 - optional Requirement levels for Camellia-CTR: IKEv1 - undefined (no IANA #) IKEv2 - undefined (no RFC) ESP-v2 - optional (but no IANA #, so cannot be negotiated by IKE) ESP-v3 - optional Requirement levels for Camellia-CCM: IKEv1 - N/A IKEv2 - undefined (no RFC) ESP-v2 - N/A ESP-v3 - optional 5.2.8. RFC 4196, The SEED Cipher Algorithm and Its Use with IPsec (S, October 2005) [RFC4196] describes how to use SEED in cipher block chaining (CBC) mode to encrypt ESP traffic. It describes how to use IKEv1 to negotiate a SEED-ESP SA, but does not define the use of SEED to protect IKEv1 traffic. SEED-CBC is a block-mode cipher with a 128-bit blocksize, a random IV that is sent in the packet along with the encrypted data, and a keysize of 128 bits. [RFC4196] includes IANA values for use in IKEv1 and IPsec-v2. [RFC4196] includes test data. Requirement levels for SEED-CBC: IKEv1 - undefined (no IANA #) IKEv2 - undefined (no IANA #) ESP-v2 - optional ESP-v3 - optional (but no IANA #, so cannot be negotiated by IKE) 5.3. Integrity-Protection (Authentication) Algorithms The integrity-protection algorithm RFCs describe how to use these algorithms to authenticate IKE and/or IPsec traffic, providing integrity protection to the traffic. This protection is provided by computing an Integrity Check Value (ICV), which is sent in the packet. The RFCs. Some of these algorithms generate a fixed-length ICV, which is truncated when it is included in an IPsec-protected packet. For example, standard HMAC-SHA-1 (Hashed Message Authentication Code) generates a 160-bit ICV, which is truncated to 96 bits when it is used to provide integrity protection to an ESP or AH packet. The individual RFC descriptions mention those algorithms that are truncated. When these algorithms are used to protect IKEv2 SAs, they are also truncated. For IKEv1, HMAC-SHA-1 and HMAC-MD5 are negotiated by requesting the hash algorithms SHA-1 and MD5, respectively; these algorithms are not truncated when used to protect an IKEv1 SA. For HMAC-SHA-1 and HMAC-MD5, the IKEv2 IANA registry contains values for both the truncated version and the standard non- truncated version; thus, IKEv2 has the capability to negotiate either version of the algorithm. However, only the truncated version is used for IKEv2 SAs and for IPsec SAs. The non-truncated version is reserved for use by the Fibre Channel protocol [RFC4595]. For the other algorithms (AES-XCBC, HMAC-SHA-256/384/512, AES-CMAC, and HMAC- RIPEMD), only the truncated version can be used for both IKEv2 and IPsec-v3 SAs. One other algorithm, AES-GMAC [RFC4543], can also provide integrity protection. It has two versions: an integrity-protection algorithm for use within AH-v3, and a combined mode algorithm with null encryption for use within ESP-v3. [RFC4543] is described in Section 5.4, "Combined Mode Algorithms". 5.3.1. RFC 2404, The Use of HMAC-SHA-1-96 within ESP and AH (S, November 1998) [RFC2404] describes HMAC-SHA-1, an integrity-protection algorithm with a 512-bit blocksize, and a 160-bit key and Integrity Check Value (ICV). For use within IPsec, the ICV is truncated to 96 bits. This is currently the most commonly used integrity-protection algorithm. Requirement levels for HMAC-SHA-1: IKEv1 - MUST [RFC4109] IKEv2 - MUST [RFC4307] IPsec-v2 - MUST [RFC4835] IPsec-v3 - MUST [RFC4835] 5.3.2. RFC 3566, The AES-XCBC-MAC-96 Algorithm and Its Use With IPsec (S, September 2003) [RFC3566] describes AES-XCBC-MAC, a variant of CBC-MAC, which is secure for messages of varying lengths (unlike classic CBC-MAC). It is an integrity-protection algorithm with a 128-bit blocksize and a 128-bit key and ICV. For use within IPsec, the ICV is truncated to 96 bits. [RFC3566] includes test data. Requirement levels for AES-XCBC-MAC: IKEv1 - undefined (no RFC) IKEv2 - optional IPsec-v2 - SHOULD+ [RFC4835] IPsec-v3 - SHOULD+ [RFC4835] 5.3.3. RFC 4868, Using HMAC-SHA-256, HMAC-SHA-384, and HMAC-SHA-512 with IPsec (S, May 2007) [RFC4868] describes a family of algorithms, successors to HMAC-SHA-1. HMAC-SHA-256 has a 512-bit blocksize and a 256-bit key and ICV. HMAC-SHA-384 has a 1024-bit blocksize and a 384-bit key and ICV. HMAC-SHA-512 has a 1024-bit blocksize and a 512-bit key and ICV. For use within IKE and IPsec, the ICV is truncated to half its original size (128 bits, 192 bits, or 256 bits). Each of the three algorithms has its own IANA value, so IKE does not have to negotiate the keysize. Requirement levels for HMAC-SHA-256, HMAC-SHA-384, HMAC-SHA-512: IKEv1 - optional IKEv2 - optional IPsec-v2 - optional IPsec-v3 - optional 5.3.4. RFC 2403, The Use of HMAC-MD5-96 within ESP and AH (S, November 1998) [RFC2403] describes HMAC-MD5, an integrity-protection algorithm with a 512-bit blocksize and a 128-bit key and Integrity Check Value (ICV). For use within IPsec, the ICV is truncated to 96 bits. It was a required algorithm for IKEv1 and IPsec-v2. The use of plain MD5 is now deprecated, but [RFC4835] states: "Weaknesses have become apparent in MD5; however, these should not affect the use of MD5 with HMAC". Requirement levels for HMAC-MD5: IKEv1 - MAY [RFC4109] IKEv2 - optional [RFC4307] IPsec-v2 - MAY [RFC4835] IPsec-v3 - MAY [RFC4835] 5.3.5. RFC 4494, The AES-CMAC-96 Algorithm and Its Use with IPsec (S, June 2006) [RFC4494] describes AES-CMAC, another variant of CBC-MAC, which is secure for messages of varying lengths. It is an integrity- protection algorithm with a 128-bit blocksize and 128-bit key and ICV. For use within IPsec, the ICV is truncated to 96 bits. [RFC4494] includes test data. Requirement levels for AES-CMAC: IKEv1 - undefined (no IANA #) IKEv2 - optional IPsec-v2 - optional (but no IANA #, so cannot be negotiated by IKE) IPsec-v3 - optional 5.3.6. RFC 2857, The Use of HMAC-RIPEMD-160-96 within ESP and AH (S, June 2000) [RFC2857] describes HMAC-RIPEMD, an integrity-protection algorithm with a 512-bit blocksize and a 160-bit key and ICV. For use within IPsec, the ICV is truncated to 96 bits. Requirement levels for HMAC-RIPEMD: IKEv1 - undefined (no IANA #) IKEv2 - undefined (no IANA #) IPsec-v2 - optional IPsec-v3 - optional (but no IANA #, so cannot be negotiated by IKE) 5.3.7. RFC 4894, Use of Hash Algorithms in Internet Key Exchange (IKE) and IPsec (I, May 2007) In light of recent attacks on MD5 and SHA-1, [RFC4894] examines whether it is necessary to replace the hash functions currently used by IKE and IPsec for key generation, integrity protection, digital signatures, or PKIX certificates. It concludes that the algorithms recommended for IKEv2 [RFC4307] and IPsec-v3 [RFC4305] are not currently susceptible to any known attacks. Nonetheless, it suggests that implementors add support for AES-XCBC-MAC-96 [RFC3566], AES- XCBC-PRF-128 [RFC4434], and HMAC-SHA-256, -384, and -512 [RFC4868] for future use. It also suggests that IKEv2 implementors add support for PKIX certificates signed with SHA-256, -384, and -512. 5.4. Combined Mode Algorithms IKEv1 and ESP-v2 use separate algorithms to provide encryption and integrity protection, and IKEv1 can negotiate different combinations of algorithms for different SAs. In ESP-v3, a new class of algorithms was introduced, in which a single algorithm can provide both encryption and integrity protection. [RFC5996] describes how IKEv2 can negotiate combined mode algorithms to be used in ESP-v3 SAs. [RFC5282] adds that capability to IKEv2, enabling IKEv2 to negotiate and use combined mode algorithms for its own traffic. When properly designed, these algorithms can provide increased efficiency in both implementation and execution. Although ESP-v2 did not originally include combined mode algorithms, some IKEv1 implementations have added the capability to negotiate combined mode algorithms for use in IPsec SAs; these implementations do not include the capability to use combined mode algorithms to protect IKE SAs. IANA numbers for combined mode algorithms have been added to the IKEv1 registry. 5.4.1. RFC 4309, Using Advanced Encryption Standard (AES) CCM Mode with IPsec Encapsulating Security Payload (ESP) (S, December 2005) [RFC4309] describes how to use AES in counter with CBC-MAC (CCM) mode, a combined algorithm, to encrypt and integrity protect ESP traffic. AES-CCM is a block-mode cipher with a 128-bit blocksize; a random IV that is sent in the packet along with the encrypted data; a 24-bit salt value (1/SA); keysizes of 128, 192, and 256 bits and ICV sizes of 64, 96 and 128 bits. If AES-CCM is implemented, 128-bit keys are MUST; the other sizes are MAY. ICV sizes of 64 and 128 bits are MUST; 96 bits is MAY. The salt value is generated by IKE during the key-generation process. Reuse of the IV with the same key compromises the data's security; thus, AES-CCM should not be used with manual keying. [RFC4309] includes IANA values that IKE can use to negotiate ESP-v3 SAs. Each of the three ICV lengths has its own IANA value, but IKE negotiations need to specify the keysize. [RFC4309] includes test data. [RFC4309] describes how IKE can negotiate the use of AES-CCM to use in an ESP SA. [RFC5282] extends this to the use of AES-CCM to protect an IKEv2 SA. Requirement levels for AES-CCM: IKEv1 - N/A IKEv2 - optional ESP-v2 - N/A ESP-v3 - optional [RFC4835] NOTE: The IPsec-v2 IANA registry includes values for AES-CCM, but combined mode algorithms are not a feature of IPsec-v2. Although some IKEv1/IPsec-v2 implementations include this capability (see Section 5.4), it is not part of the protocol. 5.4.2. RFC 4106, The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP) (S, June 2005) [RFC4106] describes how to use AES in Galois/Counter (GCM) mode, a combined algorithm, to encrypt and integrity protect ESP traffic. AES-GCM is a block-mode cipher with a 128-bit blocksize; a random IV that is sent in the packet along with the encrypted data; a 32-bit salt value (1/SA); keysizes of 128, 192, and 256 bits; and ICV sizes of 64, 96, and 128 bits. If AES-GCM is implemented, 128-bit keys are MUST; the other sizes are MAY. An ICV size of 128 bits is a MUST; 64 and 96 bits are MAY. The salt value is generated by IKE during the key-generation process. Reuse of the IV with the same key compromises the data's security; thus, AES-GCM should not be used with manual keying. [RFC4106] includes IANA values that IKE can use to negotiate ESP-v3 SAs. Each of the three ICV lengths has its own IANA value, but IKE negotiations need to specify the keysize. [RFC4106] includes test data. [RFC4106] describes how IKE can negotiate the use of AES-GCM to use in an ESP SA. [RFC5282] extends this to the use of AES-GCM to protect an IKEv2 SA. Requirement levels for AES-GCM: IKEv1 - N/A IKEv2 - optional ESP-v2 - N/A ESP-v3 - optional [RFC4835] NOTE: The IPsec-v2 IANA registry includes values for AES-GCM, but combined mode algorithms are not a feature of IPsec-v2. Although some IKEv1/IPsec-v2 implementations include this capability (see Section 5.4), it is not part of the protocol. 5.4.3. RFC 4543, The Use of Galois Message Authentication Code (GMAC) in IPsec ESP and AH (S, May 2006) [RFC4543] is the variant of AES-GCM [RFC4106] that provides integrity protection without encryption. It has two versions: an integrity- protection algorithm for use within AH, and a combined mode algorithm with null encryption for use within ESP. It can use a key of 128-, 192-, or 256-bits; the ICV is always 128 bits, and is not truncated. AES-GMAC uses a nonce, consisting of a 64-bit IV and a 32-bit salt (1/SA). The salt value is generated by IKE during the key generation process. Reuse of the salt value with the same key compromises the data's security; thus, AES-GMAC should not be used with manual keying. For use within AH, each keysize has its own IANA value, so IKE does not have to negotiate the keysize. For use within ESP, there is only one IANA value, so IKE negotiations must specify the keysize. AES-GMAC cannot be used by IKE to protect its own SAs, since IKE traffic requires encryption. Requirement levels for AES-GMAC: IKEv1 - N/A IKEv2 - N/A IPsec-v2 - N/A IPsec-v3 - optional NOTE: The IPsec-v2 IANA registry includes values for AES-GMAC, but combined mode algorithms are not a feature of IPsec-v2. Although some IKEv1/IPsec-v2 implementations include this capability (see Section 5.4), it is not part of the protocol. 5.4.4. RFC 5282, Using Authenticated Encryption Algorithms with the Encrypted Payload of the Internet Key Exchange version 2 (IKEv2) Protocol (S, August 2008) [RFC5282] extends [RFC4309] and [RFC4106] to enable the use of AES- CCM and AES-GCM to provide encryption and integrity protection for IKEv2 messages. 5.5. Pseudo-Random Functions (PRFs) IKE uses pseudorandom functions (PRFs) to generate the secret keys that are used in IKE SAs and IPsec SAs. These PRFs are generally the same algorithms used for integrity protection, but their output is not truncated, since all of the generated bits are generally needed for the keys. If the PRF's output is not long enough to supply the required number of bits of keying material, the PRF is applied iteratively until the requisite amount of keying material is generated. For each IKEv2 SA, the peers negotiate both a PRF algorithm and an integrity-protection algorithm; the former is used to generate keying material and other values, and the latter is used to provide protection to the IKE SA's traffic. IKEv1's approach is more complicated. IKEv1 [RFC2409] does not specify any PRF algorithms. For each IKEv1 SA, the peers agree on an unkeyed hash function (e.g., SHA-1). IKEv1 uses the HMAC version of this function to generate keying material and to provide integrity protection for the IKE SA. Therefore, PRFs that are not HMACs cannot currently be used in IKEv1. Requirement levels for PRF-HMAC-SHA1: IKEv1 - MUST [RFC4109] IKEv2 - MUST [RFC4307] Requirement levels for PRF-HMAC-SHA-256, PRF-HMAC-SHA-384, and PRF- HMAC-SHA-512: IKEv1 - optional [RFC4868] IKEv2 - optional [RFC4868] 5.5.1. RFC 4434, The AES-XCBC-PRF-128 Algorithm for the Internet Key Exchange Protocol (IKE) (S, February 2006) [RFC3566] defines AES-XCBC-MAC-96, which is used for integrity protection within IKE and IPsec. [RFC4434] enables the use of AES- XCBC-MAC as a PRF within IKE. The PRF differs from the integrity- protection algorithm in two ways: its 128-bit output is not truncated to 96 bits, and it accepts a variable-length key, which is modified (lengthened via padding or shortened through application of AES-XCBC) to a 128-bit key. [RFC4434] includes test data. Requirement levels for AES-XCBC-PRF: IKEv1 - undefined (no RFC) IKEv2 - SHOULD+ [RFC4307] NOTE: RFC 4109 erroneously classifies AES-XCBC-PRF as SHOULD for IKEv1; this has been corrected in an errata submission for RFC 4109. 5.5.2. RFC 4615, The Advanced Encryption Standard-Cipher-based Message Authentication Code-Pseudorandom Function-128 (AES-CMAC-PRF-128) Algorithm for the Internet Key Exchange Protocol (IKE) (S, August 2006) [RFC4615] extends [RFC4494] to enable the use of AES-CMAC as a PRF within IKEv2, in a manner analogous to that used by [RFC4434] for AES-XCBC. Requirement levels for AES-CMAC-PRF: IKEv1 - undefined (no IANA #) IKEv2 - optional 5.6. Cryptographic Suites 5.6.1. RFC 4308, Cryptographic Suites for IPsec (S, December 2005) An IKE negotiation consists of multiple cryptographic attributes, both for the IKE SA and for the IPsec SA. The number of possible combinations can pose a challenge to peers trying to find a common policy. To enhance interoperability, [RFC4308] defines two pre- defined suites, consisting of combinations of algorithms that comprise typical security policies. IKE/ESP suite "VPN-A" includes use of 3DES, HMAC-SHA-1, and 1024-bit modular exponentiation group (MODP) Diffie-Hellman (DH); IKE/ESP suite "VPN-B" includes AES-CBC, AES-XCBC-MAC, and 2048-bit MODP DH. These suites are intended to be named "single-button" choices in the administrative interface, but do not prevent the use of alternative combinations. 5.6.2. RFC 4869, Suite B Cryptographic Suites for IPsec (I, May 2007) [RFC4869] adds four pre-defined suites, based upon the United States National Security Agency's "Suite B" specifications, to those specified in [RFC4308]. IKE/ESP suites "Suite-B-GCM-128" and "Suite- B-GCM-256" include use of AES-CBC, AES-GCM, HMAC-SHA-256, or HMAC- SHA-384, and 256-bit or 384-bit elliptic-curve (EC) DH groups. IKE/AH suites "Suite-B-GMAC-128" and "Suite-B-GMAC-256" include use of AES-CBC, AES-GMAC, HMAC-SHA-256, or HMAC-SHA-384, and 256-bit or 384-bit EC DH groups. While [RFC4308] does not specify a peer- authentication method, [RFC4869] mandates pre-shared key authentication for IKEv1; public key authentication using ECDSA is recommended for IKEv1 and required for IKEv2. 5.7. Diffie-Hellman Algorithms IKE negotiations include a Diffie-Hellman exchange, which establishes a shared secret to which both parties contributed. This value is used to generate keying material to protect both the IKE SA and the IPsec SA. IKEv1 [RFC2409] contains definitions of two DH MODP groups and two elliptic curve (EC) groups; IKEv2 [RFC5996] only references the MODP groups. The requirements levels of these groups are: Requirement levels for DH MODP group 1: IKEv1 - MAY [RFC4109] IKEv2 - optional Requirement levels for DH MODP group 2: IKEv1 - MUST [RFC4109] IKEv2 - MUST- [RFC4307] Requirement levels for EC groups 3-4: IKEv1 - MAY [RFC4109] IKEv2 - undefined (no IANA #) 5.7.1. RFC 3526, More Modular Exponential (MODP) Diffie-Hellman groups for Internet Key Exchange (IKE) (S, May 2003) [RFC2409] and [RFC5996] define two MODP DH groups (groups 1 and 2) for use within IKE. [RFC3526] adds six more groups (groups 5 and 14-18). Group 14 is a 2048-bit group that is strongly recommended for use in IKE. Requirement levels for DH MODP group 14: IKEv1 - SHOULD [RFC4109] IKEv2 - SHOULD+ [RFC4307] Requirement levels for DH MODP groups 5, 15-18: IKEv1 - optional [RFC4109] IKEv2 - optional 5.7.2. RFC 4753, ECP Groups For IKE and IKEv2 (I, January 2007) [RFC4753] defines three EC DH groups (groups 19-21) for use within IKE. The document includes test data. Requirement levels for DH EC groups 19-21: IKEv1 - optional [RFC4109] IKEv2 - optional 5.7.3. RFC 5903, Elliptic Curve Groups modulo a Prime (ECP Groups) for IKE and IKEv2 (I, June 2010) 5.7.4. RFC 5114, Additional Diffie-Hellman Groups for Use with IETF Standards (I, January 2008) [RFC5114] defines five additional DH groups (MODP groups 22-24 and EC groups 25-26) for use in IKE. It also includes three EC DH groups (groups 19-21) that were originally defined in [RFC4753]; however, the current specification for these groups is [RFC5903]. The IANA group numbers are specific to IKE, but the DH groups are intended for use in multiple IETF protocols, including Transport Layer Security/Secure Socket Layer (TLS/SSL), Secure/Multipurpose Internet Mail Extensions (S/MIME), and X.509 Certificates. Requirement levels for DH MODP groups 22-24, EC groups 25-26: IKEv1 - optional IKEv2 - optional 6. IPsec/IKE for Multicast [RFC4301] describes IPsec processing for unicast and multicast traffic. However, classical IPsec SAs provide point-to-point protection; the security afforded by IPsec's cryptographic algorithms is not applicable when the SA is one-to-many or many-to-many, the case for multicast. The Multicast Security (msec) Working Group has defined alternatives to IKE and extensions to IPsec for use with multicast traffic. Different multicast groups have differing characteristics and requirements: number of senders (one-to-many or many-to-many), number of members (few, moderate, very large), volatility of membership, real-time delivery, etc. Their security requirements vary as well. Each solution defined by msec applies to a subset of the large variety of possible multicast groups. 6.1. RFC 3740, The Multicast Group Security Architecture (I, March 2004) [RFC3740] defines the multicast security architecture, which is used to provide security for packets exchanged by large multicast groups. It defines the components of the architectural framework; discusses Group Security Associations (GSAs), key management, data handling, and security policies. Several existing protocols, including Group DOI (GDOI) [RFC3547], Group Secure Association Key Management Protocol (GSAKMP) [RFC4535], and Multimedia Internet KEYing (MIKEY) [RFC3830], satisfy the group key management requirements defined in this document. Both the architecture and the components for Multicast Group Security differ from IPsec. 6.2. RFC 5374, Multicast Extensions to the Security Architecture for the Internet Protocol (S, November 2008) [RFC5374] extends the security architecture defined in [RFC4301] to apply to multicast traffic. It defines a new class of SAs (GSAs - Group Security Associations) and additional databases used to apply IPsec protection to multicast traffic. It also describes revisions and additions to the processing algorithms in [RFC4301]. 6.3. RFC 3547, The Group Domain of Interpretation (S, July 2003) GDOI [RFC3547] extends IKEv1 so that it can be used to establish SAs to protect multicast traffic. This document defines additional exchanges and payloads to be used for that purpose. 6.4. RFC 4046, Multicast Security (MSEC) Group Key Management Architecture (I, April 2005) [RFC4046] sets out the general requirements and design principles for protocols that are used for multicast key management. It does not go into the specifics of an individual protocol that can be used for that purpose. 6.5. RFC 4359, The Use of RSA/SHA-1 Signatures within Encapsulating Security Payload (ESP) and Authentication Header (AH) (S, January 2006) [RFC4359] describes the use of the RSA digital signature algorithm to provide integrity protection for multicast traffic within ESP and AH. The algorithms used for integrity protection for unicast traffic (e.g., HMAC) are not suitable for this purpose when used with multicast traffic. 7. Outgrowths of IPsec/IKE Operational experience with IPsec revealed additional capabilities that could make IPsec more useful in real-world scenarios. These include support for IPsec policy mechanisms, IPsec MIBs, payload compression (IPComp), extensions to facilitate additional peer authentication methods (Better-Than-Nothing Security (BTNS), Kerberized Internet Negotiation of Keys (KINK), and IPSECKEY), and additional capabilities for VPN clients (IPSRA). 7.1. IPsec Policy The IPsec Policy (ipsp) Working Group originally planned an RFC that would allow entities with no common Trust Anchor and no prior knowledge of each other's security policies to establish an IPsec- protected connection. The solutions that were proposed for gateway discovery and security policy negotiation proved to be overly complex and fragile, in the absence of prior knowledge or compatible configuration policies. 7.1.1. RFC 3586, IP Security Policy (IPSP) Requirements (S, August 2003) [RFC3586] describes the functional requirements of a generalized IPsec policy framework, that could be used to discover, negotiate, and manage IPsec policies. 7.1.2. RFC 3585, IPsec Configuration Policy Information Model (S, August 2003). This RFC has not been widely adopted. 7.2. IPsec MIBs Over the years, several MIB-related Internet Drafts were proposed for IPsec and IKE, but only one progressed to RFC status. 7.2.1. RFC 4807, IPsec Security Policy Database Configuration MIB (S, March 2007) [RFC4807] defines a MIB module that can be used to configure the SPD of an IPsec device. This RFC has not been widely adopted. 7.3. IPComp (Compression) The IP Payload Compression Protocol (IPComp) is a protocol that provides lossless compression for IP datagrams. Although IKE can be used to negotiate the use of IPComp in conjunction with IPsec, IPComp can also be used when IPsec is not applied. The IPComp protocol allows the compression of IP datagrams by supporting different compression algorithms. Three of these algorithms are: DEFLATE [RFC2394], LZS [RFC2395], and the ITU-T V.44 Packet Method [RFC3051], which is based on the LZJH algorithm. 7.3.1. RFC 3173, IP Payload Compression Protocol (IPComp) (S, September 2001) IP payload compression is especially useful when IPsec-based encryption is applied to IP datagrams. Encrypting the IP datagram causes the data to be random in nature, rendering compression at lower protocol layers ineffective. If IKE is used to negotiate compression in conjunction with IPsec, compression can be performed prior to encryption. [RFC3173] defines the payload compression protocol, the IPComp packet structure, the IPComp Association (IPCA), and several methods to negotiate the IPCA. 7.4. Better-Than-Nothing Security (BTNS) One of the major obstacles to widespread implementation of IPsec is the lack of pre-existing credentials that can be used for peer authentication. Better-Than-Nothing Security (BTNS) is an attempt to sidestep this problem by allowing IKE to negotiate unauthenticated (anonymous) IPsec SAs, using credentials such as self-signed certificates or "bare" public keys (public keys that are not connected to a public key certificate) for peer authentication. This ensures that subsequent traffic protected by the SA is conducted with the same peer, and protects the communications from passive attack. These SAs can then be cryptographically bound to a higher-level application protocol, which performs its own peer authentication. 7.4.1. RFC 5660, IPsec Channels: Connection Latching (S, October 2009) [RFC5660]. 7.4.2. RFC 5386, Better-Than-Nothing-Security: An Unauthenticated Mode of IPsec (S, November 2008) [RFC5386] specifies how to use IKEv2 to set up unauthenticated security associations (SAs) for use with the IPsec Encapsulating Security Payload (ESP) and the IPsec Authentication Header (AH). This document does not require any changes to the bits on the wire, but specifies extensions to the Peer Authorization Database (PAD) and Security Policy Database (SPD). 7.4.3. RFC 5387, Problem and Applicability Statement for Better-Than- Nothing Security (BTNS) (I, November 2008) [RFC5387] considers that the need to deploy authentication information and its associated identities is a significant obstacle to the use of IPsec. This document explains the rationale for extending the Internet network security protocol suite to enable use of IPsec security services without authentication. 7.5. Kerberized Internet Negotiation of Keys (KINK) Kerberized Internet Negotiation of Keys (KINK) is an attempt to provide an alternative to IKE for IPsec peer authentication. It uses Kerberos, instead of IKE, to establish IPsec SAs. For enterprises that already deploy the Kerberos centralized key management system, IPsec can then be implemented without the need for additional peer credentials. Some vendors have implemented proprietary extensions for using Kerberos in IKEv1, as an alternative to the use of KINK. These extensions, as well as the KINK protocol, apply only to IKEv1, and not to IKEv2. 7.5.1. RFC 3129, Requirements for Kerberized Internet Negotiation of Keys (I, June 2001) [RFC3129] considers that peer-to-peer authentication and keying mechanisms have inherent drawbacks such as computational complexity and difficulty in enforcing security policies. This document specifies the requirements for using basic features of Kerberos and uses them to its advantage to create a protocol that can establish and maintain IPsec security associations ([RFC2401]). 7.5.2. RFC 4430, Kerberized Internet Negotiation of Keys (KINK) (S, March 2006) [RFC4430] defines a low-latency, computationally inexpensive, easily managed, and cryptographically sound protocol to establish and maintain security associations using the Kerberos authentication system. This document reuses the Quick Mode payloads of IKEv1 in order to foster substantial reuse of IKEv1 implementations. This RFC has not been widely adopted. 7.6. IPsec Secure Remote Access (IPSRA) IPsec Secure Remote Access (IPSRA) was an attempt to extend IPsec protection to "road warriors", allowing IKE to authenticate not only the user's device but also the user, without changing IKEv1. The working group defined generic requirements of different IPsec remote access scenarios. An attempt was made to define an IKE-like protocol that would use legacy authentication mechanisms to create a temporary or short-lived user credential that could be used for peer authentication within IKE. This protocol proved to be more cumbersome than standard Public Key protocols, and was abandoned. This led to the development of IKEv2, which incorporates the use of EAP for user authentication. 7.6.1. RFC 3457, Requirements for IPsec Remote Access Scenarios (I, January 2003) [RFC3457] explores and enumerates the requirements of various IPsec remote access scenarios, without suggesting particular solutions for them. 7.6.2. RFC 3456, Dynamic Host Configuration Protocol (DHCPv4) Configuration of IPsec Tunnel Mode (S, January 2003) [RFC3456] explores the requirements for host configuration in IPsec tunnel mode, and describes how the Dynamic Host Configuration Protocol (DHCPv4) may be used for providing such configuration information. This RFC has not been widely adopted. 7.7. IPsec Keying Information Resource Record (IPSECKEY) The IPsec Keying Information Resource Record (IPSECKEY) enables the storage of public keys and other information that can be used to facilitate opportunistic IPsec in a new type of DNS resource record. 7.7.1. RFC 4025, A method for storing IPsec keying material in DNS (S, February 2005) [RFC4025] describes a method of storing IPsec keying material in the DNS using a new type of resource record. This document describes how to store the public key of the target node in this resource record. This RFC has not been widely adopted. 8. Other Protocols That Use IPsec/IKE IPsec and IKE were designed to provide IP-layer security protection to other Internet protocols' traffic as well as generic communications. Since IPsec is a general-purpose protocol, in some cases, its features do not provide the granularity or distinctive features required by another protocol; in some cases, its overhead or prerequisites do not match another protocol's requirements. However, a number of other protocols do use IKE and/or IPsec to protect some or all of their communications. 8.1. Mobile IP (MIPv4 and MIPv6) 8.1.1. RFC 4093, Problem Statement: Mobile IPv4 Traversal of Virtual Private Network (VPN) Gateways (I, August 2005) [RFC4093] describes the issues with deploying Mobile IPv4 across virtual private networks (VPNs). IPsec is one of the VPN technologies covered by this document. It identifies and describes practical deployment scenarios for Mobile IPv4 running alongside IPsec in enterprise and operator environments. It also specifies a set of framework guidelines to evaluate proposed solutions for supporting multi-vendor seamless IPv4 mobility across IPsec-based VPN gateways. 8.1.2. RFC 5265, Mobile IPv4 Traversal across IPsec-Based VPN Gateways (S, June 2008) [RFC5265] describes a basic solution that uses Mobile IPv4 and IPsec to provide session mobility between enterprise intranets and external networks. The proposed solution minimizes changes to existing firewall/VPN/DMZ deployments and does not require any changes to IPsec or key exchange protocols. It also proposes a mechanism to minimize IPsec renegotiation when the mobile node moves. 8.1.3. RFC 3776, Using IPsec to Protect Mobile IPv6 Signaling Between Mobile Nodes and Home Agents (S, June 2004) This document specifies the use of IPsec in securing Mobile IPv6 traffic between mobile nodes and home agents. It specifies the required wire formats for the protected packets and illustrates examples of Security Policy Database and Security Association Database entries that can be used to protect Mobile IPv6 signaling messages. It also describes how to configure either manually keyed IPsec security associations or IKEv1 to establish the SAs automatically. Mobile IPv6 requires considering the home address destination option and Routing Header in IPsec processing. Also, IPsec and IKE security association addresses can be updated by Mobile IPv6 signaling messages. 8.1.4. RFC 4877, Mobile IPv6 Operation with IKEv2 and the Revised IPsec Architecture (S, April 2007) This document updates [RFC3776] in order to work with the revised IPsec architecture [RFC4301]. Since the revised IPsec architecture expands the list of selectors to include the Mobility Header message type, it becomes much easier to differentiate between different mobility header messages. Since the ICMP message type and code are also newly added as selectors, this document uses them to protect Mobile Prefix Discovery messages. This document also specifies the use of IKEv2 configuration payloads for dynamic home address configuration. Finally, this document describes the use of IKEv2 in order to set up the SAs for Mobile IPv6. 8.1.5. RFC 5026, Mobile IPv6 Bootstrapping in Split Scenario (S, October 2007) [RFC5026] extends [RFC4877] to support dynamic discovery of home agents and the home network prefix; for the latter purpose, it specifies a new IKEv2 configuration attribute and notification. It describes how a Mobile IPv6 node can obtain the address of its home agent, its home address, and create IPsec security associations with its home agent using DNS lookups and security credentials preconfigured on the Mobile Node. It defines how a mobile node (MN) can request its home address and home prefixes through the Configuration Payload in the IKE_AUTH exchange and what attributes need to be present in the CFG_REQUEST messages in order to do this. It also specifies how the home agent can authorize the credentials used for IKEv2 exchange. 8.1.6. RFC 5213, Proxy Mobile IPv6 (S, August 2008) [RFC5213] describes a network-based mobility management protocol that is used to provide mobility services to hosts without requiring their participation in any mobility-related signaling. It uses IPsec to protect the mobility signaling messages between the two network entities called the mobile access gateway (MAG) and the local mobility anchor (LMA). It also uses IKEv2 in order to set up the security associations between the MAG and the LMA. 8.1.7. RFC 5568, Mobile IPv6 Fast Handovers (S, July 2009) When Mobile IPv6 is used for a handover, there is a period during which the Mobile Node is unable to send or receive packets because of link switching delay and IP protocol operations. [RFC5568] specifies a protocol between the Previous Access Router (PAR) and the New Access Router (NAR) to improve handover latency due to Mobile IPv6 procedures. It uses IPsec ESP in transport mode with integrity protection for protecting the signaling messages between the PAR and the NAR. It also describes the SPD entries and the PAD entries when IKEv2 is used for setting up the required SAs. 8.1.8. RFC 5380, Hierarchical Mobile IPv6 (HMIPv6) Mobility Management (S, October 2008) [RFC5380] describes extensions to Mobile IPv6 and IPv6 Neighbor Discovery to allow for local mobility handling in order to reduce the amount of signaling between the mobile node, its correspondent nodes, and its home agent. It also improves handover speed of Mobile IPv6. It uses IPsec for protecting the signaling between the mobile node and a local mobility management entity called the Mobility Anchor Point (MAP). The MAP also uses IPsec Peer Authorization Database (PAD) entries and configuration payloads described in [RFC4877] in order to allocate a Regional Care-of Address (RCoA) for mobile nodes. 8.2. Open Shortest Path First (OSPF) 8.2.1. RFC 4552, Authentication/Confidentiality for OSPFv3 (S, June 2006) OSPF is a link-state routing protocol that is designed to be run inside a single Autonomous System. OSPFv2 provided its own authentication mechanisms using the AuType and Authentication protocol header fields but OSPFv3 removed these fields and uses IPsec instead. [RFC4552] describes how to use IPsec ESP and AH in order to protect OSPFv3 signaling between two routers. It also enumerates the IPsec capabilities the routers require in order to support this specification. Finally, it also describes the operation of OSPFv3 with IPsec over virtual links where the other endpoint is not known at configuration time. Since OSPFv3 exchanges multicast packets as well as unicast ones, the use of IKE within OSPFv3 is not appropriate. Therefore, this document mandates the use of manual keys. 8.3. Host Identity Protocol (HIP) 8.3.1. RFC 5201, Host Identity Protocol (E, April 2008) IP addresses perform two distinct functions: host identifier and locator. This document specifies a protocol that allows consenting hosts to securely establish and maintain shared IP-layer state, allowing separation of the identifier and locator roles of IP addresses. This enables continuity of communications across IP address (locator) changes. It uses public key identifiers from a new Host Identity (HI) namespace for peer authentication. It uses the HMAC-SHA-1-96 and the AES-CBC algorithms with IPsec ESP and AH for protecting its signaling messages. 8.3.2. RFC 5202, Using the Encapsulating Security Payload (ESP) Transport Format with the Host Identity Protocol (HIP) (E, April 2008) The HIP base exchange specification [RFC5201] does not describe any transport formats or methods for describing how ESP is used to protect user data to be used during the actual communication. [RFC5202] specifies a set of HIP extensions for creating a pair of ESP Security Associations (SAs) between the hosts during the base exchange. After the HIP association and required ESP SAs have been established between the hosts, the user data communication is protected using ESP. In addition, this document specifies how the ESP Security Parameter Index (SPI) is used to indicate the right host context (host identity) and methods to update an existing ESP Security Association. 8.3.3. RFC 5206, End-Host Mobility and Multihoming with the Host Identity (E, April 2008) When a host uses HIP, the overlying protocol sublayers (e.g., transport layer sockets) and Encapsulating Security Payload (ESP) Security Associations (SAs) are bound to representations of these host identities, and the IP addresses are only used for packet forwarding. [RFC5206] defines a generalized LOCATOR parameter for use in HIP messages that allows a HIP host to notify a peer about alternate addresses at which it is reachable. It also specifies how a host can change its IP address and continue to send packets to its peers without necessarily rekeying. 8.3.4. RFC 5207, NAT and Firewall Traversal Issues of Host Identity Protocol (HIP) (I, April 2008) [RFC5207] discusses the problems associated with HIP communication across network paths that include network address translators and firewalls. It analyzes the impact of NATs and firewalls on the HIP base exchange and the ESP data exchange. It discusses possible changes to HIP that attempt to improve NAT and firewall traversal and proposes a rendezvous point for letting HIP nodes behind a NAT be reachable. It also suggests mechanisms for NATs to be more aware of the HIP messages. 8.4. Stream Control Transmission Protocol (SCTP) 8.4.1. RFC 3554, On the Use of Stream Control Transmission Protocol (SCTP) with IPsec (S, July 2003) The Stream Control Transmission Protocol (SCTP) is a reliable transport protocol operating on top of a connection-less packet network such as IP. [RFC3554] describes functional requirements for IPsec and IKE to be used in securing SCTP traffic. It adds support for SCTP in the form of a new ID type in IKE [RFC2409] and implementation choices in the IPsec processing to account for the multiple source and destination addresses associated with a single SCTP association. This document applies only to IKEv1 and IPsec-v2; it does not apply to IKEv2 AND IPsec-v3. 8.5. Robust Header Compression (ROHC) 8.5.1. RFC 3095, RObust Header Compression (ROHC): Framework and four profiles: RTP, UDP, ESP, and uncompressed (S, July 2001) ROHC is a framework for header compression, intended to be used in resource-constrained environments. [RFC3095] applies this framework to four protocols, including ESP. 8.5.2. RFC 5225, RObust Header Compression Version 2 (ROHCv2): Profiles for RTP, UDP, IP, ESP, and UDP-Lite (S, April 2008) [RFC5225] defines an updated ESP/IP profile for use with ROHC version 2. It analyzes the ESP header and classifies the fields into several classes like static, well-known, irregular, etc., in order to efficiently compress the headers. 8.5.3. RFC 5856, Integration of Robust Header Compression over IPsec Security Associations (I, May 2010) [RFC5856] describes a mechanism to compress inner IP headers at the ingress point of IPsec tunnels and to decompress them at the egress point. Since the Robust Header Compression (ROHC) specifications only describe operations on a per-hop basis, this document also specifies extensions to enable ROHC over multiple hops. This document applies only to tunnel mode SAs and does not support transport mode SAs. 8.5.4. RFC 5857, IKEv2 Extensions to Support Robust Header Compression over IPsec (S, May 2010) ROHC requires initial configuration at the compressor and decompressor ends. Since ROHC usually operates on a per-hop basis, this configuration information is carried over link-layer protocols such as PPP. Since [RFC5856] operates over multiple hops, a different signaling mechanism is required. [RFC5857] describes how to use IKEv2 in order to dynamically communicate the configuration parameters between the compressor and decompressor. 8.5.5. RFC 5858, IPsec Extensions to Support Robust Header Compression over IPsec (S, May 2010) [RFC5856] describes how to use ROHC with IPsec. This is not possible without extensions to IPsec. [RFC5858] describes the extensions needed to IPsec in order to support ROHC. Specifically, it describes extensions needed to the IPsec SPD, SAD, and IPsec processing including ICV computation and integrity verification. 8.6. Border Gateway Protocol (BGP) 8.6.1. RFC 5566, BGP IPsec Tunnel Encapsulation Attribute (S, June 2009) [RFC5566] adds an additional BGP Encapsulation Subsequent Address Family Identifier (SAFI), allowing the use of IPsec and, optionally, IKE to protect BGP tunnels. It defines the use of AH and ESP in tunnel mode and the use of AH and ESP in transport mode to protect IP in IP and MPLS-in-IP tunnels. It also defines how public key fingerprints (hashes) are distributed via BGP and used later to authenticate IKEv2 exchange between the tunnel endpoints. 8.7. IPsec Benchmarking The Benchmarking Methodology WG in the IETF is working on documents that relate to benchmarking IPsec [BMWG-1] [BMWG-2]. 8.7.1. Methodology for Benchmarking IPsec Devices (Work in Progress) [BMWG-1] defines a set of tests that can be used to measure and report the performance characteristics of IPsec devices. It extends the methodology defined for benchmarking network interconnecting devices to include IPsec gateways and adds further tests that can be used to measure IPsec performance of end-hosts. The document focuses on establishing a performance testing methodology for IPsec devices that support manual keying and IKEv1, but does not cover IKEv2. 8.7.2. Terminology for Benchmarking IPsec Devices (Work in Progress) [BMWG-2] defines the standardized performance testing terminology for IPsec devices that support manual keying and IKEv1. It also describes the benchmark tests that would be used to test the performance of the IPsec devices. 8.8. Network Address Translators (NAT) 8.8.1. RFC 2709, Security Model with Tunnel-mode IPsec for NAT domains (I, October 1999) NAT devices provide transparent routing to end-hosts trying to communicate from disparate address realms, by modifying IP and transport headers en route. This makes it difficult for applications to pursue end-to-end application-level security. [RFC2709] describes a security model by which tunnel mode IPsec security can be architected on NAT devices. It defines how NATs administer security policies and SA attributes based on private realm addressing. It also specifies how to operate IKE in such scenarios by specifying an IKE-ALG (Application Level Gateway) that translates policies from private realm addressing into public addressing. Although the model presented here uses terminology from IKEv1, it can be deployed within IKEv1, IKEv2, IPsec-v2, and IPsec-v3. This security model has not been widely adopted 8.9. Session Initiation Protocol (SIP) 8.9.1. RFC 3329, Security Mechanism Agreement for the Session Initiation Protocol (SIP) (S, January 2003) [RFC3329] describes how a SIP client can select one of the various available SIP security mechanisms. In particular, the method allows secure negotiation to prevent bidding down attacks. It also describes a security mechanism called ipsec-3gpp and its associated parameters (algorithms, protocols, mode, SPIs and ports) as they are used in the 3GPP IP Multimedia Subsystem. 8.10. Explicit Packet Sensitivity Labels 8.10.1. RFC 5570, Common Architecture Label IPv6 Security Option (CALIPSO) (I, July 2009) [RFC5570] describes a mechanism used to encode explicit packet Sensitivity Labels on IPv6 packets in Multi-Level Secure (MLS) networks. The method is implemented using an IPv6 hop-by-hop option. This document uses the IPsec Authentication Header (AH) in order to detect any malicious modification of the Sensitivity Label in a packet. 9., October305] Eastlake 3rd, D., "Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH)", RFC 4305, December 2005. [RFC4306] Kaufman, C., Ed., "Internet Key Exchange (IKEv2) Protocol", RFC 4306, December 2005. [RFC4307] Schiller, J., "Cryptographic Algorithms for Use in the Internet Key Exchange Version 2 (IKEv2)", RFC 4307, 2006. [RFC4535] Harney, H., Meth, U., Colegrove, A., and G. Gross, "GSAKMP: Group Secure Association Key Management Protocol", RFC 4535, June 2006. [RFC4543] McGrew, D. and J. Viega, "The Use of Galois Message Authentication Code (GMAC) in IPsec ESP and AH", RFC 4543, May 2006. [RFC 2006. [RFC4615]. [RFC, February 2007. [RFC480
https://pike.lysator.liu.se/docs/ietf/rfc/60/rfc6071.xml
CC-MAIN-2020-29
en
refinedweb
. One of my favorite things about TensorFlow 2.0 is that it offers multiple levels of abstraction, so you can choose the right one for your project. In this article, I’ll explain the tradeoffs between two styles you can use to create your neural networks. The first is a symbolic style, in which you build a model by manipulating a graph of layers. The second is an imperative style, in which you build a model by extending a class. I’ll introduce these, share notes on important design and usability considerations, and close with quick recommendations to help you choose the right one. The mental model we normally use when we think of a neural network is a “graph of layers”, illustrated in the image below. This graph can be a DAG, shown on the left, or a stack, shown on the right. When we build models symbolically, we do so by describing the structure of this graph. If that sounds technical, it may surprise you to learn you have experience doing so already, if you’ve used Keras. Here’s a quick example of building a model symbolically, using the Keras Sequential API. In the above example, we’ve defined a stack of layers, then trained it using a built-in training loop, _model.fit_. Building a model with Keras can feel as easy as “plugging LEGO bricks together.” Why? In addition to matching our mental model, for technical reasons covered later, models built this way easy to debug by virtue of thoughtful error messages provided by the framework. TensorFlow 2.0 provides another symbolic model-building APIs: Keras Functional. Sequential is for stacks, and as you’ve probably guessed, Functional is for DAGs. The Functional API is a way to create more flexible models. It can handle non-linear topology, models with shared layers, and models with multiple inputs or outputs. Basically, the Functional API is a set of tools for building these graphs of layers. We’re working a couple new tutorials in this style for you now. There are other symbolic APIs you may have experience with. For example, TensorFlow v1 (and Theano) provided a much lower level API. You’d build models by creating a graph of ops, which you compile and execute. At times, using this API could feel like you were interacting with a compiler directly. And for many (including the author) it was difficult to work with. By contrast, in Keras the level of abstraction matches our mental model: a graph of layers, plugged together like Lego bricks. This feels natural to work with, and it’s one of the model-building approaches we’re standardizing on in TensorFlow 2.0. There’s another one I’ll describe now (and there’s a good chance you’ve used this too, or will have a chance to give it a try soon). In an imperative style, you write your model like your write NumPy. Building models in this style feels like Object-Oriented Python development. Here’s a quick example of a subclassed model: From a developer perspective, the way this works is you extend a Model class defined by the framework, instantiate your layers, then write the forward pass of your model imperatively (the backward pass is generated automatically). TensorFlow 2.0 supports this out of the box with Keras Subclassing API. Along with the Sequential and Functional APIs, it’s one of the recommended ways you develop models in TensorFlow 2.0. Although this style is new for TensorFlow, it may surprise you to learn it was introduced by Chainer in 2015 (time flies!). Since then, many frameworks have adopted a similar approach, including Gluon, PyTorch, and TensorFlow (with Keras Subclassing). Surprisingly, code written in this style in different frameworks can appear so similar, it may be difficult to tell apart! This style gives you great flexibility, but it comes with a usability and maintenance cost that’s not obvious. More on that in a bit. Models defined in either the Sequential, Functional, or Subclassing style can be trained in two ways. You can use either a built-in training routine and loss function (see the first example, where we use model.fit and model.compile), or if you need the added complexity of a custom training loop (for example, if you’d like to write your own gradient clipping code) or loss function, you can do so easily as follows: Having both these approaches available is important, and can be handy to reduce code complexity and maintenance costs. Basically, you can use additional complexity when it’s helpful, and when it’s unnecessary — use the built-in methods and spend your time on your research or project. Now that we have a sense for symbolic and imperative styles, let’s look at the tradeoffs. Benefits With Symbolic APIs your model is a graph-like data structure. This means your model can be inspected, or summarized. _keras.utils.plot_model_), or simply use _model.summary()_or to see a description of the layers, weights, and shapes. Likewise, when plugging together layers, library designers can run extensive layer compatibility checks (while building the model, and before execution). Symbolic models provide aconsistent API. This makes them simple to reuse and share. For example, in transfer learning you can access intermediate layer activations to build new models from existing ones, like this: from tensorflow.keras.applications.vgg19 import VGG19 base = VGG19(weights=’imagenet’) model = Model(inputs=base.input, outputs=base_model.get_layer(‘block4_pool’).output) image = load(‘elephant.png’) block4_pool_features = model.predict(image) Symbolic models are defined by a data structure that makes them natural to copy or clone. _model.get_config()_, _model.to_json()_, _model.save()_, _clone_model(model)_, with the ability to recreate the same model from just the data structure (without access to the original code used to define and train the model). While a well-designed API should match our mental model for neural networks, it’s equally important to match the mental model we have as programmers. For many of us, that’s an imperative programming style.. Limitations The current generation of symbolic APIs are best suited to developing models that are directed acyclic graphs of layers. This accounts for the majority of use-cases in practice, though there are a few special ones that don’t fit into this neat abstraction, for example, dynamic networks like tree-RNNs, and recursive networks. That’s why TensorFlow also provides an imperative model-building API style (Keras Subclassing, shown above). You get to use all the familiar layers, initializers, and optimizers from the Sequential and Functional APIs. The two styles are fully interoperable as well, so you can mix and match (for example, you can nest one model type in another). You take a symbolic model and use it as a layer in a subclassed model, or the reverse. Benefits Your forward pass is written imperatively, making it easy to swap out parts implemented by the library (say, a layer, activation, or loss function) with your own implementation. This feels natural to program, and is a great way to dive deeper into the nuts and bolts of deep learning. Imperative APIs gives you maximum flexibility, but at a cost. I love writing code in this style as well, but want to take a moment to highlight the limitations (it’s good to be aware of the tradeoffs). Limitations Importantly, when using an imperative API, your model is defined by the body of a class method. Your model is no longer a transparent data structure, it is an opaque piece of bytecode. When using this style, you’re trading usability and reusability to gain flexibility. Debugging happens during execution, as opposed to when you’re defining your model. Imperative models can be more difficult to reuse. For example, you cannot access intermediate layers or activations with a consistent API. Imperative models are also more difficult to inspect, copy, or clone. _model.save()_, _model.get_config()_, and clone_model do not work for subclassed models. Likewise, _model.summary()_only gives you a list of layers (and doesn’t provide information on how they’re connected, since that’s not accessible). It’s important to remember model-building is only a tiny fraction of working with machine learning in practice. Here’s one of my favorite illustrations on the topic. The model itself (the part of the code where you specify your layers, training loop, etc.) is the tiny box in the middle. Symbolically defined models have advantages in reusability, debugging, and testing. For example, when teaching — I can immediately debug a student’s code if they’re using the Sequential API. When they’re using a subclassed model (regardless of framework), it takes longer (bugs can be more subtle, and of many types). TensorFlow 2.0 supports both of these styles out of the box, so you can choose the right level of abstraction (and complexity) for your project. I hope this was a helpful overview, and thanks for reading! To learn more about the TensorFlow 2.0 stack, beyond these model building APIs, check out this article. To learn more about the relationship between TensorFlow and Keras, head here. Originally published by Josh Gordon at ☞ Complete Guide to TensorFlow for Deep Learning with Python ☞ Deep Learning for Computer Vision with Tensor Flow and Keras ☞ Python for Data Science and Machine Learning Bootcamp ☞ Machine Learning, Data Science and Deep Learning with Python ☞ Data Science: Deep Learning in Python ☞ Master Deep Learning with TensorFlow in Python tensorflow machine-learning What is neuron analysis of a machine? Learn machine learning by designing Robotics algorithm. Click here for best machine learning course models with AI. Machine Learning (ML) is one of the fastest-growing technologies today. ML has a lot of frameworks to build a successful app, and so as a developer, you might be getting confused about using the right framework. Herein we have curated top 5... This lecture talks about 1D and 2D gradient descent mechanisms along with Batch Gradient Descent. The lecture also shows how to get the job done on Python. Machine Learning with TensorFlow & Scikit-learn on Python, we will introduce what is Machine Learning? Why Machine Learning? Types of Machine Learning, Supervised Learning, Unsupervised Learning, AlphaZero AI, Semi-supervised Learning
https://morioh.com/p/7dd04f1827fe
CC-MAIN-2020-29
en
refinedweb
Getting Started with the React Native Navigation Library Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95. Want to learn React Native from the ground up? This article is an extract from our Premium library. Get an entire collection of React Native books covering fundamentals, projects, tips and tools & more with SitePoint Premium. Join now for just $9/month. Learn PHP for free! Make the leap into server-side programming with a comprehensive cover of PHP & MySQL. Normally RRP $11.95 Yours absolutely free In your android/gradle/wrapper/gradle-wrapper.properties, update Gradle’s distributionUrl to use version 4.4 if it’s not already using it: distributionUrl=https\://services.gradle.org/distributions/gradle-4.4-all.zip" } } MainActivity.java On your app/src/main/java/com/rnnavigation/MainActivity.java file, extend from RNN’s NavigationActivity instead: package com.rnnavigation; // import com.facebook.react.ReactActivity; import com.reactnativenavigation.NavigationActivity; // public class MainActivity extends ReactActivity { public class MainActivity extends NavigationActivity { // remove all code inside } Since we’ve removed the default code that React Native uses, we’ll no longer need to register the main component as you’ll see later in the Building the App section: // index.js import { AppRegistry } from "react-native"; import App from "./App"; import { name as appName } from "./app.json"; AppRegistry.registerComponent(appName, () => App); MainApplication.java Next, you also need to update the app/src/main/java/com/rnnavigation/MainApplication.java file to inherit from RNN’s class instead. Start by importing RNN’s classes: import com.facebook.react.shell.MainReactPackage; import com.facebook.soloader.SoLoader; // add these import com.reactnativenavigation.NavigationApplication; import com.reactnativenavigation.react.NavigationReactNativeHost; import com.reactnativenavigation.react.ReactGateway; Next, inherit from RNN’s NavigationApplication class instead: // public class MainApplication extends Application implements ReactApplication { public class MainApplication extends NavigationApplication { } Lastly, completely replace the contents of the class with the following: @Override protected ReactGateway createReactGateway() { ReactNativeHost host = new NavigationReactNativeHost(this, isDebug(), createAdditionalReactPackages()) { @Override protected String getJSMainModuleName() { return "index"; } }; return new ReactGateway(this, isDebug(), host); } @Override public boolean isDebug() { return BuildConfig.DEBUG; } protected List<ReactPackage> getPackages() { return Arrays.<ReactPackage>asList( ); } @Override public List<ReactPackage> createAdditionalReactPackages() { return getPackages(); } The only thing that might look familiar to you in the above code is the getPackages() method. Just like in the standard React Native project, this is where you can initialize the classes of your native dependencies. iOS Setup In this section, we’ll set up RNN for iOS. Before you proceed, it’s important that you have updated Xcode to the latest available version. This helps to avoid potential errors that comes with outdated Xcode modules. First, open the ios/RNNavigation.xcodeproj file with Xcode. This is the corresponding Xcode project for the app. Once opened, in your project navigator (left pane where the project files are listed), right-click on Libraries -> Add files to RNNavigation. On the file picker that shows up, go to the node_modules/react-native-navigation/lib/ios directory and select the ReactNativeNavigation.xcodeproj file: Next, click on RNNavigation in the TARGETS row and click on the Build Phases tab. From there, look for Link Binary With Libraries, expand it, then click on the button for adding a new item: On the modal window that pops up, search for the libReactNativeNavigation.a file, select it, and then click on the Add button: Next, open the AppDelegate.m file and replace its contents with the following. This is basically the entry point of the React Native app (similar to the MainApplication.java in Android). We’re updating it so that it uses RNN instead: #import "AppDelegate.h" #import <React/RCTBundleURLProvider.h> #import <React/RCTRootView.h> #import <ReactNativeNavigation/ReactNativeNavigation.h> @implementation AppDelegate - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { NSURL *jsCodeLocation = [[RCTBundleURLProvider sharedSettings] jsBundleURLForBundleRoot:@"index" fallbackResource:nil]; [ReactNativeNavigation bootstrap:jsCodeLocation launchOptions:launchOptions]; return YES; } @end That’s pretty much it for the linking of RNN to iOS. But if you encounter any issues, you’ll probably have to follow the next section as well. Common Issues In this section, we’ll go through some of the most common issues that you may encounter when linking RNN to iOS. When you’ve edited and saved the AppDelegate.m file, you’ll probably encounter this issue. This will appear in red squiggly lines in the file itself: ! 'RCTBundleURLProvider.h' file not found This happens because the React scheme is missing from your project. To make it available, execute the following commands inside your project’s root directory ( RNNavigation): npm install -g react-native-git-upgrade react-native-git-upgrade This will add the missing files to your project. Once that’s done, go to Product -> Scheme -> Manage Schemes, then click on the + button to add a new scheme. From the Target dropdown, select React, then click OK to add it: Once added, make sure both the Show and Shared checkboxes are checked: The next issue that you might encounter is this: 'ReactNativeNavigation/ReactNativeNavigation.h' file not found. This means it couldn’t find the said file in your project’s path. To solve it, click on RNNavigation in the TARGETS. Then click on Build Settings and below it, make sure the All and Combined is used as a filter, then look for header search in the search bar. Double-click on Header Search Paths, click on the + button, and add $(SRCROOT)/../node_modules/react-native-navigation/lib/ios as a path: Setting up AsyncStorage and Vector Icons The process of setting up AsyncStorage and Vector Icons is very similar to the install processes outlined above, so I’ll just link to the official documentation. Here are the instructions for AsyncStorage: - Android Setup. - iOS Setup. Be sure to follow the non-pods version, as that’s easier to get set up with. The non-pods (short for non CocoaPods) is one that doesn’t use CocoaPods as a dependency manager. The main file of the native module will simply be linked as a library. That’s basically the same thing we did with the libReactNativeNavigation.afile earlier. And here are the instructions for Vector Icons. To simplify things, just follow the instructions for adding the whole Fonts folder to the Xcode project: - Android Setup. Be sure to follow the instructions for getImageSource as well, because we’ll be using that later to load the icons. - iOS Setup. Note: you can always check the GitHub repo if you get lost. Okay, so that was a lot of manual setup. If you really need RNN in your project, then it’s a necessity. Every time you introduce a new dependency that has a native link, you have to follow the instructions for manually linking it. react-native link won’t work, because we’ve replaced ReactApplication with NavigationApplication in MainApplication.java, so the linking is now done a little bit differently. You can probably get away with still running react-native link if you know exactly what it’s doing and you can completely undo what it’s done. But it’s safer to just follow the manual linking instructions. Building the App Now we’re ready to finally start building the app. index.js First, open the existing index.js on the root of the project directory and replace its contents with the following.-community/async-storage"; Next, create the component itself. When the component is mounted, we try to get the username of the logged-in user from local storage. If it exists, then. loadIcons.js Here’s the code for the loadIcons file that we imported in the index.js file earlier. This uses icons from FontAwesome. Here, we’re using the getImageSource() method from Vector Icons to get the actual image resource. This allows us to use it as an icon for the bottom tabs: // loadIcons.js import Icon from "react-native-vector-icons/FontAwesome"; (function() { Promise.all([ Icon.getImageSource("home", 11), // name of icon, size Icon.getImageSource("image", 11), Icon.getImageSource("rss-square", 11) ]).then(async values => { global.icons = values; // make it available globally so we don't need to load it again }); })(); Login Screen The Login screen is the default screen that the user will see if they aren’t logged in. From here, they can log in by entering their username or they can click on forgot password to view the screen for resetting their password. As mentioned earlier, all of this is just mocked and no actual authentication code is used: // src/screens/Login.js import React, { Component } from "react"; import { Navigation } from "react-native-navigation"; import { View, Text, TextInput, Button, TouchableOpacity, StyleSheet } from "react-native"; import AsyncStorage from "@react-native-community/async-storage"; import { goToTabs } from "../../navigation"; export default class Login extends Component { static get options() { return { topBar: { visible: false // need to set this because screens in a stack navigation have a header by default } }; } state = { username: "" }; render() { return ( <View style={styles.wrapper}> <View style={styles.container}> <View style={styles.main}> <View style={styles.fieldContainer}> <Text style={styles.label}>Enter your username</Text> <TextInput onChangeText={username => this.setState({ username })} style={styles.textInput} /> </View> <Button title="Login" color="#0064e1" onPress={this.login} /> <TouchableOpacity onPress={this.goToForgotPassword}> <View style={styles.center}> <Text style={styles.link_text}>Forgot Password</Text> </View> </TouchableOpacity> </View> </View> </View> ); } // next: add login code } // } }); Here’s the login code. This simply stores the username to local storage and navigates the user to the tabbed screens: login = async () => { const { username } = this.state; if (username) { await AsyncStorage.setItem("username", username); goToTabs(global.icons, username); } }; Lastly, here’s the code for navigating to another screen via stack navigation. Simply call the Navigation.push() method and pass in the ID of the current screen as the first argument, and the screen you want to navigate to as the second. The name should be the same one you used when you called Navigation.registerComponent() in the navigation.js file earlier: goToForgotPassword = () => { Navigation.push(this.props.componentId, { component: { name: "ForgotPasswordScreen" } }); }; As mentioned earlier, this screen is simply used as a filler to demonstrate stack navigation. Make sure that the topBar is set to visible, because it’s where the back button for going back to the Login screen is located: // src/screens/ForgotPassword.js import React, { Component } from "react"; import { View, Text, TextInput, Button, StyleSheet } from "react-native"; export default class ForgotPassword extends Component { static get options() { return { topBar: { visible: true, // visible title: { text: "Forgot Password" } } }; } state = { email: "" }; render() { return ( <View style={styles.wrapper}> <View style={styles.container}> <View style={styles.main}> <View style={styles.fieldContainer}> <Text style={styles.label}>Enter your email</Text> <TextInput onChangeText={email => this.setState({ email })} style={styles.textInput} /> </View> <Button title="Send Email" color="#0064e1" onPress={this.sendEmail} /> </View> </View> </View> ); } // sendEmail = async () => {}; } // } }); You can also have a separate button for going back to the previous screen. All you have to do is call the Navigation.pop() method: Navigation.pop(this.props.componentId); Home Screen The Home screen is the default screen for the tabbed navigation, so it’s what the user will see by default when they log in. This screen shows the user’s name that was passed as a navigation prop as well as a button for logging out. Clicking the logout button will simply delete the username from local storage and navigate the user back to the login screen: // src/screens/Home.js import React, { Component } from "react"; import { View, Text, Button, StyleSheet } from "react-native"; import Icon from "react-native-vector-icons/FontAwesome"; import AsyncStorage from "@react-native-community/async-storage"; import { goToLogin } from "../../navigation"; export default class Home extends Component { render() { const { username } = this.props; return ( <View style={styles.container}> <Text style={styles.text}>Hi {username}!</Text> <Button onPress={this.logout} </View> ); } // logout = async () => { await AsyncStorage.removeItem("username"); goToLogin(); }; } const styles = StyleSheet.create({ container: { flex: 1, justifyContent: "center", alignItems: "center" }, text: { fontSize: 18, fontWeight: "bold" } }); In case you’re wondering how we got access to the username, we’ve passed it as a navigation prop from the navigation file earlier: // navigation.js { component: { name: "HomeScreen", options: { ... }, // here: passProps: { username }, } }, Gallery Screen The Gallery screen is just a filler screen so we won’t be delving too much into it. Basically, it just shows a photo gallery UI: // src/screens/Gallery.js import React, { Component } from "react"; import { View, Text, FlatList, Image, Dimensions, StyleSheet } from "react-native"; const { width } = Dimensions.get("window"); const base_width = width / 2; const images = [ { id: 1, src: require("../images/blake-richard-verdoorn-20063-unsplash.jpg") }, { id: 2, src: require("../images/casey-horner-487085-unsplash.jpg") }, { id: 3, src: require("../images/sacha-styles-XK7thML3zEQ-unsplash.jpg") }, { id: 4, src: require("../images/eberhard-grossgasteiger-1036384-unsplash.jpg") }, { id: 5, src: require("../images/justin-kauffman-449060-unsplash.jpg") }, { id: 6, src: require("../images/vincent-guth-182001-unsplash.jpg") } ]; export default class Gallery extends Component { render() { return ( <View style={styles.container}> <FlatList data={images} keyExtractor={(item, index) => item.id.toString()} numColumns={2} renderItem={this.renderImage} /> </View> ); } // renderImage = ({ item }) => { return ( <Image source={item.src} style={{ width: base_width, height: 250 }} /> ); }; } const styles = StyleSheet.create({ container: { flex: 1 } }); Feed Screen Just like the Gallery screen, the Feed screen is also a filler. It simply shows a news feed UI: // src/screens/Feed.js import React, { Component } from "react"; import { View, Text, FlatList, Image, TouchableOpacity, StyleSheet } from "react-native"; const news_items = [ { id: 1, title: "The HTML Handbook", summary: "HTML is the foundation of the marvel called the Web. Discover all you need to know about it in this handy handbook!", image: require("../images/amanda-phung-1281331-unsplash.jpg") }, { id: 2, title: "Angular RxJs In-Depth", summary: "In this tutorial, we'll learn to use the RxJS 6 library with Angular 6 or Angular 7...", image: require("../images/daniil-silantev-318853-unsplash.jpg") }, { id: 3, title: "How to Create Code Profiles in VS Code", summary: "This post piggybacks off of the work done by @avanslaars who is a fellow instructor at egghead.io....", image: require("../images/vincent-van-zalinge-38358-unsplash.jpg") } ]; export default class Feed extends Component { render() { return ( <View style={styles.container}> <FlatList data={news_items} keyExtractor={(item, index) => item.id.toString()} renderItem={this.renderItem} /> </View> ); } // renderItem = ({ item }) => { return ( <TouchableOpacity onPress={this.goToNews}> <View style={styles.news_item}> <View style={styles.news_text}> <View style={styles.text_container}> <Text style={styles.title}>{item.title}</Text> <Text>{item.summary}</Text> </View> </View> <View style={styles.news_photo}> <Image source={item.image} style={styles.photo} /> </View> </View> </TouchableOpacity> ); }; // goToNews = () => {}; } // const styles = StyleSheet.create({ container: { flex: 1 }, news_item: { flex: 1, flexDirection: "row", paddingRight: 20, paddingLeft: 20, paddingTop: 20, paddingBottom: 20, borderBottomWidth: 1, borderBottomColor: "#E4E4E4" }, news_text: { flex: 2, flexDirection: "row", padding: 15 }, title: { fontSize: 28, fontWeight: "bold", color: "#000", fontFamily: "georgia" }, news_photo: { flex: 1, justifyContent: "center", alignItems: "center" }, photo: { width: 120, height: 120 } }); Running the App At this point, you should be able to run the app: react-native run-android react-native run-ios Try out the app and see if it performs better than React Navigation (if you’ve used it before). Conclusion and Next Steps In this tutorial, you learned how to use the React Native Navigation library. Specifically, you learned how to set up React Native Navigation and use the stack and tab navigation. You also learned how to load icons from React Native Vector Icons instead of using image icons. As a next step, you might want to check out how animations can be customized, how to implement a side menu navigation, or view the examples of different layout types. If you’re still unsure about which navigation library to use for your next project, be sure to check out this post: React Navigation vs. React Native Navigation: Which is right for you? You can find the source code of the sample app on this GitHub repo. Learn valuable skills with a practical introduction to Python programming! Give yourself more options and write higher quality CSS with CSS Optimization Basics.
https://www.sitepoint.com/react-native-navigation-library/
CC-MAIN-2020-29
en
refinedweb
One :-). Outline This article describes how to enable a bare-metal (no RTOS) in RAW/native (no sockets, TCP only) lwip application running the MQTT protocol with TLS. The project used in this article is available on GitHub: The project runs a MQTT client application which initiates TLS handshaking and then communicates securely with a Mosquitto broker. Prerequisites: Software/Tools In this article I have used the following software and tools: - MQTT broker running with TLS on port 8883, e.g. see Enable Secure Communication with TLS and the Mosquitto Broker - MCUXpresso IDE v10.0.0 b344: - MCUXpresso SDK for FRDM-K64F: which includes - lwip 2.0.0 - mbed TLS 2.3.0 - MQTT lwip stub from lwip 2.0.2: - mbed TLS 2.4.2 (Apache): - Optional: Processor Expert v3.2.0 (see “MCUXpresso IDE: Installing Processor Expert into Eclipse Neon“) - TCP based (raw) example, e.g. the lwip tcp ping application (or the project from MQTT with lwip and NXP FRDM-K64F Board). But any other software/tool combination should do it too :-). mbed TLS As outlined in “Introduction to Security and TLS (Transport Layer Security)“, I have selected mbed TLS because its licensing terms are very permissive (Apache). Get mbed TLS from (I recommend the Apache version as it is permissive). Another way is to get it from the NXP MCUXpresso SDK for the FRDM-K64F. Use the ‘import SDK examples’ function from the quickstart panel and import the mbedtls_selftest example. The advantage of this method is that it comes with the random number generator drivers (RNG): Adding mbedTLS From the mbed TLS distribution, add the ‘mbedtls’ folder to the project. You need - mbedtls\include\mbedtls - mbedtls\library The mbed TLS implementation uses a ‘port’ which takes advantage of the hardware encryption unit of the on the NXP Kinetis K64F device. That ‘port’ is part of the MCUXpresso SDK, place it inside mbedtls\port. And finally I need the driver for the mmCAU (Memory-Mapped Cryptographic Acceleration Unit) of the NXP Kinetis device: - mmcau_common: common mmCAU files and interface - \libs\lib_mmcau.a: library with cryptographic routines The mbed configuration file is included with a preprocessor symbol. Add the following to compiler Preprocessor defined symbols: MBEDTLS_CONFIG_FILE='"ksdk_mbedtls_config.h"' Next, add the following to compiler include path settings so it can find all the needed header files: "${workspace_loc:/${ProjName}/mbedtls/port/ksdk}" "${workspace_loc:/${ProjName}/mbedtls/include/mbedtls}" "${workspace_loc:/${ProjName}/mbedtls/include}" "${workspace_loc:/${ProjName}/mmcau_common}" And add the mmCAU library to the linker options so it gets linked with the application (see “Creating and using Libraries with ARM gcc and Eclipse“): Last but not least, make sure that the random number generator (RNG) source files of the MCUXpresso SDK are part of the project: - drivers/fsl_rnga.c - drivers/fsl_rnga.h This completes the files and settings to add mbed TLS to the project :-). MQTT without TLS In an application with MQTT, the MQTT communication protocol is handled between the application and the stack: The block diagram below shows the general flow of the MQTT application interacting with lwip in RAW (tcp) mode. With lwip the application has basically two call backs: - recv_cb(): callback called when we receive a packet from the network - sent_cb(): callback called *after* a packet has been sent 💡 There is yet another call back, the error callback. To keep things simple, I ignore that callback here.s In raw/bare metal mode, the application calls ethernet_input() which calls the ‘received’ callback. With using MQTT, the MQTT parses the incoming data and passes it to the application (e.g. CONNACK message). If the application is e.g. sending a PUBLISH request, that TCP message is constructed by the MQTT layer and put into a buffer (actually a ring buffer). That data is only sent if the mqtt_output_send() is called (which is not available to the application). mqtt_output_send() is called for ‘sending’ functions like mqtt_publish() or as a side effect of the mqtt_tcp_sent_cb() callback which is called after a successful tcp_write(). The MQTT sent_cb() is forwarded to the application sent_cb() callback. MQTT with TLS The TLS encryption is happening between the application/MQTT part and the TCP/IP (lwip) layers. That way it is transparent between the protocol and the physical layer: While things look easy from the above block diagram, it is much more complex to get the cryptographic library working between MQTT and lwip: - mbed TLS needs to be initialized properly - The application needs to first start the TLS handshaking, adding an extra state to the application state handling - All calls to the lwip TCP layer needs to be replaced with calls to the mbedtls_ssl layer - The communication flow is not any more ‘send one message, receive one sent_cb() callback). Because the TLS layer is doing the handshaking, multiple messages will be transmitted and received, so the call backs need to be separated too. - Without the TLS layer, the communication flow is more synchronous, and received messages are directly passed up to the application layer. With the TLS between, there is the need for an extra buffering of the incoming messages. - Messages to be sent from the MQTT layer are already buffered in the non-TLS version. To ensure that they are sent, an extra function mqtt_output.send() has been added. 💡 I’m not very happy with that mqtt_output_send() method, but that’s the best I was able to come up to get things working. I might need to refactor this. To make the above working, I had the tweak the existing MQTT implementation with comes with lwip. Several things should be considered for a general refactoring or done with extra callbacks. I might be able to implement and improve it over the next weeks. All the needed changes in the application to support TLS are enabled with the following macro inside mqtt_opts.h: #ifndef MQTT_USE_TLS #define MQTT_USE_TLS 1 /*!< 1: enable TLS/SLL support; 0: do not use TLS/SSL */ #endif The following sections explain the implementation in more details. Random Number Generator Before using the random number generator, it needs to be initialized: #if MQTT_USE_TLS /* initialize random number generator */ RNGA_Init(RNG); /* init random number generator */ RNGA_Seed(RNG, SIM->UIDL); /* use device unique ID as seed for the RNG */ if (TLS_Init()!=0) { /* failed? */ printf("ERROR: failed to initialize for TLS!\r\n"); for(;;) {} /* stay here in case of error */ } #endif TLS Initialization To use the mbed TLS library, several objects have to be initialized at application startup: static mbedtls_entropy_context entropy; static mbedtls_ctr_drbg_context ctr_drbg; static mbedtls_ssl_context ssl; static mbedtls_ssl_config conf; static mbedtls_x509_crt cacert; static mbedtls_ctr_drbg_context ctr_drbg; static int TLS_Init(void) { /* inspired by */ int ret; const char *pers = "ErichStyger-PC"; /* initialize the different descriptors */ mbedtls_ssl_init( &ssl ); mbedtls_ssl_config_init( &conf ); mbedtls_x509_crt_init( &cacert ); mbedtls_ctr_drbg_init( &ctr_drbg ); mbedtls_entropy_init( &entropy ); if( ( ret = mbedtls_ctr_drbg_seed( &ctr_drbg, mbedtls_entropy_func, &entropy, (const unsigned char *) pers, strlen(pers ) ) ) != 0 ) { printf( " failed\n ! mbedtls_ctr_drbg_seed returned %d\n", ret ); return -1; } /* * First prepare the SSL configuration by setting the endpoint and transport type, and loading reasonable * defaults for security parameters. The endpoint determines if the SSL/TLS layer will act as a server (MBEDTLS_SSL_IS_SERVER) * or a client (MBEDTLS_SSL_IS_CLIENT). The transport type determines if we are using TLS (MBEDTLS_SSL_TRANSPORT_STREAM) * or DTLS (MBEDTLS_SSL_TRANSPORT_DATAGRAM). */ if( ( ret = mbedtls_ssl_config_defaults( &conf, MBEDTLS_SSL_IS_CLIENT, MBEDTLS_SSL_TRANSPORT_STREAM, MBEDTLS_SSL_PRESET_DEFAULT ) ) != 0 ) { printf( " failed\n ! mbedtls_ssl_config_defaults returned %d\n\n", ret ); return -1; } /* The authentication mode determines how strict the certificates that are presented are checked. */ mbedtls_ssl_conf_authmode(&conf, MBEDTLS_SSL_VERIFY_NONE ); /* \todo change verification mode! */ /* The library needs to know which random engine to use and which debug function to use as callback. */ mbedtls_ssl_conf_rng( &conf, mbedtls_ctr_drbg_random, &ctr_drbg ); mbedtls_ssl_conf_dbg( &conf, my_debug, stdout ); mbedtls_ssl_setup(&ssl, &conf); if( ( ret = mbedtls_ssl_set_hostname( &ssl, "ErichStyger-PC" ) ) != 0 ) { printf( " failed\n ! mbedtls_ssl_set_hostname returned %d\n\n", ret ); return -1; } /* the SSL context needs to know the input and output functions it needs to use for sending out network traffic. */ mbedtls_ssl_set_bio(&ssl, &mqtt_client, mbedtls_net_send, mbedtls_net_recv, NULL); return 0; /* no error */ } 💡 Notice that with I’m using MBEDTLS_SSL_VERIFY_NONE. I need to change this in a next iteration, see “Tuturial: mbedTLS SLL Certificate Verification with Mosquitto, lwip and MQTT“. Application Main Loop The application runs in an endless loop. To keep things simple, it uses a timer/timestamp to connect and then periodically publish MQTT data: static void DoMQTT(struct netif *netifp) { uint32_t timeStampMs, diffTimeMs; #define CONNECT_DELAY_MS 1000 /* delay in seconds for connect */ #define PUBLISH_PERIOD_MS 10000 /* publish period in seconds */ MQTT_state = MQTT_STATE_IDLE; timeStampMs = sys_now(); /* get time in milli seconds */ for(;;) { LED1_On(); diffTimeMs = sys_now()-timeStampMs; if (MQTT_state==MQTT_STATE_IDLE && diffTimeMs>CONNECT_DELAY_MS) { MQTT_state = MQTT_STATE_DO_CONNECT; /* connect after 1 second */ timeStampMs = sys_now(); /* get time in milli seconds */ } if (MQTT_state==MQTT_STATE_CONNECTED && diffTimeMs>=PUBLISH_PERIOD_MS) { MQTT_state = MQTT_STATE_DO_PUBLISH; /* publish */ timeStampMs = sys_now(); /* get time in milli seconds */ } MqttDoStateMachine(&mqtt_client); /* process state machine */ /* Poll the driver, get any outstanding frames */ LED1_Off(); ethernetif_input(netifp); sys_check_timeouts(); /* Handle all system timeouts for all core protocols */ } } With ethernetif_input() it polls for any incoming TCP packets. With sys_check_timeouts() it checks for any timeout and for example sends periodic PINGREQ messages to the MQTT broker. Application State Machine The application state machine goes through init, connect and TLS handshake sequence, follwed by a periodic PUBLISH: static void MqttDoStateMachine(mqtt_client_t *mqtt_client) { switch(MQTT_state) { case MQTT_STATE_INIT: case MQTT_STATE_IDLE: break; case MQTT_STATE_DO_CONNECT: printf("Connecting to Mosquito broker\r\n"); if (mqtt_do_connect(mqtt_client)==0) { #if MQTT_USE_TLS MQTT_state = MQTT_STATE_DO_TLS_HANDSHAKE; #else MQTT_state = MQTT_STATE_WAIT_FOR_CONNECTION; #endif } else { printf("Failed to connect to broker\r\n"); } break; #if MQTT_USE_TLS case MQTT_STATE_DO_TLS_HANDSHAKE: if (mqtt_do_tls_handshake(mqtt_client)==0) { printf("TLS handshake completed\r\n"); mqtt_start_mqtt(mqtt_client); MQTT_state = MQTT_STATE_WAIT_FOR_CONNECTION; } break; #endif case MQTT_STATE_WAIT_FOR_CONNECTION: if (mqtt_client_is_connected(mqtt_client)) { printf("Client is connected\r\n"); MQTT_state = MQTT_STATE_CONNECTED; } else { #if MQTT_USE_TLS mqtt_recv_from_tls(mqtt_client); #endif } break; case MQTT_STATE_CONNECTED: if (!mqtt_client_is_connected(mqtt_client)) { printf("Client got disconnected?!?\r\n"); MQTT_state = MQTT_STATE_DO_CONNECT; } #if MQTT_USE_TLS else { mqtt_tls_output_send(mqtt_client); /* send (if any) */ mqtt_recv_from_tls(mqtt_client); /* poll if we have incoming packets */ } #endif break; case MQTT_STATE_DO_PUBLISH: printf("Publish to broker\r\n"); my_mqtt_publish(mqtt_client, NULL); MQTT_state = MQTT_STATE_CONNECTED; break; case MQTT_STATE_DO_DISCONNECT: printf("Disconnect from broker\r\n"); mqtt_disconnect(mqtt_client); MQTT_state = MQTT_STATE_IDLE; break; default: break; } } In the MQTT_STATE_CONNECTED it calls mqtt_tls_output_send() to send any outstanding MQTT packets. It uses mqqt_recv_from_tls() to poll any incoming TCP packets. Connecting to the Broker The following is the connection code to the broker: static int); memset(client, 0, sizeof(mqtt_client_t)); /* initialize all fields */ /*; ci.keep_alive = 60; /* timeout */ /* Initiate client and connect to server, if this fails immediately an error code is returned otherwise mqtt_connection_cb will be called with connection result after attempting to establish a connection with the server. For now MQTT version 3.1.1 is always used */ #if MQTT_USE_TLS client->ssl_context = &ssl; err = mqtt_client_connect(client, &broker_ipaddr, MQTT_PORT_TLS, mqtt_connection_cb, 0, &ci); #else err = mqtt_client_connect(client, &broker_ipaddr, MQTT_PORT, mqtt_connection_cb, 0, &ci); #endif /* For now just print the result code if something goes wrong */ if(err != ERR_OK) { printf("mqtt_connect return %d\n", err); return -1; /* error */ } return 0; /* ok */ } At this state, the only difference between TLS and unencrypted communication is that it uses uses a different port (8883 instead of 1883) and that it stores the SSL context in the client descriptor. Inside mqtt_client_connect(), it will directly call the tcp_connect() function. Connection Callback If the TCP connection succeeds, it calls the connection callback: /** * TCP connect callback function. @see tcp_connected_fn * @param arg MQTT client * @param err Always ERR_OK, mqtt_tcp_err_cb is called in case of error * @return ERR_OK */ static err_t mqtt_tcp_connect_cb(void *arg, struct tcp_pcb *tpcb, err_t err) { mqtt_client_t* client = (mqtt_client_t *)arg; if (err != ERR_OK) { LWIP_DEBUGF(MQTT_DEBUG_WARN,("mqtt_tcp_connect_cb: TCP connect error %d\n", err)); return err; } /* Initiate receiver state */ client->msg_idx = 0; #if MQTT_USE_TLS /* Setup TCP callbacks */ tcp_recv(tpcb, tls_tcp_recv_cb); tcp_sent(tpcb, tls_tcp_sent_cb); tcp_poll(tpcb, NULL, 0); LWIP_DEBUGF(MQTT_DEBUG_TRACE,("mqtt_tcp_connect_cb: TCP connection established to server, starting TLS handshake\n")); /* Enter MQTT connect state */ client->conn_state = TLS_HANDSHAKING; /* Start cyclic timer */ sys_timeout(MQTT_CYCLIC_TIMER_INTERVAL*1000, mqtt_cyclic_timer, client); client->cyclic_tick = 0; #else /* Setup TCP callbacks */ tcp_recv(tpcb, mqtt_tcp_recv_cb); tcp_sent(tpcb, mqtt_tcp_sent_cb); tcp_poll(tpcb, mqtt_tcp_poll_cb, 2); LWIP_DEBUGF(MQTT_DEBUG_TRACE,("mqtt_tcp_connect_cb: TCP connection established to server\n")); /* Enter MQTT connect state */ client->conn_state = MQTT_CONNECTING; /* Start cyclic timer */ sys_timeout(MQTT_CYCLIC_TIMER_INTERVAL*1000, mqtt_cyclic_timer, client); client->cyclic_tick = 0; /* Start transmission from output queue, connect message is the first one out*/ mqtt_output_send(client, &client->output, client->conn); #endif return ERR_OK; } In TLS mode, it configures the special call backs for tls handling (tls_rcp_recv_cb() and tls_tcp_sent_cb()) and moves the connection state into TLS_HANDSHAKING mode. Receiver Callback Because the normal receiver callback mqtt_tcp_recv_cb() does not work with the TLS layer, I have implemented a function which reads from the TLS layer: 💡 Note that the error handling is not completed yet! err_t mqtt_recv_from_tls(mqtt_client_t *client) { int nof; mqtt_connection_status_t status; struct pbuf p; /*! \todo check if can we really use rx_buffer here? */ nof = mbedtls_ssl_read(client->ssl_context, client->rx_buffer, sizeof(client->rx_buffer)); if (nof>0) { printf("mqtt_recv_from_tls: recv %d\r\n", nof); memset(&p, 0, sizeof(struct pbuf)); /* initialize */ p.len = nof; p.tot_len = p.len; p.payload = client->rx_buffer; status = mqtt_parse_incoming(client, &p); if (status!=MQTT_CONNECT_ACCEPTED) { return ERR_CONN; /* connection error */ /*! \todo In case of communication error, have to close connection! */ } } return ERR_OK; } TLS Sent Callback For every packet sent, the callback tsl_tcp_sent_cb() gets called: #if MQTT_USE_TLS /** * TCP data sent callback function. @see tcp_sent_fn * @param arg MQTT client * @param tpcb TCP connection handle * @param len Number of bytes sent * @return ERR_OK */ static err_t tls_tcp_sent_cb(void *arg, struct tcp_pcb *tpcb, u16_t len) { printf("tls_tcp_sent_cb\r\n"); return mqtt_tcp_sent_cb(arg, tpcb, 0); /* call normal (non-tls) callback */ } #endif /*MQTT_USE_TLS */ It call the corresponding callback in the MQTT layer: /** * TCP data sent callback function. @see tcp_sent_fn * @param arg MQTT client * @param tpcb TCP connection handle * @param len Number of bytes sent * @return ERR_OK */ static err_t mqtt_tcp_sent_cb(void *arg, struct tcp_pcb *tpcb, u16_t len) { mqtt_client_t *client = (mqtt_client_t *)arg; LWIP_UNUSED_ARG(tpcb); LWIP_UNUSED_ARG(len); if (client->conn_state == MQTT_CONNECTED) { struct mqtt_request_t *r; printf("mqtt_tcp_sent_cb: and MQTT_CONNECTED\r\n"); /* Reset keep-alive send timer and server watchdog */ client->cyclic_tick = 0; client->server_watchdog = 0; /* QoS 0 publish has no response from server, so call its callbacks here */ while ((r = mqtt_take_request(&client->pend_req_queue, 0)) != NULL) { LWIP_DEBUGF(MQTT_DEBUG_TRACE,("mqtt_tcp_sent_cb: Calling QoS 0 publish complete callback\n")); if (r->cb != NULL) { r->cb(r->arg, ERR_OK); } mqtt_delete_request(r); } /* Try send any remaining buffers from output queue */ mqtt_output_send(client, &client->output, client->conn); } return ERR_OK; } Net.c Functions The interface to the network/lwip layer for mbed TLS is implemented in net.c. The receiving function puts the incoming data into a ring buffer and returns the number of bytes received /* * Read at most 'len' characters */ int mbedtls_net_recv( void *ctx, unsigned char *buf, size_t len ) { struct mqtt_client_t *context; context = (struct mqtt_client_t *)ctx; if(context->conn == NULL) { return( MBEDTLS_ERR_NET_INVALID_CONTEXT ); } if (RNG1_NofElements()>=len) { printf("mbedtls_net_recv: requested nof: %d, available %d\r\n", len, (int)RNG1_NofElements()); if (RNG1_Getn(buf, len)==ERR_OK) { return len; /* ok */ } } return 0; /* nothing read */ } The sending function writes the data with tcp_write(): /* * Write at most 'len' characters */ int mbedtls_net_send( void *ctx, const unsigned char *buf, size_t len ) { struct mqtt_client_t *context; context = (struct mqtt_client_t *)ctx; int err; if(context->conn == NULL) { return( MBEDTLS_ERR_NET_INVALID_CONTEXT ); } printf("mbedtls_net_send: len: %d\r\n", len); err = tcp_write(context->conn, buf, len, TCP_WRITE_FLAG_COPY /*| (wrap ? TCP_WRITE_FLAG_MORE : 0)*/); if (err!=0) { return MBEDTLS_ERR_SSL_WANT_WRITE; } return len; /* >0: no error */ } Sending MQTT Messages With TLS, sending MQTT messages is using mbedtls_ssl_write() instead of tcp_write(): /** * Try send as many bytes as possible from output ring buffer * @param rb Output ring buffer * @param tpcb TCP connection handle */ static void mqtt_output_send(mqtt_client_t *client, struct mqtt_ringbuf_t *rb, struct tcp_pcb *tpcb) { err_t err; int nof; u8_t wrap = 0; u16_t ringbuf_lin_len = mqtt_ringbuf_linear_read_length(rb); u16_t send_len = tcp_sndbuf(tpcb); LWIP_ASSERT("mqtt_output_send: tpcb != NULL", tpcb != NULL); if (send_len == 0 || ringbuf_lin_len == 0) { return; } LWIP_DEBUGF(MQTT_DEBUG_TRACE,("mqtt_output_send: tcp_sndbuf: %d bytes, ringbuf_linear_available: %d, get %d, put %d\n", send_len, ringbuf_lin_len, ((rb)->get & MQTT_RINGBUF_IDX_MASK), ((rb)->put & MQTT_RINGBUF_IDX_MASK))); if (send_len > ringbuf_lin_len) { /* Space in TCP output buffer is larger than available in ring buffer linear portion */ send_len = ringbuf_lin_len; /* Wrap around if more data in ring buffer after linear portion */ wrap = (mqtt_ringbuf_len(rb) > ringbuf_lin_len); } #if MQTT_USE_TLS printf("mqtt_output_send: | (wrap ? TCP_WRITE_FLAG_MORE : 0)); #endif if ((err == ERR_OK) && wrap) { mqtt_ringbuf_advance_get_idx(rb, send_len); /* Use the lesser one of ring buffer linear length and TCP send buffer size */ send_len = LWIP_MIN(tcp_sndbuf(tpcb), mqtt_ringbuf_linear_read_length(rb)); #if MQTT_USE_TLS printf(); #endif } if (err == ERR_OK) { mqtt_ringbuf_advance_get_idx(rb, send_len); /* Flush */ tcp_output(tpcb); } else { LWIP_DEBUGF(MQTT_DEBUG_WARN, ("mqtt_output_send: Send failed with err %d (\"%s\")\n", err, lwip_strerr(err))); } } Summary In order to get MQTT working with TLS/SLL and lwip, I had to deep dive into how TLS and lwip works. I have to admit that things were not as easy as I thought as both MQTT and TLS are new to me, and I only had used lwip as a ‘black box’. The current implementation is very likely not ideal, not that clean and lacks some error handling. But it ‘works’ fine so far with a local Mosquitto broker. Plus I have learned a lot new things. I plan to clean it up more, add better error handling, plus to add FreeRTOS in a next step. Will see how it goes :-). I hope this is useful for you. I have pushed the application for the NXP FRDM-K64F board on GitHub (). I plan to update/improve/extend the implementation, so make sure you check the latest version on GitHub. I hope you find this useful to add TLS to your MQTT application with lwip. 💡 There is an alterative lwip API which should be easier to use with TLS. I have found that one after writing this article: How to add server certificate verification, see my next article: “Tuturial: mbedTLS SLL Certificate Verification with Mosquitto, lwip and MQTT“. Happy Encrypting 🙂 Links - MQTT: - lwip: - mbed TLS library: - mbed TLS tutorial: - TLS/SSL: - MQTT on NXP FRDM-K64F: MQTT with lwip and NXP FRDM-K64F Board - Introduction to TLS: Introduction to Security and TLS (Transport Layer Security) - Enable TLS in Mosquitto Broker: Enable Secure Communication with TLS and the Mosquitto Broker - Project used in this article on GitHub: - Tuturial: mbedTLS SLL Certificate Verification with Mosquitto, lwip and MQTT - TLS Security: - TLS and web certificates explained: - Understanding security in IoT: - Understanding Transport Layer Security: Thanks, Erich. Will have to give it ago. Have you played with FNET at all? Found that a while ago when looking for lwip alternatives. looks like that has mbed tls built in too. Hi Carl, yes, I’m aware of FNET. But as it (currently?) only supports NXP devices, I have not used it. lwip on the other side is much more broadly used and supported. Erich Hi Erich, Compliments for gettin it running ! I expect that my implementation ( MQX based ) should be a little bit simpler since RTCS give socket support with asyncronous receive and sending of the TCP/IP packets. Anyway, thank very much for all these article about MQTT and SSL : they are really inspiring and offer also a list of useful links. Luca Thanks :-). Yes, I could have used sockets as well with lwip, but I wanted to start on the very low level, as it is easier to add a sockets layer later then to go the other way round. Pingback: Tuturial: mbedTLS SLL Certificate Verification with Mosquitto, lwip and MQTT | MCU on Eclipse Hi Erich, Great introduction! It’s very useful. I have one question about the reduce RAM usage part. The size of MBEDTLS_MPI_MAX_SIZE is reduced from 1024(default) to 512. I dont know what is MPI. How can you decide the 512 is enough? Can you give me some clues? 🙂 Thanks! Hi Amanda, see. MPI stands for Message Passing Interface, and that define is used in many places for buffers. It basically describes the maxiumum number of bytes which can be passed in a single message. Basically it means that the keys exchanged should not be larger than 512 bytes. I hope this helps, Erich It is possible to use MQTT mbedTLS with ESP8266 ? You mean connecting from a NXP FRDM board (e.g. FRDM-K64F) with an MQTT client using mbedTLS to a MQTT server running on a ESP8266? Not done that, but I don’t see a reason why this would not work if the software on the ESP8266 is properly running the protocol. I meant connect the NXP FRDM board to a ESP8266 (over UART) and send to a MQTT broker ( AWS ) values or strings. I didn’t find any exemples in the internet using AT command that’s why I was wondering if it was possible ? I don’t see a reason why this should not work. I have done similar things (see). Where are the certificates loaded? I didnt found where the CA, certificate and private key are passed. – Richmond Hi Richmond, see Erich FYI, LWIP now supports MQTT over TLS via ALTCP_TLS. It now supports loading of CA certificate, client certificate and private key. Tested it working with Amazon AWS IoT cloud. Hi Erich, I’m trying since months to follow your amazing tutorial step by step. I bought all materials and installed all tools on Linux, and I’m trying to connect the board with a local mosquitto broker (in my PC). My two Questions: 1- How to configure the board (client-publisher) with the local mosquitto broker? Note: In “config.h” I made in /* broker settings */ CONFIG_USE_BROKER_LOCAL (1) and CONFIG_USE_BROKER_HSLU (0), and for /* client configuration settings */ I gave my PC settings in not WORK_NETZWORK. And in /* connection settings to broker */ I set again my PC settings (HOST_NAME, HOST_IP) in CONFIG_USE_BROKER_LOCAL. Unfortunatly I could not see the result becuse it was a problem in the debugging. 2- How to make a correct debug, every time I make debug I get this message in the “Debugger Console” of MCUXpresso: “GNU gdb (GNU Tools for Arm Embedded Processors 7-2017-q4-major) 8.0.50.20171128”. Program stopped. ResetISR () at ../startup/startup_mk64f12.c:460 460 void ResetISR(void) { Temporary breakpoint 1, main () at ../source/lwip_mqtt.c:844 844 SYSMPU_Type *base = SYSMPU;” And before I ge this message in the Console “[MCUXpresso Semihosting Telnet console for ‘FRDM-K64F_lwip_mqtt_bm LinkServer Debug’ started on port 52358 @ 127.0.0.1]” The Console writes this: “MCUXpresso IDE RedlinkMulti Driver v10.2 (Jul 25 2018 11:28:11 – crt_emu_cm_redlink build 555) Reconnected to existing link server Connecting to probe 1 core 0:0 (using server started externally) gave ‘OK’ ============= SCRIPT: kinetisconnect.scp ============= Kinetis Connect Script DpID = 2BA01477 Assert NRESET Reset pin state: 00 Power up Debug MDM-AP APID: 0x001C0000 MDM-AP System Reset/Hold Reset/Debug Request MDM-AP Control: 0x0000001C MDM-AP Status (Flash Ready) : 0x00000032 Part is not secured MDM-AP Control: 0x00000014 Release NRESET Reset pin state: 01 MDM-AP Control (Debug Request): 0x00000004 MDM-AP Status: 0x0001003A MDM-AP Core Halted ============= END SCRIPT ============================= Probe Firmware: MBED CMSIS-DAP (MBED) Serial Number: 024002014D87DE5BB07923E3 VID:PID: 0D28:0204 USB Path: /dev/hidraw0 Using memory from core 0:0 after searching for a good core debug interface type = Cortex-M3/4 (DAP DP ID 2BA01477) over SWD TAP 0 processor type = Cortex-M4 (CPU ID 00000C24) on DAP AP 0 number of h/w breakpoints = 6 number of flash patches = 2 number of h/w watchpoints = 4 Probe(0): Connected&Reset. DpID: 2BA01477. CpuID: 00000C24. Info: Debug protocol: SWD. RTCK: Disabled. Vector catch: Disabled. Content of CoreSight Debug ROM(s): RBASE E00FF000: CID B105100D PID 04000BB4C4 ROM dev (type 0x1) ROM 1 E000E000: CID B105E00D PID 04000BB00C ChipIP dev SCS (type 0x0) ROM 1 E0001000: CID B105E00D PID 04003BB002 ChipIP dev DWT (type 0x0) ROM 1 E0002000: CID B105E00D PID 04002BB003 ChipIP dev FPB (type 0x0) ROM 1 E0000000: CID B105E00D PID 04003BB001 ChipIP dev ITM (type 0x0) ROM 1 E0040000: CID B105900D PID 04000BB9A1 CoreSight dev TPIU type 0x11 Trace Sink – TPIU ROM 1 E0041000: CID B105900D PID 04000BB925 CoreSight dev ETM type 0x13 Trace Source – core ROM 1 E0042000: CID B105900D PID 04003BB907 CoreSight dev ETB type 0x21 Trace Sink – ETB ROM 1 E0043000: CID B105900D PID 04001BB908 CoreSight dev CSTF type 0x12 Trace Link – Trace funnel/router Inspected v.2 On chip Kinetis Flash memory module FTFE_4K.cfx Image ‘Kinetis SemiGeneric Feb 17 2017 17:24:02’ Opening flash driver FTFE_4K.cfx Sending VECTRESET to run flash driver Flash variant ‘K 64 FTFE Generic 4K’ detected (1MB = 256*4K at 0x0) Closing flash driver FTFE_4K.cfx NXP: MK64FN1M0xxx12 ( 65) Chip Setup Complete Connected: was_reset=true. was_stopped=true Awaiting telnet connection to port 3330 … GDB nonstop mode enabled Opening flash driver FTFE_4K.cfx (already resident) Sending VECTRESET to run flash driver Flash variant ‘K 64 FTFE Generic 4K’ detected (1MB = 256*4K at 0x0) Writing 142880 bytes to address 0x00000000 in Flash” I hope you can help. 1) in lwip_app.c you can onfigure the IP address of the broker (I have not used DHCP for this) 2) I recommend you use the SEGGER J-Link firmware instead I hope this helps, Erich Hi Erich, thanks for your reply. I didn’t find any file with the name lwip_app.c! Hi Erich, 1- I’m not working on mqtt-FreeRTOS without mbedtls but with mbedtls “FRDM-K64F_lwip_lwip_mqtt_bm”. I think, I can configure the IP address in /source/config.h file. 2- When MCUXpresso searches for a propes then it findes only LinkServer type. So I have no chance to make with SEGGER J-Link. Hi Ahmed, you can use the OpenSDA as a Segger J-Link or P&E Multilink, see Erich Hello Erich, first, thank you very much for such a good job. Do you think is it possible not to use the RNG1 Module? how should I modify the mbedtls_net_recv? Thank you, To be more precise, I see that, porting this example in another microcontroller, the mbedtls_net_recv always has no data in the RNG1 Ringbuffer. Could you explain how it is used? Thank you, Mattia In that case it means that you are not receiving the data. Have you tried to run it on a K64F as well? I’m waiting for the MCUXpresso to be available to download, then I will test on my K64F. However, I can’t understand very well the firmware. if (RNG1_Getn(buf, len)==ERR_OK) { return len; /* ok */ } But who writes on buf? The strange thing is that with no TLS the program is perfect, so it seems that some union between TLS and TCP is missing. I found what the problem was: it seems that tcp_write didn’t send data. I put a tcp_output, now it works! Thanks, Mattia You can use whatever you like. The RNG1 is a simple implementation of a ring buffer, but you can replace it with your own implementation too. Hi Erich, I’m working on a project to use the device shadow service from AWS IoT. Some of the libraries that i’m using could only working properly in the environment with OS so I’m trying to port MQTT Client with TLS in bare metal way. The project mentoned in the article on github needs FreeRTOS currently. Is there any old working versions without FreeRTOS? That would be very helpful for the project I’m working on. I think I started that project in bare metal mode, so you might go back in the history of git to get that version. But I believe using anything like this does not make any sense without an RTOS. It makes it so much more complicated without an RTOS, so I recommend you use one.
https://mcuoneclipse.com/2017/04/17/tutorial-secure-tls-communication-with-mqtt-using-mbedtls-on-top-of-lwip/?like_comment=90039&_wpnonce=fd4bb3e0f4
CC-MAIN-2020-29
en
refinedweb
Name EXT_device_base Name Strings EGL_EXT_device_base Contributors James Jones Daniel Kartch Jamie Madill Contacts James Jones, NVIDIA (jajones 'at' nvidia.com) Status Complete Rewritten in terms of split functionality in EXT_dispay_device and EXT_device_enumeration. Version Version 9 - March 24th, 2015 Number EGL Extension #72 Extension Type EGL client extension Dependencies Written against the wording of EGL 1.5. The specifications of EGL_EXT_device_query and EGL_EXT_device_enumeration are required to determine the specification of this extension, although those extensions may not be supported.. This extension defines the first step of this bootstrapping process: Device enumeration. New Types As defined by EGL_EXT_device_query. New Functions As defined by EGL_EXT_device_query and EGL_EXT_device_enumeration. New Tokens As defined by EGL_EXT_device_query. Add to section "3.2 Devices" "EGL_EXT_device_base is equivalent to the combination of the functionality defined by EGL_EXT_device_query and EGL_EXT_device_enumeration." Issues 1. Should there be a mechanism (such as an attribute list) to filter devices in eglQueryDevicesEXT()? RESOLVED: No. This could develop too much complexity, like the EGLConfig mechanism. Instead, force applications to query all devices and implement any desired filtering themselves. 2. Should there be an eglSetDeviceAttribEXT()? RESOLVED: No. Device properties are immutable. 3. Should a device file descriptor attribute be included in the base specification? RESOLVED: No. It seems like an arbitrary attribute to include in the base extension. Other extensions can easily be added if this or other device attributes are needed. 4. Should EGLDeviceEXT handles be opaque pointers or 32-bit values? RESOLVED: Opaque pointers. The trend seems to be to use opaque pointers for object handles, and opaque pointers allow more implementation flexibility than 32-bit values. Additionally, the introduction of the EGLAttrib type allows inclusion of pointer-sized types in attribute lists, which was the only major advantage of 32-bit types. 5. Should eglQueryDisplayAttribEXT be defined as part of this extension? RESOLVED: Yes. There are no other known uses for this function, so it should be defined here. If other uses are found, future extension specifications can reference this extension or retroactively move it to a separate extension. 6. How should bonded GPU configurations, such as SLI or Crossfire be enumerated? What about other hybrid rendering solutions? RESOLVED: Bonded GPUs should appear as one device in this API, since the client APIs generally treat them as one device. Further queries can be added to distinguish the lower-level hardware within these bonded devices. Hybrid GPUs, which behave independently but are switched between in a manner transparent to the user, should be enumerated separately. This extension is intended to be used at a level of the software stack below this type of automatic switching or output sharing. 7. Should this extension require all displays to have an associated, queryable device handle? RESOLVED: Yes. This allows creating new namespace containers that all displays can be grouped in to and allows existing applications with display-based initialization code to easily add device-level functionality. Future extensions are expected to expose methods to correlate EGL devices and native devices, and to use devices as namespaces for future objects and operations, such as cross-display EGL streams. 8. Are device handles returned by EGL valid in other processes? RESOLVED: No. Another level of indirection is required to correlate two EGL devices in separate processes. 9. Is a general display pointer query mechanism needed, or should an eglGetDevice call be added to query a display's associated device? RESOLVED: A general mechanism is better. It may have other uses in the future. 10. Should a new type of extension be introduced to query device- specific extensions? RESOLVED: Yes. Without this mechanism, it is likely that most device extensions would require a separate mechanism to determine which devices actually support them. Further, requiring all device-level extensions to be listed as client extensions forces them to be implemented in the EGL client library, or "ICD". This is unfortunate since vendors will likely wish to expose vendor-specific device extensions. These advantages were weighed against the one known disadvantage of a separate extension type: Increasing the complexity of this extension and the EGL extension mechanism in general. 11. Is eglQueryDeviceStringEXT necessary, or should the device extension string be queried using eglQueryDeviceAttribEXT? RESOLVED: Using a separate query seems more consistent with how the current extension strings are queried. 12. Should this extension contain both device enumeration and the ability to query the device backing an EGLDisplay? RESOLVED: This extension initially included both of these abilities. To allow simpler implementations to add only the ability to query the device of an existing EGLDisplay, this extension was split into two separate extensions: EGL_EXT_device_query EGL_EXT_device_enumeration The presence of this extension now only indicates support for both of the above extensions. Revision History: #9 (March 24th, 2015) James Jones - Split the extension into two child extensions: EGL_EXT_device_query EGL_EXT_device_enumeration #8 (May 16th, 2014) James Jones - Marked the extension complete. - Marked all issues resolved. #7 (April 8th, 2014) James Jones - Renamed eglGetDisplayAttribEXT back to eglQueryDisplayAttribEXT. - Update wording based on the EGL 1.5 specification. - Use EGLAttrib instead of EGLAttribEXT. - Assigned values to tokens. #6 (November 6th, 2013) James Jones - Added EGL_BAD_DEVICE_EXT error code. - Renamed some functions for consistency with the core spec #5 (November 6th, 2013) James Jones - Specified this is a client extension - Renamed eglQueryDisplayPointerEXT eglGetDisplayAttribEXT and modified it to use the new EGLAttribEXT type rather than a void pointer - Introduced the "device" extension type. - Added eglQueryDeviceStringEXT to query device extension strings - Removed issues 5, 10, and 12 as they are no longer relevant - Added issues 10 and 11. #4 (May 14th, 2013) James Jones - Merged in EGL_EXT_display_attributes - Changed eglGetDisplayPointerEXT to eglQueryDisplayPointerEXT - Remove eglGetDisplayAttribEXT since it has no known use case #3 (April 23rd, 2013) James Jones - Include EGL_NO_DEVICE_EXT - Added issues 8 and 9 #2 (April 18th, 2013) James Jones - Reworded issue 3 and flipped the resolution - Added issues 5, 6, and 7 - Filled in the actual spec language modifications - Renamed from EGL_EXT_device to EGL_EXT_device_base - Fixed some typos #1 (April 16th, 2013) James Jones - Initial Draft
https://docs.nvidia.com/drive/active/5.1.6.0L/nvvib_docs/DRIVE_OS_Linux_SDK_Development_Guide/baggage/EGL_EXT_device_base.html
CC-MAIN-2020-29
en
refinedweb
On Saturday 10 August 2002 14:55, Jens Vagelpohl wrote: > if you want to ensure to always get the "physical" parent (the parent the > object is actually situated in) then do it like this:: [skip] Why not self.getParentNode().method ? -- Sincerely yours, Bogdan M. Maryniuck +#if defined(__alpha__) && defined(CONFIG_PCI) + /* + * The meaning of life, the universe, and everything. Plus + * this makes the year come out right. + */ + year -= 42; +#endif (From the patch for 1.3.2: (kernel/time.c), submitted by Marcus Meissner)
https://mail.python.org/pipermail/python-list/2002-August/166572.html
CC-MAIN-2020-29
en
refinedweb
Counting Sort Reading time: 20 minutes | Coding time: 7 minutes Counting sort is an algorithm for sorting integers in linear time. It can perform better than other efficient algorithms like Quick Sort, if the range of the input data is very small compared to the number of input data. It is a stable, non-comparison and non-recursive based sorting. It takes in a range of integers to be sorted. It uses the range of the integers (for example, the range of integers between 0–100), and counts the number of times each unique number appears in the unsorted input. It works by counting the number of objects having distinct key values (kind of hashing). Then doing some arithmetic to calculate the position of each object in the output sequence. In Counting sort, the frequencies of distinct elements of the array to be sorted is counted and stored in an auxiliary array, by mapping its value as an index of the auxiliary array. Is counting sort a comparison based sorting algorithm? Algorithm Let's assume that, array A of size N needs to be sorted. - Step 1: Initialize the auxilliary array Aux[] as 0. (Note: The size of this array should be ≥max(A[])) - Step 2: Traverse array A and store the count of occurrence of each element in the appropriate index of the Aux array, which means, execute Aux[A[i]]++ for each i, where i ranges from [0,N−1]. - Step 3: Initialize the empty array sortedA[] - Step 4: Traverse array Aux and copy i into sortedA for Aux[i] number of times where 0≤ i ≤max(A[]). Example The operation of COUNTING - SORT on an input array A[1...8], where each element of A is a non-negative integer no larger than k = 5. Now taking new array B of same length as of original. Put number at the value at that index and then increament the value at the index of the count array.[]. - C++ C++ // C++ Program for counting sort #include <iostream> Using namespace std;} Complexity The array A is traversed in O(N) time and the resulting sorted array is also computed in O(N) time. Aux[] is traversed in O(K) time. Therefore, the overall time complexity of counting sort algorithm is O(N+K). Worst case time complexity: Θ(N+K) Average case time complexity: Θ(N+K) Best case time complexity: O(N+K) Space complexity: O(N+K) where N is the range of elements and K is the number of buckets Important Points - Counting sort works best if the range of the input integers to be sorted is less than the number of items to be sorted - It is a stable, non-comparison and non-recursive based sorting. - It assumes that the range of input is known - It is often used as a sub-routine to another sorting algorithm like radix sort - Counting sort uses a partial hashing to count the occurrence of the data object in O(1) time - Counting sort can be extended to work for negative inputs as well
https://iq.opengenus.org/counting-sort/
CC-MAIN-2020-29
en
refinedweb
Lopy4 does not send messages in DR4 in US915 frequencies Hello, we're are experiencing some problems with Lopy4 and US915 channels 64-72. Sending a message (ABP mode) using DR4 does not seem to work because the message is sent always using DR0. Moreover, when we try to use the OTAA join mode, the join request messages are sent using only the channels 0 - 63, never the channels 64 - 72. From LoraWAN specification, the device "SHALL transmit the join request message on random 125KHz channels amongst the 64 125KHz channes defined using DR0 and on 500KHz channels amongst the 8 500KHz channels defined using DR4". Below the code we used to perform the test in ABP mode. from network import LoRa import socket import time import struct import binascii import pycom lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.US915) for i in range(0, 72): lora.remove_channel(i) lora.add_channel(0, frequency=902700000, dr_min=0, dr_max=3) lora.add_channel(64, frequency=903000000, dr_min=4, dr_max=4) dev_addr = struct.unpack(">l",binascii.unhexlify("xxxx"))[0] nwk_swkey = binascii.unhexlify("xxxxx") app_swkey = binascii.unhexlify("xxxxx") lora.join(activation=LoRa.ABP, auth=(dev_addr, nwk_swkey, app_swkey),timeout=0, dr=4) i = 0 s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) s.setsockopt(socket.SOL_LORA, socket.SO_DR, 0) s.setblocking(False) pkt = bytes([i]) print('Sending:', pkt) s.send(pkt) time.sleep(10) s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) s.setsockopt(socket.SOL_LORA, socket.SO_DR, 1) s.setblocking(False) pkt = bytes([i]) print('Sending:', pkt) s.send(pkt) time.sleep(10) s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) s.setsockopt(socket.SOL_LORA, socket.SO_DR, 2) s.setblocking(False) pkt = bytes([i]) print('Sending:', pkt) s.send(pkt) time.sleep(10) s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) s.setsockopt(socket.SOL_LORA, socket.SO_DR, 3) s.setblocking(False) pkt = bytes([i]) print('Sending:', pkt) s.send(pkt) time.sleep(10) s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) s.setsockopt(socket.SOL_LORA, socket.SO_DR, 4) s.setblocking(False) pkt = bytes([i]) print('Sending:', pkt) s.send(pkt) The first 4 messages are sent and recieved correctly using DR0 to DR3. The last one is sent (and received by the network server) using DR0 instead of DR4 (see image) Our device is connected to the Senet network. Has anyone had the same problem? We need to address urgently this issue. Thanks for any advice. Hi @oligauc. Your suggestion to set dr_min=0 for the channel 64 did the trick! Modifying just that using the same code now the device send four messages using DR0 through DR3 at 902.7 and the last one at DR4 at 903.0 If I go back putting dr_min=4 I obtain the same problem again. I still need to make other tests using the OTAA mode, but in the mean time I want to thank you for your support. @oligauc As you can see in the sample code, I send 4 messages one after another setting the DR rate each time from 0 to 4. The first 3 messages are sent using the DR that was specified in the socket option directive (DR0 through DR3). The fourth one is sent at DR0 disregarding the DR4. Since the test code uses ABP there is no join request. I think that the problem with OTAA mode is related to the same cause, for this reason I did the test with ABP and attached this code. I think that solving the problem with ABP mode will solve the issue with OTAA too. Thanks. @_peter When creating the socket you are specifying a datarate of 0 (s.setsockopt(socket.SOL_LORA, socket.SO_DR, 0). Please see the following snapshots for typical join request and uplink data as seen on the loraserver. Thank you, @oligauc I'll try setting the dr_min=0 as you said and changing the frequency. Beside this, can you explain to me why the last message sent by the code above uses DR0 and is received by the gateway with DR0 instead of DR4? If the problem was in the configuration of the gateway, the Lopy4 should have sent the message at DR4 anyway and the gateway should have not be able to receive it. Instead it seems that the DR4 directive when I create the socket is ignored, no exception is raised, and the message is sent and received using DR0. Thank you again for your time. Q1: Most Lorawan gateways support only 1 500KHz uplink channel. You can specify any frequency you wish, the 904.6 MHz was an arbitrary choice, however, this should reflect the gateway configuration. Q2. You are correct the dr_min should be equal to dr_max = 4. This is just a current firmware requirement, that might change soon. Q3: We carried out test on the US 500KHz channels and if your gateway and lora server are configured correctly it should work. Hi @oligauc. I'm not sure to understand the solution you propose. Following are the points that are unclear to me: 1- Why add the channel 64-72 using the same frequency value? And, in addition, why start from 904.6 Mhz instead of 903.0 stated in the LoraWAN regional settings for US915? 2- From the regional settings specifications, the channels 64-72 can only use DR4, why set the channel with minimum DR=0? 3- Our objective with the code attached was not to test the automatic hop between frequencies, but to force the transmission with a specified DR, which worked for DR0-3 but not with DR4 As a side note about the third point, in another test we did adding the channels 0-7 and 64 in the same manner as shown in the code, the device hopped correctly between frequencies using however only the channel 0-7, never the 64 channel. This was the reason we tried to force the DR with the above test code. I hope I was able to explain clearly my doubts, english is not my first language. Thank you for your reply and your time. @_peter If you are going to manually add / remove channels, there will not be any frequency hopping. To send at DR4 you need to remove all the 125kHz channels and add only the 500KHz channels.See below for i in range(0, 72): lora.remove_channel(i) for i in range (64, 72): lora.add_channel(i, frequency=904600000, dr_min=0, dr_max=4)
https://forum.pycom.io/topic/5394/lopy4-does-not-send-messages-in-dr4-in-us915-frequencies/4?lang=en-US
CC-MAIN-2020-29
en
refinedweb
Syllabus & Course Policies Overview.; all of the old calculus-based examples have been removed over the years. However, taking calculus is a great way to brush up on the arithmetic and algebra that appear regularly in CS 61A. There are no formal programming-related prerequisites. This course was built for students without prior programming experience. It try everything out to figure out what combination of these events works best for you. Lecture There are three 50-minute lectures per week. Slides and videos will be posted before each lecture. A screencast of the live lecture will be posted soon after each lecture occurs. This course moves fast, and lecture is tightly coordinated with section. Please attend or watch each lecture the day it is given and before you attend section. Section There are two sections each week: one lab and one discussion.. Office Hours In office hours, you can ask questions about the material, receive guidance on assignments, and work with peers and course staff in a small group setting. See the office hour schedule and come by; no appointments are needed. Group Mentoring Sections Optional group mentoring sections are held each week, starting the week after midterm 1. Each section features worksheets that review topics from the course. These sections of at most 5 or 6 students meet weekly and exist to create a stronger feeling of community in the class and reinforce conceptual understanding of course material. These will be recurring sections which will have the same group of students, and sign-ups for these sections will open after the first midterm. Assignments Each week, there will be problems assigned for you to work on, most of which will involve writing and analyzing, of which 11 will be scored. Completing lab exercises on time contributes toward your participation score. Homework Homeworks are weekly assignments meant to help you apply the concepts learned in lecture and section on more challenging problems. They will usually be released on Friday night and be due the following Thursday. - Answering some questions correctly. - For the questions not answered correctly, describe what you did try and where you got stuck. if you ask. You may also work alone on all projects, although partners are recommended for the paired projects. Projects are graded on both correctness and composition. Exams Midterm 1 will be held 7pm-9pm Monday, September 16. You are permitted to bring one double-sided, letter-sized, handwritten sheets of notes. Midterm 2 will be held 8pm-10pm Thursday, October 24. You are permitted to bring two double-sided, letter-sized, handwritten sheets of notes. One of them can be your notes from Midterm 1. Students who have a course conflict with a midterm should notify the staff before the exam in question, and special arrangements will be made. Details of how to report an exam conflict will be announced shortly. The final exam will be held 3pm-6pm Thursday, December 19. You are permitted to bring three double-sided, letter-sized, handwritten sheets of notes. Two of them can be your notes from Midterm 2. If you have a direct conflict with another final exam, we will allow you to take an alternate final 7pm-10pm on Thursday, December 19. We will not provide final alternates for any other reason. Resources Textbook The online textbook for the course is Composing Programs, which was created specifically for this course, based on the classic textbook Structure and Interpretation of Computer Programs.. Grading Your course grade is computed using a point system with a total of 300 points. - Midterm 1, worth 40 points. - Midterm 2, worth 50 points. - The final exam, worth 75 points. - Four projects, worth 100 points. - Homework, worth 25 points. - Section participation, worth 10 points. There are a handful extra credit points available throughout the semester, perhaps around 10, that are available to everyone. Each letter grade for the course corresponds to a range of scores: A+ ≥ 300 A ≥ 285 A- ≥ 270 B+ ≥ 250 B ≥ 225 B- ≥ 205 C+ ≥ 195 C ≥ 185 C- ≥ 175 D+ ≥ 170 D ≥ 165 D- ≥ 160 we try to avoid that scenario. If all goes according to plan, then these are the exact thresholds that will be used at the end of the course to assign grades (contrary to popular rumor). In a typical semester, about 60% of students taking the course for a letter grade will receive a B+ or higher. Incomplete grades will be granted only for medical or personal emergencies that cause you to miss the final or last part of the course, only for students who have completed the majority of the coursework, and only if work up to the point of the emergency has been satisfactory. Lab and Discussion Participation The participation score is designed to make sure that all students attend at least the first few weeks of lab and discussion sections to try them out. Continuing to attend provides a safety net in case of a low midterm score. Attending a discussion or submitting a scored lab will earn you one participation credit. There will be about 23 possible credits available (12 discussions after the first and 11 scored labs). To earn a perfect participation score in the course, you need to earn at least 10 credits. Your course section participation score is the number of participation credits you earn over the semester, up to 10. Additional participation credits beyond the first 10 contribute to recovery points on midterms. Earning 20 or more participation credits will give you the maximum amount of midterm recovery. We calculate your midterm recovery using the following logic, where participation is the number of participation credits you earn: def exam_recovery(your_exam_score, participation, max_exam_score, cap=20): half_score = max_exam_score / 2 max_recovery = max(0, (half_score - your_exam_score) / 2) recovery_ratio = min(participation, cap) / cap return max_recovery * recovery_ratio According to this formula, if you receive more than half the available points on each midterm,. There are no recovery points available on the final exam. Late Policy If you cannot turn in an assignment on time, contact your TA and partner as early as possible. Depending on the circumstance, we may grant extensions. - Labs: We very rarely accept late lab submissions. There is no partial credit. - Homework: We very rarely accept late homework submissions. - Projects: Submissions within 24 hours after the deadline will receive 75% of the earned score. Submissions that are 24 hours or more after the deadline will receive 0+fa19@berkeley.edu, denero@berkeley.edu,. If you are unsure if what you are doing is cheating, please clarify with the instructor!
https://inst.eecs.berkeley.edu/~cs61a/fa19/articles/about.html
CC-MAIN-2020-29
en
refinedweb
BH1750FVI lux (lx). Features 1) I2CC slave-address. 11) Adjustable measurement result for influence of optical window ( It is possible to detect min. 0.11 lx, max. 100000 lx by using this function. ) 12) Small measurement variation (+/- 20%) 13) The influence of infrared is very small. What is Typical Lux values These were taken from Wikipedia Typically to use this sensor you will need to purchase a module, here is a picture of one For those that are interested this is a schematic of the module Layout An easy module to connect being an I2C one Code This code example purposefully did not use any third party libraries, sometimes its nice to see how to work with I2C devices just by using the Arduino wire library #include <Wire.h> #include <math.h> int BH1750address = 0x23; //i2c address byte buff[2]; void setup() { Wire.begin(); Serial.begin(57600); } void loop() { int i; uint16_t val=0; BH1750_Init(BH1750address); delay(200); if(2==BH1750_Read(BH1750address)) { val=((buff[0]<<8)|buff[1])/1.2; Serial.print(val,DEC); Serial.println("lux"); } delay(150); } int BH1750_Read(int address) // { int i=0; Wire.beginTransmission(address); Wire.requestFrom(address, 2); while(Wire.available()) // { buff[i] = Wire.read(); // receive one byte i++; } Wire.endTransmission(); return i; } void BH1750_Init(int address) { Wire.beginTransmission(address); Wire.write(0x10);//1lx reolution 120ms Wire.endTransmission(); } Testing Open the serial monitor and change the light intensity, here is what I saw 1070 lux 5006 lux 7427 lux 8200 lux 8996 lux 8656 lux 6569 lux 150 lux 5 lux 2 lux 2 lux 3 lux 2 lux 10 lux 38 lux 266 lux 259 lux Links BH1750 BH1750FVI light intensity illumination module 3V-5V
http://arduinolearning.com/code/arduino-bh1750-sensor.php
CC-MAIN-2020-29
en
refinedweb
Omayevskiy - Total activity 72 - Last activity - Member since - Following 0 users - Followed by 0 users - Votes 0 - Subscriptions 21 Omayevskiy created a post, Download statistics for Android Studio not workingThe plugin download statistics for Android Studio seems not to work. Are there any known issues?Plugin page -> view download statistics -> downloads by products -> Android Studio is not listedYes, ... Omayevskiy created a post, Show Jacoco coverage datausing IDEA Ultimate 13.1.6Given the project: running:$ mvn clean installthis generates:target/javoco.exectarget/site/jacoco/If I open th... Omayevskiy created a post, Is it possible to add inspection classes dynamically?Hello,assuming registering inspections like:public class MynspectionToolProvider implements InspectionToolProvider { @Override public Class[] getInspectionClasses() { Collection<Class>... Omayevskiy created a post, How to update status of ProgressIndicator from within GlobalInspectionContextExtensionI have source code like:public class MyInspectionContext implements GlobalInspectionContextExtension< MyInspectionContext > { @Override public void performPreRunActivities(...) { ... Omayevskiy created a post, How to run a Backgroundable Task in progress and synchronously?Hello,what I am trying to achieve:Run multiple tasks in a background threads, but synchronously with order each after the other.I have tried to search the forum for the answer, but still have an op... Omayevskiy created a post, Plugin development for multiple Jetbrains productsHello,I am developing an open and free plugin for all Intellij, but this plugin should also work for smaller products like webstorm and phpstorm.I have an Ultimate licence and develop currently aga...
https://intellij-support.jetbrains.com/hc/en-us/profiles/1378942362-Omayevskiy
CC-MAIN-2020-29
en
refinedweb
Content Count7 Joined Last visited Community Reputation7 Newbie About Cameron Knight - RankNewbie Recent Profile Visitors The recent visitors block is disabled and is not being shown to other users. - Cameron Knight started following ScrollTrigger ScrollTrigger Cameron Knight commented on GreenSock's product in PluginsThis is absolutely incredible! Thanks Greensock team for all your hard work! 👏👏👏 - You guys rock @GreenSock. Thanks for your help. After playing around with the animation and digging through some forums, I found the solution! I had to add the following CSS to the element that was flickering and it stopped the flickering in Safari 🥳 -webkit-transform-style: preserve-3d - Hi @ZachSaucier, Thanks so much for the quick reply. I'll do what you suggested and start stripping back a few elements to see what the issue is. Appreciate the feedback. - Cameron Knight started following Page transition with GSAP and Barba.js and GSAP animation flashing in safari}); - I finally got a simple transition working with gsap 3 & barba V2! 🎉 The transition function that worked for me looks like this: function leaveAnimation(container) { return new Promise(async resolve => { await gsap .to(container, { duration: 1, opacity: 0, }) .then(); resolve(); }); } One really simple thing that took me a while to figure out was making the enter animation work. This was fixed by just adding css to make the barba container positioned absolute: .container { position: absolute; left: 0; top: 0; width: 100%; } I'm going to leave this working example here incase anyone else needs it to reference from. - Hey @ZachSaucier, really appreciate the quick response. The GSAP community is the bee's knees. Love it. I did make a post at Barba a few days ago so I'll wait to hear back from them. If I have any luck I'll post a working example here. Cameron Knight changed their profile photo - Hey gang. I've been trying to get a simple page transition working using GSAP 3 and Barba.js V2 for some time now but not having any luck. All the docs and examples out there haven't seemed to help me with the latest versions. I'm not a great developer but it's become my life long goal to one day solve this I've tried 2 methods below (onComplete and .then) but no dice. I'd appreciate any help I can get! Thank you! See the full code on codesandbox.io function leaveAnimation(e) { var done = this.async(); gsap.to(e, 1, { opacity: 0, onComplete: done }); } function enterAnimation(e) { return new Promise(resolve => { gsap.to(e, 2, { opacity: 1 }).then(resolve()); }); } barba.use(barbaPrefetch); barba.init({ debug: true, transitions: [ { name: "one-pager-transition", to: { namespace: ["one-pager"] }, sync: true, leave: data => leaveAnimation(data.current.container), enter: ({ next }) => enterAnimation(next.container) } ] });
https://greensock.com/profile/78004-cameron-knight/
CC-MAIN-2020-29
en
refinedweb
shapeless: Generic programming for Scala | Latest stable release 2.3.3 | Code of conduct def check[A, C <: Coproduct, K <: HList](data: Seq[A])( implicit gen: LabelledGeneric.Aux[A,C] keys: ops.union.Keys.Aux[C,K] toList: ops.hlist.ToTraversable.Aux[K,List,Symbol] ): List[(String,Boolean)] = { val classes: List[Symbol] = toList(keys()) classes.map(sym => (sym.name, data.exists(_.getClass.getSimpleName == sym.name))) } Int-> Doubleand Int-> String. Is this correct? Or am I using the library incorrectly? Rel[Int, Double]and Rel[Int, String], and you do get(1), how will the typesystem know whether the value is of type Doubleor String? vaultlibrary for a similar abstraction with different constraints Key[A], and then you can get the corresponding value implicit ev: SelectAll[H,I]to ensure all members of Iare in H. implicit ev: IsHCons[I]to ensure that Iis not empty. def foo[H <: Hlist, I <: HList](xs: H, ys: I)(implicit ev: SelectAll[H,I], ev2: IsHCons[I]): Unit = { println(xs.select[ev2.H]) }?
https://gitter.im/milessabin/shapeless?at=5c7fee5a1c597e5db693346b
CC-MAIN-2020-29
en
refinedweb
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project. On Thu, 1 Aug 2019, Uros Bizjak wrote: > On Wed, Jul 31, 2019 at 1:21 PM Richard Biener <rguenther@suse.de> wrote: > > > > On Sat, 27 Jul 2019, Uros Bizjak wrote: > > > > > On Sat, Jul 27, 2019 at 12:07 PM Uros Bizjak <ubizjak@gmail.com> wrote: > > > > > > > > How would one write smaxsi3 as a splitter to be split after > > > > > reload in the case LRA assigned the GPR alternative? Is it > > > > > even worth doing? Even the SSE reg alternative can be split > > > > > to remove the not needed CC clobber. > > > > > > > > > > Finally I'm unsure about the add where I needed to place > > > > > the SSE alternative before the 2nd op memory one since it > > > > > otherwise gets the same cost and wins. > > > > > > > > > > So - how to go forward with this? > > > > > > > > Sorry to come a bit late to the discussion. > > > > > > > > We are aware of CMOV issue for quite some time, but the issue is not > > > > understood yet in detail (I was hoping for Intel people to look at > > > > this). However, you demonstrated that using PMAX and PMIN instead of > > > > scalar CMOV can bring us big gains, and this thread now deals on how > > > > to best implement PMAX/PMIN for scalar code. > > > > > > > > I think that the way to go forward is with STV infrastructure. > > > > Currently, the implementation only deals with DImode on SSE2 32bit > > > > targets, but I see no issues on using STV pass also for SImode (on > > > > 32bit and 64bit targets). There are actually two STV passes, the first > > > > one (currently run on 64bit targets) is run before cse2, and the > > > > second (which currently runs on 32bit SSE2 only) is run after combine > > > > and before split1 pass. The second pass is interesting to us. > > > > > > > > The base idea of the second STV pass (for 32bit targets!) is that we > > > > introduce a DImode _doubleword instructons that otherwise do not exist > > > > with integer registers. Now, the passes up to and including combine > > > > pass can use these instructions to simplify and optimize the insn > > > > flow. Later, based on cost analysis, STV pass either converts the > > > > _doubleword instructions to a real vector ones (e.g. V2DImode > > > > patterns) or leaves them intact, and a follow-up split pass splits > > > > them into scalar SImode instruction pairs. STV pass also takes care to > > > > move and preload values from their scalar form to a vector > > > > representation (using SUBREGs). Please note that all this happens on > > > > pseudos, and register allocator will later simply use scalar (integer) > > > > registers in scalar patterns and vector registers with vector insn > > > > patterns. > > > > > > > > Your approach to amend existing scalar SImode patterns with vector > > > > registers will introduce no end of problems. Register allocator will > > > > do funny things during register pressure, where values will take a > > > > trip to a vector register before being stored to memory (and vice > > > > versa, you already found some of them). Current RA simply can't > > > > distinguish clearly between two register sets. > > > > > > > > So, my advice would be to use STV pass also for SImode values, on > > > > 64bit and 32bit targets. On both targets, we will be able to use > > > > instructions that operate on vector register set, and for 32bit > > > > targets (and to some extent on 64bit targets), we would perhaps be > > > > able to relax register pressure in a kind of controlled way. > > > > > > > > So, to demonstrate the benefits of existing STV pass, it should be > > > > relatively easy to introduce 64bit max/min pattern on 32bit target to > > > > handle 64bit values. For 32bit values, the pass should be re-run to > > > > convert SImode scalar operations to vector operations in a controlled > > > > way, based on various cost functions. > > > > I've looked at STV before trying to use RA to solve the issue but > > quickly stepped away because of its structure which seems to be > > tied to particular modes, duplicating things for TImode and DImode > > so it looked like I have to write up everything again for SImode... > > ATM, DImode is used exclusively for x86_32 while TImode is used > exclusively for x86_64. Also, TImode is used for different purpose > before combine, while DImode is used after combine. I don't remember > the details, but IIRC it made sense for the intended purpose. > > > > It really should be possible to run the pass once, handling a set > > of modes rather than re-running it for the SImode case I am after. > > See also a recent PR about STV slowness and tendency to hog memory > > because it seems to enable every DF problem that is around... > > Huh, I was not aware of implementation details... > > > > Please find attached patch to see STV in action. The compilation will > > > crash due to non-existing V2DImode SMAX insn, but in the _.268r.stv2 > > > dump, you will be able to see chain building, cost calculation and > > > conversion insertion. > > > > So you unconditionally add a smaxdi3 pattern - indeed this looks > > necessary even when going the STV route. The actual regression > > for the testcase could also be solved by turing the smaxsi3 > > back into a compare and jump rather than a conditional move sequence. > > So I wonder how you'd do that given that there's pass_if_after_reload > > after pass_split_after_reload and I'm not sure we can split > > as late as pass_split_before_sched2 (there's also a split _after_ > > sched2 on x86 it seems). > > > > So how would you go implement {s,u}{min,max}{si,di}3 for the > > case STV doesn't end up doing any transform? > > If STV doesn't transform the insn, then a pre-reload splitter splits > the insn back to compare+cmove. OK, that would work. But there's no way to force a jumpy sequence then which we know is faster than compare+cmove because later RTL if-conversion passes happily re-discover the smax (or conditional move) sequence. > However, considering the SImode move > from/to int/xmm register is relatively cheap, the cost function should > be tuned so that STV always converts smaxsi3 pattern. Note that on both Zen and even more so bdverN the int/xmm transition makes it no longer profitable but a _lot_ slower than the cmp/cmov sequence... (for the loop in hmmer which is the only one I see any effect of any of my patches). So identifying chains that start/end in memory is important for cost reasons. So I think the splitting has to happen after the last if-conversion pass (and thus we may need to allocate a scratch register for this purpose?) > (As said before, > the fix of the slowdown with consecutive cmov insns is a side effect > of the transformation to smax insn that helps in this particular case, > I think that this issue should be fixed in a general way, there are > already a couple of PRs reported). > > > You could save me some guesswork here if you can come up with > > a reasonably complete final set of patterns (ok, I only care > > about smaxsi3) so I can have a look at the STV approach again > > (you may remember I simply "split" at assembler emission time). > > I think that the cost function should always enable smaxsi3 > generation. To further optimize STV chain (to avoid unnecessary > xmm<->int transitions) we could add all integer logic, arithmetic and > constant shifts to the candidates (the ones that DImode STV converts). > > Uros. > > > Thanks, > > Richard. > > > > > The testcase: > > > > > > --cut here-- > > > long long test (long long a, long long b) > > > { > > > return (a > b) ? a : b; > > > } > > > --cut here-- > > > > > > gcc -O2 -m32 -msse2 (-mstv): > > > > > > _.268r.stv2 dump: > > > > > > Searching for mode conversion candidates... > > > insn 2 is marked as a candidate > > > insn 3 is marked as a candidate > > > insn 7 is marked as a candidate > > > Created a new instruction chain #1 > > > Building chain #1... > > > Adding insn 2 to chain #1 > > > Adding insn 7 into chain's #1 queue > > > Adding insn 7 to chain #1 > > > r85 use in insn 12 isn't convertible > > > Mark r85 def in insn 7 as requiring both modes in chain #1 > > > Adding insn 3 into chain's #1 queue > > > Adding insn 3 to chain #1 > > > Collected chain #1... > > > insns: 2, 3, 7 > > > defs to convert: r85 > > > Computing gain for chain #1... > > > Instruction conversion gain: 24 > > > Registers conversion cost: 6 > > > Total gain: 18 > > > Converting chain #1... > > > > > > ... > > > > > > (insn 2 5 3 2 (set (reg/v:DI 83 [ a ]) > > > (mem/c:DI (reg/f:SI 16 argp) [1 a+0 S8 A32])) "max.c":2:1 66 > > > {*movdi_internal} > > > (nil)) > > > (insn 3 2 4 2 (set (reg/v:DI 84 [ b ]) > > > (mem/c:DI (plus:SI (reg/f:SI 16 argp) > > > (const_int 8 [0x8])) [1 b+0 S8 A32])) "max.c":2:1 66 > > > {*movdi_internal} > > > (nil)) > > > (note 4 3 7 2 NOTE_INSN_FUNCTION_BEG) > > > (insn 7 4 15 2 (set (subreg:V2DI (reg:DI 85) 0) > > > (smax:V2DI (subreg:V2DI (reg/v:DI 84 [ b ]) 0) > > > (subreg:V2DI (reg/v:DI 83 [ a ]) 0))) "max.c":3:22 -1 > > > (expr_list:REG_DEAD (reg/v:DI 84 [ b ]) > > > (expr_list:REG_DEAD (reg/v:DI 83 [ a ]) > > > (expr_list:REG_UNUSED (reg:CC 17 flags) > > > (nil))))) > > > (insn 15 7 16 2 (set (reg:V2DI 87) > > > (subreg:V2DI (reg:DI 85) 0)) "max.c":3:22 -1 > > > (nil)) > > > (insn 16 15 17 2 (set (subreg:SI (reg:DI 86) 0) > > > (subreg:SI (reg:V2DI 87) 0)) "max.c":3:22 -1 > > > (nil)) > > > (insn 17 16 18 2 (set (reg:V2DI 87) > > > (lshiftrt:V2DI (reg:V2DI 87) > > > (const_int 32 [0x20]))) "max.c":3:22 -1 > > > (nil)) > > > (insn 18 17 12 2 (set (subreg:SI (reg:DI 86) 4) > > > (subreg:SI (reg:V2DI 87) 0)) "max.c":3:22 -1 > > > (nil)) > > > (insn 12 18 13 2 (set (reg/i:DI 0 ax) > > > (reg:DI 86)) "max.c":4:1 66 {*movdi_internal} > > > (expr_list:REG_DEAD (reg:DI 86) > > > (nil))) > > > (insn 13 12 0 2 (use (reg/i:DI 0 ax)) "max.c":4:1 -1 > > > (nil)) > > > > > > Uros. > > > > > > > -- > >)
https://gcc.gnu.org/legacy-ml/gcc-patches/2019-08/msg00011.html
CC-MAIN-2020-29
en
refinedweb
Setup Development Environment There are some prerequisites needed to develop and build an operator using Ansible. Also this guide and the operator-sdk assume you know Ansible roles. If you aren not yet up to speed please read about Ansible roles before proceeding. Install Docker 17.03+ Add the docker ce repositories. Note: you can also use podman and buildah instead of Docker for those that want a complete and clean divorce. $ sudo dnf -y install dnf-plugins-core $ sudo dnf config-manager \ --add-repo \ Install docker-ce $ sudo dnf -y install docker-ce docker-ce-cli $ sudo systemctl start docker $ sudo systemctl enable docker Install Ansible and Module Dependencies Install ansible $ dnf install ansible The Ansible runner and http runner is used to run a local version of the operator. This is very useful for development and testing. $ pip3 install --user ansible-runner $ pip3 install --user ansible-runner-http Install required python modules $ pip3 install --user requests $ pip3 install --user openshift Install Operator Framework SDK You can simply downloada pre-built release and install it under /usr/local/bin. Building Your First Operator The SDK is able to generate not only boilerplate code but also the CRDs, controller and API endpoints. In this example we will create a cars operator. It will simply provide an endpoint allowing us to do CRUD on car objects. Each car object will spawn a pod with a base image. Using operatore-sdk cli create boilerplate code. $ operator-sdk new cars-operator --api-version=cars.example.com/v1alpha1 --kind=Car --type=ansible The operator sdk creates boilerplate roles. $ ls cars-operator build deploy molecule requirements.yml roles watches.yaml The build directory containers Dockerfile, deploy is where the yaml files are for creating the k8s operator objects (including CRD/CR), the roles directory contains a role called car and finally watches.yaml is the mapping from a k8s object / CRD to a roll. By default when a car instance is created the operator will execute the role car. You can of course change the behavior, even configure finalizers for dealing with deletion of components not under the control of k8s (see the sdk guide) Run Operator locally The Ansible operator can be run locally. Simply deploy the CRD, service account and role. Run Operator locally $ operator-sdk run --local Create a car $ vi car.yaml apiVersion: cars.example.com/v1alpha1 kind: Car metadata: name: example-car spec: size: 3 bmw: model: m3 audi: model: rs4 $ oc create -f car.yaml Anything under ‘spec’ will be passed to Ansible as a variable. If you want to reference the size in the ansible role you simply use ‘{{ size }}’. You can also access nested variables such as the car model by using ‘{{ bmw.model }}’. Build Ansible Operator Create User on Quay.io We will be using quay to store operator images. Authenticate using Github or Gmail to. Once authenticated go to account settings and set a password. Test Quay.io Credentials $ sudo docker login quay.io Build Operator Using operator-sdk cli build the operator which will push the image to your local Docker registry. Make sure you are in the cars-operator directory. $ sudo operator-sdk build quay.io/ktenzer/cars-operator Note: Substitute ktenzer for your username. Push Operator to your Quay.io Account $ sudo docker push quay.io/ktenzer/cars-operator:latest Make Quay Repository Public By default your Quay repository will be private. If we want to access it from OpenShift we need to make it public. Login to quay.io with your username/password Select the cars-operator repository Under Repository Settings enable visibility to be public Update Image in operator.yaml We need to point the image to the location in our Quay repository. vi deploy/operator.yaml --- # Replace this with the built image name image: quay.io/ktenzer/cars-operator command: --- Deploy Cars Operator on OpenShift Now that the operator is built and pushed to our Quay repository we can deploy it on an OpenShift cluster. Authenticate to OpenShift Cluster $oc login Deploy our operator $ oc create -f deploy/operator.yaml Create the CR for our operator $ oc create -f deploy/crds/cars.example.com_v1alpha1_car_cr.yaml Using Cars Operator The cars operator will automatically deploy an example-car. Whe can query our car object just like any other Kubernetes object. This is the beauty of CR/CRDs and operators. We can easily extend the Kubernetes API without needing to understand it’s complexity. $ oc get car NAME AGE example-car 31m Next we can get information about our example-car. $ oc get car example-car -o yaml apiVersion: cars.example.com/v1alpha1 kind: Car metadata: creationTimestamp: "2020-01-25T12:15:45Z" generation: 1 name: example-car namespace: cars-operator resourceVersion: "2635723" selfLink: /apis/cars.example.com/v1alpha1/namespaces/cars-operator/cars/example-car uid: 6a424ef9-3f6c-11ea-a391-fa163e9f184b spec: size: 3 Looking at the running pods in our cars-operator project we see the operator and our example-car. $ oc get pods -n cars-operator NAME READY STATUS RESTARTS AGE cars-operator-b98bff54d-t2465 1/1 Running 0 5m example-car-pod 1/1 Running 0 5m Create a new Car Lets now create a BMW car. $ vi bmw.yaml kind: Car metadata: name: bmw spec: size: 3 bmw: model: m3 $ oc create -f bmw.yaml Here we can see we now have a BMW car. $ oc get car NAME AGE bmw 11m example-car 31m Of course we can get information about our BMW car. $ oc get car bmw -o yaml apiVersion: cars.example.com/v1alpha1 kind: Car metadata: creationTimestamp: "2020-01-25T12:35:25Z" generation: 1 name: bmw namespace: cars-operator resourceVersion: "2644044" selfLink: /apis/cars.example.com/v1alpha1/namespaces/cars-operator/cars/bmw uid: 294bc47f-3f6f-11ea-b32c-fa163e3e8e24 spec: size: 1 Finally as with the example-car, the operator will start a new pod when the BMW car is created. $ oc get pods -n cars-operator NAME READY STATUS RESTARTS AGE bmw-pod 1/1 Running 0 10m cars-operator-b98bff54d-t2465 1/1 Running 0 14m example-car-pod 1/1 Running 0 14m Cleanup Follow these steps to remove the operator cleanly. $ oc delete -f deploy/crds/cars.example.com_v1alpha1_car_cr.yaml $ oc delete -f deploy/operator.yaml $ oc delete -f deploy/role.yaml $ oc delete -f deploy/role_binding.yaml $ oc delete -f deploy/service_account.yaml $ oc delete -f deploy/crds/cars.example.com_cars_crd.yaml $ oc delete project cars-operator Summary In this article a step-by-step guide was provided to setup a development environment, generate boilerplate code and deploy our custom cars operator using Ansible on OpenShift with the Operator Framework. Happy Operatoring! (c) 2020 Keith Tenzer
https://keithtenzer.com/2020/04/23/openshift-operator-sdk-ansible-getting-started-guide-part-iii/
CC-MAIN-2020-29
en
refinedweb
The engine for developing bots for soc. networks, instant messengers and other systems. Project description Kutana The engine for developing bots for soc. networks, instant messengers and other systems. Nice foundation for bot using kutana engine - kubot. Installation - Download and install python (3.5.3+) - Install kutanamodule (use python3 if needed) python -m pip install kutana Usage - Create Kutanaengine and add managers. - Register your plugins in the executor. You can import plugin from folders with function load_plugins. Files should be a valid python modules with available pluginfield with your plugin ( Plugin). - Start engine. Example run.py (token for VKManager is loaded from the file "configuration.json" and plugins are loaded from folder "plugins/") from kutana import * # Create engine kutana = Kutana() # Add VKManager to engine kutana.add_manager( VKManager( load_value( "vk_token", "configuration.json" ) ) ) # Load and register plugins kutana.executor.register_plugins( load_plugins("plugins/") ) # Run engine kutana.run() Example plugins/echo.py from kutana import Plugin plugin = Plugin(name="Echo") @plugin.on_startswith_text("echo") async def on_echo(message, env, body): await env.reply("{}".format(body)) Available managers - VKManager (for vk.com groups) - TGManager (for telegram.org) document's type is named docinside of engine. TGAttachmentTempis used for storing attachments before sending them with send_messageor reply. Attachments can't be uploaded other way. - If you want to download file (attachment) from telegram, you have to use TGEnvironment.get_file_from_attachment. Authors - Michael Krukov - @michaelkrukov - Other contributors Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/kutana/2.1.3/
CC-MAIN-2020-29
en
refinedweb
The library for developing systems for messengers and social networks Project description Kutana The library for developing systems for messengers and social networks. Great for developing bots. Refer to example for the showcase of the library abilities. This library uses generalized attachment types, possible actions e.t.c. for flexibility to use plugins with different backends. Installation python -m pip install kutana Usage - Create Kutanaapplication and add managers. - Register your plugins in the executor. You can import plugin from folders with function load_plugins. Files should be a valid python modules with available pluginfield with your plugin ( Plugin) or field pluginswith list of instances of Pluginclass. - Start the application. Example run.py Token for Vkontakte is loaded from the file "config.json" and plugins are loaded from folder "plugins/" import json from kutana import Kutana, load_plugins from kutana.backends import Vkontakte # Import configuration with open("config.json") as fh: config = json.load(fh) # Create application app = Kutana() # Add manager to application app.add_backend(Vkontakte(token=config["vk_token"])) # Load and register plugins app.add_plugins(load_plugins("plugins/")) if __name__ == "__main__": # Run application app.run() Example plugin ( plugins/echo.py) from kutana import Plugin plugin = Plugin(name="Echo") @plugin.on_commands(["echo"]) async def _(msg, ctx): await ctx.reply(ctx.body, attachments=msg.attachments) If your function exists only to be decorated, you can use _to avoid unnecessary names Available backends - Vkontakte (for vk.com groups) - Telegram (for telegram.org bots) Authors - Michael Krukov - @michaelkrukov - Other contributors Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/kutana/4.2.0/
CC-MAIN-2020-29
en
refinedweb
Variations on goodness of fit tests for SciPy. Project description Provides variants of Kolmogorov-Smirnov, Cramer-von Mises and Anderson-Darling goodness of fit tests for fully specified continuous distributions. Example >>> from scipy.stats import norm, uniform >>> from skgof import ks_test, cvm_test, ad_test >>> ks_test((1, 2, 3), uniform(0, 4)) GofResult(statistic=0.25, pvalue=0.97...) >>> cvm_test((1, 2, 3), uniform(0, 4)) GofResult(statistic=0.04..., pvalue=0.95...) >>> data = norm(0, 1).rvs(random_state=1, size=100) >>> ad_test(data, norm(0, 1)) GofResult(statistic=0.75..., pvalue=0.51...) >>> ad_test(data, norm(.3, 1)) GofResult(statistic=3.52..., pvalue=0.01...) Simple tests Scikit-gof currently only offers three nonparametric tests that let you compare a sample with a reference probability distribution. These are: - ks_test() - Kolmogorov-Smirnov supremum statistic; almost the same as scipy.stats.kstest() with alternative='two-sided' but with (hopefully) somewhat more precise p-value calculation; - cvm_test() - Cramer-von Mises L2 statistic, with a rather crude estimation of the statistic distribution (but seemingly the best available); - ad_test() - Anderson-Darling statistic with a fair approximation of its distribution; unlike the composite scipy.stats.anderson() this one needs a fully specified hypothesized distribution. Simple test functions use a common interface, taking as the first argument the data (sample) to be compared and as the second argument a frozen scipy.stats distribution. They return a named tuple with two fields: statistic and pvalue. For a simple example consider the hypothesis that the sample (.4, .1, .7) comes from the uniform distribution on [0, 1]: if ks_test((.4, .1, .7), unif(0, 1)).pvalue < .05: print("Hypothesis rejected with 5% significance.") If your samples are very large and you have them sorted ahead of time, pass assume_sorted=True to save some time that would be wasted resorting. Extending Simple tests are composed of two phases: calculating the test statistic and determining how likely is the resulting value (under the hypothesis). New tests may be defined by providing a new statistic calculation routine or an alternative distribution for a statistic. Functions calculating statistics are given evaluations of the reference cumulative distribution function on sorted data and are expected to return a single number. For a simple test, if the sample indeed comes from the hypothesized (continuous) distribution, the values passed to the function should be uniformly distributed over [0, 1]. Here is a simplistic example of how a statistic function might look like: def ex_stat(data): return abs(data.sum() - data.size / 2) Statistic functions for the provided tests, ks_stat(), cvm_stat(), and ad_stat(), can be imported from skgof.ecdfgof. Statistic distributions should derive from rv_continuous and implement at least one of the abstract _cdf() or _pdf() methods (you might also consider directly coding _sf() for increased precision of results close to 1). For example: from numpy import sqrt from scipy.stats import norm, rv_continuous class ex_unif_gen(rv_continuous): def _cdf(self, statistic, samples): return 1 - 2 * norm.cdf(-statistic, scale=sqrt(samples / 12)) ex_unif = ex_unif_gen(a=0, name='ex-unif', shapes='samples') The provided distributions live in separate modules, respectively ksdist, cvmdist, and addist. Once you have a statistic calculation function and a statistic distribution the two parts can be combined using simple_test: from functools import partial from skgof.ecdfgof import simple_test ex_test = partial(simple_test, stat=ex_stat, pdist=ex_unif) Exercise: The example test has a fundamental flaw. Can you point it out? Installation pip install scikit-gof Requires recent versions of Python (> 3), NumPy (>= 1.10) and SciPy. Please fix or point out any errors, inaccuracies or typos you notice. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/scikit-gof/0.1.1/
CC-MAIN-2020-29
en
refinedweb
In a real-time system, interrupts are widely used. And the system must respond to these external events in a timely manner. What are the factors affecting these response times, and the means available to optimize them? Two important definitions regarding interrupts are: The interrupt latency: it is the interval of time from an external interrupt request signal being raised to the execution of the first instruction of the specific interrupt service routine. The interrupt jitter: It’s the variation in latency. Often it is qualified by taking the minimum and maximum values of latency (the worst case of latency). We therefore notice that latency alone is not a sufficient criterion. In practice we want to limit the jitter, while also having the lowest possible latency. For some applications the ideal is to have a zero jitter, with as low a latency as possible. How to get near zero latency interrupts? Causes of interrupt latency The factors affecting the interrupt latency are hardware and software. Some hardware factors include: - Interrupt controller design. Reaction time is usually very fast (a few ns) and constant. - Bus occupancy at time of interruption: type of access, DMA… - The instruction being executed at the time of the interruption. The CPU waits for the end of the instruction before serving the interruption. A move from register to register is very fast, but a multiplication is much longer, and a memory access has no fixed duration depending on the context. - Interrupt nesting. If the controller allows interrupt nesting, an interruption of a given priority cannot interrupt the service of a higher priority interrupt. We must wait for the end of the high priority ISR, which induces an indefinite delay since it depends on the application. Among the hardware factors, the latter is the most decisive. Some software factors. - The constraints on the interrupt handler introduced by the real-time kernel. In general it is mandatory to call a particular function at the entry of the handler. This introduces a delay before the execution of the handler’s first useful instruction. This generally represents a fraction of µs. - The implementation of critical sections by the software. If this is done by temporarily inhibiting interruptions, this factor becomes critical. - The last factor concerns the inhibition of interruptions by libraries or device drivers. This is sometimes uncontrollable if the sources of these software are not available. The inhibition of interrupts is the determining software factor for interrupt latency. Implementation of critical sections Some kernels claim that they never inhibit interrupts. At first glance this is the right solution to control the latency of interrupts. However we must take a closer look. Often these kernels still inhibit interrupts for very short periods of time to carry out the mandatory atomic operations. The most advanced kernels use special instructions of the CPU to carry out these atomic operations, but the complexity of these instructions makes that their use is rare. These short periods of time while the interrupts are inhibited induces a small interrupt latency jitter. In order not to inhibit interrupts during critical sections, the scheduler is inhibited. Therefore the processing of an interrupt is done in several steps: - In the ISR, the minimum processing of the interruption is carried out The rest of the processing is outsourced to a deferred function. In particular the processes which access the kernel variables or which use shared variables with the device handler. - If the scheduler is not disabled at the time of the interruption, the deferred function is called immediately. - If the scheduler is disabled at the time of the interruption, a descriptor of the interrupt is inserted in a circular buffer, in order to call the deferred function later. - When the scheduler is validated again, if the circular buffer of the deferred functions is not empty, these deferred functions are called. This implementation of critical sections by inhibiting the scheduler introduces complexity, in particular: - The processing of all interrupts must be split in two. The first ISR instructions are executed with low latency. The delayed function can have a very high latency which cannot be controlled. In particular, it is necessary to manage the case where the same interrupt occurs several times before the scheduler is validated. - It is necessary to finely estimate the size of the buffer for deferred functions. The AdAstra-RTK kernel chose to implement critical sections by inhibiting interrupts managed by the kernel. This implementation decision was made because : - The implementation of the ISR is much simpler. - The ARM NVIC controller of the Cortex-M architecture allows you to specify the priority level of inhibited interrupts. In other words, it makes it possible to inhibit all interrupts of priority lower than or equal to a certain level, while leaving validated the interrupts of priority higher than this level. Most modern MCUs have this functionality. So it is possible to segregate two types of interrupts: - Kernel aware interrupts which can be inhibited during critical sections, but which have access to the kernel API. - Non kernel aware interrupts or “Zero latency interrupts” which are never inhibited by the kernel. However, these interruptions have constraints on the use of the kernel API. These interrupts must have a higher priority than kernel aware interrupts. Zero latency interrupts (ZLI) Interrupts with low latency and very little jitter can be considered. To do this some conditions must be met: - The interrupt controller must allow interrupt nesting and interrupt prioritization. - The real-time kernel must segregate interrupts. Higher priority interrupts should never be inhibited by the kernel. - Device drivers must not globally inhibit interrupts, but use inhibition / validation functions provided by the kernel. - A single interruption will really be “zero latency”: that of the highest priority. Indeed, this can preempt an interruption of lower priority. - In the ZLI ISR don’t use the kernel API, except the API specifically designed for this use. Last remark: the term “zero latency” refers to the software. Hardware latency and jitter are still present. Zero latency interrupt management with AdAstra-RTK A ZLI cannot use the kernel API, which is penalizing for communicating with the application. To resolve this, the two-step interrupt processing technique is used: - In the ZLI ISR the minimal processing is carried out. Then a software interrupt is issued. - The software interrupt is kernel aware, that is to say that it can be inhibited by the kernel. Its ISR performs the rest of the interrupt processing and can access the kernel API. This technique is very close to that adopted by the kernels which “never inhibit interrupts”. But it does not require a special buffer for deferred treatments. In addition, the priority of the software interrupt can be chosen as required. It can be the highest priority of kernel aware interrupts if its treatment is urgent. Practical example Now let’s see how to actually implement a ZLI. The envisaged application consists of: - Generate a timer interrupt, this is the ZLI. - The ZLI ISR increments the interrupt counter, then triggers a software interrupt. - The software interrupt uses a task signal to activate the task. - The task can process the data supplied by the ZLI, It displays the value of the interrupt counter. This example is typical of interrupt handling: The interrupt service routine performs some treatment, and then activates a task. The déclarations : #include "aa.h" #include "aakernel.h" // For aaIntEnter()/aaIntExit() in IRQ handler #include "timbasic.h" // To use a timer #include "extibasic.h" // To use a software interrupt #include "aaprintf.h" #define ZLI_SWI_LINE 0 // The number of the SWI interrupt #define ZLI_SIGNAL 1 // The signal to wake up the task #define ZLI_FREQ 10 // ZLI frequency #define ZLI_TIM_PRIORITY (BSP_MAX_INT_PRIO-1) // The lower priority never disabled // The timer descriptor: TIM4, with output CH1 on PB6 // Choose a timer who doesn't share its IRQ with another device (TIMx, DACx...) // In timerbasic.h set AA_WITH_TIM4 to 0, to disable this IRQ handler in timbasic.c static const timDesc_t timInterrupt = { 4, // Descriptor identifier 4, // Timer number 1..14 RCC_APB1RSTR_TIM4RST_Pos, // RCC_APBxRSTR_TIMnRST_Pos RCC_APB1ENR_TIM4EN_Pos, // RCC_APBxRSTR_TIMnEN_Pos TIM_CAP_TYPE_GPT1 | TIM_CAP_CHAN_1_4 | TIM_CAP_UP_DOWN | TIM_CAP_DMA | TIM_CAP_ETR, // Capabilities flags TIM4, // Pointer to TIM registers TIM4_IRQn, { { 'B', 6, AA_GPIO_AF_2, AA_GPIO_SPEED_HIGH | AA_GPIO_PUSH_PULL | AA_GPIO_PULL_UP }, // Ch1 { 0, 7, AA_GPIO_AF_2, AA_GPIO_SPEED_HIGH | AA_GPIO_PUSH_PULL | AA_GPIO_PULL_UP }, // Ch2 { 0, 8, AA_GPIO_AF_2, AA_GPIO_SPEED_HIGH | AA_GPIO_PUSH_PULL | AA_GPIO_PULL_UP }, // Ch3 { 0, 9, AA_GPIO_AF_2, AA_GPIO_SPEED_HIGH | AA_GPIO_PUSH_PULL | AA_GPIO_PULL_UP }, // Ch4 { 0, 0, AA_GPIO_AF_2, 0 }, // Ch1N { 0, 0, AA_GPIO_AF_2, 0 }, // Ch2N { 0, 0, AA_GPIO_AF_2, 0 }, // Ch3N { 0, 0, AA_GPIO_AF_2, AA_GPIO_SPEED_HIGH | AA_GPIO_PUSH_PULL | AA_GPIO_PULL_UP }, // ETR { 0, 0, AA_GPIO_AF_2, 0 }, // BKIN }, } ; STATIC uint32_t zliCounter = 0 ; STATIC aaTaskId_t zliTaskId ; The ZLI interrupt service routine : void TIM4_IRQHandler (void) { // ZLI : No aaIntEnter() / aaIntExit() // Increment the counter zliCounter ++ ; // Triggers the software interrupt extiSwiTrigger (ZLI_SWI_LINE) ; } The software interrupt service routine : void EXTI0_IRQHandler (void) { aaIntEnter () ; if ((EXTI->PR & EXTI_PR_PR0) != 0) { // Clear bit EXTI->PR = EXTI_PR_PR0 ; aaSignalSend (zliTaskId, ZLI_SIGNAL) ; // Wake up the application task } aaIntExit () ; } The test function : void testZli (void) { const timDesc_t * pTimGen = & timInterrupt ; // The timer descriptor // Configure the timer to generate a frequency on CH1 output // and the UPDATE interrupt, that call the ISR timTimeBaseInit (pTimGen, 0, ZLI_FREQ, TIM_INIT_ARR_PRELOADEN) ; timConfigureIntr (pTimGen, ZLI_TIM_PRIORITY, NULL, 0, TIM_INTR_UPDATE) ; timOutputChannelEnable (pTimGen, 1, TIM_OCMODE_PWM1) ; // Initialize the SWI interrupt to send the signal extiResetInterrupt (ZLI_SWI_LINE) ; extiInitSwi (ZLI_SWI_LINE, BSP_IRQPRIOMIN_PLUS(2)) ; // Prepare the task to receive the signal zliTaskId = aaTaskSelfId () ; aaSignalClear (zliTaskId, ZLI_SIGNAL) ; timStart (pTimGen) ; // Forever loop waiting for the signal from the SWI while (1) { aaSignalWait (ZLI_SIGNAL, NULL, AA_SIGNAL_AND, AA_INFINITE) ; aaPrintf (« ZLI count : %d\n », zliCounter) ; } } That’s all ! The timer is configured to output the freqency signal. This allows you to instrument the code to measure the latency of the timer interrupt. For example, the following function is used to send a pulse to the output BSP_OUTPUT0 defined by the BSP. void TIM4_IRQHandler (void) { // ZLI : No aaIntEnter() / aaIntExit() bspOutput (BSP_OUTPUT0, 1) ; // Increment the counter zliCounter ++ ; // Triggers the software interrupt extiSwiTrigger (ZLI_SWI_LINE) ; bspOutput (BSP_OUTPUT0, 0) ; } Connecting two channels of an oscilloscope to pins TIM4_CH1 and PORTG_0 allows measuring the time between the generation of the interrupt and the execution of the first instruction of the ISR. As well as its jitter, this determines the worst case of latency. In the image below, the trace 2 corresponds to the rising edge of the timer signal which triggers the interrupt, and the trace 1 corresponds to the 1st instruction of the interrupt routine: bspOutput (BSP_OUTPUT0, 1). Latency is measured by the cursors: 74 ns. We can also see the jitter: about 10 ns. Curiously, the jitter values are not random. With a trace persistence for a long time, only 3 values were observed. This test was carried out with a STM32H743 at 400 MHz. The file testzli.c in the distribution of AdAstra-RTK presents more complete examples of use of ZLI, and the result of the measurements. Conclusion AdAstra-RTK allows to segregate interrupts. Thus some interrupt priorities can be reserved for non kernel aware interrupts nown as ZLI. This technique removes the software related interrupt latencies. But the hardware latencies still remain and depend on the architecture of the MCU. Some links ARM Nested Vector Interrupt Controller (NVIC) beginner-guide-on-interrupt-latency-and-interrupt-latency-of-the-arm-cortex-m-processors Some articles about interrupt latency. 2 thoughts on “How to get near Zero Latency Interrupts?” Great content! Super high-quality! Keep it up! 🙂 Thank you very much for your interest in this article.
http://adastra-soft.com/zero-latency-interrupt/
CC-MAIN-2020-29
en
refinedweb
Data Member Converter Class Definition Provides a type converter that can retrieve a list of data members from the current component's selected data source. public ref class DataMemberConverter : System::ComponentModel::TypeConverter public class DataMemberConverter : System.ComponentModel.TypeConverter type DataMemberConverter = class inherit TypeConverter Public Class DataMemberConverter Inherits TypeConverter - Inheritance - DataMemberConverter Examples // Associates the DataMemberConverter with a string property. public: [TypeConverterAttribute(DataMemberConverter::typeid)] property String^ dataMember { String^ get() { return member; } void set( String^ value ) { member = value; } } private: String^ member; // Associates the DataMemberConverter with a string property. Remarks DataMemberConverter provides methods that can retrieve a list of data members from the current data source of a design-time component. This type converter is used by Visual Studio 2005 to provide the values that appear in the list of data members in the Properties window. Caution You should never access a type converter directly. Instead, call the appropriate converter by using TypeDescriptor. For more information, see the examples in the TypeConverter base class. For more information about type converters, see the TypeConverter base class and How to: Implement a Type Converter.
https://docs.microsoft.com/en-gb/dotnet/api/system.web.ui.design.datamemberconverter?view=netframework-4.8&viewFallbackFrom=netcore-2.0
CC-MAIN-2020-29
en
refinedweb
ASP.NET MVC 3: Integrating with the jQuery UI date picker and adding a jQuery validate date range validator UPDATE: I've blogged about an more flexible way to wire up the editor template here. This is post looks at working with dates in ASP.NET MVC 3. We will see how to integrate the jQuery UI date picker control automatically for model properties that are dates and then see an example of a custom validator that ensures that the specified date is in a specified range. Additionally, we will add client-side validation support and integrate with the date picker control to guide the user to pick a date in the valid range. - The demo project - Adding the date picker - Adding date range validation - Adding date range validation - client-side support - Going the extra mile The demo project For the purposes of this post I have created a Foo class (and a view model with an associated message): public class Foo { public string Name { get; set; } public DateTime Date { get; set; } } public class FooEditModel { public string Message { get; set; } public Foo Foo { get; set; } } I’ve created the following minimal Edit action (omitting any save logic since we’re not interested in that aspect - it just displays a “saved” message): [HttpGet] public ActionResult Edit() { FooEditModel model = new FooEditModel { Foo = new Foo {Name = "Stuart", Date = new DateTime(2010, 12, 15)} }; return View(model); } [HttpPost] public ActionResult Edit(Foo foo) { string message=null; if (ModelState.IsValid) { // would do validation & save here... message = "Saved " + DateTime.Now; } FooEditModel model = new FooEditModel { Foo = foo, Message = message }; return View(model); } Finally, the view is implemented in razor using templated helpers @Model.Message @using (Html.BeginForm()) { @Html.EditorFor(m => m.Foo) <input id="submit" name="submit" type="submit" value="Save" /> } When the project is run, the output is as shown below: In the screenshot above, notice how the Date property is displayed with both the date and time parts. The Foo class models an entity that has a name and associated date, but the .NET Framework doesn’t have a “date” type so we model it with a DateTime. The templated helpers see a DateTime and render both parts. Fortunately the templated helpers work on the model metadata and we can influence the metadata using a range of attributes. In this case we will apply the DataType attribute to add more information about the type of a property: public class Foo { public string Name { get; set; } [DataType(DataType.Date)] public DateTime Date { get; set; } } With this attribute in place, the templated helpers know that we’re not interested in the time portion of the DateTime property and the output is updated: Adding the date picker So far so good, but there’s still no date picker! Since I’m lazy (and quite like consistency) I’d like the templated helpers to automatically wire up the date picker whenever it renders an editor for a date (i.e. marked with DataType.Date). Fortunately, the ASP.NET MVC team enabled exactly this scenario when they created the templated helpers. If you’re not familiar with how the templated helpers attempt to resolve partial views then check out Brad Wilson’s series of posts. The short version is that if you add the DataType.Date annotation, the templated helpers will look for a partial view named “Date” under the EditorTemplates folder. To demonstrate, I created the following partial view as Views\Home\EditorTemplates\Date.cshtml (I’m using Razor, but the same applies for other view engines) @model DateTime @Html.TextBox("", Model.ToString("dd/MM/yyyy")) ** TODO Wire up the date picker! ** There are a couple of things to note (apart from the fact that I’m ignoring localisation and using a UK date format!). The first is that we’ve specified the model type as DateTime, so the Model property is typed as DateTime in the view. The second is that the first parameter we’re passing to the TextBox helper is an empty string. This is because the view has a context that is aware of what the field prefix is, so the textbox will be named “Foo.Date”. With this partial view in place the output now looks like: We’re now ready to wire up the date picker! The first change is to Date.cshtml to add a date class to tag the textbox as a date: @model DateTime @Html.TextBox("", Model.ToString("dd/MM/yyyy"), new { @class = "date" }) With this in place we will create a script that looks for any textboxes with the date class set and adds the date picker behaviour. I’m going to use the jQuery UI date picker as jQuery UI is now in the standard template for ASP.NET MVC 3. The first step is to add a script reference to jQuery-ui.js and to reference jQuery-ui.css (in Content/themes/base). I then created a script called EditorHookup.js with the following script: /// <reference path="jquery-1.4.4.js" /> /// <reference path="jquery-ui.js" /> $(document).ready(function () { $('.date').datepicker({dateFormat: "dd/mm/yy"}); }); Again, I’m simply hardcoding the date format to a UK date format. jQuery UI has some support for localisation, or you could manage the date format server-side and output it as an attribute that your client-side logic picks up (this is left as an exercise for the reader ). Now we just need to reference the EditorHookup script and run the site to see the date picker in action: Adding date range validation Now that we have the date picker implemented, we will look at how to create a validator to ensure that the specified date is within a specified range. The following code shows how we will apply the validator to our model: public class Foo { public string Name { get; set; } [DataType(DataType.Date)] [ DateRange("2010/12/01", "2010/12/16" )] public DateTime Date { get; set; } } (Note that we can’t pass a DateTime in as attribute arguments must be const) The implementation of the attribute isn’t too bad. There’s some plumbing code for parsing the arguments etc and the full code is shown below: public class DateRangeAttribute : ValidationAttribute { private const string DateFormat = "yyyy/MM/dd"; private const string DefaultErrorMessage= "'{0}' must be a date between {1:d} and {2:d}."; public DateTime MinDate { get; set; } public DateTime MaxDate { get; set; } public DateRangeAttribute(string minDate, string maxDate) : base(DefaultErrorMessage) { MinDate = ParseDate(minDate); MaxDate = ParseDate(maxDate); } public override bool IsValid(object value) { if (value == null || !(value is DateTime)) { return true; } DateTime dateValue = (DateTime)value; return MinDate <= dateValue && dateValue <= MaxDate; } public override string FormatErrorMessage(string name) { return String.Format(CultureInfo.CurrentCulture, ErrorMessageString, name, MinDate, MaxDate); } private static DateTime ParseDate(string dateValue) { return DateTime.ParseExact(dateValue, DateFormat, CultureInfo.InvariantCulture); } } The core logic is inside the IsValid override and is largely self-explanatory. One point of note is that the validation isn’t applied if the value is missing as we can add a Required validator to enforce this. we now get a validation error if we attempt to pick a date outside of the specified range: Adding date range validation – client-side support As it stands, the validation only happens when the data is POSTed to the server. The next thing we’ll add is client-side support so that the date is also validated client-side before POSTing to the server. There are a few things that we need to add to enable this. Firstly there’s the client script itself and then there’s the server-side changes to wire it all up. Creating the client-side date range validator ASP.NET MVC 3 defaults to jQuery validate for client-side validation and uses unobtrusive javascript and we will follow this approach (for more info see my post on confirmation prompts and Brad Wilson’s post on unobtrusive validation). Server-side, the validation parameters are written to the rendered HTML as attributes on the form inputs. These attributes are then picked up by some client-side helpers that add the appropriate client-side validation. The RangeDateValidator.js script below contains both the custom validation plugin for jQuery and the plugin for the unobtrusive validation adapters (each section is called out by comments in the script. (function ($) { // The validator function $.validator.addMethod('rangeDate', function (value, element, param) { if (!value) { return true; // not testing 'is required' here! } try { var dateValue = $.datepicker.parseDate("dd/mm/yy", value); } catch (e) { return false; } return param.min <= dateValue && dateValue <= param.max; }); // The adapter to support ASP.NET MVC unobtrusive validation $.validator.unobtrusive.adapters.add('rangedate', ['min', 'max'], function (options) { var params = { min: $.datepicker.parseDate("yy/mm/dd", options.params.min), max: $.datepicker.parseDate("yy/mm/dd", options.params.max) }; options.rules['rangeDate'] = params; if (options.message) { options.messages['rangeDate'] = options.message; } }); } (jQuery)); As metnioned previously, the script is hard-coded to use the dd/mm/yyyy format for user input to avoid adding any further complexity to the blog post. The min/max values are output in yyyy/mm/dd as a means of serialising the date from the server consistently. The validator function retrieves arguments giving the value to validate, the element that it is from and the parameters. These parameters are set up by the unobtrusive adapter based on the attributes rendered by the server. The next step is to get the server to render these attributes. Returning to our DateRangeAttribute class, we can implement IClientValidatable (this interface is new in ASP.NET MVC 3 - see my earlier post for how this makes life easier). IClientValidatable has a single method that returns data describing the client-side validation (essentially the data to write out as attributes). public class DateRangeAttribute : ValidationAttribute, IClientValidatable { // ... previous code omitted... public IEnumerable<ModelClientValidationRule> GetClientValidationRules (ModelMetadata metadata, ControllerContext context) { return new[] { new ModelClientValidationRangeDateRule( FormatErrorMessage(metadata.GetDisplayName()), MinDate, MaxDate) }; } } public class ModelClientValidationRangeDateRule : ModelClientValidationRule { public ModelClientValidationRangeDateRule(string errorMessage, DateTime minValue, DateTime maxValue) { ErrorMessage = errorMessage; ValidationType = "rangedate"; ValidationParameters["min"] = minValue.ToString("yyyy/MM/dd"); ValidationParameters["max"] = maxValue.ToString("yyyy/MM/dd"); } } Looking back at the client script, you can see that the validator name (“rangedate”) matches on both client and server, as do the parameters (“min” and “max”). All that remains is to add the necessary script references in the view (jquery.validate.js, jquery.validate.unobtrusive.js and our RangeDateValidator.js). With that in place, we get immediate client-side validation for our date range as well as server-side validation. Going the extra mile We’ve already achieved quite a lot. By adding a few script references we can apply the DataType(DataType.Date) attribute to enable the date picker control for date properties on our model. If we want to ensure the date is within a specified date range then we can also apply the DateRange attribute and we will get both client-side and server-side validation. There are many other features that could be added, including: - localisation support – currently hardcoded to dd/mm/yyyy format for the date input - adding support for data-driven validation – currently the min/max dates are fixed, but you could validate against properties on the model that specify the min/max dates - adding support for rolling date windows– e.g. for date of birth entry you might want the date to be at least/most n years from the current date. All of the above are left as an exercise for the reader. We will, however, add a couple more features. The first is that when using the DateRange attibute you still have to specify the DataType attribute to mark the field as a date so that the date picker is wired up. The second is to take advantage of the support for date ranges in the date picker. Removing the need to add DataType and DateRange Since you’re likely to want to have the date picker for date fields and applying the DateRange attribute implies that you’re working with a date property, it would be nice if you didn’t have to apply the DataType attribute in this case. Fortunately this is made very simple in ASP.NET MVC 3 via the addition of the IMetadataAware interface. By implementing this on the DateRangeAttribute we can hook in to the metadata creation and mark the property as a date automatically: public class DateRangeAttribute : ValidationAttribute, IClientValidatable, IMetadataAware { // ... previous code omitted... public void OnMetadataCreated(ModelMetadata metadata) { metadata.DataTypeName = "Date"; } } Integrating the date range with the date picker The jQuery UI date picker allows you to specify a date range to allow the user to pick from. When the date range validator is applied then we have code to ensure that the date entered is in the specified range, but it would be nice to steer the user towards success by limiting the date range that the date picker displays. To do this we will modify the client-code that wires up the date picker so that it checks for the validation attributes and applies the date range if specified. All that is required for this is a slight change to the EditorHookup.js script: /// <reference path="jquery-1.4.4.js" /> /// <reference path="jquery-ui.js" /> $(document).ready(function () { function getDateYymmdd(value) { if (value == null) return null; return $.datepicker.parseDate("yy/mm/dd", value); } $('.date').each(function () { var minDate = getDateYymmdd($(this).data("val-rangedate-min")); var maxDate = getDateYymmdd($(this).data("val-rangedate-max")); $(this).datepicker({ dateFormat: "dd/mm/yy", minDate: minDate, maxDate: maxDate }); }); }); With these final two changes we no longer require the DataType attribute when we are specifying a DateRange, and the date picker will respect the date range for the validation and help to steer the user away from validation errors: Summary This has been a fairly lengthy post, but we’ve achieved a lot. By ensuring that we have a few script references, we will automatically get the date picker for any fields marked with DataType.Date or DateRange. If DateRange is specified then we get client-side and server-side validation that the date entered is within the specified date range, and the date picker limits the dates the user can pick to those that fall within the allowed date range. That’s quite a lot of rich functionality that can be enabled by simply adding the appropriate attribute(s). I've attached a zip file containing an example project. It adds a couple of features like not requiring both min and max values for the date range, but is otherwise the code from this post.
https://docs.microsoft.com/en-us/archive/blogs/stuartleeks/asp-net-mvc-3-integrating-with-the-jquery-ui-date-picker-and-adding-a-jquery-validate-date-range-validator
CC-MAIN-2020-29
en
refinedweb
Important changes to forums and questions All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com. 3 years ago. How to measure 1 and 0 in digital signal ? I have an wind speed sensor which gives me digital signal of 1 and 0, with "1" on 3.3 V and "0" on 0 V. i use MBED NXP LPC1768 I want to count how many "1" is in my signal in 0.5 seconds This is my code below : Problem is in that I have for output in counter numbers like 87642,72348,93484 which I think it's not right. Has anyone have solution ? I would much appreciate that. 1 Answer 3 years ago. Hello Stjepan, Below is a code counting pulses (rising edges) in 500ms. #include "mbed.h" Serial pc(USBTX, USBRX); InterruptIn input(p9); Timeout timeout; volatile bool measuringEnabled = false; volatile int counter; //ISR counting pulses void onPulse(void) { if(measuringEnabled) counter++; } // ISR to stop counting void stopMeasuring(void) { measuringEnabled = false; } // Initializes counting void startMeasuring(void) { counter = 0; timeout.attach(callback(&stopMeasuring), 0.5); measuringEnabled = true; } int main() { input.rise(callback(&onPulse)); // assign an ISR to count pulses while(1) { startMeasuring(); while(measuringEnabled); // wait until the measurement has completed pc.printf("counter = %d\r\n", counter); } } TIP FOR EDITING: You can copy and paste a code into your question as text and enclose it within tags as below (each tag on separate line). Then it's going to be displayed as code in a frame. <<code>> and <</code>> <<code>> #include "mbed.h" DigitalOut led1(LED1); int main() { while (1) { led1 = !led1; wait(0.5); } } <</code>> So you are taking a polling approach where you spin in a loop and continuously check the input. This could work if you don't have anything else to do in the program. A more common approach would be to set a rising edge interrupt on p9. The hardware will look for the low to high transition for you and then run specified function every time a low to high transition is detected. You don't say what sensor you are using. We would need to know exactly what output it generates in order to decode it. Should we be counting pulses? So X number of pulses in 500ms = Y wind speed? This means you want to count the number of transitions from low input to high input in 500ms. Make a variable to keep track of current pin state. When the pin state changes from low to high, increment your counter. Another possibility is the output time high vs time low represents the wind speed. In which case you just need to add another counter and increment it when the input = 0. At the end of 500ms you get a ratio of time_high : time_low.posted by Graham S. 19 Jun 2017
https://os.mbed.com/questions/78331/How-to-measure-1-and-0-in-digital-signal/
CC-MAIN-2020-29
en
refinedweb
How to have tracked FBX Avatars in vizconnect using 3DSMax and Mixamo’s Auto Rigging The Autobiped script was created by Ofer Zelichover and Dan Babcock and it is freely released to Mixamo customers The script will convert any character rigged using Mixamo auto-rigger into a Biped system in 3dsMax If going from Fuse just send to Mixamo and Auto-rig Go to mixamo.com Select “Upload Character” If coming out of other programs (such as Character Creator) Import downloaded avatar into 3DSMax Check size of avatar (Under Utilities- Measure) May have to uncheck “Always Deform” on the mesh (except for feet) - Under Modify- Advanced Properties. This is if you get undesirable results on how the skeleton attaches to the mesh. Add and run this script (AutoBiped) to convert to 3DSMax biped mesh object (Scripting- Run script) Click on "Create Biped" Rename all bones from Bip001* to Bip01* in 3ds Max (Tools > Rename Objects). Expand all bones in the skeleton and select them all first. Export with Cal3D Install Cal3D exporter (download from here) Refer to the Cal3D section of the Vizard documentation on how to export as a .cfg file Use as an imported Complete Character in vizconnect Start Vizconnect by going to Tools- Vizconnect in Vizard Setup your trackers/ inputs/ etc. For help with vizconnect and avatars see the Vizard documentation. You can also choose from one of the common presets. Click on “Avatars” tab and remove the current avatar if there is one there. Add new avatar Choose “imported Complete Characters” In Settings under file, choose your exported .cfg file Make sure to then go to the “Animator” tab under "Avatars" and map whatever trackers you are using to the avatar model Drag the main camera under the head component in the scene graph, as well as any tools you may be using under the hand components. Lastly, add gestures and map them to your input device Afterwards you will want to manually edit the vizconnect file where the avatar is added to change the lines to use vizfx instead of viz.add : import vizfx avatar = vizfx.addChild(filename) If wanting to change the avatar, you can just change the path in the code.
http://kb.worldviz.com/articles/3298?utm_source=rss&utm_medium=rss&utm_campaign=how-to-have-tracked-fbx-avatars-in-vizconnect-using-3dsmax-and-mixamos-auto-rigging
CC-MAIN-2020-29
en
refinedweb
These logs document versioned changes to the Graph API and Marketing API that are no longer available. To learn more about versions, please see our Platform Versioning documentation. Use the API Upgrade Tool to determine which version changes will affect your API calls. For more information on upgrading, please see our upgrade guide. Changelog entries are categorized in the following way: New Features, Changes, and Deprecations only affect this version. 90-Day Breaking Changes affect all versions. Breaking Changes are not included here since they are not tied to specific releases. This. Released October 7, 2015 | Available until July 13, 2016 Ad, Ad set, and Campaign object read path additions: effective_statusto reflect the real Ad delivery status. For example for an active adset in a paused campaign, the effective_statusof this adset is CAMPAIGN_PAUSED. configured_statusto reflect the status specified by user. Insights - The following metrics have been added: inline_link_clicks cost_per_inline_link_click inline_post_engagement cost_per_inline_post_engagement Admin Name - Default the name of the first visible admin to current user if not specified. Campaign -The following objective names have changed: WEBSITE_CLICKSto LINK_CLICKS WEBSITE_CONVERSIONSto CONVERSIONS DPA - Enforce Terms of Service for all Product Catalogues and Dynamic Product Ads users. Advertisers implicitly accept by creating their first catalog through Business Manager. For API developers, Terms of Service need to be accepted through BM when the product catalog is first created. Local Awareness Ads - Require page_idin promoted_objectat the /adsetlevel for Local Awareness Ads - No longer require custom_locationsfor location targeting, and also allow sub-national, single country targeting. Now all location types (except for country) must be within the same country. - Change /adcampaign_groupsto /campaigns - Change /adcampaignsto /adsets - Change /adgroupsto /ads - In the write path, change campaign_group_status, campaign_status, adgroup_status to status - Response for call to /search?type=adgeolocationwill contain only the 'City' instead of 'City, State' - Change {cpc, cpm, cpa}_{min, max, median}, such as cpc_minfields in /reachestimate to bid_amount_min, bid_amount_median, bid_amount_max. Change bid_forto optimize_for - Remove friendly_name - Remove USE_NEW_APP_CLICK - Remove buying_type - Remove ability to create campaign with Objective='NONE'. 'NONE' is still a valid read-only objective. You can continue to edit existing campaigns. - Remove REACHBLOCK, DEPRECATED_REACH_BLOCKfrom campaign buying_type. For older campaigns using these two buying_type, the API will return RESERVEDfor REACHBLOCKand FIXED_PRICEfor DEPRECATED_REACH_BLOCK. FIXED_CPMis replaced with FIXED_PRICE. - Remove filter on adgroup_id, campaign_id, or campaign_group_id. Instead use ad.id, adset.idand campaign.id - Note Prior to 2.6 We decided to extend the timeline to deprecate Clicks(All) and CPC metrics until the deprecation of API v2.6. In 2.5, removed cpcand clicksfields in Insights API. Newly added related metrics include inline_link_clicks, cost_per_inline_link_click, inline_post_engagement, cost_per_inline_post_engagement. Please note clicks(all) historically counted any engagement taken within the ad unit as a click. Thus, clicks(all) won't be a simple addition of link_clicksand post_engagement. The newly added fields are available in v2.4 and v2.5. Reach Estimate - Require optimize_for at the /reachestimate endpoint. Targeting - Remove want_localized_nameparam for /search?type=adgeolocation. - Remove /search?type=adcityand adregion, use /search?type=adgeolocationinstead. - Remove engagement_specsand excluded_engagement_specsfrom targeting spec. The similar video remarketing functionality is supported through /customaudiencesendpoint. Deprecate or make private certain targeting options: - Instead of being returned in /search, private categories will now be returned in /act_{AD_ACCOUNT_ID}/broadtargetingcategoriesor /act_{AD_ACCOUNT_ID}/partnercategories. - Deprecated categories will no longer be returned. - Updates to ad set targeting using private or deprecated categories will error. It's recommended to query the validation endpoint before updating the ad set targeting to identify which targeting options to remove. Released July 8th, 2015 | No longer available In v2.4 we expose a set of new Page Video metrics, such as page_video_views_paid, page_video_views_autoplayed, and page_video_views_organic available from the Graph API via GET /v2.4/{page_id}/insights/?metric={metric}. These metrics require the read_insights permission. The Video node now contains the following fields in GET|POST operations to /v2.4/{video_id}: content_categorywhich supports categorizing a video during video upload, and can be used for suggested videos. Content categories include: Business, Comedy, Lifestyle, etc and a full list of categories can be viewed on the Videonode docs page. unpublished_content_typewill expose 3 new types (Scheduled, Draft, and Ads_Post) which will help coordinate how the video is posted. expirationand expiration_typeallows the video expiration time to be set, along with the type (hide, delete). embeddableboolean flag is now available to control if 3rd party websites can embed your video. We've simplified how you access content on a person's Timeline. Instead of handling different object types for statues and links, the API now returns a standardized Post node with attachments which represent the type of content that was shared. For more details view the User reference docs. For Marketing API (formerly known as Ads API) v2.4 new features, see Facebook Marketing API Changelog. admin_creatorobject of a Post in now requires a Page access token. POST /v2.4/{page_id}/offersand DELETE /v2.4/{offer_id}now require a Page access token with manage_pagesand publish_pagespermissions. GET /v2.4/{page_id}/milestones, POST /v2.4/{milestone_id}, and DELETE /v2.4/{milestone_id}now require a Page access token with manage_pagesand publish_pagespermission. Pagenode GET /v2.4/{page_id}/promotable_postsnow require a user access token with ads_managementpermission or a Page access token. global_brand_parent_pageobject has been renamed to global_brand_root_id. GET /v2.4/{global_brand_default_page_id/insightswill now return only insights of the default Page insight data, instead of the insights for the Global Brand hierarchy. Use the Root Page ID to retrieve the integrated insights of the whole hierarchy. GET|POST /v2.4/{page_id}/promotable_postshas renamed the field is_inlineto include_inline. limit=100. This will impact GEToperations made on the feed, posts, and promotable_postsedges. The default pagination ordering of GET {user-id}/events now begins with the newest event first, and is ordered in reverse chronological order. Graph API v2.4 now supports filtering of GET /v2.4/{user_id}/accounts with new boolean fields: is_promotablefilter results by ones that can be promoted. is_businessfilter results associated with a Business Manager. is_placeInclude Place as results filter. To try to improve performance on mobile networks, Nodes and Edges in v2.4 requires that you explicitly request the field(s) you need for your GET requests. For example, GET /v2.4/me/feed no longer includes likes and comments by default, but GET /v2.4/me/feed?fields=comments,likes will return the data. For more details see the docs on how to request specific fields. GET /v2.4/{id}/linksand GET /v2.4/{id}/statuseswill no longer be available beginning in v2.4. As an alternative, we suggest using GET /v2.4/{id}/feed. GET|POST /v2.4/{page_id}/?fields=global_brand_parent_pageis being deprecated in this version and replaced by /v2.4/{page_id}/?fields=global_brand_root_id. GET|POST /v2.4/{page_id}/global_brand_default_page_id/global_brand_childrenwill no longer function in v2.4. As an alternative, please use the root page ID. GET /v2.4/{page_id}/promotable_postswill no longer support the filter and type params in v2.4. For example, a call to GET /v2.4/{page_id}/promotable_posts?type=STATUSwill return an empty result set. Eventnode no longer supports GEToperations on the endpoints /v2.4/{event_id}/invited, /v2.4/{event_id}/likes, or /v2.4/{event_id}/sharedposts. GET /v2.4/{user_id}/home, GET /v2.4/{user_id}/inbox, and GET /v2.4/{user_id}/notificationsoperations as well as read_stream, read_mailbox, and manage_notificationspermissions are deprecated in v2.4. user_groupspermission has been deprecated. Developers may continue to use the user_managed_groupspermission to access the groups a person is the administrator of. This information is still accessed via the /v2.4/{user_id}/groupsedge which is still available in v2.4. GET /v2.4/{event_id}/?fields=privacyis deprecated in v2.4. GET /v2.4/{event_id}/?fields=feed_targetingis deprecated in v2.4. GET /v2.4/{event_id}/?fields=is_date_onlyis deprecated in v2.4. From October 6, 2015 onwards, in all previous API versions, these endpoints will return empty arrays, the permissions will be ignored if requested in the Login Dialog, and will not be returned in calls to the /v2.4/me/permissions endpoint. Released March 25th, 2015 | No Longer available user_posts Permission - We have a new permission user_posts that allows an app to access the posts on a person's Timeline. This includes the someone's own posts, posts they are tagged in and posts other people make on their Timeline. Previously, this content was accessible with the read_stream permission. The user_posts permission is automatically granted to anyone who previously had read_stream permission. all_mutual_friends Edge - This Social context edge enables apps to access the full list of mutual friends between two people who use the app. This includes mutual friends who use the app, as well as limited information about those who don't. Both users for whom you're calling this endpoint must have granted the user_friendspermission. If you calling this endpoint for someone not listed in your app's Roles sectionyou must submit your app for review by Facebook via App Review. Although this edge is new to Graph API v2.3, to make it easier for you to migrate we also added this edge to v2.0, v2.1, and v2.2. Debug Mode - Provides extra information about an API call in the response. This can help you debug possible problems and is now in Graph API Explorer, as well as the iOS and Android SDKs. For more information see Graph API Debug Mode. New Pages Features - Real-time Updates - As of March 25, 2015 We now send content in Page real-time updates (RTUs). Previously, only the object's ID was in the RTU payload. Now we include content in addition to the ID including: statuses, posts, shares, photos, videos, milestones, likes and comments. In order for the app to receive these types of updates, you must have enabled the "Realtime Updates v2.0 Behavior" migration in your app's dashboard. - Page Reviews now support real-time updates. Apps can subscribe to the ratingsproperty to receive a ping every time a public review is posted on pages the app is subscribed to. In order for the app to receive this type of update, you must have enabled the "Realtime Updates v2.0 Behavior" migration in your app's dashboard. - Page Posts, admin_creator- All Page Posts now include a new admin_creatorfield that contains the idand nameof the Page Admin that created the post. This is visible when you use a Page access token, or the user access token of a person who has a role on the Page. - New Page Fields - GET|POST /v2.3/{page-id}now supports fetching and updating these fields: food_styles, public_transit, general_manager, attire, culinary_team, restaurant_services, restaurant_specialties, and start_info. - New Page Settings - GET|POST /v2.3/{page-id}/settingsnow supports four new settings: REVIEW_POSTS_BY_OTHER, COUNTRY_RESTRICTIONS, AGE_RESTRICTIONS, and PROFANITY_FILTER. - Larger Videos with Resumable Upload - We now support larger video sizes uploads up to 1.5GB or 45 minutes long with resumable video upload. See Video Upload with Graph API. /v2.3/{object_id}/videosedges you can create a new Video object from the web by providng the file_urlparameter. - Resumable, Chunked Upload - All /v2.3/{object_id}/videosedges support resumable, chunked uploads. For more information, see Video Upload with Graph API. Video Playlists - You can now create and manage video playlists for Pages by using GET|POST|DELETE on the /v2.3/{page_id}/videolist edge. you can also add videos to a playlist by POST /v2.3/{videolist_id}/videos and GET the videos of a playlist. Page's Featured Video - You can now set and get the featured_video of a page using GET|POST /v2.3/{page_id}/featured_videos_collection. - Publish Fields - All Video nodes objects at /v2.3/{video_id}now return the new fields: published, a boolean which indicates if the video is currently published, and scheduled_publish_time. - Custom Thumbnail - You can now upload and manage custom video thumbnails as JPEGs or PNGs with GET|POST /v2.3/{video_id}/thumbnail. See Graph API Reference, Video Thumbnail. - Targeting Restrictions - POST /v2.3/{page-id}/videosnow supports targeting restrictions by country, locale, age range, gender, zipcode, timezone and excludin locations. - Delete - DELETE /v2.3/{video_id}removes videos. This is supported if you have edit permissions on the video object. - New Read and Write Fields - The Video node now supports retrieving and modifying additional fields: backdated_time, and backdated_time_granularity. - Subtitles, Localized Captions - GET|POST /v2.3/{video-id}now supports supplying and retrieving subtitles and localized captions. - Visibility of Video - With POST /v2.3/{page_id}/videosyou can control where the video is seen with no_storyand backdated_post.hide_from_newsfeedparameters. These parameters target visibility on feeds and page timelines. Page Plugin - Is the new name for Like Box Social Plugin and it has a new design. Comments Plugins Has a new design and it now supports Comment Mirroring (private beta). Requests are now GameRequests - Previously this term and object-type created confusion for non-game app developers, and the intended usage of these objects are to invite others to a game Non-game apps should use App Invites. The /v2.3/{user-id}/apprequests edge is now limited to game apps. See Games, Requests. read_custom_friendlists - Is the new name for read_friendlists Login permission. This is to clarify that the permission grants access to a person's custom friendlists, not the full list of that person's friends. Changes to Page APIs: -. - Page Publish Operations - now accept the type of access token the requests are made with. This includes publishing posts, likes and comments. When a user access token of a Page admin is in the request such as POST /v2.3/{page-id}/feed, the action occurs with the voice of the user, instead of the Page. To publish as the Page, you must now use the Page access token. - Removing Comments on Page posts - As a Page admin with DELETE /v2.3/{comment-id}now requires a Page access token. - POST /v2.3/{page-id}- Now requires country as sub-parameter if you update the locationfield without specifying the city_idsub-parameter. The state sub-parameter is also required if country sub-parameter is 'United States'. - Countries - The country subfield of the feed_targetingfield on a Page post is renamed countrieswhen you make a POST|GET /v2.3/{page-id}/{post-id}. - Page Field Updates - POST /v2.3/{page-id}- Now supports complete field updates for: hours, parking, payment_options. Previously, the update behavior on these fields was to replace a specific key/value pair that was specified as part of the POST, leaving all other keys intact, but the new functionality of a POST on one of these fields will replace all key/value pairs with what is posted. - Premium Video Metrics - Insights for Page Premium Video Posts are now deprecated. - page_consumptions_by_consumption_typeinsight now returns data for a local page instead of a parent - In earlier versions of the insights API for a Page, asking for this insight would return data for a parent of a local page. In v2.3, we now return the data for only that local page. Picture Error - For the Link, Post, Thread, Comment, Status, and AppRequest nodes, and other nodes which don't have a picture, the /v2.3/{object}/picture edge now returns an error message. Before, the API returned a placeholder image of a 'question mark' for the picture edge requested on these nodes. [Oauth Access Token] Format - The response format of returned when you exchange a code for an access_token now return valid JSON instead of being URL encoded. The new format of this response is {"access_token": {TOKEN}, "token_type":{TYPE}, "expires_in":{TIME}}. We made this update to be compliant with section 5.1 of RFC 6749. Consistent Place Data - Content objects tagged with a place now share a consistent place data structure which contains the ID, name, and the geolocation of the place. This includes Events, Photos, Videos, Statuses and Albums. As a result, the venue and location fields have been removed from the Event node. Developers should access the place field instead. In addition, if someone tags a Photo, Video, Status, or Album at an event, that object will contain an eventfield. Serialized Empty Arrays - All Graph API endpoints now consistently serialize empty arrays as [] and empty objects as {}. Previously, some empty objects were incorrectly serialized as empty arrays. Default Result Limit - All edges will now return 25 results per page by default when the limit param is not specified. Ads API now Marketing API - We recently renamed the Facebook Ads API to the Facebook Marketing API. For details on Marketing API changes see the Marketing API v2.3 Changelog. is now deprecated and will stop returning data from June 23, 2015. Developers should call the Graph API's /v2.3/{page_id}/feed endpoint instead. This returns JSON rather than RSS/XML. Social Plugins - The following are now deprecated and will no longer render after June 23, 2015: - Facepile Plugin - Recommendations Feed Plugin - Activity Feed Plugin - Like Box Social Plugin Released October 30th, 2014 | No longer Available POST /v2.2/{comment_id}?is_hidden=trueto hide, POST /v2.2/{comment_id}?is_hidden=falseto unhide. You can determine if a comment is hidden, or if you have the ability to hide/unhide the comment by checking the is_hiddenor can_hidefields on the commentnode. token_for_businessfield makes it easier to identify the same person across multiple apps owned by the same business: In addition to the Business Mapping API, there is now a new token_for_businessfield on the userobject. This emits a token which is stable for the same person across multiple apps owned by the same business. This will only be emitted if the person has logged into the app. For games developers, this property will also be emitted via the signed_requestobject passed to a canvas page on load. Note that this is not an ID - it cannot be used against the Graph API, but may be stored and used to associate the app-scoped IDs of a person across multiple apps. Also note that if the owning business changes, this token will also change. If you request this field for an app which is not associated with a business, the API call will return an error. commentnode has a new objectfield which emits the parent object on which the comment was made. To get the ID and owner of the comment's parent object (for example, a Page Post) you might call: /v2.2/{comment_id}?fields=object.fields(id,from) - In previous versions, any app which was added as a Page Tab also received realtime updates. From v2.2 onwards there is a dedicated endpoint for managing these subscriptions: - GET /v2.2/{page-id}/subscribed_appsreturns the apps subscribed to realtime updates of the Page. This must be called with a Page Access Token. - POST /v2.2/{page-id}/subscribed_appssubscribes the calling app to receive realtime updates of the Page. This must be called with a Page Access Token and requires the calling person to have at least the Moderator role on the Page. - DELETE /v2.2/{page-id}/subscribed_appsstops the calling app from receiving realtime updates of the Page. This may be called with a Page or App Access Token. If called with Page Access Token, it requires the calling person to have at least the Moderator role on the Page. feed_targetingparameter is now supported when publishing videos to a Page: POST /v2.2/{page_id}/videos?feed_targeting={targeting_params}. This lets you specify a number of parameters (such as age, location or gender) to help target who sees your content in News Feed. This functionality is already supported on POST /{page_id}/feedso we're extending this to videos too. /v2.2/{page_id}: - payment_optionstakes an object which lets you specify the payment options accepted at the place. - price_rangeaccepts an enum of strings which represent the self reported price range of the Page's business. - latitudeand longitudecan now be specified as properties of the locationfield in order to let apps programmatically update Page's physical location. Both are floats. - ignore_coordinate_warnings(boolean) determines if the API should throw an error when latitude and longitude are specified in location field for updating the Page's location. If set to false, an error will be thrown if the specified coordinates don't match with the Page's address. - is_always_openlets apps set the status of the place to “Always Open”. This can only be set to true, and will clear previously specified hours in hoursfield. To set specific hours, use hoursfield. - is_publishedtakes a boolean which lets you publish or unpublish the Page. Unpublished pages are only visible to people listed with a role on the Page. - The Page node has a new readable field to let apps read a Page's information: The following field are now supported with a GET to /v2.2/{page_id}: - name_with_location_descriptorreturns a string which provides additional information about the Page's location besides its name. - There's a new APPEARS_IN_RELATED_PAGESsetting on /v2.2/{page_id}/settings. This boolean determines if your page can be included in the list of suggested pages which are presented when they like a page similar to yours. You may set or read this value. - You can now read the permissions your app has been approved for via an API. A new edge on your App object, called /{app-id}/permissions, allows you to view the permissions that your app has been approved for via Login Review. ?ids=ID1,ID2. This reduces the likelihood of timeouts when requesting data about a large number of IDs in a single request. blockededge on the Page node now requires a Page Access Token - this endpoint can no longer be called with an App or User token. That endpoint is available at: GET|POST|DELETE /v2.2/{page_id}/blocked?access_token={page_access_token}. tabsedge on the Page node now requires a Page token for GETs. POSTs and DELETEs on this endpoint already require page tokens. That endpoint is available at: GET|POST|DELETE /v2.2/{page_id}/tabs?access_token={page_access_token}Calling GET on this edge now works for people in the 'Analyst' role. It previously required the 'Editor' role. tabsedge on the Page node will now throw an error if the caller does not have permission. Previously it would only return an empty string. /{page_id}/adminsedge on the Page node has been renamed to /v2.2/{page_id}/roles. In addition, in the response, the values returned in the role field have been changed: - MANAGERhas been renamed to Admin. - CONTENT_CREATORhas been renamed to Editor. - MODERATORhas been renamed to Moderator. - ADVERTISERhas been renamed to Advertiser. - INSIGHTS_ANALYSThas been renamed to Analyst. settingsedge on the Page node will no longer include entries for settings where the valuefield would be null. POST /{page_id}/settingswill no longer support the settingand valueparams. Instead, you should specify the optionparam which should be an object containing a single key/value pair with the setting enum as the key. GET /v2.2/{page_id}/notificationsmust use a Page Access Token. Previously this required the manage_notificationspermission. User Access Tokens will no longer work for this endpoint. /v2.2/{group_id}/albumsendpoint has changed to match the response of /{user_id}/albums. /v2.2/me/friendsendpoint now defaults to 25. fb:namesocial plugin has been deprecated and will stop working on Jan 28, 2015. Developers should instead use the FB.api()method of the Javascript SDK to retrieve the names of users. page_fanFQL table or the /{user_id}/likes/{app_page_id}Graph API endpoint without needing the user_likespermission. Starting in v2.2, the user_likespermission will be required to query these endpoints. Also, we will require the user_likespermission on versions older than v2.2 starting 90 days from today, on Jan 28, 2015. Facebook will not grant the user_likespermission solely for the purpose of checking if a person has liked an app's page. This change was announced on August 7, 2014 and will come into effect on November 5, 2014. /v2.2/{page_id}/feed. POST /{page-id}/tabsand DELETE /{page-id}/tabswill no longer support subscribing or unsubscribing an app for realtime updates. This will take effect in all previous API versions on January 28, 2015. To subscribe an app to realtime updates for a page, use the new /v2.2/{page_id}/subscribed_appsendpoint. On October 14, 2014, we dropped support for SSL 3.0 across Facebook properties, including the Facebook Platform API and the Real-Time Updates API, after a vulnerability in the protocol was revealed on October 14, 2014. This change helps protect people’s information. If your HTTP library forces the use of SSL 3.0 rather than TLS, you will no longer be able to connect to Facebook. Please update your libraries and/or configuration so that TLS is available. Released August 7, 2014 | No longer Available This is Facebook's first new API update after version 2.0 was launched at the 2014 f8 conference. API versions are supported for two years after the next version is released. This means that: screennamesedge: this returns a list of the other, non-Facebook accounts associated with the brand or entity represented by a Facebook Page. The following deprecations will take place on November 5, 2014 - 90 days after the launch of v2.1. { “success”: true } urishould instead use url. These policy changes will come into effect on November 5, 2014 - 90 days after the launch of v2.1. They apply to all apps across all API versions. As of version 2.5, future Marketing API changelogs will be published under the Graph API Changelog. For expired versions please see the Graph API Changelog Archive. v2.5 Version v2.4 will be available until April, 2016. Before that, you should migrate your API calls to v2.5. Going forward, Marketing API changelogs will be published under the Graph API Changelog. v2.4 Version v2.3 will be available until Oct 7th, 2015. Before that date, you should migrate your API calls to v2.4. A new API that lets developers tag campaigns/adsets/ads/creatives with arbitrary strings (“labels”) and organize/group their ad objects. The Labels API also allows querying adobjects by labels, including AND/OR queries, and query aggregated Insights by label. Now it is possible to answer questions like “how many clicks did all adsets with the 'US-country' label get?” Added a new field 'app_install_state' in the ad set's targeting which takes a enum value of {installed, not_installed}. This field is intended to make inclusion/exclusion targeting for apps easier. The field can be used for any objective, but will only take effect if a promoted object is present. optimization_goal: What the advertiser is optimizing for (video_views, link_clicks, etc...) billing_event(required): What the advertiser is paying by (pay per impression, pay per link click, pay per video view etc...) bid_amount: The value the advertiser is bidding per optimization goal rtb_flagwhich should be used in favor of bid_type=CPM_RTB. bid_amount: The value the advertiser is bidding per optimization goal bid_info: will be removed on all write paths but can still be read. Use ad set's bid_amount field instead. bid_type bid_info: will be removed on all write paths but can still be read. Use ad's bid_amountfield instead. conversion_specs: will be removed on all write paths but can still be read. Use ad set's optimization_goal field instead. is_autobid v2.3terms, CPA for Page Post Engagement). v2.3- definition of CPC will be represented in v2.4as having billing_event=POST_ENGAGEMENTand optimization_goal=POST_ENGAGEMENT page_typesinto an array of single items, e.g. Instead of page_types:['desktop-and-mobile-and-external']you would specify: page_types=['desktopfeed', 'rightcolumn', 'mobilefeed', 'mobileexternal']. The previously available groupings of page_types and validations remain the same. ['mobile']or ['mobilefeed-and-external']must now be replaced with ['mobilefeed', 'mobileexternal'] targetingof an ad set, by default Facebook will include Audience Network page_types, and may include other page_typeswithout notice in the future. conjunctive_user_adclustersand excluded_user_adclustersin favor of flexible targeting act_{AD_ACCOUNT_ID}/connectionobjectswill no longer return connections based on advertiser email. statusfield. Developers should instead use last_firing_time subtypewill become a required parameter in custom audience and lookalike creation { "tos_accepted": { "206760949512025": 1, "215449065224656": 1 }, return "tos_accepted": {"web_custom_audience_tos": 1, "custom_audience_tos": 1 } daily_spend_limitfrom ad account spend_captype change from uint32 to numeric string DAILY_BUDGET, BUDGET_REMAINING, LIFETIME_BUDGETtype change from uint32 to numeric string For the full list of Graph API changes, refer to the Graph API changelog. v2.3 Version v2.2 will be available till July 8, 2015. Before that date, you should migrate your API calls to v2.3. A new /insights edge consolidates functionality among /stats, /conversions, and /reportstats edges. It provides a single, consistent interface for ad insights. This edge is provided with every ad object node, including Business Manager. It provides functionality such as grouping results by any level, sorting, filtering on any field, and real time updates for asynchronous jobs. The new preferred way to upload ad videos is to do it chunk by chunk. When you upload an ad video, especially large ones, this method greatly increases the stability of video uploads. The previous one-step video uploading API still works, but we suggest you to switch to the new method for all videos. interval_frequency_cap_reset_periodto R&F which allows you to set a custom period of time that a frequency is applied to. Previously the cap was applied to the entire duration of the campaign. Introduced new lead ad unit designed to capture leads within Facebook's native platforms. CALL_NOWand MESSAGE_PAGEcall to action for local awareness ads The following summarizes all changes. For more information on upgrading, including code samples, see Facebook Marketing API Upgrade Guide. amount_spent, balance, daily_spend_limit, and spend_capfields for an ad account are changed from integers to numeric strings. businessfield of /act_{AD_ACCOUNT_ID}and /{USER_ID}/adaccountsis now a JSON object. nameis required while creating a Custom Audience pixel for an ad account using the /act_{AD_ACCOUNT_ID}/adspixelsendpoint. pixel_idis required while creating website custom audiences using the /act_{AD_ACCOUNT_ID}/customaudiencesendpoint. DELETED. start_timeand end_timefields when updating an ad set. promoted_objectin an ad set for an offer ad, you will provide the page_idinstead of the offer_id. bid_infofield of ad set or ad will not be available if is_autobidof the ad set is true. creative_idsfield of an ad is no longer available. objectivefield of an ad is no longer available. multi_share_optimizedfield defaults to truenow. You use this field when you create a Multi Product Ad with /{PAGE_ID}/feedor object_story_specin /act_{AD_ACCOUNT_ID}/adcreatives. /{APP_SCOPED_SYSTEM_USER_ID}/ads_access_tokenis replaced by /{APP_SCOPED_SYSTEM_USER_ID}/access_tokens, for Business Manager System User access token management. paramsis removed from the response of targeting description obtained with /{AD_GROUP_ID}/targetingsentencelines mobileand mobilefeed-and-externalplaces ads on News Feed on mobile as well as on Audience Network. The new option mobilefeedis for News Feed on mobile only. You specify the placement option with page_typesfield of targeting specs. v2.2 This is Facebook's first new API update after versioning was announced. API versions are supported for 90 days after the next version is released. This means that version 2.1 would be available until 90 days from v2.2, January 28, 2015. However, we have extended the adoption timeline for v2.2 this time to March 11, 2015. For more information, please see our blog post. The below is a summarized list of all changes. For more info on upgrading, including code samples, please see our expanded upgrade guide. targetingand bid_typewill be required at ad set level, and will no longer be available at ad level. bid_infowill be required at ad set level, while optional at ad level. Affected Endpoints: /act_{AD_ACCOUNT_ID}/adgroups /act_{AD_ACCOUNT_ID}/adcampaigns /{CAMPAIGN_GROUP_ID}/adcampaigns /{CAMPAIGN_GROUP_ID}/adgroups /{AD_SET_ID}/adgroups /{AD_SET_ID} /{AD_ID} A new field promoted_object will be required for creating an ad set when the campaign objective is website conversions, page likes, offer claims, mobile app install/engagement or canvas app install/engagement. promoted_objectwill not be allowed to set a promoted_object. You must create a new ad set if you want to specify a promoted_object. Those existing ad sets without that setting will still run, and still can be updated/deleted as usual. promoted_objectis specified, conversion_specswill be automatically inferred from the objectiveand promoted_objectcombo and cannot be changed/overwritten. promoted_object. Affected Endpoints: /{AD_SET_ID} target_specsendpoint will be replaced with target_spec, only allowing for one spec per prediction. target_specfield returns an object where target_specsused to return an array. story_event_type, will be added. This field will be used to specify when an ad set may or may not have video ads and is required when targeting all mobile devices. app_idsfield is required when "schema"="UID". use FacebookAds\Object\CustomAudience; use FacebookAds\Object\Values\CustomAudienceTypes; // Add Facebook IDs of users of certain applications $audience = new CustomAudience(<CUSTOM_AUDIENCE_ID>); $audience->addUsers( array(<USER_ID_1>, <USER_ID_2>), CustomAudienceTypes::ID, array(<APPLICATION_ID>)); from facebookads.adobjects.customaudience import CustomAudience audience = CustomAudience('<CUSTOM_AUDIENCE_ID>') users = ['<USER_ID_1>', '<USER_ID_2>'] apps = ['<APP_ID>'] audience.add_users(CustomAudience.Schema.uid, users, '<APP_ID>'s=apps) User user = new CustomAudience(<CUSTOM_AUDIENCE_ID>, context).createUser() .setPayload("{\"schema\":\"UID\",\"data\":[\"" + <USER_ID_1> + "\",\"" + <USER_ID_2> + "\"],\"app_ids\":[\"" + <APPLICATION_ID> + "\"]}") .execute(); curl \ -F 'payload={ "schema": "UID", "data": ["<USER_ID_1>","<USER_ID_2>"], "app_ids": ["<APPLICATION_ID>"] }' \ -F 'access_token=<ACCESS_TOKEN>' \<CUSTOM_AUDIENCE_ID>/users Starting with version 2.2 the following changes will be in affect for the endpoints below. count, offset, and limitwill no longer be returned and you must instead use a cursor-based approach to paging. total_countis only returned when the flag summary=trueis set. Affected Endpoints: /act_{AD_ACCOUNT_ID}/asyncadgrouprequestsets /act_{AD_ACCOUNT_ID}/adreportschedules /{SCHEDULE_REPORT_ID}/adreportruns /act_{AD_ACCOUNT_ID}/stats /act_{AD_ACCOUNT_ID}/adcampaignstats /act_{AD_ACCOUNT_ID}/adgroupstats /act_{AD_ACCOUNT_ID}/conversions /act_{AD_ACCOUNT_ID}/adcampaignconversions /act_{AD_ACCOUNT_ID}/adgroupconversions /act_{AD_ACCOUNT_ID}/connectionobjects /act_{AD_ACCOUNT_ID}/partnercategories /act_{AD_ACCOUNT_ID}/reachfrequencypredictions /act_{AD_ACCOUNT_ID}/asyncadgrouprequestsets /act_{AD_ACCOUNT_ID}/broadtargetingcategories /act_{AD_ACCOUNT_ID}/targetingsentencelines /act_{AD_ACCOUNT_ID}/ratecard /act_{AD_ACCOUNT_ID}/reachestimate /act_{AD_ACCOUNT_ID}/users /{AD_ACCOUNT_GROUP}/users /{AD_ACCOUNT_GROUP}/adaccounts /{CAMPAIGN_ID}/asyncadgrouprequests /{ADGROUP_ID}/reachesttimate /{ADGROUP_ID}/keywordstats /{ADGROUP_ID}/targetingsentencelines /search?type=adgeolocation (location_types: city, region) /search?type=adlocale /search?type=adworkemployer /search?type=adworkposition /search?type=adeducationschool /search?type=adzipcode /search?type=adcountry /search?type=adcity /{CUSTOM_AUDIENCE_ID}/users /{CUSTOM_AUDIENCE_ID}/adaccounts /{REACH_FREQUENCY_PREDICTIONS_ID}/ /{ASYNC_REQUEST_SET_ID}/ /{ASYNC_REQUEST_SET_ID}/requests/ /{ASYNC_REQUEST_ID} /{USER_ID}adaccountswhich has additional changes: businessreturns an ID rather than an object. usersreturns a object with fields idrather than uid, permissions, and role. created_time, which is the time that the account was created.
https://developers.facebook.com/docs/graph-api/changelog/archive
CC-MAIN-2018-17
en
refinedweb
ncl_cppklb man page CPPKLB — Picks a set of labels for labeled contour levels. Synopsis CALL CPPKLB (ZDAT, RWRK, IWRK) C-Binding Synopsis #include <ncarg/ncargC.h> void c_cppklb LB is called by Conpack when labels for the contour levels are needed. You can call CPPKLB directly (after the initialization call to CPRECT, CPSPS1 or CPSPS2) when you want to modify the resulting parameter arrays that specify the labels. If the constant-field-flag 'CFF' is non-zero, indicating that, during the last call to CPRECT, CPSPS1, or CPSPS2, the data were found to be essentially constant, CPPKLB does nothing. Otherwise, CPPKLB examines the first 'NCL' elements of the parameter array 'CLV', which defines the contour levels, and the associated parameter arrays, looking for levels that are to be labeled ('CLU' = 2 or 3) for which no label is specified (the associated element of 'LLT' is ' ', a single blank.) If any such levels are found, labels are generated for them. The scale factor 'SFU' may be set as a byproduct of choosing the labels. See the description of the parameters 'SFU' (scale factor used) and 'SFS' (scale factor selector) in the conpack_params man page. After calling CPPKLB, a user program may examine the generated labels and change them in various ways. Examples Use the ncargex command to see the following relevant example: ccpklb. Access To use CPPKLB or c_cppklb,cl,.
https://www.mankier.com/3/ncl_cppklb
CC-MAIN-2018-17
en
refinedweb
DEFICIT FINANCING ; Rajaji in Swarajya 1960 PROF. B. R. Shenoy is bringing out for lay readers a booklet on inflation in India, in which he deals with the causes of the evil and the remedy. I have had the privilege of reading the manuscript and this is what I have gathered from what the Professor sets out with clarity and with figures. I have no doubt the booklet, when published, will help people to understand the gravity of the situation. In all low income countries, expansion of money put in circulation results quickly in price rises. Inflation is the word used when we look at the cause and discuss the situation in terms of money. Price rise is the phrase used when we speak from the point of view of commodities. If the expansion of money, whatever be the motive or reason for such expansion, outpaces the physical volume of output of commodities, we have a state of inflation and prices rise as a result. The Ministry of Commerce publishes the average of wholesale prices. From the hand-outs of the Reserve Bank of India we can obtain information about money supply. There has been a continual rise in the general index of prices. We see also that money supply has considerably expanded, faster than the output of national products. With 1938-39 as base, the general index of prices in August 1960 was 478, a rise of nearly five times. The present changeover of the base from 1938-39 to 1952-53 obscures the enormous magnitude of the price rise. Government collects funds from the people by taxation, loan issues, small savings and profits of public sector undertakings. From these funds disbursements are made for administrative expenditure, repayment of past loans, and Plan investment outlay. When these and other items of disbursements exceed the total receipts, what is called budget deficit arises. These deficits are covered by notes printed at the Government Security Press at Nasik. This is called deficit financing. This expansion of money is followed by what is called secondary expansion through credits given by commercial banks. For every Rs.100 crores of additional Nasik money, there is usually another Rs.100 crores of credit creation. Inflation that now prevails in India began in 1955-56. Budget deficits rose from 97 crores in 54-55 to 225 crores in 55-56. In 57-58 the Plan outlay was so great that, with additional defence expenditure, the budget deficit that year reached a peak of 495 crores. These yearly deficits have a cumulative action. The rise in prices due to inflation reduces the value of money and life becomes unhappy for people living on wages and fixed incomes. Their real income is reduced, and some of them would have to draw on past savings for current expenditure. The rise in price corrodes all savings. This leads in the case of the better placed classes to the transfer of their savings to urban property, to gold and to concealed exports of capital. Speculative transactions acquire additional attraction. Hoarding of goods is encouraged, eating into savings. For a time production may be deceptively stimulated on account of higher prices, but soon it gets retarded on account of increased costs. Foreign purchasers of our goods will move to other markets. Imported goods rise in price giving windfall profits to importers and smugglers. As a result of inflation, income shifts from the masses to upper income groups. The middle classes are most hit. The strike of the Union Government employees was a symptom of this suffering. Industrialists and their labour force, who are able to extract a share in the receipts, do not suffer much but the condition of the vastly larger number of farm lands is worsened. Inflation must be followed by price controls and import restrictions. These produce a great deal of economic and social disorder and injustice. The controls over steel, coal, cement, sugar, rubber, fertilizers and food-grains have cast a gloom over the life of the people. Far from equalizing incomes, the policy of controls makes the rich richer. The stagnant percapita consumption of cloth and of food-grains is the best evidence of the condition of the people, and this has resulted from the misguided policies of the present administration. In the case of all imported goods including gold, there is a great gap between landed cost and market price, ranging from 30 per cent to 500 per cent, depending on the nature of the commodity. The difference between the landed cost of gold and the market price is seventy rupees per tola. The import markets are illegal and the gap between cost and market price is officially ignored but this does not nullify the reality. The benefit of all the gaps in cost of imports and market price goes to importers and smugglers. Excluding government imports where the difference may be treated as a concealed tax, according to a reliable estimate, the ill-gotten gains on imports during two years would be of the order of Rs.1,000 crores. This amount has several co-sharers - corrupt officials who handle the issue of licences, the recipients of the licences, including both those who just sell them in the black market and real importers. The accounts of cost are falsified by inter-sales and the like, so as to bring the declared cost to near the market price, and so as also to replenish the importers for their payments for the purchase of the licences and for corrupt transactions with officials and go-betweens. All these incomes are tax-free, being illicit in nature. It is these earnings that enable some people to give large political donations to the ruling party and also to other groups for purchasing peace. The beneficiaries of the illegal gain, on account of import controls, are the upper income groups and the money is obtained from those who consume the imported commodities or articles into the production of which such imported materials go. The total anti-social money that goes thus from consumers' pockets to upper income groups has been estimated as being of the order of Rs.300 crores per year. Inflation, import restrictions and other controls have affected the moral standards of the nation, and have led to the emergence of a new undesirable profession engaged in touting for obtaining licences, permits and contracts, in illicit trafficking in import licences, and in smuggling gold, diamonds, watches, cigarettes, fountain pens, razor blades, photographic accessories, etc. The talent for enterprise tends to gravitate around officialdom and to practices to become rich quickly without spending energy. In the absence of inflation and controls, the talent and resources would be actively engaged in adding to national wealth under the free play of competition, the normal road to progress. Inflation and controls discourage efficiency and progress and honesty. Easy money being available to some under controls and inflation, they favour continued 'planning' which to them means continued inflation and controls which provide them opportunities to amass money. Political parties in power also favour controls, as these give an opportunity for the exercise of power and for acquiring personal and political gains. Conscience pricks are quelled by the thought that it is all done in the national interests and the gains are only an incidental by-product. Never were the interests of the anti-social elements so well looked after as under the present administration. These controls must go or the Government should change, if the country is to be extricated from the morass it has got stuck in. It is not true, as is argued sometimes, that rising prices and controls and import restrictions and exchange controls are inherent in a developing economy. The experience of other countries-Canada, Belgium, West Germany, Mexico, Japan, Italy and France-have demonstrated the untruth of this plea. It is not true, as is sometimes stated, that prices all over the world have risen. West German national income rose in real terms at 13 per cent per year in each year of the period 1951 to 1958. But prices rose there by less than one per cent per year, 5 per cent only in all seven years. And West Germany was in the forefront to remove restrictions on imports and on payments abroad. In ever so many countries price stability and surplus in balance of payments, and abolition of restrictions on imports and payments, have gone together with rapid economic growth. Since 1955, Indian price-rise stands out almost alone. Prices in May 1960 in India were 33 per cent higher than in 1954. In France and Italy prices declined during that period. In Germany, Belgium and Japan and other countries the annual price rise was 1 per cent or at most 2 per cent. There is a notion that curtailing bank credit will reduce inflation. Bank credit is so closely related to deficit financing that keeping the latter going and reducing inflation by control over credit is a futility. It only adds to the confusion. To restrict credit against food-grains and certain other commodities would raise the cost of banking services generally, and in particular the cost of credit to the trade in those commodities, which are essential for the economic life of the community. Naturally, such policies encourage advances against assets outside the banned list and drive the business of credit from scheduled banks to others which are not under control. The policy of credit controls has demonstrably failed. Tampering with the credit- machinery will not achieve anything as long as deficit financing is continuing. The fact is that the attempt to 'invest' non-available resources - which is what deficit financing amounts to—is a wrong and futile policy. No plan can be larger than the resources available for investment, be it internal or that obtained from generous outsiders. Even as water is no substitute for milk, inflation is no real resource. The fallacy produces high prices and distress. A plan based on inflation will retard progress instead of accelerating it. The Third Plan is tremendously inflationary. The overt deficit financing of this Plan is Rs.550 crores. This is misleading. Without totalitarian and physical suppression of consumption, in order to mop up people's money by reducing consumption, the amount of supposed availability of savings estimated at Rs.7,200 crores is an over-estimate. The over-estimate is at least of the order of Rs.1,300 crores. Thus what the Plan requires by way of foreign aid, (over and above the amount required for repayments due) is not Rs.2,790 crores but Rs.5,350 crores. The deficit financing therefore will not be only Rs.550 crores as planned but six times that figure. If the foreign aid does not arrive according to the time table, whatever the causes may be, the gap will be much greater. And there are good reasons for apprehending this. We know that deficit financing to the extent of Rs.367 crores during the five years ending 59-60 led to a price rise of 32 per cent. The deficit financing inherent in the Third Plan will certainly involve 'runaway inflation', like the one that swept over Germany after the first World War. During the five months ending August 1960, prices have been rising at a rate computed at 14.2 per cent per annum. This is an indication to take note of Deficit financing has already gone too far. Foreign aid and drafts on currency reserves, cannot go on indefinitely. Holding the price line, which is continually promised, would be just King Canute's command to the waves of the sea. It is pathetically argued that inflation will be stopped by increase in production. Inflation retards production. It drives up costs and the commodities manufactured must be sold at higher prices. Prices and costs rise simultaneously with inflation, and will continue to rise with continuing inflation. Inflation is the disease and the prices only indicate the temperature. There is no good attempting to reduce the symptom while keeping the disease going. Fair price shops of any kind or number cannot achieve control of prices. Even if buffer stocks released for sale depress food prices artificially, this will shift agriculture to other than food-crops, and render the food position worse. Any commodity distribution at arbitrary prices will fail, because the stocks will be bought up as soon as they are put on the market, and go to feed the black market. The cost of any remedy put in action by way of subsidies will ultimately fall on the shoulders of the tax-payers. The net result, so far as the price level is concerned will be nil. The price problem resulting from inflation cannot be corrected by a change in the machinery of distribution. The diagnosis must be kept in mind when treatments are attempted The money let loose being the cause, remedies other than reducing the money flow will not avail. The favourite notion that prices result from traders' conspiracies is stupid. Such conspiracies are impossible. The prevailing price rise is not the outcome of either monopolies or impossible conspiracies but of deficit financing. Prices have risen despite bumper crops and heavy annual import of food-grains of three million tons for four years. To stop prices from rising, we must restore the balance between the flow of production and the flow of money. Inflation and excessive State interference are the two evils of the Indian economy of today. If and only when these two evils are removed, can we expect to be saved from rising prices. If not, it is a case of the ground being prepared for communists to take totalitarian charge. September 24, 1960 Swarajya __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around
http://athiyaman.blogspot.com/2007_08_01_archive.html
CC-MAIN-2018-17
en
refinedweb
ctx.lookup("java:comp/env"); times outMarco Schulze Dec 18, 2003 5:18 AM Hello! Accessing java:comp/env results in a timeout. Please help!!! Hashtable env = new Hashtable(); env.put(Context.INITIAL_CONTEXT_FACTORY,"org.jboss.naming.HttpNamingContextFactory"); env.put(Context.PROVIDER_URL,""); env.put("java.naming.factory.url.pkgs","org.jboss.naming:org.jnp.interfaces"); Context ctx = new InitialContext(env); Object obj = ctx.lookup("java:comp/env"); // Here, I get a timeout Best regards, Marco. 1. Re: ctx.lookup(patrick Dec 23, 2003 12:53 AM (in response to Marco Schulze) java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory java.naming.provider.url=localhost:1099 java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces Just set thos properties. They also can be set in a jndi.properties file in your classpath somewhere 2. Re: ctx.lookup(Adrian Brock Dec 23, 2003 6:12 AM (in response to Marco Schulze) No the java: namespace is only available if you do new InitialContext() which has no provider url. Regards, Adrian 3. Re: ctx.lookup(..) => HTTP absolutely necessaryMarco Schulze Dec 23, 2003 7:06 AM (in response to Marco Schulze) Hello! Thanks a lot for your quick response! First: It is absolutely necessary that ALL traffic is going through http because we are developing a large application that spans over a lot of companies, servers and a lot of firewalls. Dynamic ports are a knock-out-criteria. Thus, I've the following question: If I use the HttpNamingContextFactory, is then the whole traffic going through http or are there still dynamic ports opened? If so, how can I avoid it? Did I understand that correctly: I can only use the absolute java:-notation if I'm within my server. From the client, I always need to use the relative path. Which means in jboss-IDE's XDoclet, I've always to set @ejb.util generate = "physical"?! Is there a way to avoid @ejb.util generate = "physical" and set this globally as default? Best regards, thousand thanks for your help, a merry XMas and a happy new year!!! Marco ;-)
https://developer.jboss.org/thread/75372
CC-MAIN-2018-17
en
refinedweb
As of now, my "Shipping Cost" calculator runs well on an infinite loop. Here's how I'd like the program to be, if possible: prompt the user for the Weight and the Distance 13 times, then, AFTER 13 prompts for user input of weight & distance, display the weight, the distance, and the cost to ship the item for each of their input, if possible in 13 different lines. Here's the current code as it is, on an infinite loop. The driver is below. Ex: //User prompts x 13 Weight: #.## Distance: ###.# Weight: ## Distance: ## . . . //Report x 13 Weight: #.## Distance: ###.# Cost: ##.# Weight: ## Distance: ## Cost: ##.# . . . public class ShippingCharges { private double weight; private double shippingDistance; //class declarations public ShippingCharges(){ } //mutators and accessors (setters and getters) public ShippingCharges(double weight, double shippingDistance) { this.weight = weight; this.shippingDistance = shippingDistance; } public double getShippingDistance() { return shippingDistance; } public void setShippingDistance(double shippingDistance) { this.shippingDistance = shippingDistance; } public double getWeight() { return weight; } public void setWeight(double weight) { this.weight = weight; } public double getShippingCharges(){ //Determines the rate by setting numPer500 = (distance/500.25), rounds up double numPer500=(double)((int)(shippingDistance/500.25)+1); //series of if statements to determine the shipping charges if(shippingDistance==0) return 0.00*numPer500; if(weight==0) return 0.00*numPer500; if(weight<=2) return 1.10*numPer500; if(weight>2&&weight<=6) return 2.20*numPer500; if(weight>6&&weight<=10) return 3.70*numPer500; if(weight>10) return 4.80*numPer500; return 0; } //API- toString: "This object, (which is already a string!), is itself returned." public String toString(){ return "Weight of package="+weight+"kg. Shipping distance="+shippingDistance+"km. Cost to send the package is $"+getShippingCharges(); } } DRIVER import java.util.Scanner; //import scanner class import java.text.NumberFormat; //NumberFormat class to help parse numbers import java.text.DecimalFormat; //DecimalFormat class to help format decimals public class TesterShippingCharges { public static void main(String[] args){ NumberFormat formatter = new DecimalFormat("#0.00"); ShippingCharges num = new ShippingCharges();//num finds out the multiplication Scanner scanner = new Scanner(System.in); // factor for the price ShippingCharges scan=new ShippingCharges(); while (true){ //Causes infinite loop System.out.print("Weight of package: "); // to continue prompting user. double weight = scanner.nextDouble(); //Retrieves user's input weight num.setWeight(weight); // and sets equal to weight. System.out.print("Distance to be sent: "); //Prompts user for distance double distance = scanner.nextDouble(); //Retrieves user's input distance num.setShippingDistance(distance); // and sets eual to distance scan.setWeight(weight); scan.setShippingDistance(distance); num.getShippingCharges(); System.out.println(scan); if (weight==10000) break; } } } Thanks for reading! Thanks even more if you could help with a fix! This post has been edited by MemoryLeak2013: 15 July 2010 - 02:20 PM
http://www.dreamincode.net/forums/topic/181721-print-a-report-after-instead-of-case-by-case/
CC-MAIN-2018-17
en
refinedweb
I have a template: var colDateTemplate = { editable: true, width: 200, align: "center", sortable: true, sorttype: "date", editrules: { date: true }, editoptions: { size: 20, maxlengh: 10, dataInit: function(element) { $(element).datepicker({ showOn: 'both', dateFormat: 'dd/mm/yy', buttonImage: '/images/calendar-icon.png', buttonImageOnly: true }); } }, searchoptions: { sopt: ['eq', 'ne', 'lt', 'le', 'gt', 'ge', 'nu', 'nn'], dataInit: function(element) { $(element).datepicker({ showOn: 'both', dateFormat: 'dd/mm/yy', buttonImage: '/images/calendar-icon.png', buttonImageOnly: true }); } } } And when I'm searching and select th Is that a Custom Control? If so, you need to add a public property in it's code behind that will get or set the readonly property of the textbox of your calendar. public bool ReadOnly { get { return WatermarkExtender1.ReadOnly; } set { WatermarkExtender1.ReadOnly = value; } } Then you can set it like dtSupBookedFromDate.ReadOnly = true; You should make your Custom Control like this... <UserControl x:Class="WpfApplication2.ucCalender" xmlns="" xmlns:x="" xmlns:mc="" xmlns:d="" mc:Ignorable="d" d: <Grid Height="48" Width="303"> <Grid.ColumnDefinitions> <ColumnDefinition Width="100*" /> <ColumnDefinition Width="100*" /> <ColumnDefinition Width="100*" /> </Grid.ColumnDefinitions> <ComboBox Height="30" HorizontalAlignment="Left" Margin="2,2,2,2" You have to unbind/bind your datepicker to your TextBox every time the page (re-)loads. I don't know how your timepicker works, but with jQueryUI's you can use something like that (js): function pageLoad() { $("#myTextBox").unbind(); $("#myTextBox").datepicker(); } Your textbox should use ClientIDMode="Static" for that. can you please check this link where you can popup datepicker with textbox id I think that the reason of the problem which you described is wrong parameters of jQuery UI Datepicker which you use. You use formatter and formatoptions parameters of datepicker which not exists. Instead of that you should use dateFormat option which format described here. UPDATED: The main reason of the problem which you describe is wrong date format which you use. It's important to understand that srcformat and newformat of formatoptions should have PHP date format, but jQuery UI Datepicker supports another format which described here for example. The input data 11/1/2013 12:00:00 AM contains both date and time. On the other side jQuery UI Datepicker support date only. Because you use newformat: 'yy-M-d' then I suppose that you want just ignore the time part of the date. In the case I I think the problem is, that the DatePicker isn't a single Widget but a bunch of them. And according to the documentation they are always used within Dialogs, e.g. DatePickerDialog. So I suggest you create a button dynamically and add an on-click callback that opens a DatePickerDialog. The best idea is to consider using a custom binding like the one in this answer: knockoutjs databind with jquery-ui datepicker or this answer: Knockout with Jquery UI datepicker. If you don't need the two-way data-binding, then a minimal custom binding would be simply: ko.bindingHandlers.datepicker = { init: function(element) { $(element).datepicker(); //handle disposal (if KO removes by the template binding) ko.utils.domNodeDisposal.addDisposeCallback(element, function () { $(element).datepicker("destroy"); }); } }; Note: This answer isn't complete. However, as it's too large for a comment and probably helpful (towards a solution) anyhow I've taken the liberty and posted it as an answer. Anyone that can complete this answer is invited to do so through an edit or by spinning off into another better answer. The thing that seems to be spoiling the party (taken from the documentation, emphasis mine): This will be called once when the binding is first applied to an element, and again whenever the associated observable changes value. If I hackisly prevent the update from doing its datepicker calls in the first update the datepicker won't break anymore (other issues do arise though). For the init I've added this line at the end: element.isFirstRun = true; Then the update method will do the follow Bootstrap datapicker plugin just support left orientation. You can check the original code here. However, it has some smartness to keep it in view. All you can use is: $("#dp3").datepicker({ orientation: 'left' }); orientation as right, top are not supported. I thought a little bit and I think what I would do is not possible... How could the datpicker calculate the days of the month if the year is not specified? So, Have you got any suggestions to force my users to change the year default value ? Please try the below: <script type = "text/javascript"> $(function () { $("#from").datepicker({ onSelect: function (date) { $("#to").datepicker("option","minDate", date); $("#to").datepicker("option","maxDate", new Date()); }, maxDate: 0 }); }); </script> red one, It would be helpful to share a jsFiddle of your code. At least let us know what scripts you are loading and styles you are loading and in which order. Since the datepicker is dependent on jQuery you need to make sure that jQuery is loading before your .js files. You asked about the bootstrap-responsive and bootstrap files. As long as you are linking to the correct one it should work. Hopefully that gives you a little direction. Will update when you share more information. Based on your comment above, I would guess your code is not executing because you have not included jquery.timePicker.js in your code. Download the file and include it like this: <script src="path/to/your/js/file/jquery.timePicker.js"></script> You don't need to include datePicker.js because datepicker is a plugin included with the jquery-ui library. (So really you do have datepicker included!) Also, looking at your comment, you do not need to have a ; after declaring a <script> tag Like this: <script></script> Not like this: <script></script>; EDIT I found the issue, it appears the jquery.timepicker.js library is quite old (2009). It was developed with a much older jquery version. When I run jquery.timepicker.js with a newer version of Did you google around? There are lot of links that can help you Instead of using date type inputs, just use text inputs. Since you don't care about date input support, you no longer need the Modernizr test. You may need to give them all the same class so that you can select them to initialize the date picker widget: HTML <input type="text" class="date"/> jQuery $('input.date').datepicker(); Working Demo take a look at the following demo: There is button which says "Set Min Date". so this will set Min date of another date picker. Basically the controls have a rich client side API. so you need to use the right methods to work with the controls from client side. So here is the JavaScript code to set Min Date on a date picker control: function getDatePicker() { var datePicker = $('#DatePicker').data("tDatePicker"); return datePicker; } function minDatePicker() { var value = $('#DatePickerMinOrMax').data("tDatePicker").value(); getDatePicker().min(value); } In the above code - DatePicker is the main control on which I want to set a min date. DatePickerMinorMax Below is the snippet for above requirement...... <head> <title>Testing</title> <link href="css/jquery-ui-start-custom-1.10.3.css" rel="stylesheet" type="text/css" /> <style type="text/css"> .ui-datepicker-next, .ui-datepicker-prev { display: none; } </style> <script src="scripts/jquery-1.10.1.min.js" type="text/javascript"></script> <script src="scripts/jquery-ui-custom-1.10.3.min.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { $('.ui-datepicker-prev ui-corner-all').hide(); $("#datepicker").datepicker({ changeMonth: true, changeYear: true, yearRange: "-0:c); That version of DatePicker uses hided inputs. So what you see is not what your input has as value, it has a unix datestamp. If you noticed the Date.parse($('arrival_date').value); was in the 70's, that's because javascript uses miliseconds and unix time is in seconds. To get the "real" day from the $('arrival_date').value you need to multiply by 1000. But you actually dont need that. To make the second DatePicker get the day from the first you can use this: onSelect: function (){ var depDate = $('departure_date'); if (depDate.value == ""){ depDate.value = $('arrival_date').value + 86400; } } This will add one day to the first datepicker date if the second is empty. The thing is that you will not see the change until the you open the datepicker, so if you change th You can get the options of the existing datepicker using this: var options = jquery('#datepicker').datepicker('option', 'all'); to get one specific option, for example the numberOfMonths: var numberOfMonths = jquery('#datepicker').datepicker('option', 'numberOfMonths'); I used this code snippet in a project once. Hope it helps: $(function() { calendar(); }); function calendar(){ //omitted datepicker init $(".calendar").datepicker("option", $.datepicker.regional["<%=locale%>"]); } All date field's css attributes are specified as calender and locale is retrieve in java code. I'm not familiar with CookieLocaleResolver, I used SessionLocalResolver but there should be equivalent way to get the Locale. public static Locale getLocaleFrom(HttpSession session) { return (Locale) session .getAttribute(SessionLocaleResolver.LOCALE_SESSION_ATTRIBUTE_NAME); } And I remember we modified the datepicker for different locale values between Java and datepicker. You could use a mapping mechanism as an alternative solution.. Try binding SelectedItem property to ControlTemplate <ScrollViewer x:Name="PART_ContentHost" Content={TemplateBinding SelectedItem} can declare the DataContext property of the <TextBlockControl /> as the Text property of the "editor" <TextBox />: <wpfSandbox:TextBlockControl DataContext="{Binding Text, ElementName=editor}" /> and inside your control: <Grid> <TextBlock Text="{Binding}" /> </Grid>?
http://www.w3hello.com/questions/-DatePicker-control-in-Asp-net-
CC-MAIN-2018-17
en
refinedweb
load_config( )) Because the environment is reused between builds, the config is not cached on the environment but needs to be explicitly loaded. This happens with the help of the load_config method. It returns a config object that gives access to the settings in the project file. These settings are work in progress and if you want to know how to use the config file and what to do with it, you have to consult the source code. from lektor.project import Project project = Project.discover() env = project.make_env() config = env.load_config()
https://www.getlektor.com/docs/api/environment/load-config/
CC-MAIN-2018-17
en
refinedweb
If you were to broadly characterize the source code of C and C++ programs, you might say that a C program is a set of functions and data types, and that a C++ program is a set of functions and classes. A C# program, however, is a set of type declarations. The source code of a C# program or DLL is a set of one or more type declarations. For an executable, one of the types declared must be a class that includes a method called Main. A namespace is a way of grouping a related set of type declarations and giving the group a name. Since your program is a related set of type declarations, you will generally declare your program inside a namespace you create. For example, ... No credit card required
https://www.safaribooksonline.com/library/view/illustrated-c-2008/9781590599549/ch03.html
CC-MAIN-2018-17
en
refinedweb
You can block almost all signals, with the notable exception of SIGKILL. By default the kill command sends SIGTERM, which you can block. Read about the sigaction system call to learn how to block signals. No; when you stop Tomcat, the application context is torn down and Spring Integration doesn't have any control. We introduced a new Orderly Shutdown feature in 2.2 using a JMX operation that allows stopping all active components (pollers, JMS listener containers, etc) then waiting some time for messages to quiesce. Some endpoints (such as the http inbound) are aware of this state and won't allow new requests to come in, while staying active to handle active threads. It's not a perfect solution but it covers the vast majority of use cases. I have amended your code as per below: Option Explicit Dim result result = MsgBox ("Shutdown?", vbYesNo, "Yes/No Exm") Select Case result Case vbYes MsgBox("shuting down ...") Dim objShell Set objShell = WScript.CreateObject("WScript.Shell") objShell.Run "C:WINDOWSsystem32shutdown.exe -r -t 20" Case vbNo MsgBox("Ok") End Select The main issues were that "option explicit" has to be at the top, and as a result the "result" variable then must be declared using the "dim" keyword. The above code works fine when I executed it via the command line. I also added a timeout of 20, but you can easily change this back to the original value of 0. You can use the appcmd command line utility for managing sites on IIS. It's located in %systemroot%system32inetsrvAPPCMD. I think it is available in IIS v7 and above only though, not sure if your using an older version of IIS. To stop and start a site, the command will look like the following: %systemroot%system32inetsrvAPPCMD stop site <Your Site's Name> %systemroot%system32inetsrvAPPCMD start site <Your Site's Name> More info on the appcmd utility is here: This is actually not JUnit but the external systems which run JUnit test (like Eclipse or Maven) who are responsible for terminating JVM. Those call System.exit which stops all the threads. If JUnit did it then the external system would have no chance to process the results. You need to handle the WM_QUERYENDSESSION messsage. It's sent to each application before Windows starts the shutdown process. Do what you need quickly, because failure to respond rapidly enough causes the behavior you're observing in FireFox, which is usually a sign of a badly designed app (and the user may terminate it before you get a chance to finish). interface ... type TForm1 = class(TForm) procedure WMQueryEndSession(var Msg: TWMQueryEndSession); message WM_QUERYENDSESSION; end; implementation procedure TForm1.WMQueryEndSession(var Msg: TWMQueryEndSession); begin // Do what you need to do (quickly!) before closing Msg.Result := True; end; (Just as an aside: The enabling/disabling of sounds is a per-user setting, and you should have a very good need for inter The if (running == 0) but is pointless! while (running == 1) { commands(); } return 0; Does exactly the same - once running is 0 it falls out the bottom of the loop, and main returns. The whole idea of the global running is getting into side effect programming, which is a bad thing! Block all incoming request. Add a filter to your application with a boolean to accept request, by default it accepts. Add a ContextListener. and complete the method onContextDestroyed() add your code to modify that boolean @echo off && for /r %F in (*) do if %~zF==0 del "%F" > NUL The > NUL is because I can't recall if certain situations cause del to try to output This usually results when you format the namenode and don't do a datanode clean up and you namespace changes. Probable solution: Try deleting the /app/hadoop/tmp/dfs/data directory and restart the datanode Updated answer. Some options: In your terminal (dev mode basically), just type "Ctrl-C" If you started it as a daemon (-d) find the PID and kill the process: SIGTERM will shut Elasticsearch down cleanly If running as a service, run something like service elasticsearch stop. Previous answer. It's now deprecated from 1.6. Yeah. See admin cluster nodes shutdown documentation Basically: # Shutdown local node $ curl -XPOST '' # Shutdown all nodes in the cluster $ curl -XPOST '' Specify a datastore path that is not in /tmp. By default /tmp is a memory based filesystem and will therefore be cleared on each reboot. For instance: dev_appserver.py --datastore_path=/home/newapp_datastore /home/newapp The issue is that you have not been given XML and the parser legitimately gets in a mess as it sees data that is not legal.. The XML specification says. Thus you have to alter the XML and replace & by & We dealt with the same problem earlier this year. First, you need to look through your log files to determine which site(s) are getting attacked. My guess would be that the sites being hit at 1.5 sites, without exception that was the case on our servers. If that is the case, then those sites need to be let go. If they don't want to upgrade, they need to take their sites elsewhere. Simple as that. You cannot risk your other sites and email blacklists due to customers that don't want to upgrade. We don't allow J1.5 on our servers any more. iGoogle has already expired but not the gadget link you have posted. Currently it returns 200 OK with probably your desired message. Update: It got expired too. No Gadgets are available at the moment. Update: Available from here This is a code from threading.py module: import sys as _sys class Thread(_Verbose): def _bootstrap_inner(self): # some code # If sys.stderr is no more (most likely from interpreter # shutdown) use self._stderr. Otherwise still use sys (as in # _sys) in case sys.stderr was redefined since the creation of # self. if _sys: _sys.stderr.write("Exception in thread %s: %s " % (self.name, _format_exc())) else: # some code might be helpful. The error you see comes from else statement. So in your case: import sys as _sys while True: if not _sys: break/return/die/whatever do_something() time.sleep(interval) I'm not sure if it works though I have not tried with spring quartz. However, normally while using Quartz, we shutdown the quartz gracefully. By gracefully, I mean here that I will shutdown the scheduler only after executing all my pending jobs(jobs which are executed currently but have not yet marked their completion). For graceful shutdowns, we pass attribute true while using method shutdown. Refer API here I am eager to know how spring quartz implementation does this. I'd suggest creating a project directory for each VM you plan to use. If you change into that empty project directory before doing vagrant init a dedicated Vagrantfile is created for that project/VM, which then can be customized to your needs. To use that customized Vagrantfile then, just run vagrant up from inside your projects directory. Not sure if this solves your problem but it's worth a try I guess. ;-) Btw. you can check if your VM is running with the command vagrant status [machine-name]. The org.eclipse.jetty.server.Server has a .stop() method you can call to perform a graceful shutdown of the Jetty server instance. Also, in Eclipse, be sure you have the "Debug View" open. Window > Show View > Other > Debug That will show you the list of running processes that any Eclipse plugin started for you. You might think you terminated it, but it might still be registered as running. I'm not sure if there is some shutdown method, but there are two methods which can help. As it is written in the docs: When the dispatcher no longer has any actors registered, how long will it wait until it shuts itself down, defaulting to your akka configs "akka. You can set akka.actor.default-dispatcher.shutdown-timeout in reference.conf and then detach you actor from your dispatcher. Try this solution: "I was able to fix it: Uninstalling Snippet Designer and deleting the Code Snippets folder on C:UsersUSERDocumentsVisual Studio 2012" Taken
http://www.w3hello.com/questions/-Shut-down-of-asp-net-process-
CC-MAIN-2018-17
en
refinedweb
Testimonial User Guide Web Time Clock and Attendance Management Overview HR.my Attendance Management is made easy with the integration of free Time Clock and Field Check-In , where employees could easily punch in or punch out through their Employee Web Accounts . Nevertheless, using the Time Clock is not mandatory, and you are free to enter employees' attendance in Attendance Management manually. Time Clock and Field Check-In are features that you may use independently, which means it is not compulsory for you to use both features together. Both features support Geolocation and Selfie for better tracking and security purpose. Time Clock is used for daily employee attendance management purpose. For instance, when an employee reports to work, he/ she clocks in and when the employee is about to leave, he/ she clocks out. Field Check-In, on the other hand, is a feature that enables you to track your mobile workforce with ease. For instance, you may have sales people that are doing outside sales. When the sales people are visiting clients, you may request them to check in regularly, such as on every new client visit. Find out more about using Field Check-In to track mobile workforce below. In the event that you have already had an in-house Time Clock or Biometric Attendance System, you may still want to import your Time Clock data into your Employer account. Doing so will save you a good deal of effort of routinely having to convert raw Time Clock data into insightful attendance reports. Additionally, your employees will also be able to check their attendance records from their web accounts, which makes the attendance management more transparent and efficient. Since Time Clock tracks employee clock-in and clock-out times, it is very important that you have updated your Employer Time Zone Setting correctly. You may do so via Employer->Settings . Geolocation and Selfie Geolocation is a feature that captures the latitude and longitude of an employee's location when he/ she performs clocking. When Geolocation data is available, you may easily locate the employee's position on map services such as Google Maps . You may check an employee's location on Google Maps by clicking at the Geolocation link in Team->Attendance Management . Selfie is mainly used to prevent Buddy Punching . You may view an employee's Selfie by clicking at the Selfie link in Team->Attendance Management . You could customize both Geolocation and Selfie options for Time Clock via Employer->Settings . The default setting for both Geolocation and Selfie for Time Clock is Optional . Since Field Check-In is used for tracking mobile workforce , as a result, both Geolocation and Selfie are mandatory. Both Geolocation and Selfie are viewable via Team->Field Check-In Summary . Web Clock In and Clock Out by Employee As an employee, you may sign into your Employee Web Account, then perform clock-in or clock-out via Home->Time Clock . A Time Clock window will be shown with the proper action, i.e. Check In or , depending on the employee's Last Check-In or Last Check-Out state. When an employee is clocking in or out, he/ she may leave a note in the Remark field. For usual 9 to 5 work session, just leave "Today" selected in the For field. Check out more for clocking out overnight in the section below. Together with the clock-in/ clock-out time, the IP Address from where an employee performs the action will also be captured. An employee can always check his/ her current month (or any other month's) attendance records via Team->Attendance in his/ her Employee Web Account. Overnight Clocking for Night Shifters When an employee attends night shift (such as a shift that starts at 2018-01-18 22:00:00 and ends at 2018-01-19 04:00:00), he/ she will just clock in as usual with "Today" selected in the For field (i.e. 2018-01-18). However, when the employee is clocking out overnight, he/ she will be presented with an additional option in the For field of the Time Clock window, which is last night's date (i.e. 2018-01-18) besides "Today" (i.e. 2018-01-19). For this example, the night shift employee will need to make sure that last night's date (i.e. 2018-01-18) is selected, otherwise his/ her clocking will start a new day instead. By default, within 12 hours from the first clock-in, HR.my will automatically select last night's date for overnight clocking. An employee is free to change the For field option, particularly if there is a need to start the work session for a new day. Once a new day clocking is started, an employee will not be able to clock for last night any more without the intervention of HR.my Administrator. Attendance Tracking If you add a new employee attendance record manually via Team->Attendance , it will first appear as "Absent" until you manually clock in for him/ her. Similar to employee web clock-in, once you have added a clock-in for an employee manually, his/ her status will change to "IN" to indicate that the employee is currently clocked in. When an employee clocks out, or if you add a last clock out record to the employee, his/ her status will be updated to "Present" for that particular day. When you are adding employee clock-in/ clock-out records in the Employer account, your role will also be recorded, so that it is very easy to discern if a time record is clocked by an employee, or by HR.my Manager . The work duration will be automatically calculated based on the difference between the First Check-In and the Last Check-Out . Employee Forgetting to Clock In or Clock Out It is quite common that employees may forget to clock in or clock out at times. When this happens: Should an employee forget to clock out last night, when he/ she arrives at work the next morning, he/ she may have 2 options depending on the time elapsed since his/ her first clock-in on the previous day. If it is within 24 hours from the first clock-in, then the employee would still be able to perform a clock-out (an incorrect record though), or to clock in for the new day. The attendance of previous day will continue to show as "IN" for the day that the employee forgets to clock out, until a clock-out record is added by HR.my Administrator or HR.my Manager. Should an employee forget to clock in on any day, when he/ she is about to leave, he/ she cannot perform clock-out by then as he/ she is not previously clocked in. In either case, the employee will need to request HR.my Administrator or HR.my Manager to perform a manual clock-out/ clock-in for him/ her on the day when clock-out/ clock-in is forgotten. Using Field Check-In to Track Mobile Workforce Field Check-In is designed to help Employers to track mobile workforce with ease. You may have the need to manage outside-sales people, field service agents etc in your organization. One of the commonly faced problem is knowing or verifying the whereabouts of the mobile workforce at certain moments, particularly when they claim to be present at locations that are highly dubious. In this case, you may request them to check in regularly, such as on every new client visit, or on certain time intervals. To perform a Field Check-In, just click at Home->Field Check-In . Wait until your web browser successfully retrieves current Geolocation info, then the Check In button will be enabled for clicking. Unlike the Time Clock window, the button in the Field Check-In window will always show as Check In , as this is the only action required. HR.my provides a Field Check-In report that is accessible to both Employer and Employees to check the location maps and selfies of your mobile workforce. If you are accessing this report as an employee, you will be able to view your own Field Check-In records, plus your team members' if you are either a Line Manager, Head of Department or Head of Branch.
https://hr.my/doc/timeclock-attendance-management.html
CC-MAIN-2018-17
en
refinedweb
I tried out lovely.remotetask for cron jobs in a sample grok site. It was high time for me to get dirty with utilities in a grok site. First things first: put lovely.remotetask in the install_requires section of your setup.py. Secondly, you need a “task service”: something that actually runs the tasks you give it to run. I registered it as a local utility in my grok site: class CronService(lovely.remotetask.TaskService): pass class Mysite(grok.Site): interface.implements(IMysite) grok.local_utility( CronService, provides=lovely.remotetask.interfacesITaskService, name='cronservice') Thirdly, you need the actual task. This just has to be a callable. For now I restricted myself to a simple “print” instead of immediately doing a zodb pack. Wrap it in a lovely.remotetask.task.SimpleTask and register it as a global utility with some name: def echo(): print "something" grok.global_utility(SimpleTask(echo), name='echo', direct=True) The last thing is to add the task from step 3 to the task service from step 2. There’s no out-of-the-box way to do that right away with utility registration or through configuration. You could react to the site’s object-created event, I guess. I recently experimented with a new buildout recipe that creates a site in an empty Data.fs and optionally calls a method after that by feeding a generated python file to zopectl run. So I set up a method that I could call from there: def setup_cron_job(site): every_minute = tuple(range(60)) service = site.getSiteManager().queryUtility(ITaskService, 'cronservice') if not service: raise RuntimeError("No cronservice utility found") tasks = [job.task for job in service.jobs.values()] if u'echo' not in tasks: service.addCronJob(u'echo', minute=every_minute) I got rewarded by a “something” being printed every minute. I’m not sure I’m happy with it yet. A regular cron job that calls a view is straightforward. But there’s no password in plain text anywhere (which you need with a cron job) on your filesystem and that is again a huge bonus. You even do without an admin user... I’ll have to see how it plays out in more elaborate situations. Comments? Better examples? An actual simple cron example with zc.async? Please mail them to reinout@vanrees.org :-) Comment by Radim Novotny: I’m using lovely.remotetask with Plone to query third party server for organization data. It is not cron-like service, but asynchronous query service. User enters ID of the organization, and as soon as blurs from the input field, remote task service is started. While user is entering additional details (email address) the service queries remote server for organization name and address. As soon as data from the remote server are ready, they are displayed on the page and entered to hidden fields of the input form. Remote query take about 1-2):
https://reinout.vanrees.org/weblog/2009/07/08/lovely-remotetask.html
CC-MAIN-2018-17
en
refinedweb
Here are some of the many common tasks where the Code Engine can help you transform and enrich your events: Suppose that an input sets a field for the user’s login status, and you wish to only record events from users who are logged in. The following code could be used to discard events where the user is not logged in. def transform(event): if (event['login_status'] == false): return None else: return event If you have a data source that includes, for example, a table you want to blacklist/remove, you can do something like this: # # Removing Data from Alooma and Blacklisting Certain Data # # Input <-> table blacklist mapping BLACKLIST_MAPPING = { '<input_label>': ["ugly_table"] } def is_blacklisted(event, input_label): if input_label in BLACKLIST_MAPPING: event_type = event['_metadata']['event_type'] if event_type in BLACKLIST_MAPPING[input_label]: return True return False # The transform checks to see if the event/input_label is blacklisted def transform(event): input_label = event['_metadata']['input_label'] if is_blacklisted(event, input_label): return None event['_metadata']['event_type'] = "%s.%s" %(dataset_name, event['_metadata']['event_type']) return event An event can be split into multiple events. For example, suppose incoming events each include a list of websites visited by a user, and you want a separate event for every website that each user visits. For example, from a single event, the below function returns a list of event dictionaries, where each dictionary is composed of a site and the user from the original event. def transform(event): event_list = [] for site in event['sites']: site_visit = {} site_visit['site'] = site site_visit['user'] = event['user'] event_list.append(site_visit) return event_list After returning multiple events, each event is automatically packaged with a _metadata dictionary corresponding to its parent event. However, the metadata fields on such events are not available for access in the Code Engine. Thus, the _metadata fields cannot be transformed unless explicitly copied to each event object. The following code example amends the previous example with an explicit metadata copy and field assignment: from copy import deepcopy def transform(event): event_list = [] for site in event['sites']: site_visit = {} site_visit['site'] = site site_visit['user'] = event['user'] site_visit['_metadata'] = deepcopy(event['_metadata']) site_visit['_metadata']['event_type'] = "transform_code" event_list.append(site_visit) return event_list Regardless of whether the _metadata dictionary is added automatically or explicitly, the dictionary will appear in the Mapper. The _metadata dictionary and its fields are discussed here. The Alooma Code Engine supports direct extraction of geographical information from IP addresses. Simply import the geoip library and call the geoip.lookup function on an IP address. The function returns an object containing the country, country code, continent, continent code, region, city, and postal (zip) code. Given the following sample event: { "_metadata": { "input_label": "REST_Endpoint", "event_type": "REST_Endpoint", "client_ip": "194.153.110.160", "@version": "1", "@timestamp": "2015-10-16T15:26:57.027Z", "host": "172.17.0.73", "@uuid": "af721753-370a-4cc8-9463-5f62c82988e2", "@parent_uuid": "" } } And the following transform code: The transformed event appears below. Note the new country, country_code, continent, continent_code, region, city and { "country": "France", "countrycode": "FR", "continent": "Europe", "continent_code": "EU", "region": "J", "city": "Paris", "postal_code": "75001", "_metadata": { "@timestamp": "2015-10-16T15:26:57.027Z", "@uuid": "af721753-370a-4cc8-9463-5f62c82988e2", "@version": "1", "host": "172.17.0.73", "client_ip": "194.153.110.160", "input_label": "REST_Endpoint", "event_type": "REST_Endpoint", "@parent_uuid": "" } } Geo-IP resolution works on both IPv4 and IPv6 addresses. If an IP address is invalid, or in the rare case that a country/continent cannot be found, then the lookup function returns None for both values. City and postal code data is less comprehensive, and may be None if there is no information for a given IP address. Alooma provides an API to generate notifications that appear in the notification pane of the Dashboard page. You can generate notifications to display information, warnings, and errors. A notification has two string arguments: a title and a description. Multiple notifications are aggregated by their title when received within 15 minutes of one another. Aggregated notifications can be expanded in the notification pane of the Dashboard page in order to see the separate descriptions for each notification. Note that when running code in the Code Engine, notifications from the execution will not appear in the notification pane. The Alooma Code-Engine supports user-agent parsing using the ua-parser library. Given the following sample event: { "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.104 Safari/537.36", "_metadata": { ... } } And the following transform code: from ua_parser import user_agent_parser def transform(event): result = user_agent_parser.Parse(event['user_agent']) event['browser'] = result['user_agent']['family'] event['OS'] = result['os']['family'] return event The transformed event appears below. { "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.104 Safari/537.36", "browser": "Chrome", "OS": "Mac OS X", "_metadata": { ... } } Flexible date/time string parsing is provided via the Python dateutil module. Given the following sample event: { "user_time": "Sat Oct 14 07:13:46 UTC 2013", "_metadata": { ... } } And the following transform code: from dateutil.parser import parse def transform(event): timestamp = parse(event['user_time']) # type(timestamp) => <type 'datetime.datetime'> event['user_time'] = timestamp.isoformat() return event The transformed event appears below. Notice user_time is now ISO-format. { "user_time": "2013-10-14T07:13:46+00:00", "_metadata": { ... } } Transform code often has long or nested conditional statements to check for the presence of nested dictionary elements in the event object. This convention can result in cumbersome code, but is necessary to avoid KeyError exceptions when accessing a dictionary. The following get function provides a shortcut to retrieve values if they exist, and avoids KeyError exceptions if values do not exist. def get(dictobj, *path): ''' Get path from dictobj. Returns None if path does not exist. e.g.: dictobj = {'parent': {'child': {'grandchild': 'foo'}}} get(dictobj, 'parent', 'child', 'grandchild') > 'foo' get(dictobj, 'some', 'other', 'path') > None Can be used to simplify code like: if ('data' in event and 'url' in event['data'] and event['data']['url'] == 'xxx') to: if get(event, 'data', 'url') == 'xxx' ''' element = dictobj for path_element in path[:-1]: if path_element not in element: return None element = element.get(path_element) return element.get(path[-1], None) For example, accessing event['_metadata']['client_ip'] results in a KeyError if the event is missing either the _metadata dictionary or client_ip key. In contrast, get(event, ‘_metadata’,’client_ip’) gracefully returns None if any dictionary elements are missing or if the returned value is equal to None. Now that you've seen the basics of the Code Engine, continue to learn about testing your code in the UI or programmatically.. api = alooma.Alooma( 'app.alooma.com', '<YOUR USERNAME>', '<YOUR PASSWORD>',. # This step can equivalently be done testing your Code Engine code using alooma.py. Article is closed for comments.
https://support.alooma.com/hc/en-us/articles/360000698651-Code-Engine-Common-Tasks
CC-MAIN-2018-17
en
refinedweb
Sending complex typed data from web service to SilverLight client – Part I In this post I’ll explain in how to send complex typed data between web service and SilverLight client code. In order to avoid security issues I’ll use JavaScript as mediator between my server code and SilverLight code. First – Let’s have a look at a solution contains web server side code with all layers (Presentation that will host silverlight control , BL, DAL, Structures) and another SilverLight project Now, leaving out all server-side lower layers and focusing on presentation layer and SilverLight code, let’s have a look at what we’ll demonstrate: 1. Getting complex typed data from the server via JavaScript . 2. Sending some of the data to SilverLight control 3. Presenting data on Silverlight control (A simple ListBox for that matter). Step 1 – Create a server web method to provide as with the data we need: [WebMethod(EnableSession = true)] public List<Employee> GetAllEmployees() { var result = new List<Employee>(); //Adding some employees to the collection result.Add(new Employee() {EmployeeNumber = 1, FirstName = “Ran”, LastName = “Wahle”}); result.Add(new Employee() {EmployeeNumber = 2, FirstName = “Anonymous”, LastName = “Employee” }); result.Add(new Employee() {EmployeeNumber = 3 , FirstName = “Known”, LastName = “Employee”}); return result; } Step 2 – Create object similar to the one in the server site. At this stage the word “Why” might come to mind followed by a question mark… The reason for that is simple – we cannot make reference from SilverLight project to a regular .Net one because SilverLight runs on compact framework and therefore can have referenced to SilverLight projects only. However, You might of course use the service referenced object if you connect to a service, here we don’t do it to avoid security failures may caused by connection service directly from the Silverlight object. public class Employee { public string FirstName { get; set; } public string LastName { get; set; } public string FullName { get { return string.Format(“{0} {1}”, FirstName, LastName); } } public int EmployeeNumber { get; set; } } On the second part I’ll explain how to serialize the objects through JSON. Will you publish the full code for download? Yes. Thanks for the remark.
http://blogs.microsoft.co.il/ranw/2008/07/29/sending-complex-typed-data-from-web-service-to-silverlight-client-part-i/
CC-MAIN-2018-17
en
refinedweb
Introduction In my point of view the release of the SDK for the ABAP development tools (cf. First version: SDK for ABAP development tools) was a major change to the way SAP provides developer tools. For the first time it is possible to easily extend the development environment provided by SAP. Therefore, the SDK enables developers to enhance the development environment with new functionality or missing features. One feature that is missing in the the SE80 as well as in the ABAP development tools is an simple way to create transports of copies. While the usage of transports of copies is recommended e.g. by the German SAP User Group (DSAG) in its development best practices guide line (cf. Andreas Wiegensteins blog DSAG Best Practice Guideline for ABAP Development now available in English), creating transports of copies is quite cumbersome. As a consequence custom transactions or reports are usually used to simplify the creation of transports of copies. While such a report simplifies the creation of transports of copies it is not integrated into the development environment. consequently, the usability is far from perfect. In this blog series I’ll show how I extended the transport organizer view of the ABAP development tools. The plug-in that I’ve developed enable the creation of a transport of copies with just one click. The blog series consist of three parts. In the first one I’ll explain the eclipse side of the plug-in, in the second part (Creating a ABAP in Eclipse plug-in using the ADT SDK – Part 2) the ABAP side. The third part will focus on releasing the plugin using github. The idea behind this blog series is twofold. First, I want to provide an overview of what is necessary in order to create a ABAP development tool plug-in that invokes some back end functionality. Second, I hope to encourage other developers to create and release useful enhancements for the ABAP development tools. Consequently the complete source code of the plug-in is available at ceedee666/adt_transport_utils_plugin · GitHub. Getting Started If you are not yet familiar with the SDK for the ABAP development tools I would recommend the following documents and blogs. The two blogs Starting with ADT SDK is easy – Pt.1 and Starting with ADT SDK is easy – create your own transaction launcher (Pt.2) by SAP mentor Martin Steinberg provide a nice starting point. In addition to that the excellent document NetWeaver How-To Guide: SDK for the ABAP Development Tools by Wolfgang Woehrle provides a full tutorial including the development and invocation of back end functionality. Developing the Eclipse Plug-In As Martin Steinberg already has described hot to create a simple eclipse plug-in I will skip this step and start with the necessary configuration of the the plug-in project. In order to use the SDK some ABAP development tools plug-ins need to be added to the list of required plug-ins. This is done by editing the plugin.xml file. After opening the plugin.xml file in the editor, the required plug-ins can be added in the dependencies tab. Unfortunately there is currently no documentation available which plug-in contains which classes. So finding the correct plug-in to add to required plug-ins list is sometimes quite challenging. Furthermore, the plug-ins are provided by SAP without the plug-in source code. This makes understanding the usage of some methods as well as debugging quite difficult. Hopefully, SAP will address these two issues in future releases of the SDK. Extending the Transport Organizer View The next step is to extend the transport organizer view with the menu entries to create the transport of copies. I wanted to add two feature to the transport organizer view: - the possibility to create a transport of copies based on existing transport request - the possibility to create and directly release a transport of copies based on en existing request. To provide this functionality, I created the menu entries shown in the following screen shot: The eclipse RCP enable the extension of views and menus using extension points. In order to extend the menu of the transport organizer view it is first necessary to identify the appropriate extension point. This can be done using the eclipse plug-in spy. By pressing <ATL>+<SHIFT>+<F1> the plug-in spy shows information regarding the current view. The information provided by the plug-in spy are shown in the following screenshot. In order to extend the menu of the transport organizer view the information regarding the active menu contribution identifier, in this case com.sap.adt.tm.ui.tmview, is important. In order to extend the popup menu of the transport organizer view a menu extension (extension point org.eclipse.ui.menus) needs to be added to the “Extensions” section of the plugin.xml file. The easiest way to add an extension is to use the “Extensions Wizards”. These wizards can be started by clicking on the “Add” button in the extensions tab. In order to create the menu items shown above I added a menu contribution consisting of a menu and two commands to the extension point. Furthermore, I created a icon for each of the menu and the commands. The icons are located in the icons folder in the plug-in project. To add an icon to a menu or command is simply done using the “Browse…” button next to the icon element in the “Extension Element Details” part of the editor. Finally, I also externalized the strings like label and tooltip to allow for late internationalization of the plug-in. This can easily be done by right clicking on an extension element and selecting “Externalize strings” in the popup menu. This will create a folder named OSGI-INF in the plug-in project. The folder contains a file named bundle.properties that contains the externalized strings. Implementing the Menu Command Handlers The next step is to implement the command handlers underlying the menu entries. Command handlers are created by extending the extension point org.eclipse.ui.handlers. The link between a command and a handler is created via the commandID. In the screenshot above the command “Create Transport of Copies” is linked to the commandID adt_transport_utils_plugin.commands.createAndRelease. The command itself is implemented by a class de.drumm.adt_transport_utils.ui.CreateAndReleaseTransportOfCopiesHandler. Furthermore, a expression is linked to each command specifying when the command is enabled. In this example the commands are only enabled when only one item is selected an the item in a subclass of com.sap.adt.tm.IRequest. The expressions are defined using the Command Core Expressions. A details explanation of Command Core Expressions is beyond the scope of this blog post. A detailed explanation of Command Core Expressions is available at Command Core Expressions – Eclipsepedia. The handler class for a command is implemented by inheriting from org.eclipse.core.commands.AbstractHandler and implementing the execute method. The following code snippet shows the complete code of the CreateAndReleaseTransportOfCopiesHandler class. In the execute method the current selection is read from the active window. Using this information an instance of the TransportOfCopiesRequest class is created and the its executePost method is invoked. public class CreateAndReleaseTransportOfCopiesHandler extends AbstractHandler { public CreateAndReleaseTransportOfCopiesHandler() { } public Object execute(ExecutionEvent event) throws ExecutionException { IWorkbenchWindow window = HandlerUtil .getActiveWorkbenchWindowChecked(event); ITreeSelection selection = (ITreeSelection) window .getSelectionService().getSelection(); new TransportOfCopiesRequest(window, selection, true).executePost(); return null; } } The executePost method is shown in the next code snippet. In the method first the destination of the ABAP project is read. Next, the getResourceUri method uses teh discovery service provided by the ABAP development tools SDK to get the URI of the back end service implementation. Finally, the post method of this URI is invoked. public void executePost() { String destination = this.getAbapProjectDestination(); URI tocResourceUri = this.getResourceUri(destination); if (tocResourceUri != null) { this.executePost(destination, tocResourceUri); } else { MessageDialog.openError(this.window.getShell(), "ADT Transport Utils Error", "Necessary backend resource could not be found. "); } } In the getRessourceUri method the discovery service is instantiated wit a DISCOVERY_URI (in this case /ztransportutils/discovery). Next the discovery service is used to get an instance of IAdtDiscoveryCollectionMember using the TOC_RESOURCE_SCHEME () and the TOC_TERM (create_toc). These strings will be relevant when implementing the discoverable resource in the backend as they provide the connection between the discovery service and the concrete service implementation. private URI getResourceUri(String destination) { IAdtDiscovery discovery = AdtDiscoveryFactory.createDiscovery( destination, URI.create(IAdtTransportUtilsConstants.DISCOVERY_URI)); IAdtDiscoveryCollectionMember collectionMember = discovery .getCollectionMember( IAdtTransportUtilsConstants.TOC_RESOURCE_SCHEME, IAdtTransportUtilsConstants.TOC_TERM, new NullProgressMonitor()); if (collectionMember == null) { return null; } else { return collectionMember.getUri(); } } The executePost method is quite simple. It adds the ID of the selected transport request and if the transport of copies shall be released immediately as query parameters and executes the post method. The remainder of the executePost method then mainly consists of error handling. Plug-in Build Parameters & Testing The last thing that is necessary to complete the implementation of the eclipse part of the plug-in is to adjust the build parameters. Here it is necessary to add the icons folder to set of folders that need to be included in the binary build. This is done on the build tab of the plugin.xml file. While this isn’t required to test the plug-in in the development environment it is required to create a fully functional release of the plug-in afterwards. With the build parameters set up correctly it is now possible to perform a fist test of the plug in. The easiest way to do this is to right-click on the plug-in project an select Run as -> Eclipse application. This will start a new eclipse instance including the new plug-in. Summary The completion of the eclipse plug-in concludes the first part of the blog series. If you want to have a detailed lock on the source code of the plug-in itself, the easiest way is to use the Import -> Projects from Git wizard in eclipse to simply clone the git repository (ceedee666/adt_transport_utils_plugin · GitHub). If you only want to try out the plug-in, there is also a brief description of the installation procedure for the plug-in (for both, the eclipse and the ABAP part) available on the GitHup repository. Christian Hi Christian Many thanks for your update. I will check you text in order to update my document. Have a nice day. Kind regarads, Wolfgang Hi Wolfgang, thanks for the feedback. You should also have a look at the second part. The implementation of the ABAP side was the part that proved most difficult for me. Now that I know how to do it, I was able to understand your document much better 😉 Christian Hi Christian Thanks for your feedback. I would be happy to get more details from you, what needs to be made more understandable in the second part. Which information did you miss? I look forward to hearing from you. Kind regards, Wolfgang Hi Wolfgang! I am also going through your document and also Christian’s blog about the ABAP part of his plugin. I have to admit that I still do not get the whole picture. This will be my personal fault but I would love to know why I have to implement this BAdI and that Ressource App. Is there a new version of your tutorial available or is there some other document to guide someone through the first necessary steps? Thanks! Hi Markus Thank you for your feedback. Just to go for sure: Did you find the following document? There are many links in this blog. So, you might could have overseen it. In this document, see above, there are overviews in chapter 1.1 and 1.7 -> What information would you also like to see here? Regarding BAdi and Ressource App: Please see chapter 1.6., 2.3, and 2.3.3. Don´t hesitate to contact me if you have further questions. Thank you. Kind regards, Wolfgang Hi, thanks for this tutorial. I am currently working together with Markus Theilen on a plugin and we just managed to get the ADT SDK running. However there seems to be a problem with out launch configuration and we get an out of memory exception from time to time, as we start the new Eclipse application with the ADT plugins enabled. But currently I can’t yet narrow down the problem to post a bug report 🙁 . But I keep in searching. Thanks again! Hi Tim Could you send me the Support Collector file to this error, please? wolfgang.woehrle@sap.com To create it, choose “Help > Collect Support Information” from the ABAP in Eclipse menu bar. Thank you for your help. Kind regards, Wolfgang Hi Wolfgang! Thank you for your support! We finally managed to get what we wanted, but it took some time ti wrap my brain around the SDK and the tutorial. On Wednesday or Thursday I will probably meet with Thomas Fiedler to talk about the little plugin we did. I will try to make my difficulties speakable to that time. 🙂 Thanks, Markus
https://blogs.sap.com/2014/08/27/creating-a-abap-in-eclipse-plug-in-using-the-adt-sdk/
CC-MAIN-2018-17
en
refinedweb
Hello *, could someone please tell me whether JNDI can differentiate between a local and a remote access? I tested to lookup a resource adapter factory from a remote client and it failed with a NameNotFoundException. Hence, I assume, it is somehow possible to hide objects which are bound in JNDI from the public (but make them available locally). Is that correct? If yes, how I can configure this for a whole subcontext (and its children). Could someone please give me a hint or an URL to a howto? Thanks a lot in advance! Happy easter! Marco ;-) Anything in the "java:/" namespace cannot be accessed remotely. Seems to work! Thanks a lot!!! Marco ;-)
https://developer.jboss.org/thread/77203
CC-MAIN-2018-17
en
refinedweb
Dev Build 3163 is out now at This fixes a Windows performance regression in 3162, and has some character spacing tweaks for both MacOS and Windows. It also runs on FreeBSD again (via the Linux emulation layer). The Windows scrolling performance issue seems to be fixed now. Thanks! However maybe because of the change to text rendering, now the text is "flashing" when scrolling - it is most visible on the line numbers column and with a light colour scheme. 3162 & 3163 both suffer from this, 3161 is ok. Windows 10 x64, light theme & scheme, no changes to default text rendering settings in the OS. A few questions: font_options I used to use the Win 10 setting for "Bypassing the DPI behaviour of the application" (not sure how it is called in english version of Windows) in the compatibility tab, but since a few builds ago (3158 afaik), I turned that off, because ST became per-display DPI aware. 3163 is having trouble importing all files in a plugin. This is on OSX (all I've tested so far). Certain files don't get imported while others do. Did something change in how you guys import modules? This happens to a lot of my plugins, but here is one example: reloading plugin BracketHighlighter.bh_core Traceback (most recent call last): File "/Applications/Sublime Text.app/Contents/MacOS/sublime_plugin.py", line 116, in reload_plugin m = importlib.import_module(modulename) File "./python3.3/importlib/__init__.py", line 90, in import_module File "<frozen importlib._bootstrap>", line 1584, in _gcd_import File "<frozen importlib._bootstrap>", line 1565, in _find_and_load File "<frozen importlib._bootstrap>", line 1532, in _find_and_load_unlocked File "/Applications/Sublime Text.app/Contents/MacOS/sublime_plugin.py", line 1182, in load_module exec(compile(source, source_path, 'exec'), mod.__dict__) File "/Users/facelessuser/Library/Application Support/Sublime Text 3/Packages/BracketHighlighter/bh_core.py", line 17, in <module> import BracketHighlighter.bh_popup as bh_popup ImportError: No module named 'BracketHighlighter.bh_popup' reloading plugin BracketHighlighter.bh_logging reloading plugin BracketHighlighter.bh_plugin reloading plugin BracketHighlighter.bh_regions reloading plugin BracketHighlighter.bh_remove reloading plugin BracketHighlighter.bh_rules reloading plugin BracketHighlighter.bh_search reloading plugin BracketHighlighter.bh_swapping reloading plugin BracketHighlighter.bh_wrapping Looks like all imports of modules which are located within an overloading package fail. Example: This issue exists with all overloading packages, no matter whether they overload a default package or one located within the Installed Packages path. Installed Packages Not every overloaded file fails, but I do have a number of packages that are installed in Installed Packages, and I develop on them, unpacked, in Packages. So it kind of sounds like some override issues. Packages I may hold off on upgrading to this to see if we get some fixes in regards to package importing. I was a big fan of the character spacing in the previous version. Is there an option we can use to change it ourselves? Seems like the ZipLoader::has() method in sublime_plugin.py is the culprit. Since 3161 the first evaluation (commented out here) always returns ignoring all the following code paths. ZipLoader::has() sublime_plugin.py By reverting it, the import error disapears. class ZipLoader(object): def __init__(self, zippath): self.zippath = zippath self.name = os.path.splitext(os.path.basename(zippath))[0] self._scan_zip() def has(self, fullname): # name, key = fullname.split('.', 1) # return name == self.name and key in self.contents # <3161 key = '.'.join(fullname.split('.')[1:]) if key in self.contents: return True # <3161 override_file = os.path.join(override_path, os.sep.join(fullname.split('.')) + '.py') if os.path.isfile(override_file): return True override_package = os.path.join(override_path, os.sep.join(fullname.split('.'))) if os.path.isdir(override_package): return True return False Emoji in text files no longer show up for me in 3163. Using macOS Sierra 10.12.6 (16G29). You can get the character spacing of recent dev builds on macOS by turning on the no_round font option. Things are more complex than that on Windows though. I'm seeing a bug with character widths in 3163, at least with braces: In both cases, there are exactly 80 brace characters, all selected. The rendered characters don't seem to have changed, while the ruler and selection have changed and no longer line up. Update: "font_options": ["no_round"] suffices as a workaround for now "font_options": ["no_round"] I actually had some time to confirm what @deathaxe was saying. Overrides is kinda broken. So I did have an older .sublime-package file for many of the failing packages, but then I had the unpacked overrides which no longer override properly. That is why certain files were failing as they did not exist in the older zipped packages which were normally overridden by the unpacked packages. .sublime-package @timkang it sounds like you aren't using a monospace font? I'm using the default MacOS font, Menlo. I've confirmed the overrides issue and got a fix in that will resolve a couple of other related issues Whops, that was on my part. I was running the patch locally for quite a while and didn't notice any problems though. Probably because I didn't have nested overrides (as top-level ones were fine).
https://forum.sublimetext.com/t/dev-build-3163/36279
CC-MAIN-2018-17
en
refinedweb
Which style is preferable? Style A: def foo(): import some_module some_module.something import some_module def foo(): some_module.something some_module Indeed, as already noted, it's usually best to follow the PEP 8 recommendation and do your imports at the top. There are some exceptions though. The key to understanding them lies in your embedded question in your second paragraph: "at what stage does the import ... happen?" Import is actually an executable statement. When you import a module, all the executable statements in the module run. "def" is also an executable statement; its execution causes the defined name to be associated with the (already-compiled) code. So if you have: def f(): import something return None in a module that you import, the (compiled) import and return statements get associated with the name "f" at that point. When you run f(), the import statement there runs. If you defer importing something that is "very big" or "heavy", and then you never run the function (in this case f), the import never happens. This saves time (and some space as well). Of course, once you actually call f(), the import happens (if it has already happened once Python uses the cached result, but it still has to check) so you lose your time advantage. Hence, as a rule of thumb, "import everything at the top" until after you have done a lot of profiling and discovered that importing "hugething" is wasting a lot of time in 90% of your runs, vs saving a little time in 10% of them.
https://codedump.io/share/qcwiwAAD8XGP/1/imports-at-global-level-or-at-function-level
CC-MAIN-2016-50
en
refinedweb
0 Hey everyone. I am just learning assembly and I am understanding everything, but I keep having a problem a with this program. The main is in C and is supposed to receive a string from a user. Then, in assembly, I am supposed to count the number of words. But whenever I run the program, it crashes. Obviously there's something wrong. Here is my program: #include <stdio.h> #include <stdlib.h> extern int countwords(char string); int main(int argc, char *argv[]) { char mystring[256]; int count; printf("Give me a string: "); scanf("%s", mystring); count = countwords(mystring); printf("Your string has %d letters \n", count); system("PAUSE"); return 0; } .global _countwords _countwords: pushl %ebp movl %esp, %ebp movl 8(%ebp), %ebx xor %esi, %esi xor %eax, %eax movl 0x20, %ecx cmpb $0, (%ebx, %esi, 1) je done loop: cmp %ecx, (%ebx, %esi, 1) je word cmpb $0, (%ebx, %esi, 1) je done inc %esi jmp loop word: inc %eax inc %esi cmpb $0, (%ebx, %esi, 1) je done jmp loop done: movl %ebp, %esp popl %ebp ret I was wondering if any of you more experienced programmers can help an amateur out? Any assistance would be greatly appreciated! Thanks
https://www.daniweb.com/programming/software-development/threads/351893/counting-words-in-a-string
CC-MAIN-2016-50
en
refinedweb
I have already wrote the main funstion but I cannot figure out what to put in the other function to find the average of the three arrays?#include <iostream> #include <iomanip> using namespace std; double avg1( double [], int); double avg2( double [], int); double avg3( double [], int); double computeAverage(int [], double []); int main() { double x[] = { 11.11, 66.66, 88.88, 33.33, 55.55 }; double y[] = { 9, 6, 5, 8, 3, 4, 7, 4, 6, 3, 8, 5, 7, 2 }; double z[] = { 123, 400, 765, 102, 345, 678, 234, 789 }; cout << fixed << setprecision(2); double avg1 = computeAverage (x, sizeof (x) / sizeof (x[0]) ); double avg2 = computeAverage (y, sizeof (x) / sizeof (x[0])); cout << "Average of numbers in array x = " << avg1 << '\n'; cout << "Average of numbers in array y = " << avg2 << '\n'; cout << "Average of numbers in array z = " << ComputeAverage (z, sizeof (z) / sizeof (z[0])) << '\n'; return 0; } double computeAverage () { } Any help would be appreciated. Thanks, Nate2430
http://cboard.cprogramming.com/cplusplus-programming/5562-how-can-i-find-average-arrays.html
CC-MAIN-2016-50
en
refinedweb
Cannot find the error on my program ( verry basic C stuff ) - From: Chakib Benziane <spykspygel@xxxxxxxxx> - Date: Mon, 23 Mar 2009 17:50:24 +0100 Hi folks, I'am learning C programming from ANSI_C Reference Manual of Kernighan & D.Ritchie . I'm still working on the tutorial chapter , in have some issues with exercice 1-18 . The goal is to write a program to remove trailing blanks and tabs from each line of input, and to delete entirely blank lines. Here's my source code : ********************** /* Remove trailing blanks and tabs from each line ** not worked yet on removing tabs */ #include <stdio.h> #define MAXLINE 1000 /* max input line length */ #define IN 1 #define OUT 0 /* function prototypes */ int getline (char line[], int maxline); void clean (char line[], char cleaned[]); /* prints input lines after removing trailing blanks and tabs */ int main(void) { char line[MAXLINE]; /* current line */ char cleaned[MAXLINE]; /* cleaned line */ while (getline (line, MAXLINE)){ clean (line, cleaned); printf ("\n%s", cleaned); } } /* get new line into an char array */ int getline(char line[], int lim) { int c, i; for (i=0; i < lim - 1 && ((c = getchar()) != EOF) && (c != '\n'); ++i) line[i] = c; if (c == '\n'){ line[i] = c; ++i; line[i] = '\0'; return 1; } if (c == EOF) return 0; } void clean(char line[], char cleaned[]) { int space; int i; if (line[0] == ' ') space == IN; else space == OUT; for (i=0; line[i] != '\0'; ++i){ if (space == OUT) if (line[i] != ' ') cleaned[i] = line[i]; else if (line[i] == ' '){ space == IN; cleaned[i] = line[i]; } else if (space == IN) if (line[i] != ' '){ cleaned[i] = line[i]; space == OUT; } else if (line[i] == ' ') ; } cleaned[i] = line[i]; } ************************************** The output gives me 8 whatever the input was . If somebody can help me to resolve this problem it would be great , i need just to find where's the error not to get the solution to this probleme as i have the solution book . Thx in advance and sorry for my poor english Chakib .B . - Follow-Ups: - Re: Cannot find the error on my program ( verry basic C stuff ) - From: Chakib Benziane - Re: Cannot find the error on my program ( verry basic C stuff ) - From: Ahem A Rivet's Shot - Prev by Date: Re: hash tables, non-random keys - Next by Date: Re: hash tables, non-random keys - Previous by thread: Advertisement for Techno Intellectrium - Next by thread: Re: Cannot find the error on my program ( verry basic C stuff ) - Index(es):
http://coding.derkeiler.com/Archive/General/comp.programming/2009-03/msg00159.html
CC-MAIN-2016-50
en
refinedweb
wizard public class Naerling : Lazy<Person>{ public void DoWork(){ throw new NotImplementedException(); } } The show promises to follow six budding tech entrepreneurs as they try to market their ideas, launch companies and get really, really rich. But Bravo is not interested in watching engineers code. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Lounge.aspx?msg=4394264
CC-MAIN-2016-50
en
refinedweb
- Data Definition and XPath Intro - A Simple Example: Three Different Ways - A Final Example - Going Forward Recently Microsoft released Beta 2 of its new platform offering to the general public. Among all the announcements that this platform is more stable and faster, there are many changes to the available classes. With what seems to be a sincere dedication to the XML standard, Microsoft is pushing forward with it in .NET. As always, Microsoft has a knack for integrating products and building fairly useful software that others can digest. This time it's not just ODBC or OLE DB, but also XML classes using similar interfaces. Microsoft is embracing the standard and even stuffing some nice features into its classes. In this article, we review some of what is available to developers who intend to invest in this platform. Requirements for running the example code in this article are as follows: .NET Beta 2 SDK. Windows 2000 and IIS 5.0. A basic understanding of .NET (namespace compiles, and deployment). Your favorite text editor. Visual Studio.NET will not be covered. NOTE Beta 2 of .NET was recently released with many reported changes in libraries and operation of the platform. If you need a crash course in .NET, then put on your helmet and read my previous article based on Beta 1. Upon opening the SDK documentation, you will realize that .NET offers a very rich set of XML libraries. The list of available namespaces for XML functionality in .NET is rather large compared to those in other sections: System.Xml System.Xml.Xsl System.Xml.XPath System.Xml.Schema System.Xml.Serialization Beyond the namespaces are many classes to provide much needed or wanted functionality when working with XML data in .NET. As with all programming languages and platforms, you will use some frequently and some not at all. You will find that System.Xml has all the standard W3C DOM classes. System.Xml.Xsl and System.Xml.XPath provide support beyond System.Xml. Table 1 gives a summary of some classes and their roles from differing XML namespaces. Table 1-Available Namespaces and Classes This is just a small sampling of what is actually available in .NET, but these are some that the article will demonstrate. With all these classes, it's hard to decide which direction is best for your application. Let's try to lift some of that fog for your own future references and get to building the initial piece of a Web contacts manager. Data Definition and XPath Intro No application is complete without data, and we need to define a particular dataset for the rest of this article. We will keep it simple but try to hit on differing aspects of XML: <personal_information> <contacts> <entry id="0" initials="jhd" keywords="friend, bar" /> <name first="John" middle="Hobo"> That about covers everything, with enough complexity to demonstrate not only element values, but also attributes. This could contain more information, but that is beyond the exercise of this discussion. Copy and paste the XML code and save it to a file named contacts.xml under your working webroot. If you like, more <entry /> nodes can be added for experimentation. NOTE Make sure that any XML code added is well formed; otherwise, it could cause problems. XML syntax can be quickly verified by opening the saved file in Internet Explorer. .NET does produce errors with decent debugging information, and I suggest trying out the tracing and other debugging functionality. For those who are not familiar with it, XPath is a great mechanism for selecting nodes with complex or simple conditions. If you need just a set of nodes with an attribute that meets a certain condition and is second in the list, then XPath is the solution. For example, this next line gives you the second entry in the list of selected nodes: /personal_information/contacts/entry[position() = 2] Far more can be done with XPath, but it could take up books on its own. You can read more about XPath at Zvon. Now that you're familiar with the data and have seen XPath, let's examine some of the .NET XML offerings.
http://www.informit.com/articles/article.aspx?p=23606
CC-MAIN-2016-50
en
refinedweb
Opened 11 years ago Closed 11 years ago Last modified 11 years ago #1495 closed enhancement (wontfix) template variable dictionary lookups use arguments as strings not as variable names Description I was looking at trying to do something in a template like: {% for obj in object_list %} {{ ratings_dict.obj }} But it didn't work. Testing shows that my problem is that the template system doesn't do a dictionary lookup of the VARIABLE obj, but rather of the string "obj". The docs show an example: from django.core.template import Context, Template t = Template("My name is {{ person.first_name }}.") d = {"person": {"first_name": "Joe", "last_name": "Johnson"}} t.render(Context(d)) "My name is Joe." Perhaps you might want to first attempt to resolve the variable "first_name" (as in the above example) and then if it doesn't resolve, then treat it as a character string. Or, since it would be easy to explicitly specify it AS a character string (e.g. {{ person."first_name" }} for those times when you want the string first_name instead of the variable first_name to be used, that might be an alternative implementation. I don't know if this is a "defect" or an enhancement, but I'll opt for "enhancement" and let the developers decide. Thank you Change History (2) comment:1 Changed 11 years ago by comment:2 Changed 11 years ago by ok... then please update the docs to make it clear when one uses template variables which parts are taken as literal strings and which parts are resolved as variables. i.e. {{ alpha.bravo.delta.echo }} Is it just "alpha" that's resolved as a variable? or is it dependent on which lookup succeeds? (I still think that always resolving each part as a variable unless it's explicitly quoted text is better -- i.e. if when you want it, just say alpha."bravo",delta,"echo" ) Thanks This is a bit out of the scope of the template system at this point.
https://code.djangoproject.com/ticket/1495
CC-MAIN-2016-50
en
refinedweb
CodePlexProject Hosting for Open Source Software I just installed the 0.9 version of orchard using wpi. When I try to run the orchard site this error appears: Compilation Error Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. Compiler Error Message: CS0234: The type or namespace name 'WebData' does not exist in the namespace 'WebMatrix' (are you missing an assembly reference?) Source Error: Line 23: using System.Web.Mvc; Line 24: using WebMatrix.Data; Line 25: using WebMatrix.WebData; Line 26: using System.Web.Mvc.Ajax; Line 27: using System.Web.Mvc.Html; Source File: c:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\root\ec900aad\eeb6dffb\App_Web_index.cshtml.d5b2430e.utgewwra.0.cs Line: 25 Anyone know what this might be about? I'm not familiar with webmatrix and don't see a place to check for missing references etc. I did a repair of the webmatrix installation and no help. I may pull down this install and choose the IIS install instead, but thought I'd see if this could be resolved first. The same 35 items are in the folder at \my websites\orchard\bin as are in the bin folder for the orchard 0.9 I downloaded and unzipped, including WebMatrix.Data.dll Yes that was it - thanks! Now I get the orchard setup page and it says it does not have write perms to app_data, but that's probably easy to resolve. After closing the instance of orchard which had the perms error listed in my previous post, and hitting Run again, Orchard fired up without any errors. Thanks, same issue, reinstall of MVC 3 RC2 did the trick. I have installed it using my web host company and the error continues. Maybe the final release of webmatrix is bogus? Or maybe your web host company does not have the latest bits installed. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://orchard.codeplex.com/discussions/238856
CC-MAIN-2016-50
en
refinedweb
0 Hello guys: I am certain if I was not trying slef-teaching C++, that this question would have been answer before. What is the use of using namespace std; ? I am not certain, I just know that when I use #include<iostream> I have to appended. Simple question, but a matter that I need to get to the bottom of it. Thank you , Your rookie programmer GCard
https://www.daniweb.com/programming/software-development/threads/109029/what-its-the-use-of-using-namespace-std
CC-MAIN-2016-50
en
refinedweb
. Efficient locale-sensitive support for text IO is also supported. These modules are intended to be imported qualified, to avoid name clashes with Prelude functions, e.g. import qualified Data.Text as T To use an extended and very rich family of functions for working with Unicode text (including normalization, regular expressions, non-standard encodings, text breaking, and locales), see the text-icu package: Properties Modules - Data - Data.Text - Data.Text.Array - Data.Text.Encoding - Data.Text.Encoding.Error - Data.Text.Foreign - Data.Text.IO - Data.Text.Internal - Data.Text.Lazy - Data.Text.Lazy.Builder - Data.Text.Lazy.Builder.Int - Data.Text.Lazy.Builder.RealFloat - Data.Text.Lazy.Encoding - Data.Text.Lazy.IO - Data.Text.Lazy.Internal - Data.Text.Lazy.Read - Data.Text.Read Downloads - text-0.11.1.0.tar.gz [browse] (Cabal source package) - Package description (included in the package) Maintainer's Corner For package maintainers and hackage trustees
http://hackage.haskell.org/package/text-0.11.1.0
CC-MAIN-2016-50
en
refinedweb
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. How to update a pre defined DATE field value Hello, I'm trying to modify hr.contract model so the '`end_date`' field gets the value of '`effective_date`' which is in another model '`resignation_application`'. The concept is when an employee fill a resignation application it updates the contract end date. Here's my code: class resignation_application(osv.osv): _name = 'resignation.application' _columns = { 'employee_id': fields.many2one('hr.employee', "Employee", select=True, invisible=False, readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}), 'effective_date': fields.date('Effective Date', readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}, select=True, copy=False), class hr_contract(osv.osv): _inherit = 'hr.contract' _columns = { 'end_date': fields.date('End Date', compute= '_compute_effective_date', store=True), } @api.model def create(self, values): if 'end_date' in values and not values['end_date']: del(values['end_date']) return super(hr_contract, self).create(values) @api.one @api.depends('end_date','employee_id') def _compute_effective_date(self): recs = self.env['resignation.application'] # retrieve an instance of MODEL recs = recs.search([('state', '=', 'validate')]) # search returns a recordset for rec in recs: # iterate over the records if self.employee_id == rec.employee_id: self.end_date = rec.effective_date return recs.write({'end_date': rec.effective_date}) But it didn't return the end date.. I know there's something wrong with my return statement but I don't know how to fix it.. Also I want to add an inverse method to `end_date` field so the hr officer can add an end date to the employee contract. Any help will!
https://www.odoo.com/forum/help-1/question/how-to-update-a-pre-defined-date-field-value-98380
CC-MAIN-2016-50
en
refinedweb
Content-type: text/html #include <smartcard/ifdhandler.h> RESPONSECODE IFDHControl(DWORD Lun, PUCHAR TxBuffer, DWORD TxLength, PUCHAR RxBuffer, PDWORD RxLength); The IFDHControl() function takes the following parameters: Lun Logical Unit Number TxBuffer Control bytes to send TxLength Length of bytes to send RxLength Expected length of response RxBuffer Buffer to receive response RxLength Length of response received The IFDHControl() performs control information exchange with some types of readers such as PIN pads, biometrics, and LCD panels according to the MCT and CTBCS specification. This function does not exchange data with the card. The following values are returned: IFD_SUCCESS Successful completion. IFD_RESPONSE_TIMEOUT The response has timed out. IFD_COMMUNICATION_ERROR An error has.
http://backdrift.org/man/SunOS-5.10/man3smartcard/IFDHControl.3smartcard.html
CC-MAIN-2016-50
en
refinedweb
namespace::autoclean - Keep imports out of your namespace version 0.28. used), ); } namespace::autoclean will leave behind anything that it deems a method. For Moose classes, this the based on the get_method_list method on from the Class::MOP::Class. For non-Moose classes, anything defined within the package will be identified as a method. This should match Moose's definition of a method. Additionally, the magic subs installed by overload will not be cleaned. Sometimes you don't want to clean imports only, but also helper functions you're using in your methods. The -also switch can be used to declare a list of functions that should be removed additional to any imports:($_) == $_ } ]; This takes exactly the same options as -also except that anything this matches will not be cleaned. When used with Moo classes, the heuristic used to check for methods won't work correctly for methods from roles consumed at compile time. package My::Class; use Moo; use namespace::autoclean; # Bad, any consumed methods will be cleaned BEGIN { with 'Some::Role' } # Good, methods from role will be maintained with 'Some::Role'; Additionally, method detection may not work properly in Mouse classes in perls earlier than 5.10. Bugs may be submitted through the RT bug tracker (or bug-namespace-autoclean@rt.cpan.org). There is also a mailing list available for users of this distribution, at. There is also an irc channel available for users of this distribution, at irc://irc.perl.org/#moose. Florian Ragwitz <rafl@debian.org> This software is copyright (c) 2009 by Florian Ragwitz. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/dist/namespace-autoclean/lib/namespace/autoclean.pm
CC-MAIN-2016-50
en
refinedweb
Details Description As described in the JDBC 4 spec, sections 13.1 and 3.1. This adds support for new Statement methods added by JDBC4 and not addressed by other JIRAs: isClosed() and getResultSetHoldability(). Issue Links - is blocked by DERBY-1095 Closing an embedded connection does not seem to close associated EmbedStatements - Closed Activity Statement.isClosed() will not return correct values on the embedded side when parent connection is closed until the blocking issue DERBY-1095 is resolved. ' DERBY-953-2a.diff' implements EmbedStatement.isClosed() in different way. If the statement is marked as active, it goes to the parent connection to verify this. Tests have been run locally, but they are not yet submitted for commit. See DERBY-1097 for testing code. Derbyall has not been run, the patch only adds new code that is not used anywhere yet. Implementation is in line with the comment for the initial submission, and with the comments on DERBY-1095. Nothing have been changed for the client side since the previous patch. See ' DERBY-953-1a.stat' for svn status (unchanged). Please review, and when acceptable, commit. I don't like the approach of using an exception to determine if the statement is closed. I see your motivation – you want to reuse the code that sets active to false. I think the better way to do this is to refactor out the code inside checkExecStatus() that sets the active field, thusly: protected final boolean checkActive() throws SQLException{ if (!getConnection().isClosed()) return true; active = false; return false; } protected final void checkExecStatus() throws SQLException { if ( ! checkActive() ) } public boolean isClosed() throws SQLException{ if ( active ) checkActive(); return !active; } The code snippet posted in the previous comment still has the same problem as the original code, which was the reason why I returned true in the catch block. A NoCurrentConnection exception can be thrown in getConnection(). active would then still not be set to false, and isClosed would throw this exception. I do not like that isClosed can throw an exception in this case, and in this situation I would dare say a NoCurrentConnection is the same as the statement being closed and we could simply return true. So I don't quite see how the new proposal would solve the issue. It would also introduce yet another method for checking the state, taking the number up to three; checkStatus, checkExecStatus and checkActive. If you still want this to happen, give me a little more pushback, I'm not yet convinced I want to do this. I do however see that I could have checked that the exception thrown actually is a NoCurrentConnection exception, and then re-throw the exception if it is not. Would that ease your concerns? Sorry, I missed that about getConnection(). But my point still stands, we shouldn't use exceptions for making decisions mainline execution. We can refactor getConnection() using the same approach: /** - Try to get the connection. Returns null if it's not valid */ protected final Connection getConnectionInternal() { java.sql.Connection appConn = getEmbedConnection().getApplicationConnection(); if (appConn != applicationConnection) { appCon = null; return appConn; } /** - Check the status without throwing an exception. Returns true if the statement - is active and has a valid, open connection, false otherwise */ protected final boolean checkExecNoException() { Connection conn = getConnectionInternal(); if ( conn == null || conn.isClosed() ) active = false; return active; } protected final void checkExecStatus() throws SQLException{ checkStatus(); if ( ! checkExecStatusNoException() ) throw Util.noCurrentConnection() } public final java.sql.Connection getConnection() throws SQLException{ checkStatus(); java.sql.Connection appConn = getConnectionInternal(); if ( appConn == null ) throw Util.noCurrentConnection(); } public final boolean isClosed() throws SQLException{ return ( ! active || checkExecStatusNoException() ); } '. I was expecting the patch to be very similar to the one for ResultSet.isClosed(), but it seems to have gained in complexity for little value. Not sure I understand David's comment about "using exception for mainline decisions", I don't see that happening in the simpler version of the patch (ie. one similar to the changes made for ResultSet.isClosed()). An execption is only used when the Statement is closed, that's not the mainline execution, it's the exception case. Sorry, I wasn't able to comment earlier, I've been busy,. I could be stubborn about my point of view, but I think that's not worthwhile. Dan has a good point, and Kristian has also pointed out to me that there are exceptions all the way down so there's no way to avoid catching an exception. So I must humbly apologize to Kristian and say I think I have to agree with Dan that the first patch is simpler and is more in line with ResultSet.isClosed(). I would like it to check for a specific SQL State (e.g. SQLState.NO_CURRENT_CONNECTION) rather than swallow any old exception. I'll make that change and commit. David. I think this is the correct fix, a slightly modified version of the second patch, removing the assumption that if checkExecStatus throws an exception that the statement is closed. This them matches the ResultSet.isClosed() approach. /** + * Tell whether this statment has been closed or not. + * + * @return <code>true</code> is closed, <code>false</code> otherwise. + * @exception SQLException if a database access error occurs. + */ + public boolean isClosed() + throws SQLException { + // If active, verify state by consulting parent connection. + if (active) { + try catch (SQLException sqle){ + } + } + return !active; + } Thanks for your tips on this, Dan. Your version looks just right. I'll wait to hear from Kristian, but if he's OK, I can make the change you suggest and commit. One question on this: your example swallows any SqlException. I think this assumes that the only possible exception is going to be NO_CURRENT_CONNECTION. Is that a fair assumption? I can't say I fully understand the code. Shouldn't we be throwing other exceptions besides NO_CURRENT_CONNECTION? No, that's not the assumption I made. The assumption is that after a call to checkExecStatus() the active field will correctly represent the closed/open state of the Statement. I can add comments to three methods that are involved in this code stating what exceptions they throw if you think that will help. I guess I just feel uncomfortable with swallowing all exceptions. Can you explain it to me like I were a novice why that's OK in this case? Why wouldn't it be better to check for the specific exception?. Thanks, Dan, this was what I'm looking for. I'm working on committing this patch. Resolved, revision 388234. With StatementTest.junit added to the jdbc40 test suite, the suite passes. I did not run derbyall because this is a JDBC4-specific method, and only new, JDBC4-specific code was added; no existing code was modified. Task completed. Ported DERBY-935 / DERBY-1565 (433814) to 10.2 branch at subversion revision 436974.for?).
https://issues.apache.org/jira/browse/DERBY-953
CC-MAIN-2014-15
en
refinedweb
This guide provides a technical overview of how to use Azure Web Sites to create line-of-business applications. For the purposes of this document, these applications are assumed to be intranet applications that should be secured for internal business use. There are two distinctive characteristics of business applications. These applications require authentication, typically against a corporate directory. And they normally require some access or integration with on-premises data and services. This guide focuses on building business applications on Azure Web Sites. However, there are situations where Azure Cloud Services or Azure Virtual Machines would be a better fit for your requirements. It is important to review the differences between these options in the topic Digital Marketing Campaigns. Because line-of-business applications typically target corporate users, you should consider the reasons to use the Cloud versus on-premises corporate resources and infrastructure. First, there are some of the typical benefits of moving to the Cloud, such as the ability to scale up and down with dynamic workloads. For example, consider an application that handles annual performance reviews. For most of the year, this type of application would handle very little traffic. But during the review period, traffic would spike to high levels for a large company. Azure provides scaling options that enable the company to scale out to handle the load during the high-traffic review period while saving money by scaling back for the rest of the year. Another benefit of the Cloud is the ability to focus more on application development and less on infrastructure acquisition and management. In addition to these standard advantages, placing a business application in the Cloud provides greater support for employees and partners to use the application from anywhere. Users do not need to be connected to the corporate network through in order to use the application, and the IT group avoids complex reverse proxy solutions. There are several authentication options to make sure that access to company applications are protected. These options are discussed in the following sections of this guide. For the business application scenario, your authentication strategy is one of the most important decisions. There are several options: For this business application scenario, the first scenario using Azure Active Directory provides the fastest path for implementing an authentication strategy for your application. The remainder of this guide focuses on Azure Active Directory. However, depending on your business requirements, you might find that one of the other two solutions is a better fit. For example, if you are not permitted to synchronize identity information to the Cloud, then the ADFS solution might be the better option. Or if you must support other identity providers, like Facebook, then an ACS solution would fit better. Before you begin to setup a new Azure Active Directory, you should note that existing services, such as Office 365 or Windows Intune, already use Azure Active Directory. In that case, you would associate your existing subscription with you Azure subscription. For more details, see What is an Azure AD tenant?. If you do not currently have a service that already uses Azure Active Directory, you can create a new directory in the Management Portal. Use the Active Directory section of the Management Portal to create a new directory. Once created, you have the option to create and manage users, integrated applications, and domains. For a full walkthrough of these steps, see Adding Sign-On to Your Web Application Using Azure AD. If you are using this new directory as a standalone resource, then the next step is to develop applications that integrate with the directory. However, if you have on-premises Active Directory identities, you typically want to synchronize those with the new Azure Active Directory. For more information on how to do this, see Directory integration. Once the directory is created and populated, you must create web applications that require authentication and then integrate them with the directory. These steps are covered in the following two sections. In the Global Web Presence scenario, we discussed various options for creating and deploying a new web site. If you are new to Azure Web Sites, it is a good idea to review that information. An ASP.NET application in Visual Studio is a common choice for an intranet web application that uses window authentication. One of the reasons for this is the tight integration and support for this scenario provided by ASP.NET and Visual Studio. For example, when creating an ASP.NET MVC 4 project in Visual Studio, you have the option to create an Intranet Application in the project creation dialog. This makes changes to the project settings to support Windows Authentication. Specifically, the web.config file has the authentication element's mode attribute set to Windows. You must make this change manually if you create a different ASP.NET project, such as a Web Forms project, or if you are working with an existing project. For an MVC project, you must also change two values in the project properties window. Set Windows Authentication to Enabled and Anonymous Authentication to Disabled. In order to authenticate with Azure Active Directory, you have to register this application with the directory, and you must then modify that application configuration to connect. This can be done in two ways in Visual Studio: One method is to use the Identity and Access Tool (which you can download and install). This tool integrates with Visual Studio on the project context menu. The following instructions and screenshots are from Visual Studio 2012. Right-click the project, and select Identity and Access. There are three things that must be configured. On the Providers tab, you must provide the path to the STS metadata document and the APP ID URI (to obtain these values, see the section on Registering Applications in Azure Active Directory). The final configuration change that must be made is on the Configuration tab of the Identity and Access dialog. You must select the Enable web farm cookies checkbox. For a detailed walkthrough of these steps, see Adding Sign-On to Your Web Application Using Azure AD. In order to fill out the Providers tab, you need to register your application with Azure Active Directory. On the Azure Management Portal, in the Active Directory section, select your directory and then go to the Applications tab. This provides an option to add your Azure Web Site by URL. Note that when you go through these steps, you initially set the URL to the localhost address provided for local debugging in Visual Studio. Later you change this to the actual URL of your web site when you deploy. Once added, the portal provides both the STS metadata document path (the Federation Metadata Document URL) and the APP ID URI. These values are used on the Providers tab of the Identity and Access dialog in Visual Studio. An alternate method to accomplish the previous steps is to use the Microsoft ASP.NET Tools for Azure Active Directory. To use this, click the Enable Azure Authentication item from the Project menu in Visual Studio. This brings up a simple dialog that asks for the address of your Azure Active Directory domain (not the URL for your application). If you are the administrator of that Active Directory domain, then select the Provision this application in the Azure AD checkbox. This will do the work of registering the application with Active Directory. If you are not the administrator, then uncheck that box and provide the information displayed to an administrator. That administrator can use the Management Portal to create an integrated application using the previous steps on the Identity and Access tool. For detailed steps on how to use the ASP.NET Tools for Azure Active Directory, see Azure Authentication. When managing your line-of-business application, you have the ability to use any of the supported source code control systems for deployment. However, because of the high level of Visual Studio integration in this scenario, it is more likely that Team Foundation Service (TFS) is your source control system of choice. If so, you should note that Azure Web Sites provides integration with TFS. In the Management Portal, go to the Dashboard tab of your web site. Then select Set up deployment from source control. Follow the instructions for using TFS. Many line-of-business applications must integrate with on-premises data and services. There are multiple reasons why certain types of data cannot be moved to the Cloud. These reasons can be practical or regulatory. If you are in the planning stages of deciding what data to host in Azure and what data should remain on-premises, it is important to review the resources on the Azure Trust Center. Hybrid web applications run in Azure and access resources that must remain on-premises. When using Virtual Machines or Cloud Services, you can use a Virtual Network to connect applications in Azure with a corporate network. However, Web Sites does not support Virtual Networks, so the best way to perform this type of integration with Web Sites is through the use of the Azure Service Bus Relay Service. The Service Bus Relay service allows applications in the cloud to securely connect to WCF services running on a corporate network. Service Bus allows for this communication without opening firewall ports. In the diagram below, both the cloud application and the on-premises WCF service communicate with Service Bus through a previously created namespace. The on-premises WCF service has access to internal data and services that cannot be moved to the Cloud. The WCF service registers an endpoint in the namespace. The web site running in Azure connects to this endpoint in Service Bus as well. They only need to be able to make outgoing public HTTP requests to complete this step. Service Bus then connects the cloud application to the on-premises WCF service. This provides a basic architecture for creating hybrid applications that use services and resources on both Azure and on-premises. For more information, see How to Use the Service Bus Relay Service and the tutorial Service Bus Relayed Messaging Tutorial. For a sample that demonstrates this technique, see Enterprise Pizza - Connecting Web Sites to On-premise Using Service Bus. Business applications benefit from the standard Web Site capabilities, such as scaling and monitoring. For a business application that experiences variable load during specific days or hours, the Autoscale (Preview) feature can assist with scaling the site out and back to efficiently use resources. Monitoring options include endpoint monitoring and quota monitoring. All of these scenarios were covered in more detail in the Global Web Presence and Digital Marketing Campaign scenarios. Monitoring needs can vary between different line-of-business applications that have different levels of importance to the business. For the more mission critical applications, consider investing in a third-party monitoring solution, such as New Relic. Line-of-business applications are typically managed by IT staff. In the event of unexpected errors or behavior, IT workers can enable more detailed logging and then analyze the resulting data to determine problems. In the Management Portal, go to the Configure tab, and review the options in the application diagnostics and site diagnostics sections. Use the various application and site logs to troubleshoot problems with the web site. Note that some of the options specify File System, which places the log files on the file system for your site. These can be accessed through FTP, Azure PowerShell, or the Azure Command-Line tools. Other options specify Storage. This sends the information to the Azure storage account that you specify. For Web Server Logging, you also have an option of specifying a disk quota for the file system or a retention policy for the storage option. This prevents you from storing a indefinitely increasing the amount of stored logging data. For more information on these logging settings, see How to: Configure diagnostics and download logs for a web site. In addition to viewing the raw logs through FTP or storage utilities, such as Azure Storage Explorer, you can also view log information in Visual Studio. For detailed information on using these logs in troubleshooting scenarios, see Troubleshooting Azure Web Sites in Visual Studio. Azure enables you to host secure intranet applications in the Cloud. Azure Active Directory provides the ability to authenticate users, in order that only members of your organization can access the applications. The Service Bus Relay Service provides a mechanism for web applications to communicate with on-premises services and data. This hybrid application approach makes it easier to publish a business application to the Cloud without migrating all dependent data and services as well. Once deployed, business applications benefit from the standard scaling and monitoring capabilities provided by Azure Web Sites. For more information, see the following technical articles. Want to edit or suggest changes to this content? You can edit and submit changes to this article using GitHub.
http://azure.microsoft.com/en-us/documentation/articles/web-sites-business-application-solution-overview/
CC-MAIN-2014-15
en
refinedweb
Lightweight Dart messaging server; supporting STOMP messaging protocol. See also Stomp Dart Client. Add this to your pubspec.yaml (or create it): dependencies: ripple: Then run the Pub Package Manager (comes with the Dart SDK): pub install First, you have to import: import "package:ripple/ripple.dart"; Then, you can start Ripple server by binding it to any number of Internet addresses and ports. new RippleServer() ..start() //bind to port 61626 ..startSecure(); //bind to port 61627 and using SSL For how to use a STOMP client to access Ripple server, please refer to STOMP Dart Client. You can have Ripple server to serve a WebSocket connection. For example, HttpServer httpServer; RippleServer rippleServer; ... httpServer.listen((request) { if (...) { //usually test request.uri to see if it is mapped to WebSocket WebSocketTransformer.upgrade(request).then((WebSocket webSocket) { rippleServer.serveWebSocket(webSocket); }).catchError((ex) { // Handle error }); } else { // Do normal HTTP request processing. } }); Add this to your package's pubspec.yaml file: dependencies: ripple: ">=0.5.3 <0:ripple/ ripple.dart';ripple.dart';
http://pub.dartlang.org/packages/ripple
CC-MAIN-2014-15
en
refinedweb
Hello, I just found this site yesterday and its a good thing I din't post then cause there would of been alot of Woohoos and Heehaws.I was realy excited and still am this morning. *\Stayed up till 3:30 having fun/* Been wanting to learn programing for a number of years now. So thanks alot for putting up this site and i will try hard to be a good student. On to my 1st problem. I think we should learn the pause command right in lesson 1, so we can admire and study the finished work. I found this pause tip in the faqs and I try to omite line saying "press button" #include <iostream.h> #include <conio.h> //for "getch()" int main() { cout<<"I am learnig c++ \n"; cout<<"I am using Dev-c++ to compile \n"; cout<<"ok, ok... Hello World!\n"; getch(); return 0; } I get a mistake at line 9 (implicit declaration of function `int getchar(...)' Are we mixing c & c++ (cause of printf instead of cout) just guessing. Greatly appreciate your help
http://cboard.cprogramming.com/cplusplus-programming/2207-lesson1.html
CC-MAIN-2014-15
en
refinedweb
27 August 2010 00:03 [Source: ICIS news] HOUSTON (ICIS)--ConocoPhillips declared a limited force majeure (FM) on polypropylene (PP) impact copolymers, the company said on Thursday. In a brief statement, the company said it would keep customers informed of the progress to restore production and delivery of the product. Details about the cause of the FM declaration were not provided. According to a market source, the company had supply issues affecting all PP grades and was cutting shipments in some cases where customers had alternate supplies available. ConocoPhillips has a PP capacity of 750m lb/year (340,000 tonnes/year) at its Bayway refinery in ?xml:namespace> For more on ConocoPhillips’
http://www.icis.com/Articles/2010/08/27/9388667/conocophillips-declares-fm-on-pp-impact-copolymers.html
CC-MAIN-2014-15
en
refinedweb
NAME vga_setpage - set the 64K SVGA page number SYNOPSIS #include <vga.h> void vga_setpage(int page); DESCRIPTION Usually SVGA cards have more than the 64K memory which suffice for an ordinary VGA. However, the memory window mapped at vga_getgraphmem(3) is only 64K bytes large. vga_setpage() selects the page-th 64K memory chunk to be visible at position vga_getgraphmem(3). The fun(6) and vgatest(6) show the basic use of this function. SEE ALSO svgalib(7), vgagl(7), libvga.config(5), fun(6), vgatest(6), vga_getgraphmem.
http://manpages.ubuntu.com/manpages/karmic/man3/vga_setpage.3.html
CC-MAIN-2014-15
en
refinedweb
The Managed Provider and the DataSet provide the core functionality of ADO.NET. 1. The Managed Provider. The Managed Provider supplies the following four classes. The DataReader class provides read-only and forward-only access to the data source. We will benchmark this object later. The final class of the Managed Provider component is the DataAdapter class. The DataAdapter is the channel through which our second component, the DataSet, connects to the Managed Provider. 2. The DataSet The DataSet class consists of in-memory DataTables, columns, rows, constraints and relations. The DataSet provides an interface to establish relationships between tables, retrieve and update data. Perhaps one of the most important benefits of the DataSet is its ability to be synchronized with an XmlDataDocument. The DataSet provides real-time hierarchical access to the XmlDataDocument object. Managed Providers currently come in two flavors. Each is represented by its own namespace in the .NET Framework. The first, System.Data.SQLClient provides access to SQL Server 7.0 and higher databases. The second Managed Provider is found within the System.Data.OleDb namespace, and is used to access any OleDb source. The SQLClient Provider takes advantage of Microsoft SQL Server's wire format (TDS), and therefore should give better performance than accessing data through the OleDb provider. The performance tests done later in this article will be performed for both provider types. If we were to create a diagram explaining the general structure of ADO.NET, it would look like this. In the diagram above, two ways exist to retrieve data from a data source. The first is through the DataReader class in the Managed Provider component. The second is through the DataSet, which accesses the data source through the DataAdapter class of the Managed Provider. The robust Dataset object gives the programmer the ability to perform functions such as establishing relationships between tables. The DataReader provides read-only and forward-only retrieval of data. The tests uncover what performance loss, if any, we encounter when using the dataset rather than the DataReader. Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/vb2themax/Article/19887
CC-MAIN-2014-15
en
refinedweb
appreciated! Thanks virgo25 Hi friend, Java DAO Implementation... not require knowledge of JDBC, EJB, Hibernate, or Spring interfaces. Use a Data...implementing DAO Hi Java gurus I am pure beginner in java Hibernate Spring - Hibernate Struts Hibernate Spring HI Deepak, This is reddy.i want expamle for struts hibernate spring example. Hi Friend, Please visit the following spring - Spring spring hi, I need sample example user update profile using spring with hibernate and spring jsp. I need the sample code very urgently. Please... the following link: DAO in Struts DAO in Struts Can Roseindia provide a simple tutorial for implementation of DAO with struts 1.2? I a link already exits plz do tell me. Thank you spring with hibernate - Spring spring with hibernate Hi, I need the sample code for user registration using spring , hibernate with my sql. Please send the code as soon... the following link: Struts Projects . Understanding Spring Struts Hibernate DAO Layer... In this tutorial I will show you how to integrate Struts and Hibernate... Struts, Hibernate and Integrate It This section explains you DAO DTO design pattern DAO DTO design pattern Hi,using dao and dto i want to perform insert,update and delete operation.and the data should navigate from 1 frame 2...; </center> </form> </body> TASK: You will need Hi.. - Struts Hi.. Hi Friends, I am new in hibernate please tell me... me Hi Soniya, I am sending you a link. I hope that, this link.../struts/struts-hibernate/struts-hibernate-plugin.shtml Thanks plese tell -Struts or Spring - Spring , You need to study both struts and spring. Please visit the following links.../hibernate/index.shtml... about spring. which frameork i should do Struts or Spring and which version. i Hi - Struts Hi Hi friends, must for struts in mysql or not necessary... very urgent.... Hi friend, I am sending you a link...:// http Java Data Layer - Framework Java Data Layer how does Ojb, JDO, EJB 3.0-Entity Bean, JPA, OpenJPA and Hibernate are differ from each other? Hi friend, For more information on JPA/Hibernate visit to : Hibernate and Spring and Spring for developing web applications. Read how you can integration Hibernate and Spring in web application at Developing Struts, Hibernate and Spring based... can integrate Hibernate, MVC framework (Struts/Spring Web), Transaction managers Spring portlet with Hibernate Hi All, Could you please help me integrate the code for Forgot Password using spring portlet, hibernate withn MYSQL... link: Thanks Hi... - Struts Hi... Hello, I want to chat facility in roseindia java expert please tell me the process and when available experts please tell me Firstly you open the browser and type the following url in the address bar - Hibernate with hibernate? Hi friend, For solving the problem Spring with Hibernate visit to : Thanks Integration Struts Spring Hibernate. Integration Struts Spring Hibernate. Integration Struts Spring Hibernate DAO,DTO,VO Design patterns DAO,DTO,VO Design patterns explain dao,dto,vo design patterns in strut 1.3? Data Access Object (DAO) pattern is the most popular design patterns. It is used when you want to separate your presentation code from Hi Hi Hi All, I am new to roseindia. I want to learn struts. I do not know anything in struts. What exactly is struts and where do we use it. Please help me. Thanks in advance. Regards, Deepak The Complete Spring Tutorial will show you how you can integrate struts, spring and hibernate in your web... services, Schedulers, Ajax, Struts, JSF and many other frameworks. The Spring... describes you the way to make a spring web application that displays An introduction to spring framework . Just as Hibernate attacks CMP as primitive ORM technology, Spring attacks... the criticisms. There is a debate going on in the Java community about Spring... that instructs Spring on where and how to apply aspects. 4. Spring DAO: The Spring's JDBC CRUD DAO CRUD DAO how to create dao for create,read,update and delete? /* *ConnectionManager * * *Version:1.0 * *Date:25-Nov-2011... if (!more) { logger .warn("Sorry, you Java - Spring /struts/hibernate-spring/index.shtml Hope that it will be helpful for you...Java Hi Roseindia, Can you provide code for searching based on Id and name using Spring,Hibernate & Struts. MAny Thanks Ragahvendra   hi to html dropdown list...and when user selects perticular country it should go 2 db n update in db ....so with out using javascript ...only html,java,db should hi to html dropdown list...and when user selects perticular country it should go 2 db n update in db ....so with out using javascript ...only html,java,servlets,db Data Access object (DAO) Design Pattern student.findAll(); } } When you run this application it will display... Layer has proven good in separate business logic layer and persistent layer. The DAO design pattern completely hides the data access implementation spring - Spring spring what is the difference between spring and hibernate and ejb3.0 Hi mamatha, Spring provides hibernate template and it has many...:// Thanks. Amardeep HI!!!!!!!!!!!!!!!!!!!!! HI!!!!!!!!!!!!!!!!!!!!! import java.awt.*; import java.sql....+bal; stmt.executeUpdate("update bankdata set balance="+ts+" where pass='"+t9...("Do you want to withdraw"); Object[] options = new String[] { "Yes hi....... hi....... import java.awt.; import java.sql.*; import javax.swing....(); st1.executeUpdate("update bankdata set balance="+ts+" where pass='"+t9... JOptionPane("Do you want to withdraw"); Object[] options = new String[] { "Yes hi.. hi.. I want upload the image using jsp. When i browse the file then pass that file to another jsp it was going on perfect. But when i read that image file, we cant read that file.it return -1. and my console message insert code using spring - Spring :// Hope that it will be helpful...insert code using spring Hi, Can any one send code for inserting the data in database using spring and hibernate .. Many thanks Struts Articles is that you don't have to. The Spring architecture allows you to connect Struts... is isolated from the user). Bridge the gap between Struts and Hibernate Hibernate and Struts are currently among the most popular open Hi.. - Struts Hi.. Hi, I am new in struts please help me what data write in this file ans necessary also... struts-tiles.tld,struts-beans.tld,struts........its very urgent Hi Soniya, I am sending you a link. This link integration with struts 2.0 & spring 2.5 - Framework integration with struts 2.0 & spring 2.5 Hi All, The total integration is Client (JSP Page) --- Struts 2.0--Spring 2.5 --- Hibernate 3.0--MySQL... for more information. DAO module turorial spring DAO module turorial how to integrate springDAO and spring webMVC struts - Struts :// Hope that the above links...struts Hi, I am new to struts.Please send the sample code for login... the code immediately. Please its urgent. Regards, Valarmathi Hi Friend Hibernate Book the mapping for you. Not only that, Hibernate makes it easy. Positioned as a layer... to develop yourself. Hibernate in Action carefully explains the concepts you need, then gets you going. It builds on a single example to show you how to use Implementing Data Access Layer with Hibernate Implementing Data Access Layer with Hibernate  ... we are using Hibernate to implement data access layer. Hibernate is an open source O/R mapping framework that handles all the persistence logic. Hibernate Spring SimpleJdbcTemplate update Spring SimpleJdbcTemplate update This section is about update method using... where you need to issue single update .It is easy and effective in this condition... are going to update the database table row using update method . We are using myfaces,hibernate and spring integration - Hibernate myfaces,hibernate and spring integration sorry, in the previous.... when i write in my url (my port...). i wll be obliged. Might be you have deploying problem. deploy Update value Update value How to update value of database using hibernate ? Hi Samar, With the help of this code, you will see how can update database using hibernate. package net.roseindia.DAO; import Integrating MyFaces , Spring and Hibernate Integrating MyFaces , Spring and Hibernate  ... to configure Hibernate, Spring and MyFaces to use MySQL Database to build real world..., let's start developing Login and Registration application using Hibernate, Spring Tutorial for spring - Spring Tutorial for spring Hi Deepak, Iam new to SpringFramework,give some.... Hi friend, I am sending book name of spring framework. 1... applications using the latest 2.0 release of the Spring Framework. you can also Spring - Spring give me solution for this problem.Thank you inadvance. Hi friend, I am sending you a link. This link will help you. Please visit for more information. Struts with other frameworks like Struts2 with Hibernate, Struts2 with Spring web MCV...Struts Why struts rather than other frame works? Struts is used into web based enterprise applications. Struts2 cab be used with Spring configuration - Struts configuration Can you please tell me the clear definition of Action class,ActionForm,Model in struts framework. What we will write in each of this? Hi friend, A model represents an application?s data Is Singleton Default in Struts - Struts best practice. Whether or not you use Struts doesn't really impact the service layer. Spring MVC, like Struts, uses a single shared instance for each Action (called a Controller in Spring MVC). If you have any problem then send me Hibernate - Hibernate . Thanks. Amardeep Hi, This happens usually when...Hibernate Hai this is jagadhish while running a Hibernate... schema update Aug 14, 2008 12:06:11 PM org.hibernate.tool.hbm2ddl.SchemaUpdate Update - JDBC ; Step1: Retrive the column value which one you want to update and store...Update Hi all, I'm having problems updating a column in a table... in the 'quantity' column when I enter 5, the value should now be 15. The code below DAO Example DAO Example Dear Friends Could any one please give me any example of DAO application in struts? Thanks & Regards Rajesh java - Spring java hi deepak, can u send simple program on spring with alkl jar files ,same as how u put in struts in the sites..plse help i am in trouble ...i am new 2 spring
http://roseindia.net/tutorialhelp/comment/1150
CC-MAIN-2014-15
en
refinedweb
Str have 2 java pages and 2 jsp pages in struts registration.jsp for client to register user registeraction.java to forward... with source code to solve the problem. For read more information on Struts visit java - Struts java what is the default Action class display multiple images on struts using Arraylist java - Struts java How To Connect oracle Database With Struts Project error - Struts java struts error my jsp page is post the problem...*; import javax.servlet.http.*; public class loginaction extends Action{ public...*; import javax.servlet.http.*; public class loginform extends ActionForm{ private Java - Struts Java What is the difference between Struts and Struts2. Pls explain with a simple example. HI, Please check http... between Struts 1 and Struts 2. Thanks - Struts in DispatchAction in Struts. How can i pass the method name in "action" and how can i map at in struts-config.xml; when i follow some guidelines... more the one action button in the form using Java script. please give me struts interview Question - Struts struts interview question and answer java struts interview question and answer java doubt is when we are using struts tiles, is there no posibulity to use action class... in struts-config file i wrote the following action tag... action class and tiles freame work simultaniously --->it is not working java - Struts *) Action class public class MyAction extends...java how can i get dynavalidation in my applications using struts... : *)The form beans of DynaValidatorForm are created by Struts and you configure java - Struts of the Application. Hi friend, Struts is an open source framework... pattern. It uses and extends the Java Servlet API to encourage developers...:// java - Struts friend. what can i do. In Action Mapping In login jsp Hi friend, You change the same "path" and "action" in code : Java + struts - Struts java.sql.ResultSet; public class ImportingAction extends Action { public ActionForward...Java + struts my problem is : import multiple .xls workbooks... org.apache.struts.upload.FormFile; public class ImportFileBean extends ActionForm zylog.web.struts.actionform.LoginForm; public class LoginAction extends Action... friend, Check your code having error : struts-config.xml In Action...: Submit struts be done in Struts program? Hi Friend, 1)Java Beans are reusable software Java - Struts :// Thanks. my doubt is some java - Struts is in action class,because it is not exucted ,but formbean will executed...this is my problem...see the code i have created formbean class,formaction class... java.io.IOException; public class LoginAction extends Action Struts Architecture - Struts Struts Architecture Hi Friends, Can u give clear struts architecture with flow. Hi friend, Struts is an open source... (MVC) design pattern. It uses and extends the Java Servlet API to encourage java - Struts Java JRE latest Which is Java JRE latest Version java - Struts Java string tokenizer What is Java string tokenizer java - Struts Java API Documentation Java API Documentation example in eclipse java - Struts class LoginAction extends Action { public ActionForward execute... config /WEB-INF/struts-config.xml 1 action *.do Thanks...! struts-config.xml using captcha with Struts - Struts application using Struts framework, and i would like to use captcha Hi friend,Java Captcha in Struts 2 Application : java - Struts architecture. * The RequestProcessor selects and invokes an Action class...java What is Java as a programming language? and why should i learn java over any other oop's? Hello,ActionServlet provides the " java - Struts Inheriting a class in java constructor Need example of inheriting a class in java constructor java - Struts Java long to string conversion What is the steps to convert a long to String in Java Beginners java struts Hi sir i need complete digram of struts... how i can configer the struts in my eclipse... please send me the complete picture diagram java - Struts :// Thanks what is struts? - Struts of the Struts framework is a flexible control layer based on standard technologies like Java...what is struts? What is struts?????how it is used n what... Commons packages. Struts encourages application architectures based on the Model -differecne between RequestProcessor and RequestDispatcher What is differecne betweenw RequestProcessor and RequestDispatcher? What is differecne b/w RequestProcessor and RequestDispatcher?http - Java Interview Questions Struts Interview Questions I need Java Struts Interview Questions and examples java - Struts you want in your project. Do you want to work in java swing? Thanks struts - Java Beginners java struts i want to do the project in the struts.... how i can configure the project in my eclipse... can u help me in this issues Struts 2 problem - Struts Struts 2 problem Hello I have a very strange problem. I have an application that we developed in JAVA and the application works ok in Windows... seemed to worked fine, until the user reported to us a problem. After doing Query - Struts Writing quires in Struts How to write quires in Java Struts Java - Struts java - Struts java - Struts action code java - Struts java When i am using Combobox. when i am selecting the particular value. how can i pass the value to the action. please give me the suggestion as early as possible. Hi friend, To solve the problem struts - Java Beginners struts how to calculate monthly billing in struts, give me example - Jboss - I-Report - Struts Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report what are Struts ? what are Struts ? What are struts ?? explain with simple example. The core of the Struts framework is a flexible control layer based on standard technologies like Java Servlets, JavaBeans, ResourceBundles, and XML About Struts processPreprocess method - Struts About Struts processPreprocess method Hi java folks, Help me... that the request need not travel to Action class to find out that the user... will abort request processing. For more information on struts visit java keyboard shortcuts - Struts java keyboard shortcuts hi, i hav an application already developed using struts framework.now i would like to add some keboard shortcuts to my application, so that the user need not to go for the usage of the mouse every time DynaActionform in JAVA - Struts DynaActionform in JAVA how to use dynaActionFrom?why we use it?give me a demo if possible? Hi Friend, Please visit the following link: Hope
http://roseindia.net/tutorialhelp/comment/64832
CC-MAIN-2014-15
en
refinedweb
Instant Ember.js Application Development How-to [Instant] — Save 50% Your first step in creating amazing web applications with this book and ebook.. (For more resources related to this topic, see here.) Introduction to Ember.js Ember.js is a frontend MVC JavaScript framework that runs in the browser. It is for developers who are looking to build ambitious and large web applications that rival native applications. Ember.js was created from concepts introduced by native application frameworks, such as Cocoa. Ember.js helps you to create great experiences for the user. It will help you to organize all the direct interactions a user may perform on your website. A common use case for Ember.js is when you believe your JavaScript code will become complex; when the code base becomes complex, problems about maintaining and refactoring the code base will arise. MVC stands for model-view-controller. This kind of structure makes it easy to make modifications or refactor changes to any part of your code. It will also allow you to adhere to Don't Repeat Yourself (DRY) principles. The model is responsible for notifying associated views and controllers when there has been a change in the state of the application. The controller sends CRUD requests to the model to notify it of a change in state. It can also send requests to the view to change how the view is representing the current state of the model. The view will then receive information from the model to create a graphical rendering. If you are still unclear on how the three parts interact with each other, the following is a simple diagram illustrating this: Ember.js decouples the problematic areas of your frontend, enabling you to focus on one area at a time without worrying about affecting other parts of your application. To give you an example of some of these areas of Ember.js, take a look at the following list: - Navigation : Ember's router takes care of your application's navigation - Auto-updating templates : Ember view expressions are binding-aware, meaning they will update automatically if the underlying data ever changes - Data handling : Each object you create will be an Ember object, thus inheriting all Ember.object methods - Asynchronous behavior : Bindings and computed properties within Ember help manage asynchronous behavior Ember.js is more of a framework than a library. Ember.js expects you to build a good portion of your frontend around its methodologies and architecture, creating a solid application architecture once you are finished with it. This is the main difference between Ember and a framework like Angular.js. Angular allows itself to be incorporated into an existing application, whereas an Ember application would have had to have been planned out with its specific architecture in mind. Backbone.js would be another example of a library that can easily be inserted into existing JavaScript projects. Ember.js is a great framework for handling complex interactions performed by users in your application. You may have been led to believe that Ember.js is a difficult framework to learn, but this is false. The only difficulty for developers lies in understanding the concepts that Ember.js tries to implement. How to set up Ember.js The js folder contains a subfolder named libs and the app.js file. libs is for storing any external libraries that you will want to include into your application. app.js is the JavaScript file that contains your Ember application structure. index.html is a basic HTML index file that will display information in the user's browser. We will be using this file as the index page of the sample application that we will be creating. We create a namespace called MovieTracker where we can access any necessary Ember.js components. Initialize() will instantiate all the controllers currently available with the namespace. After that is done, it injects all the controllers onto a router. We then set ApplicationController as the rendering context of our views. Your application must have ApplicationController, otherwise your application will not be capable of rendering dynamic templates. Router in Ember is a subclass of the Ember StateManager. The Ember StateManager tracks the current active state and triggers callbacks when states have changed. This router will help you match the URL to an application state and detects the browser URL at application load time. The router is responsible for updating the URL as the application's state changes. When Ember parses the URL to determine the state, it attempts to find Ember.Route that matches this state. Our router must contain root and index. You can think of root as a general container for routes. It is a set of routes. An Ember view is responsible for structuring the page through the view's associated template. The view is also responsible for registering and responding to user events. ApplicationView we are creating is required for any Ember application. The view we created is associated with our ApplicationController as well. The templateName variable is the name we use in our index.html file. The templateName variable can be changed to anything you wish. Creating an Ember Object. Creating an Ember Controller In Ember.js, controllers are split into three different categories: - ArrayController - ObjectController - Controller ArrayController is used for managing a collection of objects, that is, a collection of movies and actors. Each ArrayController uses a content property to store its data. Properties and methods in this controller will have a proxy that will allow access to its content property. In moviesController, we create an empty content array and then populate it with some example data. The this._super() call lets you access the init function of the parent class that you are overriding. ObjectController is essentially the same as ArrayController, except that it is used for one object as opposed to a collection. For our application, we are going to need a controller for changing a movie to watched when the user clicks on a corresponding button for this action. In selectedMovieController, we are only concerned with one specific Movie object. The function inside this controller will change the watched property of the associated movie to true if it was false previously and vice versa. The Controller class in Ember is used when you have a controller that is not a proxy. In other words, the controller does not take care of an object or an array. If we look back at the code in our application.js within the controllers folder, we added the following: MovieTracker.ApplicationController = Ember.Controller.extend(); This line does not need any arrays or objects contained within it, so we simply assign it the Controller class. This controller handles the controls at the application level. Mustache Templates This basic template has data-template-name that is typically used as a reference by a view. The {{title}} expression will print the value of the title property that we send to the template using a view. Ember determines whether a path is global or relative to the view by checking if the first letter is capitalized. This is why your Ember.Application name should start with a capital letter. The # symbol within {{}} means that it is a block expression. These expressions require a closing expression (such as {{/if}} in the previous example). If the movie property is not false, undefined, or null, then it will display the movie title. {{#with movie}} <h2>{{title}}</h2> {{/with}} We changed the context of our template block to movie in this block of code, and thus we were able to reference the title directly. <a {{bindAttr href="url"}}>More Details</a> When we used bindAttr for our URL, the template took the url property from the view and inserted it as an href attribute. <a href="">More Details</a> {{view.mainActor}} within the actor template would render as John Smith . The {{view}} helpers in Handlebars can include optional parameters to help in displaying information. - class: Used to assign class names. - tagName: By default, new instances of Ember.View create the <div> elements. Use tagName to override this. - contentBinding: Binds the specified content as the context of the template block. Summary This article helped you understand the basics of Ember.js framework. It also helped you in setting up Ember.js. It gave you a brief idea on creating Ember objects and controllers. And explaining what mustache templates are. Resources for Article : Further resources on this subject: - Understanding Backbone [Article] - Facelets Components in JSF 1.2 [Article] - Creating and Using Templates with Cacti 0.8 [Article] About the Author : Marc Bodmer Marc Bodmer is a recent graduate with an honors degree in Computer Science. He is based in Toronto, Ontario and will begin a frontend developer position at 500px () in May 2013. Marc loves to experiment with all kinds of various web frameworks as well as creating and contributing to open source projects. He loves to attend developer conferences to keep up-to-date on web technologies and to meet developers with great ideas. Post new comment
https://www.packtpub.com/article/introducing-the-ember.JS-framework
CC-MAIN-2014-15
en
refinedweb
im trying to merge more than 20 xml files into one, any of them contains more than 7000 nodes code that im using for that is like this private void createFile(String month, String year) throws... im trying to merge more than 20 xml files into one, any of them contains more than 7000 nodes code that im using for that is like this private void createFile(String month, String year) throws... I Solve the problem using INTERSOLV 3.11 32-BIT ParadoxFile (*.db) driver That the jdbc-odbc driver for paraodx don't support update, i found paradox driver from corel PdxJdbc.jar but i don't conw how to use it I'm using a Button for update an Parodox db file my code is private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) { try { ... TableModels is a class where I have Different Models of tables witch i use in my code, in this method i use Table Model with 3 columns and i fill table created with that table model with result set... public void roundTable() { TableModles tm = new TableModles(); tm.roundTable(); try { Connection con = Paradox.createConnection(); Statement s =... Im trying to get data from Paradox Database (table with columns Kolo, Od_datuma, Do_datuma) and i like to fill JTable with this values for ex. one row of JTable to contain data from one row of... ok, and how to fill JTable with that results, i need Kolo, Od_datuma, Do_datuma in one table row I have read on microsoft site that Paradox Data type can't be converted with operation cast or convert, so im looking for another solution I need values from 3 columns from the table, "Kolo", "Od_datuma" and "Do_datuma", in the table columns Od_datuma and Do_datuma hold datetime values, I only need DATE values from this two columns i search for that and i don't find nothing similar I have Delphi program who use this database and i like to create Java Program for the database i can't find any solution. Can i connect to Borland Paradox database from java ? Please help i need to know if there are way to create this connection Thanks How to create a simple Web Service for translate with SOAP and WSDL, to translate few words. I need example without any API's I know how to make it with JTable, but the point is to make it with AWT Components, is there other way to display data from database in AWT ? I like to delete this post in AWT / Java Swing but i didn't find how to do that Hello Is There way to create Table in Java AWT? Please help I need to create simple Java AWT program for insert, edit, delete and read users from database. Hello Is There way to create Table in Java AWT? Please help I need to create simple Java AWT program for insert, edit, delete and read users from database. I have a class that create an Excel file using apache poi, the class getting files from JTable and creating Excel file. My Question is how to set all string values in excel file to be UPPERCASE, is... I SOLVED this i move pstm.executeUpdate(); before index++; I have a problem when updating MySQL database with values from JTable when i run this code package rd; import java.sql.Connection; import java.sql.PreparedStatement; import javax.swing.*; I SOLVED this problem by moving the class into the package Thanks I have two packages one is "rd" where is my "loginform.java" and the other is "classes" where is "sdown.java" I run it from the cmd with this command java -jar "C:\Documents and Settings\Daniel... this is the exception 1312
http://www.javaprogrammingforums.com/search.php?s=db2970609243516486bc47db9679e9ba&searchid=837628
CC-MAIN-2014-15
en
refinedweb
Occasionally, usually when setting up a new C++ project, I see this error from IntelliSense and the C++ compiler: cannot open source file "_config-eccp.h" (dependency of "Mstn/MdlApi/MdlApi.h"). It's sole mention is in include file _config.h... cannot open source file "_config-eccp.h" (dependency of "Mstn/MdlApi/MdlApi.h") _config.h #if defined (__EDG__) \ && !defined (__DECCXX) \ && !defined (__HP_aCC) \ && !defined (__INTEL_COMPILER) \ && !defined (_SGI_COMPILER_VERSION) // FIXME: make this more robust by detecting the EDG eccp demo // during library configuration (and avoid relying on compiler // specific macros) # include "_config-eccp.h" #endif // __EDG__ IntelliSense is correct: file _config-eccp.h is not part of the SDK: it doesn't exist. _config-eccp.h Presumably something (the VC++ compiler?) is defining macro __EDG__, but those other macros are all undefined. Judging by the comment, someone in Bentley Systems development planned to do something about this, but hasn't yet got around to it. __EDG__ What can I do to suppress that message? Is it safe simply to #undefine __EDG__? #undefine __EDG__ Hi Jon Summers, Though not ideal, if you have an application/common library header, then you can include this there or directly in your .cpp file having issue. As Jan mentions, I am looking into improving the MicroStation SDK by eventually removing the 8DOT3 name requirements that are known to cause issues with IntelliSense. If this issue is not resolved by those changes, then a separate defect will be filed to address this issue as well. // Need to undefine __EDG__ to work-around intellisense error // Prevents config-eccp.h error #ifdef __EDG__ #undef __EDG__ #endif HTH,Bob Robert Hook said: if you have an application/common library header, then you can include this #ifdef __EDG__ #undef __EDG__ #endif Yes, that's the conclusion I reached. Robert Hook said:If this issue is not resolved by those changes, then a separate defect will be filed to address this issue as well I don't see a connection with 8DOT3 file naming. The file _config-eccp.h simply doesn't exist in the delivered include folders... Directory of C:\PROGRA~1\Bentley\MICROS~1\SDK\include\Bentley\stdcxx\rw 27/11/2020 06:59 9,921 _config-gcc.h 27/11/2020 06:59 5,678 _config-msvc.h 27/11/2020 06:59 6,360 _config-msvcrt.h 27/11/2020 06:59 19,322 _config.h With a certain combination of preprocessor macros, the include logic attempts to read that non-existent file. Robert Hook said: in your .cpp file having issue It occurs with #include <Mstn/MdlApi/MdlApi.h>. That file is commonly used as the first header in any C++ project. #include <Mstn/MdlApi/MdlApi.h> Regards, Jon Summers LA Solutions so in order to get rid of the pesky _config-eccp.h error, what I do is undefine the __EDG__ macro? I have one project that runs fine with nmake set with bmake project_name. When I try to do it with a Bentley example, it just pop up with the error. But if I just bmake the two, both compile without problem. Hi amender, please respect the rules and do not steal existing (and nearly a year old) discussion with own new question. Ask in a new post and specify you environment (SDK version, VS version...). amender carapace said:so in order to get rid of the pesky _config-eccp.h error, what I do is undefine the __EDG__ macro? I do not understand what your question is: Did you try solution provided by Bob Hook and verified by Jon, and it does not work? amender carapace said:When I try to do it with a Bentley example What is "Bentley example"? amender carapace said:But if I just bmake the two, both compile without problem. What is "the two" and "both"? Regards, Jan Bentley Accredited Developer: iTwin Platform - AssociateLabyrinth Technology | dev.notes() | cad.point
https://communities.bentley.com/products/programming/microstation_programming/f/microstation-programming---forum/211476/connect-c-_config-eccp-h/692951
CC-MAIN-2022-27
en
refinedweb
Convert Decimal fraction to binary in Java In this section, we are going to learn how to convert a decimal fraction into binary in Java. So, we can divide this problem into two parts i.e one for integral and other for decimal part. 1. To calculate a binary number of an Integral Part: The logic behind this is to divide the number by two and store its remainder. - We will implement a while loop that will work till the time number becomes zero. - Divide a number by two and store the remainder in a variable called rem. - Update a number to a number divided by two i.e, n=n/2. - Insert the remainder at the first index of String. 2. To calculate a binary number of a Fractional Part: The logic behind this is to multiply the number i.e fractional part by two and store its integral part. In this, we have taken an integer called k which denotes precision up-to k digits. If we have not taken this then in most of the cases the loop will run infinitely. - We will implement a while loop that will work till the time k becomes zero. - Then, multiply a number by two and store its integral part in a variable called integralPart. - Update fractional number by extracting a fractional part of number*2. - Lastly, reduce the value of k by one. Java program to convert Decimal fraction to Binary package CodeSpeedy; import java.util.Scanner; public class decimalToBinary { public static void main(String[] args) { // TODO Auto-generated method stub double num,fractionalPart= 0,number; int rem=0,integralPart,k; StringBuilder s=new StringBuilder(); Scanner scn=new Scanner(System.in); System.out.println("Enter the number"); num=scn.nextDouble(); //112.564 System.out.println("Enter number upto which precision is required"); k=scn.nextInt(); //5 System.out.print("Output is "); int n=(int) num; fractionalPart=num-n; while(n!=0) { rem=n%2; n=n/2; s.insert(0,rem); } System.out.print(s+"."); s=new StringBuilder(); while(k!=0) { integralPart=(int) (fractionalPart*2); s.append(integralPart); number=fractionalPart*2; fractionalPart=number-integralPart; k--; } System.out.println(s); //1110000.10010 } } Output: Enter the number 112.564 Enter number upto which precision is required 5 Output is 1110000.10010 The catch in this problem is that rather of taking a String, we have used StringBuilder. StringBuilder helps to modify the string from any end which means we can insert an element at any Index which is not supported by normal String. So by using this, we don’t have to implement a function to reverse a String. Also read: How to convert a HashMap to TreeMap in Java Can you convert the fractional part to binary without using string builder or array
https://www.codespeedy.com/convert-decimal-fraction-to-binary-in-java/
CC-MAIN-2022-27
en
refinedweb
H E A L T H Y L I V I N G H E A L T H Y P L A N E T natural awakenings magazine The Better Brain Diet Eat Right to Stay Sharp Palate Pleasers Six Powerhouse Foods for Kids Himalayan Salt Himalayan Healing Power Thyroid Disease The Bold Truth March 2013 | GoNaturalAwakenings.com March 2013 1 Live happier, healthier, and more intentionally. MAY 4-5, 2013 GEORGIA WORLD CONGRESS CENTER Join your peers and spend the weekend listening to some of the most inspiring authors of today in a unique setting at the I Can Do It! Atlanta Georgia Conference. DOREEN VIRTUE BRIAN L. WEISS KRIS CARR DR. WAYNE W. DYER CHERYL RICHARDSON BE ENTERTAINEDâ&#x20AC;ŚGET EDUCATED. Incredible wisdom from cutting-edge authors and speakers in the mindful spirituality, health, holistic, and sustainability lifestyles movement. These trendsetting speakers will inspire you with topics like: Personal Transformation Relationships Community Empowerment Spirituality Health Mindfulness Lifestyle and so much more! Act Now! Seats are Limited and this Event Will Sell Out! 2 Register online at Call 800-654-5126 or visit Printed on recycled paper to protect the environment March 2013 3 Natural Awakenings is your guide to nutrition, fitness, personal growth, sustainable and “green” living, organic food, Buy Local, the Slow Food and Slow Money movements, creative expression, wholistic health care, and products and services that support a healthy lifestyle for people of all ages. ~ Features ~ 12 Publisher Carolyn Rose Blakeslee, Ocala Editors Sharon Bruckman S. Alison Chabonais Linda Sechrist Design + Production Stephen Gray-Blancett Carolyn Rose Blakeslee Jessi Miller, Contact Us 352-629-4000 Fax 352-351-5474 GoNaturalAwakenings@gmail.com P.O. Box 1140, Anthony, FL 32617 Subscriptions Mailed subscriptions are available for $36/ year. Digital is free. Pick up the printed version at your local health food stores, area Publix and Sweetbay stores, and other locations—that’s free, too. Natural Awakenings Gainesville/Ocala/The Villages is published every month in full color. 20,000 copies are distributed to health food stores, public libraries, Publix and Sweetbay stores, medical offices, restaurants and cafes, and other locations throughout North Central Florida. Natural Awakenings cannot be responsible for the products or services herein. To determine whether a particular product or service is appropriate for you, consult your family physician or licensed wholistic practitioner. Gardening Without (Much) Money by David Y. Goodman, UF/IFAS Master Gardener 14 The Better Brain Diet by Lisa Marshall 15 The Healing Power of Silence 16 The “WINS” of Change 17 Ceviche 18 Yin & Tonic by Melody Murphy Eat Right to Stay Sharp by Robert Rabbin by Paula Koger, RN, MA, DOM by Clark Dougherty, LMT Home is Where the Heart Is 20 Six Powerhouse Foods for Kids by Susan Enfield Esrey With Palate-Pleasing Tips 22 The Bold Truth of Thyroid Disease 23 The Gift of Empathy by Margret Aldrich by Dr. Michael J. Badanek, DC, BS, CNS, DACBN How to Be a Healing Presence 26 The Birthright of Human Beings 28 At Peace in Your Mind 29 The Healing Power of Himalayan Salt by David Wolf by Rev. Bill Dodd by Nuris Lemire, MS, OTR/L, NC 4 Printed on recycled paper to protect the environment ~ Departments ~ NewsBriefs 6 HealthBriefs 8 CommunityResource Guide 30 CalendarofEvents 32 Coupons/Special Offers 39 April in Paris Advertising & Submissions ADVERTISING n To advertise with us or request a media kit, please call 352-629-4000 or email GoNaturalAwakenings@gmail.com. n Design services are available, FREE (limited time offer). n Advertisers are included online FREE and receive other significant benefits including FREE “Calendar of Events” listings (normally $15 each). n For information on our Coupons/Special Offers page: Visit. EDITORIAL AND CALENDAR SUBMISSIONS n For article submission guidelines, please visit. n Calendar: visit /news.htm. n Email all items to GoNaturalAwakenings@gmail.com. MATERIALS DUE n Deadline for all materials is the 15th of the month (i.e. March 15th for the April issue). Read us online! n Free, easy, instant access n The same magazine as the print version with enhancements n Ads and story links are hot-linked April 13-20 Just $3,295/person double occupancy *, INCLUDES n Round-trip air fare Orlando-Paris n Accommodations during our entire stay (no packing and moving) n Guided tours with all admission paid (no waiting in lines) n Two exquisite wine dinners n All group transportation, airport transfers, etc. n and much more—please see website for details. * KING-SIZED BED $3,885/single occupancy Info: 352-286-1779 naturalawakeningsncfl.com/paris.html Still accepting last-minute travelers but please CALL TODAY! March 2013 5 newsbriefs Gainesville MedSpa Relocates North Florida Medical Area G ainesville MedSpa is the first Chiropractic Health Center to join the North Florida Medical Community. This is the only office providing chiropractic, physical therapy, and massage therapy in this mainstream medical community. After serving the northwest Gainesville community for six years, Dr. Angela Leone and her team of Licensed Massage Therapists have joined the practice of Dr. Lian at the Oriental Healing Center (located behind Red Lobster). Chiropractor Dr. Angela Leone, and leading massage therapist Wendy Noon, are gifted practitioners who use gentle hands-on and instrument techniques to achieve healthy balance for their patients. Massage techniques vary from pregnancy massage to hot stone. Ms. Noon is extending a special offer through April 29, 2013 of $49 for a one-hour massage. She says, “Your specific needs can be met with personalized attention and care. With evenings and weekends available, scheduling is flexible and easy.” Gainesville MedSpa’s new location is 7003 NW 11th Place, Suite 5, Gainesville 352-374-0909. For more information, visit www. gainesvillemedspa.com. 6 Qi Activation May 25-28, Orlando Convention Center A fter 65 events and 40,000 attendees, Qi Activation, formerly known as Qi Revolution, has upgraded the curriculum to what people said in surveys was “most useful in life.” Besides Qigong exercises and breathing techniques, the four-day Qi Activation event will focus on food healing and will devote an hour each to food-based protocols for cancer, diabetes and heart disease. The Qi Activation seminar this year will also introduce Qigong foot reflexology for on-the-spot pain relief and endocrine boosting effects. The clinical applications are in addition to the Qigong/Energy components of the four-day, total immersion experience. Qigong breathing techniques are often described as “the best natural high” and Qi activator, hence the name of the event. Activation is a biological process and part of enlightenment. The root chakra point, when activated, releases dormant energy (called Kundalini) up the spine, boosting the endocrine system and therefore our longevity. This is perhaps the first longevity conference that speaks about, and facilitates the experience of, the dormant energy hidden within our nervous system. Qigong Practitioner Jeff Primack and 100 other instructors will be leading the group practices. Participants can expect to learn and experience several breathing techniques, Qigong strength training, and more. Special guest, Medical Qigong Master, Jerry Allan Johnson, will speak about the future of energy medicine within the Western model. Other gifted healers and speakers will be in attendance, including American New Thought minister Michael Beckwith. Attendees may participate for one, two, three, or all four days’ training for only $129. The event is non-denominational and open to all people. For therapists, 32 CE hours (massage) and 24 PDA hours (acupuncture) are available. The event is fun, experiential, and educational. Seating is limited. For tickets or more information, call 800298-8970 or visit. com. Be sure to book your hotel early, as it sells out rapidly. Visit www. QiActivation.com for recommended hotels and rates. Printed on recycled paper to protect the environment vv.” NaturalOrder coaching & organizing Helen Kornblum, MA m, M A Helen Kornblu Contact me at 352.871.4499 or naturalorder@cox.net to explore how coaching can help your student—and you! ©2012 Natural Order Organizing. All rights reserved. Neuromuscular Massage by Design Patricia Sutton LMT, NMT, CRT MA22645 Refresh u Rejuvenate u Revive u Relax Renew u Replenish • Certified Neuromuscular Massage • Cranial Release Technique • ETPS Acupuncture • Most Insurance Accepted + PIP + WorkerComp • Referrals from Physicians + Chiropractors Accepted 20% Discount: Pre-Purchase of 4 or More Sessions Treating the pain you were told you would have to live with: Back & neck pain ~ Post-surgical pain ~ Fibromyalgia ~ Migraine ~ TMJ Call Today (352) 694-4503 1920 SW 20th Place, Suite 202, Ocala FL 34471 March 2013 7. 8 Why We Need More Vitamin C R esearchers at the Linus Pauling Institute at Oregon State University, a leading global authority on the role of vitamin C in optimum health, found. Printed on recycled paper to protect the environment Yoga at $10/class, Thursdays 8:30-9:30am Beginning Yoga 9:45-10:45am Chair Yoga Private Lessons Also Available Annie Osterhout, BS, E-RYT, IAYT 315-698-9749 OsterhoutYoga@yahoo.com Not So Nice Rice N ew research by the nonprofit Consumers Union (CU), which publishes Consumer Reports, may cause us to reconsider what we place in our steamer or cookpot. Rice—a staple of many diets, vegetarian or not—is frequently contaminated with arsenic, a known carcinogen that is also believed to interfere with fetal development. Rice contains more arsenic than grains such as oats or wheat because it is grown in water-flooded conditions, and so more readily absorbs the heavy metal from soil or water than most plants. Even most U.S.-grown rice comes from the south-central region, where crops such as cotton were heavily treated with arsenical pesticides for decades. Thus, some organically grown rice in the region is impacted, as well. CU analysis of more than 200 samples of both organic and conventionally grown rice and rice products on U.S. grocery shelves found that nearly all contained some level of arsenic; many with alarmingly high amounts. There is no federal standard for arsenic in food, but there is a limit of 10 parts per billion in drinking water, and CU researchers found that one serving of contaminated rice may have as much arsenic as an entire day’s worth of water. To reduce the risk of exposure, rinse rice grains thoroughly before cooking and follow the Asian practice of preparing it with extra water to absorb arsenic and/or pesticide residues; and then drain the excess water before serving. See CU’s chart of arsenic levels in tested rice products at Tinyurl.com/ ArsenicReport. Wainwright’s RAW Milk Glass Ju g Free-Roaming gs Brown & Pastel Eg $4.50 plus deposit* $3.25 dozen* Massage Reflexology Acupuncture Now located at the “Yamassee Tribal Trading Post” “Holistic products for all God’s masterpieces” 352-486-1838 • 380 South Court Street • Bronson • FL License #MM27383 *Raw milk/farm eggs sold for animal consumption per FL law Reeser’s Nutrition Center, Inc. / ReesersNutritionCenter.com Do you suffer from any of the following symptoms? n A.D.D. n n n Parasites n Sinusitis n Candidiasis n Crohn’s Disease n Substance Abuse n Insomnia n Fibromyalgia n Shingles Cirrhosis of the Liver Immune Disorder n Impotence/Prostrate n Chronic Fatigue Syndrome n Osteoporosis/Arthristis n Menopausal Syndrome n Multiple Sclerosis n High Blood Pressure n Irritable Bowel Syndrome Free Initial Consultation with CNHP. Offering: n Nutritional Analysis n Adrenal/Thyroid n Metabolic DHEA REAMS Analysis n Oral Chelation n Gluten Free Foods n Hormone Testing n Detoxification n Vitamins / Herbals n n Enzyme Therapy Blood Analysis n Alkaline Water n Hair Analysis n Weight Loss n Homeopathic n Saliva Test n Drug Tests n BMI Analysis n 10% Every Day Discounts on Vitamin Supplements (Restrictions Apply) 3243 E. Silver Springs Blvd., Ocala / 352-732-0718 / 352-351-1298 March 2013 9 healthbriefs Dining App for Special-Needs Diets F ood be able their meals from restaurants. E ating yogurt could reduce the risk of developing high blood pressure (hypertension), according to new research presented at the American Heart Association 2012 Scientific Sessions. During their 15-year study, researchers followed more than 2,000 volunteers who did not initially have high blood pressure and reported on their yogurt consumption at three intervals. Participants who routinely consumed at least one six-ounce cup of low-fat yogurt every three days were 31 percent less likely to develop hypertension. Bad Fats Are Brain-Busters ew who consumed the highest amounts of saturated fat, such as that found in red meat, exhibited worse overall cognition and memory than peers who ate the lowest amounts. Women who consumed mainly monounsaturated fats, such as olive oil, demonstrated better cognitive scores over time. 10 Waste Water Cuts Fertilizer Use T Yogurt Hinders Hypertension N Dishpan Plants he. Printed on recycled paper to protect the environment High Springs Emporium The Only Rock Shop in N. Central Florida Chakra balancing among the crystals with Crystal Tone singing bowls - $20 TUCSON TREASURES Bio-Identical Hormone Replacement Therapy Uruguay amethyst, Gemmy rhodochrosite and Peruvian epidote; Pakistani green herderite; Arizona chrysocolla; aegerine with smoky quartz from Malawi; huge selection of beautiful spheres & much more! F or women experiencing distressing menopause and postmenopause symptoms such as hot flashes, hormone replacement therapy (HRT) is sometimes considered in order to replace waning estrogen and progesterone. However, there are unresolved questions about synthetic HRT, including its potential cause of breast cancer and serious side effects that can include depression, weight gain, and high blood pressure. Most women in the U.S. are unaware of bio-identical hormone replacement therapy (BHRT). BHRT is derived from plant sources, mimics the function of human hormones, and produces significantly fewer side effects. Why do so few health care providers offer BHRT? One simple answer is that natural hormones cannot be patented. Pharmaceutical makers alter estrogen and progesterone molecules from their natural structures; this allows them to obtain patents. It is important to obtain a baseline bone density test and lipid panel (total, HDL, LDL cholesterol and triglycerides) when beginning BHRT. For more information on BHRT, visit Experience optimal health naturally! 352-291-9459 11115 SW 93rd Ct Rd, Suite 600, Ocala, Florida 34481 Mon, Wed, Thurs, Fri 8 – 5 Tuesday 9-6 Closed every day from 12-1 Stone of the Month - Aquamarine All Aquamarine 20% off through MARCH! 660 N.W. Santa Fe Blvd. High Springs, FL 386-454-8657 Monday-Saturday 11-6 • Sunday noon-5 In network BC/BS for acupuncture, chiropractic and massage Kristin Shiver, DC Dean Chance, DC 507 NW 60th Street, Suite A • Gainesville, FL 32607 • 352-271-1211 BC/BS network March 2013 11 Gardening Without (Much) Money by David Y. Goodman, UF/IFAS Marion County Master Gardener “R eally, when it comes right down to it, gardening costs more money than it saves,” a friend said recently, giving me a knowing look. He continued, “I mean, I suppose if you count in. For. These” tall peppers might feel good, but between transplant shock, light differences, and their often root-bound condition, they’ll often get out-produced by the seeds you plant. For the price of one or two of those pepper transplants, you can buy enough seeds to plant 50 directly into the ground. Now, about those bugs. What do we do with those? Well, there’s this awesome chemical, see, and it’s non-toxic, 12 and it makes cutworms decapitate themselves, vaporizes caterpillars and magically turns aphids into ladybugs … and it’s only $50 a pint … and I’m totally just kidding! We’re not spending money, remember? We need to change our attitude aboutdaboom! Dead! You can also find plenty of info on cheap insect repellents/killers online. Garlic, red pepper, and plain old dish soap Gardeners. Plant and save heirloom seeds. Put up rain barrels. See what’s growing great for your neighbors and plant that. Get seed from the bulk bins at the organic market (that’s where I got my fava beans and rye, among other things). Learn to like okra and zucchini. And never, ever give up. David Goodman is a Master Gardener, writer, musician, artist and father, as well as the creator of FloridaSurvivalGardening.com, an online resource for people who are serious about growing food in Florida. Printed on recycled paper to protect the environment Ecological Preserve Organic Farm Farm Stead Saturday Every Saturday 9am-3pm HIGH-FLYING COMEDY March 21 - April 14 Prince Charm A ZANY FAIrYtALE March 9-10 & 16-17 Starter plants for sale Country store: Gifts, books, gourmet spreads and jellies Playground Now Available in the Store Tracy Lee Farms Grass-Fed Beef Prices vary depending on the cut March 30, 9-3 Spring Garden Kickoff Demonstrations, Workshops, 1,500+ seedlings Free admission, $3/workshop On the Outdoor Stage April 20, 9-3 Spring Sustainability and Natural Foods Gala Music, Demonstrations, Wonderful food Gift Certifica tes On Sale $1 admission, $1/food sample Come hungry and expect to have fun! Cash or checks only. We do not accept credit cards. Please do not bring pets. No smoking on farm. Store Hours 9am-3pm â&#x20AC;˘ Open 7 days/week 6411 NE 217th Place Citra, FL Email catcrone@aol.com Call 352-595-3377 for more information March 2013 13 The Better Brain Diet Aim for three weekly servings of fatty fish. Vegetarians can supplement meals with 1,000-1,500 milligrams daily of DHA, says Isaacson. Eat Right To Stay Sharp by Lisa Marshall W ithiterraneantype such as white bread or sugar- 14 sweetened sodas can eventually impair the metabolization of sugar (similar to Type 2 diabetes), causing blood vessel damage and hastened aging. A high-carb diet has also been linked to increased levels of beta-amyloid, a fibrous plaque that harms brain cells. A 2012 Mayo Clinic study of 1,230 people ages 70 to 89 found that those who ate the most carbs had four times the risk of developing MCI than those that ate the least. Inversely, a small study by University of Cincinnati researchers found that when adults with MCI were placed on a low-carb diet for six weeks, their memories improved. Choose fats wisely: Arizona neurologist Dr. Marwan Sabbagh, co-author of The Alzheimer’s Prevention Cookbook, points to numerous studies suggesting a link between saturated fat in cooking oil, cheese and processed meats and increased risk of Alzheimer’s. “In animals, it seems to promote amyloid production in the brain,” he says. In contrast, those who did not. Rich in antioxidant flavonoids, blueberries may even have what Sabbagh calls “specific anti-Alzheimer’s and cell-saving properties.” Isaacson highlights the helpfulness of green leafy vegetables, which are loaded with antioxidants and brainboosting B vitamins. One recent University of Oxford study of 266 elderly people with mild cognitive impairment found that those taking a blend of vitamins B12, B6 and folate daily showed significantly less brain shrinkage over a two-year period than those near Boulder, CO. Connect at Lisa@LisaAnnMarshall.com. Printed on recycled paper to protect the environment The Healing Power of Silence by Robert Rabbin O ne day I disappeared into reveal another heart that knows how Silence … It was more than to meet life with open arms. Silence grace, an epiphany or a mystiknows that thoughts about life are cal union; it was my soul’s homecoming, not life itself. If we touch life through my heart’s overflowing love, my mind’s Silence, life touches us back intimately eternal peace. In Silence, I experienced and we become one with life itself. freedom, clarity and joy as my true self, Then the mystery, wonder, beauty and felt my core identity and essential nature sanctity becomes our life. Everything as a unity-in-love with all creation, and but wonderment falls away; anger, fear realized it is within this and violence disapessence that we learn When I return from silence pear as if they never to embody healing in existed. our world. Knowing SiI am less than when I lence is knowing our This Silence entered: less harried, self and our world belongs to us all—it for the first time. We is who and what we fearful, anxious and only have to be still are. Selfless silence until that Silence knows only the present egotistical. Whatever the comes forth from moment, this incredwithin to illumiible instant of pure life gift of silence is, it is one nate and embrace when time stops and of lessening, purifying, us, serving as the we breathe the high-alteacher, teaching and titude air we call love. softening. The “I” that path, redeeming and Let us explore Silence as a way of knowing returns is more loving than restoring us in love. In this truthand being, which we filled moment, know, which we are. the “I” who left. we enter our Self Silence is within. fully and deeply. It is within our breath, ~ Rabbi Rami Shapiro We know our own like music between beauty, power and thoughts, the light in magnificence. As the embodiment of our eyes. It is felt in the high arc of birds, the rhythm of waves, the innoSilence, we are perfection itself, a treacence of children, the heart’s deepest sure that the world needs now. Right emotions that have no cause. It is seen now the Universe needs each of us to in small kindnesses, the stillness of be our true Self, expressing the healing nights and peaceful early mornings. power of our heart, in Silence. It is present when beholding a loved one, joined in spirit. As a lifelong mystic, Robert Rabbin is an In Silence, we open to life and innovative self-awareness teacher and life opens to us. It touches the center author of The 5 Principles of Authentic of our heart, where it breaks open to Living. Connect at RobertRabbin.com. Vegetarian & Regular Cuisine Drawings of Your Spirit Meet your Guides and Angels Aqua-Chi Chair Massage Aura Photography Tarot, Psychic, and Astrological Readings Healing Sessions Ear Candling Merchants Past Life Regression Handmade Jewelry I ncense Rocks From Around the World Candles & MUCH, MUCH MORE WE ARE NOT AFFILIATED WITH ANY OTHER PSYCHIC GATHERINGS See What Planting A Seed Can Accomplish Start with your ad in Natural Awakenings magazine and watch your business grow. Natural Awakenings is published monthly in full color and distributed throughout north central Florida, enabling you to reach your target audience. Together we will create the ideal package for your marketing needs. Design is included. Your Healthy Lifestyle Multimedia Resource in Print, Online and Mobile FOR RESULTS Call 352-629-4000 March 2013 15 The “WINS” of Change by Dr. Paula Koger, RN, MA, DOM “We would rather be ruined than change.” —W.H. Auden O ur present perception is creating our present and future reality. I call this the Habitual Box. We are in a gridlock, because what is happening in our body, mind, and life is a clear reflection of the hidden mind. Shankar Vedantam, science writer at The Washington Post and author of The Hidden Mind, shares massive amounts of data confirming that subliminal information from media and our surroundings are programming and influencing our unconscious. Do you wonder why it is so difficult to change or get out of the box? According to Dr. Bruce Lipton, cellular biologist and author of The Biology of Perception, 95% of the information that controls our behaviors, choices and cellular function has been locked in the unconscious and becomes what runs our body/mind activities. This becomes our disease. Because the unconscious is such a strong force comprised of memories and data stored in the neurotransmitters and the very memory of cell systems of the body, it is the majority—majority rules. We have been trained by our family dynamics and culture to suffer. Our challenge is how to become the master of our fate, the captain of our soul. Even if we learn to quiet the mind, exercise, and eat right, many things don’t change. Left unresolved, the patterns— according to many experts including Dr.Ryke Hamer—may become cancer and other diseases. 16 Here is how the programmed mind behaves. I had a patient who was responding very well to acupuncture and other modalities for treatment of back pain. In four treatments she was, by her account, 75% improved. Suddenly she announced she was going to have surgery. She had quit what was working, had surgery, and is now in a nursing home. I have often asked myself what causes people to make these choices. It seems obvious we are driven by unconscious needs, habits, and programs. I was with a group of people at a social function who were talking about their illnesses. One man said he was having his sixth surgery for skin cancer. I dared to risk speaking up and said maybe it is time to find out why they are reappearing and do something about the cause. That statement was a “big hit.” Another woman was proudly talking about how much Medicare had paid to fix her eye after three surgeries had failed. She said this as she lit up a cigarette. What shall we do? Shall we leave it as it is and die for our unwillingness to take charge of identifying and releasing the programs, traumas, behaviors, and environmental factors that are causing our limitations? On the other hand, we can enjoy suffering with the masses of sufferers and fit into the Habitual Box by having our own story to tell. We can continue to be sociopolitical robots. I will tell you my story. It is perhaps not like many stories. I am usually not the hit of the party at gatherings where people talk about their illnesses. Most people don’t seem eager to hear I have healed myself from autoimmune disease, arthritis, elevated cancer indicators, skin cancer, thyroid dysfunction, and Lymes disease, and that I live an active, healthy, productive life as I continue to address the “issues in the tissues” and other factors that contribute to the causes of disease. Many turn a deaf ear when I say thousands of people have improved their health by healing their hidden minds and energy imbalances. Can we afford to federally fund the consequences of unconsciousness? I don’t use my Medicare benefits other than eye exams and annual checkups even though I have paid for these benefits all my life. I will use them eventually if necessary, but it won’t be through self-neglect. Dr. Koger will be presenting a free “How to Change Your Cellular Biology” workshop on March 30th from 3-5pm in Ocala. For details and reservations, call 941-539-4232 (www. WealthOfHealthCenter.com). TIRED OF SO-SO ADVERTISING RESULTS? BEST DEAL IN THE AREA: Supplement your ad in Natural Awakenings with News Briefs and/or articles. Educate and inform your existing and future customers! CONTACT US FOR DETAILS: 352-629-4000 GoNaturalAwakenings@gmail.com We work for you. Printed on recycled paper to protect the environment Ceviche by Clark Dougherty T In a medium-sized glass or ceramic (non-reactive) bowl, combine lime juice, lemon juice, ginger and olive oil. Add sliced fresh fish, tossing to coat. Cover and refrigerate for at least 4 hours (overnight is okay). The citric juices will “cook” the fish, which should appear white and opaque. Drain and discard half the liquid from bowl. Mix in cilantro, onion and tomato. Season to taste with salt and pepper. Serve with crispini, hearty crackers, tortilla chips, or toast points; atop thinly sliced baguettes; or on a bed of Romaine lettuce leaves with hard-boiled egg wedges. Enjoy! his is a popular South American dish made entirely from fresh ingredients. It can be served as an appetizer (dip or spread) or salad. And no, there are no “cooking” instructions— the fish is cured by the citrus juices. Take care to use a ceramic or glass bowl for a safe, quick curing process. 1 cup fresh-squeezed lime juice ¼ cup fresh-squeezed lemon juice 1 tsp. grated fresh ginger 2 tbsp. extra virgin olive oil 1 lb. fresh firm fish fillets (sea bass, flounder, tilapia), sliced (¼” thick by 1” long) ½ cup chopped fresh cilantro ½ cup sliced green onion 1 red onion, thinly sliced 1 or 2 finely chopped Roma tomatoes March 2013 17 in & Tonic by Melody Murphy Home Is Where the Heart Is B artow, Florida, is the little town where my parents did most of their growing up. It’s about a hundred miles south of here. One of my grandmothers still lives there; the other one is currently staying in Ocala, but still has her house there. The parents of my dearest friends, sisters Biddy Thatcher and Toad Sawyer (of last summer’s boat trip), also grew up in Bartow. Mr. Thatcher-Sawyer, in fact, introduced my parents. So I have a lot for which to thank Bartow. Though I only lived there for a year, when I was six, I’ve spent a great deal of my life there—enough that I consider it my hometown. I think the places you love in childhood will always feel like home. Especially if you keep returning to them throughout your life. But Bartow is special. I love it not just out of stubborn nostalgia, but genuinely and with present delight. Bartow looks like a hybrid of Mayberry and Bedford Falls, as painted by Norman Rockwell. The first time I took Puck Finn (also of last summer’s boat trip) there 20 years ago, she said in wonder, “It’s like Brigadoon. It could be any time at all from the ‘30s through the ‘80s.” That was in the ‘90s. It takes Bartow a while to catch up to the present. And that is why I love it. As the seat of Polk County, Bartow has two courthouses, the old one and the “new” one, but no movie theatre. 18 It’s coming along with restaurants, but you still have to drive to a neighboring town to do any real shopping. People say there isn’t much to do there, but I’ve always found plenty to do. When I visit, I have my favorite spots of pilgrimage. There are the two duck ponds, each near a grandmother’s house, where I grew up feeding the ducks. (And still do.) One has the old sycamore I used to climb ... am still tempted to climb. There’s the park full of old live oaks where I played as a child. (And as a young adult; Puck Finn will recall the unfortunate seesaw incident of a nottoo-bygone year.) I go there now not to swing or slide, but to admire the Florida Cracker rose garden or the way sunlight slants through the Spanish moss. There’s the turn-of-the-century ice cream shop downtown where I buy an ice cream cone and go sit under my favorite ancient live oak on the old courthouse lawn to enjoy it. Across Main Street, there’s the quaint little gift shop I always pop into and can find the best presents ... or a little something for myself. There’s the old cemetery, older than Polk County itself, where people born in the 1700s and the veterans of every war since then are buried. I played there as a child with the boy who lived across the street from my grandparents. My grandfather called him “The Rascal.” The Rascal and I would venture through the woods behind his house, climb the low stone wall around the cemetery, and go exploring. An old cemetery full of big trees, tall gravestones, and large monuments is an excellent place to play hide and seek. This may be part of the reason why I like old cemeteries. When you grow up playing in them, they lose their fear factor. I still go there to walk around and read the crumbling old stones and enjoy the quiet. I take a lot of pictures when I visit Bartow. I think it’s because I want to preserve so many moments in time, just as Bartow does—and because the tree-lined streets of beautiful old houses, the shops on Main Street, the old courthouse and cemetery, the duck ponds and the park are really lovely, and ask to be captured as snapshots for remembrance. Bartow is a lovely old town, but it’s at its most beautiful in springtime. Known poetically as “The City of Oaks and Azaleas,” that’s a true statement. The azaleas are glorious in spring. So are the camellias, plumbago, bougainvillea, hibiscus, jacaranda and tabebuia trees; there isn’t a time of year when Bartow isn’t beautifully in bloom with something, but spring is really its finest hour. And oh, the orange trees! Citrus is big doings in Polk County, so groves and citrus trees are everywhere. When they bloom in springtime, it’s pure heaven. I would never miss a pilgrimage to Bartow in orange blossom season. Next month, I’ll share some more “snapshots” of some recent sojourns in Bartow. I make my pilgrimages there for more than just the orange blossoms— and every time, there is something to delight me and reaffirm my feeling that you can, indeed, go home again. I’ll take you along in April. Printed on recycled paper to protect the environment HIPPODROME THEATRE Alternative Wholistic Health Care Solving problems... naturally Michael Badanek, DC, BS, CNS, DACBN, DCBCN, DM(P) N O O M E H T ’ O G KIN DZICK BY TOM DAUVID SHELTON YD IRECTED B D on stage february 22-march 17 discount previews feb 20 & 21 Board Certified in Clinical Nutrition, Certified in Applied Kinesiology, and Promoter of Alternative Complementary Medicine. 30 Years of Clinical Practice Autoimmune disorders, Lyme disease, Autism, ADD/ADHD, Musculoskeletal conditions, Heavy metal toxicity, Cardiovascular & endocrine conditions, Nutritional deficiencies/testing. Courtesy consultations available (352) 622-1151 3391 E. Silver Springs Blvd., Suite B Ocala, FL 34470 THEHIPP.ORG | 352.375.HIPP | 25 SE 2ND PLACE | GAINESVILLE A Simple Solution For Rejuvenation Incredible products, astonishing prices. Featured Product of the Month Life Force tea A customer favorite! Our Life Force tea will bring you a new sense of energy and clarity and allow you to feel more present in your body all while detoxifying your system gently. Results you will feel right away! “After!” DT - Bellingham, WA Herbal Teas &Tinctures • Natural Skin Care • Wellness Oils 1.866.607.2146 Using organic and wildcrafted herbs. Made in the USA. Same Day Shipping, free on all orders over $35. Get 10% off your first order when you use the discount code NACHIT We know our products are making a difference in people’s lives because our customers call, write and email to tell us so! And they continue to recommend our products to their friends and family. We couldn’t. alternativewholistichealth.com ocalaalternativemedicine.com March 2013 19 healthykids Six Powerhouse Foods for Kids With Palate-Pleasing Tips by Susan Enfield Esrey Aiolo- 20 gist antioxi- dant delicious healthy smoothie. Chia seeds: This South American grain (the most researched variety is Salba seeds) may be the world’s healthiest, says Sears.. Susan Enfield Esrey is the senior editor of Delicious Living magazine. Printed on recycled paper to protect the environment { } a fresh take on healthy living vitamins | supplements | health foods most items 20% o f f e V e ry d ay ! 352.509.6839 VitalizeNutrition.com 4414 SW College Rd, Ocala | Market Street at Heath Brook | 1 mi. W of 75 by Panera • • • • • • • • • • Himalayan Salt Room FAR-Infrared Sauna Facials Body Wraps Microdermabrasion Medicinal Teas Gemstones and Necklaces Himalayan Salt Lamps Vitamins and Supplements Massage Therapy ... and much more! Experience optimal health naturally! 352-291-9459 11115 SW 93rd Ct Rd, Suite 600, Ocala, Florida 34481 Mon, Wed, Thurs, Fri 8 – 5 Tuesday 9-6 Closed every day from 12-1 ShingleX Soothing formula relieves pain, helps heal the sores, and reduces scarring. ShingleX soothes the pain of shingles lesions, helps prevent and pull out infection, protects, minimizes scarring, reduces length of illness. Testimonials online. 4-ounce jar Just $29.95+$6 S/H. To order, call 352-286-1779 or visit. Resale inquiries invited from natural practitioners and dermatologists. March 2013 21 The Bold Truth of Thyroid Disease by Dr. Michael J. Badanek, DC, BS, CNS, DACBN, DCBCN, DM(P) H ealth care is shifting significantly and is ripe for changes in the balance of power and the paradigms of our therapeutic interventions. For nearly a century, allopathic medicine has hailed itself as “the gold standard,” and other approaches have either submitted to or been crushed by their ongoing political/scientific manipulations. Despite their proclamation of intellectual and therapeutic superiority, 180,000-220,000 iatrogenic (medically-induced) deaths occur in the U.S. per year (500-600 per day), and drug-related morbidity and mortality are estimated to cost more than $136 billion per year. (Holland EG, Degruy FV, Drug-induced disorders. Am Fam Physician. 1997:56(7): 1781-8, 17912.) In addition, it’s well documented that most medical allopathic physicians are unable to provide accurate diagnoses due to pervasive inadequacies in medical training. Currently one out of six women in the U.S. experience some type of thyroid dysfunction. Conventional allopathic medicine has totally failed in both the clinical diagnostic and treatment pictures of thyroid disease. It is time for the American consumer to wake up and understand that you cannot treat symptoms without effectively finding the root cause(s) of the problem and then addressing the cause to relieve/eliminate the symptoms and disease. Controversy In the allopathic medical paradigm, much confusion exists regard- 22 ing a common but “mysterious” and “enigmatic” condition known as hypothyroidism, or low thyroid function. The basis for the confusion within the allopathic medical community about hypothyroidism is primarily two-fold: first, they rely on the wrong test (TSH) as the main basis for laboratory assessment; second, they use incomplete treatment (T4 without T3), defying the known physiology of the thyroid gland, which makes at least two hormones rather than one. Hypothyroidism’s converse—hyperthyroidism and Graves disease—is well understood, easily diagnosed, and readily treated. However, because the medical treatment for hyperthyroidism often leaves patients in a hypothyroid state, affected patients transition from clarity (they are clearly ill due to the disease process), into “mystery” (hypothyroidism) wherein they feel ill due to incomplete/inaccurate treatment. One might get the impression that perpetual confusion is at times the goal of the medical profession: we certainly see this with the management of hypertension, depression, diabetes mellitus, psoriasis, and other inflammatory/autoimmune conditions. For people who seek clarity, it is available. Basic physiology The hypothalamus produces thyrotropin-releasing hormone (TRH) which stimulates the anterior pituitary gland to make thyroid-stimulating hormone (TSH), which stimulates the thyroid gland to produce thyroxine (T4, approximately 85% of thyroid gland hormone production) and triiodothyronine (T3, approximately 15% of thyroid gland hormone production). In the periphery, the prohormone T4 is converted to active T3 by deiodinase enzymes. Stress, glucagon, and environmental toxins (originating most commonly from plastics and flame retardants) impair production of T3 and/ or increase production of reverse T3, which is either inert or inhibitory to the action of T3. If the thyroid gland begins to fail, then TSH levels increase as the body attempts to stimulate production of thyroid hormones from a failing gland, hence the association of elevated blood TSH levels with “primary hypothyroidism.” Thyroid hormones have many different functions in the body, and one of the chief effects is contributing to maintenance of the basal metabolic rate, or the speed of reactions within the body and the temperature of the body. An insufficiency of thyroid hormone adversely effects numerous biochemical reactions and body/organ functions, hence the myriad of clinical presentations. Clinical presentation of hypothyroidism In his classic book Biochemical Individuality, the author, Dr. Roger Williams, noted that “a wide variation in thyroid activity exists among ‘normal’ human beings.” Clearly, some patients do not make enough thyroid hormone to function optimally; or perhaps more precisely, they make enough thyroid hormone (T4) but do not efficiently convert it to the active form (T3) in the periphery. Further complicating the picture is that some patients make appropriate amounts of TSH, T4 and T3 but they make excess of inactive reverse T3 (rT3) which puts them into a physiologic state of hypothyroidism despite adequate glandular function. Patients may have one or more of the following: Printed on recycled paper to protect the environment fatigue, depression, cold hands and feet (excluding Raynaud’s syndrome, which is a part of peripheral vascular disease), dry skin, menstrual irregularities, infertility, premenstrual syndrome (PMS), uterine fibroids, excess menstrual bleeding, low basal body temperature, weak fingernails, sleep apnea and increased need for sleep, slow heart rate, easy weight gain and difficult weight loss (thus, predisposition to overweight and obesity), hypercholesterolemia, slow healing, decreased memory and concentration, frog-like husky voice, low libido, recurrent infections, hypertension (especially diastolic hypertension), poor digestion (due to insufficient gastric production of hydrochloric acid), delayed Achilles return (due to delayed muscle relaxation), carotenodermia (yellowish skin), vitamin A deficiency, gastroesophageal acid reflux, constipation, and predisposition to small intestine bacterial overgrowth (SIBO) due to slow intestinal transit. Of these manifestations, cold hands and feet, low basal body temperature, slow heart rate, and delayed Achilles return are the most specific. Some very competent physicians will, following proper patient evaluation, treat with thyroid hormone based on the clinical presentation of the patient without dependency upon laboratory findings.der a courtesy consultation. The Gift of Empathy How to Be a Healing Presence W by Margret Aldrich hentaking, “Allowing into our heart the other person’s suffering doesn’t mean we suffer with them, because that means shifting the focus of our attention to our own experience. Rather, it means we recognize the experience as fully human, and behold the beauty of it in all its aspects, even when difficult.” Margret Aldrich is a former associate editor of Utne Reader. March 2013 Licensed No. 2425, the Florida Commission for Independent Education 352-371-2833 COMING IN APRIL Natural Awakenings’ SPECIAL ISSUE GREEN LIVING Celebrate the possibilities of sustained healthy living on a flourishing Earth. Shape Body, Mind and Spirit • Color Therapy • Acupuncture • Electronic Gem Therapy • Biofeedback • Voice mapping/Clear • Homeopathic Injection Therapy for facial Mind Technology rejuvenation • Emotional Transformation Therapy Dr. Paula Koger DOM 2007 Top 5 Doctors in Bay Area 941 539 4232 Rainbow Natural Medicine wealthofhealthcenter.com Buy into the For more information about advertising and how you can participate, call 352-629-4000 24 community … Support our advertisers Printed on recycled paper to protect the environment Tickets: March 2013 25 The Birthright of Human Beings by David Wolf I n self-development circles, we commonly hear appeals to be aware of our emotions, feel our feelings, and experience our experiences. Certainly such expressions represent vital principles for conscious living. They signify consciousness itself. To the extent that we’re not aware of what’s happening inside us, we’re less than conscious of parts of ourselves. Thus, consciousness of our sensations, feelings, and thoughts is an essential foundation for selfrealization. Relatedly, the capacity to effectively communicate our experiences is a characteristic of a developed intrapersonal and interpersonal life. Learning from a Newborn Experience and expression, though, do not constitute the totality of a purposeful and fulfilling human life. After all, babies are masterful at experiencing and expressing their experience. Their anger, happiness, pain and sadness are communicated Grow Your Business Naturally with Natural Awakenings. Easy & affordable. Starting at just $19.95/month! How much is it costing you NOT to grow your business?. 26 without pretension. Certainly we can appreciate the authenticity of the newborn, especially as such genuineness tends to become elusive to many of us as we enter adolescence and adulthood. We can understand, however, that the consciousness of most very young children is not the full embodiment of self-realization. Consider qualities such as empathy for others, or the ability to voluntarily forego immediate gratification for a longer term goal. These are traits of a developed person that a small child tends to lack. It’s rare, for instance, for a mother to hear from her infant, “I understand that you’ve had a particularly stressful day, and you’re feeling rather irritated. So, I’ll patiently wait for a few hours, without crying or complaint, until you feed me.” Purposeful Living As spiritual beings having a human experience, it is our responsibility (while being grounded in our experience), to cultivate a sense of purpose—a purpose that recognizes and addresses our eternal spiritual nature, beyond the temporary attractions and repulsions of the material body and mind. Such a sense of purpose provides a context for understanding our experiences, for utilizing our cognitive, emotional, and physical experiences for the sublime purpose of consciousness-raising, of realizing our essential identity, apart from designations related to that which is impermanent. Express Emotions … Less? Nurturing conscious awareness is paramount in determining the extent to which expression of emotion is lifeenriching or life-alienating. What you express connects with the principle of conscious choice. Sometimes in the personal growth environment, I sense an attitude that assumes or implies that the goal or purpose is to express our emotions more. It’s true that for some people the process of consciousness-raising will look like being more expressive, emotionally and otherwise. But for others, the process of self-development will look like expressing emotions less—for some people, expressing thoughts, emotions, etc., can be an avoidance, a diversionary drama, from their internal life. So, the vital principle is about living from conscious choice—from inspiration rather than from fear. Fear of being rejected can cause us to not authentically express, and fear can also cause us to express and share not from a place of courage, but rather to evade genuinely meaningful and productive dialogue. Thus, whether we’re silent and internal, or verbally, emotionally expressive, in itself, in my view, is not the key factor. The essential principle is to be conscious, at choice. Spirit thrives when we experience our experience, in a childlike way, while also enhancing the natural philosophical capacity that is the birthright human beings. David B. Wolf, Ph.D., L.C.S.W., is the founder of Satvatove Institute (www. Satvatove.com), an international personal and organizational development company with headquarters in Alachua, Florida. The author of Relationships That Work: The Power of Conscious Living, he conducts transformative communication and personal growth seminars worldwide, as well as individual and group coaching. He has published books and articles in a variety of fields, including mantra meditation and child protection, and is the director of the Satvatove School of Transformative Coaching. Printed on recycled paper to protect the environment Desire “Like” our Natural Awakenings Facebook page for breaking news about health, the earth, and upcoming events. HEALTHY LIVING HEALTHY PLANET feel good live simply laugh more Live Simply FREE & Enjoy Relax and Refresh HOLIDAY YOGA 3 Easy Poses BREATHE These five components each work together to create what we want in Thoughts life via physics wave theory. One Beliefs cannot create a desire directly from Felt Emotions outward actions without first aligning thoughts, beliefs, and felt emotions. Just like an echo in a canyon—when you yell “1,2,3,4” into Outward Actions the canyon you do not hear back “5,6,7,8.” You hear exactly “1,2,3,4.” This isn’t negotiable; it’s physics wave theory. If one desires (wants something)—health for example but doesn't think, believe, or feel health to be possible for them, physics alignment is not present and long term health cannot be created and/or maintained. My book Why Stuff Happens in Life—the Good and the Bad (WSH) explains this in great detail. INTO BEING The Ins & Outs of Better Health Why “Stuff” Happens (WSH) in Life—the Good & the Bad Keywords: Natural Awakenings Gainesville, Ocala, The Villages Facebook is a registered trademark of Facebook, Inc. To advertise your north-central Florida business in by Stephanie Keller Rohde & End The Clutter ETC® 8961 SW 96 Lane—Unit D Ocala, FL 34481-6670 Toll-free (24/7) Recorded Message: 888-223-1922 Eastern Time Business Hours: 352-873-2100 Print Books: E-books: Teaching the Physics Components of Life—vibrant health, financial independence with wealth, unconditional loving relationships, and any (and every) life situation… endtheclutter@cfl.rr.com We do create our own personal unique reality every second of every day via physics wave theory whether we currently know (or like) this fact or not—again, it’s not negotiable, it’s physics—and we can always create differently starting right now. JUST CALL US! We create a personalized marketing plan, targeting YOUR CUSTOMERS YOUR NEEDS 352-629-4000 this tion Men and ad 25 ea$ v i e c n re unt o disco first your cing. pri visit Consultation Scheduling Times: 10:00 AM—Noon including lunch after the session—slow cooked veggies, beans, and whole grain brown rice (or animal protein), and fresh raw fruit for dessert. (Yes, there is such a thing as a free lunch if you believe it to be possible...) 1:30 PM—3:30 PM 4:00 PM—6 PM March 2013 27 At Peace in Your Mind by Rev. Bill Dodd H ave you ever asked yourself what might make your life better? With the present hectic pace and demands of modern life, we often feel stressed and overworked. Wouldn’t you like to have a truly better life with more time for yourself? A little peace and harmony for each of us would make us feel more spiritually connected and more relaxed overall. Meditation provides relief to sensory overload by allowing the mind to be quiet. Getting the mind to rest takes some time and practice, but it is worth it. It even gives us better focus on a daily basis. For some, meditation can positively affect physical health. Even a simple 10- or 15-minute exercise in meditation can help us to overcome stress and find true inner peace and balance. Meditation can also help us to understand our own mind. We can learn how to transform our thoughts from negative to positive, from disturbed to peaceful. Overcoming negativity and cultivating constructive thoughts is the purpose of the transforming meditations in the Buddhist tradition. Through meditative practices, we ultimately can get to a truly deeper state of self-awareness. With a quiet mind we free our overall awareness. And we ultimately find, after significant practice, that we can meditate at any time and anywhere, accessing the inner calm no matter what’s going on around us. We also find that we can better temper our reactions to things in life. Many people put off practicing meditation because they contend they don’t have 30 or 40 minutes to devote to the practice. Unfortunately, these stressed-out people who don’t have enough time to meditate are the ones who in fact need it the most. Once you learn how to truly meditate, the rest of the day will start to go much more smoothly with fewer difficulties. Even if we don’t find the time to go to a meditation class weekly, after a little meditation practice, we find that we can begin to settle our minds and have some beautiful inner peace. All we have to do is take less than 15 minutes before going to bed and follow our breath coming in and going out. It is best to close our eyes and focus our attention on that place slightly above and between our eyes (the “third eye”). Besides feeling better inside, meditation sends positive thoughts to people and circumstances around us. It also allows us to visualize what we’d like to bring into our lives through affirmations for peace, health, prosperity and happiness. People who are beginning meditative practices can use simple efforts. Basic simple practices such as walking or running can help us center our minds and improve our efforts in meditation. We simply need to set aside a short time each day to create physical or mental benefit in our lives. Another example of meditative practices is using guided visualizations where we can see scenes of peace and love in our lives. We can envision the perfect job or relationship and affirm, see, and feel these truths in our lives right now. Ultimately meditation will help us to center ourselves and to become successful on a regular basis. We will become exceptionally happy in our lives. And we deserve it. Reverend Bill Dodd is co-minister of Trinity of Light Spiritual Center. Services are held Sunday at 10am at the College of Central Florida, Enterprise Building #101. Meditation Class led by Mary Dodd is held on Thursday at 7pm at the Ocala Inner center. Trinity of Light combines Eastern and Western omnicultural philosophy in meditationcentered practices. Contact Trinity of Light at 352-502-0253. 28 Printed on recycled paper to protect the environment Himalayan Salt by Nuris Lemire, MS, OTR/L, NC E veryone knows someone who suffers from constant headaches. Many have family members who complain of lethargy brought about by bouts of insomnia or just restless sleep. Depression abounds in the masses almost as frequently as irritability. Many times people are just not able to put a finger on why they suffer from all these things. They look to sleep aids or anti-depressants. It is human nature to want to know why something is happening, to figure out the piece of the puzzle that they are missing and remedy the issue. We live in the technology age, a time where nearly every home has a computer (or three), flat screen television, fluorescent kitchen lights, cell phone(s), tablet(s), microwave and other electric appliances. It’s a time when homes have begun to literally bristle with ways to make people’s lives “simpler.” It’s also a time of some of the worst electronic pollution imaginable. Not just from planes, trains, and automobiles, but from the everyday things being used to help our “simpler” lifestyles continue. These amazingly sharp televisions, irreplaceable cell phones, and convenient microwaves are creating environments toxic to humans’ wellbeing through an overabundance of free radicals. Little by little, these free radical positive ions accumulate in the body, creating additional stress on the precious homeostasis (balance) that the body struggles to maintain. In recent years, more information has been surfacing about the negative side effects of free radicals and how they relate to our health. Normally bonds don’t split in a way that leaves a molecule with an odd, unpaired electron. When weak bonds split, however, free radicals are formed. These new molecules are very unstable, reacting with other compounds in an attempt to snag excess electrons with the goal of becoming stable. Typically they’ll go for the nearest stable molecule, taking its electron—and in so doing, will create a chain reaction in an organism of molecules stealing electrons from other, more stable molecules. Eventually, this process causes damage in the organism on a cellular level which can present itself in many different ways. There are natural ways to negate the effects of free radicals, but not everybody has the luxury of going to the beach or sitting on a porch during a thunderstorm. Increasing antioxidants can help as well, because antioxidants will donate an electron while still maintaining molecular stability. Enter pink Himalayan salt. Himalayan salt is the only substance with antioxidant properties that work in all areas in the human body. Himalayan salt not only has 84 elements and trace minerals, but it is also easily absorbed by the body, becoming readily available shortly after consumption. It is molecularly stable and able to neutralize free radicals without becoming part of the chain reaction. Not only does it have more elements and trace minerals than sea salt and refined salt, but it helps to lower blood pressure, unlike either of the above. It can be used in baths to aid with detoxification, relaxation, and increased circulation, as well as be a replacement for table salts. This amazing natural substance is available in both bath and food grades as well as in soothing lamps to purify the air. Himalayan salt lamps release negative ions when heated to help combat free radicals in the air, binding to and neutralizing them. They help to reduce humidity indoors and they provide a soothing soft light, making them great as night lights while still functioning as natural air purification. There have been studies on light waves and colors showing how the colors can improve mood and increase positive emotions. Animals, too, are keenly aware of the benefits and will often lie close to the lamps. Himalayan salt lamps are like a “beach in a bottle.” The benefits of natural Himalayan crystal salt include: regulating the water content throughout the body; promoting a healthy pH balance in the cells, particularly the brain cells; promoting blood sugar health and helping to reduce the signs of aging; assisting in the generation of hydroelectric energy in cells in the body; supporting respiratory health; promoting sinus health; prevention of muscle cramps; promoting bone strength; regulating sleep; supporting your libido; promoting vascular health; and, in conjunction with water, it is actually essential for the regulation of the blood pressure. Lemire Clinic is introducing the Himalayan Salt Room Ocala. The Himalayan Salt Room is a therapy which mimics the ancient Himalayan salt mines and the salt caves in Europe. This therapy consists of a 45-minute treatment in which you simply relax and breathe. The Himalayan Salt Room is comprised of pink Himalayan salt on all surfaces with ionized air for breathing treatments. Research has proven the therapeutic values of salt caves and their positive influence in the treatment of diseases including asthma, bronchitis, inflammation of the sinuses, lungs and large intestine, emphysema, duodenal and gastric ulcers, and general nervousness and exhaustion. Contact Lemire Clinic (352-291-9459,. com) for additional information, upcoming specials, or to purchase a beautifully crafted healing salt lamp. March 2013 29 CommunityResourceGuide Acupuncture Fitness. Winning Harmony CounselingTM James R. Porter, Ph.D., LMHC, MH10992 Gainesville, Alachua 352-514-9810, Be Yourself. Finally. Dr. Porter draws from modern culture, 12+ years of advanced clinical training, 2 years of formal breath and body training, and a lifetime of formal and autodidactic spiritual training to create a counseling experience that will energize those wanting to address nearly any mental, emotional, or life issue. Biologic Dentistry Dr. Cornelius A. Link, DDS 2415 SW 27th Ave., Ocala / 352-237-6196. Botanical Salon & Day Spa Haile Village Spa & Salon 5207 SW 91st Terrace, Gainesville / 352-335-5025 We are a full service AVEDA hair salon for every type of hair and offer extensions, fashion forward color, and designer haircuts. We also specialize in ORGANIC skin-care and cosmetics for facials, makeovers, and skin treatments. We offer both spa and medical grade massage, acupuncture, detox body wraps, body scrubs, body contouring, lypossage, natural nail manicures, pedicures and waxing. Like us on Facebook for weekly Salon and Spa specials!. 30 Holistic Medicine Hanoch Talmor, M.D. Gainesville Holistic Center 352-377-0015 We support all health challenges and the unlimited healing potential of God’s miracle: your body. Chelation, Nutrition, Cleansing, Homeopathy, Natural Energy Healing, Detoxification, Wellness Education and more. James E. Lemire, M.D., FAAFP Nuris Lemire, MS, OTR/L, NC The Lemire Clinic. Massage Clark Dougherty Therapeutic Massage Clinic 415 NE 25th Ave., Ocala 352-694-7255 / Offering a variety of therapeutic massage techniques for pain relief, improved flexibility, and other wonderful benefits. PIP and WorkComp always accepted, also group/private insurance in some instances. All credit cards accepted. Gift certificates are available for holidays and birthdays with 25% discount on a second session. MA27082, MM9718. Physics of Life & Health Stephanie Keller Rohde, End The Clutter ETC® Toll-free 24/7 message, 888-223-1922. Direct line (business hours), 352-873-2100. Web site: Print books: eBooks: My books and I teach how to create anything in life (vibrant health, wealth, unconditionally loving relationships, etc.) that an individual desires and currently does not yet have. Printed on recycled paper to protect the environment. For $66/month, 65,000 readers will see your Community Resource Guide listing here. Call today! 352-629-4000 The Frugal Wine Snob The blog about wines that taste like a million bucks, but cost less than $20. Grow Your Practice Naturally with Natural Awakenings! The cost of an ad in the â&#x20AC;&#x153;Community Resource Guideâ&#x20AC;? is less than a daily cup of Starbucks. How much is it costing you not to grow your business? Call 352-629-4000. CURLY HAIR SPECIALIST! Late Hours By Appt. Open Tuesday to Saturday Featuring All-Nutrient Organic Hair Color Earth friendly 10% off first visit * Sustainable products Dimensional color * with selected stylists Waxing and facials Independent Consultant 500 S.W. 10th Street, Suite 303 Ocala, Florida 34471 352-351-8991 Keratin Treatment with no formaldehyde and no smell! March 2013 31 calendarofevents February 7-March 3 “A Funny Thing Happened on the Way to the Forum,” musical comedy stage production. Ocala Civic Theatre, 4337 E. Silver Springs Blvd., Ocala, 352-236-2274,. February 22-March 17 “King of the Moon,” comedy stage production. Hippodrome Theatre, 25 SE 2nd Pl., Gainesville, 352-375-HIPP,. Friday, March 1 Aumakhua-Ki™ Healing Level 1 with Rev. Ojela Frank, LMT. 12 noon, online course. For information about the new daily course, cost, and preregistration: 352-239-9272,. March 1-4 Krishna Das “The Heart of Devotion” Retreat including Kirtan and Yoga workshops. $175. Hindu Temple of Central Florida, Casselberry, FL. Tickets and info: 321-439-8353,. Saturday, March 2 * Buddha Card Readings with Rev. Steve Henry. 12-5pm, $20/mini reading, $30/half hour, $60/full hour. Call to sign up or walk in. High Springs Emporium, 660 NW Santa Fe Blvd, High Springs, 386-454-8657, www. highspringsemporium.net. * Create Conscious Community through skill building, fun personal growth games, and powerful introspection. 1-6pm. Sacred Earth Center, 1029 NW 23rd Ave, Gainesville, FL 32609. patrickmangum@yahoo.com or 386937-6491 to register. * Introduction to Traditional Yoga with Chandresh Workshop, 11am12:30pm, free. Bliss Yoga Center, 1738 SE 58th Ave., Ocala, BlissYogaCFL.com. * Raspberry Ketones, African Mango, 7 Keto, Red Palm Oil, Green Coffee Bean Extract, are metabolic boosters to bust fat. Free consultation; call for appointment. Reesers Nutrition 32 Center, 3243 E. Silver Springs Blvd, Ocala, 352-732-0718,. * RawSome: Introduction to Eating Raw class, 11-1, $45. Held at Unity of Ocala’s Unity House, 101 Cedar Rd., Ocala. Registration required: 352-3612666, Lenore@lenoremacary.com. Sunday, March 3 Rev. Marita Graves, New Thought Minister and Mastery of Language teacher, will speak at the 11am service. All are invited to stay for Potluck Lunch. At 1pm, a Playshop experience on “Sacred Communication” will be facilitated by Rev. Marita. Love offering. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, 352-373-1030, Monday, March 4 Meet the Doctor and Patient Information. 6pm, free. Call for reservation. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352-291-9459,. Tuesday, March 5 Aumakhua-Ki™ Healing Level-1 Certification, online course begins. Register at. Wednesday, March 6 Metabolic Balance All Natural Weight Loss. No pills, no shakes, no injections, no craving, no hunger. Free consultation. Call for appointment. Reesers Nutrition Center, 3243 E. Silver Springs Blvd, Ocala, 352-732-0718,. March 8-10 Transformative Communication and Self-Empowerment Seminar facilitated by Dr. David Wolf, author of Relationships That Work, and Marie Glasheen, professional transformative coach. Information/register: dharm. khalsa77@gmail.com, 352-222-6331,. Saturday, March 9 * “Chakra Balancing with Crystals” Introductory Workshop and Sessions with Sharron Britton. Workshop 121:30, sessions 2-5:30. Workshop $20, sessions $10. High Springs Emporium, 660 NW Santa Fe Blvd, High Springs, 386-454-8657,. * “Gracefully Adjusting to Later Years with Happiness and Enthusiasm” workshop with Andrew and Ingrid Crane. 9am. RSVP by March 6 to Cara Owens, Vitalize Nutrition Company, Ocala, 352-509-6839,. March 9-10 * Marion County Master Gardeners expo. $1/person. 8-5 Saturday, 9-4 Sunday, Marion County Fairgrounds, 2232 NE Jacksonville Rd., Ocala. * Reiki Level II with Ojela Frank, LMT, in Ocala. Saturday 10-5:30, Sunday 12-4:30. Information: 352-2399272,. March 9-10 and 16-17 “Prince Charmless,” youth stage production, outdoor stage. Ocala Civic Theatre, 4337 E. Silver Springs Blvd., Ocala, 352-236-2274,. Sunday, March 10 Dr. Denise D’angelo Jones, speaker and singer/musician, 11am, love offering. Topic: living in the now and trusting intuition by blessing others with our unique spiritual gifts of sacred service. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, FL 32606. 352-373-1030,,. bandcamp.com. Monday, March 11 * Cancer FREE Naturally lecture with author Jean Sumner. 6pm, free. Call for reservation. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352-291-9459,. * Energy Activations for Awakening with Rev. Ojela Frank, LMT, (MA60322), all day, online by Skype video call or local sessions in Ocala, Register at, 352-239-9272. Printed on recycled paper to protect the environment Wednesday, March 13 * All-in-one-step total body cleanse. Antioxidant, anti-aging, immune support, intestinal health, weight Loss. Free consultation; call for appointment. Reesers Nutrition Center, 3243 E. Silver Springs Blvd, Ocala, 352-732-0718,. * Healthy Wednesday. Featured Film: “Cut Poison Burn,” documentary illuminating the truth about cancer treatment. Meditation at 6pm, potluck supper at 6:30, film at 7. $5/person. Unity of Ocala, 101 Cedar Rd., Ocala,. Thursday, March 14 Green Smoothie Workshop, Demonstration, and Tasting. 6pm, free. Call for reservation. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352291-9459,. Saturday, March 16 * A Grand Metaphysical/Wellness Fair. 10-4. Explore options in healing, ways to heightened energy, inner beauty and wholeness. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, FL 32606, 352-373-1030,. * Baby Yoga Class. 11-11:45am, free. Held at Gainesville’s Headquarters Library. 352-339-2212,. * “Esoteric Healing and Crystals” Workshop and sessions with Fran Oppenheimer, RN, LMT and certified Esoteric Healer. Workshop presentation 11am-12:30pm, $20. Sessions 1-5:30, $20. High Springs Emporium, 660 NW Santa Fe Blvd, High Springs, 386-4548657,. * Psychic/Medium Spiritual Development Class. 2-4:30pm. Includes meditation, lesson, practice. $25. Held at Unity of Gainesville, 8801 NW 39th Ave. International Foundation for Spiritual Knowledge,, 407-673-9776. * Toddler Yoga Class, 11:45am12:15pm, free. Held at Gainesville’s Headquarters Library. 352-339-2212,. Sunday, March 17 Rev. Marty Dow, “The Power of Love,” with guest musician Michelle Manderino. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, FL 32606. 352-373-1030,,. Monday, March 18 * Are You Digging Your Grave With Your Fork? Lecture with Nuris Lemire, MS, OTR/L, NC. 6pm, Free. Call for reservation. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352291-9459,. * Spiritualist service. What happens to our loved ones when they pass over? Who are our Spirit Guides? What are the principles Spiritualists follow? What exactly are spiritualist messages? 10am with Rev. Diane King & Rev. Roger Blom. Unity of Ocala, 101 Cedar Road, Ocala,. Wednesday, March 20 * Live Blood Analysis. $60, call for appointment. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352291-9459,. * Wellness consultation on Irritable Bowel Syndrome and 24-hour urinalysis for biochemical evidence of what foods your body is having a difficult time digesting and assimilating. Free consultation; call for appointment. Reesers Nutrition Center, 3243 E. Silver Springs Blvd, Ocala, 352-732-0718,. Gold, 386-341-6260,. Saturday, March 23 * “Crystals for Balance and Grounding” Workshop with Sharron Britton. 2-4pm, $20. Call to sign up. High Springs Emporium, 660 NW Santa Fe Blvd, High Springs, 386-4548657,. Now enrolling toddlers ages 18 months to 3 years! "Free the child's potential, and you will transform him into the world." -Maria Montessori 16515 NW US HWY 441 Alachua. Fl 32615 (386) 418-1316 littlelotuspreschool.com ● Lic# F08AL0791 Thursday, March 21 Dreams: The Language of Spirit workshop. Group Coaching session. $10, 6pm. Call for reservation. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352-291-9459, www. LemireClinic.com. March 21-April 14 “Boeing Boeing,” comedy stage production. Ocala Civic Theatre, 4337 E. Silver Springs Blvd., Ocala, 352-2362274,. March 22-24 Divine Healing Hands Training Program. 9am-10pm, with Master Ellen and Master Sha. Ocean Center, 101 N. Atlantic Ave., Daytona Beach.. Geho Ongoing Psychic Medium Spiritual Development Classes Next Class: Saturday, March 16, 2:00 p.m. Visit for details Check our complete program on the website. March 2013 33 calendarofevents * The Erasables in concert. 7:30pm, $15. Band will be performing tunes from their album “Heads in the Sand,” and debuting new material from their upcoming release. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, FL 32606. 352-373-1030,, www. erasables.net. Sunday, March 24 Guest speaker David Charles Drew, Unity Minister, “What Does It Mean To Be BLESSED?” David was a radio and TV broadcaster, business trainer and seminar leader before being led to the ministry. 11am. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, FL 32606. 352-373-1030, March 23-24 Kanapaha Spring Festival. 9-5 Saturday, 10-4 Sunday. Kanapaha Botanical Springs, 4700 SW 58th Drive Gainesville. March 25-26 Auditions for “Guys and Dolls,” musical stage production, 7pm. Director: Greg Thompson. Ocala Civic Theatre, 4337 E. Silver Springs Blvd., Ocala, 352-236-2274,. March 26-31 Satvatove Advanced Seminar Experience. Six days of courageous introspection and self-empowerment, facilitated by Dr. David Wolf and Marie Glasheen. Information/register:dharm. khalsa77@gmail.com, 352-222-6331,. Wednesday, March 27 Signs and Symptoms Analysis. Any time any of the organs and system of the body are out of balance, there are signs and symptoms. Once identified, a specific-to-you treatment is possible. Free consultation; call for appointment. Reesers Nutrition Center, 3243 E. Silver Springs Blvd, Ocala, 352-732-0718,. 34 Thursday, March 28 * Energy Activations for Awakening with Rev. Ojela Frank, LMT, (MA60322), all day, online by Skype video call or local sessions in Ocala, Register at, 352-239-9272. * Raw Food Class. $10, 6pm. Call for reservation. Lemire Clinic, 11115 SW 93rd Ct Rd., Suite 600, Ocala, 352-291-9459,. com. * You Have the Power to Heal Yourself. 6:30-9:30pm with Master Ellen. $25 at the door, $20/pre-register. Soul Essentials, 805 E. Ft. King St., Ocala. Geho Gold, 386-341-6260, www. BeHealedWithin.com. Saturday, March 30 * 13th Wholistic Health and Community Fair, FREE, 10-4, indoors, Sunshine Park Mall, 2400 S. Ridgewood Ave., South Daytona.. * “Changing Cellular Biology with Light Medicine” workshop with Dr. Paula Koger, DOM. 3-5pm, FREE. One person will receive a complimentary assessment. Reservations: 941-5394232. * Little Lotus House Preschool Open House. 2-6pm, FREE. 16515 NW U.S. Hwy. 441, Alachua, 352-3392212,. * “Make a Joyful Sound,” Crystal Singing Bowl Healing Session. 1-5pm, $20. Call to sign up. High Springs Emporium, 660 NW Santa Fe Blvd, High Springs, 386-454-8657,. * Spring Garden Kickoff. 9-3, $3/workshop/person, garden tours, snacks, seedlings for sale. Crones Cradle Conserve, 6411 NE 217th Pl., Citra, 352-595-3377,. April 4-12 “Sex Please, We’re Sixty,” comedy stage production. Ocala Civic Theatre, 4337 E. Silver Springs Blvd., Ocala, 352-236-2274,. Saturday, April 6 Eco Agro Trails Run. Registration:. Information: ecoagrotrails@gmail.com. Held at Crones Cradle Conserve, 6411 NE 217th Pl., Citra, 352-595-3377,. April 6-7 Prenatal Yoga Teacher Training Workshop with Carmella Cattuti, 10am-6pm. YA and/or Nursing CEUs. Bliss Yoga Center 1738 SE 58th Ave., Ocala, BlissYogaCFL.com Monday, April 8 Energy Activations for Awakening with Rev. Ojela Frank, LMT, (MA60322), all day, online by Skype video call or local sessions in Ocala, Register at, 352-239-9272. Tuesday, April 9 Satvatove Institute School of Transformative Coaching is now accepting applications for the semester starting September, 2013. Classes are approved by the International Coach Federation (ICF). Syllabus:. com/syllabus.pdf. Information: 386418-8840, April 13-14 Spring Expo. Free admission. Saturday 11-9 (jazz and cuisine samples for sale 5-9), Sunday 10-4. Door prizes. Home/garden/business expo. The India Cultural Hall, NE 36th Ave. (two blocks north of Silver Springs Blvd.), Ocala.. Saturday, April 20 * Aumakhua-Ki™ Healing LEVEL-1 Certification with Rev. Ojela Frank, LMT, 10am-4pm, $50, The Martial Arts Center, Ocala. Register at www. aumakhua-ki.com, 352-239-9272. * Spring Sustainability and Natural Foods Gala. 9-3, $1/person admission, $1/sample. Garden tours, music, exhibits. Crones Cradle Conserve, 6411 NE 217th Pl., Citra, 352-595-3377,. Printed on recycled paper to protect the environment April 21-22 Aumakhua-Ki™ Healing LEVEL- 2 Certification with Rev. Ojela Frank, LMT, 10am, $100, The Martial Arts Center, Ocala. Register at, 352-239-9272. Sunday, April 28 Introduction to Initiation Healing® with Fabiola Kindt, LMT, 12-5:30pm, $30, The Martial Arts Center, Ocala, 352239-9272,. Bliss Yoga Center of Central Florida A Place for Yoga, Wellness, Spiritual Study and Practice Bliss Yoga Center ONGOING 1738 Baseline Road 352-694-YOGA (9642) Sundays * A Course in Miracles, 9:30am; Master Mind Healing Circle, 10am; Inspiring Message, Meditation and Music, 11am; Children and Youth education classes, 11am; Nursery care provided. Love offering. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, 32606, 352-373-1030, * Celebrating Community and Inspiring Message/ Science of Mind and Spirit. Meditation 9:45am, Celebration/Message 10:30am, Youth and Children’s Celebration 10:30am. Love offering. OakBrook Center for Spiritual Living, 1009 NE 28 Ave, Ocala, FL, * Celebration and Meditation, 10am. Farmers Market and MasterMind group afterwards. Unity of Ocala, 101 Cedar Rd., Ocala,. * Trinity of Light Spiritual Service and Meditation, 10am, College of Central Florida, Enterprise Bldg. RoomBring this Ad for a 101, 352-502-0253, Trinityoflightholders@aol.com. 10% Senior/Student Discount FREE 5 Day Trial Bring this ad for a ✁ FREE 5 Day Trial ✁ Mondays Bring thisSenior/Student Ad for a FREE 5Discount Day Trial * Abraham Study Group, 6pm; A Course in Miracles, 10% 10% Senior/Student Discount 7:30pm. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, 32606, 352-373-1030, * Bliss Yoga Welcomes Annie Osterhout, ERYT to the Center. Iyengar Style Yoga classe, 5:30-6:30pm Beginners, GET A MEMBERSHIP EXPERIENCE 6:30-7:45pm Intermediate. Bliss Yoga Center, 1738 SE 58th Whole Body free training $ Ave., Ocala, BlissYogaCFL.com. 49 GET A MEMBERSHIP EXPERIENCE Whole Body Vibration EXPERIENCE Vibration /mo. free measurements low impact • kind to joints • tone & firm free training $ low Whole Impact Body 49 Monday-Friday free use of free measurements Belly-dancing, fitness, yoga classes, personal training infrared sauna /mo. GET MEMBERSHIP freeAuse of as early as 5:30am, as late as 7:30pm. Hip Moves, 708 NW infrared free alkaline water $ sauna 23rd Ave, Gainesville, 352-692-0132,. free alkaline/mo. water No Contract • No Hidden Fees No Contract • No Hidden Fees Tuesdays 49 Now accepting some kind toVibration joints insurances including tone &low firmImpact Silver and Fit. kind to joints tone & firm training 3131 SW College Rd, SuiteFree 301, Ocala, FL 34474 • Phone: 352-304-5571 3131 SW College Rd, Suite 301, Ocala, FL 34474 • Phone: 352-304-5571 Aumakhua-Ki™ Healing Level-1 Certification, ONLINE course. Register at. Tuesday-Saturday Therapeutic Bodywork, Energy Healing with Ojela Frank, LMT (MA60322), at Hyde-Away Salon, Ocala. Session by Appointment, 352-239-9272,. Wednesdays * Aumakhua-Ki™ Healing Level-1 Certification, ON Free measurements Free use of infrared sauna Free alkaline water No Contract • No Hidden Fees 3131 SW College Rd., Suite 301, Ocala FL 34474 352-304-5571 March 2013 35 “Discover the Power Within You” Our spiritual community offers practical, spiritual teachings to empower abundant and meaningful living. We welcome you! LINE course. Register at. * Meditation, Visioning, and Healing Service, 6-7pm. Love offering. OakBrook Center for Spiritual Living, 1009 NE 28 Ave, Ocala, FL,. * Silent Unity meditation, 12-12:30pm. Unity of Ocala, 101 Cedar Rd., Ocala,. 11am Sunday-Inspiring Message, Meditation & Music Also UniKids, UniTeens, Youth Of Unity classes (Nursery care provided on Sundays) … a positive path for spiritual living ... 8801 NW 39th Avenue, Gainesville, FL 32606 352-373-1030— Eastern & Western OMNICULTURAL SPIRITUALITY Spiritual Services and Meditation sundays aT 10aM Meditation Classes Thursdays aT 7PM The College of Central Florida Ocala Inner Center, 3100 SW College Road, 205 So. Magnolia Ave, Ocala, FL Enterprise Building #101, trinityoflightholders@aol.com Ocala, FL 34474 352.502.0253 Thursdays * Beginning Yoga, 8:30-9:30am, All About Art, Belleview. Information: Annie Osterhout, 315-698-9749, www. OsterhoutYoga.com. * Chair Yoga, 9:45-10:45am, All About Art, Belleview. Information: Annie Osterhout, 315-698-9749,. Fridays Reiki Healing with Dee Mitchell, 7pm, first and third Fridays. Unity of Gainesville, 8801 NW 39th Ave., Gainesville, 32606, 352-373-1030, Saturdays Farmstead Saturdays. Free, 9-3. Crones Cradle, 6411 NE 217 Pl, Citra. 352-595-3377,. Calendar of Events listings are free for our advertisers and just $15/listing for non-sponsors. To publicize your event, visit. Acne Relief Experience the Power of Divine Healing Hands with Dr. and Master Zhi Gang Sha World-Renowned Soul Healer, Soul Leader, Divine Channel and Master Ellen Logan Divine Channel and Disciple of Dr. and Master Sha Dr. Sha is an important teacher and a wonderful healer with a valuable message about the power of the soul to influence and transform all life. – Dr. Masaru Emoto, The Hidden Messages in Water Master Ellen Logan Divine Channel New York Times Bestseller! Divine Healing Hands are helping people around the world experience relief from chronic pain, boost energy and stamina, increase mobility and agility, and even improve chronic conditions. Visit YouTube.com/ZhiGangSha to see hundreds of personal soul healing miracles. You can receive Divine Healing Hands blessings at these live events or through the new Divine Healing Hands book. Each copy is a healing treasure offering blessings to the reader. You Have the Power to Heal Yourself with Master Ellen Thursday, March 28, 6:30–9:30 pm, $25, pre-register $20 Soul Essentials, 805 East Ft. King St., Ocala 34471 Divine Healing Hands Training Progam with Master Ellen and Master Sha Friday–Sunday, March 22-24, 9 am–10 pm, $625 Live in Daytona Beach. • Master Sha will join by webcast from Mumbai! Ocean Center, 101 N. Atlantic Ave., Daytona Beach 32118 Become a powerful healer when you receive Divine Healing Hands transmission from Master Ellen! Unique and Extraordinary Training Program! • Apply at DivineHealingHands.com More than an invitation ... a sacred calling! Information: Geho at 386.341.6260, Institute of Soul Healing & Enlightenment™ 888.3396815 • DrSha.com • Facebook.com/DrAndMasterSha • DivineHealingHands.com 36. Printed on recycled paper to protect the environment Feel Better, Lose Weight, Increase Energy and Mental Clarity People using detoxified iodine have reported relief from: ONL $ Y 4 $ 5 sh 6 wee ippi k March 2013 37 TURN YOUR PASSION INTO A BUSINESS Own a Natural Awakenings Magazine! • • • • • Low Investment Work from Home Great Support Team Marketing Tools Meaningful New Career Phenomenal Monthly Circulation Growth Since 1994. Now with 3.6 Million Monthly Readers in:* Denver/Boulder, CO Hartford, CT Fairfield County, CT* New Haven/ Middlesex, CT Washington, DC Daytona/Volusia/ Flagler, FL NW FL Emerald Coast Ft. Lauderdale, FL Jacksonville/ St. Augustine, FL Melbourne/Vero, FL Miami & Florida Keys Naples/Ft. Myers, FL North Central FL* Orlando, FL Palm Beach, FL Peace River, FL Sarasota, FL Tampa/St. Pete., FL FL’s Treasure Coastoston, MA Western, MA Ann Arbor, MI Grand Rapids, MI East Michigan Wayne County, MI Minneapolis, MN Asheville, NC* Charlotte, NC Raleigh/Durham/ Chapel Hill, NC Mercer County, NJ Monmouth/Ocean, NJ* North NJ North Central NJ Somerset/Middlesex Counties, NJ South NJ Santa Fe/ Albuquerque, NM Las Vegas, NV* Long Island, NY Manhattan, NY Rockland/ Orange Counties, NY Printed on recycled paper to protect NaturalAwakeningsMag.com • Westchester/ Putnam Co’s., NY • Central OH • Oklahoma City, OK • Portland, OR • Bucks/Montgomery Counties, PA • Harrisburg, PA • Lancaster, PA • Lehigh Valley, PA • Northeastern PA* Houston, TX • North Texas • San Antonio, TX • Richmond, VA • Southwestern VA • Seattle, WA • Madison, WI* • Milwaukee, WI • Puerto Rico *Existing magazines the for sale environment Discounts & COUPONS Give yourself and your loved ones gifts of health, wellbeing, and sustainability while supporting our local economy. Shop locally! EXP. NOTICE: These Special Offers are good for this month only, unless otherwise stated. 50% off haircut & style w/ HIRE SKIPPY. Farm Stead Saturday, 9-3 every week. Fun for the whole family. FREE! 6411 NE 217th Pl., Citra 352-595-3377 Grow Your Business Naturally with Natural Awakenings. Easy & affordable. conditioner first-visit clients. Treat yourself to a New YOU for the New Year! She can find $$$ for YOU in your closet and garage! The expert in what to “Keep-eBay-Donate!” CONTACT on Facebook: HIRE SKIPPY. Twitter: @SueMorris8. Call: 352-857-2046. Starting at just $19.95/month! FREE Acupuncture consultation. 352-271-1211 507 NW 60 Gainesville How much is it costing you NOT to grow your business?. Advertisers! Less than a mile west of I-75. Next to Panera. Coupons start at $19.99 monthly 352-509-6839 4414 SW College Rd., #1520 Market Street at Heath Brook INSTRUCTIONS: 1. Visit 2. Select your ad package 3. Email your logo, contact information, and special offer. It’s okay to change your offer each month. Changes must be received by the 15th. Questions? 352-629-4000 NaturalAwakeningsGainesvilleOcalaTheVillages 20% discount on pre-purchase of 5 or more massage sessions Clark Dougherty Therapeutic Massage Clinic / MM 9718 MA 27082 / 352-694-7255 Nature’s Way Organic Salon & Spa, 4620 E. Silver Springs Blvd. Suite 502, 352-2365353, INTRODUCES Whole Life Coaching We offer group and individual Coaching Sessions to help you take the next steps in your life, nutritionally, spiritually and emotionally. Our coaches will suit up and take action with YOU and your activities. 11115 SW 93rd Ct. Rd., Suite 600, Ocala 352-291-9459 Bring this ad for 20% discount Ayurveda Health Retreat \ Yoga, vegetarian cooking classes, musical performances, trips (Costa Rica in May), yoga teacher certification, much more. Retreats and health services 365 days/year. 352-870-7645, Alachua Integrative Medicine 14804 NW 140th St / Alachua 386-418/1234. Chiropractic, Mental Health Counseling, Massage, MORE. Meeting the medical needs of your whole family—naturally FREE classes / consultations, every Wednesday. 20% Discount: Your special offer here. Larry Veatch, M.S., M.B.A. Call for appointment. Reesers Nutrition Center, 3243 E. Silver Springs Blvd., Ocala, 352-732-0718, 351-1298,. ( ( Pre-Purchase of 4 or More Sessions Patricia Sutton, LMT, NMT, CRT, MA22645 Neuromuscular Massage By Design / 352-694-4503 Board Certified Life Coach; Licensed Mental Health Counselor. 352-359-0071, larryv8@cox.net Larry has 33 years’ experience helping others via individual and group sessions. Call today! March 2013 Save Money on a Healthy Lifestyle! 6 39 Do People Naturally Confide In You And Come To You For Guidance? Do You Wish You Knew How To Give Your Wisdom To Others? The Satvatove School Of Transformative Coaching Is For Those Who Are Ready To Take Their Natural Helping/Coaching Propensity To The Next Level Of Excellence! Our Mission Is To Empower Empathy-Driven Individuals Through A Rigorous Training In The Principles, Tools And Skills Of Life Transformation Coaching “Because of the incredible coaching skills I gained from Principles and Practices of Transformative Coaching, I’ve had the confidence to step up and establish my Life Coaching practice.” - David Aycrigg- New Zealand Dr. David Wolf is the founder of Transformative Coaching and author of Relationships That Work: The Power Of Conscious Living GRADUATES OF THE SATVATOVE SCHOOL OF TRANSFORMATIVE COACHING: • Master Powerful Coaching Skills • Develop Confidence To Use Their Gifts To Make A Powerful Difference For Others • Cultivate Their Natural Talents As A Stand For Personal Growth For Everyone They Meet • Engage Intensively In The Adventure Of Self-Realization All This, While Developing A Career Deeply Aligned With Their Highest Sense Of Purpose “I received world-class training from Satvatove Institute's School of Transformative Coaching as well as profound personal growth and learning. Perfect for anyone dealing with clients, patients, students or humans of any kind : )” - Patrick Mangum- Florida Find out if Transformative Coaching can help you optimize your natural tendencies for supporting & empowering others. Register for a FREE Coaching/Coach Training consultation at: ACSTH 40 Approved Coach Specific Training Hours International Coach Federation Printed on recycled paper to protect the environment OR CALL NOW 1-386-418-8840
https://issuu.com/gonaturalawakenings/docs/march2013naturalawakeningsonline?mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&pageNumber=12
CC-MAIN-2022-27
en
refinedweb
Welcome to scikit-network’s documentation! Python package for the analysis of large graphs: Memory-efficient representation as sparse matrices in the CSR format of scipy Fast algorithms Simple API inspired by scikit-learn Resources Free software: BSD license GitHub: Documentation: Quick Start Install scikit-network: $ pip install scikit-network Import scikit-network: import sknetwork See our tutorials; the notebooks are available here. You can also have a look at some use cases. Citing If you want to cite scikit-network, please refer to the publication in the Journal of Machine Learning Research: @article{JMLR:v21:20-412, author = {Thomas Bonald and Nathan de Lara and Quentin Lutz and Bertrand Charpentier}, title = {Scikit-network: Graph Analysis in Python}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {185}, pages = {1-6}, url = {} } Reference - Data - Topology - Path - Clustering - Hierarchy - Ranking - Classification - Regression - Embedding - Link prediction - Linear algebra - Utils - Visualization Tutorials - Data - Topology - Path - Clustering - Hierarchy - Ranking - Classification - Regression - Embedding - Link prediction - Visualization Use cases About
https://scikit-network.readthedocs.io/en/stable/
CC-MAIN-2022-27
en
refinedweb
Inheritance is one of the core concepts of object-oriented programming (OOP) languages. It is a mechanism where you can to derive a class from another class for a hierarchy of classes that share a set of attributes and methods. You can use it to declare different kinds of exceptions, add custom logic to existing frameworks, and even map your domain model to a database. Try Stackify’s free code profiler, Prefix, to write better code on your workstation. Prefix works with .NET, Java, PHP, Node.js, Ruby, and Python. Declare an inheritance hierarchy In Java, each class can only be derived from one other class. That class is called a superclass, or parent class. The derived class is called subclass, or child class. You use the keyword extends to identify the class that your subclass extends. If you don’t declare a superclass, your class implicitly extends the class Object. Object is the root of all inheritance hierarchies; it’s the only class in Java that doesn’t extend another class. The following diagram and code snippets show an example of a simple inheritance hierarchy. The class BasicCoffeeMachine doesn’t declare a superclass and implicitly extends the class Object. You can clone the CoffeeMachine example project on GitHub. package org.thoughts.on.java.coffee; import java.util.HashMap; import java.util.Map; public class BasicCoffeeMachine { protected Map configMap; protected Map beans; protected Grinder grinder; protected BrewingUnit brewingUnit; public BasicCoffeeMachine(Map beans) { this.beans = beans; this.grinder = new Grinder(); this.brewingUnit = new BrewingUnit(); this.configMap = new HashMap();); } } } The class PremiumCoffeeMachine is a subclass of the BasicCoffeeMachine class. package org.thoughts.on.java.coffee;); } } } Inheritance and access modifiers Access modifiers define what classes can access an attribute or method. In one of my previous posts on encapsulation, I showed you how you could use them to implement an information-hiding mechanism. But that’s not the only case where you need to be familiar with the different modifiers. They also affect the entities and attributes that you can access within an inheritance hierarchy. Here’s is a quick overview of the different modifiers: - Private attributes or methods can only be accessed within the same class. - Attributes and methods without an access modifier can be accessed within the same class, and by all other classes within the same package. - Protected attributes or methods can be accessed within the same class, by all classes within the same package, and by all subclasses. - Public attributes and methods can be accessed by all classes. As you can see in that list, a subclass can access all protected and public attributes and methods of the superclass. If the subclass and superclass belong to the same package, the subclass can also access all package-private attributes and methods of the superclass. I do that twice in the constructor of the PremiumCoffeeMachine class. public PremiumCoffeeMachine(Map beans) { // call constructor in superclass super(beans); // add configuration to brew espresso this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28)); } I first use the keyword super to call the constructor of the superclass. The constructor is public, and the subclass can access it. The keyword super references the superclass. You can use it to access an attribute, or to call a method of the superclass that gets overridden by the current subclass. But more about that in the following section. The protected attribute configMap gets defined by the BasicCoffeeMachine class. By extending that class, the attribute also becomes part of the PremiumCoffeeMachine class, and I can add the configuration that’s required to brew an espresso to the Map. Method overriding Inheritance not only adds all public and protected methods of the superclass to your subclass, but it also allows you to replace their implementation. The method of the subclass then overrides the one of the superclass. That mechanism is called polymorphism. I use that in the PremiumCoffeeMachine class to extend the coffee brewing capabilities of the coffee machine. The brewCoffee method of the BasicCoffeeMachine method can only brew filter coffee. public Coffee brewCoffee(CoffeeSelection selection) throws CoffeeException { switch (selection) { case FILTER_COFFEE: return brewFilterCoffee(); default: throw new CoffeeException("CoffeeSelection [" + selection + "] not supported!"); } } I override that method in the PremiumCoffeeMachine class to add support for the CoffeeSelection.ESPRESSO. As you can see in the code snippet, the super keyword is very helpful if you override a method. The brewCoffee method of the BasicCoffeeMachine already handles the CoffeeSelection.FILTER_COFFEE and throws a CoffeeException for unsupported CoffeeSelections. I can reuse that in my new brewCoffee method. Instead of reimplementing the same logic, I just check if the CoffeeSelection is ESPRESSO. If that’s not the case, I use the super keyword to call the brewCoffee method of the superclass. public Coffee brewCoffee(CoffeeSelection selection) throws CoffeeException { if (selection == CoffeeSelection.ESPRESSO) { return brewEspresso(); } else { return super.brewCoffee(selection); } } Prevent a method from being overridden If you want to make sure that no subclass can change the implementation of a method, you can declare it to be final. In this post’s example, I did that for the addBeans method of the BasicCoffeeMachine class.); } } It’s often a good idea to make all methods final that are called by a constructor. It prevents any subclass from, often unintentionally, changing the behavior of the constructor. A subclass is also of the type of its superclass A subclass not only inherits the attributes and methods of the superclass, but it also inherits the types of the superclass. In the example, the BasicCoffeeMachine is of type BasicCoffeeMachine and Object. And a PremiumCoffeeMachine object is of the types PremiumCoffeeMachine, BasicCoffeeMachine, and Object. Due to this, you can cast a PremiumCoffeeMachine object to type BasicCoffeeMachine. BasicCoffeeMachinee coffeeMachine = (BasicCoffeeMachine) PremiumCoffeeMachine(beans); That enables you to write code that uses the superclass and execute it with all subclasses. public void makeCoffee() throws CoffeeException { BasicCoffeeMachine coffeeMachine = createCoffeeMachine(); coffeeMachine.brewCoffee(CoffeeSelection.ESPRESSO); } private BasicCoffeeMachine createCoffeeMachine() { // create a Map of available coffee beans Map<CoffeeSelection, CoffeeBean> beans = new HashMap<CoffeeSelection, CoffeeBean>(); beans.put(CoffeeSelection.ESPRESSO, new CoffeeBean("My favorite espresso bean", 1000)); beans.put(CoffeeSelection.FILTER_COFFEE, new CoffeeBean("My favorite filter coffee bean", 1000)); // instantiate a new CoffeeMachine object return new PremiumCoffeeMachine(beans); } In this example, the code of the createCoffeeMachine method returns and the makeCoffee method uses the BasicCoffeeMachine class. But the createCoffeeMachine method instantiates a new PremiumCoffeeMachine object. When it gets returned by the method, the object is automatically cast to BasicCoffeeMachine and the code can call all public methods of the BasicCoffeeMachine class. The coffeeMachine object gets cast to BasicCoffeeMachine, but it’s still a PremiumCoffeeMachine. So when the makeCoffee method calls the brewCoffee method, it calls the overridden method on the PremiumCoffeeMachine class. Defining abstract classes Abstract classes are different than the other classes that we’ve talked about. They can be extended, but not instantiated. That makes them ideal to represent conceptual generalizations that don’t exist in your specific domain, but enable you to reuse parts of your code. You use the keyword abstract to declare a class or method to be abstract. An abstract class doesn’t need to contain any abstract methods. But an abstract method needs to be declared by an abstract class. Let’s refactor the coffee machine example and introduce the AbstractCoffeeMachine class as the superclass of the BasicCoffeeMachine class. I declare that class as abstract and define the abstract brewCoffee method. public abstract class AbstractCoffeeMachine { protected Map<CoffeeSelection, Configuration> configMap; public AbstractCoffeeMachine() { this.configMap = new HashMap<CoffeeSelection, Configuration>(); } public abstract Coffee brewCoffee(CoffeeSelection selection) throws CoffeeException; } As you can see, I don’t provide the body of the abstract brewCoffee method. I just declare it as I would do in an interface. When you extend the AbstractCoffeeMachine class, you will need to define the subclass as abstract, or override the brewCoffee method to implement the method body. I do some minor changes to the BasicCoffeeMachine class. It now extends the AbstractCoffeeMachine class, and the already existing brewCoffee method overrides the abstract method of the superclass. public class BasicCoffeeMachine extends AbstractCoffeeMachine { public BasicCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) { super(); this.beans = beans; this.grinder = new Grinder(); this.brewingUnit = new BrewingUnit();!"); } } // .... } Another thing I changed is the constructor of the BasicCoffeeMachine class. It now calls the constructor of the superclass and adds a key-value pair to the configMap attribute without instantiating the Map. It is defined and instantiated by the abstract superclass and can be used in all subclasses. This is one of the main differences between an abstract superclass and an interface. The abstract class not only allows you to declare methods, but you can also define attributes that are not static and final. Summary As you’ve seen, inheritance is a powerful concept that enables you to implement a subclass that extends a superclass. By doing that, the subclass inherits all protected and public attributes and methods, and the types of the superclass. You can then use the inherited attributes of the superclass, use or override the inherited methods, and cast the subclass to any type of its superclass. You can use an abstract class to define a general abstraction that can’t be instantiated. Within that class, you can declare abstract methods that need to be overridden by non-abstract subclasses. That is often used if the implementation of that method is specific for each subclass, but you want to define a general API for all classes of the hierarchy. -
https://stackify.com/oop-concept-inheritance/
CC-MAIN-2022-27
en
refinedweb
The Exists Query in Spring Data Last modified: July 29, 2020 1. Introduction In many data-centric applications, there might be situations where we need to check whether a particular object already exists. In this tutorial, we'll discuss several ways to achieve precisely that using Spring Data and JPA. 2. Sample Entity To set the stage for our examples, let's create an entity Car with two properties, model and power: @Entity public class Car { @Id @GeneratedValue private int id; private Integer power; private String model; // getters, setters, ... } 3. Searching by ID The JpaRepository interface exposes the existsById method that checks if an entity with the given id exists in the database: int searchId = 2; // ID of the Car boolean exists = repository.existsById(searchId) Let's assume that searchId is the id of a Car we created during test setup. For the sake of test repeatability, we should never use a hard-coded number (for example “2”) because the id property of a Car is likely auto-generated and could change over time. The existsById query is the easiest but least flexible way of checking for an object's existence. 4. Using a Derived Query Method We can also use Spring's derived query method feature to formulate our query. In our example, we want to check if a Car with a given model name exists, therefore we devise the following query method: boolean existsCarByModel(String model); It's important to note that the naming of the method is not arbitrary — it must follow certain rules. Spring will then generate the proxy for the repository such that it can derive the SQL query from the name of the method. Modern IDEs like IntelliJ IDEA will provide syntax completion for that. When queries get more complex – for example, by incorporating ordering, limiting results, and several query criteria – these method names can get quite long, right up to the point of illegibility. Also, derived query methods might seem magical because of their implicit and “by convention” nature. Nevertheless, they can come in handy when clean and uncluttered code is important and when developers want to rely on a well-tested framework. 5. Searching by Example An Example is a very powerful way of checking for existence because it uses ExampleMatchers to dynamically build the query. So, whenever we require dynamicity, this is a good way to do it. A comprehensive explanation of Spring ExampleMatchers and how to use them can be found in our Spring Data Query article. 5.1. The Matcher Suppose that we want to search for model names in a case-insensitive way. Let's start by creating our ExampleMatcher: ExampleMatcher modelMatcher = ExampleMatcher.matching() .withIgnorePaths("id") .withMatcher("model", ignoreCase()); Note that we must explicitly ignore the id path because id is the primary key and those are picked up automatically by default. 5.2. The Probe Next, we need to define a so-called “probe”, which is an instance of the class we want to look up. It has all search-relevant properties set. We then connect it to our nameMatcher and execute the query: Car probe = new Car(); probe.setModel("bmw"); Example<Car> example = Example.of(probe, modelMatcher); boolean exists = repository.exists(example); With great flexibility comes great complexity, and as powerful as the ExampleMatcher API may be, using it will produce quite a few lines of extra code. We suggest using this in dynamic queries or if no other method fits the need. 6. Writing a Custom JPQL Query with Exists Semantics The last method we'll examine uses JPQL (Java Persistence Query Language) to implement a custom query with exists–semantics: @Query("select case when count(c)> 0 then true else false end from Car c where lower(c.model) like lower(:model)") boolean existsCarLikeCustomQuery(@Param("model") String model); The idea is to execute a case-insensitive count query based on the model property, evaluate the return value, and map the result to a Java boolean. Again, most IDEs have pretty good support for JPQL statements. Custom JPQL queries can be seen as an alternative to derived methods and are often a good choice when we're comfortable with SQL-like statements and don't mind the additional @Query annotations. 7. Conclusion In this tutorial, we saw how to check if an object exists in a database using Spring Data and JPA. There is no hard and fast rule when to use which method because it'll largely depend on the use case at hand and personal preference. As a rule of thumb, though, given a choice, developers should always lean toward the more straightforward method for reasons of robustness, performance, and code clarity. Also, once decided on either derived queries or custom JPQL queries, it's a good idea to stick with that choice for as long as possible to ensure a consistent coding style. A complete source code example can be found on GitHub.
https://www.baeldung.com/spring-data-exists-query
CC-MAIN-2022-27
en
refinedweb
tensorflow:: serving:: ServableHandle #include <servable_handle.h> A smart pointer to the underlying servable object T retrieved from the Loader. Summary Frontend code gets these handles from the ServableManager. The handle keeps the underlying object alive as long as the handle is alive. The frontend should not hold onto it for a long time, because holding it can delay servable reloading. The T returned from the handle is generally shared among multiple requests, which means any mutating changes made to T must preserve correctness vis-a-vis the application logic. Moreover, in the presence of multiple request threads, thread-safe usage of T must be ensured. T is expected to be a value type, and is internally stored as a pointer. Using a pointer type for T will fail to compile, since it would be a mistake to do so in most situations. Example use: // Define or use an existing servable: class MyServable { public: void MyMethod(); }; // Get your handle from a manager. ServableHandle handle; TF_RETURN_IF_ERROR(manager->GetServableHandle(id, &handle)); // Use your handle as a smart-pointer: handle->MyMethod();
https://www.tensorflow.org/tfx/serving/api_docs/cc/class/tensorflow/serving/servable-handle?hl=da
CC-MAIN-2022-27
en
refinedweb
A script to replace guests in complexes Why? Simply put, a co-worker asked if stk could swap one guest for another in a host-guest complex (regardless of chemistry), and I already had some code for doing it, but thought it is pretty useful for everyone. So, here we are! How? The entire script in the examples directory is shown below and is available, alongside Jupyter notebooks used in the video below, here. This process uses stk to read the host-guest and new guest molecules. But, NetworkX is the real hero here! Basically, we convert the stk.Molecule class to a NetworkX.graph based on the bonds and atoms in the molecule. Then, use the NetworkX.connected_components(graph) to get atoms that are not bonded (e.g. host and guest molecules). The rest is simple: a helper function to collect either the biggest (host) or smallest (guest) component, and then build a new stk.host_guest.Complex ConstructedMolecule from the host and the new guest. A reasonable conformer is produced using SpinDry ( stk.Spinner). What I like about this is that stk does not care about what the host and guest actually are - so use your imagination about what structural replacements you can do! import stk import networkx as nx import sys import os def get_disconnected_components(molecule): # Produce a graph from the molecule that does not include edges # where the bonds to be optimized are. mol_graph = nx.Graph() for atom in molecule.get_atoms(): mol_graph.add_node(atom.get_id()) # Add edges. for bond in molecule.get_bonds(): pair_ids = ( bond.get_atom1().get_id(), bond.get_atom2().get_id() ) mol_graph.add_edge(*pair_ids) # Get atom ids in disconnected subgraphs. components = {} for c in nx.connected_components(mol_graph): c_ids = sorted(c) molecule.write('temp_mol.mol', atom_ids=c_ids) num_atoms = len(c_ids) newbb = stk.BuildingBlock.init_from_file('temp_mol.mol') os.system('rm temp_mol.mol') components[num_atoms] = newbb return components def extract_host(molecule): components = get_disconnected_components(molecule) return components[max(components.keys())] def extract_guest(molecule): components = get_disconnected_components(molecule) return components[min(components.keys())] def main(): if (not len(sys.argv) == 3): print( f'Usage: {__file__}\n' ' Expected 2 arguments: host_with_g_file new_guest_file' ) sys.exit() else: host_with_g_file = sys.argv[1] new_guest_file = sys.argv[2] # Load in host. host_with_guest = stk.BuildingBlock.init_from_file(host_with_g_file) # Load in new guest. new_guest = stk.BuildingBlock.init_from_file(new_guest_file) # Split host and guest, assuming host has more atoms than guest. host = extract_host(host_with_guest) old_guest = extract_guest(host_with_guest) # Build new host-guest structure, with Spindry optimiser to # do some conformer searching. new_host = stk.ConstructedMolecule( stk.host_guest.Complex( host=stk.BuildingBlock.init_from_molecule(host), guests=(stk.host_guest.Guest(new_guest), ), # There are options for the Spinner class, # if the optimised conformer is crap. optimizer=stk.Spinner(), ), ) # Write out new host guest. new_host.write('new_host_guest.mol') if __name__ == '__main__': main() Examples and limitations. Currently, the provided script swaps out the smallest molecule in a complex for the new molecule (defined from .mol files). However, the tutorial below shows that we can swap the host or guest for any stk molecule. Additionally, we could easily extend this to systems with more than two distinct molecules. Please, test it, use it, break it and send me feedback!
https://andrewtarzia.github.io/posts/2022/06/replace-post/
CC-MAIN-2022-27
en
refinedweb
Back to: Design Patterns in C# With Real-Time Examples Composite Design Pattern in C# with Examples In this article, I am going to discuss the Composite Design Pattern in C# with Examples. Please read our previous article where we discussed the Bridge Design Pattern in C# with examples. The Composite Design Pattern falls under the category of Structural Design Pattern. As part of this article, we are going to discuss the following pointers. - What is the Composite Design Pattern in C#? - Understanding the Composite Design Pattern in C# with Real-time Examples. - Implementing the Composite Design Pattern in C#. - Understanding the components involved in the Composite Design Pattern. - When to use the Composite Design Pattern in real-time applications? What is the Composite Design Pattern in C#? As per the Gang of Four definitions, the Composite Design Pattern states that “Compose objects into tree structures to represent part-whole hierarchies. Composite let clients treat individual objects and compositions of objects uniformly”. The Composite Design Pattern allows us to have a tree structure and ask each node in the tree structure to perform a task. That means this pattern creates a tree structure of a group of objects. The Composite Pattern is used where we need to treat a group of objects in a similar way as a single unit object. In simple words, we can say that the composite pattern composes the objects in terms of a tree structure to represent part as well as a whole hierarchy. The composite design pattern is used when we need to treat a group of objects in a similar way as a single object. If this is not clear at the moment, then don’t worry we will try to understand this with an example. Understanding the Composite Design Pattern in C# with an Example: The term composite means a thing made of several parts or elements. Let us understand the composite design pattern in C# with one real-time example. Here we want to assemble a computer. As we know a computer is made of several parts or elements integrated together as shown in the below image. As shown in the above image, everything is an object. So, here, Computer, Cabinet, Peripherals, Hard Disk, Mother Board, Mouse, Keyboard, CPU, RAM, etc. all are objects. A composite object is an object which contains other objects. The point that you need to remember is a composite component may also contain other composite objects. The object which does not contain any other objects is simply treated as a leaf object. So, in our example, the Computer, Cabinet, Peripherals, Mother Boards are composite objects while Hard Disk, CPU, RAM, Mouse, and Keyboard are the leaf object which is shown in the below diagram. All the above objects are the same type i.e. Electronics type. For example, if we talk about a computer it is an electronic device. If we talk about the motherboard, it is an electronic device and similarly, if we talk about Mouse, it is also an electronic device. That means they follow the same structure and feature. For example, all the above objects having a price. So, you may ask the price for a Keyboard or Mouse or Motherboard even though the price of a computer. When you are asking the price for Keyboard it should show you the price of the Keyboard. But when you are asking the Price for the motherboard, then it needs to display the Price of RAM and CPU. Similarly, when you are asking the price of the computer, then it needs to display all the prices. The composite design pattern will be having a tree structure having composite objects and leaf objects. The fundamental idea is if you perform some operation on the leaf object then the same operation should be performed on the composite objects. For example, if I am able to get the price of a Mouse, then I should be able to get the Price of Peripherals and even I should be able to get the Price of the Computer object. If you want to implement something like this, then you need to use the Composite Design Pattern. Implementation of Composite Design Pattern in C# Let us implement the above-discussed example using Composite Design Pattern in C#. Step1: Create an interface Create an interface with the name IComponent and then copy and paste the following code. The common functionalities are going to be defined here. For our example, it is going to be showing the price of a component. namespace CompositeDesignPattern { public interface IComponent { void DisplayPrice(); } } Step2: Creating Leaf class Create a class file with the name Leaf.cs and then copy and paste the following code in it. This class implements the IComponent interface and hence we are providing the implementation for the DisplayPrice method. As part of this class, we are creating two properties to display the component name and price and we are initializing these two properties using the parameterized constructor. And finally, we are displaying these two values in the DisplayPrice method. using System; namespace CompositeDesignPattern { public class Leaf : IComponent { public int Price { get; set; } public string Name { get; set; } public Leaf(string name, int price) { this.Price = price; this.Name = name; } public void DisplayPrice() { Console.WriteLine(Name +" : "+ Price); } } } Step3: Creating Composite class Create a class file with the name Composite.cs and then copy and paste the following code in it. As this class is going to be our composite class, so we are creating a variable to hole the component name and another variable to hold its child components. Again we are initializing the name property using the class constructor. Using the AddComponent method we are adding the child components. This class also implements the IComponent interface and as part of the DisplayPrice() method, first, we print the component name and then show the price of all the leaf nodes using a loop. namespace CompositeDesignPattern { public class Composite : IComponent { public string Name { get; set; } List<IComponent> components = new List<IComponent>(); public Composite(string name) { this.Name = name; } public void AddComponent(IComponent component) { components.Add(component); } public void DisplayPrice() { Console.WriteLine(Name); foreach (var item in components) { item.DisplayPrice(); } } } } Step4: Client Program Please modify the Main method of the Program class as shown below. Here, we are creating the tree structure and then showing the respective component price by calling the DisplayPrice method. using System; namespace CompositeDesignPattern { public class Program { static void Main(string[] args) { //Creating Leaf Objects IComponent hardDisk = new Leaf("Hard Disk", 2000); IComponent ram = new Leaf("RAM", 3000); IComponent cpu = new Leaf("CPU", 2000); IComponent mouse = new Leaf("Mouse", 2000); IComponent keyboard = new Leaf("Keyboard", 2000); //Creating composite objects Composite motherBoard = new Composite("Peripherals"); Composite cabinet = new Composite("Cabinet"); Composite peripherals = new Composite("Peripherals"); Composite computer = new Composite("Computer"); //Creating tree structure //Ading CPU and RAM in Mother board motherBoard.AddComponent(cpu); motherBoard.AddComponent(ram); //Ading mother board and hard disk in Cabinet cabinet.AddComponent(motherBoard); cabinet.AddComponent(hardDisk); //Ading mouse and keyborad in peripherals peripherals.AddComponent(mouse); peripherals.AddComponent(keyboard); //Ading cabinet and peripherals in computer computer.AddComponent(cabinet); computer.AddComponent(peripherals); //To display the Price of Computer computer.DisplayPrice(); Console.WriteLine(); //To display the Price of Keyboard keyboard.DisplayPrice(); Console.WriteLine(); //To display the Price of Cabinet cabinet.DisplayPrice(); Console.Read(); } } } Understanding the Class Diagram of the Composite Design Pattern in C#: The Composite Design Pattern treats each node in two ways i.e. Composite or Leaf. Composite means it can have other objects (both Composite or Leaf) below it. On the other hand, Leaf means there are no objects below it. Please have a look at the following diagram. There are three participants involved in the Composite Design Pattern. They are as follows: - Component: This is going to be an abstract class or interface containing the members that will be implemented by all the child objects in the hierarchy. It also implements some of the behavior that is common to all objects in the composition. This abstract class acts as the base class for all the objects within the hierarchy. In our example, it is the IComponent interface. - Leafs: This is going to be a class to represent the leaf behavior in the composition. In our example, it is the Leaf class. - Composite: The Composite defines the behavior for the components which have children (i.e. Leaves). It also stores its child components and implements Add, Remove, Find and Get methods to do operations on child components. In our example, it is the Composite class. Note: The Client manipulates objects in the composition through the Component interface. When do we need to use the Composite Design Pattern in C# Real-Time Applications? We need to use the Composite Design Pattern in C# Real-time Applications, when - We want to represent part-whole hierarchies of objects. - We want the clients to ignore the difference between the compositions of objects and individual objects. In the next article, I am going to discuss the Proxy Design Pattern in C# with Examples. Here, in this article, I try to explain the Composite Design Pattern in C# with Examples. I hope, now you understood the need and use of the Composite Design Pattern in C# with Examples.
https://dotnettutorials.net/lesson/composite-design-pattern/
CC-MAIN-2022-27
en
refinedweb
# Display record with record.id==id /sahana/module/resource/create?format=json # ToDo /sahana/module/resource/update/id?format=json # ToDo The underlying output functions are very easy within Web2Py since 1.55: def display_json(): "Designed to be called via AJAX to be processed within JS client." list = db(db.table.id==t2.id).select(db.table.ALL).json() response.view = 'plain.html' return dict(item=list) tools.py now supports easy exposing of functions to XMLRPC or JSONRPC: - - (originally in T3:): Model: from gluon.tools import Service service=Service(globals()) Controller: @service.jsonrpc @service.xmlrpc @service.amfrpc def organisation(): "RESTlike CRUD controller" return shn_rest_controller(module, 'organisation') Can authenticate using wget/Curl (no need for browser): Token-based authentication: Another approach (using JSONRPC e.g. with Pyjamas): - - Twitter feeds as JSON:
https://eden.sahanafoundation.org/wiki/DeveloperGuidelines/WebServices?version=14
CC-MAIN-2022-27
en
refinedweb