New feature: shared phone book and speed-dials

We’ve had a few requests lately for a way to easily share frequently dialled numbers between users, without the need to configure each individual handset. So here’s what we came up with – the shared phone book!

To add numbers to your phone book:

1. Log in to the customer portal and select the “Phone Book” item from the main menu.
2. Click “Add Entry”, enter the contact’s name and number, and specify a speed-dial (1-999).
3. Click “Save” and repeat as required.

To call a contact from any extension, simply dial *0 followed by the speed-dial specified for that contact. So for speed-dial 1, just dial *01.

We’re already looking at ways in which we can build on this feature. Look out for some nice additions such as incoming caller ID lookup, LDAP directory integration and handset-specific XML downloads – all coming soon!
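The dialling convention above can be sketched as a tiny helper; the function name and the range check are our own illustration, not part of the Nimvelo portal or API:

```python
def dial_string(speed_dial: int) -> str:
    """Return the digits to dial for a shared phone book entry.

    Per the convention described above: dial *0 followed by the
    entry's speed-dial number (valid slots are 1-999).
    """
    if not 1 <= speed_dial <= 999:
        raise ValueError("speed-dial must be between 1 and 999")
    return f"*0{speed_dial}"

print(dial_string(1))    # *01
print(dial_string(42))   # *042
```

So an entry saved with speed-dial 42 is reached from any extension by dialling *042.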
SANTA CLARA, Calif. -- Chris Wondolowski scored on a header in the 51st minute to help the San Jose Earthquakes tie the Columbus Crew 1-1 on Sunday. Wondolowski has three goals this season for San Jose (0-2-2). Federico Higuain scored his fourth goal of the season with a left-footed strike in the 44th minute for Columbus (3-1-1). The Crew are unbeaten in three road games this year. Columbus created the best buildups in a first half marked mostly by end-to-end counterattacks. Higuain finally cashed in on one of those chances just before the half, slipping in unmarked as Hector Jimenez found Waylon Francis with a diagonal pass. Francis took the ball to the end line before turning in a grounded cross for Higuain to slot home with his left foot from 12 yards. San Jose nearly tied it in the 47th minute when Yannick Djalo, freshly inserted at the half, took a short corner and beat Steve Clark with a delicate chip that ricocheted off the back post and out. The Quakes continued their pressure and equalized on their fifth corner kick, in the opening six minutes of the second half. Wondolowski got in behind Victor Bernardez's near-post run and headed home Shea Salinas' delivery. Crew forward Dominic Oduro beat Earthquakes goalkeeper Jon Busch with a strong header in the 82nd minute, but it crashed off the crossbar. Wondolowski came close in the 89th minute after being freed by a lead pass from Jean-Baptiste Pierazzi, but his shot tailed wide of the near post.
Shred your cabbage and salt liberally. Toss the cabbage with your fingers, being careful not to bruise it. Allow it to sit for a while, draining any liquid periodically. Slice your onions paper thin and put them in some oil to fry. Get them just golden and toss them into the cabbage, oil and all. Toss well. Squeeze one large lemon and use the juice as your dressing. Toss and taste; add salt or lemon juice to taste. Edit: Best to salt the cabbage in the morning or early afternoon and let it sit. Drain any liquid that accumulates. Also, you can use one regular-sized onion instead of the mini ones. Works just as well IMO. *There have been things added to this like tomatoes, radishes, green onions, and cucumber, but the best way is just plain.

I wonder how this recipe would be if you chopped some bacon, sauteed that (with the onions), then added the whole thing to the slaw. (Pardon me if I'm just thinking outside of the box.)

Greg, it would be gross. My Dad (Joe) experimented with many MANY variations of this basic theme and the answer is, "if it ain't broke don't fix it". This balances sweet and sour perfectly if you do the onions right. The balance is delicate, and adding or subtracting things just makes it...less.
Having said that, feel free to experiment yourself and let us know what you think. I know I'm not going to change a thing. I've even stopped using the bottled lemon juice and gone back to using a lemon; it DOES make a difference and we can all taste it.

"You're only given a little spark of madness. You mustn't lose it." - Robin Williams

Alix
France’s silly stake obsession could kill BAE-EADS The author is a Reuters Breakingviews columnist. The opinions expressed are his own. When EADS and BAE went to the French government with their merger project back in July, they were greeted “as if by a Parisian waiter smoking on the pavement, who makes sure patrons understand they’re not welcome”, says one investment banker involved in the talks. Not that Paris was opposed to the deal. But it made for an unexpectedly important decision to take at a time when the new socialist government’s energy was focused elsewhere. So the first reaction in Paris – and to be fair, in other capitals – was to raise objections to the deal and set out a series of “conditions” to be met. The merger’s initiators – the chief executives of the Franco-German Airbus maker and the British defence group – wanted to do away with the complex Franco-German shareholding and governance at EADS. But Paris’ insistence on keeping a significant stake in the future company has triggered a chain of counter-objections that could end up killing the project. The French state currently owns 15 percent of EADS, which would be diluted to 9 percent once the merger is completed. Ideally the stake would be sold down the road, to soothe political concerns in London and in the United States – which will also have to approve the deal. But France doesn’t want to be seen as being forced to sell the shares or be involved in a “privatisation” – still a dirty word in some circles of the French left. That appeared to be accepted by the UK, and the two merging companies’ CEOs seemed resigned to it – as long as Paris indicated it wouldn’t seek to increase its stake further. But that in turn raised concerns in Germany, which wants to remain on par with the French, and presented demands of its own. In the run-up to an important Oct. 10 deadline, there is still ample room for the ultimate compromise – since no one will want to be held responsible for the deal’s failure.
But the difficulties started with French demands. No one seems to have explained to President François Hollande that France would remain influential enough in the new company with a golden share and dispositions to protect strategic secrets. It may be time for a crash course in business basics. There have been few mergers over the years which can truly be considered a success other than in superficial terms. Takeovers can work when commercial or financial discipline needs to be brought to bear on a failing target, or when an industry needs to thin down and the best bits can be kept in a business with sufficient scale. Academic studies and stock market results show mergers generally destroy capital and skills. Failures can be all the more severe when dealing with businesses in highly complex industries where established patterns of working might take years to change or integrate. So why do these two sets of management want it – the easy life! They are each under financial stress and want to hide from reality. Why do governments want it? Probably a good helping of naivety and gullibility, but in the case of the Germans and French and much of the UK political class, the prospect of creating an uber-business for pan-European defence production makes them salivate. It will help them destroy the ability of the UK to have independent forces and advance their cause of an EU superstate. This is very worrying indeed for those who favour an independent, democratic Britain.
Today, everyone is speaking about commons and ‘commoning’; everyone wants to build commons. The World Bank has a group which is supposedly ‘protecting and improving the global commons’, and it reaches out to the private sector to ‘advance common goods’. You can find texts on commons on the website of the European Union, and banks organize seminars on the commons. Transnational companies tell us they are building the commons, and big magazines declare that Uber is commoning cars and that the “sharing economy” is a form of commoning.

Interview with Dario Azzellini

Founded in 1973 (yes, that’s 45 years ago!), the Park Slope Food Coop is one of the oldest and largest consumer food coops in the country. It’s a presence in our lives, the source of our food, and a center for community engagement. And it’s also part of a larger coop movement that stretches back in time and exists in many parts of the world.

One Question on the State of Nature blog

Class struggle, that is, the struggle between labour and capital, is not at all a concept that belongs to the past. In a world of growing inequality, it is a reality more pertinent than ever. A recent study has revealed that since 2008 the wealth of the richest 1% has been growing at an average of 6% a year, while the wealth of the remaining 99% of the world’s population has been growing by only 3%. By 2030, the world’s richest 1% will control nearly two-thirds of the world’s wealth.

Interview with Dario Azzellini

“The communes should be the space in which we are going to give birth to socialism.” These were the words of Hugo Chávez in one of his famous presidential broadcasts. To discuss the Venezuelan communes and the new forms of participation, as well as their successes, difficulties and contradictions, we have interviewed Dario Azzellini*. He has investigated and documented these issues throughout the Bolivarian Revolution.
His book Communes and Workers’ Control in Venezuela has recently been released in paperback by Haymarket Books. interview with Dario Azzellini A common feature in every crisis situation, from the upheavals of the early 20th century to the neo-liberal re-structurings of the late 20th century, is the emergence of workers’ control – workers organising to take over their workplaces in order to defend their jobs and their communities. The term democracy is generally used as a synonym for liberal democracy, which is far from being the only possible form of democracy; indeed, it is even questionable whether liberal democracy was ever intended to be truly democratic. For centuries, liberals and democrats have been fierce opponents. Liberals only accepted democracy when it was limited to the political sphere, excluding it from the economic and social sphere. Liberal democracy became the new form of governance of the emerging production model (industrial capitalism). No doubt we are heading for another economic crash because capitalism is always heading for another economic crash. It is the nature of capitalism to increase surplus capital and then destroy it again through a crash or war, in order to restart the accumulation process once again. After every crisis, as historical data shows, the rich get richer and capital concentration grows. The cycles from crash to crash are becoming shorter as the accumulation of surplus capital becomes faster.
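The projection cited above can be sanity-checked with compound growth. A minimal sketch, where the starting shares are purely illustrative assumptions; only the 6% and 3% annual growth rates come from the study quoted above:

```python
# Illustrative only: the 50/50 starting split is an assumption, not a study figure.
top_wealth, rest_wealth = 50.0, 50.0   # assumed wealth shares (%) at the start

for year in range(12):                  # roughly a 2018 -> 2030 horizon
    top_wealth *= 1.06                  # richest 1%: 6%/yr (study figure)
    rest_wealth *= 1.03                 # remaining 99%: 3%/yr (study figure)

share = top_wealth / (top_wealth + rest_wealth)
# Even from an assumed even split, the differential compounding pushes
# the top-1% share toward 60% within about a decade.
print(f"projected top-1% share: {share:.0%}")
```

The point the sketch makes is structural: a persistent 3-point growth gap compounds, so concentration rises regardless of the exact starting share.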
Schiano reportedly has lost patience with Bowers because of shoddy conditioning, a lack of effort in practices and an underwhelming, two-tackle performance against the Ravens in Tampa Bay's preseason opener. But Schiano said earlier this week that a full-time role is a possibility. "He needs to play situational football, he needs to play a bunch of plays in a row, but not at the cost of (production)," Schiano said, per JoeBucsFan.com. "So, it's a double-edged sword that we're trying to make sure we are on the right edge of." There's still time for Bowers to alter the current perception. Sidetracked by microfracture and Achilles surgeries early in his NFL career, Bowers has shown hints of dominant pass-rush ability. That's what the Bucs are hoping to milk in 2013. We'd rather not add another Making the Leap player to the "curse" list, so we're rooting for Da'Quan to pull it together.
As we can see in the closet organizer photo displayed above, the Amazing Hidden Underbed Storage Drawer, judging by the many pageviews this photo has gained, is clearly one of our users' most-loved closet organizer inspirations. This particular closet organizer features some cool details, including a minimalist apartment bedroom design, unfinished white oak material, and a few more touches such as beveled edges and corners and hard rubber swivel casters. This photograph is of course not the only one we'd recommend for you; there are countless similar pictures in this closet organizer gallery set. On the next page we have the Magnificent Legacy Classic Kateri Platform, which features a fine king bed design and a rectangular underbed storage drawer, similarly filed under the easy and effective under-bed storage drawers topic. On the previous page, still related to closet organizer galleries, is the Best Underbed Storage Drawers, which features six adjustable shelves inside and lacquered beechwood material. View the other design ideas through the bottom navigation or the thumbnails below, or just click through to our homepage for more design inspiration.
Monday, July 26, 2010

As many of you have heard, my dad is sick, though he currently feels and looks great. He was in the hospital Monday-Wednesday, but they sent him home temporarily while the doctors await the results of his tests. Kevin and I went to visit this weekend at my parents' home and had a nice home-cooked meal with them. Dad will be starting treatment soon, and I will try to keep you updated. Thank you for all your kind wishes and calls and emails. I love you guys too. That's why I keep this blog, to remember all the wonderful times with all of you. Update: The diagnosis is kidney cancer in his brain, lungs, and kidney. As some of you know, Dad had kidney cancer 11 years ago and had one kidney removed. This is a recurrence. The brain tumor was what warned us, as cancer is typically silent. We are very hopeful about the treatment, which will begin next week.

Tuesday, July 13, 2010

After our wonderful airline experience, already documented by Kevin, the fun began in Ann Arbor. First and most importantly, Sarah, Donald, Kevin and I went to worship the Sandwich at Zingerman's. The Zingerman reuben, on French bread (a Sarah substitution), was divine. It was so good that it reminded me of Schwartz's smoked meat sandwich in Montreal. Sarah and I outside of Zingerman's We walked up and down Main Street and State Street, checking out the Farmer's Market. We walked around the campus and the law school. Ann Arbor Farmer's Market Michigan Law School courtyard As you know, Sarah and Donald are engaged and they are getting married through the Catholic Church, so we also swung by the Ann Arbor church where they are taking their pre-marriage classes. Their Michigan church Back at their house I got to see Donald's Supreme Court Clerkship offer letter from Justice Scalia. I thought about taking a picture of it, but somehow that seemed weird. For dinner we had Indian at Shalimar's Restaurant. I ate chicken korma for the first time ever.
Then we went home to relax and watch the History Channel. Watching the History Channel is, after all, how Sarah and Donald got engaged in the first place. Since eating is one of our favorite things to do together, we started the next day with breakfast at Afternoon Delight. Again, I had something I had never eaten before, a cheese blintz. Who knew Ann Arbor would be such a culinary adventure? Sarah and Donald have long been soccer fans, and Kevin and I enjoy the World Cup, so we were all excited to watch the final together. We were all rooting for Spain (and even attempted to buy matching Spain shirts - but failed) and Donald invited a few clerk-friends to join us, one of whom was rooting for the nasty cheating Netherlands. I am particularly supportive of Xabi Alonso because we share a last name, and De Jong from the Netherlands kung fu kicked him square in the chest. It was hard to imagine how it could have been a mistake. Thus, Spain's victory was doubly sweet. Monday, July 12, 2010 Flying from New York City to Detroit is quick, only about an hour in the air. So you can understand how frustrated I would be when I arrived 13.5 hours late. Here's the time-line: Flight #1: New York to Detroit (Delta): Canceled I'm thinking, well, stuff happens. Weather, mechanical, best to be careful. Surely, they'll put me and my wife on a later flight since it's mid-afternoon. Wait, what's that? You want to send us the next day? Through Minneapolis, which is several hundred miles past Detroit? And you believe this critical information should be delivered via a robot that doesn't leave a phone number where I can ask follow-up questions? Eventually, I found a phone number and an android who would confess, when bullied, that it had lied about (minimum): 1) there being no other flights that day, and 2) their ability to place me on another airline. Eventually, we were placed on an American flight leaving 75 minutes later.
It had a connection in Chicago (again, going past Detroit), but at least we'd only arrive four hours late. Why the flight was canceled remains a mystery. Flight #2: New York to Chicago (American): Delayed My attempt to get a meal voucher (since we're missing dinner with our friends) is dismissed with something between indifference and abject hatred. To be expected, but I'm getting a little worried because that 75 minute window between when one flight lands in Chicago and the next flight takes off for Detroit is shrinking. And the delay stretches. And now the best case scenario is that the plane lands 20 minutes after the connecting flight is scheduled to take off, and my wife is weeping on account of not being able to see her friends, and hating to fly, and having to spend the night in Chicago for no good reason. Flight #3: Chicago to Detroit (American): Delayed Hooray for this delay! After sprinting through O'Hare with despair in our hearts to catch a connection that had surely already left, we find that the plane is still there and the kind attendants are willing to let us board it, even though boarding is over and the doors are sealed. But wait! Flight #3: Chicago to Detroit (American): Canceled Two flights--on two different airlines--canceled in one day, surely this is a personal record. There is no mystery here, though. The flight was canceled for the trivial reason of not having a pilot. Hotel and meal vouchers for all, leading to this exchange: Me: May I have a meal voucher for my wife, as well. Attendant: This voucher is for you and your wife. Me: $10 each? Attendant: $10 total. We'll dine like kings in Chicago for $5 each! We'll split a "Chicago-style" dog and bottled water--who are you to ask for more? Later, this conversation happens: Other passenger: You can use that voucher almost anywhere in the airport. Me (inside my exploding skull): ALMOST ANYWHERE? 
Thank God, it's only five hours until our next flight leaves because I don't think I'll be able to sleep tonight! Flight #4: Chicago to Detroit (American): On time This was a relatively painless flight. They did try to move my wife across the plane but failed. We had a nice time in Michigan. Flight #5: Detroit to New York (Delta): Lost reservation We're not in the system. The several confirmation codes I have are meaningless. This is confirmed by the attendant behind the desk. My wife explains most of the bad things that happened, while the attendant makes phone calls and whispers things like "this is strange" and "it's just not there." But wait! Flight #5: Detroit to New York (Delta): Delayed We're there, after all. You only have to look capable of crying or screaming for long enough. The flight is about 30 minutes late, but by this point, the delay seems positively generous. When the very, very bumpy flight is over, there are other planes at the gate, but this scarcely matters. We're home, and our hatred of domestic air travel cemented. Tuesday, July 6, 2010 Wow, love is definitely in the air this year. My good friend from law school Josh just let us all know that he upgraded his special lady friend and they are now engaged! They met in Virginia and moved to Germany for Josh's JAG career. Sadly I have not met her yet, but she's pretty (see picture) and obviously a little crazy because she'll be marrying Josh. Monday, July 5, 2010 We had a fabulous time on the Cape this weekend with Kevin's mother "Ma," and got home a little early to do some more home shopping for the new place and work. On Saturday, we had a delicious lobster and calamari lunch at Chatham Bars Inn, then walked around the beach and checked out the Chatham lighthouse. Then we went shopping around-- The Christmas Tree Shops are a requirement every time we go to the Cape, though it's always tough to decide which one. 
Kevin had a Cape Cod League game high on his to-do list, so I bravely sat through a game between Orleans and Brewster (this might provoke another TMWCA). Ellen had a fabulous Spanish-themed party for her third birthday complete with paella and sangria. Then various local renegades set off fireworks around Follins Pond. It was quite the show. The Tavern at Chatham Bars Inn The birthday girl is three! Fireworks on Follins Pond Sunday, the three of us set off early for the beach, but many of the beaches, even the residents-only ones, were really backed up by 10 a.m. We finally ended up sunning and swimming at West Dennis Beach. Actually a pretty roomy beach, and the sand wasn't as rough as usual and the seaweed was somewhat under control. I swam around with tiny schools of fish and jellyfish. Finally, finally, the thing that I had been waiting for - Moose Tracks Ice Cream at Lil' Caboose. We rounded out the weekend BBQing with Ma, Fran, and George. Perfect. Saturday, July 3, 2010 On Thursday, D got big news. He will be clerking for the U.S. Supreme Court! This is something that only the best of the best law students have the opportunity to do. First you have to be brilliant, then you have to clerk for another judge, and then, if your application gets picked from among a stack of other brilliant people with brilliant applications, you get to interview. In Donald's case, Justice Scalia interviewed him and chose him to clerk. I've seen Scalia in action in the Court and I can't imagine that being cross-examined (er, interviewed) by Scalia is a ton of fun. But he was prepared to answer questions on a number of legal topics. About a month later, Scalia called Donald to tell him the good news that he will be clerking during the 2011-12 term. Despite my close friendship with Cate, because she met Matt at college in Massachusetts, and I missed some of his visits to Ramsey, I first met Matt the day before the wedding.
I had heard a lot about Matt though, especially leading up to the time before he proposed. Talking to Cate on the phone, she described Matt as "her rock." You could tell she had chosen her life-mate. Cate and Matt were married on July 2, 2005, in the same church in New York where Cate's parents were married many years before. How sweet is that? During the ceremony, Cate cried all the way up the aisle. I have been to a lot of weddings, but she is still the only bride I've seen who was so moved. View from the deck of the Storm King Lodge Cate's parents now own the Storm King Lodge, a beautiful bed-and-breakfast, so we stayed there and helped Cate get ready in the morning. After the very warm ceremony, we went back to the Lodge and took the posed wedding photos. Cate's brother poured a champagne toast for everyone, which we enjoyed while taking in the amazing scenery at the Lodge. The Storm King area, and the nearby Storm King Arts Center in particular, is one of my favorite places in the world. Chrissy and Wendy helping Cate get dressed The beautiful bride getting ready Clark, Wendy, and Chrissy outside the church The first time I met Clark! Clark and Wendy are married now. The Ramsey bridesmaids enjoying the champagne toast (Cate's high school friends) The reception was held at the equally scenic Chalet on the Hudson, with mountains to one side and the river on the other. Matt and Cate raised the Cocktail Hour bar with super fun margaritas! I really liked the song that Matt danced to with his mother, "I Hope You Dance," by Lee Ann Womack. I liked it so much that I wanted to steal it for my wedding, but I refrained. Reception at Chalet on the Hudson Cate and Matt at their sweetheart table with the view of the river in the back Back in 2005, I didn't take as many photos, or photos as nice as the ones I take now, so I don't have any cute ones of Cate and Matt being all mushy at their wedding, but let me assure you that they were.
Even more than that, they had an awesome time dancing with all the younger wedding guests at the wedding. I don't remember sitting much once the dancing started. I don't have photos of that either but it was so fun I still remember it. You'll forgive my total photo-fail though because I got a picture of their awesome wedding "cake" made of all beautiful cupcakes. I know you probably read about this trend in some magazine a few years ago, but Cate and Matt got married in 2005-- they made this a trend.
name=RAW_STREAMING-EVENTS path=Feeds and Translations/Test type=Pipeline uuid=dc3b6921-6602-4eeb-865f-01cbaea5024b
1. Field of the Invention

The present invention relates to highly soluble aromatic polyimides wherein the dianhydride used to form the polyimide is at least partly 6FDA. The remaining portion of the dianhydride-derived polyimide is derived from several of the more common aromatic dianhydrides, as defined below, used to form polyimides. The diamine portion of the polyimide is derived essentially from amine-terminated siloxane units.

2. Prior Art

As taught in the prior art, siloxane-containing polyimides can be prepared by, for example, reacting a siloxane-containing diamine with a dianhydride. The initial product of such reactions, a polyamide acid, has been found to be soluble in highly polar solvents, such as N-methyl pyrrolidone. Solutions of such polyamide acids have typically been used to coat substrates. Such coatings have been converted to the siloxane-containing polyimide by heating, usually between 150°C and 400°C, to remove the solvent and to effect cyclization of the polyamide acid. These processes are complicated by further problems, such as void formation caused by the evolution of the by-product water during the cure step, and the like. These polyimides, while useful as protective coatings for semiconductors and other electronic devices, suffer from the shortcoming of being insoluble in most low-boiling organic solvents. They also suffer from the shortcoming that many semiconductor devices cannot be heated to the extreme temperatures required to effect the cyclization of the precursor polyamide acid (150°C-400°C), as discussed above. Further, it has been taught in the prior art that such polyamide acids are unstable to hydrolysis; such hydrolysis would tend to degrade the utility of the final product. Because of these and other shortcomings, it would be highly desirable to have siloxane-containing polyimide materials which are soluble in low-boiling solvents.
These shortcomings have been partially overcome in the prior art. Berger, in U.S. Pat. No. 4,395,527, discloses that polyimides incorporating a siloxane unit of formula ##STR1## where Q is a substituted or unsubstituted aromatic group; Z is ##STR2## D is an unsubstituted or substituted hydrocarbylene; R^1, R^2, R^3, R^4, R^5, and R^6 are each independently unsubstituted or substituted hydrocarbyl; and X, Y and Z each independently has a value from 0 to 100; have improved solubility parameters. For example, these polyimides are taught to be soluble in chlorinated hydrocarbon solvents such as dichlorobenzene and trichlorobenzene and in good polar solvents such as N,N-dimethyl acetamide, N-methyl caprolactam, dimethylsulfoxide, N-methyl-2-pyrrolidone, tetramethyl urea, pyridine, dimethylsulfone, hexamethyl phosphoramide, tetramethylene sulfone, formamide, N-methylformamide, butyrolactone and N-acetyl-2-pyrrolidone (U.S. Pat. No. 4,395,527, column 28, line 65). However, one shortcoming found for these materials is their lack of solubility in very weak solvents, such as toluene. Further, as one knowledgeable in the art would appreciate, this reference requires the use of unusual monomers which are not readily available. Lee, in U.S. Pat. No. 4,558,110, discloses crystalline polyimides which incorporate polydiorganosiloxane units terminated with amine functions, i.e. bis(aminoalkyl)-terminated siloxanes. These materials were found to be soluble in halogen-containing solvents, such as ortho-dichlorobenzene, but not in good aprotic solvents, such as N-methyl pyrrolidone; no solubility in very weak aprotic solvents, such as toluene, was reported. Incorporation of bis(aminoalkyl)siloxane components into polyimides has thus not been shown to be a sufficient criterion for polyimide solubility. The prior art teaches that such materials suffer from the shortcoming of being insoluble in most low-boiling organic solvents (see, for example, U.S. Pat. Nos. 4,395,527, 4,480,009, 4,449,149, 4,586,998, 4,609,569, and 4,652,598).
An 85-year-old man complained of a rash on his upper limbs, chest wall and back (Picture). He had been undergoing hemodialysis due to end-stage diabetic kidney disease and taking vildagliptin [a dipeptidyl peptidase-4 inhibitor (DPP4i)] for more than a year. DPP4is are incretin-related drugs widely prescribed for the treatment of type 2 diabetes. In time, his rash gradually spread over the rest of his body. Although a skin biopsy was not performed, a dermatologist confidently diagnosed him with non-inflammatory bullous pemphigoid (BP), which included tense blisters and a small number of erythemas. The patient's anti-BP180 NC16a antibody serum level was negative [<3.0 U/mL, chemiluminescence enzyme immunoassay (CLEIA) method]. BP is an autoimmune subepidermal disease typically characterized by inflamed, itchy edematous erythema and tense blisters. Some drugs [e.g., diuretics and DPP4is (1)] may be potential triggers. No other drugs that could possibly cause BP, except for the DPP4i, had been prescribed to him. The relationship between BP and hemodialysis itself is unclear at present. Recently, it was found that patients with DPP4i-associated BP (DPP4i-BP) tend to exhibit a non-inflammatory phenotype that presents with fewer erythemas. In addition, the anti-BP180 NC16a antibody serum levels are often low or even negative in patients with DPP4i-BP, although they are usually positive in patients with typical BP (2). We therefore concluded that the patient's rash had been caused by the vildagliptin, so its administration was immediately stopped, and daily oral administration of 20 mg of prednisolone was started. The patient's symptoms gradually resolved. There are few case reports of BP caused by vildagliptin administration in dialysis patients. Written consent for publication was obtained from the patient.
**The authors state that they have no Conflict of Interest (COI).** [^1]: Correspondence to Dr. Jun Usami, <jusami176@hotmail.com>
Events Senior Lunch and Health Screening Tuesday, April 17, 2012 10:30 AM - 1:00 PM Unified Health Solutions will be offering free health screenings for seniors ages 60+. Lipid profile (fasting), total cholesterol (non-fasting), Body Mass Index, blood pressure, and a take-home fecal occult blood test will be offered. Lunch will be provided from 11:30-12:00pm in part by Tri-County Community Action and the Area Agency on Aging, PSA 2. The lunch menu will be Beef and Noodles, Mashed Potatoes, Green Beans, Fruit, Biscuit w/ Margarine and Milk for a suggested contribution of $2.00 for registered seniors who are 60 years of age or older or the spouse of a registered participant who meets the age requirement. Participants under 60 will be charged $4.00 per meal. Registered seniors who meet the age requirement will not be denied meal service due to the inability to pay any or all of the suggested contribution. Please call Beth Lawrence at 937-593-0034 to register for the screening and to reserve your meal at least 24 hours in advance.
[Toxoplasma gondii antibodies in pregnant women in the Ceské Budĕjovice District]. In 1984-1986 in the district of Ceské Budĕjovice in the southern part of the Czech Republic, pregnant women were subjected to serological examinations for antibodies against Toxoplasma gondii. A total of 3,392 women within the age bracket of 16-54 years were examined. For the serological examination, parallel examinations were made using the Sabin-Feldman test (SFT) and the complement fixation test (CFT). In the SFT the basic serum dilution was 1:4, in the CFT 1:10. Pregnant women were examined once or repeatedly. During the first examination, which was usually made between the second and third month of pregnancy, the total (SFT and CFT) prevalence was 37%. In the SFT antibodies were detected in 35%, in the CFT in 25%. The second blood sample was taken during the 4th-5th month of pregnancy and subsequent samples during the 8th-9th month of pregnancy. In the women who were examined twice or repeatedly (a total of 1,409 women), the dynamics of the prevalence of antibodies were recorded. 64% of the women were permanently negative, 33% were permanently positive with the same or a slightly varying titre, and in 3% of the women seroconversion or a significant rise in antibodies was observed during pregnancy. For the 20 women in whom seroconversion or a significant rise in antibodies was found, data were collected to find out whether toxoplasmosis was diagnosed in their children from birth to the age of 12-13 years. Toxoplasmosis was not diagnosed in any of these children.
711 A.2d 544 (1998) John O'NEILL and Samuel R. Goodman, Appellants, v. CITY OF PHILADELPHIA. Commonwealth Court of Pennsylvania. Argued February 9, 1998. Decided April 17, 1998. *545 Andrew F. Mimnaugh, Philadelphia, for appellants. Michael F. Eichert, Philadelphia, for appellee. Before COLINS, President Judge, and KELLEY, J., and RODGERS, Senior Judge. RODGERS, Senior Judge. John O'Neill and Samuel R. Goodman (Appellants) appeal from the order of the Court of Common Pleas of Philadelphia County (trial court) granting summary judgment to the City of Philadelphia (City) and denying Appellants' motion for summary judgment. We affirm. This case concerns the 1989 reorganization of the City's system for adjudicating parking violations. Until June 1, 1989, the Traffic Court of Philadelphia had original jurisdiction over parking violations.[1] Parking violations were summary offenses, criminal in nature. The police department notified Traffic Court of unpaid tickets; Traffic Court filed an information and generated a summons and, if the individual did not respond, issued a warrant for his or her arrest. Appeals from Traffic Court were taken to the Court of Common Pleas. In 1989, the Philadelphia City Council enacted an ordinance that permitted the transfer of control over parking violations from Traffic Court to the Office of the Director of Finance. Under the new system, the recipient of a parking ticket has fifteen days to admit the violation and pay a fine or deny liability and request a hearing. Failure to do either results in the entry of a default order sustaining the charge and fixing the fine, costs and fees. If liability is denied, a hearing is held before a Bureau of Administrative Adjudication (BAA) hearing examiner, whose decision may be appealed to the BAA Parking Appeals Panel, and thereafter to the Court of Common Pleas. An adjudication of liability, either by default or following a hearing, creates a debt owed to the City. 
The effect of the 1989 reorganization was to change the nature of parking violations from summary criminal offenses to civil violations. In practice, defendants in Traffic Court were entitled to three rights not available at a BAA hearing: 1) a disposition could not be made without the personal appearance of the defendant, 2) guilt had to be proved beyond a reasonable doubt, and 3) the two-year statute of limitations for summary offenses was in effect. The ordinance created a period of dual jurisdiction over parking tickets, citations and summonses from Traffic Court issued between October 2, 1987 and June 1, 1989. As of June 1, 1989, each Appellant had outstanding parking tickets that were issued in 1987, 1988 and 1989. Neither paid the fines or appeared in response to summonses issued by Traffic Court. In November of 1989, each Appellant received a "Violation Warning Notice" from the Office of the Director of Finance (ODF). The notice advised Appellants that the ODF would continue to pursue the listed unpaid parking violations and informed Appellants that they could elect to proceed either before Traffic Court or the BAA. Neither Appellant responded to the notice. Each subsequently received an *546 Order of Default indicating that failure to pay the stated amount due could result in the City taking additional legal action against them. On March 4, 1991, Appellant Goodman requested a hearing before the BAA with regard to a ticket he received in 1991. At the hearing, the BAA listed several additional tickets for disposition, including violations that occurred before June 1, 1989. Goodman's counsel objected to the inclusion of these tickets, arguing that Goodman had not consented to the BAA's jurisdiction over these tickets and that the statute of limitations had expired as to those tickets more than two years old. 
The hearing examiner overruled the objections, determined liability as to all tickets, and assessed a fine that included $173.00 for tickets issued before June 1, 1989. Goodman paid the amount due. In April of 1991, Appellant O'Neill sought to have three tickets listed before Traffic Court, but was told it was no longer hearing parking violations. O'Neill obtained a hearing before the BAA, at which his counsel raised the same objection Goodman's counsel had raised. The hearing examiner overruled the objections, determined liability and assessed fines of $45.00, which O'Neill has not paid. Neither Appellant appealed the hearing examiner's determination to the BAA Parking Appeals Panel or to the trial court, although they had the right to do so within thirty days of the hearing examiner's decision. Instead, they filed a class action in the United States District Court for the Eastern District of Pennsylvania, alleging violation of their rights under the United States Constitution and 42 U.S.C. § 1983, as well as violations of Pennsylvania state law. The District Court denied class certification and ordered the case to proceed as a constitutional test case. Upon consideration of summary judgment motions filed by both parties, the District Court ruled in Appellants' favor only as to part of Count II of the Complaint. The District Court considered Counts III and IV of the Complaint, which raised claims under state law, as withdrawn without prejudice and ruled in favor of the City on the remaining Counts. Count II of the Complaint alleged a violation of Appellants' due process rights and the Constitution's limitation on ex post facto legislation based on the City's failure to obtain Appellants' consent to the BAA's jurisdiction as was required by the 1989 ordinance. 
The District Court found a violation of procedural due process, in that the City failed to adequately notify Appellants that failure to elect to proceed in Traffic Court resulted in their automatic consent to the jurisdiction of the BAA and the deprivation of the rights they would have had in Traffic Court, particularly the right to assert a statute of limitations defense. Accordingly, the District Court entered judgment in Appellants' favor on the amounts due or paid for violations that occurred prior to June 1, 1989. On appeal, the Court of Appeals for the Third Circuit ordered that the judgment of the District Court be vacated and remanded the case with instructions to abstain under Younger v. Harris, 401 U.S. 37, 91 S.Ct. 746, 27 L.Ed.2d 669 (1971), and to dismiss Appellants' complaint. After the Supreme Court denied Appellants' petition for certiorari, Appellants transferred the case to the state trial court pursuant to Section 5103(b) of the Judicial Code, 42 Pa.C.S. § 5103(b). Following a hearing on the issue of class certification, the trial court denied certification and listed the case for argument on motions for summary judgment. The trial court determined that Appellants had not established that the alleged violation of their due process rights occurred as a result of an official City practice, custom or policy, as is required to establish municipal liability. See Monell v. Dept. of Social Services, 436 U.S. 658, 98 S.Ct. 2018, 56 L.Ed.2d 611 (1978). Noting that the City's ordinance expressly required the consent of the person contesting a violation before the BAA could assume jurisdiction, the trial court determined that the hearing examiners erred in assuming jurisdiction over Appellants' pre-1989 tickets and that the City cannot be held liable for those mistakes. The trial court also observed that while the City did not warn Appellants that they might lose access to Traffic Court, the *547 City did provide them with an alternate procedure that is adequate under due process standards. 
Thus, the trial court concluded that Appellants at most suffered a deprivation of process—not property—without due process, which does not constitute a violation of their Constitutional rights. The only issue before the trial court was the issue upon which the District Court had held in Appellants' favor, that is, whether the City violated Appellants' constitutional rights to due process under 42 U.S.C. § 1983 by failing to adequately notify them that by consenting to civil disposition of their unpaid parking tickets by the BAA, they would lose the right to assert the two-year statute of limitations defense in effect for summary criminal offenses in Traffic Court. The parties agree that the transfer of jurisdiction from Traffic Court to the BAA in itself, is not a denial of due process, Crane v. Hahlo, 258 U.S. 142, 147, 42 S.Ct. 214, 215, 66 L.Ed. 514 (1922) ("No one has a vested right in any given mode of procedure; and so long as a substantial and efficient remedy remains or is provided, due process of law is not denied by a legislative change.") Furthermore, there is no constitutional right to assert a defense based upon a given statute of limitations. See Chase Securities Corp. v. Donaldson, 325 U.S. 304, 311, 65 S.Ct. 1137, 1141, 89 L.Ed. 1628 (1945) ("In Campbell v. Holt, [115 U.S. 620, 6 S.Ct. 209, 29 L.Ed. 
483 (1885)], this Court held that where lapse of time has not invested a party with title to real or personal property, a state legislature, consistently with the Fourteenth Amendment, may repeal or extend a statute of limitations, even after right of action is barred thereby, restore to the plaintiff his remedy, and divest the defendant of the statutory bar.") The enabling legislation[2] gave the Philadelphia Parking Authority all powers necessary or convenient for the administration, supervision and enforcement of an efficient system of on-street parking regulation and further provided that the exercise of any power provided should not be construed to constitute the prosecution of a summary offense under Sections 1301-1342 of the Judicial Code, 42 Pa.C.S. §§ 1301-1342 (relating to traffic courts). There was, therefore, no constitutional or statutory bar to the City transferring, by ordinance, the enforcement of all outstanding parking tickets from Traffic Court to the BAA on June 1, 1989. Of course, this would convert summary offenses to civil violations, and eliminate the two-year statute of limitations defense for summary violations, but it would also eliminate arrest of the person and incarceration as remedies for unpaid parking tickets. Moreover, as pointed out by Judge Dalzell, (O'Neill v. City of Philadelphia, 817 F.Supp. 558, 565 (E.D.Pa. 1993)), such an ordinance would not constitute either a bill of attainder or an ex post facto law. We therefore hold that since the due process clause of the Fourteenth Amendment did not require the City to give any notice to parking ticket violators that it was eliminating the two-year statute of limitations defense for summary offenses, the allegedly inadequate notice given in this case was not a deprivation of due process. 
Of course, Section 12-2807(8) of the ordinance enacted by the City did require the consent of the person contesting the violation to the jurisdiction of the Director of Finance, and such consent was not given by Appellants. The hearing officers were therefore in violation of the ordinance in ruling that Appellants were liable. But such violation under 42 U.S.C. § 1983 does not rise to the level of a deprivation of property without due process under the Fourteenth Amendment to the Constitution of the United States of America. "To the extent plaintiffs say that Chicago's methods of commencing lawsuits violates state law, they should present their claims to state court. Federal courts do not enforce state law by characterizing violations of state law as offenses against the Constitution." Saukstelis v. City of Chicago, 932 F.2d 1171, 1174 (7th Cir.1991) (citations omitted). Judge Easterbrook stated in Saukstelis that collecting fines for parking in forbidden zones is hard to do. He pointed out that in Sutton v. Milwaukee, 672 F.2d 644 (7th Cir. *548 1982), the court, using a cost-benefit analysis approved by the Supreme Court in Mathews v. Eldridge, 424 U.S. 319, 96 S.Ct. 893, 47 L.Ed.2d 18 (1976), held that towing an illegally parked auto without prior notice is proper because the risk of error is small and the governmental need great. In Saukstelis, the court held that an illegally parked auto may be booted if the parking ticket itself offers an opportunity for a hearing. In both Sutton and Saukstelis, the appellants unsuccessfully claimed deprivation of their property without due process in violation of the Constitution. Even if the failure of the City to notify Appellants that consenting to civil disposition of their unpaid parking tickets would cause them to lose the right to assert the two-year statute of limitations defense, violated their constitutional right to due process, they were not entitled to pursue their claim under 42 U.S.C. 
§ 1983, without first exhausting state administrative and judicial remedies. National Private Truck Council, Inc. v. Oklahoma Tax Commission, 515 U.S. 582, 115 S.Ct. 2351, 132 L.Ed.2d 509 (1995). In National Private Truck Council, the petitioners in a class action in an Oklahoma trial court, pursuant to state law and § 1983, sought declaratory and injunctive relief as well as refund of taxes paid and attorney's fees under state law and § 1988. The Oklahoma Supreme Court held that retaliatory taxes imposed by the state on motor carriers violated the dormant Commerce Clause of the United States Constitution and awarded refunds under state law, but declined to award declaratory and injunctive relief under § 1983 and declined to award attorney's fees under § 1988 because adequate remedies existed under state law. In affirming the Oklahoma court, the Supreme Court said: In determining whether Congress has authorized state courts to issue injunctive and declaratory relief in state tax cases, we must interpret § 1983 in light of the strong background principle against federal interference with state taxation. Given this principle, we hold that § 1983 does not call for either federal or state courts to award injunctive and declaratory relief in state tax cases when an adequate legal remedy exists. Petitioners do not dispute that Oklahoma has offered an adequate remedy in the form of refunds. Id. at 589, 115 S.Ct. at 2355. The same principle is applicable here. In this case, in directing the district court to abstain under Younger v. Harris and to dismiss Appellants' complaint, the circuit court said: [T]he Supreme Court's holding in Younger rested primarily on considerations of `comity', a concept which encompasses `a proper respect for state functions.' ... 
It would well nigh be impossible to overstate the point that the federal courts have no interest whatsoever in the underlying subject matter of this litigation—the City of Philadelphia has a vital and crucial interest in the functioning of a regulatory system, such as the one at issue here, which is intimately associated with the physical and financial workings of the city in general, and of the municipal government in particular. O'Neill v. City of Philadelphia, 32 F.3d 785, 791-92 (3rd Cir.1994). Appellants here are seeking relief under § 1983 in the form of refunds and declaratory and injunctive relief, not only for themselves but for all others similarly situated, i.e., 2,713,975 persons with regard to over 3,000,000 undisposed of parking tickets, and are also seeking substantial attorneys' fees for legal services since 1991 under § 1988. Such relief is not available where Appellants have an adequate legal remedy. Appellants had the right to appeal the decision of the hearing examiner to the BAA Parking Appeals Panel, 12 Phila. City Code § 12-2808, and the decision of the Parking Appeals Panel to the Court of Common Pleas, and then to this Court. Sections 751-754 of the Local Agency Law, 2 Pa.C.S. §§ 751-754. Appellants do not dispute that the error of the hearing examiners in overruling their statute of limitations defense could have been corrected on appeal, (R.R. 203a-204a), but nevertheless claim the remedy is inadequate because the legal expense involved in contesting a parking violation is *549 too great to make such appeals practical. Such argument ignores the financial burden and administrative problems that would be created for the city by granting the relief requested by Appellants. Further, the expense and inconvenience of pursuing the administrative remedy does not render the remedy inadequate. See McGraw-Edison Co. v. Pennsylvania Human Relations Commission, 108 Pa.Cmwlth. 147, 529 A.2d 81 (1987). Accordingly, we affirm. 
ORDER NOW, April 17, 1998, the order of the Court of Common Pleas of Philadelphia County, at No. 3444, dated May 7, 1997, is affirmed. LEADBETTER, J., did not participate in the decision in this case. NOTES [1] Sections 1302 and 1321 of the Judicial Code, 42 Pa.C.S. §§ 1302 and 1321. [2] Section 5 of the Act of June 5, 1947, P.L. 458, as amended, 53 P.S. § 345(b)(17).
Whether your walking robot has two, four or six legs, there are lots of ways to approach building it. Here’s a roundup of DIY walking robots to get your creative juices flowing. I’ve already talked about several awesome robots that you could, in theory, make yourself, but one thing that all of them have in common is that they move around on wheels or tracks. Here is another set of hobbyist robots that you can not only build yourself, but that walk on two, four, or even six legs! Walk on Two Legs with BOB Perhaps counter-intuitively, up to a point, the fewer legs you have on a robot, the harder it is to make. One-legged machines are technically possible, but I personally wouldn’t have the skill to write the control software for something like this self-balancing robotic Pogo stick (watch above). Two legs are somewhat easier, but that still creates the challenge of having to balance on one leg while the other moves. In order to partially circumvent the balance issue, makers have come up with two interesting solutions called BOB and Otto. This type of robot literally bobs around, shifting its weight from one leg to another, using four servos to walk in a rather strange gait. Their “eyes” are made from an ultrasonic sensor, and since the robot rotates its head-body back and forth when walking, this should allow it to scan the area for object avoidance. Or you can forgo this sensor altogether and simply program it to dance as shown below. Four-Leg Walkers Although four legs would seem to be easier, these robots present the same sort of challenges as their two-legged brethren. Unless the weight is somehow allowed to shift, as with BOB, the robot becomes unstable when any leg moves. Though you’ll find solutions that sort of slide around like the “Hermes” quadruped, it’s a difficult task to do well without a solid grasp on the mechanics and control theory. On the other hand, some people do understand this, as demonstrated by the “Stag Mk2” shown below. 
This amazing little bot appears to use two servos per leg for locomotion, and is reminiscent of the Big Dog robot from Boston Dynamics. One might question the “amateur” status of the creators of this robot, but it appears to be walking around on textbooks inside of a college-style apartment. Glad to see these books being put to good use. Years after graduating, I can say that I still use several of my engineering textbooks every day. Unfortunately, they are supporting my computer monitor. Six or More Legs Once you get to six or more legs, a bot can be constructed that is stable without high-level control design. Through several different schemes, these bots are able to move legs as needed while still contacting the ground at three or more points. Though these robots can be constructed with as few as three servos (moving pairs of legs in tandem), below is an example of one with twelve servos for the legs, plus actuation to allow the sensor module to look around. It seems to move from point to point well and avoid obstacles, but be sure to watch just after 1:00 when it shows off some of its fancier moves. It should also be noted that the same channel also has an advanced two-legged BOB-style robot. If you’d like to build a hexapod but don’t think you have the mechanical skills, the good news is that there are many kits available. A quick Internet search should reveal many options for your next build. How to Make It Work Now that you have a few ideas of how to make your next robot mechanics-wise, perhaps you should consider what computing platform to use to run it. Here are five excellent ideas for your robot’s brain.
Ubiq presents their Bo-Ro mid top sneaker in a new leather version. The sneaker comes in 4 new colorways, featuring clean leather uppers, small perforated details and a premium suede flap. The Ubiq Bo-Ro Leather will be available from Chapter Express in January 2010. Detailed images of all 4 colorways of the sneaker follow after the jump.
2002 South Africa rugby union tour of Europe The 2002 South Africa rugby union tour of France, Scotland and England was a series of matches played in November 2002 in France, Scotland and England by the South Africa national rugby union team. It was a woeful tour, ending with the worst defeat in the history of the Springboks, a 3–53 loss against England. The Matches In the first test, the Springboks were defeated by a strong French team. France: 15.Nicolas Brusque, 14.Vincent Clerc, 13.Thomas Castaignede, 12.Damien Traille, 11.Cedric Heymans, 10.Francois Gelez, 9.Fabien Galthie (capt.), 8.Imanol Harinordoquy, 7.Olivier Magne, 6.Serge Betsen, 5.Olivier Brouzet, 4.Fabien Pelous, 3.Pieter de Villiers, 2.Raphael Ibanez, 1.Jean-Jacques Crenca, – replacements: 16.Sylvain Marconnet, 17.Jean-Baptiste Rue, 18.Thibault Privat, 19.Sebastien Chabal, 22.Xavier Garbajosa – No entry : 20.Dimitri Yachvili, 21.Gerald Merceron South Africa: 15.Werner Greeff, 14.Breyton Paulse, 13.Jean de Villiers, 12.Adrian Jacobs, 11.Brent Russell, 10.Andre Pretorius, 9.Neil de Kock, 8.Joe van Niekerk, 7.AJ Venter, 6.Corne Krige (capt.), 5.Jannes Labuschagne, 4.Bakkies Botha , 3.Willie Meyer, 2.James Dalton, 1.Lawrence Sephaka, – replacements: 16.Lukas van Biljon, 17.Wessel Roux, 18.Marco Wentzel, 19.Pedrie Wannenburg, 21.Butch James, 22.Marius Joubert – No entry: 20.Bolla Conradie Against Scotland, too, the Springboks' losing run continued. Scotland: 15.Stuart Moffat, 14.Nikki Walker, 13.Andy Craig, 12.Brendan Laney, 11.Chris Paterson, 10.Gordon Ross, 9.Bryan Redpath (capt.), 8.Budge Pountney, 7.Simon Taylor, 6.Martin Leslie, 5.Stuart Grimes, 4.Scott Murray, 3.Bruce Douglas, 2.Gordon Bulloch, 1.Tom Smith, – replacements: 17.Dave Hilton, 18.Nathan Hines, 19.Jason White, 21.Gregor Townsend, 22.Ben Hinshelwood – No entry : 16.Steve Scott, 20.Graeme Beveridge South Africa: 15.Werner Greeff, 14.Breyton Paulse, 13.Adrian Jacobs, 12.Robbie Fleck, 11.Friedrich Lombard, 10.Butch James, 9.Bolla 
Conradie, 8.Joe van Niekerk, 7.Pierre Uys, 6.Corne Krige (capt.), 5.Jannes Labuschagne, 4.Marco Wentzel, 3.Deon Carstens, 2.Lukas van Biljon, 1.Wessel Roux, – replacements: 17.CJ van der Linde, 18.AJ Venter, 21.Andre Pretorius – No entry: 16.James Dalton, 19.Pedrie Wannenburg, 20.Brent Russell, 22.Bakkies Botha Then came the huge defeat against the English team. England: 15.Jason Robinson, 14.Ben Cohen, 13.Will Greenwood, 12.Mike Tindall, 11.Phil Christophers, 10.Jonny Wilkinson, 9.Matt Dawson, 8.Richard Hill, 7.Neil Back, 6.Lewis Moody, 5.Ben Kay, 4.Martin Johnson (capt.), 3.Phil Vickery, 2.Steve Thompson, 1.Jason Leonard, – replacements: 18.Danny Grewcock, 19.Lawrence Dallaglio, 20.Andy Gomarsall, 21.Austin Healey, 22.Tim Stimpson – No entry : 16.Mark Regan, 17.Robbie Morris South Africa: 15.Werner Greeff, 14.Breyton Paulse, 13.Robbie Fleck, 12.Butch James, 11.Friedrich Lombard, 10.Andre Pretorius, 9.Bolla Conradie, 8.Joe van Niekerk, 7.Pedrie Wannenburg, 6.Corne Krige (capt.), 5.AJ Venter, 4.Jannes Labuschagne , 3.Deon Carstens, 2.James Dalton, 1.Wessel Roux, – replacements: 16.Lukas van Biljon, 17.CJ van der Linde, 20.Norman Jordaan, 21.Adrian Jacobs, 22.Brent Russell – No entry: 18.Marco Wentzel, 19.Pierre Uys References Category:2002 rugby union tours Category:2002 in South African rugby union Category:2002–03 in French rugby union Category:2002–03 in Scottish rugby union Category:2002–03 in English rugby union Category:2002–03 in European rugby union
Simple estimation of carrier binding capacity using sorption kinetics curve-fitting. Kinetic curves of the sorption of biological-like compounds and proteins onto insoluble carriers were determined either by linearisation via double reciprocal transformation or by non-linear fitting using the following equation: B = (1/K1) × [t/(t + K2/K1)]. Both these procedures provided non-significant differences in the values of the parameters of sorption kinetics, namely equilibrium sorption (BM), sorption half-time (t1/2) and initial sorption rate (vo). Moreover, these methods proved to be sufficient to provide a precise description of the kinetics of different types of sorption, such as chemical, physical, ionic, biospecific and non-specific sorption. Both computing procedures make it possible (i) to calculate the parameters BM, t1/2 and vo even in instances where they cannot be established experimentally, and (ii) to replace the graphic estimation with computation. The assumption that all types of sorption mentioned above will be kinetically controlled in a uniform way proved to be reasonable.
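A minimal sketch of the linearisation route described in the abstract. The equation B = (1/K1) × [t/(t + K2/K1)] rearranges to 1/B = K1 + K2 × (1/t), so an ordinary least-squares line through the points (1/t, 1/B) yields K1 as the intercept and K2 as the slope, from which BM = 1/K1, t1/2 = K2/K1 and vo = 1/K2 follow. The numeric constants and function names below are assumptions for illustration, not taken from the paper:

```python
def fit_double_reciprocal(times, sorbed):
    """Fit 1/B = K1 + K2 * (1/t) by least squares; return (K1, K2).

    'times' and 'sorbed' are the measured kinetic curve (t, B).
    """
    xs = [1.0 / t for t in times]
    ys = [1.0 / b for b in sorbed]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope  # K1 (intercept), K2 (slope)


def kinetic_parameters(K1, K2):
    """Derived sorption parameters from the fitted constants."""
    BM = 1.0 / K1      # equilibrium sorption: B -> 1/K1 as t -> infinity
    t_half = K2 / K1   # sorption half-time: B(t_half) = BM / 2
    v0 = 1.0 / K2      # initial sorption rate: dB/dt at t = 0
    return BM, t_half, v0


# Synthetic kinetic data generated from assumed constants K1 = 0.5, K2 = 2.0
times = [0.5, 1, 2, 5, 10, 20, 50]
sorbed = [(1 / 0.5) * t / (t + 2.0 / 0.5) for t in times]

K1, K2 = fit_double_reciprocal(times, sorbed)
```

With noise-free data the fit recovers the assumed constants, giving BM = 2.0, t1/2 = 4.0 and vo = 0.5; on real measurements the non-linear fit the authors also describe would typically be preferred, since the double-reciprocal transform amplifies error at small t.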
Q: How to disable search in the Google Chrome address bar? Possible Duplicate: Force Chrome to open URLs as URLs, instead of searching I'm surprised that I haven't found this question on superuser. Forgive me if it's a duplicate, but how do you disable the search functionality in Google Chrome's address bar? I want it to be just a straight up address bar. I know the address where I want to go. I don't want to have to click the "did you mean to go to" link all the time when Chrome thinks the word I typed is more likely to be a search term than a server. You'd think putting the http:// in front would help, but I already do this by default anyway and no dice. =/ Thanks! A: You can't disable the search feature of the Omnibox. You can disable instant search by clicking the wrench > options > basic and unchecking "enable instant." A possible work around is to create a custom search engine for chrome. Right click in the Omnibox (search bar) and choose Edit search engines. You can create a blank search engine that will always fail by putting "no" "null" and "http://%s" in the Name, Keyword, and URL fields respectively. If you're trying to point this to a local host, you could potentially use the URL field for this, though I have not tried this, and cannot confirm.
What Is Art? Jul 13 2012 Germany, Bad Antogast Why do you want to talk about darkness? The whole Universe is enveloped by darkness - Dark Energy and Dark Matter. Scientists today say that what you see today as light is only a spot; it is like a bubble in a water bottle. Light is like a bubble in a water bottle. But the water is the real thing. The bubble is simply what appears. That is not the real thing. So the Sun, of course we know is the source of energy. But the scientists say that, what keeps the Sun tight and round in shape is the Dark Energy around it. So the Dark Matter and Dark Energy is a million times more powerful than the Sun. Like air bubble inside a water bottle is just air trapped by the pressure of all the water molecules. Now water is heavier and so much more powerful than the air. Similarly the whole Universe is filled with energy which you cannot see and which you don’t know. And you think there is nothing in it. The black holes, what the scientists call it, can just swallow the Sun. Our Earth, the Sun and our solar system is simply escaping and moving in between huge black holes. So many black holes are there in the universe, and if the Sun gets even a little close to it then it just sucks the whole Sun, and nobody knows where it disappears. So the whole Universe is filled with that energy which is not seen, that is why it is called Dark Energy or Dark Matter. Questions & Answers You see a heap of stones lying somewhere and you think it is just a heap of stones. But if it is arranged a little bit, you start appreciating it. Then it becomes an Art. On a piece of paper when you start putting or splashing some colors and then start appreciating it as a painting, it becomes an Art. It gives you some meaning, isn’t it? And for poetry, it comes from a subtle level of the mind. When your breath flows in a particular rhythm, when a particular Nadi or channel in you gets opened, in that particular moment, something comes up and you write. 
Then the words rhyme. So it is a gift! Imagination is a gift. It is all how the consciousness expresses itself; how your mind expresses itself. And when you appreciate, it becomes Art. If you don’t appreciate and just blabber some words then that is not it. See the modern poetry needs certain intelligence to appreciate it. Have you read modern poetry? Let me read you something. A leaf ruffles on the ground; Water carries it along. That is it. (Laughter) And the way you write it – ruffles on the ground; water carries it along – this is poetry. Now this can have so many meanings, instantaneously. A leaf is on the ground or a leaf was in the air and it came to the ground. It was on the ground could mean it is almost dead. It fell on the ground and it is love that carries it along – water is synonymous to love. Water carries it along, yes! So it could mean with every desperate moment there is hope. So you can interpret it in so many ways. Poetry is like that. Words carrying the feelings, a little exposed and little covered. Poetry is words capsuling feelings, a little exposed and a little hidden or covered. That is what it is – a little mystic and a little obvious. A poet said to God, ‘You are in control of the world. But make it more obvious.’ (Laughter). To those for whom it is obvious, became saints. Those who don’t see that You are in control of the world, they struggle. You can understand it this way, isn’t it?! If someone appreciates you then you say, ’Oh, that person is trying to hook me in.’ And if they don’t appreciate you, then you think, ‘They are jealous.’ Same thing if you are rich, you think, ‘They are attending to me because I am rich. They are interested in my purse.’ If they don’t, then you say that they are arrogant, they are jealous. My God, the mind plays so many tricks. Similarly, someone can get into the labor union mind-set. Do you know what is the labor union mind-set? ‘I don’t want to listen to anybody! 
Everybody is against me.’ Who is against you?! The mind makes a whole bubble about it. They think, ‘Oh, someone is trying to control me, dominate me, and harass me.’ The other day one person came and said, ‘Everyone is harassing me at work.’ How could everyone harass you? You must be doing something really horrible. ’Everyone is harassing me at my work place’ – you know, this is labor union mentality. Because one does not feel good about oneself, they project it on everybody else thinking, ‘Everybody is bad, everybody is after me!’ Who is after you? What do they get by harassing you and making you miserable? You make yourself miserable. Got it? In many companies, the establishment suffers because there is one manager sitting there who is like that. He wants to show his or her power and he plays all sorts of tricks. He doesn’t realize that he is cutting the tree which he is sitting on. There are people like this. You know, in the World Bank, two teachers from The Art of Living went and conducted the TLEX program and many of these things came up. Isn’t it? Now there is complete transformation in them, in just 3 days; 3 hours each day. And they have said that all the World Bank employees must do this course. They have incorporated it in their Leadership Training Program – The Art of Living TLEX Program. They have also said that in Africa and different countries, the leaders should do the Sudarshan Kriya. So I have just put the Short Sudarshan Kriya in it. The Long Kriya becomes too much, so I said only little bit Kriya. And that is bringing sense into people and making them wake up. Otherwise in many countries they feel they have been victimized by somebody else. The victim consciousness haunts this world and that is why many countries remain poor. Because the people think, ‘Oh I am a victim.’ When you find yourself as a victim, you don’t have power and you feel weak. It is the same with women empowerment. Don’t ask anybody to empower you. 
You have a place, you ascertain your place. Don’t go and ask people, ‘You should empower us.’ Many women activists are so angry, isn’t it? Women activists are so uptight and angry. There is always rebelliousness and anger in them, and it doesn’t lead you anywhere. I’m not saying all are like that, but some women activists are so agitated and angry. I tell you, anger is not going to lead you anywhere. With a calm serene mind ascertain your place. You have a position, climb it! It is all in our own mind. See, if you are not looking beautiful, don’t feel low because someone else is beautiful. They may have a beautiful body, you have a beautiful mind. Remember that. Not everyone with a beautiful body has a beautiful mind. Sometimes when you look at their face, it looks so dull. Somebody may have a beautiful mind but they may not have a sharp intellect. You have a sharp intellect. So look into what you have. Someone may have lot of money, so what? You have talent, you have a good heart, you have something. So when you start comparing with somebody else, you forget there is a dimension inside you which is much bigger than anything else, and which gives everything to everybody. When you latch on to that then you will get over all these complexes. Do you see what I am saying? The world is full of so many complexes in the mind. You should overcome those complexes. And what can make you come out of complexes? Not psychotherapy, but spirituality. Latching on to the inner space will lift you from all complexes – inferior, superior, exterior, interior, all of them. All types of ’rior’ and irritating complexes will go away! You will find that, ‘Ah! So nice, and so peaceful.’ In Hindi there is a saying – Mann Meetha Toh Jaag Meetha. When there is sweetness inside of you then everything around you is also sweet. Look, life itself is an art, okay! The Art of Living. Art can be a business, but don’t think business is an art. 
You can say business is an art in a sense that one should manoeuvre properly how to do business. In a very complex and corrupt world how to keep ethics and still do business – that is an art. In that sense! Now again in business it is a different phenomenon. You have to watch the market. You can’t say, ‘I don’t care what the market is like, I will create my own thing.’ No! Business is a different art, it is not like the Fine Arts. In Fine Arts you don’t look at others, you allow the creativity to come out. Business is dealing with the world, with the market. So there you should know what others are doing. Because you are selling a product, you should be very well aware of how others are marketing their products, and what price structure they are keeping. All that you need to look into. Does the Nadi give us any signs? What does it say now? Sri Sri Ravi Shankar: Yes, now the Nadi says, ’Keep quiet’ (Checking the flow of breath from his nostrils). When both the Nadis (subtle energy channels) are running, it means keep quiet, don’t say anything, just meditate. But they change, everything changes. The whole Universe is full of changes. (A member of the audience spontaneously asked a question which was inaudible in the recording) Sri Sri Ravi Shankar: You have no choice. Do you have any choice? Sooner or later everybody has to do that. You cannot take credit for your good qualities because that is how you are. Now this sunflower cannot say, ‘I am yellow.’ It didn’t do anything to become yellow. It is made yellow. A rose cannot say, ‘Oh look, I am so pink! I made myself pink.’ How can you make yourself pink? Any good qualities, talents you have, you cannot take credit for it because that is how you are made. And the way I am, that is how I am made. Of course you should not take debit for your negative qualities either. And you can’t take credit for your positive qualities. So however you are that is how you are. 
If you are rebellious then make good use of that rebelliousness. Wherever there is injustice, fight! But fight with a smile. Fight with Me! (Laughter). Fight against illiteracy. Fight against injustice. Fight against lack. Go ahead, fight! And in the course of fighting there are always ups and downs. Never mind. You shouldn't mind it. Think, 'Okay, come what may, fight!' That is why you need to identify your Dharma (Duty). If your Dharma is to teach, or to fight, or to convince, or to serve; whatever your Dharma is, and your nature at that moment, you should go with that. You can do all four also, you can try. First you teach and educate. If that is not possible then try to convince; use marketing techniques. So convince and coerce them, cajole them and all that. And if that doesn't work then serve them. And if nothing works, then fight! Got it? Use all these steps, okay! Whatever is natural for you, that is your Dharma. Like she says that rebelliousness is natural for her, she can rebel (referring to a member in the audience). Every moment she can rebel against anything, anywhere. So that is her Dharma. So you identify your Dharma. It becomes obvious. You become absolutely at home and feel comfortable doing it. See, all of these are difficult. Do you think teaching is easy? My goodness, it is such a big headache. In the Telugu language there is a saying – a teacher is supposed to teach a student and forget what he has learnt, because without forgetting what he has learnt he has no liberation. You learn, but you should forget all that and become totally hollow and empty. So a teacher is supposed to learn, and whatever he has learnt in whichever field, he has to pass it on to the disciple and forget about it. Before that he cannot forget. So this is the rule of a teacher – learn, teach and forget. Anyway nature will make you do it. As you become old you start forgetting everything, isn't that so? As you become old and old and old, you forget everything.
Now there is a very funny saying in Telugu – Having made you my student, having taught you, I lost my reputation. You could never learn and I could never forget! (Laughter) Because if someone asks, 'Who is your Guru, who is your teacher?' You will say that so-and-so is my teacher. And what has he taught you? So having taken you as my student, I lost my reputation. You could never learn, I could never forget. So teaching is not an easy job. And fighting is not an easy job either; it is a tough job. Convincing and cajoling is not an easy job. And serving is also a great challenge. You do all good things to serve people and still they blame you, yes! You do all that is possible, all the good things; whatever you do, still you can't make someone happy. So serving is also not an easy job. So anything you take, if you see it from one angle, it is all so difficult. So you think, 'Okay, everything is difficult, but still let me do it and keep quiet.' But is that easy? That is even more difficult. So doing something is not easy and not doing anything is also not easy.
Q: How to remember a specific action/event for the current user?

I have made a multi-language site thanks to Polylang. Now I don't want to use Polylang's switcher, as it causes layout issues, so I found a better solution: putting a link at the top. For example, if the user clicks:

<a href="mysite.com/fr" ...>

the whole website gets translated with no issues. But it doesn't remember the choice if the user goes to another page, for example the categories page; instead the language goes back to the default, which is English, unless 'fr' is added to the URL manually in the browser. My thought is to edit the base URL when the user clicks the link, but I have no idea how to let a regular user do that, or whether there is a better way. Any help would be much appreciated. Thanks

A: According to the documentation there are 3 Polylang functions that can help you here.

The first, to "remember" the user's language:

pll_current_language( $value );
// '$value' => (optional) either 'name', 'locale' or 'slug'; defaults to 'slug'.
// Returns either the full name, the WordPress locale (just as the core function
// 'get_locale') or the slug (2-letter code) of the current language.

Now this one, to check whether a translation exists for the page the user clicked:

pll_get_post_translations( $post_id );
// '$post_id' => (required) id of the post for which you want the translations.
// Returns an associative array of translations with language code as key and
// translation post_id as value.

Now, if the user's language has been found, get the translated post:

pll_get_post( $post_id, $slug );
// '$post_id' => (required) id of the post you want the translation of.
// '$slug' => (optional) 2-letter code of the language; defaults to the current language.

For the last one, $slug isn't optional in your case.
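Putting the three functions together, here is an untested sketch of one way to wire this up. Everything specific in it is invented for illustration: the cookie name `my_lang`, the `?my_lang=fr` query parameter on the top link, and the choice of hooks are assumptions, not Polylang's documented behaviour — it only assumes Polylang is active and would go in the theme's `functions.php`.

```php
// Hypothetical sketch only: remember the language chosen via the top link
// (e.g. <a href="?my_lang=fr">FR</a>) in a cookie, then on each page load
// redirect to that language's translation of the current post, if one exists.
add_action( 'init', function () {
    if ( isset( $_GET['my_lang'] ) ) {
        setcookie( 'my_lang', sanitize_key( $_GET['my_lang'] ), time() + MONTH_IN_SECONDS, '/' );
    }
} );

add_action( 'template_redirect', function () {
    if ( ! is_singular() || empty( $_COOKIE['my_lang'] ) ) {
        return; // only handle single posts/pages with a stored preference
    }
    $lang = sanitize_key( $_COOKIE['my_lang'] );
    if ( $lang === pll_current_language( 'slug' ) ) {
        return; // already viewing the preferred language
    }
    // Key = language code, value = post id of the translation.
    $translations = pll_get_post_translations( get_the_ID() );
    if ( isset( $translations[ $lang ] ) ) {
        wp_safe_redirect( get_permalink( $translations[ $lang ] ) );
        exit;
    }
} );
```

Note the fallback: if no translation exists in the stored language, the sketch simply shows the default-language page instead of redirecting.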
HONG KONG, 25th November 2019 – BBOD, a global leader in offering altcoin perpetual futures contracts, has launched Monero Perpetual Futures quoted and settled in TUSD. The new derivative contract is now trading at BBOD with up to 25x leverage. What is Monero? Monero is a private decentralised digital currency which started out in 2014 as a fork of Bytecoin – the first private cryptocurrency to be created. The Monero protocol obfuscates the three parts of any cryptocurrency transaction: the sender, the receiver, and the amount sent. Transactions on the Monero blockchain are untraceable and unlinkable, so no one can tell where they originated and no one can connect any two transactions together. Monero's currency, called XMR, is mined by computers using a Proof-of-Work protocol. If you would like to find out more about Monero wallets please click here. To get to know more about Monero please watch our interview with Diego Salazar aka Rehrar, a contributor to the Monero community. Let's talk about privacy at Monero: An interview with Diego Salazar aka Rehrar BBOD's Monero Perpetual Futures Contract trading allows users to leverage their accounts with funds they do not actually possess. This can lead to far greater profits but can also greatly amplify losses. BBOD's XMR-TUSD perpetual futures will allow market participants to go long or short on the cryptocurrency with leverage, empowering them to express sentiment and manage risk more effectively. The instrument has no expiration date, unlike fixed-maturity futures. The contract is designed as a risk management tool for Monero miners, as this group is capable of making a fairly accurate estimate of their income in Monero. Once this is known, the dollar value of this future income can be fixed before mining is started; for this purpose, miners can use BBOD's Monero Futures.
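The miner-hedging idea above is plain arithmetic. A minimal sketch with invented numbers (the function name, the prices, and the 100 XMR figure are hypothetical; trading fees and perpetual funding payments are ignored):

```python
# Hypothetical illustration: a miner expecting to earn 100 XMR shorts the same
# amount of XMR-TUSD perpetuals at today's price, fixing the dollar value of
# that future income regardless of where the price settles.

def hedged_income(expected_xmr, entry_price, settle_price):
    spot_value = expected_xmr * settle_price                    # mined coins sold at settlement
    futures_pnl = expected_xmr * (entry_price - settle_price)   # P&L of the short perpetual
    return spot_value + futures_pnl                             # = expected_xmr * entry_price

# Whether XMR falls to 40 TUSD or rises to 80, the hedged value stays 6,000 TUSD.
assert hedged_income(100, 60.0, 40.0) == 6000.0
assert hedged_income(100, 60.0, 80.0) == 6000.0
```

Leverage only changes the margin needed to hold the short, not this payoff arithmetic; at 25x, roughly 1/25 of the notional would be posted as collateral.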
Jacob Ruczynski, CEO at BBOD, said: “BBOD is excited to offer the world’s first Monero perpetual futures contract with 25x leverage on our trading platform. There has been strong client demand for this product. Long-term investors may now effectively hedge the value of their Monero holdings ahead of high-volatility events. They can sell Monero Futures if they think that the price of the underlying asset will temporarily go down, without needing to sell the cryptocurrency on the cash market. Finally, we believe a well-functioning XMR-TUSD perpetual futures market will increase price transparency significantly and provide valuable and inexpensive information for Monero market participants”. This new futures contract expands BBOD’s derivatives offering, which currently includes the following perpetual futures contracts: Bitcoin, Ethereum, Bitcoin Cash, Ripple, EOS, Litecoin, Binance, Monero, Stellar, IOTA, Cardano, NEO, Chainlink, Cosmos, Tezos, VeChain, Digibyte and Dash, all quoted and settled in TUSD. BBOD aims to become a leader in offering a wide range of altcoin perpetual futures, and this launch is just the beginning: we will introduce new additions to the product offering in the future. BBOD is going to list 100+ altcoin perpetual contracts by the end of 2020, to become the most diverse and secure crypto derivatives marketplace for trading altcoins with high leverage. About BBOD BBOD (Blockchain Board of Derivatives) is the world’s leading and most diverse cryptocurrency derivatives marketplace, offering the widest range of futures contracts for trading and risk management for retail and institutional clients.
Alexander Mezhirov Alexander Petrovich Mezhirov (Russian: Александр Межиров; September 26, 1923 [but see below] – May 22, 2009) was a Soviet and Russian poet, translator and critic. Mezhirov was among what has been called a "middle generation" of Soviet poets that ignored themes of communist "world revolution" and instead focused on Soviet and Russian patriotism. Many of them specialized in patriotic lyrics, particularly its military aspects. According to G. S. Smith, Mezhirov and a number of other "middle generation" poets "were genuine poets whose testimony, however well-laundered, to the tribulations of their times will endure at least as long as their generation." Some of Mezhirov's lyrical poems based on his wartime experience belong with the best Russian poetical works created in the Soviet 1950s–1960s. Life Born in Moscow, he was the son of an educated Jewish couple — his father was a lawyer, his mother a German-language teacher, and one of his grandfathers a rabbi. Drafted as a private in July 1941, he fought in World War II before a serious injury led to his demobilization in 1943 as a lieutenant. That same year he joined the Communist Party; after the war he attended the Literary Institute, graduating in 1948. He translated poetry from Georgian and Lithuanian poets. "Mezhirov is a virtuosic translator, especially recognized for his renditions of Georgian and Lithuanian poetry," anthologist Maxim Shrayer has written. In 1944, he married Elena Yashchenko. The couple's daughter, Zoya Velikhova, was born in 1949 and became a writer. Mezhirov was a prominent figure in the Soviet literary establishment, although his allegiances and associations were varied. At some points he was close to fellow Jewish-Russian writer Boris Yampolsky, Kazakh writer Olzhas Suleimenov, and Russian cultural ultranationalist and critic Vadim Kozhinov.
Mezhirov associated with younger writers Yevgeny Yevtushenko, Tatyana Glushkova (known for her nationalist views in the mid-1980s, according to Shrayer) and Evgeny Reyn, who was censored in the Soviet Union until the mid-1980s. Although Mezhirov had publicly stated that his patriotism for Russia was so intense that, unlike other Russian Jews, he could not emigrate, he suddenly left Russia for the United States in 1992, settling first in New York, then in Portland, Oregon. As of 2007, according to anthologist Maxim D. Shrayer, he had not revisited Russia. In March 2009 Mezhirov published a collection of new poems, two months before his death. According to the ITAR/TASS news service, his body was to be cremated in the United States, with the ashes to be buried in Peredelkino near Moscow. At one time the poet was a passionate pool player and was a friend of professional billiards players. He excelled in other games as well. Critical reception Mezhirov has a "special gift" for absorbing the voices of his contemporaries and his predecessors from the 1900s–1930s, according to Shrayer, who notes the influences in Mezhirov's writing of Eduard Bagritsky, Erich Maria Remarque, Anna Akhmatova, Aleksandr Blok, Vladislav Khodasevich, Mikhail Kuzmin, Vladimir Lugovskoy, David Samoylov and Arseny Tarkovsky. Variations in Mezhirov's name and birth year Mezhirov has given his birth year as 1921, but a number of sources have instead given it as 1923. The poet's first name is sometimes rendered "Aleksandr" or "Alexandr" in sources using the Latin alphabet. Bibliography Each year links to the corresponding "[year] in poetry" article. Unless otherwise sourced below, translations of the Russian-language titles of the following books were taken from Google Translate and may be overly literal:
1947: Дорога далеко ("The Road is Far Away"), edited by Pavel Antokolsky, Moscow
1948: Kommunisty, vpered!, "Communists, Ahead!"
poem reprinted in his second collection, New Encounters, and in many volumes, anthologies and samplers
1949: Новые встречи ("New Encounters"), including "Communists, Ahead!"
1950: Коммунисты, вперёд! ("Communists, Ahead!"), reprinted 1952
1955: Возвращение ("Return")
1961: Ветровое стекло ("Windshield")
1964: Прощание со снегом ("Farewell to the Snow")
1965: Ладожский лёд ("Ice of Lake Ladoga")
1967: Подкова ("Horseshoe")
1968: Лебяжий переулок ("Swan's Lane")
1976: Под старым небом ("Under the Old Sky")
1977: Очертания вещей ("Outline of Things")
1981: Selected Works, two volumes
1982: Проза в стихах ("Prose in Verse") (winner of the USSR State Prize, 1986)
1984: Тысяча мелочей ("A Thousand Small Things")
1989: Бормотуха ("Bormotuha")
1989: Стихотворения ("Poems")
1991: Избранное ("Favorites")
1997: Позёмка ("Drifting")
1997: Apologii︠a︡ t︠s︡irka: kniga novykh stikhov ("Apologia of the Circus"), including a version of "Blizzard", St. Petersburg
2006: Артиллерия бьёт по своим ("The Artillery Fires on Its Own"; selected poems of recent years), Moscow: Zebra E
Notes
Category:1922 births
Category:2009 deaths
Category:Russian Jews
Category:American people of Russian-Jewish descent
Category:Russian male poets
Category:Russian translators
Category:Writers from Moscow
Category:Writers from Portland, Oregon
Category:Soviet poets
Category:Soviet male writers
Category:20th-century Russian male writers
Category:Soviet translators
Category:Russian literary critics
Category:20th-century translators
So I was rewatching Cat Fingers and just noticed that Pearl was using a microscope and had some beakers. Just an interesting hint at Pearl's more sciencey nature that I'd never realized till now. Now I just wonder what she could possibly be studying. Organic diseases? Trying to look into Gem corruption?
Serum antibody pattern, antigenemia, and virus isolation in infants born to mothers seropositive for human immunodeficiency virus type 1. Forty children born to mothers seropositive for human immunodeficiency virus type 1 (HIV-1), followed up to 15 months after birth, were studied by means of serum antibody patterns to individual viral polypeptides, by the presence of detectable levels of viral core antigen (p24) and virus in serum and peripheral blood lymphocytes, and by total lymphocyte counts and T4/T8 lymphocyte ratios. The results obtained indicate that a persistent antigenemia is significantly associated with positive virus isolation from peripheral blood lymphocytes and with changes in the intensity of antibody reaction to core (p24, p17) and pol (p31) antigens. Six children (15%) presented unequivocal signs of HIV-1 infection and five also had signs of immune system involvement.
1. Field of the Invention The present invention relates generally to the field of hair replacement devices such as hairpieces. More specifically the present invention relates to a hair intersperser which takes the form of a network of flexible lines and which includes draw line means for uniformly contracting or expanding the size of the hairpiece to custom fit an individual wearer head. The lines making up the network are crocheted with rows of hair strands for interspersing with wearer hair. After the network is fitted to the wearer head, the network lines are tied so that the network permanently retains its fitted size. The network is dyed to approximate the hair color of the wearer, and rows of hair strands are secured to the network lines in quantities and locations corresponding to the specific needs of the individual wearer. A stock embodiment of the intersperser is optionally provided to which the hair strands are already attached and which draws against the wearer head during fitting to an approximated close fit for immediate use with minimized cost. 2. Description of the Prior Art There have long been hairpieces for covering thin and bald areas of wearer heads with real or simulated hair strands. One hairpiece, disclosed in U.S. Pat. No. 4,386,619 issued on Jun. 7, 1983 to the present applicant, provides a network of lines to which hair strands are attached for fitting between and interspersing with existing wearer hair to supplement and add fullness to existing hair. A problem with these prior hairpieces has been that they do not always fit the wearer head closely and evenly, and most cover rather than enhance and supplement wearer hair so that a fully convincing and natural look is not always achieved. Won, U.S. Pat. No. 4,658,841, issued on Apr. 21, 1987, teaches an assembled wig or wig kit. Won includes front and rear interconnecting strap networks for mounting a full head of hair strands.
A problem with Won is that the wig is not size adjustable to conform to the dimensions of a particular wearer head. Another problem with Won is that it is not a hair intersperser to enhance actual wearer hair, but simply covers up wearer hair. Other prior references fail to teach hair interspersal. Torres, U.S. Pat. No. 5,562,111, issued on Oct. 8, 1996, discloses a hair highlighting cap. Torres includes a means for isolating and separating hair segments for coloring, and does not teach hair strand interspersal. Narvick, U.S. Pat. No. 5,873,373, issued on Feb. 23, 1999 reveals an integrated wig having a wefting construction. Narvick is thus a wig rather than a hair strand intersperser. Haber, et al., U.S. Pat. No. 5,647,384, issued on Jul. 15, 1997, discloses hair pieces and mounting means for hair pieces. Photopulos, U.S. Pat. No. 4,150,678, issued on Apr. 24, 1979 teaches cushioned retainer pads for wigs. Mendelson, et al., U.S. Pat. No. 3,884,248, issued on May 20, 1975, discloses adjustable wigs with means for reducing the size of the wig caps. Size is reduced only along one circumferential path, and thus the Mendelson, et al. cap is not uniformly fitted onto the wearer head. Ahn, U.S. Pat. No. 3,834,403, issued on Sep. 10, 1974, reveals a wig construction in which spaced apart points of adjacent strips of wefting are joined together at points of attachment offset from strip to strip to form a wefting network which is expandable to conform to the wearer head. Ahn does not retain a fitted size for a particular wearer. Cohen, U.S. Pat. No. 1,545,881, issued on Jul. 14, 1925, teaches a foundation for shingle bob wigs. Bergmann & Co GMBH, German Patent Number DE 3542123 A1, teaches a net configured hair piece base of woven plastic material or perforated foil with a pull received attachment to natural hair of a wearer. 
It is thus an object of the present invention to provide a hairpiece which intersperses hair strands with existing wearer hair for a fuller and more natural look. It is another object of the present invention to provide a custom version of such a hairpiece which has size adjustment means to be drawn to very closely, uniformly and evenly fit the wearer head and which is dyed to approximately match the wearer hair color, and to which hair strands are subsequently added in quantities and locations as needed by the particular wearer. It is finally an object of the present invention to provide such a hairpiece which is inexpensive to manufacture, sturdy and reliable.
NEWBERRY — Before the location at 1747 Vincent St. was a park, it was an African American hospital, the only one in Newberry County. While the location holds memories of those the hospital served, today it is home to an inviting area for children and teens to enjoy. Dr. Julian Edward Grant, born in 1900 in Marlboro County, graduated from Claflin College in 1925 and completed Meharry Medical College in Nashville in 1929. Grant came to Newberry in 1930 to practice medicine. “Dr. Grant noticed the African American community had a need in Newberry and he founded People’s Hospital,” said City Councilman Thomas Boyd. “He was about serving people, and never allowed his ego to get in the way of his purpose.” At the time People’s Hospital was created, Newberry County Memorial Hospital was segregated. On Monday, community members and city employees gathered with members of Grant’s family to dedicate Vincent Street Park in his honor. Mayor Foster Senn shared his relationship with Grant as a young boy, recalling that his family owned land in the same area. “My family knew him and had a relationship with him,” Senn said. “He was held in great esteem as a person of great kindness and a great doctor.” Grant rallied the support of the community, Senn said, and renovated a home and filled it with modern equipment. A board of trustees was organized by Grant, made up of members from the Newberry community. By 1935, the board had acquired 1747 Vincent St., complete with a two-story, seven-room framed house on two acres of land. The land sold for $1,500. Fitted and renovated with medical equipment, People’s Hospital opened in 1937. After the hospital closed in 1952, the area became the Vincent Street Community Center before being demolished in 1970 to build Vincent Street Park. The park was renovated at the end of 2013. Lisa Toland, vice president of the AKA sorority, was present along with members of the group, presenting a tree to be planted at the park in Grant’s memory.
“We wanted to donate the tree to the park as a lasting memory of Dr. Grant and as a symbol of longevity, tranquility and of life itself to the community,” Toland said. Memories Georgia Suber recalled knowing Grant’s children when they were born, as Grant was her father’s and grandfather’s doctor. When Suber met her husband in 1940, she discovered that Grant had delivered all of her mother-in-law’s children. Suber described Grant taking her family in as his own. “He would come by when it was time for us to plant,” Suber said. “He would give my husband money for plants, and we would give him some of the plants for that.” In 1960, when Suber’s children caught polio, she remembered a night when her daughter had a fever too high to bring down. “He came with a tub full of ice, wrapped my daughter in towels, and stayed with us all night,” Suber said. When Suber offered him thanks, she said what he told her was something she always remembered and to this day tries to live by: “Don’t you ever feel so high that you can’t reach down to pull someone up.” Grant’s passion for what he did inspired Suber to become a nurse, and she worked in People’s Hospital as a clean-up person under him and another nurse. Suber became a nurse’s assistant and was later encouraged to go back to become a licensed practical nurse (LPN). Andrew Shealy also shared memories of Grant. Shealy also lived near Grant’s family; his father had a garden, which helped Shealy and Grant develop a bond. Shealy now serves as a board member for Grant Homes in Newberry, where Grant was once chairman of the board. “I don’t know of anyone who made a larger impact on me than Dr. Grant,” Shealy said. “Those who didn’t know him, you missed a jewel.” Grant’s son, Arther, and his daughter were presented a plaque honoring Grant’s legacy and proclaiming May 5, 2014, to be “Dr. Julian Edward Grant Family Day.” “We cannot stay connected without any memories,” Arther said.
“You have created that today.”
Treatments for face There are 5 products.
Mandelic acid is a new alpha-hydroxy acid extracted from almonds. It acts in a more gentle and graceful way, working as a keratolytic without causing redness or burning while maintaining its effectiveness. It also has an established depigmenting capacity and can be successfully applied in treatments for skin spots. In the Mandelica line there...
A moisturizing and nourishing treatment for the face, neck and décolleté. An indispensable protocol to balance and re-hydrate your face after the summer, ideal for those who have no blemishes; rich in active natural nutrients.
Formulated to prevent and, where possible, reduce wrinkles, sagging skin, and loss of support and elasticity of the face, neck and décolletage. Thanks to a combination of natural active ingredients including Hyaluronic Acid, Ginseng Extract and Marine Collagen, we can re-hydrate, restore, tone and replump the skin of all those with dry, atonic skin,...
IN OUR OPINION Editorial: Hitting full stride again Published: Wednesday, September 18, 2013 at 10:51 p.m. Last Modified: Wednesday, September 18, 2013 at 10:51 p.m. Florida's thoroughbred industry is having quite a run in 2013, with virtually every indicator suggesting it has found its stride again after being slowed by the recession for the past half decade. The latest good news for Florida thoroughbred breeders, announced last week, is that the state showed an increase in live foals for the second straight year. Florida was the only state to see a jump in foal production in both 2012 and 2013. That news came on the heels of a passel of other positive information about the Florida horse industry, Ocala/Marion County in particular. Legendary Ocala Stud, the longest continuously operating thoroughbred operation in Marion County, was named not only the Florida breeder of the year, but earlier this month was also named the national breeder of the year by the Thoroughbred Owners and Breeders Association. Ocala Stud is now moving into its third generation of management by the O'Farrell family. Of course the truest measure of economic recovery is dollars and cents, and plenty of those poured into Ocala/Marion County this year during Ocala Breeders' Sales Co. trifecta of 2-year-old sales and its yearling sale later on. The three 2-year-old sales held at OBS during the first half of the year brought in $92.4 million, a 34.5 percent increase over the 2012 total of $68.7 million. The August yearling auction, meanwhile, saw sales jump from $5.1 million in 2012 to $8.4 million this year, with both average and median sale prices setting records. And if all that was not enough to make area breeders and owners feel better, a colt sold in the March 2-year-old sale fetched $1.8 million, tying the record for the highest price ever at an OBS sale. 
Yes, the Florida thoroughbred industry is having a pretty good run this year — and that is good news not only for the industry but for our state and community as well. The industry is currently conducting a statewide economic impact study, but a 2005 analysis by the state found that thoroughbred breeding had a $2.2 billion impact statewide and a $1.3 billion impact here in Ocala/Marion County, the Horse Capital of the World. The 700-plus horse farms in our county alone create more than 3,000 jobs, while statewide more than 20,000 jobs are generated by this industry. Clearly the equine business is more than picturesque pasturelands and grazing horses we can see from the highway, although that is a bonus to the overall community. Like so many other business sectors, it's been a long slog for the thoroughbred industry since the start of the Great Recession. But now there is every indication it is hitting its stride again, although industry insiders remain cautious about declaring the recovery complete. That said, we like what Ocala Stud patriarch Mike O'Farrell told the Star-Banner's Bill Giauque after being named the nation's top breeder: "You keep at it, and sometimes good things happen." Those words seem applicable to our community's signature industry, horses, because good things are happening everywhere we look.
Effectiveness and safety of chemotherapy with cytokine-induced killer cells in non-small cell lung cancer: A systematic review and meta-analysis of 32 randomized controlled trials. Cytokine-induced killer (CIK) cells are the most commonly used cellular immunotherapy for multiple tumors. To further confirm whether chemotherapy with CIK cells improves clinical effectiveness and to reveal its optimal use in non-small cell lung cancer (NSCLC), we systematically reevaluated all relevant studies. We collected all studies about chemotherapy with CIK cells for NSCLC from the Medline, Embase, Web of Science, China National Knowledge Infrastructure Database (CNKI), Chinese Scientific Journals Full-Text Database (VIP), Wanfang Data, China Biological Medicine Database (CBM), Cochrane Central Register of Controlled Trials (CENTRAL), Chinese clinical trial registry (Chi-CTR), World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) and U.S. clinical trials registry. We evaluated their quality according to the Cochrane evaluation handbook of randomized controlled trials (RCTs) (version 5.1.0), extracted the data using a standard data extraction form, synthesized the data using meta-analysis and finally rated the evidence quality using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. Thirty-two RCTs with 2250 patients were included, and most trials had an unclear risk of bias. The merged risk ratio values and their 95% confidence intervals from the meta-analysis for objective response rate, disease control rate, 1- and 2-year overall survival rates, and 1- and 2-year progression-free survival rates were as follows: 1.45 (1.31-1.61), 1.26 (1.16-1.37), 1.42 (1.23-1.63), 2.06 (1.36-3.12), 1.93 (1.38-2.69) and 3.30 (1.13-9.67). Compared with chemotherapy alone, all differences were statistically significant. CIK cells could increase the CD3+ T cells, CD3+ CD4+ T cells, NK cells and the ratio of CD4+/CD8+ T cells.
The chemotherapy with CIK cells had a lower risk of hematotoxicity, gastrointestinal toxicity and liver injury, and a higher risk of fever, than chemotherapy alone. The evidence quality was "moderate" to "very low." The available moderate-quality evidence indicates that chemotherapy with CIK cells, especially autologous CIK cells, can significantly improve the tumor responses and the 1- and 2-year overall and progression-free survival rates in patients with advanced NSCLC. This treatment does have a high risk of fever. The optimal use may be treatment with one or two cycles and in combination with vinorelbine and cisplatin, paclitaxel and cisplatin, or docetaxel and cisplatin.
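The abstract reports pooled risk ratios with 95% confidence intervals. As a sketch of how a single study's risk ratio and CI are computed before pooling, the snippet below applies the standard log-normal approximation to a hypothetical 2x2 table (the counts and function name are illustrative, not taken from the review):

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio and 95% CI via the log-normal approximation.

    The event counts passed in below are hypothetical; the review
    reports only pooled estimates, not per-arm counts.
    """
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

rr, lower, upper = risk_ratio_ci(45, 100, 30, 100)
print(f"RR = {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")  # RR = 1.50 (95% CI 1.04-2.17)
```

In a meta-analysis, such per-study log risk ratios are then weighted (for example by inverse variance) to produce pooled values like those quoted above.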
Julio was a barback at a local NYC gay bar when he decided to try his hand at moonlighting as a pornstar, so he auditioned for Michael Lucas and got cast in our fifth movie, Restless, playing a bottom who takes on many partners in the orgy scene.
Buena Vista Independent School District Buena Vista Independent School District is a public school district based in the community of Imperial, Texas (USA). The district has one school, Buena Vista School that serves students in grades pre-kindergarten through twelve. In 2009, the school district was rated "academically acceptable" by the Texas Education Agency. Special programs Athletics Buena Vista High School plays six-man football. See also List of school districts in Texas References External links Buena Vista ISD Category:School districts in Pecos County, Texas Category:School districts in Texas
Man killed in a dispute over the right of way
The Executive Centre (Headquarters: Chiyoda-ku, Tokyo, Director: Paul Daniel Salnikow), which operates serviced offices and coworking spaces in convenient Tokyo locations such as Marunouchi, Kyobashi, Tameike-Sannou, Shibuya and Roppongi, runs a flexible serviced office on the 19th floor of the "Minatomirai Center Building," directly connected to Minatomirai Station, or about an 11-minute walk from Sakuragicho Station. Minatomirai Center Building Yokohama Minatomirai has been promoting its improvement and development as a business base for many domestic and overseas corporations. It has a high level of infrastructure and provides easy access to Haneda Airport and Tokyo. We will support your business success in this safe, comfortable city, which creates a beautiful cityscape taking advantage of its history and waterfront landscape. Yokohama Minatomirai 21 Area with excellent accessibility from the Tokyo metropolitan area and Haneda Airport Approx. 30 mins from Tokyo to Yokohama Station by train. Approx. 30 mins from Shinjuku Station to Yokohama Station by the Shonan–Shinjuku Line. Yokohama City has a working population of approx. 1.8 million and more than 320,000 professional and technical workers. It provides excellent access for commuters and makes it easy to secure human resources. The area surrounding Yokohama Station and the Minatomirai Area is expected to further develop In 2009, the headquarters of Nissan Motor Co., Ltd. transferred from Ginza to Yokohama, and this area is expected to further develop in the future. The Minatomirai Area has also served as the center of the Yokohama economy, with business offices of approx. 1,810 companies and approx. 105,000 workers. The research & development facility of Fuji Xerox Co., Ltd., the headquarters building of CSK Group, and many other major companies have expanded into this area.
In addition, the Yamato Lab of Lenovo Japan Ltd., BASF Japan Ltd., Wipro Japan K.K., and many other foreign companies have also expanded into this area. It has the second largest number of foreign companies in Japan, next to Tokyo. Another major advantage is that Yokohama City provides strong support systems for companies, including various subsidy and financing systems. The Art and Science of Design at The Executive Centre Designer: Fiona Hardie ID "Since our first collaboration in designing the Taipei 101 Tower office spaces, she has been in charge of design direction at The Executive Centre." Vintage furniture / Collaboration with Timothy Oulton: "If you make a space beautiful, people just behave differently, and it’s always a change for the positive, even in a workplace. If you feel comfortable you’re much more open to different ideas, different perspectives apart from your own." Collaboration with Herman Miller: "The Executive Centre is a living example of Herman Miller's mission, which is to inspire designs to help people do great things. When you walk into any of The Executive Centre spaces, you can really feel the positive attitude of the people working there." 9AM’s smart standing desks: "9AM’s height-adjustable standing desks inspire working people and energize the office where they are working. They provide a healthier and smarter workspace and create a truly seamless work experience." Attractiveness of The Executive Centre Upper Class Office Environment Based on the philosophy of providing a high level of service, all of our staff aim to provide serviced offices that allow you to focus on your daily work without anxiety, improve work efficiency, and devote yourself to your core business. Fulfilling One-Stop Solution Service Provided by the Concierge Desk The Executive Centre provides support services in diverse fields ranging from company establishment and registration, to personnel affairs/outsourcing, tax affairs, accounting, and translation.
We are willing to assist our clients in every possible way so that they can focus on important tasks. We are here to support and serve your business. Our service ・Executive office with rich facilities Modern office furniture (Aeron chairs by Herman Miller are our choice as the standard equipment) Taking advantage of our expanding global network, we are willing to provide opportunities for our members to extend their network, to share information, and to benefit from all the events taking place at all of our locations. Company Profile About The Executive Centre ・Established in Hong Kong in 1994, The Executive Centre is a leading company in the serviced office industry in the Asia-Pacific region. ・We provide professional business support services for serviced offices and home office workers at 6 locations in Tokyo, 1 location in Yokohama, and 130+ locations in 32 major cities in the Asia-Pacific region, including Hong Kong, Beijing, Chengdu, Shanghai, Tianjin, Shenzhen, Macao, Taipei, Seoul, Singapore, Jakarta, Sydney, Brisbane, Perth, Mumbai and Gurgaon.
CNS development of health maintenance programs: quality improvement and cost reduction. Several outpatient health maintenance programs were developed by CNSs that resulted in quality improvement and cost savings. Common implementation strategies, barriers to implementation, marketing issues and techniques, and revenue generation issues were found. An outpatient consultation model was used to expand enterostomal therapy and diabetic teaching services. Education and support programs include a cardiac support group and community education classes on aging. Preadmission programs include physical therapy consultation for patients scheduled for elective orthopedic surgery. Outpatient programs facilitate continuity of care in a managed care format, promote cost savings, and provide unique services that encourage patients to return to our hospital, thereby increasing market share. Key barriers encountered include inconsistent or ambiguous administrative support, budgetary constraints, lack of collaboration, communication problems, facility limitations, funding considerations, resource allocation, and territoriality. The programs are consistent with the change in focus of health care from treatment to prevention.
Back or Neck Pain? The Top Laser Spine Surgeon wants to help you get your life back today, without a fusion! My goal is to cure your back pain, not manage it. I am a specialist in minimally invasive spine surgery for back and neck pain. If you’ve been suffering from disc herniations, annular tears, spinal stenosis or other back pain problems, why wait any longer to get your life back? – Call the number above! If you’ve been suffering from disc herniation, annular tears, spinal stenosis, lumbar stenosis, facet syndrome or another painful spine condition, it’s imperative that you don’t waste time, because the longer you have pain, the longer it can take to go away after treatment. “There are a number of alternatives that could help you get definitive back pain relief.” Let us help you return to a back pain-free lifestyle. Thousands of patients are already enjoying the benefits of these treatments. Avoid a Fusion – Refuse to Fuse and call me first! Contact us for a free one-on-one consultation and learn: What are my options? How much does a minimally invasive surgery cost? What would my procedure be, step by step? What is the recovery phase like? Find out if I’m a candidate for endoscopic laser surgery Common Questions About Our Concierge Spine Services Which insurances does Dr. Mork accept? We do not take insurance directly for Dr. Mork’s fees, but if you provide us with your insurance information, we can check to see how we can best help you. The Surgery Center and the Anesthesiologist fees are covered by most insurance companies. How much does it cost? The fee for our service-based concierge procedures includes operative care. This includes direct access to Dr. Mork. Please contact our staff for more information about our fees and options.
Where does he do his procedures? Crown Valley Surgical Center 26921 Crown Valley Parkway Mission Viejo, CA 92691 How far out is he booked? This varies; usually a procedure can be scheduled within 2-4 weeks. This is a concierge practice, and if you need something immediately, we will do our best to accommodate your needs. In some cases, we have been able to provide relief within three days. How long has he been in practice? Dr. Mork began his practice in 1982. For the first 16 years of his practice, Dr. Mork was a sports medicine surgeon, specializing in arthroscopy. What is the recovery time to return to my activities? 99 percent of people could drive a car the day after surgery, even though it’s not recommended. Most people are able to return to light activities within a few days. More strenuous tasks generally need about six weeks. Dr. Mork will provide detailed instructions and personalized consultations to guide you through your recovery. It’s best to go slowly after surgery and stay comfortable; changing positions is great. What do I need to do to get a no-cost phone consult from Dr. Mork? (The typical office consultation from a spine surgeon usually costs $500.) Dr. Mork would like to examine your MRI or CT scan and report (completed within one year). Click Here, look for “Upload Studies” in the upper right corner, and follow the instructions to upload your MRI or CT scan. This is an important step because Dr. Mork has a special disc reader that lets him look at your MRI while you are discussing your problem during the phone consultation. Call our office and talk to the office staff to get an appointment to talk to Dr. Mork and, if needed, help sending your report to the office. How long do I need to stay after surgery? This depends somewhat on your procedure. A typical timeline: Post-op Day 1 – Dr. Mork visits you and checks your dressing. Post-op Day 2 – Day of rest and recuperation. Post-op Day 3 – A recommended day of rest, but you could leave the area.
Post-op Day 4 – Can depart or have another day of rest. Does Dr. Mork use a laser? Dr. Mork has used the laser in over 7,000 cases. The laser is used primarily for soft tissue removal; it does not cut bone very well. Over the years he has found that other tools, like radiofrequency probes, do a better job on soft tissues, so the laser is used less frequently. There are some cases (Cervical Facet Syndrome) where the laser is the best option, so of course it would be used in these instances. Refuse to Fuse – Call Dr. Mork There are a number of alternatives that could help you get definitive back pain relief. Let us help you return to a pain-free lifestyle. Get Your Life Back Today! Endoscopic Spine Surgery Specialist Dr. Mork has personally performed more than 8,000 endoscopic spinal surgeries. He has also designed some of the specialized tools used in laser spine surgery and is constantly pursuing the improvement of spinal care. Dr. Mork has been instrumental in developing some of the current endoscopic spinal surgery techniques. He is one of the originators of the endoscopic spine surgery techniques available today and has written or co-authored 11 peer-reviewed articles on the topic. He was the co-founder of Microspine and is a national instructor for Richard Wolf, the largest producer of endoscopic spine equipment in the world. While others copy his techniques, he is always moving forward. Dr. Mork represents the future of endoscopic laser spine surgery. He has even helped patients with prior failed spine surgeries and fusions. Unlike most surgeons who operate based on just the MRI findings, Dr. Mork’s diagnostic method will determine exactly what your problem is so that it can be resolved, thus minimizing guesswork. It may not be enough to plan a spinal surgery on the MRI alone. Spinal mapping may be very helpful to obtain the diagnosis. Dr. Tony Mork operates a concierge practice, emphasizing education, diagnosis, communication, first-class care and cutting-edge technology.
Dr. Mork is committed to his patients, their care and their desire to be pain free. Avoid Spine Fusion – start living your life pain free. I have performed over 8,000 spine surgeries and have trained many of the top spine surgeons; read the reviews from many of my patient success stories! Get Your Life Back Today!
Worldwide business trends create new leverage for voluntary benefits. The recent growth of interest in voluntary benefits has coincided with a decline in resources available to administer benefit programs. Fortunately, technology offers employers a solution to this resources dilemma in the form of portals, which allow employers to provide choice and access to voluntary benefits without concerns about increasing the workload for the benefits staff and without causing confusion among employees.
Patent US5672678 - Semiconductive copolymers for use in luminescent devices A semiconductive conjugated copolymer comprises at least two chemically different monomer units which, when existing in their individual homopolymer forms, have different semiconductor bandgaps. The proportion of said at least two chemically different monomer units in the copolymer is selected to control the semiconductor bandgap of the copolymer so as to control the optical properties of the copolymer. The copolymer is formed in a manner enabling it to be laid down as a film without substantially affecting the luminescent characteristics of the copolymer and is stable at operational temperature. The semiconductor bandgap may be spatially modulated so as to increase the quantum efficiency of the copolymer when excited to luminesce, to select the wavelength of radiation emitted during luminescence or to select the refractive index of the copolymer. Claims (18) We claim: 1.
An optical device which comprises a substrate, and at least one semiconductive conjugated copolymer layer supported by the substrate, wherein the copolymer comprises at least two chemically different monomer units each having different semiconductor bandgaps in their individual homopolymer forms, and wherein the proportion in the copolymer of said at least two chemically different monomer units forms the copolymer with a semiconductor bandgap that is spatially modulated from the semiconductor bandgap of each homopolymer form so that the optical properties of the copolymer are modulated, said copolymer being stable at operational temperatures within the range of about 0° C. to 150° C. 2. An optical device as claimed in claim 1, wherein the substrate is a transparent substrate. 4. An optical device as claimed in claim 1, wherein the copolymer is present in an emissive layer. 5. An optical device as claimed in claim 1, which comprises an electroluminescent device. 6. An optical device as claimed in claim 5, which further comprises two electrodes between which the copolymer layer is situated. 7. An optical device as claimed in claim 6, wherein one of the electrodes is a negative electrode formed from a member of the group consisting of calcium, aluminum, amorphous silicon, and silver/magnesium alloy. 8. An optical device as claimed in claim 6, wherein one of the electrodes is a positive electrode formed from a member of the group consisting of oxide coated aluminium, gold, indium oxide, and indium/tin oxide. 9. An optical device as claimed in claim 1, wherein the thickness of the copolymer layer is in the range 50 to 150 nm. 10. An optical device as claimed in claim 1, wherein the semiconductor bandgap of the copolymer has been spatially modulated so as to modulate the optical properties of the copolymer by increasing the quantum efficiency of the copolymer when excited to luminesce. 11.
An optical device as claimed in claim 1, wherein the semiconductor bandgap of the copolymer has been spatially modulated so as to modulate the optical properties of the copolymer by modulating the wavelength of radiation emitted during luminescence. 12. An optical device as claimed in claim 1, wherein the semiconductor bandgap of the copolymer has been spatially modified so that the optical properties of the copolymer are modulated by modulating the refractive index of the copolymer. 13. An optical device as claimed in claim 1, wherein the chain of the copolymer is fully conjugated. 14. An optical device as claimed in claim 1, wherein at least one of the monomer units is not fully conjugated in the chain of the copolymer. 15. An optical device as claimed in claim 1, wherein the proportion in which said at least two chemically different monomer units are present is in the range of about 4:1 to 19:1 by molar ratio. 16. An optical device as claimed in claim 1, wherein at least one of the monomer units comprises an arylene vinylene unit substituted with a solubilizing group in the arylene ring so as to render the copolymer soluble in either aqueous or organic solvents. 17. An optical device as claimed in claim 16, wherein the solubilizing group comprises an alkoxy group of at least four carbon atoms. 18. An optical device as claimed in claim 17, wherein the alkoxy group is a 2-methylpentyloxy group or a 2-ethylhexyloxy group. Description CROSS-REFERENCE TO RELATED APPLICATIONS The present application is a continuation application of U.S. application Ser. No. 08/246,269, filed May 19, 1994, and which issued as U.S. Pat. No. 5,512,654 on Apr. 30, 1996. U.S. application Ser. No. 08/246,269 is a divisional application of U.S. application Ser. No. 07/748,777, filed Aug. 22, 1991, and which issued as U.S. Pat. No. 5,401,827 on Mar. 28, 1995. All of these applications claim priority under 35 U.S.C. §119 from United Kingdom Patent Application 9018698.2, filed 24 Aug. 1990.
FIELD OF THE INVENTION This invention relates to semiconductive copolymers for use in luminescent devices, particularly electroluminescent devices. BACKGROUND TO THE INVENTION It has been shown that certain conjugated polymers show a relatively high quantum efficiency for the radiative decay of singlet excitons. Of these, poly-p-phenylene vinylene (PPV) can be prepared via a solution-processible precursor polymer, and although itself intractable and not easily processed, can be prepared in the form of thin films of high quality by thermal conversion of the as-prepared films of the precursor polymer. Details of this general synthesis method are given in "Precursor route poly(p-phenylene vinylene): polymer characterisation and control of electronic properties", D. D. C. Bradley, J. Phys. D: Applied Phys. 20, 1389 (1987), and "Spectroscopic and cyclic voltammetric studies of poly(p-phenylene vinylene) prepared from two different sulphonium salt precursor polymers", J. D. Stenger-Smith, R. W. Lenz and G. Wegner, Polymer 30, 1048 (1989). Measurements of photoluminescence, PL, have been reported in, for example, "Optical Investigations of Conjugated Polymers", R. H. Friend, J. Molecular Electronics, 4, 37 (1988), and "Photoexcitation in Conjugated Polymers", R. H. Friend, D. D. C. Bradley and P. D. Townsend, J. Phys. D 20, 1367 (1987). In our earlier International Patent Application No. PCT/GB90/00584 (Publication No. PCT/WO90/13148) films of PPV are disclosed as being useful as the emissive layer in a structure exhibiting electroluminescence (EL). This structure requires injection of electrons and holes from either side of the active (i.e. emissive) region of the film, and various metallic contact layers can be used. In sandwich-like structures, and for emission from the plane of the device, one of these should be semi-transparent.
The advantages of using polymers of this type as the emissive layer in EL structures include: (a) ease of fabrication of large area structures. Various methods are available for solution-processing of the precursor polymer, including spin-coating from solution, which is the preferred method, and dip-coating; (b) intractability of the polymer film, giving desirable strength, resistance to degradation from heat and exposure to oxygen, resistance to structural changes such as recrystallisation and shrinkage, and resistance to ion migration; The present invention is directed to providing polymers for use as the emissive layer in EL structures which overcome these difficulties. According to one aspect of the present invention there is provided a semiconductive conjugated copolymer comprising at least two chemically different monomer units which, when existing in their individual homopolymer forms, have different semiconductor bandgaps, the proportion of said at least two chemically different monomer units in the copolymer having been selected to control the semiconductor bandgap of the copolymer so as to control the optical properties of the copolymer, said copolymer having been formed in a manner enabling it to be laid down as a film without substantially affecting the luminescent characteristics of the copolymer, said copolymer being stable at operational temperature. The operational temperature depends upon the use to which the copolymer is put. Typically, use of the copolymer in luminescence devices may require the operational temperature to be ambient temperature or room temperature. Preferably, the stability of the copolymer extends to operational temperatures in the range 0°-150° C., more preferably down to 77 K. Preferably the monomer units in the copolymer are arylene vinylene units. A semiconductor is a material that is able to accommodate charged excitations which are able to move through this material in response to an applied electrical field.
Charge excitations are stored in the semiconductor in states which are (or are derived from) conduction band states (in the language of quantum chemistry, lowest unoccupied molecular orbitals, LUMOs) if negatively charged, or valence band states (highest occupied molecular orbitals, HOMOs) if positively charged. The semiconductor band gap is the energy difference between valence and conduction bands (or from HOMO to LUMO). The present application is primarily concerned with copolymers in which the material is made up of chemically distinct regions of polymer chain. A convenient description of the electronic states (molecular orbitals) is one in which the wavefunctions are substantially localised on a region of chain of one chemical type. It is useful to define the semiconductor bandgap locally, i.e. as the energy gap between HOMO and LUMO on a particular sequence of polymer chain to which the HOMO and LUMO wavefunctions are substantially confined. One can expect to find a variation of the gap from HOMO to LUMO between regions of one chemical type and those of another. This may be described as a spatial modulation of the bandgap. The inventors have found that by modulating the semiconductor bandgap of the copolymer it is possible to increase the quantum efficiency of the copolymer when excited to luminesce. Quantum efficiency for luminescence may be defined as photons out per excited state. For photoluminescence this is identified as photons out per photon absorbed. For electroluminescence this is defined as photons out per electron injected into the structure. They have also found that the semiconductor bandgap can be modulated to control the wavelength of radiation emitted during luminescence. This gives the very desirable feature of controlling the colour of light output from the polymer. The inventors have also found that the semiconductor bandgap is a factor affecting the refractive index of the copolymer. In one aspect, the chain of the copolymer is fully conjugated.
In a further aspect, at least one of the monomer units is not fully conjugated in the chain of the copolymer. It will be apparent that it is an important feature of the invention that the copolymer, when laid down as a film, comprises two chemically different monomer units. This can be achieved by converting a suitable precursor copolymer comprising a selected proportion of the different monomer units or by controlling the extent of conversion of a precursor polymer into a conjugated copolymer. The conjugated polymers used here are all examples of semiconductors, and there is some control of bandgap through adjustment of the repeat units of the chain. However, it is also found that it is useful to incorporate some units of non-conjugated polymers to form some of the copolymers. In this case, the non-conjugated section of the chain would function as a very large gap semiconductor, so that under the conditions of operation found here it would behave as an insulator, i.e. there would be little or no charge storage on or movement through such a region of the chain. In this case, the material as a whole will still function as a semiconductor so long as there is a path through the bulk of the sample that passes entirely through the semiconducting regions of the chain (those that are conjugated). The threshold for the existence of such a path is termed the percolation threshold, and is usually found to be in the region of 20% volume fraction of non-insulating material. In the present specification, all such copolymers are well above this percolation threshold and can be termed semiconductors.
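The ~20% percolation figure can be checked against the composition range given in claim 15 (molar ratios of about 4:1 to 19:1). The sketch below assumes, for illustration, that the majority unit is the conjugated one and that mole fraction can stand in for volume fraction; both are simplifying assumptions, not statements from the patent:

```python
# Approximate percolation threshold quoted in the text (volume fraction
# of the semiconducting, i.e. conjugated, component).
PERCOLATION_THRESHOLD = 0.20

def conjugated_fraction(ratio_conjugated, ratio_other):
    """Mole fraction of the conjugated unit for a molar ratio a:b.

    Assumption: mole fraction is used as a stand-in for volume fraction.
    """
    return ratio_conjugated / (ratio_conjugated + ratio_other)

# Endpoints of the claim 15 range, assuming the majority unit is conjugated.
for a, b in ((4, 1), (19, 1)):
    frac = conjugated_fraction(a, b)
    print(f"{a}:{b} -> conjugated fraction {frac:.2f}, "
          f"above threshold: {frac > PERCOLATION_THRESHOLD}")
```

Both endpoints (0.80 and 0.95) sit comfortably above the threshold, consistent with the statement that all the copolymers described here remain semiconducting.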
In a preferred embodiment the present invention provides a conjugated poly(arylene vinylene) copolymer capable of being formed as a thin electroluminescent film, wherein a proportion of the vinylic groups of the copolymer are saturated by inclusion of a modifier group substantially stable to elimination during formation of the film, whereby the proportion of saturated vinylic groups controls the extent of conjugation, thereby modulating the semiconductor (π-π*) bandgap of the copolymer. In another aspect, the invention provides a method of manufacturing a semiconductive copolymer comprising: (a) reacting a quantity of a first monomer with a quantity of a second monomer in a solvent comprising a mixture of water and an alcohol; (b) separating the reaction product therefrom; (c) dissolving the reaction product in an alcohol the same as or different from said first mentioned alcohol; (d) forming from the result of step (c) a conjugated polymer film, the quantities in step (a) being selected so that in the conjugated polymer the semiconductor bandgap is controlled so as to control the optical properties of the copolymer. Step (a) is preferably carried out in the presence of a base. The present invention also provides a method of forming a conjugated poly(arylene vinylene) copolymer as defined above, which method comprises heating substantially in the absence of oxygen a poly(arylene-1,2-ethanediyl) precursor copolymer wherein a proportion of the ethane groups include a modifier group substituent and at least some of the remaining ethane groups include a leaving group substituent, whereby elimination of the leaving group substituents occurs substantially without elimination of the modifier group substituents so as to form the conjugated poly(arylene vinylene) copolymer. The extent of conjugation of the conjugated poly(arylene vinylene) copolymer can be tailored by appropriate selection of the arylene constituents of the copolymer and of the modifier group. 
For example, phenylene moieties incorporating electron-donating substituent groups or arylene moieties with oxidation potentials lower in energy than that of phenylene are found to incorporate the modifier group preferentially as compared with the corresponding unsubstituted arylene moiety. Thus, the proportion of vinylic groups saturated by incorporation of the modifier group can be controlled by selection of the arylene moieties' substituents and the extent of conjugation of the copolymer may be concomitantly modulated. The extent of conjugation of the copolymer affects the π-π* bandgap of the copolymer. Therefore, selection of appropriate reaction components may be used to modulate the bandgap. This property may be exploited, for example, in the construction of electroluminescent devices as described in more detail with reference to the preferred embodiment. In a further aspect, the present invention also provides a poly(arylene-1,2-ethanediyl) precursor copolymer wherein a proportion of the ethane groups include a modifier group substituent and at least some of the remaining ethane groups include a leaving group substituent, the precursor copolymer being convertible by elimination of the leaving group substituents into a conjugated poly(arylene vinylene) copolymer as defined above. The invention also provides a method of conversion of the precursor into its copolymer in which the extent of elimination of the leaving group substituents is controlled to control the bandgap of the copolymer to define both the colour of luminescence of the resulting copolymer film and its quantum efficiency for luminescence. 
In a further aspect, there is provided a method of forming a poly(arylene-1,2-ethanediyl) precursor copolymer as defined above, which method comprises reacting a first monomer component with a second monomer component, in the presence of base and a solvent comprising a modifier group, wherein the first monomer component comprises a first arylene moiety substituted with --CH2 L1 and --CH2 L2 and the second monomer component comprises a second arylene moiety substituted with --CH2 L3 and --CH2 L4 in which L1, L2, L3 and L4 each represents a leaving group substituent which may be the same or different from one another. This method may constitute a first step in the formation of the conjugated poly(arylene vinylene) copolymer. A function of the modifier group is to interrupt the conjugation of the poly(arylene vinylene) copolymer by saturation of the vinylic groups of the copolymer chain. Thus, for the modifier group to be successful in this function it must be relatively stable to elimination during formation of the poly(arylene vinylene) copolymer. Typical modifier groups include: ##STR1## A preferred modifier group is a C1 to C6 alkoxy group, more preferably a methoxy group. The poly(arylene-1,2-ethanediyl) precursor copolymer may be formed in a first step by reacting a first monomer component with a second monomer component, in the presence of base and a solvent comprising the modifier group, wherein the first monomer component comprises a first arylene moiety substituted with --CH2 L1 and --CH2 L2 and the second monomer component comprises a second arylene moiety substituted with --CH2 L3 and --CH2 L4, in which L1, L2, L3 and L4 each represents a leaving group substituent which may be the same or different from one another. In the step of forming the poly(arylene-1,2-ethanediyl) precursor copolymer the solvent preferably also includes water. Thus, for aqueous solvents, the modifier group must be present as a water miscible polar solvent/reagent. 
Where the modifier group is alkoxy, the corresponding solvent or solvent component would therefore be an alcohol. Preferably the solvent comprises at least 30% modifier group by weight. More preferably the solvent is water:methanol at a ratio of 1:1 or lower. Modifier groups may be introduced selectively either during formation of the precursor copolymer or by displacement reactions on the precursor copolymer. The identity of the leaving groups is not particularly critical provided that the first and second monomer components may react together in the presence of base and provided that the leaving group substituents on the poly(arylene-1,2-ethanediyl) precursor copolymer may eliminate upon heating. Typical leaving groups include 'onium salts in general, bearing a non-basic counter anion. Sulphonium salts, halides, sulphonates, phosphates or esters are suitable examples of leaving groups. Preferably a sulphonium salt such as a tetrahydrothiophenium salt is used. Throughout this specification the term arylene is intended to include in its scope all types of arylenes including heteroarylenes as well as arylenes incorporating more than one ring structure, including fused ring structures. At least two arylene moieties are present in the copolymer chain and these may be substituted or unsubstituted arylene or heteroarylene moieties. Suitable substituents include alkyl, O-alkyl, S-alkyl, O-aryl, S-aryl, halogen, alkyl sulphonyl and aryl sulphonyl. Preferred substituents include methyl, methoxy, methyl sulphonyl and bromo, and the arylenes may be substituted symmetrically. In a more preferred embodiment of the invention, one of the arylene moieties of the copolymer is unsubstituted and comprises para-phenylene. Preferably, the second component is selected from the group comprising 2,5-dimethoxy-para-phenylene, 2,5-thienylene, 2,5-dimethyl-para-phenylene, 2-methoxy-5-(2'-methylpentyloxy)-para-phenylene and 2-methoxy-5-(2'-ethylhexyloxy)-para-phenylene. 
More preferably the para-phenylene moiety is present in the copolymer chain in an amount resulting from conversion of a precursor copolymer formed by reaction of at least 70 mole % of the PPV precursor monomer unit. Referring in particular to the method of forming the conjugated poly(arylene vinylene) copolymer, this can be effected by heating, preferably in a temperature range of 70°-300° C. The heating is performed substantially in the absence of oxygen, for example under an inert atmosphere such as that of one or more inert gases or under vacuum. In the step of forming the precursor copolymer, a range of reaction temperatures and reaction times is possible. The reaction temperature is constrained mainly by the temperature range at which the solvent is liquid and typically varies from -30° C. to +70° C., preferably -30° C. to +30° C., more preferably -5° C. to +10° C. The reaction time may typically be between 1 minute and 1 day, depending on the temperature and reaction components, preferably not greater than 4 hours. Once the precursor copolymer is formed this may optionally be purified, for example by precipitation with a salt of a non-nucleophilic counter anion (i.e. anion exchange). Preferably the precursor copolymer is dialysed against an appropriate solvent such as water or a water-alcohol mixture. Choice of the base used in the reaction is not particularly critical provided that it is soluble in the solvent. Typical bases include hydroxides or alkoxide derivatives of Group I/II metals and may be present at a ratio of 0.7-1.3 mole equivalents of base per mole of monomer. Preferably, hydroxides of lithium, sodium or potassium are used in equimolar proportions with the monomer. In a further embodiment, at least one of the monomer units of the copolymer comprises an arylene vinylene unit substituted with a solubilizing group in the arylene ring so as to render the copolymer soluble. Any known solubilizing group may be used for this purpose. 
Where the copolymer is to be soluble in water, a charged solubilizing group is preferred. The solubilizing group typically comprises an alkoxy group of at least 4 carbon atoms. The alkoxy group may be branched or linear and preferably introduces asymmetry into the arylene rings so as to disrupt the packing of the copolymer chains. Preferably the alkoxy group is a 2-methylpentyloxy or a 2-ethylhexyloxy group. A further alkoxy group such as a methoxy group may be substituted para to the solubilizing group. Making the copolymer soluble confers the advantage of allowing the copolymer to be processed in solution. Accordingly, a solution-processable conjugated copolymer may be provided in which the monomer units have been selected to modulate the semiconductor bandgap thereof. In this way, the quantum efficiency of the copolymer can be increased and the wavelength of radiation emitted during luminescence can be selected. In a further aspect, the present invention also provides a method of forming a conjugated poly(arylene vinylene) copolymer. The method comprises heating substantially in the absence of oxygen a poly(arylene-1,2-ethanediyl) precursor polymer wherein at least some of the ethane groups include a modifier group substituent, the heating conditions being controlled so that elimination of the modifier group substituents occurs to form the copolymer whereby a proportion of the vinylic groups of the copolymer remain saturated by the modifier group substituents, the proportion of saturated vinylic groups controlling the extent of conjugation in the copolymer, thereby modulating the semiconductor bandgap of the copolymer. In this aspect of the invention, the precursor polymer is formed whereby substantially all the leaving groups are replaced by the modifier groups. A suitable method for forming the precursor polymer is to be found in Tokito et al Polymer (1990), vol. 31, p. 1137. 
By replacing the leaving group with a modifier group which is substantially stable at ambient temperatures, a relatively robust precursor polymer is formed. Examples of typical modifier groups are set out in the foregoing discussion. Advantageously the modifier group is an alkoxy group, preferably a methoxy group. Advantageously the precursor polymer comprises a homopolymer, preferably a poly(para-phenylene-1,2-ethanediyl) polymer, a poly(2,5-dimethoxy-para-phenylene-1,2-ethanediyl) polymer, or a poly(thienylene-1,2-ethanediyl) polymer. Partial elimination of the modifier groups from the homopolymer produces a copolymer. By controlling the extent of conversion to the copolymer, the extent of conjugation in the copolymer is controlled. This therefore provides a further route for modulating the semiconductor bandgap of the copolymer. The heating of the precursor polymer is preferably performed substantially in the absence of acid. The presence of acid tends to result in conversion to the fully conjugated polymer. By controlling the temperature of heating and the time of heating it is possible to control the degree of conversion into the copolymer, thereby modulating the semiconductor bandgap of the copolymer. Thus, the wavelength of radiation emitted during luminescence of the materials may be selected by controlling the heating conditions. The more conversion to the conjugated copolymer, the more red-shifted the wavelength becomes. In this way, it is possible to control the colour of the emissions from blue to red. Preferably, the temperature of heating is in the range 200°-300° C. and preferably the heating time is up to 12 hours. ##STR2## Referring to the foregoing page of structural formulae, copolymers of type (i) have been prepared by Lenz et al from the tetrahydrothiophenium salts of the two monomer units as described in "Highly conducting, iodine-doped copoly(phenylene vinylene)s", C.-C. Han, R. W. Lenz and F. E. Karasz, Polym. Commun. 
28, 261 (1987) and "Highly conducting, iodine-doped arylene vinylene copolymers with dialkoxyphenylene units", R. W. Lenz, C.-C. Han and M. Lux, Polymer 30, 1041 (1989). Copolymers of type (ii) have been prepared by Lenz et al from the tetrahydrothiophenium salts of the two monomer units as described in "Synthesis and electrical conductivity of poly(1,4-phenylenevinylene-co-2,5-thienylenevinylene)", H.-K. Shim, R. W. Lenz and J.-I. Jin, Makromol. Chem. 190, 389 (1989) and have been mentioned by K. Y. A. Jen, R. L. Elsenbaumer, L. W. Shacklette (Allied Corp.), PCT Int. Appl. Pub. No. WO 8800954. These copolymers were produced as intermediate products to the final products prepared by Lenz, these final products being heavily doped with strong oxidants to enable conductivity measurements to be undertaken. The intermediate products were not of interest themselves. Furthermore, they were prepared under aqueous reaction conditions. Direct comparison of the materials prepared by Lenz et al and the materials prepared by the method of the preferred embodiments of the present invention showed that they were different for a number of reasons. First, the use of water/alcohol mixtures as a solvent allows better control over the relative proportions of fragments of each monomer observed in the final co-polymers. This is observed by IR spectroscopy and micro-analysis. Second, the use of water/alcohol in the present process allows selective substitution of the sulphonium leaving group with the alcohol. This occurs at a faster rate at benzylic carbons which are attached to an activated phenylene ring, for example, a dimethoxy substituted phenylene ring. This option is not open to the Lenz process. Evidence for substitution comes from nuclear magnetic resonance (NMR), infrared (IR), and photoluminescence studies and also from reactions observed on the homopolymers. For example, dimethoxy-PPV is prepared from a precursor polymer which has methoxy modifier groups. 
This polymer is in turn prepared according to the literature (T. Momii, S. Tokito, T. Tsutsui and S. Saito, Chem. Letters (1988), 1201) from the precursor polymer which has sulphonium leaving groups by exchanging the chloride anion with a p-toluenesulphonate anion and then reacting this material with methanol. It has been observed by the inventors that it is not necessary to exchange anions for the substitution reaction to occur in the dimethoxy-PPV precursor polymer. It has also been found by the inventors that the reaction of the sulphonium precursor polymer of PPV with methanol occurs at a much slower rate. The precursor co-polymers prepared by the method of the preferred embodiments of the present invention can therefore be better described by the structures of General Formulae (I) and (III). Third, the usual method of conversion of precursor polymers with methoxy modifier groups is by heating under acidic conditions. With the method of the present invention it is preferred to use heat treatment alone as this allows the methoxy modifier groups to remain in part uneliminated, thus segregating the conjugated material into discrete segments as described by General Formulae II and IV. This solution and method represent a significant advance over the art. Thin films prepared by this method are stable to the loss of the methoxy modifier groups (for example, thin films heated for 2 h had similar properties to thin films heated for 24 h). This is evidenced by IR and ultraviolet/visible (UV/vis) spectroscopy. Fourth, the use of water/alcohol mixtures increases the reaction rate of both monomeric units compared with just using water as the solvent during polymerisation. This is evidenced by comparison of the amount of acid necessary to neutralise the remaining unreacted base in Example 1 and in the examples described by Lenz. 
Finally, the quality of films cast from a methanol solution as opposed to an aqueous solution is far superior and easily reproducible, and gives higher light output in electroluminescent devices. The quality of films was determined by Dektak profilometry. In the following, when reference is made to ratios of PPV, dimethoxy-PPV, PTV, dimethyl-PPV, 2-methoxy-5-(2'-methylpentyloxy)-PPV and 2-methoxy-5-(2'-ethylhexyloxy)-PPV monomer units in both precursor and conjugated copolymer structures, the ratios are defined by the amounts of the corresponding monomer units used in the initial polymerisation reaction. For a better understanding of the present invention and to show how the same may be carried into effect, reference will now be made by way of example to the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram showing an example of the steps of a method for producing the copolymers prepared via a soluble precursor; FIG. 2a is a graph showing the absorption spectra of spin-coated thin films of PPV and copolymers of PPV, as the majority constituent, and dimethoxy-PPV (DMeOPPV) as converted at 220° C. in vacuo for 2 hours, in which:
Curve a is PPV homopolymer;
Curve b is 95% PPV to 5% DMeOPPV;
Curve c is 90% PPV to 10% DMeOPPV;
Curve d is 85% PPV to 15% DMeOPPV;
Curve e is 80% PPV to 20% DMeOPPV;
Curve f is 70% PPV to 30% DMeOPPV;
FIG. 2b is a graph showing the absorption spectrum of a spin-coated thin film of dimethoxy-PPV as converted at 220° C. in the presence of acid for two hours; FIGS. 3a and 3b are graphs showing respectively the emission spectra for thin spin-coated and thick solution-cast films of a copolymer produced from a 1:9 molar ratio of dimethoxy-PPV and PPV monomer units respectively, converted at 220° C. in vacuo for two hours; FIGS. 
4a and 4b are graphs showing respectively the emission spectra for thin spin-coated and thick solution-cast films of a copolymer produced from a 1:4 molar ratio of dimethoxy-PPV and PPV monomer units respectively, converted at 220° C. in vacuo for two hours; FIGS. 5a and 5b are graphs showing respectively the photoluminescence spectra for homopolymers of PPV and dimethoxy-PPV; FIGS. 6a, b and c are graphs showing respectively the absorption spectra of a homopolymer of PPV, and random copolymers of PPV and PTV produced respectively from 19:1 and 9:1 molar ratios of PPV and PTV monomer units, converted at 220° C. in vacuo for two hours; FIGS. 7a, b and c are graphs showing respectively the photoluminescence emission spectra for thick free cast films of a homopolymer of PPV; a copolymer produced from a 19:1 molar ratio of PPV and PTV monomer units respectively; and a copolymer produced from a 9:1 molar ratio of PPV and PTV monomer units respectively; FIGS. 8a, b and c are graphs showing the absorption spectra of spin-coated thin films of a homopolymer of PPV, and random copolymers of PPV and dimethyl-PPV produced respectively from 19:1 and 9:1 molar ratios of PPV and dimethyl-PPV monomer units as converted at 220° C. in vacuo for two hours; FIGS. 9a, b and c are graphs showing respectively the photoluminescence emission spectra of thick free cast films for the homopolymer of PPV; a copolymer produced from a 19:1 molar ratio of PPV and dimethyl-PPV monomer units respectively; and a copolymer produced from a 9:1 molar ratio of PPV and dimethyl-PPV monomer units respectively; FIGS. 10a, 11a and 12a are graphs showing the current/voltage characteristics of a thin film of respectively PPV; a copolymer produced from a 9:1 molar ratio of PPV and dimethoxy-PPV monomer units respectively; and a copolymer produced from a 9:1 molar ratio of PPV and thienylene vinylene monomer units respectively, the polymer films being spin-coated and converted at 220° C. 
for two hours in vacuo with hole injecting electrodes of oxidised aluminium, and with electron injecting electrodes of aluminium; FIGS. 10b, 11b and 12b are graphs showing the luminescence/current relationship for a thin film of respectively PPV; a copolymer produced from a 9:1 molar ratio of PPV and dimethoxy-PPV monomer units respectively; and a copolymer produced from a 9:1 molar ratio of PPV and thienylene vinylene monomer units respectively, the polymer films being spin-coated and converted at 220° C. for two hours in vacuo with hole injecting electrodes of oxidised aluminium, and with electron injecting electrodes of aluminium; FIG. 13 illustrates the electroluminescent quantum yield of random copolymers formed from PPV and dimethoxy-PPV monomer units as measured in thin film structures with hole injecting electrodes of oxidised aluminium, a spin-coated film converted at 220° C. in vacuo for two hours, and with electron injecting electrodes of aluminium; FIG. 14 illustrates the electroluminescent quantum yield of random copolymers formed from PPV and PTV monomer units as measured in thin film structures with hole injecting electrodes of oxidised aluminium, a spin-coated film converted at 220° C. in vacuo for two hours, and with electron injecting electrodes of aluminium; FIG. 15 illustrates the electroluminescent quantum yield of random copolymers formed from PPV and dimethyl-PPV monomer units as measured in thin film structures with hole injecting electrodes of oxidised aluminium, a spin-coated film converted at 220° C. in vacuo for two hours, and with electron injecting electrodes of aluminium. A film of copolymer of 10% DMeOPPV: 90% PPV was spin-coated and an area was capped with 500 Å of evaporated aluminium. The sample was then thermally converted for 12 hours at 220° C. in vacuo. The aluminium capping layer was removed by dissolving it in dilute alkali. FIGS. 
16 and 17 show the optical absorption spectra and photoluminescence spectra for two areas in a polymer film which have undergone different conversion treatments; FIGS. 18a, 18b, 18c are graphs showing the infrared spectra of precursors to random copolymers of PPV and MMP-PPV (2-methoxy-5-(2'-methylpentyloxy)-PPV) produced from 80:20, 90:10, and 95:5 w/w ratios of PPV and MMP-PPV monomer units, respectively; FIGS. 19a, 19b, 19c, 19d are graphs showing the absorption spectra of spin-coated thin films of random copolymers of PPV and MMP-PPV produced from 80:20, 90:10, 95:5 and 100:0 w/w ratios of PPV and MMP-PPV monomer units, respectively, as converted at 220° C. in vacuo for 12 hours; FIG. 20 is a graph showing the current/voltage characteristics of a thin film of a random copolymer of PPV and MMP-PPV produced from 90:10 w/w ratio of PPV and MMP-PPV monomer units as converted in vacuo at 220° C. for 12 hours on a substrate of ITO-coated glass and with calcium as a cathode; FIG. 21 is a graph showing the luminance/current characteristics of a thin film of a random copolymer of PPV and MMP-PPV produced from 90:10 w/w ratio of PPV and MMP-PPV monomer units as converted in vacuo at 220° C. for 12 hours on a substrate of ITO-coated glass and with calcium as a cathode; FIGS. 22a and 22b are graphs showing the infrared spectra of precursors of random copolymers of PPV and MEH-PPV (2-methoxy-5-(2'-ethylhexyloxy)-PPV) produced from 90:10 and 95:5 w/w ratios of PPV and MEH-PPV monomer units respectively; FIGS. 23a, 23b, 23c, 23d are graphs showing the absorption spectra of spin-coated thin films of random copolymers of PPV and MEH-PPV produced from 80:20, 90:10, 95:5 and 100:0 w/w ratios of PPV and MEH-PPV monomer units, respectively, as converted at 220° C. in vacuo for 12 hours; FIG. 24 is a 1H NMR spectrum of the copolymer described in Example 11 produced from 5:95 w/w ratio of PPV and MEH-PPV monomer units; FIGS. 
25a, 25b, 25c are graphs showing the infrared spectra of (c) MEH-PPV and of random copolymers of PPV and MEH-PPV produced from (a) 20:80 and (b) 5:95 w/w ratios of PPV and MEH-PPV monomer units, respectively, by the method described in Example 11; FIG. 26 is a graph showing the absorption spectra of spin-coated thin films of MEH-PPV and of random copolymers of PPV and MEH-PPV produced from 20:80 and 5:95 w/w ratios of PPV and MEH-PPV monomer units, respectively; FIGS. 27a and 27b are graphs showing the photoluminescence emission spectra of random copolymers of PPV and MEH-PPV produced from 20:80 and 5:95 w/w ratios of PPV and MEH-PPV monomer units, respectively; FIGS. 28a and 28b are graphs showing the electroluminescence spectra for random copolymers of PPV and MEH-PPV produced from 20:80 and 5:95 w/w ratios of PPV and MEH-PPV monomer units, respectively; FIGS. 29a and 29b are graphs showing the current/voltage characteristics and luminance/voltage relationship for a thin film of a random copolymer of PPV and MEH-PPV produced from 20:80 w/w ratio of PPV and MEH-PPV monomer units; thin films were spin-coated onto substrates of ITO coated glass and aluminium cathodes were evaporated on top; FIGS. 30a and 30b are graphs showing the current/voltage characteristics and luminance/voltage relationship for a thin film of random copolymer of PPV and MEH-PPV produced from 5:95 w/w ratio of PPV and MEH-PPV monomer units; thin films were spin-coated onto substrates of ITO coated glass and aluminium cathodes were evaporated on top; FIG. 31 is a scatter graph showing the quantum yield of random copolymers formed from PPV and MMP-PPV monomer units as measured in thin film structures with hole injecting electrodes of oxidised aluminium, a spin-coated film converted at 220° C. in vacuo for 12 hours, and with electron injecting electrodes of aluminium; FIG. 
31a is a graph showing the photoluminescence spectra of MEH-PPV and random copolymers of (a) MEH-PPV and PPV produced from (b) 95:5 and (c) 80:20 w/w ratios of MEH-PPV and PPV monomer units, respectively; FIG. 33 is a graph showing the absorption spectra of precursors of THT-leaving PPV (broken) and MeO-leaving PPV (solid); FIG. 34 is a graph showing the absorption spectra of THT-leaving PPV (broken) and MeO-leaving PPV (solid) after thermal conversion at 300° C. for 12 hours in vacuo; FIG. 35 is a graph showing the absorption spectra of thin spin-coated films of MeO-leaving PPV before (dotted) and after (solid) thermal conversion at 300° C. for 12 hours in vacuo; FIGS. 36(a) and (b) are graphs showing respectively the current-voltage and luminance-current characteristics of THT-leaving PPV as converted in vacuo at 220° C. for 12 hours on a substrate of ITO-coated glass and with aluminium as a cathode; FIGS. 37(a) and (b) are graphs showing respectively the current-voltage and luminance-current characteristics of MeO-leaving PPV as converted in vacuo at 300° C. for 12 hours on a substrate of ITO-coated glass and with aluminium as a cathode; FIGS. 39(a) to (c) show respectively the formal structural formulae of the random copolymers of PPV and DMeOPPV in precursor form; as converted thermally in vacuo; and as converted thermally in the presence of acid; FIG. 40 is a graph showing the absorption spectra of spin-coated thin films of random copolymers of PPV and DMeOPPV after thermal conversion in vacuo at 220° C. for 12 hours. The percentages on the figure represent the percentage of DMeOPPV monomer units w/w from which the precursor was formed; FIG. 41 is a graph showing the infrared absorption spectra of a 20% random copolymer of DMeOPPV and PPV in which: FIG. 41a is the precursor; FIG. 41b is the copolymer spin-coated on KBr and converted at 220° C. in vacuo for two hours; FIG. 41c is the same sample further converted for two hours at 220° C. 
in the presence of acid; FIGS. 42a, 42b, 42c, 42d, 42e are graphs showing respectively the infrared absorption spectra of PPV and the random copolymers of PPV, as the major constituent, and DMeOPPV produced from 95:5, 90:10, 80:20 and 70:30 molar ratios of PPV and DMeOPPV monomer units respectively; FIG. 43 is a graph showing the absorption spectra of spin-coated thin films of a 20% random copolymer of DMeOPPV and PPV converted in vacuo (a,b) and in the presence of HCl (c,d); FIG. 44 is a graph showing the variation of bandgap with different conversion conditions; the higher bandgap material (a) converted for 2 hours at 220° C. in vacuo, the lower bandgap material (b) converted for 12 hours at 100° C. in vacuo and subsequently four hours at 220° C., for a 15% random copolymer of DMeOPPV and PPV; FIG. 45 is a graph showing the photoluminescence spectra of a 30% random copolymer of DMeOPPV and PPV; FIG. 46 is a graph showing the photoluminescence emission spectra of a 30% random copolymer of DMeOPPV and PPV; FIG. 47 is a graph showing the absorption spectra of capped and uncapped 10% random copolymers of DMeOPPV and PPV; and FIG. 48 is a graph showing the photoluminescence emission spectra of capped and uncapped 10% random copolymers of DMeOPPV and PPV after thermal conversion. In each of FIGS. 45 to 48, a film of copolymer was spin-coated and an area was capped with 500 Å of evaporated aluminium. The sample was then thermally converted for 12 hours at 220° C. in vacuo. The aluminium capping layer was removed by dissolving it in dilute alkali. The lower energy absorption and photoluminescence spectra are from the capped regions of polymer. DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG. 1 illustrates in general terms a process for producing copolymers according to one embodiment of the invention. A mixture of two monomeric bis-sulphonium salts in a suitable solvent was polymerised by reaction with a base. 
The resultant soluble precursor copolymer was purified and then converted to a conjugated form by heat treatment. Examples of both the precursor copolymers and the partially conjugated copolymers are shown in the foregoing formulae drawings. The compound of General Formula I represents a precursor copolymer of the compound of General Formula II, which is a poly(para-phenylene vinylene-co-2,5-disubstituted-para-phenylene vinylene) copolymer. Similarly, the compound of General Formula III represents a precursor copolymer of the compound of General Formula IV, which is a poly(2,5-thienylene vinylene-co-disubstituted-para-phenylene vinylene) copolymer. In these compounds the extent of conjugation will be determined by the values of n, m, o and p. Clearly, for a partially conjugated copolymer (II) or (IV), o+p≧1, and so at least some of the vinylic groups will be saturated by inclusion of the modifier group represented by --OR'. The present invention is concerned in one aspect with improving the efficiency of radiative decay of excitons by trapping them on local regions of the polymer chain, which have lower energy gaps and thus are regions of lower potential energy for the excitons, so that the excitons are confined for a long enough period that they will decay radiatively. This has been achieved by the synthesis of a family of copolymers in which the units which make up the polymer chain are selected from two or more chemically different groups, which possess differing bandgaps in their respective homopolymers. Such polymers have been synthesised while still retaining all the desirable processing and materials properties of PPV. 
In the examples shown in this disclosure, para-phenylene vinylene is used as one of the components (usually the majority component) together with varying compositions of the following other components or their unconverted precursors, as discussed more fully below: ##STR3## The first three of these components are available in the form of their corresponding homopolymers, and the first two possess an energy gap lower than that of PPV. PPV shows the onset of π to π* optical transitions at 2.5 eV; poly(2,5-dimethoxy-para-phenylene vinylene), PDMOPV, at 2.1 eV and poly(2,5-thienylene vinylene), PTV, at 1.8 eV. On the basis of the known inductive effects of its substituents, poly(2,5-dimethyl-para-phenylene vinylene), PDMPV, might be expected to have a bandgap a little lower than that of PPV. In fact, dimethyl PPV (DMPPV) has a higher bandgap in its homopolymer than does PPV, contrary to the argument that the inductive effects of the methyl substituents will lower the bandgap of DMPPV relative to PPV. The true picture is that, due to the steric interaction of the dimethyl groups, the conjugated polymer backbone is distorted, decreasing the degree of electron delocalisation along the backbone and thus raising the bandgap with respect to PPV. This is evidenced by electron diffraction studies and quantum chemical calculations. Thus, the copolymers of PPV and dimethyl-PPV as prepared via a THT leaving group (FIG. 8) show a controlled shift in bandgap not because the DMPPV units are saturated, giving a copolymer of saturated and unsaturated units, but because DMPPV and PPV have genuinely different bandgaps and a genuine copolymer of the two is formed. That there are no saturated units is evidenced by the absence of the 1094 cm-1 stretch in the FTIR spectra of the precursors. The bandgap is hence still controllable by selection of the monomer unit ratio. There follow specific examples of processes in accordance with embodiments of the invention. 
EXAMPLE 1 A mixture of α,α'-bis(tetrahydrothiophenium chloride)-p-xylene (0.97 g, 2.8 mmol) and α,α'-bis(tetrahydrothiophenium chloride)-2,5-dimethoxy-p-xylene (0.12 g, 0.3 mmol) in methanol (7.1 ml) was deoxygenated with nitrogen and cooled with an ice-bath. A nitrogen deoxygenated aqueous sodium hydroxide solution (0.4M, 2.9 mmol, 7.1 ml) was added dropwise and the reaction mixture was left to stir for 1 hour at 0° C. under inert atmosphere. The reaction was terminated by addition of hydrochloric acid (0.4M, 1.0 ml). The viscous solution was then dialyzed against deoxygenated distilled water (3×1000 ml) over 3 days using cellulose membrane dialysis tubing with a molecular weight cut-off of 12,400 (supplied by Sigma Chemical Company Limited, Dorset, U.K.). The solvent was completely removed in vacuo at room temperature from the material remaining in the dialysis tubing. The residue was dissolved in dry methanol (15 ml). EXAMPLE 2 A mixture of α,α'-bis(tetrahydrothiophenium chloride)-p-xylene (0.91 g, 2.6 mmol) and α,α'-bis(tetrahydrothiophenium chloride)-2,5-dimethyl-p-xylene (0.10 g, 0.26 mmol) in methanol (9.5 ml) was deoxygenated with nitrogen and cooled with an ice-bath. A nitrogen deoxygenated ice-cold aqueous sodium hydroxide solution (0.4M, 2.9 mmol, 7.1 ml) was added dropwise and the reaction mixture was left to stir for 1 hour at 0° C. under inert atmosphere. The reaction was terminated by addition of hydrochloric acid (0.4M, 0.5 ml). The viscous solution was then dialyzed against deoxygenated distilled water (3×1000 ml) over 4 days using cellulose membrane dialysis tubing with a molecular weight cut-off of 12,400 (supplied by Sigma Chemical Company Limited, Dorset, U.K.). The solvent was completely removed in vacuo at room temperature from the material remaining in the dialysis tubing. The residue was dissolved in dry methanol (10 ml). 
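The feed-ratio arithmetic underlying Examples 1 and 2 can be sketched as follows. This is illustrative only and not part of the patent procedure; the quantities are those quoted in Example 1.

```python
# Sketch: feed-ratio arithmetic for Example 1 (quantities as quoted).
ppv_mmol = 2.8    # alpha,alpha'-bis(tetrahydrothiophenium chloride)-p-xylene
dmeo_mmol = 0.3   # 2,5-dimethoxy analogue
total_mmol = ppv_mmol + dmeo_mmol

# Mole fraction of the dimethoxy comonomer in the feed -- the "10%" figure
# used to label the resulting random copolymer.
dmeo_fraction = dmeo_mmol / total_mmol
print(f"DMeOPPV feed fraction: {dmeo_fraction:.1%}")

# Base equivalents: 2.9 mmol NaOH against 3.1 mmol of bis-sulphonium
# monomer, i.e. slightly under one equivalent per monomer unit.
naoh_equiv = 2.9 / total_mmol
print(f"NaOH equivalents per monomer: {naoh_equiv:.2f}")
```

The same arithmetic applied to Example 2 (2.6 mmol : 0.26 mmol) gives a 10% feed fraction of the dimethyl comonomer.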
EXAMPLE 3 A mixture of α,α'-bis(tetrahydrothiophenium chloride)-p-xylene (0.98 g, 2.8 mmol) and α,α'-bis(tetrahydrothiophenium chloride)-2-nitro-p-xylene (0.11 g, 0.33 mmol) in methanol (8.0 ml) was deoxygenated with nitrogen and cooled with an ice-bath. A nitrogen deoxygenated ice-cold aqueous sodium hydroxide solution (0.4M, 2.9 mmol, 8.0 ml) was added rapidly and the reaction mixture was left to stir for 3.5 hours at 0° C. under inert atmosphere. The reaction was terminated by addition of hydrochloric acid (0.4M, 1.0 ml). The viscous solution was then dialyzed against deoxygenated distilled water (3×1000 ml) over 4 days using cellulose membrane dialysis tubing with a molecular weight cut-off of 12,400 (supplied by Sigma Chemical Company Limited, Dorset, U.K.). The solvent was completely removed in vacuo at room temperature from the material remaining in the dialysis tubing. The residue was dissolved in dry methanol (4 ml). A mixture of α,α'-bis(tetrahydrothiophenium chloride)-p-xylene (0.90 g, 2.6 mmol) and α,α'-bis(tetrahydrothiophenium chloride)-2-methoxy-5-(2'-methylpentyloxy)-p-xylene (0.10 g, 0.21 mmol) in methanol (10 ml) was deoxygenated with argon and cooled with an ice-bath. An argon deoxygenated ice-cold aqueous sodium hydroxide solution (0.4M, 2.6 mmol, 6.9 ml) was added dropwise and the reaction mixture was left to stir for 1 hour at 0° C. under inert atmosphere. The reaction was terminated by addition of hydrochloric acid (0.4M, 3.0 ml). The viscous solution was then dialyzed against deoxygenated distilled water (3×2000 ml) over 3 days using cellulose membrane dialysis tubing with a molecular weight cut-off of 12,400 (supplied by Sigma Chemical Company Ltd., Dorset, U.K.). The solvent was completely removed in vacuo at room temperature from the material remaining in the dialysis tubing. The residue was dissolved in dry methanol (20 ml). IR spectra of copolymers: FIG. 18. 
EXAMPLE 8 Preparation of 1-methoxy-4-(2'-ethylhexyloxy)benzene Sodium metal (6.50 g, 283 mmol) was dissolved in dry methanol (100 ml) under Ar to give a 2.5M solution of sodium methoxide. A solution of 4-methoxyphenol (29.3 g, 236 mmol) in dry methanol (150 ml) was added and this mixture was heated to reflux for 30 min. After cooling to room temperature, a solution of 1-bromo-2-ethylhexane (46.5 g, 259 mmol) in dry methanol (150 ml) was added dropwise. The mixture was then heated to reflux for 18 hours. The solvent was removed in vacuo, the residue dissolved in ether (200 ml), washed with dilute aqueous sodium hydroxide (500 ml) and water (500 ml), dried over MgSO4 and concentrated in vacuo again. Distillation at 120° C./0.1 mm Hg afforded 24.2 g (43%) of 1-methoxy-4-(2'-ethylhexyloxy)benzene. EXAMPLE 9 Preparation of 1,4-bis(chloromethyl)-2-methoxy-5-(2'-ethylhexyloxy)benzene A mixture of α,α'-bis(tetrahydrothiophenium chloride)-p-xylene (0.92 g, 2.6 mmol) and α,α'-bis(tetrahydrothiophenium chloride)-2-methoxy-5-(2'-ethylhexyloxy)-p-xylene (0.11 g, 0.22 mmol) in methanol (10 ml) was deoxygenated with argon and cooled with an ice-bath. An argon deoxygenated ice-cold aqueous sodium hydroxide solution (0.4M, 2.6 mmol, 6.5 ml) was added dropwise and the reaction mixture was left to stir for 2.5 hours at 0° C. under inert atmosphere. The reaction was terminated by addition of hydrochloric acid (0.4M, 0.8 ml). The viscous solution was then dialyzed against deoxygenated distilled water (3×2000 ml) over 3 days using cellulose membrane dialysis tubing with a molecular weight cut-off of 12,400 (supplied by Sigma Chemical Company Ltd, Dorset, U.K.). The solvent was completely removed in vacuo at room temperature from the material remaining in the dialysis tubing. The residue was dissolved in dry methanol (20 ml). IR spectra of copolymers: FIG. 22. 
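The 43% yield quoted in Example 8 can be cross-checked with a short sketch. This is illustrative only; the molecular weights are computed here from standard atomic weights and are not quoted in the text.

```python
# Sketch: yield check for Example 8 (Williamson ether synthesis).
# Product: 1-methoxy-4-(2'-ethylhexyloxy)benzene, C15H24O2.
mw_product = 15 * 12.011 + 24 * 1.008 + 2 * 15.999  # ~236.4 g/mol
mw_phenol = 124.14                                   # 4-methoxyphenol (limiting)

limiting_mol = 29.3 / mw_phenol          # 29.3 g quoted -> ~0.236 mol
theoretical_g = limiting_mol * mw_product
yield_pct = 100 * 24.2 / theoretical_g   # 24.2 g isolated after distillation
print(f"yield: {yield_pct:.0f}%")        # consistent with the quoted 43%
```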
EXAMPLE 12 A solution of 1,4-bis(chloromethyl)-2-methoxy-5-(2'-ethylhexyloxy) benzene (0.95 g, 2.9 mmol) and α,α'-dichloro-p-xylene (0.05 g, 0.29 mmol) in dry tetrahydrofuran (20 ml) was added to a solution of potassium tert-butoxide (95%, 2.5 g, 22 mmol) in dry tetrahydrofuran (120 ml) over 15 min. The mixture was then stirred at room temperature for 21.5 hours. The resulting orange mixture was reduced to 10% of its volume and poured into methanol (500 ml). The precipitate was filtered under suction and recrystallised from tetrahydrofuran/methanol to afford 101 mg of polymer. 1H NMR (CD2Cl2): FIG. 24. IR spectra of copolymers: FIG. 25. The absorption spectra of MEH-PPV, 5% PPV/95% MEH-PPV and 20% PPV/80% MEH-PPV are shown in FIG. 26. The photoluminescence spectra (FIGS. 27a, 27b and 31a) show that, as expected, the luminescence is of higher energy with an increasing number of PPV units. EL devices were made in a standard configuration with ITO and aluminium contacts and the material showed electroluminescence (FIGS. 29a, 29b, 30a and 30b). The corresponding electroluminescence spectra are illustrated in FIGS. 28a and 28b. Both the 5% PPV/95% MEH-PPV and the 20% PPV/80% MEH-PPV had a turn-on voltage of about 8 V. EXAMPLE 13 The previous PPV EL devices were constructed with PPV prepared via a tetrahydrothiophenium (THT)-leaving precursor polymer (FIG. 32a) spun from methanolic solution. This precursor is unstable with respect to its conjugated product and is fully converted by heating at 220° C. for 2 hours (FIG. 32c). By replacing the THT-leaving group with a methoxy (MeO)-leaving group a more stable precursor (FIG. 32b) is formed. This can be easily processed by spin coating from a solution in chloroform (as can the THT-precursor from methanolic solution). Thermal conversion of the MeO-leaving PPV precursor at 300° C. in vacuo for 12 hours gives very little thermal elimination, leaving a copolymer of conjugated and unconjugated units (FIG. 32d). 
This is clearly seen from the absorption spectra of the THT-leaving PPV and the MeO-leaving PPV (FIG. 33). The absorption spectra of the precursors of both are very similar. On conversion, a significant change occurs in the absorption spectrum of the THT-leaving PPV (FIG. 34), whereas an insignificant change occurs in the absorption spectrum of the MeO-leaving PPV (FIG. 35). Both products are subsequently very stable against further change at room temperature and are very suitable as emitting materials in commercial EL devices. A device was made with the MeO-leaving PPV. An ITO substrate was cleaned in an ultrasound bath, first in acetone and subsequently in propan-2-ol. The precursor material was then spin-coated on the substrate. The device was then thermally converted at 300° C. in vacuo for 12 hours. A top contact of aluminium was then deposited to define an active area by vacuum deposition at a pressure of less than 6×10-6 torr to a thickness of 2-500 A. The performance of the device shows no deterioration relative to those made with PPV prepared via a THT leaving group precursor polymer, with a turn-on voltage below 10 V, a diodic current-voltage characteristic, a largely linear current-luminance response and a quantum efficiency improved by at least a factor of 2 (FIGS. 36 and 37). The emission spectrum of the MeO-leaving PPV is markedly different, with a peak emission at 2.5 eV compared with 2.25 eV in THT-leaving PPV. The emission is blue-green as opposed to the green-yellow of the THT-leaving PPV. This is again consistent with the MeO-leaving PPV as converted being a copolymer of conjugated and unconjugated sequences: emission coming from the small conjugated sequences but at a higher energy than in fully conjugated PPV (FIG. 37). Thus, by careful choice of conversion conditions, it is possible using copolymers of PPV to obtain electroluminescent emission of different colours and with improved efficiencies. 
EXAMPLE 14 The random copolymers of PPV and DMeOPPV provide a means of controlling the bandgap of a conjugated polymer and the potential for the construction of multicolour EL devices and channel waveguides. The copolymers are prepared initially in a precursor form which is soluble in methanol and consists of at least 3 distinct monomer units--a PPV precursor monomer unit with a THT-leaving group, a DMeOPPV monomer unit with a THT-leaving group and certainly a DMeOPPV monomer unit with a MeO-leaving group (formed by the methanolic solution substitutionally attacking the DMeOPPV THT-leaving units), as seen from the strong 1094 cm-1 absorption in the infrared absorption spectra of both the MeO-leaving homopolymer precursor of DMeOPPV and all the copolymer precursor polymers. There is possibly a small amount of a fourth monomeric unit--a PPV monomer unit with a MeO-leaving group (formed by the methanolic solution substitutionally attacking the PPV THT-leaving units) (FIG. 39(a)). Thin films (of the order of 1000 A, as used in EL devices) of the copolymers can be obtained by spin-coating the precursor solutions. Thermal conversion of the said films gives mechanically and thermally robust films. It is found that, by linearly varying the copolymer monomer unit ratio, the absorption edge of the converted copolymers may be accurately controlled (FIG. 40). Typically films are converted at 220° C. for 2 hours. More fully conjugated material has a lower bandgap. The controlled increase in bandgap with additional DMeOPPV to PPV units indicates an associated decrease in conjugation. FTIR data show that the copolymers are only partially conjugated as converted (FIG. 41). There is still a significant absorption at 1094 cm-1, indicating that monomeric units of DMeOPPV with the methoxy leaving group have not been converted to the conjugated form, leaving a copolymer of conjugated and unconjugated sequences. 
The degree of conjugation will thus vary with the number of DMeOPPV units present (FIG. 42). To convert fully the homopolymer of DMeOPPV with the methoxy leaving group it is necessary to heat the precursor in the presence of acid to catalyse the loss of the methoxy group. As the THT-leaving group leaves, acid is also generated. Thus, in the copolymers of PPV and DMeOPPV, it is possible further to convert the monomeric units of DMeOPPV with the methoxy leaving group to the conjugated form, so lowering the bandgap further and giving more control of the bandgap, either by internally trapping the self-produced acid (where excess acid may damage electrodes) or simply by heating the precursor films in the presence of acid. By converting a spun-coated film of a copolymer at 220° C. for 2 hours in an argon flow which has been passed through concentrated HCl, it is clearly seen that the absorption bandgap of the polymer is shifted to lower energy relative to a similar film converted at 220° C. in vacuo, indicating that the "acid" converted film is more fully conjugated. FTIR absorption measurements support this, with the disappearance of the 1094 cm-1 absorption only when the copolymer is "acid" converted. Again it is noted that 2 hours of conversion by either technique gives material stable against further change (FIGS. 43 and 41). By converting a spun-coated copolymer film on a glass substrate initially with a low temperature bake in vacuo at about 100° C., the diffusion rate of the acid ions out of the film is reduced, giving an enhanced probability of causing conversion of methoxy-leaving units. A subsequent bake at 220° C. in vacuo again yields material fully stable at room temperature. A considerable reduction in bandgap is thereby obtained relative to material heated directly to 220° C. in vacuo. Thus there is a further method for controlling the bandgap of these materials (FIG. 44). 
It should be emphasised that any method of controlling the bandgap in these conjugated polymers equally controls the colour of emitted light in an electroluminescent device (or the colour of photoluminescence under optical excitation), as the wavelength of the emitted light largely follows the bandgap of the material (an increase in the bandgap of the material causes a corresponding decrease in the wavelength of the emitted light). The spatial limit for this control of bandgap across the polymer film is of the order of the thickness of the polymer film, i.e. 1000 A. Another film of copolymer (30% copolymer) was spun-coated onto a glass substrate and, before thermal conversion, 500 A of aluminium was vacuum deposited at a pressure of less than 6×10-6 torr via a shadow mask. The sample was then baked in vacuo for 20 hours at 220° C. to facilitate full conversion. The sample was then etched in weak sodium hydroxide solution to remove the aluminium. The polymer film was unaffected by the etching process. However, the polymer is left patterned. Where the aluminium was, the polymer is to the eye a deeper orange colour, indicating a greater degree of conjugation due to enhanced trapping of the acid ions in the polymer film by the aluminium. This is borne out by the shift to lower energy of the absorption edge (FIG. 45) and the photoluminescence emission (FIG. 46) of the dark region originally covered by the aluminium. Thus the bandgap of the copolymers may again be controlled, and moreover in different regions of the same film, giving rise to the possibility of multicolour emission from a single EL device. Such patterning also has an application in the manufacture of channel waveguides. Another such patterned device was made as above (from 10% copolymer), with the same associated lowering of bandgap and absorption edge where the aluminium had been etched away (FIG. 47) and lowering in energy of the photoluminescence emission from the same area (FIG. 48). 
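The relation between bandgap and emission colour invoked above can be made concrete with the standard photon-energy conversion λ(nm) ≈ 1240/E(eV). The sketch below (not from the patent) applies it to the 2.5 eV and 2.25 eV emission peaks quoted earlier for the MeO- and THT-leaving PPV.

```python
# Sketch: standard conversion between photon energy and wavelength,
# lambda(nm) = 1239.84 / E(eV). An increase in bandgap therefore gives
# a corresponding decrease in emission wavelength.
def ev_to_nm(energy_ev: float) -> float:
    return 1239.84 / energy_ev

# Emission peaks quoted in the text for the two PPV precursor routes:
print(f"MeO-leaving PPV, 2.5 eV  -> {ev_to_nm(2.5):.0f} nm (blue-green)")
print(f"THT-leaving PPV, 2.25 eV -> {ev_to_nm(2.25):.0f} nm (green-yellow)")
```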
The refractive indices of the two regions at 633 nm were measured by coupling light into the first TE modes from a He--Ne laser. The refractive index of the less conjugated material was measured to be 1.564 (±0.002) and that of the more conjugated material (as converted under the encapsulation of aluminium) was measured to be 1.620 (±0.002). This result is in keeping with simple dispersion theory for propagation of light in a dielectric medium, such that the refractive index varies inversely with bandgap. Thus the patterning of the polymer also allows the spatial control of refractive index across a polymer film to a length scale of the order of 1000 A. For typical waveguiding structures (such as a channel waveguide) it is necessary to define channels of material, with a higher refractive index than that of the surrounding material, to a precision of the order of, but no smaller than, the wavelength of the light to be guided (i.e. for the 633 nm emission from a He--Ne laser, to a precision of the order of 6000 A). Clearly this method of patterning the copolymers of PPV and DMeOPPV is amenable to making waveguide structures, as high refractive index regions can be defined to a size smaller than the wavelength of light which is to be confined in the high index region and guided. In order to characterise more fully the nature of the resulting copolymers, the absorption spectra were obtained from samples which had been spun onto glass under the same conditions as discussed below for the construction of devices (step (c)) and subsequently thermally converted side by side with the corresponding devices (step (d)). The results thus provide a direct insight into the effect upon the polymer electronic structure of the copolymer composition. FIG. 
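The measured index contrast can be put in waveguide terms with the standard step-index numerical aperture, NA = sqrt(n_core² - n_clad²). The NA figure itself is not given in the patent and is computed here only as an illustration from the two quoted indices.

```python
import math

# Measured refractive indices at 633 nm (values from the text):
n_core = 1.620  # more conjugated region (converted under aluminium capping)
n_clad = 1.564  # less conjugated region (converted in vacuo)

index_contrast = n_core - n_clad  # ~0.056
# Standard step-index waveguide numerical aperture (illustrative only,
# not a quantity quoted in the patent):
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
print(f"index contrast: {index_contrast:.3f}")
print(f"numerical aperture: {numerical_aperture:.2f}")
```

An index contrast of this size is ample for confining 633 nm light in a patterned channel.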
2a shows a set of spectra for the compositions of the copolymers (of general structure II with R=OCH3) of para-phenylene vinylene, 2,5-dimethoxy-para-phenylene vinylene and unconverted precursor units that have been investigated in device structures and whose performance is exemplified below. The spectra have all been scaled to the same peak absorption to allow a ready comparison of the onsets for their π to π* optical transitions and the energies of their absorption peaks. Also shown for comparison is the absorption spectrum of the PDMOPV homopolymer obtained as previously described in "Polyarylene vinylene films prepared from precursor polymers soluble in organic solvents", S. Tokito et al, Polymer 31, 1137 (1990). There is a clear trend in these spectra that the energy of the absorption peak shifts to higher energy as the relative content, in the precursor copolymer (structure I with R=OCH3 and R1, R2 =--(CH2)4 --), of units of the precursor to 2,5-dimethoxy-para-phenylene vinylene is increased. This behaviour is contrary to expectation for a fully conjugated copolymer since, as discussed above and shown in FIGS. 2a and 2b, PDMOPV has a lower energy gap than PPV. In FIG. 2a, curve (a) is 100% PPV, (b) is 95% PPV/5% PDMOPV, (c) is 90% PPV/10% PDMOPV, (d) is 85% PPV/15% PDMOPV, (e) is 80% PPV/20% PDMOPV and (f) is 70% PPV/30% PDMOPV. Similarly this has been observed with 95% PPV/5% MMP-PPV, 90% PPV/10% MMP-PPV and 80% PPV/20% MMP-PPV (FIG. 19) and with 95% PPV/5% MEH-PPV, 90% PPV/10% MEH-PPV and 80% PPV/20% MEH-PPV (FIG. 23). The data are, however, consistent with incomplete conversion of the precursor units during the thermal treatment, resulting in remnant non-conjugated sequences that interrupt the electron delocalisation (structure II with R=OCH3), limiting the effective conjugation length and thus increasing the π to π* transition energy. 
These remnant sequences are mostly associated with the precursor to 2,5-dimethoxy-para-phenylene vinylene; however, there can also be methoxy leaving groups associated with the precursor to PPV, i.e. the methoxy leaving group precursor polymer to PPV, which will not be fully eliminated by thermal treatment (structure II with R=OMe). The lack of conversion of the methoxy precursors to 2,5-dimethoxy-para-phenylene vinylene and to para-phenylene vinylene under the thermal conversion procedure utilised here is ascribable to the difficulty of elimination of the methoxy leaving group, previously shown in "Polyarylenevinylene films prepared from precursor polymers soluble in organic solvents", S. Tokito, T. Momii, H. Murata, T. Tsutsui and S. Saito, Polymer 31, 1137 (1990) to require acid catalysis for its full removal. It should be emphasised that while the conversion of the precursors to PPV does in fact liberate acid as one of its by-products, in thin film copolymer samples converted by heating in vacuo the acid is too rapidly removed to be effective in driving the conversion of the precursor to 2,5-dimethoxy-para-phenylene vinylene to completion. In thick film samples prepared by static solution casting, however, the extent of conversion of the methoxy precursors is significantly enhanced. This is clearly evidenced in their colour (they are unfortunately too thick for optical absorption measurements) which, unlike the uniformly yellow thin film samples, becomes increasingly red as the content of the precursor to 2,5-dimethoxy-para-phenylene vinylene in the copolymers increases. It is also evidenced by the decrease in strength, during conversion, of the characteristic C--O stretch vibration in the infrared spectra that is associated with the methoxy modifier group on the benzylic carbon of the methoxy precursors to 2,5-dimethoxy-para-phenylene vinylene and para-phenylene vinylene. 
This behaviour can be understood as being due to the lower rate of loss of acid from the bulk of thick films, allowing greater interaction with the units of the methoxy precursors and consequently a greater extent of their conversion. Further evidence supporting these differences between the thin, spin-coated films and thicker solution cast films comes from their photoluminescence spectra. Discussion here is limited to the representative cases of the copolymers obtained following thermal conversion of thin spin-coated and thick solution cast films of the copolymer precursors prepared from (1) 10% of units of the precursor to 2,5-dimethoxy-para-phenylene vinylene/90% of units of the precursor to para-phenylene vinylene and (2) 20% of units of the precursor to 2,5-dimethoxy-para-phenylene vinylene/80% of units of the precursor to para-phenylene vinylene. In FIGS. 3(a) and (b) are shown respectively the emission spectra for thin spin-coated and thick solution cast films for case (1). In FIGS. 4(a) and (b) are shown the corresponding spectra for case (2). For comparison, FIGS. 5(a) and (b) show the photoluminescence spectra for the PPV and PDMOPV homopolymers; the latter was prepared via acid catalysed thermal conversion under an HCl-containing nitrogen gas flow so as to ensure substantial, if not wholly complete, conversion of the precursor units. It is immediately clear from the spectra in FIGS. 3 and 4 that in vacuo thermally converted spin-coated thin films have significantly different emission spectra from the thicker films obtained under identical conversion conditions and from the same precursor solutions but following static solution casting. Furthermore, whilst the spectra of the thin spin-coated samples lie at higher energy than in PPV (FIG. 5(a)), the thicker static solution cast samples show spectra that are red shifted relative to PPV and hence are shifting towards the emission spectrum seen in PDMOPV (FIG. 5(b)). 
It is thus clear that the electronic structures of the copolymers that are incorporated into device structures may be controlled by the selection of the constituent components present in the copolymer precursor and by the conversion conditions used in device fabrication. Changing some of the units of the precursor to para-phenylene vinylene to units of the precursor to 2,5-dimethoxy-para-phenylene vinylene can have two different effects depending on whether conversion is purely thermal or also involves acid catalysis. For purely thermal conversion there is incomplete elimination, such that the resultant conjugated segments are separated by remnant non-conjugated precursor units, causing the energy gap to increase relative to that of homopolymer PPV and the photoluminescence emission to be blue shifted, occurring at higher energy than in PPV. For acid catalysed thermal conversion the elimination is substantially complete, with the result that the energy gap decreases and the photoluminescence emission shifts to the red. A similar situation arises in the case of the copolymers of the precursor to para-phenylene vinylene and the precursor to 2,5-thienylene vinylene (structure II with R═H and R'═CH3), with the absorption spectra of thin spin-coated films of in vacuo thermally converted copolymers showing a shift in the position of the absorption peak to higher energy than seen in PPV (see FIG. 6) whilst the photoluminescence emission spectra for thick solution cast films converted under identical conditions show a red shift relative to that in PPV (see FIGS. 7(a), (b) and (c)). In FIG. 6, curve (a) is 100% PPV, (b) is 95% PPV/5% PTV and (c) is 90% PPV/10% PTV. Thus, the conversion of methoxy modifier group precursor units of 2,5-thienylene vinylene is enhanced in thick films by acid catalysed elimination driven by the acid by-product of the para-phenylene vinylene sulphonium-salt-precursor conversion. 
It was previously reported in "Optical Excitations in Poly(2,5-thienylene vinylene)", A. J. Brassett, N. F. Colaneri, D. D. C. Bradley, R. A. Lawrence, R. H. Friend, H. Murata, S. Tokito, T. Tsutsui and S. Saito, Phys. Rev. B 41, 10586 (1990) that the photoluminescence emission from the PTV homopolymer obtained by acid catalysed thermal conversion of the methoxy leaving group precursor polymer is extremely weak (with quantum yield less than or of order 10-5) and, when it can be observed, appears at energies above the onset for π to π* optical transitions. In the copolymers of the precursors to para-phenylene vinylene and 2,5-dimethyl-para-phenylene vinylene (structure (I) with R═OCH3 and R1, R2 ═--(CH2)4 --) the absorption spectra of in vacuo thermally converted thin spin-coated samples show a shift in the position of the absorption peak to higher energy than seen in PPV (see FIG. 8), whilst the photoluminescence emission spectra for thick solution cast films converted under identical conditions show little shift relative to that in PPV (see FIGS. 9(a), (b) and (c)). In FIG. 8, curve (a) is 100% PPV, (b) is 95% PPV/5% DMPPV and (c) is 90% PPV/10% DMPPV. The explanation of the higher bandgap energy observed in the absorption spectra of the thin spin-coated samples is that the as-formed copolymer contains disruption of the conjugation, due either to steric interactions of the methyl group with the vinylic proton twisting the sp2 -π-orbitals of the dimethyl-para-phenylene and the adjacent vinylene units out of planarity, or to incomplete elimination, in the absence of acid catalysed conversion, of the methoxy leaving groups from the methoxy precursors to 2,5-dimethyl-para-phenylene vinylene and para-phenylene vinylene, resulting in a copolymer structure containing conjugated segments separated from each other by unconverted non-conjugated precursor units, or to a combination of both. 
The inventors have trapped some of the acid released from a thin film during thermal conversion by capping a section of a film of the 10% dimethoxy-PPV/90% PPV precursor polymer, which had been spin-coated onto a glass slide (about 2.5 cm square), with a strip of evaporated aluminium (about 4 mm wide) before heat treatment. The precursor was then heated as described above to leave a film of thickness 100 nm and the aluminium was removed using dilute aqueous sodium hydroxide. There was a clear difference in colour between the area previously coated with aluminium (orange) and that where there had been no aluminium (yellow). The optical absorption spectra for the two areas are shown in FIG. 16, from which it can be seen that there is a shift in band gap towards the red of about 0.2 eV for the area previously coated with aluminium. The photoluminescence spectra for the two regions are shown in FIG. 17. This shows that we can control the extent of conjugation in different regions of the same polymer film so as to produce different emission colours from these different regions. Fabrication of Electroluminescent (EL) Structures Structures for an EL device require two electrodes on either side of the emissive region. For the examples shown here, devices have been fabricated by deposition of a series of layers onto a transparent substrate (glass), but other structures can also be made, with the active (i.e. emissive) area being defined by patterning within the plane of the polymer film. The choice of electrode materials is determined by the need to achieve efficient injection of charge carriers into the polymer film, and it is desirable to choose materials which preferably inject electrons and holes as the negative and positive electrodes respectively. In International Patent Application No. PCT/GB90/00584 (Publication No. 
PCT/WO9013148) is described the use of PPV as the emissive layer, and a choice of aluminium, amorphous silicon, silver/magnesium alloy as the negative electrode, and aluminium with a thin oxide coating, gold and indium oxide as the positive electrode. Many of these combinations were found to be satisfactory. In the present disclosure, where many different compositions of copolymers have been investigated, the choice of contact layers has generally been, for convenience, aluminium for the negative electrode and aluminium with an oxide coating for the positive electrode. Calcium has also been used as the negative electrode with indium/tin oxide as the positive electrode. It is to be expected that results obtained with this combination give a good indication of the behaviour to be expected with other choices for electrode materials. The procedure used for all devices in this work is as follows: (a) Clean glass substrates (microscope slides) in propan-2-ol reflux. (b) Deposit the bottom contact by evaporation of aluminium in a standard vacuum evaporator (base pressure 2×10-6 mbar). Four strips 1 mm wide were usually deposited, and the aluminium film thickness was chosen to give a conducting but semi-transparent film (9-12 nm). The aluminium was then exposed to air at room temperature, to allow formation of a surface oxide coating. (c) Deposition of the precursor polymer from solution in methanol by spin-coating, using a Dyna-Pert PRS14E spin-coater. This was performed inside a laminar-flow cabinet, with a spin speed of 2000 rev/min, and produced films of polymer in the thickness range 50-150 nm. (d) Thermal treatment of the precursor, to convert it to the conjugated polymer. This was carried out in an evacuated oven (base pressure 10-5 mbar) inside an argon-atmosphere glove box. The heat treatment used was 30 min to heat to 220° C., between 2 and 5 hours at 220° C., and 3 hours to cool to room temperature. 
(e) Evaporate the aluminium top contact, as in (b) above, but with the 1 mm wide strips rotated by 90°, to give a total of 16 independently addressable devices, each 1 mm². The aluminium thickness here was typically 50 nm, to ensure good coverage and to provide some encapsulation to keep oxygen away from the active parts of the device.
Measurements of Devices
Positive bias was applied to the bottom contact (aluminium with surface oxide coating) using a programmable voltage source (Keithley model 230). The current through the device was measured with a Keithley model 195 DVM connected between the top contact and ground. The light output was measured with a large-area silicon photovoltaic cell (1 cm² active area, Radio Spares catalogue number RS 303-674). Typical results for the PPV homopolymer, a copolymer obtained by in vacuo thermal conversion of spin-coated thin films of a precursor copolymer synthesised from 90% para-phenylene vinylene/10% 2,5-dimethoxy-para-phenylene vinylene precursor units, a copolymer obtained by in vacuo thermal conversion of spin-coated thin films of a precursor copolymer synthesised from 90% para-phenylene vinylene/10% 2,5-thienylene vinylene precursor units, and a copolymer obtained by in vacuo thermal conversion of spin-coated thin films of a precursor copolymer synthesised from 90% para-phenylene vinylene/10% 2-methoxy-5-(2'-methylpentyloxy)-para-phenylene vinylene precursor units are shown in FIGS. 10, 11, 12, 20 and 21, which present the current versus voltage and light output versus current characteristics. In FIG. 10 the bottom contact thickness is 110 Å, the top contact thickness is 1300 Å and the thickness of the electroluminescent layer is 900 Å. In FIG. 11 the corresponding thickness values are 120 Å, 1000 Å and 1450 Å, and in FIG. 13 they are 90 Å, 1370 Å and 1070 Å.
Similar current versus voltage characteristics were found for all devices, with a threshold voltage for current injection of around 25 to 40 V. A broadly linear relation between current and light output was also found (which allows the efficiency of the device to be characterised simply, by the gradient of this plot). It is found that the light output varies strongly with the choice of copolymer, and that some of the copolymers show very strongly enhanced efficiencies as measured against the efficiency of the PPV homopolymer. The variation of the quantum efficiency is shown as actually measured (current in photodetector/current through EL device) in FIGS. 13, 14, 15 and 31 for the copolymers obtained from the in vacuo thermal conversion of spin-coated thin films of precursor copolymers formed between the precursors to PPV and PDMOPV, the precursors to PPV and PTV, the precursors to PPV and PDMPV, and the precursors to PPV and MMP-PPV respectively. The plots show data for a large number of devices, and there is some scatter evident between devices of the same nominal composition. This may be due to inhomogeneities in the devices, such as non-uniform thickness, entrapped dust particles etc., and it is considered that the better values of efficiency at each composition give a true indication of the intrinsic behaviour of the EL structure. The PPV/PDMOPV copolymers show a very big improvement in efficiency for PDMOPV in the range 5-15%, with best results at 10%, for which the improvement over that obtained for PPV is by a factor of about 50. The PPV/PTV copolymers do not show such behaviour. This may be compared with the very low quantum yield for photoluminescence (less than or of the order of 10⁻⁵) that is found in the homopolymer, as in "Optical Excitations in Poly(2,5-thienylene vinylene)", A. J. Brassett, N. F. Colaneri, D. D. C. Bradley, R. A. Lawrence, R. H. Friend, H. Murata, S. Tokito, T. Tsutsui and S. Saito, Phys. Rev. B 41, 10586 (1990).
For the PPV/PDMPV copolymers an improvement over the PPV homopolymer is seen at 10% PDMPV, but the changes are less marked than with the PPV/PDMOPV copolymers. The maximum measured efficiencies for the devices shown here, obtained for the 90/10% PPV/PDMOPV copolymer, approach 10⁻² %. To obtain the real efficiency of the EL layer in the device it is necessary to correct for the efficiency of the photodetector (50%), the collection efficiency for the EL (24%) and the optical transmittance of the Al semitransparent layer (30%). With these factors included, it is estimated that the real efficiency of the EL layer in such a device is as high as 0.3%. This value compares very favourably with the performance of EL devices fabricated with other materials. As PL and EL are due to the same excited state in the polymer, as evidenced by the similarity in emission recorded for a single polymer film, a correspondence between the efficiencies for EL and for PL is broadly to be expected. However, there are some differences, as discussed below. The efficiency for luminescence is in part an intrinsic property of the material (that is to say, it has the same value for all samples), and possibly also dependent on the actual form of the sample and the nature of the interfaces to it. Thus, it might be expected for the thin films used for the EL structures that migration of the excited states to the interface between the polymer film and the electrode material might result in non-radiative decay of the excited state, and thus allow the efficiency for luminescence to fall below its "intrinsic" value. The effect, then, of restricting the motion of the excited states in the copolymers may be to improve the quantum yield both by improving the intrinsic properties of the polymer and by reducing the motion of excited states to the interface region.
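The correction described above is simple arithmetic; as a sketch (the 50%, 24% and 30% factors are those quoted in the text), the measured external efficiency converts to the internal figure as:

```python
# back out the internal EL efficiency from the measured value,
# using the correction factors quoted in the text
measured_eff = 1e-2        # measured quantum efficiency, in percent
detector_eff = 0.50        # photodetector efficiency
collection_eff = 0.24      # collection efficiency for the EL
al_transmittance = 0.30    # transmittance of the semitransparent Al layer

internal_eff = measured_eff / (detector_eff * collection_eff * al_transmittance)
print(f"internal EL efficiency ~ {internal_eff:.2f} %")  # ~0.28 %
```

which rounds to the "as high as 0.3%" figure stated above.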
Thus, the improvements in quantum yield that have been measured in EL for some of the copolymers are by a very large factor (×50), considerably larger than the factor by which the yield for PL is improved. There has been described a design technique and a method of manufacture for achieving especially efficient emission in conjugated copolymer electroluminescent structures through the use of the local modulation of semiconductor energy gap, between the highest occupied and lowest unoccupied energy levels, achieved in copolymers of two or more different monomer units. The modulation of energy gap is achieved by the use, in the copolymer structure, of chemically-different monomer units which in their individual homopolymer forms have different energy gaps. The effect of the energy gap modulation is to produce local regions that are potential energy minima and that act to confine the exciton states created by injection of electrons and holes from the contact layers. This confinement is beneficial for efficient radiative recombination of excitons through its reduction of the opportunities for migration of the excitons to non-radiative recombination sites subsequent to their initial generation and thus leads to a higher electroluminescent yield. The copolymers described herein are intractable, insoluble in common solvents and infusible at temperatures below the decomposition temperature, or they are soluble in a few organic solvents.
/* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.aliyuncs.actiontrail.model.v20171204; import java.util.List; import java.util.Map; import com.aliyuncs.AcsResponse; import com.aliyuncs.actiontrail.transform.v20171204.LookupEventsResponseUnmarshaller; import com.aliyuncs.transform.UnmarshallerContext; /** * @author auto create * @version */ public class LookupEventsResponse extends AcsResponse { private String requestId; private String nextToken; private String startTime; private String endTime; private List<Map<Object,Object>> events; public String getRequestId() { return this.requestId; } public void setRequestId(String requestId) { this.requestId = requestId; } public String getNextToken() { return this.nextToken; } public void setNextToken(String nextToken) { this.nextToken = nextToken; } public String getStartTime() { return this.startTime; } public void setStartTime(String startTime) { this.startTime = startTime; } public String getEndTime() { return this.endTime; } public void setEndTime(String endTime) { this.endTime = endTime; } public List<Map<Object,Object>> getEvents() { return this.events; } public void setEvents(List<Map<Object,Object>> events) { this.events = events; } @Override public LookupEventsResponse getInstance(UnmarshallerContext context) { return LookupEventsResponseUnmarshaller.unmarshall(this, context); } @Override public boolean checkShowJsonItemName() { return false; } }
According to a new report that looks at how continuing improvements to artificial intelligence and robotics will impact society, “robotic sex partners will become commonplace” by 2025. A large portion of the report also focuses on how AI and robotics will impact both blue- and white-collar workers, with about 50% of the polled experts stating that robots will displace more human jobs than they create by 2025. The report, called “AI, Robotics, and the Future of Jobs” and published by Pew Research, is a 66-page monster [PDF]. The report basically consists of a bunch of experts waxing lyrical about what the world will look like in 2025 if robots and AI continue to advance at the same scary pace of the last few years. Almost every expert agreed that robots and AI will no longer be constrained to repetitive tasks on a production line, and will permeate “wide segments of daily life by 2025.” The experts are almost perfectly split on whether these everyday robots will be a boon or a menace to society, though — but more on that at the end of the story. While the report is full of juicy sound bites from experts such as Vint Cerf, danah boyd, and David Clark, one quote by GigaOM Research’s Stowe Boyd caught my eye. By 2025, according to Boyd, “Robotic sex partners will be a commonplace, although the source of scorn and division, the way that critics today bemoan selfies as an indicator of all that’s wrong with the world.” Back in 2012 — a long time ago in today’s tech climate — I wrote that we’d have realistic sexbots by around 2017. These robostitutes won’t necessarily have human-level intelligence (that’s still another 10+ years away I think), but they’ll look, move, and feel a lot like real humans. In short, they’ll probably be good enough to satisfy most sexual urges. What effect these sexbots will have on human-human relationships, and the sex and human trafficking trades, remains to be seen. At a bare minimum, a lot of sex workers will probably lose their jobs. 
If lovotics — the study of human-robot relationships — becomes advanced enough and people start falling in love with their sexbots (or rather partnerbots), then there could be some wide-ranging repercussions. But, back to the bigger story: Will advanced AI and robots make the world a better place or not? Basically everyone agrees that robotics and AI are going to displace a lot of jobs over the next few years as the general-purpose robot comes of age. Even though these early general-purpose bots (such as Baxter in the video below) won’t be as fast or flexible as humans, they will be flexible enough that they can perform various menial tasks 24/7 — and cost just a few cents of electricity, rather than minimum wage. Likewise, self-driving vehicles will replace truck drivers, taxis, pizza delivery kids, and so on. [Read: Virtual reality and the future of sex.] Displacing jobs with robots isn’t necessarily a bad thing, though. Historically, robots have been a net creator of jobs, as they free up humans to work on more interesting things — and invent entirely new sectors to work in. More robots also means less drudgery — less tilling the fields, less stop-start commute driving — and in theory more time spent playing games, interacting with your family, etc. On the other hand, the robot jobocalypse is likely to happen very quickly — so fast that our economic, education, and political systems may struggle to keep up. Previously robots mostly replaced blue-collar workers, but this next wave will increasingly replace skilled/professional white-collar workers. A lot of these specialized workers may find themselves without a job, and without the means to find a new one. We may suddenly see a lot of 50-year-olds going back to university.
Population dynamics and regulation in the cave salamander Speleomantes strinatii. Time series analysis has been used to evaluate the mechanisms regulating population dynamics of mammals and insects, but has been rarely applied to amphibian populations. In this study, the influence of endogenous (density-dependent) and exogenous (density-independent) factors regulating population dynamics of the terrestrial plethodontid salamander Speleomantes strinatii was analysed by means of time series and multiple regression analyses. During the period 1993-2005, S. strinatii population abundance, estimated by a standardised temporary removal method, displayed relatively low fluctuations, and the autocorrelation function (ACF) analysis showed that the time series had a noncyclic structure. The partial rate correlation function (PRCF) indicated that a strong first-order negative feedback dominated the endogenous dynamics. Stepwise multiple regression analysis showed that the only climatic factor influencing population growth rate was the minimum winter temperature. Thus, at least during the study period, endogenous, density-dependent negative feedback was the main factor affecting the growth rate of the salamander population, whereas stochastic environmental variables, such as temperature and rainfall, seemed to play a minor role in regulation. These results stress the importance of considering both exogenous and endogenous factors when analysing amphibian long-term population dynamics.
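To illustrate the kind of analysis described, here is a minimal numpy sketch on simulated data (not the authors' actual PRCF code): the autocorrelation function of log-abundance is computed directly, and first-order density dependence shows up as a negative slope of the per-capita growth rate R_t = ln(N_t/N_{t-1}) against ln N_{t-1}.

```python
import numpy as np

rng = np.random.default_rng(0)
# simulate log-abundance with first-order negative feedback:
# x_{t+1} = 1.0 + 0.5 * x_t + noise  (feedback coefficient 1 + b, b = -0.5)
x = [3.0]
for _ in range(60):
    x.append(1.0 + 0.5 * x[-1] + rng.normal(0.0, 0.1))
x = np.array(x)
R = np.diff(x)                      # per-capita growth rate ln(N_t / N_{t-1})

def acf(series, max_lag):
    """Sample autocorrelation function at lags 0..max_lag."""
    s = series - series.mean()
    var = np.dot(s, s)
    return np.array([np.dot(s[:-k], s[k:]) / var if k else 1.0
                     for k in range(max_lag + 1)])

# first-order feedback: slope of R_t on ln N_{t-1} (negative => regulation)
slope = np.polyfit(x[:-1], R, 1)[0]
print("lag-1 ACF of log-abundance:", round(acf(x, 3)[1], 2))
print("density-dependence slope:", round(slope, 2))
```

On this simulated series the recovered slope is close to the true feedback of -0.5, the signature the PRCF analysis in the study detects.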
Epidemiology of Acute Lower Respiratory Tract Infection in HIV-Exposed Uninfected Infants. Increased morbidity and mortality from lower respiratory tract infection (LRTI) has been suggested in HIV-exposed uninfected (HEU) children; however, the contribution of respiratory viruses is unclear. We studied the epidemiology of LRTI hospitalization in HIV-unexposed uninfected (HUU) and HEU infants aged <6 months in South Africa. We prospectively enrolled hospitalized infants with LRTI from 4 provinces from 2010 to 2013. Using polymerase chain reaction, nasopharyngeal aspirates were tested for 10 viruses and blood for pneumococcal DNA. Incidence for 2010-2011 was estimated at 1 site with population denominators. We enrolled 3537 children aged <6 months. HIV infection and exposure status were determined for 2507 (71%), of whom 211 (8%) were HIV infected, 850 (34%) were HEU, and 1446 (58%) were HUU. The annual incidence of LRTI was elevated in HEU (incidence rate ratio [IRR] 1.4; 95% confidence interval [CI] 1.3-1.5) and HIV infected (IRR 3.8; 95% CI 3.3-4.5), compared with HUU infants. Relative incidence estimates were greater in HEU than HUU, for respiratory syncytial virus (RSV; IRR 1.4; 95% CI 1.3-1.6) and human metapneumovirus-associated (IRR 1.4; 95% CI 1.1-2.0) LRTI, with a similar trend observed for influenza (IRR 1.2; 95% CI 0.8-1.8). HEU infants overall, and those with RSV-associated LRTI had greater odds (odds ratio 2.1, 95% CI 1.1-3.8, and 12.2, 95% CI 1.7-infinity, respectively) of death than HUU. HEU infants were more likely to be hospitalized and to die in-hospital than HUU, including specifically due to RSV. This group should be considered a high-risk group for LRTI.
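For readers unfamiliar with the incidence rate ratio statistics quoted above, a minimal sketch of the point estimate and its approximate Wald CI on the log scale (the counts below are invented for illustration, not the study's data):

```python
import math

def irr_ci(cases_exp, pt_exp, cases_unexp, pt_unexp, z=1.96):
    """Incidence rate ratio with an approximate Wald CI on the log scale.
    pt_* are person-time denominators (e.g. infant-years)."""
    irr = (cases_exp / pt_exp) / (cases_unexp / pt_unexp)
    se = math.sqrt(1 / cases_exp + 1 / cases_unexp)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# hypothetical counts: 140 cases per 1000 infant-years (exposed)
# vs 100 per 1000 (unexposed) gives IRR 1.4
print(irr_ci(140, 1000, 100, 1000))
```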
My old system, whose specs are in my signature, died, I think due to a bad PSU. Now I am using my sister's computer. There is a problem: it is running hot, real hot. I am using Everest to monitor the temps and it says that the CPU, a 2.6GHz Celeron 4A, is running at an exceptionally cool 35C (95F), but it also says the mobo is running at a scorching 61C (142F). What could be causing this? It is not lagging at all and performance is not slowing down. It just scares me that it is that hot.

Your Northbridge has a fan or heatsink on it, right? If it has a fan, make sure it is cooling properly. Also, you can just get a system fan (120mm) and aim it onto the motherboard to aid in cooling. Motherboards should really run between 40-50C, maybe 55C at most.

That is pretty warm, it might damage the northbridge controller's wafers if you let that continue. BAD JUJU when silicon wafer transistor series get damaged... Get a better northbridge cooler for it.

The northbridge has a heatsink, yet it is not hot to the touch, it feels cool if anything. And I cannot find any place on the mobo that feels hot to the touch. Maybe there is a problem with the integrated thermometer?

I checked the BIOS; apparently there is a temp sensor for my CPU, but one does not exist for my mobo, so I think Everest is just making it up. Nowhere in my BIOS could I find the temp for my mobo, only for my CPU.
A Market Timing Report based on the 3-14-2014 Close published Sunday March 16, 2014 NOTE: The SP500 Index Chart will be out by Sunday evening. Gold is rallying again as interest rates fall within the most recent narrow band. Let’s review the GLD and 10 Year Treasury Yield charts and then a comparison chart between the two. GLD chart: Gold Rally Continues. Ten Year Treasury Note Chart (TNX): Rates fall within range. And how have they been moving versus each other? (In the chart below, GLD is plotted against TNX which is in magenta.) When rates were rising above the most recent narrow range (far left on chart below), gold fell. Then when rates came back down, gold rallied. Then as rates rose slightly, gold rose. Then as rates fell, gold went sideways. Then as rates fell further, gold rose. In sum, gold has been going up or sideways DESPITE what rates have done, except when rates were headed toward 3%. Then gold fell in price. Should rates suddenly move to a significantly higher level, things could change. Even a rapid climb toward 3% could cause trouble for the metals. Rates on the 10 Year Treasury will eventually rise above 3% as the economies of the world improve, so we need to keep a sharp eye on the metals vs. the inflation rate. The only way rates can rise rapidly and much higher with gold rising too is if inflation starts to get out of hand. If rates were to rise too fast without inflation to go with it, gold would suffer. So instead of “hoping” the metals rally will keep going, we’ll follow the market’s direction, while we keep an eye on the balance between inflation and interest rates. Gold vs. 10 Year Treasury Yield To keep up with my latest thoughts on gold and interest rates on my access page, you’ll need the password, which you can get here: I thank Worden Brothers for the chart system I use to post these charts.
If you want to know more about the charting system I use every day, go to my “Other Resources” page here: Other Resources It makes it much easier to follow along with me if you can see the charts and manipulate them on your own computer, so it’s a great investment to have an excellent charting system. Look for updates on the main chart tracking pages this week as I feel they are needed and comments via Twitter @SunAndStormInv (see link to upper right).
Ramsons (Wild Garlic) Allium ursinum. Belongs to the onion family. A perennial (sometimes biennial) onion relative that grows wild in shady, moist and nutrient-rich top soil, especially in deciduous forests. It is quite rare, but when it thrives, it can occur in large populations covering the ground in spring. Wild garlic is harvested in spring and early summer, is 15-45 cm high and has, unlike the chives, large broad leaves and white flowers. Everything can be used: the elongated bulb, the leaves and also the flower stalks with closed buds. It has a nice, mild garlic flavour and similar medicinal properties. It is propagated by seeds and bulbs. SOWING: The seeds need to be brought out of dormancy by cold spells. Either sow directly in the autumn or very early in spring. For normal spring sowing, some weeks in the refrigerator's freezer compartment, sown in light soil, will make the seeds believe that it has been winter. SEEDS: 1 g is about 150 seeds. Ramsons, organic seed. Temporarily unavailable. Belongs to the onion family; it is a perennial (sometimes biennial) relative of the onion which grows wild in shadowy and nutrient- and humus-rich soil, predominantly in deciduous forests. It is quite...
1. Introduction {#s0005}
===============

The banking sector is one of the most complex industries, and it is one of the main contributors to a country's wealth ([@b0485]). [@b0640] indicated that this sector plays an increasingly critical role in the development of the financial system. Given the relevance of these institutions, bank performance has been a matter of great interest for various stakeholders, such as regulators, customers, investors, and the general public ([@b0200]), especially after the economic collapse of 2007--8 ([@b0395]). In the past, analysis of bank performance was done mainly through financial indices, which, according to [@b0715], are unsatisfactory measures of performance. With the advances in operational research techniques, this scenario changed with the emergence of techniques such as Data Envelopment Analysis (DEA), which is currently one of the most popular techniques for analysing the efficiency of organizations ([@b0685]). DEA consists of a non-parametric mathematical linear programming technique whose objective is to analyse a group of homogeneous production units known as Decision Making Units (DMUs) --- which contain the same inputs and outputs --- to identify the most efficient organizations and indicate the actions that inefficient ones must take to become efficient. It does not require specifications regarding the type of production frontier[1](#fn1){ref-type="fn"}, which is constructed based on empirical observations. Little information is needed *a priori* to apply the model.
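To make the linear-programming formulation concrete, the following is a minimal sketch of the input-oriented CCR envelopment programme, solved with scipy on three invented DMUs (the data and names are ours, purely for illustration): for each DMU we minimise θ subject to a non-negative combination of all DMUs using no more than θ times its inputs while producing at least its outputs; θ = 1 marks an efficient DMU.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR envelopment LP for DMU j0.
    X: (n, m) input matrix, Y: (n, s) output matrix; returns theta."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    # inputs:  sum_j lam_j x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    # outputs: -sum_j lam_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# three toy DMUs, one input and one output each
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
for j in range(3):
    print(f"DMU {j}: theta = {ccr_efficiency(X, Y, j):.2f}")
```

DMUs 0 and 2 come out efficient (θ = 1), while DMU 1, which uses twice the input of DMU 0 for the same output, scores θ = 0.5.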
The strengths of DEA include the following: it is effective in dealing with complex production processes ([@b0515]); it has the ability to work with inputs and outputs at different measurement scales ([@b0595]); it has the ability to analyse each DMU individually, comparing them with other DMUs, with the optimization process performed for all DMUs in the sample ([@b0635]); and it can identify inefficient DMUs, providing an indication of benchmarks ([@b0005]). Despite their great popularity, traditional DEA models have been criticized for treating the production process like a black box, in which input variables are transformed by the DMU's production process into the output variables without an explicit modelling of how this transformation occurs ([@b0180]). Additionally, [@b0485] emphasized that most of the rejections by administrators of suggestions for improvements made by the DEA are due to the model not considering environmental factors outside the organization, which administrators have no control over. In other words, the environment in which the bank is inserted is not considered in the analysis. Often a bank is regarded as efficient simply because it is in a more favourable environment. Consequently, [@b0310] emphasized that DEA indices, although consistent, are biased. Seeking to improve the application of DEA, two-stage DEA models have been gaining prominence in the literature, precisely because they make it possible to overcome the aforementioned limitations. [@b0160] analysed the most popular keywords in DEA studies from 2015 and 2016 and found that the terminology two-stage DEA was the second-most popular keyword, while the banking sector was one of the most popular fields of study for the application of DEA. Thus, two-stage DEA is an emerging topic in the literature that, to the best of our knowledge, still lacks a systematic review. Nevertheless, it is important to introduce a caveat regarding the two-stage DEA terminology.
When searching for articles with this terminology, different models are identified, many of them with very different purposes. Although the distinction between these models can easily be made by reading the articles, often this is not so evident when only reading the abstracts. These different models also make it difficult to clearly define exactly what the terminology two-stage DEA represents. In the study conducted by [@b0160], it was not clear which type of two-stage DEA was becoming popular. Hence, when searching for articles addressing two-stage DEA, one finds a mixture of different models, all of which are called two-stage DEA. When the production process is broken down into several subprocesses, these models are categorized here as internal two-stage DEA models; in turn, the approaches in which two analysis procedures are used, with DEA in the first stage and some other technique, either parametric or not, in the second stage, are called external two-stage DEA models. While the internal models enable the black box problem to be overcome, the external models enable a more complete analysis of DMUs. In this context, although two-stage DEA is a relevant research topic ([@b0160]) used in a large number of studies, the results of our survey indicate that the terminology is still not well-established. In addition, many studies in expert and intelligent systems have as their main objective advancing the methodological use of two-stage DEA models, as in [@b0300], [@b0455], [@b0470]. However, to the best of our knowledge, these studies did not synthesize and debate the problems raised here. Therefore, our survey of the literature contributes to the topic by consolidating the state of the art of two-stage DEA models and by pointing out challenges and directions for future studies.
We highlight that other review papers exploring expert and intelligent systems, such as [@b0260], [@b0275], [@b0705], [@b0720], reflect the relevance of consolidating studies aimed at better understanding and mapping specific models and techniques. Therefore, our literature review seeks to carry out a critical and in-depth analysis of two-stage DEA. More specifically, we discuss the terminology of two-stage DEA models and provide researchers with gaps in the literature, which are opportunities for studies that further advance the current knowledge on the topic. Accordingly, given the various studies described as two-stage, what exactly does the literature consider to be two-stage DEA models? When referring to two-stage models in banks, what has been published most on this? What is the most frequent technique used in the second stage of external models? How has this topic been discussed over the years? Such questions emerge when analysing the publications on two-stage DEA models in banks. The contribution of the present study is that it proposes solutions to these questions. We believe that with this review, we can map the existing knowledge on this research theme and stimulate a debate on this emerging topic in the literature. Regarding the applicability of two-stage DEA models, whether internal or external, this study contributes to the literature by fully exploring how they are applied in banks, identifying the most frequent scope, understood here as the approach used to select the variables in the analysis of banks, which in turn determines the model's input and output variables. Furthermore, we discuss the technique most used in the second stage of external models, the most used DEA model, the economic context and the continents of the most studied banking sectors, the type of study and its objectives, which authors produce the most research on the topic, and which publications are the most relevant.
The discussion regarding the research scope is of great importance because --- as shown by [@b0285] --- considering a variable as an input or an output significantly changes which DMUs are indicated as efficient by the model.[2](#fn2){ref-type="fn"} Hence, considering that the main variable selection approaches in the literature analyse distinct bank functions and, therefore, assume different inputs and outputs, the importance of comparing studies that have used similar approaches becomes evident. Another controversial topic in the literature that requires further discussion is the impact of exogenous variables on efficiency. The motivation to address this particular aspect of external two-stage DEA models is centred on the researchers' recognition that environmental factors or exogenous variables can significantly influence the efficiency scores measured by DEA ([@b0205]). Despite the growing interest, the results in the literature regarding this impact have been quite ambiguous in that an environmental variable can have a positive impact on the bank in its role of financial intermediary but a negative impact on the function of offering services to clients, for example, which makes consolidation in the literature difficult. For this reason, as discussed previously, the analysis and comparison of the impact of an exogenous variable on efficiency should consider the scope of the study, that is, the approach used to select the variables. This review will make the results found in the literature regarding these impacts clearer by clarifying the approaches used in the studies and the respective effects of the variables on efficiency. We emphasize that the goal of our manuscript is not to defend the two-stage DEA model or one technique over the other, but rather to present a discussion on the topic, focusing on bank efficiency studies. 
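The impact of exogenous variables discussed above is what the external second stage tries to quantify. As a minimal sketch (the efficiency scores and the environmental covariate below are invented, and plain least squares is used purely for illustration; published studies more often use Tobit regression or the Simar-Wilson bootstrap, given the bounded scores):

```python
import numpy as np

# stage 1 output: DEA efficiency scores for six hypothetical banks
scores = np.array([1.00, 0.82, 0.75, 0.90, 0.60, 0.70])
# a candidate environmental (exogenous) variable, e.g. regional GDP growth (%)
env = np.array([3.1, 2.4, 1.8, 2.9, 1.0, 1.5])

# stage 2: regress the scores on the environmental variable
A = np.column_stack([np.ones_like(env), env])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
print(f"intercept = {coef[0]:.3f}, slope = {coef[1]:.3f}")
# a positive slope suggests the environment favours measured efficiency
```

Whether such a slope reads as "favourable" depends on the scope of the study, which is exactly why the variable-selection approach must be reported alongside the second-stage results.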
More specifically, we analyse diverse challenges for the use of two-stage DEA, including the terminology itself, and the statistical drawbacks, such as the separability problem. We argue that a systematic survey of the literature on this topic is urgent, since despite all the challenges, the number of studies has been quite large, as depicted in [@b0155], [@b0160]. Our work contributes to the discussion of two-stage DEA models, presenting the state of the art on the subject as well as identifying the challenges in studies using this technique, especially in the banking sector. Finally, this study is the first conducted on the banking sector that adopts the systematic literature review method developed by [@b0390] and later disseminated by [@b0440], [@b0305], [@b0555], [@b0280]. As highlighted by [@b0440], [@b0305], this method allows us to:
•Identify the main results of the studies analysed and relate them to emerging issues in the theme researched;
•Fully discuss and present the latest innovations regarding the key topics of the theme;
•Identify possible gaps and challenges for future research.
The article is structured as follows: a brief contextualization of two-stage DEA models is performed in Section [2](#s0010){ref-type="sec"}; the research method is presented in Section [3](#s0025){ref-type="sec"}; the classification and coding criteria for the analysed articles are described in Section [4](#s0030){ref-type="sec"}; the results of the bibliometric analysis and coding are discussed in Section [5](#s0035){ref-type="sec"}; and finally, the conclusions are provided in Section [6](#s0055){ref-type="sec"}.

2. Brief summary of two-stage DEA model in banks {#s0010}
================================================

Despite the growing interest in two-stage DEA models, as highlighted in [@b0160], several aspects remain ambiguous, including the terminology two-stage DEA model itself.
The literature consists of two types of models that are completely different from each other, with distinct purposes, but that are both classified as two-stage DEA models. Given this, herein, we intend to briefly discuss the different approaches and techniques described as two-stage DEA models, categorizing them as either external two-stage DEA models or internal two-stage DEA models. It is worth highlighting that both internal and external two-stage DEA models have emerged as a response to the limitations of conventional DEA models. In other words, [@b0180] stated that variations in traditional DEA models seek to *suit the application*. Accordingly, regardless of the purpose, whether two-stage DEA models involve intermediate variables (internal) or the use of some technique after the application of DEA (external), the analysis will be closer to reality. The use of intermediate variables overcomes the black box problem, whereas the application of another technique after DEA enables a more complete analysis.

2.1. Internal two-stage DEA model {#s0015}
---------------------------------

One of the main limitations of traditional DEA models is that they treat the production process like a black box, in which the input variables are transformed within this box to give the output variables. Although this is one of the advantages of DEA, i.e., it reveals efficiency without needing to impose a structure on the transformation process ([@b0180]), in various applications a more structured model is needed. One clear example of this situation is the banking sector. Because it is a highly complex sector ([@b0515]), an improved DEA model is needed to encompass its production process. Thus, to overcome the black box problem, various researchers have sought to improve traditional DEA models to enable the analysis to be closer to reality.
Internal two-stage DEA models represent such an effort --- the two stages of the model refer to stages of the production process. Traditional models have only input and output variables, and based on the relationship between these variables, the DEA indicates which DMUs are efficient; in internal two-stage DEA models, the production process is divided into two subprocesses, in which the outputs of the first stage consist of the inputs of the second stage. [Fig. 1](#f0005){ref-type="fig"} shows an example of the production process with intermediate variables. It is worth emphasizing that not all the outputs of the first stage will necessarily be the inputs of the second stage --- some outputs may exit or some inputs may enter the system.

Fig. 1. Internal two-stage DEA model with $X_{i}$ inputs, $Y_{i}$ outputs and $W_{i}$ intermediate variables.

The first advance in this direction was made by [@b0525] --- the first study to apply internal two-stage DEA models in banks --- which aimed to analyse the profitability and marketability[3](#fn3){ref-type="fn"} of the 55 largest commercial banks in the United States. Accordingly, efficiency was measured in the first stage, considering profitability, with three inputs, i.e., number of employees, assets, and stockholders' equity, and two outputs, i.e., profit and revenues. The variables profit and revenues --- outputs of the first stage --- are the input variables of the second stage --- thus referred to as intermediate variables. The outputs of the second stage are market value, total return to investors, and earnings per share. In this stage, the bank's efficiency in converting its profits and revenues into marketability was analysed. [@b0010] studied twenty-one Bangladeshi banks from 2005 to 2008 through a two-stage network slacks-based inefficiency DEA model. The authors identified that the black box performance models had divergent results from the network DEA.
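The independent treatment of the two stages described above can be sketched by applying a standard input-oriented CCR model separately to each stage: first with the original inputs and the intermediate variables, then with the intermediates and the final outputs. The sketch below is illustrative only --- the data are hypothetical and do not come from any of the studies cited; it uses `scipy.optimize.linprog` to solve each envelopment linear programme.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y):
    """Input-oriented CCR (constant returns to scale) efficiency scores.

    X: (n_dmu, n_inputs) array, Y: (n_dmu, n_outputs) array.
    Returns one efficiency score per DMU, in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]          # minimise theta
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]                  # sum_j lambda_j x_ij <= theta x_io
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                  # sum_j lambda_j y_rj >= y_ro
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
        scores[o] = res.x[0]
    return scores

# Hypothetical data for 4 banks (not taken from any cited study)
X = np.array([[2.0], [4.0], [8.0], [5.0]])   # stage-1 inputs, e.g. employees
Z = np.array([[2.0], [2.0], [8.0], [5.0]])   # intermediates, e.g. revenues
Y = np.array([[4.0], [2.0], [8.0], [10.0]])  # stage-2 outputs, e.g. market value

stage1 = ccr_input_oriented(X, Z)  # efficiency at generating revenues
stage2 = ccr_input_oriented(Z, Y)  # efficiency at converting revenues
```

NDEA models, discussed below, instead embed both stages in a single mathematical programme, so the conflicting treatment of the intermediate variables is handled inside the model rather than ignored.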
Similarly, [@b0210] showed that the precision and accuracy of DEA results are greater when using network models, compared to traditional DEA models. [@b0335] found that DMUs that had been indicated as efficient using traditional DEA models were not efficient using network models. The authors found that the efficiency of the productive process calculated by the black box models could be overestimated. This issue is more serious when more stages are involved. Lastly, [@b0330] discusses cases where the overall productive system can be considered efficient even if its sub-stages are not efficient. Likewise, the author found situations in which a DMU had efficiency rates below another DMU in its sub-stages but presented superior efficiency scores when analysed from the black box perspective. Despite the advance generated by the study by [@b0525], the two-stage DEA model used by these authors --- classified by [@b0345] as independent --- can have problems related to the intermediate variables, given that by seeking maximization of the outputs in the first stage and minimization in the second, the same variables would be minimized and maximized. To solve this problem, researchers such as [@b0185], [@b0175], [@b0170] sought to include such intermediate variables in the DEA model itself, which led to the development of Network DEA (NDEA) models, later extended by [@b0180], [@b0400], [@b0340], [@b0320], [@b0350], [@b0080], [@b0095], among others. In this regard, [@b0220] argue that NDEA models can be divided into four main categories:

•Independent: Independent models investigate each stage of the productive process separately, without any relationship between stages;
•Connected: In Connected models, contrasting with Independent models, the interactions between the stages are taken into account in the calculation of the overall efficiency.
Therefore, for a DMU to be overall efficient, it must necessarily be efficient in all stages considered;
•Relational: Relational NDEA models, proposed by [@b0320], consist of a combination of the two previous models. Relational models make it possible to measure the efficiency of each system and the overall efficiency. This category of models assumes an additive or multiplicative relationship between overall efficiency and the stage efficiencies;
•Game theoretic: Game theoretic models treat each stage of the productive process as a player in a cooperative or a non-cooperative game.

Thus, the big difference between the network model and the models with independent intermediate variables is that the former includes all the stages of the process in its mathematical formulation; that is, the production process is divided into various sub-processes, with each sub-process being formulated mathematically in the model. Therefore, the network model enables the formulation of the intermediate variables, whereas the other, which consists of an application of basic DEA models at each stage, does not. [@b0030] emphasized that NDEA has great potential for practical application and provides relevant information to managers. [@b0270] point out that independent models have the burden of not considering connections between stages. However, independent models are less restrictive and generate the highest efficiency scores. Connected NDEA models avoid conflicts between stages by considering the interactions between them, whereas Relational NDEA considers any mathematical relationship between the stages. Finally, the game theoretic approach is more appropriate when the stages of the production process can be analysed as a game. NDEA systems can have a serial or parallel structure ([@b0270]). Productive processes in a serial structure are connected in sequence.
Each process uses potential exogenous inputs and the outputs from the previous stage, and produces potential exogenous outputs and intermediate variables for the next stage. In a parallel structure, the production processes operate simultaneously and independently. There are also NDEA systems in which the productive process is a mixture of the parallel and the serial structure. In addition, network models can be static, dynamic or shared resources[4](#fn4){ref-type="fn"}. [@b0330] points out that static NDEA models analyse a single moment in time, whereas dynamic NDEA consists of the repetition of the one-period system in subsequent periods, connected by carry-overs. New improvements in dynamic NDEA can be seen in the study by [@b0615], which proposes an NDEA with a dynamic slack-based measure, and in the study by [@b0325], which presented a relational dynamic NDEA. It is important to highlight that each model is best suited to specific circumstances. [@b0270] point out that less restrictive models can result in an overestimation of efficiency when more complex relationships between stages exist. In contrast, more restrictive models, when not necessary, can lead to underestimated efficiency scores. Nevertheless, despite the issues discussed earlier, the traditional DEA models are still a valid tool for analysing efficiency when the productive system is simple. In the case of the banking sector, a highly complex system ([@b0515]), we recommend using NDEA, since it allows the incorporation of the potential interrelationships among variables. Finally, as a pitfall of the network approach, [@b0075] indicate that in general there are two types of NDEA models: traditional multiplier-based DEA models, focused on DEA ratio efficiency, and envelopment-based NDEA models, focused on the production possibility set. Although for conventional DEA models these two types are dual and equivalent, for NDEA models the duality and equivalence properties do not necessarily hold.
The authors recommend that the envelopment-based NDEA model should be applied to determine the projection boundary for inefficient DMUs, whereas the multiplier-based NDEA model should be employed to measure the divisional efficiency. [@b0075] argue that these two types of NDEA follow different approaches and explore distinct efficiency concepts. The authors further indicate that many models currently used in production possibility set-based network DEA should be re-examined. More specifically, some studies using envelopment models failed to calculate divisional efficiencies. However, this result does not mean that it is impossible for envelopment models to calculate the divisional efficiency, but rather that more research is needed in order to extend the existing production possibility set-based network DEA and solve this issue.

2.2. External two-stage DEA model {#s0020}
---------------------------------

The other branch of the literature refers to external two-stage DEA models, which consist of a second stage outside the production process. This is actually a procedure adopted by the researcher in which the efficiency indices are calculated in the first stage through DEA, and subsequently, these indices are used as input to some other technique, which may be some type of regression (e.g., Ordinary Least Squares or Bootstrapped Truncated Regression), an Analytical Hierarchy Process (AHP), or an Artificial Neural Network (ANN), among others, considering the various possibilities available to the researcher in the second stage. [Fig. 2](#f0010){ref-type="fig"} shows the structure of the external two-stage DEA models.
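The external procedure just described can be sketched in a few lines: efficiency scores from a first-stage DEA are fed, as the dependent variable, into a second-stage regression on an environmental variable. Everything below is hypothetical --- the scores and the size variable are made up, and plain OLS is used purely for illustration, even though later sections note that bootstrap truncated regressions are often preferred.

```python
import numpy as np

# Stage 1 output: DEA efficiency scores for six hypothetical banks.
# (In a real application these would come from a DEA model.)
scores = np.array([0.60, 0.68, 0.76, 0.84, 0.92, 1.00])

# Hypothetical exogenous (environmental) variable, e.g. log of total assets.
log_assets = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Stage 2: OLS of the scores on a constant and the environmental variable.
design = np.column_stack([np.ones_like(log_assets), log_assets])
coef, *_ = np.linalg.lstsq(design, scores, rcond=None)
intercept, slope = coef
# A positive slope would suggest size is associated with higher efficiency.
```

The sign and significance of `slope` is exactly the kind of result that, as discussed in this section, depends heavily on the approach used to select the DEA variables in the first stage.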
Thus, the main motivations for using external two-stage DEA models, as well as the respective technique in the second stage, include the following:

•As [@b0485] emphasized, many of the rejections by managers of suggestions for improvements made by DEA occur because traditional models do not consider that environmental factors, which are external to the organization, influence the results found in the model, and the administrators would have no control over such factors. Therefore, regression techniques are used in the second stage, in which the efficiency index calculated by the DEA model is the dependent variable and the exogenous variables are the independent ones;
•Given that DEA is very sensitive to the presence of outliers and statistical noise, ANNs can be used in the second stage for the purpose of finding data envelopes, which, instead of being based on outliers, are supported by the whole database ([@b0685]). Additionally, the ANN allows the researcher to make predictions through training with the efficiency scores measured by the DEA; i.e., by being repeatedly exposed to the data, neural networks learn the relationship between the input and output variables of the DMUs ([@b0025]);
•Recognizing the importance of including qualitative indicators in the efficiency analysis, [@b0040] used an external two-stage DEA model that integrated DEA with AHP, a multi-criteria decision technique developed by [@b0510] that allows modelling a complex problem in a hierarchical structure composed of different levels, with the top of the hierarchical structure representing the overall goal, while the lower levels consist of all possible alternatives ([@b0530]). With this, AHP reduces the complexity of the decision-making process to a series of simple comparisons and rankings.

Fig. 2. External two-stage DEA model.

For most of the studies analysed, it was clear whether the study involved an external two-stage DEA model or not. However, in other cases, this was not so obvious.
There is a grey area that lacks a clear and accurate definition about when a study can be classified as an external two-stage DEA model. Exemplifying this situation, [@b0685] --- who categorized their work as two-stage DEA --- applied DEA to measure bank efficiency and, subsequently, used these efficiency scores to train an ANN. In turn, [@b0460], despite basing his study on the study of [@b0685], did not categorize it as such. After careful reading, we considered the study of [@b0460] to be an external two-stage DEA model, considering that the scores of the DEA --- which were measured in the first stage --- were used in an ANN model in the second stage, as in [@b0685]. This difficulty in classifying the studies as involving models that are either external two-stage DEA or not highlights the importance of systematically analysing the theme --- this work is an initial effort towards this, but restricted to the banking sector. If there were a clear definition for such models, there would be no difficulty in identifying which studies do and do not consist of an external two-stage DEA model. It is also important to mention another ambiguous aspect of external two-stage DEA models, i.e., the impact that exogenous variables have on efficiency. Although analysing this effect is only one of various possible objectives in the second stage, this issue needs a more in-depth discussion, given that the literature presents quite controversial results, often without the necessary care when comparing results. Authors have often used previous studies to support certain results found, although the studies in question use different approaches or measure different types of efficiency. It is known that variable selection strongly influences the results found.
[@b0285] showed that in a study following the production approach, when keeping other variables constant, a larger number of deposits would lead to higher efficiency scores indicated by the DEA model, given that deposits would be a model output. By contrast, a researcher who had followed the intermediation approach and treated deposits as an input, keeping the other variables constant, would obtain higher efficiency scores when the bank had fewer deposits. This problem becomes even more interesting when an external second stage is considered. A study that followed the intermediation approach --- that is, studied the bank's role as a financial intermediary and analysed efficiency through the Banker, Charnes and Cooper (BCC) model, which measures efficiency related only to administrative issues --- and performed a regression in the second stage to analyse the impact that exogenous variables have on efficiency may encounter different results for this impact when compared to another study that adopted different criteria. An example of this is to compare the situation above with another study that analysed banks in their function of offering services to customers, that is, a study that followed the production approach of [@b0055] and measured efficiency through the Charnes, Cooper and Rhodes (CCR) model, which measures technical efficiency, also referred to as overall efficiency. The effect of the exogenous variable in question may vary from one study to another simply because of the methodological differences adopted. It is not surprising, therefore, that the literature presents quite controversial results in this respect.
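The input-versus-output dilemma for deposits can be made concrete with a toy single-input, single-output calculation, where CCR efficiency under constant returns to scale reduces to each bank's output/input ratio divided by the best ratio in the sample. All numbers below are invented for illustration:

```python
# Hypothetical figures for three banks; every number is invented.
labour   = [10.0, 10.0, 10.0]   # staff
deposits = [100.0, 200.0, 150.0]
loans    = [90.0, 120.0, 140.0]

def crs_efficiency(inputs, outputs):
    """Single-input, single-output CCR efficiency under constant returns:
    each DMU's output/input ratio divided by the best ratio observed."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Production approach: deposits are an output of the service process.
production = crs_efficiency(labour, deposits)       # [0.5, 1.0, 0.75]
# Intermediation approach: deposits are an input used to fund loans.
intermediation = crs_efficiency(deposits, loans)
```

Here the bank gathering the most deposits per employee is the efficient one under the production approach, while the bank converting deposits into loans most intensively is efficient under the intermediation approach --- the ranking flips purely because of the modelling choice.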
This review will guide this discussion by providing the results found by researchers who have addressed the two-stage DEA model in the banking sector, highlighting all the methodological aspects adopted by them, that is, the approach used to select the model's variables, the type of efficiency analysed, the non-discretionary variables used and their respective impacts, considering the peculiarities of each study. In addition to the problems mentioned above, external two-stage DEA models are sensitive to the problem of separability. [@b0565] found that traditional regression techniques in the second stage were not appropriate and proposed a bootstrap truncated regression model as an alternative, despite recognizing that this option could suffer from the same issue. As [@b0115] point out, if the condition of separability does not hold, the results of the second stage would present drawbacks and be difficult to analyse. The issues discussed in [@b0115], [@b0570] are crucial for external two-stage DEA models. Such models are being extensively used in the literature, as [@b0160] argue in their review, and may present statistical drawbacks when they fail to maintain the hypothesis of separability. In our review of the application of models with the terminology two-stage DEA in banks, we bring in the discussion on separability, aiming to present the state of the art. We also present strengths and weaknesses of the models, identifying gaps that signal opportunities for future studies. We argue that the tool developed in [@b0115] to test separability, similarly to the tool indicated by [@b0375] to test constant *versus* variable returns to scale, can convey relevant information and should be considered in future research of external two-stage DEA models. It is also important to reference a stream of research in the literature, which started with [@b0105], [@b0110], and makes it possible to cope with separability problems.
[@b0105] developed a conditional efficiency model that allows the estimation of the efficiency in the presence of environmental variables. These environmental variables will be neither inputs nor outputs in the production process. Examples of application of this approach in the banking sector are discussed, for instance, in [@b0135], [@b0120], [@b0360], [@b0355], [@b0445], [@b0630]. Since conditional efficiency is not a two-stage DEA model, we did not include it in the survey of papers. However, we highlight that conditional efficiency models represent a relevant mechanism to deal with separability. [Table 1](#t0005){ref-type="table"} presents a description of studies that used two-stage models --- either internal or external --- in the banking sector. It contains a brief review of each study identified, considering the literature review criteria discussed in Section [4](#s0030){ref-type="sec"}. The studies were ordered from the oldest to the most current to show how topics related to two-stage DEA models in the banking sector have been discussed in the literature over time. The study by [@b0425] --- the oldest of the sample --- was classified as number 1, [@b0050] as number 2, and so on. [Table 1](#t0005){ref-type="table"} also lists the number of citations per article according to either *Scopus* or --- if the article is not in this database --- *Web of Science* up to August 2018.Table 1Brief description of objectives and results of each analysed study.N.StudyBrief SummaryNumber of Citations1[@b0425]By analysing three gaps in the literature, the authors examine the profitability and marketability efficiency of 245 US banks, as well as verify whether the bank location impacts its efficiency. The results suggest that banks' greatest source of inefficiency is marketability. 
The bank's location is unrelated to efficiency ratios, and overall technical efficiency can be used as a predictor of the likelihood of bank failure.1342[@b0050]The study analyses the efficiency of 31 agencies from Kölner Bank in Germany, verifying the impact of non-discretionary variables such as branch area, public transport, competition and others. The two-stage models were able to more accurately evaluate efficiency compared to one-stage models, and none of the environmental variables were statistically significant.93[@b0685]The paper combines DEA with Neural Network (DEA-NN) to analyse the relative efficiency of branches of a Canadian bank. The results found by the proposed model are comparable to those of the traditional DEA models. The proposed model leads to a more robust boundary and identifies more efficient DMUs, but it is inferior in identifying benchmarks.1624[@b0495]The study evaluates bank efficiency in different countries (different contexts) by checking the impact of regulatory factors on efficiency. The results provide evidence for relevance of the three pillars of Basel's II Accord. Larger banks with lower loans showed better technical efficiency indices under all circumstances. Country-specific variables had a statistically significant impact on efficiency.1055[@b0460]The paper analyses the efficiency of the largest Arab banks through the integration of DEA with neural networks (NN). NN have great potential to assess the relative efficiency of banks because of their flexibility and robustness. The predictive capacity of the model is very similar to the results of other statistical techniques.516[@b0600]The authors investigate the relationship between post-crisis banking restructuring and country-specific factors with bank efficiency in a sample of 110 banks in five Asian countries in the period 1997 to 2001. Bank restructuring does not necessarily increase banks' efficiency. Domestic M&A perform better on efficiency than foreign acquisition. 
Banks under state intervention are more inefficient. The inefficiencies in the banking sector are attributed largely to country- specific factors.307[@b0575]The study estimates cost, allocative and technical efficiencies of Brazilian banks in the post-privatization period (2000--2007) through a three-stage model. Brazilian banks have a high degree of inefficiency compared to other countries. Stated owned banks were more efficient than private ones, and foreigners showed higher levels of cost inefficiency. Size is not an important variable that impacts efficiency.878[@b0620]The study measures the overall efficiency, comprised by the efficiency of profitability and effectiveness, of the 50 best branches of a Greek bank. Nineteen branches were efficient in profitability and effectiveness. Regarding overall efficiency, the main cause of inefficiency was profitability. The bank's performance can be largely improved by changing practices in branches identified as worst DMUs.209[@b0040]The authors analyse and suggest strategies to optimize the productivity of workers from various branches of the Bank of Industry and Mining in Iran. Integrating AHP and DEA, the study verified that a large part of the inefficiency of the branches is due to low work quality level and high number of training hours. The proposed analysis technique leads to better results than others, exploring both qualitative and quantitative data.2410[@b0285]The authors propose a DEA model that considers the variable deposits as an intermediate variable. The results show that the decision to define deposits as input or output significantly affects the indexes and the efficiency ranking of traditional models and, for this reason, the method developed by the authors managed to avoid this dilemma.5111[@b0485]The authors apply a two-stage DEA model to analyse the efficiency of 816 bank branches in order to reconcile the results indicated by this model with the opinions of the managers of these organizations. 
The efficiency indexes presented considerable variations among the different regions analysed. Branches in smaller markets were more efficient. Considering different approaches for analysing efficiency allowed finding results with greater consistency.11112[@b0535]The study analyses the efficiency of 20 branches of Saderat Bank in Iran, pointing out which units are efficient and inefficient, as well as benchmarking inefficient ones and how they can improve their operations. Only three branches were efficient, and the largest source of inefficiency was in the production stage.213[@b0410]The authors identify bank failures through a two-stage DEA model of worst practices, which makes it possible to work with negative outputs. The empirical analysis showed the applicability of the model to predict potential bank failures. The model predicted a number of potential banks to fail similar to what has been observed in Taiwan.214[@b0430]The study analyses the influence of reforms in the banking sector of six countries, whose objectives were to strengthen the financial and economic integration between these countries. These measures had a significant impact on the efficiency and homogenization of the banking sectors of the countries analysed.1715[@b0550]The authors analyse the true managerial efficiency of 123 branches of a bank in Taiwan, through a three-stage model, adjusted for environmental variables and statistical noise. Traditional DEA models overestimated efficiency ratios. The main cause of branch inefficiency was the operated scale. Location did not show significant impacts on efficiency. Branches with greater scope of action and volume of deposits were more efficient.2316[@b0700]The study integrates NDEA with Fuzzy to measure branch performance in Taiwan's banking industry. Most of the branches analysed had a better performance in the first stage of productivity. 
Interest cost is the largest factor in the first stage, while fund transfer income and interest income are key factors of the second stage.4317[@b0265]The study examines the efficiency of 18 Greek banks in a period of Greece's fiscal crisis by checking how the banks' efficiency would react to possible Mergers and Acquisitions (M&A). The results suggest that, analysing the year before and the year after the crisis, M&A did not generate operational efficiency in the short term. M&A between efficient banks will not necessarily generate an efficient bank.3218[@b0370]The paper measures the efficiency of 16 branches in Iran using an integrated DEA model with AHP. The location of the branches was a key factor of efficiency. The strengths of one branch can serve as benchmarking for the others. The use of AHP together with DEA provided more consistent results.119[@b0405]The authors apply an integrated model for the measurement of bank efficiency in Taiwan through Independent Component Analysis (ICA) and the Network Slack-Based Measure (NSBM). Three dimensions of efficiency were analysed: production efficiency, service efficiency and profitability efficiency. The results indicate that the proposed model was able to determine the main causes of bank inefficiency, presenting an excellent discriminatory feature.2320[@b0450]The study evaluates the risk management performance of Chinese banks in terms of their contribution to profitability through a three-stage NDEA model. The inclusion of the proxies for risk improved the efficiency measurement.6921[@b0480]The study integrates DEA and Analytical Network Process (ANP) to evaluate the efficiency of commercial banks in Turkey, with the possibility of incorporating managerial preferences into the model. The proposed integration presented several advantages over traditional models, such as considering multiple performance measures. 
The weights of the model can based on the preferences of the managers.122[@b0690]The paper verifies the impact of size and market power on the efficiency of 16 Chinese banks in the period from 2007 to 2011. The results found that size is a determinant of the efficiency of banks. A favourable economic environment (real GDP growth) also has a positive influence on efficiency.023[@b0150]The authors propose a three-stage DEA model with two independent parallel stages, where the outputs of these stages serve as input to the third stage, with the presence of undesirable outputs. In a case study of 49 People's Bank branches, the study corroborates the effectiveness and applicability of the model in bank efficiency studies.2924[@b0290]The study proposes a two-stage DEA Network Slack-Based Measures (NSBM) model with undesirable output aiming to open the black box of the production process. The proposed model has a better applicability than traditional models. All hypotheses suggested for efficiency determinants were confirmed for overall efficiency. On the other hand, the hypotheses could not be accepted when each stage were analysed individually.925[@b0500]The authors analyse the efficiency of Micro Financial Institutions (MFIs) both in the execution of financial tasks and in their role of coping with social problems, through, for instance, loans to poor people. In 46% of MFIs, there was no trade-off between the two dimensions analysed. Directives were given in order for MFIs to improve both their financial and social efficiencies.1826[@b0640]The study analyses the efficiency of the 16 largest Chinese banks in the period from 2003 to 2011, which corresponds to a reform in the Chinese banking sector. The authors consider deposits as intermediary variable and unrealized loans as undesirable output. The two-stage model was able to explain more appropriately the inefficiency of the banks than conventional DEA models. 
The efficiency of the banks increased during the period analysed because of the reform. State owned banks were more efficient before the reform, however difference to other banks decrease afterwards.7627[@b0650]The authors investigate the relationship between bank efficiency and intellectual capital in a sample of 16 US banks through a two-stage model. Profitability is included in the first stage and creation of value is included in the second. The authors found evidence that intellectual capital positively impacts efficiency.2128[@b0660]The study evaluates the 40 largest banks in Brazil regarding the optimization of costs and productive efficiency, establishing a connection between these two variables. Brazilian banks tend to be more efficient at translating administrative expenses and personnel expenses into shareholders' equity and fixed assets than at managing physical and human resources. M&A, size and the fact that the bank is state owned are also variables that influence efficiency.4829[@b0020]The authors measure the efficiency of 16 Chinese banks in the period 2008 to 2012 through a two-stage DEA-SBM approach, in which the first stage was called a deposit generator and the second as a deposit user with the presence of undesirable output. The results indicate that efficiency has increased during these five years due to banks' improvements in deposit creation.1130[@b0065]The study applies the Dynamic Network Slack-Based Measure Data Envelopment Analysis Model (DNSBM) to evaluate the performance of Taiwanese banks during the period 2005--2011. Using a three-stage model, the results indicate that banks have lost profitability since the 2008 crisis, while the creation of intellectual capital increased from 2008 to 2010.731[@b0365]The paper evaluates the relative efficiency of customer services in 30 branches of an Iranian bank, through a hybrid model based on Multi-Criteria Satisfaction Analysis (MUSA) and NDEA. 
The proposed method was able to identify which branches were able to meet consumers' expectations. (Citations: 0)

32. [@b0225] The authors assess the dynamic efficiency and productivity of Japanese commercial banks, maximizing desirable outputs and minimizing undesirable outputs (non-performing loans). For a 3-year dynamic window, the inefficiency of Japanese banks ranged from 19.5% of average outputs and inputs in 2007--2009 to 21.5% of average outputs and inputs in 2008--2010. Banks could become more efficient by increasing the volume of deposits. (Citations: 17)

33. [@b0385] The authors combine two empirical data analysis techniques to evaluate and predict performance improvements for 181 US banks. The proposed model contributes, in an impactful way, to the managerial process of decision making. (Citations: 19)

34. [@b0540] The authors measure the efficiency of Islamic Yemeni commercial banks, analysing the stability and efficiency of the sector. The study also checks for variables that may be affecting efficiency. The results suggest that the recent reforms adopted by the Yemeni government have failed to improve the sector, since the efficiency scores were low. Islamic banks performed better than commercial banks. (Citations: 2)

35. [@b0590] The study estimates the efficiency of Malaysian banks from 1999 to 2008, analysing the impact of several environmental variables, such as liquidity, risk, size, profitability, capitalization level and macroeconomic conditions. Size, non-interest income, foreign control, and capitalization have a positive impact on productive efficiency. State-owned banks were more inefficient. Credit risk and liquidity were not statistically significant. (Citations: 0)

36. [@b0625] The paper incorporates stochastic models in the DEA to analyse the efficiency of Greek banks in a period of national crisis, incorporating variables related to risk. The model measures efficiency considering the possibility of stochastic variables in the DEA model. In addition, the model is able to control, through the efficiency indexes, for favourable operating conditions. (Citations: 21)

37. [@b0645] The study analyses the efficiency of banks in Taiwan by pointing out the marginal benefits of information technology (IT). In addition, considering the Basel III Accord, the impact of some proxies for risk on efficiency is measured. Most banks need to improve their returns to scale on IT inputs. The effect of risk proxies on efficiency was not universal in the study. (Citations: 0)

38. [@b0465] The authors measure the cost efficiency of 32 Vietnamese banks in the period from 2000 to 2014, verifying the impact of two reforms in the banking sector, namely, partial acquisition by foreign banks and entry into the stock market. In addition, the study also analyses the impact of other environmental variables. Efficiency showed a slight upward trend in the period. Banks listed on the stock exchange or partially acquired by foreign capital presented better efficiency ratios. (Citations: 3)

39. [@b0505] The authors investigate the relationship between efficiency and risk through a three-stage model in a case study with 14 branches. Risk causes banks to seek enhancement of their operations, thereby increasing their technical efficiency. Therefore, risk is positively related to efficiency. (Citations: 2)

40. [@b0580] The study analyses the determinants of efficiency of Vietnamese banks from 1999 to 2009. The largest banks are more efficient than the medium and small banks, with the latter being the most inefficient. Profitability had a positive impact on efficiency, while the number of branches and number of years in operation had the opposite effect.
As far as global efficiency is concerned, private banks are more efficient than state-owned banks. (Citations: 15)

41. [@b0670] The study estimates, through a two-stage model, the impact on virtual efficiency of M&A of Mozambique's banks and also analyses the results taking into account whether the bank is state owned or has foreign control. The results indicate that control of the bank (state or foreign) affects efficiency and that mergers should occur between banks with different types of control. M&A involving the analysed banks may lead, in most cases, to a situation of decreasing returns to scale. (Citations: 3)

42. [@b0665] The study uses a new Fuzzy-DEA model to evaluate bank efficiency in Mozambique for the years 2003--2011. Several aspects explain bank efficiency in Mozambique, for instance, labour price, capital price and deposits. The effect of the environmental variables was ambiguous, depending on the degree of uncertainty of the model. Banks should reduce the number of employees and take initiatives to leverage capital. (Citations: 5)

43. [@b0005] The authors analyse the efficiency of a bank's branches in Greece during different periods of the economy, taking into account expansion followed by strong recessions. The study also verifies how efficiency has behaved over the years. Banks' efficiency deteriorated at the beginning of the recession, and especially as it deepened. (Citations: 3)

44. [@b0015] The study examines the impact of exogenous variables on the efficiency of 26 Ghanaian banks in the period 2003 to 2011. A high level of inefficiency among Ghanaian banks is evident, mainly due to pure technical inefficiency. The size of the bank positively influences efficiency only to a certain degree, due to economies of scale. Market concentration, leverage, and loan loss provisions are other significant factors identified as determinants of efficiency. (Citations: 1)

45. [@b0035] The study evaluates and optimizes the productivity of employees of the Bank of Industry and Mine in Iran by integrating DEA with AHP, using quantitative and qualitative indicators. The results indicate that the most inefficient branches are associated with low work quality and high training hours. (Citations: 2)

46. [@b0165] The paper investigates the impact of exogenous variables on the efficiency of Islamic commercial banks in Indonesia from 2011 to 2014. The actual average efficiency of Islamic commercial banks in Indonesia is 91.82%. Assets and ROA had a positive impact, while the number of branches negatively affected the banks' efficiency. (Citations: 1)

47. [@b0215] The authors extend the two-stage Network DEA (NDEA) by proposing a banking revenue function. In addition, the Nerlove model is also applied to identify bank inefficiencies. The results indicate that the Japanese regional banks did not reach the optimum point in their productive processes. The main cause of bank inefficiency is allocative efficiency. Capitalization and risk had a negative effect on efficiency. (Citations: 15)

48. [@b0255] The authors measure the operational and intermediation efficiency of 46 Indian banks through a two-stage Network DEA model, in addition to a bootstrapped truncated regression to verify the impact of variables on these indices. The overall efficiency of the sector needs improvement in the two stages analysed. Larger and private banks showed better results. (Citations: 2)

49. [@b0315] The paper analyses the determinants of productivity of Southern Asian banks. National and foreign Islamic banks showed an improvement in Total Factor Productivity Change (TFPCH).
Among the exogenous variables analysed, capitalization, liquidity and the world financial crisis had a significant influence on the productivity level of banks. (Citations: 1)

50. [@b0380] The authors propose an extension of the two-stage DEA model developed by [@b0085], making it possible to work with negative data and undesirable outputs. Operational efficiency, calculated in the first stage, is statistically smaller than profitability efficiency, measured in the second stage. (Citations: 1)

51. [@b0545] The authors propose a model to estimate and decompose possible M&A gains for Chinese banks. The results show that banks can improve their operations, mainly in relation to technical efficiency, when engaging in M&A. In contrast, M&A have a negative impact on scale efficiency. (Citations: 3)

52. [@b0675] The authors analyse the virtual efficiency of M&A of South African banks. In addition, the impact of contextual variables on these efficiency indices is tested. M&A tend to be beneficial to banks, increasing technical efficiency, especially in terms of production. M&A gains are larger when both banks are local. (Citations: 6)

53. [@b0090] The paper presents an innovative DEA model with SVM in the second stage in order to segregate efficiency groups. The study also analyses the effects of different context-related variables on efficiency indexes. For the sample of Chinese banks, efficiency is related to domestic origin and listing on the stock market. However, results show that the performance of the Chinese banking sector is low. (Citations: 3)

54. [@b0145] The authors analyse the impact of earning asset diversification on Chinese bank efficiency from 2006 to 2011. In addition, they propose a methodological innovation by extending the bootstrap model of [@b0565]. Chinese banks could improve their efficiency with an increase in the diversification of their asset portfolios. (Citations: 1)

55. [@b0190] The study measures the efficiency of banks in peripheral countries of the Eurozone and examines the effects of determinants of risk on bank performance over 2007--2014. Results indicate that higher levels of liquidity and credit risk negatively influence efficiency, while capital and profit risk have a positive impact on banks' performance. The crisis tends to amplify the effect of bank risk. (Citations: 0)

56. [@b0475] The authors investigate the efficiency of 109 UK banks in the period 1987--2015, through DEA with a regression feedback mechanism. Several types of DEA model were used, as well as different orientations. The proposed model increased the discriminatory power of the DEA. The SBM presented more consistent results than BCC and CCR. (Citations: 0)

57. [@b0295] The study extends the NDEA to the Copula-Based Network SFA model, with application to US banks. The proposed model made it possible to overcome the convergence problem, specifically when phenomena are subject to highly nonlinear simultaneous equations. The inefficiency of banks comes mainly from the first stage. (Citations: 1)

58. [@b0695] The study measures the efficiency of Chinese commercial banks and assesses the impact of foreign capital participation on efficiency. Banks with foreign capital tend to be more efficient, even if this share is owned by minority shareholders. In addition, efficiency is also influenced by macroeconomic factors. (Citations: 0)

59. [@b0710] The authors develop a dynamic two-stage DEA-SBM model to identify the sources of inefficiency of Ghanaian banks. Banks' efficiency ratios were considerably low. The biggest source of inefficiency is in the first stage, called the productivity stage. (Citations: 0)

3. Method {#s0025}
=========

A review of the literature on DEA is nothing new. [@b0155], [@b0160], [@b0440], [@b0585], [@b0435], [@b0420], [@b0415] conducted literature reviews on DEA in various areas. However, to our knowledge, only [@b0200] and [@b0490] focused on the banking sector, and neither specifically reviewed two-stage DEA models in banks, a topic that has been gaining considerable prominence, as highlighted by [@b0160].
According to [@b0195], the literature review is an important tool for gathering the results of previous studies on a certain theme, producing an in-depth analysis of the main studies. This method is particularly relevant for mapping the main topics studied and providing a complete view of the existing knowledge from the articles on the subject analysed, as well as for identifying possible gaps and opportunities for future studies. Accordingly, [@b0305] indicated that this technique identifies challenges for the development of future studies; that is, after identifying the characteristics of how the literature has been discussing a theme, it is possible to discover gaps and opportunities in topics that are not being discussed to the same degree as others. In addition to the previous observations, the review done here is important because, despite the existence of literature reviews regarding DEA in banks, namely, [@b0200], [@b0490], neither has specifically analysed two-stage DEA models in banks. [@b0160] identified the most popular keywords in publications from 2015 and 2016 and found that in second place were keywords such as two-stage models and efficiency decomposition, which is one of the functionalities of these models, whereas in fourth place were words such as bootstrap and bootstrapping. Additionally, [@b0160] found that the banking sector is the field of study with the second highest number of studies. Linking these two aspects, there is an emerging topic in DEA that, to the best of our knowledge, has not yet been systematically reviewed, i.e., two-stage DEA models in banks. Therefore, by reviewing this topic, this work contributes to the literature by presenting the state of the art and providing an agenda for future studies.
Despite the popularity of DEA and many years of research, questions that arise frequently in studies on DEA in banks --- such as what the orientation should be (input or output) and at what scope the bank should be analysed, which, in turn, will influence variable selection --- still have no definitive answers. In the particular case of two-stage DEA models, as discussed in Sec. [2](#s0010){ref-type="sec"}, various other aspects require further discussion, for example, which type of two-stage DEA model is used most often and, in the case of external two-stage models, which technique is most popular in the second stage and what the impacts of non-discretionary variables on bank efficiency are. Perhaps the clearest aspect for researchers is that the CCR model should only be used when all companies are operating at the optimal scale level ([@b0200], [@b0635], [@b0250]); therefore, models that work with variable returns to scale have been prioritized in more recent studies ([@b0200]), but it is still not possible to state that one model is superior to another. Considering the aspects without consensus in the literature, the present study makes a contribution by providing guidance to researchers for future studies, summarizing how the literature has been addressing these topics relevant to studies on two-stage DEA models in banks. Briefly discussing the two reviews of DEA in banks mentioned earlier, [@b0200] analysed 196 studies that applied operational research or artificial intelligence techniques in the banking sector. They searched the *Scopus* database using the following keywords: bank efficiency, bank and data envelopment analysis, bank performance, bank and neural networks, bank and artificial intelligence, and bank and operational research. The review period was from 1998 to 2009, and only articles in English were considered. Of the 196 articles, 151 used DEA and its variations to estimate several measures of banking efficiency and productivity growth.
DEA was thus the most widely used technique among the operational research studies in their sample. The articles analysed were published in 73 different journals, with 58% of the publications concentrated in 12 journals. The European Journal of Operational Research (EJOR) was ranked first, followed by the Journal of Banking and Finance and Applied Financial Economics, with 19, 15, and 13 publications, respectively. Regarding method-related questions, most of the studies focused on the measurement of technical efficiency, worked with variable returns to scale, used input orientation, and followed the intermediation approach to select variables. In Section [3](#s0025){ref-type="sec"} of their work, [@b0200] discussed the topics of interest in the studies analysed, among which their discussion of the determinants of efficiency stands out. The non-discretionary variables studied typically included size, profitability, capitalization, and country-specific factors. Despite providing an enlightening discussion regarding this aspect, the authors neither identified the technique most used in the second stage for this purpose nor clearly specified what impact the reviewed studies found for the non-discretionary variables on the different types of efficiency (technical efficiency (TE), pure technical efficiency (PTE), and scale efficiency (SE)). The other topics of interest included the relationship between stock returns and efficiency; bank ownership; corporate events, such as mergers and acquisitions, and efficiency; regulatory reforms or liberalizations and efficiency; a comparison of frontier techniques; and bank branch efficiency.
The five most researched countries were Canada, Greece, Portugal, the United States, and the United Kingdom, which accounted for 65% of the studies reviewed. The studies had two main focuses: to develop more advanced DEA models (38%, or 30 articles) and to evaluate efficiency and provide guidance for improvements (33%, or 26 articles). Of the 80 articles, 5 used deposits as an input, and 43 used them as an output; 47% followed the premise of constant returns to scale, 20% followed the premise of variable returns to scale, and 33% used both models. [@b0490] concluded their review by stating that although DEA is a deterministic technique, its results are sensitive to the data used. Thus, generating statistical inferences and confidence intervals is of great relevance, as this enables the reliability and acceptance of the model to be demonstrated. The authors indicated that although several advances have already been made in this regard over the past 20 years, an opportunity for future studies in this area still remains. Therefore, in the case of bank branches, there is an opportunity for research that uses statistical techniques --- such as the bootstrap technique of [@b0565] --- together with DEA. This review differs from others, such as [@b0200], [@b0490], first, because it focuses on an emerging topic in the DEA literature, i.e., two-stage DEA models in banks ([@b0160]). [@b0200] discussed this aspect, but only briefly. Discussing how two-stage DEA models have been applied in banks and identifying the objectives of these studies, the results found, and even aspects related to the two-stage terminology itself is extremely important, and to the best of our knowledge, this has not yet been done. Second, the classifications and codings created here --- proposed by [@b0390] and later disseminated by [@b0440], [@b0280], [@b0560] --- are unique in the area of banking efficiency.
Finally, in relation to the review by [@b0200], more than 8 years have passed since its publication, which, although not a long time, indicates the need for a new review that considers the context of two-stage models, given that these models have been gaining notoriety, especially in recent years, as shown by [@b0160]. Regarding the review of [@b0490], they did not specifically discuss two-stage models and focused on studies of bank branches, not on the banks themselves. Considering the aspects previously discussed, as well as the relevance that a literature review adds to the academic debate on a given theme, [@b0390] presented five steps to be followed when conducting a review, later followed by [@b0440], [@b0305], [@b0280], as shown in [Fig. 3](#f0015){ref-type="fig"}.

Fig. 3. Steps for the literature review.

Considering step 1, the first keyword used for the search was *DEA Bank* in the title and *stage* in the topic in the *Web of Science*, *ScienceDirect*, and *Scopus* databases. The reason for using only the word *stage* in the topic --- which considers the title, abstract, and keywords --- is that if an article used DEA with more than one stage, the authors would likely specify this. Therefore, there is no need to search for *two-stage*, because these articles will already be found with the criterion adopted. In addition, studies that adopted a three-stage model could also be identified. Two searches were conducted, the first in June 2017 and the second in July 2018. Regarding the first search, 27 publications were found in *Web of Science*, which included 19 articles and 8 proceedings papers; 37 articles were found in *ScienceDirect*; and 27 publications were found in *Scopus*, including 22 articles, 2 articles in publication, 1 book chapter, and 2 conference papers. Another search criterion used was *Data Envelopment Analysis* and *bank* in the title with *stage* in the topic, given that some articles could use the full nomenclature for DEA.
Eleven documents were found in the *Web of Science*, including 6 articles and 5 proceedings papers; 13 articles in *ScienceDirect*; and 9 articles and 1 conference article in *Scopus*. In accordance with [@b0200], it was decided to include only articles published in journals in the review. As many articles were identified more than once due to the different search criteria used, 77 articles were selected. In step 2, a careful analysis was conducted to verify whether the articles actually had a connection with the theme of the present study, i.e., two-stage DEA models in banks. This analysis is complex because of the absence of an accurate definition of what exactly characterizes these models, as discussed in Section [2](#s0010){ref-type="sec"}. Of the 77 articles found, 47 had an appropriate relationship with the research theme. The second search --- conducted in July 2018 in the same databases and considering the same keywords --- found an additional 12 articles related to the theme. Thus, the final sample was 59 articles.

4. Classification and coding {#s0030}
============================

After the evaluation of the articles and considering step 3, an analytical framework was developed that contained ten classifications covering topics relevant to the literature on two-stage DEA models. Consequently, each article was classified and coded according to its characteristics and the results found. The classifications were composed of numbers and letters (A, B, D, E, and so on); thus, the coding consists of a combination of letters and numbers. This step is important in identifying the most studied topics and possible gaps in the studies in this area. First, we analyse all the articles jointly to present the general landscape of the literature on two-stage DEA models. Second, we segregate the external and internal two-stage DEA models to determine whether the gaps found are maintained.
Classification 1 addresses the type of two-stage DEA model adopted in the studies, which are coded as A (internal) or B (external) --- the studies coded as A refer to those that analyse the production process in two or more stages by breaking this process down into subprocesses. The studies categorized as B are those that used another procedure in the second stage, outside the production process. The results of this classification will be important to understand exactly what is understood in the literature by the terminology two-stage, as well as to segregate these different types of models. Classification 2 identifies the economic context of the country of the study in question --- it has an A--C scale of coding possibilities. It should be noted that the C coding was restricted to theoretical studies or literature reviews that did not have a country as the focus of the study. According to [@b0660], much of the literature on bank efficiency focuses on the United States and Europe, neglecting countries with emerging economies. Additionally, when reviewing 80 studies on bank branch efficiency, [@b0490] identified a gap regarding studies considering more than one country in the analysis (only 2 of the 80 articles reviewed involved more than one country). Therefore, this classification enables determining whether the gaps found by [@b0660], [@b0490] also exist in the literature on two-stage DEA models in banks. Classification 3 refers to the continent of the data analysed by the article in focus. The coding scale is composed of the letters A--F. The results of this classification will be important for identifying possible continents with few studies, thus indicating a gap in geographic perspective. 
Returning to the literature review of [@b0490], those authors determined that the bulk of research is concentrated in North America and Europe; thus, determining whether this also occurs in the literature on two-stage DEA models in banks is of great importance and will make it possible to direct future research to less studied continents. Classification 4 analyses the articles in accordance with their research objectives, with the coding scale composed of the letters A--E. To construct this classification, the findings of [@b0490] were considered, which indicated that, in general, the main topics of the studies are as follows: changes in efficiency due to regulations, the effect of exogenous variables on efficiency, measurement of efficiency with an indication of benchmarks, and international comparison. As some of the studies analysed here proposed new adaptations of two-stage models (e.g., the use of new techniques in the second stage, in the case of the external two-stage DEA model, or extensions of the mathematical formulations, in the case of internal models), a coding was added for this situation. Classification 5 identifies the level of research of the analysed works --- the possibilities in the coding scale range from A to D. Exploratory research aims to develop, clarify, and modify concepts and ideas. In general, this type of research is the first step of a broader investigation on a given topic. In turn, the purpose of descriptive research is to describe the characteristics of a certain population or phenomenon or to establish relationships between variables. Explanatory research has as its central concern identifying the factors that determine or contribute to the occurrence of a particular phenomenon. Finally, predictive research seeks to predict future outcomes based on the analysed data. Through this classification, it will be possible to understand which type of research is predominant in this theme.
Classifications 6, 7, 8, 9, and 10 address aspects related to the DEA method. As discussed previously, despite the extensive application, there is still no unanimity regarding the basic aspects of a DEA study, for example, which orientation should be adopted (input or output), how to select variables, and what technique should be used in the second stage. It is known that models with constant returns to scale should be used only if all banks are operating at an efficient scale ([@b0250]), something that is unlikely to occur in practice. However, it is worth noting that one model is not necessarily superior to another, given that they measure different phenomena --- the CCR model measures technical efficiency (TE), or overall efficiency, which is composed of pure technical efficiency (PTE) and scale efficiency (SE), while the BCC model analyses only the PTE, based solely on administrative capacities. In other words, PTE is equivalent to TE, disregarding the impact of the economies or diseconomies of scale. Another frequent finding in relation to model orientation is that since banks generally do not have control over output levels, orientation towards the input is recommended ([@b0515]). However, given the plurality of existing output variables, this may not always be true. Observing how the literature has been addressing this subject will be important in providing direction for future studies. Accordingly, classification 6 refers to the DEA model used, with the following possible codings: radial DEA models, with the popular models of [@b0070], [@b0045], known as CCR and BCC, and the non-radial DEA model of Slack-Based Measures (SBM) developed by [@b0605]. While radial models deal with proportional changes in inputs and outputs in order for a given DMU to become efficient, the non-radial models, which focus on slacks, do not make this assumption ([@b0610]).
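To make the radial models concrete, the sketch below solves the input-oriented CCR and BCC envelopment programs with a generic LP solver and recovers SE as the ratio TE/PTE. This is only a minimal illustration with made-up data for four hypothetical banks, not the implementation of any reviewed study.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o, vrs=False):
    """Input-oriented envelopment DEA score for DMU `o`.

    X: (n, m) array of inputs; Y: (n, s) array of outputs.
    vrs=False solves the CCR (constant returns) model; vrs=True adds
    the convexity constraint sum(lambda) = 1, giving the BCC model.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
    b_eq = np.array([1.0]) if vrs else None
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0]

# four hypothetical banks: one input (e.g. staff costs), one output (loans)
X = np.array([[2.0], [4.0], [6.0], [8.0]])
Y = np.array([[1.0], [3.0], [4.0], [5.0]])
te = [dea_efficiency(X, Y, o) for o in range(4)]              # CCR: TE
pte = [dea_efficiency(X, Y, o, vrs=True) for o in range(4)]   # BCC: PTE
se = [t / p for t, p in zip(te, pte)]                         # TE = PTE * SE
```

With this toy data, the second bank is the only CCR-efficient unit, while all four banks lie on the variable-returns frontier, so their CCR inefficiency is attributable entirely to scale.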
[@b0235] also highlight that both the radial and non-radial models may be biased when there are slacks in the restrictions that define the technology of the production process. The possible existence of slack in the input constraints indicates that a unit can be judged efficient even though it could reduce at least one input while maintaining the same output level ([@b0230]). Classification 7 analyses the returns to scale considered in the studies, which can be constant or variable. It is worth noting that an article could receive more than one code if it considered both constant and variable returns to scale. The use of the two types of returns is necessary to calculate the different types of efficiency, namely, TE, SE, and PTE. In the radial models, the two returns-to-scale assumptions carry the acronyms of the creators of the traditional DEA models, CCR (constant) and BCC (variable); the non-radial models can also work with constant or variable returns but do not carry these acronyms, since they were not developed by the same authors. Articles that did not specify which returns were adopted were coded as 7C. Classification 8 deals with the orientation of the model, which can be as follows: input-oriented, in which, for an inefficient DMU to become efficient, it must keep its outputs constant and reduce its inputs; output-oriented, in which it is sought to increase the outputs while keeping the inputs constant; and non-oriented (often used in NDEA), the objective of which is to maximize the outputs while minimizing the inputs. As some authors do not clearly specify the orientation adopted, there is a coding for such studies (unidentified). Classification 9 addresses the scope of the analysis, indicating which approach was used for variable selection, which, in turn, will determine which specific function of the bank is being analysed.
According to [@b0060], the production approach --- proposed by [@b0055] and which considers the bank's main objective as providing services to its clients --- is more appropriate for bank branch studies, while the intermediation approach of [@b0520] --- which indicates the financial intermediator role as the primary function of the bank --- is more appropriate for studies on the banks themselves. The profit approach --- proposed by [@b0140] --- analyses the bank as a producer of profit components, such as interest and fee income (outputs), generated through the use of inputs, such as operational expenses and the quality of the loan portfolio, i.e., cost components ([@b0005]). Studies that either followed less popular approaches in the literature or proposed a new approach were coded as Others (9D). Another possible coding for classification 9 concerns studies that combined more than one approach, coded as 9E. For internal two-stage studies in banks, it is quite frequent for the authors to follow one approach in the first stage and another in the second. This is done mainly by combining the production approach with that of intermediation so that the researcher does not need to make a judgement call regarding the dilemma of deposits, as discussed by [@b0285], [@b0220]. This coding, therefore, encompasses studies that have mixed approaches or have treated deposits as an intermediate variable. Studies that did not follow any specific approach or adopted the same variables as previous studies in the literature were coded as 9F. Classification 10 identifies the procedures adopted by the researcher that characterize the article as a two-stage DEA model. One of the possibilities in the second stage is to use the outputs of the first stage as inputs in the second, as in [@b0525] --- this procedure is known as the intermediate variable technique. 
In this case, the second stage refers to the production process of a specific bank and is often used to overcome the problem of the DEA treating the production process as a black box ([@b0240]). Another possibility is to use another procedure in the second stage --- this technique is external to the production process. OLS regressions, censored models, such as Tobit, resampling techniques, such as the bootstrap, and qualitative techniques, such as AHP, can be used in this stage. Considering the observations of [@b0565] that using traditional regression techniques in the second stage would not be appropriate for analysing the effect of the non-discretionary variables (because the DEA efficiency scores have a statistical bias and are highly correlated, requiring the bootstrap procedure to correct these problems), this classification allows the identification of the approach that is most used in the second stage and of whether researchers are following the observations in [@b0565]. Additionally, analysing the techniques adopted in the second stage can be an important step towards a more accurate understanding of the application and definition of the two-stage DEA model in banks. It is important to highlight that, with the exception of classifications 4, 5, and 6, whose coding options are mutually exclusive, the articles could receive more than one code; therefore, the total number of codes in the remaining classifications could exceed 59.
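As a concrete, deliberately naive illustration of the external second stage, the sketch below regresses simulated efficiency scores for 59 hypothetical banks on two invented environmental variables (bank size and a state-ownership dummy) by plain OLS. All data and variable names here are made up; as noted above, [@b0565] argue that OLS or Tobit on DEA scores yields biased inference, so a bootstrapped truncated regression would normally be preferred.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated first-stage output: efficiency scores in (0, 1] for 59 banks
n = 59
size = rng.normal(10.0, 1.0, n)                # hypothetical log total assets
state = rng.integers(0, 2, n).astype(float)    # state-ownership dummy
noise = rng.normal(0.0, 0.05, n)
scores = np.clip(0.6 + 0.03 * (size - 10.0) - 0.05 * state + noise, 0.01, 1.0)

# naive external second stage: OLS of the scores on a constant + covariates
Z = np.column_stack([np.ones(n), size, state])
beta, _, rank, _ = np.linalg.lstsq(Z, scores, rcond=None)
fitted = Z @ beta
```

The sign pattern of `beta` then suggests how each environmental variable relates to efficiency; in a real application the scores would come from the first-stage DEA model and inference would follow the bootstrap procedure of [@b0565].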
The classifications, as well as the coding possibilities discussed herein, are presented in [Table 2](#t0010){ref-type="table"}.

Table 2. Codes used to analyse the articles.

1. Two-stage DEA: 1A - Internal; 1B - External.
2. Economic Context: 2A - Mature economy; 2B - Non-mature economy; 2C - Does not apply.
3. Geographical Region: 3A - North America; 3B - South America; 3C - Europe; 3D - Asia; 3E - Other regions; 3F - Does not apply.
4. Objective: 4A - To verify the change in efficiency taking into account reforms, e.g., liberalization and deregulation in the banking industry, changes in the market structure and changes in the economic environment; 4B - To measure banking efficiency and indicate benchmarks and opportunities for improvement; 4C - To analyse the effect of non-discretionary variables of banks/branches on efficiency; 4D - To propose an extension or a new model/method of DEA to measure the efficiency of banks/branches; 4E - To make comparisons of efficiency in an international context.
5. Type of Research: 5A - Exploratory; 5B - Descriptive; 5C - Explanatory; 5D - Predictive.
6. DEA Model: 6A - Radial; 6B - Non-radial.
7. Returns to Scale: 7A - Constant; 7B - Variable; 7C - Not identified.
8. Orientation: 8A - Input; 8B - Output; 8C - Unoriented; 8D - Not identified.
9. Approach: 9A - Intermediation; 9B - Production; 9C - Profit; 9D - Others; 9E - Combined more than one approach; 9F - Not identified/Does not apply.
10. Procedure Related to the Second Stage: 10A - Tobit; 10B - Analytical hierarchy process; 10C - Bootstrapped truncated regression; 10D - OLS; 10E - Artificial neural networks; 10F - Intermediate variables; 10G - Others.

5. Results of the literature analysis {#s0035}
=====================================

To present the results in the most detailed manner, we performed a bibliometric analysis and codification. Bearing this procedure in mind, this section is divided into the two following subsections: bibliometric analysis and coding results.
We believe that this will enable us to present the state of the art, opportunities, and challenges for future studies on two-stage DEA models in banks. 5.1. Bibliometric analysis {#s0040} -------------------------- The first dimension presented is the bibliometric analysis. [Table 3](#t0015){ref-type="table"} shows that there is a decentralization of publications with regard to journals. In first place is the journal Expert Systems with Applications, the scope of which is the application of intelligent systems in businesses, governments, and universities, with nine publications, or 15.25% of the total; followed by the European Journal of Operational Research, with six publications (10.17%); Omega and Research in International Business and Finance, with three publications each (5.08%); and the Journal of Banking and Finance, Annals of Operations Research, Economic Modelling, and International Journal of Productivity and Performance Management, with two publications each. The other journals, including Measurement, Socio-Economic Planning Sciences, Benchmarking, North American Journal of Economics and Finance, and Journal of Productivity Analysis, among others (30 in total), had only one publication each and together accounted for 50.85% of the publications.

Table 3. Number of papers per journal and per year.

Journal (papers; %): Expert Systems with Applications (9; 15.25); European Journal of Operational Research (6; 10.17); Omega (3; 5.08); Research in International Business and Finance (3; 5.08); Annals of Operations Research (2; 3.39); Economic Modelling (2; 3.39); International Journal of Productivity and Performance Management (2; 3.39); Journal of Banking and Finance (2; 3.39); Others (30; 50.85).

Year (papers; %): 2003 (1; 1.69); 2005 (1; 1.69); 2006 (1; 1.69); 2008 (1; 1.69); 2009 (2; 3.39); 2010 (2; 3.39); 2011 (4; 6.78); 2012 (4; 6.78); 2013 (6; 10.17); 2014 (6; 10.17); 2015 (9; 15.25); 2016 (5; 8.47); 2017 (10; 16.95); 2018 (7; 11.86).

The first publication of the analysed sample was that of [@b0425], followed by [@b0050], [@b0495].
Between 2000 and 2010, there were few publications regarding two-stage DEA models in banks. The scenario began to change after 2010, with four publications in each of 2011 and 2012 and six in each of 2013 and 2014. In 2015, there were nine publications --- the year with the second highest number of articles, behind 2017, with ten publications. The year 2018 is also worth noting: by July, when the article search was conducted, there were already seven publications. This analysis is important because it shows that, over time, interest in two-stage DEA models in banks has grown considerably. The year 2018 may well surpass the other years in number of publications, and if there is no sudden change in the trend, the same will occur in 2019. The recent interest in two-stage DEA models specifically in the banking sector indicates an emerging topic in the literature, as already discussed in Section [3](#s0025){ref-type="sec"}, which means that a great opportunity exists for researchers in future studies. It is worth noting that despite the growth in the number of publications at the global level, various opportunities for application remain, given the large plurality of non-discretionary variables to be analysed, the different models and approaches in DEA, and the different countries that lack research, for example, Latin American countries. Given this growth, a review of how this model has been applied in banks is of paramount importance for a better understanding of the field and to provide guidance for future studies. Regarding the relevance of the studies, the number of citations may be a good indicator ([@b0440]).
[Table 1](#t0005){ref-type="table"} shows that the most cited study (162 citations) was that of [@b0685], which combined DEA with ANN to analyse the efficiency of 142 branches of a Canadian bank, followed by the study of [@b0425], with 134 citations, in which NDEA was used, with each stage independent of the other --- profitability efficiency was analysed in the first stage and marketability efficiency in the second for 245 U.S. banks, similar to the work of [@b0525]. The other three most cited articles were those of [@b0485], [@b0495], [@b0575], with 111, 105, and 87 citations, respectively. [@b0485] analysed the efficiency of 816 branches of a Canadian bank through an external two-stage DEA model, in which the outputs of the second stage were the efficiency scores calculated in the first stage, considering three different approaches. [@b0495] evaluated banking efficiency in 95 countries, verifying the impact of regulatory factors on efficiency. [@b0575] estimated the cost, allocative, and technical efficiency of Brazilian banks, analysing the impact of non-discretionary variables on efficiency. Because the two-stage DEA model is a more recent variation of the DEA model, this number of citations has great potential to increase considerably over the next few years as more research is published. It is worth emphasizing that the number of citations was collected in August 2018 and may have increased since then. Thirty-five studies were longitudinal, whereas 19 were cross-sectional. Thirteen studies analysed the efficiency of bank branches, whereas 46 considered the banks themselves. The vast majority involved 2 to 10 inputs and outputs at each stage, whereas the number of DMUs analysed varied considerably, from 16 up to 246, always respecting the rule, discussed in [@b0100], of having at least three times more observations than the total number of variables[5](#fn5){ref-type="fn"}. 5.2.
Coding results {#s0045} ------------------- Considering now the second dimension of the results, the respective codings of each study are presented in [Table 4](#t0020){ref-type="table"}. The gaps will be presented under the following abbreviations: $G_{1,2,\ldots,x}$ for gaps that refer to both internal and external two-stage DEA models, $\mathit{Gi}_{1,2,\ldots,x}$ for gaps referring only to internal two-stage DEA models, and $\mathit{Ge}_{1,2,\ldots,x}$ for gaps referring only to the external ones. First, to provide an overview of how the literature is categorized when referring to the two-stage terminology in banks, we will analyse the internal and external two-stage articles together; second, we will segregate them to determine whether the identified gaps are maintained and whether there are new gaps to be indicated.

Table 4. Results of codifications. **Note:** The articles were classified by the year of publication, as in [Table 1](#t0005){ref-type="table"}. Each row lists an article followed by its codes for classifications 1-10.

1: 1A, 2A, 3A, 4B, 5A, 6A, 7A/7B, 8A, 9B, 10F
2: 1B, 2A, 3C, 4C, 5C, 6A, 7B, 8B, 9F, 10C
3: 1B, 2A, 3A, 4D, 5D, 6A, 7A, 8A, 9F, 10E
4: 1B, 2A/2B, 3A/3B/3C/3D/3E, 4E, 5C, 6A, 7A/7B, 8C, 9A, 10A
5: 1B, 2B, 3D, 4D, 5D, 6A, 7A/7B, 8B, 9F, 10E
6: 1B, 2A/2B, 3D, 4A, 5C, 6B, 7B, 8C, 9A, 10G
7: 1B, 2B, 3B, 4C, 5C, 6A, 7C, 8A, 9A, 10A/10G
8: 1A, 2A, 3C, 4B, 5C, 6A, 7B, 8A, 9C, 10F
9: 1B, 2B, 3D, 4D, 5C, 6A, 7B, 8B, 9F, 10B
10: 1A, 2A, 3A, 4D, 5C, 6A, 7B, 8C, 9E, 10F
11: 1B, 2A, 3A, 4B, 5C, 6B, 7A/7B, 8A/8B, 9A/9B/9C, 10G
12: 1A, 2B, 3D, 4B, 5B, 6A, 7A, 8D, 9F, 10F
13: 1A, 2A, 3D, 4D, 5B, 6A, 7B, 8A, 9F, 10F
14: 1B, 2B, 3D, 4E, 5C, 6A, 7B, 8A, 9A, 10C
15: 1B, 2A, 3D, 4C, 5C, 6A, 7A/7B, 8A, 9F, 10G
16: 1A, 2A, 3D, 4B, 5C, 6A, 7A, 8D, 9E, 10F
17: 1B, 2A, 3C, 4A, 5D, 6A, 7B, 8A, 9A, 10C
18: 1B, 2B, 3D, 4B, 5C, 6A, 7C, 8D, 9F, 10B
19: 1A, 2A, 3D, 4D, 5C, 6B, 7B, 8C, 9E, 10F
20: 1A, 2B, 3D, 4D, 5C, 6B, 7C, 8C, 9E, 10F
21: 1B, 2B, 3C, 4D, 5B, 6A, 7A, 8D, 9D, 10G
22: 1A/1B, 2B, 3D, 4C, 5C, 6A, 7C, 8D, 9F, 10F/10G
23: 1A, 2A, 3A, 4D, 5B, 6A, 7C, 8C, 9D, 10F
24: 1A/1B, 2B, 3D, 4D, 5C, 6B, 7B, 8C, 9D, 10D/10F
25: 1A, 2B, 3E, 4B, 5C, 6A, 7B, 8A, 9E, 10F
26: 1A, 2B, 3D, 4A, 5C, 6A, 7B, 8D, 9E, 10F
27: 1A/1B, 2A, 3A, 4C, 5C, 6A, 7C, 8D, 9A, 10C/10F
28: 1A/1B, 2B, 3B, 4B, 5C, 6A, 7C, 8D, 9A, 10C/10F
29: 1A, 2B, 3D, 4D, 5C, 6B, 7B, 8C, 9B, 10F
30: 1A, 2A, 3D, 4B, 5C, 6B, 7B, 8C, 9F, 10F
31: 1A/1B, 2B, 3D, 4D, 5B, 6A, 7A, 8B, 9F, 10F/10G
32: 1A, 2A, 3D, 4D, 5C, 6B, 7C, 8D, 9D, 10F
33: 1A/1B, 2A, 3A, 4D, 5D, 6A, 7A, 8B, 9E, 10F/10E
34: 1B, 2B, 3D, 4B, 5C, 6A, 7C, 8D, 9A, 10G
35: 1B, 2B, 3D, 4C, 5C, 6A, 7B, 8A, 9A, 10C
36: 1B, 2A, 3C, 4D, 5C, 6A, 7A, 8A, 9A, 10G
37: 1A/1B, 2A, 3D, 4C, 5C, 6A, 7A/7B, 8C, 9E, 10A/10F
38: 1B, 2B, 3D, 4A, 5C, 6A, 7B, 8D, 9A, 10A
39: 1A, 2B, 3D, 4C, 5C, 6A, 7C, 8D, 9F, 10F
40: 1B, 2B, 3D, 4C, 5C, 6A, 7A/7B, 8A, 9A, 10C
41: 1B, 2B, 3E, 4C, 5D, 6A, 7B, 8D, 9B, 10G
42: 1B, 2B, 3E, 4D, 5C, 6A, 7C, 8D, 9B, 10C
43: 1B, 2A, 3C, 4A, 5C, 6A, 7B, 8A, 9C, 10C
44: 1B, 2B, 3E, 4C, 5C, 6A, 7A/7B, 8A, 9A, 10C
45: 1A/1B, 2B, 3D, 4B, 5C, 6B, 7B, 8B, 9E, 10B/10F
46: 1B, 2B, 3D, 4C, 5C, 6A, 7A, 8B, 9A, 10A
47: 1A/1B, 2A, 3D, 4D, 5C, 6A, 7B, 8D, 9A, 10C/10F/10G
48: 1A/1B, 2B, 3D, 4B, 5C, 6A, 7C, 8D, 9E, 10C/10F
49: 1B, 2B, 3D, 4C, 5C, 6A, 7B, 8B, 9A, 10D
50: 1A/1B, 2A, 3D, 4D, 5C, 6A, 7B, 8C, 9E, 10F/10C
51: 1A, 2B, 3D, 4D, 5D, 6A, 7B, 8D, 9E, 10F
52: 1B, 2B, 3E, 4C, 5C, 6A, 7A, 8C, 9E, 10A/10G
53: 1B, 2B, 3D, 4D, 5C, 6A, 7A/7B, 8B, 9B, 10G
54: 1B, 2B, 3D, 4C, 5C, 6A, 7A, 8B, 9C, 10C
55: 1B, 2A, 3C, 4C, 5C, 6A, 7A, 8B, 9C, 10C
56: 1B, 2A, 3C, 4D, 5C, 6B, 7A/7B, 8A/8B, 9A, 10G
57: 1A, 2A, 3A, 4D, 5C, 6A, 7C, 8C, 9A, 10F
58: 1A/1B, 2B, 3D, 4C, 5C, 6A, 7C, 8D, 9F, 10A/10F
59: 1A, 2B, 3E, 4D, 5B, 6B, 7C, 8D, 9C, 10F

The first classification to be analysed addresses the type of two-stage DEA model adopted in the studies, with the following coding possibilities: A --- internal two-stage DEA model, and B --- external two-stage DEA model. There were 18 studies using the internal two-stage model, 29 using the external, and 12 combining the internal and external models. These results are shown in [Fig. 4](#f0020){ref-type="fig"} and indicate that, when the term two-stage is used in banking studies, models that apply some technique after measuring efficiency by DEA predominate. Interestingly, few studies used both the two-stage DEA model that overcomes the black box problem (internal) and the model that allows a more complete analysis (external). Considering this aspect, the following gap emerges:$G_{1}$: More studies could combine internal and external two-stage DEA models. These two types of two-stage models are becoming increasingly common in the literature and can complement each other to make the analysis even more realistic and complete. While internal two-stage DEA models overcome limitations related to the production process, the use of some technique in the second stage enables a more in-depth analysis. Fig. 4. Frequency distribution for Classification 1. Analysing the studies over time, it can be seen that combining the two types of two-stage DEA models has occurred mostly in more recent years.
The first study in the analysed sample that combined the two types of model was [@b0690], followed by [@b0290], [@b0650]. In 2017 alone, four studies combined these models --- of all the articles that applied both internal and external two-stage DEA models, 33.33% were published in 2017. Two-stage DEA models may be vulnerable to the problem of separability, as discussed in [@b0570], [@b0115]. Considering this potential drawback, we suggest the tests presented in [@b0115] to verify the separability condition. If the separability assumption is violated, conditional efficiency models could be used, following [@b0135], [@b0120], [@b0360], [@b0355], [@b0445], [@b0630]. The second classification, which considers the economic context of the countries analysed, has the following coding possibilities: A --- mature economy; B --- non-mature economy; and C --- not applicable, which corresponds to studies that had no empirical analysis. Twenty-three studies were conducted in countries considered mature, or developed, economies (i.e., they were coded with the letter A), whereas 33 studies were conducted in non-mature, emerging economies ([Fig. 5](#f0025){ref-type="fig"}). Despite the predominance of studies in less developed economies, this difference is not very large, which indicates that the literature is not prioritizing one type of economy over another but rather analysing the banking sector in different economic contexts. Fig. 5. Frequency distribution for Classification 2. It is interesting to note that although non-mature economic contexts predominate in the sample as a whole, this was not the case for the older articles, in which analyses of the banking sectors of mature economies prevailed. Upon analysing the coding of the ten oldest articles in the sample, it could be seen that five considered mature economies, two considered both economic contexts, and only three dealt with non-mature economies.
This indicates that --- as discussed by [@b0660] --- less developed countries were overlooked, something that has been reversed over time. Comparing the publications on internal two-stage DEA models with those on external ones reveals a large difference in economic contexts. In articles using the first type of model, there was a slight predominance of publications on developed economies (10 *versus* 8), whereas for the second type, 18 articles focused on the banking sector of non-mature economies *versus* 9 on mature economies. Given this, the following question emerges:$\mathit{Ge}_{1}$: When the two-stage DEA model used is the external type, why are researchers prioritizing less developed economic contexts, in contrast to studies that have used internal two-stage DEA models and to the observation by [@b0660] that more developed countries are generally the more frequent focus of studies? One possible answer is that, due to the instability of non-mature economies, environmental factors tend to exert a greater influence on efficiency --- something that the researcher must consider. This hypothesis remains untested but could be verified in future studies. Only two studies ([@b0600], [@b0495]) have considered these two contexts simultaneously --- both involved external two-stage models. These two studies found that the difference in context between one country and another has a significant effect on efficiency. Therefore, more research is needed that analyses different economic contexts in the same study. It is worth highlighting that [@b0490] found the same gap when reviewing studies on bank branches, which indicates that this gap has existed for some time and has not been explored by researchers.
One difficulty could be obtaining data from more than one country, or comparing banks in different countries when using DEA, a relative-efficiency technique; however, as some researchers --- for example, [@b0495], [@b0600] --- have managed to overcome such limitations, others could do the same by following these authors. Thus, the gap resulting from classification 2 is as follows:$G_{2}$: Given that the economic context can have a significant effect on efficiency, more research that considers these different contexts --- such as that by [@b0495], [@b0600] --- is needed. The geographical region of the countries evaluated is identified in classification 3. This classification, which aggregates information from the second classification, has the following coding options: A --- North America, B --- South America, C --- Europe, D --- Asia, E --- other regions, and F --- not applicable. Eight studies were conducted in North America, two in South America ([@b0575], [@b0660]), eight in Europe, 34 in Asia, six in other regions (Africa, Oceania, and Central America), and only one in more than one continent ([@b0495]), as shown in [Fig. 6](#f0030){ref-type="fig"}. When considering how articles studied the different geographic regions over time, as well as the type of two-stage DEA model used, no large variations were observed. Fig. 6. Frequency distribution for Classification 3. There was a large concentration of studies on the Asian continent --- nine studies in China, seven in Taiwan, and five in Iran. Despite the predominance of publications studying the banking sectors of Asian countries, the United States was the focus of six studies. A similar situation occurred with Greece, with four articles. The USA and Greece accounted for virtually all the publications in their respective regions. Thus, the following gap was identified:$G_{3}$: Why are researchers so focused on studying the Asian continent?
In general, the other continents need more research, especially analyses that encompass more than one continent. Furthermore, various countries have no publications at all, for example, Latin American countries (excluding Brazil) and European countries (excluding Greece and Germany). Studies in other countries are also needed. Another relevant aspect is that only the study of [@b0495] was conducted in more than one continent, which indicates a clear need for more research that considers different continents. Although the gap discussed in classification 2 is related to this, the focus there was on considering different economic contexts rather than different geographic regions. With this in mind, the following gap was identified:$G_{4}$: Studies that consider different geographic regions are necessary so that international evidence can be found regarding the impact that a given environmental variable has on efficiency. Comparisons between different studies in the literature are complicated by the fact that authors can use different DEA models, as well as different variables as inputs or outputs, making it difficult to compare the results found. A researcher maintaining the same model and the same variables across continents would solve this problem and would also make it possible to determine how the impact of these non-discretionary variables on efficiency changes across continents, thus enabling an international comparison. Regarding classification 4, five coding possibilities were elaborated to categorize the objectives of the articles as follows: A --- determine efficiency variation over time due to reforms, B --- measure efficiency and indicate benchmarks, C --- analyse the impact of non-discretionary variables on efficiency, D --- propose an extension or a new DEA method/model, and E --- international comparison. [Fig.
7](#f0035){ref-type="fig"} shows the classifications of the studies: five were classified as 4A (i.e., they aimed to determine the impact of reforms and regulations on banking efficiency), twelve as 4B (comprising the studies that measured efficiency and discussed benchmarks for improving inefficiencies), seventeen as 4C (determination of the impact of non-discretionary variables on efficiency), twenty-three as 4D (representing the studies that proposed new models or adaptations of two-stage DEA models with applications in the banking sector), and only two as 4E, whose main objective was international comparison. A gap can thus be seen in this last coding; however, as it has already been discussed, no new gap was identified. It is worth mentioning that, when analysing the objectives of the articles over time, interest in determining the impact of non-discretionary variables has increased. Fig. 7. Frequency distribution for Classification 4. The slight concentration of studies in coding 4D was expected, given the predominance of publications in high impact journals, which, in turn, demand a certain degree of innovation from researchers, whether in changes to the mathematical formulations of the model or in the combination of new techniques with DEA in the second, external stage. The coding with the second most published articles consists basically of one of the essences of the external two-stage DEA models, i.e., determining the impact of exogenous variables on efficiency.
Various internal factors of banks were considered: size ([@b0575], [@b0690]); state or private control ([@b0575], [@b0660], [@b0580], [@b0590]); foreign or domestic ownership ([@b0590], [@b0660]); dividend payment policies ([@b0665]); capitalization ([@b0590]); profitability ([@b0540]); intellectual capital ([@b0650]); and risk ([@b0645], [@b0625]); as well as macroeconomic factors ([@b0690]), such as the country's gross domestic product and inflation; industry factors ([@b0215]); and, finally, factors unique to each country ([@b0495]). One topic of interest identified was the objective of assessing possible Mergers and Acquisitions (M&A) ([@b0655], [@b0660], [@b0675], [@b0670], [@b0265]). When the types of two-stage DEA models are segregated, large variations in the coding results are expected, given that the internal and external models overcome distinct limitations of traditional DEA models. Thus, for the internal models, only [@b0505] aimed to analyse the impact of exogenous variables on efficiency. These authors calculated three different models --- one not including risk and two with this variable modelled in the NDEA. The predominant objective in this type of two-stage model was to propose extensions of the DEA models (10 publications), followed by the indication of benchmarks (6 publications). Regarding the external models, the main objective was to analyse the impact of exogenous variables --- 12 of the 29 studies had this objective. Eight studies proposed extensions of the DEA models, with the use of new techniques in the second stage or changes in the mathematical formulations of the model; four specifically analysed variations in efficiency due to banking freedom or deregulation; three sought to indicate benchmarks for improving efficiency; and the aim of two was international comparison.
As highlighted in [@b0120], the Global Financial Crisis (GFC) of 2007--2008 demonstrated the weaknesses of banking systems and the importance of understanding the mechanisms that improve bank performance. Considering the impact of financial institutions on the economy ([@b0475], [@b0485], [@b0640]), and since banks are often the target of new regulations after fiscal crises ([@b0265], [@b0600]), we consider the following gap related to the external models, as internal models would not be appropriate for addressing these situations:$\mathit{Ge}_{2}$: How is banking efficiency affected by economic crises and by changes in the regulation of the sector? This question is even more relevant in the contemporary context, given the global economic recession triggered by the COVID-19 pandemic. Some studies identify a relevant relationship between the economic environment and efficiency in the banking sector. For instance, [@b0210], [@b0220] identify an overall worsening in the efficiency of Turkish banks due to the economic environment. In [@b0210], the negative impact could be explained by a specific crisis in the country, whereas in [@b0220] the explanation could be related to the GFC. Similarly, [@b0130] and [@b0355] observe evidence of a degradation in the productivity of European banks during the GFC. In addition, [@b0120] detect a positive nonlinear relationship among financial centres' competitiveness, banks' stability, and innovation capacity levels. Future research could include further discussion regarding how financial crises and changes in regulations affect the efficiency of banks. Classification 5 discusses the level of research, with the following coding possibilities: A --- exploratory, B --- descriptive, C --- explanatory, and D --- predictive. One study was classified as exploratory, six as descriptive, forty-six as explanatory, and six as predictive ([Fig. 8](#f0040){ref-type="fig"}).
Only the study of [@b0425] was classified as exploratory, precisely because it was the first conducted in the group of studies analysed, thus providing guidance for future research. It is worth mentioning that the authors themselves also classified their study as such. When separating the types of two-stage models, no large variation was perceived that would justify a segregated analysis. Fig. 8. Frequency distribution for Classification 5. The six studies classified as predictive were those of [@b0685], [@b0460], [@b0265], [@b0385], [@b0670], [@b0545]. [@b0265], [@b0670] sought to predict the efficiency behaviour of Greek and Mozambican banks, respectively, under possible M&As and, in the case of [@b0670], under changes in the majority shareholder, for example, if a public bank were acquired by a private bank. [@b0385], [@b0685], [@b0460] combined DEA with ANN techniques to develop models to predict bank performance, while [@b0545] developed a production possibility set (PPS) for M&As. Considering the small number of predictive studies, the following gap was identified:$G_{5}$: As most of the articles in the literature on two-stage DEA are focused on explaining the efficiency scores found *ex-post facto*, there is a lack of studies seeking to predict efficiency behaviour in certain situations *ex-ante facto* --- for example, in M&As, at times when the economy is heating up or cooling down, or under a new specific regulation. In short, there is a need for more predictive research. It is worth noting that despite there being only one exploratory study ([@b0425]), the need for further studies at this level of research was not suggested, as this type of research is the first stage of an investigation of a certain topic: it would require researchers to identify something that the literature has not yet discussed. Given the complexity of this, we decided not to indicate gaps in this sense.
Classification 6 analyses the type of DEA model used and is coded with the following letters: A --- radial and B --- non-radial. Most of the articles (48) adopted radial models, whereas 11 articles involved non-radial models, using the SBM model. This indicates that in most of the studies, for a bank to become efficient, it must make proportional changes in its inputs or outputs, given that this is one of the characteristics of radial models. These results are shown in [Fig. 9](#f0045){ref-type="fig"}. Fig. 9. Frequency distribution for Classification 6. Regarding classification 7, which analyses the returns to scale adopted, models involving variable returns to scale were the most used (present in 33 studies), whether individually, combined with constant returns models to identify whether the banks presented increasing, constant, or decreasing returns to scale, or within more complex models (e.g., network or fuzzy). Eight articles exclusively used the CRS model. The low utilization of the CRS model is precisely due to the aspect discussed by [@b0250] --- the constant returns to scale model should only be used if all analysed DMUs are operating at the optimal level, which is very difficult in real terms. [Fig. 10](#f0050){ref-type="fig"} shows the results for this classification. Fig. 10. Frequency distribution for Classification 7. The segregated analysis of internal and external models does not add new information to the discussion of classifications 6 and 7. In both the internal and external models, articles adopting variable returns to scale predominated, with the small difference that the CCR model was used proportionally more in the external models than in the internal ones. However, this greater use of the CCR model is mainly due to its joint use with the BCC model, which makes it possible to calculate the SE.
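For reference, the only formal difference between the CCR and BCC envelopment models discussed here is a single convexity constraint. A standard statement of the input-oriented form, for m inputs x, s outputs y, n DMUs, and the DMU under evaluation indexed by o, is:

```latex
\begin{aligned}
\theta^{*} = \min_{\theta,\,\lambda}\;& \theta \\
\text{s.t.}\;& \sum_{j=1}^{n}\lambda_{j}x_{ij} \le \theta x_{io}, \qquad i=1,\dots,m,\\
& \sum_{j=1}^{n}\lambda_{j}y_{rj} \ge y_{ro}, \qquad r=1,\dots,s,\\
& \lambda_{j} \ge 0, \qquad j=1,\dots,n \quad \text{(CCR, constant returns)},
\end{aligned}
```

with the BCC (variable returns) model obtained by adding $\sum_{j=1}^{n}\lambda_{j}=1$. The joint use of the two models mentioned above then yields the scale efficiency as $SE=\theta^{*}_{CRS}/\theta^{*}_{VRS}\le 1$.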
Thus, the results found in these classifications indicate that radial models account for the vast majority of DEA models used and confirm that researchers have sought to work with variable rather than constant returns to scale models, following the recommendation of [@b0250]. Because these are methodological aspects of the DEA model, no gap was identified in this classification. Classification 8, which verifies the orientation of the DEA model in the analysed studies, had the following coding possibilities: A --- input-oriented model, B --- output-oriented model, C --- unoriented, and D --- not identified. Fourteen studies followed the input orientation; eleven followed the output orientation; two estimated the DEA model oriented first to inputs and later to outputs ([@b0475], [@b0485]); thirteen adopted an unoriented model, which seeks both the minimization of inputs and the maximization of outputs; and nineteen studies did not specify the orientation adopted in their models. It is worth highlighting that unoriented models were much more frequent in articles that used internal two-stage DEA models. These results are shown in [Fig. 11](#f0055){ref-type="fig"}. Fig. 11. Frequency distribution for Classification 8. The slight predominance of articles adopting the input rather than the output orientation reveals that researchers have generally followed the argument of [@b0515] that banks do not have control over their outputs and, therefore, that input orientation is more appropriate. However, at the time of the study by [@b0515], DEA models that simultaneously minimize inputs and maximize outputs were not yet popular. Thus, in the current context, the recommendation of these authors may not be as strong and relevant as when it was proposed.
Classification 9 deals with the approach used in the studies for selecting the model's variables, which defines the scope of the analysis, with the following possible codings: A --- intermediation approach, B --- production approach, C --- profit approach, D --- other ways of selecting variables, E --- combination of approaches, and F --- not specified. Thirteen studies did not specify or did not use a specific approach (many of these studies simply replicated the variables used by other authors). Most of the studies (18) followed only the intermediation approach. Five used the production approach, five used the profit approach, and thirteen combined more than one approach (i.e., in the first stage, they analysed the bank's efficiency in one function and, in the second stage, in another). These results are shown in [Fig. 12](#f0060){ref-type="fig"}. Fig. 12. Frequency distribution for Classification 9. One important observation is that the suggestion of [@b0060] --- that studies dealing with branches should choose the production approach, whereas studies on banks should choose the intermediation approach --- is not being followed, given the small number of articles that have used the production approach. Interestingly, despite the predominance of the intermediation approach for variable selection, especially in studies that used external two-stage DEA models, among the studies that used internal two-stage DEA models only [@b0295], [@b0215], [@b0650] used the intermediation approach. It is worth highlighting that most of these studies combined more than one approach (i.e., they were coded as 9E), seeking to treat deposits as intermediate or to analyse different bank functions. In this context, deposits were the variable most often used as intermediate.
Another aspect that should be highlighted is the low number of studies that used more than one approach --- only [@b0485] used at least three approaches in different efficiency estimates to analyse how efficiency varied from one model to another. This is especially relevant in the case of the banking sector because, as discussed in [@b0285] regarding the deposits variable, the way a variable is treated influences which banks the model indicates as efficient. Additionally, there is no consensus in the literature as to the most appropriate approach to measure bank efficiency, as each approach analyses the bank from a different perspective. Given the above, the following gaps emerge, in which $G_{7}$ is a motivating gap for the use of internal two-stage DEA models, considering that only internal models resolve this issue: $G_{6}$: Given that DEA results are quite sensitive to the variables included in the model, studies considering more than one approach are important to verify how the results behave with different variables. $G_{7}$: Given the difficulty in dealing with the deposits variable (a judgement call must be made by the researcher), a new group of studies has been directing how to treat this variable ([@b0285], [@b0220]), namely, to consider it as an intermediate variable. [@b0220], [@b0125] argue that considering deposits as an intermediate variable provides a plausible solution to this dilemma, since this variable can be both input and output ([@b0060]). Thus, the double role that the deposits variable can play remains intact. Future research could take this aspect into account and conduct studies treating this variable as such.
The last classification to be discussed --- Classification 10 --- verified the procedures adopted by the researcher that characterize the article as a two-stage DEA model (either internal or external) and had the following coding possibilities: A --- Tobit, B --- AHP, C --- bootstrapped truncated regression, D --- OLS, E --- artificial neural networks, F --- intermediate variables, and G --- other techniques. The results in this classification showed that three studies used only Tobit in the second stage ([@b0465], [@b0495], [@b0165]), two used only the AHP ([@b0370], [@b0040]), ten used only bootstrapped truncated regressions, one used only an OLS regression ([@b0540]), eighteen used only the intermediate variable model, two used only ANN ([@b0685], [@b0460]), and nine used only other techniques, such as stochastic simulations and the Monte Carlo algorithm ([@b0625]), support vector machines ([@b0670]), beta regressions ([@b0675], [@b0690]), and panel analysis ([@b0540]), among others. Finally, in 14 studies, more than one technique was used in the second stage. The results of this coding are shown in [Table 5](#t0025){ref-type="table"}.

Table 5 Classification according to item 10.

  Second Stage Procedure                  Number of Articles
  -------------------------------------- --------------------
  Tobit                                   3
  Analytical hierarchy process            2
  Bootstrapped truncated regression       10
  Artificial neural network               2
  Tobit and int. variables                2
  Tobit and others                        2
  AHP and int. variables                  1
  ANN and int. variables                  1
  Bootstrap and int. variables            4
  Bootstrap, int. variables and others    1
  OLS                                     1
  OLS and int. variables                  1
  Intermediate variables                  18
  Intermediate variables and others       2
  Others                                  9

Considering these results, it can be seen that the most frequent application of two-stage DEA models in banks is to use the outputs of the first stage as inputs of the second stage, that is, two-stage models concerned with the production process.
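The "intermediate variables" idea behind internal two-stage models can be reduced to a toy example in which stage-one outputs (deposits) become stage-two inputs for loan generation. The figures and the simple CRS ratio efficiencies below are illustrative assumptions, not drawn from any of the reviewed articles; with one input, one intermediate, and one final output under constant returns to scale, each stage's radial efficiency reduces to a normalised productivity ratio, and the overall score follows the common multiplicative (relational) decomposition.

```python
# Sketch of an internal two-stage chain: stage-1 output (deposits) is the
# stage-2 input. Hypothetical single-input/single-output banks under CRS.
import numpy as np

labour   = np.array([4.0, 5.0, 8.0])    # stage-1 input
deposits = np.array([20.0, 20.0, 24.0]) # intermediate variable
loans    = np.array([18.0, 15.0, 24.0]) # stage-2 (final) output

def crs_efficiency(x, y):
    """Radial CRS efficiency for one input and one output: a scaled ratio."""
    ratio = y / x
    return ratio / ratio.max()

stage1 = crs_efficiency(labour, deposits)   # deposit-generation efficiency
stage2 = crs_efficiency(deposits, loans)    # loan-generation efficiency
overall = stage1 * stage2                   # multiplicative decomposition
```

The decomposition shows why internal models are informative: the second bank looks mediocre overall, and the stage scores reveal that both its deposit generation and its loan generation lag the frontier, whereas the third bank is weak only in the first stage.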
Despite the predominance of external two-stage DEA models, as discussed in Classification 1, there is a wide variety of techniques that can be applied in the second stage, whereas all the articles concerning two internal stages inevitably used intermediate variables. For this reason, this predominance is, to some extent, expected. Considering only the articles on external models (whether combined with internal models or not), articles that adopted the bootstrapped truncated regression (15) slightly outnumbered those using other techniques in the second stage (14). Examining only the studies whose objective was to determine the impact of non-discretionary variables on efficiency (classified as code 4C), 11 of the 17 studies used bootstrapped truncated regression, making it the main technique for this purpose. Furthermore, following the trend --- verified in Classification 4 --- of researchers' growing interest in determining the impact of non-discretionary variables on efficiency over time, the bootstrapped truncated regression has become more popular in recent years. As in categories 5, 6, and 7, no gap was identified in this classification, as the objective was to verify which technique was predominant in the second stage. However, this evidence helps future researchers define which technique would be most appropriate for studying a specific phenomenon. It is important to emphasize that the results of studies that explored external models could be susceptible to drawbacks if the condition of separability does not hold, as discussed by [@b0565], [@b0570]. Therefore, future research could re-analyse these studies using the test proposed by [@b0115]. In this context, there are opportunities for research aiming not only to reproduce results but also to check the robustness of empirical results while taking the separability issue into account.
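The core of the bootstrapped truncated regression just discussed is a second-stage maximum-likelihood truncated regression of efficiency scores on environmental variables; the Simar-Wilson procedure then wraps a parametric bootstrap around this estimator. The sketch below shows only that core estimation step on simulated data (the coefficients, the environmental variable, and the truncation setup are invented for illustration), not the full bootstrap algorithm.

```python
# Sketch of the second-stage truncated regression at the heart of the
# Simar-Wilson procedure: scores bounded above at 1 are regressed on an
# environmental variable using a right-truncated normal likelihood.
# All data are simulated; this is not any reviewed study's specification.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0.0, 1.0, n)                   # environmental variable
score = 0.7 + 0.3 * z + rng.normal(0, 0.1, n)  # latent efficiency
keep = score <= 1.0                            # only scores <= 1 observed
y, zk = score[keep], z[keep]
Z = np.column_stack([np.ones(y.size), zk])

def negloglik(params):
    beta, sigma = params[:2], np.exp(params[2])
    mu = Z @ beta
    # log density of a normal right-truncated at 1:
    # log phi((y - mu)/sigma)/sigma - log Phi((1 - mu)/sigma)
    ll = norm.logpdf(y, mu, sigma) - norm.logcdf((1.0 - mu) / sigma)
    return -ll.sum()

# start from OLS on the truncated sample, then correct via the MLE
start = np.r_[np.linalg.lstsq(Z, y, rcond=None)[0], np.log(y.std())]
fit = minimize(negloglik, start, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-9})
beta_hat, sigma_hat = fit.x[:2], np.exp(fit.x[2])
```

Because the sample is truncated from above, plain OLS understates the slope; the truncation term in the likelihood corrects this. The full procedure would repeat this fit over bootstrap resamples of the (bias-corrected) scores to obtain valid confidence intervals.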
5.3. Exogenous variables {#s0050}
---------------------------------

In addition to the terminological discussions already covered in this study, another controversial aspect in the literature is the impact of exogenous variables on efficiency. Although this topic is specific to external two-stage DEA models, we chose to address it because of its relevance to this model type, as it is the most frequent motivation for using external two-stage models; however, more in-depth discussion is needed. We recognize that the results of these studies may suffer from the problem of separability. However, we understand that it is not possible to separate the application of a chain of two-stage DEA models from the analysis of the impact of exogenous variables. It is not the purpose of our research to discard all previous studies that used two-stage DEA models simply because they did not consider separability issues, but rather to propose an initial discussion on one of the most popular topics in the banking efficiency literature today. By providing an overview summarizing the results of a large number of current studies, we can also contribute to another controversial aspect: the impact of exogenous variables on efficiency. For a particular exogenous variable, such as bank profitability, the impact on efficiency is known to be quite ambiguous. This problem appears with practically all exogenous variables considered in the literature, with no consensus regarding the actual impact on bank efficiency. A possible explanation for the ambiguity is the use of different DEA models, which analyse different efficiency types, although the efficiency types are highly correlated ([@b0580]), as well as the use of different variable selection approaches.
To give this discussion direction, [Table 6](#t0030){ref-type="table"} presents information on the input and output variables, the type of efficiency analysed (PTE and TE), the variable selection approach used in each study, the exogenous variables used, and their impact on efficiency. Thus, the results obtained with respect to the effect of these environmental variables on efficiency can be analysed from the perspective of the approach used in each study, that is, in the different functions performed by the bank. It is worth emphasizing that [Table 6](#t0030){ref-type="table"} lists only the articles that analysed the effect of non-discretionary variables on efficiency.

Table 6 Inputs, outputs, exogenous variables and their impacts on efficiency, by approach and efficiency. Each entry lists: article number; type of efficiency; scope of analysis; inputs (1st stage); intermediate variables; outputs (2nd stage); exogenous variables; and their impact on efficiency.

- **37.** TE and PTE. Profitability (1st stage) and marketability (2nd stage). Inputs: number of employees, fixed assets and annual information technology expenditure. Intermediate: deposits, liabilities and ATFD (amount of trading by financial derivatives). Outputs: operating diversification, branches and non-performing loans recovered. Exogenous: two governance variables, Government Shareholdings (SOE) and Financial Holding Subsidiary (FHS); risk-factor variables, Exchange Rate Volatility (ERV), Interest Volatility (INV) and Long-term loan to capital (LCR); and a Basel III Accord variable, Capital Adequacy Ratio (CAR). Impact: **TE:** CAR, ERV, SOE and FHS positive, LCR negative. **SE:** CAR, SOE and FHS positive; ERV and IRV negative. **PTE:** CAR and FHS positive; ERV and IRV negative.
- **24.** PTE. Deposit generation (1st stage) and loan generation (2nd stage). Inputs: fixed assets, equity and personnel expenses. Intermediate: deposits and other raised funds. Outputs: gross loans, other earning assets and an undesirable output of non-performing loans. Exogenous: risk, asset liquidity, interest margin, shareholders behind and scale effect; macroeconomic factors: annual growth rate of GDP, annual growth rate of money (GRM) and market structure. Impact: **Overall:** positive: risk, liquidity, shareholders behind and size; negative: interest margin; other variables not statistically significant. **Stage 1:** positive: risk, liquidity, shareholders behind, size and GRM; other variables not statistically significant. **Stage 2:** positive: liquidity and assets; negative: interest margin, shareholders behind and GRM.
- **11.** SBM, TE and PTE. Intermediation, production and profit approaches. Inputs: production: nine personnel-related inputs; intermediation (5): cash balances, fixed assets, other liabilities, net non-performing loans, loan loss experience; profit (6): personnel expenses, occupancy/computer expenses, loan losses, cross charges, other expenses and sundry expenses. Intermediate: do not apply. Outputs: production (9): segregated by the three main customer types (retail: relationship, service, internal; commercial: relationship, service, internal; corporate: relationship, service, internal); intermediation (6): wealth management, home-owner mortgages, consumer lending, commercial loans, commercial deposits, consumer deposits; profit (7): commissions, consumer deposits, consumer lending, wealth management, home mortgages, commercial deposits and commercial loans. Exogenous: regions, market size and scale. Impact: some regions of Canada showed higher efficiency values for each model analysed; branches in rural markets performed better in profit and production than small urban and major urban branches; in the three efficiencies considered, increasing asset size results in a larger percentage of branches being classified as DRS.
- **48.** Efficiency not identified. Intermediation (1st stage) and profit (2nd stage). Inputs: fixed assets, number of employees and loanable funds (deposits and borrowings). Intermediate: advances and investment. Outputs: interest income and non-interest income. Exogenous: size, liquidity, profitability, risk, diversification, ownership, IC. Impact: **Intermediation efficiency:** size, liquidity and priority positive, IC negative, other variables not statistically significant. **Profit (operating) efficiency:** profitability and diversification positive, other variables not significant.
- **50.** PTE. Intermediation and profit. Inputs: operational expenses, loanable funds and capital stock. Intermediate: investment, performing loans and outputs that leave the production system (service revenues and non-performing loans). Outputs: interest income and investment revenue. Exogenous: ratio of investments to loans and ratio of non-performing loans to performing loans. Impact: no significance was found in the estimations assessed.
- **49.** Malmquist. Intermediation approach. Inputs: total deposits ($x_{1}$), total labour ($x_{2}$) and capital ($x_{3}$). Intermediate: do not apply. Outputs: total loans ($y_{1}$) and total investments ($y_{2}$). Exogenous: bank-specific (7): size, credit risk, capitalization, market power, liquidity, management efficiency and a dummy for domestic Islamic banks; macroeconomic: economic growth, inflation and the world financial crisis. Impact: TFPCH: capitalization negative; liquidity and the world financial crisis positive, although the relationship varies among models.
- **28.** Efficiency not identified. Intermediation approach. Inputs: number of branches and number of employees. Intermediate: administrative expenses and personnel expenses. Outputs: equity and permanent assets. Exogenous: size, public, domestic, foreign and recent M&A. Impact: **Cost efficiency:** size and recent M&A positive, other variables not significant. **Productive efficiency:** state-owned positive, recent M&A negative, other variables not significant.
- **27.** Efficiency not identified. Intermediation approach. Inputs: $X_{1}$: total liability ratio; $X_{2}$: total equity ratio; $X_{3}$: unit cost of employee. Intermediate: $Y_{1}$: profit ratio; $Y_{2}$: return on assets (ROA); $Y_{3}$: return on equity (ROE). Outputs: book-to-market equity ratio (B/M) and earnings-to-price ratio (E/P). Exogenous: intellectual capital, measured by human capital (HC), structural capital (SC) and relational capital (RC). Impact: the efficiency assessed is that of each subprocess combined through the relational network model; HC, SC and RC positively impacted efficiency.
- **34.** Efficiency not identified. Intermediation approach. Inputs: capital, deposits and labour. Intermediate: do not apply. Outputs: conventional banks: interest income, non-interest income and total loans; Islamic banks: financing income, non-interest income and total financing. Exogenous: three macroeconomic variables: gross domestic product (GDP), inflation and concentration; seven bank-specific variables: dummy for Islamic banks, size, capitalization, profitability, credit risk, diversification and market power. Impact: market power, being an Islamic bank, GDP, profitability and concentration positive; size, capitalization and diversification negative; inflation and credit risk not statistically significant.
- **17.** PTE. Intermediation approach. Inputs: deposits, number of employees and fixed assets. Intermediate: do not apply. Outputs: securities and loans. Exogenous: potential M&A. Impact: during the crisis, the vast majority of potential M&A did not generate efficiency gains; in the last year analysed, this situation changed, with an improvement in efficiency due to M&A.
- **14.** PTE. Intermediation approach. Inputs: interest expenses, operational expenses net of personnel expenses, personnel expenses and total deposits. Intermediate: do not apply. Outputs: performing loans, other earning assets, interest revenue and non-interest revenue. Exogenous: influence of integration and coordination efforts on banking efficiency and on convergence within the GCC countries. Impact: tests corroborate convergence in banking efficiency; integration and harmonization measures had a significant effect on efficiency and on the degree of homogeneity in the GCC banking industry.
- **35.** PTE. Intermediation approach. Inputs: total deposits, capital and personnel expenses. Intermediate: do not apply. Outputs: loans, investments and non-interest income. Exogenous: six bank-specific variables: LLP/TL (loan loss provisions to total loans), NII/TA (non-interest income to total assets), NIE/TA (non-interest expenses to total assets), LOANS/TA, LN(TA), EQASS; five external factors: LN(GDP), LN(INFL), LN(CR3), LN(Z-score), LN(MKTCAP/GDP); bank ownership (foreign, governmental, publicly listed). Impact: positive: size (LN(TA)), capitalization (LN(EQASS)), diversification (LN(NII/TA)), GDP, CR3, Z-score (proxy for the sector's default risk) and foreign ownership; negative: LN(MKTCAP/GDP) (proxy for financial market development), public listing and governmental ownership; other variables not statistically significant.
- **38.** PTE. Intermediation approach. Inputs: total funding, fixed assets and number of employees. Intermediate: do not apply. Outputs: net profit and other earning assets. Exogenous: governance reform variables: foreign partial acquisition, public listing, short-term and long-term foreign partial acquisition, short-term and long-term public listing; control variables: time, state-owned banks, equity to total assets and GDP growth. Impact: public listing, time, state-owned banks, equity to total assets and GDP growth positive; foreign partial acquisition negative; other variables not statistically significant.
- **47.** PTE. Intermediation approach. Inputs: number of employees and physical capital. Intermediate: deposits. Outputs: performing loans, securities investments and a bad output (non-performing loans). Exogenous: capitalization, net interest margin (NIM), risk, industrial index, bankrupt loans (BRL). Impact: capitalization, NIM, risk and BRL negative; industrial index positive.
- **6.** SBM-PTE. Intermediation approach. Inputs: deposits, labour, capital and physical capital. Intermediate: do not apply. Outputs: loans adjusted for non-performing loans, investments and other earning assets, fee income and off-balance-sheet items. Exogenous: restructuring measures: dummy variables for domestic bank mergers (MER), foreign bank entry (FOR) and state intervention (SI); five country-specific factors: market concentration index (MC), interbank interest rate (INT), intermediation ratio (IR), per capita GDP (PCGDP) and IMF support (IMFS); control variable: size. Impact: the impacts analysed concern not the efficiency index but the slacks of the inputs; several variables had an impact on these slacks.
- **7.** TE. Intermediation approach. Inputs: labour, capital and purchased funds. Intermediate: do not apply. Outputs: total loans net of provision loans, deposits and investments. Exogenous: size, ownership, non-performing loans (NPL), market share (MS), equity and activity. Impact: **Allocative efficiency:** NPL and equity negative; MS, domestic ownership and state ownership positive. **Technical efficiency:** MS positive. **Cost efficiency:** MS and state ownership positive; the previous year's MS negative. Other variables not statistically significant.
- **46.** TE. Intermediation approach. Inputs: third-party funds, total assets and labour costs. Intermediate: do not apply. Outputs: financing and operating income. Exogenous: assets, number of bank branches (BRANCHES), return on assets (ROA), capital adequacy ratio (CAR) and non-performing financing (NPF). Impact: negative: assets and ROA; positive: branches; other variables not statistically significant.
- **4.** TE and PTE. Intermediation approach. Inputs: total deposits, total costs (interest and non-interest expenses) and equity. Intermediate: do not apply. Outputs: loans, other earning assets and non-interest income. Exogenous: five bank-specific variables: LOGTA (logarithm of the bank's total assets, controlling for size), EQAS (equity-to-assets ratio, controlling for capital strength), LOANTA (net loans to total assets, a measure of loan activity), ROE (pre-tax profit divided by equity) and EXPTA (non-interest expenses to assets); 12 country-specific variables. Impact: **PTE:** statistically significant country-specific variables: protection of private property rights, market capitalization to GDP, bank claims to GDP, number of branches and ATMs relative to the population, presence of government-owned and foreign-owned banks, and concentration; positive: larger size and lower loan activity; not significant: capitalization, profitability and expenses relative to assets.
- **44.** TE and PTE. Intermediation approach. Inputs: fixed assets, deposits and staff expenses. Intermediate: do not apply. Outputs: investment, net loans and fees. Exogenous: size, bank asset concentration, leverage, loan loss provisions to loans (LLP), ratio of loans to total assets (LOTA) and ROA. Impact: **TE:** size, LLP and LOTA negative, other variables not statistically significant. **PTE:** size, LLP and LOTA negative, other variables not statistically significant.
- **40.** TE and PTE. Intermediation approach. Inputs: number of employees, purchased funds and customer deposits. Intermediate: do not apply. Outputs: customer loans, other loans and securities. Exogenous: ROA, COA, city, size, branches, age and the ratio of non-performing loans to customer loans. Impact: **TE:** ROA, size and city positive; number of branches and age (number of years the bank existed before 2009) negative. **PTE:** ROA positive; number of branches and age negative; other variables not statistically significant.
- **22.** Efficiency and approach not identified. Inputs: employees, assets and net assets. Intermediate: deposits, loans, income and interest income. Outputs: net interest income, net service income and profit. Exogenous: weight of shares held by the top 5 shareholders, weight of shares held by foreign strategic shareholders, real GDP and the CPI. Impact: the three market power proxies and CPI positive; other variables not statistically significant.
- **2.** PTE. Approach not identified. Inputs: personnel expenses, branch space, other expenses and a risk index. Intermediate: do not apply. Outputs: commissions, deposits and loans. Exogenous: two agency-specific variables: public transportation and automatic teller frequency; other variables: potential customers and competitive environment. Impact: no significant impact was found.
- **15.** TE and PTE. Approach not identified. Inputs: number of operational staff, number of business personnel, branch office rent and operating expenses. Intermediate: do not apply. Outputs: net interest spread income and net fee income. Exogenous: two external economic environment variables: real GDP growth and the Consumer Price Index (CPI); three agency-specific variables: branch floor area, years of operation and loan amount. Impact: the impact analysed concerns not the efficiency index but the slacks of the inputs.
- **58.** Efficiency not identified. Operational efficiency and market efficiency (2nd stage). Inputs: net assets, total assets and employees. Intermediate: deposits, loans and service income. Outputs: net income, ROA and ROE. Exogenous: two foreign capital participation proxies; control variables: capital structure, real GDP, money supply growth rate and the weight of bank loans in total capital formation; a dummy for private or state ownership and the percentage of employees with a diploma. Impact: **Market efficiency:** positive: foreign ownership and money supply growth rate; negative: real GDP; other variables not statistically significant.
- **42.** Efficiency not identified. Production approach. Inputs: $X_{1}$: total costs; $X_{2}$: employee costs. Intermediate: do not apply. Outputs: $Y_{1}$: total deposits; $Y_{2}$: income before tax; $Y_{3}$: total credit. Exogenous: control variables (5): price of labour, price of capital, price of deposits, trend, market share; contextual variables (5): foreign ownership, government ownership, M&A, IFRS accounting policy and active dividend policy. Impact: foreign ownership, government ownership, recent M&A, active dividend policy and trend not statistically significant; price of deposits, price of labour, IFRS accounting principles and market share significant, with the sign of the relationship depending on the reliability of the input and output variables.
- **1.** PTE. Production approach. Inputs: number of employees, equity and total assets. Intermediate: profit and revenue. Outputs: market value, earnings per share (EPS) and stock price. Exogenous: bank's location. Impact: no relevant impact of the bank's location on efficiency.
- **41.** PTE. Production approach. Inputs: 17 variables related to banking activity. Intermediate: do not apply. Outputs: 17 variables. Exogenous: foreign ownership, government ownership, recent M&A and same Generally Accepted Accounting Principles. Impact (on the virtual efficiency of M&A): foreign ownership, government ownership and same accounting principles positive; recent M&A not statistically significant.
- **53.** TE and PTE. Production approach. Inputs (8): reserves for impaired loans, equity, impaired loans, operational cost, personnel expenses, number of employees, number of branches and depreciation. Intermediate: do not apply. Outputs (8): total assets, fixed assets, gross loans, total securities, total customer deposits, pre-tax profit, net interest income and total non-interest operating income. Exogenous (8): (1) listed in the stock market; (2) foreign bank; (3) big bank; (4) Tier 1 ratio; (5) total capital ratio; (6) interest expense on customer deposits/average customer deposits; (7) national/regional; and (8) cost of deposits. Impact: being national and listed in the stock market increases the likelihood of a bank being efficient, whereas (3) being a big bank, (4) the Tier 1 ratio, (5) the total capital ratio and (6) the relative interest expense on customer deposits decrease that likelihood.
- **52.** TE. Production approach (1st stage) and intermediation (2nd stage). Inputs: employees, fixed assets and operational expenses. Intermediate: deposits and loans. Outputs: interest income and non-interest income. Exogenous: trend, trend^2^, commercial, local. Impact: gains from M&A are likely to be higher when the two banks are commercial and smaller and when banks are local.
- **43.** PTE. Profit approach. Inputs: operational expenses and loan loss provisions. Intermediate: do not apply. Outputs: fees and income. Exogenous: two agency-specific variables: diversification (DIV) and the ratio of loans to deposits (LD); four control variables: return on capital (ROC), size, Location^1^ and Location^2^. Impact: DIV, ROC and Location^1^ positive; LD, size and Location^2^ negative.
- **54.** TE. Profit approach. Inputs: total interest expenses and non-interest expenses. Intermediate: do not apply. Outputs: aggregated net income. Exogenous: ratio of other earning assets to loans (OEA/L), ratio of other earning assets to total earning assets (OEA/TEA), ratio of non-earning assets to total assets (NEA/TA) and ratio of deposits to loans. Impact: negative: OEA/L and OEA/TEA; positive: NEA/TA and the ratio of deposits to loans.
- **55.** TE. Profit approach. Inputs: operating expenses and interest expenses. Intermediate: do not apply. Outputs: total income. Exogenous: bank-specific factors: capitalization, liquidity, risk, profitability, credit risk, an asset quality proxy and size; macro-environmental variables: annual GDP growth rate and current-period inflation. Impact: positive: capitalization, profitability, size and GDP; negative: liquidity risk, credit risk, the asset quality proxy and inflation.

[^1][^2][^3][^4][^5][^6][^7]

Even when comparing the impact of non-discretionary variables in similar contexts, ambiguous results are observed.
For example, [@b0590] --- article number 35 in [Table 6](#t0030){ref-type="table"} --- found a positive effect of capitalization on PTE, while [@b0215] --- article number 47 --- found a negative impact, even though both studies followed the intermediation approach and measured the same efficiency. The same occurred with the size variable, which negatively influenced both TE and PTE in the study of [@b0015], while [@b0580] identified a positive impact of size on TE, although these authors followed the same approach. Similar results were verified in other studies. Recognizing that it is not possible to accurately determine the impact of non-discretionary variables on efficiency, in view of the ambiguous results found in the literature (even accounting for the bank function analysed and the type of efficiency), [Table 6](#t0030){ref-type="table"} can serve as a background for comparing future studies with those already in the literature.

6. Conclusions {#s0055}
=======================

Two-stage DEA models have been gaining prominence in research on efficiency in the banking sector because they overcome the limitations of traditional DEA models. Recognizing the existence of several controversial aspects in the literature, from the two-stage terminology itself to the application of these models in banks, this study analysed 59 articles related to two-stage DEA models in banks. All of these studies were found using the *Scopus* and *Web of Science* databases and Elsevier's *ScienceDirect* search engine. This study followed the steps proposed by [@b0390] to review the literature. In this sense, ten classifications were created, ranging from the economic context and the geographic region to methodological aspects of the two-stage DEA models in banks, with several codification possibilities for each classification.
We believe that this study, by presenting the existing knowledge, opportunities, and challenges for future studies, properly maps the state of the art in this emerging topic in the literature. Throughout this review, we highlighted the main characteristics of publications related to the term two-stage in banks. Although some gaps are common to both internal and external two-stage DEA models, we also showed the need to segregate these models to explore new gaps. The common terminology used for these two distinct types of models hinders a universal definition of two-stage DEA models in the banking sector. Based on the initial discussion herein, future studies can advance this work so that there is a clear terminological distinction between these models. We found seven gaps in the literature, as highlighted in the discussion of the different classes or categories. The study identifies research opportunities related to (i) the combination of internal and external two-stage DEA models, (ii) the analysis of how changes in regulation or the market environment affect the efficiency of banks, especially after the GFC of 2007--2008 and the COVID-19 pandemic in 2020, (iii) the analysis of efficiency in a more diverse set of countries or continents, (iv) the investigation of banking efficiency not only in different geographic regions but also in different economic contexts, (v) the ex ante prediction of efficiency behaviour in certain situations, (vi) the use of diverse approaches to select relevant variables for the model, and (vii) the treatment of deposits as an intermediate variable. Each of these gaps can be considered a potential topic for future research on the subject.
It was found that the most frequent objective in the studies was to extend or improve DEA models; the intermediation approach was the most used for variable selection, and the intermediate variables technique was the most popular in the second stage, in which deposits was the most frequently adopted intermediate variable. Although operational research and the expert and intelligent systems literature focus on extending or improving DEA models, several other aspects still need further in-depth analysis, as presented in the discussion of the gaps in the literature. In addition, we contribute to the literature by presenting the state of the art on two-stage DEA models as well as by providing directions and gaps for further research. In this context, this systematic review reflects an effort to shed light on those points. Considering the models that effectively adopted a procedure after measuring the efficiency scores via DEA (categorized as external two-stage DEA models), the application of a bootstrapped truncated regression was most common. Regarding the impact of non-discretionary variables on efficiency, even when comparing studies that analysed banks in similar functions, results remain ambiguous. It is important to highlight that these studies may be susceptible to the separability issue and that future research should carefully address this limitation of the method.
A limitation of this study is that it did not review all the articles that applied two-stage DEA models (internal or external) in banks. However, we believe that by analysing the 59 articles included, it was possible to present an overview of how such models have been applied in banks, with an in-depth discussion of controversial issues. We hope that this study can assist in future applications and discussions on the theme.

CRediT authorship contribution statement {#s0060}
========================================

**Iago Cotrim Henriques:** Conceptualization, Methodology, Writing - original draft. **Vinicius Amorim Sobreiro:** Conceptualization, Methodology, Writing - review & editing, Supervision. **Herbert Kimura:** Writing - review & editing, Supervision. **Enzo Barberio Mariano:** Writing - review & editing, Validation.

Declaration of Competing Interest
=================================

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Support from the Coordination for the Improvement of Higher Education Personnel (CAPES) is acknowledged.

Put simply, this production frontier can be understood as the existing technology of DMUs in generating outputs from a set of inputs. For more information, please see [@b0285]. Factors such as market value, earnings per share, and return to investors are part of marketability, as defined in the study of [@b0525]. For more information on static and dynamic NDEA, please see [@b0245]. A more in-depth discussion of the number of DMUs *versus* the number of inputs plus outputs is provided in [@b0680].

[^1]: The authors defined this variable as intermediation cost to total assets.

[^2]: Ratio of priority sector advances (i.e., directed credit) to total assets.

[^3]: Total Factor Productivity Change, calculated by the Malmquist index.
[^4]: In a simplified way, the authors examined whether banks are operating similarly due to Gulf Council measures. [^5]: Analysed by statistical tests, e.g., ANOVA, Kruskal-Wallis. [^6]: Calculated as the sum of interest income and non-interest income. [^7]: The authors did not analyse the impact on efficiency per se, but rather the probability of a bank being efficient, taking into account environmental variables.
Mikhail Tarielovich Loris-Melikov Loris-Melikov, Mikhail Tarielovich (mēkhəyēlˈ təryĕlˈəvĭch lôˈrĭs-mĕˈlyĭkəf), 1826–88, Russian general and statesman, of Armenian descent. He was created count for his services in the Russo-Turkish War of 1877–78 and in 1880 was made minister of the interior by Alexander II. He promoted some liberal reforms, specifically in the educational system, and drafted a program to allow members of the zemstvos to play a minor advisory role in legislation. Alexander II approved this reform on the day he was assassinated (1881), but Alexander III voided the reform and dismissed its author. Loris-Melikov in his youth is portrayed in Leo Tolstoy's Hadji Murad.
Introduction {#s1} ============ Hepatocellular carcinoma (HCC) is the fifth most lethal cancer worldwide, and China accounted for more than half of all cases and deaths in 2012 ([@B1]). More than 400,000 people die from liver cancer and over 450,000 new cases are diagnosed in China each year ([@B2]). Although treatments for HCC have advanced greatly in recent years, the outcome of HCC remains poor. Postoperative recurrence, the main reason for the poor survival of HCC patients, is mainly due to the tendency of HCC cells to invade and metastasize ([@B3], [@B4]). Therefore, understanding the mechanisms of HCC tumorigenesis and progression is critical to improving the clinical outcome of HCC patients. Tissue factor (TF, also known as platelet tissue factor, factor III, thromboplastin, or CD142, encoded by the F3 gene) is a 47 kD transmembrane glycoprotein of 263 amino acid residues, comprising a 219-amino-acid extracellular region, a 23-amino-acid hydrophobic transmembrane region, and a 21-amino-acid C-terminal intracellular tail ([@B5]). TF was originally found on the surface of intravascular cells, such as platelets, leukocytes, and endothelial cells, where it functions as the principal initiator of the extrinsic coagulation cascade by binding circulating factor VII or VIIa (FVII/VIIa) ([@B6]). More recently, TF has been found to be frequently overexpressed in a variety of tumors, including breast cancer, colorectal carcinoma, gastric cancer, non-small cell lung cancer, and pancreatic ductal carcinoma ([@B7]). We and other groups have reported that TF expression is upregulated and correlated with prognosis in HCC ([@B8]--[@B10]). In the current study, we investigated the role and molecular mechanism of TF in the growth of HCC cells. 
Materials and Methods {#s2} ===================== Patients and Tissue Specimens ----------------------------- A total of 144 HCC tissues were obtained from patients who underwent curative resection between Jan 2008 and Dec 2010 at the First Affiliated Hospital, Sun Yat-sen University. None of the patients received neoadjuvant radiotherapy or chemotherapy before surgery. Signed informed consents were obtained from all patients. The study was approved by the ethics committee of the First Affiliated Hospital, Sun Yat-sen University. Cell Culture and Reagents ------------------------- The human HCC cell lines HepG2, BEL-7402, SK-HEP1, and SMMC-7721 and the normal hepatic cell line LO2 were obtained from the China Center for Type Culture Collection and cultured in Dulbecco\'s modified Eagle\'s medium (DMEM) supplemented with 10% fetal bovine serum (FBS), penicillin (100 U/ml) and streptomycin (100 ng/ml) in a humidified incubator at 37°C with 5% CO~2~ atmosphere. U0126, LY294002, and Gefitinib were from ApexBio. Anti-TF (ab17375) and Anti-Ki-67 (2724-1) antibodies were from Abcam. Anti-pAKT (4060), Anti-AKT (4691), Anti-pERK (4370), and Anti-ERK (4695) antibodies were from Cell Signaling Technologies. Anti-EGFR (SC-03) and Anti-c-Myc (SC-40) antibodies were from Santa Cruz Biotechnology. Anti-β-actin (LK9001T) and Anti-GAPDH (LK9002T) antibodies were from Tianjin Sungene Biotech. Plasmid Construction and Lentivirus Production ---------------------------------------------- The human TF cDNA was cloned into the pLVX-AcGFP1-N1 lentiviral vector, and shRNA targeting human TF mRNA (5′-GCGCUUCAGGCACUACAAA-3′) was cloned into the pLKO.1 lentiviral vector. Lentivirus was packaged in HEK293T cells and collected from the culture supernatant. Stable cell lines were established by infecting cells with lentivirus, followed by puromycin selection ([@B11], [@B12]). 
siRNA Transfection ------------------ The EGFR siRNA (sense sequence: 5′-CUGACUCCGUCCAGUAUUGAU-3′) and a negative control siRNA were synthesized by Guangzhou Ribobio. Each siRNA solution was mixed gently with the respective volume of X-tremeGENE siRNA Transfection Reagent and allowed to form a transfection mixture for 20 min. Cells were cultured in 6-well plates with DMEM until 50% confluence and then incubated with the transfection mixture for 24 h before the next experiment ([@B13], [@B14]). Western Blot ------------ Cells were harvested and washed twice with cold PBS, then resuspended and lysed in RIPA buffer (1% NP-40, 0.5% sodium deoxycholate, 0.1% SDS, 10 ng/ml PMSF, 0.03% aprotinin, 1 μM sodium orthovanadate) at 4°C for 30 min. Lysates were centrifuged for 10 min at 14,000 × g, and supernatants were stored at −80°C as whole-cell extracts. Proteins were separated on 12% SDS-PAGE gels and transferred to polyvinylidene difluoride membranes. Membranes were blocked with 5% BSA and incubated with the indicated primary antibodies. Corresponding horseradish peroxidase-conjugated secondary antibodies were used against each primary antibody. Signals were detected using the ChemiDoc XRS chemiluminescent gel imaging system (Bio-Rad) ([@B15], [@B16]). MTT Assay --------- Cells were seeded into 96-well plates at a density of 0.5--1 × 10^4^ cells/well and treated with various concentrations of agents. After 3 days, 3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide (MTT) was added to each well at a final concentration of 0.5 mg/ml. After incubation for 4 h, the medium and MTT solution were removed from each well, and the formazan crystals were dissolved in 100 μl of DMSO. Absorbance was measured at 570 nm with a Multiskan Spectrum reader (Thermo Fisher) ([@B17], [@B18]). Sphere Formation Assay ---------------------- Cells were trypsinized, suspended in medium containing 0.3% agar and 10% FBS, and seeded at a density of 5 × 10^2^ cells/well in a 12-well plate. 
The agar--cell mixture was plated onto a bottom layer of 0.5% agar. The treated cells were then incubated in a humidified incubator, and fresh medium was added every 3 days. Two weeks later, colonies were analyzed microscopically ([@B19], [@B20]). Nude Mice Xenograft Tumor Assay ------------------------------- Female Balb/c nude mice (5 weeks old, 16--18 g) were obtained from Shanghai SLAC Laboratory Animal Co. and maintained with sterilized food and water. For the xenograft tumor assay, 4 × 10^6^ cells in 100 μl of DMEM were injected subcutaneously under the shoulder of six mice per group. At the end of the experiment, the mice were anesthetized, and tumors or lungs were removed, weighed, and sectioned. All experimental procedures were approved by the Institutional Animal Care and Use Committee of Jinan University ([@B21], [@B22]). Immunohistochemistry Assay -------------------------- Immunohistochemistry (IHC) was performed with a microwave-enhanced avidin-biotin staining method. Formalin-fixed, paraffin-embedded human HCC tissue arrays and subcutaneous tumors from mice were stained with the respective antibodies. To quantify protein expression, the following formula was used: IHC score = percentage of positive cells × intensity score. The intensity was scored as follows: 0, negative (no staining); 1, weak (light yellow); 2, moderate (yellow brown); and 3, intense (brown) ([@B23], [@B24]). Statistical Analysis -------------------- Statistical analyses were performed using SPSS 19.0 for Windows (SPSS) and GraphPad Prism 6. Data were expressed as the mean ± standard deviation (SD) from at least three independent experiments. Quantitative data between two groups were compared using Student\'s *t*-test. Categorical data were analyzed by the χ^2^ test or Fisher exact test. Correlations between protein expression levels were determined using Spearman\'s rank analysis. 
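The categorical comparison described here (a χ² test on a 2 × 2 contingency table) can be sketched in plain Python. The helper below is our own illustration, not the SPSS analysis the authors ran; the counts are the TF/EGFR cross-tabulation reported in the Results (82, 9, 23, 30):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    stat = 0.0
    for obs, i, j in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        expected = rows[i] * cols[j] / n
        stat += (obs - expected) ** 2 / expected
    return stat

# TF/EGFR cross-tabulation from Table 1:
#   EGFR high: TF high = 82, TF low = 9
#   EGFR low:  TF high = 23, TF low = 30
stat = chi2_2x2(82, 9, 23, 30)
# For df = 1, a statistic above 10.83 corresponds to p < 0.001,
# consistent with the reported association.
```
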
*p* \< 0.05 was considered statistically significant. ^\*^*p* \< 0.05; ^\*\*^*p* \< 0.01; NS: no statistical significance. Results {#s3} ======= Knockdown of TF Inhibits the Growth of HCC ------------------------------------------ To explore the potential biological function of TF in HCC, we first examined the protein expression of TF in the human HCC cell lines HepG2, BEL-7402, SK-HEP1, and SMMC-7721 and the normal hepatic cell line LO2. Notably, all HCC cell lines displayed higher protein levels of TF than the normal hepatic cell line, with SK-HEP1 and SMMC-7721 cells showing the highest levels ([Figure 1A](#F1){ref-type="fig"}). To further investigate the role of TF in HCC malignancy, we generated cells with shRNA-mediated stable knockdown of endogenous TF in both SK-HEP1 and SMMC-7721 cells ([Figure 1B](#F1){ref-type="fig"}). Knockdown of TF decreased cell numbers as well as sphere numbers and sizes in both SK-HEP1 and SMMC-7721 cells, as detected by MTT and sphere formation assays ([Figures 1C--E](#F1){ref-type="fig"}). Additionally, data from subcutaneous tumor models in nude mice showed that TF knockdown inhibited the growth of SMMC-7721 xenografts, decreasing the volumes and weights of tumors as well as the numbers of Ki67^+^ proliferating cells ([Figures 1F--H](#F1){ref-type="fig"}). ![Knockdown of TF inhibits the growth of HCC. **(A,B)** Western blot analysis of the protein expressions in the indicated cells. **(C)** Cell growth of the indicated cells as determined with MTT assay. **(D)** Representative images and **(E)** quantification of spheres of the indicated cells as determined with sphere formation assay. **(F)** The indicated subcutaneous tumors and **(G)** tumor weights of nude mice are shown. **(H)** Representative images of H&E and Ki-67 staining in the indicated tumor sections as determined with IHC assay. Error bars, mean ± SD. 
\**p* \< 0.05 and \*\**p* \< 0.01 \[two-tailed Student\'s *t*-test **(C,E,G)**\].](fonc-09-00150-g0001){#F1} Overexpression of TF Promotes the Growth of HCC ----------------------------------------------- To confirm the effect of TF on HCC growth, we performed rescue experiments by ectopic expression of TF in both TF-silenced SMMC-7721 and SK-HEP1 cells ([Figure 2A](#F2){ref-type="fig"}). Ectopic expression of TF increased cell numbers as well as sphere numbers and sizes in both TF-silenced SMMC-7721 and SK-HEP1 cells ([Figures 2B--D](#F2){ref-type="fig"}). Furthermore, overexpression of TF increased cell numbers in LO2, HepG2, and BEL-7402 cells ([Figures 2E,F](#F2){ref-type="fig"}). Taken together, these results suggest that TF can promote the growth of HCC. ![Overexpression of TF promotes the growth of HCC. **(A,E)** Western blot analysis of the protein expressions in the indicated cells. **(B,F)** Cell growth of the indicated cells as determined with MTT assay. **(C)** Representative images and **(D)** quantification of spheres of the indicated cells as determined with sphere formation assay. Error bars, mean ± SD. \**p* \< 0.05 and \*\**p* \< 0.01 \[two-tailed Student\'s *t*-test **(B**,**D**,**F)**\].](fonc-09-00150-g0002){#F2} TF Promotes the Growth of HCC by Activating Both ERK and AKT Signaling Pathways ------------------------------------------------------------------------------- To further explore the molecular mechanism of TF-promoted HCC growth, we examined the downstream signaling pathways of TF. As shown in [Figure 3A](#F3){ref-type="fig"}, knockdown of TF decreased the protein levels of phosphorylated ERK (pERK), phosphorylated AKT (pAKT), and their downstream transcription factor c-Myc in both SMMC-7721 and SK-HEP1 cells, whereas ectopic expression of TF increased the protein levels of pERK, pAKT, and c-Myc in both TF-silenced SMMC-7721 and SK-HEP1 cells. 
Interestingly, the protein level of EGFR was downregulated in TF-silenced HCC cells and upregulated in TF-overexpressing HCC cells ([Figure 3A](#F3){ref-type="fig"}). To define the roles of ERK and AKT in TF-mediated HCC growth, we examined the effects of the MEK inhibitor U0126 and the PI3K inhibitor LY294002 on the growth of both SK-HEP1 shTF-Vector and -TF cells. Treatment with U0126 and/or LY294002 decreased the protein levels of EGFR, c-Myc, pERK, and/or pAKT in both SK-HEP1 shTF-Vector and -TF cells ([Figures 3B--D](#F3){ref-type="fig"}). However, treatment with U0126 or LY294002 alone inhibited growth only in SK-HEP1 shTF-TF cells but not in SK-HEP1 shTF-Vector cells, whereas the combination of U0126 and LY294002 significantly inhibited growth in both SK-HEP1 shTF-Vector and -TF cells ([Figure 3E](#F3){ref-type="fig"}). In short, these data suggest that TF promotes the growth of HCC by activating both the ERK and AKT signaling pathways. ![TF promotes the growth of HCC by activating both ERK and AKT signaling pathways. **(A)** Western blot analysis of the protein expressions in the indicated cells. SK-HEP1 shTF-Vector and SK-HEP1 shTF-TF cells were treated with/without U0126 and LY294002 at the concentration of 10 μM for 24 h. **(B--D)** Western blot and **(E)** MTT assay analysis of the protein expressions and cell growth. Error bars, mean ± SD. \**p* \< 0.05 (two-tailed Student\'s *t*-test **E**).](fonc-09-00150-g0003){#F3} Inhibition of EGFR Suppresses TF-Mediated HCC Growth ---------------------------------------------------- EGFR has been identified as a key player in the development of HCC ([@B25]). To verify the role of EGFR in TF-mediated HCC growth, we examined the effects of EGFR siRNA and the EGFR inhibitor gefitinib on the growth of both SK-HEP1 shTF-Vector and -TF cells. EGFR siRNA or gefitinib decreased the protein levels of EGFR in both SK-HEP1 shTF-Vector and -TF cells ([Figure 4A](#F4){ref-type="fig"}). 
Furthermore, EGFR siRNA or gefitinib inhibited growth more significantly in SK-HEP1 shTF-TF cells than in SK-HEP1 shTF-Vector cells, indicating that inhibition of EGFR suppresses TF-mediated HCC growth ([Figure 4B](#F4){ref-type="fig"}). ![Inhibition of EGFR suppresses TF-mediated HCC growth. SK-HEP1 shTF-Vector and SK-HEP1 shTF-TF cells were transfected with siControl or siEGFR or treated with/without gefitinib at the concentration of 10 μM for 24 h. **(A)** Western blot and **(B)** MTT assay analysis of the protein expressions and cell growth. Error bars, mean ± SD. \**p* \< 0.05 and \*\**p* \< 0.01 (two-tailed Student\'s *t*-test **B**).](fonc-09-00150-g0004){#F4} TF Protein Expression Is Correlated With EGFR in HCC Tissues ------------------------------------------------------------ Our results clearly demonstrate that EGFR is regulated by TF in cell culture. To determine whether this is also the case in tumor tissues, we compared the protein levels of TF and EGFR in 144 human HCC tissues by IHC assay. High TF and EGFR staining were present in 105 (72.9%) and 91 (63.2%) of the 144 HCC tissues, respectively. Representative tissues with co-low or co-high staining of TF and EGFR are shown in [Figure 5A](#F5){ref-type="fig"}. The expression of TF was highly correlated with the expression of EGFR in HCC tissues ([Table 1](#T1){ref-type="table"} and [Figures 5B,C](#F5){ref-type="fig"}). ![TF protein expression is correlated with EGFR and poor HCC patient prognosis. TF and EGFR protein expressions in 144 HCC tissues were examined with IHC assay. **(A)** Representative images of positive and negative expression of both TF and EGFR are shown at 4 X and 20 X magnification. **(B)** Representative images of western blot analysis of TF and EGFR protein expression in the paired HCC tissues and adjacent normal tissues. 
**(C)** Spearman\'s rank correlation test showed the correlation between TF and EGFR protein expressions by Western blot.](fonc-09-00150-g0005){#F5}

###### The correlation between TF and EGFR protein expressions in HCC tissues.

|                 |      | TF high | TF low | Total | *P*      |
|-----------------|------|---------|--------|-------|----------|
| EGFR expression | High | 82      | 9      | 91    | \< 0.001 |
|                 | Low  | 23      | 30     | 53    | 0.668    |
| Total           |      | 105     | 39     | 144   |          |

Discussion {#s4} ========== It has been demonstrated that TF-induced tumor progression requires the activation of intracellular signaling pathways, in which the TF cytoplasmic domain couples to proteolytic activation of protease-activated receptor (PAR) 2 and subsequently activates ERK, AKT, and other signaling pathways ([@B26]). For example, TF was involved in retinoblastoma cell proliferation via activation of both the ERK and AKT signaling pathways ([@B27]). Knockdown of TF suppressed human lung adenocarcinoma growth *in vitro* and *in vivo* through inhibition of both the ERK and AKT signaling pathways ([@B28]). Similarly, our results showed that TF promoted the growth of HCC *in vitro* and *in vivo* by activating both the ERK and AKT signaling pathways, and inhibition of ERK and AKT blocked TF-mediated growth of HCC. Therefore, activation of both the ERK and AKT signaling pathways is indispensable for TF-promoted growth of HCC. EGFR is a member of the ErbB/HER family of transmembrane receptor tyrosine kinases. It is activated by specific ligands, resulting in the activation of multiple intracellular signaling pathways, including ERK and AKT, which are related to cell proliferation, migration, and invasion ([@B29]--[@B31]). The gene expression of EGFR is regulated by the transcription factor c-Myc ([@B32]). In this study, we found that TF could enhance the expression of c-Myc and EGFR, and inhibition of ERK and AKT could block TF-induced c-Myc and EGFR upregulation. Phosphorylation of serine 62 by ERK prevents c-Myc protein from degradation ([@B33]). 
AKT stabilizes c-Myc protein by phosphorylating and inactivating GSK-3β, which phosphorylates threonine 58 of c-Myc to promote c-Myc degradation ([@B33]). Inhibition of EGFR with either small-molecule inhibitors or specific antibodies has achieved promising results in preclinical HCC models. In human HCC cells, gefitinib, erlotinib, or cetuximab can induce growth inhibition, cell cycle arrest, and apoptosis ([@B34]--[@B36]). In orthotopic HCC models, gefitinib significantly inhibited the growth and metastasis of HCC tumors, an effect enhanced by combination with cisplatin ([@B37], [@B38]). However, the outcome of targeting EGFR in HCC was modest in clinical trials. When used as a single agent in HCC patients, erlotinib achieved only moderate effects ([@B39], [@B40]), and cetuximab showed no antitumor activity ([@B41]). Treatment failure with EGFR inhibitors in HCC patients may be caused by many factors, such as the levels and mutations of EGFR, the EMT status of tumor cells, etc. ([@B42]--[@B44]). In the current study, we found that treatment with EGFR siRNA or gefitinib suppressed growth more significantly in HCC cells highly expressing TF, suggesting that the levels of TF in tumor cells may influence the effects of EGFR inhibitors. Furthermore, our IHC data showed that the positive ratios of TF and EGFR protein in HCC tissues were 72.9% (105/144) and 63.2% (91/144), respectively. The expression of TF was highly correlated with the expression of EGFR in HCC tissues. Therefore, it may be valuable to investigate the relationship between TF expression and the effects of EGFR inhibitors in future studies. Conclusions {#s5} =========== Our results provide proof-of-principle insights into a novel mechanism driven by TF on HCC growth and suggest that TF and EGFR may be potential therapeutic targets of HCC. Data Availability {#s6} ================= The datasets generated for this study are available on request to the corresponding author. 
Author Contributions {#s7} ==================== S-ZH, M-NW, J-RH, Z-JZ, W-JZ, Q-WJ, and YY performed experiments. H-YW, H-LJ, KW, Z-HX, M-LY, and YL collected and analyzed data. X-SH, ZS, and QZ prepared the manuscript. Conflict of Interest Statement ------------------------------ The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. **Funding.** This work was supported by funds from the National Natural Science Foundation of China Nos. 81661148049 and 81772540 (ZS), the Guangdong Natural Science Funds for Distinguished Young Scholar No. 2014A030306001 (ZS), the Guangdong Special Support Program for Young Talent No. 2015TQ01R350 (ZS), the Science and Technology Program of Guangdong Nos. 201300000187 (QZ) and 2016A050502027 (ZS), the Science and Technology Program of Guangzhou No. 201704030058 (ZS), the Science and Technology Program of Huizhou (170520181743174/2017Y229 and 180529101741637/2018Y305), and the Program Sci-tech Research Development of Guangdong Province 2014A020212717 (QZ). [^1]: Edited by: Yunkai Zhang, Vanderbilt University Medical Center, United States [^2]: Reviewed by: Chuan Wang, Auburn University, United States; Shujue Lan, Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences (CAS), China; Jing Zhao, Fourth Military Medical University, China [^3]: This article was submitted to Cancer Molecular Targets and Therapeutics, a section of the journal Frontiers in Oncology [^4]: †These authors have contributed equally to this work
325 F.3d 384 UNITED STATES of America, Appellee,v.John CABOT, Defendant-Appellant. Docket No. 02-1137. United States Court of Appeals, Second Circuit. Argued: October 24, 2002. Decided: April 7, 2003. David V. Kirby, U.S. Attorney's Office District of Vermont, Burlington, VT, for the Appellee. Elizabeth D. Mann, Federal Public Defender, Burlington, VT, for the Defendant-Appellant. Before: VAN GRAAFEILAND, JACOBS, CABRANES. VAN GRAAFEILAND, Senior Circuit Judge. 1 On August 27, 2001, in the United States District Court for the District of Vermont, John Cabot pled guilty to a charge of persuading a minor to engage in sexually explicit conduct for the purpose of producing visual depictions of such conduct, in violation of 18 U.S.C. §§ 2251(a) and 2251(d). He now challenges several of the conditions governing the three-year term of supervised release that will follow the 130-month term of imprisonment imposed on February 4, 2002, by Chief Judge J. Garvan Murtha of the Vermont Court. His challenges are addressed to the following conditions: 2 The defendant shall neither possess nor have under his control any "matter" that is pornographic or that depicts or alludes to sexual activity or depicts minors under the age of eighteen. This includes but is not limited to any "matter" obtained through access to any computer or any material linked to computer access or use. 3 The defendant shall not possess or use a computer or any other device with the ability to access the internet at any location (including employment) without the prior approval of the probation officer. Any approval by the probation officer shall be subject to any conditions set by the probation officer with respect to that approval. 4 Because Cabot's conditional freedom will hinge upon his compliance with the conditions prescribed, they must "give the person of ordinary intelligence a reasonable opportunity to know what is prohibited, so that he may act accordingly." Grayned v. City of Rockford, 408 U.S. 
104, 108, 92 S.Ct. 2294, 33 L.Ed.2d 222 (1972). It has been further stated that a "probationer cannot reasonably understand what is encompassed by a blanket prohibition on pornography." United States v. Guagliardo, 278 F.3d 868, 872 (9th Cir.2002) (citing United States v. Loy, 237 F.3d 251, 253 (3d Cir.2001)). "One man's pornography may be another's keepsake." Giano v. Senkowski, 54 F.3d 1050, 1056 (2d Cir.1995). 5 The foregoing notwithstanding, the condition prohibiting Cabot from possessing pornographic matter provides him with notice of the prohibition that is adequate, in view of (i) a defendant's diminished due process rights during supervised release, see United States v. Knights, 534 U.S. 112, 119, 122 S.Ct. 587, 151 L.Ed.2d 497 (2001); United States v. Reyes, 283 F.3d 446, 460 (2d Cir.2002), and (ii) his conviction for the production of "child pornography" under a statutory scheme that defines that term, see 18 U.S.C. §§ 2256(2) & (8). See also United States v. Bee, 162 F.3d 1232, 1234-35 (9th Cir.1998) (upholding condition of supervised release prohibiting possession of sexually oriented materials where defendant pled guilty to abusive sexual conduct involving six-year-old girl). Indeed, in the plea agreement which gave rise to the instant litigation, Cabot, with the advice and approval of competent counsel, pled guilty to the count in the indictment charging him with the production of child pornography. Judge Murtha's condition prohibiting Cabot from possessing "any `matter' that is pornographic" puts Cabot on reasonable notice of what he may lawfully possess. "A sentencing court may order a special condition of supervised release that is `reasonably related' to several of the statutory factors governing the selection of sentences, `involves no greater deprivation of liberty than is reasonably necessary' for several statutory purposes of sentencing, and is consistent with Sentencing Commission policy statements." United States v. 
Sofsky, 287 F.3d 122, 126 (2d Cir.2002) (quoting 18 U.S.C. § 3583(d)). 6 Cabot also contends that the prohibitions against his possessing matter that "depicts or alludes to sexual activity" or that "depicts minors under the age of eighteen" are too broad. At the same time, he challenges the prohibition against internet usage as impermissibly restrictive, relying on our recent decision in Sofsky, which held that a condition of supervised release that prohibited a defendant from accessing a computer and the internet without the approval of a probation officer caused greater deprivation than was reasonably necessary. Id. 7 The government concedes merit in these other arguments. Specifically, it agrees that the restrictions against possession of any matters depicting "sexual activity" or "minors" are excessive in scope, and that the outright ban on unapproved internet access cannot stand post-Sofsky. 8 We affirm that portion of the district court's conditions forbidding possession of pornographic matter, but vacate and remand for further consideration those portions that prohibit Cabot from possessing matters that depict or allude to "sexual activity" or which depict minors under the age of eighteen.
Q: PingFederate and PingAccess REST APIs using Authorization token I am calling PF and PA REST web services using a username and password, but it seems vulnerable to provide credentials. Is there a way to provide only an authorization token for REST API service calls instead of credentials? At the moment I am calling this way: root@ubuntu:/home/joe# curl -k -u "**UserName:Password**" -H "X-Xsrf-Header: PingAccess" https://localhost:9000/pa-admin-api/v1/virtualhosts A: The PingAccess administrative API supports OAuth access tokens for authentication. They must be access tokens issued by PingFederate (using any grant type) and contain a configured scope for administrative API access. For more details see: https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=reference/ui/pa_t_Configure_API_Authentication.html PingFederate itself currently does not support OAuth for its administrative APIs; however, there are a number of options for authentication. The most secure form of authentication currently supported is client certificate authentication. For more details, see: https://support.pingidentity.com/s/document-item?bundleId=pingfederate-92&topicId=adminGuide%2FconfiguringAccessToTheAdministrativeApi.html
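A token-based call just swaps the `-u user:pass` Basic credentials for an `Authorization: Bearer` header while keeping the `X-Xsrf-Header`. A minimal sketch with Python's urllib (the token value is a placeholder; in practice you would obtain a real access token from PingFederate, and the commented-out call needs a running PingAccess admin node):

```python
import urllib.request

# Placeholder -- obtain a real access token from PingFederate (any grant type)
# with the scope configured for PingAccess administrative API access.
access_token = "REPLACE_WITH_ACCESS_TOKEN"

req = urllib.request.Request(
    "https://localhost:9000/pa-admin-api/v1/virtualhosts",
    headers={
        "X-Xsrf-Header": "PingAccess",              # CSRF header the PA admin API expects
        "Authorization": "Bearer " + access_token,  # replaces curl's -u user:pass
    },
)

# urllib.request.urlopen(req)  # would perform the request against a live instance
```
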
Q: how come ruby's single os thread doesn't block while copying a file? My assumptions: MRI ruby 1.8.X doesn't have native threads but green threads. The OS is not aware of these green threads. Issuing an IO-heavy operation should suspend the whole process until the proper IO interruption is issued back. With these I've created a simple ruby program that does the following: starts a thread that prints "working!" every second. issues an IO request to copy a large (1gb) file on the "main" thread. Now one would guess that, since the green threads are invisible to the OS, the copy would put the whole process on the "blocked" queue and the "working!" green thread would not execute. Surprisingly, it works :S Does anyone know what's going on there? Thanks. A: There is no atomic kernel file copy operation. It's a lot of fairly short reads and writes that are entering and exiting the kernel. As a result, the process is constantly getting control back. Signals are delivered. Green threads work by hooking the Ruby-level thread dispatcher into low-level I/O and signal reception. As long as these hooks catch control periodically, the green threads will act quite a bit like more concurrent threads would. Unix originally had a quite thread-unaware but beautifully simple abstract machine model for the user process environment. As the years went by, support for concurrency in general and threads in particular was added bit-by-bit in two different ways. Lots of little kludges were added to check if I/O would block, to fail (with later retry) if I/O would block, to interrupt slow tty I/O for signals but then transparently return to it, etc. When the Unix APIs were merged, each kludge existed in more than one form. Lots of choices.^1^ Direct support for threads in the form of multiple kernel-visible processes sharing an address space was also added. These threads are dangerous and untestable but widely supported and used. Mostly, programs don't crash. 
As time goes on, latent bugs become visible as the hardware supports more true concurrency. I'm not the least bit worried that Ruby doesn't fully support that nightmare. ^1^ The good thing about standards is that there are so many of them. A: When MRI 1.9 initiates, it spawns two native threads. One thread is for the VM, the other is used to handle signals. Rubinius uses this strategy, as does the JVM. Pipes can be used to communicate any info from other processes. As for the FileUtils module, the cd, pwd, mkdir, rm, ln, cp, mv, chmod, chown, and touch methods are all, to some degree, outsourced to OS-native utilities using the internal API of the StreamUtils submodule, while the second thread is left to wait for a signal from an outside process. Since these methods are quite thread-safe, there is no need to lock the interpreter, and thus the methods don't block each other. Edit: MRI 1.8.7 is quite smart, and knows that when a Thread is waiting for some external event (such as a browser to send an HTTP request), the Thread can be put to sleep and be woken up when data is detected. - Evan Phoenix from Engine Yard in Ruby, Concurrency, and You The basic implementation of FileUtils has not changed much since 1.8.7, judging from the source. 1.8.7 also uses a sleepy timer thread to wait for an IO response. The main difference in 1.9 is the use of native threads rather than green threads. Also, the thread source code is much more refined. By thread-safe I mean that since there is nothing shared between the processes, there is no reason to lock the global interpreter. There is a misconception that Ruby "blocks" when doing certain tasks. Whenever a thread has to block, i.e. wait without using any cpu, Ruby simply schedules another thread. However in certain situations, like a rack-server using 20% of the CPU waiting for a response, it can be appropriate to unlock the interpreter and allow concurrent threads to handle other requests during the wait. 
These threads are, in a sense, working in parallel. The GIL is unlocked with the rb_thread_blocking_region API. Here is a good post on this subject.
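The behavior the question describes is easy to reproduce. Here is a minimal sketch of that experiment, with the file size and tick interval scaled down so it finishes quickly (the original used a ~1 GB file and a 1-second tick); the temp-file names are throwaway, not anything from the post.

```ruby
require 'fileutils'
require 'tempfile'

# Build a throwaway input file: a 5 MB stand-in for the 1 GB file.
src = Tempfile.new('green_thread_demo')
src.write('x' * (5 * 1024 * 1024))
src.close

ticks = []
ticker = Thread.new do
  3.times do
    ticks << 'working!'  # the original program printed this every second
    sleep 0.2
  end
end

# FileUtils.cp is a loop of short read/write calls, not one atomic
# kernel operation; between calls the interpreter regains control,
# so the scheduler (green or native) can keep running the ticker.
dst = "#{src.path}.copy"
FileUtils.cp(src.path, dst)

ticker.join
puts ticks.size  # prints 3: the ticker ran alongside the copy
```

On 1.8 the interleaving happens because the green-thread dispatcher is hooked into these I/O calls; on 1.9+ the ticker is simply a native thread that the OS schedules on its own.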
Chordoid meningioma arising in the pineal region: a case report. We report a rare case of chordoid meningioma arising in the pineal region, which presented in a 22-year-old woman. Her only complaint was headache, and neurological examination revealed no deficits. She had suffered from prolonged fever a few weeks earlier, and her hematological findings included hypochromic microcytic anemia and a high serum level of C-reactive protein (CRP). Cranial magnetic resonance (MR) images demonstrated a 25 x 30 mm mass in the pineal region, which showed iso- to low intensity on T1-weighted images (T1WI), high to low intensity on T2-weighted images (T2WI), and homogeneous enhancement with gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA). We performed subtotal removal of the tumor via an occipital transtentorial approach (OTA), and all her preoperative symptoms completely abated. Histological examination of the tumor specimen showed the typical pattern of chordoid meningioma. Chordoid meningioma is known to be associated with Castleman's disease, and pineal meningiomas are extremely rare among intracranial meningiomas. The details of this case are presented with a review of the literature.
Thread: Sprayers for herbicide control

hey guys and gals,
I was wondering if any of you young inventors and experienced people know what I can do to treat large areas of lawn without purchasing a large tank sprayer? I don't want to put up this larger initial investment this year, but I do have one client that wants me to Roundup about an acre to an acre and a half. What can I use, if anything, that will be effective and not take me four hours?

Another question would be: how can I apply my post-emergents? Would a granular be fine? Does it work as well? I think Lesco sells a granular with Momentum on it. Will this do the trick? I plan on developing a better, more efficient spraying system next year, but I don't want to buy something expensive this year. What are your thoughts on these topics?

----------
Integrated Landscape Solutions
Lexington, KY

Don't use granular for broadleaf weed control.

For broadleaf: use an M60 Spray Kit from Perma-Green (www.ride-onspreader.com) for 350.00. Works great for fertilizing and spraying broadleafs at the same time, or each independently.

For Round-Up: since you won't be treating 65M of Round-Up often, I would back-pack this area.

Use a backpack sprayer, and if using the customer's hose to refill is out of the question, use a 55 gallon Rubbermaid trash can as your water source. Rinse it out well before you leave home and then fill it half full. Use a 2 liter bottle or milk jug to refill your sprayer. This is what I do when I am spraying several homes in one day.

I have lots of sprayers; I never use two different chemicals in the same sprayer. No matter how good a job you do cleaning, you might still have some residue left behind. Get a backpack sprayer for the Roundup and another one for broadleaf control. I also have a 30 gal sprayer with 10' booms that I made to fit on my Walker that I use for broadleaf control.

Thanks guys,
Hey Lazer, does the spray kit really work well?
What are your experiences with it?

I can't get a boom sprayer because I don't have a riding mower right now.

Would it really work to use an Ortho hose-end spray kit? Attach the thing directly to the hose, put in the amount of Roundup you need, then go at it?

TIA

----------
Integrated Landscape Solutions
Lexington, KY

The M60 Spray Kit attaches to any square hopper spreader, allowing you to spray weeds at the same time you're fertilizing.

I absolutely love 'em and would not make broadleaf herbicide treatments without them.

Matt wrote:
> I can't get a boom sprayer because I don't have a riding mower right now.

Buy a 15 gal 12vt sprayer from Northern Tool, mount it to some plywood, then mount it to whatever you got. Add a Y valve and make your own boom from PVC.
Assessing the Influence of Calcium Fluoride on Pyrite Electrochemical Dissolution and Mine Drainage pH. We investigated the influence of dissolved calcium fluoride, CaF₂(aq), on the electrochemical dissolution of pyrite and the corresponding environmental effects on acid mine drainage (AMD). The experimental results showed that CaF₂(aq) promotes pyrite electrochemical dissolution. When the CaF₂(aq) concentration increased from 0 to 10 mg L⁻¹ and up to saturation, the promoting efficiencies were 15.80 and 57.25%, respectively. The reason for this phenomenon is that F⁻ and Fe²⁺ form FeF⁺, and at a higher scan potential, F⁻ and Fe³⁺ form the ion complex FeF²⁺. The mechanisms include: (i) the decreasing charge transfer resistance at the double layer due to the iron-fluorine complex formation; and (ii) the decreasing passivation resistance at the cover layer due to the strong penetration of F⁻ ions through it into the double layer. Although the hydrolysis reaction of F⁻ in solution could increase the pH value of mine drainage, the AMD was significantly aggravated because CaF₂(aq) promoted the pyrite electrochemical dissolution.
{
  "name": "Veer West LLC.",
  "displayName": "Veer West",
  "properties": [
    "tfaforms.com",
    "tfaforms.net"
  ],
  "prevalence": {
    "tracking": 0.000103,
    "nonTracking": 0,
    "total": 0.000103
  }
}
import React from 'react'
import { styles } from 'refire-app'
import Color from 'color'

const NewThreadsAvailable = ({ threads, nextThreads = [], showNewThreads, styles }) => {
  const diff = nextThreads.length - threads.length
  const threadsWord = diff === 1 ? "thread" : "threads"

  if (!diff) {
    return <div />
  } else {
    return (
      <div className={styles.container} onClick={showNewThreads}>
        { diff } new { threadsWord } available, show up-to-date list?
      </div>
    )
  }
}

const css = {
  container: {
    padding: "10px",
    background: Color("#27ae60").lighten(0.7).hexString(),
    cursor: "pointer",
    marginBottom: "20px",
  },
}

export default styles(css, NewThreadsAvailable)
Blue nevus of the endocervix. A study of five cases. The pathologic and immunohistochemical features of 5 cases of blue nevus of the endocervix are presented: 4 of them were studied ultrastructurally. The lesions were incidentally discovered at microscopic examination and showed pigmented dendritic cells in the cervical stroma. Immunocytochemical examination showed all cases to be positive for S-100 protein. Ultrastructurally they contained melanosomes, were surrounded by a basement membrane, and displayed occasional desmosome-like devices. Histogenesis is discussed, and support for a schwannian origin is presented.
1. Field of the Invention The present invention relates generally to web services publishing and hosting environments, and more specifically to preventing web server error codes on dynamically loaded missing images in a web page. 2. Description of the Related Art For every request to a web server, the web server responds with a return status code. Hypertext Transport Protocol (HTTP) status codes returned by a web server can provide useful information to dynamic web applications. An application that analyzes these return codes may branch into different functionality based on the return code. Some of these codes, such as the Not Found error message (404) are also used by server administrators to diagnose server issues. While 404 errors can reveal a number of problems including server issues, publishing path issues, or back-end web site programming issues, the monitoring of an excessive number of non-problematic 404 errors can waste valuable time and resources, since non-problematic error codes may cloud actual 404 errors that the server administrators are looking for and seeking to fix. This situation may arise, for example, when an application on a web site attempts to load a set of dynamically defined images based on programmatically generated lists that are passed to the applications. Some or all of the images listed actually may not be available on the website until some time after the list has been generated. The application does not know which images exist and which do not exist, and therefore checks for an error code, such as a 404 error, on the image load. The application may invoke one set of methods if the image exists, and another set of methods if the image does not exist. However, the non-existence of the image is not an exception condition for the application. This application tactic may result in an excessively high number of non-problematic server error codes being generated when an application tries to load images that may not exist on the server.
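The branching described above (one code path when an image exists, another when it does not) can be sketched as follows. This is an illustrative Ruby sketch, not code from the patent: the method name and the simulated image list are invented, and the status codes are stubbed rather than fetched from a real web server.

```ruby
# Hypothetical handler showing the pattern the passage describes: the
# application walks a programmatically generated image list and branches
# on each HTTP status code. A 404 here is an expected, non-exceptional
# outcome, which is exactly why it clutters the server's error logs.
def handle_image_response(name, status)
  case status
  when 200
    "render #{name}"            # image exists: display it
  when 404
    "placeholder for #{name}"   # image not yet published: fall back
  else
    raise "unexpected status #{status} for #{name}"
  end
end

# Simulated statuses for a generated list (no real server involved).
statuses = { 'jan.png' => 200, 'feb.png' => 404 }
statuses.each { |name, code| puts handle_image_response(name, code) }
```

In the spirit of the invention's stated goal, a server-side remedy would answer such probe requests without emitting a logged 404 at all, so that only genuinely problematic misses reach the administrator.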
Target lysis by human LAK cells is critically dependent upon target binding properties, but LFA-1, LFA-3 and ICAM-1 are not the major adhesion ligands on targets. The cytotoxicity mediated by the CD2+ CD3- lymphocyte subset, either NK or LAK, is puzzling since no specific antigen recognition structures, equivalent to the CD3-associated heterodimer T-cell receptor, have been recognized on these cells so far. The possibility exists that the CD3- cytotoxic effectors recognize their targets through non-specific adhesion mechanisms. The goal of this study was: (a) to examine the correlation between binding properties and susceptibility to lysis of 6 informative target cell lines; (b) to evaluate the role, as ligands on these targets, of adhesion molecules such as LFA-1, LFA-3 and ICAM-1. The effectors used in this study were IL-2-activated LGL, predominantly CD3-, or highly purified CD3- lymphocytes from normal human donors. The 6 target lines studied included 2 pairs of EBV-transformed B-cell lines (721 LCL vs. 721.134, and MM vs. MM-10F2) in which the parental lines were resistant to lysis while HLA variants were susceptible. A third pair was the Daudi Burkitt cell line, susceptible to LAK lysis, and an HLA-positive transfected Daudi line which was more resistant to lysis. The binding properties of these targets to LAK effectors (conjugate formation) were evaluated using a sensitive double fluorescence flow cytometry method. In each pair examined, the susceptible targets formed more conjugates and were surrounded by more cytotoxic LAK effectors than their resistant counterparts, indicating that the conjugation properties of targets are closely correlated with their susceptibility to LAK lysis. The expression of adhesion molecules on the informative targets was examined by indirect immunofluorescence and their role was evaluated by inhibition of lysis after pre-coating the targets with the relevant antibodies. 
The differences in the expression of the classical cell-cell adhesion molecules LFA-1, LFA-3 and ICAM-1 on the target surfaces were only marginal, insufficient to explain the striking differences in susceptibility to lysis and in binding properties. Coating the target cells with antibodies directed against these adhesion determinants had no effects on the lysis of susceptible target cells. The same antibodies reacting with the LAK effectors did inhibit lysis. Taken together, these results suggest that, on the targets, presently undefined membrane adhesion structures may have a major role in conjugate formation between target and CD3- effectors and determine the susceptibility of the targets to lysis.
Quantum tunneling in nanomagnetic systems with different uniaxial anisotropy order. A study of macroscopic quantum tunneling (MQT) of the magnetic moment in systems with quadratic and higher-order uniaxial anisotropy and Zeeman interaction is presented. By using the instanton technique, under the giant-spin approximation, the escape rate, or probability per unit time, Γ, that the system undergoes a transition between coherent or metastable states is calculated. Using an effective particle potential we also determine the escape temperature Tₑ(T), which marks the transition from quantum tunneling to thermal activation. A discussion is presented about the different models and the behavior of the magnetic system in the tunneling regime.
Louis Tse, 27, lived in his car to save money while completing his doctoral studies at the University of California, Los Angeles. A duffel bag holding bottled water and non-perishable foods served as his kitchen. Family photos hung on the backseat windows. At night, Tse parked wherever he found an open WiFi network so he could do homework. "For young people who are experiencing homelessness, they could go to the nearest youth shelter, which is a two-hour drive away in Hollywood -- or rough it out. That's the path of least resistance," Tse, now a thermal engineer at NASA's Jet Propulsion Laboratory, tells Business Insider. He declined to comment on his current living situation. In October 2016, Tse and former classmate Luke Shaw opened up a student-run shelter for students who are experiencing homelessness because of the sky-high costs of higher education. Students for Students, formerly known as the Bruin Shelter, provides them with a safe and supportive place to eat, sleep, socialize, and study during the academic year. Temporary residents at Students for Students sleep in bunk beds in the loft of a refurbished church space. (Facebook/Bruin Shelter) The shelter has nine beds and welcomes college students from the Los Angeles area (a majority come from UCLA, because of its proximity to campus). Unlike traditional shelters, which use a lottery-based system to fill beds, Students for Students interviews applicants and offers a place to stay for up to six months. Breakfast and dinner are served family-style every day. There are 60 student-volunteers who keep the shelter running day and night. Case managers from the UCLA Department of Social Welfare come by to help residents locate more permanent housing and tap into city programs that subsidize rent for homeless individuals. Medical and dental students from the university provide routine check-ups. Counseling is also available. 
Tse says that having a home base goes a long way for a young person juggling school, a job, and a life. The city's resource-starved shelters take in people of all ages, some of whom are combating severe mental illnesses; it's hard for struggling college students to fit in there. "Knowing that you have a stable place to stay helps you be more stable," Tse says.

A striking number of college students in the US are living without permanent housing. A recent study from the University of Wisconsin surveyed 33,000 students across 70 US community colleges. Of these students, about half were "housing insecure," meaning they bounce between homes often or cannot afford the cost of living. A staggering 14% of students were homeless. In California, one in three community college students face some level of housing insecurity. The problem extends to four-year universities as well, as Tse saw firsthand.

Tse and Shaw were inspired to build their organization by a similar shelter for young adults at Harvard University. They won a $20,000 grant from the David Geffen School of Medicine at UCLA and asked the community for donations ranging from food, clothes, and blankets to toiletries. Last fall, they opened the doors of a refurbished church space to students. They expect to serve 18 to 27 individuals per academic semester.

In its first semester, Students for Students welcomed a student who grew up in the foster care system and fell through the cracks of a scholarship program that assists fostered youth with college costs. Months later, shelter volunteers saw the student walk at graduation.

"We're all in school because we value education and we know that getting a diploma is necessary if you're to open doors for yourself in life," Tse says. "That's the mission that drives us. There are students who are facing a variety of life circumstances, and we want to help them get to that point."
Malthus’ nightscape is nigh, yo - Frugal survivalism as if prepping is time sensitive

Monday, June 13, 2016

KILLING TIME (1 of 5)

Back in the Cold War, when the government actually used persuasion rather than threats to get the public to cooperate, fallout shelters were used as a ploy to trick the population into thinking they wouldn’t all become radioactive particles in the atmosphere. The attitude seemed to be: hey, you go die in our colonial wars after we draft you, and you hurt any outsider of the group showing dissent, so we might as well throw you a bone and save some of your women and children in the urban areas in case the Soviets attack us. And if you can’t build your own private, properly constructed and equipped shelter in the suburbs, we can provide you with some expedient no-cost shelter plans. All us White Bread workers are in this together! Of course, now the attitude is more like, hey, screw you! You haven’t sacrificed enough with flat to declining living standards for the last nearly fifty years. If you question your rich superiors we will leave you in a cell until you die of rectally inserted AIDS, so you had better smile harder ( the beatings will continue until morale improves ) and work longer and pay more in taxes. You will nod obsequiously as we import Muslims to your neighborhood where they will live on Section Eight housing while you work three jobs to pay your underwater mortgage; you will scrape and bow and prostrate yourself while buying more crap on credit you can’t pay back. And if the Muslims dirty-bomb the city upwind, well, screw you. We didn’t save the thousands of Darkies during Katrina, and if you are poor we think of you as just another Wigger, so figure your odds of getting an assist from us.
* And yes, while it is refreshing to see the government show its true oligarchy colors, it is also true that it was nice when they screwed us but also tried to hide it. Damn, show some class and put some effort into it, dudes! Anyway, even if the fallout shelters weren’t nearly enough for everyone ( I could be misremembering, but I seem to recall that the Interstate Highway overpasses were constructed in such a way as to be used as expedient shelters, also. After the homeless took them over, now they build them without the habitable lip ), with rations usually consisting of crackers and hard candy, and enforced disarmament, the living conditions were sure to be horrid. There were government studies on the psychological effects of cramped, dark, crowded shelter living. I’m sure this was just a manual for the big cheese Civil Defense Warden to become aware of discipline problems, rather than a wake-up call to fix the issues, because the whole CD program was never a budget priority, and as soon as Uncle Sugar started running out of money the whole thing was mothballed ( if the choice was more spending on space missile defense systems that would enrich a contractor or trying to save taxpayers, well, you know who always wins there ).
* Of course, we all know about a far more familiar concept, “cabin fever”. If the feds needed to elaborate on that concept, the shelters must really have been forecast to be pretty bad. Perhaps like taking the SuperDome crowd and stuffing them into a regular-size school gym? I can tell you from very limited experience that cabin fever sucks, and sucks hard, and not in a good way. Back in my B-POD ( Bison Pit Of Doom ) days I always went upstairs to the solar-heated RV during the day. In the winter, it was rare on a sunny day, no matter how cold, that it didn’t get to fifty in the trailer ( extra insulation tacked to the inside walls - squishy foam and foil-faced bubble wrap - plus skirting plus southern exposure ), which, while that might sound sad to you central-air-using pukes, was plenty warm enough dressed in wool and cotton layers and made my heating bill about zero, as just perking coffee in the morning and cooking dinner at night kept the underground place also about fifty. And that was without a second layer of insulation on the roof and no solar heat. Imagine how comfy it would have been then.
* However, as sunny as the high desert is, it isn’t sunny every day. In fact, all the cloudy days seem to want to cluster around winter time. When it was simply too cold to go upstairs and sit in my recliner comfortably, I was stuck down in the underground cabin, which measured six by twelve. And which has no recliner. And which has limited watts to burn. Even with 70 watts in panels ( bought in the days of $6, then $3 a watt panels, unlike today’s $1.39 ), I couldn’t burn my bright overhead lamp all day long ( the cabin has two very small windows, each something like 9 inches by twelve - just enough for gloom rather than pitch darkness ). You couldn’t much move around or do anything electronic. My choices were basically to read by a small clip-on book lamp or sit and twiddle my thumbs. And that was never more than two days in a row, as I’d then go back to work. How would you like to live like that for a longer period? It is more than just lack of heat in the winter or a nuclear war that will force you into this situation. More tomorrow.

END

Please support Bison by buying through the Amazon ad graphics at the top of the page. IF YOU DON’T SEE THE AD, DISABLE AD BLOCK ( go to the Ad Blocker while on my page and scroll down the menu to “disable this site” ). You can purchase anything, not just the linked item. Enter Amazon through my item link and then go to whatever other item you desire. As long as you don’t leave Amazon until after the order is placed, I get credit for your purchase. For those that can’t get the ads because they are blocked by your software, just PayPal me occasionally or buy me something from my Amazon Wish List once a year. Pay your author - no one works for free. I’m nice enough to publish for mere Book Money, so do your part.

Contact Information * Links To Other Blogs * Land In Elko * Lord Bison * my bio & biblio * my web site is www.bisonprepper.com * Link To All My Published Books

By the by, all my writing is copyrighted.
For the obtuse out there

7 comments:

Here's something that you and the other minions might find useful come collapse time, James. You could watch it and remember it well enough, as it is pretty straightforward. Otherwise download it using Mozilla Firefox with the Download Helper plugin, or Tubemate for your Android device. How To Reload Primers with Matches, by Grant Thompson - "The King of Random": https://www.youtube.com/watch?v=t_7LWCFH5Gc

Speaking of limited-space fallout shelters, a good reminder of what that's going to be like is in the classic Twilight Zone episode titled “The Shelter”. Just remember that people were far more civilized back then. Oh, and I sent you a book; it should be there by the end of the week. It's this one:

Oh, excellent. It was on my Wish List. Or did you already know that? One of my old Paladin-type books on improvised ammo covered matches for primers. Not sure how many matches will be around, or even if it is a good idea to use them for ammo rather than creating fire, especially as primers are 3 cents in bulk.

Yes, I got it off of your wish list. I wasn't about to gamble and send you a sucky book after my last terrible suggestion that you hated. Looks like a great book; I might even order it for myself. Good point on the matches. I suppose that you cannot have too many. For post-collapse fire starting, the ferrocerium rods are good to have in abundance, as they last a good long time, can't leak fluid, and are water resistant. I got 2 of them off of Ebay that are 6” by 1/2”. One of those would probably last for many years. The fresnel lenses are usually unbreakable, and whenever there is sun, you will have fire.

I got a couple of free fresnels - I think they were an advertisement, or a trash pick. Plus, at least three $ store magnifying glasses. And matches out the wazoo - always buying and stashing those. But I imagine the Strike Anywhere are the reloading matches; I just buy On The Box types.

I must moderate - trust me.
You don't want to see what happens otherwise. Sometimes it takes awhile to respond as I only check two or three times a day. No N-Bombs, nothing to get me libeled. Otherwise, have at it. If you criticize me, make sure to praise my hair first.
Q: Problem with the Boltzmann distribution

I am trying to solve a problem about the Boltzmann distribution. Given:

If(A) = 0.74
Uf(V) = 0.037
Ia(A) = 0.130

I have to find N, ln N, ln N0, N0, and T(K). I know that e = 1.6E-19 and k = 1.38E-23.

Formulae:

N = Ia/e -> N = 0.130/1.6E-19 = 8.13E+17
ln N = ln N0 - (e/kT)Uf

How can I find out ln N if I don't know N0, and how may I find T(K)?

A: You find $N$ from the expression $$N = \frac{I_a}{e}$$ which you have stated yourself (equation (4) in the problem) - no need to find $N_0$ first. Plotting $\ln(N)$ as a function of $U_f$ will give you a straight line (according to equation (3) from the problem), with an intercept equal to $\ln(N_0)$ and a slope equal to $-\frac{e}{kT}$. Since you know $e$ and $k$, it should be easy to find $T$.
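Numerically, with the values given: N follows from equation (4), and T comes from the fitted slope of the ln N vs. Uf line once several points have been plotted. A quick Ruby check; note the slope magnitude below is invented for illustration, since a real value has to come from fitting the measured data:

```ruby
E_CHARGE = 1.6e-19   # elementary charge in C, given in the problem
K_BOLTZ  = 1.38e-23  # Boltzmann constant in J/K, given in the problem

# N from equation (4): N = Ia / e, with Ia = 0.130 A.
ia = 0.130
n = ia / E_CHARGE
puts n              # prints 8.125e+17, matching the ~8.13E+17 above

# T from the slope of ln(N) vs Uf: |slope| = e / (k T), so
# T = e / (k * |slope|). The 40.0 V^-1 here is a made-up example.
slope_magnitude = 40.0
t = E_CHARGE / (K_BOLTZ * slope_magnitude)
puts t.round(1)     # temperature in kelvin for the assumed slope
```

With a fitted slope in hand, the same two lines give the actual temperature; ln N0 is then the intercept read off the same fit.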
Comments on: In the end, it was about the Heat execution (Basketball - NBC Sports)

By purdueman (Fri, 13 May 2011 03:28:48):
edw… you must not watch many Chicago games. Deng averages 17.0 ppg. That ain’t exactly inconsistent, as he also has a .460 FG%. No, he’s not flashy, but he’s a hell of a lot more consistent than you give him credit for offensively. The problem the Bulls offense has with regard to consistency comes from not having a true #2 (shooting) guard and never knowing which Carlos Boozer is going to show up on any given night. Has nothing to do with Deng’s offense, but to each their own… even if their opinions are uninformed and wrong.

By borderline1988 (Fri, 13 May 2011 02:20:05):
Deng is a very good defender, but honestly, I don’t think he can keep up with Lebron. And Wade has the ability to run over this squad (I think Bogans will be on him?). Bosh has always had trouble against Noah, but Bosh should be able to produce more than when Garnett defended him. I think both Rose and Wade will put up crazy numbers this series. It will probably go 6 games, but Chicago’s lack of scorers will be their doom in the end. Miami can play stifling defense as well as Chicago can, and I expect Miami to start doubling Derrick Rose in the 4th quarter as soon as he crosses half court. Miami will dare the other Bulls to make plays in the 4th quarter, and I simply don’t trust them to against one of the NBA’s top 3 defenses.
By edweird0 (Thu, 12 May 2011 22:16:44):
Haha, I'm a fan of basketball. I don't disagree with you one bit; Deng is a great defensive player. It’s his inconsistent offense that keeps him from being a “popular” or “household name”, and that's no one's fault but his own. Everyone to their own, my friend :)

By purdueman (Thu, 12 May 2011 22:04:08):
edw…. are you sure you’re not a Laker fan? I mean, just like Laker fan you’re counting your chickens before they hatch. That’s why they play the games, you know. And please stop splitting hairs wrt the NBA All-Defensive Team. This may come as a big shock to you, but professional basketball teams are made up of more than just the starting 5 players. All-defense is all-defense, just as being All Pro by going to the Pro Bowl is still All Pro in the NFL, be it Jay Cutler or Butt-chin Tom (Brady). Deng simply lacks name recognition and is obviously overshadowed by Derrick Rose; that doesn’t mean that he’s not one of the best defensive wing players in the NBA though. Just means he isn’t the “popular” or “household name” choice. The Bulls certainly do have their offensive struggles from time to time, but they are a picture of consistency when it comes to defense. La Bum and Bosh at times just decide to take games off; the question now is which of the upcoming East Conference Finals games will they take off? Finally, there’s not the grudge mode vs. the Bulls like there was vs.
the Celtics, but I hope that the media continues to discount the Bulls and anoint the Heat, as the bigger they are, the harder they’ll fall (be it to Chicago, Atlanta or whoever comes out of the West).

By edweird0 (Thu, 12 May 2011 21:26:58):
See, that's where you and I differ: I’ve seen pretty much every Bulls game since the first round and I’ve seen those struggles. For you to negate the Heat bench is just silly. While our stars might take the majority of the workload, our bench comes through when it matters most. You speak about the Heat as though they’re not known for their own brand of stifling defense. And yeah dude, the regular season means nothing as long as you get into the playoffs. You can go 82-0 during the regular season, but that means nothing for you in the playoffs, cause like my man Lebron said, “everyone starts 0-0 in the playoffs”.

By purdueman (Thu, 12 May 2011 20:39:59):
edw… Admittedly I pretty much ignored the Heat-Celtics series; just had no interest in either team. No one’s saying that Deng is on par OVERALL with La Bum; all I’m saying is that Deng made the All-NBA Defensive team for a reason: he’s long, fast and tall enough to give La Bum fits. The Bulls are NOT a one-man team either; far from it. While the Heat have two All-Stars to the Bulls’ 1, the Bulls have a very solid, up-and-coming supporting cast and an excellent bench that plays as a TEAM (a concept foreign to the Heat). My point is that the Heat can be beat in a series when up against a stifling defensive team, which the Bulls clearly are (just check the stats if you like).
The Bulls will NOT be intimidated by the Heat either, which will help to offset their playoff inexperience. All I’m predicting, though, is that the Heat sure as hell aren’t going to sweep the Bulls; I think that the series will go at least six games. That’s my prediction at this point, because Carlos Boozer has a history of getting injured and disappearing in big games, making him the Bulls’ “X” factor in the upcoming East Finals series.

By edweird0 (Thu, 12 May 2011 20:22:02):
@purdueman How can you even say that about the Bulls? Look how much they’ve struggled against worse teams, and you expect them to put up a fight against the Heat? Clearly we all know that regular season wins have no place in the playoffs (Heat/Celtics & Mavs/Lakers), so for you to use that argument is just foolish. The Heat are going to expose the Bulls for what they are: a young, inexperienced, one-man team. For you to even compare Deng against Wade or James is beyond me. His inconsistency alone is sufficient evidence to scrap any of your arguments. I understand your dislike for Lebron, everyone's entitled to their opinion, but for you to live in some alternate reality and negate how awesome of a player he is, is downright redundant. Clearly you and I have been watching two different playoffs.

By purdueman (Thu, 12 May 2011 16:28:55):
Not so fast there, ed… just because Miami has La Bum and Wade doesn’t mean that they are on their way to any kind of a dynasty.
The power in the NBA is clearly shifting to the eastern conference (as the aging Lakers and Spurs fade into mediocrity and an almost equally old Mavs squad likely will soon too). If you think that the young Bulls are going to stand pat this offseason, you’re crazy. The Bulls already match up very well with the Heat (and won all three meetings between the two clubs this past season), and know that they have to add an impact shooter to their mix. The Knicks aren’t going to stand pat either; don’t be shocked to see Chris Paul join Mello and Amarie, if not at the end of this season than surely the following season (when Paul is scheduled to be a free agent, but I’m willing to bet that he’s going to “pull a Mello” and force a trade to the Knicks before his free agency hits). The Hawks and Pacers are up and coming teams too; I’d take either of their current rosters over those of the Lakers or Spurs (due to the age of those rosters). Let’s take a quick look at how the Bulls match up the Miami’s so called “Big 3″ (I say so called because I’m not at all sold on Bosh): La Bum matches up with defensive stud Loul Dang. Dang has the length, athleticism and speed to matchup. Wade matches up with Rose (a draw or close to it). Bosh matches up with Boozer/Gibson. Both Bosh and Boozer run hot and cold, but Gibson will flat out shut down Bosh. Bench? The Bulls have one of the best benches in the league; advantage? Clearly Bulls. No, I’m not predicting that the Bulls will win the East; at least not this season, but it’s not going to be the cakewalk that so many giddy Heat fans think it will be either. We’ve ALL seen La Bum and Bosh periodically just take nights off; no one on the Bulls does that! 
]]>By: edweird0http://probasketballtalk.nbcsports.com/2011/05/12/in-the-end-it-was-about-the-heat-execution-in-the-end/comment-page-1/#comment-40628 Thu, 12 May 2011 07:53:09 +0000http://probasketballtalk.nbcsports.com/?p=23550#comment-40628As one dynasty fades into the background, another one establishes itself at the forefront. ]]>
Pavement Preservation Letter
Pavement Preservation White Paper
Pavement Preservation FAQ
Pavement Preservation Benefits Card
LTPP Tech Brief

NJAPA Positions

Thinlays are a proven, effective method of preserving your roadway assets and extending their life – research has shown that thin asphalt overlays have resulted in performance being extended from 6 to 16 years.1 In 2011, the Federal Highway Administration issued a Technical Brief comparing different pavement preservation techniques and found that "The overall results indicate that thin overlays and chip seals have superior performance, compared to slurry seal and crack seal."2 This Technical Brief summarized that thin asphalt overlays "were more effective than slurry seal and crack seal treatments and performed better than the control section for fatigue cracking"; "mitigated and slowed the progression of rutting under all circumstances"; were "effective in mitigating and delaying the progression of roughness"; and "outperformed other treatments when the existing section had minimal cracking prior to the treatment and higher levels of preexisting cracking."2

Surface distresses and structural adequacy of the road must be evaluated prior to using pavement preservation treatments, including Thinlays.

- Thinlays should be considered as part of any pavement preservation program.
- Consideration should be given to protecting the pavement structure and to the benefits of creating a Perpetual Pavement, particularly for thinner, lower-volume pavements, through the strategic application of Thinlays over time (staged construction).
- Selection of Thinlays should include a review of economic and engineering suitability.
- Restrictions that prohibit the use of Thinlays because of the possible addition of structural capacity, without technical basis, should be removed from specifications and guidance.
- Milling prior to overlay should be allowed, and in some cases encouraged, to remove surface distresses and provide optimum smoothness for long-term performance.
- The use of warm-mix asphalt (WMA) should be allowed for the construction of Thinlays.
- The use of reclaimed asphalt pavement (RAP) and/or reclaimed asphalt shingles (RAS) should be permitted for Thinlays.

Learn More About the Benefits of Thinlays

Contact Jim Purcell, NJAPA Technical Director, for more information or to schedule an in-person presentation.
Conventional devices for accessing and visualizing interior regions of a body lumen are known. For example, various catheter devices are typically introduced into a patient's body, e.g., intravascularly, and advanced into a desired position within the body. Other conventional methods have utilized catheters or probes having position sensors deployed within the body lumen, such as the interior of a cardiac chamber. These types of positional sensors are typically used to determine the movement of a cardiac tissue surface or the electrical activity within the cardiac tissue. When a sufficient number of points have been sampled by the sensors, a “map” of the cardiac tissue may be generated. Another conventional device utilizes an inflatable balloon, which is typically introduced intravascularly in a deflated state and then inflated against the tissue region to be examined. Imaging is typically accomplished by an optical fiber or other apparatus, such as electronic chips, for viewing the tissue through the membrane(s) of the inflated balloon, and the balloon must generally be inflated for imaging. Other conventional balloons utilize a cavity or depression formed at a distal end of the inflated balloon. This cavity or depression is pressed against the tissue to be examined and is flushed with a clear fluid to provide a clear pathway through the blood. However, many of the conventional catheter imaging systems lack the capability to provide therapeutic treatments or are difficult to manipulate in providing effective therapies. For instance, treatment in a patient's heart for atrial fibrillation is generally made difficult by a number of factors, such as visualization of the target tissue, access to the target tissue, and instrument articulation and management, amongst others. Conventional catheter techniques and devices, for example those described in U.S. Pat. Nos.
5,895,417; 5,941,845; and 6,129,724, when used on the epicardial surface of the heart, may make it difficult to assure a transmural lesion or complete blockage of electrical signals. In addition, current devices may have difficulty dealing with the varying thickness of tissue through which a transmural lesion is desired. Conventional accompanying imaging devices, such as fluoroscopy, are unable to detect perpendicular electrode orientation or catheter movement during the cardiac cycle, or to image catheter position throughout lesion formation. The absence of real-time visualization also poses the risk of incorrect placement and ablation of structures such as sinus node tissue, which can lead to fatal consequences. Moreover, because of the tortuous nature of intravascular access, devices or mechanisms at the distal end of a catheter positioned within the patient's body, e.g., within a chamber of the heart, are typically no longer aligned with the handle. Steering or manipulation of the distal end of the catheter via control or articulation mechanisms on the handle is easily disorienting to the user, as manipulation of a control on the handle in a first direction may articulate the catheter distal end in an unexpected direction depending upon the resulting catheter configuration, leaving the user to adjust accordingly. This results in reduced efficiency and longer procedure times, as well as increased risks to the patient. Accordingly, there is a need for improved catheter control systems which facilitate the manipulation and articulation of a catheter.
-1)? 2 Let i = -4 + 60. Suppose -5*o = 3*z + i, z - 6*z + 10 = -2*o. What is the units digit of (3/(-6))/(1/o)? 5 Let g(a) = -a**3 - 6*a**2 - 2*a - 17. What is the units digit of g(-7)? 6 Let w(p) = p**2 + p + 5. Let c be w(0). Suppose -92 = -c*i + 88. What is the units digit of i? 6 Let f(h) = h**3 - 5*h**2 + 4*h - 11. Let c be f(5). Suppose -4*d - 29 = -2*s + c, -s + d + 19 = 0. What is the units digit of s? 9 Let l be (0 - 0) + -1 + 1. Let d(x) = 3*x**2 + 2*x - 1. Let m(q) = -10*q**2 - 7*q + 2. Let u(a) = -7*d(a) - 2*m(a). What is the units digit of u(l)? 3 Suppose 5*x - 3*b - 41 = b, -5*x - 3*b = -13. Suppose -x*m = -0*m - 65. What is the units digit of m? 3 What is the tens digit of ((-8)/(-6) + 0)/(6/207)? 4 Let x(p) = 10*p**2 + 9*p + 10. What is the tens digit of x(-6)? 1 Let w be (-10)/6*(-1 + -26). Suppose 5*x - 2*x - w = 0. Let y = x + -6. What is the units digit of y? 9 Let d = -10 + 4. Let t be (-3)/(-9) + (-10)/d. Let s = t - -3. What is the units digit of s? 5 Let b be 3/6*(27 + -1). Suppose t - b = -d - 5, -5*d = -2*t + 44. Let g = -6 + t. What is the units digit of g? 6 Let t(q) = q**3 - 7*q**2 + 4*q - 7. What is the tens digit of t(7)? 2 Let i(g) = -8*g - 8. What is the units digit of i(-2)? 8 Suppose 5*r - 2*k = 55, -4*r + k = 2*k - 57. What is the tens digit of r? 1 Let s(c) = -c**2 + 5*c - 6. Let p be s(5). Let o be 0 + 0 + p*1. Let d(z) = -z**3 - 5*z**2 + 3*z - 8. What is the units digit of d(o)? 0 Suppose -24 + 4 = -2*x. Let p be 3/12*x*2. Suppose -w - z + 1 = -8, 5*z + p = 5*w. What is the units digit of w? 5 Let u(c) = -c**3 + c**2 + c + 4. What is the units digit of u(0)? 4 Let a = -4 + 7. Let w(z) = 3*z**2 - 3*z - 4. What is the units digit of w(a)? 4 Let g(j) = j - 11. Let a be g(11). What is the units digit of 11 + (0 - 2 - a)? 9 Let r(w) = w**3 - 5*w**2 + 4*w + 4. Suppose 12 = 3*o + 4*b, 2*b = -o + 1 + 3. What is the units digit of r(o)? 4 Let z(l) = l**2 + 6*l + 7. Let r be z(-5). Suppose -5*d + 0*d - 63 = -r*h, -3*h + d + 62 = 0. 
What is the tens digit of h? 1 Let y(l) = -l**2 + l + 2. Let f be y(2). Let x be -1 + -2 + (6 - f). Suppose 0 = -x*r + 7 + 8. What is the units digit of r? 5 Let r(a) be the third derivative of -a**6/20 - a**5/30 - a**4/8 + 3*a**2. What is the units digit of r(-2)? 6 Suppose -3*y + 9 + 15 = 0. Let w = y - 5. What is the units digit of 1/w + (-46)/(-6)? 8 Suppose 5*a + 6 = -4. Let x(b) = -b**3 - b**2 - b + 1. What is the units digit of x(a)? 7 Let v(j) = -9*j + 1. Let t = -4 + 3. Let h be v(t). Let i = 16 - h. What is the units digit of i? 6 Let h(m) be the second derivative of m**4/6 + 2*m**3/3 + 3*m**2 + m. What is the units digit of h(-4)? 2 What is the units digit of ((-1)/(-3))/(-1)*102*-1? 4 Let z be (10/(-3))/(1/48). What is the units digit of (-12)/18 + z/(-6)? 6 Suppose 2*y + 2*q = 18 + 72, 0 = -5*y + q + 213. What is the units digit of y? 3 Suppose 4*i + 431 = 5*v, 0*v + 154 = 2*v + 3*i. What is the units digit of v? 3 Suppose -m - 47 = -5*r, 5*r + 4*m - 12 = 25. Let j = r - -1. What is the units digit of j? 0 Let o = 23 + -12. Suppose 16 - 52 = -2*f. Let s = f - o. What is the units digit of s? 7 Suppose f = 18 + 2. Let k = 40 - f. What is the tens digit of k? 2 Let k be 0/(4/2 + 0). Let n = 1 + k. Let t(b) = 3*b**2 + b. What is the units digit of t(n)? 4 Let w(x) = 1. Let i(c) = -22*c + 1. Let p(a) = i(a) - 2*w(a). Let l be p(-1). Suppose 2*v = -v + l. What is the units digit of v? 7 Suppose 11 = -5*g - 4. Let o = g + -2. Let r = 4 - o. What is the units digit of r? 9 Suppose 0 = -2*x - 4*d - 22, 0 = x - d + 4*d + 16. Let v = -1 - x. Suppose f = -v + 19. What is the units digit of f? 9 What is the hundreds digit of ((-1740)/(-45))/(2/6)? 1 Suppose 5*m + 2 = 7. Let a be -12*(-3)/(-2)*m. Let o = a + 33. What is the tens digit of o? 1 Let x(h) = h**2 + 10*h + 2. Let c be x(-9). Let t be c/5 + (-2)/(-5). What is the units digit of 1 + ((-5)/t - 1)? 5 Let u(k) = 53*k**3 - k**2 + 1. What is the units digit of u(1)? 3 Let t(l) = l - 3. 
What is the units digit of t(11)? 8 Let h(n) be the first derivative of -11*n**2/2 + n + 2. Let q be h(1). What is the units digit of (5 - 1)/((-2)/q)? 0 Let u = -91 + 96. What is the units digit of u? 5 Let w be -4*(-2 - 454/8). Let c = 156 - w. What is the units digit of c/(-9) + (-2)/(-9)? 9 Suppose 3*p - 66 = 4*b, 0 = 5*p - b - 65 - 45. Let y = -3 - 5. Let x = y + p. What is the tens digit of x? 1 Let f(s) = -s**3 + 22*s**2 + 29*s - 1. What is the hundreds digit of f(23)? 1 What is the hundreds digit of ((-3560)/(-12))/(-5)*-3? 1 Suppose -13 = -5*x - 3. Suppose 2*a - x = 24. What is the units digit of a? 3 Suppose -x = -1 - 1. Suppose -v - x = -3*v. Let j = 2 - v. What is the units digit of j? 1 Let b(h) = 17*h + 4. Let i be b(4). Suppose -5*t + 7 = -r, 2*r + 2 = -2*t + 6*t. Suppose r*f = -f + i. What is the units digit of f? 8 Suppose 4*o - 4*b - 345 = -117, -b + 63 = o. Suppose -2*t + 159 = 3*t + 2*z, -2*t = 2*z - o. What is the tens digit of t? 3 Let o(c) = 56*c**3 - 2*c**2 + 2*c. What is the units digit of o(1)? 6 Suppose 5*t + 2*q - 18 = 45, 4*q = 3*t - 17. What is the units digit of (-60)/(-33) + 2/t? 2 Let r = 9 - 7. Let y be r/(1*(-3)/51). Let t = -20 - y. What is the tens digit of t? 1 Suppose 2*r + 2*v = 202, 0 = -2*r - v - 0*v + 198. What is the units digit of r? 7 Let q = 139 + -37. What is the units digit of (4/(-6))/((-4)/q)? 7 Suppose 2*o - 14 = -0. Suppose t + 4*k = 10, 22 - o = 3*k. Let j = -1 - t. What is the units digit of j? 9 What is the units digit of (1 + -4)/(-9)*69? 3 Suppose 3*z = -2*z - 20. What is the units digit of (-12 + 3)*z/6? 6 Let n be ((2 + -2)/1)/3. Let d(x) = -x + 3. What is the units digit of d(n)? 3 Let t = 5 + -3. Suppose 0 = -2*r - t*r + 8. Suppose -24 = -r*k - 8. What is the units digit of k? 8 Let z(v) = v**2 - v. Let t = 8 + -6. Let h be z(t). Suppose 0 = h*f + 2 - 40. What is the tens digit of f? 1 Suppose 4 = 2*b - 2. Suppose -3*m = -b*a + 4*a, -7 = -3*a - 2*m. Suppose a*w = -2*w - 3*r + 80, 0 = 4*r. 
What is the units digit of w? 6 Let m be (-4 + 3)/((-3)/9). Suppose 2*y - m*i = -3*y, -2*i = -10. Suppose y*b + a = 2*a + 26, -3*b = 5*a - 32. What is the units digit of b? 9 Suppose 3*a = 4*q + 42 - 7, a = -3*q - 10. Suppose 2*h + w - 91 = 0, -28 - 67 = -2*h - a*w. Suppose h = -0*k + 3*k. What is the tens digit of k? 1 Suppose 0 = -s + 4*s - 63. What is the tens digit of s? 2 Suppose 2*b - 1320 = -b. What is the tens digit of b/45 + 2/9? 1 Let w(r) = -3*r + 2. Let z be w(-4). Let b(a) = -9*a. Let y be b(1). Let c = y + z. What is the units digit of c? 5 Let u(n) = 9*n - 13. Let r(k) = -5*k + 7. Let j(c) = 10*r(c) + 6*u(c). What is the tens digit of j(8)? 2 Let l be (-3)/15 - (-38)/(-10). Let k(d) = d**3 - 4*d**2 + d + 6. Let u(h) = -2*h**3 + 4*h**2 - 6. Let v(c) = -3*k(c) - 2*u(c). What is the units digit of v(l)? 6 Let s(d) = d - 8. Let h be (3/(-2))/((-3)/16). Let g be s(h). Suppose -2*a - o = -25, g = a + 2*a - 2*o - 34. What is the tens digit of a? 1 Let w(d) = d**3 + 6*d**2 - 4*d + 4. Let g be w(-5). Let h be 108/14 + 14/g. Suppose 3*r = 3, -4*c = 5*r - h*r - 33. What is the units digit of c? 9 Suppose 3*n - 4 = n. Let k = 21 + 3. Suppose n*f = -2*h + 22, -4*h - 5*f + 19 = -k. What is the tens digit of h? 1 Suppose -2*v = 3*i - 125, -2*i + 78 = -0*i + 4*v. What is the tens digit of i? 4 Let n(u) = -2*u + 2. Let p be n(-3). Let m(t) = -1 + t**3 + 9*t**2 + p*t + 2 + 1. What is the units digit of m(-8)? 2 Let k be 6/(-14) - (-24)/7. Suppose k*z - 30 = -2*l - 3*l, 0 = 4*z + 4*l - 48. What is the tens digit of z? 1 Suppose 2*x + 4 = 10. Suppose -x*l + 16 = d + 2, -4*l + 8 = -4*d. What is the units digit of l? 4 Let t be (-2 + 10)*5/5. Let p = t + -3. Suppose v - a = 2*a + 9, -2*v - p*a + 18 = 0. What is the units digit of v? 9 Let x = 1 - -2. Let p = x + 1. Suppose 2*q = -q - 4*m + 3, 5*q - 5 = -p*m. What is the units digit of q? 1 Let z(n) = -6*n**3 + n**2 + 2*n. Let q be z(-2). Suppose -4*c + q + 28 = 0. What is the tens digit of c? 
1 What is the units digit of (-6)/(1*(-2 - -1))? 6 Let n(t) be the first derivative of t**6/45 - t**5/60 + t**4/24 - t**3/3 - 1. Let s(j) be the third derivative of n(j). What is the units digit of s(1)? 7 Let h(x) = -x**3 + 6*x**2 - 5*x. Let f be h(4). What is the units
Q: Are regex functions like "regexec" thread-safe in libc version 2.2.5? I've read that the regex functions in libc should be thread-safe, but I've also read that in earlier versions this was not the case. I now have to work on an embedded system that has the old libc version 2.2.5, so I'm not really sure whether functions like "regexec" are thread-safe or whether they should be protected by a mutex. If anyone has any clue about this, I would be grateful. I'm also not very sure what I should test against to verify the thread safety of these functions.

A: I searched the NEWS file in a recent libc version (2.13) for the regex keyword. There is nothing about thread safety, but there is the following note:

Version 2.3 ... Isamu Hasegawa contributed a completely new and POSIX-conformant implementation of regex.

But according to this, there have been some concurrency issues even after 2.3, so things do not look good for 2.2.5. According to this very similar question, POSIX conformance means regexec must be thread-safe, but that does not mean there are no concurrency bugs in earlier versions of libc.
---
abstract: 'Multidimensional noncommutative Laplace transforms over octonions are studied. Theorems about direct and inverse transforms and other properties of the Laplace transforms over the Cayley-Dickson algebras are proved. Applications to partial differential equations, including those of elliptic, parabolic and hyperbolic type, are investigated. Moreover, partial differential equations of higher order with real, complex or variable coefficients, with or without boundary conditions, are considered.'
author:
- 'Ludkovsky S.V.'
date: 25 January 2010
title: 'Multidimensional Laplace transforms over Cayley-Dickson algebras and partial differential equations.'
---

Introduction.
=============

The Laplace transform over the complex field is classical and plays a very important role in mathematics, including complex analysis and differential equations [@vladumf; @lavrsch; @polbremm]. The classical Laplace transform is used frequently for ordinary differential equations and also for partial differential equations sufficiently simple to be solved, for example in two variables. But it meets substantial difficulties, or does not work at all, for general partial differential equations even with constant coefficients, especially those of hyperbolic type. To overcome these drawbacks of the classical Laplace transform, more general noncommutative multiparameter transforms over Cayley-Dickson algebras are investigated in the present paper. In the preceding paper a noncommutative analog of the classical Laplace transform over the Cayley-Dickson algebras was defined and investigated [@lutsltjms]. This paper is devoted to its generalizations to several real parameters and also to variables in the Cayley-Dickson algebras. For this, the author's preceding results on holomorphic, that is (super)differentiable, functions and meromorphic functions of Cayley-Dickson numbers are used [@ludoyst; @ludfov].
The super-differentiability of functions of Cayley-Dickson variables is stronger than Fréchet differentiability. In those works a noncommutative line integration was also investigated. We recall that quaternions and operations over them were first defined and investigated by W.R. Hamilton in 1843 [@hamilt]. Several years later Cayley and Dickson introduced generalizations of quaternions known now as the Cayley-Dickson algebras [@baez; @kansol; @kurosh; @rothe]. These algebras, especially quaternions and octonions, have found applications in physics. They were used by Maxwell, Yang and Mills in deriving their equations, which they then rewrote in real form because mathematical analysis over such algebras was insufficiently developed in their time [@emch; @guetze; @lawmich]. This is important, because noncommutative gauge fields are widely used in theoretical physics [@solov]. Each Cayley-Dickson algebra ${\cal A}_r$ over the real field $\bf R$ has $2^r$ generators $\{ i_0,i_1,...,i_{2^r-1} \} $ such that $i_0=1$, $i_j^2=-1$ for each $j=1,2,...,2^r-1$, $i_ji_k=-i_ki_j$ for every $1\le k\ne j \le 2^r-1$, where $r\ge 1$. The algebra ${\cal A}_{r+1}$ is formed from the preceding algebra ${\cal A}_r$ with the help of the so-called doubling procedure by the generator $i_{2^r}$. In particular, ${\cal A}_1=\bf C$ coincides with the field of complex numbers, ${\cal A}_2=\bf H$ is the skew field of quaternions, ${\cal A}_3$ is the algebra of octonions, ${\cal A}_4$ is the algebra of sedenions. This means that a sequence of embeddings $...\hookrightarrow {\cal A}_r\hookrightarrow {\cal A}_{r+1}\hookrightarrow ...$ exists. Generators of the Cayley-Dickson algebras have a natural physical meaning as generating operators of fermions. The skew field of quaternions is associative, and the algebra of octonions is alternative.
The Cayley-Dickson algebra ${\cal A}_r$ is power associative, that is, $z^{n+m}=z^nz^m$ for each $n, m \in \bf N$ and $z\in {\cal A}_r$. It is non-associative and non-alternative for each $r\ge 4$. A conjugation $z^*={\tilde z}$ of Cayley-Dickson numbers $z\in {\cal A}_r$ is associated with the norm $|z|^2 = zz^* = z^*z$. The octonion algebra has the multiplicative norm and is a division algebra. Cayley-Dickson algebras ${\cal A}_r$ with $r\ge 4$ are not division algebras and do not have multiplicative norms. The conjugate of any Cayley-Dickson number $z$ is given by the formula: $(M1)$ $z^* := \xi ^* - \eta {\bf l}$.\ The multiplication in ${\cal A}_{r+1}$ is defined by the following equation: $(M2)$ $(\xi + \eta {\bf l})(\gamma +\delta {\bf l})=(\xi \gamma -{\tilde {\delta }}\eta )+(\delta \xi +\eta {\tilde {\gamma }}){\bf l}$\ for each $\xi $, $\eta $, $\gamma $, $\delta \in {\cal A}_r$, $z := \xi +\eta {\bf l}\in {\cal A}_{r+1}$, $\zeta :=\gamma +\delta {\bf l} \in {\cal A}_{r+1}$. At the beginning of this article a multiparameter noncommutative transform is defined. Then new types of the direct and inverse noncommutative multiparameter transforms over the general Cayley-Dickson algebras are investigated, particularly also over the quaternion skew field and the algebra of octonions. The transforms are considered in ${\cal A}_r$ spherical and ${\cal A}_r$ Cartesian coordinates. At the same time specific features of the noncommutative multiparameter transforms are elucidated, for example those related to the fact that in the Cayley-Dickson algebra ${\cal A}_r$ there are $2^r-1$ imaginary generators $\{ i_1,...,i_{2^r-1} \} $, as opposed to one in the field of complex numbers, so that the imaginary space in ${\cal A}_r$ has dimension $2^r-1$. Theorems about properties of images and originals in conjunction with the operations of linear combination, differentiation, integration, shift and homothety are proved.
An extension of the noncommutative multiparameter transforms to generalized functions is given. Formulas for noncommutative transforms of products and convolutions of functions are deduced. Thus this solves the problem of non-commutative mathematical analysis of developing the multiparameter Laplace transform over the Cayley-Dickson algebras. Moreover, an application of the noncommutative integral transforms to solutions of partial differential equations is described. It can serve as an effective tool for solving partial differential equations with real or complex coefficients, with or without boundary conditions, and their systems of different types. An algorithm is described which permits writing fundamental solutions and functions of Green's type. A moving boundary problem and partial differential equations with discontinuous coefficients are also studied with the use of the noncommutative transform. Moreover, a decomposition theorem for linear partial differential operators over the Cayley-Dickson algebras is proved. A relation between fundamental solutions of the initial operator and its component operators is demonstrated. In conjunction with a line integration over the Cayley-Dickson algebras, the decomposition theorem permits solving linear partial differential equations with constant or variable coefficients, non-linear equations, and boundary problems (see also [@ludifeqcdla]). Certainly, this approach effectively encompasses systems of partial differential equations, because each function $f$ with values in the Cayley-Dickson algebra is the sum of functions $f_ji_j$, where each function $f_j$ is real-valued. All results of this paper are obtained for the first time.

Multidimensional noncommutative integral transforms.
====================================================

[**1. Definitions.
Transforms in ${\cal A}_r$ Cartesian coordinates.**]{} Denote by ${\cal A}_r$ the Cayley-Dickson algebra, $0\le r$, which may be, in particular, ${\bf H} = {\cal A}_2$ the quaternion skew field or ${\bf O} = {\cal A}_3$ the octonion algebra. For unification of the notation we put ${\cal A}_0 = {\bf R}$, ${\cal A}_1 = {\bf C}$. A function $f: {\bf R}^n\to {\cal A}_r$ is called a function-original, where $2\le r$, $n\in \bf N$, if it fulfills the following Conditions $(1-5)$. $(1).$ The function $f(t)$ is almost everywhere continuous on ${\bf R}^n$ relative to the Lebesgue measure $\lambda _n$ on ${\bf R}^n$. $(2).$ On each finite interval in $\bf R$ each function $g_j(t_j)= f(t_1,...,t_n)$, considered as a function of $t_j$ with all other variables fixed, may have only a finite number of points of discontinuity of the first kind, where $t=(t_1,...,t_n)\in {\bf R}^n$, $t_j\in \bf R$, $j=1,...,n$. Recall that a point $u_0\in \bf R$ is called a point of discontinuity of the first kind if there exist finite left and right limits $\lim_{u\to u_0, u<u_0} g(u) =: g(u_0-0)\in {\cal A}_r$ and $\lim_{u\to u_0, u>u_0} g(u) =: g(u_0+0)\in {\cal A}_r$. $(3).$ Every partial function $g_j(t_j)=f(t_1,..., t_n)$ satisfies the Hölder condition: $|g_j(t_j+h_j)-g_j(t_j)| \le A_j |h_j|^{\alpha _j}$ for each $|h_j|<\delta _j$, where $0<\alpha _j\le 1$, $A_j=const >0$, $\delta _j>0$ are constants for a given $t=(t_1,..., t_n)\in {\bf R}^n$, $j=1,...,n$, everywhere on ${\bf R}^n$ except possibly at points of discontinuity of the first kind. $(4).$ The function $f(t)$ increases not faster than an exponential function, that is, there exist constants $C_v = const >0$, $v= (v_1,...,v_n)$, $a_{-1}, a_1 \in \bf R$, where $v_j\in \{ -1, 1 \}$ for every $j=1,...,n$, such that $|f(t)|<C_v \exp ((q_v,t))$ for each $t\in {\bf R}^n$ with $t_j v_j\ge 0$ for each $j=1,...,n$, $q_v = (v_1a_{v_1},...,v_na_{v_n})$; where $(5)$ $(x,y) := \sum_{j=1}^n x_j y_j$ denotes the standard scalar product in ${\bf R}^n$.
Certainly for a bounded original $f$ it is possible to take $a_{-1} = a_1 = 0$. Each Cayley-Dickson number $p\in {\cal A}_r$ we write in the form $(6)$ $p = \sum_{j=0}^{2^r-1} p_j i_j$, where $ \{ i_0, i_1, ...,i_{2^r-1} \} $ is the standard basis of generators of ${\cal A}_r$ so that $i_0=1$, $i_j^2=-1$ and $i_0i_j=i_j=i_ji_0$ for each $j>0$, $i_ji_k = - i_ki_j$ for each $j>0$ and $k>0$ with $k\ne j$, $p_j\in {\bf R}$ for each $j$. If there exists an integral $(7)$ $F^n(p) := F^n(p;\zeta ):= \int_{{\bf R}^n} f(t) e^{- <p,t) - \zeta }dt$,\ then $F^n(p)$ is called the noncommutative multiparameter (Laplace) transform at a point $p\in {\cal A}_r$ of the function-original $f(t)$, where $\zeta -\zeta _0 = \zeta _1i_1+...+\zeta _{2^r-1}i_{2^r-1}\in {\cal A}_r$ is the parameter of an initial phase, $\zeta _j\in \bf R$ for each $j=0,1,...,2^r-1$, $\zeta \in {\cal A}_r$, $n=2^r-1$, $dt = \lambda _n(dt)$, $(8)$ $<p,t) =p_0 (t_1+...+t_{2^r-1}) + \sum_{j=1}^{2^r-1} p_j t_j i_j$, we also put $(8.1)$ $u(p,t;\zeta ) = <p,t) + \zeta $. For vectors $v, w \in {\bf R}^n$ we shall consider a partial ordering $(9)$ $v\prec w$ if and only if $v_j\le w_j$ for each $j=1,...,n$ and a $k$ exists so that $v_k<w_k$, $1\le k \le n$. [**2. Transforms in ${\cal A}_r$ spherical coordinates.**]{} Now we consider also the non-linear function $u=u(p,t;\zeta )$ taking into account non commutativity of the Cayley-Dickson algebra ${\cal A}_r$. 
Put $(1)$ $u(p,t) := u(p,t;\zeta ) := p_0 s_1 + M(p,t)+\zeta _0$, where $(2)$ $M(p,t)=M(p,t;\zeta ) = (p_1s_1+\zeta _1)[ i_1 \cos (p_2s_2 +\zeta _2) + i_2 \sin (p_2s_2+\zeta _2)$\ $\cos (p_3s_3+\zeta _3) +...+ i_{2^r-2} \sin (p_2s_2+\zeta _2) ...\sin (p_{2^r-2}s_{2^r-2}+\zeta _{2^r-2}) \cos (p_{2^r-1}s_{2^r-1}$\ $+\zeta _{2^r-1}) + i_{2^r-1}\sin (p_2s_2+\zeta _2)...\sin (p_{2^r-2}s_{2^r-2}+\zeta _{2^r-2}) \sin (p_{2^r-1}s_{2^r-1} + \zeta _{2^r-1})]$\ for the general Cayley-Dickson algebra with $2\le r<\infty $, $(2.1) \quad s_j := s_j(n;t) := t_j +...+t_n$ for each $j=1,...,n$, $n=2^r-1$, so that $s_1=t_1+...+t_n$, $s_n=t_n$. More generally, let $(3)$ $u(p,t)=u(p,t;\zeta )=p_0 s_1 + w(p,t)+\zeta _0$, where $w(p,t)$ is a locally analytic function, $Re (w(p,t))=0$ for each $p\in {\cal A}_r$ and $t\in {\bf R}^{2^r-1}$, $Re (z) := (z+{\tilde z})/2$, ${\tilde z}=z^*$ denotes the conjugated number for $z\in {\cal A}_r$. Then the more general non-commutative multiparameter transform over ${\cal A}_r$ is defined by the formula: $(4)$ $F_u^n(p;\zeta ) := \int_{{\bf R}^n} f(t) \exp (-u(p,t;\zeta ))dt$\ for each Cayley-Dickson numbers $p\in {\cal A}_r$ whenever this integral exists as the principal value of either Riemann or Lebesgue integral, $n=2^r-1$. This non-commutative multiparameter transform is in ${\cal A}_r$ spherical coordinates, when $u(p,t;\zeta )$ is given by Formulas $(1,2)$. At the same time the components $p_j$ of the number $p$ and $\zeta _j$ for $\zeta $ in $u(p,t;\zeta )$ we write in the $p$- and $\zeta $-representations respectively such that $(5)$ $h_j=(-hi_j+ i_j(2^r-2)^{-1} \{ -h +\sum_{k=1}^{2^r-1}i_k(hi_k^*) \} )/2$ for each $j=1,2,...,2^r-1$,\ $(6)$ $h_0=(h+ (2^r-2)^{-1} \{ -h + \sum_{k=1}^{2^r-1}i_k(hi_k^*) \} )/2$,\ where $2\le r\in \bf N$, $h=h_0i_0+...+h_{2^r-1}i_{2^r-1}\in {\cal A}_r$, $h_j\in \bf R$ for each $j$, $i_k^* = {\tilde i}_k = - i_k$ for each $k>0$, $i_0=1$, $h\in {\cal A}_r$. 
Denote $F_u^n(p;\zeta )$ in more detail by ${\cal F}^n(f,u;p;\zeta )$. Henceforth, the functions $u(p,t; \zeta )$ given by 1$(8,8.1)$ or $(1,2, 2.1)$ are used unless another form $(3)$ is specified. If for $u(p,t; \zeta )$ concrete formulas are not mentioned, it will be understood that the function $u(p,t; \zeta )$ is given in ${\cal A}_r$ spherical coordinates by Expressions $(1,2, 2.1)$. If in Formulas 1$(7)$ or $(4)$ the integral is taken not over all variables, but only over $t_{j(1)},...,t_{j(k)}$, where $1\le k<n$, $1\le j(1)<...<j(k)\le n$, then we denote a noncommutative transform by $F_u^{k; t_{j(1)},...,t_{j(k)}} (p;\zeta )$ or ${\cal F}^{k; t_{j(1)},...,t_{j(k)}} (f,u;p;\zeta )$. If $j(1)=1,$...,$j(k)=k$, then we denote it shortly by $F^k_u(p;\zeta )$ or ${\cal F}^k(f,u;p;\zeta )$. Henceforth, we take $\zeta _m=0$ and $t_m=0$ and $p_m=0$ for each $1\le m\notin \{ j(1),...,j(k) \} $ unless otherwise specified. [**3. Remark.**]{} The spherical ${\cal A}_r$ coordinates appear naturally from the following consideration of iterated exponents: $(1)$ $\exp (i_1(p_1s_1+\zeta _1)\exp (-i_3(p_2s_2 +\zeta _2) \exp (-i_1(p_3s_3+\zeta _3)))= \exp (i_1(p_1s_1+\zeta _1)\exp (- (p_2s_2+\zeta _2)(i_3\cos (p_3s_3+\zeta _3) - i_2\sin (p_3s_3+\zeta _3))))$ $= \exp (i_1(p_1s_1+\zeta _1)(\cos (p_2s_2+\zeta _2) - \sin (p_2s_2+\zeta _2)(i_3\cos (p_3s_3+\zeta _3) - i_2\sin (p_3s_3+\zeta _3))))$ $= \exp ((p_1s_1+\zeta _1)(i_1\cos (p_2s_2+\zeta _2) + i_2 \sin (p_2s_2+\zeta _2)\cos (p_3s_3+\zeta _3) + i_3\sin (p_2s_2+\zeta _2)\sin (p_3s_3+\zeta _3)))$. Consider the generator $i_{2^r}$ of the doubling procedure of the Cayley-Dickson algebra ${\cal A}_{r+1}$ from the Cayley-Dickson algebra ${\cal A}_r$, such that $i_ji_{2^r}=i_{2^r+j}$ for each $j=0,...,2^r-1$. We now denote the function $M(p,t;\zeta )$ from Definition 2 over ${\cal A}_r$ in more detail by $\mbox{ }_rM$.
Then by induction we write: $$(2)\quad \exp (\mbox{ }_{r+1}M(p,t;\zeta ))= \exp \{\mbox{ }_rM((i_1p_1+...+i_{2^r-1}p_{2^r-1}),(t_1,...,t_{2^r-2}, (t_{2^r-1}+s_{2^r}));$$ $$(i_1\zeta _1+...+i_{2^r-1} \zeta _{2^r-1})) \exp (-i_{2^r+1}(p_{2^r}s_{2^r} +\zeta _{2^r})$$ $$\exp (-\mbox{ }_rM((i_1p_{2^r+1}+...+i_{2^r-1}p_{2^{r+1}-1}), (t_{2^r+1},...,t_{2^{r+1}-1}); (i_1\zeta _{2^r+1}+...+i_{2^r-1} \zeta _{2^{r+1}-1}))) \} ,$$ where $t=(t_1,...,t_n)$, $n=n(r+1) = 2^{r+1}-1$, $s_j=s_j(n(r+1);t)$ for each $j=1,...,n(r+1)$, since $s_m(n(r+1);t) = t_m+...+t_{n(r+1)} = s_m(n(r);t) + s_{2^r}(n(r+1);t)$ for each $m=1,...,2^r-1$. An image function can be written in the form $(3)$ $F_u^n(p;\zeta ):= \sum_{j=0}^{2^r-1} i_j F_{u,j}^n(p;\zeta )$,\ where a function $f$ is decomposed in the form $(3.1)$ $f(t)=\sum_{j=0}^{2^r-1} i_j f_j(t)$,\ $f_j: {\bf R}^n\to \bf R$ for each $j=0,1,...,2^r-1$, $F_{u,j}^n(p;\zeta )$ denotes the image of the function-original $f_j$. If an automorphism of the Cayley-Dickson algebra ${\cal A}_r$ is taken and instead of the standard generators $ \{ i_0,...,i_{2^r-1} \} $ new generators $ \{ N_0,...,N_{2^r-1} \} $ are used, then the function $M(p,t;\zeta )=M_N(p,t;\zeta )$ is given analogously relative to the new basic generators, where $2\le r\in \bf N$. In this more general case we denote by $\mbox{ }_NF_u^n(p;\zeta )$ an image for an original $f(t)$, or in more detail we denote it by $\mbox{ }_N {\cal F}^n(f,u;p;\zeta )$. Formulas 1$(7)$ and 2$(4)$ define the right multiparameter transform. A left multiparameter transform is defined symmetrically. They are related by conjugation and a change of sign of the basic generators. For real-valued originals they certainly coincide. Henceforward, only the right multiparameter transform is investigated. In particular, if $p=(p_0,p_1,0,...,0)$ and $t=(t_1,0,...,0)$, then the multiparameter non-commutative Laplace transforms 1$(7)$ and 2$(4)$ reduce to the complex case, with parameters $a_1$, $a_{-1}$.
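For $r=2$ the iterated-exponent identity $(1)$ can be verified numerically with a small quaternion sketch. This is our own illustration, not part of the paper; the helper names `qmul` and `qexp` are ours, and $a,b,c$ stand for $p_1s_1+\zeta _1$, $p_2s_2+\zeta _2$, $p_3s_3+\zeta _3$:

```python
import numpy as np

# Quaternion (r = 2) arithmetic: basis (i0, i1, i2, i3) with i1*i2 = i3.
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

def qexp(q):
    """exp(q0 + v) = e^{q0} (cos|v| + (v/|v|) sin|v|) for a quaternion q."""
    q = np.asarray(q, dtype=float)
    w, v = q[0], q[1:]
    nv = np.linalg.norm(v)
    out = np.zeros(4)
    out[0] = np.cos(nv)
    if nv > 0:
        out[1:] = v * (np.sin(nv) / nv)
    return np.exp(w) * out

i1, i2, i3 = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
a, b, c = 0.7, 1.3, -0.4   # stand for p1*s1+zeta1, p2*s2+zeta2, p3*s3+zeta3

# left side of (1): iterated exponent exp(i1*a*exp(-i3*b*exp(-i1*c)))
lhs = qexp(qmul(a * i1, qexp(qmul(-b * i3, qexp(-c * i1)))))
# right side of (1): single exponent in A_2 spherical coordinates
rhs = qexp(a * (np.cos(b) * i1
                + np.sin(b) * np.cos(c) * i2
                + np.sin(b) * np.sin(c) * i3))
```

The two sides agree to machine precision, matching the chain of equalities in $(1)$.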
Thus, the definitions given above over quaternions, octonions and general Cayley-Dickson algebras are justified. [**4. Theorem.**]{} [*If an original $f(t)$ satisfies Conditions 1$(1-4)$ and $a_1<a_{-1}$, then its image ${\cal F}^n(f,u;p;\zeta )$ is ${\cal A}_r$-holomorphic (that is locally analytic) by $p$ in the domain $\{ z\in {\cal A}_r: a_1< Re (z)<a_{-1} \} $, as well as by $\zeta \in {\cal A}_r$, where $1\le r\in \bf N$, $2^{r-1}\le n \le 2^r-1$, the function $u(p,t; \zeta )$ is given by 1$(8,8.1)$ or 2$(1,2, 2.1)$.*]{} [**Proof.**]{} At first consider the characteristic functions $\chi _{U_v} (t)$, where $\chi _U(t) =1$ for each $t\in U$, while $\chi _U(t)=0$ for every $t\in {\bf R}^n\setminus U$, $U_v := \{ t\in {\bf R}^n: v_jt_j\ge 0 ~ \forall j=1,...,n \} $ is the domain in the Euclidean space ${\bf R}^n$ for any $v$ from §1. Therefore, $(1)$ $F_u^n(p;\zeta ) := \sum_{[v=(v_1,...,v_n): v_1,...,v_n \in \{ -1, 1 \} ]} \int_{U_v} f(t) \exp (-u(p,t;\zeta )) dt,$\ since $\lambda _n (U_v\cap U_w)= 0$ for each $v\ne w$. Each integral $\int_{U_v} f(t) \exp (-u(p,t;\zeta )) dt$ is absolutely convergent for each $p\in {\cal A}_r$ with real part $ a_1< Re (p) <a_{-1}$, since it is majorized by the convergent integral $(2)$ $|\int_{U_v} f(t) \exp (-u(p,t;\zeta )) dt| \le \int_0^{\infty }... \int_0^{\infty } C_v \exp \{ - v_1(w - a_{v_1})y_1-...-v_n(w - a_{v_n})y_n - \zeta _0 \} dy_1...dy_n = C_v e^{-\zeta _0} \prod_{j=1}^n v_j(w-a_{v_j})^{-1} $,\ where $w=Re (p)$, since $|e^z|=\exp (Re (z))$ for each $z\in {\cal A}_r$ in view of Corollary 3.3 [@ludfov].
Meanwhile, the integral obtained from Integral $(1)$ by differentiation with respect to $p$ also converges uniformly: $$(3)\quad |\int_{U_v}f(t)[\partial \exp (-u(p,t;\zeta ))/ \partial p].hdt|$$ $$\le \int_0^{\infty }...\int_0^{\infty } C_v |(h_0(v_1y_1+...+v_ny_n),h_1(v_1y_1+...+v_ny_n),...,h_{n-1}(v_{n-1}y_{n-1}+v_ny_n), h_nv_ny_n)|$$ $\exp \{ - v_1(w - a_{v_1})y_1-...-v_n(w - a_{v_n})y_n - \zeta _0 \} dy_1...dy_n$ $$\le |h|C_v e^{-\zeta _0} \prod_{j=1}^n (w-a_{v_j})^{-2}$$ for each $h\in {\cal A}_r$, since each $z\in {\cal A}_r$ can be written in the form $z=|z|\exp (M)$, where $|z|^2=z{\tilde z}\in [0,\infty )\subset \bf R$, $M\in {\cal A}_r$, $Re (M):= (M+{\tilde M})/2=0$ in accordance with Proposition 3.2 [@ludfov]. In view of Equations 2$(5,6)$: $(4)$ $\partial (\int_{{\bf R}^n}f(t)\exp (- u(p,t;\zeta ))dt)/\partial {\tilde p}=0$ and $(5)$ $\partial (\int_{{\bf R}^n}f(t)\exp (- u(p,t;\zeta ))dt)/\partial {\tilde \zeta }=0$, while $(6)$ $|\int_{U_v} f(t) [\partial \exp (- u(p,t;\zeta ))/\partial \zeta ].hdt| \le |h| \int_0^{\infty }...\int_0^{\infty } C_v \exp \{ - v_1(w - a_{v_1})y_1-...-v_n(w - a_{v_n})y_n - \zeta _0 \} dy_1...dy_n = |h| C_v e^{-\zeta _0}\prod_{j=1}^n v_j(w-a_{v_j})^{-1}$\ for each $h\in {\cal A}_r$. In view of the convergence of Integrals $(1-6)$ given above, the multiparameter non-commutative transform $F_u^n(p;\zeta )$ is (super)differentiable by $p$ and $\zeta $, moreover, $\partial F_u^n(p;\zeta )/\partial {\tilde p}=0$ and $\partial F_u^n(p;\zeta )/\partial {\tilde \zeta }=0$ in the considered $(p,\zeta )$-representation. In accordance with [@ludoyst; @ludfov] a function $g(p)$ is locally analytic by $p$ in an open domain $U$ in the Cayley-Dickson algebra ${\cal A}_r$, $2\le r$, if and only if it is (super)differentiable by $p$, in other words, ${\cal A}_r$-holomorphic. Thus, $F_u^n(p;\zeta )$ is ${\cal A}_r$-holomorphic by $p\in {\cal A}_r$ with $a_1<Re (p)< a_{-1}$ and $\zeta \in {\cal A}_r$ due to Theorem 2.6 [@lutsltjms]. [**4.1.
Corollary.**]{} *Let the suppositions of Theorem 4 be satisfied. Then the image ${\cal F}^n(f,u;p;\zeta )$ with $u=u(p,t;\zeta )$ given by 2$(1,2)$ has the following periodicity properties:* $(1)$ ${\cal F}^n(f,u;p;\zeta +\beta i_j) = {\cal F}^n(f,u;p;\zeta )$ for each $j=1,...,n$ and $\beta \in 2\pi {\bf Z}$; $(2)$ ${\cal F}^n(f,u;p^1;\zeta ^1) = (-1)^{\kappa } {\cal F}^n(f,u;p^2;\zeta ^2)$ for each $j=1,...,n-1$ such that $\zeta _0^1 = \zeta _0^2$ and $\zeta _j^1 = - \zeta _j^2$, $\zeta _{j+1}^1 = \pi + \zeta _{j+1}^2$, $\zeta _s^1 = \zeta _s^2$ for each $s\ne j$ and $s\ne j+1$, while either $p_j^1 = - p_j^2$ and $p_l^1=p_l^2$ for each $l\ne j$ with $\kappa =2$, or $p^1=p^2$ and $f(t)$ is an even function of the variable $s_j=(t_j+...+t_n)$ with $\kappa =2$ or an odd function of $s_j=(t_j+...+t_n)$ with $\kappa =1$; $(3)$ ${\cal F}^n(f,u;p;\zeta + \pi i_1) = - {\cal F}^n(f,u;p;\zeta )$. [**Proof.**]{} In accordance with Theorem 4 the image ${\cal F}^n(f,u;p;\zeta )$ exists for each $p\in W_f := \{ z\in {\cal A}_r: ~ a_1< Re (z) < a_{-1} \} $ and $\zeta \in {\cal A}_r$, where $1\le r$. Then the first statement follows from the $2\pi $ periodicity of the sine and cosine functions. From $\sin (-\phi ) = - \sin (\phi )$, $\cos ( \phi ) = \cos ( - \phi )$, $\sin (\pi +\phi ) = - \sin (\phi )$, $\cos (\phi +\pi ) = - \cos (\phi )$ we get that $\cos ( p_j s_j + \zeta _j^1) = \cos ( - p_j s_j + \zeta _j^2)$, $\sin (p_j s_j + \zeta _j^1) \cos (p_{j+1} s_{j+1} + \zeta _{j+1}^1) = ( - \sin ( - p_j s_j + \zeta _j^2) ) ( - \cos (p_{j+1} s_{j+1} + \zeta _{j+1}^2) )$ and $\sin (p_j s_j + \zeta _j^1) \sin (p_{j+1} s_{j+1} + \zeta _{j+1}^1) = ( - \sin ( - p_j s_j + \zeta _j^2) ) ( - \sin ( p_{j+1} s_{j+1} + \zeta _{j+1}^2) )$ for each $t\in {\bf R}^n$.
On the other hand, either $p_j^1 = - p_j^2$ and $p_l^1=p_l^2$ for each $l\ne j\ge 1$ with $\kappa =2$ or $p^1=p^2$ and $f(t_1,...,s_{j-1}+s_j, -s_j-s_{j+1}, t_{j+1},...,t_n) = (-1)^{\kappa } f(t_1,...,s_{j-1}-s_j, s_j-s_{j+1},t_{j+1},...,t_n)$ is an even (with $\kappa =2$) or odd (with $\kappa =1$) function of the variable $s_j=(t_j+...+t_n)$ for each $t=(t_1,...,t_n)\in {\bf R}^n$, where $t_j=s_j-s_{j+1}$ for $j=1,...,n$, $s_{n+1}=s_{n+1}(n;t)=0$. From this and Formulas 2$(1,2,4)$ the second and the third statements of this corollary follow. [**5. Remark.**]{} For a subset $U$ in ${\cal A}_r$ we put $\pi _{{\sf s},{\sf p},{\sf t}}(U):= \{ {\sf u}: z\in U, z=\sum_{{\sf v}\in \bf b}w_{\sf v}{\sf v},$ ${\sf u}=w_{\sf s}{\sf s}+w_{\sf p}{\sf p} \} $ for each ${\sf s}\ne {\sf p}\in \bf b$, where ${\sf t}:=\sum_{{\sf v}\in {\bf b}\setminus \{ {\sf s}, {\sf p} \} } w_{\sf v}{\sf v} \in {\cal A}_{r,{\sf s},{\sf p}}:= \{ z\in {\cal A}_r:$ $z=\sum_{{\sf v}\in \bf b} w_{\sf v}{\sf v},$ $w_{\sf s}=w_{\sf p}=0 ,$ $w_{\sf v}\in \bf R$ $\forall {\sf v}\in {\bf b} \} $, where ${\bf b} := \{ i_0,i_1,...,i_{2^r-1} \} $ is the family of standard generators of the Cayley-Dickson algebra ${\cal A}_r$. That is, geometrically $\pi _{{\sf s},{\sf p},{\sf t}}(U)$ means the projection on the complex plane ${\bf C}_{{\sf s},{\sf p}}$ of the intersection of $U$ with the plane ${\tilde \pi }_{{\sf s},{\sf p},{\sf t}}\ni {\sf t}$, ${\bf C}_{{\sf s},{\sf p}} := \{ a{\sf s}+b{\sf p}:$ $a, b \in {\bf R} \} $, since ${\sf s}{\sf p}^*\in {\hat b}:={\bf b}\setminus \{ 1 \} $. Recall that in §§2.5-7 [@ludfov], for each continuous function $f: U\to {\cal A}_r$, the operator ${\hat f}$ was defined for each variable $z\in {\cal A}_r$. For the non-commutative integral transformations we consider, for example, the left algorithm for the calculation of integrals.
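In the reduced complex case $p=(p_0,p_1,0,...,0)$, $t=(t_1,0,...,0)$ of Remark 3, the periodicity properties of Corollary 4.1 can be observed numerically. The sketch below is our own illustration (the name `F` and the truncated quadrature are ours); it takes the original $f(t)=e^{-t}$ on $[0,\infty )$, so that $a_1=-1$ and the image has the closed form $e^{-\zeta _0-i\zeta _1}/(p_0+ip_1+1)$:

```python
import numpy as np

def trapezoid(y, dx):
    # composite trapezoid rule for uniformly spaced samples
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def F(p0, p1, z0, z1):
    """Image of f(t) = exp(-t), t >= 0, with u = p0*t + i*(p1*t + z1) + z0."""
    t = np.linspace(0.0, 60.0, 200001)   # truncated half-line; the e^{-t} tail is negligible
    y = np.exp(-t) * np.exp(-(p0 * t + 1j * (p1 * t + z1) + z0))
    return trapezoid(y, t[1] - t[0])

p0, p1, z0, z1 = 0.5, 0.8, 0.3, 1.1
base = F(p0, p1, z0, z1)
```

Shifting $\zeta _1$ by $2\pi $ reproduces the image (property $(1)$), shifting by $\pi $ flips the sign (property $(3)$), and the quadrature agrees with the closed form.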
A Hausdorff topological space $X$ is said to be $n$-connected for $n\ge 0$ if each continuous map $f: S^k\to X$ from the $k$-dimensional real unit sphere into $X$ has a continuous extension over ${\bf R}^{k+1}$ for each $k\le n$ (see also [@span]). A $1$-connected space is also said to be simply connected. It is further supposed that a domain $U$ in ${\cal A}_r$ has the property that $U$ is $(2^r-1)$-connected; $\pi _{{\sf s},{\sf p},{\sf t}}(U)$ is simply connected in $\bf C$ for each $k=0,1,...,2^{r-1}$, ${\sf s}=i_{2k}$, ${\sf p}=i_{2k+1}$, ${\sf t}\in {\cal A}_{r,{\sf s},{\sf p}}$ and ${\sf u}\in {\bf C}_{{\sf s},{\sf p}}$, for which there exists $z={\sf u}+{\sf t}\in U$. [**6. Theorem.**]{} *If a function $f(t)$ is an original (see Definition 1) such that $\mbox{ }_NF_u^n(p;\zeta )$ is its image under the multiparameter non-commutative transform, where the functions $f$ and $F_u^n$ are written in the forms given by 3$(3, 3.1)$, $f({\bf R}^n)\subset {\cal A}_r$ over the Cayley-Dickson algebra ${\cal A}_r$, where $1\le r\in \bf N$, $2^{r-1}\le n\le 2^r-1$.* Then at each point $t$ where $f(t)$ satisfies the Hölder condition, the following equality holds: $$(1)\quad f(t) = \{ [(2\pi N_n)^{-1} \int_{-N_n\infty }^{N_n\infty }](... ([(2\pi N_1)^{-1} \int_{-N_1\infty }^{N_1\infty }] \mbox{ }_NF_u^n(a+p;\zeta )$$ $$\exp \{ u(a+p,t;\zeta ) \} )...)dp \} =: ({\cal F}^n )^{-1} (\mbox{ }_NF_u^n(a+p;\zeta ),u,t;\zeta ) ,$$ where either $u(p,t;\zeta )=<p,t) + \zeta $ or $u(p,t;\zeta )=p_0 s_1 + M_N(p,t;\zeta )+\zeta _0$ (see §§1 and 2), the integrals are taken along the straight lines $p(\tau _j)=N_j\tau _j\in {\cal A}_r$, $\tau _j\in \bf R$ for each $j=1,...,n$; $a_1< Re (p) = a < a_{-1}$ and this integral is understood in the sense of the principal value, $t=(t_1,...,t_n)\in {\bf R}^n$, $dp=(...((d[p_1N_1])d[p_2N_2])...)d[p_nN_n]$.
[**Proof.**]{} In Integral $(1)$ the integrand $\eta (p)dp$ corresponds to the iterated integral $(...(\eta (p)d[p_1N_1])...)d[p_nN_n]$, where $p = p_1N_1+...+p_nN_n$, $p_1,...,p_n\in \bf R$. Using Decomposition 3$(3.1)$ of a function $f$ it is sufficient to consider the inverse transformation of the real valued function $f_j$, which we denote for simplicity by $f$. We put $$\mbox{ }_NF^n_{u,j}(p;\zeta ) := \int_{{\bf R}^n}f_j(t)\exp (-u(p,t;\zeta ))dt .$$ If $\eta $ is a holomorphic function of the Cayley-Dickson variable, then locally in a simply connected domain $U$, in each ball $B({\cal A}_r,z_0,R)$ with the center at $z_0$ of radius $R>0$ contained in the interior $Int (U)$ of the domain $U$, the equality $(\partial [\int_{z_0}^z\eta (a+\zeta ) d\zeta ]/\partial z).1=\eta (a+z)$ holds,\ where the integral depends only on the initial point $z_0$ and the final point $z$ of a rectifiable path in $B({\cal A}_r,z_0,R)$, $a\in {\bf R}$ (see also Theorem 2.14 [@lutsltjms]). Therefore, along the straight line $N_j{\bf R}$ the restriction of the antiderivative has the form $\int_{\theta _0}^{\theta }\eta (a+N_j\tau _j)d\tau _j$, since $(2)$ $\int_{z_0=N_j\theta _0}^{z=N_j\theta }\eta (a+\zeta )d\zeta =\int_{\theta _0}^{\theta } {\hat {\eta }}(a+N_j\tau _j).N_jd\tau _j$,\ where $\partial \eta (a+z)/\partial \theta =(\partial \eta (a+z)/\partial z).N_j$ for the function $\eta (z)$ (super)differentiable by $z\in U$, when $z=\theta N_j$, $\theta \in {\bf R}$. For the chosen branch of the line integral specified by the left algorithm this antiderivative is unique up to a constant from ${\cal A}_r$ with the given $z$-representation $\nu $ of the function $\eta $ [@ludfov; @ludoyst; @lutsltjms]. On the other hand, for analytic functions with real expansion coefficients in their power series, non-commutative integrals specified by left or right algorithms along straight lines coincide with usual Riemann integrals by the corresponding variables.
The functions $\sin (z)$, $\cos (z)$ and $e^z$ participating in the multiparameter non-commutative transform are analytic with real expansion coefficients in their series by powers of $z\in {\cal A}_r$. Using Formula 4$(1)$ we reduce the consideration to $\chi _{U_v}(t)f(t)$ instead of $f(t)$. By the symmetry properties of such domains and integrals and using a change of variables it is sufficient to consider $U_v$ with $v=(1,...,1)$. In this case $\int_{{\bf R}^n}$ for the direct multiparameter non-commutative transform 1$(7)$ and 2$(4)$ reduces to $\int_0^{\infty }...\int_0^{\infty }$. Therefore, we consider in this proof below the domain $U_{1,...,1}$ only. Using Formulas 3$(3, 3.1)$ and 2$(1,2,2.1)$ we mention that any real algebra with generators $N_0=1$, $N_k$ and $N_j$ with $1\le k\ne j$ is isomorphic to the quaternion skew field $\bf H$, since $Re (N_jN_k)=0$ and $|N_j|=1$, $|N_k|=1$ and $|N_jN_k|= 1$. Then $\exp (\alpha + M\beta ) \exp (\gamma + M \omega ) = \exp ((\alpha + \gamma ) + M (\beta +\omega ))$ for all real numbers $\alpha , \beta , \gamma , \omega $ and a purely imaginary Cayley-Dickson number $M$. The octonion algebra $\bf O$ is alternative, while the real field $\bf R$ is the center of the Cayley-Dickson algebra ${\cal A}_r$. We consider the integral $(3)$ $g_b(t) := [(2\pi N_n)^{-1} \int_{-N_nb}^{N_nb}](... ([(2\pi N_1)^{-1} \int_{-N_1b}^{N_1b}] \mbox{ }_NF_{u,j}^n(a+p;\zeta )\exp \{ u(a+p,t;\zeta ) \} )...)dp$\ for each positive value of the parameter $0<b<\infty $.
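In the reduced complex case the convergence of the truncated inversion integral $g_b(t)$ to $f(t)$ as $b\to \infty $ can be observed numerically. A sketch, which is our own illustration and not part of the proof: we take the original $f(t)=te^{-t}$, $t\ge 0$, whose image is $F(p)=(p+1)^{-2}$, and the names `g_b` and `trapezoid` are ours:

```python
import numpy as np

def trapezoid(y, dx):
    # composite trapezoid rule for uniformly spaced samples
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def g_b(t, b, a=0.5, m=400001):
    """Truncated inversion (2*pi)^{-1} * int_{-b}^{b} F(a+iy) e^{(a+iy)t} dy
    for F(p) = 1/(p+1)^2, the image of f(t) = t*exp(-t), t >= 0."""
    y = np.linspace(-b, b, m)
    p = a + 1j * y
    vals = np.exp(p * t) / (p + 1.0) ** 2
    return trapezoid(vals, y[1] - y[0]).real / (2 * np.pi)
```

The integrand decays like $|y|^{-2}$, so the truncation error vanishes as $b\to \infty $ and $g_b(t)$ approaches $f(t)=te^{-t}$, e.g. $g_b(1)\to e^{-1}$.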
With the help of the generators of the Cayley-Dickson algebra ${\cal A}_r$ and the Fubini Theorem for the real-valued components of the function, the integral can be written in the form: $$(4)\quad g_b(t) = [(2\pi N_n)^{-1} \int_0^{\infty }d\tau _n \int_{-N_nb}^{N_nb}](...([(2\pi N_1)^{-1} \int_0^{\infty }d\tau _1 \int_{-N_1b}^{N_1b}]$$ $$f(\tau )\exp \{ - u_N(a+p,t;\zeta ) \} \exp \{ u_N(a+p,\tau ;\zeta ) \} )...)dp ,$$ since the integral $\int_{U_{1,...,1}}f(\tau )\exp \{ - u_N(a+p,\tau ;\zeta ) \} d\tau $ for any marked $0<\delta < (a_{-1} - a_1)/3$ is uniformly convergent with respect to $p$ in the domain $a_1+\delta \le Re (p) \le a_{-1} - \delta $ in ${\cal A}_r$ (see also Proposition 2.18 [@lutsltjms]). If we take marked $t_k$ for each $k\ne j$ and $S=N_j$ for some $j\ge 1$ in Lemma 2.17 [@lutsltjms], considering the variable $t_j$, then with a suitable (${\bf R}$-linear) automorphism ${\bf v}$ of the Cayley-Dickson algebra ${\cal A}_r$ an expression for ${\bf v} (M(p,t;\zeta ))$ simplifies as in the complex case with ${\bf C}_{K} := {\bf R}\oplus {\bf R}K$ for a purely imaginary Cayley-Dickson number $K$, $|K|=1$, instead of ${\bf C} := {\bf R} \oplus {\bf R}i_1$, where ${\bf v}(x)=x$ for each real number $x\in {\bf R}$. But each equality $\alpha = \beta $ in ${\cal A}_r$ is equivalent to ${\bf v}(\alpha ) = {\bf v}(\beta )$. Then $(5)$ $Re [(N_jN_q) (N_jN_l)^*]= Re (N_qN_l^*)=\delta _{q,l}$ for each $q, l$. If $S^j = \sum_{0\le l\le n; l\ne j} \alpha _lN_l$, $N^j= \sum_{0\le l\le n; l\ne j} \beta _lN_l$ with $j\ge 1$ and real numbers $\alpha _l, \beta _l\in {\bf R}$ for each $l$, then $(6)$ $Re [(N_jS^j)(N_jN^j)^*]=Re [S^j(N^j)^*] =\sum_l\alpha _l\beta _l$.
The latter identity can be applied to either $S^k=M_{k+1}(p_{k+1}N_{k+1}+...+p_nN_n, (t_{k+1},...,t_n); \zeta _{k+1}N_{k+1}+...+\zeta _nN_n)$ and $N^k = M_{k+1}(p_{k+1}N_{k+1}+...+p_nN_n, (\tau _{k+1},...,\tau _n); \zeta _{k+1}N_{k+1}+...+\zeta _nN_n)$, or $S^k= (p_{k+1}t_{k+1}+\zeta _{k+1})N_{k+1}+...+ (p_nt_n+\zeta _n)N_n$ and $N^k = (p_{k+1}\tau _{k+1}+\zeta _{k+1})N_{k+1}+...+ (p_n\tau _n+\zeta _n)N_n$, where $(7)$ $M_{k+1}(p_{k+1}N_{k+1}+...+p_nN_n, (t_{k+1},...,t_n); \zeta _{k+1}N_{k+1}+...+\zeta _nN_n) = (p_{k+1} s_{1,k+1} + \zeta _{k+1}) [N_{k+1} \cos (p_{k+2} s_{2,k+1} + \zeta _{k+2})+...$ $+ N_n \sin(p_{k+2} s_{2,k+1} + \zeta _{k+2})...\sin (p_n s_{n-k,k+1} + \zeta _n)]$,\ $(8)$ $s_{j,k+1} = s_{j,k+1}(n;t) = t_{k+j}+...+t_n=s_{k+j}(n;t)$ for each $j=1,...,n-1$; $s_{n-k,k+1}=s_{n-k,k+1}(n;t)=t_n$. We take the limit of $g_b(t)$ as $b$ tends to infinity. Evidently, $s_k(n;\tau ) -s_j(n;\tau ) = s_k(j-1;\tau )=\tau _k+...+\tau _{j-1}$ for each $1\le k<j\le n$. By our convention $s_k(n;\tau ) =s_1(n;\tau )$ for $k<1$, while $s_k(n;\tau )=0$ for $k>n$. Put $(9)$ $u_{n,j}(p_0+p_jN_j+...+p_nN_n, (\tau _j,...,\tau _n); \zeta _0 + \zeta _jN_j+...+\zeta _nN_n) = \zeta _0 + p_0s_{1,j} + M_j(p_jN_j+...+p_nN_n, (\tau _j,...,\tau _n); \zeta _0 + \zeta _jN_j+...+\zeta _nN_n)$\ for $u_N$ given by 2$(1,2,2.1)$, where $M_j$ is prescribed by $(7)$, $s_{k,j}=s_{k,j}(n;\tau )$; $(10)$ $u_{n,j}(p_0+p_jN_j+...+p_nN_n, (\tau _j,...,\tau _n); \zeta _0 + \zeta _jN_j+...+\zeta _nN_n) = \zeta _0 + p_0s_{1,j} + \sum_{k=j}^n (p_k\tau _k+\zeta _k)N_k$\ for $u=u_N$ given by 1$(8,8.1)$. For $j>1$ the parameter $\zeta _0$ for $u=u_N$ given by 1$(8,8.1)$ or 2$(1,2,2.1)$ can be taken equal to zero.
When the variables $t_1,...,t_{j-1}, t_{j+1},...,t_n$ and $p_1,...,p_{j-1}, p_{j+1},...,p_n$ are marked, we take the parameter $\zeta ^j := \zeta ^j (p_jN_j+...+p_nN_n, (\tau _j,...,\tau _n); \zeta _0+\zeta _jN_j+...+\zeta _nN_n ) := (\zeta _0+\zeta _jN_j+...+\zeta _nN_n ) + (a+p_0) s_{j+1} + p_{j+1}s_{j+1} N_{j+1} +...+p_ns_nN_n$ for $u(p,\tau ;\zeta )$ given by Formulas 2$(1,2,2.1)$ or $\zeta ^j := \zeta ^j (p_jN_j+...+p_nN_n, (\tau _j,...,\tau _n); \zeta _0+\zeta _jN_j+...+\zeta _nN_n ) := (\zeta _0+\zeta _jN_j+...+\zeta _nN_n ) + (a+p_0) s_{j+1} + p_{j+1}\tau _{j+1} N_{j+1}+...+p_n\tau _nN_n$ for $u(p,\tau ;\zeta )$ described in 1$(8,8.1)$. Then the integral operator\ $\lim_{b\to \infty } [(2\pi N_j)^{-1} \int_0^{\infty }d\tau _j \int_{-N_jb}^{N_jb}]...(dp_jN_j)$ (see also Formula $(4)$ above), applied to the function $f(t_1,...,t_{j-1},\tau _j,...,\tau _n)\exp \{ - u_{N,j}(a+p_0+p_jN_j+...+p_nN_n, (t_j,...,t_n);\zeta _0+\zeta _jN_j+...+\zeta _nN_n ) \} \exp \{ u_{N,j}(a+p_0+p_jN_j+...+p_nN_n,(\tau _j,...,\tau _n); \zeta _0+\zeta _jN_j+...+\zeta _nN_n ) \}$ with the parameter $\zeta ^j$ instead of $\zeta $ and treated by Theorems 2.19 and 3.15 [@lutsltjms], gives the inversion formula corresponding to the real variable $t_j$ for $f(t)$ and to the Cayley-Dickson variable $p_0N_0+p_jN_j$ restricted on the complex plane ${\bf C}_{N_j} = {\bf R}\oplus {\bf R}N_j$, since $d(\tau _j+c) = d\tau _j$ for each (real) constant $c$.
After the integrations for $j=1,...,k$, with the help of Formulas $(6-10)$ and 3$(1,2)$ we get the following: $$(11)\quad \lim_{b\to \infty} g_b(t) = Re [(2\pi N_n)^{-1} \int_0^{\infty }d\tau _n \int_{-N_n\infty }^{N_n\infty }](...([(2\pi N_{k+1})^{-1} \int_0^{\infty }d\tau _{k+1} \int_{-N_{k+1}\infty }^{N_{k+1}\infty }]$$ $$f(t_1,...,t_k,\tau _{k+1},...,\tau _n)\exp \{ - u_{N,k+1} ((a+p_0+p_{k+1}N_{k+1}+...+p_nN_n), (t_{k+1},...,t_n);$$ $$(\zeta _0+\zeta _{k+1}N_{k+1}+...+\zeta _nN_n )) \} \exp \{ u_{N,k+1}((a+p_0+p_{k+1}N_{k+1}+...+p_nN_n),$$ $$(\tau _{k+1},...,\tau _n); (\zeta _0+\zeta _{k+1}N_{k+1}+...+\zeta _nN_n )) \} )...)dp .$$ Moreover, $Re (f_q)=f_q$ for each $q$ and in $(11)$ the function $f=f_q$ stands for some marked $q$ in accordance with Decompositions 3$(3,3.1)$ and the beginning of this proof. Note that the algebra $alg_{\bf R}(N_j,N_k,N_l)$ over the real field with three generators $N_j$, $N_k$ and $N_l$ is alternative. The product $N_kN_l$ of two generators is also the corresponding generator $(-1)^{\xi (k,l)} N_m$ with a definite number $m=m(k,l)$ and a sign multiplier $(-1)^{\xi (k,l)}$, where $\xi (k,l)\in \{ 0, 1 \} $. On the other hand, $N_{k_1}[{\tilde N}_j(N_j(N_{k_2}N_l))] = N_{k_1}(N_{k_2}N_l)$. We use Decompositions $(7-10)$ and take $k_2=l$ due to Formula $(11)$, where $Re$ appears on the right side of the equality, since $Re (N_kN_l)=0$ and $Re [{\tilde N}_j (N_j(N_kN_l))] =0$ for each $k\ne l$. Thus the repeated application of this procedure for $j=1, 2, ..., n$ leads to Formula $(1)$ of this theorem. [**6.1. Corollary.**]{} [*If the conditions of Theorem 6 are satisfied, then $$(1)\quad f(t) = (2\pi )^{-n} \int_{{\bf R}^n} F_u^n(a+p;\zeta ) \exp \{ u(a+p,t;\zeta ) \} dp_1...dp_n$$ $$= ({\cal F}^n )^{-1} (\mbox{ }_NF_u^n(a+p;\zeta ),u,t;\zeta ).$$*]{} [**Proof.**]{} Each algebra $alg_{\bf R}(N_j,N_k,N_l)$ is alternative.
Therefore, in accordance with §6 and Formulas 1$(8,8.1)$ and 2$(1-4)$, for each non-commutative integral given by the left algorithm we get $$(2)\quad N_j^{-1}\int_{-N_jb}^{N_jb}[f(\tau )\exp \{ - u_N(a+p,t ;\zeta ) \} ] \exp \{ u_N(a+p,\tau ;\zeta ) \} d(p_jN_j)$$ $$=\sum_{l=0}^{2^r-1} {\tilde N}_j [N_j (\int_{-N_jb}^{N_jb}[N_l f_l(\tau )\exp \{ - u_N(a+p,t ;\zeta ) \} ] \exp \{ u_N(a+p,\tau ;\zeta ) \} dp_j)]$$ $$=\int_{-b}^{b} [ f(\tau ) \exp \{ - u_N(a+p,t ;\zeta ) \} ] \exp \{ u_N(a+p,\tau ;\zeta ) \} dp_j$$ for each $j=1,...,n$, since the real field is the center of the Cayley-Dickson algebra ${\cal A}_r$, while the functions $\sin $ and $\cos $ are analytic with real expansion coefficients. Thus $(3)$ $g_b(t) = (2\pi )^{-n} [\int_0^{\infty }d\tau _n \int_{-b}^{b}](...([ \int_0^{\infty }d\tau _1 \int_{-b}^{b}] f(\tau )\exp \{ - u_N(a+p,t ;\zeta ) \} $ $\exp \{ u_N(a+p,\tau ;\zeta ) \} )...)dp_1...dp_n$,\ hence taking the limit as $b$ tends to infinity implies that the non-commutative iterated (multiple) integral in Formula 6$(1)$ reduces to the principal value 6.1$(1)$ of the usual integral by the real variables $(\tau _1,...,\tau _n)$ and $(p_1,...,p_n)$. [**7. Theorem.**]{} [*An original $f(t)$ with $f({\bf R}^n)\subset {\cal A}_r$ over the Cayley-Dickson algebra ${\cal A}_r$ with $1\le r \in \bf N$ is completely defined by its image $\mbox{ }_NF_u^n(p;\zeta )$ up to values at points of discontinuity, where the function $u(p,t; \zeta )$ is given by 1$(8,8.1)$ or 2$(1,2, 2.1)$.*]{} [**Proof.**]{} Due to Corollary 6.1 the value $f(t)$ at each point $t$ of continuity of $f(t)$ is expressed through $\mbox{ }_NF_u^n(p;\zeta )$ by Formula $6.1(1)$.
Moreover, the values of the original at points of discontinuity do not influence the image $\mbox{ }_NF_u^n(p;\zeta )$, since on each bounded interval in $\bf R$ in each variable $t_j$ the number of points of discontinuity is finite, and by our supposition above the original function $f(t)$ is $\lambda _n$-almost everywhere continuous on ${\bf R}^n$. [**8. Theorem.**]{} *Suppose that a function $\mbox{ }_NF_u^n(p;\zeta )$ is analytic by the variable $p\in {\cal A}_r$ in a domain $W := \{ p\in {\cal A}_r: a_1< Re (p)< a_{-1} \} $, where $2\le r\in \bf N$, $2^{r-1}\le n\le 2^r-1$, $f({\bf R}^n)\subset {\cal A}_r$, either $u(p,t;\zeta )=<p,t) + \zeta $ or $u(p,t;\zeta ) := p_0 s_1 + M(p,t;\zeta )+\zeta _0$ (see §§1 and 2). Let $\mbox{ }_NF^n_u(p;\zeta )$ be written in the form $\mbox{ }_NF^n_u(p;\zeta )=\mbox{ }_NF^{n,0}_u(p;\zeta ) + \mbox{ }_NF^{n,1}_u(p;\zeta )$, where $\mbox{ }_NF^{n,0}_u(p;\zeta )$ is holomorphic by $p$ in the domain $a_1<Re (p)$. Let also $\mbox{ }_NF^{n,1}_u(p;\zeta )$ be holomorphic by $p$ in the domain $Re (p)<a_{-1}$. Moreover, suppose that for each $a>a_1$ and $b<a_{-1}$ there exist constants $C_a>0$, $C_b>0$, $\epsilon _a >0$ and $\epsilon _b>0$ such that* $(1)$ $|\mbox{ }_NF^{n,0}_u(p;\zeta )|\le C_a\exp (-\epsilon _a |p|)$ for each $p\in {\cal A}_r$ with $Re (p)\ge a$, $(2)$ $|\mbox{ }_NF^{n,1}_u(p;\zeta )|\le C_b\exp (-\epsilon _b |p|)$ for each $p\in {\cal A}_r$ with $Re (p)\le b$, and that the integral $(3)$ $\int_{-N_n\infty }^{N_n\infty }...\int_{-N_1\infty }^{N_1\infty }\mbox{ }_NF_u^{n,k}(w+p;\zeta )dp$ converges absolutely for $k=0$ and $k=1$ and each $a_1<w<a_{-1}$.
Then $\mbox{ }_NF_u^n(w+p;\zeta )$ is the image of the function $$(4)\quad f(t)=[(2\pi )^{-1}{\tilde N_n}\int_{-N_n\infty }^{N_n\infty }] (...([(2\pi )^{-1}{\tilde N_1}\int_{-N_1\infty }^{N_1\infty }] \mbox{ }_NF_u^n(w+p;\zeta )\exp \{ u(w+p,t;\zeta ) \} )...)dp$$ $$= ({\cal F}^n )^{-1} (\mbox{ }_NF_u^n(w+p;\zeta ),u,t;\zeta ).$$ [**Proof.**]{} For the function $\mbox{ }_NF^{n,1}_u(p;\zeta )$ we consider the substitution of the variable $p=-g$, $-a_{-1}<Re (g)$. Thus the proof reduces to the consideration of $\mbox{ }_NF^{n,0}_u(w+p;\zeta )$. An integration by $dp$ in the iterated integral $(4)$ is treated as in §6. Take marked values of the variables $p_1,...,p_{j-1},p_{j+1},...,p_n$ and $t_1,...,t_{j-1}, t_{j+1},...,t_n$, where $s_k=s_k(n;\tau )$ for each $k=1,...,n$ (see §6 also). For a given parameter $\zeta ^j := (\zeta _0+\zeta _jN_j+...+\zeta _nN_n) + (w+p_0) s_{j+1} + p_{j+1}s_{j+1}N_{j+1}+...+p_ns_nN_n $ for $u(p,\tau ;\zeta )$ prescribed by Formulas 2$(1,2,2.1)$, or $\zeta ^j := (\zeta _0+\zeta _jN_j +...+\zeta _nN_n) + (w+p_0) s_{j+1} + p_{j+1}\tau _{j+1}N_{j+1}+...+p_n\tau _n N_n$ for $u(p,t;\zeta )$ given by 1$(8,8.1)$, instead of $\zeta $ and any non-zero Cayley-Dickson number $\beta \in {\cal A}_r$ we have $\lim_{\tau _j\to \infty } [\beta \tau _j +\zeta ^j]/[\beta \tau _j + \zeta ]=1$. For any locally $z$-analytic function $g(z)$ in a domain $U$ satisfying the conditions of §5, the homotopy theorem for a non-commutative line integral over ${\cal A}_r$, $2\le r$, is satisfied (see [@ludoyst; @ludfov]). In particular, if $U$ contains the straight line $w+{\bf R} N_j$ and the path $ \gamma _j (t_j) := \zeta ^j + t_jN_j$, then $\int_{-N_j\infty }^{N_j\infty } g(z)dz = \int_{\gamma _j} g(w+z)dz$, when ${\hat g}(z)\to 0$ as $|z|$ tends to infinity, since $|\zeta ^j|$ is a finite number (see Lemma 2.23 in [@lutsltjms]).
We apply this to the integrand in Formula $(4)$, since $\mbox{ }_NF_u^n(w+p;\zeta )$ is locally analytic by $p$ in accordance with Theorem 4 and Conditions $(1,2)$ are satisfied. Then the integral operator $[(2\pi N_j)^{-1} \int_{-N_j\infty }^{N_j\infty }]$ on the $j$-th step, with the help of Theorems 2.22 and 3.16 [@lutsltjms], gives the inversion formula corresponding to the real parameter $t_j$ for $f(t)$ and to the Cayley-Dickson variable $p_0N_0+p_jN_j$ which is restricted on the complex plane ${\bf C}_{N_j} = {\bf R}\oplus {\bf R}N_j$ (see also Formulas 6$(4,11)$ above). Therefore, an application of this procedure for $j=1, 2, ..., n$ as in §6 implies Formula $(4)$ of this theorem. Thus there exist originals $f^0$ and $f^1$ for the functions $\mbox{ }_NF^{n,0}_u(p;\zeta )$ and $\mbox{ }_NF^{n,1}_u(p;\zeta )$ with a choice of $w\in \bf R$ in the common domain $a_1<Re (p)<a_{-1}$. Then $f=f^0+f^1$ is the original for $\mbox{ }_NF^n_u(p;\zeta )$ due to the distributivity of the multiplication in the Cayley-Dickson algebra ${\cal A}_r$ leading to the additivity of the considered integral operator in Formula $(4)$. [**8.1. Corollary.**]{} [*If the conditions of Theorem 8 are satisfied, then $$(1)\quad f(t)=(2\pi )^{-n} \int_{{\bf R}^n} \mbox{ }_NF_u^n(w+p;\zeta )\exp \{ u(w+p,t;\zeta ) \} dp_1...dp_n$$ $$= ({\cal F}^n )^{-1} (\mbox{ }_NF_u^n(w+p;\zeta ),u,t;\zeta ).$$*]{} [**Proof.**]{} In accordance with §§6 and 6.1, each non-commutative integral given by the left algorithm reduces to the principal value of the usual integral by the corresponding real variable: $$(2)\quad (2\pi )^{-1}{\tilde N_j}\int_{-N_j\infty }^{N_j\infty } \mbox{ }_NF_u^n(w+p;\zeta )\exp \{ u(w+p,t;\zeta ) \} d(p_jN_j)$$ $$= (2\pi )^{-1}\int_{-\infty }^{\infty } \mbox{ }_NF_u^n(w+p;\zeta )\exp \{ u(w+p,t;\zeta ) \} dp_j$$ for each $j=1,...,n$.
Thus Formula 8$(4)$ with the non-commutative iterated (multiple) integral reduces to Formula 8.1$(1)$ with the principal value of the usual integral by the real variables $(p_1,...,p_n)$. [**9. Note.**]{} In Theorem 8 Conditions $(1,2)$ can be replaced by $(1)$ $\lim_{n\to \infty }\sup_{p\in C_{R(n)}} \| {\hat F}(p) \| =0,$\ where $C_{R(n)} := \{ z\in {\cal A}_r: |z| =R(n), a_1<Re (z)<a_{-1} \} $ is a sequence of intersections of spheres with a domain $W$, where $R(n)<R(n+1)$ for each $n$, $\lim_{n\to \infty } R(n)=\infty $. Indeed, this condition ensures that the ${\cal A}_r$ analog of the Jordan Lemma holds for each $r\ge 2$ (see also Lemma 2.23 and Remark 2.24 [@lutsltjms]). Subsequent properties of quaternion, octonion and general ${\cal A}_r$ multiparameter non-commutative analogs of the Laplace transform are considered below. We denote by $(2)$ $W_f = \{ p\in {\cal A}_r: ~ a_1(f)< Re (p) <a_{-1} (f) \} $ the domain of $\mbox{ }_NF_u^n(p;\zeta )$ by the $p$ variable, where $a_1=a_1(f)$ and $a_{-1} = a_{-1}(f)$ are as in §1. For an original $(3)$ $f(t)\chi _{U_{1,...,1}}(t)$ we put $W_f = \{ p\in {\cal A}_r: ~ a_1(f) <Re (p) \} ,$\ that is, $a_{-1} = \infty $. It may happen that either the left hyperplane $Re (p)=a_1$ or the right hyperplane $Re (p)=a_{-1}$ (or both) is included in $W_f$. It may also happen that the domain reduces to the hyperplane $W_f = \{ p: ~ Re (p)=a_1=a_{-1} \} $. [**10.
Proposition.**]{} [*If images $\mbox{ }_NF_u^n(p;\zeta )$ and $\mbox{ }_NG_u^n(p;\zeta )$\ of functions-originals $f(t)$ and $g(t)$ exist in domains $W_f$ and $W_g$ with values in ${\cal A}_r$, where the function $u(p,t; \zeta )$ is given by 1$(8,8.1)$ or 2$(1,2, 2.1)$, then for each $\alpha , \beta \in {\cal A}_r$ in the case ${\cal A}_2=\bf H$, as well as for $f$ and $g$ with values in $\bf R$ and each $\alpha , \beta \in {\cal A}_r$, or $f$ and $g$ with values in ${\cal A}_r$ and each $\alpha , \beta \in \bf R$, in the case of ${\cal A}_r$ with $r\ge 3$, the function $\alpha \mbox{ }_NF_u(p;\zeta ) + \beta \mbox{ }_NG_u(p;\zeta ) $ is the image of the function $\alpha f(t) +\beta g(t)$ in the domain $W_f\cap W_g$.*]{} [**Proof.**]{} Since the transforms $\mbox{ }_NF_u^n(p;\zeta )$ and $\mbox{ }_NG_u^n(p;\zeta )$ exist, the integral $$\int_{{\bf R}^n}(\alpha f(t)+\beta g(t)) \exp (-u(p,t;\zeta ))dt= \int_{{\bf R}^n}\alpha f(t) \exp (-u(p,t;\zeta ))dt$$ $$+ \int_{{\bf R}^n}\beta g(t) \exp (-u(p,t;\zeta ))dt$$ converges in the domain $W_f\cap W_g = \{ p\in {\cal A}_r: ~ \max (a_1(f), a_1(g)) < Re (p) <\min (a_{-1}(f),a_{-1}(g)) \} $.\ We have $t\in {\bf R}^n$, $2^{r-1}\le n\le 2^r-1$, while $\bf R$ is the center of the Cayley-Dickson algebra ${\cal A}_r$. The quaternion skew field $\bf H$ is associative. Thus, under the imposed conditions the constants $\alpha , \beta $ can be carried out outside the integrals. [**11. Theorem.**]{} [*Let $\alpha =const >0$ and let $F^n(p;\zeta )$ be an image of an original function $f(t)$ with either $u=<p,t) + \zeta $ or $u$ given by Formulas 2$(1,2)$ over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r<\infty $, $2^{r-1}\le n \le 2^r-1$.
Then an image $F^n(p/ \alpha ;\zeta )/ \alpha ^n$ of the function $f(\alpha t)$ exists.*]{} [**Proof.**]{} Since $p_js_j+\zeta _j=p_j({s'}_j/\alpha )+ \zeta _j = (p_j/\alpha ) {s'}_j +\zeta _j$ for each $j=1,...,n$, where ${s'}_j=\alpha s_j$, $s_j=s_j(n;t)$, ${s'}_j = s_j(n;\tau )$, $\tau _j= \alpha t_j$ for each $j=1,...,n$, the change of these variables implies: $\int_{{\bf R}^n} f(\alpha t) e^{-u(p,t;\zeta )}dt= \int_{{\bf R}^n}f(\tau )e^{-u(p,\tau /\alpha ;\zeta )} d\tau /\alpha ^n= F^n(p/\alpha ;\zeta )/\alpha ^n$\ due to the fact that the real field $\bf R$ is the center $Z({\cal A}_r)$ of the Cayley-Dickson algebra ${\cal A}_r$. [**12. Theorem.**]{} [*Let $f(t)$ be a function-original on the domain $U_{1,...,1}$ such that $\partial f(t)/\partial t_k$ for $k=j-1$ and $k=j$ also satisfies Conditions 1$(1-4)$. Suppose that $u(p,t;\zeta )$ is given by 2$(1,2, 2.1)$ or 1$(8,8.1)$ over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r<\infty $, $2^{r-1}\le n \le 2^r-1$. Then $$(1)\quad {\cal F}^n((\partial f(t)/\partial t_j) \chi _{U_{1,...,1}}(t),u; p;\zeta ) = - {\cal F}^{n-1; t^j} (f(t)\chi _{U_{1,...,1}}(t^j), u(p,t^j;\zeta );p;\zeta )$$ $$+ [p_0 + \sum_{k=1}^j p_k {\sf S}_{e_k}] {\cal F}^n (f(t)\chi _{U_{1,...,1}}(t),u; p;\zeta )$$ in the ${\cal A}_r$ spherical coordinates or $$(1.1)\quad {\cal F}^n((\partial f(t)/\partial t_j) \chi _{U_{1,...,1}}(t),u; p;\zeta ) = - {\cal F}^{n-1; t^j} (f(t)\chi _{U_{1,...,1}}(t^j), u(p,t^j;\zeta );p;\zeta )$$ $$+ [p_0 + p_j {\sf S}_{e_j}] {\cal F}^n (f(t)\chi _{U_{1,...,1}}(t),u; p;\zeta )$$ in the ${\cal A}_r$ Cartesian coordinates in a domain $W= \{ p\in {\cal A}_r: ~ \max ( a_1(f), a_1(\partial f/\partial t_j)) <Re (p) \} $, where $t^j := (t_1,...,t_j,...,t_n: ~ t_j=0)$, ${\sf S}_{e_k}= - \partial /\partial \zeta _k$ for each $k\ge 1$.*]{} [**Proof.**]{} Certainly, $(2)$ $\partial f(t(s))/\partial s_1=\partial f(t)/\partial t_1$ and $(2.1)$ $\partial f(t)/\partial t_j = \sum_{k=1}^n (\partial f(t(s))/\partial
s_k)(\partial s_k/\partial t_j)= \sum_{k=1}^j \partial f(t(s))/\partial s_k$\ for each $j=2,...,n$, since $t_j=s_j-s_{j+1}$, $t_1 = s_1 - s_2$, where $s_j=s_j(n;t)$, $s_{n+l}=0$ for each $l\ge 1$. From Formulas 30$(6,7)$ [@lutsltjms] we have the equality in the ${\cal A}_r$ spherical coordinates: $(3)$ $\partial \exp (-u(p,t;\zeta ))/\partial s_j = - p_0 \delta _{1,j} \exp (-u(p,t;\zeta )) - p_j{\sf S}_{e_j}\exp (-u(p,t;\zeta )) $, since $\exp (-u(p,t;\zeta ))= \exp \{ - p_0 s_1 - \zeta _0 \} \exp (- M(p,t;\zeta ))$, $\partial \exp (-p_0s_1-\zeta _0)/\partial s_j = -p_0 \delta _{1,j} \exp (-p_0s_1-\zeta _0)$, $\partial [\cos (p_js_j+\zeta _j)-\sin (p_js_j+\zeta _j)i_j]/\partial s_j =\partial \exp (-(p_js_j+\zeta _j)i_j)/ \partial s_j$\ $ = -p_ji_j\exp (-(p_js_j+\zeta _j)i_j)= -p_j\exp (-(p_js_j+\zeta _j-\pi /2)i_j)$\ $=-p_j[\cos (p_js_j+\zeta _j-\pi /2) - \sin (p_js_j+\zeta _j-\pi /2)i_j]=$\ $ - p_j {\sf S}_{e_j} [\cos (p_js_j+\zeta _j) - \sin (p_js_j+\zeta _j)i_j],$\ since $s_j$ and $s_k$ are real independent variables for each $k\ne j$, where $\delta _{j,k}=0$ for $j\ne k$, while $\delta _{j,j}=1$, $(3.1)$ ${\sf S}_{e_j} [\cos (p_js_j+\zeta _j) - \sin (p_js_j+\zeta _j)i_j]=$ $ - \partial [\cos (p_js_j+\zeta _j) - \sin (p_js_j+\zeta _j)i_j]/\partial \zeta _j$ $ = [\cos (p_js_j+\zeta _j-\pi /2) - \sin (p_js_j+\zeta _j-\pi /2)i_j]$. In the ${\cal A}_r$ Cartesian coordinates we take $t_j$ instead of $s_j$ in $(3.1)$. 
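As a quick sanity check, in the complex specialization (a single imaginary unit $i_j\mapsto i$, all quantities scalar) the factor identity behind $(3)$ and $(3.1)$ says that differentiating $e^{-(ps+\zeta )i}$ in $s$ multiplies it by $-pi$, which is the same as shifting the phase $ps+\zeta $ by $\pi /2$. A minimal numeric sketch, outside the paper's ${\cal A}_r$ framework; the values of $p$, $s$, $\zeta $ are arbitrary:

```python
import cmath
import math

# Arbitrary real test values standing in for p_j, s_j, zeta_j (hypothetical).
p, s, zeta = 1.7, 0.4, 0.3
theta = p * s + zeta

# Left side: d/ds exp(-(p s + zeta) i) = -p i exp(-(p s + zeta) i).
lhs = -p * 1j * cmath.exp(-1j * theta)

# Right side: -p exp(-(p s + zeta - pi/2) i), a quarter-period phase shift.
rhs = -p * cmath.exp(-1j * (theta - math.pi / 2))

assert abs(lhs - rhs) < 1e-12
```

The same quarter-period shift is what the operator ${\sf S}_{e_j}$ encodes in the noncommutative setting.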
If $\phi (z)$ is a differentiable function by $z_j$ for each $j$, $\phi : {\cal A}_r\to {\cal A}_r$, $z_j=p_jt_j+\zeta _j$, then $(3.2)$ $\partial \exp ( - \phi (z))/\partial (qt_j) = - q [d\exp (\xi )/d\xi ]|_{\xi = -\phi }.(\partial \phi (z)/\partial z_j)p_j$ $= - q p_j [ \sum_{n=1}^{\infty } \sum_{k=1}^{n-1} ((\xi (z))^k (\partial \phi (z)/\partial z_j)) (\xi (z))^{n-1-k}/n!]|_{\xi = -\phi }$ $ = - q p_j ( - \partial \exp (- \phi (z))/\partial \zeta _j) = - p_j {\sf S}_{qe_j}\exp (- \phi (z)),$ where either $q=1$ or $q=-1$, since $\partial z_j/\partial \zeta _j=1$.\ That is $(3.3)$ ${\sf S}_{e_j}^x \exp ( - i_k (\phi _k+\zeta _k)) =0$ for each $j\ne k\ge 1$ and any positive number $x>0$, $(3.4)$ ${\sf S}_{e_j}^x \exp ( - i_j (\phi _j+\zeta _j)) = \exp ( - i_j (\phi _j+\zeta _j - x \pi /2))$ and ${\sf S}_{-e_j}^x \exp ( - i_j (\phi _j+\zeta _j)) = \exp ( - i_j (\phi _j+\zeta _j + x \pi /2))$\ for each non-negative real number $x\ge 0$, $\phi _k$ and $\zeta _k\in {\bf R}$, where ${\sf S}_{e_j} = {\sf S}_{e_j}(\zeta _j)$, the zero power ${\sf S}_{e_j}^0=I$ is the unit operator; $(3.5)$ ${\sf S}_{qe_j} e^{-u(p,t;\zeta )} = e^{-p_0s_1-\zeta _0} $\ $ T_j^q [i_0 \delta _{j,1} \cos (p_1s_1+\zeta _1) + (1-\delta _{j,1}) i_{j-1}\sin (p_1s_1+\zeta _1)...\cos (p_js_j+\zeta _j) + \{ \sum_{k=j}^{2^r-2} i_k \sin (p_1s_1+\zeta _1)... \cos (p_{k+1}s_{k+1}+\zeta _{k+1}) \} + i_{2^r-1}\sin (p_1s_1+\zeta _1)...\sin (p_{2^r-1}s_{2^r-1} + \zeta _{2^r-1})]$\ in the ${\cal A}_r$ spherical coordinates, where either $q=1$ or $q=-1$ and $(3.6)$ $T_j^x\xi (\zeta _j) := \xi (\zeta _j-x\pi /2)$\ for any function $\xi (\zeta _j)$ and any real number $x\in {\bf R}$, where $j\ge 1$. 
Then in accordance with Formula $(3.2)$ we have: $(3.7)$ ${\sf S}_{qe_j} \exp ( - u(p,t;\zeta )) = [ \sum_{n=1}^{\infty } \sum_{k=1}^{n-1} ((\xi (z))^k qi_j) (\xi (z))^{n-1-k}/n!]|_{\xi = - u(p,t;\zeta )}$\ for $u(p,t;\zeta )$ given by Formulas 1$(8,8.1)$ in the ${\cal A}_r$ Cartesian coordinates, where either $q=1$ or $q=-1$. The integration by parts theorem (Theorem 2 in §II.2.6 on p. 228 [@kamyn]) states: if $a<b$ and two functions $f$ and $g$ are Riemann integrable on the segment $[a,b]$, $F(x)= A+ \int_a^xf(t)dt $ and $G(x)=B+\int_a^xg(t)dt$, where $A$ and $B$ are two real constants, then $\int_a^b F(x)g(x)dx = F(x)G(x)|^b_a - \int_a^b f(x)G(x)dx$. Therefore, integration by parts gives $$(4)\quad \int_0^{\infty }(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt_j = f(t) \exp (-u(p,t;\zeta ))|_{t_j=0}^{t_j=\infty }$$ $$- \int_0^{\infty }[f(t)(\partial \exp (-u(p,t;\zeta ))/\partial t_j)]dt_j .$$ Using the change of variables $t\mapsto s$ with the unit Jacobian $\partial (t_1,...,t_n)/\partial (s_1,...,s_n)$ and applying Fubini's theorem componentwise to $f_ji_j$ we infer: $$(5)\quad \int_{U_{1,...,1}}(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt= \int_{s_1\ge s_2\ge ...\ge s_n\ge 0} (\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))ds$$ $$= \int_0^{\infty }...\int_0^{\infty } [\int_{s_{j+1}}^{\infty }(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))ds_j]dt^j$$ $$= - [\int_0^{\infty }...\int_0^{\infty } f(t^j) \exp (-u(p,t^j;\zeta )) dt^j]$$ $$+ [p_0 + \sum_{k=1}^j p_k {\sf S}_{e_k}] \int_0^{\infty }...\int_0^{\infty } f(t) \exp (-u(p,t;\zeta ))dt$$ in the ${\cal A}_r$ spherical coordinates, or $$(5.1)\quad \int_{U_{1,...,1}}(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt$$ $$= - [\int_0^{\infty }...\int_0^{\infty } f(t^j) \exp (-u(p,t^j;\zeta )) dt^j]$$ $$+ [p_0 + p_j {\sf S}_{e_j}] \int_0^{\infty }...\int_0^{\infty } f(t) \exp (-u(p,t;\zeta ))dt$$ in the ${\cal A}_r$ Cartesian coordinates, since $\partial \exp ( - (p_0s_1+\zeta
_0))/\partial t_j = - p_0\exp ( - (p_0s_1+\zeta _0))$ for each $1\le j\le n$. This gives Formula $(1)$, where $$(6)\quad {\cal F}^{n-1; t^j}(f(t^j)\chi _{U_{1,...,1}}, u(p,t^j;\zeta );p;\zeta ) = \int_0^{\infty }...\int_0^{\infty } f(t^j) \exp (-u(p,t^j;\zeta )) dt^j$$ $$= \int_0^{\infty }dt_1...\int_0^{\infty }dt_{j-1} \int_0^{\infty }dt_{j+1}...\int_0^{\infty } (dt_n) f(t^j) \exp (-u(p,t^j;\zeta ))$$\ is the non-commutative transform by $t^j=(t_1,...,t_{j-1},0,t_{j+1},...,t_n)$. [**12.1. Remark.**]{} Shift operators of the form $\xi (x+\phi ) = \exp (\phi d/dx)\xi (x)$ in real variables are also frequently used in the class of infinitely differentiable functions with a converging Taylor series expansion in the corresponding domain. It is also possible to use the following convention. One can put $\cos (\phi _1+\zeta _1) = \cos (\phi _1+\zeta _1) \cos (\psi _2)...\cos (\psi _{2^r-1})$,...,$\sin (\phi _1+\zeta _1)... \cos (\phi _k+\zeta _k)=\sin (\phi _1+\zeta _1)... \cos (\phi _k+\zeta _k)\cos (\psi _{k+1})...\cos (\psi _{2^r-1})$, where $\psi _j=0$ for each $j\ge 1$, $2\le k <2^r-1$, so that $T_j^l\cos (\phi _1+\zeta _1) =0$ for each $j>1$ and $l\ge 1$, $T_j^l\sin (\phi _1+\zeta _1)... \cos (\phi _k+\zeta _k)=0$ for each $j>k$ and $l\ge 1$, where $T_j^l\xi = T_j^{l-1}(T_j\xi )$ is the iterated composition for $l>1$, $l\in {\bf N}$. Then with such a convention $T_j^le^{-u(p,t;\zeta )}$ gives the same result as ${\sf S}_{e_j}^le^{-u(p,t;\zeta )}$, so one can use the symbolic notation $T_j^le^{-u(p,t;\zeta )}=e^{-u(p,t;\zeta -i_j\pi l/2)}$. But to avoid misunderstanding we shall use ${\sf S}_{e_j}$ and $T_j$ in the sense of Formulas 12$(3.1-3.7)$.
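In the classical one-dimensional commutative case ($n=1$, $u=pt+\zeta $ with $\zeta =0$, the shift operators dropping out) Theorem 12 reduces to the familiar Laplace rule ${\cal L}\{f'\}(p)=p{\cal L}\{f\}(p)-f(0)$. A crude numerical sketch under these simplifying assumptions, with a hypothetical original $f(t)=e^{-2t}$:

```python
import math

def laplace(f, p, T=40.0, n=200_000):
    """Crude trapezoid approximation of the integral of f(t) e^{-p t} over [0, T]."""
    h = T / n
    s = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for k in range(1, n):
        t = k * h
        s += f(t) * math.exp(-p * t)
    return s * h

# Hypothetical test data: f(t) = exp(-2 t), so F(p) = 1/(p + 2) and f(0) = 1.
p = 1.0
f = lambda t: math.exp(-2.0 * t)
df = lambda t: -2.0 * math.exp(-2.0 * t)   # f'(t), computed by hand

F = laplace(f, p)          # approximately 1/(p + 2) = 1/3 for p = 1
lhs = laplace(df, p)       # image of the derivative
rhs = p * F - f(0.0)       # p F(p) - f(0)

assert abs(lhs - rhs) < 1e-6
```

The boundary term $f(0)$ here is the one-dimensional shadow of the lower-dimensional transform ${\cal F}^{n-1;t^j}$ appearing in Formula 12$(1)$.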
It is worth mentioning that instead of 12$(3.7)$ the formulas $(1)$ $\exp (p_1i_1+...+p_ni_n) = \cos (\phi ) + M\sin (\phi )$ with $\phi := \phi (p) := [p_1^2+...+p_n^2]^{1/2}$ and $M=(p_1i_1+...+p_ni_n)/\phi $ for $\phi \ne 0$, $e^0=1$; $(2)$ $\partial \exp (p_1 i_1+...+p_ni_n)/\partial p_j = [ - \sin (\phi ) + M \cos (\phi ) ]p_j/\phi + (\phi i_j - Mp_j) \phi ^{-2} \sin (\phi )$ and $\partial (p_jt_j+\zeta _j)/\partial \zeta _j=1$ can also be used. [**13. Theorem.**]{} *Let $f(t)$ be a function-original. Suppose that $u(p,t;\zeta )$ is given by 2$(1,2, 2.1)$ or 1$(8,8.1)$ over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r<\infty $. Then a (super)derivative of an image is given by the following formula:* $(1)$ $(\partial {\cal F}^n(f(t),u; p;\zeta )/ \partial p).h = - {\cal F}^n(f(t)s_1,u;p;\zeta )h_0 $\ $- {\sf S}_{e_1} {\cal F}^n(f(t)s_1,u; p;\zeta )h_1 -...- {\sf S}_{e_n} {\cal F}^n(f(t)s_n,u; p; \zeta )h_n$\ in the ${\cal A}_r$ spherical coordinates, or $(1.1)$ $(\partial {\cal F}^n(f(t),u; p;\zeta )/ \partial p).h = - {\cal F}^n(f(t)s_1,u;p;\zeta )h_0 $\ $- {\sf S}_{e_1} {\cal F}^n(f(t)t_1,u; p;\zeta )h_1 -...- {\sf S}_{e_n} {\cal F}^n(f(t)t_n,u; p; \zeta )h_n$\ in the ${\cal A}_r$ Cartesian coordinates for each $h=h_0i_0+...+h_ni_n\in {\cal A}_r$, where $h_0,...,h_n\in {\bf R}$, $2^{r-1}\le n \le 2^r-1$, $p\in W_f$. [**Proof.**]{} The inequalities $a_1(f)<Re (p)<a_{-1}(f)$ are equivalent to the inequalities $a_1(f(t)|t|)<Re (p)<a_{-1}(f(t)|t|)$, since $\lim_{|t|\to + \infty }\exp (-b|t|)|t|=0$ for each $b>0$. An image ${\cal F}^n(f(t),u; p;\zeta )$ is a holomorphic function in $p$ for $a_1(f)<Re (p)<a_{-1}(f)$ by Theorem 4, also $|\int_0^{\infty }e^{-ct}t^ndt|<\infty $ for each $c>0$ and $n=0,1,2,...$.
Thus it is possible to differentiate under the sign of the integral: $$(2)\quad (\partial (\int_{{\bf R}^n} f(t)\exp (-u(p,t;\zeta ))dt)/\partial p).h =$$ $$\sum_{v\in \{ -1, 1 \} ^n } (\partial (\int_{U_v} f(t)\exp (-u(p,t;\zeta )) \chi _{U_v} dt)/\partial p).h =$$ $$= \int_{{\bf R}^n}f(t)(\partial \exp (-u(p,t;\zeta ))/\partial p).hdt .$$ Due to Formulas 12$(3, 3.2)$ we get: $(3)$ $(\partial \exp (-u(p,t;\zeta ))/\partial p).h= - \exp (-u(p,t;\zeta ))s_1h_0 - {\sf S}_{e_1}\exp (-u(p,t;\zeta ))s_1h_1 -... -{\sf S}_{e_n}\exp (-u(p,t;\zeta ))s_nh_n$\ in the ${\cal A}_r$ spherical coordinates, or $(4)$ $(\partial \exp (-u(p,t;\zeta ))/\partial p).h= - \exp (-u(p,t;\zeta ))s_1h_0 - {\sf S}_{e_1}\exp (-u(p,t;\zeta ))t_1h_1 -... -{\sf S}_{e_n}\exp (-u(p,t;\zeta ))t_nh_n$\ in the ${\cal A}_r$ Cartesian coordinates.\ Thus from Formulas $(2,3)$ we deduce Formula $(1)$. [**14. Theorem.**]{} *If $f(t)$ is a function-original, then* $(1)$ ${\cal F}^n(f(t-\tau ),u;p;\zeta )= {\cal F}^n(f(t),u;p; \zeta + <p,\tau ])$ for either $(i)$ $u(p,t;\zeta )= p_0s_1 + M(p,t;\zeta )+ \zeta _0$ or $(ii)$ $u(p,t;\zeta )= <p,t) + \zeta $ over ${\cal A}_r$ with $2\le r<\infty $ in a domain $p\in W_f$, where $\tau \in {\bf R}^n$, $2^{r-1}\le n\le 2^r-1$, $(2)$ $<p,\tau ] = p_0s_1+p_1s_1i_1+...+p_ns_ni_n$ with $s_j=s_j(n;\tau )$ for each $j$ in the first $(i)$ and $<p,\tau ]=<p,\tau )$ in the second $(ii)$ case (see also Formulas 1$(8)$, 2$(1,2,2.1)$). 
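In the one-dimensional specialization ($n=1$, $u=pt+\zeta $), Theorem 14 is the classical delay rule: shifting the original by $\tau $ shifts the phase parameter by $p\tau $, that is, multiplies the image by $e^{-p\tau }$. A numerical sketch under these simplifying assumptions, with hypothetical test values:

```python
import math

def integrate(f, a, b, n=200_000):
    """Composite trapezoid rule on [a, b] (crude, for illustration only)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

# Hypothetical test data: original f(t) = exp(-2 t), delay tau, variable p.
p, tau, T = 1.3, 0.7, 50.0
f = lambda t: math.exp(-2.0 * t)

# Image of the retarded original f(t - tau) (zero for t < tau), truncated at T.
lhs = integrate(lambda t: f(t - tau) * math.exp(-p * t), tau, T)

# The image of f multiplied by exp(-p tau), as the delay rule predicts.
rhs = math.exp(-p * tau) * integrate(lambda t: f(t) * math.exp(-p * t), 0.0, T)

assert abs(lhs - rhs) < 1e-5
```

In the noncommutative setting the multiplier $e^{-p\tau }$ is absorbed into the shifted parameter $\zeta + <p,\tau ]$ of Formula $(1)$.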
[**Proof.**]{} For $p$ in the domain $Re (p)>a_1$ the identities are satisfied: $$(3)\quad {\cal F}^n((f\chi _{U_{1,...,1}}) (t-\tau ),u;p;\zeta ) = \int_{\tau _1}^{\infty }...\int_{\tau _n}^{\infty }f(t-\tau ) e^{-u(p,t;\zeta )}dt$$ $$= \int_{U_{1,...,1}} f(\xi ) e^{-u(p,\xi ;\zeta + <p,\tau ])}d\xi ={\cal F}^n ((f\chi _{U_{1,...,1}})(t),u;p;\zeta +<p,\tau ]),$$ due to Formulas 1$(7,8)$ and 2$(1,2,2.1,4)$, since $p_0s_1(n;t) + \zeta _0= p_0 s_1(n;\xi ) + \zeta _0 + p_0s_1(n;\tau )$ and $p_jt_j + \zeta _j = p_j\xi _j + (\zeta _j+p_j \tau _j)$ and $p_js_j(n;t) + \zeta _j = p_js_j(n;\xi ) + (\zeta _j+p_j s_j(n;\tau ))$ for each $j= 1,...,2^r-1$, where $t=\xi +\tau $. Symmetrically we get $(3)$ for $U_v$ instead of $U_{1,...,1}$. Naturally, the multiparameter non-commutative Laplace integral of an original $f$ can be considered as the sum of $2^n$ integrals over the sub-domains $U_v$: $$(4)\quad \int_{{\bf R}^n} f(t) \exp (-u(p,t;\zeta ))dt= \sum_{v\in \{ -1 , 1 \} ^n } \int_{{\bf R}^n}f(t)\exp (-u(p,t;\zeta ))\chi _{U_v}(t) dt .$$ The summation over all possible $v\in \{ -1 , 1 \} ^n$ gives Formula $(1)$. [**15. Note.**]{} In view of the definition of the non-commutative transform ${\cal F}^n$ and $u(p,t;\zeta )$ and Theorem 14, the term $\zeta _1 i_1+...+\zeta _{2^r-1}i_{2^r-1}$ has the natural interpretation as the initial phase of a retardation. [**16. Theorem.**]{} [*If $f(t)$ is a function-original with values in ${\cal A}_r$ for $2\le r<\infty $, $2^{r-1}\le n \le 2^r-1$, $b\in \bf R$, then $$(1)\quad {\cal F}^n(e^{b(t_1+...+t_n)}f(t), u;p;\zeta )= {\cal F}^n(f(t), u;p-b;\zeta )$$ for each $a_{-1}+b > Re (p) >a_1+b$, where $u$ is given by 1$(8, 8.1)$ or 2$(1,2)$.*]{} [**Proof.**]{} In accordance with Expressions 1$(8,8.1)$ and 2$(1,2,2.1)$ one has $u(p,t;\zeta ) - b (t_1+...+t_n) = u(p-b,t;\zeta )$.
If $a_{-1}+b>Re (p)>a_1+b$, then the integral $$(2)\quad {\cal F}^n(e^{b(t_1+...+t_n)}f(t)\chi _{U_v}(t), u;p;\zeta )= \int_{U_v} f(t)e^{b(t_1+...+t_n)}\exp (- u(p,t;\zeta ))dt$$ $$= \int_{U_v} f(t)\exp (- u(p-b,t;\zeta ) )dt = {\cal F}^n(f(t)\chi _{U_v}(t), u;p-b;\zeta )$$ converges. Applying Decomposition 14$(4)$ we deduce Formula $(1)$. [**17. Theorem.**]{} *Let a function $f(t)$ be a real-valued original,\ $F(p;\zeta ) = {\cal F}^n(f(t);u;p;\zeta )$, where the function $u(p,t;\zeta )$ is given by 1$(8,8.1)$ or 2$(1,2,2.1)$. Let also $G(p;\zeta )$ and $q(p)$ be locally analytic functions such that* $(1)$ ${\cal F}^n(g(t,\tau );u;p;\zeta ) = G(p;\zeta ) \exp (-u(q(p),\tau ;\zeta ))$\ for $u=<p,t)+ \zeta $ or $u=p_0(t_1+...+t_n)+ M(p,t;\zeta ) + \zeta _0$, then $(2)$ ${\cal F}^n (\int_{{\bf R}^n} g(t,\tau )f(\tau )d\tau ;u;p;\zeta ) = G(p;\zeta )F(q(p);\zeta )$\ for each $p\in W_g$ and $q(p) \in W_f$, where $2\le r<\infty $, $2^{r-1}\le n \le 2^r-1$. [**Proof.**]{} If $p \in W_g$ and $q(p)\in W_f$, then in view of Fubini's theorem and the conditions of the theorem a change of the integration order gives the equalities: $$\int_{{\bf R}^n} (\int_{{\bf R}^n} g(t,\tau ) f(\tau )d\tau )\exp (-u(p,t;\zeta ))dt$$ $$= \int_{{\bf R}^n} (\int_{{\bf R}^n} g(t,\tau ) \exp (-u(p,t;\zeta ))dt )f(\tau )d\tau$$ $$= \int_{{\bf R}^n} G(p;\zeta ) \exp (- u(q(p),\tau ;\zeta )) f(\tau )d\tau$$ $$=G(p;\zeta )\int_{{\bf R}^n}f(\tau )\exp (-u(q(p),\tau ;\zeta )) d\tau = G(p;\zeta )F(q(p);\zeta ) ,$$ since $t, \tau \in {\bf R}^n$ and the center of the algebra ${\cal A}_r$ is $\bf R$. [**18.
Theorem.**]{} *If a function $f(t)\chi _{U_{1,...,1}}$ is original together with its derivative $\partial ^nf(t)\chi _{U_{1,...,1}}(t)/\partial s_1...\partial s_n$ or $\partial ^nf(t)\chi _{U_{1,...,1}}(t)/\partial t_1...\partial t_n$, where $F^n_u(p;\zeta )$ is an image function of $f(t)\chi _{U_{1,...,1}}$ over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r\in \bf N$, $2^{r-1}\le n \le 2^r-1$, for $u=p_0s_1 + M(p,t;\zeta )+ \zeta _0$ given by 2$(1,2,2.1)$, then $$(1)\quad \lim_{p\to \infty } \{ [p_0 + p_1{\sf S}_{e_1}] p_2{\sf S}_{e_2}...p_n {\sf S}_{e_n} F^n_u(p;\zeta ) + \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0\delta _{1,j_1} + p_{j_1}{\sf S}_{e_{j_1}}] p_{j_2}{\sf S}_{e_{j_2}}...p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}$$ $$F^{n-m}_u(p^{(l)}; \zeta ) \} = (-1)^{n+1} f(0)e^{-u(0,0;\zeta )},$$ or $$(1.1)\quad \lim_{p\to \infty } \{ [p_0 + p_1{\sf S}_{e_1}] [p_0+p_2{\sf S}_{e_2}]...[p_0+p_n {\sf S}_{e_n}] F^n_u(p;\zeta ) + \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0 + p_{j_1}{\sf S}_{e_{j_1}}] [p_0+p_{j_2}{\sf S}_{e_{j_2}}]...[p_0+p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}]$$ $$F^{n-m}_u(p^{(l)}; \zeta ) \} = (-1)^{n+1} f(0)e^{-u(0,0;\zeta )}$$ for $u(p,t;\zeta )$ given by 1$(8,8.1)$, where $f(0) = \lim_{t\in U_{1,...,1}; t\to 0} f(t)$, $p$ tends to the infinity inside the angle $|Arg (p)|<\pi /2-\delta $ for some $0<\delta <\pi /2$, $1\le j\le 2^r-1$, $p^{(l)} = \sum_{j=0, j\notin (l)}^n p_ji_j$, $(l) = (l_1,...,l_m)$. 
If the restriction* $f(t)|_{t_{j_1}=0,...,t_{j_m}=0; t_k=\infty \forall k\notin \{ j_1,...,j_m \}} = \lim_{t\in U_{1,...,1}; t_{j_1}\to 0,...,t_{j_m}\to 0; t_k\to \infty ~ \forall k\notin \{ j_1,...,j_m \} } f(t)$ exists for all $1\le j_1<...<j_m\le n$, then $$(2)\quad \lim_{p\to 0} \{ [p_0 + p_1{\sf S}_{e_1}] p_2{\sf S}_{e_2}...p_n{\sf S}_{e_n} F^n_u(p;\zeta ) + \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0\delta _{1,j_1} + p_{j_1}{\sf S}_{e_{j_1}}] p_{j_2}{\sf S}_{e_{j_2}}...p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}$$ $$F^{n-m}_u(p^{(l)}; \zeta ) \}$$ $$=\sum_{m=0}^{n-1} (-1)^m \sum_{1\le j_1<...<j_m\le n} f(t)|_{t_{j_1}=0,...,t_{j_m}=0; t_k=\infty \forall k\notin \{ j_1,...,j_m \}} e^{-u(0,0,\zeta )}$$ in the ${\cal A}_r$ spherical coordinates or $$(2.1)\quad \lim_{p\to 0} \{ [p_0 + p_1{\sf S}_{e_1}] [p_0+p_2{\sf S}_{e_2}]...[p_0+p_n{\sf S}_{e_n}] F^n_u(p;\zeta ) + \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0 + p_{j_1}{\sf S}_{e_{j_1}}] [p_0+p_{j_2}{\sf S}_{e_{j_2}}]...[p_0+p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}]$$ $$F^{n-m}_u(p^{(l)}; \zeta ) \}$$ $$=\sum_{m=0}^{n-1} (-1)^m \sum_{1\le j_1<...<j_m\le n} f(t)|_{t_{j_1}=0,...,t_{j_m}=0; t_k=\infty \forall k\notin \{ j_1,...,j_m \}} e^{-u(0,0,\zeta )}$$ in the ${\cal A}_r$ Cartesian coordinates, where $p\to 0$ inside the same angle. 
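For orientation, in the classical case $n=1$ Theorem 18 collapses to the initial and final value theorems: $\lim_{p\to \infty } pF(p)=f(0)$ and $\lim_{p\to 0} pF(p)=f(\infty )$, the latter when the limit of the original exists. A sketch outside the paper's framework, for the hypothetical original $f(t)=e^{-2t}$:

```python
# f(t) = exp(-2 t) has classical image F(p) = 1/(p + 2);
# f(0) = 1 and f(t) -> 0 as t -> infinity.
F = lambda p: 1.0 / (p + 2.0)

initial = 1e9 * F(1e9)     # p F(p) for large p, approaches f(0) = 1
final = 1e-9 * F(1e-9)     # p F(p) for small p, approaches f(infinity) = 0

assert abs(initial - 1.0) < 1e-8
assert abs(final - 0.0) < 1e-8
```

The nested sums over index sets $(l)$ in Formulas $(1)$-$(2.1)$ are the multidimensional bookkeeping of the boundary restrictions that in one variable reduce to the two numbers $f(0)$ and $f(\infty )$.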
[**Proof.**]{} In accordance with Theorem 12 the following equality holds: $$(3) \quad {\cal F}^n((\partial f(t)/\partial s_j)\chi _{U_{1,...,1}}(t),u;p;\zeta ) = [p_0 \delta _{1,j} + p_j {\sf S}_{e_j}] {\cal F}^n (f(t)\chi _{U_{1,...,1}}(t), u(p,t;\zeta );p;\zeta )$$ $$- {\cal F}^{n-1; t^j} (f(t^j)\chi _{U_{1,...,1}}, u(p,t^j;\zeta );p;\zeta )$$ for $u= u(p,t;\zeta ) = p_0s_1+M(p,t;\zeta )+\zeta _0$ in the ${\cal A}_r$ spherical coordinates, or $$(3.1) \quad {\cal F}^n((\partial f(t)/\partial t_j)\chi _{U_{1,...,1}}(t),u;p;\zeta ) = [p_0 + p_j {\sf S}_{e_j}] {\cal F}^n (f(t)\chi _{U_{1,...,1}}(t), u(p,t;\zeta );p;\zeta )$$ $$- {\cal F}^{n-1; t^j} (f(t^j)\chi _{U_{1,...,1}}, u(p,t^j;\zeta );p;\zeta )$$ in the ${\cal A}_r$ Cartesian coordinates, since $(3.2)$ $\partial f(t(s))/\partial s_j = - \partial f(t)/\partial t_{j-1} + \partial f(t)/\partial t_j$ for each $j\ge 2$, $\partial f(t(s))/\partial s_1 = \partial f(t)/\partial t_1$,\ where $p=p_0+p_1i_1+...+p_{2^r-1}i_{2^r-1}\in {\cal A}_r$, $p_0,...,p_{2^r-1}\in {\bf R}$, $ \{ i_0,...,i_{2^r-1} \} $ are the generators of the Cayley-Dickson algebra ${\cal A}_r$, $s_{n+l}=0$ for each $l\ge 1$, the zero power ${\sf S}_{e_j}^0=I$ is the unit operator. For short we write $f$ instead of $f\chi _{U_{1,...,1}}$. Thus the limit exists: $$(4)\quad {\cal F}^{n-1;t^j} (f(t^j), u(p,t^j;\zeta );p;\zeta ) =$$ $$\lim_{t_j\to +0} \int_0^{\infty }dt_1...\int_0^{\infty }dt_{j-1} \int_0^{\infty }dt_{j+1}...\int_0^{\infty } (dt_n) f(t) \exp (-u(p,t;\zeta )).$$ Note that $(...((t^1)^2)...)^j = (0,...,0,t_j,...,t_n: t_j=0)$ for every $1\le j\le n$, since $t_k=s_k-s_{k+1}$ for each $1\le k\le n$. We apply these Formulas $(3,4)$ by induction on $j=1,...,n$, $2^{r-1} \le n \le 2^r-1$, to $\partial ^nf(t)/\partial s_1...\partial s_n$,...,$\partial ^{n-j+1} f(t)/\partial s_j...\partial s_n$,\ ...,$\partial f(t)/\partial s_n$ instead of $\partial f(t)/\partial s_j$.
From Note 8 [@lutsltjms] it follows, that in the ${\cal A}_r$ spherical coordinates $$\lim_{p\to \infty , |Arg (p)|<\pi /2-\delta } {\cal F}^n((\partial ^nf(t)/\partial s_1...\partial s_n)\chi _{U_{1,..,1}},u;p;\zeta )=0,$$ also in the ${\cal A}_r$ Cartesian coordinates $$\lim_{p\to \infty , |Arg (p)|<\pi /2-\delta } {\cal F}^n((\partial ^nf(t)/\partial t_1...\partial t_n)\chi _{U_{1,..,1}},u;p;\zeta )=0,$$ which gives the first statement of this theorem, since $u(p,0,\zeta ) = u(0,t;\zeta ) = u(0,0,\zeta ) $ and $F^0_u(p^{(1,...,1)};\zeta ) = f(0) e^{-u(0,0,\zeta )}$, while $F^n_u(p;\zeta )$ is defined for each $Re (p)>0$. If the limit $f(t^{<j>})$ exists, where $t^{<j>} := (t_1,...,t_j,...,t_n: ~ t_j=\infty )$, then $$(5)\quad \lim_{t_j\to \infty } \int_0^{\infty }dt_1...\int_0^{\infty }dt_{j-1} \int_0^{\infty }dt_{j+1}...\int_0^{\infty } (dt_n) f(t) \exp (-u(p,t;\zeta ))$$ $$=: {\cal F}^{n-1; <t^j>} (f(t^{<j>}), u(p,t^{<j>};\zeta );p;\zeta ).$$ Certainly, $(...((t^{<1>})^{<2>})...)^{<j>} = (t_1, ...,t_n: t_1=\infty ,...,t_j=\infty )$ for each $1\le j\le n$. 
Therefore, the limit exists: $$\lim_{p\to 0, |Arg (p)|<\pi /2 -\delta }\int_{U_{1,...,1}} (\partial ^nf(t)/\partial s_1...\partial s_n)\exp (-p_0s_1-\zeta _0 -M(p,t;\zeta ))dt$$ $$= \int_{U_{1,...,1} } (\partial ^nf(t)/\partial s_1...\partial s_n)e^{-u(0,0;\zeta )} dt$$ $$= \sum_{m=0}^n (-1)^m \sum_{1\le j_1<...<j_m\le n} f(t)|_{t_{j_1}=0,..., t_{j_m}=0; t_k=\infty ~ \forall k\notin \{ j_1,...,j_m \} } e^{-u(0,0;\zeta )}$$ $$=\lim_{p\to 0, |Arg (p)|<\pi /2 -\delta } \{ [p_0 + p_1{\sf S}_{e_1}] p_2{\sf S}_{e_2}...p_n{\sf S}_{e_n} F^n_u(p;\zeta )$$ $$+ \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0\delta _{1,j_1} + p_{j_1}{\sf S}_{e_{j_1}}] p_{j_2}{\sf S}_{e_{j_2}}...p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}$$ $$F^{n-m}_u(p^{(l)}; \zeta ) + (-1)^n f(0) e^{ -u(0,0,\zeta )} \} ,$$ from which the second statement of this theorem follows in the ${\cal A}_r$ spherical coordinates and analogously in the ${\cal A}_r$ Cartesian coordinates using Formula $(3.1)$. [**19. Definitions.**]{} Let $X$ and $Y$ be two $\bf R$ linear normed spaces which are also left and right ${\cal A}_r$ modules, where $1\le r$. Let $Y$ be complete relative to its norm. We put $X^{\otimes k} := X\otimes _{\bf R} ... \otimes _{\bf R} X$, the $k$ times ordered tensor product of $X$ over $\bf R$. By $L_{q,k}(X^{\otimes k},Y)$ we denote the family of all continuous $k$ times $\bf R$ poly-linear and ${\cal A}_r$ additive operators from $X^{\otimes k}$ into $Y$. Then $L_{q,k}(X^{\otimes k},Y)$ is also a normed $\bf R$ linear and left and right ${\cal A}_r$ module complete relative to its norm. In particular, $L_{q,1}(X,Y)$ is also denoted by $L_q(X,Y)$. We present $X$ as the direct sum $X=X_0i_0\oplus ... \oplus X_{2^r-1} i_{2^r-1}$, where $X_0$,...,$X_{2^r-1}$ are pairwise isomorphic real normed spaces.
If $A\in L_q(X,Y)$ and $A(xb)=(Ax)b$ or $A(bx)=b(Ax)$ for each $x\in X_0$ and $b\in {\cal A}_r$, then the operator $A$ is called right or left ${\cal A}_r$-linear, respectively. The $\bf R$ linear space of left (or right) $k$ times ${\cal A}_r$ poly-linear operators is denoted by $L_{l,k}(X^{\otimes k},Y)$ (or $L_{r,k}(X^{\otimes k},Y)$ respectively). We consider a space of test functions ${\cal D} := {\cal D}({\bf R}^n,Y)$ consisting of all infinitely differentiable functions $f: {\bf R}^n\to Y$ on ${\bf R}^n$ with compact supports. A sequence of functions $f_n\in {\cal D}$ tends to zero if all $f_n$ are zero outside some compact subset $K$ in the Euclidean space ${\bf R}^n$, while on it for each $k=0,1,2,...$ the sequence $ \{ f^{(k)}_n: ~ n\in {\bf N} \} $ converges to zero uniformly. Here, as usual, $f^{(k)}(t)$ denotes the $k$-th derivative of $f$, which is a $k$ times $\bf R$ poly-linear symmetric operator from $({\bf R}^n)^{\otimes k}$ to $Y$, that is $f^{(k)}(t).(h_1,...,h_k)= f^{(k)}(t).(h_{\sigma (1)},...,h_{\sigma (k)})\in Y$ for each $h_1,...,h_k\in {\bf R}^n$ and every permutation $\sigma : \{ 1,...,k \} \to \{ 1,...,k \}$, that is, an element of the symmetric group $S_k$, $t\in {\bf R}^n$. For convenience one puts $f^{(0)}=f$. In particular, $f^{(k)}(t).(e_{j_1},...,e_{j_k})= \partial ^kf(t)/\partial t_{j_1}...\partial t_{j_k}$ for all $1\le j_1,...,j_k\le n$, where $e_j = (0,...,0,1,0,...,0)\in {\bf R}^n$ with $1$ on the $j$-th place. Such convergence in $\cal D$ defines closed subsets of this space; their complements are by definition open, which gives the topology on $\cal D$. The space ${\cal D}$ is an $\bf R$ linear space and a right and left ${\cal A}_r$ module. A generalized function of class ${\cal D}' := [{\cal D}({\bf R}^n,Y)]'$ is a continuous $\bf R$-linear ${\cal A}_r$-additive function $g: {\cal D} \to {\cal A}_r$. The set of all such functionals is denoted by ${\cal D}'$.
That is, $g$ is continuous if for each sequence $f_n\in \cal D$ converging to zero, the sequence of numbers $g(f_n)=: [g,f_n) \in {\cal A}_r$ converges to zero as $n$ tends to infinity. A generalized function $g$ is zero on an open subset $V$ in ${\bf R}^n$ if $[g,f)=0$ for each $f\in {\cal D}$ equal to zero outside $V$. The support of a generalized function $g$, denoted by $supp (g)$, is the family of all points $t\in {\bf R}^n$ such that in each neighborhood of $t$ the functional $g$ is different from zero. The addition of generalized functions $g, h$ is given by the formula: $(1)$ $[g+h,f):= [g,f)+ [h,f)$. The multiplication of $g\in {\cal D}'$ by an infinitely differentiable function $w$ is given by the equality: $(2)$ $[gw,f)=[g, wf)$ either for $w: {\bf R}^n\to {\cal A}_r$ and each test function $f\in \cal D$ with a real image $f({\bf R}^n)\subset {\bf R}$, where $\bf R$ is embedded into $Y$; or for $w: {\bf R}^n\to {\bf R}$ and $f: {\bf R}^n\to Y$. The generalized function $g'$ prescribed by the equation: $(3)$ $[g',f):= - [g,f')$ is called the derivative of the generalized function $g$, where $f' \in {\cal D}({\bf R}^n,L_q({\bf R}^n,Y))$, $g'\in [{\cal D}({\bf R}^n,L_q({\bf R}^n,Y))]'$. Another space ${\cal B} := {\cal B}({\bf R}^n,Y)$ of test functions consists of all infinitely differentiable functions $f: {\bf R}^n\to Y$ such that $\lim_{|t|\to +\infty } |t|^m f^{(j)}(t)=0$ for each $m=0,1,2,...$, $j=0,1,2,...$. A sequence $f_n\in \cal B$ is said to converge to zero if the sequence $|t|^mf_n^{(j)}(t)$ converges to zero uniformly on ${\bf R}^n\setminus B({\bf R}^n,0,R)$ for each $m, j=0,1,2,...$ and each $0<R< + \infty $, where $B(Z,z,R) := \{ y\in Z: ~ \rho (y,z)\le R \} $ denotes the ball with center $z$ of radius $R$ in a metric space $Z$ with a metric $\rho $. The family of all $\bf R$-linear and ${\cal A}_r$-additive functionals on $\cal B$ is denoted by ${\cal B}'$.
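As an illustration (a real-valued toy model, not the ${\cal A}_r$-valued setting of Definition 19): generalized functions can be modeled as linear functionals on test functions, with the derivative transferred to the test function with a sign flip as in $(3)$. A minimal Python sketch for the Dirac delta; the test function and step size are arbitrary choices:

```python
import math

# A generalized function is modeled as a functional on test functions;
# the distributional derivative follows the rule [g', f) := -[g, f').
def delta(test):                       # Dirac delta: [delta, f) = f(0)
    return test(0.0)

def distributional_derivative(g):
    def g_prime(test):
        h = 1e-6                       # numerical derivative of the test function
        d_test = lambda t: (test(t + h) - test(t - h)) / (2.0 * h)
        return -g(d_test)
    return g_prime

# A smooth, rapidly decaying test function (a Gaussian bump centered at 1).
phi = lambda t: math.exp(-(t - 1.0) ** 2)
phi_prime_0 = 2.0 * math.exp(-1.0)     # phi'(0), computed by hand

assert abs(delta(phi) - phi(0.0)) < 1e-12
assert abs(distributional_derivative(delta)(phi) + phi_prime_0) < 1e-6
```

This is exactly the mechanism by which the image 20$(1)$ of $\delta ^{(j)}$ below acquires the factor $(-1)^j$ and the derivatives of $\exp (-u(p,t;\zeta ))$.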
In particular, we can take $X={\cal A}_r^{\alpha }$, $Y= {\cal A}_r^{\beta }$ with $1\le \alpha , \beta \in \bf Z$. Analogously spaces ${\cal D}(U,Y)$, $[{\cal D}(U,Y)]'$, ${\cal B}(U,Y)$ and $[{\cal B}(U,Y)]'$ are defined for domains $U$ in ${\bf R}^n$, for example, $U=U_v$ (see also §1). A generalized function $f\in {\cal B}'$ is called a generalized original if there exist real numbers $a_1<a_{-1}$ such that for each $a_1 < w_{-1}, w_1,...,w_{-n}, w_n < a_{-1}$ the generalized function $(4)$ $f(t)\exp (-(q_v,t))\chi _{U_v}$ is in $[{\cal B}(U_v,Y)]'$ for all $v= (v_1,...,v_n)$ with $v_j\in \{ -1, 1 \}$ for every $j=1,...,n$ and for each $t\in {\bf R}^n$ with $t_j v_j \ge 0$ for each $j=1,...,n$, where $q_v = (v_1w_{v_11},...,v_nw_{v_nn})$. The image of such an original is the function $(5)$ ${\cal F}^n(f,u;p;\zeta ):= [f, \exp (-u(p,t;\zeta )))$ of the variable $p\in {\cal A}_r$ with the parameter $\zeta \in {\cal A}_r$, defined in the domain $W_f = \{ p\in {\cal A }_r: ~ a_1< Re (p) <a_{-1} \} $ by the following rule. For a given $p\in W_f$ choose $a_1 < w_1,...,w_n < Re (p) < w_{-1},...,w_{-n} < a_{-1}$, then $(6)$ $[f,\exp (-u(p,t;\zeta ))) := \sum_v [f\exp (- (q_v,t)) , \exp \{ - [u(p,t;\zeta )- (q_v,t)] \} \chi _{U_v} )$,\ since $\exp \{ - [ u(p,t;\zeta ) - (q_v,t) ] \} \in {\cal B}(U_v,Y)$, where in each term\ $[f\exp (- (q_v,t)) , \exp \{ - [u(p,t;\zeta )- (q_v,t)] \} \chi _{U_v} )$ the generalized function belongs to $[{\cal B}(U_v,Y)]'$ by Condition $(4)$, while the sum in $(6)$ is over all admissible vectors $v\in \{ -1, 1 \} ^n$. [**20.
Note and Examples.**]{} Evidently the transform ${\cal F}^n(f,u;p;\zeta )$ does not depend on the choice of $\{ w_{-1}, w_1,...,w_{-n}, w_n \} $, since $[f\exp (-(q_v,t)),\exp (-[u(p,t;\zeta )- (q_v,t)])\chi _{U_v})=$ $[f\exp (-(q_v,t)-(b_v,t)),\exp (-[u(p,t;\zeta )-(q_v,t)-(b_v,t)])\chi _{U_v})$\ for each $b\in {\bf R}^n$ such that $a_1<w_j + b_j< Re (p) < w_{-j} + b_{-j} < a_{-1}$ for each $j=1,...,n$, because $\exp (-(b_v,t))\in \bf R$. At the same time the real field $\bf R$ is the center of the Cayley-Dickson algebra ${\cal A}_r$, where $2\le r\in \bf N$. Let $\delta $ be the Dirac delta function, defined by the equation $(DF)$ $[\delta (t),\phi (t)) := \phi (0)$ for each $\phi \in {\cal B}$. Then $(1)$ ${\cal F}^n(\delta ^{(j)}(t-\tau ),u;p;\zeta ) =\sum_{v\in \{ -1,1 \} ^n} [\delta ^{(j)}(t-\tau ) \exp (- (q_v,t)), \exp (-[u(p,t;\zeta )- (q_v,t)])\chi _{U_v})$\ $= (-1)^j [\partial ^j_t \exp (-u(p,t;\zeta ))]|_{t=\tau }$,\ since it is possible to take $- \infty <a_1<0<a_{-1}<\infty $ and $w_k=0$ for each $k\in \{ -1, 1, -2, 2,..., -n, n \} $, where $\tau \in {\bf R}^n$ is the parameter, $\partial ^j_t := \partial ^{|j|}/\partial t_1^{j_1}...\partial t_n^{j_n}$. In particular, for $j=0$ we have $(2)$ ${\cal F}^n(\delta (t-\tau ),u;p;\zeta ) = \exp (-u(p,\tau ;\zeta ))$.\ In the general case: $(3)$ ${\cal F}^n(\partial ^{|j|}\delta (t)/\partial s_1^{j_1}... \partial s_n^{j_n},u;p;\zeta )=$ $\sum_{0\le k_1\le j_1} {{j_1}\choose {k_1}} p_0^{j_1-k_1}(p_1{\sf S}_{e_1})^{k_1}(p_2{\sf S}_{e_2})^{j_2}... (p_n{\sf S}_{e_n})^{j_n}\exp (- \zeta _0 - M(p,0;\zeta ))$\ in the ${\cal A}_r$ spherical coordinates, or $(3.1)$ ${\cal F}^n(\partial ^{|j|}\delta (t)/\partial t_1^{j_1}... \partial t_n^{j_n},u;p;\zeta )=$ $ (p_0+p_1{\sf S}_{e_1})^{j_1} (p_0+p_2{\sf S}_{e_2})^{j_2}... 
(p_0+p_n{\sf S}_{e_n})^{j_n}\exp (- u(p,0;\zeta ))$\ in the ${\cal A}_r$ Cartesian coordinates, where $j_1+...+j_n=|j|$, $k_1, j_1,..., j_n$ are nonnegative integers, $2^{r-1}\le n\le 2^r-1$, ${l\choose m} := l!/[m!(l-m)!]$ denotes the binomial coefficient, $0!=1$, $1!=1$, $2!=2$; $l!=1 \cdot 2\cdot ... \cdot l$ for each $l\ge 3$, $s_j=s_j(n;t)$. The transform ${\cal F}^n(f)$ of any generalized function $f$ is a holomorphic function in $p\in W_f$ and in $\zeta \in {\cal A}_r$, since the right side of Equation 19$(5)$ is holomorphic in $p$ on $W_f$ and in $\zeta $ in view of Theorem 4. Equation 19$(5)$ implies that Theorems 11 - 13 also hold for generalized functions. For $a_1=a_{-1}$ the region of convergence reduces to the vertical hyperplane in ${\cal A}_r$ over $\bf R$. For $a_{-1} < a_1$ there is no common domain of convergence and $f(t)$ cannot be transformed. [**21. Theorem.**]{} *If $f(t)$ is an original function on ${\bf R}^n$, $F^n(p;\zeta )$ is its image, $\partial ^{|j|} f(t)/\partial s_1^{j_1}...\partial s_n^{j_n}$ or $\partial ^{|j|} f(t)/\partial t_1^{j_1}...\partial t_n^{j_n}$ is an original, $|j|=j_1+...+j_n$, $0\le j_1,...,j_n\in {\bf Z}$, $2^{r-1}\le n \le 2^r-1$; then $$(1)\quad {\cal F}^n(\partial ^{|j|} f(t)/\partial s_1^{j_1}...\partial s_n^{j_n}, u; p;\zeta ) = \sum_{0\le k_1\le j_1}$$ $${{j_1}\choose {k_1}} p_0^{j_1-k_1}(p_1{\sf S}_{e_1})^{k_1}(p_2{\sf S}_{e_2})^{j_2}...(p_n{\sf S}_{e_n})^{j_n} {\cal F}^n (f(t),u; p;\zeta )$$ for $u(p,t;\zeta ) := p_0s_1 + M(p,t;\zeta )+\zeta _0$ given by 2$(1,2,2.1)$, or $$(1.1)\quad {\cal F}^n(\partial ^{|j|} f(t)/\partial t_1^{j_1}...\partial t_n^{j_n}, u; p;\zeta ) =$$ $$(p_0 + p_1{\sf S}_{e_1})^{j_1}(p_0+p_2{\sf S}_{e_2})^{j_2}...(p_0+p_n{\sf S}_{e_n})^{j_n} {\cal F}^n (f(t),u; p;\zeta )$$ for $u(p,t;\zeta )$ given by 1$(8,8.1)$ over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r<\infty $.
The domains where Formulas $(1, 1.1)$ hold may differ from the domain of the multiparameter noncommutative transform of $f$, but they are satisfied in the domain $a_1< Re (p)<a_{-1}$, where* $a_{-1} = \min (a_{-1}(f),a_{-1}(\partial ^{|m|} f(t)/\partial \phi _1^{m_1}...\partial \phi _n^{m_n}): |m|\le |j|, 0\le m_l\le j_l\forall l)$; $a_1= \max (a_1(f),a_1(\partial ^{|m|} f(t)/\partial \phi _1^{m_1}...\partial \phi _n^{m_n}): |m|\le |j|, 0\le m_l\le j_l\forall l)$, if $a_1<a_{-1}$, where $\phi _j=s_j$ or $\phi _j=t_j$ for each $j$ respectively. [**Proof.**]{} To each domain $U_v$ the domain $U_{-v}$ symmetrically corresponds. The number of different vectors $v\in \{ -1, 1 \} ^n$ is $2^n$, which is even. Therefore, for $u= p_0s_1 + \zeta _0 + M(p,t; \zeta )$ due to Theorem 12 the equality $$(2)\quad \int_{{\bf R}^n} (\partial f(t)/\partial s_j) e^{-u(p,t;\zeta )}ds = \int_{{\bf R}^n} (\partial f(t)/\partial s_j) e^{-u(p,t;\zeta )}dt =$$ $$\int_{{\bf R}^{n-1}}(dt^j) [f(t)e^{-u(p,t;\zeta )}]|_{-\infty }^{\infty } - \int_{{\bf R}^{n-1}}(dt^j) (\int_{-\infty }^{\infty } f(t) [\partial e^{-u(p,t;\zeta )}/\partial s_j] ds_j)$$ is satisfied in the ${\cal A}_r$ spherical coordinates, since the absolute value of the Jacobian $\partial t/\partial (t^j,s_j)$ is unit. Since for $a_1<Re (p)<a_{-1}$ the first summand is zero, while the second integral transforms with the help of Formulas 12$(2, 2.1)$, Formula $(1)$ follows for $k=1$: $(3) \quad {\cal F}^n(\partial f(t)/\partial s_j, u; p; \zeta ) = p_0 \delta _{1,j}{\cal F}^n(f(t), u; p; \zeta ) + p_j {\sf S}_{e_j}{\cal F}^n(f(t), u; p; \zeta )$.
To accomplish the derivation we use Theorem 14 so that $$\lim_{\tau \to 0}[{\cal F}^n(f(t),u;p;\zeta ) - {\cal F}^n(f(t-\tau e_j),u;p;\zeta )]/\tau$$ $$= \lim_{\tau \to 0}[{\cal F}^n(f(t),u;p;\zeta ) - {\cal F}^n(f(t),u;p;\zeta + \tau (p_0 +p_1i_1+...+ p_j i_j))]/\tau$$ $$=\lim_{\tau \to 0} \int_{{\bf R}^n} f(t) [e^{-u(p,t;\zeta )} - e^{-u(p,t;\zeta + \tau (p_0+p_1i_1+...+p_ji_j))}]\tau ^{-1}dt ,$$ where $e_j =(0,...,0,1,0,..,0)\in {\bf R}^n$ with $1$ on the $j$-th place. If the original $\partial ^{|j|} f(t)/\partial s_1^{j_1}...\partial s_n^{j_n}$ exists, then $\partial ^{|m|} f(t)/\partial s_1^{m_1}...\partial s_n^{m_n}$ is continuous for $0\le |m|\le |j|-1$ with $0\le m_l\le j_l$ for each $l=1,...,n$, where $f^0:=f$. The interchanging of $\lim_{\tau \to 0} $ and $\int_{{\bf R}^n}$ may change a domain of convergence, but in the domain $a_1< Re (p)<a_{-1}$ indicated in the theorem, when it is non-void, Formula $(3)$ is valid. Applying Formula $(3)$ in the ${\cal A}_r$ spherical coordinates by induction to $(\partial ^{|m|} f(t)/\partial s_1^{m_1}...\partial s_n^{m_n}): |m|\le |j|, 0\le m_l\le j_l\forall l)$ with the corresponding order subordinated to $\partial ^{|j|} f(t)/\partial s_1^{j_1}...\partial s_n^{j_n}$, or in the ${\cal A}_r$ Cartesian coordinates using Formula 12$(1.1 )$ for the partial derivatives $(\partial ^{|m|} f(t)/\partial t_1^{m_1}...\partial t_n^{m_n}): |m|\le |j|, 0\le m_l\le j_l\forall l)$ with the corresponding order subordinated to $\partial ^{|j|} f(t)/\partial t_1^{j_1}...\partial t_n^{j_n}$ we deduce Expressions $(1)$ and $(1.1)$ with the help of Statement 6 from §XVII.2.3 [@zorich] about the differentiation of an improper integral by a parameter and §2. [**22. Remarks.**]{} For the entire Euclidean space ${\bf R}^n$ Theorem 21 for $\partial f(t)/\partial s_j$ gives only one or two additives on the right side of 21$(1)$ in accordance with 21$(3)$.
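The one-dimensional commutative prototype of the differentiation rule 21$(1)$ is the classical Laplace-transform identity: for an original with $f(0)=0$ the boundary term (the first additive in the integration by parts above) vanishes and the image of $f'$ is $p$ times the image of $f$. A minimal sympy sketch, with an illustrative original $f(t)=\sin t$ chosen here purely as an example:

```python
# Commutative 1D prototype of the differentiation rule 21(1):
# for an original with f(0) = 0 the boundary term vanishes under
# integration by parts, so the image of f' is p times the image of f.
import sympy as sp

t, p = sp.symbols('t p', positive=True)

f = sp.sin(t)                                                 # illustrative original, f(0) = 0
F = sp.laplace_transform(f, t, p, noconds=True)               # 1/(p**2 + 1)
Fd = sp.laplace_transform(sp.diff(f, t), t, p, noconds=True)  # p/(p**2 + 1)

assert sp.simplify(Fd - p*F) == 0
```

In the noncommutative setting the single multiplier $p$ is replaced by the operator expressions of 21$(1, 1.1)$, which track the order of the octonion multipliers.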
Evidently Theorems 4, 11 and Proposition 10 are accomplished for ${\cal F}^{k; t_{j(1)},...,t_{j(k)}} (f,u;p;\zeta )$ also. Theorem 12 is satisfied for ${\cal F}^{k; t_{j(1)},...,t_{j(k)}}$ and any $j\in \{ j(1),..., j(k) \} $, so that $s_l = s_l(k;t)=t_{j(l)}+...+t_{j(k)}$ for each $1\le l\le k$, $p_m=0$ and $\zeta _m=0$ for each $1\le m \notin \{ j(1),...,j(k) \} $ (the same convention is in 13, 14, 17, 21, see also below). For ${\cal F}^{k; t_{j(1)},...,t_{j(k)}}$ in Theorem 13 in Formula 13$(1)$ it is natural to put $t_m=0$ and $h_m =0$ for each $1\le m \notin \{ j(1),...,j(k) \} $, so that only $(k+1)$ additives with $h_0$, $h_{j(1)}$,...,$h_{j(k)}$ on the right side generally may remain. Theorems 14 and 17 and 21 modify for ${\cal F}^{k; t_{j(1)},...,t_{j(k)}}$ putting in 14$(1)$ and 17$(1, 2)$ and 21$(1)$ $t_j=0$ and $\tau _j =0$ respectively for each $j\notin \{ j(1),...,j(k) \} $. To take into account boundary conditions for domains different from $U_v$, for example, for bounded domains $V$ in ${\bf R}^n$ we consider a bounded noncommutative multiparameter transform $(1)$ ${\cal F}^n(f(t)\chi _V, u; p; \zeta )=: {\cal F}^n_V(f(t), u; p; \zeta )$.\ For it evidently Theorems 4, 6-8, 11, 13, 14, 16, 17, Proposition 10 and Corollary 4.1 are satisfied as well taking specific originals $f$ with supports in $V$. At first take domains $W$ which are quadrants, that is canonical closed subsets affine diffeomorphic with $Q^n = \prod_{j=1}^n [{\sf a}_j,b_j]$, where $-\infty \le {\sf a}_j <b_j \le \infty $, $[{\sf a}_j,b_j] := \{ x\in {\bf R}: ~ {\sf a}_j\le x \le b_j \} $ denotes the segment in $\bf R$. This means that there exists a vector $w\in {\bf R}^n$ and a linear invertible mapping $C$ on ${\bf R}^n$ so that $C(W)-w = Q$. We put $t^{j,1} := (t_1,...,t_j,...,t_n: ~ t_j={\sf a}_j)$, $t^{j,2} := (t_1,...,t_j,...,t_n: ~ t_j=b_j)$. Consider $t=(t_1,...,t_n)\in Q^n$. [**23. 
Theorem.**]{} [*Let $f(t)$ be a function-original with a support by $t$ variables in $Q^n$ and zero outside $Q^n$ such that $\partial f(t)/\partial t_j$ also satisfies Conditions 1$(1-4)$. Suppose that $u(p,t;\zeta )$ is given by 2$(1,2,2.1)$ or 1$(8,8.1)$ over ${\cal A}_r$ with $2\le r<\infty $, $2^{r-1}\le n \le 2^r-1$. Then $$(1)\quad {\cal F}^n((\partial f(t)/\partial t_j) \chi _{Q^n}(t),u; p;\zeta ) =$$ $${\cal F}^{n-1; t^{j,2}} (f(t^{j,2})\chi _{Q^n}(t^{j,2}), u;p;\zeta ) - {\cal F}^{n-1; t^{j,1}} (f(t^{j,1})\chi _{Q^n}(t^{j,1}), u;p;\zeta )$$ $$+ [p_0 + \sum_{k=1}^j p_k {\sf S}_{e_k}] {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta )$$ in the ${\cal A}_r$ spherical coordinates, or $$(1.1)\quad {\cal F}^n((\partial f(t)/\partial t_j) \chi _{Q^n}(t),u; p;\zeta ) =$$ $${\cal F}^{n-1; t^{j,2}} (f(t^{j,2})\chi _{Q^n}(t^{j,2}), u;p;\zeta ) - {\cal F}^{n-1; t^{j,1}} (f(t^{j,1})\chi _{Q^n}(t^{j,1}), u;p;\zeta )$$ $$+ [p_0 + p_j {\sf S}_{e_j}] {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta )$$ in the ${\cal A}_r$ Cartesian coordinates in a domain $W\subset {\cal A}_r$; if ${\sf a}_j= - \infty $ or $b_j = + \infty $, then the addendum with $t^{j,1}$ or $t^{j,2}$ correspondingly is zero.*]{} [**Proof.**]{} Here the domain $Q^n$ is bounded and $f$ is almost everywhere continuous and satisfies Conditions 1$(1-4)$, hence $f(t)\exp (- u(p,t;\zeta ))\in L^1({\bf R}^n,\lambda _n,{\cal A}_r)$ for each $p\in {\cal A}_r$, since $\exp (- u(p,t;\zeta ))$ is continuous and $supp (f(t))\subset Q^n$. Analogously to §12 the integration by parts gives $$(2)\quad \int_{{\sf a}_j}^{b_j}(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt_j = f(t) \exp (-u(p,t;\zeta ))|_{t_j={\sf a}_j}^{t_j=b_j}$$ $$- \int_{{\sf a}_j}^{b_j}[f(t)(\partial \exp (-u(p,t;\zeta ))/\partial t_j)]dt_j ,$$ where $t=(t_1,...,t_n)$.
Then Fubini's theorem implies: $$(3)\quad \int_{Q^n}(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt =$$ $$\int_{{\sf a}_1}^{b_1}...\int_{{\sf a}_{j-1}}^{b_{j-1}} \int_{{\sf a}_{j+1}}^{b_{j+1}}...\int_{{\sf a}_n}^{b_n} [\int_{{\sf a}_j}^{b_j}(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt_j]dt^j$$ $$= [\int_{t\in Q^n, ~ t_j=b_j} f(t^{j,2}) \exp (-u(p,t^{j,2};\zeta )) dt^j] - [\int_{t\in Q^n, ~ t_j={\sf a}_j} f(t^{j,1}) \exp (-u(p,t^{j,1};\zeta )) dt^j]$$ $$+ [p_0 + \sum_{k=1}^j p_k {\sf S}_{e_k}] \int_{{\sf a}_1}^{b_1}...\int_{{\sf a}_n}^{b_n} f(t) \exp (-u(p,t;\zeta ))dt$$ in the ${\cal A}_r$ spherical coordinates or $$(3.1)\quad \int_{Q^n}(\partial f(t)/\partial t_j) \exp (-u(p,t;\zeta ))dt$$ $$= [\int_{t\in Q^n, ~ t_j=b_j} f(t^{j,2}) \exp (-u(p,t^{j,2};\zeta )) dt^j] - [\int_{t\in Q^n, ~ t_j={\sf a}_j} f(t^{j,1}) \exp (-u(p,t^{j,1};\zeta )) dt^j]$$ $$+ [p_0 + p_j {\sf S}_{e_j}] \int_{{\sf a}_1}^{b_1}...\int_{{\sf a}_n}^{b_n} f(t) \exp (-u(p,t;\zeta ))dt$$ in the ${\cal A}_r$ Cartesian coordinates, where, as usual, $t^j=(t_1,...,t_{j-1},0,t_{j+1},...,t_n)$, $dt^j=dt_1...dt_{j-1}dt_{j+1}...dt_n$. This gives Formulas $(1, 1.1)$, where $$(4)\quad {\cal F}^{n-1; t^{j,k}}(f(t^{j,k})\chi _{Q^n}(t^{j,k}), u(p,t^{j,k};\zeta );p;\zeta ) =$$ $$\int_{{\sf a}_1}^{b_1}...\int_{{\sf a}_{j-1}}^{b_{j-1}} \int_{{\sf a}_{j+1}}^{b_{j+1}}... \int_{{\sf a}_n}^{b_n} f(t^{j,k}) \exp (-u(p,t^{j,k};\zeta )) dt^{j,k}$$ is the non-commutative transform by $t^{j,k}$, $2^{r-1}\le n \le 2^r-1$, $dt^{j,k}$ is the Lebesgue volume element on ${\bf R}^{n-1}$. [**24. 
Theorem.**]{} [*If a function $f(t)\chi _{Q^n}(t)$ is original together with its derivative $\partial ^nf(t)\chi _{Q^n}(t)/\partial s_1...\partial s_n$ or $\partial ^nf(t)\chi _{Q^n}(t)/\partial t_1...\partial t_n$, where $F^n_u(p;\zeta )$ is an image function of $f(t)\chi _{Q^n}(t)$ over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r\in \bf N$, $2^{r-1}\le n \le 2^r-1$, for the function $u(p,t;\zeta )$ given by 2$(1,2,2.1)$ or 1$(8,8.1)$, $Q^n = \prod_{j=1}^n [0,b_j]$, $b_j>0$ for each $j$, then $$(1)\quad \lim_{p\to \infty } \{ [p_0 + p_1{\sf S}_{e_1}] p_2{\sf S}_{e_2}...p_n{\sf S}_{e_n} F^n_u(p;\zeta ) + \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0\delta _{1,j_1} + p_{j_1}{\sf S}_{e_{j_1}}] p_{j_2}{\sf S}_{e_{j_2}}...p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}$$ $$F^{n-m}_u(p^{(l)}; \zeta ) \} = (-1)^{n+1} f(0) e^{ - u(0,0;\zeta )}$$ in the ${\cal A}_r$ spherical coordinates, or $$(1.1)\quad \lim_{p\to \infty } \{ [p_0 + p_1{\sf S}_{e_1}] [p_0+p_2{\sf S}_{e_2}]...[p_0+p_n{\sf S}_{e_n}] F^n_u(p;\zeta ) + \sum_{m=0}^{n-1} (-1)^m$$ $$\sum_{1 \le j_1 <...< j_{n-m} \le n; ~ 1\le l_1<...<l_m\le n; ~ l_{\alpha }\ne j_{\beta } ~ \forall \alpha , \beta } [p_0 + p_{j_1}{\sf S}_{e_{j_1}}] [p_0+p_{j_2}{\sf S}_{e_{j_2}}]...[p_0+p_{j_{n-m}}{\sf S}_{e_{j_{n-m}}}]$$ $$F^{n-m}_u(p^{(l)}; \zeta ) \} = (-1)^{n+1} f(0) e^{ - u(0,0;\zeta )}$$ in the ${\cal A}_r$ Cartesian coordinates, where $f(0) = \lim_{t\in Q^n, ~ t\to 0 } f(t)$, $p$ tends to the infinity inside the angle $|Arg (p)|<\pi /2-\delta $ for some $0<\delta <\pi /2$.* ]{} [**Proof.**]{} In accordance with Theorem 23 we have Equalities 23$(1,1.1)$. Therefore we infer that $$(2)\quad {\cal F}^{n-1;t^{j,k}} (f(t^{j,k})\chi _{Q^n}(t^{j,k}), u(p,t^{j,k};\zeta );p;\zeta ) =$$ $$\lim_{t_j\to \beta _{j,k}+0} \int_{{\sf a}_1}^{b_1}dt_1... 
\int_{{\sf a}_{j-1}}^{b_{j-1}}dt_{j-1} \int_{{\sf a}_{j+1}}^{b_{j+1}} dt_{j+1}...\int_{{\sf a}_n}^{b_n} (dt_n) f(t) \exp (-u(p,t;\zeta )),$$ where $\beta _{j,1} ={\sf a}_j=0$, $\beta _{j,2}=b_j>0$, $k=1, 2$. Note that $(...((t^{1,l_1})^{2,l_2})...)^{j,l_j} = (t: ~ t_1=\beta _{1,l_1},...,t_j=\beta _{j,l_j})$ for every $1\le j\le n$. Analogously to §12 we apply Formula $(2)$ by induction $j=1,...,n$, $2^{r-1}\le n \le 2^r-1$, to $\partial ^nf(t(s))/\partial s_1...\partial s_n$,...,$\partial ^{n-j+1} f(t(s))/\partial s_j...\partial s_n$,...,$\partial f(t(s))/\partial s_n$\ instead of $\partial f(t(s))/\partial s_j$, $s_j=s_j(n;t)$ as in §2, or applying to the partial derivatives $\partial ^nf(t)/\partial t_1...\partial t_n$,...,$\partial ^{n-j+1} f(t)/\partial t_j...\partial t_n$,...,$\partial f(t)/\partial t_n$\ instead of $\partial f(t)/\partial t_j$ correspondingly. If $s_j>0$ for some $j\ge 1$, then $s_1>0$ for $Q^n$ and $\lim_{p\to \infty } e^{-u(p,t^{(l)};\zeta )}=0$ for such $t^{(l)}$, where $t = (t_1,...,t_n)$, $(l) = (l_1,...,l_n)$, $|l| = l_1+...+l_n$, $t^{(l)} = (t^{(l)}_1,...,t^{(l)}_n)$, $t^{(l)}_j={\sf a}_j$ for $l_j=1$ and $t^{(l)}_j=b_j$ for $l_j=2$, $1\le j\le 2^r-1$. Therefore, $$\lim_{p\to \infty } \sum_{l_j\in \{ 1, 2 \}; ~ j=1,...,n } (-1)^{|l|} f(t^{(l)}) e^{-u(p,t^{(l)}; \zeta ) } = (-1)^n f(0) e^{ -u(0,0;\zeta )},$$ since $u(p,0;\zeta ) = u(0,0;\zeta )$, where $f(t^{(l)}) = \lim_{t\in Q^n; t\to t^{(l)}} f(t)$. In accordance with Note 8 [@lutsltjms] $$\lim_{p\to \infty , |Arg (p)|<\pi /2-\delta } {\cal F}^n((\partial ^nf(t)/\partial s_1...\partial s_n)\chi _{Q^n}(t),u(p,t;\zeta );p;\zeta )=0$$ in the ${\cal A}_r$ spherical coordinates and $$\lim_{p\to \infty , |Arg (p)|<\pi /2-\delta } {\cal F}^n((\partial ^nf(t)/\partial t_1...\partial t_n)\chi _{Q^n}(t),u(p,t;\zeta );p;\zeta )=0$$ in the ${\cal A}_r$ Cartesian coordinates, which gives the statement of this theorem. 
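The commutative one-dimensional prototype of the limit relation 24$(1)$ is the classical initial value theorem: for the one-sided transform, $\lim_{p\to \infty } pF(p)$ recovers $f(0+)$ when $p$ tends to infinity inside a sector. A brief sympy check, with an illustrative original chosen only for the example:

```python
# Commutative 1D prototype of the limit relation 24(1) (initial value
# theorem): lim_{p -> oo} p * F(p) recovers f(0+) for the one-sided transform.
import sympy as sp

t, p = sp.symbols('t p', positive=True)

f = sp.exp(-2*t) * sp.cos(3*t)                   # illustrative original, f(0+) = 1
F = sp.laplace_transform(f, t, p, noconds=True)  # (p + 2)/((p + 2)**2 + 9)

assert sp.limit(p*F, p, sp.oo) == f.subs(t, 0)   # both sides equal 1
```

In 24$(1, 1.1)$ the single multiplier $p$ becomes the product of the operators $[p_0+p_j{\sf S}_{e_j}]$, and the lower-dimensional images $F^{n-m}_u$ subtract off the contributions of the faces of $Q^n$.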
[**25.**]{} Suppose that $f(t)\chi _{Q^n}(t)$ is an original function, $F^n(p;\zeta )$ is its image, $\partial ^{|j|} f(t)\chi _{Q^n}(t)/\partial t_1^{j_1}...\partial t_n^{j_n}$ is an original, $|j|=j_1+...+j_n$, $0\le j_1,...,j_n\in {\bf Z}$, $2^{r-1}\le n \le 2^r-1$, $ - \infty \le {\sf a}_k<b_k\le \infty $ for each $k=1,...,n$, $(l)=(l_1,...,l_n)$, $l_k\in \{ 0, 1, 2 \}$, $W={\cal A}_r$ for bounded $Q^n$. Let $W= \{ p\in {\cal A}_r: ~ a_1< Re (p) \} $ for $b_k=\infty $ for some $k$ and finite ${\sf a}_k$ for each $k$; $W=\{ p \in {\cal A}_r: ~ Re (p)<a_{-1} \} $ for ${\sf a}_k= -\infty $ for some $k$ and finite $b_k$ for each $k$; $W= \{ p \in {\cal A}_r: ~ a_1< Re (p) < a_{-1} \} $ when ${\sf a}_k = -\infty $ and $b_l=+\infty $ for some $k$ and $l$; $t^{(l)}=(t^{(l)}_1,...,t^{(l)}_n)$. We put $t^{(l)}_k=t_k$ and $q_k=0$ for $l_k=0$, $t^{(l)}_k={\sf a}_k$ for $l_k=1$, $t^{(l)}_k=b_k$ for $l_k=2$, $ ~ ~ (q)=(q_1,...,q_n)$, $ ~ ~ |q|=q_1+...+q_n$, $a_1= \max (a_1(f),a_1(\partial ^{|m|} f(t)/\partial t_1^{m_1}...\partial t_n^{m_n}): ~ |m|\le |j|, 0\le m_k\le j_k\forall k)$, $a_{-1} = \min (a_{-1}(f),a_{-1}(\partial ^{|m|} f(t)/\partial t_1^{m_1}...\partial t_n^{m_n}): ~ |m|\le |j|, 0\le m_k\le j_k ~ \forall k)$ if $a_1<a_{-1}$. If ${\sf a}_k= - \infty $ and $b_k = + \infty $ for $Q^n$ with a given $k$, then $l_k=0$. If either ${\sf a}_k>-\infty $ or $b_k<+\infty $ for a marked $k$, then $l_k \in \{ 0, 1, 2 \} $. We also put $h_k=h_k(l)=sign (l_k)$ for each $k$, where $sign (x)=-1$ for $x<0$, $sign (0)=0$, $sign (x)=1$ for $x>0$, $h=h(l)$, $|h|=|h_1|+...+|h_n|$, $(lj) := (l_1 sign (j_1),...,l_n sign (j_n))$. Let the vector $(l)$ enumerate faces $\partial Q^n_{(l)}$ in $\partial Q^n_{k-1}$ for $|h(l)|= k\ge 1$, so that $\partial Q^n_{k-1} = \bigcup_{|h(l)|=k} Q^n_{(l)}$, $\partial Q^n_{(l)}\cap \partial Q^n_{(m)}=\emptyset $ for each $(l)\ne (m)$ (see also more detailed notations in §28). 
Let the shift operator be defined: $T_{(m)}F(p;\zeta ) := F(p;\zeta - (i_1 m_1 +...+i_nm_n)\pi /2)$, also the operator $(SO)$ ${\sf S}_{(m)}F(p;\zeta ) := {\sf S}_{e_1}^{m_1}...{\sf S}_{e_n}^{m_n}F(p;\zeta )$,\ where $(m)=(m_1,...,m_n)\in [0,\infty )^n\subset {\bf R}^n$, ${\sf S}_{(m)}^k={\sf S}_{k(m)}$ for each positive number $0<k\in {\bf R}$, ${\sf S}_0=I$ is the unit operator for $(m)=0$ (see also Formulas 12$(3.1-3.7)$). As usually let $e_1=(1,0,...,0)$,...,$e_n=(0,...,0,1)$ be the standard orthonormal basis in ${\bf R}^n$ so that $(m)=m_1e_1+...+m_ne_n$. [**Theorem.**]{} *Then $$(1)\quad {\cal F}^n(\partial ^{|j|} f(t)\chi _{Q^n}(t)/\partial t_1^{j_1}...\partial t_n^{j_n}, u(p,t;\zeta ); p;\zeta ) =$$ $${\sf R}_{e_1}^{j_1} {\sf R}_{e_2}^{j_2} ...{\sf R}_{e_n}^{j_n} {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta )$$ $$+ \sum_{1\le |(lj)|; ~ m_k+q_k+h_k=j_k; 0\le m_k; ~ 0\le q_k; ~ h_k= sign (l_kj_k); ~ q_k=0 \mbox{ for } l_kj_k=0 \mbox{, for each } k=1,...,n; ~ (l)\in \{ 0, 1, 2 \} ^n}$$ $$(-1)^{|(lj)|} {\sf R}_{e_1}^{m_1} {\sf R}_{e_2}^{m_2} ...{\sf R}_{e_n}^{m_n} {\cal F}^{n -|h(lj)|} (\partial ^{|q|} f(t^{(lj)})\chi _{\partial Q^n_{(lj)}}(t^{(lj)})/\partial t_1^{q_1}...\partial t_n^{q_n},u; p;\zeta )$$ for $u(p,t;\zeta )$ in the ${\cal A}_r$ spherical coordinates or the ${\cal A}_r$ Cartesian coordinates over the Cayley-Dickson algebra ${\cal A}_r$ with $2\le r<\infty $, where* $(1.1)$ ${\sf R}_{e_1} := p_0 + p_1{\sf S}_{e_1}$, ${\sf R}_{e_2} := p_0 + p_1{\sf S}_{e_1} + p_2 {\sf S}_{e_2}$,..., ${\sf R}_{e_n} := p_0 + p_1{\sf S}_{e_1} + p_2 {\sf S}_{e_2}+ ...+p_n {\sf S}_{e_n}$ in the ${\cal A}_r$ spherical coordinates, while $(1.2)$ ${\sf R}_{e_1} := p_0 + p_1{\sf S}_{e_1}$, ${\sf R}_{e_2} := p_0 + p_2 {\sf S}_{e_2}$,..., ${\sf R}_{e_n} := p_0 +p_n {\sf S}_{e_n}$ in the ${\cal A}_r$ Cartesian coordinates, i.e. ${\sf R}_{e_j}= {\sf R}_{e_j}(p)$ are operators depending on the parameter $p$. 
If $t^{(l)}_j=\infty $ for some $1\le j \le n$, then the corresponding addendum on the right of $(1)$ is zero. [**Proof.**]{} In view of Theorem 23 we get the equality $$(2)\quad \int_{Q^n} [(\partial ^{|m|+1}f(t)/\partial t_1^{m_1}...\partial t_{k-1}^{m_{k-1}}\partial t_k^{m_k+1}\partial t_{k+1}^{m_{k+1}}... \partial t_n^{m_n}) e^{-u(p,t;\zeta )} ] dt =$$ $$\int_{{\bf R}^{n-1}\cap Q^n}(dt^k) [(\partial ^{|m|} f(t)/\partial t_1^{m_1}... \partial t_n^{m_n}) e^{-u(p,t;\zeta )}]|_{{\sf a}_k}^{b_k}$$ $$- \int_{{\bf R}^{n-1}\cap Q^n}(dt^k) (\int_{{\sf a}_k }^{b_k} (\partial ^{|m|} f(t)/\partial t_1^{m_1}... \partial t_n^{m_n}) [\partial e^{-u(p,t;\zeta )}/\partial t_k] dt_k)$$ is satisfied for $0\le m_k\le j_k$ for each $k=1,...,n$ with $|m|<|j|$. On the other hand, for $p\in W$ additives on the right of $(2)$ convert with the help of Formula 23$(1)$. Each term of the form $$\int_{{\bf R}^{n-|h(l)|}\cap Q^n}(dt^{(l)}) [(\partial ^{|q|} f(t^{(l)})\chi _{\partial Q^n_{(l)}}(t^{(l)})/\partial t_1^{q_1}...\partial t_n^{q_n}) e^{-u(p,t;\zeta )}]$$ can be further transformed with the help of $(2)$ by the considered variable $t_k$ only in the case $l_k=0$. Applying Formula $(2)$ by induction to partial derivatives $\partial ^{|j|}f/\partial t_1^{j_1}...\partial t_n^{j_n}$, $\partial ^{|j|-j_1}f/\partial t_2^{j_2}...\partial t_n^{j_n}$,...,$\partial ^{j_n}f/ \partial t_n^{j_n}$,...,$\partial f/\partial t_n$ as in §21 and using Theorem 14 and Remarks 22 we deduce $(1)$. [**26. Theorem.**]{} [*Let $f(t)\chi _{U_{1,...,1}}(t)$ be a function-original with values in ${\cal A}_r$ with $2\le r<\infty $, $2^{r-1}\le n \le 2^r-1$, $u$ is given by 2$(1,2,2.1)$ or 1$(8,8.1)$, $$(1)\quad g(t) := \int_0^{t_1}...\int_0^{t_n} f(x)dx, \mbox{ then}$$ $$(2)\quad {\cal F}^n(f\chi _{U_{1,...,1}}(t), u; p;\zeta ) = {\sf R}_{e_1} {\sf R}_{e_2}... 
{\sf R}_{e_n} {\cal F}^n (g(t)\chi _{U_{1,...,1}}(t),u; p;\zeta )$$ in the domain $Re (p)>\max (a_1,0)$, where the operators ${\sf R}_{e_j}$ are given by Formulas 25$(1.1,1.2)$.*]{} [**Proof.**]{} In view of Theorem 25 the equation $$(3)\quad {\cal F}^n(f\chi _{U_{1,...,1}}(t), u; p;\zeta ) =$$ $${\sf R}_{e_1} {\sf R}_{e_2}... {\sf R}_{e_n} {\cal F}^n (g(t),u; p;\zeta )$$ $$+ \sum_{1\le |l|; ~ 0\le m_k\le 1; ~ m_k+h_k=1; ~ h_k=sign (l_k); ~ \mbox{ for each } k=1,...,n; ~ q_1=0,..., q_n=0}$$ $$(-1)^{|(l)|} {\sf R}_{e_1}^{m_1} {\sf R}_{e_2}^{m_2}... {\sf R}_{e_n}^{m_n} {\cal F}^{n -|h(l)|} ( g(t^{(l)}),u; p;\zeta ),$$ is satisfied, since $\partial ^n g(t)/\partial t_1...\partial t_n= (f\chi_{U_{1,...,1}})(t)$, where $j_1=1$,...,$j_n=1$, $l_j=1$ for each $j=1,...,n$. Equation $(3)$ is accomplished in the same domain $Re (p)>\max (a_1,0)$, since $g(0)=0$ and $g(t)$ also fulfills conditions of Definition 1, while $a_1(g)<\max (a_1(f),0)+b$ for each $b>0$, where $a_1\in \bf R$. On the other hand, $g(t)$ is equal to zero on $\partial U_{1,...,1}$ and outside $U_{1,...,1}$ in accordance with formula $(1)$, hence all terms on the right side of Equation $(3)$ with $|l|>0$ disappear and $supp (g(t)) \subset U_{1,...,1}$. Thus we get Equation $(2)$. [**27. 
Theorem.**]{} *Suppose that $F^k(p;\zeta )$ is an image ${\cal F}^{k;t_1,...,t_k}(f(t)\chi _{U_{1,...,1}}(t),u;p;\zeta )$ of an original function $f(t)$ for $u$ given by 2$(1,2,2.1)$ in the half space $W := \{ p\in {\cal A}_r: Re (p)>a_1 \} $ with $2\le r<\infty $, $p_1=0$,...,$p_{j-1}=0$; $\zeta _1=\pi /2$,...,$\zeta _{j-1}=\pi /2$ for each $j\ge 2$ in the ${\cal A}_r$ spherical coordinates or $\zeta _1=0$,...,$\zeta _{j-1}=0$ for each $j\ge 2$ in the ${\cal A}_r$ Cartesian coordinates;* $(1)$ the integral $\int_{p_j i_j}^{\infty i_j} F^k(p_0 + z;\zeta ) dz$ converges, where $p=p_0+p_1i_1+...+p_ki_k\in {\cal A}_r$, $p_j\in {\bf R}$ for each $j=0,...,2^r-1$, $2^{r-1}\le k\le 2^r-1$, $U_{1,...,1} := \{ (t_1,...,t_k)\in {\bf R}^k: ~ t_1\ge 0, ...,t_k\ge 0 \} $. Let also $(2)$ the function $F^k(p;\zeta )$ be continuous by the variable $p\in {\cal A}_r$ on the open domain $W$, moreover, for each $w>a_1$ there exist constants ${C_w}' >0$ and $\epsilon _w >0$ such that $(3)$ $|F^k(p;\zeta ) |\le {C_w}' \exp (-\epsilon _w |p|)$ for each $p\in S_{R(n)}$, $S_R := \{ z\in {\cal A}_r: ~ Re (z)\ge w \} $, $0<R(n)<R(n+1)$ for each $n\in \bf N$, $\lim_{n\to \infty }R(n)=\infty $, where $a_1$ is fixed, $\zeta =\zeta _0i_0+...+\zeta _ki_k\in {\cal A}_r$ is marked, $\zeta _j\in {\bf R}$ for each $j=0,...,k$. Then $$(4)\quad \int_{p_j i_j}^{\infty i_j} F^k(p_0 + z;\zeta ) dz = {\sf S}_{ - e_j} {\cal F}^{k;t_1,...,t_k}(f(t)\chi _{U_{1,...,1}}(t)/\xi _j, u;p;\zeta ),$$ where $p_1=0$,...,$p_{j-1}=0$ for each $j\ge 2$; $\zeta _1=\pi /2$,...,$\zeta _{j-1}=\pi /2$ and $\xi _j=s_j(k;t)$ in the ${\cal A}_r$ spherical coordinates, while $\zeta _1= 0$,...,$\zeta _{j-1}=0$ and $\xi _j=t_j$ in the ${\cal A}_r$ Cartesian coordinates correspondingly for each $j\ge 1$. [**Proof.**]{} Take a path of integration belonging to the half space $Re (p)\ge w$ for some constant $w>a_1$. 
Then the integral $$|\int_{U_{1,...,1}}f(t)\exp (- u(p,t;\zeta ))dt|\le C\int_{U_{1,...,1}} \exp (-(p_0-a_1)(t_1+...+t_k))dt < \infty$$ converges, where $C=const>0$, $p_0\ge w$. For $t_j>0$ for each $j=1,...,k$ conditions of Lemma 2.23 [@lutsltjms] (that is of the noncommutative analog over ${\cal A}_r$ of Jordan’s lemma) are satisfied. If $t_j\to \infty $, then $s_j\to \infty $, since all $t_1$,...,$t_k$ are non-negative. Up to a set $\partial U_{1,...,1}$ of $\lambda _k$ Lebesgue measure zero we can consider that $t_1>0$,...,$t_k>0$. If $s_j\to \infty $, then also $s_1\to \infty $. The converging integral can be written as the following limit: $$(5)\quad \int_{p_j i_j}^{\infty i_j} F^k(p_0 + z;\zeta ) dz = \lim_{0<\kappa \to 0} \int_{p_j i_j}^{\infty i_j} F^k(p_0 + z;\zeta ) \exp (-\kappa |z| ) dz$$ for $1\le j\le k$, since the integral $\int_{-S\infty }^{S\infty }[F^k(w + z;\zeta ) ] dz$ is absolutely converging and the limit $\lim_{\kappa \to 0}\exp (-\kappa |z|)=1$ uniformly by $z$ on each compact subset in ${\cal A}_r$, where $S$ is a purely imaginary marked Cayley-Dickson number with $|S|=1$. 
Therefore, in the integral $$(6)\quad \int_{p_j i_j}^{\infty i_j} F^k(p_0 + z;\zeta ) dz = \int_{p_j i_j}^{\infty i_j} (\int_{U_{1,...,1}} f(t)[\exp (- u(p_0+z,t;\zeta )) ]dt) dz$$ the order of the integration can be changed in accordance with Fubini’s theorem applied componentwise to an integrand $g=g_0i_0+...+g_ni_n$ with $g_l\in {\bf R}$ for each $l=0,...,n$: $$(7)\quad \int_{p_j i_j}^{\infty i_j} F^k(p_0 + z;\zeta ) dz = \int_{U_{1,...,1}} dt (\int_{p_j i_j}^{\infty i_j} (f(t) \exp (- u(p_0 + z,t;\zeta )) dz)$$ $$= \int_{U_{1,...,1}} f(t) \{ \int_{p_j i_j}^{\infty i_j} [e^{- u(p_0+z,t;\zeta )} ] dz \} dt .$$ Generally, the condition $p_1=0$,...,$p_{j-1}=0$ and $\zeta _1=\pi /2$,...,$\zeta _{j-1}=\pi /2$ in the ${\cal A}_r$ spherical coordinates or $\zeta _1=0$,...,$\zeta _{j-1}=0$ in the ${\cal A}_r$ Cartesian coordinates for each $j\ge 2$ is essential for the convergence of such integral. We certainly have $$(8)\quad \int_{p_j i_j}^{b_j i_j} \cos (i_j^*z\xi _j+\zeta _j) dz = [\sin (\theta _j \xi _j + \zeta _j)/\xi _j ]|_{\theta _j=p_j}^{\theta _j=b_j} = [ - \cos (\theta _j\xi _j + \zeta _j + \pi /2)/\xi _j ]|_{\theta _j=p_j}^{\theta _j=b_j}$$ and $$(9)\quad \int_{p_j i_j}^{b_j i_j} \sin (i_j^*z \xi _j+\zeta _j) dz_j = [- \cos (\theta _j \xi _j + \zeta _j)/\xi _j]|_{\theta _j=p_j}^{\theta _j=b_j} = [ - \sin (\theta _j\xi _j + \zeta _j + \pi /2)/\xi _j]|_{\theta _j=p_j}^{\theta _j=b_j}$$ for each $\xi _j>0$ and $-\infty < p_j<b_j<\infty $ and $j=1,...,k$. Applying Formulas $(5-9)$ and 2$(1,2,2.1)$ or 1$(8,8.1)$ and 12$(3.1-3.7)$ we deduce that: $$\int_{p_j i_j}^{\infty i_j} [F^k(p_0 + z;\zeta )] dz = {\sf S}_{-e_j}\int_{U_{1,...,1}} [f(t)/\xi _j] \exp \{ - u(p,t;\zeta ) \} dt$$ $$= {\sf S}_{-e_j} {\cal F}^{k;t_1,...,t_k}(f(t)\chi _{U_{1,...,1}}(t)/\xi _j, u;p;\zeta ) ,$$ where $t=(t_1,...,t_k)$, $s_j=t_j+...+t_k$ for each $1\le j<k$, $s_k=t_k$, $\xi _j=s_j$ in the ${\cal A}_r$ spherical coordinates or $\xi _j=t_j$ in the ${\cal A}_r$ Cartesian coordinates. 
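The commutative one-dimensional prototypes of Theorems 26 and 27 are the classical rules that integration of the original divides the image by $p$, and integration of the image over $[p,\infty )$ corresponds to division of the original by $t$. A sympy sketch with an illustrative original $f(t)=\sin t$ (the choice of original is an assumption made only for the example):

```python
# Commutative 1D prototypes of Theorems 26 and 27.
import sympy as sp

t, p, q, x = sp.symbols('t p q x', positive=True)

f = sp.sin(t)
F = sp.laplace_transform(f, t, p, noconds=True)   # 1/(p**2 + 1)

# Theorem 26 analog: the image of g(t) = int_0^t f(x) dx equals F(p)/p.
g = sp.integrate(sp.sin(x), (x, 0, t))            # 1 - cos(t)
G = sp.laplace_transform(g, t, p, noconds=True)
assert sp.simplify(G - F/p) == 0

# Theorem 27 analog: int_p^oo F(q) dq equals the image of f(t)/t.
lhs = sp.integrate(F.subs(p, q), (q, p, sp.oo))   # pi/2 - atan(p)
rhs = sp.laplace_transform(f/t, t, p, noconds=True)
assert abs((lhs - rhs).subs(p, 2).evalf()) < 1e-12
```

In the noncommutative statements the scalar operations $F\mapsto F/p$ and $f\mapsto f/t$ are replaced by the operator products ${\sf R}_{e_1}...{\sf R}_{e_n}$ of 26$(2)$ and the shifted operator ${\sf S}_{-e_j}$ of 27$(4)$ acting on $f/\xi _j$.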
[**28. Application of the noncommutative multiparameter transform to partial differential equations.**]{} Consider a partial differential equation of the form: $(1)$ $A[f](t) = g(t)$, where $(2)$ $A[f](t) := \sum_{|j|\le \alpha } {\bf a}_j(t) (\partial ^{|j|} f(t)/\partial t_1^{j_1}...\partial t_n^{j_n}),$\ ${\bf a}_j(t)\in {\cal A}_{\kappa }$ are continuous functions, where $0\le \kappa \in {\bf Z}$, $j=(j_1,...,j_n)$, $|j| := j_1+...+j_n$, $0\le j_k\in {\bf Z}$, $\alpha $ is a natural order of a differential operator $A$, $2\le r$, $2^{r-1}\le n \le 2^r-1$. Since $s_k=s_k(n;t) =t_k+...+t_n$ for each $k=1,...,n$, the operator $A$ can be rewritten in $s$ coordinates as $(2.1)$ $A[f](t(s)) := \sum_{|j|\le \alpha } {\bf b}_j(t) (\partial ^{|j|} f(t(s))/\partial s_1^{j_1}...\partial s_n^{j_n}).$\ That is, there exists ${\bf b}_j\ne 0$ for some $j$ with $|j|=\alpha $ and ${\bf b}_j=0$ for $|j|>\alpha $, while a function $\sum_{j, |j|=\alpha } {\bf b}_j(t(s)) s_1^{j_1}...s_n^{j_n}$ is not zero identically on the corresponding domain $V$. We consider that $(D1)$ $U$ is a canonical closed subset in the Euclidean space ${\bf R}^n$, that is $U = cl ( Int (U))$, where $Int (U)$ denotes the interior of $U$ and $cl (U)$ denotes the closure of $U$. Particularly, the entire space ${\bf R}^n$ may also be taken. Under the linear mapping $(t_1,...,t_n)\mapsto (s_1,...,s_n)$ the domain $U$ transforms onto $V$. We consider a manifold $W$ satisfying the following conditions $(i-v)$. $(i)$. The manifold $W$ is continuous and piecewise $C^{\alpha }$, where $C^l$ denotes the family of $l$ times continuously differentiable functions. This means by the definition that $W$ as the manifold is of class $C^0\cap C^{\alpha }_{loc}$. That is $W$ is of class $C^{\alpha }$ on open subsets $W_{0,j}$ in $W$ and $W\setminus (\bigcup_j W_{0,j})$ has a codimension not less than one in $W$. $(ii)$. 
$W=\bigcup_{j=0}^m W_{j}$, where $W_{0} = \bigcup_k W_{0,k}$, $W_{j}\cap W_{k} = \emptyset $ for each $k\ne j$, $m = dim_{\bf R} W$, $dim_{\bf R} W_{j} = m-j$, $W_{j+1}\subset \partial W_{j}$. $(iii)$. Each $W_{j}$ with $j=0,...,m-1$ is an oriented $C^{\alpha }$-manifold, $W_{j}$ is open in $\bigcup_{k=j}^m W_{k}$. An orientation of $W_{j+1}$ is consistent with that of $\partial W_{j}$ for each $j=0,1,...,m-2$. For $j>0$ the set $W_{j}$ is allowed to be void or non-void. $(iv)$. A sequence $W^k$ of $C^{\alpha }$ orientable manifolds embedded into ${\bf R}^n$, $\alpha \ge 1$, exists such that $W^k$ uniformly converges to $W$ on each compact subset in ${\bf R}^n$ relative to the metric $dist$. For two subsets $B$ and $E$ in a metric space $X$ with a metric $\rho $ we put $(3)\quad dist (B,E) := \max \{ \sup_{b\in B} dist (\{ b \} ,E), \sup_{e\in E} dist (B,\{ e \} ) \} ,$ where $dist (\{ b \} ,E) := \inf_{e\in E} \rho (b,e)$, $dist (B, \{ e \} ) := \inf_{b\in B} \rho (b,e)$, $b\in B$, $e\in E$. Generally, $dim_{\bf R} W=m\le n$. Let $(e_1^k(x),...,e_m^k(x))$ be a basis in the tangent space $T_xW^k$ at $x\in W^k$ consistent with the orientation of $W^k$, $k\in {\bf N}$. We suppose that the sequence of orientation frames $(e^k_1(x_k),...,e_m^k(x_k))$ of $W^k$ at $x_k$ converges to $(e_1(x),...,e_m(x))$ for each $x\in W_0$, where $\lim_kx_k = x\in W_0$, while $e_1(x)$,...,$e_m(x)$ are linearly independent vectors in ${\bf R}^n$. $(v)$. Let a sequence of Riemann volume elements $\lambda _k$ on $W^k$ (see §XIII.2 [@zorich]) induce a limit volume element $\lambda $ on $W$, that is, $\lambda (B\cap W) = \lim_{k\to \infty } \lambda _k (B\cap W^k)$ for each compact canonical closed subset $B$ in ${\bf R}^n$, consequently, $\lambda (W\setminus W_0)=0$. We shall consider surface integrals of the second kind, i.e. over the oriented surface $W$ (see $(iv)$), where each $W_j$, $j=0,...,m-1$ is oriented (see also §XIII.2.5 [@zorich]). 
Recall that a subset $V$ in ${\bf R}^n$ is called convex if from $a, b\in V$ it follows that $\epsilon a + (1-\epsilon )b\in V$ for each $\epsilon \in [0,1]$. $(vi)$. Let a vector $w\in Int (U)$ exist so that $U-w$ is convex in ${\bf R}^n$ and let $\partial U$ be connected. Suppose that the boundary $\partial U$ of $U$ satisfies Conditions $(i-v)$ and $(vii)$ let the orientations of $\partial U^k$ and $U^k$ be consistent for each $k\in {\bf N}$ (see Proposition 2 and Definition 3 [@zorich]). Particularly, the Riemann volume element $\lambda _k$ on $\partial U^k$ is consistent with the Lebesgue measure on $U^k$ induced from ${\bf R}^n$ for each $k$. This induces the measure $\lambda $ on $\partial U$ as in $(v)$. Also the boundary conditions are imposed: $(4)$ $f(t)|_{\partial U} = f_0(t'),$ $(\partial ^{|q|}f(t)/\partial s_1^{q_1}...\partial s_n^{q_n} )|_{\partial U} = f_{(q)}(t')$ for $|q|\le \alpha -1$, where $s=(s_1,...,s_n) \in {\bf R}^n$, $(q)=(q_1,...,q_n)$, $|q| =q_1+...+q_n$, $0\le q_k\in {\bf Z}$ for each $k$, $t\in \partial U$ is denoted by $t'$, $f_0$, $f_{(q)}$ are given functions. Generally these conditions may be excessive, so one uses some of them or their linear combinations (see $(5.1)$ below). Frequently, the boundary conditions $(5)$ $f(t)|_{\partial U} = f_0(t'),$ $(\partial ^lf(t)/\partial \nu ^l)|_{\partial U} = f_l(t')$ for $1\le l\le \alpha -1$ are also used, where $\nu $ denotes a real variable along a unit external normal to the boundary $\partial U$ at a point $t'\in \partial U_0$. Using partial differentiation in local coordinates on $\partial U$ and $(5)$ one can calculate in principle all other boundary conditions in $(4)$ almost everywhere on $\partial U$. Suppose that a domain $U_1$ and its boundary $\partial U_1$ satisfy Conditions $(D1,i-vii)$ and $g_1=g\chi _{U_1}$ is an original on ${\bf R}^n$ with its support in $U_1$. 
Then any original $g$ on ${\bf R}^n$ gives the original $g\chi _{U_2}=: g_2$ on ${\bf R}^n$, where $U_2={\bf R}^n\setminus U_1$. Therefore, $g_1+g_2$ is the original on ${\bf R}^n$, when $g_1$ and $g_2$ are two originals with their supports contained in $U_1$ and $U_2$ correspondingly. Now take a new domain $U$ satisfying Conditions $(D1,i-vii)$ and $(D2-D5)$: $(D2)$ $U\supset U_1$ and $\partial U\subset \partial U_1$; $(D3)$ if a straight line $\xi $ containing a point $w_1$ (see $(vi)$) intersects $\partial U_1$ at two points $y_1$ and $y_2$, then only one point either $y_1$ or $y_2$ belongs to $\partial U$, where $w_1\in U_1$, $U-w_1$ and $U_1-w_1$ are convex; if $\xi $ intersects $\partial U_1$ only at one point, then it intersects $\partial U$ at the same point. That is, $(D4)$ any straight line $\xi $ through the point $w_1$ either does not intersect $\partial U$ or intersects the boundary $\partial U$ only at one point. Now take $g$ with $supp (g)\subset U$; then $supp (g \chi _{U_1})\subset U_1$. Therefore, any problem $(1)$ on $U_1$ can be considered as the restriction of the problem $(1)$ defined on $U$, satisfying $(D1-D4,i-vii)$. Any solution $f$ of $(1)$ on $U$ with the boundary conditions on $\partial U$ gives the solution as the restriction $f|_{U_1}$ on $U_1$ with the boundary conditions on $\partial U$. Henceforward, we suppose that the domain $U$ satisfies Conditions $(D1,D4,i-vii)$, which are rather mild and natural. In particular, for $Q^n$ this means that either $a_k = - \infty $ or $b_k = + \infty $ for each $k$. Another example is: $U_1$ is a ball in ${\bf R}^n$ with the center at zero, $U=U_1\cup ({\bf R}^n\setminus U_{1,...,1})$, $w_1=0$; or $U=U_1\cup \{ t\in {\bf R}^n: ~ t_n\ge - \epsilon \} $ with a marked number $0<\epsilon <1/2$. But subsets $\partial U_{(l)}$ in $\partial U$ can also be specified, if the boundary conditions demand it. 
The complex field has the natural realization by $2\times 2$ real matrices so that ${\bf i} = {{~0 ~1} \choose {-1 ~ 0}}$, ${\bf i}^2= - {{~1 ~0} \choose {~ 0 ~ 1}}$. The quaternion skew field, as is well known, can be realized with the help of $2\times 2$ complex matrices with the generators $I = {{~1 ~0} \choose {~ 0 ~ 1}}$, $J = {{~0 ~1} \choose {-1 ~ 0}}$, $K = {{ {\bf i} ~ ~ 0} \choose {0 ~ - {\bf i} }}$, $L = {{0 ~ - {\bf i}} \choose { - {\bf i} ~ 0}}$, or equivalently by $4\times 4$ real matrices. Considering matrices with entries in the Cayley-Dickson algebra ${\cal A}_v$ one gets the complexified or quaternionified Cayley-Dickson algebras $({\cal A}_v)_{\bf C}$ or $({\cal A}_v)_{\bf H}$ with elements $z=aI+b{\bf i}$ or $z=aI+bJ+cK+eL$, where $a, b, c, e \in {\cal A}_v$, such that each $a\in {\cal A}_v$ commutes with the generators ${\bf i}$, $I$, $J$, $K$ and $L$. When $r=2$, $f$ and $g$ have values in ${\cal A}_2={\bf H}$, $2\le n\le 4$, and the coefficients of the differential operators belong to ${\cal A}_2$, the multiparameter noncommutative transform operates in the associative case, so that ${\cal F}^n(af)=a{\cal F}^n(f)$\ for each $a\in {\bf H}$. The left linearity property ${\cal F}^n(af)=a{\cal F}^n(f)$ for any $a\in {\bf H}_{J,K,L}$ is also accomplished for either operators with coefficients in ${\bf R}$ or ${\bf C}_{\bf i} = I {\bf R}\oplus {\bf i}{\bf R}$ or ${\bf H}_{J,K,L} = I {\bf R} \oplus J {\bf R} \oplus K {\bf R} \oplus L {\bf R}$ and $f$ with values in ${\cal A}_v$ with $1\le n\le 2^v-1$; or vice versa $f$ with values in ${\bf C}_{\bf i}$ or ${\bf H}_{J,K,L}$ and coefficients $a$ in ${\cal A}_v$ but with $1\le n\le 4$. Thus all such variants of operator coefficients ${\bf a}_j$ and values of functions $f$ can be treated by the noncommutative transform. Henceforward, we suppose that these variants take place. We suppose that $g(t)$ is an original function, that is, satisfying Conditions 1$(1-4)$. 
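The matrix realization of the quaternion generators given above can be checked numerically; the following numpy sketch verifies the defining relations $J^2=K^2=L^2=-I$ and $JK=L$ for the stated $2\times 2$ complex matrices:

```python
# Numerical check of the 2x2 complex-matrix realization of the
# quaternion generators I, J, K, L stated in the text.
import numpy as np

I = np.array([[1, 0], [0, 1]], dtype=complex)
J = np.array([[0, 1], [-1, 0]], dtype=complex)
K = np.array([[1j, 0], [0, -1j]])
L = np.array([[0, -1j], [-1j, 0]])

for M in (J, K, L):
    assert np.allclose(M @ M, -I)   # J^2 = K^2 = L^2 = -I

assert np.allclose(J @ K, L)        # J K = L
assert np.allclose(K @ L, J)        # K L = J
assert np.allclose(K @ J, -L)       # anticommutation: K J = -L
```

The same relations carry over entrywise to the quaternionified algebra $({\cal A}_v)_{\bf H}$, since each $a\in {\cal A}_v$ commutes with the generators.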
Consider at first the case of constant coefficients ${\bf a}_j$ on a quadrant domain $Q^n$. Let $Q^n$ be oriented so that ${\sf a}_k = - \infty $ and $b_k = + \infty $ for each $k\le n - \kappa $; either ${\sf a}_k = - \infty $ or $b_k = + \infty $ for each $k>n - \kappa $, where $0\le \kappa \le n$ is a marked integer number. If conditions of Theorem 25 are satisfied, then $$(6)\quad {\cal F}^n(A[f](t), u;p;\zeta ) = \sum_{|j|\le \alpha } {\bf a}_j \{ [{\sf R}_{e_1}(p)]^{j_1} [{\sf R}_{e_2}(p)]^{j_2}... [{\sf R}_{e_n}(p)]^{j_n} {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta )$$ $$+ \sum_{1\le |(lj)|; ~ m_k+q_k+h_k=j_k; ~ 0\le m_k; ~ 0 \le q_k; ~ h_k= sign (l_kj_k); ~ q_k=0 \mbox{ for } l_kj_k=0 \mbox{, for each } k=1,...,n; ~ (l)\in \{ 0, 1, 2 \} ^n}$$ $$(-1)^{|(lj)|} [{\sf R}_{e_1}(p)]^{m_1} [{\sf R}_{e_2}(p)]^{m_2}... [{\sf R}_{e_n}(p)]^{m_n} {\cal F}^{n -|h(lj)|} (\partial ^{|q|} f(t^{(lj)})\chi _{\partial Q^n_{(lj)}}(t^{(lj)})/\partial t_1^{q_1}...\partial t_n^{q_n},u; p;\zeta ) \}$$ $$= {\cal F}^n (g(t)\chi _{Q^n}(t),u;p;\zeta )$$ for $u(p,t;\zeta )$ in the ${\cal A}_r$ spherical or ${\cal A}_r$ Cartesian coordinates, where the operators ${\sf R}_{e_j}(p)$ are given by Formulas 25$(1.1)$ or 25$(1.2)$. Here $(l)$ enumerates faces $\partial Q^n_{(l)}$ in $\partial Q^n_{k-1}$ for $|h(l)|=k\ge 1$, so that $\partial Q^n_{k-1} = \bigcup_{|h(l)|=k} Q^n_{(l)}$, $\partial Q^n_{(l)}\cap \partial Q^n_{(m)}=\emptyset $ for each $(l)\ne (m)$ in accordance with §25 and the notation of this section. Therefore, Equation $(6)$ shows that the boundary conditions are necessary: $(\partial ^{|q|} f(t^{(l)})/\partial t_1^{q_1}...\partial t_n^{q_n})|_{\partial Q^n_{(l)}}$ for $|j|\le \alpha $, $|(lj)|\ge 1$, ${\bf a}_j\ne 0$, $q_k=0$ for $l_kj_k=0$, $m_k+q_k+h_k=j_k$, $~ h_k = sign (l_kj_k)$, $ ~ k=1,...,n$, $t^{(l)}\in \partial Q^n_{(l)}$. 
But $dim_{\bf R} \partial Q^n =n-1$ for $\partial Q^n\ne \emptyset $; consequently, $(\partial ^{|q|} f(t^{(l)})/\partial t_1^{q_1}...\partial t_n^{q_n})|_{\partial Q^n_{(l)}}$ can be calculated once one knows $(\partial ^{|\beta |} f(t^{(l)})/\partial t_{\gamma (1)}^{\beta _1}...\partial t_{\gamma (m)}^{\beta _m})|_{\partial Q^n_{(l)}}$ for $|\beta | = |q|$, where $\beta = (\beta _1,...,\beta _m)$, $m=|h(l)|$, a number $\gamma (k)$ corresponds to $l_{\gamma (k)}>0$, since $q_k=0$ for $l_k=0$ and $q_k>0$ only for $l_k j_k >0$ and $k>n-\kappa $. That is, $t_{\gamma (1)}$,...,$t_{\gamma (m)}$ are coordinates in ${\bf R}^n$ along unit vectors orthogonal to $\partial Q^n_{(l)}$. Take a sequence $U^k$ of sub-domains $U^k\subset U^{k+1}\subset U$ for each $k\in {\bf N}$ so that each $U^k = \bigcup_{l=1}^{m(k)} Q^n_{k,l}$ is the finite union of quadrants $Q^n_{k,l}$, $m(k)\in {\bf N}$. We choose them so that any two different quadrants intersect only in their borders, each $U^k$ satisfies the same conditions as $U$ and $(7)\quad \lim_{k\to \infty } dist (U,U^k)=0$. Therefore, Equation $(6)$ can be written for the more general domain $U$ as well. For $U$ instead of $Q^n$ we get a face $\partial U_{(l)}$ instead of $\partial Q^n_{(l)}$ and local coordinates $\tau _{\gamma (1)}$,...,$\tau _{\gamma (m)}$ orthogonal to $\partial U_{(l)}$ instead of $t_{\gamma (1)}$,...,$t_{\gamma (m)}$ (see Conditions $(i-iii)$ above). Thus the sufficient boundary conditions are: $(5.1)$ $(\partial ^{|\beta |} f(t^{(lj)})/\partial \tau _{\gamma (1)}^{\beta _1}...\partial \tau _{\gamma (m)}^{\beta _m})|_{\partial U_{(lj)}} = \phi _{\beta ,(lj)}(t^{(lj)})$\ for $|\beta | = |q|$, where $m=|h(lj)|$, $|j|\le \alpha $, $|(lj)|\ge 1$, ${\bf a}_j\ne 0$, $q_k=0$ for $l_kj_k=0$, $m_k+q_k+h_k=j_k$, $~ h_k = sign (l_kj_k)$, $0\le q_k\le j_k-1$ for $k>n-\kappa $; $\phi _{\beta ,(l)}(t^{(l)})$ are known functions on $\partial U_{(l)}$, $t^{(l)}\in \partial U_{(l)}$. 
In the half-space $t_n\ge 0$ only $(5.2)$ $\partial ^{\beta }f(t)/\partial t_n^{\beta }|_{t_n=0}$\ are necessary for $\beta =|q|<\alpha $ and $q$ as above. Depending on the coefficients of the operator $A$ and the domain $U$ some boundary conditions may be dropped, when the corresponding terms vanish in Formula $(6)$. For example, if $A= \partial ^2/\partial t_1\partial t_2$, $ ~ U=U_{1,1}$, $ ~ n=2$, then $\partial f/\partial \nu |_{\partial U_0}$ is not necessary; the boundary condition $f|_{\partial U}$ alone is sufficient. If $U={\bf R}^n$, then no boundary condition appears. Note that $(5.3)$ ${\cal F}^0(f(a);u;p;\zeta ) =f(a)e^{-u(p,a;\zeta )}$,\ which happens in $(6)$, when $a=t^{(l)}$ and $|h(l)|=n$. The conditions in $(5.1)$ are given on submanifolds $\partial U_{(l)}$ in $\partial U$ that are disjoint for different $(l)$, and the partial derivatives are taken along coordinates in ${\bf R}^n$ orthogonal to them, so they are correctly posed. In the ${\cal A}_r$ spherical coordinates, due to Corollary 4.1, Equation $(6)$ with different values of the parameter $\zeta $ gives a system of linear equations relative to the unknown functions ${\sf S}_{(m)} {\cal F}^n(f(t),u; p;\zeta )$, from which ${\cal F}^n(f(t),u; p;\zeta )$ can be expressed through a family $$\{ {\sf S}_{(m)} {\cal F}^n(g(t),u;p;\zeta ); ~ {\sf S}_{(m)} {\cal F}^{n -|h(l)|} (\partial ^{|q|} f(t^{(l)})\chi _{\partial Q^n_{(l)}}(t^{(l)})/\partial t_1^{q_1}...\partial t_n^{q_n},u; p;\zeta ) : (m) \in {\bf Z}^n \}$$ and polynomials of $p$, where $\bf Z$ denotes the ring of integers and the corresponding term ${\cal F}^{n -|h(l)|}$ is zero when $t_j^{(l)}=\pm \infty $ for some $j$. In the ${\cal A}_r$ Cartesian coordinates the periodicity properties are generally not as good, so the family may be infinite. 
This means that ${\cal F}^n(f(t),u; p;\zeta )$ can be expressed in the form: $$(8)\quad {\cal F}^n(f(t),u; p;\zeta ) = \sum_{(m)} {\sf P}_{(m)}(p) {\sf S}_{(m)} {\cal F}^n(g(t),u;p;\zeta )$$ $$+ \sum_{j,(q),(l), |(l)|\ge 1, (m)} {\sf P}_{j,(q),(l),(m)} (p) {\sf S}_{(m)} {\cal F}^{n -|h(lj)|} (\partial ^{|q|} f(t^{(lj)})\chi _{\partial U_{(lj)}}(t^{(lj)})/\partial t_1^{q_1}...\partial t_n^{q_n},u; p;\zeta ),$$ where ${\sf P}_{(m)}(p)$ and ${\sf P}_{j,(q),(l),(m)}(p)$ are quotients of polynomials of the real variables $p_0, p_1,...,p_n$. The sum in $(8)$ is finite in the ${\cal A}_r$ spherical coordinates and may be infinite in the ${\cal A}_r$ Cartesian coordinates. To Equation $(8)$ we apply the inversion theorem for the noncommutative multiparameter transform. This gives an expression of $f$ through $g$ as a particular solution of the problem given by $(1,2,5.1)$, prescribed by Formulas 6.1$(1)$ and 8.1$(1)$. For ${\cal F}^n(f;u;p;\zeta )$ Conditions 8$(1,2)$ are satisfied, since ${\sf P}_{(m)}(p)$ and ${\sf P}_{j,(q),(l),(m)}(p)$ are quotients of polynomials with real, complex or quaternion coefficients and real variables, and the $G^n$ and ${\cal F}^{n-|h(l)|} $ terms on the right of $(6)$ satisfy them as well. Thus we have demonstrated the theorem. [**28.1. Theorem.**]{} [*Suppose that ${\cal F}^n(f;u;p;\zeta )$ given by the right side of $(8)$ satisfies Conditions 8$(3)$. Then Problem $(1,2,5.1)$ has a solution in the class of original functions, when $g$ and $\phi _{\beta ,(l)}$ are originals, or in the class of generalized functions, when $g$ and $\phi _{\beta ,(l)}$ are generalized functions.*]{} Note that a general solution of $(1,2)$ is the sum of its particular solution and a general solution of the homogeneous problem $Af=0$. 
If $\phi _{\beta ,(l)}= \phi ^1_{\beta ,(l)}+\phi ^2_{\beta ,(l)}$, $g=g_1+g_2$, $f=f_1+f_2$, $Af_j=g_j$ and $f_j$ on $\partial U_j$ satisfies $(5.1)$ with $\phi ^j_{\beta ,(l)}$, $j=1,$ $2$, then $Af=g$ and $f$ on $\partial U$ satisfies Conditions $(5.1)$ with $\phi _{\beta ,(l)}$. [**28.2. Example.**]{} We take the partial differential operator of the second order $$A= \sum_{h,m=1}^n {\bf a}_{h,m}\partial ^2/\partial \tau _h\partial \tau _m + \sum_{h=1}^n \alpha _h\partial /\partial \tau _h + \omega ,$$ where the quadratic form $a(\tau ) := \sum_{h,m} {\bf a}_{h,m} \tau _h\tau _m$ is non-degenerate and not negative definite, since otherwise we can consider $-A$. Suppose that ${\bf a}_{h,m}={\bf a}_{m,h}\in {\bf R}$, $\alpha _h, \tau _h\in {\bf R}$ for each $h, m =1,...,n$, $\omega \in {\cal A}_3$. Then we reduce this form $a(\tau )$ to a sum of squares by an invertible $\bf R$-linear operator $C$. Thus $$(9)\quad A = \sum_{h=1}^n {\bf a}_h \partial ^2/ \partial t_h^2 +\sum_{h=1}^n \beta _h \partial /\partial t_h + \omega ,$$ where $(t_1,...,t_n) = (\tau _1,...,\tau _n)C$ with real ${\bf a}_h$ and $\beta _h$ for each $h$. If the coefficients of $A$ are constant, then using a multiplier of the type $\exp (\sum_h \epsilon _h s_h)$ it is possible to reduce this equation to the case in which $\beta _h=0$ whenever ${\bf a}_h\ne 0$ (see §3, Chapter 4 in [@rubinstb]). Then we can simplify the operator with the help of a linear transformation of coordinates and consider that only $\beta _n$ may be non-zero if ${\bf a}_n=0$. For $A$ with constant coefficients, as is well known from algebra, one can choose a constant invertible real matrix $(c_{h,m})_{h,m =1,...,k}$ corresponding to $C$ so that ${\bf a}_h= 1$ for $h\le k_+$ and ${\bf a}_h= - 1$ for $h>k_+$, where $0< k_+ \le n$. For $k_+ =n$ and $\beta =0$ the operator is elliptic, for $k_+=n-1$ with ${\bf a}_n=0$ and $\beta _n\ne 0$ the operator is parabolic, and for $0<k_+<n$ and $\beta =0$ the operator is hyperbolic. 
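This classification depends only on the inertia $(k_+,k_-,k_0)$ of the quadratic form $a(\tau )$, which by Sylvester's law of inertia does not depend on the choice of $C$. A minimal stdlib-only Python sketch (the helper is ours, not from the text) computes the inertia by Lagrange's congruence reduction and reproduces the elliptic, hyperbolic and parabolic examples:

```python
# Sketch (illustrative helper, not from the text): compute the inertia
# (k_plus, k_minus, k_zero) of a symmetric real matrix by congruence
# reduction; Sylvester's law of inertia makes the result independent of C.

def inertia(A, tol=1e-12):
    n = len(A)
    A = [list(map(float, row)) for row in A]
    for k in range(n):
        if abs(A[k][k]) < tol:
            for j in range(k + 1, n):
                if abs(A[j][j]) > tol:        # symmetric row/column swap
                    A[k], A[j] = A[j], A[k]
                    for row in A:
                        row[k], row[j] = row[j], row[k]
                    break
            else:
                for j in range(k + 1, n):     # build a pivot from a_{kj} != 0
                    if abs(A[k][j]) > tol:
                        for m in range(n):
                            A[m][k] += A[m][j]
                        for m in range(n):
                            A[k][m] += A[j][m]
                        break
        piv = A[k][k]
        if abs(piv) < tol:
            continue
        for j in range(k + 1, n):             # congruent elimination step
            f = A[j][k] / piv
            for m in range(n):
                A[j][m] -= f * A[k][m]
            for m in range(n):
                A[m][j] -= f * A[m][k]
    d = [A[i][i] for i in range(n)]
    return (sum(x > tol for x in d), sum(x < -tol for x in d),
            sum(abs(x) <= tol for x in d))

# Laplace part: elliptic; wave part: hyperbolic; degenerate part: parabolic.
assert inertia([[1, 0], [0, 1]]) == (2, 0, 0)
assert inertia([[1, 0], [0, -1]]) == (1, 1, 0)
assert inertia([[0, 1], [1, 0]]) == (1, 1, 0)   # mixed term d^2/dt1 dt2
assert inertia([[1, 0], [0, 0]]) == (1, 0, 1)
```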
Then Equation $(6)$ simplifies: $$(10) \quad {\cal F}^n (A[f](t), u;p;\zeta ) = \sum_{h=1}^n {\bf a}_h \{ [{\sf R}_{e_h}(p)]^2 {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta )$$ $$+ \sum_{l_h \in \{ 1, 2 \}; (l)=l_he_h } (-1)^{l_h} [ {\cal F}^{n - 1} (\partial f(t^{(l)})\chi _{\partial Q^n_{(l)}}(t^{(l)})/\partial t_h,u; p;\zeta )$$ $$+ [{\sf R}_{e_h}(p)] {\cal F}^{n - 1} (f(t^{(l)})\chi _{\partial Q^n_{(l)}}(t^{(l)}),u; p;\zeta ) ] \}$$ $$+ \beta _n \{ {\cal F}^{n-1; t^{n,2}} (f(t^{n,2})\chi _{\partial Q^n_{2e_n}}(t^{n,2}), u;p;\zeta ) - {\cal F}^{n-1; t^{n,1}} (f(t^{n,1})\chi _{\partial Q^n_{e_n}}(t^{n,1}), u;p;\zeta )$$ $$+ [{\sf R}_{e_n}(p)] {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta ) \} + \omega {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta ) = {\cal F}^n (g(t),u;p;\zeta )$$ in the ${\cal A}_r$ spherical or ${\cal A}_r$ Cartesian coordinates, where $e_h=(0,...,0,1,0,...,0) \in {\bf R}^n$ with $1$ on the $h$-th place, ${\sf S}_0=I$ is the unit operator, the operators ${\sf R}_{e_h}(p)$ are given by Formulas 25$(1.1)$ or 25$(1.2)$ respectively. We denote by $\delta _S(x)$ the delta function of a continuous piecewise differentiable manifold $S$ in ${\bf R}^n$ satisfying conditions $(i-vi)$ so that $$(\Delta)\quad \int_{{\bf R}^n} \eta (x) \delta _S(x)dx = \int_S \eta (y)\lambda _m(dy)$$ for a continuous integrable function $\eta (x)$ on ${\bf R}^n$, where $dim (S)=m<n$, $\lambda _m(dy)$ denotes a volume element on the $m$ dimensional surface $S$ (see Condition $(v)$ above). Thus we can consider a non-commutative multiparameter transform on $\partial U$ for an original $f$ on $U$ given by the formula: $(11)$ ${\cal F}^{n-1; t'}_{\partial U} (f(t')\chi _{\partial U}(t'), u;p;\zeta ) = {\cal F}^{n; t}(f(t)\delta _{\partial U}(t), u;p;\zeta )$.\ Therefore, terms like ${\cal F}^{n-1}$ in $(10)$ correspond to the boundary $\partial Q^n$. 
They can be simplified: $$(12) \quad \beta _n \{ {\cal F}^{n-1; t^{n,2}} (f(t^{n,2})\chi _{\partial Q^n_{2e_n}}(t), u;p;\zeta ) - {\cal F}^{n-1; t^{n,1}} (f(t^{n,1})\chi _{\partial Q^n_{e_n}}(t), u;p;\zeta ) \}$$ $$={\cal F}^{n-1; t'}_{\partial Q^n} (\beta (t') f(t')\chi _{\partial Q^n}(t'), u;p;\zeta ),$$ where $\beta (t')$ is a piecewise constant function on $\partial Q^n$ equal to $\beta _n$ on the corresponding faces of $Q^n$ orthogonal to $e_n$ given by the condition: either $t_n={\sf a}_n$ or $t_n=b_n$; and $\beta (t')=0$ otherwise. If $ {\sf a}_k = - \infty $ or $b_k= +\infty $, then the corresponding term disappears. If ${\bf R}^n$ is embedded into ${\cal A}_r$ with $2^{r-1}\le n\le 2^r-1$ as ${\bf R}i_1\oplus ... \oplus {\bf R}i_n$, then this induces the corresponding embedding $\Theta $ of $Q^n$ or $U$ into ${\cal A}_r$. This permits a further simplification: $$(12.1)\quad \sum_{h=1}^n {\bf a}_h \{ \sum_{l_h \in \{ 1, 2 \}; (l)=l_he_h } (-1)^{l_h} [ [{\sf R}_{e_h}(p)] {\cal F}^{n - 1} (f(t^{(l)})\chi _{\partial Q^n_{(l)}}(t^{(l)}),u; p;\zeta )$$ $$+ {\cal F}^{n - 1} (\partial f(t^{(l)})\chi _{\partial Q^n_{(l)}}(t^{(l)})/\partial t_h,u; p;\zeta ) ] \}$$ $$= {\cal F}^{n-1}_{\partial Q^n} (a(t') (\partial f(t')\chi _{\partial Q^n_0}(t')/\partial \nu ),u(p,t';\zeta ); p; \zeta )$$ $$+ {\cal F}^{n-1}_{\partial Q^n} ({\sf P}(t') f(t')\chi _{\partial Q^n_0}(t'),u; p; \zeta ),$$ where $\nu = \nu (t')$ denotes a real coordinate along an external unit normal $M(t')$ to $\Theta (\partial U)$ at $\Theta (t')$, so that $M(t')$ is a purely imaginary Cayley-Dickson number, $a(t')$ is a piecewise constant function equal to ${\bf a}_h$ for the corresponding $t'$ in the face $\partial Q^n_{l_he_h}$ with $l_h>0$; ${\sf P}(t',p) := {\sf P}(t') := {\sf R}_{e_h}(p)$ for $t' \in \partial Q^n_{l_he_h}$, $h=1,...,n$, since $\sin (\psi +\pi ) = - \sin (\psi )$ and $\cos (\psi +\pi ) = - \cos (\psi )$ for each $\psi \in {\bf R}$. 
Certainly the operator-valued function ${\sf P}(t')$ has a piecewise continuous extension ${\sf P}(t)$ on $Q^n$. That is $$(13)\quad {\cal F}^{n-1}_{\partial U} ( \xi (t') f(t') \chi _{\partial U}(t'),u(p,t';\zeta ); p; \zeta )$$ $$:= \int_{{\bf R}^n} \xi (t) f(t) \delta _{\partial U}(t) \exp \{ - u(p,t;\zeta ) \} dt$$ for an integrable operator-valued function $\xi (t)$ so that $[\xi (t) f(t)]$ is an original on $U$, whenever this integral exists. For example, this holds when $\xi $ is a linear combination of shift operators ${\sf S}_{(m)}$ with coefficients $\epsilon _{(m)}(t,p)$ such that each $\epsilon _{(m)}(t,p)$, as a function of $t\in U$ for each $p\in W$, and $f(t)$ are originals, or $f$ and $g$ are generalized functions. For two quadrants $Q_{m,l}$ and $Q_{m,k}$ intersecting in a common face $\Upsilon $, the external normals to $\Upsilon $ for these two quadrants have opposite directions. Thus the corresponding summands of the integrals in ${\cal F}^{n-1}_{\partial Q_{m,l}}$ and ${\cal F}^{n-1}_{\partial Q_{m,k}}$ restricted to $\Upsilon $ cancel in ${\cal F}^{n-1}_{\partial (Q_{m,l}\cup Q_{m,k})}$. Using Conditions $(iv-vii)$ and the sequence $U^m$ and quadrants $Q_{m,l}^n$ outlined above we get for a boundary problem on $U$ instead of $Q^n$ the following equation: $$(14) \quad {\cal F}^n(A[f](t), u;p;\zeta ) = \{ \sum_{h=1}^n {\bf a}_h [{\sf R}_{e_h}(p)]^2 {\cal F}^n (f(t)\chi _U(t),u; p;\zeta ) \} +$$ $$\{ {\cal F}^{n-1}_{\partial U} ([\beta (t') + {\sf P}(t',p)] f(t')\chi _{\partial U_0}(t'), u; p; \zeta ) + {\cal F}^{n-1}_{\partial U} ({\sf a}(t') (\partial f(t')\chi _{\partial U_0}(t')/\partial \nu ),u; p; \zeta ) \}$$ $$+ {\cal F}^n (\beta _n [{\sf R}_{e_n}(p)] f(t)\chi _U(t),u; p;\zeta ) + \omega {\cal F}^n (f(t)\chi _U(t),u; p;\zeta ) = {\cal F}^n (g(t),u;p;\zeta ) ,$$ where ${\sf P}(t',p):={\sf P}(t') := \sum_{h=1}^n {\bf a}_h [{\sf R}_{e_h}(p)](\partial \nu /\partial t_h)$ for each $t'\in \partial U_0$ (see also Stokes’ formula in §XIII.3.4 [@zorich] and Formulas $(14.2,14.3)$ below). 
Particularly, for the quadrant domain $Q^n$ we have $a(t)={\bf a}_h$ for $t\in \partial Q^n_{l_he_h}$ with $l_h>0$, $\beta (t)=\beta _n$ for $t\in \partial Q^n_{l_ne_n}$ with $l_n>0$ and zero otherwise. The boundary conditions are: $(14.1)$ $f(t)|_{\partial U_0}=\phi (t)|_{\partial U_0}$, $\quad (\partial f(t)/\partial \nu )|_{\partial U_0} = \phi _1(t)|_{\partial U_0}$.\ The functions ${\sf a}(t)$ and ${\sf \beta }(t)$ can be calculated from $ \{ {\bf a}_h: ~ h \} $ and $\beta _n$ almost everywhere on $\partial U$ by a change of variables from $(t_1,...,t_n)$ to $(y_1,...,y_{n-1},y_n)$, where $(y_1,...,y_n)$ are local coordinates in $\partial U_0$ in a neighborhood of a point $t' \in \partial U_0$, $y_n=\nu $, since $\partial U_0$ is of class $C^1$. Consider the differential form $\sum_{h=1}^n (-1)^{n-h} {\bf a}_h dt_1\wedge ... \wedge \widehat{dt_h} \wedge ... \wedge dt_n = a dy_1\wedge ... \wedge dy_{n-1}$ and its exterior product with $d\nu = \sum_{h=1}^n (\partial \nu /\partial t_h)dt_h$; then $(14.2)$ ${\sf a}(t)|_{\partial U_0} = \sum_{h=1}^n {\bf a}_h (\partial \nu /\partial t_h)|_{\partial U_0} ~ ~$ and $(14.3)$ $\beta (t)|_{\partial U_0} = \beta _n\chi _{U_{e_n}\cup U_{2e_n}}(\partial \nu /\partial t_n) |_{\partial U_0}$.\ This is sufficient for the calculation of ${\cal F}^{n-1}_{\partial U}$. [**28.3. Inversion procedure in the ${\cal A}_r$ spherical coordinates.**]{} When the boundary conditions 28$(5.1)$ are specified, Equation 28$(6)$ can be resolved relative to ${\cal F}^n (f(t)\chi _U(t),u(p,t;\zeta ); p;\zeta )$, in particular for Equations 28.2$(14,14.1)$ as well. The operators ${\sf S}_{e_j}$ and $T_j$ of §12 have the periodicity properties: ${\sf S}_{e_j}^{4+k}F(p;\zeta )= {\sf S}_{e_j}^kF(p;\zeta ) ~$ and $~ T_j^{4+k}F(p;\zeta ) = T_j^kF(p;\zeta ) ~$, $~ {\sf S}_{e_1}^{2+k}F(p;\zeta ) = - {\sf S}_{e_1}^kF(p;\zeta ) ~$ and $~ T_1^{2+k}F(p;\zeta ) = - T_1^kF(p;\zeta )$ for each positive integer $k$ and $1\le j\le 2^r-1$. 
We put $(6.1)$ ${\bf F}_j(p;\zeta ) := ({\sf S}_{e_j}^4 - {\sf S}_{e_{j+1}}^4) F(p;\zeta )$ for any $1\le j\le 2^r-2$, $(6.2)$ ${\bf F}_{2^r-1}(p;\zeta ) := {\sf S}_{e_{2^r-1}}^4 F(p;\zeta )$. Then from $(6)$ we get the following equations: $$(6.3) \sum_{|j|\le \alpha } {\bf a}_j \{ [p_0+p_1T_1]^{j_1} [p_0+p_1T_1+p_2T_2]^{j_2}$$ $$... [p_0+p_1T_1+...+p_nT_n]^{j_n} \} |_{p_b=0 ~ \forall b>w} ~ {\bf F}_w(p;\zeta ) = \{ - \sum_{|j|\le \alpha } {\bf a}_j$$ $$\sum_{1\le |(lj)|; ~ m_k+q_k+h_k=j_k; ~ 0\le m_k; ~ 0 \le q_k; ~ h_k= sign (l_kj_k); ~ q_k=0 \mbox{ for } l_kj_k=0 \mbox{, for each } k=1,...,n; ~ (l)\in \{ 0, 1, 2 \} ^n}$$ $$(-1)^{|(lj)|} \{ [p_0+p_1T_1]^{m_1} [p_0+p_1T_1+p_2T_2]^{m_2}... [p_0+p_1T_1+...+p_nT_n]^{m_n} \} |_{p_b=0 ~ \forall b>w}$$ $${\cal F}^{n -|h(lj)|}_w (\partial ^{|q|} f(t^{(lj)})\chi _{\partial Q^n_{(lj)}}(t^{(lj)})/\partial t_1^{q_1}...\partial t_n^{q_n},u; p;\zeta ) \} + {\bf G}_w(p;\zeta )$$ for each $w=1,...,n$, where $F(p;\zeta ) = {\cal F}^n (f(t)\chi _{Q^n}(t),u; p;\zeta )$ and $G(p;\zeta ) ={\cal F}^n (g(t)\chi _{Q^n}(t),u; p;\zeta )$. These equations are resolved for each $w=1,...,n$ as it is indicated below. Taking the sum one gets the result $(6.4)$ $F(p;\zeta )={\bf F}_1(p;\zeta )+...+{\bf F}_n(p;\zeta )$,\ since $\{ [\sum_{j=1}^{2^r-2} ({\sf S}_{e_j}^4 - {\sf S}_{e_{j+1}}^4)]+ {\sf S}_{e_{2^r-1}}^4 \} e^{-u(p,t;\zeta )} = {\sf S}_{e_1}^4e^{-u(p,t;\zeta )}=e^{-u(p,t;\zeta )}$. The analogous procedure is for Equation $(14)$ with the domain $U$ instead of $Q^n$. 
From Equation $(6.3)$ or $(14)$ we get the linear equation: $$(15)\quad \sum_{(l)} \psi _{(l)} x_{(l)} = \phi ,$$ where $\phi $ is a known function and depends on the parameter $\zeta $, $\psi _{(l)}$ are known coefficients depending on $p$, $x_{(l)}$ are indeterminates and may depend on $\zeta $, $l_1=0, 1$ for $h=1$, so that $x_{(l)+2e_1} = - x_{(l)}$; $l_h=0, 1, 2, 3$ for $h>1$, where $x_{(l)+4e_h} = x_{(l)}$ for each $h>1$ in accordance with Corollary 4.1, $(l)=(l_1,...,l_n)$. Acting on both sides of $(6.3)$ or $(14)$ with the shift operators $T_{(m)}$ (see Formula 25$(SO)$), where $m_1=0,1$, $m_h=0,1,2,3$ for each $h>1$, we get from $(15)$ a system of $2^{1+2(k-1)}$ linear equations with the known functions $\phi _{(m)} := T_{(m)} \phi $ instead of $\phi $, $\phi _{(0)}=\phi $: $(15.1)$ $\sum_{(l)} \psi _{(l)} T_{(m)} x_{(l)} = \phi _{(m)}$ for each $(m)$. Each such shift of $\zeta $ leaves the coefficients $\psi _{(l)}$ intact, and $x_{(l)+(m)} = (-1)^{\eta } x_{(l')}$ with ${l'}_1 = l_1+m_1 ~ (mod ~ 2)$, ${l'}_h = l_h+m_h ~ (mod ~ 4)$ for each $h>1$, where $\eta =1$ for $l_1+m_1-{l'}_1=2$, $\eta =2$ otherwise. This system can be reduced, when the minimal additive group ${\cal G} := \{ (l): ~ l_1 ~ (mod ~ 2), ~ l_j ~ (mod ~ 4) ~ \forall 2\le j\le k;$ $\mbox{ generated by all } $ $(l)$ $\mbox{ with non-zero coefficients}$ $\mbox{ in Equation } $ $(15) \} $ is a proper subgroup of ${\sf g}_2\times {\sf g}_4^{k-1}$, where ${\sf g}_h := {\bf Z}/ (h{\bf Z})$ denotes the finite additive group for $0<h\in {\bf Z}$. Generally the obtained system is non-degenerate for $\lambda _{n+1}$ almost all $p=(p_0,...,p_n)\in {\bf R}^{n+1}$ or in $W$, where $\lambda _{n+1}$ denotes the Lebesgue measure on the real space ${\bf R}^{n+1}$. We consider the non-degenerate operator $A$ with real, complex ${\bf C}_{\bf i}$ or quaternion ${\bf H}_{J,K,L}$ coefficients. 
Certainly, in the real and complex cases, at each point $p$ where the determinant $\Delta = \Delta (p)$ is non-zero, a solution can be found by Cramer's rule. Generally, the system can be solved by the following algorithm. We can group the variables by $l_1$, $l_2$,...,$l_k$. For given $l_2,...,l_k$ and $l_1=0,1$, subtracting all other terms from both sides of $(15)$ after an action of $T_{(m)}$ with $m_1=0,1$ and marked $m_h$ for each $h>1$, we get the system of the form $(16)$ $\alpha x_1 + \beta x_2 = {\bf b}_1$, $ -\beta x_1 + \alpha x_2 ={\bf b}_2$,\ which generally has a unique solution for $\lambda _{n+1}$ almost all $p$: $(17)$ $x_1 = (\alpha (\alpha ^2+\beta ^2)^{-1}) {\sf b}_1 - (\beta (\alpha ^2+\beta ^2)^{-1}) {\sf b}_2$; $x_2 = (\alpha (\alpha ^2+\beta ^2)^{-1}){\sf b}_2 + (\beta (\alpha ^2+\beta ^2)^{-1}) {\sf b}_1$,\ where ${\sf b}_1, {\sf b}_2\in {\cal A}_r$, for a given set $(m_2,...,m_n)$. When $l_h$ are specified for each $1\le h\le k$ with $h\ne h_0$, where $1<h_0\le k$, then the system is of the type: $(18)$ $ax_1+bx_2+cx_3+dx_4={\sf b}_1$, $dx_1+ax_2+bx_3+cx_4={\sf b}_2$, $cx_1+dx_2+ax_3+bx_4={\sf b}_3$, $bx_1+cx_2+dx_3+ax_4={\sf b}_4$,\ where $a, b, c, d\in {\bf R}$ or ${\bf C}_{\bf i}$ or ${\bf H}_{J,K,L}$, while ${\sf b}_1, {\sf b}_2, {\sf b}_3, {\sf b}_4\in {\cal A}_r$. In the latter case of ${\bf H}_{J,K,L}$ it can be solved by the Gauss elimination algorithm. 
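As a numerical sanity check of Formulas $(16,17)$ in the commutative case (real scalars standing in for the ${\cal A}_r$-valued right sides; the test values below are arbitrary):

```python
# Check Formulas (17) for system (16):
#   alpha*x1 + beta*x2 = b1,   -beta*x1 + alpha*x2 = b2.
# Real scalars here; in the text b1, b2 lie in the Cayley-Dickson algebra.
def solve16(alpha, beta, b1, b2):
    den = alpha * alpha + beta * beta    # positive unless alpha = beta = 0
    x1 = (alpha * b1 - beta * b2) / den
    x2 = (alpha * b2 + beta * b1) / den
    return x1, x2

alpha, beta, b1, b2 = 0.7, -1.3, 2.0, 5.0    # arbitrary test values
x1, x2 = solve16(alpha, beta, b1, b2)
assert abs(alpha * x1 + beta * x2 - b1) < 1e-12
assert abs(-beta * x1 + alpha * x2 - b2) < 1e-12
```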
In the first two cases of ${\bf R}$ or ${\bf C}_{\bf i}$ the solution is: $(19)$ $x_j= \Delta _j/\Delta $, where $\Delta = a\xi _1 - d \xi _2 + c\xi _3 - b\xi _4$, $\Delta _1 = {\sf b}_1\xi _1 - {\sf b}_2\xi _2 + {\sf b}_3\xi _3 - {\sf b}_4\xi _4$, $\Delta _2 = - {\sf b}_1 \xi _4 + {\sf b}_2 \xi _1 - {\sf b}_3\xi _2 + {\sf b}_4\xi _3$, $\Delta _3 = {\sf b}_1\xi _3 - {\sf b}_2\xi _4 + {\sf b}_3\xi _1 - {\sf b}_4\xi _2$, $\Delta _4 = - {\sf b}_1\xi _2 + {\sf b}_2\xi _3 - {\sf b}_3\xi _4 + {\sf b}_4\xi _1$, $\xi _1 = a^3+b^2c+cd^2-ac^2-2abd$, $\xi _2=a^2b+bc^2+d^3-b^2d-2acd$, $\xi _3 = ab^2+c^3+ad^2-a^2c-2bcd$, $\xi _4=a^2d+b^3+c^2d-bd^2-2abc$. Thus on each step either two or four indeterminates are calculated and substituted into the initial linear algebraic system, which gives a new linear algebraic system with two or four fewer indeterminates, respectively. Pairwise resolution at each step may be simpler, because a denominator of the type $(\alpha ^2+\beta ^2)$ is positive for $\lambda _{2^r}$ almost all $p\in {\cal A}_r$ (see $(6)$, $(14)$ above). This algorithm acts analogously to the Gauss algorithm. Finally the last two or four indeterminates remain, and they are found with the help of Formulas $(17)$ or $(19)$ respectively. When for a marked $h$ in $(6)$ or $(14)$ all $l_h=0 ~ (mod ~ 2)$ (only $x_1$ remains for $h=1$, or $x_1$ and $x_3$ remain for $h>1$) or for some $h>1$ all $l_h=0 ~ (mod ~ 4)$ (only $x_1$ remains), the system of linear equations as in $(15, 15.1)$ simplifies. Thus a solution of the type prescribed by $(8)$ exists for $\lambda _{n+1}$ almost all $p\in W$, where $W$ is a domain $W=\{ p \in {\cal A}_r: a_1< Re (p) <a_{-1}, ~ p_j=0 ~\forall j>n \} $ of convergence of the noncommutative multiparameter transform, when it is non-void, $2^{r-1}\le n \le 2^r-1$, $Re (p)=p_0$, $p=p_0i_0+...+p_ni_n$. 
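Formulas $(19)$ can likewise be checked numerically for real $a, b, c, d$ by substituting $x_j=\Delta _j/\Delta $ back into the circulant system $(18)$; the sketch below uses arbitrary non-degenerate test values:

```python
# Verify the Cramer-type Formulas (19) for the circulant system (18)
# with real a, b, c, d (test values are arbitrary).
def solve18(a, b, c, d, B):
    b1, b2, b3, b4 = B
    xi1 = a**3 + b*b*c + c*d*d - a*c*c - 2*a*b*d
    xi2 = a*a*b + b*c*c + d**3 - b*b*d - 2*a*c*d
    xi3 = a*b*b + c**3 + a*d*d - a*a*c - 2*b*c*d
    xi4 = a*a*d + b**3 + c*c*d - b*d*d - 2*a*b*c
    det = a*xi1 - d*xi2 + c*xi3 - b*xi4
    d1 = b1*xi1 - b2*xi2 + b3*xi3 - b4*xi4
    d2 = -b1*xi4 + b2*xi1 - b3*xi2 + b4*xi3
    d3 = b1*xi3 - b2*xi4 + b3*xi1 - b4*xi2
    d4 = -b1*xi2 + b2*xi3 - b3*xi4 + b4*xi1
    return [d1/det, d2/det, d3/det, d4/det]

a, b, c, d = 2.0, 0.5, -1.0, 3.0          # arbitrary non-degenerate values
B = [1.0, -2.0, 0.5, 4.0]
x = solve18(a, b, c, d, B)
rows = [[a, b, c, d], [d, a, b, c], [c, d, a, b], [b, c, d, a]]
for row, rhs in zip(rows, B):
    assert abs(sum(r * xj for r, xj in zip(row, x)) - rhs) < 1e-9
```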
The domain $W$ is determined by the properties of $g$ and the initial conditions on $\partial U$, and by the domain $U$ as well. Generally it is worthwhile to choose $U$ with its interior $Int (U)$ not intersecting a characteristic surface $\phi (x_1,...,x_n)=0$, i.e. a surface at each point $x$ of which the condition $(CS)$ $\sum_{|j|=\alpha } {\bf a}_j(t(x)) (\partial \phi /\partial x_1)^{j_1}... (\partial \phi /\partial x_n)^{j_n} =0 $\ is satisfied and at least one of the partial derivatives $(\partial \phi /\partial x_k)$ is non-zero. In particular, the boundary problem may be with the right side $g=\varsigma f $ in $(2,2.1,14)$, where $\varsigma $ is a real or complex ${\bf C}_{\bf i}$ or quaternion ${\bf H}_{J,K,L}$ multiplier, when the boundary conditions are non-trivial. In the space either ${\cal D}({\bf R}^n,{\cal A}_r)$ or ${\cal B}({\bf R}^n,{\cal A}_r)$ (see §19) a partial differential problem simplifies, because all boundary terms disappear. If $f\in {\cal B}({\bf R}^n,{\cal A}_r),$ then $\{ p\in {\cal A}_r: ~ Re (p)\ge 0 \} \subset W_f$. For $f\in {\cal D}({\bf R}^n,{\cal A}_r)$ certainly $W_f = {\cal A}_r$ (see also §9). [**28.4. Examples.**]{} Take partial differential equations of the fourth order. In this subsection the noncommutative multiparameter transforms in ${\cal A}_r$ spherical coordinates are considered. For $(20)$ $A = \partial ^3/\partial s_1^3 + \sum_{j=2}^n \gamma _j \partial ^4/\partial s_j^4$\ with constants $\gamma _j\in {\bf H}_{J,K,L}\setminus \{ 0 \} $ on the space either ${\cal D}({\bf R}^n,{\cal A}_r)$ or ${\cal B}({\bf R}^n,{\cal A}_r)$, where $n\ge 2$, Equation $(6)$ takes the form: $$(21)\quad {\cal F}^n(A[f](t), u;p;\zeta ) =$$ $$\{ p_0 (p_0^2+ 3(p_1{\sf S}_{e_1})^2) + \sum_{j=2}^n \gamma _j (p_j{\sf S}_{e_j})^4 \} {\cal F}^n (f(t),u; p;\zeta ) + p_1 (3p_0^2 + (p_1{\sf S}_{e_1})^2) {\sf S}_{e_1} {\cal F}^n (f(t),u; p;\zeta )$$ $$= {\cal F}^n (g(t),u;p;\zeta )$$ due to Corollary 4.1. 
In accordance with $(16,17)$ we get: $(22)$ ${\bf F}_w(p;\zeta ) = (\alpha (\alpha ^2+\beta ^2)^{-1}){\bf G}_w (p;\zeta ) - (\beta (\alpha ^2+\beta ^2)^{-1}) T_1 {\bf G}_w(p;\zeta )$ for each $w=1,...,n$,\ where $\alpha _w = \alpha = [p_0 (p_0^2- 3p_1^2) + \sum_{j=2}^n \gamma _j p_j^4]|_{p_b=0 ~ \forall b>w}$, $\beta _w = \beta = p_1 (3p_0^2 - p_1^2)|_{p_b=0 ~ \forall b>w}$. From Theorem 6, Corollary 6.1 and Remarks 24 we infer that: $$(23)\quad f(t) = (2\pi )^{-n} \int_{{\bf R}^n} F (a+p;\zeta )\exp \{ u(p,t;\zeta ) \} dp_1...dp_n$$ supposing that the conditions of Theorem 6 and Corollary 6.1 are satisfied, where $F(p;\zeta ) = {\cal F}^n (f(t),u; p;\zeta )$. If on the space either ${\cal D}({\bf R}^k,{\cal A}_r)$ or ${\cal B}({\bf R}^k,{\cal A}_r)$ an operator is as follows: $(24)$ $A= \partial ^4/\partial s_1^2\partial s_2^2 + \sum_{j=3}^n \gamma _j \partial ^4/\partial s_j^4,$ where $\gamma _j \in {\bf H}_{J,K,L}\setminus \{ 0 \} $ and $n\ge 3$, then $(6)$ reads as: $(25)$ ${\cal F}^n (Af(t),u; p;\zeta ) = p_2^2(p_0^2 + (p_1{\sf S}_{e_1})^2) {\sf S}^2_{e_2} {\cal F}^n (f(t),u; p;\zeta )$\ $ + 2p_0p_1p_2^2 {\sf S}_{e_1} {\sf S}^2_{e_2} {\cal F}^n (f(t),u; p;\zeta ) + \sum_{j=3}^n \gamma _j (p_j{\sf S}_{e_j})^4 {\cal F}^n (f(t),u; p;\zeta )$\ $ = {\cal F}^n (g(t),u; p;\zeta )$. If on the same spaces an operator is: $(26)$ $A= \partial ^3/\partial s_1\partial s_2^2 + \sum_{j=3}^n \gamma _j \partial ^4/\partial s_j^4,$ where $n\ge 3$, then $(6)$ takes the form: $(27)$ ${\cal F}^n (Af(t),u; p;\zeta ) = p_0p_2^2 {\sf S}_{e_2}^2 {\cal F}^n (f(t),u; p;\zeta ) + p_1p_2^2 {\sf S}_{e_1} {\sf S}_{e_2}^2 {\cal F}^n (f(t),u; p;\zeta ) + \sum_{j=3}^n \gamma _j (p_j{\sf S}_{e_j})^4 {\cal F}^n (f(t),u; p;\zeta ) = {\cal F}^n (g(t),u; p;\zeta )$. 
To find ${\cal F}^n (f(t),u; p;\zeta )$ in $(25)$ or $(27)$, after an action of the suitable shift operators $T_{(0,2,0,...,0)}$, $T_{(1,0,...,0)}$ and $T_{(1,2,0,...,0)}$ we get the system of linear algebraic equations: $(28)$ $ax_1+bx_3+cx_4={\sf b}_1$, $bx_1+cx_2+ax_3={\sf b}_2$, $ax_2 - cx_3+bx_4={\sf b}_3$, $-cx_1+bx_2+ax_4={\sf b}_4$\ with coefficients $a$, $b$ and $c$, and Cayley-Dickson numbers on the right side ${\sf b}_1,...,{\sf b}_4\in {\cal A}_r$, where $x_1= {\bf F}_w(p;\zeta )$, $x_2=T_1 {\bf F}_w(p;\zeta )$, $x_3= T_2^2 {\bf F}_w (p;\zeta )$, $x_4= T_1 T_2^2 {\bf F}_w(p;\zeta )$, ${\sf b}_1= {\bf G}_w (p;\zeta )=({\cal F}^n (g(t),u; p;\zeta ))_w$, ${\sf b}_2= T_2^2 {\bf G}_w (p;\zeta )$, ${\sf b}_3= T_1 {\bf G}_w (p;\zeta )$, ${\sf b}_4= T_1 T_2^2 {\bf G}_w(p;\zeta )$. The coefficients are: $a_w=a= [\sum_{j=3}^n \gamma _jp_j^4]|_{p_b=0 ~ \forall b>w}\in {\bf H}_{J,K,L}$, $b_w=b=p_2^2(p_0^2 - p_1^2)|_{p_b=0 ~ \forall b>w}\in {\bf R}$, $c_w=c=2p_0p_1p_2^2|_{p_b=0 ~ \forall b>w} \in {\bf R}$ for $A$ given by $(24)$; $a_w=a= [\sum_{j=3}^n \gamma _j p_j^4]|_{p_b=0 ~ \forall b>w}\in {\bf H}_{J,K,L}$, $b_w=b=p_0p_2^2|_{p_b=0 ~ \forall b>w}\in {\bf R}$, $c_w=c=p_1p_2^2|_{p_b=0 ~ \forall b>w}\in {\bf R}$ for $A$ given by $(26)$, $w=1,...,n$. If $a=0$ the system reduces to two systems with two indeterminates $(x_1,x_2)$ and $(x_3,x_4)$ of the type described by $(16)$, with solutions given by Formulas $(17)$. It is seen that these coefficients are non-zero $\lambda _{n+1}$ almost everywhere on ${\bf R}^{n+1}$. Solving this system for $a\ne 0$ we get: $(29)$ ${\bf F}_w(p;\zeta ) = a^{-1} {\sf b}_1 - [(a^2-b^2+c^2)^2+4b^2c^2]^{-1} a^{-1} [ (a^2-b^2+c^2) ((c^2-b^2) {\sf b}_1 + ab{\sf b}_2 - 2bc{\sf b}_3+ac{\sf b}_4) - 2bc (2bc{\sf b}_1 - ac{\sf b}_2 +(c^2-b^2) {\sf b}_3 + ab{\sf b}_4)] $.\ Finally, Formula $(23)$ provides the expression for $f$ on the corresponding domain $W$ for a suitable known function $g$ for which the integrals converge. 
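The reduction for $a=0$ can be made explicit numerically: with real $b, c$ the two decoupled pairs $(x_3,x_4)$ and $(x_1,x_2)$ of system $(28)$ are each of type $(16)$ and are solved by Formulas $(17)$. A sketch with arbitrary real test values (the helper name is ours):

```python
# For a = 0, system (28) decouples into two systems of type (16):
#   b*x3 + c*x4 = b1,  -c*x3 + b*x4 = b3   (rows 1 and 3)
#   b*x1 + c*x2 = b2,  -c*x1 + b*x2 = b4   (rows 2 and 4)
# solved by Formulas (17); the test values below are arbitrary reals.
def solve2x2(alpha, beta, r1, r2):
    den = alpha * alpha + beta * beta
    return (alpha * r1 - beta * r2) / den, (alpha * r2 + beta * r1) / den

b, c = 0.8, -1.1
b1, b2, b3, b4 = 1.0, 2.0, -0.5, 0.25
x3, x4 = solve2x2(b, c, b1, b3)
x1, x2 = solve2x2(b, c, b2, b4)
# substitute back into (28) with a = 0
assert abs(b * x3 + c * x4 - b1) < 1e-12
assert abs(b * x1 + c * x2 - b2) < 1e-12
assert abs(-c * x3 + b * x4 - b3) < 1e-12
assert abs(-c * x1 + b * x2 - b4) < 1e-12
```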
If $\gamma _j>0$ for each $j$, then $a>0$ whenever $p_3^2+...+p_w^2>0$. For $(21,24)$ on a bounded domain with given boundary conditions the equations will be of an analogous type, with the term ${\cal F}^n (g(t),u;p;\zeta )$ on the right minus the boundary terms appearing in $(6)$ in these particular cases. For a partial differential equation $$(30)\quad {\bf a} (t_{n+1}) Af(t_1,...,t_{n+1}) + \partial f(t_1,...,t_{n+1})/\partial t_{n+1} = g(t_1,...,t_{n+1})$$ with octonion valued functions $f, g$, where $A$ is a partial differential operator by the variables $t_1,...,t_n$ of the type given by $(2,2.1)$ with coefficients independent of $t_1,...,t_n$, the following procedure may be simpler. If the domain $V$ is not the entire Euclidean space ${\bf R}^{n+1}$, we impose boundary conditions as above in $(5.1)$. Make the noncommutative transform ${\cal F }^{n;t_1,...,t_n}$ of both sides of Equation $(30)$, so that it takes the form: $$(31)\quad {\bf a} (t_{n+1}) {\cal F }^{n;t_1,...,t_n}(Af(t_1,...,t_{n+1}),u;p;\zeta ) + \partial {\cal F }^{n;t_1,...,t_n}(f(t_1,...,t_{n+1}), u;p;\zeta )/\partial t_{n+1}$$ $$= {\cal F }^{n;t_1,...,t_n}(g(t_1,...,t_{n+1}),u;p;\zeta ).$$ In the particular case, when ${\bf a}(t_{n+1}) \sum_{|j|\le \alpha } {\bf a}_j(t_{n+1}) \sum_{0\le k_1\le j_1} {{j_1}\choose {k_1}} S_{(k_1,j_2,...,j_k)} e^{-u(p,t;\zeta )} = e^{-u(p,t;\zeta )}$\ for each $t_{n+1}$, $p$, $t$ and $\zeta $, with the help of $(6,8)$ one can deduce an expression of $F^n(p;\zeta ;t_{n+1}) :=$ ${\cal F }^{n;t_1,...,t_n}(f(t_1,...,t_{n+1}), u;p;\zeta )$ through $G^n(p;\zeta ;t_{n+1}) := {\cal F }^{n;t_1,...,t_n}(g(t_1,...,t_{n+1}), u;p;\zeta )$ and boundary terms in the following form: $$(32) \quad {\bf b} (p_0,...,p_n; t_{n+1}) F^n(p;\zeta ;t_{n+1}) + \partial F^n(p;\zeta ;t_{n+1})/\partial t_{n+1} = Q(p_0,...,p_n; t_{n+1}),$$ where ${\bf b} (p_0,...,p_n; t_{n+1})$ is a real mapping and $Q(p_0,...,p_n; t_{n+1})$ is an octonion valued function. 
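Equation $(32)$ is a linear first-order equation in the real variable $t_{n+1}$, so the usual integrating-factor formula applies componentwise. A toy stdlib-only check with constant real ${\bf b}$ and $Q$ (our illustrative values, not from the text), comparing the closed form against forward-Euler integration of $F' = Q - {\bf b}F$:

```python
# Sanity check of the integrating-factor solution of an equation of the
# form (32), b*F + dF/dt = Q, for constant real b and Q (toy values only).
import math

bcoef, Q, C0, tau0 = 1.7, 0.6, 2.0, 0.0

def F_closed(t):
    E = math.exp(-bcoef * (t - tau0))
    return C0 * E + (Q / bcoef) * (1.0 - E)   # integrating-factor formula

# forward Euler for F' = Q - b*F with F(tau0) = C0
t, F, h = tau0, C0, 1e-5
while t < 1.0:
    F += h * (Q - bcoef * F)
    t += h
assert abs(F - F_closed(t)) < 1e-3
```

In the octonion-valued case the same formula holds with $C_0$ and $Q$ octonion valued, since ${\bf b}$ is real.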
The latter differential equation in $t_{n+1}$ can be solved analogously to the real case, since $t_{n+1}$ is a real variable and ${\bf R}$ is the center of the Cayley-Dickson algebra ${\cal A}_r$. Thus we infer: $$(33)\quad F^n(p;\zeta ;t_{n+1}) = \exp \{ - \int_{\tau _0}^{t_{n+1}} {\bf b} (p_0,...,p_n; \xi )d\xi \}$$ $$\{ C_0 +[\int_{\tau _0}^{t_{n+1}} Q(p_0,...,p_n; \tau ) \exp \{ \int_{\tau _0}^{\tau } {\bf b} (p_0,...,p_n; \xi )d\xi \} d\tau ] \} ,$$ since the octonion algebra is alternative and each equation ${\bf b}x={\bf c}$ with non-zero ${\bf b}$ has the unique solution $x= {\bf b}^{-1} {\bf c}$, where $C_0$ is an octonion constant which can be specified by an initial condition. More general partial differential equations than $(30)$, with $\partial ^lf/\partial t_{n+1}^l$, $l\ge 2$, instead of $\partial f/\partial t_{n+1}$, can also be considered. Making the inverse transform $({\cal F}^{n;t_1,...,t_n})^{-1}$ of the right side of $(33)$ one gets the particular solution $f$. [**28.5. 
Integral kernel.**]{} We rewrite Equation 28$(6)$ in the form: $$(34)\quad {\bf A}_{\cal S} {\cal F}^n (f\chi _{Q^n},u;p;\zeta ) = {\cal F}^n(g\chi _{Q^n},u;p;\zeta ) -$$ $$\sum_{|j|\le \alpha } {\bf a}_j \sum_{1\le |(lj)|, ~ 0\le m_k, ~ 0\le q_k, ~ h_k = sign (l_kj_k), ~ m_k+q_k+h_k=j_k; ~ q_k =0 \mbox{ for } l_kj_k=0; ~ \forall k=1,...,n; ~ (l)\in \{ 0, 1, 2 \} ^n }$$ $$(-1)^{|(lj)|} {\cal S}^m {\cal F}^{n-|h(lj)|} ( ( \partial ^{|q|} f(t^{(lj)})/\partial t_1^{q_1}...\partial t_n^{q_n})\chi _{\partial Q^n_{(lj)}}(t^{(lj)}), u ; p; \zeta ) ,\mbox{ where}$$ $(34.1)$ ${\cal S}_k(p) := {\cal S}_k := {\sf R}_{e_k}(p)$\ in the ${\cal A}_r$ spherical or ${\cal A}_r$ Cartesian coordinates respectively (see also Formulas 25$(1.1,1.2)$), for each $k=1,...,n$, $(34.2)$ ${\cal S}^m(p) := {\cal S}^m := {\cal S}_1^{m_1}...{\cal S}_n^{m_n}$, $(35)$ ${\bf A}_{\cal S} := \sum_{|j|\le \alpha } {\bf a}_j {\cal S}^j(p)$.\ Then we have the integral formula: $(36)$ ${\bf A}_{\cal S} {\cal F}^n (f\chi _{Q^n}, u; p;\zeta ) = \int_{Q^n} f(t) [ {\bf A}_{\cal S} \exp ( - u(p,t;\zeta ))]dt$\ in accordance with 1$(7)$ and 2$(4)$. Due to §28.3 the operator ${\bf A}_{\cal S}$ has an inverse operator for $\lambda _{n+1}$ almost all $(p_0,...,p_n)$ in ${\bf R}^{n+1}$. In practice its calculation may be cumbersome, but it is sufficient to find the kernel of an integral inversion formula. In view of the inversion Theorem 6 or Corollary 6.1 and §§19 and 20 we have $(37)$ $(2\pi )^{-n} \int_{{\bf R}^n} \exp ( - u(a+p,t;\zeta )) \exp ( u(a+p,\tau ; \zeta )) dp_1...dp_n = \delta (t;\tau ),$ where $(38)$ $[\delta , f)(\tau ) = \int_{{\bf R}^n} f(t) \delta (t;\tau ) dt_1...dt_n = f(\tau )$\ at each point $\tau \in {\bf R}^n$, where the original $f(\tau )$ satisfies Hölder’s condition. Moreover, the functional $\delta (t;\tau )$ is ${\cal A}_r$ linear. 
Thus the inversion of Equation $(36)$ is: $$(39) \quad \int_{{\bf R}^n} ( \int_{{\bf R}^n} f(t) \chi _{Q^n} (t) \{ [ {\bf A}_{\cal S} \exp ( - u(p+a,t;\zeta )) ] \xi (p+a,t,\tau ;\zeta ) \} dt)dp_1...dp_n = f(\tau ),$$ so that $(40)$ $[{\bf A}_{\cal S} \exp ( - u(p+a,t;\zeta )) ] \xi (p+a,t,\tau ;\zeta ) = (2\pi )^{-n} \exp ( - u(p+a,t;\zeta )) \exp ( u(p+a,\tau ;\zeta )) $,\ where the coefficients of ${\bf A}_{\cal S}$ commute with generators $i_j$ of the Cayley-Dickson algebra ${\cal A}_r$ for each $j$. First consider the alternative case, i.e. over the Cayley-Dickson algebra ${\cal A}_r$ with $r\le 3$. Define the adjoint operator ${\bf A}^*_{\cal S}$ by the formula $(41)$ ${\bf A}^*_{\cal S} \eta (p,t;\zeta ) = \sum_{|j|\le \alpha } {\bf a}^*_j {\cal S}^j \eta ^*(p,t;\zeta )$ for any function $\eta : {\cal A}_r\times {\bf R}^n \times {\cal A}_r \to {\cal A}_r$, where $t\in {\bf R}^n$, $p$ and $\zeta \in {\cal A}_r$, ${\cal S}^j\eta ^*(p,t;\zeta ) := [{\cal S}^j\eta (p,t;\zeta )]^*$. Any Cayley-Dickson number $z\in {\cal A}_v$ can be written with the help of the iterated exponent (see §3) in ${\cal A}_v$ spherical coordinates as $(42)$ $z = |z| \exp ( - u(0,0;\psi ))$,\ where $v\ge r$, $\psi \in {\cal A}_v$, $u\in {\cal A}_v$, $Re (\psi )=0$. Certainly the phase shift operator is isometric: $(43)$ $|T_1^{k_1}...T_n^{k_n} z| = |z|$\ for any $k_1,...,k_n\in {\bf R}$, since $ ~ |\exp ( - u(0,0; Im (\psi )))| =1$ for each $\psi \in {\cal A}_v$, while $T_1^{k_1}...T_n^{k_n} e^{-u(0,0;Im (\psi ))} = \exp \{ - u(0,0; Im (\psi )- (k_1i_1+...+k_ni_n)\pi /2) \} $ (see §12). In the ${\cal A}_r$ Cartesian coordinates each Cayley-Dickson number can be presented as: $(42.1)$ $z=|z|\exp (\phi M)$, where $\phi \in \bf R$ is a real parameter, $M$ is a purely imaginary Cayley-Dickson number (see also §3 in [@ludoyst; @ludfov]). 
Therefore, we deduce that $(44)$ $|{\bf A}_{\cal S} \exp ( - u(p+a,t;\zeta ))| = \exp ( - (p_0+a) s_1- \zeta _0) |{\bf A}_{\cal S}\exp ( - u(Im (p),t;Im (\zeta ) ))|$,\ since $\bf R$ is the center of the Cayley-Dickson algebra ${\cal A}_v$ and $p_0, ~ a, ~ \zeta _0, ~ s_1 \in {\bf R}$, $ ~ s_1=s_1(t)$, where particularly ${\bf A}_{\cal S}1 := {\bf A}_{\cal S}e^{-u(0,0;\zeta )}|_{\zeta =0}$ (see also Formulas 12$(3.1-3.7)$). Then expressing $\xi $ from $(40)$ and using Formulas $(41,42,42.1,44)$ we infer that $(45)$ $\xi (p,t,\tau ;\zeta ) = (2\pi )^{-n} [{\bf A}^*_{\cal S} \exp ( - u(Im (p), t; Im (\zeta ))) ]$ $ [\exp ( - u(Im (p), t; Im (\zeta ))) \exp ( u(p, \tau ; \zeta )) ] |{\bf A}_{\cal S}\exp ( - u(Im (p), t; Im (\zeta )))|^{-2}$,\ since $z^{-1} = z^*/|z|^2$ for each non-zero Cayley-Dickson number $z\in {\cal A}_v$, $v\ge 1$, where $Im (p) = p_1i_1+...+p_ni_n$, $p=p_0i_0+...+p_ni_n$, $p_0=Re (p)$. Generally, for $r\ge 4$, Formula $(45)$ gives the integral kernel $\xi (p,t,\tau ;\zeta )$ for any restriction of $\xi $ on the octonion subalgebra $alg_{\bf R} (N_1,N_2,N_4)$ embedded into ${\cal A}_r$. In view of §28.3 $\xi $ is unique and is defined by $(45)$ on each subalgebra $alg_{\bf R} (N_1,N_2,N_4)$, consequently, Formula $(45)$ expresses $\xi $ in all variables $p, \zeta \in {\cal A}_r$ and $t, ~ \tau \in {\bf R}^n$. 
Applying Formulas $(39,45)$ and 28.2$(\Delta )$ to Equation $(34)$, when Condition 8$(3)$ is satisfied, we deduce that $$(46)\quad (f\chi _{Q^n})(\tau ) = \int_{{\bf R}^n} (\int_{{\bf R}^n} g(t)\chi _{Q^n} (t) [ \exp ( - u(p+a,t;\zeta )) \xi (p+a,t,\tau ;\zeta )]dt)dp_1...dp_n -$$ $$\sum_{|j|\le \alpha } {\bf a}_j \sum_{1\le |(lj)|, ~ 0\le m_k, ~ 0\le q_k, ~ h_k = sign (l_kj_k); ~ m_k+q_k+h_k=j_k; ~ q_k=0 \mbox{ for } l_kj_k=0, ~ \forall ~ k=1,...,n; ~ (l) \in \{ 0, 1, 2 \} ^n } (-1)^{|(lj)|}$$ $$\int_{{\bf R}^n} (\int_{\partial Q^n_{(lj)}} [\partial ^{|q|} f(t^{(lj)})/\partial t_1^{q_1}...\partial t_n^{q_n}] [ \{ {\cal S}^m(p) \exp ( - u(p+a,t^{(lj)};\zeta )) \} \xi (p+a,t^{(lj)},\tau ;\zeta )]dt^{(lj)})dp_1...dp_n ,$$ where $dim_{\bf R} \partial Q^n_{(lj)}=n-|h(lj)|$, $t^{(lj)}\in \partial Q^n_{(lj)}$ in accordance with §28.1, ${\cal S}^m(p)$ is given by Formulas $(34.1,34.2)$ above. For simplicity the phase parameter in $(46)$ can be taken to be zero: $\zeta =0$. In the particular case $Q^n = {\bf R}^n$ all terms with $\int_{\partial Q^n_{(lj)}}$ vanish. Terms of the form $\int_{{\bf R}^n} [ \{ {\cal S}^m(p) \exp ( - u(p+a,t;\zeta )) \} \xi (p+a,t,\tau ;\zeta )]dp_1...dp_n$ in Formula $(46)$ can be interpreted as left ${\cal A}_r$ linear functionals due to Fubini’s theorem and §§19 and 20, where ${\cal S}^0=I$. 
For the second order operator from $(14)$ one gets: $(47)$ ${\bf A}_{\cal S} = (\sum_{h=1}^n {\bf a}_h [{\cal S}_h(p)]^2) + \beta _n {\cal S}_n(p) +\omega $ and $$(48)\quad (f\chi _U)(\tau ) = \int_{{\bf R}^n} ( \int_{{\bf R}^n} g(t) \chi _U(t) [ \exp ( - u(p+a,t;\zeta )) \xi (p,t,\tau ;\zeta ) ] dt) dp_1...dp_n -$$ $$\int_{{\bf R}^n} ( \int_{\partial U_0} f(t') [ \{ (\beta (t')+ {\sf P}(t',p)) \exp ( - u(p+a,t';\zeta )) \} \xi (p,t',\tau ;\zeta ) ] dt') dp_1...dp_n -$$ $$\int_{{\bf R}^n} ( \int_{\partial U_0} a(t') (\partial f(t')/\partial \nu ) [ \exp ( - u(p+a,t';\zeta )) \xi (p,t',\tau ;\zeta ) ] dt') dp_1...dp_n .$$ To calculate the appearing integrals, the generalized Jordan lemma (see §§23 and 24 in [@lutsltjms]) and residues of functions at the poles corresponding to zeros $|{\bf A}_{\cal S}\exp ( - u(Im (p),t;Im (\zeta ) ))|=0$ in the variables $p_1,...,p_n$ can be used. Take $g(t)=\delta (y;t)$, where $y\in {\bf R}^n$ is a parameter, then $$(49)\quad \int_{{\bf R}^n} (\int_{{\bf R}^n} \delta (y;t) [ \exp ( - u(p+a,t;\zeta )) \xi (p+a,t,\tau ;\zeta )]dt)dp_1...dp_n$$ $$= \int_{{\bf R}^n} [ \exp ( - u(p+a,y;\zeta )) \xi (p+a,y,\tau ;\zeta )]dp_1...dp_n =: {\cal E}(y;\tau )$$ is the fundamental solution in the class of generalized functions, where $(50)$ $A_t {\cal E}(y;t) =\delta (y;t)$, $(51)$ $\int_{{\bf R}^n} \delta (y;t) f(t) dt = f(y)$\ for each continuous function $f(t)$ from the space $L^2({\bf R}^n,{\cal A}_r)$; $~ A_t$ is the partial differential operator as above acting by the variables $t=(t_1,...,t_n)$ (see also §§19, 20 and 33-35). [**29. 
The decomposition theorem of partial differential operators over the Cayley-Dickson algebras.**]{} We consider a partial differential operator of order $u$: $$(1)\quad Af(x)= \sum_{|\alpha |\le u} {\bf a}_{\alpha }(x)\partial ^{\alpha } f(x),$$ where $\partial ^{\alpha } f=\partial ^{|\alpha |}f(x)/\partial x_0^{\alpha _0}...\partial x_n^{\alpha _n}$, $x=x_0i_0+...+x_ni_n$, $x_j\in {\bf R}$ for each $j$, $1\le n=2^r-1$, $\alpha = (\alpha _0,...,\alpha _n)$, $|\alpha |=\alpha _0+...+\alpha _n$, $0\le \alpha _j\in {\bf Z}$. By definition this means that the principal symbol $$(2)\quad A_0 := \sum_{|\alpha |= u} {\bf a}_{\alpha }(x)\partial ^{\alpha }$$ contains a multi-index $\alpha $ with $|\alpha |=u$ for which ${\bf a}_{\alpha }(x)\in {\cal A}_r$ is not identically zero on a domain $U$ in ${\cal A}_r$. As usual, $C^k(U,{\cal A}_r)$ denotes the space of $k$ times continuously differentiable functions by all real variables $x_0,...,x_n$ on $U$ with values in ${\cal A}_r$, while the $x$-differentiability corresponds to the super-differentiability by the Cayley-Dickson variable $x$. Speaking about locally constant or locally differentiable coefficients we shall understand that a domain $U$ is the union of subdomains $U^j$ satisfying conditions 28$(D1,i-vii)$ and $U^j\cap U^k = \partial U^j\cap \partial U^k$ for each $j\ne k$. All coefficients ${\bf a}_{\alpha }$ are either constant or differentiable of the same class on each $Int (U^j)$ with continuous extensions to $U^j$. More generally this is understood up to a $C^u$ or $x$-differentiable diffeomorphism of $U$, respectively. If an operator $A$ is of the odd order $u=2s-1$, then an operator $E$ of the even order $u+1=2s$ by variables $(t,x)$ exists so that $(3)$ $Eg(t,x)|_{t=0}=Ag(0,x)$ for any $g\in C^{u+1}([c,d]\times U,{\cal A}_r)$, where $t\in [c,d]\subset {\bf R}$, $c\le 0<d$, for example, $Eg(t,x) = \partial (tAg(t,x))/\partial t$. Therefore, it remains to consider the case of an operator $A$ of the even order $u=2s$. 
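For instance, with the choice $Eg(t,x) = \partial (tAg(t,x))/\partial t$ mentioned above, Property $(3)$ follows from the product rule:

```latex
Eg(t,x) \;=\; \frac{\partial}{\partial t}\bigl(t\,Ag(t,x)\bigr)
       \;=\; Ag(t,x) \;+\; t\,\frac{\partial\, Ag(t,x)}{\partial t},
\qquad\text{hence}\qquad
Eg(t,x)\big|_{t=0} \;=\; Ag(0,x),
```

and $E$ is of the even order $u+1=2s$ in the variables $(t,x)$, since $\partial /\partial t$ raises the order by one.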
Take $z=z_0i_0+...+z_{2^v-1}i_{2^v-1}\in {\cal A}_v$, $z_j\in {\bf R}$. Operators depending on a smaller set of variables $z_{l_1},...,z_{l_n}$ can be considered as restrictions of operators in all variables to spaces of functions constant in the variables $z_s$ with $s\notin \{ l_1,...,l_n \} $. [**Theorem.**]{} *Let $A=A_u$ be a partial differential operator of an even order $u=2s$ with locally constant or variable $C^s$ or $x$-differentiable on $U$ coefficients ${\bf a}_{\alpha }(x)\in {\cal A}_r$ such that it has the form* $(4)$ $Af = c_{u,1}(B_{u,1}f) +...+ c_{u,k}(B_{u,k}f)$, where each $(5)$ $B_{u,p}=B_{u,p,0}+Q_{u-1,p}$\ is a partial differential operator by variables $x_{m_{u,1}+...+m_{u,p-1}+1}$,...,$x_{m_{u,1}+...+m_{u,p}}$ and of the order $u$, $m_{u,0}=0$, $c_{u,k}(x)\in {\cal A}_r$ for each $k$, its principal part $(6)$ $B_{u,p,0}= \sum_{|\alpha |=s} {\bf a}_{p,2\alpha }(x)\partial ^{2\alpha }$\ is elliptic with real coefficients ${\bf a}_{p,2\alpha }(x)\ge 0$, either $0\le r\le 3$ and $f\in C^u(U,{\cal A}_r)$, or $r\ge 4$ and $f\in C^u(U,{\bf R})$. Then three partial differential operators $\Upsilon ^s$, $\Upsilon _1^s$ and $Q$, of orders $s$, $s$ and $p$ with $p\le u-1$, with locally constant or variable $C^s$ or $x$-differentiable correspondingly on $U$ coefficients with values in ${\cal A}_v$ exist, $r\le v$, such that $(7)$ $Af=\Upsilon ^s(\Upsilon _1^sf) +Qf$. [**Proof.**]{} Certainly we have $ord Q_{u-1,p}\le u-1$, $ord (A-A_0) \le u-1$. 
We choose the following operators: $$(8)\quad \Upsilon ^s f(x) = \sum_{p=1}^k \sum_{|\alpha |\le s, ~ \alpha _q = 0 \forall q<(m_{u,1}+...+m_{u,p-1}+1) \mbox{ and } q>(m_{u,1}+...+m_{u,p})} (\partial ^{\alpha } f(x)) [w_p^* \psi _{p, \alpha }]\mbox{ and}$$ $$(9)\quad \Upsilon ^s_1 f(x) = \sum_{p=1}^k \sum_{|\alpha |\le s, ~ \alpha _q = 0 \forall q<(m_{u,1}+...+m_{u,p-1}+1) \mbox{ and } q>(m_{u,1}+...+m_{u,p})} (\partial ^{\alpha } f(x)) [w_p\psi _{p,\alpha }^*],$$ where $w_p^2=c_{u,p}$ for all $p$ and ${\psi }_{p,\alpha }^2(x)= - {\bf a}_{p,2\alpha }(x)$ for each $p$ and $x$, $w_p\in {\cal A}_r$, ${\psi }_{p,\alpha }(x)\in {\cal A}_{r,v}$ and ${\psi }_{p,\alpha }(x)$ is purely imaginary for ${\bf a}_{p,2\alpha }(x)>0$ for all $\alpha $ and $x$, $Re (w_p Im (\psi _{p,\alpha }))=0$ for all $p$ and $\alpha $, $Im (x) = (x-x^*)/2$, $v>r$. Here ${\cal A}_{r,v} = {\cal A}_v/{\cal A}_r$ is the real quotient algebra. The algebra ${\cal A}_{r,v}$ has the generators $i_{j2^r}$, $j=0,...,2^{v-r}-1$. A natural number $v$ so that $2^{v-r} -1\ge \sum_{p=1}^k \sum_{q=0}^u {{m_p+q-1}\choose q}$ is sufficient, where ${m\choose q} = m!/(q!(m-q)!)$ denotes the binomial coefficient, ${{m+q-1}\choose q}$ is the number of different solutions of the equation $\alpha _1+...+\alpha _m =q$ in non-negative integers $\alpha _j$. We have either $\partial ^{\alpha + \beta }f\in {\cal A}_r$ for $0\le r\le 3$ or $\partial ^{\alpha + \beta }f\in {\bf R}$ for $r\ge 4$. Therefore, we can take $\psi _{p,\alpha }(x) \in i_{2^rq}{\bf R}$, where $q=q(p,\alpha )\ge 1$, $ ~ ~ q(p^1,\alpha ^1)\ne q(p,\alpha )$ when $(p,\alpha )\ne (p^1,\alpha ^1)$. Thus Decomposition $(7)$ is valid due to the following. 
For $b= \partial ^{\alpha +\beta }f(z)$ and ${\bf l} = i_{2^rp}$ and $w\in {\cal A}_r$ one has the identities: $(10)$ $(b(w{\bf l})) (w^*{\bf l}) = ((wb){\bf l})(w^*{\bf l}) = - w(wb) = - w^2b$ and $(11)$ $(((b{\bf l})w^*){\bf l})w = (((bw){\bf l}){\bf l})w = - (bw)w = - bw^2$ in the cases considered here, since ${\cal A}_r$ is alternative for $r\le 3$ while ${\bf R}$ is the center of the Cayley-Dickson algebra (see Formulas $(M1,M2)$ in the introduction). This decomposition of the operator $A_{2s}$ is generally up to a partial differential operator of order not greater than $2s-1$: $(12)$ $Qf(x) = \sum_{|\alpha |\le s, |\beta | \le s; \gamma \le \alpha , \epsilon \le \beta , |\gamma + \epsilon |>0}[\prod _{j=0}^{2^v-1} {{\alpha _j}\choose {\gamma _j}} {{\beta _j}\choose {\epsilon _j}} ] (\partial ^{\alpha +\beta - \gamma - \epsilon } f(x))$\ $ [(\partial ^{\gamma } {\eta }_{\alpha }(x)) (\partial ^{\epsilon } {\eta }_{\beta }^1(x))]$,\ where the operators $\Upsilon ^s$ and $\Upsilon ^s_1$ are already written in accordance with the general form $(13)$ $\Upsilon ^sf(x) = \sum_{|\alpha |\le s} (\partial ^{\alpha }f(x)) \eta _{\alpha }(x)$; $(14)$ $\Upsilon ^s_1 f(x) = \sum_{|\beta |\le s} (\partial ^{\beta }f(x)) \eta _{\beta }^1(x)$. When $A$ in $(3)$ has constant coefficients, then the coefficients $w_p$ and $\psi _{p,\alpha }$ for $\Upsilon ^s$ and $\Upsilon ^s_1$ can also be chosen constant and $Q=0$. [**30. Corollary.**]{} [*Let the suppositions of Theorem 29 be satisfied. Then a change of variables, locally constant or variable $C^1$ or $x$-differentiable on $U$ correspondingly, exists so that the principal part $A_{2,0}$ of $A_{2}$ acquires constant coefficients, when ${\bf a}_{p,2\alpha }>0$ for each $p$, $\alpha $ and $x$.*]{} [**31. 
Corollary.**]{} *If two operators $E=A_{2s}$ and $A=A_{2s-1}$ are related by Equation 29$(3)$, and $A_{2s}$ is presented in accordance with Formulas 29$(4,5)$, then three operators $\Upsilon ^s$, $\Upsilon ^{s-1}$ and $Q$ of orders $s$, $s-1$ and $2s-2$ exist so that* $(1)$ $A_{2s-1}=\Upsilon ^s\Upsilon ^{s-1} +Q$. [**Proof.**]{} It remains to verify that $ord (Q)\le 2s-2$ in the case of $A_{2s-1}$, where $Q= \{ \partial (tA_{2s-1})/ \partial t - \Upsilon ^s\Upsilon _1^s \} |_{t=0}$. Indeed, the form $\lambda (E)$ corresponding to $E$ is of degree $2s-1$ by $x$ and each addendum of degree $2s$ in it is of degree not less than $1$ by $t$, consequently, the product of the forms $\lambda (\Upsilon ^s) \lambda (\Upsilon ^s_1)$ corresponding to $\Upsilon ^s$ and $\Upsilon ^s_1$ is also of degree $2s-1$ by $x$ and each addendum of degree $2s$ in it is of degree not less than $1$ by $t$. But the principal parts of $\lambda (E)$ and $\lambda (\Upsilon ^s) \lambda (\Upsilon ^s_1)$ coincide identically by the variables $(t,x)$, hence $ord ( \{ E - \Upsilon ^s\Upsilon _1^s \} |_{t=0}) \le 2s-2$. Let $a(t,x)$ and $h(t,x)$ be coefficients from $\Upsilon ^s_1$ and $\Upsilon ^s$. Using the identities $a(t,x) \partial _t \partial ^{\gamma } tg(x) = a(t,x) \partial ^{\gamma } g(x)$ and $h(t,x) \partial ^{\beta } \partial _t [a(t,x) \partial ^{\gamma } g(x)] = h(t,x) \partial ^{\beta } [(\partial _t a(t,x)) \partial ^{\gamma } g(x)] $\ for any functions $g(x)\in C^{2s-1}$ and $a(t,x)\in C^s$, $ord [(h(t,x) \partial ^{\beta }), (a(t,x) \partial ^{\gamma })]|_{t=0}\le 2s-2$, where $\partial _t =\partial /\partial t$, $|\beta |\le s-1$, $|\gamma |\le s$, $[A,B] := AB-BA$ denotes the commutator of two operators, we reduce $(\Upsilon ^s\Upsilon ^s_1 + Q_1)|_{t=0}$ from Formula 29$(7)$ to the form prescribed by Equation $(1)$. 
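The identities 29$(10)$ and 29$(11)$ underlying the decomposition admit a direct numerical check. The sketch below implements the Cayley-Dickson doubling product in the convention $(a,b)(c,d) = (ac - d^*b,\; da + bc^*)$, which we assume here as a stand-in for Formulas $(M1,M2)$ of the introduction, and tests both identities for random octonions $b, w$ embedded in ${\cal A}_4$ with ${\bf l} = i_8$.

```python
import random

def conj(x):
    """Cayley-Dickson conjugate of a 2^k-tuple of real components."""
    if len(x) == 1:
        return x
    h = len(x) // 2
    return conj(x[:h]) + tuple(-c for c in x[h:])

def add(x, y): return tuple(a + b for a, b in zip(x, y))
def sub(x, y): return tuple(a - b for a, b in zip(x, y))
def neg(x): return tuple(-a for a in x)

def mul(x, y):
    """Doubling product (a,b)(c,d) = (ac - d*b, da + bc*) -- assumed convention."""
    if len(x) == 1:
        return (x[0] * y[0],)
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

rnd = random.Random(7)
DIM = 16                                                # A_4, containing A_3
L = tuple(1.0 if j == 8 else 0.0 for j in range(DIM))   # the generator l = i_8
b = tuple(rnd.uniform(-1, 1) if j < 8 else 0.0 for j in range(DIM))
w = tuple(rnd.uniform(-1, 1) if j < 8 else 0.0 for j in range(DIM))

# 29(10): (b(w l))(w* l) = -w^2 b
err10 = max(abs(s - t) for s, t in zip(
    mul(mul(b, mul(w, L)), mul(conj(w), L)), neg(mul(mul(w, w), b))))
# 29(11): (((b l)w*)l)w = -b w^2
err11 = max(abs(s - t) for s, t in zip(
    mul(mul(mul(mul(b, L), conj(w)), L), w), neg(mul(mul(b, w), w))))
```

Both residuals vanish up to rounding, reflecting the left and right alternativity of the octonions used in the proof.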
[**32.**]{} We consider operators of the form: $(1)$ $(\Upsilon ^k + \beta I_r) f(z) := \{ \sum_{0<|\alpha |\le k} (\partial ^{\alpha } f(z)) {\eta }_{\alpha }(z) \} + f(z)\beta (z)$,\ with $\eta _{\alpha }(z)\in {\cal A}_v$, $\alpha = (\alpha _0,...,\alpha _{2^r-1})$, $0\le \alpha _k\in {\bf Z}$ for each $k$, $|\alpha |=\alpha _0+...+\alpha _{2^r-1}$, $\beta I_r f(z) := f(z) \beta $, $\partial ^{\alpha }f(z) := \partial ^{|\alpha |} f(z)/\partial z_0^{\alpha _0}...\partial z_{2^r-1}^{\alpha _{2^r-1}}$, $2\le r \le v<\infty $, $\beta (z)\in {\cal A}_v$, $z_0,...,z_{2^r-1}\in {\bf R}$, $z=z_0i_0+...+z_{2^r-1} i_{2^r-1}$. [**Proposition.**]{} [*The operator $(\Upsilon ^k+\beta )^*(\Upsilon ^k+\beta )$ is elliptic on the space $C^{2k}({\bf R}^{2^r},{\cal A}_v)$.*]{} [**Proof.**]{} We establish the identity $(2)$ $(ay)z^* + (az)y^* = a 2 Re (yz^*)$\ for any $a, y, z\in {\cal A}_v$. It is sufficient to prove Equality $(2)$ for any three basic generators of the Cayley-Dickson algebra ${\cal A}_v$, since the real field ${\bf R}$ is its center, while the multiplication in ${\cal A}_v$ is distributive $(a+y)z=az+yz$ and $((\alpha a) (\beta y)) (\gamma z^*) = (\alpha \beta \gamma ) ((ay)z^*)$ for all $\alpha , \beta , \gamma \in {\bf R}$ and $a, y, z \in {\cal A}_v$. If $a=i_0$, then $(2)$ is evident, since $yz^* + zy^* = yz^* + (yz^*)^* = 2 Re (yz^*)$. If $y=i_0$, then $(ay)z^*+ (az)y^*= az^* + az = a 2 Re (z)= a 2 Re (yz^*)$. Analogously for $z=i_0$. For three purely imaginary generators $i_p, i_s, i_k$ consider the minimal Cayley-Dickson algebra $\Phi = alg_{\bf R} (i_p, i_s, i_k)$ over the real field generated by them. If it is associative, then it is isomorphic with either the complex field $\bf C$ or the quaternion skew field $\bf H$, so that $(ay)z^* + (az)y^* = a(yz^*+zy^*) = a 2 Re (yz^*)$. 
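Independently of the case analysis, identity $(2)$ can be corroborated numerically for random octonions; the sketch below assumes the doubling convention $(a,b)(c,d) = (ac - d^*b,\; da + bc^*)$ for the product, as in the check of 29$(10,11)$.

```python
import random

def conj(x):
    """Cayley-Dickson conjugate of a 2^k-tuple of real components."""
    if len(x) == 1:
        return x
    h = len(x) // 2
    return conj(x[:h]) + tuple(-c for c in x[h:])

def mul(x, y):
    # doubling product (a,b)(c,d) = (ac - d*b, da + bc*) -- assumed convention
    if len(x) == 1:
        return (x[0] * y[0],)
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return (tuple(s - t for s, t in zip(mul(a, c), mul(conj(d), b)))
            + tuple(s + t for s, t in zip(mul(d, a), mul(b, conj(c)))))

rnd = random.Random(1)
octonion = lambda: tuple(rnd.uniform(-1, 1) for _ in range(8))
a, y, z = octonion(), octonion(), octonion()

# identity (2): (a y)z* + (a z)y* = a * 2 Re(y z*)
lhs = tuple(s + t for s, t in
            zip(mul(mul(a, y), conj(z)), mul(mul(a, z), conj(y))))
re_yz = mul(y, conj(z))[0]              # Re(y z*) is the i_0 component
rhs = tuple(2.0 * re_yz * c for c in a)
err2 = max(abs(s - t) for s, t in zip(lhs, rhs))
```

The identity is the linearization in $y, z$ of $(ay)y^* = a|y|^2$, which holds in the alternative case.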
If the algebra $\Phi $ is isomorphic with the octonion algebra, then we use Formulas $(M1,M2)$ from the introduction for either $a, y\in {\bf H}$ and $z={\bf l}$ or $a, z\in {\bf H}$ and $y={\bf l}$. This gives $(2)$ in all cases, since the algebra $alg_{\bf R} (i_p,i_s)$ with two basic generators $i_p$ and $i_s$ is always associative. Particularly, if $y=i_s\ne z=i_k$, $~ s, ~ k \ge 1$, then the result in $(2)$ is zero. Using $(2)$ we get more generally that $(3)$ $((ay)z^*)b^* + ((az)y^*)b^* = (a 2 Re (yz^*))b^*= (ab^*) 2 Re (yz^*) ,$\ consequently, $(4)$ $((ay)z^*)b^* + ((az)y^*)b^* + ((by)z^*)a^* + ((bz)y^*)a^*= 4 Re (ab^*) Re (yz^*) $\ for any Cayley-Dickson numbers $a, b, y, z\in {\cal A}_v$. In view of Formulas $(1,4)$ the form corresponding to the principal symbol of the operator $(\Upsilon ^k+\beta )^*(\Upsilon ^k+\beta )$ has real coefficients, is of degree $2k$ and is non-negative definite; consequently, the operator $(\Upsilon ^k+\beta )^*(\Upsilon ^k+\beta )$ is elliptic. [**33. Fundamental solutions.**]{} Let $Y$ be either the real $Y={\cal A}_v$, the complexified $Y=({\cal A}_v)_{\bf C}$, or the quaternionified $Y=({\cal A}_v)_{\bf H}$ Cayley-Dickson algebra (see §28). Consider the space ${\cal B}({\bf R}^n,Y)$ (see §19) supplied with the topology given by the countable family of semi-norms $(1)$ $p_{\alpha , k} (f) := \sup_{x\in {\bf R}^n} |(1+|x|)^k \partial ^{\alpha }f(x)|$,\ where $k=0, 1, 2,...$; $\alpha = (\alpha _1,...,\alpha _n)$, $0\le \alpha _j\in {\bf Z}$. On this space we take the space ${\cal B}'({\bf R}^n,Y)_l$ of all $Y$ valued continuous generalized functions (functionals) of the form $(2)$ $f=f_0i_0+...+f_{2^v-1}i_{2^v-1}$ and $g=g_0i_0+...+g_{2^v-1}i_{2^v-1}$, where $f_j$ and $g_j\in {\cal B}'({\bf R}^n,Y)$, with restrictions on ${\cal B}({\bf R}^n,{\bf R})$ being real or ${\bf C}_{\bf i}$ or ${\bf H}_{J,K,L}$ -valued generalized functions $f_0,...,f_{2^v-1}, g_0,...,g_{2^v-1}$ respectively. 
Let $\phi = \phi _0i_0+...+\phi _{2^v-1}i_{2^v-1}$ with $\phi _0,...,\phi _{2^v-1}\in {\cal B}({\bf R}^n,{\bf R})$, then $(3)$ $[f,\phi ) = \sum_{k,j=0}^{2^v-1} [f_j,\phi _k) i_ki_j$. We define their convolution as $(4)$ $[f*g,\phi ) = \sum_{j,k=0}^{2^v-1} ([f_j*g_k,\phi ) i_j)i_k$ for each $\phi \in {\cal B}({\bf R}^n,Y)$. As usual $(5)$ $(f*g)(x) = f(x-y)*g(y) = f(y)*g(x-y)$\ for all $x, y \in {\bf R}^n$ due to $(4)$, since the latter equality $(5)$ is satisfied for each pair $f_j$ and $g_k$. Thus a solution of the equation $(6)$ $(\Upsilon ^s +\beta )f = g $ in ${\cal B}({\bf R}^n,Y)$ or in the space ${\cal B}'({\bf R}^n,Y)_l$ is: $(7)$ $f = {\cal E}_{\Upsilon ^s+\beta }*g$, where ${\cal E}_{\Upsilon ^s +\beta }$ denotes a fundamental solution of the equation $(8)$ $(\Upsilon ^s +\beta ){\cal E}_{\Upsilon ^s+\beta }=\delta $, $(\delta ,\phi )=\phi (0)$. The fundamental solution of the equation $(9)$ $A_0 {\cal V} = \delta $ with $A_0 = (\Upsilon ^s +\beta ) (\Upsilon ^{s_1}_1+\beta _1)$\ using Equalities 32$(2-4)$ can be written as the convolution $$(10)\quad {\cal V} =: {\cal V}_{A_0} = {\cal E}_{\Upsilon ^s +\beta } * {\cal E}_{\Upsilon ^{s_1}_1+\beta _1}.$$ More generally we can consider the equation $(11)$ $A f=g$ with $A=A_0 + (\Upsilon _2+\beta _2)$,\ where $A_0=(\Upsilon +\beta ) (\Upsilon _1 +\beta _1)$, $\Upsilon , ~ \Upsilon _1, ~ \Upsilon _2$ are operators of orders $s$, $s_1$ and $s_2$ respectively given by 32$(1)$ with $z$-differentiable coefficients. For $\Upsilon _2+\beta _2=0$ this equation was solved above. Suppose now that the operator $\Upsilon _2+\beta _2$ is non-zero. To solve Equation $(11)$ on a domain $U$ one can write it as the system: $(12)$ $(\Upsilon _1+\beta _1)f = g_1$, $(\Upsilon +\beta )g_1 = g - (\Upsilon _2+\beta _2)f$.\ First find a fundamental solution ${\cal V}_A$ of Equation $(11)$ for $g=\delta $. 
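The convolution formula $(7)$ can be illustrated in the simplest one-dimensional case $\Upsilon ^s+\beta = d/dx+\beta $ with a real constant $\beta >0$, for which ${\cal E}(x)=H(x)e^{-\beta x}$ (with $H$ the Heaviside function) satisfies $(d/dx+\beta ){\cal E}=\delta $. The grid, the source $g$ and all names below are illustrative.

```python
import math

beta = 1.5
h, half = 0.005, 10.0                   # quadrature step and domain cutoff

def E(s):
    """Fundamental solution of d/dx + beta: H(s) e^{-beta s};
    the jump at s = 0 is replaced by its mean value 1/2."""
    if s > 0:
        return math.exp(-beta * s)
    return 0.5 if s == 0 else 0.0

g = lambda t: math.exp(-t * t)          # smooth, rapidly decaying source

def f(x, h=h, half=half):
    """f = E * g by the rectangle rule on a uniform grid."""
    n = int(round(2 * half / h))
    return sum(E(x - (-half + k * h)) * g(-half + k * h)
               for k in range(n + 1)) * h

# closed form of the same convolution:
# f(x) = e^{-beta x + beta^2/4} (sqrt(pi)/2) (1 + erf(x - beta/2))
x0 = 1.0
exact = (math.exp(-beta * x0 + beta * beta / 4.0)
         * math.sqrt(math.pi) / 2.0 * (1.0 + math.erf(x0 - beta / 2.0)))
```

The discrete convolution reproduces the closed form, i.e. $f' + \beta f = g$ with the decay at $-\infty $ built into ${\cal E}$.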
We have: $(13)$ $f={\cal E}_{\Upsilon _1+\beta _1}*g_1 = {\cal E}_{\Upsilon _2+\beta _2}*(g- (\Upsilon +\beta )g_1)$, consequently, $(13.1)$ ${\cal E}_{\Upsilon _1+\beta _1}*g_1 + {\cal E}_{\Upsilon _2+\beta _2}*((\Upsilon +\beta )g_1) = {\cal E}_{\Upsilon _2+\beta _2}*g$.\ In accordance with $(3-5)$ and 32$(1)$ the identity is satisfied: $[{\cal E}_{\Upsilon _2+\beta _2}*((\Upsilon +\beta )g_1),\phi _0) = [ (\Upsilon + \beta ) ({\cal E}_{\Upsilon _2+\beta _2}*g_1), \phi _0)$. Thus $(13)$ is equivalent to $(14)$ ${\cal E}_{\Upsilon _1+\beta _1}*g_1 + (\Upsilon +\beta ) ({\cal E}_{\Upsilon _2+\beta _2}*g_1) = {\cal E}_{\Upsilon _2+\beta _2}$\ for $g=\delta $, since ${\cal E}_{\Upsilon _2+\beta _2}*\delta = {\cal E}_{\Upsilon _2+\beta _2}$. We consider the Fourier transform $F$ by real variables with the generator ${\bf i}$ commuting with $i_j$ for each $j=0,...,2^v-1$ such that $(F1)$ $(Fg)(y) = \int_{{\bf R}^n} e^{- {\bf i}(y,x)} g(x)dx_1...dx_n$\ for any $g\in L^1({\bf R}^n,{\cal A}_v)$, i.e. $\int_{{\bf R}^n}|g(x)|dx_1...dx_n<\infty $, where $g: {\bf R}^n\to Y$ is an integrable function, $(y,x)=x_1y_1+...+x_ny_n$, $x=(x_1,...,x_n)\in {\bf R}^n$, $x_j\in {\bf R}$ for every $j$. The inverse Fourier transform is: $(F2)$ $(F^{-1}g)(y) = (2\pi )^{-n} \int_{{\bf R}^n} e^{ {\bf i}(y,x)} g(x)dx_1...dx_n$. For a generalized function $f$ from the space ${\cal B}'({\bf R}^n,Y)_l$ its Fourier transform is defined by the formula $(F3)$ $(Ff,\phi ) = (f,F\phi )$, $~ ~ (F^{-1}f,\phi ) = (f,F^{-1}\phi )$. In view of $(2-5)$ the Fourier transform of $(14)$ gives: $(15)$ $ [F({\cal E}_{\Upsilon _1+\beta _1})] [F(g_1)]+ \sum_{j=0}^{2^v-1} [F((\Upsilon +\beta )_j {\cal E}_{\Upsilon _2+\beta _2})] [F(g_1)] i_j =F({\cal E}_{\Upsilon _2+\beta _2})$\ for $g=\delta $. With generators $i_0,...,i_{2^v-1}, i_0{\bf i},...,i_{2^v-1}{\bf i}$ the latter equation gives the linear system of $2^{v+1}$ equations over the real field, or $2^{v+2}$ equations when $Y=({\cal A}_v)_{\bf H}$. 
From it $F(g_1)$ can be found, and using the inverse transform $F^{-1}$ the generalized function $g_1$ is obtained, since $F(g) = F(g_0)i_0+...+F(g_{2^v-1})i_{2^v-1}$ and $F^{-1}(g) = F^{-1}(g_0)i_0+...+F^{-1}(g_{2^v-1})i_{2^v-1}$ (see also the Fourier transform of real and complex generalized functions in [@gelshil; @vladumf]). Then $(16)$ ${\cal V}_A = {\cal E}_{\Upsilon _1+\beta _1}*g_1$ and $f={\cal V}_A*g$ gives the solution of $(11)$, where $g_1$ was calculated from $(15)$. Let $\pi ^v_r : ({\cal A}_v)_{\bf H}\to ({\cal A}_r)_{\bf H}$ be the $\bf R$-linear projection operator defined as the sum of projection operators $\pi _0+...+\pi _{2^r-1}$, where $\pi _j: ({\cal A}_v)_{\bf H}\to {\bf H}i_j$, $(17)$ $\pi _j(h)=h_ji_j$, $ ~ h = \sum_{j=0}^{2^v-1} h_ji_j$, $h_j\in {\bf H}_{J,K,L}$, that gives the corresponding restrictions when $h_j\in {\bf C}_{\bf i}$ or $h_j\in {\bf R}$ for $j=0,...,2^r-1$. Indeed, Formulas 2$(5,6)$ have the natural extension on $({\cal A}_v)_{\bf H}$, since the generators $J, ~ K$ and $L$ commute with $i_j$ for each $j$. Finally, the restriction from the domain in ${\cal A}_v$ onto the initial domain of real variables in the real shadow and the extraction of $\pi ^v _r\circ f\in {\cal A}_r$ with the help of Formulas 2$(5,6)$ gives the reduction of a solution from ${\cal A}_v$ to ${\cal A}_r$. Theorem 29, Proposition 32 and Corollaries 30, 31 together with the formulas of this section provide the algorithm for the subsequent resolution of partial differential equations for $s, s-1,...,2$, because the principal parts of the operators $A_2$ on the final step have constant coefficients. A residue term $Q$ of the first order can be integrated along a path using a non-commutative line integration over the Cayley-Dickson algebra [@ludoyst; @ludfov]. [**34. 
Multiparameter transforms of generalized functions.**]{} If $\phi \in {\cal B}({\bf R}^n,Y)$ and $g\in {\cal B}'({\bf R}^n,Y)_l$ (see §§19 and 33) we put $(1)$ $\sum_{j=0}^{2^v-1} [{\cal F}^n(g_j;u;p;\zeta ),\phi )i_j := \sum_{j=0}^{2^v-1} [g_j,{\cal F}^n(\phi ;u;p;\zeta ))i_j$ or shortly $(2)$ $\sum_{j=0}^{2^v-1}[g_je^{-u(p;t;\zeta)},\phi )i_j = \sum_{j=0}^{2^v-1} [g_j, \phi e^{-u(p;t;\zeta)}) i_j$.\ If the support $supp (g)$ of $g$ is contained in a domain $U$, then it is sufficient to take a base function $\phi $ with the restriction $\phi |_U \in {\cal B}(U,Y)$ and any $\phi |_{{\bf R}^n\setminus U}\in C^{\infty }$. [**34.1. Remark.**]{} It is possible to use Theorem 29, Corollaries 30 and 31, Proposition 32 and §33 for solving certain differential equations with variable coefficients. For this purpose one can present an operator $A$ as the composition $A=\Upsilon \Upsilon _1 + Q$, where $ord (A) = ord (\Upsilon ) + ord (\Upsilon _1)$, $ord (Q) \le ord (A) -1$, $\Upsilon $ and $\Upsilon _1$ are operators with variable coefficients, and $\Upsilon ^*\Upsilon $ and $\Upsilon ^*_1\Upsilon _1$ are elliptic operators whose principal symbols at least have constant coefficients. Then use Formulas 33$(1-16)$ to find fundamental solutions ${\cal E}_{\Upsilon }$, ${\cal E}_{\Upsilon _1}$ and ${\cal E}_A$ or iterate this procedure (see also §35). A generalization of Feynman’s formula over the Cayley-Dickson algebras for the second order partial differential operators with the first order addendum $Q$ with variable coefficients from [@ludmmas09] can also be used. [**35. Examples.**]{} Let $(1)$ $Af(t)=\sum_{j=1}^n (\partial ^2f(t)/\partial t_j^2)c_j$\ be the operator with constant coefficients $c_j\in {\cal A}_r$, $|c_j|=1$, by the variables $t_1,...,t_n$, $n\ge 2$. We suppose that $c_j$ are such that the minimal subalgebra $alg_{\bf R}(c_j,c_k)$ containing $c_j$ and $c_k$ is alternative for each $j$ and $k$ and $|(...(c_1^{1/2}c_2^{1/2})...)c_n^{1/2}|=1$. 
Since $(2)$ $\partial f(t)/\partial t_j = \sum_{k=1}^n (\partial f(t(s))/\partial s_k) (\partial s_k/\partial t_j) = \sum_{k=1}^j \partial f(t(s))/\partial s_k$, the operator $A$ takes the form $(3)$ $Af=\sum_{j=1}^n (\sum_{1\le k, b \le j} (\partial ^2 f(t(s))/\partial s_k\partial s_b)) c_j,$\ where $s_j=t_j+...+t_n$ for each $j$. Therefore, by Theorem 12 and Formulas 25$(SO)$ and 28$(6)$ we get: $(4)$ ${\cal F}^n (Af;u;p;\zeta ) = \sum_{j=1}^n \{ [{\sf R}_{e_j}(p)]^2 F^n_u (p;\zeta ) \} c_j $ for $u(p,t;\zeta )$ either in ${\cal A}_r$ spherical or ${\cal A}_r$ Cartesian coordinates with the corresponding operators ${\sf R}_{e_j}(p)$ (see also Formulas 25$(1.1,1.2)$).\ On the other hand, $(5)$ ${\cal F}^n(\delta ;u;p;\zeta ) =e^{-u(p,0;\zeta )}=e^{-u(0,0;\zeta )}$ in accordance with Formula 20$(2)$. The delta function $\delta (t)$ is invariant relative to any invertible linear operator $C: {\bf R}^n\to {\bf R}^n$ with the determinant $|\det (C)|=1$, since $$\int_{{\bf R}^n} \delta (Cx) \phi (x)dx = \int_{{\bf R}^n} \delta (y) \phi (C^{-1}y) |\det (C)| dy = \phi (C^{-1}0) = \phi (0).$$ Thus $(5)$ ${\cal F}^n (C(Af);u;p;\zeta ) = {\cal F}^n (Af;u;p;\zeta )$\ for any fundamental solution $f$, where $Cg(t) := g(Ct)$, $Af=\delta $. If $C: {\bf R}^n\to {\bf R}^n$ is an invertible linear operator and $\xi =Ct$, $q=Cp$, $\zeta ' =C\zeta $, then $t=C^{-1}\xi $, $p=C^{-1}q$ and $\zeta =C^{-1} \zeta '$. In the multiparameter noncommutative transform ${\cal F}^n$ there are the corresponding variables $(t_j,p_j,\zeta _j)$. This holds in particular for the operator $C(t_1,...,t_n)=(s_1,...,s_n)$. 
The operator $C^{-1}$ transforms the right side of Formula $(4)$, when it is written in the ${\cal A}_r$ spherical coordinates, into $\sum_{j=1}^n \{ (p_0 + q_j{\sf S}_{e_j})^2 F^n_u (q;\zeta ) \} c_j .$ The Cayley-Dickson number $q=q_0+q_1i_1+...+q_ni_n$ can be written as $q=q_0+q_MM$, where $|M|=1$, $M$ is a purely imaginary Cayley-Dickson number, $q_M\in {\bf R}$, $q_MM=q_1i_1+...+q_ni_n$, since $q_0=Re (q)$. After a suitable automorphism $\theta : {\cal A}_r\to {\cal A}_r$ we can take $\theta (q) = q_0+ q_Mi_1$, so that $\theta (x)=x$ for any real number. The functions $[\sum_{j=1}^n q_j^2 {\sf S}_{e_j}^2 c_j ]$ and $[\sum_{j=1}^n p_j^2 {\sf S}_{e_j}^2 c_j ]$ are even by each variable $q_j$ and $p_j$ respectively. Therefore, we deduce in accordance with $(5)$ and 2$(3,4)$ and Corollary 6.1 with parameters $p_0=0$ and $\zeta =0$ and $c_j\in \{ -1, 1 \} $ for each $j$ that $(6)$ $({\cal F}^n)^{-1} (1/[\sum_{j=1}^n \{ \sum_{1\le k, b \le j} p_k{\sf S}_{e_k} p_b{\sf S}_{e_b} \} c_j ];u;y;\zeta ) = - [g,e^{ N ([y],[q])})$\ in the ${\cal A}_r$ spherical coordinates, where $g=1/[\sum_{j=1}^n q_j^2 c_j ]$, or $(6.1)$ $({\cal F}^n)^{-1} (1/[\sum_{j=1}^n \{ p_j^2{\sf S}_{e_j}^2 \} c_j ];u;y;\zeta ) = - [g,e^{ N ([y],[p])})$\ in the ${\cal A}_r$ Cartesian coordinates, where $g=1/[\sum_{j=1}^n p_j^2 c_j ]$, $N=y/|y|$ for $y\ne 0$, $N=i_1$ for $y=0$, $y=y_1i_1+...+y_ni_n\in {\cal A}_r$, $[y]=(y_1,...,y_n)\in {\bf R}^n$, $([y],[q])= \sum_{j=1}^n y_jq_j$, since ${\sf S}^2_{e_k} \cos (\phi + \zeta _k) = \cos (\phi + \zeta _k + \pi ) = - \cos (\phi + \zeta _k)$ and ${\sf S}^2_{e_k} \sin (\phi + \zeta _k) = \sin (\phi + \zeta _k + \pi ) = - \sin (\phi + \zeta _k)$ for each $k$. Particularly, we take $c_j=1$ for each $j=1,...,k_+$ and $c_j=-1$ for any $j=k_+ + 1,...,n$, where $1\le k_+\le n$. 
Thus the inverse Laplace transform for $q_0=0$ and $\zeta =0$ in accordance with Formulas 2$(1-4)$ reduces to $(7)$ $({\cal F}^n)^{-1} (1/[\sum_{j=1}^n \{ \sum_{1\le k, b \le j} p_k{\sf S}_{e_k} p_b{\sf S}_{e_b} \} c_j ];u;y;\zeta ) = $ $(2\pi )^{-n} \int_{{\bf R}^n} \exp ( {\bf i} (q_1y_1+...+q_ny_n) )(1/ [\sum_{j=1}^{k_+} q_j^2 - \sum_{j=k_+ +1}^n q_j^2])dq_1...dq_n$\ in the ${\cal A}_r$ spherical coordinates and $(7.1)$ $({\cal F}^n)^{-1} (1/[\sum_{j=1}^n p_j^2 {\sf S}_{e_j}^2 c_j ];u;y;\zeta ) = $ $(2\pi )^{-n} \int_{{\bf R}^n} \exp ( {\bf i} (p_1y_1+...+p_ny_n) )(1/ [\sum_{j=1}^{k_+} p_j^2 - \sum_{j=k_+ +1}^n p_j^2])dp_1...dp_n$\ in the ${\cal A}_r$ Cartesian coordinates,\ since for any even function its cosine Fourier transform coincides with the Fourier transform. The inverse Fourier transform $(F^{-1}g)(x)=(2\pi )^{-n}(Fg)(-x)=:\Psi _n$ of the functions $g=1/(\sum_{j=1}^n z_j^2)$ for $n\ge 3$ and ${\cal P}(1/(\sum_{j=1}^2 z_j^2))$ for $n=2$ in the class of the generalized functions is known (see [@gelshil] and §§9.7 and 11.8 [@vladumf]) and gives $(8)$ $\Psi _n(z_1,...,z_n) = C_n (\sum_{j=1}^n z_j^2)^{1-n/2}$ for $3\le n$, where $C_n = - 1/[(n-2) \sigma _n]$, $\sigma _n = 2\pi ^{n/2}/\Gamma (n/2)$ denotes the surface area of the unit sphere in ${\bf R}^n$, $\Gamma (x)$ denotes Euler’s gamma-function, while $(9)$ $\Psi _2(z_1,z_2) = C_2 \ln (\sum_{j=1}^2 z_j^2)$ for $n=2$, where $C_2= 1/(4\pi )$.\ Thus the technique of §2 over the Cayley-Dickson algebra has made it possible to obtain the fundamental solution of the Laplace operator. For the function $(10)$ $P(x) = \sum_{j=1}^{k_+} x_j^2 - \sum_{j=k_+ +1}^n x_j^2$ with $1\le k_+ <n$ the generalized functions $(P(x)+{\bf i}0)^{\lambda }$ and $(P(x)-{\bf i}0)^{\lambda }$ are defined for any $\lambda \in {\bf C} = {\bf R}\oplus {\bf i}{\bf R}$ (see Chapter 3 in [@gelshil]). 
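Formulas $(8,9)$ can be checked numerically: away from the origin $\Psi _3$ and $\Psi _2$ must be harmonic, while the constant $C_3=-1/(4\pi )$ is fixed by unit flux of $\nabla \Psi _3$ through a sphere around the origin. The finite-difference sketch below (the names `laplacian`, `psi3`, `psi2` are illustrative) verifies both.

```python
import math

def laplacian(fn, point, h=1e-3):
    """Central finite-difference Laplacian of fn: R^n -> R at a point."""
    f0 = fn(point)
    total = 0.0
    for j in range(len(point)):
        plus = list(point); plus[j] += h
        minus = list(point); minus[j] -= h
        total += (fn(plus) - 2.0 * f0 + fn(minus)) / h**2
    return total

# Psi_n = C_n (sum z_j^2)^{1-n/2} with C_3 = -1/(4 pi), and
# Psi_2 = (1/(4 pi)) ln(z_1^2 + z_2^2)
psi3 = lambda p: -1.0 / (4.0 * math.pi * math.sqrt(sum(t * t for t in p)))
psi2 = lambda p: math.log(p[0] ** 2 + p[1] ** 2) / (4.0 * math.pi)

lap3 = laplacian(psi3, [1.0, 2.0, 2.0])   # harmonic away from the origin
lap2 = laplacian(psi2, [0.5, -1.5])

# unit flux through the sphere of radius r: 4 pi r^2 * d(psi3)/dr = 1
r, dh = 2.0, 1e-5
dpsi_dr = (psi3([r + dh, 0.0, 0.0]) - psi3([r - dh, 0.0, 0.0])) / (2.0 * dh)
flux = 4.0 * math.pi * r * r * dpsi_dr
```

The unit flux reflects the normalization $\Delta \Psi _3=\delta $ in the class of generalized functions.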
The function $P^{\lambda }$ has the cone surface $P(z_1,...,z_n)=0$ of zeros, so that for the correct definition of generalized functions corresponding to $P^{\lambda }$ the generalized functions $(11)$ $(P(x)+ c{\bf i}0)^{\lambda }=\lim_{0< c\epsilon , \epsilon \to 0 } (P(x)^2 + \epsilon ^2)^{\lambda /2} \exp ({\bf i} \lambda arg (P(x) + {\bf i}c\epsilon ))$\ with either $c=-1$ or $c=1$ were introduced. Therefore, the identity $(12)$ $F(\Psi _{k_+,n-k_+})(x) = - (\sum_{j=1}^{k_+}x_j^2- \sum_{j=k_+ +1}^n x_j^2) [F(\Psi _{k_+,n-k_+})(x)]^2$ or $(13)$ $F(\Psi ) = - 1/(P(x) + c {\bf i} 0)$ follows, where $c=-1$ or $c=1$. The inverse Fourier transform in the class of the generalized functions is: $(14)$ $F^{-1}((P(x)+c{\bf i}0)^{\lambda })(z_1,...,z_n) = \exp (- \pi c(n-k_+){\bf i}/2) 2^{2\lambda +n} \pi ^{n/2} \Gamma (\lambda +n/2)(Q(z_1,...,z_n) - c{\bf i}0)^{- \lambda - n/2}/[\Gamma (-\lambda )|D|^{1/2}]$\ for each $\lambda \in {\bf C}$ and $n\ge 3$ (see §IV.2.6 [@gelshil]), where $D=\det (g_{j,k})$ denotes the discriminant of the quadratic form $P(x)=\sum_{j,k=1}^n g_{j,k}x_jx_k$, while $Q(z)= \sum_{j,k=1}^n g^{j,k}z_jz_k$ is the dual quadratic form, so that $\sum_{k=1}^n g^{j,k}g_{k,l}=\delta ^j_l$ for all $j, l$; $\delta ^j_l=1$ for $j=l$ and $\delta ^j_l=0$ for $j\ne l$.
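Definition $(11)$ can be illustrated numerically with the complex generator ${\bf i}$ (a sketch only; the function name and the small $\epsilon $ used to approximate the limit are illustrative):

```python
import cmath

def P_plus_i0_pow(P, lam, c, eps=1e-9):
    """Approximate (P + c*i*0)^lam via the limit definition (11)."""
    z = complex(P, c * eps)
    return (P * P + eps * eps) ** (lam / 2) * cmath.exp(1j * lam * cmath.phase(z))

# For P > 0 both boundary values agree with the ordinary real power.
assert abs(P_plus_i0_pow(2.0, -1.0, +1) - 0.5) < 1e-6

# For P < 0 and lam = -1 the two choices of c give complex-conjugate values,
# matching the two complex-conjugate fundamental solutions below.
v_plus = P_plus_i0_pow(-2.0, -1.0, +1)
v_minus = P_plus_i0_pow(-2.0, -1.0, -1)
assert abs(v_plus - v_minus.conjugate()) < 1e-6
```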
In the particular case of $n=2$ the inverse Fourier transform is given by the formula: $(15)$ $F^{-1}((P(x)+c{\bf i}0)^{-1})(z_1,z_2) = - 4^{-1}|D|^{-1/2} \exp (- \pi c(n- k_+){\bf i}/2) \ln (Q(z_1,z_2) - c{\bf i}0).$ Making the inverse Fourier transform $F^{-1}$ of the function $- 1/(P(x) + {\bf i} 0)$ in this particular case of $\lambda =-1$ we get two complex conjugate fundamental solutions $(16)$ $\Psi _{k_+,n-k_+} (z_1,...,z_n) = - \exp (\pi c(n-k_+) {\bf i}/2)\Gamma ((n/2) -1) (Q(z_1,...,z_n) + c{\bf i}0)^{1-(n/2)}/(4\pi ^{n/2})$ for $3\le n$ and $1\le k_+<n$, while $(17)$ $\Psi _{1,1}(z_1,z_2) = 4^{-1} \exp (\pi c(n-k_+){\bf i}/2) \ln (Q(z_1,z_2) + c{\bf i}0) $ for $n=2$, where either $c= 1$ or $c=-1$. Generally for the operator $A$ given by Formula $(1)$ we get $P(x) = P_0(x) + P_i(x)$, where $P_0(x)=\sum_{j=1}^n x_j^2Re (c_j)$ and $P_i(x)= \sum_{j=1}^n x_j^2 Im(c_j)$ are the real and imaginary parts of $P$, $Im (z) = z- Re (z)$ for any Cayley-Dickson number $z$. Take ${\bf l} =i_{2^r}$ and consider the form $P(x)+\epsilon c{\bf l}$ with $\epsilon \ne 0$ and either $c=1$ or $c=-1$; then $P(x)+\epsilon c{\bf l}\ne 0$ for each $x\in {\bf R}^n$. We put $(18)$ $(P(x)+ c{\bf l}0)^{\lambda }=\lim_{0< c\epsilon , \epsilon \to 0 } (P(x)^2 + \epsilon ^2)^{\lambda /2} \exp ({\bf i} \lambda Arg (P(x) + {\bf l}c\epsilon )).$ For $\lambda \in {\bf R}$ the generalized function $(P(x)^2 + \epsilon ^2)^{\lambda /2} \exp ({\bf i} \lambda Arg (P(x) + {\bf l}c\epsilon ))$ is non-degenerate and its Fourier transform is defined. By our definition the limit $\lim_{0< c\epsilon , \epsilon \to 0 }$ gives the Fourier transform of $(P(x)+ c{\bf l}0)^{\lambda }$.
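As an illustration (not in the original text), Formula $(16)$ can be specialized to the d'Alembert operator: take $n=4$ and $k_+=1$, so that $P(x)=x_1^2-x_2^2-x_3^2-x_4^2$, $|D|=1$, and the dual form coincides with $P$. Since $\Gamma ((n/2)-1)=\Gamma (1)=1$, Formula $(16)$ gives

```latex
% Worked instance of (16): n = 4, k_+ = 1, so n - k_+ = 3 and
% Q(z_1,...,z_4) = z_1^2 - z_2^2 - z_3^2 - z_4^2.
\Psi _{1,3}(z_1,...,z_4) = - \exp (3\pi c {\bf i}/2)\,
  \bigl( Q(z_1,...,z_4) + c{\bf i}0 \bigr)^{-1} / (4\pi ^{2}),
\qquad c \in \{ -1, 1 \} ,
```

that is, $\Psi _{1,3} = c\,{\bf i}\,(Q(z) + c{\bf i}0)^{-1}/(4\pi ^2)$, since $\exp (3\pi c{\bf i}/2)=-c{\bf i}$. Up to the chosen normalization this agrees with the classical pair of complex-conjugate fundamental solutions of the wave operator (cf. [@gelshil]).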
Since $(19)$ $c_j (\beta _j+ \sum_{1\le k\le n, k\ne j} c_j^{-1}c_k \beta _k) = \sum_{j=1}^nc_j\beta _j$\ for all $\beta _j\in \bf R$ and any $1\le j \le n$ in accordance with the conditions imposed on $c_j$ at the beginning of this section and ${\bf i}N_j=N_j{\bf i}$ for each $j$, the Fourier transform with the generator $\bf i$ can be accomplished successively in each variable using Identity $(19)$. The transform $x_j\mapsto (c_j)^{1/2}x_j$ is diagonal and $|(...(c_1^{1/2}c_2^{1/2})...)c_n^{1/2}|=1$, so we can put $|D|=1$. Each Cayley-Dickson number can be presented in the polar form $z=|z|e^{\phi M}$, $\phi \in {\bf R}$, $|\phi |\le \pi $, $M$ is a purely imaginary Cayley-Dickson number $|M|=1$, $Arg (z) = (\phi + 2\pi k)M$ has a countable number of values, $k\in {\bf Z}$ (see §3 in [@ludoyst; @ludfov]). Therefore, we choose the branch $z^{1/2} = |z|^{1/2} \exp ( (Arg z)/2)$, $|z|^{1/2}>0$ for $z\ne 0$, with $|Arg (z)|\le \pi $, $Arg (M) = M\pi /2$ for each purely imaginary $M$ with $|M|=1$. We treat the iterated integral as in §6, i.e. with the same order of brackets. Taking initially $c_j\in {\bf R}$ and considering the complex analytic extension of the formulas given above in each complex plane ${\bf R}\oplus N_j{\bf R}$ by $c_j$ for each $j$ by induction from $1$ to $n$, when $c_j$ is not real in the operator $A$, $Im (c_j)\in {\bf R}N_j$, we get the fundamental solutions for $A$ with the form $(P(x)+ c{\bf l}0)^{\lambda }$ instead of $(P(x)+ c{\bf i}0)^{\lambda }$ with multipliers $(...(c_1^{c/2}c_2^{c/2})...)c_n^{c/2}$ instead of $\exp (\pi c(n-k_+) {\bf i}/2)$ as above and putting $|D|=1$.
Thus $(20)$ $\Psi (z_1,...,z_n) = - \Gamma ((n/2) -1) (P^*(z_1,...,z_n) - c{\bf l}0)^{1-(n/2)}[(...(c_1^{c/2}c_2^{c/2})...)c_n^{c/2}]^*/(4\pi ^{n/2})$ for $3\le n$, while $(21)$ $\Psi (z_1,z_2) = 4^{-1} [c_1^{c/2} c_2^{c/2}]^* Ln (P^*(z_1,z_2) - c{\bf l}0)$ for $n=2$,\ since $c_j^*= c_j^{-1}$ for $|c_j|=1$, $y_jq_j=y_j(c_j^{c/2})^*q_jc_j^{1/2}$, while $[(...(dc_1^{c/2}q_1 dc_2^{c/2}q_2)...)dc_n^{c/2}q_n] = dq_1 ...dq_n [(...(c_1^{c/2}c_2^{c/2})...)c_n^{c/2}]$ and $|(...(c_1^{c/2}c_2^{c/2})...)c_n^{c/2}|=1$. [**36. Partial differential equations with polynomial real coefficients.**]{} Let $(1)$ $A= \sum_{|\alpha |\le m} a_{\alpha }(q) \partial ^{\alpha }_q$, $a_{\alpha }(q) = \sum_{\beta } a_{\alpha , \beta } q^{\beta }$, $q^{\beta } := q_1^{\beta _1}...q_n^{\beta _n}$, $a_{\alpha , \beta }$ and $f$ have values as in §28, and let $Af$ be an original. Using the transform in the ${\cal A}_r$ Cartesian coordinates we take $q_j=t_j$ for each $j$, while using the transform in ${\cal A}_r$ spherical coordinates we choose $q_j=s_j(t)$ for each $j$. Then $(2)$ ${\cal F}^n(Af;u;p;\zeta ) = \sum_{\beta } (-1)^{|\beta |} {\sf S}_{\beta }(p) \partial ^{\beta }_p$ $ [\sum_{\alpha } a_{\alpha ,\beta } ( [p_0+p_1{\sf S}_{e_1}]^{\alpha _1} p_2^{\alpha _2} {\sf S}^{\alpha _2}_{e_2}...p_n^{\alpha _n} {\sf S}^{\alpha _n}_{e_n})] F^n(p;\zeta ) = G^n(p;\zeta )$\ in the ${\cal A}_r$ spherical coordinates and $(2.1)$ ${\cal F}^n(Af;u;p;\zeta ) = \sum_{\beta } (-1)^{|\beta |} {\sf S}_{\beta }(p) \partial ^{\beta }_p$ $ (\sum_{\alpha } a_{\alpha ,\beta } [p_0+p_1{\sf S}_{e_1}]^{\alpha _1} [p_0+p_2{\sf S}_{e_2}]^{\alpha _2}...[p_0+p_n {\sf S}_{e_n}]^{\alpha _n}) F^n(p;\zeta ) = G^n(p;\zeta )$\ in the ${\cal A}_r$ Cartesian coordinates (see Theorems 12 and 13 above). It may happen that the transformed differential equation is simpler than the initial one $(3)$ $Af=g$.\ For example, when the coefficients depend only on the one variable $t_n$, the transformed differential equation is ordinary and linear. [**37.
Noncommutative transforms of products and convolutions of functions in the ${\cal A}_r$ spherical coordinates.**]{} For any Cayley-Dickson number $z=z_0i_0+...+z_{2^r-1}i_{2^r-1}$ we consider projections $(1)$ $\theta _j(z)=z_j$, $z_j\in {\bf R}$ or ${\bf C}_{\bf i}$ or ${\bf H}_{J,K,L}$, $j=0,...,2^r-1$, $\theta _j(z) = \pi _j(z)i_j^*$,\ given by Formulas 2$(5,6)$ and 33$(17)$. We define the following operators $(2)$ ${\cal R}_{\alpha ,j} (F^n(p;\zeta )) := F^n(p_0,(-1)^{\alpha _1} p_1,...,(-1)^{\alpha _{j+1-\delta _{j,n}}} p_{j+1-\delta _{j,n}}, p_{j+2- \delta _{j,n}},$\ $...,p_n; \zeta _0, (-1)^{\alpha _1} \zeta _1 +\pi \alpha _1/2,...,(-1)^{\alpha _{j+1-\delta _{j,n}}} \zeta _{j+1- \delta _{j,n}} +\pi \alpha _{j+1- \delta _{j,n}} /2, \zeta _{j+2- \delta _{j,n}},...,\zeta _n)$\ on images $F^n$, $2^{r-1}\le n \le 2^r-1$, $j=0,...,n$. For $\alpha _j$ and $\beta _j\in \{ 0, 1\} $ their sum $\alpha _j+\beta _j$ is considered by $(mod ~ 2)$, i.e. in the ring ${\bf Z}_2={\bf Z}/(2{\bf Z})$, for two vectors $\alpha $ and $\beta \in \{ 0, 1 \} ^{2^r-1}$ their sum is considered componentwise in ${\bf Z}_2$. Let $$(3)\quad {\cal F}^n(f;u;p;\zeta ) = \sum_{j=0}^n \sum_{k=0}^{2^r-1}\theta _j ({\cal F}^n(\theta _k(f);u;p;\zeta ))i_ki_j,$$ also $F^n_j(p;\zeta ) := \sum_{k=0}^{2^r-1}\theta _j ({\cal F}^n(\theta _k(f);u;p;\zeta ))i_k$ for an original $f$, where $u(p,t;\zeta )$ is given by Formulas 2$(1,2,2.1)$. If $f$ is real or ${\bf C}_{\bf i}$ or ${\bf H}_{J,K,L}$ -valued, then $F^n_j=\theta _j(F^n)$. 
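Before the theorem it may help to recall the classical one-parameter prototype of the convolution identity $(4.1)$ proved below: the ordinary Laplace transform turns convolution into a pointwise product. A symbolic sketch (illustrative only, with the hypothetical sample originals $f=e^{-t}$, $g=e^{-2t}$):

```python
import sympy as sp

t, tau, p = sp.symbols('t tau p', positive=True)
f = sp.exp(-t)
g = sp.exp(-2*t)

# One-dimensional convolution on [0, t]
conv = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))
conv = sp.expand(conv)

# Laplace transform turns the convolution into a product of transforms
L = lambda h: sp.laplace_transform(h, t, p, noconds=True)
assert sp.simplify(L(conv) - L(f)*L(g)) == 0
```

In the noncommutative multiparameter setting this simple product is replaced by the sign-flip and phase-shift operators ${\cal R}_{\alpha ,j}$ acting on the components $F^n_j$, $G^n_j$.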
[**Theorem.**]{} *If $f$ and $g$ are two originals, then* $(4)$ ${\cal F}^n(fg;u;p;\zeta ) = \sum_{j=0}^n \sum_{\alpha , \beta \in \{ 0, 1 \} ^n} (-1)^{\alpha _{j+1}(1-\delta _{j+1,n})}({\cal R}_{\alpha ,j} F^n_j(p-q_0;\zeta -\eta ))*({\cal R}_{\beta ,j} G^n_j(p+q_0-p_0;\eta ))i_j,$ $(4.1)$ ${\cal F}^n(f*g;u;p;\zeta ) = \sum_{j=0}^n \sum_{\alpha , \beta \in \{ 0, 1 \} ^n} (-1)^{\alpha _{j+1}(1-\delta _{j+1,n})}({\cal R}_{\alpha ,j} F^n_j(p;\zeta -\eta )) ({\cal R}_{\beta ,j} G^n_j(p;\eta ))i_j,$\ whenever ${\cal F}^n(fg)$, ${\cal F}^n(f)$, ${\cal F}^n(g)$ exist, where $1\le n \le 2^r-1$, $2\le r$; $\alpha _k + \beta _k =1 ~ (mod ~ 2)$ for $k\le j$ or $k=j+1=n$, $\alpha _k + \beta _k =0 ~ (mod ~ 2)$ for $k=j+1<n$ and $\alpha _k=\beta _k=0$ for $k>j+1$ in the $j$-th addendum on the right of Formulas $(4,4.1)$; the convolution is by $(p_1,...,p_n)$ in $(4)$, at the same time $q_0\in {\bf R}$ and $\eta \in {\cal A}_r$ are fixed. [**Proof.**]{} The product of two originals can be written in the form: $(5)$ $f(t)g(t)= \sum_{j=0}^{2^r-1} \sum_{k,l: ~ i_ki_l=i_j} \theta _k(f(t)) \theta _l(g(t)) i_j$.\ The functions $\theta _k(f)$ and $\theta _l(g)$ are real or ${\bf C}_{\bf i}$ or ${\bf H}_{J,K,L}$ valued respectively. The non-commutative transform of $fg$ is: $$(6)\quad {\cal F}^n(fg)(p;\zeta ) = \int_{{\bf R}^n} f(t)g(t)\exp ( - u(p,t;\zeta ))dt =$$ $$\{ \int_{{\bf R}^n} (f(t)g(t)) e^{-p_0s_1} \cos (p_1s_1+\zeta _1 ) i_0 dt \} +$$ $$\{ \sum_{j=2}^{n-1} \int_{{\bf R}^n} (f(t)g(t)) e^{-p_0s_1} \sin (p_1s_1+\zeta _1) ... \sin (p_{j-1}s_{j-1}+\zeta _{j-1}) \cos (p_js_j+\zeta _j ) i_{j-1} dt \} +$$ $$\int_{{\bf R}^n} (f(t)g(t)) e^{-p_0s_1} \sin (p_1s_1+\zeta _1) ...
\sin (p_ns_n+\zeta _n) i_n dt.$$ On the other hand, $$(7)\quad \int_{{\bf R}^n} f(t)g(t) e^{-p_0s_1 + {\bf i} \sum_{j=1}^k (p_js_j+\zeta _j)\gamma _j} dt =$$ $$\int_{{\bf R}^n}(\int_{{\bf R}^n} f(t) e^{-(p_0-q_0)s_1 + {\bf i}\sum_{j=1}^k ((p_j-q_j)s_j + \zeta _j - \eta _j)\gamma _j} dt) (\int_{{\bf R}^n} g(t) e^{-q_0s_1 + {\bf i}\sum_{j=1}^k (q_js_j+\eta _j)\gamma _j}dt)dq,$$ where $k=1,2,...,n$, $ ~ \gamma _j\in \{ -1, 1 \} $. Therefore, using Euler’s formula $e^{{\bf i} \phi } = \cos (\phi ) + {\bf i} \sin (\phi )$ and the trigonometric formulas $\cos (\phi + \psi ) = \cos (\phi ) \cos (\psi ) - \sin (\phi )\sin (\psi )$, $\quad \sin (\phi + \psi ) = \sin (\phi ) \cos (\psi ) + \cos (\phi )\sin (\psi )$ for all $\phi , ~ \psi \in {\bf R}$, and Formulas $(6,7)$, we deduce expressions for $\theta _j({\cal F}^n(fg))$. We get the integration by $q_1,...,q_n$, which gives convolutions by the $p_1,...,p_n$ variables. Here $q_0\in {\bf R}$ and $\eta \in {\cal A}_r$ are any marked numbers. Thus from Formulas $(5-7)$ and 2$(1,2,2.1,4)$ we deduce Formula $(4)$. Moreover, one certainly has $$(8)\quad \int_{{\bf R}^n} (f*g)(t) e^{-p_0s_1 + {\bf i} \sum_{j=1}^k (p_js_j+\zeta _j)\gamma _j} dt =$$ $$(\int_{{\bf R}^n} f(t) e^{- p_0s_1 + {\bf i}\sum_{j=1}^k (p_js_j +\zeta _j - \eta _j)\gamma _j} dt) (\int_{{\bf R}^n} g(t) e^{-p_0s_1 + {\bf i}\sum_{j=1}^k (p_js_j+\eta _j)\gamma _j}dt)$$ for each $1\le k\le n$, $\gamma _j \in \{ -1, 1 \} $, since $s_j(t) = s_j(t-\tau ) + s_j(\tau )$ for all $j=1,...,n$ and $t, ~ \tau \in {\bf R}^n$. Thus from Relations $(6,8)$ and 2$(1,2,2.1,4)$ and Euler’s formula one deduces expressions for $\theta _j({\cal F}^n(f*g))$ and Formula $(4.1)$. [**38. Moving boundary problem.**]{} Let us consider a boundary problem $(1)$ $Af=g$ in the half-space $t_n\ge \phi (t_n)$, where $\phi (0)=0$ and $\phi (t_n)<t_n$ for each $0\le t_n\in {\bf R}.$ Suppose that the function $t_n-\phi (t_n) =: \psi (t_n)$ is differentiable and bijective. 
For example, if $0<v<1$ and $\phi (t_n)=vt_n$, then the boundary is moving with the speed $v$. Make the change of variables $y_n = \psi (t_n)$, $y_1 = t_1,$...,$y_{n-1} = t_{n-1}$, then $t_n = \psi ^{-1}(y_n)$ and $dt_n = ds_n = (dt_n/dy_n) dy_n$ and due to Theorem 25 we infer that $$(2)\quad {\cal F}^n (\sum_{|\alpha |\le m} {\bf b}_{\alpha } \partial ^{\alpha }_s \chi _{y_n\ge 0} f(t);p;\zeta ) = \sum_{|\alpha |\le m, 0\le q_n\le \alpha _n-1} {\bf b}_{\alpha }(\delta _{0,{\alpha _n}} -1)$$ $$(p_0+{\sf S}_{e_1}p_1)^{\alpha _1}p_2^{\alpha _2}...p_{n-1}^{\alpha _{n-1}}p_n^{\alpha _n-q_n-1} {\sf S}_{\alpha - \alpha _1e_1- (q_n+1) e_n}{\cal F}^{n-1,y^n}(\partial ^{q_n}_{t_n}w(y),u(p,(y^n);\zeta );p;\zeta )$$ $$+ \sum_{|\alpha |\le m} {\bf b}_{\alpha } (p_0+{\sf S}_{e_1}p_1)^{\alpha _1}p_2^{\alpha _2}...p_n^{\alpha _n} {\sf S}_{\alpha - \alpha _1e_1} {\cal F}^n( \chi _{y_n\ge 0} (y) w(y);p;\zeta ) = G^n(p;\zeta )$$ in the ${\cal A}_r$ spherical coordinates and $$(2.1)\quad {\cal F}^n (\sum_{|\alpha |\le m} {\bf a}_{\alpha } \partial ^{\alpha }_t \chi _{y_n\ge 0} f(t);p;\zeta ) = \sum_{|\alpha |\le m, 0\le q_n\le \alpha _n-1} {\bf a}_{\alpha }(\delta _{0,{\alpha _n}} -1)$$ $$(p_0+{\sf S}_{e_1}p_1)^{\alpha _1}(p_0+p_2{\sf S}_{e_2})^{\alpha _2}...(p_0+p_{n-1}{\sf S}_{e_{n-1}})^{\alpha _{n-1}} (p_0+p_n {\sf S}_{e_n})^{\alpha _n - q_n-1}$$ $${\cal F}^{n-1,y^n}(\partial ^{q_n}_{t_n}w(y),u(p,(y^n);\zeta );p;\zeta )$$ $$+ \sum_{|\alpha |\le m} {\bf a}_{\alpha } (p_0+{\sf S}_{e_1}p_1)^{\alpha _1}(p_0+p_2{\sf S}_{e_2})^{\alpha _2}...(p_0+p_n{\sf S}_{e_n})^{\alpha _n} {\cal F}^n( \chi _{y_n\ge 0} (y) w(y);p;\zeta ) = G^n(p;\zeta )$$ in the ${\cal A}_r$ Cartesian coordinates, where $w(y) := f(t(y)) (dt_n/dy_n)$. 
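For the model case $\phi (t_n)=v t_n$ with $0<v<1$ mentioned above, the change of variables is explicit (a worked sketch, not in the original):

```latex
% phi(t_n) = v t_n, 0 < v < 1:
\psi (t_n) = t_n - v t_n = (1-v)\,t_n, \qquad
t_n = \psi ^{-1}(y_n) = \frac{y_n}{1-v}, \qquad
\frac{dt_n}{dy_n} = \frac{1}{1-v},
```

so that $w(y) = f(y_1,...,y_{n-1}, y_n/(1-v))/(1-v)$ and the constraint $t_n\ge \phi (t_n)$ becomes the fixed half-space $y_n\ge 0$.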
Expressing ${\cal F}^n( \chi _{y_n\ge 0} (y) w(y);p;\zeta )$ through $G^n(p;\zeta )$ and the boundary terms ${\cal F}^{n-1,y^n}(\partial ^{q_n}_{t_n}w(y),u(p,(y^n);\zeta );p;\zeta )$ as in §28.3 and making the inverse transform 8$(4)$ or 8.1$(1)$, or using the integral kernel $\xi $ as in §28.5, one gets a solution $w(y)$ or $f(t)=w(y(t)) (dy_n(t_n)/dt_n)$. [**39. Partial differential equations with discontinuous coefficients.**]{} Consider a domain $U$ and its subdomains $U\supset U_1\supset ...\supset U_k$ satisfying Conditions 28$(D1,D4,i-vii)$ so that coefficients of an operator $A$ (see 28$(2)$) are constant on $Int (U_k)$ and on $V_1= U\setminus Int (U_1)$, $V_2=U_1\setminus Int (U_2)$,...,$V_k=U_{k-1}\setminus Int (U_k)$ and are allowed to be discontinuous at the common borders $\partial V_j\cap \partial U_j$ for each $j=1,...,k$. Each function $f\chi _{U_j}$ is an original on $U$ or a generalized function with the support $supp (f\chi _{U_j})\subset U_j$ if $f$ is an original or a generalized function on $U$. Choose operators $A^j$ with constant coefficients on $U^j$ and $A^j|_{Int (V_j)}=0$, where $U^0=U$, so that $A|_{U_k}=A^k$,..., $A|_{U_j}=A^j+...+A^k$,..., $A|_U=A^0+...+A^k$. Therefore, in the class of originals or generalized functions on $U$ the problem (see 28$(1,2)$) can be written as $(1)$ $Af=g$, or $(2)$ $A^0f\chi _{V_1}=g\chi _{V_1}$,...,$A^{k-1}f\chi_{V_k}=g\chi _{V_k}$, $A^kf\chi _{U_k}=g\chi _{U_k},$\ since $\chi _{V_1}+...+\chi _{V_k}+\chi _{U_k}=\chi _U$. Thus the equivalent problem is: $(3)$ $A^0f^0=g^0$, $A^1f^1=g^1$,...,$A^kf^k=g^k$\ with $f^k=f\chi _{U_k}$, $g^k=g\chi _{U_k}$, also $f^j=f\chi _{V_{j+1}}$, $g^j=g\chi _{V_{j+1}}$ for each $j=0,...,k-1$. On $\partial U$ take the boundary condition in accordance with 28$(5.1)$. 
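The splitting $(2)$ rests on the partition identity $\chi _{V_1}+...+\chi _{V_k}+\chi _{U_k}=\chi _U$. A minimal numerical sketch of this identity for $k=2$ (the interval domains are hypothetical, purely for illustration):

```python
import numpy as np

# Nested domains U ⊃ U1 ⊃ U2 realized as half-open intervals (illustrative)
t = np.linspace(0.0, 3.0, 301, endpoint=False)
chi_U  = ((t >= 0) & (t < 3)).astype(float)
chi_U1 = ((t >= 1) & (t < 3)).astype(float)
chi_U2 = ((t >= 2) & (t < 3)).astype(float)

# V1 = U \ Int(U1), V2 = U1 \ Int(U2)
chi_V1 = chi_U - chi_U1
chi_V2 = chi_U1 - chi_U2

# Partition identity behind the splitting (2)
assert np.allclose(chi_V1 + chi_V2 + chi_U2, chi_U)

# Hence any original f splits as f = f*chi_V1 + f*chi_V2 + f*chi_U2
f = np.sin(t)
assert np.allclose(f*chi_V1 + f*chi_V2 + f*chi_U2, f*chi_U)
```

Each summand has support inside the subdomain on which the corresponding constant-coefficient operator $A^j$ acts.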
With any boundary conditions in the class of originals or generalized functions on the additional borders $\partial U_j\setminus \partial U$, given in accordance with 28$(5.1)$, a solution $f^j$ on $U^j$ exists when the corresponding condition 8$(3)$ is satisfied (see Theorems 8 and 28.1). Each problem $A^jf^j=g^j$ can be considered on $U_j$, since $supp(g^j)\subset U_j$. Extend $f^j$ by zero on $U\setminus V_{j+1}$ for each $0\le j \le k-1$. When the right side of 28$(6)$ is non-trivial, then $f^j$ is non-trivial. If $f^{j-1}$ has been calculated, then the boundary conditions on $\partial U^j\setminus \partial U$ can be chosen in accordance with the values of $f^{j-1}$ and its corresponding derivatives $(\partial ^{\beta } f^{j-1} /\partial \nu ^{\beta })|_{(\partial U^j\setminus \partial U)}$ for some $\beta < ord (A^j)$ in accordance with the operator $A^j$ and the boundary conditions 28$(5.1)$ on the boundary $\partial U^j\setminus \partial U$. Having found $f^j$ for each $j=0,...,k$ one gets the solution $f=f^0+...+f^k$ on $U$ of Problem $(1)$ with the boundary conditions 28$(5.1)$ on $\partial U$. [**40. Remark.**]{} The multiparameter noncommutative transform over the Cayley-Dickson algebras presented above is a natural generalization of the usual complex one-parameter Laplace transform. It opens new opportunities for solving partial differential equations of different types. It may happen that Theorem 13 is simpler to use than Theorem 21 for partial differential equations with real variables. Theorem 13 has the advantage that it can be applied more simply to partial differential equations of complex and hyper-complex variables, because each pair $(p_l + p_ji_l^*i_j)$ for $l\ne j$ is a complex variable. In these variants boundary conditions may be imposed on $F^k(p;\zeta )$ on a hyperplane $Re (p) = a$ in ${\cal A}_r$. As was seen above, the integrals that appear are over multidimensional domains.
For their calculation Fubini’s theorem, residues, Jordan’s lemma and tables of known integrals can be used. Generally in computational mathematics it is easier to calculate integrals than to solve partial differential equations numerically. As a rule, iterations of algorithms for integrals converge faster than iterations of numerical methods for partial differential equations. Functions with octonion values may be used to resolve systems of partial differential equations. Using conjugations of Cayley-Dickson numbers one gets the transition between operators with coefficients either on the left or on the right of partial derivatives: $[(\partial ^{\alpha } f(x))c_{\alpha }]^* = c_{\alpha }^*(\partial ^{\alpha } f(x))^*$, particularly, $(\partial ^{\alpha } f(x))^* = \partial ^{\alpha } f^*(x)$ for $x\in {\bf R}^n$, $\partial ^{\alpha }=\partial ^{\alpha }_x$. The use of Formulas 2$(5,6)$ gives the variables $t_j=z_j$ for $z\in {\cal A}_r$. So one can consider a class of super-differentiable originals $f(z)$, $z\in V\subset {\cal A}_r$. In the class of originals $f(z)$, $z\in V\subset {\cal A}_r$, super-differentiable piecewise on open subsets, with $t_j=z_j$ for each $j=1,...,n$, $n=2^r-1$, in the fixed $z$-representations we get the noncommutative transform for $f(z)\chi _V(z)$ relative to the Cayley-Dickson variable $z\in {\cal A}_r$. Therefore, the results given above carry over to this variant as well. Theorem 17 also opens new opportunities to investigate and solve certain types of nonlinear partial differential equations using previous results on the spectral theory of functions of operators [@luspraaca; @lujmsalop]. For example, analytic functions $q(z)$ in Theorem 17 permit the consideration of nonlinear operators $q(\sigma )$, where $\sigma f(z) := \sum_{j=0}^{2^r-1} (\partial f(z)/\partial z_j)i_j$. It is planned to study this in a subsequent paper. Partial differential equations with periodic $g$ and $f$ with a vector period corresponding to $Q^n$ may also be considered.
Certainly other classes of smoothness, for example Sobolev spaces, or generalized functions can also be considered. It is planned in a subsequent paper to consider this, and also problems with boundary conditions as well as with non-constant coefficients, in more detail. The technique described above permits one to consider partial differential equations of different types and to write their solutions in integral form. If the appearing integrals can be calculated in terms of elementary, special or generalized functions, then this gives explicit formulas in terms of known functions. In conjunction with the line integration over the Cayley-Dickson algebras it permits one to solve some types of nonlinear partial differential equations. The multiparameter Laplace transform over the Cayley-Dickson algebras takes into account the boundary conditions. It naturally leads to the treatment of systems of partial differential equations due to the multidimensionality of the Cayley-Dickson algebras. [99]{} W. Arendt, C.J.K. Batty, M. Hieber, F. Neubrander. Vector-valued Laplace transforms and Cauchy problems. Birkh$\ddot{a}$user, Basel, 2001. J.C. Baez. The octonions. // Bull. Amer. Mathem. Soc. [**39: 2**]{}, 145-205 (2002). L. Berg. Einf$\ddot{u}$hrung in die Operatorenrechnung. VEB Deutscher Verlag der Wissenschaften, Berlin, 1965. G. Emch. M$\acute{e}$canique quantique quaternionienne et Relativit$\acute{e}$ restreinte. // Helv. Phys. Acta [**36**]{}, 739-788 (1963). I.M. Gelfand, G.E. Shilov. Generalized functions and operations with them. Fiz.-Mat. Lit., Moscow, 1958. U. Graf. Applied Laplace transform for scientists and engineers. Birkh$\ddot{a}$user, Basel, 2004. F. Gürsey, C.-H. Tze. On the role of division, Jordan and related algebras in particle physics. World Scientific Publ. Co., Singapore, 1996. W.R. Hamilton. Selected works. Optics. Dynamics. Quaternions. Nauka, Moscow, 1994. I.L. Kantor, A.S. Solodovnikov. Hypercomplex numbers. Springer-Verlag, Berlin, 1989. L.I. Kamynin.
Course of mathematical analysis. Moscow State Univ. Press, Moscow, 1993. A.G. Kurosh. Lectures on general algebra. Nauka, Moscow, 1973. M.A. Lavrentjev, B.V. Shabat. Methods of functions of the complex variable. Nauka, Moscow, 1987. H.B. Lawson, M.-L. Michelsohn. Spin geometry. Princeton Univ. Press, Princeton, 1989. J. Leray. Un prolongement de la transformation de Laplace qui transforme la solution unitaire d’un op$\acute{e}$rateur hyperbolique en sa solution $\acute{e}$l$\acute{e}$mentaire. // Bull. de la Soci$\acute{e}$t$\acute{e}$ math$\acute{e}$m. de France [**90**]{}, 39-156 (1962). S.V. Ludkovsky. Functions of several Cayley-Dickson variables and manifolds over them. // J. Mathem. Sci. [**141: 3**]{}, 1299-1330 (2007) (Sovrem. Mathem. i ee Pril. [**28**]{} (2005); previous variant: Los Alamos Nat. Lab. math.CV/0302011). S.V. Ludkovsky. Differentiable functions of Cayley-Dickson numbers and line integration. // J. Mathem. Sci. [**141: 3**]{}, 1231-1298 (2007) (Sovrem. Mathem. i ee Pril. [**28**]{} (2005); previous versions: Los Alamos Nat. Lab. math.NT/0406048; math.CV/0406306; math.CV/0405471). S.V. Lüdkovsky, F. van Oystaeyen. Differentiable functions of quaternion variables. // Bull. Sci. Math. [**127**]{}, 755-796 (2003). S.V. Ludkovsky. The two-sided Laplace transformation over the Cayley-Dickson algebras and its applications. // J. of Mathem. Sciences [**151: 5**]{}, 3372-3430 (2008) (earlier version: Los Alamos Nat. Lab. math.CV/0612755). S.V. Ludkovsky. Differential equations over octonions. Los Alamos Nat. Lab. math/1003.2620, 50 pages. S.V. Ludkovsky. Feynman integration over octonions with application to quantum mechanics. // Mathematical Methods in the Appl. Sciences (in press, DOI: 10.1002/mma.1243) 2010. S.V. Ludkovsky, W. Sproessig. Ordered representations of normal and super-differential operators in quaternion and octonion Hilbert spaces. // Adv. Appl. Clifford Alg., 2009. S.V. Ludkovsky.
Algebras of operators in Banach spaces over the quaternion skew field and the octonion algebra.// J. Mathem. Sci. [**144: 4**]{} (2008), 4301-4366. B. van der Pol, H. Bremmer. Operational calculus based on the two-sided Laplace integral. Cambridge Univ. Press, Cambridge, 1964. A.P. Prudnikov, Yu.A. Brychkov, O.I. Marichev. Integrals and series. Nauka, Moscow, 1981. H. Rothe. Systeme Geometrischer Analyse. In: Encyklopädie der Mathematischen Wissenschaften. Band 3. Geometrie, 1277-1423. Teubner, Leipzig, 1914-1931. I. Rubinstein, L. Rubinstein. Partial differential equations in classical mathematical Physics. Cambridge Univ. Press, Cambridge, 1998. M.A. Solovjev. A structure of a space of non-abelian gauge fields // Proceed. Lebedev Phys. Inst. [**210**]{}, 112-155 (1993). E.H. Spanier. Algebraic topology. New York, Academic Press, 1966. V.S. Vladimirov. Equations of Mathematical Physics. Nauka, Moscow, 1971. V.A. Zorich. Mathematical Analysis. V. 2. Nauka, Moscow, 1984. Department of Applied Mathematics, Moscow State Technical University MIREA, av. Vernadsky 78, Moscow, Russia e-mail: sludkowski@mail.ru
Helicobacter pylori. Helicobacter pylori is an important cause of chronic active gastritis and is strongly associated with peptic ulcer disease and gastric cancer. H. pylori colonizes the surface of the gastric epithelium, producing a number of factors that result in inflammation and an altered mucosa. H. pylori infection occurs worldwide, and the most likely mode of transmission is from human to human via the fecal-oral and/or the oral-oral route. Treatment and, in the future, prevention of this infection may result in a marked diminution of upper gastrointestinal tract disease.
Thoughts on triathlon bike penalties (USAT & WTC) I’ve always thought the USAT and WTC penalty system does little to deter racers from committing penalties, in particular – on the bike. And just to be clear why I care primarily about the bike – the penalties on the swim and run pretty much fall into the ‘stupid mistake’ category of either disposing of litter, wearing a music/headphone device, or having someone pace you on the run. And in my opinion, penalties for most of these are extremely rare (though having an outside pacer can give a significant advantage, it’s still a rare penalty). But the bike, the bike is full of places for people to cheat (yes, I said it – cheat) and gain an advantage – with the most common being the drafting penalty. The other common penalties on the bike (all relating to position), while annoying, don’t typically give the rider committing them a significant advantage. Sure, a blocking penalty could and on occasion does offer an advantage to the rider – but nowhere near the same ballpark as a drafting penalty. Btw, if you aren’t fully familiar with the different penalties, see my earlier post on them. The problem with today’s penalties is that they don’t do much to adversely/negatively impact the athlete committing them. Instead, they simply apply a time penalty (2:00 under standard USAT rules). But a time penalty doesn’t take away the fact that the drafter is working less than everyone else. Typically drafting will save you 20-30% effort on the bike. Imagine cutting your heart rate or expended effort down by 20-30%. That’s huge. Further, look at WTC races. In an Ironman, a drafting penalty will only cost you 6:00 minutes. That’s nothing over the course of a 5 hour Ironman bike leg. Especially on courses like Florida. In the case of Florida, folks will (and do) draft for hours on end. That effectively makes the 112-mile bike leg more like a 70-80 mile bike leg.
Wouldn’t you like to do an Ironman and only have to ride 75 miles? Oh – and the kicker is that the penalty time is spent relaxing, as WTC rules dictate the penalty is served in a roadside tent just hanging out. Now, before I get into my proposal, note that licensed Pros in a non-drafting event (such as those that might compete at the NYC Triathlon) instead do what’s called a ‘stand-down’ – where they have to stop and stand on the ground (both feet) for 60 seconds before being allowed to continue. During this time their bib is marked with a slash, representing a penalty. Subsequent penalties get additional slashes. This of course breaks the rhythm of the race for them – and is better than the AG system, but I don’t think it goes far enough. In my opinion – a penalty should truly penalize your race and your performance in the same manner that cheating inversely helped your performance. As such, I would suggest that at the start of the run leg, a quarter-mile segment be set up. Basically, the same length as a single 400m track lap. This can be as simple as an out-and-back on the normal run course itself, making a 400m segment. Using the pro-based system of bib slashes, the athlete would have to complete one lap for each slash (Oly distance; two laps for Iron distance). For most competitive age groups, this would likely take about 1:30-1:45 per lap. While this is less than the 2:00 penalty system, this truly penalizes the athlete from a performance standpoint. It tires them out, in the same way that everyone else is more tired from not cheating. The level of suck for the cheater greatly increases in this scheme, and thus the incentive to draft greatly decreases. I can guarantee you that those AG elite folks earning multiple drafting penalties at the NYC triathlon race would have been far less likely to draft if they knew they had to run an extra half a mile.
And as you move to long course (Ironman) – if you rack up a mile’s worth of penalties, that makes a far bigger dent in your day than just 6-24:00 worth of tent time. And I’m not the only one that thinks Ironman penalties need to be more strict – so does a longtime and high-up USAT ref. Now, some might say that this would add complexity. Yes, it would. But triathlon is an inherently complex and messy sport to begin with. Adding a cone 200m into the run course with a sign that says “Penalty Lap Turnaround” isn’t hard. Nor is adding a single volunteer to check in/out penalty folks as they start their laps. A volunteer must already be stationed at the penalty tent today anyway, so this just moves that tent/location (or in the case of an Oly, adds one volunteer). From a ref’ing standpoint, this does indeed add a level of suck as well, since you’d have to ‘stand down’ the athlete to mark their bib. But I think this would also INCREASE the validity of penalties. Refs would be more sure that they were catching a drafter due to the extra steps taken. Further, drafting penalties aren’t appealable now anyway, so waiting until after the race to tack on time doesn’t change the appeal process any (as there is none). So, what do you think – am I off my rocker? Or is there hope here? – Note: The above represents my opinion alone, and not that of being a certified USAT Official/Ref. I completely agree with that. I was racing a local sprint tri and finished inside the top ten overall, only to see that I had been DQ’d. After some digging, I found out that I received a “Chinstrap violation” out of T1. There were no USAT officials on site to protest to, and I didn’t actually see any official on course except near the transition area. Now jump forward to a recent Criterium that I raced in: I was riding around on my bike after the race with no helmet on.
The head official blew a whistle at me and yelled “HEY, PUT A HELMET ON IF YOU ARE ON A BIKE ANYWHERE NEAR THIS RACE!” and that was it, no DQ, no penalty. This makes sense to me. The purpose of the rules is the same in both USAT and USAC (the road cycling governing body): we are supposed to wear a helmet when on the bike. But from a safety standpoint, the USAT official did a lousy job. Even as I was being DQ’d, the ref – if truly looking out for my “safety” – should have stopped me to make sure I had my helmet on, not just passively penalized me as I biked past. I completely agree, Ray. It’s simply wrong when solid citizens avoid races because they are known to be draft-fests. I also think WTC could choose a much better location for 70.3 Worlds. A better, single-loop course would cut down on the drafting. Sounds similar to what they do in the biathlon in the Winter Olympics: 1 lap on the penalty track for every missed shot. I like it; triathlon is FULL of “strong cyclists” who draft most of the race, then brag about their awesome bike splits. Freakin’ hate those people. I’m somewhere in between on this, so I only think you are halfway off your rocker ;) The reality is that these rules are set up for a fair race (not necessarily to penalize) and in many cases, officials are looking toward the front (read: competitive) triathletes. And it is exactly at the front of the pack where these times would make the greatest impact on a race result. In a Sprint/Oly distance event, a penalty will, without a doubt, negate a triathlete’s ability to place. In fact, that happened last year at Nations at the top of the field. Further, the top 5 in the Elite AG in NYC were all within 2 minutes, so the penalty would have changed the results at the top there too. At the longer distance events, perhaps you have more of a point. However, those same concepts apply – a fair race is the top priority, not disqualification.
I think the mental issue with having to stand down during a race could be more harmful, because the athlete will want to "make up" the lost time, pushing his legs harder than he would otherwise plan. In a long race like an Ironman, this would likely cause problems later in the race anyway. But in the grand scheme of things, 6 minutes is not much, so perhaps you could argue for a longer duration. I thought Lesser is More was going to go in a different direction with his reply, which prompted a different thought from me. He made the point that the penalties tend to affect those towards the front of the pack much more so than your amateur "I'm just doing this for fun" crowd. After all, what's 2 minutes to a soccer mom doing her first triathlon? She doesn't care because she's not doing it for a podium spot to begin with. If I were training for my first Ironman for months with a good friend, I'd gladly take a drafting penalty if it means the two of us could take turns helping pull the other along. The point is to finish the race. So with that mindset, who cares about drafting penalties? However, if we were told we had to run an extra half mile because of it, I'd be pissed! But on the other side of the coin, there are the elites/pros and various age-groupers who are doing it to try and podium and are actually competing for money/ranking/prizes. If they draft, then they could be directly affecting the results of another racer with the same goal… which is not cool. Without it getting too complicated, would there be a way of policing only those elites/pros and/or those vying for a podium spot or award money? For instance, if you're caught drafting as an amateur, then you're ineligible to earn a spot on the podium or receive any award money – end of story. If you're an elite/pro caught drafting, then you can start to get into the use of a penalty lap system similar to the one Ben described in Antwerp. I don't think you are off your rocker at all.
For instance, check out the Great White North Half Iron (link to gwntriathlon.com). One drafting penalty and you have to run a 2 km lap of shame before starting the run course. A second offence gets you a DQ. Draft marshals on the bike are sure to let you know if you get called. Your number is radioed in to T2. There is a whiteboard set up just outside of T2 with the numbers of those folks who have been caught drafting and have to run the lap of shame. In the results, those with drafting penalties are noted. If a race director feels strongly about drafting, there are ways to discourage people. I think this is a good one. Just a thought on Sean in NY's comment about gladly taking a drafting penalty to pull a friend along. Drafting is cheating in my opinion, and it doesn't matter if you are in it for the money or not. Those rules are in place for a reason, and just because you aren't racing for a podium spot does not mean you should be able to throw the rules out. Everyone wants a fair race. Just wanted to leave a quick response to a couple of responses to my comment. Leanna – I agree with everything you say except for the premise that everyone enters a triathlon to compete with someone else. A lot of people are viewing triathlons the same way people view marathons. They're in it to participate. Maybe they just want to ride next to their friend and finish it together. Why punish them for accomplishing their goals? Ian – Who's to say the soccer mom's goal isn't to "only" complete a triathlon as opposed to race in one? Your comment is a bit of a contradiction, because if triathlon IS indeed an individual sport, then who cares how other people fare? You should only be worried about your own time with respect to your previous times/expectations. That's not a bad idea. But I think there would still be a problem with refs actually giving penalties. I did Vineman last week. I had a ref ride next to me for about 5 miles as I made my way through the waves ahead of me.
Nerve-racking, but I was doing everything right, so I tried to ignore them. They were in transition DQing people for having their helmet strap undone before they racked their bike, and then making sure race numbers were on the front as you left T2. They were all over the run course looking for litterbugs and headphones. I guess they were all over T2 and the run because the bike is so hilly/technical that there's not much of a chance for packs to form. And the waves start 8 minutes apart, so lots of separation. James, I think it's very naive of you to assume I disrespect the rules in any way. I was simply offering up a non-competitive perspective that people might not have considered. I would hope for our sakes that your contributions to future discussions would be more thought out and have some meaningful value. Just finished the Lake Placid Ironman. The drafting was rampant – mobs of grouped riders going past, everyone drafting off the next guy. That said, there were just as many of us following the rules and riding individual bike legs without the drafting. It was a little disheartening, however, and I have to agree: more aggressive marshaling of the bike leg would help to preserve the individuality of triathlon and the spirit of the sport. If you need to draft to perform, that's fine – do it in draft-legal venues. The rest of us prefer to be real "Iron-Men." Sprints and Olys… for a local race, not as much. It's more difficult for a local RD to organize that with all the race logistics. I'd rather he focus on course marking and making sure the timing chips work right. I agree with you and your idea. I think there are a lot of judgment calls that need to be made too. Blatant drafting should be penalized. However, in the instance of the NYC Tri, the pro and elite group may be spread out enough to avoid this, but when you have 3400 AGers setting out in waves 3 minutes apart, it is almost impossible not to draft at times.
Even more difficult when you have a lot of inexperienced folks out there on the course riding all over the place. To be fair, not drafting is harder than simply keeping the required distance apart. In some AG ranks, there's not always space for a ton of separation, given the size of fields, relatively similar ability levels and short courses (although this is also true in IM & half IM races given laps & field sizes). Assume you're drafting someone legally (outside the zone) and you get passed. It's your responsibility to slow out of their zone. But if you have someone(s) behind you (legally or not), by slowing you're suddenly in their draft zone and they HAVE to pass you (letter of the law). So technically you have to slow again, into arguably someone else's draft zone. Suddenly, you've regressed five places because you have to (once in the draft, the person HAS to pass or they're technically penalized), and it's not like the guy in front is waiting for you to work your way back up. :) I don't have a solution (and I think refs do a good job within the leeway they have), but "just don't draft" is too simplistic a response in crowded fields with more-or-less similarly abled competitors. Jilani – Actually, the situation you are describing isn't possible. If someone overtakes you, you have to drop back to 3 bike lengths within 15s (or 20s, depending on USAT vs WTC rules). That part is correct. However, if someone is behind you and they now enter your draft zone because you have slowed your speed, the only way for that person to NOT get a drafting penalty would be to make a pass of you within 15s. It is THEIR responsibility at that point to comply. At that point, you'd drop back out of the draft zone and attempt to continue on racing. By the way, 15s/20s is a LONG time, so it's not like you have to stop pedaling immediately when being overtaken… just sayin'.
When an official watches the action going on, we (yes, I am an official) watch for movement within the situation, not just whether one person is drafting. There are circumstances where there is fluid movement of multiple passes and people being overtaken that lasts a minute or more, where ZERO penalties are issued. It is our responsibility to observe the situation and make a judgment call. And for all non-"Elite" USAT events (less than $5000 in cash prize purse), we simply write down descriptions of what we've observed, and it is the Head Official's responsibility to interpret the description and issue the penalty. In order to become an official, you go through discussions of these exact types of scenarios. Lots of stuff happens out there. It is our job to try and make it fair. Like any officials though, we simply do the best we can. Sean – it isn't just about the rules and whether they are fair to apply to everyone. BTW – if you took turns pulling along during a whole race, chances are you'd get DQ'd – 3x and you're out! Most importantly, it is about safety. You know those USAT fees (annual and one-time race) you pay? Those go toward insurance coverage, among other things. So not only are we out there to ensure a fair race, but also a safe one. There is an inherent risk factor that is multiplied if you were to remove the drafting penalty altogether. Now, obviously some races do this (non-USAT sanctioned). But you can't say there isn't more risk of clipping a pedal, or causing more severe accidents. Not everyone is experienced riding in a pack, and personally, I think the average newbie is more scared of pack riding than most other aspects of a triathlon (save for maybe swimming). What I'm trying to say is there is more to the rules than just whether they are fair or not. It doesn't matter if you don't care about placing, are there to improve your time, or are a first-timer just trying to finish. We want to ensure a safe and fair race.
That is why you see officials watching for helmets unbuckled (safety), headphones (safety), littering (bad mojo for the host town), illegal equipment (safety), etc. As a triathlete and official, I can confidently say we are there for YOUR benefit, though it may not always seem fair. Sorry I am late to the party. I like the idea of a penalty lap, but think that actual enforcement of the rules is the bigger issue. At most races I have done (even WTC races), I see a race official maybe once all day on the bike. If we are going to add penalty laps, more consistent and even enforcement of the drafting rules is essential. If not, then you are going to get erratic and capricious penalties – some small percentage get a relatively severe penalty while the majority of rule breakers go unpunished. I was talking about this with a friend during my run this morning in preparation for the Augusta 70.3. We both agreed that the rules could use amending regarding drafting, even though we didn't have solid solutions to suggest. I think Rainmaker has a good idea though… I'd propose that the "Pros", if caught drafting, would be subject to the extra-distance penalty. Age groupers, if caught drafting, would be subject to the current time penalty standards AND not be eligible to place in their age group. I think this would clear up a little of the "soccer mom" issue. Either way, I think the sport of triathlon needs to make sure that winners are the overall fastest athletes, not the ones who can exploit the system to gain an advantage that makes up for their lack of athleticism/training/endurance/effort/etc. 5 Easy Steps To The Site You probably stumbled upon here looking for a review of a sports gadget. If you're trying to decide which unit to buy, check out my in-depth reviews section. Some reviews are over 60 pages long when printed out, with hundreds of photos! I aim to leave no stone unturned.
It turns out I’ve written a fair bit of stuff over the past few years – and after it disappears from my front page, a lot of it never really sees the light of day again without Google’ing skillz. Or a photographic memory…which I don’t have. I’ve taken a look back and found stuff that…continues to find a trickle of readers via web searches or forum links. I travel a fair bit, both for work and for fun. Here’s a bunch of random trip reports and daily trip-logs that I’ve put together and posted. I’ve sorted it all by world geography, in an attempt to make it easy to figure out where I’ve been.
Technical computing environments are known that present a user, such as a scientist or engineer, with an environment that enables efficient analysis and generation of technical applications. For example, users may perform analyses, visualize data, and develop algorithms. Technical computing environments may allow a technical researcher or designer to efficiently and quickly perform tasks such as research and product development. Existing technical computing environments may be implemented as or run in conjunction with a graphically-based environment. For example, in one existing graphically-based technical computing environment, graphical simulation tools allow models to be built by connecting graphical blocks, where each block may represent an object associated with functionality and/or data. Blocks may be hierarchical in the sense that each block itself may be implemented as one or more blocks. A user may, for instance, view the model at a high level, then select blocks to drill down into the model to see increasing levels of model detail. Models generated with graphical simulation tools may be directly converted to computer code by the graphical simulation tool, which can then be executed in the target environment. For example, a model of a control system for an automobile may be graphically developed with the graphical simulation tool, implemented as code, and then deployed in an embedded system in the automobile. It is often desirable that a graphical model be tested or verified before a system using the model is deployed. One technique for verifying a model is based on a coverage analysis of the model. In general, coverage analysis may provide a measure of how complete test data input to the model was in testing the model. Knowing the completeness of testing can be important in determining whether a model is ready to be implemented in a “live” system. 
For example, if the coverage analysis indicates that certain portions of the model or the code used to implement the model were not used when the model was run with the test data, it may be desirable to revise the model or the test data to obtain more complete coverage. A concept related to model coverage is code coverage. Code coverage analysis may be used to dynamically analyze the way that a program executes. Similar to model coverage analysis, with code coverage analysis, it may be desirable to determine the completeness with which program code was executed during testing.
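The code-coverage analysis described above can be sketched in a few lines of Python: a tracer installed with `sys.settrace` records which lines of a function actually execute under a given test input, exposing untested branches. This is only an illustrative sketch; the names `trace_coverage` and `classify` are invented for the example and are not from the source.

```python
import sys

def trace_coverage(func, *args):
    """Run func under a line tracer; return the set of executed line offsets
    (relative to the 'def' line) within func."""
    executed = set()

    def tracer(frame, event, arg):
        # Record only 'line' events belonging to the function under test.
        if frame.f_code is func.__code__ and event == "line":
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):               # offset 0 (the def line itself)
    if x > 0:                  # offset 1
        return "positive"      # offset 2
    return "non-positive"      # offset 3

# With test input 5, only the positive branch runs: offsets 1 and 2 execute,
# while offset 3 is never hit -- i.e., the test data gives incomplete coverage.
covered = trace_coverage(classify, 5)
```

A real coverage tool works the same way in principle, but aggregates over many runs and maps offsets back to source lines or model blocks; finding that offset 3 was never covered is exactly the signal that would prompt revising the test data.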
Q: Gdx.files.local and WRITE_EXTERNAL_STORAGE permission in Android

I developed my LibGDX-based Android game, and I use the code below, which uses Gdx.files.local. Does it need the WRITE_EXTERNAL_STORAGE permission on Android?

private FileHandle getFontFile(String filename, int fontSize) {
    return Gdx.files.local(generatedFontDir + fontSize + "_" + filename);
}

A: Local file storage in LibGDX is the same as internal storage on Android. You can read and write to this storage, but it is private storage for your application, so only your app can access it.

FileHandle file = Gdx.files.local(String path);

No permissions are required to read and write the internal storage of Android. Here is the test code:

FileHandle file = Gdx.files.local("prueba.txt");
file.writeString("HELLO WORLD", false); // write to file
String charString = file.readString();  // read file
System.out.println(charString);

I've tested this on Android Marshmallow and KitKat devices, and my target and compile SDK version is 25. For more details, take a look at the libgdx wiki.
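Since local file storage is ultimately plain file I/O against an app-private directory, the write-then-read round-trip from the answer can be sketched on a plain JVM without LibGDX at all. This is only an illustrative sketch: the class name LocalFileDemo is invented, and the file name prueba.txt is simply reused from the answer above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalFileDemo {
    // Write a string, read it back, clean up, and return what was read.
    // Roughly mirrors file.writeString(...) / file.readString() in LibGDX.
    static String roundTrip() throws IOException {
        Path file = Paths.get("prueba.txt");
        Files.write(file, "HELLO WORLD".getBytes());        // write to file
        String back = new String(Files.readAllBytes(file)); // read file
        Files.delete(file);                                 // tidy up
        return back;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip());
    }
}
```

On Android, LibGDX resolves local paths relative to the app's internal storage directory, which is why no manifest permission is involved: the app is only touching its own sandbox.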
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/* The FakeGeolocationProvider relies on observing r2d2b2g:update-geolocation
 * notifications from content/shell.js, when the user changes their custom
 * coordinates, AND from content/dbg-geolocation-actors.js, when the user wants
 * to use their current coordinates. The current coordinates are never fetched
 * until the user has explicitly selected that they want to share them.
 * shell.js will send a r2d2b2g:enable-real-geolocation notification that is
 * observed by dbg-geolocation-actors.js. dbg-geolocation-actors.js uses
 * "unsolicited" events to addon/lib/remote-simulator-client.js to request the
 * current coordinates from the main Firefox process. shell.js keeps track of
 * the latest custom coordinates to show the user when they reopen the
 * geolocation window, while FakeGeolocationProvider.js keeps track of the
 * coordinates that will be passed to any DOM calls.
 */

const Ci = Components.interfaces;
const Cc = Components.classes;
const Cu = Components.utils;

Cu.import("resource://gre/modules/XPCOMUtils.jsm");
Cu.import("resource://gre/modules/Services.jsm");

function FakeGeoPositionCoords(lat, lon, acc, alt, altacc) {
  this.latitude = lat;
  this.longitude = lon;
  this.accuracy = acc;
  this.altitude = alt;
  this.altitudeAccuracy = altacc;
}

FakeGeoPositionCoords.prototype = {
  QueryInterface: XPCOMUtils.generateQI([Ci.nsIDOMGeoPositionCoords]),
  classInfo: XPCOMUtils.generateCI({interfaces: [Ci.nsIDOMGeoPositionCoords],
                                    flags: Ci.nsIClassInfo.DOM_OBJECT,
                                    classDescription: "FakeGeoPositionCoords"}),
};

function FakeGeoPosition(lat, lon) {
  this.coords = new FakeGeoPositionCoords(lat, lon, 1, 0, 0);
  this.address = null;
  this.timestamp = Date.now();
}

FakeGeoPosition.prototype = {
  QueryInterface: XPCOMUtils.generateQI([Ci.nsIDOMGeoPosition]),
  // Class Info is required to be able to pass objects back into the DOM.
  classInfo: XPCOMUtils.generateCI({interfaces: [Ci.nsIDOMGeoPosition],
                                    flags: Ci.nsIClassInfo.DOM_OBJECT,
                                    classDescription: "FakeGeoPosition"}),
};

function FakeGeolocationProvider() {
  // Default the initial custom coordinates to Mozilla's SF office.
  this.position = new FakeGeoPosition(37.78937, -122.38912);
  this.watcher = null;

  Services.obs.addObserver((function onUpdateGeolocation(message) {
    let { lat, lon } = message.wrappedJSObject;
    dump("FakeGeolocationProvider received update " + lat + "x" + lon + "\n");
    this.position = new FakeGeoPosition(lat, lon);
    this.update();
  }).bind(this), "r2d2b2g:update-geolocation", false);
}

FakeGeolocationProvider.prototype = {
  classID: Components.ID("{a93105f2-8169-4790-a455-4701ce867aa8}"),

  QueryInterface: XPCOMUtils.generateQI([Ci.nsIGeolocationProvider]),

  // startup and setHighAccuracy both need to be defined to implement
  // the nsIGeolocationProvider interface, even though we don't actually
  // implement them.
  startup: function() {},

  setHighAccuracy: function(enable) {},

  watch: function(callback) {
    this.watcher = callback;
    // Update the watcher with the most recent position as soon as possible.
    // We have to do this after a timeout because the nsGeolocationService
    // watcher doesn't expect an update until after this function returns,
    // so it won't receive one until then.
    Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer).initWithCallback(
      (function() { this.update(); }).bind(this),
      0,
      Ci.nsITimer.TYPE_ONE_SHOT
    );
  },

  update: function() {
    if (this.watcher) {
      this.watcher.update(this.position);
    }
  },

  shutdown: function() {
    this.watcher = null;
  },
};

this.NSGetFactory = XPCOMUtils.generateNSGetFactory([FakeGeolocationProvider]);
Last serious call for knife repairs is the first day of November. The reason is, of course, the holiday mail overload – the chance of your knife not making it back to you before Christmas. The shop is closed during winter – I can't work when the metal is freezing to my fingers, but I work when weather permits. Expect some serious extra turn-around time through the winter months, November through April. Knives are repaired in the order they arrive – first come, first served. Booking and mailing your knife for repair through the winter will put you on the list for faster repairs after the weather improves, or the shop opens. Serious note: You must read and follow the simple "Shipping up repairs instructions". Failure to do so will cost you extra money. How much? The customs bill I have to pay at the Post Office, plus 2 hours of my extra time I have to spend scanning and mailing you the paperwork, and an extra trip to town. This is the fine print. The only purpose of the knife repairs I do is to honestly repair and restore the knife to its full functionality. There has always been a need to repair and refurbish good quality old knives, family heirlooms and keepsakes – folding, or fixed blade ones. I have learned to make all replacement parts individually by hand out of necessity, as there are no replacement factory parts available. I will give you the opportunity, and consider your input; however, I do reserve the last word on all matters of the repair. That includes the cancellation of the repair if the knife's real condition is so bad that the repair would not have a happy ending: the description of the knife's condition is inaccurate, blade rust pits are so deep that it is impossible to save it, repairs of one part would snowball into replacing many other parts, or the actual repair cost vs. the end result is too unreal. I do professional repair, striving to make your knife look as good as, and in most cases better than, new, keeping the model characteristics close to original if possible.
However, if your knife is in mint unused condition but has some defect, like scales not fitting properly, do contact the manufacturer first. Do not demand that the knife be fixed to the original factory looks. It is just not going to happen. Two main reasons: #1 – the factory replacement parts are NOT available. #2 – I do not put out shoddy work. Manufacturing is about making money = cheapest materials etc. Some factory knives are sold in such a sorry state and made so cheap that they are actually limited-use, throwaway items. The remodeling of a brand new knife is not my favorite work. After breaking the knife apart, it is seldom possible to put it together by hand as precisely as in the factory jigs. This goes doubly for fitting new scales – some models were designed by evil extraterrestrials. The bolster-scale gaps are a part of the eyeball fitting, and are often part of factory assembly as well. Pivot pin sizes as used by factories are not available to the public. New pins have to be machined down from the nearest larger-size pin stock. Hand riveting does not always make pins blend seamlessly into bolsters, and the pin alloy may have a different sheen when polished. I do spend a tremendous amount of time trying to accomplish what the factory did not or could not do. This does not mean that, if the outcome is not to your vision of perfection, I will refund your money and work for nothing. If my work adds any value to your knife, it is not designed or intended to fool any collector. If it is indeed a real collector's item, any repair or alteration to it will actually diminish its collector's value. A picture is worth 1000 words, so it is said. Everyone today has some gizmo which takes pictures – a digital camera or mobile phone – and a flatbed scanner works well too. It is good for me to see the details of the repair needed, details of unusual assembly, or the condition of the knife in need of repairs or refurbishing for an estimate.
Unfortunately, camera or scanner pictures are not suitable for email because they are just TOO BIG. Crop and resize them for email. A good image size and format is JPG or PNG, from 640 to 1000 pixels wide, at 90% quality or compression. Attach them to your reply. I start all the service work as soon as possible, on a first-come, first-served basis. When your turn comes, I try to finish the repair within a day or two. The bulk of my repair work mostly consists of extensive handle repairs, including pins/spacers/scale replacement, and occasionally a total change of a handle design, folder springs and repairs, and some stainless welding where possible. Blades broken in half or through any portion of the cutting edge can not be welded back together. Total refurbishing consists of stripping the knife to its individual components, a total blade regrind, sanding and polishing all parts, and replacing what is needed. After the knife is reassembled, it gets a final buffing and sharpening to a razor edge. Kitchen and chef's knives – common or special editions refurbishing: The blades get a complete overhaul – sand away and polish most sharpening scratches, stains and rust. Blade tips which are broken will be re-shaped, old delaminated, damaged wood handles will be replaced with top quality hardwoods, and cheap aluminum rivets will be replaced with nickel silver ones. I glue down all new scales so no water can get to the tang to destroy the handle with rust from inside, and to be food-safe. WW-II, Mark, USM, Ka-Bar stacked leather handle knives: Handles get new leather washers, all glued together this time and on cleaned metal, soaked in preservative. If the pommel was used as a hammer, I will sand away all dents. The whole blade will be reground and polished. The fuller grooves are done with a mini hand grinder, so they will not be perfectly ripple-free. Flat or hollow ground for getting the used-up fat edge to a new, working thickness.
If the rust pits were too deep to completely grind off (that would make the blade too thin), I get the metal parts bead blasted for a frosty finish. That will also clean out and blend in the remainder of the pits. Gun blue will finish the metal for a good military look and some rust protection. If the blade is super rusty, or has very deep rust pits, it will require considerably more time and resources to make it look presentable again. 90% of all repairs require a total regrind. There is no guarantee that I will be able to remove all the rust pits, since some can go right through the blade. Stainless steel containing free iron is prone to microscopic rust, forming worm holes sometimes right through the blade. These are invisible to the naked eye, and show as washed-out lines after polishing, as the polishing wheel catches the edges of these holes. Grinding the blade down eliminates only large rust pits from the surface. If these micro pits are present, no mirror polish is possible; also, this blade will rust despite being made out of "stainless" steel. These pits are the result of a poor alloy, as not all the iron particles form carbides; some remain free. Discontinued repairs: Western (Boulder, Colorado) knives – I will no longer repair this brand of knives. This 50+ year old twin-tang design results in premature leather washer demise. The factory never glues the handles to seal moisture out of the tangs. The tangs swell with rust and the leather washers just crumble off the handle like dry toast. The leather washers and spacers have to be shaped like the letter "H", which I have to painstakingly cut by hand on every piece. It is not standard, so there are no washers like that ready on the market. This job would require me to keep carving leather on hand just for this possible repair, which I do not have, and am not planning to purchase. Swiss Army & multitool types – I am no longer repairing any folding knives with more than 2 springs.
I have no factory precision jigs for reassembly of so many layers. Real Mother of Pearl: This handle material comprises a stack of super-thin layers, so it is super brittle – more so than paper-thin sheets of glass. The master pin is located so close to the edge of the scale, in the mid top of the knife, that any pressure on a pin even slightly damaged by insertion will break it. After the break, it is back to square one – dismantling the knife to attach a new set of MOP scales and then reassembling all over again. The chance of breaking the scale without the factory jig is over 90%. Too expensive to purchase for replacement, and the work is super stressful – triple the time, compared to working with other materials. A note about maker's markings/logos: Manufacturer's logos, trademarks or any other etched markings, or shallow stamping not deep enough, will be totally obliterated by any blade sanding. A deep etch in black will lose the distinct black color, as the carbon soot wipes readily off. I can't repair molded plastic, or molded rubber, handles. A large portion of old cutlery was manufactured out of plain hard carbon steel, then chrome plated. Any reshaping, like broken-off tips, requires that the blade be correctly tapered/thinned and polished. The chrome plating will be polished off; there is no way around it. The blade will stain, as it is just a cheap high carbon steel. You do have the option to have it re-chromed afterwards, at a chrome plating shop near you, or use Casey gun blue to make it somewhat rust resistant. Or just put up with cleaning it with Comet/Ajax when it does get stained. Folder repairs: To refurbish a folder, it has to be taken down, pried and broken apart, hopefully without damaging liners or bolsters beyond repair. New-type liner-locks assembled with screws: I do not keep any screw inventory, so if you lost one, you will have to look online for the same.
Any other repairs are considered only after I know all there is to know about the knife – spell it all out on the form. SERIOUS NOTE: Even if your knife is in brand new, or mint, condition without a blemish, and you would like only a scale replaced, the folder has to be taken totally apart. The new scales have to be glued on, and then riveted to the liners – riveting means forming heads on both ends of the pins with the hammer: one head visible on the outside of the scale, the other ground flush in a countersunk hole on the inside. Refurbishing consists of total disassembly – cleaning, polishing the liners, back spring/lock bars, sanding, polishing and sharpening all blades, and sanding the inside of the old scales flat for a perfect fit. Scales get glued on this time, and are secured with new pin rivets. All main new pins are machined to fit – no idea where the factories get their oddball pin sizes, as no knifemaking suppliers sell them. Then pins are riveted with 0.005" clearance to achieve smooth blade action without slop. I can make you a new blade if you are unable to get a replacement from the manufacturer. The shape will be close to the original, but with no markings on it. When the broken blade part is missing, I make a new custom blade shape to fit the handle. I can cut a rectangular, straight nail notch/slot freehand if absolutely necessary. I use 440-C, hardened and tempered to about 59 RC, for blades and springs, but will use other materials if requested, available, or supplied by you. Of course, all of this work takes much time, and therefore is not cheap, or free. The repair costs can exceed the actual purchase price, sometimes several times over. You do have the option to just throw your knife away and go purchase another knife, or keep the old one and cruise garage sales or auctions for the same model with the needed parts intact, or in better shape than yours. Then use it to make one good knife out of two.
The basic cost of scales includes only the exotic woods I have on hand. For many knives, even my cheapest scales will be a Cadillac replacement for the factory ones. Specialty scales like abalone, turtle, fossil or mammoth ivory etc. will cost you extra, and these are really expensive. No real elephant ivory, legal or pre-ban, can be transported to a foreign country; choose from the many alternatives. You can save quite a few dollars by getting your special scales yourself – from knifemaking supplies online stores – then mailing them with the knife. Always thoroughly dry your knife and sheath/case after washing or getting it wet, before tucking it away. Do not store your knives in their leather sheaths if even slightly moist. This moisture/condensation leaches out the remnants of tanning acids from the leather, and that causes much rust.
.\" ************************************************************************** .\" * _ _ ____ _ .\" * Project ___| | | | _ \| | .\" * / __| | | | |_) | | .\" * | (__| |_| | _ <| |___ .\" * \___|\___/|_| \_\_____| .\" * .\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al. .\" * .\" * This software is licensed as described in the file COPYING, which .\" * you should have received as part of this distribution. The terms .\" * are also available at https://curl.haxx.se/docs/copyright.html. .\" * .\" * You may opt to use, copy, modify, merge, publish, distribute and/or sell .\" * copies of the Software, and permit persons to whom the Software is .\" * furnished to do so, under the terms of the COPYING file. .\" * .\" * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY .\" * KIND, either express or implied. .\" * .\" ************************************************************************** .\" .TH CURLINFO_FILETIME 3 "April 03, 2017" "libcurl 7.56.1" "curl_easy_getinfo options" .SH NAME CURLINFO_FILETIME \- get the remote time of the retrieved document .SH SYNOPSIS #include <curl/curl.h> CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_FILETIME, long *timep); .SH DESCRIPTION Pass a pointer to a long to receive the remote time of the retrieved document (in number of seconds since 1 jan 1970 in the GMT/UTC time zone). If you get -1, it can be because of many reasons (it might be unknown, the server might hide it or the server doesn't support the command that tells document time etc) and the time of the document is unknown. Note that you must tell the server to collect this information before the transfer is made, by using the \fICURLOPT_FILETIME(3)\fP option to \fIcurl_easy_setopt(3)\fP or you will unconditionally get a -1 back. 
.SH PROTOCOLS
HTTP(S), FTP(S), SFTP
.SH EXAMPLE
.nf
curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, url);
  /* Ask for filetime */
  curl_easy_setopt(curl, CURLOPT_FILETIME, 1L);
  res = curl_easy_perform(curl);
  if(CURLE_OK == res) {
    res = curl_easy_getinfo(curl, CURLINFO_FILETIME, &filetime);
    if((CURLE_OK == res) && (filetime >= 0)) {
      time_t file_time = (time_t)filetime;
      printf("filetime %s: %s", filename, ctime(&file_time));
    }
  }
  /* always cleanup */
  curl_easy_cleanup(curl);
}
.fi
.SH AVAILABILITY
Added in 7.5
.SH RETURN VALUE
Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
.SH "SEE ALSO"
.BR curl_easy_getinfo "(3), " curl_easy_setopt "(3), "
Love on allwomenstalk.com Absolutely true, because each passing day filled with silence is another day that I hear the crack in my heart growing. It's hard to prove something to someone whose actions say more about them than they do about you. Too sad and full of regret to do anything. Ever feel like that? Take my heart; it's broken and, in a sense, betrays you. Do you sometimes feel haunted by regret and despair? We all get sad from time to time; the problem is when it becomes dominant.
Former White House chief strategist Steve Bannon said Sunday the firing of FBI Director James Comey may have been the biggest mistake in "modern political history." Bannon confirmed in an online-only segment of "60 Minutes" that he opposed President Donald Trump's decision to oust Comey. The wide-ranging discussion, which covered the Russia investigation, the media, the Republican Party and North Korea, was Bannon's first major interview since leaving the White House last month. Asked by Charlie Rose whether Comey's firing was the worst mistake in political history, he said: "That would probably be too bombastic even for me, but maybe 'modern political history.'" Bannon said institutions such as the Senate and House of Representatives can be changed "if the leadership is changed." But he also said the FBI, which he called "an institution," is different. "I don't believe that the institutional logic of the FBI, and particularly in regards to an investigation, could possibly be changed by changing the head of it," Bannon said. The ousted White House adviser also said that if Comey hadn't been fired, "we would not have the Mueller investigation," referring to special counsel Robert Mueller, who is heading the investigation into possible ties between Trump's election campaign and the Russian government. Trump was heavily scrutinized for citing this investigation as a reason he dismissed Comey in May. Mueller was appointed as special counsel for the investigation shortly after that move.
White House Press Secretary Sarah Huckabee Sanders responded Monday, saying the administration has "been pretty clear" in its reasoning for ousting Comey, and subsequent events showed that "the president was right in firing" the FBI director. "Since the director's firing, we've learned new information about his conduct that only provided further justification," Sanders said, listing "false testimony" and leaking information to reporters among them. Asked if the president was disappointed in Bannon's interview, Sanders said she wasn't sure. She also said that as far as she's aware, the two men have only spoken once since Bannon left his West Wing post.
Steps toward a clinically relevant science of interventions in pediatric settings: introduction to the special issue. To describe methods and strategies to advance the science of interventions in pediatric psychology. We consider the advantages of various strategies to develop and extend the applications of intervention research in pediatric practice settings. Strategies are needed to enhance application of empirically supported interventions to pediatric settings, including testing the generalizability of empirically supported interventions in clinical samples, developing interventions based on clinical experience and tested in controlled clinical trials, designing program evaluations in the context of practice settings, and conducting case studies and series. Critical next steps in intervention research include documenting the clinical significance of interventions, conducting multisite research concerning interventions, including interventions conducted in clinical settings, and implementing integrated clinical intervention and research. Training in empirically supported treatments and intervention research and developing policy related to intervention research would also promote a clinically relevant scientific agenda concerning intervention research with pediatric populations. Pediatric psychologists have the opportunity to develop a clinically relevant science of interventions in pediatric settings by using multiple methods and strategies.
Control of Chiral Magnetism Through Electric Fields in Multiferroic Compounds above the Long-Range Multiferroic Transition. Polarized neutron scattering experiments reveal that type-II multiferroics allow for controlling the spin chirality by external electric fields even in the absence of long-range multiferroic order. In the two prototype compounds TbMnO_{3} and MnWO_{4}, chiral magnetism associated with soft overdamped electromagnons can be observed above the long-range multiferroic transition temperature T_{MF}, and it is possible to control it through an electric field. While MnWO_{4} exhibits chiral correlations only in a tiny temperature interval above T_{MF}, in TbMnO_{3} chiral magnetism can be observed over several kelvin up to the lock-in transition, which is well separated from T_{MF}.
Warm meetup: Nami game "The Battle of Bitcoin" to be launched

This Friday, Oct 26, Nami will hold a warm-up meetup to introduce the Nami game "The Battle of Bitcoin". As we mentioned in the first Technical Updates, "The Battle of Bitcoin" is a simple trading game where gamers use their analysis skills and compete with others to seize rewards of up to 1 BTC. The Nami game is expected to officially launch on November 5th, 2018.

The meetup will be held from 16:30 to 18:30 (GMT+7) on October 26th, 2018 at the Nami office, located at No. 23, Road 44, Thảo Điền Ward, District 2, HCM City. The event will cover:

What "The Battle of Bitcoin" is and how to play
A presentation on the Affiliate program and commissions
A live demo of "The Battle of Bitcoin", with rewards for the best player

Participants who complete the registration form will receive 15 free SPIN to enjoy the game right at the meetup. Moreover, you will have a chance to win 100 SPIN and 2,000,000 VNĐ in cash as the best gamer.

"The Battle of Bitcoin" is a quiz competition in which gamers use SPIN to predict the direction of the next candle and receive altcoins as a reward. If you succeed in making 15 consecutive right choices, you can earn the biggest prize of up to 1 BTC per week. Register now and attend the meetup this Friday to get the details on how to sign up, play, convert rewards, and other benefits for early adopters.

Additionally, at the meetup Nami will introduce the Affiliate Program and its benefits, including commissions and appealing promotions at nami.trade. Nami welcomes your participation in the meetup. Valuable prizes are waiting for your conquest. If you have any questions for us, please fill in the form and join our demo event.

___

ABOUT NAMI CORPORATION

Nami Corp. is a global FinTech corporation working on investment and technology based on blockchain.
It's not just a single platform: we have created a whole ecosystem to help contributors and traders around the world utilize their experience and their money, under the slogan "Change mindset, make giant steps".
Live updates This is a significant find, and goes to show these set-ups are organised, sophisticated and being run by people who have organised crime links regionally, nationally and even internationally. The crime they generate not just through drug misuse, but other crime, such as theft and burglary, to feed cannabis addiction is also significant, and we hope this seizure will put a dent in the supply chain. These set-ups take hold in our communities regularly, and this find goes to show the importance of the public telling us when they have suspicions about crime being committed in their community.
order. b, -0.5, 1/4, 1/2 Let m = 14 - 82. Let t = m + 205/3. Let f(j) = j + 6. Let z be f(-4). Put z, t, 1/2 in descending order. z, 1/2, t Suppose q + 0*q = 2*r - 21, 5*q + 57 = 4*r. Put 4, 0, r, 1 in ascending order. 0, 1, 4, r Let j be -11 + (-897)/(-81) - 25/(-27). Sort -3, j, 213. -3, j, 213 Let t be (-5 - -4 - -1)/(-2). Suppose t*z = 3*z + 3. Put 1, 4, z in descending order. 4, 1, z Suppose -6 = -5*t + 2*t. Let l be t/2*(2 + 3). Suppose 18 = -l*d - 2*c, -d + 3*d = -4*c - 4. Sort d, 0, 4 in descending order. 4, 0, d Let v = 46 - 45.7. Sort 2, 3.4, v, 1/3 in decreasing order. 3.4, 2, 1/3, v Suppose -13 = -5*p - 5*d - 33, 0 = -3*p - d - 18. Let w = 4 + p. Put 4, 0, w in increasing order. w, 0, 4 Suppose 106 = 6*g - 116. Suppose 39*k - 6 = g*k. Put -3, -24, k in descending order. k, -3, -24 Let u = -43.9 + 47. Let z = -0.9 - u. Let g = 656 - 656.3. Put z, 2, g in decreasing order. 2, g, z Let y = 191 - 187. Put y, -1, 5, -6 in descending order. 5, y, -1, -6 Suppose -3 = 3*y - 6*y. Let r = -44 + 42. Put 5, y, r in decreasing order. 5, y, r Let s(b) = 2*b + 26. Let p be s(0). Let d be (14 - 13) + (-28)/1 + -3. Let o = d + p. Sort 5, 16, o in decreasing order. 16, 5, o Let k = -7868 - -7866. Let s = -8 + 5. Put -1, k, 1, s in ascending order. s, k, -1, 1 Let r = 221 - 226. Sort -12, r, 1, 2 in decreasing order. 2, 1, r, -12 Suppose b = -3*c - 3, 2*c - 11 = -2*b - 17. Sort 7, 15, b in descending order. 15, 7, b Suppose 105 = -25*n + 4*n. Put 8, -4, n, -2/7 in decreasing order. 8, -2/7, -4, n Suppose -2*w - 2*w + 4 = 0. Let i be w + 3/(-4 + 7). Sort -1, i, 0 in descending order. i, 0, -1 Let q = -10 + 12. Suppose 5*h + b - 284 = -3*b, 140 = q*h - 5*b. Let g be (8/h)/(6/(-9)). Sort g, -4, -14 in decreasing order. g, -4, -14 Let h be (-10)/(-3)*(-6)/(-4). Let d be 63/(-15) - 1/(-5). Let p(f) = -f**3 - 2*f**2 + 3*f + 3. Let s be p(-2). Sort s, d, h in ascending order. d, s, h Suppose 2*w - 4*w = -8. Let p be ((-5)/4)/((-1)/8). 
Suppose -3*s = 5*a - 8, -467*a + 4*s - 30 = -464*a. Sort a, w, p. a, w, p Let q = -4 + 8. Let o be (-2 - -1)/(5/(-15)*-15). Sort 0, q, o in descending order. q, 0, o Suppose -4*k + 2*r = 48, -10*r + 8*r = -5*k - 59. Put -1, 1, k, -2 in ascending order. k, -2, -1, 1 Suppose 39*s + 14 + 142 = 0. Put -1, 4, 3, s in ascending order. s, -1, 3, 4 Let t = 10.9 + 7.7. Let d = -20 + t. Let v = -0.9 - d. Sort -4, 1, v in descending order. 1, v, -4 Let t = -2.7 + 2.4. Let f = 14 - 16. Let v be (f/(-9))/(12/36). Sort 2, t, v in increasing order. t, v, 2 Let n(h) = -3*h**3 - 22*h**2 + 14*h - 11. Let z be n(-8). Put z, -24, 4, 7 in descending order. 7, z, 4, -24 Suppose 0 = -12*m + 27*m - 45. Let d(s) = s**2 + 4*s + 2. Let p be d(-4). Suppose -p = -2*v - 0. Sort m, -2, v in increasing order. -2, v, m Let l = 1452 + -1461. Put -2, -3, 4, l in increasing order. l, -3, -2, 4 Let m = -7794 + 7794.5. Let c(d) = 3*d - 1. Let s be c(1). Sort s, -5, m, -3 in descending order. s, m, -3, -5 Suppose -u + 504 = 11*u. Suppose 0 = -39*h + u*h + 45. Let l = -0.5 - 0. Sort l, 1/3, h in descending order. 1/3, l, h Let j be ((-5848)/(-12) - 5)/((-34)/(-6)). Let k = 85 - j. Sort -0.4, k, -0.1, -2/7 in ascending order. -0.4, -2/7, k, -0.1 Let b be 3859/(-68) + (-1)/4. Sort 4, 2/9, b in descending order. 4, 2/9, b Let l = -24 - -20.6. Let m = l + 4.1. Sort -1/7, m, 3 in descending order. 3, m, -1/7 Suppose 0 = 4*y + 4*l - 348, -4*y + 0*y + 5*l = -393. Suppose -5*g + y = -2*m + 6*m, -4*g = 5*m - 79. Let u = -2.7 - -5.7. Sort u, g, -1. -1, u, g Let y = -1864 - -1867. Put -135, 5, y, -3 in decreasing order. 5, y, -3, -135 Let z = 0.718 - 0.628. Put z, 0.1, 0.2, 0 in ascending order. 0, z, 0.1, 0.2 Let p be (-16)/20 + 4/(-20). Sort 39, 2, p, 5 in increasing order. p, 2, 5, 39 Suppose 5*o + 5 = 0, 4*j - 3*o + 0*o + 17 = 0. Let h = -16 + 13. Sort -1, 5, h, j in descending order. 5, -1, h, j Let u be (-1456)/64 - -16 - 1/4. Put 3, u, 10 in descending order. 10, 3, u Let c(j) = -j - 7. 
Suppose v + 4*o + 4 - 5 = 0, 2*v - 5*o = 15. Suppose 0 = 5*b + v + 20. Let u be c(b). Sort 3, u, -1 in decreasing order. 3, -1, u Suppose 0 = -4*k - 3*g + 6, 7*g = -3*k + 5*g + 5. Sort k, 5, -10. -10, k, 5 Let u = -12 + 2. Let c = u + 10. Suppose -2*n = -4*k - 22, -2*n = 3*k - 3*n + 16. Put k, c, -3 in descending order. c, -3, k Let i(t) = t**2 + 4*t + 5. Let f be i(-5). Let q be -2*(-4)/8*2. Suppose q*x - 1 - 3 = 0. Put -4, f, x in descending order. f, x, -4 Let t be 3/12 + (-9)/4. Suppose -2*f - 4*y = 20, 0 = -3*f + 5*y - 8 - 33. Let w = f + 8. Put w, t, -5 in decreasing order. t, w, -5 Let w = 6 + -3. Let y = w + -7. Let s be ((-2)/y)/((-40)/16). Put -1/6, -1, s in descending order. -1/6, s, -1 Let i = -2577 - -2580. Put 5, -2, i, 74 in descending order. 74, 5, i, -2 Let b = 0.381 + 0.619. Put -79, b, -5/6 in increasing order. -79, -5/6, b Let p(c) = c + 7. Let g be p(-11). Let h(u) = -2*u - 52. Let w be h(-25). Put -1, w, g in decreasing order. -1, w, g Let u be 0 + -5 - 118/(-59). Sort 5/3, 3, 0.18, u. u, 0.18, 5/3, 3 Let n be -15 - -31 - 2/(-1). Sort 0, n, -3 in ascending order. -3, 0, n Let n = 0.517 - 0.617. Let i = 0.16 - -0.04. Put 0.4, n, i in ascending order. n, i, 0.4 Suppose 5*o = 20, 4*p + 5*o - 18 = 3*p. Sort 3, p, 7. p, 3, 7 Let x = -2795.3 - -2795. Let t = -3 - -3.6. Sort x, 0.1, t in ascending order. x, 0.1, t Suppose 4*p + 23 = -5. Put p, 4, -5, 0.4 in increasing order. p, -5, 0.4, 4 Let c be -2*(-9 + 15/2). Put c, -89, 0 in decreasing order. c, 0, -89 Let m = -2.2 + 1.7. Let k = -3.5 + m. Sort k, -3/2, 2 in decreasing order. 2, -3/2, k Suppose -p + 2*i = -6, -3*p = p + 4*i. Suppose 40 = a - 0*t - 5*t, -p*a - 3*t = -28. Suppose -a = 10*k - 6*k. Sort -3, k, 3 in descending order. 3, -3, k Let b be (-12)/15*10/(-68). Put -9, -4, b, 2 in decreasing order. 2, b, -4, -9 Suppose 0*l - 5*l - 4*j - 19 = 0, 0 = -5*l + 4*j - 11. Suppose 4*c - 2*a - 34 = 0, c = a + 5 + 4. Let i be (-1 - 0)*(-11 + c). Sort i, l, 2. l, 2, i Let y be (-5)/4*10/(-25). 
Let p be ((-5)/(-6))/((-168)/36 - -4). Put p, 0.4, y in descending order. y, 0.4, p Let u = -0.238 - -0.3. Let w = u + 26.938. Let z = w - 27. Sort -0.4, z, 5 in increasing order. -0.4, z, 5 Let c = -7 + 8. Suppose s + 0 = c. Let d = s - 9. Sort d, 0, -4 in ascending order. d, -4, 0 Let z be (6/14)/(16/28). Let v = -3307/407 - -304/37. Put v, -2, z in ascending order. -2, v, z Let k = -77 + 42. Let z = -3/2270 + 1141/4540. Sort -1/3, k, z in decreasing order. z, -1/3, k Let f = -406.33 - -408. Let l = f + -0.07. Let r = l - 2. Put 0.3, -2/3, r in increasing order. -2/3, r, 0.3 Suppose -3 = j - 5*x, 4*j + 7*x - 11 = 4*x. Put j, -3, 0, 5 in descending order. 5, j, 0, -3 Let v = -55.42 + 56. Let x = 0.21 - v. Let t = x + 0.07. Sort -0.2, -2, t. -2, t, -0.2 Let o = 11.81 + -0.81. Put 0.1, -12, o in descending order. o, 0.1, -12 Let z be 14*8/28 - (3 + 1). Let f = 2.8 - 3.3. Sort z, f, -5. -5, f, z Suppose 34 = -2*k + 4*k. Let t = 15 - k. Let p(r) = -r**3 + 9*r**2 - 7*r - 3. Let d be p(8). Sort -4, d, t. -4, t, d Let o = 18 + -13. Suppose i + o*g + 15 = 0, 0 = 4*i + 2*g - 0*g + 6. Let v be 1*-4*(-1)/4. Sort -4, i, v in descending order. v, i, -4 Let i be (12 + -11)*7 - (30/7 - -3). Let y = 2 - 3. Sort -0.3, y, i, -15 in decreasing order. i, -0.3, y, -15 Let q = 131 - 263/2. Let f = -6.1 - 0.9. Let u = 6 + f. Put u, q, 2/9 in descending order. 2/9, q, u Let y = 0.018 + -0.168. Put -2/5, y, 2 in ascending order. -2/5, y, 2 Suppose 10*q - 5*q + 25 = 0. Let p be (0*(-1)/(-2)*2)/(-12 + 7). Sort -11, q, p. -11, q, p Let o = 12 + -25. Let f be (-2 - (-8)/2)/o. Sort -1/6, f, 0.05 in increasing order. -1/6, f, 0.05 Let u(c) be the second derivative of c**5/20 - 2*c**4/3 - 4*c**3/3 - 5*c**2/2 + 13*c. Let m be u(9). Let p = 0.1 + 0.1. Sort p, m, -4 in descending order. m, p, -4 Let n = -44 - -38. Let c be (n - -2) + 3 + 1. Sort c, -25, -2 in decreasing order. c, -2, -25 Suppose -34 = -2*a + 4*c - 6, 0 = 2*c + 6. Suppose -2*t = -2*x + a, -x - 2 = -0*x + 2*t. Sort 3, -0.5, x, 2/5. 
-0.5, 2/5, x, 3 Let i(n) = -4*n**2 - n + 12. Let j be i(-2). Let z be (-50)/(-102) + 4/(-6). Put 2/21, j, -5, z in descending order. 2/21, z, j, -5 Suppose 4*v + 6 + 5 = -3*x, -5*x = 2*v + 23. Suppose 0 = -3*s + 3*u + 9, 4*s + 2*u = 5*u + 11. Put -1, s, x in descending or
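Each prompt above pairs a small arithmetic setup with a sort request. As a quick mechanical check (a Python sketch; the variable names below are chosen for illustration, not taken from the dataset's own solver), one of the problems can be verified like this:

```python
# Verify one problem from the list:
# "Let y = 191 - 187. Put y, -1, 5, -6 in descending order."
# Expected answer: 5, y, -1, -6
y = 191 - 187  # y = 4

# Map each printed token to its numeric value, then sort tokens by value.
values = {"y": y, "-1": -1, "5": 5, "-6": -6}
order = sorted(values, key=values.get, reverse=True)

print(", ".join(order))  # 5, y, -1, -6
```

Note that the dataset's answers keep the symbolic token (here `y`) in the output rather than its numeric value, which is why the sort key maps tokens to numbers but the printed list keeps the tokens.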
<?php

class Category extends Eloquent {

    // Eloquent model for the "category" table; everything except "id" is mass-assignable.
    protected $table = "category";
    protected $guarded = ["id"];

    // Laravel 4 style soft deletes (rows are flagged as deleted, not removed).
    protected $softDelete = true;

    // One-to-many relation: a category has many products.
    public function products()
    {
        return $this->hasMany("Product");
    }
}
1. Field of the Invention
The present invention relates to a process for producing a polycarbonate resin foamed blow-molded article by blow molding a foamed parison, and to a polycarbonate resin foamed blow-molded article.
2. Description of Prior Art
Because a polycarbonate resin (hereinafter occasionally referred to as "PC resin") has a much higher melt viscosity near its foaming temperature and requires a much higher extrusion pressure than other resins such as polystyrene, it has been difficult to extrude and foam PC resin. Moreover, because the melt tension of a PC resin is much smaller than that of other resins such as polystyrene, cells are apt to be broken during their growth. Therefore, the obtained PC resin extruded foamed product shows only an insufficient expansion ratio, and its cells are not uniform in size. In particular, PC resin foamed blow-molded articles have had an expansion ratio as low as about 1.3. It has not been possible to obtain a PC resin foamed blow-molded article with an expansion ratio as high as those achieved with polystyrene or polyethylene resins. Under these circumstances, Japanese unexamined patent publication No. JP-A-2000-033643 proposes a method in which a PC resin having a branched structure and a specific melt tension is extruded to form a foamed parison. By blow-molding the parison, a PC resin foamed blow-molded article having an acceptable expansion ratio is obtainable. Japanese unexamined patent publication No. JP-A-2008-144084 proposes a method in which a modified PC resin, obtained by modifying a commercially available branched PC resin with a branching agent, is extruded through a die with a large area to obtain a foamed board that has a high expansion ratio and a large sectional area and that shows a high compression strength even at both side end regions in its width direction.
George A. Snow George Albert Snow (born August 1, 1923) was a Canadian politician. He represented the electoral district of Yarmouth in the Nova Scotia House of Assembly from 1963 to 1974. He was a member of the Nova Scotia Progressive Conservative Party. Snow was born in Port Maitland, Nova Scotia and was a lobster fisherman. He married Marjorie Louise Harris in 1947. References Category:1923 births Category:Living people Category:Progressive Conservative Association of Nova Scotia MLAs Category:People from Yarmouth County, Nova Scotia
Many dentists are looking for funding to help their businesses get ahead. Are you looking to receive additional cash flow for any purpose? Get the funding your business needs in less than 24 hours! You need at least 5 months in business. Our plans include no fixed payments, making repayment a breeze. Payments are based on a small percentage of your business's total sales, allowing you to focus on other important things: your customers. Ed Rogers, Small Business Loans Depot, 919-771-4177.
Dr Shawn Pourgol is the founder and president of National Academy of Osteopathy (Canada), National University of Medical Sciences (Spain), and National University of Medical Sciences (USA). Dr Pourgol also serves as the current president of Canadian Union of Osteopathic Manual Practitioners. He can be reached at pourgol@pourgol.com. His personal website is www.pourgol.com.
Dictators can bear anything except mockery. Like court jesters of old(e), stand-up comedians are fearless truth-tellers, saying out loud what we’re all secretly thinking and getting away with it—maybe even changing the world! Charlie Chaplin! Lenny Bruce!! George Carlin!!! So what happened? Comedians used to make jokes. Now they make amends. On August 27th during a performance in Phoenix, notoriously unhinged African-American comic Katt Williams told the audience: “If y’all had California and you loved it, then you shouldn’t have gave that motherf***er up, bitches!” Someone yelled, “This IS Mexico, motherf***er!” And so began a six-minute Mexican standoff, with Williams berating the heckler, chanting “U-S-A!” and singing “The Star-Spangled Banner.” “Do you remember when white people used to say, ‘Go back to Africa,’ and we had to tell them we don’t want to?” Katt yelled as the audience whooped its approval. “So if you love Mexico, bitch, get the f*** over there!” “Comedians used to make jokes. Now they make amends.” LaughSpin asked whether Williams had acted “patriotic or just racist” with the cloying concern-troll earnestness that’s standard issue at (of all places) websites run by and for comedians. Every week, these sites cover some controversy involving a stand-up and an aggrieved (insert minority group) audience member. I need to remind myself I’m reading Splitsider instead of a Seven Sisters student paper, what with all the hand-wringing about “homophobia,” “misogyny,” “date rape,” and whether or not certain jokes are “appropriate” or “go too far.” In short, even professional comedians and their fans have embraced the elites’ “eat your (organic) spinach” ethos. I thought everyone understood that those “You know what? I’ve learned something today…” codas on South Park were goofs on the cringeworthy “very special episodes” that once propped up faltering sitcoms. 
The ribald “poker scene” from Louie is every hipster’s new favorite thing because it features a somber, chilling (and highly inaccurate) etymology of the word “faggot”—not in spite of it. There were over one hundred news stories about Katt Williams last week according to Google, but only one speculated about his rant’s possible—what’s that expression they like?—“root causes.” (Yeah: me.) The story ran its usual course: Self-appointed “community leaders” and “activists”—in this case, presente.org—unveiled the obligatory “online petition” demanding an apology from Williams. That’s when things got confusing. An apology was issued and promptly accepted by one Rev. Jarrett Maupin, who’d been “organizing a boycott” against Williams but now wants him to return to Phoenix and “show his commitment to the Latino community.” This is all pretty damn funny, since Maupin is black. (He’s also an Al Sharpton protégé, complete with legal troubles.) But a few days later, Williams was on CNN claiming his publicist had issued an apology without his permission. “I’m not allowed to [apologize],” he explained to anchor T. J. Holmes. “As a stand-up, the only thing I sell is uncensored thought. I’m not allowed, then, to come back the next day and apologize….If a person starts their heckling with ‘F*** America,’ then that gives me the right to defend my country.” That was “Opie” of radio’s Opie & Anthony Show talking about the Carolla vs. GLAAD “controversy” and other recent—to coin a phrase—“wit hunts.” Topmost on everyone’s mind at the time was Tracy Morgan. Another flaky African-American comedian, Morgan was obliged to go on a pro-homosexual “awareness raising” apology tour after “joking” that if one of his children ever talked to him “in a gay voice,” he’d kill him. 
Opie and Anthony railed against this booming new “business” of apologizing—let’s call it “Big Sorry”—and were joined by comic Jim Norton, who compared Morgan’s humiliating repentance road show to a slave auction in “the f***ing 1700 slavery days, where they held that poor bastard captive.”
Deep Exploit at Black Hat USA 2018 Arsenal.

DeepExploit is a fully automated penetration test tool linked with Metasploit. It has two exploitation modes.

Intelligence mode: DeepExploit identifies the status of all opened ports on the target server and executes the exploit at pinpoint using machine learning.

Brute force mode: DeepExploit executes exploits thoroughly, using all combinations of "Exploit module", "Target" and "Payload" of Metasploit corresponding to the user's indicated product name and port number.

DeepExploit's key features are the following.

Self-learning: DeepExploit can learn how to exploit by itself (using reinforcement learning). It is not necessary for humans to prepare learning data.

Efficient exploit execution: DeepExploit can execute exploits at pinpoint (minimum 1 attempt) using its self-learned data.

Deep penetration: if DeepExploit succeeds in exploiting the target server, it further executes the exploit against other internal servers.

Very easy operation: your only operation is to input one command.

Very fast learning: generally, learning takes a lot of time, so DeepExploit uses distributed learning by multiple agents. We adopted an advanced machine learning model called A3C.

Abilities of "Deep Exploit". The current DeepExploit version is a beta, but it can fully automatically execute the following actions: intelligence gathering, threat modeling, vulnerability analysis, exploitation, post-exploitation, and reporting.

Your benefits. By using our DeepExploit, you will benefit from the following.

For pentesters: (a) they can greatly improve test efficiency. (b) The more pentesters use DeepExploit, the more DeepExploit learns exploitation methods via machine learning; as a result, test accuracy improves.

For information security officers: (c) they can quickly identify vulnerabilities of their own servers.
As a result, you can prevent attackers from exploiting those vulnerabilities, and protect your reputation by avoiding negative media coverage after a breach. Since attack methods against servers evolve day by day, there is no guarantee that yesterday's security countermeasures are safe today. It is necessary to quickly find vulnerabilities and take countermeasures. Our DeepExploit will contribute greatly to keeping you safe.

Note: if you are interested, please use it in an environment under your control and at your own risk.

System components.

DeepExploit consists of the machine learning model (A3C) and Metasploit. The A3C executes exploits against the target servers via the RPC API. The A3C is developed with Keras and TensorFlow, famous Python-based ML frameworks, and is used to self-learn exploitation methods using deep reinforcement learning. The self-learned results are stored as reusable learned data.
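The "self-learning" claim above is the core idea: the agent discovers by trial and error which payload works against a target, so later runs need fewer attempts. Here is a deliberately tiny sketch of that idea, using plain tabular value learning against a simulated target; the payload names are just illustrative strings, and this is nothing like the real A3C model or the Metasploit RPC integration:

```python
import random

random.seed(7)

# Candidate payloads (illustrative names only) and the one that
# "works" against our simulated target -- the agent doesn't know which.
PAYLOADS = ["generic/shell_reverse_tcp", "linux/x86/meterpreter", "cmd/unix/bind_perl"]
WORKING = "linux/x86/meterpreter"

def attempt(payload):
    """Simulated exploit attempt: +1 reward on success, -1 on failure."""
    return 1.0 if payload == WORKING else -1.0

# Learned value estimate per payload, starting with no knowledge.
q = {p: 0.0 for p in PAYLOADS}
for episode in range(200):
    if random.random() < 0.2:            # explore: try a random payload
        p = random.choice(PAYLOADS)
    else:                                # exploit: use the best-known payload
        p = max(q, key=q.get)
    q[p] += 0.1 * (attempt(p) - q[p])    # incremental value update

best = max(q, key=q.get)
print(best)  # linux/x86/meterpreter
```

After training, the greedy choice succeeds on the first attempt, which is the "minimum 1 attempt" behavior the README describes; the real tool replaces this toy table with an A3C network trained by distributed agents.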
Q: Curl syntax to use to promote artefacts from Snapshot to Release Repository Does anyone know the curl syntax to use to promote artefacts from Nexus Snapshot Repo to Release Repository please? A: You can ABSOLUTELY use curl for everything. I personally use the NING HttpClient (v1.8.16). For whatever reason, Sonatype makes it incredibly difficulty to figure out what the correct URLs, headers, and payloads are supposed to be; and I had to sniff the traffic and guess... There are some barely useful blogs/documentation there, however it is either irrelevant to oss.sonatype.org, or it's XML based (and I found out it doesn't even work). Crap documentation on their part, IMHO, and hopefully future seekers can find this answer useful. I don't think you can go from SNAPSHOT to RELEASE without generating another build, however if you create another build and deploy directly to the staging repo and promote from staging -> release, this will accomplish what I think you want. If you release to Nexus other than oss.sonatype.org, just replace it with whatever the correct host is. The URL that you are interested in "https://oss.sonatype.org/service/local/staging/profiles/" + profile + "/finish" Where profile is your sonatype/nexus profileID (such as 4364f3bbaf163) from when you uploaded your initial POM/Jar. Here is the (CC0 licensed) code I wrote to accomplish this. The repo (such as comdorkbox-1003) is also parsed from the response when you upload your initial POM/Jar. Close repo: /** * Closes the repo and (the server) will verify everything is correct. 
 * @throws IOException
 */
private static String closeRepo(final String authInfo, final String profile, final String repo, final String nameAndVersion) throws IOException {
    String repoInfo = "{'data':{'stagedRepositoryId':'" + repo + "','description':'Closing " + nameAndVersion + "'}}";

    RequestBuilder builder = new RequestBuilder("POST");
    Request request = builder.setUrl("https://oss.sonatype.org/service/local/staging/profiles/" + profile + "/finish")
                             .addHeader("Content-Type", "application/json")
                             .addHeader("Authorization", "Basic " + authInfo)
                             .setBody(repoInfo.getBytes(OS.UTF_8))
                             .build();

    return sendHttpRequest(request);
}

Promote repo:

/**
 * Promotes (ie: releases) the repo. Make sure to drop when done.
 * @throws IOException
 */
private static String promoteRepo(final String authInfo, final String profile, final String repo, final String nameAndVersion) throws IOException {
    String repoInfo = "{'data':{'stagedRepositoryId':'" + repo + "','description':'Promoting " + nameAndVersion + "'}}";

    RequestBuilder builder = new RequestBuilder("POST");
    Request request = builder.setUrl("https://oss.sonatype.org/service/local/staging/profiles/" + profile + "/promote")
                             .addHeader("Content-Type", "application/json")
                             .addHeader("Authorization", "Basic " + authInfo)
                             .setBody(repoInfo.getBytes(OS.UTF_8))
                             .build();

    return sendHttpRequest(request);
}

Drop repo:

/**
 * Drops the repo.
 * @throws IOException
 */
private static String dropRepo(final String authInfo, final String profile, final String repo, final String nameAndVersion) throws IOException {
    String repoInfo = "{'data':{'stagedRepositoryId':'" + repo + "','description':'Dropping " + nameAndVersion + "'}}";

    RequestBuilder builder = new RequestBuilder("POST");
    Request request = builder.setUrl("https://oss.sonatype.org/service/local/staging/profiles/" + profile + "/drop")
                             .addHeader("Content-Type", "application/json")
                             .addHeader("Authorization", "Basic " + authInfo)
                             .setBody(repoInfo.getBytes(OS.UTF_8))
                             .build();

    return sendHttpRequest(request);
}

Delete signature turds:

/**
 * Deletes the extra .asc.md5 and .asc.sha1 'turds' that show up when you upload the signature file. And yes, 'turds' is from Sonatype
 * themselves. See: https://issues.sonatype.org/browse/NEXUS-4906
 * @throws IOException
 */
private static void deleteSignatureTurds(final String authInfo, final String repo, final String groupId_asPath, final String name, final String version, final File signatureFile) throws IOException {
    String delURL = "https://oss.sonatype.org/service/local/repositories/" + repo + "/content/" + groupId_asPath + "/" + name + "/" + version + "/" + signatureFile.getName();

    RequestBuilder builder;
    Request request;

    builder = new RequestBuilder("DELETE");
    request = builder.setUrl(delURL + ".sha1")
                     .addHeader("Authorization", "Basic " + authInfo)
                     .build();
    sendHttpRequest(request);

    builder = new RequestBuilder("DELETE");
    request = builder.setUrl(delURL + ".md5")
                     .addHeader("Authorization", "Basic " + authInfo)
                     .build();
    sendHttpRequest(request);
}

File uploads:

public String upload(final File file, final String extension, String classification) throws IOException {
    final RequestBuilder builder = new RequestBuilder("POST");
    final RequestBuilder requestBuilder = builder.setUrl(uploadURL);

    requestBuilder.addHeader("Authorization", "Basic " + authInfo)
                  .addBodyPart(new StringPart("r", repo))
                  .addBodyPart(new StringPart("g", groupId))
                  .addBodyPart(new StringPart("a", name))
                  .addBodyPart(new StringPart("v", version))
                  .addBodyPart(new StringPart("p", "jar"))
                  .addBodyPart(new StringPart("e", extension))
                  .addBodyPart(new StringPart("desc", description));

    if (classification != null) {
        requestBuilder.addBodyPart(new StringPart("c", classification));
    }
    requestBuilder.addBodyPart(new FilePart("file", file));

    final Request request = requestBuilder.build();
    return sendHttpRequest(request);
}

EDIT1: How to get the activity/status for a repo:

/**
 * Gets the activity information for a repo. If there is a failure during verification/finish -- this will provide what it was.
 * @throws IOException
 */
private static String activityForRepo(final String authInfo, final String repo) throws IOException {
    RequestBuilder builder = new RequestBuilder("GET");
    Request request = builder.setUrl("https://oss.sonatype.org/service/local/staging/repository/" + repo + "/activity")
                             .addHeader("Content-Type", "application/json")
                             .addHeader("Authorization", "Basic " + authInfo)
                             .build();

    return sendHttpRequest(request);
}
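All of these requests reuse the same `authInfo` string, which the snippets above never construct. It is simply the HTTP Basic credential implied by the `Authorization: Basic ...` header: the Base64 encoding of `username:password`. A minimal sketch (the class and method names here are my own illustration, not part of the code above):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthExample {

    // Builds the value that follows "Basic " in the Authorization header:
    // Base64("username:password"), without line wrapping.
    static String authInfo(final String username, final String password) {
        final String credentials = username + ":" + password;
        return Base64.getEncoder()
                     .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // This value would be passed as the "authInfo" parameter above.
        System.out.println("Basic " + authInfo("myUser", "myPassword"));
    }
}
```

Once authenticated, the usual staging sequence follows the method comments above: upload the artifacts, close the repo, check `activityForRepo` if the close fails verification, then promote, and finally drop.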
Empty confectioners' sugar into a large bowl and pour cheese mixture over sugar. Stir until completely mixed. Candy will be very stiff. Using your hands, remove candy from the bowl and press evenly and firmly into the pan. Because of the amount of butter in this recipe, pat top of candy with a paper towel to remove excess oil. Place the pan in refrigerator until candy is firm.

Ingredients
Crepe Batter:
1 cup all-purpose flour
Pinch salt
1 egg
1 egg yolk
1 1/2 to 2 cups milk
1 tablespoon melted butter
1/2 teaspoon vanilla extract

Topping:
1 jar hazelnut chocolate spread
5 bananas, sliced
1 can whipping cream

Directions
Sift the flour with the salt into a bowl. Make a well in the center and add the egg and egg yolk. Pour in the milk, slowly, stirring constantly and, when half is added, stir in the melted butter and vanilla. Beat well until smooth. Add the remaining milk, cover and let stand at room temperature for at least 20 minutes before using. The batter should be the consistency of light cream. Heat a well-greased 6-inch skillet. Add 1/4 cup batter. Tip skillet from side to side until batter covers bottom. Cook until the bottom is golden brown, turn and remove to a plate. Repeat with remaining batter. Spread a thin layer of chocolate hazelnut spread onto crepe and place banana slices down the center. Roll or fold crepe. Just before serving top with whipped cream.

Meanwhile, slice French bread into 3/4-inch slices and butter both sides. Toast slices on griddle until golden brown. Ladle soup into an ovenproof bowl, add toasted bread and cover with cheese. Place ovenproof bowl on a baking sheet lined with tin foil. Bake at 350 degrees F, or place for 5 minutes under a hot broiler.

Ingredients
1 stick butter
1/2 cup brown sugar
4 bananas, peeled and halved, cut lengthwise
1/4 cup dark rum

Directions
Melt butter in a large skillet. Add brown sugar and stir together. Add the bananas and cook until caramelized over medium-high heat. Pour in the rum and catch a flame off of the gas stove or a BBQ lighter. Stand back when ignited and flambe. Be careful; a flame will shoot up above the pan. Let flame die down. Serve over vanilla ice cream and/or a slice of pound cake.

In a greased 2-quart casserole, layer the chicken, eggs, carrots, and peas. Mix the soup and chicken broth, and season with salt and pepper, if desired. Pour over the layers. Stir together the biscuit mix and milk, and pour this over the casserole. Drizzle butter over the topping. Bake until the topping is golden brown, 30 to 40 minutes.

Beat cream cheese with a handheld electric mixer until fluffy. Add sugar and vanilla, beating well. Add eggs, 1 at a time, beating well after each addition. Place a vanilla wafer, flat side down, in each muffin cup. Spoon cream cheese mixture over wafers. Bake for 20 minutes. Allow tarts to cool completely. Serve with blueberry filling on top, or pie filling of your choice.

To make the crust: Place the flour, butter, and sour cream in a food processor and pulse to combine. When the dough has formed a ball, pat with lightly floured hands into the bottom and sides of an ungreased 10-inch tart pan with a removable bottom and 1/2-inch sides, or a round au gratin dish. Bake for about 18 minutes, until the crust is set but not browned. Let cool while preparing the filling. Lower the oven temperature to 350 degrees F.

To make the filling: Peel and thickly slice the apples. Arrange the apple slices in overlapping circles on top of the crust, until it's completely covered. Overfill the crust, as apples will shrink during cooking. Combine the egg yolks, sour cream, sugar, and flour and beat until smooth. Pour the mixture over the apples. Place the tart pan on a baking sheet and bake for about 1 hour, until the custard sets and is pale golden in color. Cover with an aluminum foil tent if the crust gets too dark. Transfer the tart pan to a wire rack to cool. When cool, remove the side wall of the pan.

To make the glaze: Combine the preserves or jelly and orange juice. Spread with a pastry brush over the top of the warm tart. Serve the tart warm, at room temperature or chilled. Garnish with fresh mint.

Sprinkle flounder with salt and pepper. Using a fillet knife, carefully open the flounder by cutting along the left and right sides of the seam down the middle of the fish to make pockets. Lay the cut sides back. Stuff the flounder with the Crab Cake Mix, and press the sides down to cover the filling. Sprinkle with paprika. Coat a glass baking dish with cooking spray. Place the fish in the dish and bake for 20 minutes. Without removing the dish from the oven, turn the oven to broil and broil for 5 additional minutes. When the fish is done, remove it from the oven. Slice a medallion of crab butter and place it on top of the fish. Sprinkle fresh parsley on top for added color.

In a mixing bowl, combine the egg, cayenne pepper, garlic powder, heavy cream, mustard, lemon juice and mayonnaise together. Gently mix in the saltine crackers and crabmeat. Add salt and pepper, if needed. Set aside to stuff in flounder. Yield: 24 ounces

Crab Butter:
2 ounces claw crabmeat, picked clean of shells
2 sticks unsalted butter, softened
1 tablespoon seafood base
1 green onion, thinly sliced

Fold the crab, butter, seafood base, and onions together in a medium bowl and mix until smooth. Remove from the bowl and shape into a log on a piece of parchment paper. Roll up and place in the freezer. Reserve for flounder.

Place half of the ice-cream sandwiches evenly over bottom of pan, completely covering bottom, cutting sandwiches to fit, if necessary. Top evenly with banana slices. Pour hot fudge sauce over bananas. Top with cherries and 1/2 of the toffee bits and pour butterscotch evenly on top. Layer with remaining ice cream sandwiches and spread whipped topping over sandwiches; sprinkle with remaining toffee bits. Cover and freeze for at least 4 hours.
Remove from pan, using foil handles. Cut into squares and serve. Combine flour, brown sugar, pecans, and butter in bowl. Press dough into an ungreased 13 by 9 by 2-inch pan. Bake for 12 to 15 minutes or until lightly browned. For filling, beat cream cheese and granulated sugar together in a bowl until smooth, using a handheld electric mixer; add eggs and extract; beat well. Pour over crust. Bake for 20 minutes. Cool completely. Cut into squares before serving. Decorate tops with berries and mint leaves. Snap off the tough ends of the asparagus. Unwrap the phyllo and cut the stack in half lengthwise. Reserve 1 stack for later use. Cover the phyllo with a damp towel to keep it from drying out. Take 1 sheet of phyllo and brush lightly with some melted butter. Sprinkle with some Parmesan. Place 2 to 3 asparagus spears on the short end of the sheet. Roll up, jelly-roll style. Place each piece, seam side down, on a baking sheet. Brush with more melted butter and sprinkle with more Parmesan. Repeat until all the asparagus spears are used up. Place baking sheet in oven and bake for 15 to 18 minutes, or until golden brown and crispy. Preheat broiler or grill. Divide ground beef into 12 flat patties. Partially saute bacon - the fat should start rendering, but it shouldn't be crispy. Drain bacon. Mix pecans, parsley, onion, and butter together. Spread mixture on 6 of the patties. Top with remaining 6 patties and seal the edges with the partially sauteed bacon and secure with a wooden toothpick. Broil or grill 10 to 15 minutes, or until cooked to desired doneness, turning once. Patty #2: Butter Burger: Salt and pepper the beef, to taste. Mix beef and cubed butter together with your hands and form 6 balls. Push a piece of American cheese into the center of each ball. Mold ground beef around the cheese in the shape of a patty. Cook on hot grill or in skillet to desired doneness. Special Sauce for the "Big Mike": Stir all ingredients together until well blended. 
To Serve: Place 1 pecan burger on each of 6 bottom bun pieces. Top with another bottom piece of bun and the butter burger. Top with sauce, lettuce, tomato, onion and a top piece of the bun. Discard or save for another use the remaining 6 top bun pieces.

Meanwhile, combine all the dressing ingredients in a jar with a tight-fitting lid and shake well. Refrigerate until ready to use. In a large bowl, combine the rice, artichoke hearts, peas, green pepper, green onions, tomatoes, reserved marinade, and half of the dressing. Toss well. Cover and chill or eat at room temperature. Just before serving, toss again and taste. Add some of the remaining dressing, if desired. Sprinkle with the almonds and serve.

Grease 6 (6-ounce) custard cups. Melt the chocolates and butter in the microwave, or in a double boiler. Add the flour and sugar to chocolate mixture. Stir in the eggs and yolks until smooth. Stir in the vanilla and orange liqueur. Divide the batter evenly among the custard cups. Place in the oven and bake for 14 minutes. The edges should be firm but the center will be runny. Run a knife around the edges to loosen and invert onto dessert plates.

Remove roast from bag, place in a roasting pan, and discard marinade. Roast pork loin at 325 degrees F for 2 to 2 1/2 hours, or until an instant-read thermometer registers an internal temperature of 160 to 170 degrees F. Serve with BBQ sauce.

In a medium bowl, beat the eggs with a whisk. Stir in the brown sugar until smooth. Pour into the top of a double boiler. Cook the mixture over simmering water for 20 minutes, stirring constantly, until it reaches 180 degrees F. Let cool for 5 minutes. Sift the 1 1/2 cups of flour, baking powder, and salt together. Add pecans to flour mixture. Stir into the cooked mixture. Add vanilla and stir until combined. (The batter will be thin.) Pour the batter into the prepared pan and place the pan in the oven. Bake for 25 minutes. Allow to cool completely in the pan before cutting into pieces. Store in an airtight tin in the refrigerator.

In a medium mixing bowl, toss the strawberries and 1/4 cup sugar together. Set aside until time to serve. In the bowl of a food processor, pulse together flour, baking powder, salt and 3 tablespoons of sugar. Then pulse in the cold butter cubes until a coarse meal is formed. Turn the flour mixture out into a large mixing bowl and make a well in the center. Pour in 2/3 cup half-and-half and gently mix it in with a rubber spatula or fork; be careful not to overmix the dough or the biscuits will be tough. Turn the dough out onto a lightly floured surface and fold it over itself a couple of times until it just holds together. Pat the dough out to 3/4-inch thickness and cut out 8 round 3-inch biscuits. Transfer the biscuits to a parchment paper lined baking sheet. Brush the tops of each biscuit with the remaining half-and-half and sprinkle each with 1 teaspoon sugar. Bake in a preheated oven for 12 to 15 minutes or until the biscuits have risen and are a light golden brown. Remove from the oven and let cool slightly. Split each biscuit, spoon some strawberries on the bottom piece, then whipped cream and top with the other biscuit half. Garnish with fresh mint and more strawberries.

In a shallow bowl, stir together the ground cinnamon and sugar and set aside. In a small bowl, whisk together 1 cup of confectioners' sugar, 2 tablespoons of milk and 1 teaspoon of vanilla extract; set aside. This is the vanilla icing. In another bowl, whisk together 1 cup of confectioners' sugar, 1/4 cup of cocoa powder and 3 tablespoons of milk and set aside. This is the chocolate icing. Lay out the biscuits on a cutting board and with a 1 1/2-inch round cookie or biscuit cutter, cut out a hole from the middle of each biscuit. Fry them in the oil until golden and then flip with tongs to fry the other side. You can even fry the donut holes. Drain on paper towels and then toss in the cinnamon-sugar, or ice them and decorate with sprinkles, as desired.

Mix the dry rub ingredients in small bowl. Sprinkle dry rub all over the pork roast, pressing into the pork. Cover with plastic and refrigerate for at least 2 hours. Combine liquid ingredients and the garlic powder in a medium bowl and pour into a large Dutch oven. Place the pork in the Dutch oven and tightly cover with aluminum foil, then the lid. Roast for 4 hours or until fork tender and shreds easily. Brush the roast with cooking liquid every hour. Remove from oven and let stand until cool enough to handle. Shred the pork with a fork or tongs into bite size pieces.

Melt the butter in a skillet over medium heat. Add the bell pepper, onion, and celery and saute for 2 minutes. In a bowl, combine the soup, mayonnaise, Parmesan, crabmeat, shrimp, and pepper. Stir the sauteed vegetables into the seafood mixture and spoon this mixture into a lightly greased 8 by 11-inch casserole dish. Bake for 30 minutes. Serve with toast points or crackers.

Combine 1/4 cup of olive oil and 3 drops of liquid smoke. Toss 4 of the Portobello mushrooms in oil mixture and roast them in the oven for approximately 45 minutes. Chop mushrooms into cubes. In a large pot, saute the onions in olive oil. Once onions are sauteed, add uncooked portobello mushrooms and garlic. After mushrooms and garlic are cooked, add roasted mushrooms, white wine and chicken broth and allow to simmer for 20 minutes. Remove from heat and place mixture in blender. Blend in heavy cream. Place mixture back in pot on stove and add roux. Let simmer for 20 minutes. Garnish with croutons, sour cream and sliced chives.

Unroll the piecrusts onto a lightly floured surface. Roll into 2 (15-inch) circles. Cut out 48 circles using a 1 3/4-inch fluted or round cookie cutter, re-rolling dough as needed. Place in 1 3/4-inch muffin pans, pressing on the bottoms and up the sides of each of the mini-muffin cups. Combine the melted butter, brown sugar, flour, and eggs in a large bowl, mixing well. Add the vanilla. Stir in the pecans and brickle chips. Spoon the pecan filling evenly into the pie shells. Bake for 25 minutes, or until filling is set and crust is lightly browned. Cool in pans on wire racks.

On a cutting board, place the chicken breasts between pieces of plastic wrap and pound the chicken out until it is 1/2-inch thick. In 3 separate shallow dishes, place the flour, eggs, and bread crumbs. Season the chicken with salt and pepper. Dredge the cutlets through the flour, then eggs, and then the bread crumbs. In a large skillet over medium-high heat, heat the vegetable oil. Place the cutlets in the oil and fry until brown on each side, about 2 to 3 minutes per side. Place the cooked cutlets in a baking dish. Spoon tomato sauce over each cutlet and sprinkle evenly with the cheeses. Bake for 10 to 15 minutes, until the cheese is melted and bubbly.

Bring a large pot of salted water to a boil over high heat. Add ziti and cook until al dente. Drain in a colander. In a large skillet over medium heat, saute turkey sausage. Add onion and garlic and saute until the sausage is cooked through. Use your spoon to break up the sausage while it cooks. Add the can of diced tomatoes and pesto and let simmer for 10 minutes. Add the ricotta cheese, spinach, Parmesan and mozzarella to a large bowl, and stir to combine. Butter a 4-quart baking dish, add the cooked pasta, then sausage mixture and cheese mixture. Top with a sprinkling of mozzarella and Parmesan. Bake until completely heated through and golden and crisp on top, about 20 minutes.

While the pasta is cooking, in a separate pot, melt the butter. Whisk in the flour and mustard and keep it moving for about five minutes. Make sure it's free of lumps. Stir in the milk, onion, bay leaf, and paprika. Simmer for ten minutes and remove the bay leaf. Temper in the egg. Stir in 3/4 of the cheese. Season with salt and pepper. Fold the macaroni into the mix and pour into a 2-quart casserole dish. Top with remaining cheese.
Melt the butter in a saute pan and toss the bread crumbs to coat. Top the macaroni with the bread crumbs. Bake for 30 minutes. Remove from oven and rest for five minutes before serving.

Place tortillas in a pie plate with 1 cup broth. Turn once in a while to ensure all tortillas are soaked; set aside. Place a large nonstick skillet over medium-high heat and add olive oil. Saute onions, peppers, garlic and mushrooms for 5 minutes until slightly browned. Add turkey and cook until turkey is cooked through, about 7 to 9 minutes. Add remaining 1/4 cup chicken stock and remaining ingredients, except cheese and tortilla chips. Stir thoroughly and bring to a boil. Reduce heat and simmer for 5 minutes. Place 4 soaked tortillas in the bottom of prepared baking dish, tearing to fit. Layer half of turkey mixture and half of cheese. Repeat layers. Top with crushed tortilla chips. Place in oven and bake for 30 minutes.

In a large bowl, combine cake mix and gelatin. Add pureed strawberries, eggs, oil, and water; beat at medium speed with an electric mixer until smooth. Pour into prepared pans, and bake for 20 minutes, or until a wooden pick inserted in the center comes out clean. Let cool in pans for 10 minutes. Remove from pans, and cool completely on wire racks.

For the frosting: In a large bowl, beat butter and cream cheese at medium speed with an electric mixer until creamy. Beat in 1/4 cup of the strawberry puree and the vanilla extract. (The rest of the puree is leftover but can be used in smoothies or on ice cream for a delicious treat.) Gradually add confectioners' sugar, beating until smooth. Spread frosting in between layers and on top and sides of cake. Garnish with sliced fresh strawberries, if desired.

Generously season a cast iron skillet with up to 1/4 cup vegetable oil. Preheat the pan either in the oven or on the stove over medium-high heat. Mix all ingredients together in a large bowl, stirring with a wooden spoon or rubber spatula until combined. Pour batter into the preheated cast iron skillet. Place skillet in the oven and bake until golden brown, approximately 30 minutes. If making individual-size cornbreads in smaller pans, they will require a shorter cooking time.

Pour the pineapple with juice into the casserole dish and evenly spread blueberry pie filling on top. Cover with dry yellow cake mix and top with pecans. Drizzle with melted butter and bake for 35 to 45 minutes.

Spread butter over each side of bread. Place 4 slices of bread, butter side down, on a skillet over medium heat. Flip bread and toast the other side. Remove to serving plates. In a small bowl, add cream cheese, parsley and green onions together. Add salt and pepper. Spread onto 1 side of each toast slice. In a small skillet, fry the eggs in reserved bacon grease over medium-low heat. Crack the yolks and flip. Place each egg on top of 2 pieces of toast. Cover each egg with 2 slices of bacon, 2 slices of tomato, and 2 slices of avocado. Top with another piece of toast and serve immediately.

Sift together flour, salt, sugar, and baking powder into a bowl. Cut shortening into flour mixture with a pastry cutter or fork until mixture resembles cornmeal. Stir in 1/4 cup of the cold water, then add remaining 1/4 cup and mix until combined. Cover dough and allow it to rest in refrigerator for 30 minutes. Divide dough in half. Place on lightly floured board and pat out. Using a rolling pin, roll out 1 piece of dough to the size of a 9-inch pie pan. Put crust in pan and trim off excess dough around the edge. Roll out second ball of dough for pie crust top.

For the filling: Preheat oven to 350 degrees F. Mix sugar, tapioca, zest, cinnamon, nutmeg, and raisins in a large bowl. Lay tomato slices in pie crust. Sprinkle mixture over tomatoes. (Overlapping will occur but tomatoes will shrink in size when baked.) Gently lay top pie crust over filling, tucking in the extra crust around the edges. Pinch dough with fingers or butter knife to seal edges. Using a knife, make 4 to 6 slits in top of crust to allow steam to escape. Brush top with egg white and sprinkle with a little sugar to give your crust a shine. Place pie in the preheated oven and bake for 25 minutes. Reduce temperature to 350 degrees F and continue to bake for 20 more minutes. Cool on wire rack.

In a small skillet, melt 1/4 cup of butter over medium heat. Add onion, and cook 3 to 4 minutes, or until soft. Pour mixture into a large bowl. Add the sour cream, cream of celery soup, cheese, garlic powder, hash browns, and salt and pepper. Combine until well blended. Pour mixture into prepared baking dish. Top evenly with crushed potato chips and bake for 45 minutes, or until hot and bubbly.

Add the bell peppers and onions to the skillet, and saute until softened, about 3 minutes. Stir in the garlic and cook until fragrant, about half a minute longer. Push the vegetables to the side of the skillet. Add chops to pan and place vegetables on top of pork chops. Pour in the broth and sprinkle with Worcestershire sauce. Cover pan with foil and allow to simmer for 45 minutes or until chops are tender.

In a large bowl, combine flour, sugar, baking soda, cinnamon, and salt. Add eggs and vegetable oil. Using a hand mixer, blend until combined. Add carrots and pecans, if using. Pour into pans. Bake for approximately 40 minutes. Remove from oven and cool for 5 minutes. Remove from pans, place on waxed paper and allow to cool completely before frosting.

For the frosting: Add all ingredients, except nuts, into a medium bowl and beat until fluffy using a hand mixer. Stir in the nuts. Spread frosting on top of each cake layer. Stack the cakes on a serving plate and serve.

Roll each individual biscuit into a ball. Dip each piece in the 1 stick of melted butter, then coat with the cheese mixture. Place dough into prepared pan; overlapping will occur. Bake bread for 30 to 35 minutes.
Cover with aluminum foil during the last 10 minutes of baking to prevent excess browning, if needed.

Separate the biscuit dough into 8 biscuits. Place 1 biscuit in the center of the pan. Cut the remaining biscuits in half, forming 14 half-circles. Arrange the pieces around the center biscuit with cut sides facing in the same direction. Brush melted butter over the tops of the biscuits. In a small bowl, combine the walnuts, granulated sugar, and orange zest. Mix well and sprinkle over biscuits. Bake for 20 minutes or until golden brown. Meanwhile, in a small bowl combine confectioners' sugar, cream cheese, and orange juice. Blend until smooth, adding more orange juice, if needed, to thin. Drizzle glaze over the warm coffee cake and serve.

In a large, heavy-bottomed pot add enough water to completely cover the potatoes. Bring the water to a boil over medium-high heat. Add the potatoes and a large pinch of salt. When the water returns to a boil, reduce heat to a simmer, and cook until tender, about 30 minutes. Meanwhile, put the diced celery root in a saucepan with enough water to cover, bring to a boil, loosely cover, and reduce the heat to a simmer. Cook until tender, about 30 minutes. It should be the same consistency as the cooked potatoes. Drain the potatoes. Put them through a ricer and add 2 tablespoons of the butter and half the cream. Drain the celery root and put it in a blender with the remaining 2 tablespoons of butter and remaining cream. *Puree until smooth, using care, as the puree will be very hot. Add the celery root puree to the potatoes and mix well. Lightly season with salt and pepper.

*When blending hot liquids: Remove liquid from the heat and allow to cool for at least 5 minutes. Transfer liquid to a blender or food processor and fill it no more than halfway. If using a blender, release one corner of the lid. This prevents the vacuum effect that creates heat explosions. Place a towel over the top of the machine, pulse a few times then process on high speed until smooth.

A viewer, who may not be a professional cook, provided this recipe. The Food Network Kitchens chefs have not tested this recipe and therefore, we cannot make representation as to the results.

In a large mixing bowl combine the chicken, celery, almonds, salt, pepper, lemon juice, mayonnaise, and cheese. Place the mixture in the prepared baking dish. Spread the crushed potato chips on top. Bake for 20 minutes, or until bubbly.

Ingredients
Chicken:
1 (2 1/2-pound) chicken, cut into 8 pieces
3 ribs celery, chopped
1 large onion, chopped
2 bay leaves
2 chicken bouillon cubes
1 teaspoon House Seasoning, recipe follows
1 (10 3/4-ounce) can condensed cream of celery or cream of chicken soup

Dumplings:
2 cups all-purpose flour
1 teaspoon salt
Ice water

Directions
To start the chicken: Place the chicken, celery, onion, bay leaves, bouillon, and House Seasoning in a large pot. Add 4 quarts of water and bring to a simmer over medium heat. Simmer the chicken until it is tender and the thigh juices run clear, about 40 minutes. Remove the chicken from the pot and, when it is cool enough to handle, remove the skin and separate the meat from the bones. Return the chicken meat to the pot. Keep warm over low heat.

To prepare the dumplings: Mix the flour with the salt and mound together in a mixing bowl. Beginning at the center of the mound, drizzle a small amount of ice water over the flour. Using your fingers, and moving from the center to the sides of the bowl, gradually incorporate about 3/4 cup of ice water. Knead the dough and form it into a ball. Dust a good amount of flour onto a clean work surface. Roll out the dough (it will be firm), working from the center, to 1/8-inch thick. Let the dough relax for several minutes. Add the cream of celery soup to the pot with the chicken and simmer gently over medium-low heat. Cut the dough into 1-inch pieces. Pull a piece in half and drop the halves into the simmering soup. Repeat. Do not stir the chicken once the dumplings have been added. Gently move the pot in a circular motion so the dumplings become submerged and cook evenly. Cook until the dumplings float and are no longer doughy, 3 to 4 minutes. To serve, ladle chicken, gravy, and dumplings into warm bowls.

Cook's Note: If the chicken stew is too thin it can be thickened before the dumplings are added. Simply mix together 2 tablespoons cornstarch and 1/4 cup of water, then whisk this mixture into the stew.

Unless specifically instructed not to do so, always preheat your oven to the temperature required. Always beat eggs before adding sugar. Combine dry ingredients together when baking. Add flour and milk to egg mixture alternately, beginning and ending with the flour mixture. This will make for a lighter cake, muffin or biscuit. To eliminate the odor from collard greens being cooked, add one washed, unshelled pecan to the collards pot before turning the stove on. To see if an egg is fresh, place the uncracked egg in a glass of water. If it sinks to the bottom, it is fresh. If it floats, throw it out! To make fluffier scrambled eggs, beat in a small amount of water, instead of milk. When baking a double-crust pie, brush the top layer lightly with milk for a shiny crust; for a sweet crust, sprinkle with granulated sugar or a mixture of sugar and cinnamon; for a glazed crust brush lightly with a beaten egg. If you place the pie on a hot cookie sheet in the oven during preheating, it will ensure that the bottom crust will bake through. You can always substitute 1 2/3 cups all-purpose flour for 2 cups cake flour. For muffin tins, if you don’t have enough batter to fill all the cups, pour a little water into the empty cups. This will prevent the pan from scorching. To sanitize cutting boards and countertops, use a mixture of 1 tablespoon bleach to 1 gallon of water. Don’t throw away stale bread: cut it into cubes and bake for croutons, or throw it in the food processor and make breadcrumbs. When picking crabmeat free of bits of shell, spread the crabmeat in a thin layer on a baking sheet and place in a 350 degree oven for about 10 minutes. This will make it easier to see the bits of shell. When making meringue, always have the egg whites at room temperature, and be sure they’re free of any yolk. Make sure the bowl and beaters are spotless and free of grease. If soup is too salty, drop a raw, whole, peeled potato into the soup. Serve from around it. To get more juice from a lemon, pop it in the microwave for 30 seconds on high power. When boiling crab, shrimp, or any other shellfish, add 1 tablespoon of vinegar to the water. This helps to loosen the meat from the shell. Store eggs tapered side down for a longer shelf life. Soak chicken in 1 tablespoon of baking powder with enough water to cover the chicken. Soak for 10 to 15 minutes to discourage bacteria. Rinse chicken and cook thoroughly. When sautéing, always heat your pan before putting in the fat. This will temper the pan and reduce sticking. When deep-fat frying, try adding 1 tablespoon of vinegar to the fat before heating. This will keep the food from absorbing too much of the fat. Add 1 tablespoon of vinegar to beef stews to tenderize the meat. When cooking green vegetables, add 1 teaspoon of lemon juice to the pot to help retain their color. Don’t salt fresh corn during cooking; salt will toughen it. Salt the corn to taste after cooking. Add a little oil to pasta water to keep the water from boiling over and the pasta from sticking together. No buttermilk? Add 1 teaspoon distilled white vinegar to 1 cup fresh milk and let sour for 5 minutes. Out of sweetened condensed milk? Make your own: Mix 6 cups whole milk with 4 1/2 cups sugar, 1 stick of butter, and 1 vanilla bean (or 1 tablespoon vanilla). Cook over medium heat, reducing liquid, for 1 hour. Stir occasionally. Cool. Yields 4 1/2 cups. This can be stored covered in the refrigerator for several weeks. Cut recipe in half for immediate use. Don't have any eggs for a baking recipe? Use 2 tablespoons of corn oil plus 1 tablespoon of water as a substitute.

"This cookie got its name because a whole 18-ounce jar of peanut butter is used to make it. It is the creamiest, moistest cookie I have ever had, and bound to be a favorite with anyone who makes them. Just don't over bake them!"

DIRECTIONS
In a large bowl, cream butter, white sugar, and brown sugar until smooth. Add the eggs, yolks, and vanilla; mix until fluffy. Stir in peanut butter. Sift together the flour, baking soda, and salt; stir into the peanut butter mixture. Finally, stir in the peanuts. Refrigerate the dough for at least 2 hours. Preheat oven to 350 degrees F (175 degrees C). Lightly grease a cookie sheet. Roll dough into walnut sized balls. Place on the prepared cookie sheet and flatten slightly with a fork. Bake for 12 to 15 minutes in the preheated oven. Cookies should look dry on top. Allow to cool for a few minutes on the cookie sheet before removing to cool completely on a rack. These cookies taste great when slightly undercooked.

DIRECTIONS
Preheat oven to 375 degrees F (190 degrees C). Sift together the flour, salt and baking soda; set aside. Cream together the butter, sugar, peanut butter and brown sugar until fluffy. Beat in the egg, vanilla and milk. Add the flour mixture; mix well. Shape into 40 balls and place each into an ungreased mini muffin pan. Bake at 375 degrees for about 8 minutes. Remove from oven and immediately press a mini peanut butter cup into each ball. Cool and carefully remove from pan.
" Vim support file to define the syntax selection menu
" This file is normally sourced from menu.vim.
"
" Maintainer:	Bram Moolenaar <Bram@vim.org>
" Last Change:	2017 Oct 28

" Define the SetSyn function, used for the Syntax menu entries.
" Set 'filetype' and also 'syntax' if it is manually selected.
fun! SetSyn(name)
  if a:name == "fvwm1"
    let use_fvwm_1 = 1
    let use_fvwm_2 = 0
    let name = "fvwm"
  elseif a:name == "fvwm2"
    let use_fvwm_2 = 1
    let use_fvwm_1 = 0
    let name = "fvwm"
  else
    let name = a:name
  endif
  if !exists("s:syntax_menu_synonly")
    exe "set ft=" . name
    if exists("g:syntax_manual")
      exe "set syn=" . name
    endif
  else
    exe "set syn=" . name
  endif
endfun

" <> notation is used here, remove '<' from 'cpoptions'
let s:cpo_save = &cpo
set cpo&vim

" The following menu items are generated by makemenu.vim.

" The Start Of The Syntax Menu
an 50.10.100 &Syntax.AB.A2ps\ config :cal SetSyn("a2ps")<CR>
an 50.10.110 &Syntax.AB.Aap :cal SetSyn("aap")<CR>
an 50.10.120 &Syntax.AB.ABAP/4 :cal SetSyn("abap")<CR>
an 50.10.130 &Syntax.AB.Abaqus :cal SetSyn("abaqus")<CR>
an 50.10.140 &Syntax.AB.ABC\ music\ notation :cal SetSyn("abc")<CR>
an 50.10.150 &Syntax.AB.ABEL :cal SetSyn("abel")<CR>
an 50.10.160 &Syntax.AB.AceDB\ model :cal SetSyn("acedb")<CR>
an 50.10.170 &Syntax.AB.Ada :cal SetSyn("ada")<CR>
an 50.10.180 &Syntax.AB.AfLex :cal SetSyn("aflex")<CR>
an 50.10.190 &Syntax.AB.ALSA\ config :cal SetSyn("alsaconf")<CR>
an 50.10.200 &Syntax.AB.Altera\ AHDL :cal SetSyn("ahdl")<CR>
an 50.10.210 &Syntax.AB.Amiga\ DOS :cal SetSyn("amiga")<CR>
an 50.10.220 &Syntax.AB.AMPL :cal SetSyn("ampl")<CR>
an 50.10.230 &Syntax.AB.Ant\ build\ file :cal SetSyn("ant")<CR>
an 50.10.240 &Syntax.AB.ANTLR :cal SetSyn("antlr")<CR>
an 50.10.250 &Syntax.AB.Apache\ config :cal SetSyn("apache")<CR>
an 50.10.260 &Syntax.AB.Apache-style\ config :cal SetSyn("apachestyle")<CR>
an 50.10.270 &Syntax.AB.Applix\ ELF :cal SetSyn("elf")<CR>
an 50.10.280 &Syntax.AB.APT\ config :cal SetSyn("aptconf")<CR>
an 50.10.290 &Syntax.AB.Arc\ Macro\ Language :cal SetSyn("aml")<CR>
an 50.10.300 &Syntax.AB.Arch\ inventory :cal SetSyn("arch")<CR>
an 50.10.310 &Syntax.AB.Arduino :cal SetSyn("arduino")<CR>
an 50.10.320 &Syntax.AB.ART :cal SetSyn("art")<CR>
an 50.10.330 &Syntax.AB.Ascii\ Doc :cal SetSyn("asciidoc")<CR>
an 50.10.340 &Syntax.AB.ASP\ with\ VBScript :cal SetSyn("aspvbs")<CR>
an 50.10.350 &Syntax.AB.ASP\ with\ Perl :cal SetSyn("aspperl")<CR>
an 50.10.360 &Syntax.AB.Assembly.680x0 :cal SetSyn("asm68k")<CR>
an 50.10.370 &Syntax.AB.Assembly.AVR :cal SetSyn("avra")<CR>
an 50.10.380 &Syntax.AB.Assembly.Flat :cal SetSyn("fasm")<CR>
an 50.10.390 &Syntax.AB.Assembly.GNU :cal SetSyn("asm")<CR>
an 50.10.400 &Syntax.AB.Assembly.GNU\ H-8300 :cal SetSyn("asmh8300")<CR>
an 50.10.410 &Syntax.AB.Assembly.Intel\ IA-64 :cal SetSyn("ia64")<CR>
an 50.10.420 &Syntax.AB.Assembly.Microsoft :cal SetSyn("masm")<CR>
an 50.10.430 &Syntax.AB.Assembly.Netwide :cal SetSyn("nasm")<CR>
an 50.10.440 &Syntax.AB.Assembly.PIC :cal SetSyn("pic")<CR>
an 50.10.450 &Syntax.AB.Assembly.Turbo :cal SetSyn("tasm")<CR>
an 50.10.460 &Syntax.AB.Assembly.VAX\ Macro\ Assembly :cal SetSyn("vmasm")<CR>
an 50.10.470 &Syntax.AB.Assembly.Z-80 :cal SetSyn("z8a")<CR>
an 50.10.480 &Syntax.AB.Assembly.xa\ 6502\ cross\ assember :cal SetSyn("a65")<CR>
an 50.10.490 &Syntax.AB.ASN\.1 :cal SetSyn("asn")<CR>
an 50.10.500 &Syntax.AB.Asterisk\ config :cal SetSyn("asterisk")<CR>
an 50.10.510 &Syntax.AB.Asterisk\ voicemail\ config :cal SetSyn("asteriskvm")<CR>
an 50.10.520 &Syntax.AB.Atlas :cal SetSyn("atlas")<CR>
an 50.10.530 &Syntax.AB.Autodoc :cal SetSyn("autodoc")<CR>
an 50.10.540 &Syntax.AB.AutoHotKey :cal SetSyn("autohotkey")<CR>
an 50.10.550 &Syntax.AB.AutoIt :cal SetSyn("autoit")<CR>
an 50.10.560 &Syntax.AB.Automake :cal SetSyn("automake")<CR>
an 50.10.570 &Syntax.AB.Avenue :cal SetSyn("ave")<CR>
an 50.10.580 &Syntax.AB.Awk :cal SetSyn("awk")<CR>
an 50.10.590 &Syntax.AB.AYacc :cal SetSyn("ayacc")<CR>
an 50.10.610 &Syntax.AB.B :cal SetSyn("b")<CR>
an 50.10.620 &Syntax.AB.Baan :cal SetSyn("baan")<CR>
an 50.10.630 &Syntax.AB.Basic.FreeBasic :cal SetSyn("freebasic")<CR>
an 50.10.640 &Syntax.AB.Basic.IBasic :cal SetSyn("ibasic")<CR>
an 50.10.650 &Syntax.AB.Basic.QBasic :cal SetSyn("basic")<CR>
an 50.10.660 &Syntax.AB.Basic.Visual\ Basic :cal SetSyn("vb")<CR>
an 50.10.670 &Syntax.AB.Bazaar\ commit\ file :cal SetSyn("bzr")<CR>
an 50.10.680 &Syntax.AB.Bazel :cal SetSyn("bzl")<CR>
an 50.10.690 &Syntax.AB.BC\ calculator :cal SetSyn("bc")<CR>
an 50.10.700 &Syntax.AB.BDF\ font :cal SetSyn("bdf")<CR>
an 50.10.710 &Syntax.AB.BibTeX.Bibliography\ database :cal SetSyn("bib")<CR>
an 50.10.720 &Syntax.AB.BibTeX.Bibliography\ Style :cal SetSyn("bst")<CR>
an 50.10.730 &Syntax.AB.BIND.BIND\ config :cal SetSyn("named")<CR>
an 50.10.740 &Syntax.AB.BIND.BIND\ zone :cal SetSyn("bindzone")<CR>
an 50.10.750 &Syntax.AB.Blank :cal SetSyn("blank")<CR>
an 50.20.100 &Syntax.C.C :cal SetSyn("c")<CR>
an 50.20.110 &Syntax.C.C++ :cal SetSyn("cpp")<CR>
an 50.20.120 &Syntax.C.C# :cal SetSyn("cs")<CR>
an 50.20.130 &Syntax.C.Cabal\ Haskell\ build\ file :cal SetSyn("cabal")<CR>
an 50.20.140 &Syntax.C.Calendar :cal SetSyn("calendar")<CR>
an 50.20.150 &Syntax.C.Cascading\ Style\ Sheets :cal SetSyn("css")<CR>
an 50.20.160 &Syntax.C.CDL :cal SetSyn("cdl")<CR>
an 50.20.170 &Syntax.C.Cdrdao\ TOC :cal SetSyn("cdrtoc")<CR>
an 50.20.180 &Syntax.C.Cdrdao\ config :cal SetSyn("cdrdaoconf")<CR>
an 50.20.190 &Syntax.C.Century\ Term :cal SetSyn("cterm")<CR>
an 50.20.200 &Syntax.C.CH\ script :cal SetSyn("ch")<CR>
an 50.20.210 &Syntax.C.ChaiScript :cal SetSyn("chaiscript")<CR>
an 50.20.220 &Syntax.C.ChangeLog :cal SetSyn("changelog")<CR>
an 50.20.230 &Syntax.C.Cheetah\ template :cal SetSyn("cheetah")<CR>
an 50.20.240 &Syntax.C.CHILL :cal SetSyn("chill")<CR>
an 50.20.250 &Syntax.C.ChordPro :cal SetSyn("chordpro")<CR>
an 50.20.260 &Syntax.C.Clean :cal SetSyn("clean")<CR>
an 50.20.270 &Syntax.C.Clever :cal SetSyn("cl")<CR>
an 50.20.280
&Syntax.C.Clipper :cal SetSyn("clipper")<CR> an 50.20.290 &Syntax.C.Clojure :cal SetSyn("clojure")<CR> an 50.20.300 &Syntax.C.Cmake :cal SetSyn("cmake")<CR> an 50.20.310 &Syntax.C.Cmod :cal SetSyn("cmod")<CR> an 50.20.320 &Syntax.C.Cmusrc :cal SetSyn("cmusrc")<CR> an 50.20.330 &Syntax.C.Cobol :cal SetSyn("cobol")<CR> an 50.20.340 &Syntax.C.Coco/R :cal SetSyn("coco")<CR> an 50.20.350 &Syntax.C.Cold\ Fusion :cal SetSyn("cf")<CR> an 50.20.360 &Syntax.C.Conary\ Recipe :cal SetSyn("conaryrecipe")<CR> an 50.20.370 &Syntax.C.Config.Cfg\ Config\ file :cal SetSyn("cfg")<CR> an 50.20.380 &Syntax.C.Config.Configure\.in :cal SetSyn("config")<CR> an 50.20.390 &Syntax.C.Config.Generic\ Config\ file :cal SetSyn("conf")<CR> an 50.20.400 &Syntax.C.CRM114 :cal SetSyn("crm")<CR> an 50.20.410 &Syntax.C.Crontab :cal SetSyn("crontab")<CR> an 50.20.420 &Syntax.C.CSDL :cal SetSyn("csdl")<CR> an 50.20.430 &Syntax.C.CSP :cal SetSyn("csp")<CR> an 50.20.440 &Syntax.C.Ctrl-H :cal SetSyn("ctrlh")<CR> an 50.20.450 &Syntax.C.Cucumber :cal SetSyn("cucumber")<CR> an 50.20.460 &Syntax.C.CUDA :cal SetSyn("cuda")<CR> an 50.20.470 &Syntax.C.CUPL.CUPL :cal SetSyn("cupl")<CR> an 50.20.480 &Syntax.C.CUPL.Simulation :cal SetSyn("cuplsim")<CR> an 50.20.490 &Syntax.C.CVS.commit\ file :cal SetSyn("cvs")<CR> an 50.20.500 &Syntax.C.CVS.cvsrc :cal SetSyn("cvsrc")<CR> an 50.20.510 &Syntax.C.Cyn++ :cal SetSyn("cynpp")<CR> an 50.20.520 &Syntax.C.Cynlib :cal SetSyn("cynlib")<CR> an 50.30.100 &Syntax.DE.D :cal SetSyn("d")<CR> an 50.30.110 &Syntax.DE.Datascript :cal SetSyn("datascript")<CR> an 50.30.120 &Syntax.DE.Debian.Debian\ ChangeLog :cal SetSyn("debchangelog")<CR> an 50.30.130 &Syntax.DE.Debian.Debian\ Control :cal SetSyn("debcontrol")<CR> an 50.30.140 &Syntax.DE.Debian.Debian\ Copyright :cal SetSyn("debcopyright")<CR> an 50.30.150 &Syntax.DE.Debian.Debian\ Sources\.list :cal SetSyn("debsources")<CR> an 50.30.160 &Syntax.DE.Denyhosts :cal SetSyn("denyhosts")<CR> an 50.30.170 &Syntax.DE.Desktop :cal 
SetSyn("desktop")<CR> an 50.30.180 &Syntax.DE.Dict\ config :cal SetSyn("dictconf")<CR> an 50.30.190 &Syntax.DE.Dictd\ config :cal SetSyn("dictdconf")<CR> an 50.30.200 &Syntax.DE.Diff :cal SetSyn("diff")<CR> an 50.30.210 &Syntax.DE.Digital\ Command\ Lang :cal SetSyn("dcl")<CR> an 50.30.220 &Syntax.DE.Dircolors :cal SetSyn("dircolors")<CR> an 50.30.230 &Syntax.DE.Dirpager :cal SetSyn("dirpager")<CR> an 50.30.240 &Syntax.DE.Django\ template :cal SetSyn("django")<CR> an 50.30.250 &Syntax.DE.DNS/BIND\ zone :cal SetSyn("bindzone")<CR> an 50.30.260 &Syntax.DE.Dnsmasq\ config :cal SetSyn("dnsmasq")<CR> an 50.30.270 &Syntax.DE.DocBook.auto-detect :cal SetSyn("docbk")<CR> an 50.30.280 &Syntax.DE.DocBook.SGML :cal SetSyn("docbksgml")<CR> an 50.30.290 &Syntax.DE.DocBook.XML :cal SetSyn("docbkxml")<CR> an 50.30.300 &Syntax.DE.Dockerfile :cal SetSyn("dockerfile")<CR> an 50.30.310 &Syntax.DE.Dot :cal SetSyn("dot")<CR> an 50.30.320 &Syntax.DE.Doxygen.C\ with\ doxygen :cal SetSyn("c.doxygen")<CR> an 50.30.330 &Syntax.DE.Doxygen.C++\ with\ doxygen :cal SetSyn("cpp.doxygen")<CR> an 50.30.340 &Syntax.DE.Doxygen.IDL\ with\ doxygen :cal SetSyn("idl.doxygen")<CR> an 50.30.350 &Syntax.DE.Doxygen.Java\ with\ doxygen :cal SetSyn("java.doxygen")<CR> an 50.30.360 &Syntax.DE.Doxygen.DataScript\ with\ doxygen :cal SetSyn("datascript.doxygen")<CR> an 50.30.370 &Syntax.DE.Dracula :cal SetSyn("dracula")<CR> an 50.30.380 &Syntax.DE.DSSSL :cal SetSyn("dsl")<CR> an 50.30.390 &Syntax.DE.DTD :cal SetSyn("dtd")<CR> an 50.30.400 &Syntax.DE.DTML\ (Zope) :cal SetSyn("dtml")<CR> an 50.30.410 &Syntax.DE.DTrace :cal SetSyn("dtrace")<CR> an 50.30.420 &Syntax.DE.Dts/dtsi :cal SetSyn("dts")<CR> an 50.30.430 &Syntax.DE.Dylan.Dylan :cal SetSyn("dylan")<CR> an 50.30.440 &Syntax.DE.Dylan.Dylan\ interface :cal SetSyn("dylanintr")<CR> an 50.30.450 &Syntax.DE.Dylan.Dylan\ lid :cal SetSyn("dylanlid")<CR> an 50.30.470 &Syntax.DE.EDIF :cal SetSyn("edif")<CR> an 50.30.480 &Syntax.DE.Eiffel :cal SetSyn("eiffel")<CR> an 
50.30.490 &Syntax.DE.Elinks\ config :cal SetSyn("elinks")<CR> an 50.30.500 &Syntax.DE.Elm\ filter\ rules :cal SetSyn("elmfilt")<CR> an 50.30.510 &Syntax.DE.Embedix\ Component\ Description :cal SetSyn("ecd")<CR> an 50.30.520 &Syntax.DE.ERicsson\ LANGuage :cal SetSyn("erlang")<CR> an 50.30.530 &Syntax.DE.ESMTP\ rc :cal SetSyn("esmtprc")<CR> an 50.30.540 &Syntax.DE.ESQL-C :cal SetSyn("esqlc")<CR> an 50.30.550 &Syntax.DE.Essbase\ script :cal SetSyn("csc")<CR> an 50.30.560 &Syntax.DE.Esterel :cal SetSyn("esterel")<CR> an 50.30.570 &Syntax.DE.Eterm\ config :cal SetSyn("eterm")<CR> an 50.30.580 &Syntax.DE.Euphoria\ 3 :cal SetSyn("euphoria3")<CR> an 50.30.590 &Syntax.DE.Euphoria\ 4 :cal SetSyn("euphoria4")<CR> an 50.30.600 &Syntax.DE.Eviews :cal SetSyn("eviews")<CR> an 50.30.610 &Syntax.DE.Exim\ conf :cal SetSyn("exim")<CR> an 50.30.620 &Syntax.DE.Expect :cal SetSyn("expect")<CR> an 50.30.630 &Syntax.DE.Exports :cal SetSyn("exports")<CR> an 50.40.100 &Syntax.FG.Falcon :cal SetSyn("falcon")<CR> an 50.40.110 &Syntax.FG.Fantom :cal SetSyn("fan")<CR> an 50.40.120 &Syntax.FG.Fetchmail :cal SetSyn("fetchmail")<CR> an 50.40.130 &Syntax.FG.FlexWiki :cal SetSyn("flexwiki")<CR> an 50.40.140 &Syntax.FG.Focus\ Executable :cal SetSyn("focexec")<CR> an 50.40.150 &Syntax.FG.Focus\ Master :cal SetSyn("master")<CR> an 50.40.160 &Syntax.FG.FORM :cal SetSyn("form")<CR> an 50.40.170 &Syntax.FG.Forth :cal SetSyn("forth")<CR> an 50.40.180 &Syntax.FG.Fortran :cal SetSyn("fortran")<CR> an 50.40.190 &Syntax.FG.FoxPro :cal SetSyn("foxpro")<CR> an 50.40.200 &Syntax.FG.FrameScript :cal SetSyn("framescript")<CR> an 50.40.210 &Syntax.FG.Fstab :cal SetSyn("fstab")<CR> an 50.40.220 &Syntax.FG.Fvwm.Fvwm\ configuration :cal SetSyn("fvwm1")<CR> an 50.40.230 &Syntax.FG.Fvwm.Fvwm2\ configuration :cal SetSyn("fvwm2")<CR> an 50.40.240 &Syntax.FG.Fvwm.Fvwm2\ configuration\ with\ M4 :cal SetSyn("fvwm2m4")<CR> an 50.40.260 &Syntax.FG.GDB\ command\ file :cal SetSyn("gdb")<CR> an 50.40.270 &Syntax.FG.GDMO :cal 
SetSyn("gdmo")<CR> an 50.40.280 &Syntax.FG.Gedcom :cal SetSyn("gedcom")<CR> an 50.40.290 &Syntax.FG.Git.Output :cal SetSyn("git")<CR> an 50.40.300 &Syntax.FG.Git.Commit :cal SetSyn("gitcommit")<CR> an 50.40.310 &Syntax.FG.Git.Config :cal SetSyn("gitconfig")<CR> an 50.40.320 &Syntax.FG.Git.Rebase :cal SetSyn("gitrebase")<CR> an 50.40.330 &Syntax.FG.Git.Send\ Email :cal SetSyn("gitsendemail")<CR> an 50.40.340 &Syntax.FG.Gitolite :cal SetSyn("gitolite")<CR> an 50.40.350 &Syntax.FG.Gkrellmrc :cal SetSyn("gkrellmrc")<CR> an 50.40.360 &Syntax.FG.Gnash :cal SetSyn("gnash")<CR> an 50.40.370 &Syntax.FG.Go :cal SetSyn("go")<CR> an 50.40.380 &Syntax.FG.Godoc :cal SetSyn("godoc")<CR> an 50.40.390 &Syntax.FG.GP :cal SetSyn("gp")<CR> an 50.40.400 &Syntax.FG.GPG :cal SetSyn("gpg")<CR> an 50.40.410 &Syntax.FG.Grof :cal SetSyn("gprof")<CR> an 50.40.420 &Syntax.FG.Group\ file :cal SetSyn("group")<CR> an 50.40.430 &Syntax.FG.Grub :cal SetSyn("grub")<CR> an 50.40.440 &Syntax.FG.GNU\ Server\ Pages :cal SetSyn("gsp")<CR> an 50.40.450 &Syntax.FG.GNUplot :cal SetSyn("gnuplot")<CR> an 50.40.460 &Syntax.FG.GrADS\ scripts :cal SetSyn("grads")<CR> an 50.40.470 &Syntax.FG.Gretl :cal SetSyn("gretl")<CR> an 50.40.480 &Syntax.FG.Groff :cal SetSyn("groff")<CR> an 50.40.490 &Syntax.FG.Groovy :cal SetSyn("groovy")<CR> an 50.40.500 &Syntax.FG.GTKrc :cal SetSyn("gtkrc")<CR> an 50.50.100 &Syntax.HIJK.Haml :cal SetSyn("haml")<CR> an 50.50.110 &Syntax.HIJK.Hamster :cal SetSyn("hamster")<CR> an 50.50.120 &Syntax.HIJK.Haskell.Haskell :cal SetSyn("haskell")<CR> an 50.50.130 &Syntax.HIJK.Haskell.Haskell-c2hs :cal SetSyn("chaskell")<CR> an 50.50.140 &Syntax.HIJK.Haskell.Haskell-literate :cal SetSyn("lhaskell")<CR> an 50.50.150 &Syntax.HIJK.HASTE :cal SetSyn("haste")<CR> an 50.50.160 &Syntax.HIJK.HASTE\ preproc :cal SetSyn("hastepreproc")<CR> an 50.50.170 &Syntax.HIJK.Hercules :cal SetSyn("hercules")<CR> an 50.50.180 &Syntax.HIJK.Hex\ dump.XXD :cal SetSyn("xxd")<CR> an 50.50.190 &Syntax.HIJK.Hex\ dump.Intel\ 
MCS51 :cal SetSyn("hex")<CR> an 50.50.200 &Syntax.HIJK.Hg\ commit :cal SetSyn("hgcommit")<CR> an 50.50.210 &Syntax.HIJK.Hollywood :cal SetSyn("hollywood")<CR> an 50.50.220 &Syntax.HIJK.HTML.HTML :cal SetSyn("html")<CR> an 50.50.230 &Syntax.HIJK.HTML.HTML\ with\ M4 :cal SetSyn("htmlm4")<CR> an 50.50.240 &Syntax.HIJK.HTML.HTML\ with\ Ruby\ (eRuby) :cal SetSyn("eruby")<CR> an 50.50.250 &Syntax.HIJK.HTML.Cheetah\ HTML\ template :cal SetSyn("htmlcheetah")<CR> an 50.50.260 &Syntax.HIJK.HTML.Django\ HTML\ template :cal SetSyn("htmldjango")<CR> an 50.50.270 &Syntax.HIJK.HTML.Vue.js\ HTML\ template :cal SetSyn("vuejs")<CR> an 50.50.280 &Syntax.HIJK.HTML.HTML/OS :cal SetSyn("htmlos")<CR> an 50.50.290 &Syntax.HIJK.HTML.XHTML :cal SetSyn("xhtml")<CR> an 50.50.300 &Syntax.HIJK.Host\.conf :cal SetSyn("hostconf")<CR> an 50.50.310 &Syntax.HIJK.Hosts\ access :cal SetSyn("hostsaccess")<CR> an 50.50.320 &Syntax.HIJK.Hyper\ Builder :cal SetSyn("hb")<CR> an 50.50.330 &Syntax.HIJK.Icewm\ menu :cal SetSyn("icemenu")<CR> an 50.50.340 &Syntax.HIJK.Icon :cal SetSyn("icon")<CR> an 50.50.350 &Syntax.HIJK.IDL\Generic\ IDL :cal SetSyn("idl")<CR> an 50.50.360 &Syntax.HIJK.IDL\Microsoft\ IDL :cal SetSyn("msidl")<CR> an 50.50.370 &Syntax.HIJK.Indent\ profile :cal SetSyn("indent")<CR> an 50.50.380 &Syntax.HIJK.Inform :cal SetSyn("inform")<CR> an 50.50.390 &Syntax.HIJK.Informix\ 4GL :cal SetSyn("fgl")<CR> an 50.50.400 &Syntax.HIJK.Initng :cal SetSyn("initng")<CR> an 50.50.410 &Syntax.HIJK.Inittab :cal SetSyn("inittab")<CR> an 50.50.420 &Syntax.HIJK.Inno\ setup :cal SetSyn("iss")<CR> an 50.50.430 &Syntax.HIJK.Innovation\ Data\ Processing.Upstream\ dat :cal SetSyn("upstreamdat")<CR> an 50.50.440 &Syntax.HIJK.Innovation\ Data\ Processing.Upstream\ log :cal SetSyn("upstreamlog")<CR> an 50.50.450 &Syntax.HIJK.Innovation\ Data\ Processing.Upstream\ rpt :cal SetSyn("upstreamrpt")<CR> an 50.50.460 &Syntax.HIJK.Innovation\ Data\ Processing.Upstream\ Install\ log :cal SetSyn("upstreaminstalllog")<CR> an 
50.50.470 &Syntax.HIJK.Innovation\ Data\ Processing.Usserver\ log :cal SetSyn("usserverlog")<CR> an 50.50.480 &Syntax.HIJK.Innovation\ Data\ Processing.USW2KAgt\ log :cal SetSyn("usw2kagtlog")<CR> an 50.50.490 &Syntax.HIJK.InstallShield\ script :cal SetSyn("ishd")<CR> an 50.50.500 &Syntax.HIJK.Interactive\ Data\ Lang :cal SetSyn("idlang")<CR> an 50.50.510 &Syntax.HIJK.IPfilter :cal SetSyn("ipfilter")<CR> an 50.50.530 &Syntax.HIJK.J :cal SetSyn("j")<CR> an 50.50.540 &Syntax.HIJK.JAL :cal SetSyn("jal")<CR> an 50.50.550 &Syntax.HIJK.JAM :cal SetSyn("jam")<CR> an 50.50.560 &Syntax.HIJK.Jargon :cal SetSyn("jargon")<CR> an 50.50.570 &Syntax.HIJK.Java.Java :cal SetSyn("java")<CR> an 50.50.580 &Syntax.HIJK.Java.JavaCC :cal SetSyn("javacc")<CR> an 50.50.590 &Syntax.HIJK.Java.Java\ Server\ Pages :cal SetSyn("jsp")<CR> an 50.50.600 &Syntax.HIJK.Java.Java\ Properties :cal SetSyn("jproperties")<CR> an 50.50.610 &Syntax.HIJK.JavaScript :cal SetSyn("javascript")<CR> an 50.50.620 &Syntax.HIJK.Jess :cal SetSyn("jess")<CR> an 50.50.630 &Syntax.HIJK.Jgraph :cal SetSyn("jgraph")<CR> an 50.50.640 &Syntax.HIJK.Jovial :cal SetSyn("jovial")<CR> an 50.50.650 &Syntax.HIJK.JSON :cal SetSyn("json")<CR> an 50.50.670 &Syntax.HIJK.Kconfig :cal SetSyn("kconfig")<CR> an 50.50.680 &Syntax.HIJK.KDE\ script :cal SetSyn("kscript")<CR> an 50.50.690 &Syntax.HIJK.Kimwitu++ :cal SetSyn("kwt")<CR> an 50.50.700 &Syntax.HIJK.Kivy :cal SetSyn("kivy")<CR> an 50.50.710 &Syntax.HIJK.KixTart :cal SetSyn("kix")<CR> an 50.60.100 &Syntax.L.Lace :cal SetSyn("lace")<CR> an 50.60.110 &Syntax.L.LamdaProlog :cal SetSyn("lprolog")<CR> an 50.60.120 &Syntax.L.Latte :cal SetSyn("latte")<CR> an 50.60.130 &Syntax.L.Ld\ script :cal SetSyn("ld")<CR> an 50.60.140 &Syntax.L.LDAP.LDIF :cal SetSyn("ldif")<CR> an 50.60.150 &Syntax.L.LDAP.Configuration :cal SetSyn("ldapconf")<CR> an 50.60.160 &Syntax.L.Less :cal SetSyn("less")<CR> an 50.60.170 &Syntax.L.Lex :cal SetSyn("lex")<CR> an 50.60.180 &Syntax.L.LFTP\ config :cal 
SetSyn("lftp")<CR> an 50.60.190 &Syntax.L.Libao :cal SetSyn("libao")<CR> an 50.60.200 &Syntax.L.LifeLines\ script :cal SetSyn("lifelines")<CR> an 50.60.210 &Syntax.L.Lilo :cal SetSyn("lilo")<CR> an 50.60.220 &Syntax.L.Limits\ config :cal SetSyn("limits")<CR> an 50.60.230 &Syntax.L.Linden\ scripting :cal SetSyn("lsl")<CR> an 50.60.240 &Syntax.L.Liquid :cal SetSyn("liquid")<CR> an 50.60.250 &Syntax.L.Lisp :cal SetSyn("lisp")<CR> an 50.60.260 &Syntax.L.Lite :cal SetSyn("lite")<CR> an 50.60.270 &Syntax.L.LiteStep\ RC :cal SetSyn("litestep")<CR> an 50.60.280 &Syntax.L.Locale\ Input :cal SetSyn("fdcc")<CR> an 50.60.290 &Syntax.L.Login\.access :cal SetSyn("loginaccess")<CR> an 50.60.300 &Syntax.L.Login\.defs :cal SetSyn("logindefs")<CR> an 50.60.310 &Syntax.L.Logtalk :cal SetSyn("logtalk")<CR> an 50.60.320 &Syntax.L.LOTOS :cal SetSyn("lotos")<CR> an 50.60.330 &Syntax.L.LotusScript :cal SetSyn("lscript")<CR> an 50.60.340 &Syntax.L.Lout :cal SetSyn("lout")<CR> an 50.60.350 &Syntax.L.LPC :cal SetSyn("lpc")<CR> an 50.60.360 &Syntax.L.Lua :cal SetSyn("lua")<CR> an 50.60.370 &Syntax.L.Lynx\ Style :cal SetSyn("lss")<CR> an 50.60.380 &Syntax.L.Lynx\ config :cal SetSyn("lynx")<CR> an 50.70.100 &Syntax.M.M4 :cal SetSyn("m4")<CR> an 50.70.110 &Syntax.M.MaGic\ Point :cal SetSyn("mgp")<CR> an 50.70.120 &Syntax.M.Mail :cal SetSyn("mail")<CR> an 50.70.130 &Syntax.M.Mail\ aliases :cal SetSyn("mailaliases")<CR> an 50.70.140 &Syntax.M.Mailcap :cal SetSyn("mailcap")<CR> an 50.70.150 &Syntax.M.Mallard :cal SetSyn("mallard")<CR> an 50.70.160 &Syntax.M.Makefile :cal SetSyn("make")<CR> an 50.70.170 &Syntax.M.MakeIndex :cal SetSyn("ist")<CR> an 50.70.180 &Syntax.M.Man\ page :cal SetSyn("man")<CR> an 50.70.190 &Syntax.M.Man\.conf :cal SetSyn("manconf")<CR> an 50.70.200 &Syntax.M.Maple\ V :cal SetSyn("maple")<CR> an 50.70.210 &Syntax.M.Markdown :cal SetSyn("markdown")<CR> an 50.70.220 &Syntax.M.Markdown\ with\ R\ statements :cal SetSyn("rmd")<CR> an 50.70.230 &Syntax.M.Mason :cal 
SetSyn("mason")<CR> an 50.70.240 &Syntax.M.Mathematica :cal SetSyn("mma")<CR> an 50.70.250 &Syntax.M.Matlab :cal SetSyn("matlab")<CR> an 50.70.260 &Syntax.M.Maxima :cal SetSyn("maxima")<CR> an 50.70.270 &Syntax.M.MEL\ (for\ Maya) :cal SetSyn("mel")<CR> an 50.70.280 &Syntax.M.Messages\ (/var/log) :cal SetSyn("messages")<CR> an 50.70.290 &Syntax.M.Metafont :cal SetSyn("mf")<CR> an 50.70.300 &Syntax.M.MetaPost :cal SetSyn("mp")<CR> an 50.70.310 &Syntax.M.MGL :cal SetSyn("mgl")<CR> an 50.70.320 &Syntax.M.MIX :cal SetSyn("mix")<CR> an 50.70.330 &Syntax.M.MMIX :cal SetSyn("mmix")<CR> an 50.70.340 &Syntax.M.Modconf :cal SetSyn("modconf")<CR> an 50.70.350 &Syntax.M.Model :cal SetSyn("model")<CR> an 50.70.360 &Syntax.M.Modsim\ III :cal SetSyn("modsim3")<CR> an 50.70.370 &Syntax.M.Modula\ 2 :cal SetSyn("modula2")<CR> an 50.70.380 &Syntax.M.Modula\ 3 :cal SetSyn("modula3")<CR> an 50.70.390 &Syntax.M.Monk :cal SetSyn("monk")<CR> an 50.70.400 &Syntax.M.Motorola\ S-Record :cal SetSyn("srec")<CR> an 50.70.410 &Syntax.M.Mplayer\ config :cal SetSyn("mplayerconf")<CR> an 50.70.420 &Syntax.M.MOO :cal SetSyn("moo")<CR> an 50.70.430 &Syntax.M.Mrxvtrc :cal SetSyn("mrxvtrc")<CR> an 50.70.440 &Syntax.M.MS-DOS/Windows.4DOS\ \.bat\ file :cal SetSyn("btm")<CR> an 50.70.450 &Syntax.M.MS-DOS/Windows.\.bat\/\.cmd\ file :cal SetSyn("dosbatch")<CR> an 50.70.460 &Syntax.M.MS-DOS/Windows.\.ini\ file :cal SetSyn("dosini")<CR> an 50.70.470 &Syntax.M.MS-DOS/Windows.Message\ text :cal SetSyn("msmessages")<CR> an 50.70.480 &Syntax.M.MS-DOS/Windows.Module\ Definition :cal SetSyn("def")<CR> an 50.70.490 &Syntax.M.MS-DOS/Windows.Registry :cal SetSyn("registry")<CR> an 50.70.500 &Syntax.M.MS-DOS/Windows.Resource\ file :cal SetSyn("rc")<CR> an 50.70.510 &Syntax.M.Msql :cal SetSyn("msql")<CR> an 50.70.520 &Syntax.M.MuPAD :cal SetSyn("mupad")<CR> an 50.70.530 &Syntax.M.Murphi :cal SetSyn("murphi")<CR> an 50.70.540 &Syntax.M.MUSHcode :cal SetSyn("mush")<CR> an 50.70.550 &Syntax.M.Muttrc :cal 
SetSyn("muttrc")<CR> an 50.80.100 &Syntax.NO.N1QL :cal SetSyn("n1ql")<CR> an 50.80.110 &Syntax.NO.Nanorc :cal SetSyn("nanorc")<CR> an 50.80.120 &Syntax.NO.Nastran\ input/DMAP :cal SetSyn("nastran")<CR> an 50.80.130 &Syntax.NO.Natural :cal SetSyn("natural")<CR> an 50.80.140 &Syntax.NO.NeoMutt\ setup\ files :cal SetSyn("neomuttrc")<CR> an 50.80.150 &Syntax.NO.Netrc :cal SetSyn("netrc")<CR> an 50.80.160 &Syntax.NO.Ninja :cal SetSyn("ninja")<CR> an 50.80.170 &Syntax.NO.Novell\ NCF\ batch :cal SetSyn("ncf")<CR> an 50.80.180 &Syntax.NO.Not\ Quite\ C\ (LEGO) :cal SetSyn("nqc")<CR> an 50.80.190 &Syntax.NO.Nroff :cal SetSyn("nroff")<CR> an 50.80.200 &Syntax.NO.NSIS\ script :cal SetSyn("nsis")<CR> an 50.80.220 &Syntax.NO.Obj\ 3D\ wavefront :cal SetSyn("obj")<CR> an 50.80.230 &Syntax.NO.Objective\ C :cal SetSyn("objc")<CR> an 50.80.240 &Syntax.NO.Objective\ C++ :cal SetSyn("objcpp")<CR> an 50.80.250 &Syntax.NO.OCAML :cal SetSyn("ocaml")<CR> an 50.80.260 &Syntax.NO.Occam :cal SetSyn("occam")<CR> an 50.80.270 &Syntax.NO.Omnimark :cal SetSyn("omnimark")<CR> an 50.80.280 &Syntax.NO.OpenROAD :cal SetSyn("openroad")<CR> an 50.80.290 &Syntax.NO.Open\ Psion\ Lang :cal SetSyn("opl")<CR> an 50.80.300 &Syntax.NO.Oracle\ config :cal SetSyn("ora")<CR> an 50.90.100 &Syntax.PQ.Packet\ filter\ conf :cal SetSyn("pf")<CR> an 50.90.110 &Syntax.PQ.Palm\ resource\ compiler :cal SetSyn("pilrc")<CR> an 50.90.120 &Syntax.PQ.Pam\ config :cal SetSyn("pamconf")<CR> an 50.90.130 &Syntax.PQ.PApp :cal SetSyn("papp")<CR> an 50.90.140 &Syntax.PQ.Pascal :cal SetSyn("pascal")<CR> an 50.90.150 &Syntax.PQ.Password\ file :cal SetSyn("passwd")<CR> an 50.90.160 &Syntax.PQ.PCCTS :cal SetSyn("pccts")<CR> an 50.90.170 &Syntax.PQ.PDF :cal SetSyn("pdf")<CR> an 50.90.180 &Syntax.PQ.Perl.Perl :cal SetSyn("perl")<CR> an 50.90.190 &Syntax.PQ.Perl.Perl\ 6 :cal SetSyn("perl6")<CR> an 50.90.200 &Syntax.PQ.Perl.Perl\ POD :cal SetSyn("pod")<CR> an 50.90.210 &Syntax.PQ.Perl.Perl\ XS :cal SetSyn("xs")<CR> an 50.90.220 
&Syntax.PQ.Perl.Template\ toolkit :cal SetSyn("tt2")<CR> an 50.90.230 &Syntax.PQ.Perl.Template\ toolkit\ Html :cal SetSyn("tt2html")<CR> an 50.90.240 &Syntax.PQ.Perl.Template\ toolkit\ JS :cal SetSyn("tt2js")<CR> an 50.90.250 &Syntax.PQ.PHP.PHP\ 3-4 :cal SetSyn("php")<CR> an 50.90.260 &Syntax.PQ.PHP.Phtml\ (PHP\ 2) :cal SetSyn("phtml")<CR> an 50.90.270 &Syntax.PQ.Pike :cal SetSyn("pike")<CR> an 50.90.280 &Syntax.PQ.Pine\ RC :cal SetSyn("pine")<CR> an 50.90.290 &Syntax.PQ.Pinfo\ RC :cal SetSyn("pinfo")<CR> an 50.90.300 &Syntax.PQ.PL/M :cal SetSyn("plm")<CR> an 50.90.310 &Syntax.PQ.PL/SQL :cal SetSyn("plsql")<CR> an 50.90.320 &Syntax.PQ.Pli :cal SetSyn("pli")<CR> an 50.90.330 &Syntax.PQ.PLP :cal SetSyn("plp")<CR> an 50.90.340 &Syntax.PQ.PO\ (GNU\ gettext) :cal SetSyn("po")<CR> an 50.90.350 &Syntax.PQ.Postfix\ main\ config :cal SetSyn("pfmain")<CR> an 50.90.360 &Syntax.PQ.PostScript.PostScript :cal SetSyn("postscr")<CR> an 50.90.370 &Syntax.PQ.PostScript.PostScript\ Printer\ Description :cal SetSyn("ppd")<CR> an 50.90.380 &Syntax.PQ.Povray.Povray\ scene\ descr :cal SetSyn("pov")<CR> an 50.90.390 &Syntax.PQ.Povray.Povray\ configuration :cal SetSyn("povini")<CR> an 50.90.400 &Syntax.PQ.PPWizard :cal SetSyn("ppwiz")<CR> an 50.90.410 &Syntax.PQ.Prescribe\ (Kyocera) :cal SetSyn("prescribe")<CR> an 50.90.420 &Syntax.PQ.Printcap :cal SetSyn("pcap")<CR> an 50.90.430 &Syntax.PQ.Privoxy :cal SetSyn("privoxy")<CR> an 50.90.440 &Syntax.PQ.Procmail :cal SetSyn("procmail")<CR> an 50.90.450 &Syntax.PQ.Product\ Spec\ File :cal SetSyn("psf")<CR> an 50.90.460 &Syntax.PQ.Progress :cal SetSyn("progress")<CR> an 50.90.470 &Syntax.PQ.Prolog :cal SetSyn("prolog")<CR> an 50.90.480 &Syntax.PQ.ProMeLa :cal SetSyn("promela")<CR> an 50.90.490 &Syntax.PQ.Proto :cal SetSyn("proto")<CR> an 50.90.500 &Syntax.PQ.Protocols :cal SetSyn("protocols")<CR> an 50.90.510 &Syntax.PQ.Purify\ log :cal SetSyn("purifylog")<CR> an 50.90.520 &Syntax.PQ.Pyrex :cal SetSyn("pyrex")<CR> an 50.90.530 &Syntax.PQ.Python 
:cal SetSyn("python")<CR> an 50.90.550 &Syntax.PQ.Quake :cal SetSyn("quake")<CR> an 50.90.560 &Syntax.PQ.Quickfix\ window :cal SetSyn("qf")<CR> an 50.100.100 &Syntax.R.R.R :cal SetSyn("r")<CR> an 50.100.110 &Syntax.R.R.R\ help :cal SetSyn("rhelp")<CR> an 50.100.120 &Syntax.R.R.R\ noweb :cal SetSyn("rnoweb")<CR> an 50.100.130 &Syntax.R.Racc\ input :cal SetSyn("racc")<CR> an 50.100.140 &Syntax.R.Radiance :cal SetSyn("radiance")<CR> an 50.100.150 &Syntax.R.Ratpoison :cal SetSyn("ratpoison")<CR> an 50.100.160 &Syntax.R.RCS.RCS\ log\ output :cal SetSyn("rcslog")<CR> an 50.100.170 &Syntax.R.RCS.RCS\ file :cal SetSyn("rcs")<CR> an 50.100.180 &Syntax.R.Readline\ config :cal SetSyn("readline")<CR> an 50.100.190 &Syntax.R.Rebol :cal SetSyn("rebol")<CR> an 50.100.200 &Syntax.R.ReDIF :cal SetSyn("redif")<CR> an 50.100.210 &Syntax.R.Relax\ NG :cal SetSyn("rng")<CR> an 50.100.220 &Syntax.R.Remind :cal SetSyn("remind")<CR> an 50.100.230 &Syntax.R.Relax\ NG\ compact :cal SetSyn("rnc")<CR> an 50.100.240 &Syntax.R.Renderman.Renderman\ Shader\ Lang :cal SetSyn("sl")<CR> an 50.100.250 &Syntax.R.Renderman.Renderman\ Interface\ Bytestream :cal SetSyn("rib")<CR> an 50.100.260 &Syntax.R.Resolv\.conf :cal SetSyn("resolv")<CR> an 50.100.270 &Syntax.R.Reva\ Forth :cal SetSyn("reva")<CR> an 50.100.280 &Syntax.R.Rexx :cal SetSyn("rexx")<CR> an 50.100.290 &Syntax.R.Robots\.txt :cal SetSyn("robots")<CR> an 50.100.300 &Syntax.R.RockLinux\ package\ desc\. 
:cal SetSyn("desc")<CR> an 50.100.310 &Syntax.R.Rpcgen :cal SetSyn("rpcgen")<CR> an 50.100.320 &Syntax.R.RPL/2 :cal SetSyn("rpl")<CR> an 50.100.330 &Syntax.R.ReStructuredText :cal SetSyn("rst")<CR> an 50.110.100 &Syntax.M.ReStructuredText\ with\ R\ statements :cal SetSyn("rrst")<CR> an 50.120.100 &Syntax.R.RTF :cal SetSyn("rtf")<CR> an 50.120.110 &Syntax.R.Ruby :cal SetSyn("ruby")<CR> an 50.120.120 &Syntax.R.Rust :cal SetSyn("rust")<CR> an 50.130.100 &Syntax.S-Sm.S-Lang :cal SetSyn("slang")<CR> an 50.130.110 &Syntax.S-Sm.Samba\ config :cal SetSyn("samba")<CR> an 50.130.120 &Syntax.S-Sm.SAS :cal SetSyn("sas")<CR> an 50.130.130 &Syntax.S-Sm.Sass :cal SetSyn("sass")<CR> an 50.130.140 &Syntax.S-Sm.Sather :cal SetSyn("sather")<CR> an 50.130.150 &Syntax.S-Sm.Sbt :cal SetSyn("sbt")<CR> an 50.130.160 &Syntax.S-Sm.Scala :cal SetSyn("scala")<CR> an 50.130.170 &Syntax.S-Sm.Scheme :cal SetSyn("scheme")<CR> an 50.130.180 &Syntax.S-Sm.Scilab :cal SetSyn("scilab")<CR> an 50.130.190 &Syntax.S-Sm.Screen\ RC :cal SetSyn("screen")<CR> an 50.130.200 &Syntax.S-Sm.SCSS :cal SetSyn("scss")<CR> an 50.130.210 &Syntax.S-Sm.SDC\ Synopsys\ Design\ Constraints :cal SetSyn("sdc")<CR> an 50.130.220 &Syntax.S-Sm.SDL :cal SetSyn("sdl")<CR> an 50.130.230 &Syntax.S-Sm.Sed :cal SetSyn("sed")<CR> an 50.130.240 &Syntax.S-Sm.Sendmail\.cf :cal SetSyn("sm")<CR> an 50.130.250 &Syntax.S-Sm.Send-pr :cal SetSyn("sendpr")<CR> an 50.130.260 &Syntax.S-Sm.Sensors\.conf :cal SetSyn("sensors")<CR> an 50.130.270 &Syntax.S-Sm.Service\ Location\ config :cal SetSyn("slpconf")<CR> an 50.130.280 &Syntax.S-Sm.Service\ Location\ registration :cal SetSyn("slpreg")<CR> an 50.130.290 &Syntax.S-Sm.Service\ Location\ SPI :cal SetSyn("slpspi")<CR> an 50.130.300 &Syntax.S-Sm.Services :cal SetSyn("services")<CR> an 50.130.310 &Syntax.S-Sm.Setserial\ config :cal SetSyn("setserial")<CR> an 50.130.320 &Syntax.S-Sm.SGML.SGML\ catalog :cal SetSyn("catalog")<CR> an 50.130.330 &Syntax.S-Sm.SGML.SGML\ DTD :cal SetSyn("sgml")<CR> an 
50.130.340 &Syntax.S-Sm.SGML.SGML\ Declaration :cal SetSyn("sgmldecl")<CR> an 50.130.350 &Syntax.S-Sm.SGML.SGML-linuxdoc :cal SetSyn("sgmllnx")<CR> an 50.130.360 &Syntax.S-Sm.Shell\ script.sh\ and\ ksh :cal SetSyn("sh")<CR> an 50.130.370 &Syntax.S-Sm.Shell\ script.csh :cal SetSyn("csh")<CR> an 50.130.380 &Syntax.S-Sm.Shell\ script.tcsh :cal SetSyn("tcsh")<CR> an 50.130.390 &Syntax.S-Sm.Shell\ script.zsh :cal SetSyn("zsh")<CR> an 50.130.400 &Syntax.S-Sm.SiCAD :cal SetSyn("sicad")<CR> an 50.130.410 &Syntax.S-Sm.Sieve :cal SetSyn("sieve")<CR> an 50.130.420 &Syntax.S-Sm.Simula :cal SetSyn("simula")<CR> an 50.130.430 &Syntax.S-Sm.Sinda.Sinda\ compare :cal SetSyn("sindacmp")<CR> an 50.130.440 &Syntax.S-Sm.Sinda.Sinda\ input :cal SetSyn("sinda")<CR> an 50.130.450 &Syntax.S-Sm.Sinda.Sinda\ output :cal SetSyn("sindaout")<CR> an 50.130.460 &Syntax.S-Sm.SiSU :cal SetSyn("sisu")<CR> an 50.130.470 &Syntax.S-Sm.SKILL.SKILL :cal SetSyn("skill")<CR> an 50.130.480 &Syntax.S-Sm.SKILL.SKILL\ for\ Diva :cal SetSyn("diva")<CR> an 50.130.490 &Syntax.S-Sm.Slice :cal SetSyn("slice")<CR> an 50.130.500 &Syntax.S-Sm.SLRN.Slrn\ rc :cal SetSyn("slrnrc")<CR> an 50.130.510 &Syntax.S-Sm.SLRN.Slrn\ score :cal SetSyn("slrnsc")<CR> an 50.130.520 &Syntax.S-Sm.SmallTalk :cal SetSyn("st")<CR> an 50.130.530 &Syntax.S-Sm.Smarty\ Templates :cal SetSyn("smarty")<CR> an 50.130.540 &Syntax.S-Sm.SMIL :cal SetSyn("smil")<CR> an 50.130.550 &Syntax.S-Sm.SMITH :cal SetSyn("smith")<CR> an 50.140.100 &Syntax.Sn-Sy.SNMP\ MIB :cal SetSyn("mib")<CR> an 50.140.110 &Syntax.Sn-Sy.SNNS.SNNS\ network :cal SetSyn("snnsnet")<CR> an 50.140.120 &Syntax.Sn-Sy.SNNS.SNNS\ pattern :cal SetSyn("snnspat")<CR> an 50.140.130 &Syntax.Sn-Sy.SNNS.SNNS\ result :cal SetSyn("snnsres")<CR> an 50.140.140 &Syntax.Sn-Sy.Snobol4 :cal SetSyn("snobol4")<CR> an 50.140.150 &Syntax.Sn-Sy.Snort\ Configuration :cal SetSyn("hog")<CR> an 50.140.160 &Syntax.Sn-Sy.SPEC\ (Linux\ RPM) :cal SetSyn("spec")<CR> an 50.140.170 &Syntax.Sn-Sy.Specman :cal 
SetSyn("specman")<CR>
an 50.140.180 &Syntax.Sn-Sy.Spice :cal SetSyn("spice")<CR>
an 50.140.190 &Syntax.Sn-Sy.Spyce :cal SetSyn("spyce")<CR>
an 50.140.200 &Syntax.Sn-Sy.Speedup :cal SetSyn("spup")<CR>
an 50.140.210 &Syntax.Sn-Sy.Splint :cal SetSyn("splint")<CR>
an 50.140.220 &Syntax.Sn-Sy.Squid\ config :cal SetSyn("squid")<CR>
an 50.140.230 &Syntax.Sn-Sy.SQL.SAP\ HANA :cal SetSyn("sqlhana")<CR>
an 50.140.240 &Syntax.Sn-Sy.SQL.ESQL-C :cal SetSyn("esqlc")<CR>
an 50.140.250 &Syntax.Sn-Sy.SQL.MySQL :cal SetSyn("mysql")<CR>
an 50.140.260 &Syntax.Sn-Sy.SQL.PL/SQL :cal SetSyn("plsql")<CR>
an 50.140.270 &Syntax.Sn-Sy.SQL.SQL\ Anywhere :cal SetSyn("sqlanywhere")<CR>
an 50.140.280 &Syntax.Sn-Sy.SQL.SQL\ (automatic) :cal SetSyn("sql")<CR>
an 50.140.290 &Syntax.Sn-Sy.SQL.SQL\ (Oracle) :cal SetSyn("sqloracle")<CR>
an 50.140.300 &Syntax.Sn-Sy.SQL.SQL\ Forms :cal SetSyn("sqlforms")<CR>
an 50.140.310 &Syntax.Sn-Sy.SQL.SQLJ :cal SetSyn("sqlj")<CR>
an 50.140.320 &Syntax.Sn-Sy.SQL.SQL-Informix :cal SetSyn("sqlinformix")<CR>
an 50.140.330 &Syntax.Sn-Sy.SQR :cal SetSyn("sqr")<CR>
an 50.140.340 &Syntax.Sn-Sy.Ssh.ssh_config :cal SetSyn("sshconfig")<CR>
an 50.140.350 &Syntax.Sn-Sy.Ssh.sshd_config :cal SetSyn("sshdconfig")<CR>
an 50.140.360 &Syntax.Sn-Sy.Standard\ ML :cal SetSyn("sml")<CR>
an 50.140.370 &Syntax.Sn-Sy.Stata.SMCL :cal SetSyn("smcl")<CR>
an 50.140.380 &Syntax.Sn-Sy.Stata.Stata :cal SetSyn("stata")<CR>
an 50.140.390 &Syntax.Sn-Sy.Stored\ Procedures :cal SetSyn("stp")<CR>
an 50.140.400 &Syntax.Sn-Sy.Strace :cal SetSyn("strace")<CR>
an 50.140.410 &Syntax.Sn-Sy.Streaming\ descriptor\ file :cal SetSyn("sd")<CR>
an 50.140.420 &Syntax.Sn-Sy.Subversion\ commit :cal SetSyn("svn")<CR>
an 50.140.430 &Syntax.Sn-Sy.Sudoers :cal SetSyn("sudoers")<CR>
an 50.140.440 &Syntax.Sn-Sy.SVG :cal SetSyn("svg")<CR>
an 50.140.450 &Syntax.Sn-Sy.Symbian\ meta-makefile :cal SetSyn("mmp")<CR>
an 50.140.460 &Syntax.Sn-Sy.Sysctl\.conf :cal SetSyn("sysctl")<CR>
an 50.140.470 &Syntax.Sn-Sy.Systemd :cal SetSyn("systemd")<CR>
an 50.140.480 &Syntax.Sn-Sy.SystemVerilog :cal SetSyn("systemverilog")<CR>
an 50.150.100 &Syntax.T.TADS :cal SetSyn("tads")<CR>
an 50.150.110 &Syntax.T.Tags :cal SetSyn("tags")<CR>
an 50.150.120 &Syntax.T.TAK.TAK\ compare :cal SetSyn("takcmp")<CR>
an 50.150.130 &Syntax.T.TAK.TAK\ input :cal SetSyn("tak")<CR>
an 50.150.140 &Syntax.T.TAK.TAK\ output :cal SetSyn("takout")<CR>
an 50.150.150 &Syntax.T.Tar\ listing :cal SetSyn("tar")<CR>
an 50.150.160 &Syntax.T.Task\ data :cal SetSyn("taskdata")<CR>
an 50.150.170 &Syntax.T.Task\ 42\ edit :cal SetSyn("taskedit")<CR>
an 50.150.180 &Syntax.T.Tcl/Tk :cal SetSyn("tcl")<CR>
an 50.150.190 &Syntax.T.TealInfo :cal SetSyn("tli")<CR>
an 50.150.200 &Syntax.T.Telix\ Salt :cal SetSyn("tsalt")<CR>
an 50.150.210 &Syntax.T.Termcap/Printcap :cal SetSyn("ptcap")<CR>
an 50.150.220 &Syntax.T.Terminfo :cal SetSyn("terminfo")<CR>
an 50.150.230 &Syntax.T.Tera\ Term :cal SetSyn("teraterm")<CR>
an 50.150.240 &Syntax.T.TeX.TeX/LaTeX :cal SetSyn("tex")<CR>
an 50.150.250 &Syntax.T.TeX.plain\ TeX :cal SetSyn("plaintex")<CR>
an 50.150.260 &Syntax.T.TeX.Initex :cal SetSyn("initex")<CR>
an 50.150.270 &Syntax.T.TeX.ConTeXt :cal SetSyn("context")<CR>
an 50.150.280 &Syntax.T.TeX.TeX\ configuration :cal SetSyn("texmf")<CR>
an 50.150.290 &Syntax.T.TeX.Texinfo :cal SetSyn("texinfo")<CR>
an 50.150.300 &Syntax.T.TF\ mud\ client :cal SetSyn("tf")<CR>
an 50.150.310 &Syntax.T.Tidy\ configuration :cal SetSyn("tidy")<CR>
an 50.150.320 &Syntax.T.Tilde :cal SetSyn("tilde")<CR>
an 50.150.330 &Syntax.T.Tmux\ configuration :cal SetSyn("tmux")<CR>
an 50.150.340 &Syntax.T.TPP :cal SetSyn("tpp")<CR>
an 50.150.350 &Syntax.T.Trasys\ input :cal SetSyn("trasys")<CR>
an 50.150.360 &Syntax.T.Treetop :cal SetSyn("treetop")<CR>
an 50.150.370 &Syntax.T.Trustees :cal SetSyn("trustees")<CR>
an 50.150.380 &Syntax.T.TSS.Command\ Line :cal SetSyn("tsscl")<CR>
an 50.150.390 &Syntax.T.TSS.Geometry :cal SetSyn("tssgm")<CR>
an 50.150.400 &Syntax.T.TSS.Optics :cal SetSyn("tssop")<CR>
an 50.160.100 &Syntax.UV.Udev\ config :cal SetSyn("udevconf")<CR>
an 50.160.110 &Syntax.UV.Udev\ permissions :cal SetSyn("udevperm")<CR>
an 50.160.120 &Syntax.UV.Udev\ rules :cal SetSyn("udevrules")<CR>
an 50.160.130 &Syntax.UV.UIT/UIL :cal SetSyn("uil")<CR>
an 50.160.140 &Syntax.UV.UnrealScript :cal SetSyn("uc")<CR>
an 50.160.150 &Syntax.UV.Updatedb\.conf :cal SetSyn("updatedb")<CR>
an 50.160.160 &Syntax.UV.Upstart :cal SetSyn("upstart")<CR>
an 50.160.180 &Syntax.UV.Valgrind :cal SetSyn("valgrind")<CR>
an 50.160.190 &Syntax.UV.Vera :cal SetSyn("vera")<CR>
an 50.160.200 &Syntax.UV.Verbose\ TAP\ Output :cal SetSyn("tap")<CR>
an 50.160.210 &Syntax.UV.Verilog-AMS\ HDL :cal SetSyn("verilogams")<CR>
an 50.160.220 &Syntax.UV.Verilog\ HDL :cal SetSyn("verilog")<CR>
an 50.160.230 &Syntax.UV.Vgrindefs :cal SetSyn("vgrindefs")<CR>
an 50.160.240 &Syntax.UV.VHDL :cal SetSyn("vhdl")<CR>
an 50.160.250 &Syntax.UV.Vim.Vim\ help\ file :cal SetSyn("help")<CR>
an 50.160.260 &Syntax.UV.Vim.Vim\ script :cal SetSyn("vim")<CR>
an 50.160.270 &Syntax.UV.Vim.Viminfo\ file :cal SetSyn("viminfo")<CR>
an 50.160.280 &Syntax.UV.Virata\ config :cal SetSyn("virata")<CR>
an 50.160.290 &Syntax.UV.Visual\ Basic :cal SetSyn("vb")<CR>
an 50.160.300 &Syntax.UV.VOS\ CM\ macro :cal SetSyn("voscm")<CR>
an 50.160.310 &Syntax.UV.VRML :cal SetSyn("vrml")<CR>
an 50.160.320 &Syntax.UV.Vroom :cal SetSyn("vroom")<CR>
an 50.160.330 &Syntax.UV.VSE\ JCL :cal SetSyn("vsejcl")<CR>
an 50.170.100 &Syntax.WXYZ.WEB.CWEB :cal SetSyn("cweb")<CR>
an 50.170.110 &Syntax.WXYZ.WEB.WEB :cal SetSyn("web")<CR>
an 50.170.120 &Syntax.WXYZ.WEB.WEB\ Changes :cal SetSyn("change")<CR>
an 50.170.130 &Syntax.WXYZ.Webmacro :cal SetSyn("webmacro")<CR>
an 50.170.140 &Syntax.WXYZ.Website\ MetaLanguage :cal SetSyn("wml")<CR>
an 50.170.160 &Syntax.WXYZ.wDiff :cal SetSyn("wdiff")<CR>
an 50.170.180 &Syntax.WXYZ.Wget\ config :cal SetSyn("wget")<CR>
an 50.170.190 &Syntax.WXYZ.Whitespace\ (add) :cal SetSyn("whitespace")<CR>
an 50.170.200 &Syntax.WXYZ.WildPackets\ EtherPeek\ Decoder :cal SetSyn("dcd")<CR>
an 50.170.210 &Syntax.WXYZ.WinBatch/Webbatch :cal SetSyn("winbatch")<CR>
an 50.170.220 &Syntax.WXYZ.Windows\ Scripting\ Host :cal SetSyn("wsh")<CR>
an 50.170.230 &Syntax.WXYZ.WSML :cal SetSyn("wsml")<CR>
an 50.170.240 &Syntax.WXYZ.WvDial :cal SetSyn("wvdial")<CR>
an 50.170.260 &Syntax.WXYZ.X\ Keyboard\ Extension :cal SetSyn("xkb")<CR>
an 50.170.270 &Syntax.WXYZ.X\ Pixmap :cal SetSyn("xpm")<CR>
an 50.170.280 &Syntax.WXYZ.X\ Pixmap\ (2) :cal SetSyn("xpm2")<CR>
an 50.170.290 &Syntax.WXYZ.X\ resources :cal SetSyn("xdefaults")<CR>
an 50.170.300 &Syntax.WXYZ.XBL :cal SetSyn("xbl")<CR>
an 50.170.310 &Syntax.WXYZ.Xinetd\.conf :cal SetSyn("xinetd")<CR>
an 50.170.320 &Syntax.WXYZ.Xmodmap :cal SetSyn("xmodmap")<CR>
an 50.170.330 &Syntax.WXYZ.Xmath :cal SetSyn("xmath")<CR>
an 50.170.340 &Syntax.WXYZ.XML :cal SetSyn("xml")<CR>
an 50.170.350 &Syntax.WXYZ.XML\ Schema\ (XSD) :cal SetSyn("xsd")<CR>
an 50.170.360 &Syntax.WXYZ.XQuery :cal SetSyn("xquery")<CR>
an 50.170.370 &Syntax.WXYZ.Xslt :cal SetSyn("xslt")<CR>
an 50.170.380 &Syntax.WXYZ.XFree86\ Config :cal SetSyn("xf86conf")<CR>
an 50.170.400 &Syntax.WXYZ.YAML :cal SetSyn("yaml")<CR>
an 50.170.410 &Syntax.WXYZ.Yacc :cal SetSyn("yacc")<CR>
an 50.170.430 &Syntax.WXYZ.Zimbu :cal SetSyn("zimbu")<CR>

" The End Of The Syntax Menu
an 50.195 &Syntax.-SEP1- <Nop>

an <silent> 50.200 &Syntax.Set\ '&syntax'\ Only :call <SID>Setsynonly()<CR>
fun! s:Setsynonly()
  let s:syntax_menu_synonly = 1
endfun

an <silent> 50.202 &Syntax.Set\ '&filetype'\ Too :call <SID>Nosynonly()<CR>
fun! s:Nosynonly()
  if exists("s:syntax_menu_synonly")
    unlet s:syntax_menu_synonly
  endif
endfun

" Restore 'cpoptions'
let &cpo = s:cpo_save
unlet s:cpo_save
---
abstract: 'It is pointed out that if the normalized amplitude of a low frequency electromagnetic perturbation is larger than the characteristic small parameter, defined as the ratio of the gyro period to the transit period, and if resonance occurs between $\omega$ and $\mathbf{k}\cdot \mathbf{v}$, modern gyrokinetic theory violates the near identity property that Lie perturbation transformation theory is supposed to obey. A modification is given to overcome this problem by not requiring all components of the first order Lagrangian 1-form to equal zero. A numerical example is given as an application of the new theory.'
address: ' Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang 621999, China\'
author:
- Shuangxi Zhang
bibliography:
- 'magneticperturbation.bib'
title: Gyrokinetic resonant theory of low frequency electromagnetic perturbation
---

INTRODUCTION {#sec1}
============

Modern gyrokinetic theory (GT) is a powerful theoretical tool for numerically computing the orbits of charged particles immersed in a strong magnetic field, since the fast gyro angle is decoupled from the dynamical equations of the other degrees of freedom in the new coordinate system [@1983cary; @1990brizard; @2007brizard1; @2009cary; @wwleejcp1987; @xuxueq1991; @zhlinpop1995; @Idomuranf2009; @1999honqingpop; @biancalanipop2016]. The overall scheme of modern GT is to apply Lie perturbation transformation theory (LPTT) to the non-canonical guiding center Lagrangian 1-form, finding a new coordinate frame in which the magnetic moment is recovered as an adiabatic invariant by removing the $\theta$ dynamics [@1983cary; @1990brizard; @1988hahm]. The basic property of LPTT is that it is a near identity transformation (NIT) [@1983cary].
It is pointed out in Ref. [@2016nfshuangxi] that the application of the resonant perturbation theory given by John Cary [@1983cary] to a high frequency circularly polarized wave driving a charged particle in a strong magnetic field, as in Ref. [@gunyoungpop2007], violates the property of NIT in some range of the perturbation amplitude. In this paper, we find that when the scheme of modern GT is carried out, even a low frequency electromagnetic perturbation can cause the Lie perturbation coordinate transformation to violate NIT when resonance happens between $\omega$ and $\mathbf{k}\cdot \mathbf{v}$, provided that the normalized amplitude of the perturbation is larger than the characteristic small parameter $\varepsilon$, which equals the ratio between the gyro period and the transit period. Here, $\omega$ is the frequency of the wave, $\mathbf{k}$ is the wave vector, $\mathbf{v}$ is the velocity of the gyrocenter, and $i$ below labels the $i$th coordinate. Low frequency means that the frequency is much lower than the gyro frequency of the relevant charged particle.

The basic reason can be traced back to the way modern GT treats the resonant branches. Modern GT requires ${\Gamma}_{1i}=0$ for each $i$ except $i=0$, where ${\Gamma}_1$ is the first order 1-form in the new coordinates. This requirement introduces an almost constant term, originating from the resonant perturbation, into the differential equation for $S_1$, and thereby induces secularity in $S_1$. To avoid the secularity of $S_1$, the usual way is to move these resonant branches out of the equation for $S_1$ and add them to the first order energy $H_1$, which is defined to be $-\Gamma_{10}$. However, the generators entering the coordinate transformation formula still include those resonant branches.
The method to overcome this problem is to keep the resonant branches in the relevant original ${\Gamma}_{1i}$s, so that they do not appear in the equations for $S_1$ and the associated generators, rather than requiring all ${\Gamma}_{1i}=0$ except $i=0$ as modern GT does. In a real physical environment, for each low frequency wave $(\omega,\mathbf{k})$ there inevitably exist particles whose gyrocenter velocity resonates with the wave, since the velocity distribution of the particle ensemble is very broad. Therefore, modern GT inevitably violates NIT for any low frequency electromagnetic perturbation, which makes our modified theory the more reasonable one. Moreover, our modified theory is found to be essentially the same as the 'Symplectic Representation' given by Brizard in his doctoral dissertation [@1990brizard], where two ways of carrying out the LPTT for perturbations in the guiding center system are presented. The other one, named the 'Hamiltonian Representation', was adopted by subsequent researchers and is what we call modern GT in this paper. Brizard did not discuss resonant behavior there, nor did he study for which kinds of problems each of the two methods is preferred. Our finding shows that the 'Symplectic Representation', rather than the 'Hamiltonian Representation', should be adopted when carrying out the LPTT with a low frequency electromagnetic perturbation.

The arrangement of this paper is as follows. In Sec. \[sec2\], the violation of NIT is presented. In Sec. \[sec3\], the scheme for preventing the violation of NIT is given, and a comparison between the old and new theories is carried out. In Sec. \[sec4\], a simple example is given as an application of the new theory.

The violation of NIT of modern GT with resonant electromagnetic perturbation {#sec2}
==============================================================================

In this paper, for convenience of notation, the original guiding center coordinates plus the time are chosen to be $\mathbf{\bar Z} = ({\bf{\bar X}},\bar {U},\bar {\mu} ,\bar{ \theta} ,t)$, while the gyrocenter coordinates plus the time are chosen as ${\bf{Z}} = ({\bf{X}},U,\mu ,\theta ,t)$. The coordinate transformation formula for the LPTT is ${\bf{\bar Z}} = {e^{{G^i}{\partial _{{Z^i}}}}}{\bf{Z}}$, where $G^i$ is the infinitesimal generator for each $Z^i$, and $G^0=0$ is assumed; the superscript $0$ represents the time. In this paper, $\bar{f}$ denotes a function of $({\bf{\bar X}},\bar {U},\bar {\mu} ,\bar{ \theta} )$, unless other arguments are explicitly given. The details of the scheme of modern GT with an electromagnetic perturbation are given in the appendix, Sec. \[sec11\]. The operator ${G}^i{\partial _{{{Z}^i}}}$ is dimensionless. NIT requires the value of $|G^i|/Z_0^i$ to be much smaller than one, where $Z^i_0$ is used to normalize each $Z^i$.

The Fourier expansion of $\mathbf{A}_1$ can be written as
$$\label{e1} {{\bf{A}}_1}\left( {{\bf{X}},t} \right) = \sum\limits_{{\bf{k}}'} {\left( {{A_{c{\bf{k}}'}}\cos \left( {{\bf{X}}\cdot{\bf{k}}' - {\omega _{k'}}t} \right) + {A_{s{\bf{k}}'}}\sin \left( {{\bf{X}}\cdot{\bf{k}}' - {\omega _{k'}}t} \right)} \right)}$$
Here, we assume that the Fourier branch $(\omega_k,\mathbf{k})$ satisfies the resonant condition ${\omega _k} - {\bf{v}}\cdot{\bf{k}} \approx 0$ in a resonant layer where the gyrocenter velocity is $\mathbf{v}$, and that only the cosine branch exists. According to the gyroangle averaging and the removal of secularity from the gauge function $S$ described in the appendix, this branch is removed from the gauge function $S_1$. Notice that in $G^U$ in Eq.(\[a2\]) this resonant branch remains in the first term on the right hand side.
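The role of the resonant condition can be seen directly: along the unperturbed gyrocenter orbit, the phase of a resonant branch is stationary, so the cosine drive is effectively constant. A minimal numeric sketch (the values of $\mathbf{k}$, $\mathbf{v}$ and $\mathbf{X}_0$ are illustrative, not taken from the paper):

```python
import numpy as np

# At resonance, omega_k = k . v, so the phase X.k - omega_k*t is stationary
# along the unperturbed gyrocenter orbit X(t) = X0 + v*t, and the resonant
# cosine branch acts as an almost constant drive.
k = np.array([0.3, 0.0, 1.2])      # illustrative wave vector
v = np.array([0.0, 0.0, 0.5])      # illustrative gyrocenter velocity
omega = k @ v                      # resonant condition: omega = k . v
X0 = np.array([1.0, 0.0, 0.0])

t = np.linspace(0.0, 100.0, 1001)
phase = (X0 + np.outer(t, v)) @ k - omega * t

# The phase never moves, so cos(phase) keeps its initial value.
print("max phase drift:", np.abs(phase - phase[0]).max())
```

Off resonance ($\omega_k \ne \mathbf{k}\cdot\mathbf{v}$), the same phase sweeps through $2\pi$ and the drive averages away.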
If all non-resonant Fourier branches are ignored, the dimensionless value of $G^U$ after normalization by $v_t$ can be written as
$$\label{e2} {G^U} \approx \varepsilon^{-1} {A_{ck}}\cos ({\bf{X}}\cdot{\bf{k}} - {\omega _k}t),$$
where we made the normalizations ${G^U} \to {G^U}/{v_t}$, ${A_{ck}} \to {A_{ck}}/{A_0}$, and $\varepsilon \equiv {m_i}{v_t}/e{A_0}$, with $v_t$ the ion thermal velocity. In SI units, the amplitude of the equilibrium magnetic vector potential can be taken as $A_0 = 1\,\mathrm{T\cdot m}$ and $v_t=10^4\,\mathrm{m/s}$; thus $\left| {{\varepsilon }} \right| \approx {10^{-4}}$ for ions. In the resonant region, $\left| {\cos ({\bf{X}}\cdot{\bf{k}} - {\omega _k}t)} \right|$ is almost constant. If $A_{ck}\ge 10^{-4}$, then $|G^{U}| \ge 1$ may hold, violating the inequality $|G^{U}|\ll 1$ required by NIT.

The scheme to avoid the violation of NIT {#sec3}
========================================

According to the analysis in Sec. \[sec2\] and in the appendix, Sec. \[sec11\], the violation of NIT involves the term $e\bar{\mathbf{A}}_1\cdot d\bar{\mathbf{X}}$ in $\bar{\gamma}_1$ when $G^i$ and $S_1$ are solved under the requirement $\Gamma_{1i}=0$. The problem can be overcome by keeping this term in $\Gamma_{1\mathbf{X}}$ after carrying out the LPTT on the first order 1-form, just as the 'Symplectic Representation' in Brizard's doctoral thesis does. The details are given in the appendix, Sec. \[sec12\]. The resonant branch is then removed from the generators: the terms left in $G^{U}$ and $G^{\theta}$ are non-resonant. As before, the perturbed potential is assumed to include the resonant branch ${{\bf{A}}_1}\left( {{\bf{X}},t} \right) = {A_{ck}}\cos ({\bf{X}}\cdot{\bf{k}} - {\omega _k}t){\bf{b}}$.
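For this resonant branch, the size of the normalized generator $|G^U| \sim A_{ck}/\varepsilon$ from Eq.(\[e2\]) can be checked with a few lines, using the parameter values quoted in Sec. \[sec2\]:

```python
# Estimate of the small parameter eps = m_i * v_t / (e * A0) for ions,
# and of the normalized generator |G^U| ~ A_ck / eps at resonance,
# where |cos(X.k - omega_k t)| ~ 1 in the resonant layer.
m_i = 1.67e-27   # proton mass [kg]
e   = 1.60e-19   # elementary charge [C]
v_t = 1.0e4      # thermal velocity [m/s], as quoted in the text
A0  = 1.0        # equilibrium vector potential amplitude [T*m], as in the text

eps = m_i * v_t / (e * A0)
print(f"eps  = {eps:.2e}")   # ~1e-4, matching the text

# Once the normalized amplitude A_ck reaches eps, |G^U| ~ 1, which breaks
# the near-identity requirement |G^U| << 1.
A_ck = 1.0e-4
G_U = A_ck / eps
print(f"|G^U| = {G_U:.2f}")
```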
The contribution to the acceleration of $U$ from the resonant branch in Eq.(\[g11\]), derived from modern GT, is
$$\label{m1} \dot {U}_{old} = - \left( {\frac{{eU{{\bf{B}}^*}}}{{mB_\parallel ^*}}\cdot{\bf{k}}} \right)\sin \left( {{\bf{X}}\cdot{\bf{k}} - {\omega _k}t} \right),$$
while the contribution to the acceleration of $U$ from the resonant branch in Eq.(\[g24\]), derived from the new GT, is
$$\label{m2} \dot {U}_{new} = \frac{e}{m}{\omega _k}\sin \left( {{\bf{X}}\cdot{\bf{k}} - {\omega _k}t} \right).$$
When resonance happens, $\omega_k = {\mathbf{v}}\cdot{\mathbf{k}}$ holds and $\sin \left( {{\bf{X}}\cdot{\bf{k}} - {\omega _k}t} \right)$ is almost a constant. The obvious difference between Eq.(\[m1\]) and Eq.(\[m2\]) is the minus sign on the right hand side of Eq.(\[m1\]), which indicates that the dynamic equation of the parallel velocity derived from modern GT is not correct. Compared with Eq.(\[g11\]) derived from modern GT, Eq.(\[g24\]) is intuitively plausible, since the induced electric field ${\bf{E}} = - {{\partial {\bf{A}}}}/{{\partial t}}$ appears as the driving force. However, the difference between Eq.(\[m1\]) and Eq.(\[m2\]) is hard to observe in past simulations, since for an ensemble of resonant particles the initial phase of $\sin \left( {{\bf{X}}\cdot{\bf{k}} - {\omega _k}t} \right)$ covers the range $(0,\pi)$, which cancels the effect of the sign difference between Eq.(\[m1\]) and Eq.(\[m2\]).

A numerical application {#sec4}
=======================

In this paper, the numerical application of our theory is based on the simple toroidal magnetic configuration
$$\label{g28} {\bf{B}} = \frac{{{B_0}}}{{1 + r\cos \phi /{R_0}}}\left( {{\mathbf{e}_\xi } + \frac{r}{{q{R_0}}}{\mathbf{e}_\phi }} \right),$$
where $R_0$ is the major radius at the magnetic axis, $q$ is the safety factor, and the toroidal coordinates are $(r,\phi,\xi)$.
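The field of Eq.(\[g28\]) is straightforward to evaluate; the following sketch, with the parameters of the numerical example below ($R_0=4a$, $q=2$, $B_0=1\,$T, lengths normalized to the minor radius $a$), checks its basic features:

```python
import numpy as np

# Evaluate the simple toroidal field of Eq. (g28) in toroidal coordinates
# (r, phi, xi); e_xi and e_phi are orthonormal, so |B| follows directly.
B0, R0, q = 1.0, 4.0, 2.0   # parameters of the numerical example, r in units of a

def B_toroidal(r, phi):
    """Return (B_xi, B_phi, |B|) of Eq. (g28) at (r, phi)."""
    fac = B0 / (1.0 + r * np.cos(phi) / R0)
    B_xi = fac                      # toroidal component
    B_phi = fac * r / (q * R0)      # poloidal component
    return B_xi, B_phi, np.hypot(B_xi, B_phi)

# On the magnetic axis (r = 0) the field is purely toroidal with |B| = B0,
# and the field is weaker on the outboard side (phi = 0) than inboard (phi = pi).
print(B_toroidal(0.0, 0.0))
print(B_toroidal(0.5, 0.0)[2], B_toroidal(0.5, np.pi)[2])
```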
In our numerical example, the parameters are chosen as $R_0=4a$, $q=2$, $B_0=1\,$T, where $a$ is the minor radius. To describe in a simple picture the induced electric field accelerating particles, the model of the electromagnetic perturbation is chosen to be a magnetic vector potential consisting of a single cosine Fourier branch, parallel to the unit vector of the equilibrium magnetic field. In toroidal geometry, its expression is
$$\label{g29} {{\bf{A}}_1}\left( {{\bf{X}},t} \right) = {\bf{b}}A_1(r)\cos \left( { \omega t + k\phi - n\xi } \right),$$
where $k$ and $n$ are the poloidal and toroidal wave numbers, and $A_1(r)$ is the amplitude of the wave at radial position $r$. The resonant condition is
$$\label{g30} \omega + k{\omega _\phi } - n{\omega _\xi } = 0,$$
where ${\omega _\xi } = \frac{{d\xi }}{{dt}}$ and ${\omega _\phi } = \frac{{d\phi }}{{dt}}$. In our example, $k=1$, $n=1$ are chosen. The kinetic equation of $U$ given in Eq.(\[g24\]) becomes
$$\label{g31} \dot U = - \frac{{{{\bf{B}}^*}}}{{mB_\parallel ^*}}\cdot\nabla \left( {\mu B} \right) + \frac{e}{m}{A_1}(r)\omega \sin (\omega t + \phi - \xi )$$
The normalization quantities are $r_0=a$, $t_0=a/U_0$, $U_0=v_{th}$, $B_0=1\,$T, and $A_0=B_0 a$. In our example, $\omega=2\times 10^{-2}\,\Omega_i$ and $A_1(r)=2\times 10^{-7}A_0$, where $\Omega_i$ is the gyro frequency of the ion based on the normalization quantities. The initial position of the charged particle is at $(x,y,z)=(4.5,0,0)$ after normalization, where rectangular coordinates are adopted. The initial perpendicular velocity is $v_{\perp 0}=2$. With the given equilibrium and perturbed magnetic fields and the other initial conditions, the resonant parallel velocity near the initial position, solved from the resonant condition Eq.(\[g30\]), is about $0.2$. The normalized numerical time step is chosen to be $dt=10^{-3}$, which gives five discrete times per period of the wave. The fourth order Runge-Kutta scheme is adopted in this numerical example.
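The paper's orbit equations are integrated with the classical fourth order Runge-Kutta scheme; a generic one-step implementation, verified here on a known ODE rather than on the paper's equations, looks as follows:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Sanity check on dy/dt = y with y(0) = 1: y(1) should equal e.
y, t, dt = 1.0, 0.0, 1e-3
for _ in range(1000):
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
print(abs(y - math.e))   # fourth order: error far below the step size
```

The same stepper applies componentwise to the system $(\mathbf{X}, U)$ of Eqs.(\[g22\]) and (\[g31\]) once the right hand sides are packed into one state vector.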
Fig.(\[gcorbit13\]) shows the trapped orbit of the guiding center of the particle with the given initial conditions and equilibrium magnetic field, without perturbation; the time step is $dt=10^{-3}$. The variation of the normalized energy and parallel velocity of the gyrocenter with time is given in Fig.(\[gcue13\]). When the electromagnetic perturbation is switched on with the same initial conditions, the orbit of the particle changes from trapped to passing, as shown in Fig.(\[mporbit13\]) with $dt=10^{-3}$. The design of the initial conditions makes the resonance happen at the beginning, which can be observed by comparing Fig.(\[gcue13\]) and Fig.(\[mpue13\]); thus the phase of $\sin (\omega t + \phi - \xi )$ changes slowly during the first half period, during which the resonant perturbation decelerates the parallel velocity to zero and then accelerates it to a large value in the opposite direction. It is obvious in Fig.(\[mpue13\]) that in the first half period, during which resonance happens, the parallel velocity changes the most, and the energy transferred to the particle by the wave is much larger than in the other periods. It is also observed from Fig.(\[mpue13\]) that as time goes on, the parallel velocity averaged over one period increases, although the rate of increase decreases with time; thus an induced parallel electric field may drive charged particles to high energies. A smaller time step, $dt=3\times 10^{-4}$, is adopted in Fig.(\[mpue34\]) to verify the numerical accuracy of the time step $dt=10^{-3}$. The numerical correctness of Fig.(\[mporbit13\]) and Fig.(\[mpue13\]) is confirmed by Fig.(\[mpue34\]), based on the fact that during the first period the normalized $U$ and energy in Fig.(\[mpue13\]) as functions of time are almost the same as those in Fig.(\[mpue34\]).

![\[gcorbit13\] The guiding center orbit of the particle without the electromagnetic perturbation, with time step $dt=10^{-3}$. With the initial conditions given in the text, the orbit is a trapped one.](gcorbit13.eps){height="8cm" width="8cm"}

![\[gcue13\] The normalized energy and parallel velocity as functions of time without the electromagnetic perturbation, with time step $dt=10^{-3}$.](gcue13.eps){height="8cm" width="8cm"}

![\[mporbit13\] The gyrocenter orbit of the particle with the electromagnetic perturbation and the same initial conditions, with $dt=10^{-3}$. The orbit changes from a trapped one to a passing one.](mporbit13.eps){height="8cm" width="8cm"}

![\[mpue13\] The normalized energy and parallel velocity as functions of time with the electromagnetic perturbation, with $dt=10^{-3}$.](mpue13.eps){height="8cm" width="8cm"}

![\[mpue34\] As a comparison, the first period with the smaller time step $dt=3\times 10^{-4}$ is given. No obvious difference exists between this figure and Fig.(\[mpue13\]) during the first period.](mpue34.eps){height="8cm" width="8cm"}

Summary and Discussion {#sec9}
======================

In this paper we pointed out that modern GT violates NIT under a resonant electromagnetic perturbation, since $G^{U}$ can be much larger than one in some range of the perturbation parameters. A modified method is given to remove this violation. Taking into account the broad velocity distribution of the particle ensemble, resonant behavior is inevitable; therefore, our method is the more plausible gyrokinetic theory for treating the interaction between an electromagnetic wave and charged particles in a strong magnetic field. In fact, the error of modern GT can be inferred in an intuitive way from the fact that the kinetic equation of $U$, Eq.(\[g11\]), does not contain the induced electric field.
Acknowledgments
===============

Appendix {#sec10}
========

Modern GT with low frequency electromagnetic perturbation {#sec11}
---------------------------------------------------------

The guiding center zero order 1-form is
$$\label{g2} {\bar \gamma _0} = \left( {e{{{\bf{\bar A}}}_0} + m\bar U{\bf{\bar b}}} \right)\cdot d{\bf{\bar X}} + \frac{m}{e}\bar \mu d\bar \theta - (\bar \mu \bar B + \frac{1}{2}{m \bar U^2})dt,$$
where the guiding center coordinates plus the time are $\mathbf{\bar Z} = ({\bf{\bar X}},\bar {U},\bar {\mu} ,\bar{ \theta} , t)$. The first order 1-form due to the perturbed magnetic vector potential is
$$\label{g3} \begin{array}{l}
{{\bar \gamma }_1} = e{{{\bf{\bar A}}}_1}({\bf{\bar X}} + {{{\bar{\hat \rho }}}_0},t)\cdot d\left( {{\bf{\bar X}} + {{{\bar{\hat \rho }}}_0}} \right)\\
\approx e{{{\bf{\bar A}}}_1}\left( {{\bf{\bar X}},t} \right)\cdot d{\bf{\bar X}} + e\left( {{{{\bar{\hat \rho }}}_0}\cdot\nabla } \right){{{\bf{\bar A}}}_1}\left( {{\bf{\bar X}},t} \right)d{\bf{\bar X}}\\
+ e{{{\bf{\bar A}}}_1}\left( {{\bf{\bar X}},t} \right)\cdot \nabla {{{\bar{\hat \rho }}}_0} d{\bf{\bar X}} + e{{{\bf{\bar A}}}_1}\left( {{\bf{\bar X}},t} \right)\cdot\frac{{\partial {{\bar {\hat\rho }}_0}}}{{\partial \bar \mu }}d\bar \mu + e{{{\bf{\bar A}}}_1}\left( {{\bf{\bar X}},t} \right)\cdot\frac{{\partial {{\bar {\hat \rho} }_0}}}{{\partial \bar{\theta} }}d\bar \theta
\end{array}$$
Here, only the first order terms are kept in Eq.(\[g3\]), and $\mathbf{A}_1$ is given in Eq.(\[e1\]). To remove the $\theta$ dynamics from the perturbed 1-form $\bar{\gamma}_1$ in Eq.(\[g3\]), the coordinate transformation
$$\label{g4} {\bf{Z}} = {e^{{\bar{G}^i}{\partial _{{\bar{Z}^i}}}}}{\bf{\bar Z}}$$
is made, transforming $\bar {\bf{Z}} = (\bar {\bf{X}} ,\bar U,\bar \mu ,\bar \theta ,t) \to {\bf{Z}} = ({\bf{X}},U,\mu ,\theta ,t)$.
The central idea of modern GT is to find a set of generators $\bar{G}^i$ and an auxiliary gauge function $S$ that make all the components ${\Gamma}_{1i}({\mathbf{Z}})$ of the new first order 1-form ${\Gamma}_1({\mathbf{Z}})$ equal zero, except for $\Gamma_{10}=-{H}_1( \mathbf{Z})$, which is chosen to avoid the secularity of the gauge function $S_1$. Eq.(\[g4\]) induces a transformation between 1-forms,
$$\label{g5} {\Gamma _1}\left( {\bf{Z}} \right) = {\bar \gamma _1}\left( {\bf{Z}} \right) - {L_{G({\bf{Z}})}}{\bar \gamma _0}\left( {\bf{Z}} \right) + d{S_1}\left( {\bf{Z}} \right) - {H_1}({\bf{Z}})dt.$$
In this paper, the 1-form transformation is carried out up to the first order. Requiring $\Gamma_{1i}=0$ except for $i=0$, the equations for the $G^i$s are
$$\label{a1} {{{\bf{ G}}}_X} = - \frac{1}{{eB}}\left( {e{\bf{ b}} \times {{{\bf{ A}}}_1} + {\bf{ b}} \times \nabla {{ S}_1}} \right) - \frac{{\bf{b}}}{m}\frac{{\partial {{ S}_1}}}{{\partial U}}$$
$$\label{a2} {{ G}^U} = \frac{e}{m}{\bf{ b}}\cdot{{{\bf{ A}}}_1} + \frac{1}{m}{\bf{ b}}\cdot\nabla {{ S}_1}$$
$$\label{a3} {{ G}^\mu } = \frac{e}{m}\left( {e{{{\bf{ A}}}_1}\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \theta }} + \frac{{\partial {{ S}_1}}}{{\partial \theta }}} \right)$$
$$\label{a4} {{ G}^\theta } = - \frac{{{e^2}}}{m}{{{\bf{ A}}}_1}\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \mu }} - \frac{{\partial {{ S}_1}}}{{\partial \mu }},$$
and the equation for the gauge function $S_1$ is
$$\label{g6} \begin{array}{l}
\frac{{\partial {S_1}}}{{\partial t}} + \frac{{eB}}{m}\frac{{\partial {S_1}}}{{\partial \theta }} + U{\bf{b}}\cdot\nabla {S_1} + \frac{\mu }{{eB}}\left( {{\bf{b}} \times \nabla B} \right)\cdot\nabla {S_1} - \frac{\mu }{m}\left( {{\bf{b}}\cdot\nabla B} \right)\frac{{\partial {S_1}}}{{\partial U}} \\
= - \frac{{{e^2}B}}{m}{{\bf{A}}_1}\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \theta }} - eU{{\bf{A}}_1}\cdot{\bf{b}} - \frac{\mu }{B}\left( {{\bf{b}} \times \nabla B} \right)\cdot{{\bf{A}}_1} - {H_1}.
\end{array}$$
The second term on the right hand side is much smaller than the first term. The solution of Eq.(\[g6\]) for $S_1$ is obtained order by order. First, the gyroangle averaging is carried out; the symbol $\left\langle {} \right\rangle$ in this paper denotes the gyroangle averaged quantity. Defining ${F_1} = \left\langle { - eU{\bf{b}}\cdot{{\bf{A}}_1} - \frac{\mu }{B}\left( {{\bf{b}} \times \nabla B} \right)\cdot{{\bf{A}}_1}} \right\rangle$, the lowest order equation for a low frequency electromagnetic perturbation is
$$\label{h1} \begin{array}{l}
\frac{{eB}}{m}\frac{{\partial {S_{10}}}}{{\partial \theta }} = - \frac{{{e^2}B}}{m}{{\bf{A}}_1}\left( {{\bf{X}},t} \right)\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \theta }} \\
- eU{\bf{b}}\cdot{{\bf{A}}_1}\left( {{\bf{X}},t} \right) - \frac{\mu }{B}\left( {{\bf{b}} \times \nabla B} \right)\cdot{{\bf{A}}_1} - {F_1}
\end{array}$$
which relates the fast variation of $S_1$ to the gyroangle. The next order equation is
$$\label{h2} \begin{array}{l}
\frac{{\partial {S_{11}}}}{{\partial t}} + U{\bf{b}}\cdot\nabla {S_{11}} + \frac{\mu }{{eB}}\left( {{\bf{b}} \times \nabla B} \right)\cdot\nabla {S_{11}} - \frac{\mu }{m}\left( {{\bf{b}}\cdot\nabla B} \right)\frac{{\partial {S_{11}}}}{{\partial U}} \\
= {F_1} - {H_1}
\end{array}$$
When resonance happens, $F_1$ is a quantity independent of time and therefore introduces secularity into $S_{11}$ when Eq.(\[h2\]) is integrated over time. This secularity of $S_{11}$ can be removed by defining $H_1=F_1$ to cancel $F_1$. Eventually, combining the zero order 1-form with the remaining first order 1-form, the total 1-form is
$$\label{g8} \Gamma = \left( {e{{\bf{A}}_0} + mU{\bf{b}}} \right)\cdot d{\bf{X}} + \frac{m}{e}\mu d\theta - (\mu B + \frac{1}{2}m{U^2} + {H_1})dt.$$
The kinetic equations can be derived by applying the Euler-Lagrange equation to the Lagrangian obtained from the 1-form in Eq.(\[g8\]):
$$\label{g9} {\bf{\dot X}} = \left( {\frac{{{\bf{b}} \times \nabla H}}{{eB_\parallel ^*}} + \frac{{{{\bf{B}}^*}}}{{mB_\parallel ^*}}\frac{{\partial H}}{{\partial U}}} \right),$$
$$\label{g10} \dot \mu = 0,$$
$$\label{g11} \dot U = - \frac{{{{\bf{B}}^*}}}{{mB_\parallel ^*}}\cdot\nabla H ,$$
$$\label{g12} \dot \theta = \frac{e}{{{m}}}B_0 ,$$
where
$$\label{g13} {{\bf{B}}^*} = {{\bf{B}} + \frac{m}{e}U\nabla \times {\bf{b}}} ,$$
$$\label{g14} B_\parallel ^* = {\bf{b}}\cdot{{\bf{B}}^*} ,$$
$$\label{g15} H = H_0 + {{\rm{H}}_1},$$
$$\label{o1} H_0=\mu B + \frac{1}{2}m{U^2},$$
$$\label{n1} {H_1} = \left\langle { - eU{\bf{b}}\cdot{{\bf{A}}_1}\left( {{\bf{X}},t} \right) - \frac{\mu }{B}\left( {{\bf{b}} \times \nabla B} \right)\cdot{{\bf{A}}_1}} \right\rangle$$

The modified GT {#sec12}
---------------

Compared with modern GT, which requires $\Gamma_{1i}=0$ for each $i$ except $i=0$, our modified GT requires $\Gamma_{1i}=0$ for $i={U},\mu,\theta$ and $\Gamma_{1\mathbf{X}}=e\mathbf{A}_1\cdot d\mathbf{X}$ after the operation of the LPTT.
Carrying out the LPTT to the first order, the equations for the $G^i$s are
$$\label{g16} {{\bf{G}}_{\bf{X}}} = - \frac{1}{{eB}}{\bf{b}} \times \nabla {S_1} - \frac{{\bf{b}}}{m}\frac{{\partial {S_1}}}{{\partial U}}$$
$$\label{g17} {G^U} = \frac{1}{m}{\bf{b}}\cdot\nabla {S_1}$$
$$\label{g18} {G^\mu } = \frac{e}{m}\left( {e{{\bf{A}}_1}\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \theta }} + \frac{{\partial {S_1}}}{{\partial \theta }}} \right)$$
$$\label{g19} {G^\theta } = - e{{\bf{A}}_1}\left( {{\bf{X}},t} \right)\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \mu }} - \frac{{\partial {S_1}}}{{\partial \mu }}$$
The equation for the gauge function $S_1$ is
$$\label{g20} \begin{array}{l}
\frac{{\partial {S_1}}}{{\partial t}} + \frac{{eB}}{m}\frac{{\partial {S_1}}}{{\partial \theta }} + U{\bf{b}}\cdot\nabla {S_1} + \frac{\mu }{{eB}}\left( {{\bf{b}} \times \nabla B} \right)\cdot\nabla {S_1} - \frac{\mu }{m}\left( {{\bf{b}}\cdot\nabla B} \right)\frac{{\partial {S_1}}}{{\partial U}}\\
= - \frac{{{e^2}B}}{m}{{\bf{A}}_1}\cdot\frac{{\partial {\hat{\rho} _0}}}{{\partial \theta }} - {H_1},
\end{array}$$
where $ - \frac{{{e^2}B}}{m}{{\bf{A}}_1}\cdot\frac{{\partial {\hat\rho _0}}}{{\partial \theta }}$ belongs to the lowest order equation of $S_1$. No term in Eq.(\[g20\]) introduces secularity into $S_1$. The first order energy is chosen to be $H_1=0$.
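The contrast between Eq.(\[h2\]), where the resonant source $F_1$ is almost constant, and Eq.(\[g20\]), which contains no such source, can be illustrated with a toy integral; the sources below are illustrative stand-ins, not the actual $F_1$:

```python
import numpy as np

# Toy model of the secularity: dS/dt = F(t).  A constant (resonant) source
# makes S grow without bound, while an oscillatory (non-resonant) source
# keeps S bounded; this is why the resonant branch must be removed from the
# S_1 equation, either via H_1 (modern GT) or by keeping e A_1 . dX in
# Gamma_1X (the modified theory).
t = np.linspace(0.0, 200.0, 20001)
dt = t[1] - t[0]

S_resonant = np.cumsum(np.ones_like(t)) * dt    # integral of a constant source
S_nonres = np.cumsum(np.cos(2.0 * t)) * dt      # integral of cos(2t)

print(S_resonant[-1])            # grows linearly, ~ t
print(np.abs(S_nonres).max())    # stays bounded, ~ |sin(2t)/2|
```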
The new total 1-form, up to the first order in $O(\varepsilon)$, is
$$\label{g21} \Gamma = \left( {e{{\bf{A}}_0} + e{{\bf{A}}_1} + mU{\bf{b}}} \right)\cdot d{\bf{X}} + \frac{m}{e}\mu d\theta - (\mu B + \frac{1}{2}m{U^2})dt.$$
The kinetic equations are derived by applying the Euler-Lagrange equation to the Lagrangian obtained from the 1-form in Eq.(\[g21\]), or equivalently by the Hamiltonian equations, Eq.(18) of Ref. [@1983cary], for a general Hamiltonian system,
$$\label{i1} \frac{{d{z^j}}}{{d{z^0}}} = {J^{jk}}\left( {\frac{{\partial {\gamma _k}}}{{\partial {z^0}}} - \frac{{\partial {\gamma _0}}}{{\partial {z^k}}}} \right)$$
where $J^{jk}$ is the Poisson tensor, $\gamma_0$ is the Hamiltonian, and $\gamma_k$ is the $k$th component of the Lagrangian 1-form. The corresponding kinetic equations are
$$\label{g22} \dot{\bf{X}} = \frac{{{\bf{b}} \times \nabla {H_0}}}{{eB_\parallel ^*}} + \frac{{U{{\bf{B}}^*}}}{{B_\parallel ^*}} + \frac{{\partial {{\bf{A}}_1}/\partial t \times {\bf{b}}}}{{B_\parallel ^*}},$$
$$\label{g23} \dot \mu = 0,$$
$$\label{g24} \dot U = - \frac{{{{\bf{B}}^*}}}{{mB_\parallel ^*}}\cdot\nabla \left( {\mu B_0} \right) - \frac{e}{m}{\bf{b}}\cdot\frac{{\partial {{\bf{A}}_1}}}{{\partial t}},$$
$$\label{g25} \dot \theta = \frac{e}{m}B_0,$$
where ${{\bf{B}}^*} = {{\bf{B}}_0} + {{\bf{B}}_1} + \frac{m}{e}U\nabla \times {\bf{b}}$.

References {#references .unnumbered}
==========
#include "NM_Index.h"
#include "NM_Main.h"
#include "NM_App.h"
#include "Access.h"
#include "WinVerHelper.h"
#include "Defines.h"

bool CAccess::SetMitigationPolicys()
{
#ifdef _DEBUG
	DEBUG_LOG(LL_SYS, "Set Mitigation Policy event has been started!");

	RTL_OSVERSIONINFOEXW verInfo = { 0 };
	verInfo.dwOSVersionInfoSize = sizeof(verInfo);
	if (g_winapiApiTable->RtlGetVersion((PRTL_OSVERSIONINFOW)&verInfo) == 0)
		DEBUG_LOG(LL_SYS, "Target OS; Major: %u Minor: %u Build: %u SP: %u.%u",
			verInfo.dwMajorVersion, verInfo.dwMinorVersion, verInfo.dwBuildNumber,
			verInfo.wServicePackMajor, verInfo.wServicePackMinor);
#endif

	// SetProcessMitigationPolicy requires Windows 8 or later.
	if (!IsWindows8OrGreater())
		return true;

	return true; // TODO: activate later; the code below is currently unreachable.

	// DEP policy (Windows 8+)
	PROCESS_MITIGATION_DEP_POLICY depPolicy = { 0 };
	depPolicy.Enable = 1;
	depPolicy.Permanent = TRUE;
	depPolicy.DisableAtlThunkEmulation = TRUE;

	BOOL bDepPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessDEPPolicy, &depPolicy, sizeof(depPolicy));
	if (bDepPolicyRet)
	{
		DEBUG_LOG(LL_SYS, "DEP mitigation policy successfully enabled!");
	}
	else
	{
		DEBUG_LOG(LL_ERR, "DEP mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
	}

	// ASLR policy (Windows 8+)
	PROCESS_MITIGATION_ASLR_POLICY aslrPolicy = { 0 };
	aslrPolicy.EnableForceRelocateImages = 1;
	aslrPolicy.DisallowStrippedImages = 1;

	BOOL bAslrPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessASLRPolicy, &aslrPolicy, sizeof(aslrPolicy));
	if (bAslrPolicyRet)
	{
		DEBUG_LOG(LL_SYS, "ASLR mitigation policy successfully enabled!");
	}
	else
	{
		DEBUG_LOG(LL_ERR, "ASLR mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
	}

	// Extension point disable policy (Windows 8+)
	PROCESS_MITIGATION_EXTENSION_POINT_DISABLE_POLICY extensionPolicy = { 0 };
	extensionPolicy.DisableExtensionPoints = 1;

	BOOL bExtensionPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessExtensionPointDisablePolicy, &extensionPolicy, sizeof(extensionPolicy));
	if (bExtensionPolicyRet)
	{
		DEBUG_LOG(LL_SYS, "Extension point mitigation policy successfully enabled!");
	}
	else
	{
		DEBUG_LOG(LL_ERR, "Extension point mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
	}

	// Strict handle check policy (Windows 8+)
	PROCESS_MITIGATION_STRICT_HANDLE_CHECK_POLICY strictHandlePolicy = { 0 };
	strictHandlePolicy.HandleExceptionsPermanentlyEnabled = 1;
	strictHandlePolicy.RaiseExceptionOnInvalidHandleReference = 1;

	BOOL bStrictHandlePolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessStrictHandleCheckPolicy, &strictHandlePolicy, sizeof(strictHandlePolicy));
	if (bStrictHandlePolicyRet)
	{
		DEBUG_LOG(LL_SYS, "Strict handle mitigation policy successfully enabled!");
	}
	else
	{
		DEBUG_LOG(LL_ERR, "Strict handle mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
	}

	if (IsWindows8Point1OrGreater())
	{
		// Control Flow Guard policy (Windows 8.1+)
		PROCESS_MITIGATION_CONTROL_FLOW_GUARD_POLICY cfgPolicy = { 0 };
		cfgPolicy.EnableControlFlowGuard = 1;

		BOOL bCfgPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessControlFlowGuardPolicy, &cfgPolicy, sizeof(cfgPolicy));
		if (bCfgPolicyRet)
		{
			DEBUG_LOG(LL_SYS, "CFG mitigation policy successfully enabled!");
		}
		else
		{
			DEBUG_LOG(LL_ERR, "CFG mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
		}

		// Dynamic code policy (Windows 8.1+)
		PROCESS_MITIGATION_DYNAMIC_CODE_POLICY dynCodePolicy = { 0 };
		dynCodePolicy.AllowRemoteDowngrade = 0;
		dynCodePolicy.AllowThreadOptOut = 0;
		dynCodePolicy.ProhibitDynamicCode = 1;

		BOOL bDynCodePolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessDynamicCodePolicy, &dynCodePolicy, sizeof(dynCodePolicy));
		if (bDynCodePolicyRet)
		{
			DEBUG_LOG(LL_SYS, "Dynamic code mitigation policy successfully enabled!");
		}
		else
		{
			DEBUG_LOG(LL_ERR, "Dynamic code mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
		}
	}

	if (IsWindows10OrGreater())
	{
		// Image load policy (Windows 10+)
		PROCESS_MITIGATION_IMAGE_LOAD_POLICY imageLoadPolicy = { 0 };
		imageLoadPolicy.NoLowMandatoryLabelImages = 1;
		imageLoadPolicy.NoRemoteImages = 1;
		imageLoadPolicy.PreferSystem32Images = 1;

		BOOL bImageLoadPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessImageLoadPolicy, &imageLoadPolicy, sizeof(imageLoadPolicy));
		if (bImageLoadPolicyRet)
		{
			DEBUG_LOG(LL_SYS, "Image load mitigation policy successfully enabled!");
		}
		else
		{
			DEBUG_LOG(LL_ERR, "Image load mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
		}

		// Binary signature policy (Windows 10+)
		PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY signaturePolicy = { 0 };
		signaturePolicy.MicrosoftSignedOnly = 1;

		BOOL bSignaturePolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessSignaturePolicy, &signaturePolicy, sizeof(signaturePolicy));
		if (bSignaturePolicyRet)
		{
			DEBUG_LOG(LL_SYS, "Binary signature mitigation policy successfully enabled!");
		}
		else
		{
			DEBUG_LOG(LL_ERR, "Binary signature mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
		}

		// Payload restriction policy (Windows 10+)
		PROCESS_MITIGATION_PAYLOAD_RESTRICTION_POLICY payloadPolicy = { 0 };
		payloadPolicy.EnableExportAddressFilter = 1;
		payloadPolicy.EnableImportAddressFilter = 1;
		payloadPolicy.EnableRopStackPivot = 1;
		payloadPolicy.EnableRopCallerCheck = 1;

		BOOL bPayloadPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessPayloadRestrictionPolicy, &payloadPolicy, sizeof(payloadPolicy));
		if (bPayloadPolicyRet)
		{
			DEBUG_LOG(LL_SYS, "Payload restriction mitigation policy successfully enabled!");
		}
		else
		{
			DEBUG_LOG(LL_ERR, "Payload restriction mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
		}

		// Child process policy (Windows 10+)
		PROCESS_MITIGATION_CHILD_PROCESS_POLICY childProcPolicy = { 0 };
		childProcPolicy.NoChildProcessCreation = 1;

		BOOL bChildPolicyRet = LI_FIND(SetProcessMitigationPolicy)(ProcessChildProcessPolicy, &childProcPolicy, sizeof(childProcPolicy));
		if (bChildPolicyRet)
		{
			DEBUG_LOG(LL_SYS, "Child process mitigation policy successfully enabled!");
		}
		else
		{
			DEBUG_LOG(LL_ERR, "Child process mitigation policy could NOT be enabled! Last err: %u", g_winapiApiTable->GetLastError());
		}
	}

	DEBUG_LOG(LL_SYS, "Set Mitigation Policy event completed!");
	return true;
}
Osteosarcoma and other bone cancers. In this review, recent advances in the clinical therapy of osteosarcoma are discussed, including results from the European Osteosarcoma Intergroup trial demonstrating the efficacy of a short, intensive two-drug protocol, as well as the evolving role of ifosfamide. Biologically, the area of interest on chromosome 3q, which may contain an osteosarcoma tumor suppressor gene, is being narrowed, and several promising new therapeutic approaches, including tumor vaccines, have been explored. In chondrosarcoma research, abnormalities in hereditary multiple exostoses genes, which encode protein products essential for normal cartilage development, and a potential mechanism for the characteristic chemotherapy resistance of cartilaginous tumors (overexpression of P-glycoprotein) have been described. Surgical advances include testing of total en bloc spondylectomy for vertebral tumors as well as a noninvasively extendable long-bone endoprosthesis. Finally, new insights in diagnostic imaging, including the evolving roles of 201Tl and 99mTc-MIBI (methoxyisobutylisonitrile) and newer variations on magnetic resonance imaging, are reviewed.
/*
 * Generated by class-dump 3.3.4 (64 bit).
 *
 * class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2011 by Steve Nygard.
 */

#import <iLifeSlideshow/MCAnimationKeyframe.h>

@interface MCAnimationKeyframe1D : MCAnimationKeyframe
{
    float mValue;
}

+ (id)keyframeWithScalar:(float)arg1 atTime:(double)arg2 offsetKind:(int)arg3;
+ (id)keyframeWithScalar:(float)arg1 atTime:(double)arg2;
@property(nonatomic) float value; // @synthesize value=mValue;
- (void)_copySelfToSnapshot:(id)arg1;
- (id)description;
- (id)imprint;
- (id)initWithImprint:(id)arg1;
@end
Q: PHP loop array assignment

I have an array that looks like the one below. I would like to iterate over it in a loop and assign the corresponding strings to three different variables. So, for instance:

Output:

    $mike = 'foo - ';
    $john = 'bar foo foo - bar foo foo - bar foo bar - ';
    $bob  = 'bar foo bar - bar foo - bar foo - ';

What would be the short(est) way of doing this? Thanks.

Initial array:

    Array
    (
        [mike] => Array
            (
                [0] => foo -
            )
        [john] => Array
            (
                [0] => bar foo foo -
                [1] => bar foo foo -
                [2] => bar foo bar -
            )
        [bob] => Array
            (
                [0] => bar foo bar -
                [1] => bar foo -
                [2] => bar foo -
            )
    )

A: This is a case for variable variables:

    foreach ($array as $key => $values) {
        $$key = implode($values);
    }

However, you may not really need them. I would use an array instead:

    $result = array();
    foreach ($array as $key => $values) {
        $result[$key] = implode($values);
    }

So you'd get:

    Array
    (
        [mike] => foo -
        [john] => bar foo foo - bar foo foo - bar foo bar -
        [bob] => bar foo bar - bar foo - bar foo -
    )
Portland’s new affordable housing development includes units for families transitioning out of homelessness

Located in Portland, Oregon’s Pearl District, Vibrant! is a new affordable high-rise housing development. Salazar Architect designed the building shell and interior common areas in collaboration with LRS Architects, which led the overall project management and the design of the apartments. Vibrant! includes 93 one-, two-, and three-bedroom apartments, 40 of which are set aside for families transitioning out of homelessness.

The building’s exterior forgoes the district’s brown-brick tradition and instead opts for a combination of neutral and brightly colored metal panels. Interior common areas were designed with a minimalist approach and simple materials, including natural concrete floors and ceilings. The lobby’s flooring is stained in a blue hue that contrasts with the reclaimed oak walls wrapping around interconnecting management and social-services offices, casual seating, mailboxes, and a bike room. The second floor features a playroom, a community room, a kitchen, and a computer room; glazed garage doors lead to an outdoor playground. The building also includes a number of sustainable features, such as a rooftop PV solar array, native plantings, and a roof terrace.
<!---
 *
 * Copyright (C) 2005-2008 Razuna
 *
 * This file is part of Razuna - Enterprise Digital Asset Management.
 *
 * Razuna is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Affero Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Razuna is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Affero Public License for more details.
 *
 * You should have received a copy of the GNU Affero Public License
 * along with Razuna. If not, see <http://www.gnu.org/licenses/>.
 *
 * You may redistribute this Program with a special exception to the terms
 * and conditions of version 3.0 of the AGPL as described in Razuna's
 * FLOSS exception. You should have received a copy of the FLOSS exception
 * along with Razuna. If not, see <http://www.razuna.com/licenses/>.
 *
--->
<cfcachecontent name="damsupport" cachedwithin="#CreateTimeSpan(1,0,0,0)#" region="razcache">
	<cfoutput>
		<table width="100%" border="0" cellspacing="0" cellpadding="0" class="grid">
			<tr>
				<th>Razuna Support</th>
			</tr>
			<tr>
				<td>#myFusebox.getApplicationData().defaults.trans("support_desc")#</td>
			</tr>
			<tr>
				<th>Online Support Tools</th>
			</tr>
			<tr>
				<td><a href="http://wiki.razuna.com/">Razuna #myFusebox.getApplicationData().defaults.trans("online_help_link")#</a></td>
			</tr>
			<tr>
				<td><a href="https://help.razuna.com" target="_blank">Join our Customer Community</a></td>
			</tr>
			<tr>
				<td><a href="http://issues.razuna.com/" target="_blank">Razuna Bug Tracking/Knowledge Base</a></td>
			</tr>
		</table>
	</cfoutput>
</cfcachecontent>